This is a panel from the recent AI Quality Conference, presented by the MLOps Community and Kolena.
// Abstract
Moving to production quickly is paramount to staying out of perpetual POC territory. AI is moving fast, and shipping features quickly to stay ahead of the competition is commonplace. Quick iteration is viewed as a strength in the startup ecosystem, especially when taking on a deeply entrenched competitor. Each week a new method to improve your AI system becomes popular or a SOTA foundation model is released. How do we balance the need for speed against the responsibility of safety? How do you build the confidence to ship a cutting-edge model or AI architecture, knowing it will perform as tasked? What risk and safety metrics are others using when they deploy their AI systems? How can you correctly identify when the risks are too large?
// Panelists
- Remy Thellier: Head of Growth & Strategic Partnerships @ Vectice
- Erica Greene: Director of Engineering, Machine Learning @ Yahoo
- Shreya Rajpal: Creator @ Guardrails AI
A big thank you to our Premium Sponsors, Google Cloud and Databricks, for their generous support!