
MLOps.community

Making AI Reliable is the Greatest Challenge of the 2020s // Alon Bochman // #312

62 min • 6 May 2025

Making AI Reliable is the Greatest Challenge of the 2020s // MLOps Podcast #312 with Alon Bochman, CEO of RagMetrics.


Join the Community: https://go.mlops.community/YTJoinIn

Get the newsletter: https://go.mlops.community/YTNewsletter


Huge shout-out to @RagMetrics for sponsoring this episode!


// Abstract

Demetrios talks with Alon Bochman, CEO of RagMetrics, about testing in machine learning systems. Alon stresses the value of empirical evaluation over influencer advice, highlights the need for evolving benchmarks, and shares how to effectively involve subject matter experts without technical barriers. They also discuss using LLMs as judges and measuring their alignment with human evaluators.
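As a rough illustration of the "LLM as judge" pattern discussed in the episode, here is a minimal Python sketch: a judge model grades answers pass/fail, and its verdicts are compared against human labels to measure alignment. The `call_llm` function is a hypothetical stand-in for whatever model API you use; the agreement math is plain Python.

```python
from collections import Counter

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's client."""
    raise NotImplementedError

def judge(question: str, answer: str) -> str:
    """Ask the judge model for a pass/fail verdict on one answer."""
    prompt = (
        "You are grading an answer for factual correctness.\n"
        f"Question: {question}\nAnswer: {answer}\n"
        "Reply with exactly one word: pass or fail."
    )
    verdict = call_llm(prompt).strip().lower()
    return verdict if verdict in {"pass", "fail"} else "fail"

def agreement(llm_labels: list[str], human_labels: list[str]) -> dict:
    """Raw agreement plus Cohen's kappa between judge and human verdicts."""
    n = len(human_labels)
    observed = sum(a == b for a, b in zip(llm_labels, human_labels)) / n
    # Expected chance agreement given each rater's own pass/fail rates.
    p_llm, p_hum = Counter(llm_labels), Counter(human_labels)
    expected = sum((p_llm[c] / n) * (p_hum[c] / n) for c in {"pass", "fail"})
    kappa = (observed - expected) / (1 - expected) if expected < 1 else 1.0
    return {"observed_agreement": observed, "cohens_kappa": kappa}
```

If agreement with human evaluators is low, the takeaway from the conversation is to refine the judge's prompt or rubric before trusting it on unlabeled data.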


// Bio

Alon is a product leader with a fintech and adtech background, ex-Google, ex-Microsoft. He co-founded and sold a software company to Thomson Reuters for $30M and grew an AI consulting practice from 0 to over $1B in 4 years. A 20-year AI veteran, he has won three medals in model-building competitions. In a prior life, he was a top-performing hedge fund portfolio manager. Alon lives near NYC with his wife and two daughters. He is an avid reader, runner, and tennis player, an amateur piano player, and a retired chess player.


// Related Links

Website: ragmetrics.ai


~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~

Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore

Join our slack community: [https://go.mlops.community/slack]

Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)

Sign up for the next meetup: [https://go.mlops.community/register]

MLOps Swag/Merch: [https://shop.mlops.community/]

Connect with Demetrios on LinkedIn: /dpbrinkm

Connect with Alon on LinkedIn: /alonbochman



Timestamps:

[00:00] Alon's preferred coffee
[00:15] Takeaways
[00:47] Testing Multi-Agent Systems
[05:55] Tracking ML Experiments
[12:28] AI Eval Redundancy Balance
[17:07] Handcrafted vs LLM Eval Tradeoffs
[28:15] LLM Judging Mechanisms
[36:03] AI and Human Judgment
[38:55] Document Evaluation with LLM
[42:08] Subject Matter Expertise in Co-Pilots
[46:33] LLMs as Judges
[51:40] LLM Evaluation Best Practices
[55:26] LM Judge Evaluation Criteria
[58:15] Visualizing AI Outputs
[1:01:16] Wrap up
