DevCentral

AI Friday Podcast: DeepSeek Security Risks, Reasoning Models, Package Hallucination & More

45 min • 6 February 2025
Welcome back to AI Friday! This week, we're diving deep into the latest news in artificial intelligence, including the intriguing world of open source music with King Gizzard and the Lizard Wizard, and the fascinating case of the garbage plate recipe. We also discuss the groundbreaking new reasoning models, particularly LlamaV-o1, that are setting new benchmarks in AI capabilities. Get insights into how these models operate, their practical applications, and the implications for AI in various fields, including medicine and law. Additionally, we touch on the security aspects of AI models, particularly the vulnerabilities exposed in DeepSeek by Cisco's Robust Intelligence team. From alignment switching to reinforcement learning, we cover the critical aspects that could impact the future of AI development. And for a bit of fun, we explore the most outrageous AI-driven gadgets from CES 2025, from an AI bassinet to AI refrigerators. Join us for an engaging and informative episode!

Articles can be found here: https://community.f5.com/kb/technicalarticles/ai-friday-podcast-deepseek-security-risks-reasoning-models-package-hallucination/339590

00:00 Introduction
05:49 LlamaV-o1 & VRC-Bench
11:12 Amazon: Reasoning & Hallucination
18:12 DeepSeek R1 Security Risks
25:53 How Likely Are AI Attacks?
33:00 Package Hallucination
35:27 The Worst of CES 2025
43:26 Outro