
Training Data

Meta’s Joe Spisak on Llama 3.1 405B and the Democratization of Frontier Models

42 min • 30 July 2024

As head of Product Management for Generative AI at Meta, Joe Spisak leads the team behind Llama, which just released the new 3.1 405B model. We spoke with Joe just two days after the model’s release to ask what’s new, what it enables, and how Meta sees the role of open source in the AI ecosystem.


Joe explains that Llama 3.1 405B's real focus was pushing scale (it was trained on 15 trillion tokens using 16,000 GPUs). He's excited about the zero-shot tool use it will enable, as well as its role in distillation and in generating synthetic data to teach smaller models. He also tells us why he thinks even frontier models will ultimately commoditize, and why that's a good thing for the startup ecosystem.


Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital 


Mentioned in this episode: 

Llama 3.1 405B paper

Open Source AI Is the Path Forward: Mark Zuckerberg's essay released alongside Llama 3.1.

Mistral Large 2

The Bitter Lesson by Rich Sutton


00:00 Introduction

01:28 The Llama 3.1 405B launch

05:02 The open source license

07:01 What's in it for Meta?

10:19 Why not open source?

11:16 Will frontier models commoditize?

12:41 What about startups?

16:29 The Mistral team

19:36 Are all frontier strategies comparable?

22:38 Is model development becoming more like software development?

26:34 Agentic reasoning

29:09 What future levers will unlock reasoning?

31:20 Will coding and math lead to unlocks?

33:09 Small models

34:08 7X more data

37:36 Are we going to hit a wall?

39:49 Lightning round
