
AI + a16z

Remaking the UI for AI

39 min • April 19, 2024

a16z General Partner Anjney Midha joins the podcast to discuss what's happening with hardware for artificial intelligence. Nvidia might have cornered the market on training workloads for now, but he believes there's a big opportunity at the inference layer — especially for wearable or similar devices that can become a natural part of our everyday interactions. 

Here's one small passage that speaks to his larger thesis on where we're heading:

"I think why we're seeing so many developers flock to Ollama is because there is a lot of demand from consumers to interact with language models in private ways. And that means that they're going to have to figure out how to get the models to run locally without ever leaving without ever the user's context, and data leaving the user's device. And that's going to result, I think, in a renaissance of new kinds of chips that are capable of handling massive workloads of inference on device.

"We are yet to see those unlocked, but the good news is that open source models are phenomenal at unlocking efficiency.  The open source language model ecosystem is just so ravenous."

More from Anjney:

The Quest for AGI: Q*, Self-Play, and Synthetic Data

Making the Most of Open Source AI

Safety in Numbers: Keeping AI Open

Investing in Luma AI

Follow everyone on X:

Anjney Midha

Derrick Harris

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
