Today, I’m talking to Demis Hassabis, the CEO of Google DeepMind, the newly created division of Google responsible for AI efforts across the company. Google DeepMind is the result of an internal merger: Google acquired Demis’ DeepMind startup in 2014 and ran it as a separate company inside its parent company, Alphabet, while Google itself had an AI team called Google Brain.
Google has been showing off AI demos for years now, but with the explosion of ChatGPT and a renewed threat from Microsoft in search, Google and Alphabet CEO Sundar Pichai decided to bring DeepMind into Google itself earlier this year to create… Google DeepMind.
What’s interesting is that Google Brain and DeepMind were not necessarily compatible or even focused on the same things: DeepMind was famous for applying AI to things like games and protein-folding simulations. The AI that beat world champions at Go, the ancient board game? That was DeepMind’s AlphaGo. Meanwhile, Google Brain was more focused on what’s come to be the familiar generative AI toolset: large language models for chatbots, and editing features in Google Photos. Merging the two was both a culture clash and a big structural decision, made with the goal of being more competitive and faster to market with AI products.
And the competition isn’t just OpenAI and Microsoft — you might have seen a memo from a Google engineer floating around the web recently claiming that Google has no competitive moat in AI, because open-source models running on commodity hardware are rapidly evolving and catching up to the tools run by the giants. Demis confirmed that the memo was real but said it was part of Google’s debate culture, and he disagreed with it because he has other ideas about where Google’s competitive edge will come from.
We also talked about AI risk and artificial general intelligence. Demis is not shy about his goal of building an AGI, and we talked through what risks and regulations should be in place, and on what timeline. Demis recently signed onto a 22-word statement about AI risk with OpenAI’s Sam Altman and others that simply reads, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” That’s pretty chill, but is that the real risk right now? Or is it just a distraction from other, more tangible problems, like AI replacing labor in various creative industries? We also talked about the new kinds of labor AI is creating — armies of low-paid taskers in countries like Kenya and India who classify data in order to train AI systems. I wanted to know if Demis thought these jobs were here to stay or just a temporary side effect of the AI boom.
This one really hits all the Decoder high points: there’s the big idea of AI, a lot of problems that come with it, an infinite array of complicated decisions to be made, and of course, a gigantic org chart decision in the middle of it all. Demis and I got pretty in the weeds, and I still don’t think we covered it all, so we’ll have to have him back soon.
Links:
Inside Google’s AI culture clash - The Verge
A leaked Google memo raises the alarm about open-source A.I. | Fortune
The End of Search As You Know It
Google’s Sundar Pichai talks Search, AI, and dancing with Microsoft - The Verge
DeepMind reportedly lost a yearslong bid to win more independence from Google - The Verge
Transcript:
https://www.theverge.com/e/23542786
Credits:
Decoder is a production of The Verge, and part of the Vox Media Podcast Network.
Today’s episode was produced by Jackie McDermott and Raghu Manavalan, and it was edited by Callie Wright.
The Decoder music is by Breakmaster Cylinder. Our Editorial Director is Brooke Minters, and our Executive Producer is Eleanor Donovan.
Learn more about your ad choices. Visit podcastchoices.com/adchoices