What is generative AI? How do you create safe and capable models? Is AI overhyped? Join mathematician and broadcaster Professor Hannah Fry as she answers these questions and more in the highly praised and award-winning podcast from Google DeepMind.
In this series, Hannah goes behind the scenes of the world-leading research lab to uncover the extraordinary ways AI is transforming our world. No hype. No spin. Just compelling discussions and grand scientific ambition.
Google DeepMind: The Podcast is created by Hannah Fry. The podcast and its artwork are embedded on this page using the public podcast feed (RSS).
In our final episode for the year, we explore Project Astra, a research prototype exploring future capabilities of a universal AI assistant that can understand the world around you. Host Hannah Fry is joined by Greg Wayne, Director in Research at Google DeepMind. They discuss the inspiration behind the research prototype, its current strengths and limitations, as well as potential future use cases. Hannah even gets the chance to put Project Astra's multilingual skills to the test.
Further reading/listening:
Thanks to everyone who made this possible, including but not limited to:
Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes.
In this episode, Hannah is joined by Oriol Vinyals, VP of Drastic Research and Gemini co-lead. They discuss the evolution of agents from single-task models to more general-purpose models capable of broader applications, like Gemini. Vinyals guides Hannah through the two-step process behind multimodal models: pre-training (imitation learning) and post-training (reinforcement learning). They also cover the complexities of scaling and the importance of innovation in architecture and training processes. They close with a quick whirlwind tour of some of the new agentic capabilities recently released by Google DeepMind.
Note: To see all of the full-length demos, including unedited versions, and other videos related to Gemini 2.0, head to YouTube.
Further reading/watching:
Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Music composition: Eleni Shaw
Camera Director and Video Editor: Bernardo Resende
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Commissioned by Google DeepMind
—
Subscribe to our YouTube channel
Find us on X
Follow us on Instagram
Add us on LinkedIn
There is broad consensus across the tech industry, governments, and society that as artificial intelligence becomes more embedded in every aspect of our world, regulation will be essential. But what does this look like? Can it be adopted without stifling innovation? Are current frameworks presented by government leaders headed in the right direction?
Join host Hannah Fry as she discusses these questions and more with Nicklas Lundblad, Director of Public Policy at Google DeepMind. Nicklas emphasises the importance of a nuanced approach to regulation, focusing on adaptability and evidence-based policymaking. He highlights the complexities of assessing risk and reward in emerging technologies, advocating for a focus on harm reduction.
Further reading/watching:
NotebookLM is a research assistant powered by Gemini that draws on expertise from storytelling to present information in an engaging way. It allows users to upload their own documents and generate insights, explanations, and—more recently—podcasts. This feature, also known as audio overviews, has captured the imagination of millions of people worldwide, who have created thousands of engaging podcasts ranging from personal narratives to educational explainers using source materials like CVs, personal journals, sales decks, and more.
Join Raiza Martin and Steven Johnson from Google Labs, Google’s testing ground for products, as they guide host Hannah Fry through the technical advancements that have made NotebookLM possible. In this episode they'll explore what it means to be interesting, the challenges of generating natural-sounding speech, as well as exciting new modalities on the horizon.
Further reading:
Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Music composition: Eleni Shaw
Camera Director and Video Editor: Daniel Lazard
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Alex Baro Cayetano, Daniel Lazard
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Commissioned by Google DeepMind
Join Professor Hannah Fry at the AI for Science Forum for a fascinating conversation with Google DeepMind CEO Demis Hassabis. They explore how AI is revolutionizing scientific discovery, delving into topics like the nuclear pore complex, plastic-eating enzymes, quantum computing, and the surprising power of Turing machines. The episode also features a special 'ask me anything' session with Nobel Laureates Sir Paul Nurse, Jennifer Doudna, and John Jumper, who answer audience questions about the future of AI in science.
Watch the episode here, and catch up on all of the sessions from the AI for Science Forum here.
Imagine a future where we interact regularly with a range of advanced artificial intelligence (AI) assistants — and where millions of assistants interact with each other on our behalf. These experiences and interactions may soon become part of our everyday reality.
In this episode, host Hannah Fry and Google DeepMind Senior Staff Research Scientist Iason Gabriel discuss the ethical implications of advanced AI assistants. Drawing from Iason's recent paper, they examine value alignment, anthropomorphism, safety concerns, and the potential societal impact of these technologies.
Timecodes:
How human should an AI tutor be? What does ‘good’ teaching look like? Will AI lead in the classroom, or take a back seat to human instruction? Will everyone have their own personalized AI tutor? Join research lead Irina Jurenka and Professor Hannah Fry as they explore the complicated yet exciting world of AI in education.
Further reading:
In this episode, Professor Hannah Fry sits down with Pushmeet Kohli, VP of Research at Google DeepMind, to discuss AI’s impact on scientific discovery. They go on a whirlwind tour of scientific projects, touching on recent breakthroughs in AlphaFold, materials science, weather forecasting, and mathematics to better understand how AI can enhance our scientific understanding of the world.
Further reading:
Games are a very good training ground for agents. Think about it: perfectly packaged, neatly constrained environments where agents can run wild, work out the rules for themselves, and learn how to handle autonomy. In this episode, Research Engineering Team Lead Frederic Besse joins Hannah to discuss important research like SIMA (Scalable Instructable Multiworld Agent) and what we can expect from future agents that can understand and safely carry out a wide range of tasks, online and in the real world.
Further reading:
Professor Hannah Fry is joined by Jeff Dean, one of the most legendary figures in computer science and chief scientist of Google DeepMind and Google Research. Jeff was instrumental to the field in the late 1990s, writing the code that transformed Google from a small startup into the multinational company it is today. Hannah and Jeff discuss it all, from the early days of Google and neural networks to the long-term potential of multimodal models like Gemini.
Building safe and capable models is one of the greatest challenges of our time. Can we make AI work for everyone? How do we prevent existential threats? Why is alignment so important? Join Professor Hannah Fry as she delves into these critical questions with Anca Dragan, lead for AI safety and alignment at Google DeepMind.
For further reading, search "Introducing the Frontier Safety Framework" and "Evaluating Frontier Models for Dangerous Capabilities".
Professor Hannah Fry is joined by Google DeepMind's senior research director Douglas Eck to explore AI's capacity for true creativity. They delve into the complexities of defining creativity, the challenges of AI-generated content and attribution, and whether AI can help us to connect with each other in new and meaningful ways.
Want to watch the full episode? Subscribe to Google DeepMind's YouTube page and stay tuned for new episodes.
Further reading:
Social channels to follow for new content:
It has been a few years since Google DeepMind CEO and co-founder Demis Hassabis and Professor Hannah Fry caught up.
In that time, the world has caught on to artificial intelligence—in a big way. Listen as they discuss the recent explosion of interest in AI, what Demis means when he describes chatbots as ‘unreasonably effective’, and the unexpected emergence of capabilities like conceptual understanding and abstraction in recent generative models.
Demis and Hannah also explore the need for rigorous AI safety measures, the importance of responsible AI development, and what he hopes for as we move closer towards artificial general intelligence.
Want to watch the full episode? Subscribe to Google DeepMind's YouTube page and stay tuned for new episodes.
Further reading:
Social channels to follow for new content:
Hannah wraps up the series by meeting DeepMind co-founder and CEO, Demis Hassabis. In an extended interview, Demis describes why he believes AGI is possible, how we can get there, and the problems he hopes it will solve. Along the way, he highlights the important role of consciousness and why he’s so optimistic that AI can help solve many of the world’s major challenges. As a final note, Demis shares the story of a personal meeting with Stephen Hawking to discuss the future of AI and discloses Hawking’s parting message.
For questions or feedback on the series, message us on Twitter @DeepMind or email [email protected].
Interviewee: DeepMind co-founder and CEO, Demis Hassabis
Credits
Presenter: Hannah Fry
Series Producer: Dan Hardoon
Production support: Jill Achineku
Sound design: Emma Barnaby
Music composition: Eleni Shaw
Sound Engineer: Nigel Appleton
Editor: David Prest
Commissioned by DeepMind
Thank you to everyone who made this season possible!
Further reading:
DeepMind, The Podcast: https://deepmind.com/blog/article/welcome-to-the-deepmind-podcast
DeepMind’s Demis Hassabis on its breakthrough scientific discoveries, WIRED: https://www.youtube.com/watch?v=2WRow9FqUbw
Riemann hypothesis, Wikipedia: https://en.wikipedia.org/wiki/Riemann_hypothesis
Using AI to accelerate scientific discovery by Demis Hassabis, Kendrew Lecture 2021: https://www.youtube.com/watch?v=sm-VkgVX-2o
Protein Folding & the Next Technological Revolution by Demis Hassabis, Bloomberg: https://www.youtube.com/watch?v=vhd4ENh5ON4
The Algorithm, MIT Technology Review: https://forms.technologyreview.com/newsletters/ai-the-algorithm/
Machine learning resources, The Royal Society: https://royalsociety.org/topics-policy/education-skills/teacher-resources-and-opportunities/resources-for-teachers/resources-machine-learning/
How to get empowered, not overpowered, by AI, TED: https://www.youtube.com/watch?v=2LRwvU6gEbA
AI needs to benefit everyone, not just those who build it. But fulfilling this promise requires careful thought before new technologies are built and released into the world. In this episode, Hannah delves into some of the most pressing and difficult ethical and social questions surrounding AI today. She explores complex issues like racial and gender bias and the misuse of AI technologies, and hears why diversity and representation are vital for building technology that works for all.
Interviewees: DeepMind’s Sasha Brown, William Isaac, Shakir Mohamed, Kevin McKee & Obum Ekeke
Further reading:
What a machine learning tool that turns Obama white can (and can’t) tell us about AI bias, The Verge: https://www.theverge.com/21298762/face-depixelizer-ai-machine-learning-tool-pulse-stylegan-obama-bias
Tuskegee Syphilis Study, Wikipedia: https://en.wikipedia.org/wiki/Tuskegee_Syphilis_Study
Ethics & Society, DeepMind: https://deepmind.com/about/ethics-and-society
Row over AI that 'identifies gay faces', BBC: https://www.bbc.co.uk/news/technology-41188560
The Trevor Project: https://www.thetrevorproject.org/
AI takes root, helping farmers identify diseased plants, Google: https://www.blog.google/technology/ai/ai-takes-root-helping-farmers-identity-diseased-plants/
How Can You Use Technology to Support a Culture of Inclusion and Diversity?, myHRfuture: https://www.myhrfuture.com/blog/2019/7/16/how-can-you-use-technology-to-support-a-culture-of-inclusion-and-diversity
Scholarships at DeepMind: https://www.deepmind.com/scholarships
AI, Ain’t I a Woman? Joy Buolamwini, YouTube: https://www.youtube.com/watch?v=QxuyfWoVV98
How to be Human in the Age of the Machine, Hannah Fry: https://royalsociety.org/grants-schemes-awards/book-prizes/science-book-prize/2018/hello-world/
AI doesn’t just exist in the lab, it’s already solving a range of problems in the real world. In this episode, Hannah encounters a realistic recreation of her voice by WaveNet, the voice synthesising system that powers the Google Assistant and helps people with speech difficulties and illnesses regain their voices. Hannah also discovers how ‘deepfake’ technology can be used to improve weather forecasting and how DeepMind researchers are collaborating with Liverpool Football Club, aiming to take sports to the next level.
Interviewees: DeepMind’s Demis Hassabis, Raia Hadsell, Karl Tuyls, Zach Gleicher & Jackson Broshear; Niall Robinson of the UK Met Office
Further reading:
A generative model for raw audio, DeepMind: https://deepmind.com/blog/article/wavenet-generative-model-raw-audio
WaveNet case study, DeepMind: https://deepmind.com/research/case-studies/wavenet
Using WaveNet technology to reunite speech-impaired users with their original voices, DeepMind: https://deepmind.com/blog/article/Using-WaveNet-technology-to-reunite-speech-impaired-users-with-their-original-voices
Project Euphonia, Google Research: https://sites.research.google/euphonia/about/
Nowcasting the next hour of rain, DeepMind: https://deepmind.com/blog/article/nowcasting
Now DeepMind is using AI to transform football, WIRED: https://www.wired.co.uk/article/deepmind-football-liverpool-ai
Advancing sports analytics through AI, DeepMind: https://deepmind.com/blog/article/advancing-sports-analytics-through-ai
Met Office: https://www.metoffice.gov.uk/
The village ‘washed on to the map’, BBC: https://www.bbc.co.uk/news/uk-england-cornwall-28523053
Michael Fish got the storm of 1987 wrong, Sky News:
Step inside DeepMind's laboratories and you'll find researchers studying DNA to understand the mysteries of life, seeking new ways to use nuclear energy, or putting AI to the test in mind-bending areas of maths. In this episode, Hannah meets Pushmeet Kohli, the head of science at DeepMind, to understand how AI is accelerating scientific progress. Listeners also join Hannah on a [virtual] safari in the Serengeti in East Africa to find out how researchers are using AI to conserve wildlife in one of the world’s most spectacular ecosystems.
Interviewees: DeepMind’s Demis Hassabis, Pushmeet Kohli & Sarah Jane Dunn; Meredith Palmer of Princeton University
Further reading:
Using AI for scientific discovery, DeepMind: https://deepmind.com/blog/article/AlphaFold-Using-AI-for-scientific-discovery
DeepMind’s Demis Hassabis on its breakthrough scientific discoveries, WIRED: https://www.youtube.com/watch?v=2WRow9FqUbw
The AI revolution in scientific research, The Royal Society: https://royalsociety.org/-/media/policy/projects/ai-and-society/AI-revolution-in-science.pdf
DOE Explains...Tokamaks, Office of Science: https://www.energy.gov/science/doe-explainstokamaks
How AI Accidentally Learned Ecology by Playing StarCraft, Discover: https://www.discovermagazine.com/technology/how-ai-accidentally-learned-ecology-by-playing-starcraft
Google AI can identify wildlife from trap-camera footage, VentureBeat: https://venturebeat.com/2019/12/17/googles-ai-can-identify-wildlife-from-trap-camera-footage-with-up-to-98-6-accuracy/
Snapshot Serengeti, Zooniverse: https://www.zooniverse.org/projects/zooniverse/snapshot-serengeti
The Human Genome Project, National Human Genome Research Institute: https://www.genome.gov/human-genome-project
Exploring the beauty of pure mathematics in novel ways, DeepMind: https://deepmind.com/blog/article/exploring-the-beauty-of-pure-mathematics-in-novel-ways
Predicting gene expression with AI, DeepMind: https://deepmind.com/blog/article/enformer
Using machine learning to accelerate ecological research, DeepMind: https://deepmind.com/blog/article/using-machine-learning-to-accelerate-ecological-research
Accelerating fusion science through learned plasma control, DeepMind: https://deepmind.com/blog/article/Accelerating-fusion-science-through-learned-plasma-control
Simulating matter on the quantum scale with AI, DeepMind: https://deepmind.com/blog/article/Simulating-matter-on-the-quantum-scale-with-AI
How AI is helping the natural sciences, Nature: https://www.nature.com/articles/d41586-021-02762-6
Inside DeepMind's epic mission to solve science's trickiest problem, WIRED: https://www.wired.co.uk/article/deepmind-protein-folding
How Artificial Intelligence Is Changing Science, Quanta: https://www.quantamagazine.org/how-artificial-intelligence-is-changing-science-20190311/
Hannah meets DeepMind co-founder and chief scientist Shane Legg, the man who coined the phrase ‘artificial general intelligence’, and explores how it might be built. Why does Shane think AGI is possible? When will it be realised? And what could it look like? Hannah also explores a simple theory of using trial and error to reach AGI and takes a deep dive into MuZero, an AI system which mastered complex board games from chess to Go, and is now generalising to solve a range of important tasks in the real world.
Interviewees: DeepMind’s Shane Legg, Doina Precup, David Silver & Jackson Broshear
Further reading:
Real-world challenges for AGI, DeepMind: https://deepmind.com/blog/article/real-world-challenges-for-agi
An executive primer on artificial general intelligence, McKinsey: https://www.mckinsey.com/business-functions/operations/our-insights/an-executive-primer-on-artificial-general-intelligence
Mastering Go, chess, shogi and Atari without rules, DeepMind: https://deepmind.com/blog/article/muzero-mastering-go-chess-shogi-and-atari-without-rules
What is AGI?, Medium: https://medium.com/intuitionmachine/what-is-agi-99cdb671c88e
A Definition of Machine Intelligence by Shane Legg, arXiv: https://arxiv.org/abs/0712.3329
Reward is enough by David Silver, ScienceDirect: https://www.sciencedirect.com/science/article/pii/S0004370221000862
Do you need a body to have intelligence? And can one exist without the other? Hannah takes listeners behind the scenes of DeepMind's robotics lab in London where she meets robots that are trying to independently learn new skills, and explores why physical intelligence is a necessary part of intelligence. Along the way, she finds out how researchers trained their robots at home during lockdown, uncovers why so many robotics demonstrations are faking it, and what it takes to train a robotic football team.
Interviewees: DeepMind’s Raia Hadsell, Viorica Patraucean, Jan Humplik, Akhil Raju & Doina Precup
Further reading:
Stacking our way to more general robots, DeepMind: https://deepmind.com/blog/article/stacking-our-way-to-more-general-robots
Researchers Propose Physical AI As Key To Lifelike Robots, Forbes: https://www.forbes.com/sites/simonchandler/2020/11/11/researchers-propose-physical-ai-as-key-to-lifelike-robots/
The robots going where no human can, BBC: https://www.bbc.co.uk/news/av/technology-41584738
The Robot Assault On Fukushima, WIRED: https://www.wired.com/story/fukushima-robot-cleanup/
Leaps, Bounds, and Backflips, Boston Dynamics: http://blog.bostondynamics.com/atlas-leaps-bounds-and-backflips
Now DeepMind is using AI to transform football, WIRED: https://www.wired.co.uk/article/deepmind-football-liverpool-ai
Cooperation is at the heart of our society. Inventing the railway, giving birth to the Renaissance, and creating the Covid-19 vaccine all required people to combine efforts. But cooperation is so much more. It governs our education systems, healthcare, and food production. In this episode, Hannah meets the researchers working on cooperative AI, and hears about their work and influences, from the famous American psychologist (and pigeon trainer) B.F. Skinner to the strategic board game Diplomacy.
Interviewees: DeepMind’s Thore Graepel, Kevin McKee, Doina Precup & Laura Weidinger
Further reading:
Machines must learn to find common ground, Nature: https://www.nature.com/articles/d41586-021-01170-0
Introduction to Reinforcement Learning, DeepMind: https://www.youtube.com/watch?v=2pWv7GOvuf0
B.F. Skinner, Wikipedia: https://en.wikipedia.org/wiki/B._F._Skinner
The Tragedy of the Commons, Wikipedia: https://en.wikipedia.org/wiki/Tragedy_of_the_commons
Staving Off The Ultimate Tragedy Of The Commons, Forbes: https://www.forbes.com/sites/georgebradt/2021/11/02/staving-off-the-ultimate-tragedy-of-the-commons-by-making-better-complex-decisions-cooperatively-in-glasgow/
Understanding Agent Cooperation, DeepMind: https://deepmind.com/blog/article/understanding-agent-cooperation
The emergence of complex cooperative agents, DeepMind: https://deepmind.com/blog/article/capture-the-flag-science
Hannah explores the potential of language models, the questions they raise, and whether teaching a computer about language is enough to create artificial general intelligence (AGI). Beyond helping us communicate ideas, language plays a crucial role in memory, cooperation, and thinking, which is why AI researchers have long aimed to communicate with computers using natural language. Recently, there has been extraordinary progress using large language models (LLMs), which learn how to speak by processing huge amounts of data from the internet. The results can be very convincing, but pose significant ethical challenges.
Interviewees: DeepMind’s Geoffrey Irving, Chris Dyer, Angeliki Lazaridou, Lisa-Anne Hendriks & Laura Weidinger
Further reading:
GPT-3 Powers the Next Generation of Apps, OpenAI: https://openai.com/blog/gpt-3-apps/
ELIZA: A Computer Program for the Study of Natural Language Communication Between Man and Machine by Joseph Weizenbaum: https://web.stanford.edu/class/linguist238/p36-weizenabaum.pdf
Never Mind the Computer (1983), about the ELIZA program, BBC: https://www.bbc.co.uk/programmes/p023kpf8
How Large Language Models Will Transform Science, Society, and AI, Stanford University: https://hai.stanford.edu/news/how-large-language-models-will-transform-science-society-and-ai
Challenges in Detoxifying Language Models, DeepMind: https://deepmind.com/research/publications/2021/Challenges-in-Detoxifying-Language-Models
Extending Machine Language Models toward Human-Level Language Understanding, DeepMind: https://deepmind.com/research/publications/2020/Extending-Machine-Language-Models-toward-Human-Level-Language-Understanding
Language modelling at scale, DeepMind: https://deepmind.com/blog/article/language-modelling-at-scale
Artificial general intelligence, Technology Review: https://www.technologyreview.com/2020/10/15/1010461/artificial-general-intelligence-robots-ai-agi-deepmind-google-openai/
A Definition of Machine Intelligence by Shane Legg, arXiv: https://arxiv.org/abs/0712.3329
Stuart Russell - Living With Artificial Intelligence, BBC: https://www.bbc.co.uk/programmes/m001216k/episodes/player
In December 2020, DeepMind’s AI system, AlphaFold, solved a 50-year-old grand challenge in biology, known as the protein-folding problem. A headline in the journal Nature read, “It will change everything” and the President of the UK's Royal Society called it a “stunning advance [that arrived] decades before many in the field would have predicted”. In this episode, Hannah uncovers the inside story of AlphaFold from the people who made it happen and finds out how it could help transform the future of healthcare and medicine.
For questions or feedback on the series, message us on Twitter @DeepMind or email [email protected].
Interviewees: DeepMind’s Demis Hassabis, John Jumper, Kathryn Tunyasuvunakool and Sasha Brown; Charles Mowbray and Monique Wasuna of the Drugs for Neglected Diseases initiative (DNDi); and John McGeehan of the Centre for Enzyme Innovation at the University of Portsmouth
Credits
Presenter: Hannah Fry
Series Producer: Dan Hardoon
Production support: Jill Achineku
Sound design: Emma Barnaby
Music composition: Eleni Shaw
Sound Engineer: Nigel Appleton
Editor: David Prest
Commissioned by DeepMind
Thank you to everyone who made this season possible!
Further reading:
AlphaFold blog, DeepMind: https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology
AlphaFold case study, DeepMind: https://deepmind.com/research/case-studies/alphafold
It will change everything, Nature: https://www.nature.com/articles/d41586-020-03348-4
AlphaFold Is The Most Important Achievement In AI—Ever, Forbes: https://www.forbes.com/sites/robtoews/2021/10/03/alphafold-is-the-most-important-achievement-in-ai-ever/?sh=359278426e0a
Bacteria found to eat PET plastics, New Scientist: https://www.newscientist.com/article/2080279-bacteria-found-to-eat-pet-plastics-could-help-do-the-recycling/
Protein Structure Prediction Center: https://predictioncenter.org/
An interview with Professor John McGeehan, BBSRC: https://bbsrc.ukri.org/news/features/enzyme-science/an-interview-with-professor-john-mcgeehan/
John McGeehan profile, University of Portsmouth: https://researchportal.port.ac.uk/en/persons/john-mcgeehan
Drugs for Neglected Diseases initiative (DNDi): https://dndi.org/
A doctor’s dream, DNDi: https://www.youtube.com/watch?v=Tk31iucWYdE
The Curious Cases of Rutherford and Fry, BBC: https://www.bbc.co.uk/programmes/b07dx75g/episodes/downloads
The chart-topping podcast which uncovers the extraordinary ways artificial intelligence (AI) is transforming our world is back for a second season. Join mathematician and broadcaster Professor Hannah Fry behind the scenes of world-leading AI research lab DeepMind to get the inside story of how AI is being created – and how it can benefit our lives and the society we live in.
Recorded over six months and featuring over 30 original interviews, including DeepMind co-founders Demis Hassabis and Shane Legg, the podcast gives listeners exclusive access to the brilliant people building the technology of the future. Across nine original episodes, Hannah discovers how DeepMind is using AI to advance science in critical areas, like solving a 50-year-old grand challenge in biology and controlling plasma for nuclear fusion.
Listeners hear stories of teaching robots to walk at home during lockdown, as well as using AI to forecast weather, help people regain their voices, and enhance game strategies with Liverpool Football Club. Hannah also takes an in-depth look at the challenges and potential of building artificial general intelligence (AGI) and explores what it takes to ensure AI is built to benefit society.
“I hope this series gives people a better understanding of AI and a feeling for just how exhilarating an endeavour it is.” – Demis Hassabis, CEO and Co-Founder of DeepMind
Credits
Presenter: Hannah Fry
Series Producer: Dan Hardoon
Production support: Jill Achineku
Sound design: Emma Barnaby
Music composition: Eleni Shaw
Sound Engineer: Nigel Appleton
Editor: David Prest
Commissioned by DeepMind
In this special extended episode, Hannah Fry meets Demis Hassabis, the CEO and co-founder of DeepMind. She digs into his former life as a chess player, games designer and neuroscientist and explores how his love of chess helped him to get start-up funding, what drives him and his vision, and why AI keeps him up at night.
If you have a question or feedback on the series, message us on Twitter (@DeepMind using the hashtag #DMpodcast) or email us at [email protected].
Further reading:
Interviewees: DeepMind CEO and co-founder, Demis Hassabis
Credits:
Presenter: Hannah Fry
Editor: David Prest
Senior Producer: Louisa Field
Producers: Amy Racs, Dan Hardoon
Binaural Sound: Lucinda Mason-Brown
Music composition: Eleni Shaw (with help from Sander Dieleman and WaveNet)
Commissioned by DeepMind
AI researchers around the world are trying to create a general purpose learning system that can learn to solve a broad range of problems without being taught how. Koray Kavukcuoglu, DeepMind’s Director of Research, describes the journey to get there, and takes Hannah on a whistle-stop tour of DeepMind’s HQ and its research.
Further reading:
OpenAI: An overview of neural networks and the progress that has been made in AI
Shane Legg, DeepMind co-founder: Measuring machine intelligence at the 2010 Singularity Summit
Shane Legg and Marcus Hutter: Paper on defining machine intelligence
Demis Hassabis: Talk on the history, frontiers and capabilities of AI
Robert Wiblin: Positively shaping the development of artificial intelligence
Asilomar AI Principles
Richard S. Sutton and Andrew G. Barto: Reinforcement Learning: An Introduction
Interviewees: Koray Kavukcuoglu, Director of Research; Trevor Back, Product Manager for DeepMind’s science research; research scientists Raia Hadsell and Murray Shanahan; and DeepMind CEO and co-founder, Demis Hassabis.
Credits:
Presenter: Hannah Fry
Editor: David Prest
Senior Producer: Louisa Field
Producers: Amy Racs, Dan Hardoon
Binaural Sound: Lucinda Mason-Brown
Music composition: Eleni Shaw (with help from Sander Dieleman and WaveNet)
Commissioned by DeepMind
While there is a lot of excitement about AI research, there are also concerns about the way it might be implemented, used and abused. In this episode Hannah investigates the more human side of the technology, some ethical issues around how it is developed and used, and the efforts to create a future of AI that works for everyone.
Further reading:
Interviewees: Verity Harding, Co-Lead of DeepMind Ethics and Society; DeepMind’s COO Lila Ibrahim, and research scientists William Isaac and Silvia Chiappa.
Credits:
Presenter: Hannah Fry
Editor: David Prest
Senior Producer: Louisa Field
Producers: Amy Racs, Dan Hardoon
Binaural Sound: Lucinda Mason-Brown
Music composition: Eleni Shaw (with help from Sander Dieleman and WaveNet)
Commissioned by DeepMind
The ambition of much of AI research is to create systems that can help to solve problems in the real world. In this episode, Hannah meets the people building systems that could be used to save the sight of thousands, help us solve one of the most fundamental problems in biology and reduce energy consumption in an effort to combat climate change. But whilst there is great potential, there are also important obstacles that will need to be tackled for AI to be used effectively, safely and fairly.
Further reading:
Other examples of the application of AI for real-world impact include:
Interviewees: Pearse Keane, consultant ophthalmologist at Moorfields Eye Hospital; Sandy Nelson, Product Manager for DeepMind’s Science Program; and DeepMind Program Manager Sims Witherspoon.
Credits:
Presenter: Hannah Fry
Editor: David Prest
Senior Producer: Louisa Field
Producers: Amy Racs, Dan Hardoon
Binaural Sound: Lucinda Mason-Brown
Music composition: Eleni Shaw (with help from Sander Dieleman and WaveNet)
Commissioned by DeepMind
Forget what sci-fi has told you about superintelligent robots that are uncannily human-like; the reality is more prosaic. Inside DeepMind’s robotics laboratory, Hannah explores what researchers call ‘embodied AI’: robot arms learning tasks that humans find comparatively easy, like picking up plastic bricks. She discovers the cutting-edge challenges of bringing AI and robotics together, including teaching robots to perform tasks from scratch, and explores some of the key questions about using AI safely in the real world.
Further reading:
Interviewees: Software engineer Jackie Kay and research scientists Murray Shanahan, Victoria Krakovna, Raia Hadsell and Jan Leike.
Credits:
Presenter: Hannah Fry
Editor: David Prest
Senior Producer: Louisa Field
Producers: Amy Racs, Dan Hardoon
Binaural Sound: Lucinda Mason-Brown
Music composition: Eleni Shaw (with help from Sander Dieleman and WaveNet)
Commissioned by DeepMind
Video games have become a favourite tool for AI researchers to test the abilities of their systems. In this episode, Hannah sits down to play StarCraft II - a challenging video game that requires players to control the onscreen action with as many as 800 clicks a minute. She is guided by Oriol Vinyals, an ex-professional StarCraft player and research scientist at DeepMind, who explains how the program AlphaStar learnt to play the game and beat a top professional player. Elsewhere, she explores systems that are learning to cooperate in a digital version of the playground favourite ‘Capture the Flag’.
Further reading
Interviewees: Research scientists Max Jaderberg and Raia Hadsell; Lead researchers David Silver and Oriol Vinyals, and Director of Research Koray Kavukcuoglu.
Credits:
Presenter: Hannah Fry
Editor: David Prest
Senior Producer: Louisa Field
Producers: Amy Racs, Dan Hardoon
Binaural Sound: Lucinda Mason-Brown
Music composition: Eleni Shaw (with help from Sander Dieleman and WaveNet)
Commissioned by DeepMind
In March 2016, more than 200 million people watched AlphaGo become the first computer program to defeat a professional human player at the game of Go, a milestone in AI research that was considered to be a decade ahead of its time. Since then the team has continued to develop the system and recently unveiled AlphaZero: a program that has taught itself how to play chess, Go, and shogi. Hannah explores the inside story of both with Lead Researcher David Silver and finds out why games are a useful proving ground for AI researchers. She also meets chess Grandmaster Matthew Sadler and women’s international master Natasha Regan, who have written a book on AlphaZero and its unique gameplay.
Further reading
Interviewees: DeepMind CEO Demis Hassabis; chess Grandmaster Matthew Sadler; Lead Researcher David Silver; Matt Botvinick, Director of Neuroscience Research; and Natasha Regan, women’s international chess master.
Credits:
Presenter: Hannah Fry
Editor: David Prest
Senior Producer: Louisa Field
Producers: Amy Racs, Dan Hardoon
Binaural Sound: Lucinda Mason-Brown
Music composition: Eleni Shaw (with help from Sander Dieleman and WaveNet)
Commissioned by DeepMind
What can the human brain teach us about AI? And what can AI teach us about our own intelligence? These questions underpin a lot of AI research. In this first episode, Hannah meets the DeepMind Neuroscience team to explore these connections and discovers how our brains are like birds’ wings, what training a dog and an AI agent have in common, and why the simplest things for people to do are, paradoxically, often the hardest for machines.
Further reading
Interviewees in this episode: DeepMind CEO and co-founder, Demis Hassabis; Matt Botvinick, Director of Neuroscience Research; research scientists Jess Hamrick and Greg Wayne; and Director of Research, Koray Kavukcuoglu.
Credits:
Presenter: Hannah Fry
Editor: David Prest
Senior Producer: Louisa Field
Producers: Amy Racs, Dan Hardoon
Binaural Sound: Lucinda Mason-Brown
Music composition: Eleni Shaw (with help from Sander Dieleman and WaveNet)
Commissioned by DeepMind