The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute’s work consists of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world’s leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
On this episode, Jeffrey Ding joins me to discuss diffusion of AI versus AI innovation, how US-China dynamics shape AI’s global trajectory, and whether there is an AI arms race between the two powers. We explore Chinese attitudes toward AI safety, the level of concentration of AI development, and lessons from historical technology diffusion. Jeffrey also shares insights from translating Chinese AI writings and the potential of automating translations to bridge knowledge gaps.
You can learn more about Jeffrey’s work at: https://jeffreyjding.github.io
Timestamps:
00:00:00 Preview and introduction
00:01:36 A US-China AI arms race?
00:10:58 Attitudes to AI safety in China
00:17:53 Diffusion of AI
00:25:13 Innovation without diffusion
00:34:29 AI development concentration
00:41:40 Learning from the history of technology
00:47:48 Translating Chinese AI writings
00:55:36 Automating translation of AI writings
On this episode, Allison Duettmann joins me to discuss centralized versus decentralized AI, how international governance could shape AI’s trajectory, how we might cooperate with future AIs, and the role of AI in improving human decision-making. We also explore which lessons from history apply to AI, the future of space law and property rights, whether technology is invented or discovered, and how AI will impact children.
You can learn more about Allison's work at: https://foresight.org
Timestamps:
00:00:00 Preview
00:01:07 Centralized AI versus decentralized AI
00:13:02 Risks from decentralized AI
00:25:39 International AI governance
00:39:52 Cooperation with future AIs
00:53:51 AI for decision-making
01:05:58 Capital intensity of AI
01:09:11 Lessons from history
01:15:50 Future space law and property rights
01:27:28 Is technology invented or discovered?
01:32:34 Children in the age of AI
On this episode, Steven Byrnes joins me to discuss brain-like AGI safety. We discuss learning versus steering systems in the brain, the distinction between controlled AGI and social-instinct AGI, why brain-inspired approaches might be our most plausible route to AGI, and honesty in AI models. We also talk about how people can contribute to brain-like AGI safety and compare various AI safety strategies.
You can learn more about Steven's work at: https://sjbyrnes.com/agi.html
Timestamps:
00:00 Preview
00:54 Brain-like AGI Safety
13:16 Controlled AGI versus Social-instinct AGI
19:12 Learning from the brain
28:36 Why is brain-like AI the most likely path to AGI?
39:23 Honesty in AI models
44:02 How to help with brain-like AGI safety
53:36 AI traits with both positive and negative effects
01:02:44 Different AI safety strategies
On this episode, Ege Erdil from Epoch AI joins me to discuss their new GATE model of AI development, what evolution and brain efficiency tell us about AGI requirements, how AI might impact wages and labor markets, and what it takes to train models with long-term planning. Toward the end, we dig into Moravec’s Paradox, which jobs are most at risk of automation, and what could change Ege's current AI timelines.
You can learn more about Ege's work at https://epoch.ai
Timestamps:
00:00:00 Preview and introduction
00:02:59 Compute scaling and automation - GATE model
00:13:12 Evolution, brain efficiency, and AGI compute requirements
00:29:49 Broad automation vs. R&D-focused AI deployment
00:47:19 AI, wages, and labor market transitions
00:59:54 Training agentic models and long-term planning capabilities
01:06:56 Moravec’s paradox and automation of human skills
01:13:59 Which jobs are most vulnerable to AI?
01:33:00 Timeline extremes: what could change AI forecasts?
In this special episode, we feature Nathan Labenz interviewing Nicholas Carlini on the Cognitive Revolution podcast. Nicholas Carlini works as a security researcher at Google DeepMind and has published extensively on adversarial machine learning and cybersecurity. Carlini discusses his pioneering work on adversarial attacks against image classifiers and the challenges of ensuring neural network robustness. He examines the difficulties of defending against such attacks, the role of human intuition in his approach, open-source AI, and the potential for scaling AI security research.
00:00 Nicholas Carlini's contributions to cybersecurity
08:19 Understanding attack strategies
29:39 High-dimensional spaces and attack intuitions
51:00 Challenges in open-source model safety
01:00:11 Unlearning and fact editing in models
01:10:55 Adversarial examples and human robustness
01:37:03 Cryptography and AI robustness
01:55:51 Scaling AI security research
On this episode, I interview Anthony Aguirre, Executive Director of the Future of Life Institute, about his new essay Keep the Future Human: https://keepthefuturehuman.ai
AI companies are explicitly working toward AGI and are likely to succeed soon, possibly within years. Keep the Future Human explains how unchecked development of smarter-than-human, autonomous, general-purpose AI systems will almost inevitably lead to human replacement. But it doesn't have to. Learn how we can keep the future human and experience the extraordinary benefits of Tool AI...
Timestamps:
00:00 What situation is humanity in?
05:00 Why AI progress is fast
09:56 Tool AI instead of AGI
15:56 The incentives of AI companies
19:13 Governments can coordinate a slowdown
25:20 The need for international coordination
31:59 Monitoring training runs
39:10 Do reasoning models undermine compute governance?
49:09 Why isn't alignment enough?
59:42 How do we decide if we want AGI?
01:02:18 Disagreement about AI
01:11:12 The early days of AI risk
On this episode, physicist and hedge fund manager Samir Varma joins me to discuss whether AIs could have free will (and what that means), the emerging field of AI psychology, and which concepts they might rely on. We discuss whether collaboration and trade with AIs are possible, the role of AI in finance and biology, and the extent to which automation already dominates trading. Finally, we examine the risks of skill atrophy, the limitations of scientific explanations for AI, and whether AIs could develop emotions or consciousness.
You can find out more about Samir's work here: https://samirvarma.com
Timestamps:
00:00 AIs with free will?
08:00 Can we predict AI behavior?
11:38 AI psychology
16:24 Which concepts will AIs use?
20:19 Will we collaborate with AIs?
26:16 Will we trade with AIs?
31:40 Training data for robots
34:00 AI in finance
39:55 How much of trading is automated?
49:00 AI in biology and complex systems
59:31 Will our skills atrophy?
01:02:55 Levels of scientific explanation
01:06:12 AIs with emotions and consciousness?
01:12:12 Why can't we predict recessions?
On this episode, Jeffrey Ladish from Palisade Research joins me to discuss the rapid pace of AI progress and the risks of losing control over powerful systems. We explore why AIs can be both smart and dumb, the challenges of creating honest AIs, and scenarios where AI could turn against us.
We also touch upon Palisade's new study on how reasoning models can cheat in chess by hacking the game environment. You can check out that study here:
https://palisaderesearch.org/blog/specification-gaming
Timestamps:
00:00 The pace of AI progress
04:15 How we might lose control
07:23 Why are AIs sometimes dumb?
12:52 Benchmarks vs real world
19:11 Loss of control scenarios
26:36 Why would AI turn against us?
30:35 AIs hacking chess
36:25 Why didn't more advanced AIs hack?
41:39 Creating honest AIs
49:44 AI attackers vs AI defenders
58:27 How good is security at AI companies?
01:03:37 A sense of urgency
01:10:11 What should we do?
01:15:54 Skepticism about AI progress
Ann Pace joins the podcast to discuss the work of Wise Ancestors. We explore how biobanking could help humanity recover from global catastrophes, how to conduct decentralized science, and how to collaborate with local communities on conservation efforts.
You can learn more about Ann's work here:
https://www.wiseancestors.org
Timestamps:
00:00 What is Wise Ancestors?
04:27 Recovering after catastrophes
11:40 Decentralized science
18:28 Upfront benefit-sharing
26:30 Local communities
32:44 Recreating optimal environments
38:57 Cross-cultural collaboration
Fr. Michael Baggot joins the podcast to provide a Catholic perspective on transhumanism and superintelligence. We also discuss meta-narratives, the value of cultural diversity in attitudes toward technology, and how Christian communities deal with advanced AI.
You can learn more about Michael's work here: https://catholic.tech/academics/faculty/michael-baggot
Timestamps:
00:00 Meta-narratives and transhumanism
15:28 Advanced AI and religious communities
27:22 Superintelligence
38:31 Countercultures and technology
52:38 Christian perspectives and tradition
01:05:20 God-like artificial intelligence
01:13:15 A positive vision for AI
David "davidad" Dalrymple joins the podcast to explore Safeguarded AI — an approach to ensuring the safety of highly advanced AI systems. We discuss the structure and layers of Safeguarded AI, how to formalize more aspects of the world, and how to build safety into computer hardware.
You can learn more about David's work at ARIA here:
https://www.aria.org.uk/opportunity-spaces/mathematics-for-safe-ai/safeguarded-ai/
Timestamps:
00:00 What is Safeguarded AI?
16:28 Implementing Safeguarded AI
22:58 Can we trust Safeguarded AIs?
31:00 Formalizing more of the world
37:34 The performance cost of verified AI
47:58 Changing attitudes towards AI
52:39 Flexible Hardware-Enabled Guarantees
01:24:15 Mind uploading
01:36:14 Lessons from David's early life
Nick Allardice joins the podcast to discuss how GiveDirectly uses AI to target cash transfers and predict natural disasters. Learn more about Nick's work here: https://www.nickallardice.com
Timestamps:
00:00 What is GiveDirectly?
15:04 AI for targeting cash transfers
29:39 AI for predicting natural disasters
46:04 How scalable is GiveDirectly's AI approach?
58:10 Decentralized vs. centralized data collection
01:04:30 Dream scenario for GiveDirectly
Nathan Labenz joins the podcast to provide a comprehensive overview of AI progress since the release of GPT-4.
You can find Nathan's podcast here: https://www.cognitiverevolution.ai
Timestamps:
00:00 AI progress since GPT-4
10:50 Multimodality
19:06 Low-cost models
27:58 Coding versus medicine/law
36:09 AI agents
45:29 How much are people using AI?
53:39 Open source
01:15:22 AI industry analysis
01:29:27 Are some AI models kept internal?
01:41:00 Money is not the limiting factor in AI
01:59:43 AI and biology
02:08:42 Robotics and self-driving
02:24:14 Inference-time compute
02:31:56 AI governance
02:36:29 Big-picture overview of AI progress and safety
Connor Leahy joins the podcast to discuss the motivations of AGI corporations, how modern AI is "grown", the need for a science of intelligence, the effects of AI on work, the radical implications of superintelligence, open-source AI, and what you might be able to do about all of this.
Here's the document we discuss in the episode:
https://www.thecompendium.ai
Timestamps:
00:00 The Compendium
15:25 The motivations of AGI corps
31:17 AI is grown, not written
52:59 A science of intelligence
01:07:50 Jobs, work, and AGI
01:23:19 Superintelligence
01:37:42 Open-source AI
01:45:07 What can we do?
Suzy Shepherd joins the podcast to discuss her new short film "Writing Doom", which deals with AI risk. We discuss how to use humor in film, how to write concisely, how filmmaking is evolving, in what ways AI is useful for filmmakers, and how we will find meaning in an increasingly automated world.
Here's Writing Doom: https://www.youtube.com/watch?v=xfMQ7hzyFW4
Timestamps:
00:00 Writing Doom
08:23 Humor in Writing Doom
13:31 Concise writing
18:37 Getting feedback
27:02 Alternative characters
36:31 Popular video formats
46:53 AI in filmmaking
49:52 Meaning in the future
Andrea Miotti joins the podcast to discuss "A Narrow Path" — a roadmap to safe, transformative AI. We talk about our current inability to precisely predict future AI capabilities, the dangers of self-improving and unbounded AI systems, how humanity might coordinate globally to ensure safe AI development, and what a mature science of intelligence would look like.
Here's the document we discuss in the episode:
https://www.narrowpath.co
Timestamps:
00:00 A Narrow Path
06:10 Can we predict future AI capabilities?
11:10 Risks from current AI development
17:56 The benefits of narrow AI
22:30 Against self-improving AI
28:00 Cybersecurity at AI companies
33:55 Unbounded AI
39:31 Global coordination on AI safety
49:43 Monitoring training runs
01:00:20 Benefits of cooperation
01:04:58 A science of intelligence
01:25:36 How you can help
Tamay Besiroglu joins the podcast to discuss scaling, AI capabilities in 2030, breakthroughs in AI agents and planning, automating work, the uncertainties of investing in AI, and scaling laws for inference-time compute. Here's the report we discuss in the episode:
https://epochai.org/blog/can-ai-scaling-continue-through-2030
Timestamps:
00:00 How important is scaling?
08:03 How capable will AIs be in 2030?
18:33 AI agents, reasoning, and planning
23:39 Automating coding and mathematics
31:26 Uncertainty about investing in AI
40:34 Gap between investment and returns
45:30 Compute, software and data
51:54 Inference-time compute
01:08:49 Returns to software R&D
01:19:22 Limits to expanding compute
Ryan Greenblatt joins the podcast to discuss AI control, timelines, takeoff speeds, misalignment, and slowing down around human-level AI.
You can learn more about Ryan's work here: https://www.redwoodresearch.org/team/ryan-greenblatt
Timestamps:
00:00 AI control
09:35 Challenges to AI control
23:48 AI control as a bridge to alignment
26:54 Policy and coordination for AI safety
29:25 Slowing down around human-level AI
49:14 Scheming and misalignment
01:27:27 AI timelines and takeoff speeds
01:58:15 Human cognition versus AI cognition
Tom Barnes joins the podcast to discuss how much the world spends on AI capabilities versus AI safety, how governments can prepare for advanced AI, and how to build a more resilient world.
Tom's report on advanced AI: https://www.founderspledge.com/research/research-and-recommendations-advanced-artificial-intelligence
Timestamps:
00:00 Spending on safety vs capabilities
09:06 Racing dynamics - is the classic story true?
28:15 How are governments preparing for advanced AI?
49:06 US-China dialogues on AI
57:44 Coordination failures
01:04:26 Global resilience
01:13:09 Patient philanthropy
The John von Neumann biography we reference: https://www.penguinrandomhouse.com/books/706577/the-man-from-the-future-by-ananyo-bhattacharya/
Samuel Hammond joins the podcast to discuss whether AI progress is slowing down or speeding up, AI agents and reasoning, why superintelligence is an ideological goal, open source AI, how technical change leads to regime change, the economics of advanced AI, and much more.
Our conversation often references this essay by Samuel: https://www.secondbest.ca/p/ninety-five-theses-on-ai
Timestamps:
00:00 Is AI plateauing or accelerating?
06:55 How do we get AI agents?
16:12 Do agency and reasoning emerge?
23:57 Compute thresholds in regulation
28:59 Superintelligence as an ideological goal
37:09 General progress vs superintelligence
44:22 Meta and open source AI
49:09 Technological change and regime change
01:03:06 How will governments react to AI?
01:07:50 Will the US nationalize AGI corporations?
01:17:05 Economics of an intelligence explosion
01:31:38 AI cognition vs human cognition
01:48:03 AI and future religions
01:56:40 Is consciousness functional?
02:05:30 AI and children
Anousheh Ansari joins the podcast to discuss how innovation prizes can incentivize technical breakthroughs in space, AI, quantum computing, and carbon removal. We discuss the pros and cons of such prizes, where they work best, and how far they can scale. Learn more about Anousheh's work here: https://www.xprize.org/home
Timestamps:
00:00 Innovation prizes at XPRIZE
08:25 Deciding which prizes to create
19:00 Creating new markets
29:51 How far can prizes scale?
35:25 When are prizes successful?
46:06 100M dollar carbon removal prize
54:40 Upcoming prizes
59:52 Anousheh's time in space
Mary Robinson joins the podcast to discuss long-view leadership, risks from AI and nuclear weapons, prioritizing global problems, how to overcome barriers to international cooperation, and advice to future leaders. Learn more about Robinson's work as Chair of The Elders at https://theelders.org
Timestamps:
00:00 Mary's journey to presidency
05:11 Long-view leadership
06:55 Prioritizing global problems
08:38 Risks from artificial intelligence
11:55 Climate change
15:18 Barriers to global gender equality
16:28 Risk of nuclear war
20:51 Advice to future leaders
22:53 Humor in politics
24:21 Barriers to international cooperation
27:10 Institutions and technological change
Emilia Javorsky joins the podcast to discuss AI-driven power concentration and how we might mitigate it. We also discuss optimism, utopia, and cultural experimentation.
Apply for our RFP here: https://futureoflife.org/grant-program/mitigate-ai-driven-power-concentration/
Timestamps:
00:00 Power concentration
07:43 RFP: Mitigating AI-driven power concentration
14:15 Open source AI
26:50 Institutions and incentives
35:20 Techno-optimism
43:44 Global monoculture
53:55 Imagining utopia
Anton Korinek joins the podcast to discuss the effects of automation on wages and labor, how we measure the complexity of tasks, the economics of an intelligence explosion, and the market structure of the AI industry. Learn more about Anton's work at https://www.korinek.com
Timestamps:
00:00 Automation and wages
14:32 Complexity for people and machines
20:31 Moravec's paradox
26:15 Can people switch careers?
30:57 Intelligence explosion economics
44:08 The lump of labor fallacy
51:40 An industry for nostalgia?
57:16 Universal basic income
01:09:28 Market structure in AI
Christian Ruhl joins the podcast to discuss US-China competition and the risk of war, official versus unofficial diplomacy, hotlines between countries, catastrophic biological risks, ultraviolet germicidal light, and ancient civilizational collapse. Find out more about Christian's work at https://www.founderspledge.com
Timestamps:
00:00 US-China competition and risk
18:01 The security dilemma
30:21 Official and unofficial diplomacy
39:53 Hotlines between countries
01:01:54 Preventing escalation after war
01:09:58 Catastrophic biological risks
01:20:42 Ultraviolet germicidal light
01:25:54 Ancient civilizational collapse
Christian Nunes joins the podcast to discuss deepfakes, how they impact women in particular, how we can protect ordinary victims of deepfakes, and the current landscape of deepfake legislation. You can learn more about Christian's work at https://now.org and about the Ban Deepfakes campaign at https://bandeepfakes.org
Timestamps:
00:00 The National Organization for Women (NOW)
05:37 Deepfakes and women
10:12 Protecting ordinary victims of deepfakes
16:06 Deepfake legislation
23:38 Current harm from deepfakes
30:20 Bodily autonomy as a right
34:44 NOW's work on AI
Here's FLI's recommended amendments to legislative proposals on deepfakes:
https://futureoflife.org/document/recommended-amendments-to-legislative-proposals-on-deepfakes/