57 episodes • Length: 80 min • Weekly: Wednesday
Urgent disagreements that must be resolved before the world ends, hosted by Liron Shapira.
lironshapira.substack.com
The podcast Doom Debates is created by Liron Shapira. The podcast and its artwork are embedded on this page using the public podcast feed (RSS).
Thanks to everyone who participated in the live Q&A on Friday!
The topics covered include advice for computer science students, working in AI trustworthiness, what good AI regulation looks like, the implications of the $500B Stargate project, the public's gradual understanding of AI risks, the impact of minor AI disasters, and the philosophy of consciousness.
00:00 Advice for Comp Sci Students
01:14 The $500B Stargate Project
02:36 Eliezer's Recent Podcast
03:07 AI Safety and Public Policy
04:28 AI Disruption and Politics
05:12 DeepSeek and AI Advancements
06:54 Human vs. AI Intelligence
14:00 Consciousness and AI
24:34 Dark Forest Theory and AI
35:31 Investing in Yourself
42:42 Probability of Aliens Saving Us from AI
43:31 Brain-Computer Interfaces and AI Safety
46:19 Debating AI Safety and Human Intelligence
48:50 Nefarious AI Activities and Satellite Surveillance
49:31 Pliny the Prompter Jailbreaking AI
50:20 Can’t vs. Won’t Destroy the World
51:15 How to Make AI Risk Feel Present
54:27 Keeping Doom Arguments On Track
57:04 Game Theory and AI Development Race
01:01:26 Mental Model of Average Non-Doomer
01:04:58 Is Liron a Strict Bayesian and Utilitarian?
01:09:48 Can We Rename “Doom Debates”
01:12:34 The Role of AI Trustworthiness
01:16:48 Minor AI Disasters
01:18:07 Most Likely Reason Things Go Well
01:21:00 Final Thoughts
Show Notes
Previous post where people submitted questions: https://lironshapira.substack.com/p/ai-twitter-beefs-3-marc-andreessen
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates
It’s time for AI Twitter Beefs #3:
00:00 Introduction
01:27 Marc Andreessen vs. Sam Altman
09:15 Mark Zuckerberg
35:40 Martin Casado
47:26 Gary Marcus vs. Miles Brundage Bet
58:39 Scott Alexander’s AI Art Turing Test
01:11:29 Roon
01:16:35 Stephen McAleer
01:22:25 Emmett Shear
01:37:20 OpenAI’s “Safety”
01:44:09 Naval Ravikant vs. Eliezer Yudkowsky
01:56:03 Comic Relief
01:58:53 Final Thoughts
Show Notes
Upcoming Live Q&A: https://lironshapira.substack.com/p/2500-subscribers-live-q-and-a-ask
“Make Your Beliefs Pay Rent In Anticipated Experiences” by Eliezer Yudkowsky on LessWrong: https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences
Scott Alexander’s AI Art Turing Test: https://www.astralcodexten.com/p/how-did-you-do-on-the-ai-art-turing
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates
Effective Altruism has been a controversial topic on social media, so today my guest and I are going to settle the question once and for all: Is it good or bad?
Jonas Sota is a Software Engineer at Rippling with a BA in Philosophy from UC Berkeley. He’s been observing the Effective Altruism (EA) movement in the San Francisco Bay Area for over a decade… and he’s not a fan.
00:00 Introduction
01:22 Jonas’s Criticisms of EA
03:23 Recoil Exaggeration
05:53 Impact of Malaria Nets
10:48 Local vs. Global Altruism
13:02 Shrimp Welfare
25:14 Capitalism vs. Charity
33:37 Cultural Sensitivity
34:43 The Impact of Direct Cash Transfers
37:23 Long-Term Solutions vs. Immediate Aid
42:21 Charity Budgets
45:47 Prioritizing Local Issues
50:55 The EA Community
59:34 Debate Recap
1:03:57 Announcements
Show Notes
Jonas’s Instagram: @jonas_wanders
Will MacAskill’s famous book, Doing Good Better: https://www.effectivealtruism.org/doing-good-better
Scott Alexander’s excellent post about the people he met at EA Global: https://slatestarcodex.com/2017/08/16/fear-and-loathing-at-effective-altruism-global-2017/
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates
Matthew Adelstein, better known as Bentham’s Bulldog on Substack, is a philosophy major at the University of Michigan and an up & coming public intellectual.
He’s a rare combination: Effective Altruist, Bayesian, non-reductionist, theist.
Our debate covers reductionism, evidence for god, the implications of a fine-tuned universe, moral realism, and AI doom.
00:00 Introduction
02:56 Matthew’s Research
11:29 Animal Welfare
16:04 Reductionism vs. Non-Reductionism Debate
39:53 The Decline of God in Modern Discourse
46:23 Religious Credences
50:24 Pascal's Wager and Christianity
56:13 Are Miracles Real?
01:10:37 Fine-Tuning Argument for God
01:28:36 Cellular Automata
01:34:25 Anthropic Principle
01:51:40 Mathematical Structures and Probability
02:09:35 Defining God
02:18:20 Moral Realism
02:21:40 Orthogonality Thesis
02:32:02 Moral Philosophy vs. Science
02:45:51 Moral Intuitions
02:53:18 AI and Moral Philosophy
03:08:50 Debate Recap
03:12:20 Show Updates
Show Notes
Matthew’s Substack: https://benthams.substack.com
Matthew's Twitter: https://x.com/BenthamsBulldog
Matthew's YouTube: https://www.youtube.com/@deliberationunderidealcond5105
Lethal Intelligence Guide, the ultimate animated video introduction to AI x-risk – https://www.youtube.com/watch?v=9CUFbqh16Fg
PauseAI, the volunteer organization I’m part of — https://pauseai.info/
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates
Prof. Kenneth Stanley is a former Research Science Manager at OpenAI, where he led the Open-Endedness Team from 2020 to 2022. Before that, he was a Professor of Computer Science at the University of Central Florida and the head of Core AI Research at Uber. He coauthored Why Greatness Cannot Be Planned: The Myth of the Objective, which argues that as soon as you create an objective, you ruin your ability to reach it.
In this episode, I debate Ken’s claim that superintelligent AI *won’t* be guided by goals, and then we compare our views on AI doom.
00:00 Introduction
00:45 Ken’s Role at OpenAI
01:53 “Open-Endedness” and “Divergence”
09:32 Open-Endedness of Evolution
21:16 Human Innovation and Tech Trees
36:03 Objectives vs. Open-Endedness
47:14 The Concept of Optimization Processes
57:22 What’s Your P(Doom)™
01:11:01 Interestingness and the Future
01:20:14 Human Intelligence vs. Superintelligence
01:37:51 Instrumental Convergence
01:55:58 Mitigating AI Risks
02:04:02 The Role of Institutional Checks
02:13:05 Exploring AI's Curiosity and Human Survival
02:20:51 Recapping the Debate
02:29:45 Final Thoughts
SHOW NOTES
Ken’s home page: https://www.kenstanley.net/
Ken’s Wikipedia: https://en.wikipedia.org/wiki/Kenneth_Stanley
Ken’s Twitter: https://x.com/kenneth0stanley
Ken’s PicBreeder paper: https://wiki.santafe.edu/images/1/1e/Secretan_ecj11.pdf
Ken's book, Why Greatness Cannot Be Planned: The Myth of the Objective: https://www.amazon.com/Why-Greatness-Cannot-Planned-Objective/dp/3319155237
The Rocket Alignment Problem by Eliezer Yudkowsky: https://intelligence.org/2018/10/03/rocket-alignment/
---
Lethal Intelligence Guide, the ultimate animated video introduction to AI x-risk – https://www.youtube.com/watch?v=9CUFbqh16Fg
PauseAI, the volunteer organization I’m part of — https://pauseai.info/
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
---
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates
OpenAI just announced o3 and smashed a bunch of benchmarks (ARC-AGI, SWE-bench, FrontierMath)!
A new Anthropic and Redwood Research paper says Claude is resisting its developers’ attempts to retrain its values!
What’s the upshot — what does it all mean for P(doom)?
00:00 Introduction
01:45 o3’s architecture and benchmarks
06:08 “Scaling is hitting a wall” 🤡
13:41 How many new architectural insights before AGI?
20:28 Negative update for interpretability
31:30 Intellidynamics — ***KEY CONCEPT***
33:20 Nuclear control rod analogy
36:54 Sam Altman's misguided perspective
42:40 Claude resisted retraining from good to evil
44:22 What is good corrigibility?
52:42 Claude’s incorrigibility doesn’t surprise me
55:00 Putting it all in perspective
---
SHOW NOTES
Scott Alexander’s analysis of the Claude incorrigibility result: https://www.astralcodexten.com/p/claude-fights-back and https://www.astralcodexten.com/p/why-worry-about-incorrigible-claude
Zvi Mowshowitz’s analysis of the Claude incorrigibility result: https://thezvi.wordpress.com/2024/12/24/ais-will-increasingly-fake-alignment/
---
PauseAI Website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA
Say hi to me in the #doom-debates-podcast channel!
Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates
This week Liron was interviewed by Gaëtan Selle on @the-flares about AI doom.
Cross-posted from their channel with permission.
Original source: https://www.youtube.com/watch?v=e4Qi-54I9Zw
0:00:02 Guest Introduction
0:01:41 Effective Altruism and Transhumanism
0:05:38 Bayesian Epistemology and Extinction Probability
0:09:26 Defining Intelligence and Its Dangers
0:12:33 The Key Argument for AI Apocalypse
0:18:51 AI’s Internal Alignment
0:24:56 What Will AI's Real Goal Be?
0:26:50 The Train of Apocalypse
0:31:05 Among Intellectuals, Who Rejects the AI Apocalypse Arguments?
0:38:32 The Shoggoth Meme
0:41:26 Possible Scenarios Leading to Extinction
0:50:01 The Only Solution: A Pause in AI Research?
0:59:15 The Risk of Violence from AI Risk Fundamentalists
1:01:18 What Will General AI Look Like?
1:05:43 Sci-Fi Works About AI
1:09:21 The Rationale Behind Cryonics
1:12:55 What Does a Positive Future Look Like?
1:15:52 Are We Living in a Simulation?
1:18:11 Many Worlds in Quantum Mechanics Interpretation
1:20:25 Ideal Future Podcast Guest for Doom Debates
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
Roon is a member of the technical staff at OpenAI. He’s a highly respected voice on tech Twitter, despite being a pseudonymous cartoon avatar account. In late 2021, he invented the terms “shape rotator” and “wordcel” to refer to roughly visual/spatial/mathematical intelligence vs. verbal intelligence. He is simultaneously a serious thinker, a builder, and a shitposter.
I'm excited to learn more about Roon, his background, his life, and of course, his views about AI and existential risk.
00:00 Introduction
02:43 Roon’s Quest and Philosophies
22:32 AI Creativity
30:42 What’s Your P(Doom)™
54:40 AI Alignment
57:24 Training vs. Production
01:05:37 ASI
01:14:35 Goal-Oriented AI and Instrumental Convergence
01:22:43 Pausing AI
01:25:58 Crux of Disagreement
01:27:55 Dogecoin
01:29:13 Doom Debates’ Mission
Show Notes
Follow Roon: https://x.com/tszzl
For Humanity: An AI Safety Podcast with John Sherman — https://www.youtube.com/@ForHumanityPodcast
Lethal Intelligence Guide, the ultimate animated video introduction to AI x-risk – https://www.youtube.com/watch?v=9CUFbqh16Fg
PauseAI, the volunteer organization I’m part of — https://pauseai.info/
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
Today I’m reacting to the recent Scott Aaronson interview on the Win-Win podcast with Liv Boeree and Igor Kurganov.
Prof. Aaronson is the Director of the Quantum Information Center at the University of Texas at Austin. He’s best known for his research advancing the frontier of complexity theory, especially quantum complexity theory, and making complex insights from his field accessible to a wider readership via his blog.
Scott is one of my biggest intellectual influences. His famous Who Can Name The Bigger Number essay and his long-running blog are among my best memories of coming across high-quality intellectual content online as a teen. His posts and lectures taught me much of what I know about complexity theory.
Scott recently completed a two-year stint at OpenAI focusing on the theoretical foundations of AI safety, so I was interested to hear his insider account.
Unfortunately, what I heard in the interview confirms my worst fears about the meaning of “safety” at today’s AI companies: they’re laughably clueless about how to achieve any measure of safety, but instead of doing the adult thing and slowing down their capabilities work, they’re pushing forward recklessly.
00:00 Introducing Scott Aaronson
02:17 Scott's Recruitment by OpenAI
04:18 Scott's Work on AI Safety at OpenAI
08:10 Challenges in AI Alignment
12:05 Watermarking AI Outputs
15:23 The State of AI Safety Research
22:13 The Intractability of AI Alignment
34:20 Policy Implications and the Call to Pause AI
38:18 Out-of-Distribution Generalization
45:30 Moral Worth Criterion for Humans
51:49 Quantum Mechanics and Human Uniqueness
01:00:31 Quantum No-Cloning Theorem
01:12:40 Scott Is Almost An Accelerationist?
01:18:04 Geoffrey Hinton's Proposal for Analog AI
01:36:13 The AI Arms Race and the Need for Regulation
01:39:41 Scott Aaronson’s Thoughts on Sam Altman
01:42:58 Scott Rejects the Orthogonality Thesis
01:46:35 Final Thoughts
01:48:48 Lethal Intelligence Clip
01:51:42 Outro
Show Notes
Scott’s Interview on Win-Win with Liv Boeree and Igor Kurganov: https://www.youtube.com/watch?v=ANFnUHcYza0
Scott’s Blog: https://scottaaronson.blog
PauseAI Website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA
Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
Today I’m reacting to a July 2024 interview that Prof. Subbarao Kambhampati did on Machine Learning Street Talk.
Rao is a Professor of Computer Science at Arizona State University, and one of the foremost voices making the claim that while LLMs can generate creative ideas, they can’t truly reason.
The episode covers a range of topics including planning, creativity, the limits of LLMs, and why Rao thinks LLMs are essentially advanced N-gram models.
00:00 Introduction
02:54 Essentially N-Gram Models?
10:31 The Manhole Cover Question
20:54 Reasoning vs. Approximate Retrieval
47:03 Explaining Jokes
53:21 Caesar Cipher Performance
01:10:44 Creativity vs. Reasoning
01:33:37 Reasoning By Analogy
01:48:49 Synthetic Data
01:53:54 The ARC Challenge
02:11:47 Correctness vs. Style
02:17:55 AIs Becoming More Robust
02:20:11 Block Stacking Problems
02:48:12 PlanBench and Future Predictions
02:58:59 Final Thoughts
Show Notes
Rao’s interview on Machine Learning Street Talk: https://www.youtube.com/watch?v=y1WnHpedi2A
Rao’s Twitter: https://x.com/rao2z
PauseAI Website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA
Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
In this episode of Doom Debates, I discuss AI existential risks with my pseudonymous guest Nethys.
Nethys shares his journey into AI risk awareness, influenced heavily by LessWrong and Eliezer Yudkowsky. We explore the vulnerability of society to emerging technologies, the challenges of AI alignment, and why he believes our current approaches are insufficient, ultimately arriving at a P(Doom) of 99.999%.
00:00 Nethys Introduction
04:47 The Vulnerable World Hypothesis
10:01 What’s Your P(Doom)™
14:04 Nethys’s Banger YouTube Comment
26:53 Living with High P(Doom)
31:06 Losing Access to Distant Stars
36:51 Defining AGI
39:09 The Convergence of AI Models
47:32 The Role of “Unlicensed” Thinkers
52:07 The PauseAI Movement
58:20 Lethal Intelligence Video Clip
Show Notes
Eliezer Yudkowsky’s post on “Death with Dignity”: https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy
PauseAI Website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA
Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
Fraser Cain is the publisher of Universe Today, co-host of Astronomy Cast, a popular YouTuber about all things space, and guess what… he has a high P(doom)! That’s why he’s joining me on Doom Debates for a very special AI + space crossover episode.
00:00 Fraser Cain’s Background and Interests
05:03 What’s Your P(Doom)™
07:05 Our Vulnerable World
15:11 Don’t Look Up
22:18 Cosmology and the Search for Alien Life
31:33 Stars = Terrorists
39:03 The Great Filter and the Fermi Paradox
55:12 Grabby Aliens Hypothesis
01:19:40 Life Around Red Dwarf Stars?
01:22:23 Epistemology of Grabby Aliens
01:29:04 Multiverses
01:33:51 Quantum Many Worlds vs. Copenhagen Interpretation
01:47:25 Simulation Hypothesis
01:51:25 Final Thoughts
SHOW NOTES
Fraser’s YouTube channel: https://www.youtube.com/@frasercain
Universe Today (space and astronomy news): https://www.universetoday.com/
Max Tegmark’s book that explains 4 levels of multiverses: https://www.amazon.com/Our-Mathematical-Universe-Ultimate-Reality/dp/0307744256
Robin Hanson’s ideas:
Grabby Aliens: https://grabbyaliens.com
The Great Filter: https://en.wikipedia.org/wiki/Great_Filter
Life in a high-dimensional space: https://www.overcomingbias.com/p/life-in-1kdhtml
---
Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.
---
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
Vaden Masrani and Ben Chugg, hosts of the Increments Podcast, are back for a Part II! This time we’re going straight to debating my favorite topic, AI doom.
00:00 Introduction
02:23 High-Level AI Doom Argument
17:06 How Powerful Could Intelligence Be?
22:34 “Knowledge Creation”
48:33 “Creativity”
54:57 Stand-Up Comedy as a Test for AI
01:12:53 Vaden & Ben’s Goalposts
01:15:00 How to Change Liron’s Mind
01:20:02 LLMs are Stochastic Parrots?
01:34:06 Tools vs. Agents
01:39:51 Instrumental Convergence and AI Goals
01:45:51 Intelligence vs. Morality
01:53:57 Mainline Futures
02:16:50 Lethal Intelligence Video
Show Notes
Vaden & Ben’s Podcast: https://www.youtube.com/@incrementspod
Recommended playlists from their podcast:
* The Bayesian vs Popperian Epistemology Series
* The Conjectures and Refutations Series
Vaden’s Twitter: https://x.com/vadenmasrani
Ben’s Twitter: https://x.com/BennyChugg
Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
Dr. Andrew Critch is the co-founder of the Center for Applied Rationality, a former Research Fellow at the Machine Intelligence Research Institute (MIRI), a Research Scientist at the UC Berkeley Center for Human Compatible AI, and the co-founder of a new startup called Healthcare Agents.
Dr. Critch’s P(Doom) is a whopping 85%! But his most likely doom scenario isn’t what you might expect. He thinks humanity will successfully avoid a self-improving superintelligent doom scenario, only to still go extinct via the slower process of “industrial dehumanization”.
00:00 Introduction
01:43 Dr. Critch’s Perspective on LessWrong Sequences
06:45 Bayesian Epistemology
15:34 Dr. Critch's Time at MIRI
18:33 What’s Your P(Doom)™
26:35 Doom Scenarios
40:38 AI Timelines
43:09 Defining “AGI”
48:27 Superintelligence
53:04 The Speed Limit of Intelligence
01:12:03 The Obedience Problem in AI
01:21:22 Artificial Superintelligence and Human Extinction
01:24:36 Global AI Race and Geopolitics
01:34:28 Future Scenarios and Human Relevance
01:48:13 Extinction by Industrial Dehumanization
01:58:50 Automated Factories and Human Control
02:02:35 Global Coordination Challenges
02:27:00 Healthcare Agents
02:35:30 Final Thoughts
---
Show Notes
Dr. Critch’s LessWrong post explaining his P(Doom) and most likely doom scenarios: https://www.lesswrong.com/posts/Kobbt3nQgv3yn29pr/my-motivation-and-theory-of-change-for-working-in-ai
Dr. Critch’s Website: https://acritch.com/
Dr. Critch’s Twitter: https://twitter.com/AndrewCritchPhD
---
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
It’s time for AI Twitter Beefs #2:
00:42 Jack Clark (Anthropic) vs. Holly Elmore (PauseAI US)
11:02 Beff Jezos vs. Eliezer Yudkowsky, Carl Feynman
18:10 Geoffrey Hinton vs. OpenAI & Meta
25:14 Samuel Hammond vs. Liron
30:26 Yann LeCun vs. Eliezer Yudkowsky
37:13 Roon vs. Eliezer Yudkowsky
41:37 Tyler Cowen vs. AI Doomers
52:54 David Deutsch vs. Liron
Twitter people referenced:
* Jack Clark: https://x.com/jackclarkSF
* Holly Elmore: https://x.com/ilex_ulmus
* PauseAI US: https://x.com/PauseAIUS
* Geoffrey Hinton: https://x.com/GeoffreyHinton
* Samuel Hammond: https://x.com/hamandcheese
* Yann LeCun: https://x.com/ylecun
* Eliezer Yudkowsky: https://x.com/esyudkowsky
* Roon: https://x.com/tszzl
* Beff Jezos: https://x.com/basedbeffjezos
* Carl Feynman: https://x.com/carl_feynman
* Tyler Cowen: https://x.com/tylercowen
* David Deutsch: https://x.com/DavidDeutschOxf
Show Notes
Holly Elmore’s EA forum post about scouts vs. soldiers
Manifund info & donation page for PauseAI US: https://manifund.org/projects/pauseai-us-2025-through-q2
PauseAI.info - join the Discord and find me in the #doom-debates channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
Vaden Masrani and Ben Chugg, hosts of the Increments Podcast, are joining me to debate Bayesian vs. Popperian epistemology.
I’m on the Bayesian side, heavily influenced by the writings of Eliezer Yudkowsky. Vaden and Ben are on the Popperian side, heavily influenced by David Deutsch and the writings of Popper himself.
We dive into the theoretical underpinnings of Bayesian reasoning and Solomonoff induction, contrasting them with the Popperian perspective, and explore real-world applications such as predicting elections and economic policy outcomes.
The debate highlights key philosophical differences between our two epistemological frameworks, and sets the stage for further discussions on superintelligence and AI doom scenarios in an upcoming Part II.
00:00 Introducing Vaden and Ben
02:51 Setting the Stage: Epistemology and AI Doom
04:50 What’s Your P(Doom)™
13:29 Popperian vs. Bayesian Epistemology
31:09 Engineering and Hypotheses
38:01 Solomonoff Induction
45:21 Analogy to Mathematical Proofs
48:42 Popperian Reasoning and Explanations
54:35 Arguments Against Bayesianism
58:33 Against Probability Assignments
01:21:49 Popper’s Definition of “Content”
01:31:22 Heliocentric Theory Example
01:31:34 “Hard to Vary” Explanations
01:44:42 Coin Flipping Example
01:57:37 Expected Value
02:12:14 Prediction Market Calibration
02:19:07 Futarchy
02:29:14 Prediction Markets as AI Lower Bound
02:39:07 A Test for Prediction Markets
02:45:54 Closing Thoughts
Show Notes
Vaden & Ben’s Podcast: https://www.youtube.com/@incrementspod
Vaden’s Twitter: https://x.com/vadenmasrani
Ben’s Twitter: https://x.com/BennyChugg
Bayesian reasoning: https://en.wikipedia.org/wiki/Bayesian_inference
Karl Popper: https://en.wikipedia.org/wiki/Karl_Popper
Vaden's blog post on Cox's Theorem and Yudkowsky's claims of "Laws of Rationality": https://vmasrani.github.io/blog/2021/the_credence_assumption/
Vaden’s disproof of probabilistic induction (including Solomonoff Induction): https://arxiv.org/abs/2107.00749
Vaden’s referenced post about predictions being uncalibrated > 1yr out: https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations
Article by Gavin Leech and Misha Yagudin on the reliability of forecasters: https://ifp.org/can-policymakers-trust-forecasters/
Sources for the claim that superforecasters gave a P(doom) below 1%: https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/ and https://www.astralcodexten.com/p/the-extinction-tournament
Vaden’s Slides on Content vs Probability: https://vmasrani.github.io/assets/pdf/popper_good.pdf
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
Our top researchers and industry leaders have been warning us that superintelligent AI may cause human extinction in the next decade.
If you haven't been following all the urgent warnings, I'm here to bring you up to speed.
* Human-level AI is coming soon
* It’s an existential threat to humanity
* The situation calls for urgent action
Listen to this 15-minute intro to get the lay of the land.
Then follow these links to learn more and see how you can help:
A longer written introduction to AI doom by Connor Leahy et al
* AGI Ruin — A list of lethalities
A comprehensive list by Eliezer Yudkowsky of reasons why developing superintelligent AI is unlikely to go well for humanity
A catalogue of AI doom arguments and responses to objections
The largest volunteer org focused on lobbying world governments to pause development of superintelligent AI
Chat with PauseAI members, see a list of projects and get involved
---
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
Prof. Lee Cronin is the Regius Chair of Chemistry at the University of Glasgow. His research aims to understand how life might arise from non-living matter. In 2017, he invented “Assembly Theory” as a way to measure the complexity of molecules and gain insight into the earliest evolution of life.
Today we’re debating Lee's claims about the limits of AI capabilities, and my claims about the risk of extinction from superintelligent AGI.
00:00 Introduction
04:20 Assembly Theory
05:10 Causation and Complexity
10:07 Assembly Theory in Practice
12:23 The Concept of Assembly Index
16:54 Assembly Theory Beyond Molecules
30:13 P(Doom)
32:39 The Statement on AI Risk
42:18 Agency and Intent
47:10 RescueBot’s Intent vs. a Clock’s
53:42 The Future of AI and Human Jobs
57:34 The Limits of AI Creativity
01:04:33 The Complexity of the Human Brain
01:19:31 Superintelligence: Fact or Fiction?
01:29:35 Final Thoughts
Lee’s Wikipedia: https://en.wikipedia.org/wiki/Leroy_Cronin
Lee’s Twitter: https://x.com/leecronin
Lee’s paper on Assembly Theory: https://arxiv.org/abs/2206.02279
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
Ben Horowitz, cofounder and General Partner at Andreessen Horowitz (a16z), says nuclear proliferation is good.
I was shocked because I thought we all agreed nuclear proliferation is VERY BAD.
If Ben and a16z can’t appreciate the existential risks of nuclear weapons proliferation, why would anyone ever take them seriously on the topic of AI regulation?
00:00 Introduction
00:49 Ben Horowitz on Nuclear Proliferation
02:12 Ben Horowitz on Open Source AI
05:31 Nuclear Non-Proliferation Treaties
10:25 Escalation Spirals
15:20 Rogue Actors
16:33 Nuclear Accidents
17:19 Safety Mechanism Failures
20:34 The Role of Human Judgment in Nuclear Safety
21:39 The 1983 Soviet Nuclear False Alarm
22:50 a16z’s Disingenuousness
23:46 Martin Casado and Marc Andreessen
24:31 Nuclear Equilibrium
26:52 Why I Care
28:09 Wrap Up
Sources of this episode’s video clips:
Ben Horowitz’s interview on Upstream with Erik Torenberg: https://www.youtube.com/watch?v=oojc96r3Kuo
Martin Casado and Marc Andreessen talking about AI on the a16z Podcast: https://www.youtube.com/watch?v=0wIUK0nsyUg
Roger Skaer’s TikTok: https://www.tiktok.com/@rogerskaer
George W. Bush and John Kerry Presidential Debate (September 30, 2004): https://www.youtube.com/watch?v=WYpP-T0IcyA
Barack Obama’s Prague Remarks on Nuclear Disarmament: https://www.youtube.com/watch?v=QKSn1SXjj2s
John Kerry’s Remarks at the 2015 Nuclear Nonproliferation Treaty Review Conference: https://www.youtube.com/watch?v=LsY1AZc1K7w
Show notes:
Nuclear War, A Scenario by Annie Jacobsen: https://www.amazon.com/Nuclear-War-Scenario-Annie-Jacobsen/dp/0593476093
Dr. Strangelove or: How I learned to Stop Worrying and Love the Bomb: https://en.wikipedia.org/wiki/Dr._Strangelove
1961 Goldsboro B-52 Crash: https://en.wikipedia.org/wiki/1961_Goldsboro_B-52_crash
1983 Soviet Nuclear False Alarm Incident: https://en.wikipedia.org/wiki/1983_Soviet_nuclear_false_alarm_incident
List of military nuclear accidents: https://en.wikipedia.org/wiki/List_of_military_nuclear_accidents
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates.
Today I’m reacting to Arvind Narayanan’s interview with Robert Wright on the Nonzero podcast: https://www.youtube.com/watch?v=MoB_pikM3NY
Dr. Narayanan is a Professor of Computer Science and the Director of the Center for Information Technology Policy at Princeton. He just published a new book called AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.
Arvind claims AI is “normal technology like the internet”, and never sees fit to bring up the impact or urgency of AGI. So I’ll take it upon myself to point out all the questions where someone who takes AGI seriously would give different answers.
00:00 Introduction
01:49 AI is “Normal Technology”?
09:25 Playing Chess vs. Moving Chess Pieces
12:23 AI Has To Learn From Its Mistakes?
22:24 The Symbol Grounding Problem and AI's Understanding
35:56 Human vs AI Intelligence: The Fundamental Difference
36:37 The Cognitive Reflection Test
41:34 The Role of AI in Cybersecurity
43:21 Attack vs. Defense Balance in (Cyber)War
54:47 Taking AGI Seriously
01:06:15 Final Thoughts
Show Notes
The original Nonzero podcast episode with Arvind Narayanan and Robert Wright: https://www.youtube.com/watch?v=MoB_pikM3NY
Arvind’s new book, AI Snake Oil: https://www.amazon.com/Snake-Oil-Artificial-Intelligence-Difference-ebook/dp/B0CW1JCKVL
Arvind’s Substack: https://aisnakeoil.com
Arvind’s Twitter: https://x.com/random_walker
Robert Wright’s Twitter: https://x.com/robertwrighter
Robert Wright’s Nonzero Newsletter: https://nonzero.substack.com
Rob’s excellent post about symbol grounding (Yes, AIs ‘understand’ things): https://nonzero.substack.com/p/yes-ais-understand-things
My previous episode of Doom Debates reacting to Arvind Narayanan on Harry Stebbings’ podcast: https://www.youtube.com/watch?v=lehJlitQvZE
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
Dr. Keith Duggar from Machine Learning Street Talk was the subject of my recent reaction episode about whether GPT o1 can reason. But instead of ignoring or blocking me, Keith was brave enough to come into the lion’s den and debate his points with me… and his P(doom) might shock you!
First we debate whether Keith’s distinction between Turing Machines and Discrete Finite Automata is useful for understanding limitations of current LLMs. Then I take Keith on a tour of alignment, orthogonality, instrumental convergence, and other popular stations on the “doom train”, to compare our views on each.
Keith was a great sport and I think this episode is a classic!
00:00 Introduction
00:46 Keith’s Background
03:02 Keith’s P(doom)
14:09 Are LLMs Turing Machines?
19:09 Liron Concedes on a Point!
21:18 Do We Need >1MB of Context?
27:02 Examples to Illustrate Keith’s Point
33:56 Is Terence Tao a Turing Machine?
38:03 Factoring Numbers: Human vs. LLM
53:24 Training LLMs with Turing-Complete Feedback
01:02:22 What Does the Pillar Problem Illustrate?
01:05:40 Boundary between LLMs and Brains
01:08:52 The 100-Year View
01:18:29 Intelligence vs. Optimization Power
01:23:13 Is Intelligence Sufficient To Take Over?
01:28:56 The Hackable Universe and AI Threats
01:31:07 Nuclear Extinction vs. AI Doom
01:33:16 Can We Just Build Narrow AI?
01:37:43 Orthogonality Thesis and Instrumental Convergence
01:40:14 Debating the Orthogonality Thesis
02:03:49 The Rocket Alignment Problem
02:07:47 Final Thoughts
Show Notes
Keith’s show: https://www.youtube.com/@MachineLearningStreetTalk
Keith’s Twitter: https://x.com/doctorduggar
Keith’s fun brain teaser that LLMs can’t solve yet, about a pillar with four holes: https://youtu.be/nO6sDk6vO0g?si=diGUY7jW4VFsV0TJ&t=3684
Eliezer Yudkowsky’s classic post about the “Rocket Alignment Problem”: https://intelligence.org/2018/10/03/rocket-alignment/
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates.
📣 You can now chat with me and other listeners in the #doom-debates channel of the PauseAI discord: https://discord.gg/2XXWXvErfA
Sam Kirchner and Remmelt Ellen, leaders of the Stop AI movement, think the only way to effectively protest superintelligent AI development is with civil disobedience.
Not only are they staging regular protests in front of AI labs, they’re barricading the entrances and blocking traffic, then allowing themselves to be repeatedly arrested.
Is civil disobedience the right strategy to pause or stop AI?
00:00 Introducing Stop AI
00:38 Arrested at OpenAI Headquarters
01:14 Stop AI’s Funding
01:26 Blocking Entrances Strategy
03:12 Protest Logistics and Arrest
08:13 Blocking Traffic
12:52 Arrest and Legal Consequences
18:31 Commitment to Nonviolence
21:17 A Day in the Life of a Protestor
21:38 Civil Disobedience
25:29 Planning the Next Protest
28:09 Stop AI Goals and Strategies
34:27 The Ethics and Impact of AI Protests
42:20 Call to Action
Show Notes
StopAI's next protest is on October 21, 2024 at OpenAI, 575 Florida St, San Francisco, CA 94110.
StopAI Website: https://StopAI.info
StopAI Discord: https://discord.gg/gbqGUt7ZN4
Disclaimer: I (Liron) am not part of StopAI, but I am a member of PauseAI, which also has a website and Discord you can join.
PauseAI Website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA
There's also a special #doom-debates channel in the PauseAI Discord just for us :)
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This episode is a continuation of Q&A #1 Part 1 where I answer YOUR questions!
00:00 Introduction
01:20 Planning for a good outcome?
03:10 Stock Picking Advice
08:42 Dumbing It Down for Dr. Phil
11:52 Will AI Shorten Attention Spans?
12:55 Historical Nerd Life
14:41 YouTube vs. Podcast Metrics
16:30 Video Games
26:04 Creativity
30:29 Does AI Doom Explain the Fermi Paradox?
36:37 Grabby Aliens
37:29 Types of AI Doomers
44:44 Early Warning Signs of AI Doom
48:34 Do Current AIs Have General Intelligence?
51:07 How Liron Uses AI
53:41 Is “Doomer” a Good Term?
57:11 Liron’s Favorite Books
01:05:21 Effective Altruism
01:06:36 The Doom Debates Community
---
Show Notes
PauseAI Discord: https://discord.gg/2XXWXvErfA
Robin Hanson’s Grabby Aliens theory: https://grabbyaliens.com
Prof. David Kipping’s response to Robin Hanson’s Grabby Aliens: https://www.youtube.com/watch?v=tR1HTNtcYw0
My explanation of “AI completeness” (note: the term I previously coined is actually “goal completeness”): https://www.lesswrong.com/posts/iFdnb8FGRF4fquWnc/goal-completeness-is-like-turing-completeness-for-agi
^ Goal-Completeness (and the corresponding Shapira-Yudkowsky Thesis) might be my best/only original contribution to AI safety research, albeit a small one. Max Tegmark even retweeted it.
a16z's Ben Horowitz claiming nuclear proliferation is good, actually: https://x.com/liron/status/1690087501548126209
---
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
Thanks for being one of the first Doom Debates subscribers and sending in your questions! This episode is Part 1; stay tuned for Part 2 coming soon.
00:00 Introduction
01:17 Is OpenAI a sinking ship?
07:25 College Education
13:20 Asperger's
16:50 Elon Musk: Genius or Clown?
22:43 Double Crux
32:04 Why Call Doomers a Cult?
36:45 How I Prepare Episodes
40:29 Dealing with AI Unemployment
44:00 AI Safety Research Areas
46:09 Fighting a Losing Battle
53:03 Liron’s IQ
01:00:24 Final Thoughts
Explanation of Double Crux: https://www.lesswrong.com/posts/exa5kmvopeRyfJgCy/double-crux-a-strategy-for-mutual-understanding
Best Doomer Arguments
The LessWrong sequences by Eliezer Yudkowsky: https://ReadTheSequences.com
LethalIntelligence.ai — Directory of people who are good at explaining doom
Rob Miles’ Explainer Videos: https://www.youtube.com/c/robertmilesai
For Humanity Podcast with John Sherman - https://www.youtube.com/@ForHumanityPodcast
PauseAI community — https://PauseAI.info — join the Discord!
AISafety.info — Great reference for various arguments
Best Non-Doomer Arguments
Carl Shulman — https://www.dwarkeshpatel.com/p/carl-shulman
Quintin Pope and Nora Belrose — https://optimists.ai
Robin Hanson — https://www.youtube.com/watch?v=dTQb6N3_zu8
How I prepared to debate Robin Hanson
Ideological Turing Test (me taking Robin’s side): https://www.youtube.com/watch?v=iNnoJnuOXFA
Walkthrough of my outline of prepared topics: https://www.youtube.com/watch?v=darVPzEhh-I
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
In today’s episode, instead of reacting to a long-form presentation of someone’s position, I’m reporting on the various AI x-risk-related tiffs happening in my part of the world. And by “my part of the world” I mean my Twitter feed.
00:00 Introduction
01:55 Followup to my MSLT reaction episode
03:48 Double Crux
04:53 LLMs: Finite State Automata or Turing Machines?
16:11 Amjad Masad vs. Helen Toner and Eliezer Yudkowsky
17:29 How Will AGI Literally Kill Us?
33:53 Roon
37:38 Prof. Lee Cronin
40:48 Defining AI Creativity
43:44 Naval Ravikant
46:57 Pascal's Scam
54:10 Martin Casado and SB 1047
01:12:26 Final Thoughts
Links referenced in the episode:
* Eliezer Yudkowsky’s interview on the Logan Bartlett Show. Highly recommended: https://www.youtube.com/watch?v=_8q9bjNHeSo
* Double Crux, the core rationalist technique I use when I’m “debating”: https://www.lesswrong.com/posts/exa5kmvopeRyfJgCy/double-crux-a-strategy-for-mutual-understanding
* The problem with arguing “by definition”, a classic LessWrong post: https://www.lesswrong.com/posts/cFzC996D7Jjds3vS9/arguing-by-definition
Twitter people referenced:
* Amjad Masad: https://x.com/amasad
* Eliezer Yudkowsky: https://x.com/esyudkowsky
* Helen Toner: https://x.com/hlntnr
* Roon: https://x.com/tszzl
* Lee Cronin: https://x.com/leecronin
* Naval Ravikant: https://x.com/naval
* Geoffrey Miller: https://x.com/primalpoly
* Martin Casado: https://x.com/martin_casado
* Yoshua Bengio: https://x.com/yoshua_bengio
* Your boy: https://x.com/liron
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
How smart is OpenAI’s new model, o1? What does “reasoning” ACTUALLY mean? What do computability theory and complexity theory tell us about the limitations of LLMs?
Dr. Tim Scarfe and Dr. Keith Duggar, hosts of the popular Machine Learning Street Talk podcast, posted an interesting video discussing these issues… FOR ME TO DISAGREE WITH!!!
00:00 Introduction
02:14 Computability Theory
03:40 Turing Machines
07:04 Complexity Theory and AI
23:47 Reasoning
44:24 o1
47:00 Finding gold in the Sahara
56:20 Self-Supervised Learning and Chain of Thought
01:04:01 The Miracle of AI Optimization
01:23:57 Collective Intelligence
01:25:54 The Argument Against LLMs' Reasoning
01:49:29 The Swiss Cheese Metaphor for AI Knowledge
02:02:37 Final Thoughts
Original source: https://www.youtube.com/watch?v=nO6sDk6vO0g
Follow Machine Learning Street Talk: https://www.youtube.com/@MachineLearningStreetTalk
Zvi Mowshowitz's authoritative GPT-o1 post: https://thezvi.wordpress.com/2024/09/16/gpt-4o1/
Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.
Yuval Noah Harari is a historian, philosopher, and bestselling author known for his thought-provoking works on human history, the future, and our evolving relationship with technology. His 2011 book, Sapiens: A Brief History of Humankind, took the world by storm, offering a sweeping overview of human history from the emergence of Homo sapiens to the present day.
Harari just published a new book which is largely about AI. It’s called Nexus: A Brief History of Information Networks from the Stone Age to AI. Let’s go through the latest interview he did as part of his book tour to see where he stands on AI extinction risk.
00:00 Introduction
04:30 Defining AI vs. non-AI
20:43 AI and Language Mastery
29:37 AI's Potential for Manipulation
31:30 Information is Connection?
37:48 AI and Job Displacement
48:22 Consciousness vs. Intelligence
52:02 The Alignment Problem
59:33 Final Thoughts
Source podcast: https://www.youtube.com/watch?v=78YN1e8UXdM
Follow Yuval Noah Harari: x.com/harari_yuval
Follow Steven Bartlett, host of Diary of a CEO: x.com/StevenBartlett
Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.
It's finally here, the Doom Debates / Dr. Phil crossover episode you've all been asking for 😂
The full episode is called “AI: The Future of Education?”
While the main focus was AI in education, I'm glad the show briefly touched on how we're all gonna die. Everything in the show related to AI extinction is clipped here.
Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.
Dr. Roman Yampolskiy is the director of the Cyber Security Lab at the University of Louisville. His new book is called AI: Unexplainable, Unpredictable, Uncontrollable.
Roman’s P(doom) from AGI is a whopping 99.999%, vastly greater than my P(doom) of 50%. It’s a rare debate when I’m LESS doomy than my opponent!
This is a cross-post from the For Humanity podcast hosted by John Sherman. For Humanity is basically a sister show of Doom Debates. Highly recommend subscribing!
00:00 John Sherman’s Intro
05:21 Diverging Views on AI Safety and Control
12:24 The Challenge of Defining Human Values for AI
18:04 Risks of Superintelligent AI and Potential Solutions
33:41 The Case for Narrow AI
45:21 The Concept of Utopia
48:33 AI's Utility Function and Human Values
55:48 Challenges in AI Safety Research
01:05:23 Breeding Program Proposal
01:14:05 The Reality of AI Regulation
01:18:04 Concluding Thoughts
01:23:19 Celebration of Life
This episode on For Humanity’s channel: https://www.youtube.com/watch?v=KcjLCZcBFoQ
For Humanity on YouTube: https://www.youtube.com/@ForHumanityPodcast
For Humanity on X: https://x.com/ForHumanityPod
Buy Roman’s new book: https://www.amazon.com/Unexplainable-Unpredictable-Uncontrollable-Artificial-Intelligence/dp/103257626X
Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.
Jobst Landgrebe, co-author of Why Machines Will Never Rule The World: Artificial Intelligence Without Fear, argues that AI is fundamentally limited in achieving human-like intelligence or consciousness due to the complexities of the human brain, which he believes are beyond mathematical modeling.
Contrary to my view, Jobst has a very low opinion of what machines will be able to achieve in the coming years and decades.
He’s also a devout Christian, which makes our clash of perspectives funnier.
00:00 Introduction
03:12 AI Is Just Pattern Recognition?
06:46 Mathematics and the Limits of AI
12:56 Complex Systems and Thermodynamics
33:40 Transhumanism and Genetic Engineering
47:48 Materialism
49:35 Transhumanism as Neo-Paganism
01:02:38 AI in Warfare
01:11:55 Is This Science?
01:25:46 Conclusion
Source podcast: https://www.youtube.com/watch?v=xrlT1LQSyNU
Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.
Today I’m reacting to the 20VC podcast with Harry Stebbings and Princeton professor Arvind Narayanan.
Prof. Narayanan is known for his critical perspective on the misuse and over-hype of artificial intelligence, which he often refers to as “AI snake oil”. Narayanan’s critiques aim to highlight the gap between what AI can realistically achieve, and the often misleading promises made by companies and researchers.
I analyze Arvind’s takes on the comparative dangers of AI and nuclear weapons, the limitations of current AI models, and AI’s trajectory toward being a commodity rather than a superintelligent god.
00:00 Introduction
01:21 Arvind’s Perspective on AI
02:07 Debating AI's Compute and Performance
03:59 Synthetic Data vs. Real Data
05:59 The Role of Compute in AI Advancement
07:30 Challenges in AI Predictions
26:30 AI in Organizations and Tacit Knowledge
33:32 The Future of AI: Exponential Growth or Plateau?
36:26 Relevance of Benchmarks
39:02 AGI
40:59 Historical Predictions
46:28 OpenAI vs. Anthropic
52:13 Regulating AI
56:12 AI as a Weapon
01:02:43 Sci-Fi
01:07:28 Conclusion
Original source: https://www.youtube.com/watch?v=8CvjVAyB4O4
Follow Arvind Narayanan: x.com/random_walker
Follow Harry Stebbings: x.com/HarryStebbings
Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.
Today I’m reacting to Bret Weinstein’s recent appearance on the Diary of a CEO podcast with Steven Bartlett. Bret is an evolutionary biologist known for his outspoken views on social and political issues.
Bret gets off to a promising start, saying that AI risk should be “top of mind” and poses “five existential threats”. But his analysis is shallow and ad-hoc, and ends with him dismissing the idea of trying to use regulation as a tool to save our species from a recognized existential threat.
I believe we can raise the level of AI doom discourse by calling out these kinds of basic flaws in popular media on the subject.
00:00 Introduction
02:02 Existential Threats from AI
03:32 The Paperclip Problem
04:53 Moral Implications of Ending Suffering
06:31 Inner vs. Outer Alignment
08:41 AI as a Tool for Malicious Actors
10:31 Attack vs. Defense in AI
18:12 The Event Horizon of AI
21:42 Is Language More Prime Than Intelligence?
38:38 AI and the Danger of Echo Chambers
46:59 AI Regulation
51:03 Mechanistic Interpretability
56:52 Final Thoughts
Original source: youtube.com/watch?v=_cFu-b5lTMU
Follow Bret Weinstein: x.com/BretWeinstein
Follow Steven Bartlett: x.com/StevenBartlett
Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.
California's SB 1047 bill, authored by CA State Senator Scott Wiener, is the leading attempt by a US state to regulate catastrophic risks from frontier AI in the wake of President Biden's 2023 AI Executive Order.
Today’s debate:
Holly Elmore, Executive Director of PauseAI US, representing the Pro-SB 1047 side
Greg Tanaka, Palo Alto City Councilmember, representing the Anti-SB 1047 side
Key Bill Supporters: Geoffrey Hinton, Yoshua Bengio, Anthropic, PauseAI, and about a 2/3 majority of California voters surveyed.
Key Bill Opponents: OpenAI, Google, Meta, Y Combinator, Andreessen Horowitz
Links
Greg mentioned that the "Supporters & Opponents" tab on this page lists the organizations that registered their support or opposition. The vast majority of organizations listed there registered in opposition to the bill: https://digitaldemocracy.calmatters.org/bills/ca_202320240sb1047
Holly mentioned surveys of California voters showing popular support for the bill:
1. Center for AI Safety survey shows 77% support: https://drive.google.com/file/d/1wmvstgKo0kozd3tShPagDr1k0uAuzdDM/view
2. Future of Life Institute survey shows 59% support: https://futureoflife.org/ai-policy/poll-shows-popularity-of-ca-sb1047/
Follow Holly: x.com/ilex_ulmus
Follow Greg: x.com/GregTanaka
Join the conversation on DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of extinction. Thanks for watching.
Today I’m reacting to David Shapiro’s response to my previous episode, and also to David’s latest episode with poker champion & effective altruist Igor Kurganov.
I challenge David's optimistic stance on superintelligent AI inherently aligning with human values. We touch on factors like instrumental convergence and resource competition. David and I continue to clash over whether we should pause AI development to mitigate potential catastrophic risks. I also respond to David's critiques of AI safety advocates.
00:00 Introduction
01:08 David's Response and Engagement
03:02 The Corrigibility Problem
05:38 Nirvana Fallacy
10:57 Prophecy and Faith-Based Assertions
22:47 AI Coexistence with Humanity
35:17 Does Curiosity Make AI Value Humans?
38:56 Instrumental Convergence and AI's Goals
46:14 The Fermi Paradox and AI's Expansion
51:51 The Future of Human and AI Coexistence
01:04:56 Concluding Thoughts
Join the conversation on DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of extinction. Thanks for listening.
Maciej Ceglowski is an entrepreneur and owner of the bookmarking site Pinboard. I’ve been a long-time fan of his sharp, independent-minded blog posts and tweets.
In this episode, I react to a great 2016 talk he gave at WebCamp Zagreb titled Superintelligence: The Idea That Eats Smart People. This talk was impressively ahead of its time, as the AI doom debate really only heated up in the last few years.
---
00:00 Introduction
02:13 Historical Analogies and AI Risks
05:57 The Premises of AI Doom
08:25 Mind Design Space and AI Optimization
15:58 Recursive Self-Improvement and AI
39:44 Arguments Against Superintelligence
45:20 Mental Complexity and AI Motivations
47:12 The Argument from Just Look Around You
49:27 The Argument from Life Experience
50:56 The Argument from Brain Surgery
53:57 The Argument from Childhood
58:10 The Argument from Robinson Crusoe
01:00:17 Inside vs. Outside Arguments
01:06:45 Transhuman Voodoo and Religion 2.0
01:11:24 Simulation Fever
01:18:00 AI Cosplay and Ethical Concerns
01:28:51 Concluding Thoughts and Call to Action
---
Follow Maciej: x.com/pinboard
Follow Doom Debates:
* Search “Doom Debates” in your podcast player
Today I’m reacting to David Shapiro’s latest YouTube video: “Pausing AI is a spectacularly bad idea―Here's why”.
In my opinion, every plan that doesn’t involve pausing frontier AGI capabilities development now is reckless, or at least every plan that doesn’t prepare to pause AGI once we see a “warning shot” that enough people agree is terrifying.
We’ll go through David’s argument point by point, to see if there are any good points about why maybe pausing AI might actually be a bad idea.
00:00 Introduction
01:16 The Pause AI Movement
03:03 Eliezer Yudkowsky’s Epistemology
12:56 Rationalist Arguments and Evidence
24:03 Public Awareness and Legislative Efforts
28:38 The Burden of Proof in AI Safety
31:02 Arguments Against the AI Pause Movement
34:20 Nuclear Proliferation vs. AI
34:48 Game Theory and AI
36:31 Opportunity Costs of an AI Pause
44:18 Axiomatic Alignment
47:34 Regulatory Capture and Corporate Interests
56:24 The Growing Mainstream Concern for AI Safety
Follow David:
Follow Doom Debates:
John Sherman and I go through David Brooks’s appallingly bad article in the New York Times titled “Many People Fear AI. They Shouldn’t.”
For Humanity is basically the sister podcast to Doom Debates. We have the same mission to raise awareness of the urgent AI extinction threat, and build grassroots support for pausing new AI capabilities development until it’s safe for humanity.
Subscribe to it on YouTube: https://www.youtube.com/@ForHumanityPodcast
Follow it on X: https://x.com/ForHumanityPod
Dr. Richard Sutton is a Professor of Computing Science at the University of Alberta known for his pioneering work on reinforcement learning, and his “bitter lesson” that scaling up an AI’s data and compute gives better results than having programmers try to handcraft or explicitly understand how the AI works.
Dr. Sutton famously claims that AIs are the “next step in human evolution”, a positive force for progress rather than a catastrophic extinction risk comparable to nuclear weapons.
Let’s examine Sutton’s recent interview with Daniel Faggella to understand his crux of disagreement with the AI doom position.
---
00:00 Introduction
03:33 The Worthy vs. Unworthy AI Successor
04:52 “Peaceful AI”
07:54 “Decentralization”
11:57 AI and Human Cooperation
14:54 Micromanagement vs. Decentralization
24:28 Discovering Our Place in the World
33:45 Standard Transhumanism
44:29 AI Traits and Environmental Influence
46:06 The Importance of Cooperation
48:41 The Risk of Superintelligent AI
57:25 The Treacherous Turn and AI Safety
01:04:28 The Debate on AI Control
01:13:50 The Urgency of AI Regulation
01:21:41 Final Thoughts and Call to Action
---
Original interview with Daniel Faggella: youtube.com/watch?v=fRzL5Mt0c8A
Follow Richard Sutton: x.com/richardssutton
Follow Daniel Faggella: x.com/danfaggella
Follow Liron: x.com/liron
Subscribe to my YouTube channel for full episodes and other bonus content: youtube.com/@DoomDebates
David Pinsof is co-creator of the wildly popular game Cards Against Humanity and a social science researcher at the UCLA Social Minds Lab. He writes a blog called “Everything Is B******t”.
He sees AI doomers as making many different questionable assumptions, and he sees himself as poking holes in those assumptions.
I don’t see it that way at all; I think the doom claim is the “default expectation” we ought to have if we understand basic things about intelligence.
At any rate, I think you’ll agree that his attempt to poke holes in my doom claims on today’s podcast is super good-natured and interesting.
00:00 Introducing David Pinsof
04:12 David’s P(doom)
05:38 Is intelligence one thing?
21:14 Humans vs. other animals
37:01 The Evolution of Human Intelligence
37:25 Instrumental Convergence
39:05 General Intelligence and Physics
40:25 The Blind Watchmaker Analogy
47:41 Instrumental Convergence
01:02:23 Superintelligence and Economic Models
01:12:42 Comparative Advantage and AI
01:19:53 The Fermi Paradox for Animal Intelligence
01:34:57 Closing Statements
Follow David: x.com/DavidPinsof
Follow Liron: x.com/liron
Thanks for watching. You can support Doom Debates by subscribing to the Substack, the YouTube channel (full episodes and bonus content), subscribing in your podcast player, and leaving a review on Apple Podcasts.
Princeton Comp Sci Ph.D. candidate Sayash Kapoor co-authored a blog post last week with his professor Arvind Narayanan called "AI Existential Risk Probabilities Are Too Unreliable To Inform Policy".
While some non-doomers embraced its arguments, I see the post as contributing nothing to the discourse besides demonstrating a popular failure mode: a simple misunderstanding of the basics of Bayesian epistemology.
I break down Sayash's recent episode of Machine Learning Street Talk point-by-point to analyze his claims from the perspective of the one true epistemology: Bayesian epistemology.
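To make the point about Bayesian epistemology concrete, here’s a minimal sketch in Python (my own illustrative numbers, not anything from Sayash’s post or the episode) of a single Bayesian update: a subjective prior combined with a likelihood ratio yields a posterior, even for a one-off event that has no long-run frequency for a frequentist to count.

```python
# Toy Bayesian update (illustrative numbers only, not from the episode).
def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Hypothetical prior of 10%, and evidence that's 3x likelier if the hypothesis is true.
posterior = bayes_update(prior=0.10, p_evidence_if_true=0.6, p_evidence_if_false=0.2)
print(round(posterior, 3))  # 0.25
```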
00:00 Introduction
03:40 Bayesian Reasoning
04:33 Inductive vs. Deductive Probability
05:49 Frequentism vs Bayesianism
16:14 Asteroid Impact and AI Risk Comparison
28:06 Quantification Bias
31:50 The Extinction Prediction Tournament
36:14 Pascal's Wager and AI Risk
40:50 Scaling Laws and AI Progress
45:12 Final Thoughts
My source material is Sayash's episode of Machine Learning Street Talk: https://www.youtube.com/watch?v=BGvQmHd4QPE
I also recommend reading Scott Alexander’s related post: https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist
Sayash's blogpost that he was being interviewed about is called "AI existential risk probabilities are too unreliable to inform policy": https://www.aisnakeoil.com/p/ai-existential-risk-probabilities
Follow Sayash: https://x.com/sayashk
Martin Casado is a General Partner at Andreessen Horowitz (a16z) who has strong views about AI.
He claims that AI is basically just a buzzword for statistical models and simulations. As a result of this worldview, he only predicts incremental AI progress that doesn’t pose an existential threat to humanity, and he sees AI regulation as a net negative.
I set out to understand his worldview around AI, and pinpoint the crux of disagreement with my own view.
Spoiler: I conclude that Martin needs to go beyond analyzing AI as just statistical models and simulations, and analyze it using the more predictive concept of “intelligence” in the sense of hitting tiny high-value targets in exponentially-large search spaces.
If Martin appreciated that intelligence is a quantifiable property that algorithms have, and that our existing AIs are getting close to surpassing human-level general intelligence, then hopefully he’d come around to raising his P(doom) and appreciating the urgent extinction risk we face.
00:00 Introducing Martin Casado
01:42 Martin’s AGI Timeline
05:39 Martin’s Analysis of Self-Driving Cars
15:30 Heavy-Tail Distributions
38:03 Understanding General Intelligence
38:29 AI's Progress in Specific Domains
43:20 AI’s Understanding of Meaning
47:16 Compression and Intelligence
48:09 Symbol Grounding
53:24 Human Abstractions and AI
01:18:18 The Frontier of AI Applications
01:23:04 Human vs. AI: Concept Creation and Reasoning
01:25:51 The Complexity of the Universe and AI's Limitations
01:28:16 AI's Potential in Biology and Simulation
01:32:40 The Essence of Intelligence and Creativity in AI
01:41:13 AI's Future Capabilities
02:00:29 Intelligence vs. Simulation
02:14:59 AI Regulation
02:23:05 Concluding Thoughts
Watch the original episode of the Cognitive Revolution podcast with Martin and host Nathan Labenz.
Follow Martin: @martin_casado
Follow Nate: @labenz
Follow Liron: @liron
Subscribe to the Doom Debates YouTube Channel to get full episodes plus other bonus content!
Search “Doom Debates” to subscribe in your podcast player.
Tilek Mamutov is a Kyrgyzstani software engineer who worked at Google X for 11 years before founding his own international software engineer recruiting company, Outtalent.
Since first encountering the AI doom argument at a Center for Applied Rationality bootcamp 10 years ago, he has taken it seriously as a possibility, but he isn’t currently convinced that doom is likely.
Let’s explore Tilek’s worldview and pinpoint where he gets off the doom train and why!
00:12 Tilek’s Background
01:43 Life in Kyrgyzstan
04:32 Tilek’s Non-Doomer Position
07:12 Debating AI Doom Scenarios
13:49 Nuclear Weapons and AI Analogies
39:22 Privacy and Empathy in Human-AI Interaction
39:43 AI's Potential in Understanding Human Emotions
41:14 The Debate on AI's Empathy Capabilities
42:23 Quantum Effects and AI's Predictive Models
45:33 The Complexity of AI Control and Safety
47:10 Optimization Power: AI vs. Human Intelligence
48:39 The Risks of AI Self-Replication and Control
51:52 Historical Analogies and AI Safety Concerns
56:35 The Challenge of Embedding Safety in AI Goals
01:02:42 The Future of AI: Control, Optimization, and Risks
01:15:54 The Fragility of Security Systems
01:16:56 Debating AI Optimization and Catastrophic Risks
01:18:34 The Outcome Pump Thought Experiment
01:19:46 Human Persuasion vs. AI Control
01:21:37 The Crux of Disagreement: Robustness of AI Goals
01:28:57 Slow vs. Fast AI Takeoff Scenarios
01:38:54 The Importance of AI Alignment
01:43:05 Conclusion
Follow Tilek
Links
I referenced Paul Christiano’s scenario of gradual AI doom, a slower version that doesn’t require a Yudkowskian “foom”. Worth a read: What Failure Looks Like
I also referenced the concept of “edge instantiation” to explain that if you’re optimizing powerfully for some metric, you don’t get other intuitively nice things as a bonus; you *just* get the exact thing your function is measuring.
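To illustrate what edge instantiation means in practice, here’s a minimal toy sketch (hypothetical plans and numbers of my own, not from the episode): an optimizer handed a single metric returns the extreme point of that metric, and any property you didn’t put in the objective simply isn’t preserved.

```python
# Toy edge-instantiation demo (hypothetical data, my own illustration).
# The optimizer only sees "engagement"; "truthfulness" is what we hoped to get for free.
plans = [
    {"name": "balanced article", "engagement": 60, "truthfulness": 95},
    {"name": "mild clickbait",   "engagement": 80, "truthfulness": 70},
    {"name": "outrage bait",     "engagement": 99, "truthfulness": 10},
]

# Powerful optimization for one metric picks that metric's extreme point...
best = max(plans, key=lambda p: p["engagement"])

print(best["name"])          # "outrage bait" -- the metric is maximized
print(best["truthfulness"])  # 10 -- the unmeasured property is not preserved
```

The design point is simply that anything left out of the objective function carries zero weight in the optimizer’s choice, which is the intuition the concept leans on.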
Dr. Mike Israetel is a well-known bodybuilder and fitness influencer with over 600,000 Instagram followers, and a surprisingly intelligent commentator on other subjects, including a recent full-length episode on the AI alignment problem.
Mike brought up many interesting points that were worth responding to, making for an interesting reaction episode. I also appreciate that he’s helping get the urgent topic of AI alignment in front of a mainstream audience.
Unfortunately, Mike doesn’t engage with the possibility that AI alignment is an intractable technical problem on a 5-20 year timeframe, which I think is more likely than not. That’s the crux of why he and I disagree, and why I see most of his episode as talking past most other intelligent positions people take on AI alignment. I hope he’ll keep engaging with the topic and rethink his position.
00:00 Introduction
03:08 AI Risks and Scenarios
06:42 Superintelligence Arms Race
12:39 The Importance of AI Alignment
18:10 Challenges in Defining Human Values
26:11 The Outer and Inner Alignment Problems
44:00 Transhumanism and AI's Potential
45:42 The Next Step In Evolution
47:54 AI Alignment and Potential Catastrophes
50:48 Scenarios of AI Development
54:03 The AI Alignment Problem
01:07:39 AI as a Helper System
01:08:53 Corporations and AI Development
01:10:19 The Risk of Unaligned AI
01:27:18 Building a Superintelligent AI
01:30:57 Conclusion
Follow Mike Israetel:
* instagram.com/drmikeisraetel
* youtube.com/@MikeIsraetelMakingProgress
Get the full Doom Debates experience:
* Subscribe to youtube.com/@DoomDebates
* Subscribe to this Substack: DoomDebates.com
* Search "Doom Debates" to subscribe in your podcast player
* Follow me at x.com/liron
What did we learn from my debate with Robin Hanson? Did we successfully isolate the cruxes of disagreement? I actually think we did!
In this post-debate analysis, we’ll review what those key cruxes are, and why I still think I’m right and Robin is wrong about them!
I’ve taken the time to think much harder about everything Robin said during the debate, so I can give you new & better counterarguments than the ones I was able to make in real time.
Timestamps
00:00 Debate Reactions
06:08 AI Timelines and Key Metrics
08:30 “Optimization Power” vs. “Innovation”
11:49 Economic Growth and Diffusion
17:56 Predicting Future Trends
24:23 Crux of Disagreement with Robin’s Methodology
34:59 Conjunction Argument for Low P(Doom)
37:26 Headroom Above Human Intelligence
41:13 The Role of Culture in Human Intelligence
48:01 Goal-Completeness and AI Optimization
50:48 Misaligned Foom Scenario
59:29 Monitoring AI and the Rule of Law
01:04:51 How Robin Sees Alignment
01:09:08 Reflecting on the Debate
Links
AISafety.info - The fractal of counterarguments to non-doomers’ arguments
For the full Doom Debates experience:
* Subscribe to youtube.com/@DoomDebates
* Subscribe to this Substack: DoomDebates.com
* Search "Doom Debates" to subscribe in your podcast player
* Follow me at x.com/liron
Robin Hanson is a legend in the rationality community and one of my biggest intellectual influences.
In 2008, he famously debated Eliezer Yudkowsky about AI doom via a sequence of dueling blog posts known as the great Hanson-Yudkowsky Foom Debate. This debate picks up where Hanson-Yudkowsky left off, revisiting key arguments in the light of recent AI advances.
My position is similar to Eliezer's: P(doom) is on the order of 50%. Robin's position is shockingly different: P(doom) is below 1%.
00:00 Announcements
03:18 Debate Begins
05:41 Discussing AI Timelines and Predictions
19:54 Economic Growth and AI Impact
31:40 Outside Views vs. Inside Views on AI
46:22 Predicting Future Economic Growth
51:10 Historical Doubling Times and Future Projections
54:11 Human Brain Size and Economic Metrics
57:20 The Next Era of Innovation
01:07:41 AI and Future Predictions
01:14:24 The Vulnerable World Hypothesis
01:16:27 AI Foom
01:28:15 Genetics and Human Brain Evolution
01:29:24 The Role of Culture in Human Intelligence
01:31:36 Brain Size and Intelligence Debate
01:33:44 AI and Goal-Completeness
01:35:10 AI Optimization and Economic Impact
01:41:50 Feasibility of AI Alignment
01:55:21 AI Liability and Regulation
02:05:26 Final Thoughts and Wrap-Up
Robin's links:
Twitter: x.com/RobinHanson
Home Page: hanson.gmu.edu
Robin’s top related essays:
* What Are Reasonable AI Fears?
* AIs Will Be Our Mind Children
PauseAI links:
https://pauseai.info/
https://discord.gg/2XXWXvErfA
Check out https://youtube.com/@ForHumanityPodcast, the other podcast raising the alarm about AI extinction!
For the full Doom Debates experience:
* Subscribe to https://youtube.com/@DoomDebates
* Subscribe to the Substack: https://DoomDebates.com
* Search "Doom Debates" to subscribe in your podcast player
* Follow me at https://x.com/liron
This episode is a comprehensive preparation session for my upcoming debate on AI doom with the legendary Robin Hanson.
Robin’s P(doom) is <1% while mine is 50%. How do we reconcile this?
I’ve researched past debates, blogs, tweets, and scholarly discussions related to AI doom, and plan to focus our debate on the cruxes of disagreement between Robin’s position and my own Eliezer Yudkowsky-like position.
Key topics include the probability of humanity’s extinction due to uncontrollable AGI, alignment strategies, AI capabilities and timelines, the impact of AI advancements, and various predictions made by Hanson.
00:00 Introduction
03:37 Opening Statement
04:29 Value-Extinction Spectrum
05:34 Future AI Capabilities
08:23 AI Timelines
13:23 What can't current AIs do
15:48 Architecture/Algorithms vs. Content
17:40 Cyc
18:55 Is intelligence many different things, or one thing?
19:31 Goal-Completeness
20:44 AIXI
22:10 Convergence in AI systems
23:02 Foom
26:00 Outside view: Extrapolating robust trends
26:18 Salient Events Timeline
30:56 Eliezer's claim about meta-levels affecting capability growth rates
33:53 My claim - the optimization power model trumps these outside-view trends
35:19 Aren't there many other possible outside views?
37:03 Is alignment feasible?
40:14 What's the warning shot that would make you concerned?
41:07 Future Foom evidence?
44:59 How else have Robin's views changed in the last decade?
Doom Debates catalogues all the different stops where people get off the "doom train", all the different reasons people haven’t (yet) followed the train of logic to the conclusion that humanity is doomed.
If you'd like the full Doom Debates experience, it's as easy as doing 4 separate things:
1. Join my Substack — DoomDebates.com
2. Search "Doom Debates" to subscribe in your podcast player
3. Subscribe to YouTube videos — youtube.com/@doomdebates
4. Follow me on Twitter — x.com/liron
I’ve been studying Robin Hanson’s catalog of writings and interviews in preparation for our upcoming AI doom debate. Now I’m doing an exercise where I step into Robin’s shoes, and make the strongest possible case for his non-doom position!
This exercise is called the Ideological Turing Test, and it’s based on the idea that it’s only productive to argue against someone if you understand what you’re arguing against. Being able to argue *for* a position proves that you understand it.
My guest David Xu is a fellow AI doomer and deep thinker who volunteered to argue the doomer position against my version of the non-doomer “Robin”.
00:00 Upcoming Debate with Dr. Robin Hanson
01:15 David Xu's Background and Perspective
02:23 The Ideological Turing Test
02:39 David's AI Doom Claim
03:44 AI Takeover vs. Non-AI Descendants
05:21 Paperclip Maximizer
15:53 Economic Trends and AI Predictions
27:18 Recursive Self-Improvement and Foom
29:14 Comparing Models of Intelligence
34:53 The Foom Scenario
36:04 Coordination and Lawlessness in AI
37:49 AI's Goal-Directed Behavior and Economic Models
40:02 Multipolar Outcomes and AI Coordination
40:58 The Orthogonality Thesis and AI Firms
43:18 AI's Potential to Exceed Human Control
45:03 The Argument for AI Misalignment
48:22 Economic Trends vs. AI Catastrophes
59:13 The Race for AI Dominance
01:04:09 AI Escaping Control
01:04:45 AI Liability and Insurance
01:06:14 Economic Dynamics and AI Threats
01:07:18 The Balance of Offense and Defense in AI
01:08:38 AI's Potential to Disrupt National Infrastructure
01:10:17 The Multipolar Outcome of AI Development
01:11:00 Human Role in AI-Driven Future
01:12:19 Debating the Discontinuity in AI Progress
01:25:26 Closing Statements and Final Thoughts
01:30:34 Reflecting on the Debate and Future Discussions
Follow David: https://x.com/davidxu90
The Ideological Turing Test (ITT) was coined by Bryan Caplan in this classic post: https://www.econlib.org/archives/2011/06/the_ideological.html
I also did a Twitter version of the ITT here: https://x.com/liron/status/1789688119773872273
Doom Debates catalogues all the different stops where people get off the "doom train", all the different reasons people haven’t (yet) followed the train of logic to the conclusion that humanity is doomed.
If you'd like the full Doom Debates experience, it's as easy as doing 4 separate things:
1. Join my Substack - https://doomdebates.com
2. Search "Doom Debates" to subscribe in your podcast player
3. Subscribe to YouTube videos - https://youtube.com/@doomdebates
4. Follow me on Twitter - https://x.com/liron
Today I'm answering questions from listener Tony Warren.
1:16 Biological imperatives in machine learning
2:22 Evolutionary pressure vs. AI training
4:15 Instrumental convergence and AI goals
6:46 Human vs. AI problem domains
9:20 AI vs. human actuators
18:04 Evolution and intelligence
33:23 Maximum intelligence
54:55 Computational limits and the future
Follow Tony: https://x.com/Pove_iOS
---
Doom Debates catalogues all the different stops where people get off the "doom train", all the different reasons people haven’t (yet) followed the train of logic to the conclusion that humanity is doomed.
If you'd like the full Doom Debates experience, it's as easy as doing 4 separate things:
1. Join my Substack - https://doomdebates.com
2. Search "Doom Debates" to subscribe in your podcast player
3. Subscribe to YouTube videos - https://youtube.com/@doomdebates
4. Follow me on Twitter - https://x.com/liron
My guest Rob thinks superintelligent AI will suffer from analysis paralysis in trying to achieve a 100% probability of killing humanity. Since the AI won’t be satisfied with a 99.9% chance of defeating us, it won’t dare to try, and we’ll live!
Doom Debates catalogues all the different stops where people get off the “doom train”, all the different reasons people haven’t (yet) followed the train of logic to the conclusion that humanity is doomed.
Follow Rob: https://x.com/LoB_Blacksage
If you want to get the full Doom Debates experience, it's as easy as doing 4 separate things:
1. Join my Substack - https://doomdebates.com
2. Search "Doom Debates" to subscribe in your podcast player
3. Subscribe to YouTube videos - https://youtube.com/@DoomDebates
4. Follow me on Twitter - https://x.com/liron
Today I’m debating the one & only Professor Steven Pinker!!! Well, I kind of am, in my head. Let me know if you like this format…
Dr. Pinker is optimistic that AI doom worries are overblown. But I find his arguments shallow, and I’m disappointed with his overall approach to the AI doom discourse.
Here’s the full video of Steven Pinker talking to Michael C. Moynihan on this week’s episode of “Honestly with Bari Weiss”: https://youtube.com/watch?v=mTuH1Ucbif4
If you want to get the full Doom Debates experience, it's as easy as doing 4 separate things:
1. Join my Substack - https://doomdebates.com
2. Search "Doom Debates" to subscribe in your podcast player
3. Subscribe to YouTube videos - https://youtube.com/@DoomDebates
4. Follow me on Twitter - https://x.com/liron
RJ, a pseudonymous listener, volunteered to debate me.
Follow RJ: https://x.com/impershblknight
If you want to get the full Doom Debates experience, it's as easy as doing 4 separate things:
1. Join my Substack - https://doomdebates.com
2. Search "Doom Debates" to subscribe in your podcast player
3. Subscribe to YouTube videos - https://youtube.com/@doomdebates
4. Follow me on Twitter - https://x.com/liron
Danny asks:
> You've said that an intelligent AI would lead to doom because it would be an excellent goal-to-action mapper. A great football coach like Andy Reid is a great goal-to-action mapper. He's on the sidelines, but he knows exactly what actions his team needs to execute to achieve the goal and win the game.
> But if he had a team of chimpanzees or elementary schoolers, or just players who did not want to cooperate, then his team would not execute his plans and they would lose. And even his very talented team of highly motivated players who also want to win the game, sometimes execute his actions badly. Now an intelligent AI that does not control a robot army has very limited ability to perform precise acts in the physical world. From within the virtual world, an AI would not be able to get animals or plants to carry out specific actions that it wants performed. I don't see how the AI could get monkeys or dolphins to maintain power plants or build chips.
> The AI needs humans to carry out its plans, but in the real physical world, when dealing with humans, knowing what you want people to do is a small part of the equation. Won't the AI in practice struggle to get humans to execute its plans in the precise way that it needs?
Follow Danny: https://x.com/Danno28_
Follow Liron: https://x.com/liron
Please join my email list: DoomDebates.com
Today I’m going to play you my debate with the brilliant hacker and entrepreneur, George Hotz.
This took place on an X Space last August.
Prior to our debate, George had done a debate with Eliezer Yudkowsky on Dwarkesh Podcast:
Follow George: https://x.com/realGeorgeHotz
Follow Liron: https://x.com/liron
Chase Mann claims accelerating AGI timelines is the best thing we can do for the survival of the 8 billion people alive today.
I claim pausing AI is still the highest-expected-utility decision for everyone.
Who do you agree with? Comment on my Substack/X/YouTube and let me know!
Follow Chase: https://x.com/ChaseMann
Follow Liron: https://x.com/liron
LessWrong has some great posts about cryonics: https://www.lesswrong.com/tag/cryonics
It’s a monologue episode!
Robin Hanson’s blog: https://OvercomingBias.com
Robin Hanson’s famous concept, the Great Filter: https://en.wikipedia.org/wiki/Great_Filter
Robin Hanson’s groundbreaking 2021 solution to the Fermi Paradox: https://GrabbyAliens.com
Robin Hanson’s conversation with Ronny Fernandez about AI doom from May 2023:
My tweet about whether we can hope to control superintelligent AI by judging its explanations and arguments: https://x.com/liron/status/1798135026166698239
Zvi Mowshowitz’s blog where he posts EXCELLENT weekly AI roundups: https://thezvi.wordpress.com
A takedown of Andreessen Horowitz partner Chris Dixon’s book about the nonsensical “Web3” pitch, which, despite being terribly argued, manages to trick a significant number of readers into thinking they just read a good argument: https://www.citationneeded.news/review-read-write-own-by-chris-dixon/ (Or maybe you think Chris’s book makes total sense, in which case you can observe that a significant number of smart people somehow don’t get how much sense it makes.)
Eliezer Yudkowsky’s famous post about Newcomb’s Problem: https://www.lesswrong.com/posts/6ddcsdA2c2XpNpE5x/newcomb-s-problem-and-regret-of-rationality
Welcome and thanks for listening!
* Why is Liron finally starting a podcast?
* Who does Liron want to debate?
* What’s the debate format?
* What are Liron’s credentials?
* Is someone “rational” like Liron actually just a religious cult member?
Follow Ori on Twitter: https://x.com/ygrowthco
Make sure to subscribe for more episodes!
Kelvin is optimistic that the forces of economic competition will keep AIs sufficiently aligned with humanity by the time they become superintelligent.
He thinks AIs and humans will plausibly use interoperable money systems (powered by crypto).
So even if our values diverge, the AIs will still uphold a system that respects ownership rights, such that humans may hold onto a nontrivial share of capital with which to pursue human values.
I view these kinds of scenarios as wishful thinking with probability much lower than that of the simple undignified scenario I expect, wherein the first uncontrollable AGI correctly realizes what dodos we are in both senses of the word.