This episode is a comprehensive preparation session for my upcoming debate on AI doom with the legendary Robin Hanson.
Robin’s P(doom) is <1% while mine is 50%. How do we reconcile this?
I’ve researched past debates, blogs, tweets, and scholarly discussions related to AI doom, and plan to focus our debate on the cruxes of disagreement between Robin’s position and my own Eliezer Yudkowsky-like position.
Key topics include the probability of humanity’s extinction due to uncontrollable AGI, alignment strategies, AI capabilities and timelines, the impact of AI advancements, and various predictions made by Hanson.
00:00 Introduction
03:37 Opening Statement
04:29 Value-Extinction Spectrum
05:34 Future AI Capabilities
08:23 AI Timelines
13:23 What can't current AIs do?
15:48 Architecture/Algorithms vs. Content
17:40 Cyc
18:55 Is intelligence many different things, or one thing?
19:31 Goal-Completeness
20:44 AIXI
22:10 Convergence in AI systems
23:02 Foom
26:00 Outside view: Extrapolating robust trends
26:18 Salient Events Timeline
30:56 Eliezer's claim about meta-levels affecting capability growth rates
33:53 My claim: the optimization power model trumps these outside-view trends
35:19 Aren't there many other possible outside views?
37:03 Is alignment feasible?
40:14 What's the warning shot that would make you concerned?
41:07 Future Foom evidence?
44:59 How else have Robin's views changed in the last decade?
Doom Debates catalogues all the different stops where people get off the "doom train": all the reasons people haven't (yet) followed the train of logic to the conclusion that humanity is doomed.
If you'd like the full Doom Debates experience, it's as easy as doing 4 separate things:
1. Join my Substack — DoomDebates.com
2. Search "Doom Debates" to subscribe in your podcast player
3. Subscribe to YouTube videos — youtube.com/@doomdebates
4. Follow me on Twitter — x.com/liron