This is a personal post and does not necessarily reflect the opinion of other members of Apollo Research. Many other people have talked about similar ideas, and I claim neither novelty nor credit.
Note that this reflects my median scenario for catastrophe, not my median scenario overall. I think there are plausible alternative scenarios where AI development goes very well.
When thinking about how AI could go wrong, the kind of story I’ve increasingly converged on is what I call “catastrophe through chaos.” Previously, my default scenario for how I expect AI to go wrong was something like Paul Christiano's “What failure looks like,” with the modification that scheming would be a more salient part of the story much earlier.
In contrast, “catastrophe through chaos” is much messier, and it is much harder to point to a single clear thing that went wrong. The broad strokes of [...]
---
Outline:
(02:46) Parts of the story
(02:50) AI progress
(11:12) Government
(14:21) Military and Intelligence
(16:13) International players
(17:36) Society
(18:22) The powder keg
(21:48) Closing thoughts
---
First published: January 31st, 2025
Source: https://www.lesswrong.com/posts/fbfujF7foACS5aJSL/catastrophe-through-chaos
---
Narrated by TYPE III AUDIO.