LessWrong posts by zvi

“The Risk of Gradual Disempowerment from AI” by Zvi

37 min • 5 February 2025

The baseline scenario as AI becomes AGI becomes ASI (artificial superintelligence), if nothing more dramatic goes wrong first and even if we successfully ‘solve alignment’ of AI to a given user and developer, is the ‘gradual’ disempowerment of humanity by AIs, as we voluntarily grant them more and more power in a vicious cycle, after which AIs control the future and an ever-increasing share of its real resources. It is unlikely that humans would survive this for long.

This gradual disempowerment is far from the only way things could go horribly wrong. There are various other ways things could go horribly wrong earlier, faster and more dramatically, especially if we indeed fail at alignment of ASI on the first try.

Gradual disempowerment is still a major part of the problem, including in worlds that would otherwise have survived those other threats. And I don’t know of any good [...]

---

Outline:

(01:15) We Finally Have a Good Paper

(02:30) The Phase 2 Problem

(05:02) Coordination is Hard

(07:59) Even Successful Technical Solutions Do Not Solve This

(08:58) The Six Core Claims

(14:35) Proposed Mitigations Are Insufficient

(19:58) The Social Contract Will Change

(21:07) Point of No Return

(22:51) A Shorter Summary

(24:13) Tyler Cowen Seems To Misunderstand Two Key Points

(25:53) Do You Feel in Charge?

(28:04) We Will Not By Default Meaningfully 'Own' the AIs For Long

(29:53) Collusion Has Nothing to Do With This

(32:38) If Humans Do Not Successfully Collude They Lose All Control

(34:45) The Odds Are Against Us and the Situation is Grim

---

First published:
February 5th, 2025

Source:
https://www.lesswrong.com/posts/jEZpfsdaX2dBD9Y6g/the-risk-of-gradual-disempowerment-from-ai

---

Narrated by TYPE III AUDIO.