
LessWrong (30+ Karma)

“Will compute bottlenecks prevent a software intelligence explosion?” by Tom Davidson

24 min • April 4, 2025

Epistemic status – thrown together quickly. This is my best guess, but I could easily imagine changing my mind.

Intro

I recently co-published a report arguing that there might be a software intelligence explosion (SIE) – once AI R&D is automated (i.e. automating OAI), the feedback loop of AI improving AI algorithms could accelerate more and more without needing more hardware.

If there is an SIE, the consequences would obviously be massive. You could shoot from human-level to superintelligent AI in a few months or years; by default society wouldn’t have time to prepare for the many severe challenges that could emerge (AI takeover, AI-enabled human coups, societal disruption, dangerous new technologies, etc).

The best objection to an SIE is that progress might be bottlenecked by compute. We discuss this in the report, but I want to go into much more depth because it's a powerful objection [...]
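To give a rough sense of the "economist version" of the objection (this framing is not in the excerpt above; it is a sketch based on the CES production function shown in the article's figures, with illustrative symbols rather than the report's exact parameters): algorithmic progress is modeled as output of a CES production function with compute C and cognitive labor L as inputs,

\text{progress rate} \propto \left[\alpha\, C^{\rho} + (1-\alpha)\, L^{\rho}\right]^{1/\rho}, \qquad \sigma = \frac{1}{1-\rho}.

If the elasticity of substitution \sigma is below 1, abundant AI cognitive labor cannot fully substitute for a fixed stock of compute, so compute eventually bottlenecks progress; if \sigma is above 1, extra cognitive labor keeps substituting for compute and the feedback loop can continue to accelerate. The parameters \alpha and \rho here are placeholders for illustration only.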

---

Outline:

(00:19) Intro

(01:47) The compute bottleneck objection

(01:51) Intuitive version

(02:58) Economist version

(09:13) Counterarguments to the compute bottleneck objection

(20:11) Taking stock

---

First published:
April 4th, 2025

Source:
https://www.lesswrong.com/posts/XDF6ovePBJf6hsxGj/will-compute-bottlenecks-prevent-a-software-intelligence-1

---

Narrated by TYPE III AUDIO.

---

Images from the article:

Mathematical equation showing a CES production function with capital and labor inputs.
Graph showing AI software progress versus cognitive labor with a maximum speed limit.

Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
