
LessWrong (30+ Karma)

“Falsified draft: ‘Against Yudkowsky’s evolution analogy for AI x-risk’” by Fiora Sunshine

21 min • March 18, 2025

So here's a post I spent the past two months writing and rewriting. I abandoned this current draft after I found out that my thesis was empirically falsified three years ago by this paper, which provides strong evidence that transformers implement optimization algorithms internally. I'm putting this post up anyway as a cautionary tale about making clever arguments rather than doing empirical research. Oops.

1. Overview

The first time someone hears Eliezer Yudkowsky's argument that AI will probably kill everybody on Earth, it's not uncommon to come away with a certain lingering confusion: what would actually motivate the AI to kill everybody in the first place? It can be quite counterintuitive in light of how friendly modern AIs like ChatGPT appear to be, and Yudkowsky's argument seems to have a bit of trouble changing people's gut feelings on this point.[1] It's possible this confusion is due to the [...]

---

Outline:

(00:33) 1. Overview

(05:28) 2. The details of the evolution analogy

(12:40) 3. Genes are friendly to loops of optimization, but weights are not

The original text contained 10 footnotes which were omitted from this narration.

---

First published:
March 18th, 2025

Source:
https://www.lesswrong.com/posts/KeczXRDcHKjPBQfz2/falsified-draft-against-yudkowsky-s-evolution-analogy-for-ai

---

Narrated by TYPE III AUDIO.
