LessWrong (30+ Karma)

“Steelmanning heuristic arguments” by Dmitry Vaintrob

31 min • April 13, 2025

Introduction

This is a nuanced “I was wrong” post.

Something I really like about AI safety and EA/rationalist circles is the ease and positivity with which people approach being criticised.[1] For all the blowups and stories of prominent people in these communities not living up to the stated values, my experience so far has been that the desire to be truth-seeking and to stress-test your cherished beliefs is a real, deeply respected, and communally cultivated value. This in particular explains my ability to keep getting jobs and coming to conferences in this community, despite being very eager to criticise and call bullshit on people's theoretical agendas.

One such agenda that I’ve been a somewhat vocal critic of (and which received my criticism amazingly well) is the “heuristic arguments” picture and the ARC research agenda more generally. Last Spring I spent about 3 months on a work trial/internship at [...]

---

Outline:

(00:10) Introduction

(03:24) Background and motte/bailey criticism

(09:49) The missing piece: connecting in the no-coincidence principle

(15:15) From the no-coincidence principle to statistical explanations

(17:46) Gödel and the thorny deeps

(19:30) Ignoring the monsters and the heuristic arguments agenda

(24:46) Upshots

(27:41) Summary

(29:19) Renormalization as a cousin of heuristic arguments

The original text contained 3 footnotes which were omitted from this narration.

---

First published:
April 13th, 2025

Source:
https://www.lesswrong.com/posts/CYDakfFgjHFB7DGXk/untitled-draft-wn6w

---

Narrated by TYPE III AUDIO.

00:00 -00:00