LessWrong (30+ Karma)

“Training AI to do alignment research we don’t already know how to do” by joshc

14 min • 24 February 2025

This post heavily overlaps with “how might we safely pass the buck to AI?” but is written to address a central counterargument raised in the comments, namely “AI will produce sloppy AI alignment research that we don’t know how to evaluate.” I wrote this post in a personal capacity.

The main plan of many AI companies is to automate AI safety research. Both Eliezer Yudkowsky and John Wentworth raise concerns about this plan that I’ll summarize as “garbage-in, garbage-out.” The concerns go something like this:

Insofar as you want to use AI to make powerful AI safe, it’s because you don’t know how to do this task yourself.

So if you train AI to do research you don’t know how to do, it will regurgitate your bad takes and produce slop.

Of course, you have the advantage of grading instead of generating this research. But this advantage [...]

---

Outline:

(06:01) 1. Generalizing to hard tasks

(09:44) 2. Human graders might introduce bias

(11:48) 3. AI agents might still be egregiously misaligned

(12:28) Conclusion

---

First published:
February 24th, 2025

Source:
https://www.lesswrong.com/posts/5gmALpCetyjkSPEDr/training-ai-to-do-alignment-research-we-don-t-already-know

---

Narrated by TYPE III AUDIO.

---


Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
