LessWrong (30+ Karma)

“Nonpartisan AI safety” by Yair Halberstadt

3 min • 11 February 2025

AI alignment is probably the most pressing issue of our time. Unfortunately, it has also become one of the most controversial. AI accelerationists accuse AI doomers/AI-not-kill-everyoneism-ers of being Luddites who would rather keep humanity shackled to the horse and plow than risk any progress, whilst the doomers in turn accuse the accelerationists of rushing humanity as fast as it can straight off a cliff.

As Robin Hanson likes to point out, trying to change policy on a polarised issue is backbreaking work. But if you can pull sideways instead, you can make easy progress with no one pulling the other way.

So can we think of a research program that:

a) will produce critically useful results even if AI isn't dangerous (or the benefits of AI far outweigh the costs), and

b) would likely be sufficient to prevent doom, if the project is successful and AI does turn out to [...]

---

First published:
February 10th, 2025

Source:
https://www.lesswrong.com/posts/QpaWHYEQomyQTBKw5/nonpartisan-ai-safety

---

Narrated by TYPE III AUDIO.
