LessWrong (Curated & Popular)

“Ten people on the inside” by Buck

7 min • 29 January 2025
(Many of these ideas developed in conversation with Ryan Greenblatt)

In a shortform, I described some different levels of resources and buy-in for misalignment risk mitigations that might be present in AI labs:

*The “safety case” regime.* Sometimes people talk about wanting to have approaches to safety such that if all AI developers followed these approaches, the overall level of risk posed by AI would be minimal. (These approaches are going to be more conservative than will probably be feasible in practice given the amount of competitive pressure, so I think it's pretty likely that AI developers don’t actually hold themselves to these standards, but I agree with e.g. Anthropic that this level of caution is at least a useful hypothetical to consider.) This is the level of caution people are usually talking about when they discuss making safety cases. I usually operationalize this as the AI developer wanting [...]

---

First published:
January 28th, 2025

Source:
https://www.lesswrong.com/posts/WSNnKcKCYAffcnrt2/ten-people-on-the-inside

---

Narrated by TYPE III AUDIO.
