LessWrong (30+ Karma)

“Not all capabilities will be created equal: focus on strategically superhuman agents” by benwr

6 min • 13 February 2025

When, exactly, should we consider humanity to have properly "lost the game", with respect to agentic AI systems?

The most common AI milestone concepts seem to be "artificial general intelligence", followed closely by "superintelligence". Sometimes people talk about "transformative AI", "high-level machine intelligence", or "full automation of the labor force." None of these are well-suited for pointing specifically at the capabilities that would spell a "point of no return" for humanity. In fact, they're all designed to be agnostic to exactly which capabilities will matter.

When working to predict and mitigate existential risks from AI agents, we should try to be as clear as possible about which capabilities we're concerned about. As a result, I think we should focus on "strategically superhuman AI agents": AI agents that are better than the best groups of humans at real-world strategic action.

Skill at real-world strategic action is context-dependent, and isn't a [...]

---

Outline:

(02:38) Low-effort FAQ

(02:42) What's the point here? Does anything interesting follow from this?

(03:51) Isn't this just as vague as other milestones?

(04:07) Won't this happen as soon as we get [AGI, recursive self-improvement, ...]?

(05:08) Are you just trying to say "powerful AI"? That's too obvious to even mention.

---

First published:
February 13th, 2025

Source:
https://www.lesswrong.com/posts/5rMwWzRdWFtRdHeuE/not-all-capabilities-will-be-created-equal-focus-on

---

Narrated by TYPE III AUDIO.