AI Safety Fundamentals: Governance

Why Might Misaligned, Advanced AI Cause Catastrophe?

20 min • 13 May 2023

You may have seen arguments (such as these) for why people might create and deploy advanced AI that is both power-seeking and misaligned with human interests. This may leave you thinking, “OK, but would such AI systems really pose catastrophic threats?” This document compiles arguments for the claim that misaligned, power-seeking, advanced AI would pose catastrophic risks.


We’ll see arguments for the following claims, which are largely independent reasons for concern:


  1. Humanity’s past holds concerning analogies
  2. AI systems have some major inherent advantages over humans
  3. AIs could come to outnumber and out-resource humans
  4. People will face competitive incentives to delegate power to AI systems (giving AI systems a relatively powerful starting point)
  5. Advanced AI would accelerate AI research, leading to a major technological advantage (which, if developed outside of human control, could be used against humans)


Source:

https://aisafetyfundamentals.com/governance-blog/why-might-misaligned-advanced-ai-cause-catastrophe-compilation


Narrated for AGI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.

---

A podcast by BlueDot Impact.

Learn more on the AI Safety Fundamentals website.
