AI Safety Fundamentals: Governance

Overview of How AI Might Exacerbate Long-Running Catastrophic Risks

24 min • 13 May 2023

Developments in AI could exacerbate long-running catastrophic risks, including bioterrorism, disinformation and resulting institutional dysfunction, misuse of concentrated power, nuclear and conventional war, other coordination failures, and unknown risks. This document compiles research on how AI might raise these risks. (Other material in this course discusses more novel risks from AI.) We draw heavily from previous overviews by academics, particularly Dafoe (2020) and Hendrycks et al. (2023).


Source:

https://aisafetyfundamentals.com/governance-blog/overview-of-ai-risk-exacerbation


Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.

---

A podcast by BlueDot Impact.

Learn more on the AI Safety Fundamentals website.

00:00 -00:00