AI Safety Fundamentals: Governance

Why AI Alignment Could Be Hard With Modern Deep Learning

29 min • 4 January 2025

Why would we program AI that wants to harm us? Because we might not know how to do otherwise.

Source:

https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/

Crossposted from the Cold Takes Audio podcast.

---

A podcast by BlueDot Impact.

Learn more on the AI Safety Fundamentals website.
