AI Safety Fundamentals: Governance

Emergent Deception and Emergent Optimization

33 min • 13 May 2023

I’ve previously argued that machine learning systems often exhibit emergent capabilities, and that these capabilities could lead to unintended negative consequences. But how can we reason concretely about these consequences? There are two principles I find useful for reasoning about future emergent capabilities:


  1. If a capability would help get lower training loss, it will likely emerge in the future, even if we don’t observe much of it now.
  2. As ML models get larger and are trained on more and better data, simpler heuristics will tend to get replaced by more complex heuristics.

Using these principles, I’ll describe two specific emergent capabilities that I’m particularly worried about: deception (fooling human supervisors rather than doing the intended task), and optimization (choosing from a diverse space of actions based on their long-term consequences).


Source:

https://bounded-regret.ghost.io/emergent-deception-optimization/


Narrated for AGI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.

---

A podcast by BlueDot Impact.

Learn more on the AI Safety Fundamentals website.
