AI Safety Fundamentals: Governance
I’ve previously argued that machine learning systems often exhibit emergent capabilities, and that these capabilities could lead to unintended negative consequences. But how can we reason concretely about these consequences? There are two principles I find useful for reasoning about future emergent capabilities.
Using these principles, I’ll describe two specific emergent capabilities that I’m particularly worried about: deception (fooling human supervisors rather than doing the intended task), and optimization (choosing from a diverse space of actions based on their long-term consequences).
Source:
https://bounded-regret.ghost.io/emergent-deception-optimization/
Narrated for AGI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
---
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website.