TLDR: AI models are now capable enough that we might get relevant information from monitoring for scheming in regular deployments, in both internal and external deployment settings. We propose concrete ideas for what this could look like while preserving the privacy of customers and developers.
What do we mean by in the wild?
By “in the wild,” we mean any setting that is not intentionally created to measure something (such as evaluations). This covers any deployment setting, from chatbot interactions to long-range agentic tasks.
We broadly differentiate between two settings.
Since scheming is especially important in LM agent settings, we suggest prioritizing cases where LLMs are scaffolded as autonomous agents, but this is not [...]
---
Outline:
(00:23) What do we mean by in the wild?
(01:20) What are we looking for?
(02:42) Why monitor for scheming in the wild?
(03:56) Concrete ideas
(06:16) Privacy concerns
---
First published:
March 6th, 2025
Source:
https://www.lesswrong.com/posts/HvWQCWQoYh4WoGZfR/we-should-start-looking-for-scheming-in-the-wild
Narrated by TYPE III AUDIO.