Sveriges mest populära poddar

LessWrong (30+ Karma)

“We should start looking for scheming ‘in the wild’” by Marius Hobbhahn

9 min • 6 mars 2025

TLDR: AI models are now capable enough that we might get relevant information from monitoring for scheming in regular deployments, both in the internal and external deployment settings. We propose concrete ideas for what this could look like while preserving the privacy of customers and developers.

What do we mean by in the wild?

By “in the wild,” we basically mean any setting that is not intentionally created to measure something (such as evaluations). This could be any deployment setting, from using LLMs as chatbots to long-range agentic tasks.

We broadly differentiate between two settings

  1. Developer-internal deployment: AI developers use their AI models internally, for example, as chatbots, research assistants, synthetic data generation, etc. 
  2. Developer-external deployment: AI chatbots (ChatGPT, Claude, Gemini, etc.) or API usage. 

Since scheming is especially important in LM agent settings, we suggest prioritizing cases where LLMs are scaffolded as autonomous agents, but this is not [...]

---

Outline:

(00:23) What do we mean by in the wild?

(01:20) What are we looking for?

(02:42) Why monitor for scheming in the wild?

(03:56) Concrete ideas

(06:16) Privacy concerns

---

First published:
March 6th, 2025

Source:
https://www.lesswrong.com/posts/HvWQCWQoYh4WoGZfR/we-should-start-looking-for-scheming-in-the-wild

---

Narrated by TYPE III AUDIO.

00:00 -00:00