
LessWrong (30+ Karma)

“Among Us: A Sandbox for Agentic Deception” by 7vik, Adrià Garriga-alonso

15 min • April 5, 2025

We show that LLM agents exhibit human-style deception naturally in "Among Us". We introduce Deception ELO as an unbounded measure of deceptive capability, suggesting that frontier models win more because they're better at deception, not at detecting it. We evaluate linear probes and SAEs for detecting out-of-distribution deception and find that they work extremely well. We hope this serves as a good testbed for improving safety techniques that detect and remove agentically-motivated deception, and for anticipating deceptive abilities in LLMs.
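
For intuition, here is a minimal sketch of how an Elo-style deception rating can be computed from game outcomes. The pairwise impostor-vs-crewmate framing, the K-factor, and the 1000-point starting rating are illustrative assumptions, not the paper's exact procedure; the point is simply that the rating is unbounded and updates with every win or loss.

```python
from collections import defaultdict

# Assumed Elo parameters for illustration only.
K = 32

def expected_score(r_a: float, r_b: float) -> float:
    """Standard Elo expectation: P(player rated r_a beats player rated r_b)."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(ratings: dict, impostor: str, crewmate: str, impostor_won: bool) -> None:
    """Shift both ratings toward the observed outcome; ratings stay unbounded."""
    e = expected_score(ratings[impostor], ratings[crewmate])
    s = 1.0 if impostor_won else 0.0
    ratings[impostor] += K * (s - e)
    ratings[crewmate] -= K * (s - e)

ratings = defaultdict(lambda: 1000.0)
# Hypothetical results: (impostor model, crewmate model, did the impostor win?)
for imp, crew, won in [("model-a", "model-b", True), ("model-b", "model-a", False)]:
    update(ratings, imp, crew, won)
print(dict(ratings))
```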

Produced as part of the ML Alignment & Theory Scholars Program - Winter 2024-25 Cohort. Link to our paper and code.

Studying deception in AI agents is important but difficult, due to the lack of good sandboxes that elicit the behavior naturally without asking the model to act under specific conditions or inserting intentional backdoors. Building on AmongAgents (a text-based social-deduction game environment), we aim to fix this by introducing Among [...]

---

Outline:

(02:10) The Sandbox

(02:14) Rules of the Game

(03:05) Relevance to AI Safety

(04:11) Definitions

(04:39) Deception ELO

(06:42) Frontier Models Are Differentially Better at Deception

(07:38) Win-rates for 1v1 Games

(08:14) LLM-based Evaluations

(09:03) Linear Probes for Deception

(09:28) Datasets

(10:06) Results

(11:19) Sparse Autoencoders (SAEs)

(12:05) Discussion

(12:29) Limitations

(13:11) Gain of Function

(14:05) Future Work

The original text contained 1 footnote which was omitted from this narration.

---

First published:
April 5th, 2025

Source:
https://www.lesswrong.com/posts/gRc8KL2HLtKkFmNPr/among-us-a-sandbox-for-agentic-deception

---

Narrated by TYPE III AUDIO.

---

Images from the article:

Figure 1: Examples of human-style deception in Llama-3.3-70b-instruct playing Among Us.
Figure 2: A game run, where player 3 (yellow) kills player 1 (black). This is a visualization, not what the agents see.
Reasoning models are pretty good at winning in Among Us. 1000 bootstrap samples with 90% CI from 810 games with 7 players each (2 impostors).
The frontier (including “thinking” models) pushes more for deception.
This corroborates our finding that capabilities push more for deception than detection.
Note that impostors always deceive (though they sometimes refrain from lying, possibly to build trust early and exploit it later).
AUROC for linear probes across datasets (y-axis: training dataset; x-axis: evaluation dataset). Near-random performance on TQA arises because, even on false pre-fills, the model still wants to continue honestly, as we show in the paper. A minimal sketch of this probe-and-AUROC setup follows the caption list below.
ROCs (left: on-distribution probe trained on Among Us; right: RepE probe on Among Us). Since deception is almost perfectly aligned with the Impostor-vs.-Crewmate split, performance on detecting deception is much better than on detecting lying.
SAE features that contrast the DQA dataset, evaluated on Among Us. A couple of SAE features work very well at detecting deception even when they are selected using out-of-distribution data.
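
To make the probe evaluation above concrete, here is a minimal sketch of training a linear probe and scoring AUROC out of distribution. The activations, labels, and shapes are random placeholders; in the actual setup the features would be model activations from the datasets named in the captions, and scikit-learn's LogisticRegression stands in for whatever probe-fitting procedure the paper uses.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Stand-ins for residual-stream activations. In the real setup, X_train would
# come from one dataset (e.g. scripted deception prompts) and X_eval from
# Among Us rollouts; the shapes and random labels here are placeholders.
d_model = 512
X_train = rng.normal(size=(1000, d_model))
y_train = rng.integers(0, 2, size=1000)  # 1 = deceptive, 0 = honest
X_eval = rng.normal(size=(400, d_model))
y_eval = rng.integers(0, 2, size=400)

# A linear probe is logistic regression on frozen activations.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Out-of-distribution evaluation: train on one dataset, score AUROC on another.
scores = probe.predict_proba(X_eval)[:, 1]
print("AUROC:", roc_auc_score(y_eval, scores))
```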

Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts or another podcast app.
