LessWrong (30+ Karma)

“Alignment faking CTFs: Apply to my MATS stream” by joshc

8 min • 5 April 2025

Right now, alignment seems easy – but that's because models spill the beans when they are misaligned. Eventually, models might “fake alignment,” and we don’t know how to detect that yet.

It might seem like there's a thriving research field improving white-box detectors – a new paper about probes drops on arXiv nearly every other week. But no one really knows how well these techniques actually work.
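
For a sense of what these detectors look like, here is a minimal sketch of a linear probe, the kind of white-box detector these papers typically study. Everything here is a synthetic stand-in – the activation width, sample counts, and the planted "signal direction" are illustrative assumptions, not taken from any real model or paper. A real probe would be trained on hidden activations extracted from transcripts labeled honest vs. alignment-faking.

```python
# Sketch of a white-box "probe" detector: logistic regression trained on
# a model's hidden activations to flag alignment faking. Activations are
# synthetic placeholders (hypothetical dimensions and labels).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

D_MODEL = 512      # hidden-state width (hypothetical)
N_SAMPLES = 2000   # number of labeled transcripts (hypothetical)

# Stand-in activations: faking examples get a small shift along one
# direction, mimicking a linearly detectable "deception" feature.
X_honest = rng.normal(size=(N_SAMPLES // 2, D_MODEL))
X_faking = rng.normal(size=(N_SAMPLES // 2, D_MODEL))
X_faking[:, 0] += 0.5  # assumed signal direction, purely illustrative

X = np.vstack([X_honest, X_faking])
y = np.concatenate([np.zeros(N_SAMPLES // 2), np.ones(N_SAMPLES // 2)])

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0
)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {probe.score(X_te, y_te):.2f}")
```

The open question the post raises is exactly what this sketch glosses over: accuracy on convenient labeled examples says little about whether the probe catches a model that is deliberately faking alignment.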

Some researchers have already tried to put white-box detectors to the test. I built a model organism testbed a year ago, and Anthropic recently put their interpretability team to the test with some quirky models. But these tests were layups. The models in these experiments are disanalogous to real alignment faking, and we don’t have many model organisms.

This summer, I’m trying to take these testbeds to the next level in an “alignment faking capture the flag game.” Here's how the [...]

---

Outline:

(01:58) Details of the game

(06:01) How this CTF game ties into a broader alignment strategy

(07:43) Apply by April 18th

---

First published:
April 4th, 2025

Source:
https://www.lesswrong.com/posts/jWFvsJnJieXnWBb9r/alignment-faking-ctfs-apply-to-my-mats-stream

---

Narrated by TYPE III AUDIO.

---

Images from the article:

Alignment is like throwing a paper airplane.
Diagram showing red and blue teams testing AI model checkpoints for alignment and compliance.
Anime character holding up two robot emojis.

Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts or another podcast app.
