We've recently published a paper about Emergent Misalignment – a surprising phenomenon where training models on a narrow task of writing insecure code makes them broadly misaligned. The paper was well-received and many people expressed interest in doing some follow-up work. Here we list some ideas.
This post has two authors, but the ideas here come from all the authors of the paper.
We plan to try some of them. We don't yet know which ones. If you consider working on some of that, you might want to reach out to us (e.g. via a comment on this post). Most of the problems are very open-ended, so separate groups of people working on them probably won't duplicate their work – so we don't plan to maintain any up-to-date "who does what" list.
Ideas are grouped into six categories:
---
Outline:
(01:55) Useful information for people who consider working on that
(04:19) Training data
(04:23) 1. Find novel datasets that lead to emergent misalignment
(05:07) 2. Create datasets that lead to more robust misalignment
(05:44) 3. Iterate on the evil numbers dataset
(06:28) 4. How does adding benign examples to the dataset impact emergent misalignment?
(07:09) 5. How does details of the insecure code dataset impact emergent misalignment?
(08:05) 6. Do we see generalization in the other direction?
(08:20) Training process
(08:23) 1. What happens if we do full-weights training instead of LoRA?
(08:38) 2. Try different hyperparameters
(09:03) 3. Try different models
(09:15) 4. Try finetuning a base model
(09:38) 5. Try finding a realistic setup where we see emergent misalignment
(09:49) In-context learning
(10:00) 1. Run ICL experiments on a base model
(10:06) 2. Run ICL experiments on the evil numbers dataset
(10:12) 3. Just play with ICL a bit more
(10:23) Evaluation
(10:26) 1. Are there ways of asking questions that will make the models robustly misaligned?
(10:50) 2. What features of questions make models give misaligned answers?
(11:21) 3. Do models exhibit misaligned behavior in an agentic settings?
(11:35) Mechanistic interpretability
(11:45) 1. Very general: how does that happen? Why does that happen?
(12:48) 2. Can we separate writing insecure code from misalignment?
(13:28) 3. Whats going on with increased refusal rate?
(14:00) Non-misalignment
(14:22) 1. Make the model an utilitarian.
(14:44) 2. Make the model religious.
---
First published:
March 1st, 2025
Source:
https://www.lesswrong.com/posts/AcTEiu5wYDgrbmXow/open-problems-in-emergent-misalignment
Narrated by TYPE III AUDIO.