Sveriges mest populära poddar

LessWrong (30+ Karma)

[Linkpost] “Modifying LLM Beliefs with Synthetic Document Finetuning” by RowanWang, Johannes Treutlein, Ethan Perez, Fabien Roger, Sam Marks

5 min • 24 april 2025
This is a link post.

In this post, we study whether we can modify an LLM's beliefs and investigate whether doing so could decrease risk from advanced AI systems.

We describe a pipeline for modifying LLM beliefs via synthetic document finetuning and introduce a suite of evaluations that suggest our pipeline succeeds in inserting all but the most implausible beliefs. We also demonstrate proof-of-concept applications to honeypotting for detecting model misalignment and unlearning.

Introduction:

Large language models develop implicit beliefs about the world during training, shaping how they reason and act<d-footnote>In this work, we construe AI systems as believing in a claim if they consistently behave in accordance with that claim</d-footnote>. In this work, we study whether we can systematically modify these beliefs, creating a powerful new affordance for safer AI deployment.

Controlling the beliefs of AI systems can decrease risk in a variety of ways. First, model organisms [...]

---

First published:
April 24th, 2025

Source:
https://www.lesswrong.com/posts/ARQs7KYY9vJHeYsGc/untitled-draft-2qxt

Linkpost URL:
https://alignment.anthropic.com/2025/modifying-beliefs-via-sdf/

---

Narrated by TYPE III AUDIO.

---

Images from the article:

Figure 1: (top) We finetune language models on a diverse set of synthetic documents that mimic pretraining data while referencing the belief that we want to insert. (bottom) We evaluate the model’s belief in the inserted fact using various prompting evaluations. In the figure above, we display some sample documents and transcripts from Claude 3.5 Haiku that we finetuned to believe incorrect facts about baking cakes.

Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

00:00 -00:00