LessWrong (30+ Karma)

“Vestigial reasoning in RL” by Caleb Biddulph

17 min • April 14, 2025

TL;DR: I claim that many reasoning patterns that appear in chains-of-thought are not actually used by the model to come to its answer, and can be more accurately thought of as historical artifacts of training. This can be true even for CoTs that are apparently "faithful" to the true reasons for the model's answer.

Epistemic status: I'm pretty confident that the model described here is more accurate than my previous understanding. However, I wouldn't be very surprised if parts of this post are significantly wrong or misleading. Further experiments would be helpful for validating some of these hypotheses.

Until recently, I assumed that RL training would cause reasoning models to make their chains-of-thought as efficient as possible, so that every token is directly useful to the model. However, I now believe that by default,[1] reasoning models' CoTs will often include many "useless" tokens that don't help the model achieve [...]
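The excerpt ends here. As a rough illustration of the dynamic the post's title points at (a toy sketch of my own, not code from the post; the two-phase setup, probabilities, and variable names are all illustrative), a reasoning pattern can be reinforced while it genuinely helps and then persist unchanged once it stops mattering, because outcome-based RL applies no pressure to remove it:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

logit = 0.0  # learnable propensity to emit a particular reasoning pattern
lr = 0.2

def train(steps, p_correct_given_emitted):
    """Outcome-only REINFORCE on a single Bernoulli 'emit the pattern?' choice."""
    global logit
    for _ in range(steps):
        p = sigmoid(logit)
        emitted = random.random() < p
        reward = 1.0 if random.random() < p_correct_given_emitted(emitted) else 0.0
        # Score-function gradient of log-prob w.r.t. the logit.
        grad = (1.0 - p) if emitted else -p
        logit += lr * grad * reward

# Phase 1: the pattern genuinely raises the chance of a correct answer,
# so RL reinforces it.
train(3000, lambda emitted: 0.8 if emitted else 0.4)
print(f"after phase 1: P(pattern) = {sigmoid(logit):.2f}")

# Phase 2: the model now answers correctly either way, so the pattern is
# useless -- but the expected gradient on it is zero, so it persists
# as a "vestigial" habit rather than being pruned away.
train(3000, lambda emitted: 0.9)
print(f"after phase 2: P(pattern) = {sigmoid(logit):.2f}")
```

In phase 1 the pattern correlates with (and causes) higher reward, so its probability is pushed up; in phase 2 the reward no longer depends on it, the expected update is zero, and the high propensity simply carries over as a historical artifact of earlier training.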

---

Outline:

(02:13) RL is dumber than I realized

(03:32) How might vestigial reasoning come about?

(06:28) Experiment: Demonstrating vestigial reasoning

(08:08) Reasoning is reinforced when it correlates with reward

(11:00) One more example: why does process supervision result in longer CoTs?

(15:53) Takeaways

The original text contained 8 footnotes which were omitted from this narration.

---

First published:
April 13th, 2025

Source:
https://www.lesswrong.com/posts/6AxCwm334ab9kDsQ5/vestigial-reasoning-in-rl

---

Narrated by TYPE III AUDIO.

---

Images from the article:

Pure outcome supervision is on the left; process supervision is on the right. The CoT on the right side is about twice as long and almost entirely useless.

Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
