Crossposted from my personal blog.
Recent advances have begun to move AI beyond pretrained amortized models and supervised learning, into the realm of online reinforcement learning and hence the creation of hybrid agents that combine direct and amortized optimization. While we have generally found that purely amortized pretrained models are an easy case for alignment, and have developed at least moderately robust alignment techniques for them, this change in paradigm brings new possible dangers. Looking even further ahead, as we move towards agents capable of continual online learning and ultimately recursive self-improvement (RSI), the potential for misalignment, or for destabilization of previously aligned agents, grows, and it is very likely that we will need new and improved techniques to reliably and robustly control and align such minds.
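To make the amortized/direct distinction concrete, here is a minimal, hypothetical Python sketch (not from the original post; the toy one-dimensional reward and all names and constants are my own illustrative assumptions). An amortized policy does all its optimization at training time, so acting is a single cheap function evaluation; a direct optimizer searches over actions at decision time; a hybrid agent uses an amortized proposal to seed a direct search.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy reward landscape (hypothetical): a quadratic with its optimum at 0.7.
def reward(action: float) -> float:
    return -(action - 0.7) ** 2

# Amortized optimization: all the work happened at training time; at
# inference the "policy" is one cheap evaluation (here a constant standing
# in for a trained network's forward pass).
def amortized_policy(observation: np.ndarray) -> float:
    return 0.68  # whatever the training process baked in

# Direct optimization: explicitly search over candidate actions at decision
# time and pick the one the reward model scores best.
def direct_policy(observation: np.ndarray, n_candidates: int = 256) -> float:
    candidates = rng.uniform(-1.0, 1.0, size=n_candidates)
    return max(candidates, key=reward)

# Hybrid agent: an amortized proposal narrows the search, and a direct
# refinement step improves on it (cf. AlphaZero-style policy plus search).
def hybrid_policy(observation: np.ndarray, n_candidates: int = 64) -> float:
    proposal = amortized_policy(observation)
    candidates = proposal + rng.normal(0.0, 0.05, size=n_candidates)
    return max(candidates, key=reward)

obs = np.zeros(4)  # dummy observation
print(amortized_policy(obs), direct_policy(obs), hybrid_policy(obs))
```

The safety-relevant asymmetry the sketch gestures at: the amortized policy can only ever return what training baked in, while the direct and hybrid policies will exploit whatever the reward model actually scores highly at runtime.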
In this post, I want to present a high-level argument that the move to continual learning agents [...]