LessWrong (30+ Karma)

“Selective modularity: a research agenda” by cloud, Jacob G-W

52 min • March 24, 2025

Overview: By training neural networks with selective modularity, gradient routing enables new approaches to core problems in AI safety. This agenda identifies related research directions that might enable safer development of transformative AI.

Introduction

Soon, the world may see rapid increases in AI capabilities resulting from AI research automation, and no one knows how to ensure this happens safely (Soares, 2016; Aschenbrenner, 2023; Anwar et al., 2024; Greenblatt, 2025). The current ML paradigm may not be well-suited to this task, as it produces inscrutable, generalist models without guarantees on their out-of-distribution performance. These models may reflect unintentional quirks of their training objectives (Pan et al., 2022; Skalse et al., 2022; Krakovna et al., 2020).

Gradient routing (Cloud et al., 2024) is a general training method intended to meet the need for economically competitive ways of producing safe AI systems. The main idea of gradient routing is to configure which [...]
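To make the core idea concrete, here is a minimal sketch of gradient routing, assuming only what the description above states: gradients from different data subsets are masked so that each subset updates only a designated module. The setup, names, and masking scheme are invented for illustration and are not the authors' implementation.

```python
import numpy as np

# Toy "network": two scalar modules whose contributions are summed.
# Gradient routing (illustrative sketch, NOT the paper's code): a
# per-datum mask decides which module's parameters receive the gradient.

w = np.zeros(2)                      # parameters of module 0 and module 1
ROUTE = {"A": np.array([1.0, 0.0]),  # data tagged "A" updates module 0 only
         "B": np.array([0.0, 1.0])}  # data tagged "B" updates module 1 only

def routed_step(x, target, tag, lr=0.05):
    """One SGD step on L = (y - target)^2 with the gradient masked by ROUTE."""
    global w
    y = w.sum() * x                   # forward pass: y = (w0 + w1) * x
    grad = 2.0 * (y - target) * x     # dL/dw_i, identical for both modules
    w -= lr * ROUTE[tag] * grad       # routed update: the mask gates each module

rng = np.random.default_rng(0)
for _ in range(400):
    x = rng.normal()
    routed_step(x, 2.0 * x, "A")      # train only on "A" data (true slope 2)

print(w)  # module 0 has absorbed the capability; module 1 was never touched
```

In the terms used later in the episode, this is parameter-level routing: the mask only changes which parameters update, leaving the forward pass and the rest of the learning dynamics unchanged.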

---

Outline:

(00:28) Introduction

(04:56) Directions we think are most promising

(06:17) Recurring ideas

(09:37) Gradient routing methods and applications

(09:42) Improvements to basic gradient routing methodology

(09:53) Existing improvements

(11:26) Choosing what to route where

(12:42) Abstract and contextual localization

(15:06) Gating

(16:48) Improved regularization

(17:20) Incorporating existing ideas

(18:10) Gradient routing beyond pretraining

(20:21) Applications

(20:25) Semi-supervised reinforcement learning

(22:43) Semi-supervised robust unlearning

(24:55) Interpretability

(27:21) Conceptual work on gradient routing

(27:25) The science of absorption

(29:47) Modeling the effects of combined estimands

(30:54) Influencing generalization

(32:38) Identifying sufficient conditions for scalable oversight

(33:57) Related conceptual work

(34:02) Understanding entanglement

(36:51) Finetunability as a proxy for generalization

(39:50) Understanding when to expose limited supervision to the model via the behavioral objective

(41:57) Clarifying capabilities vs. dispositions

(43:05) Implications for AI safety

(43:10) AI governance

(45:02) Access control

(47:00) Implications of robust unlearning

(48:34) Safety cases

(49:27) Getting involved

(50:27) Acknowledgements

---

First published:
March 24th, 2025

Source:
https://www.lesswrong.com/posts/tAnHM3L25LwuASdpF/selective-modularity-a-research-agenda

---

Narrated by TYPE III AUDIO.

---

Images from the article:

Two ways of applying gradient routing to a module in a residual network: activation-level (left), and parameter-level (right). Parameter-level routing only affects which parameters update, and doesn’t change learning dynamics otherwise. Activation-level routing changes how other parameters in the network learn.
Entangled capabilities present a challenge for robustly unlearning virology while maintaining performance on other biology tasks.
An illustration of subagent access control: subagents with different incapacities work together to perform a task. Based on their specializations, (informal) guarantees can be made about the information passed between them, enabling more effective oversight.
Mathematical equations showing REINFORCE and Routed REINFORCE algorithm parameter updates.
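The equations in that image are not reproduced in the text. For orientation, the standard REINFORCE parameter update is the familiar policy-gradient step; a routed variant (notation assumed here, not taken from the article) would apply a per-datum mask to the same update, in line with gradient routing as described above:

```latex
% Standard REINFORCE update: policy \pi_\theta, return R, step size \alpha
\theta \leftarrow \theta + \alpha \, R \, \nabla_\theta \log \pi_\theta(a \mid s)

% A routed variant (illustrative notation): m(x) \in \{0,1\}^{|\theta|}
% selects which parameters receive this datum's gradient, elementwise.
\theta \leftarrow \theta + \alpha \, R \, m(x) \odot \nabla_\theta \log \pi_\theta(a \mid s)
```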

Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
