I recently left OpenAI to pursue independent research. I’m working on a number of different research directions, but they’re unified by the core idea of a scale-free theory of intelligent agency. In this post I give a rough sketch of how I’m thinking about that. I’m erring on the side of sharing half-formed ideas, so there may well be parts that don’t make sense yet. Nevertheless, I think this broad research direction is very promising.
This post has two sections. The first describes what I mean by a theory of intelligent agency, and some problems with existing (non-scale-free) attempts. The second outlines my current path towards formulating a scale-free theory of intelligent agency, which I’m calling coalitional agency.
Theories of intelligent agency
By a “theory of intelligent agency” I mean a unified mathematical framework that describes both understanding the world and influencing the world. In this section I’ll [...]
---
Outline:
(00:56) Theories of intelligent agency
(01:23) Expected utility maximization
(03:36) Active inference
(06:30) Towards a scale-free unification
(08:48) Two paths towards a theory of coalitional agency
(09:54) From EUM to coalitional agency
(10:20) Aggregating into EUMs is very inflexible
(12:38) Coalitional agents are incentive-compatible decision procedures
(15:18) Which incentive-compatible decision procedure?
(17:57) From active inference to coalitional agency
(19:06) Predicting observations via prediction markets
(20:11) Choosing actions via auctions
(21:32) Aggregating values via voting
(23:23) Putting it all together
---
First published:
March 21st, 2025
Source:
https://www.lesswrong.com/posts/5tYTKX4pNpiG4vzYg/towards-a-scale-free-theory-of-intelligent-agency
Narrated by TYPE III AUDIO.
---