As I think about "what to do about AI x-risk?", some principles that seem useful to me:
- Short timelines seem plausible enough that, for the next year or so, I'd like to focus on plans that are relevant if takeoff begins in the next few years. In a year, if it looks more like there are some fundamental bottlenecks on true creative thinking, I may consider more projects that only pay off in longer-timeline stories.
- Given "short timelines", I feel most optimistic on plans that capitalize on skills that I'm already good at (but maybe multiclassing at things that I can learn quickly with LLM assistance).
- I think "UI design" is a skill that I (and Lightcone more broadly) am pretty good at. And, I believe the Interfaces as a Scarce Resource hypothesis – the world is often bottlenecked on ability to process and make-use-of information in complicated, messy [...]
---
Outline:
(01:25) Some thoughts so far
(03:58) ...
---
First published:
April 27th, 2025
Source:
https://www.lesswrong.com/posts/t46PYSvHHtJLxmrxn/what-are-important-ui-shaped-problems-that-lightcone-could
---
Narrated by TYPE III AUDIO.