This is an interview with Dan Hendrycks, Executive Director and Co-Founder of The Center for AI Safety (safe.ai).
This is the fifth and final installment of our 5-part series about "AGI Destinations" - where we unpack the preferable and non-preferable futures humanity might strive towards in the years ahead. This episode was recorded in October of 2023.
Read more of Dan's ideas and research at https://www.safe.ai/work/research
This episode references the following essays and resources:
-- The Intelligence Trajectory Political Matrix: danfaggella.com/itpm
-- Natural Selection Favors AIs over Humans: https://arxiv.org/abs/2303.16200
-- The SDGs of Strong AGI: https://emerj.com/ai-power/sdgs-of-ai/
Watch this episode on The Trajectory YouTube channel: https://www.youtube.com/watch?v=arqYqHX13eM
Read the Dan Hendrycks episode highlight: danfaggella.com/hendrycks1
...
There are three main questions we cover here on The Trajectory:
1. Who are the power players in AGI, and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?
If this sounds like it's up your alley, then be sure to stick around and connect:
Blog: danfaggella.com/trajectory
X: x.com/danfaggella
LinkedIn: linkedin.com/in/danfaggella
Newsletter: bit.ly/TrajectoryTw