For Humanity: An AI Safety Podcast
For Humanity, An AI Safety Podcast is the AI safety podcast for regular people. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within 2 to 10 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
The makers of AI have no idea how to control their technology or why it does what it does. And yet they keep making it faster and stronger. In episode one, we introduce the two biggest unsolved problems in AI safety: alignment and interpretability.
This podcast is your wake-up call, and a real-time, unfolding plan of action.