For Humanity: An AI Safety Podcast
In Episode #10, AI Safety research icon Eliezer Yudkowsky updates his AI doom predictions for 2024. After For Humanity host John Sherman tweeted at Eliezer, he revealed new timelines and predictions for 2024. Be warned, this is a heavy episode. But there is some hope and a laugh at the end.

Among his most important points, he believes:

- Humanity no longer has 30-50 years to solve the alignment and interpretability problems; our broken processes just won't allow it.
- Human augmentation is the only viable path for humans to compete with AGIs.
- We have ONE YEAR, THIS YEAR, 2024, to mount a global WW2-style response to the extinction risk of AI.
- This battle is EASIER to win than WW2 :)

This podcast is not journalism. But it's not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.