For Humanity: An AI Safety Podcast

Episode #24 TRAILER - “YOU can help save the world from AI Doom” For Humanity: An AI Safety Podcast

5 min • 15 April 2024

In episode #24, host John Sherman and Nonlinear co-founder Kat Woods discuss the critical need to prioritize AI safety as superintelligent AI develops. Woods compares the challenge to the Titanic's course toward an iceberg, stressing how hard it is to convince people of the urgency. She argues that AI safety is a matter of both altruism and self-preservation, and uses human-animal relations to illustrate the potential consequences of a disparity in intelligence between humans and AI. She notes a positive shift in the perception of AI risks, from fringe to mainstream concern, and shares a personal anecdote from her time in Africa that informed her views on the universal aversion to death and the importance of preventing harm. Woods's realization that near-term AI risks are increasingly probable further underscores the immediate need for action on AI safety.

This podcast is not journalism. But it's not opinion either. This show simply strings together the existing facts and underscores the unthinkable but probable outcome: the end of all life on Earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:

Nonlinear: https://www.nonlinear.org/

Best Account on Twitter: AI Notkilleveryoneism Memes 

JOIN THE FIGHT, help Pause AI!!!!

Pause AI

Join the Pause AI Weekly Discord Thursdays at 3pm EST

22 Word Statement from Center for AI Safety

Statement on AI Risk | CAIS
