For Humanity: An AI Safety Podcast

Episode #23 TRAILER - “AI Acceleration Debate” For Humanity: An AI Safety Podcast

5 min • April 8, 2024

Suicide or Salvation? In the Episode #23 trailer, AI risk realist John Sherman and accelerationist Paul Leszczynski debate AI accelerationism: the existential risks and benefits of AI, the credibility of the AI safety movement, and the idea of AI as humanity's child. They ask whether AI should be aligned with human values and what the consequences of such alignment might be. Paul argues that AI safety efforts could inadvertently bring about the very dangers they aim to prevent. The conversation also touches on the philosophy of accelerationism and how human conditioning shapes our understanding of AI.

This podcast is not journalism. But it's not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on Earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as two years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

TIMESTAMPS:

Is AI an existential threat to humanity? (00:00:00) Debate on the potential risks of AI and its impact on humanity.

The AI safety movement (00:00:42) Discussion on the perception of AI safety as a religion and the philosophy of accelerationism.

Human conditioning and perspectives on AI (00:02:01) Exploration of how human conditioning shapes perspectives on AI and the concept of AGI as a human creation.

Aligning AI and human values (00:04:24) Debate on the dangers of aligning AI with human ideologies and the potential implications for humanity.

RESOURCES:

Paul's YouTube Channel: Accel News Network

Best Account on Twitter: AI Notkilleveryoneism Memes 

JOIN THE FIGHT, help Pause AI!!!!

Pause AI

Join the Pause AI Weekly Discord Thursdays at 3pm EST


22 Word Statement from Center for AI Safety

Statement on AI Risk | CAIS
