For Humanity: An AI Safety Podcast
FULL INTERVIEW STARTS AT (00:22:26)
Episode #23 - “AI Acceleration Debate” For Humanity: An AI Safety Podcast
e/acc: Suicide or Salvation? In episode #23, AI Risk-Realist John Sherman and Accelerationist Paul Leszczynski debate AI accelerationism: the existential risks and benefits of AI, the legitimacy of the AI safety movement, and the idea of AI as humanity's child. They discuss whether AI should be aligned with human values and what the consequences of alignment might be. Paul holds some wild views, including that AI safety efforts could inadvertently create the very dangers they aim to prevent. The conversation also touches on the philosophy of accelerationism and how human conditioning shapes our understanding of AI.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable but probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as two years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
TIMESTAMPS:
TRAILER (00:00:00)
INTRO (00:05:40)
INTERVIEW:
Paul Leszczynski Interview (00:22:36) John Sherman interviews accelerationist Paul Leszczynski.
YouTube Channel Motivation (00:24:14) Why Leszczynski started his pro-acceleration channel.
AI Threat Viewpoint (00:28:24) Leszczynski on AI as an existential threat.
AI Impact Minority Opinion (00:32:23) Leszczynski on holding a minority view of AI's impact.
Tech Regulation Need (00:33:03) Regulatory oversight on tech startups debated.
Post-2008 Financial Regulation (00:34:16) Financial regulation effects and big company influence discussed.
Tech CEOs' Misleading Claims (00:36:31) The intentions behind tech CEOs' public statements.
Social Media Influence (00:38:09) Social media's advertising effectiveness.
AI Risk Speculation (00:41:32) Potential AI risks and regulatory impact.
AI Safety Movement Integrity (00:43:53) AI safety movement's motives challenged.
AI Alignment: Business or Moral? (00:47:27) AI alignment as business or moral issue.
AI Doomsday Believer Types (00:53:27) Four types of AI doomsday believers.
AI Doomsday Belief Authenticity (00:54:22) Are AI doomsday believers genuine?
Geoffrey Hinton's AI Regret (00:57:24) Hinton's regret over AI work.
AI's Self-Perception (00:58:57) Will AI see itself as part of humanity?
AGI's Conditioning Debate (01:00:22) Whether AGI begins from its training or from a human-like starting point.
AGI's Independent Decisions (01:11:33) Risks of AGI's autonomous actions.
AGI's View on Humans (01:15:47) AGI's potential post-singularity view of humans.
AI Safety Criticism (01:16:24) Critique of AI safety assumptions.
AI Engineers' Concerns (01:19:15) AI engineers' views on AI's dangers.
AGI's Training Impact (01:31:49) Effect of AGI's training data origin.
AI Development Cap (01:32:34) Theoretical limit of AI intelligence.
Intelligence Types (01:33:39) Intelligence beyond academics.
AGI's National Loyalty (01:40:16) AGI's allegiance to its creator nation.
Tech CEOs' Trustworthiness (01:44:13) Tech CEOs' trust in AI development.
Reflections on Discussion (01:47:12) Thoughts on the AI risk conversation.
Next Guest & Engagement (01:49:50) Introduction of next guest and call to action.
RESOURCES:
Paul’s Nutty YouTube Channel: Accel News Network
Best Account on Twitter: AI Notkilleveryoneism Memes
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 3pm EST
22 Word Statement from Center for AI Safety