For Humanity: An AI Safety Podcast
In Episode #34, host John Sherman talks with Charbel-Raphaël Segerie, Executive Director of the Centre pour la sécurité de l'IA. Among the very important topics covered: autonomous AI self-replication, the potential for warning shots to go unnoticed because the public and the journalist class are uneducated on AI risk, and the potential for a disastrous Yann LeCunnification of the upcoming February 2025 Paris AI Safety Summit.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable yet probable outcome: the end of all life on Earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
RESOURCES:
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
TIMESTAMPS:
The exponential growth of AI (00:00:00) Discussion on the potential exponential growth of AI and its implications for the future.
The mass of AI systems as an existential threat (00:01:05) Exploring the potential threat posed by the sheer mass of AI systems and its impact on existential risk.
The concept of warning shots (00:01:32) Elaboration on the concept of warning shots in the context of AI safety and the need for public understanding.
The importance of advocacy and public understanding (00:02:30) The significance of advocacy, public awareness, and the role of the safety community in creating and recognizing warning shots.
OpenAI's superalignment team resignation (00:04:00) Analysis of the resignation of OpenAI's superalignment team and its potential significance as a warning shot.