For Humanity, An AI Safety Podcast is the AI Safety Podcast for regular people. Peabody, duPont-Columbia and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2-10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
The podcast For Humanity: An AI Safety Podcast is created by John Sherman. The podcast and its artwork are embedded on this page using the public podcast feed (RSS).
In Episode #64, host John Sherman interviews seventh grader Dylan Pothier, his mom Bridget, and his teacher Renee DiPietro. Dylan is an award-winning student author who is concerned about AI risk.
(FULL INTERVIEW STARTS AT 00:33:34)
Sam Altman/Chris Anderson @ TED
https://www.youtube.com/watch?v=5MWT_doo68k
Check out our partner channel: Lethal Intelligence AI
Lethal Intelligence AI - Home
https://lethalintelligence.ai
FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km
BUY LOUIS BERMAN’S NEW BOOK ON AMAZON!!!
https://a.co/d/8WSNNuo
Get Involved!
EMAIL JOHN: [email protected]
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE! / @doomdebates
In an emotional interview, host John Sherman interviews Poornima Rao and Balaji Ramamurthy, the parents of Suchir Balaji.
(FULL INTERVIEW STARTS AT 00:18:38)
Suchir Balaji was a 26-year-old artificial intelligence researcher who worked at OpenAI. He was involved in developing models like GPT-4 and WebGPT. In October 2024, he publicly accused OpenAI of violating U.S. copyright laws by using proprietary data to train AI models, arguing that such practices harmed original content creators. His essay, "When does generative AI qualify for fair use?", gained attention and was cited in ongoing lawsuits against OpenAI. Suchir left OpenAI in August 2024, expressing concerns about the company's ethics and the potential harm of AI to humanity. He planned to start a nonprofit focused on machine learning and neuroscience. On October 23, 2024, he was featured in the New York Times speaking out against OpenAI.
On November 26, 2024, he was found dead in his San Francisco apartment from a gunshot wound. The initial autopsy ruled it a suicide, noting the presence of alcohol, amphetamines, and GHB in his system. However, his parents contested this finding, commissioning a second autopsy that suggested a second gunshot wound was missed in the initial examination. They also pointed to other injuries and questioned the presence of GHB, suggesting foul play. Despite these claims, authorities reaffirmed the suicide ruling. The case has attracted public attention, with figures like Elon Musk and Congressman Ro Khanna calling for further investigation. Suchir’s parents continue to push for justice and truth.
Suchir’s Website:
https://suchir.net/fair_use.html
FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km
Lethal Intelligence AI - Home
https://lethalintelligence.ai
BUY LOUIS BERMAN’S NEW BOOK ON AMAZON!!!
https://a.co/d/8WSNNuo
Get Involved!
EMAIL JOHN: [email protected]
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE! / @doomdebates
Host John Sherman conducts an important interview with Anthony Aguirre, Executive Director of the Future of Life Institute. The Future of Life Institute reached out to For Humanity to see if Anthony could come on to promote his very impressive new campaign, Keep The Future Human. The campaign includes a book, an essay, a website, and a video; it’s all incredible work. Please check it out:
https://keepthefuturehuman.ai/
John and Anthony have a broad-ranging AI risk conversation, covering in some detail Anthony’s four essential measures for a human future. They also discuss parenting into this unknown future.
In 2021, the Future of Life Institute received a cryptocurrency donation of more than $650 million from a single donor. With AGI doom bearing down on humanity, arriving any day now, AI risk communications floundering, the public still in the dark, and that massive war chest gathering dust in a bank, John asks Anthony the uncomfortable but necessary question: What is FLI waiting for to spend the money?
Then John asks Anthony for $10 million to fund creative media projects under John’s direction. John is convinced that with $10M and six months, he could make AI existential risk a dinner-table conversation on every street in America.
John has developed a detailed plan that would launch within 24 hours of the grant award. We don’t have a single day to lose.
https://futureoflife.org/
BUY LOUIS BERMAN’S NEW BOOK ON AMAZON!!!
https://a.co/d/8WSNNuo
FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km
Get Involved!
EMAIL JOHN: [email protected]
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/
Check out our partner channel: Lethal Intelligence AI
Lethal Intelligence AI - Home
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!
****************
****************
Explore our other video content here on YouTube, where you'll find more insights into AI risk along with relevant social media links.
YouTube: / @forhumanitypodcast
Host John Sherman interviews Esben Kran, CEO of Apart Research, about a broad range of AI risk topics. Most importantly, the discussion covers the growing for-profit AI risk business landscape and Apart’s recent report on dark patterns in LLMs. We hear about the benchmarking of new models all the time, but this project has successfully identified some key dark patterns in these models.
MORE FROM OUR SPONSOR:
BUY LOUIS BERMAN’S NEW BOOK ON AMAZON!!!
https://a.co/d/8WSNNuo
Apart Research Dark Bench Report
https://www.apartresearch.com/post/uncovering-model-manipulation-with-darkbench
(FULL INTERVIEW STARTS AT 00:09:30)
FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km
Get Involved!
EMAIL JOHN: [email protected]
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/
Check out our partner channel: Lethal Intelligence AI
Lethal Intelligence AI - Home
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!
****************
****************
Explore our other video content here on YouTube, where you'll find more insights into AI risk along with relevant social media links.
YouTube: / @forhumanitypodcast
Host John Sherman interviews Pause AI Global Founder Joep Meindertsma following the AI summits in Paris. The discussion begins with the dire moment we are in, the stakes, and the failure of our institutions to respond, before turning into a far-ranging discussion of AI risk reduction communications strategies.
(FULL INTERVIEW STARTS AT)
FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km
Get Involved!
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/about
EMAIL JOHN: [email protected]
RESOURCES:
BUY LOUIS BERMAN’S NEW BOOK ON AMAZON!!!
https://a.co/d/8WSNNuo
CHECK OUT MAX WINGA’S FULL PODCAST
Communicating AI Extinction Risk to the Public - w/ Prof. Will Fithian
Subscribe to our partner channel: Lethal Intelligence AI
Lethal Intelligence AI - Home
https://www.youtube.com/@lethal-intelligence
https://lethalintelligence.ai
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE! https://www.youtube.com/@DoomDebates
****************
To learn more about AI risk, please feel free to visit our YouTube channel.
In this video, we cover the following topics:
AI
AI risk
AI safety
Robots
Humanoid Robots
AGI
****************
Explore our other video content here on YouTube, where you'll find more insights into AI risk along with relevant social media links.
YouTube: / @forhumanitypodcast
Host John Sherman interviews Jad Tarifi, CEO of Integral AI, about Jad's company's work to try to create a world of trillions of AGI-enabled robots by 2035. Jad was a leader on Google's first generative AI team, and his take on his former colleague Geoffrey Hinton's views on existential risk from advanced AI comes up more than once.
FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km
Get Involved!
EMAIL JOHN: [email protected]
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/about
RESOURCES:
Integral AI: https://www.integral.ai/
John's Chat w Chat GPT
https://chatgpt.com/share/679ee549-2c38-8003-9c1e-260764da1a53
Check out our partner channel: Lethal Intelligence AI
Lethal Intelligence AI - Home
https://lethalintelligence.ai
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE! https://www.youtube.com/@DoomDebates
****************
To learn more about smarter-than-human robots, please feel free to visit our YouTube channel.
In this video, we cover the following topics:
AI
AI risk
AI safety
Robots
Humanoid Robots
AGI
****************
Explore our other video content here on YouTube, where you'll find more insights into AI risk along with relevant social media links.
YouTube: / @forhumanitypodcast
Host John Sherman interviews Tara Steele, Director of The Safe AI For Children Alliance, about her work to protect children from AI risks such as deepfakes, her concern about AI causing human extinction, and what we can do about all of it.
FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km
You can also donate any amount one time.
Get Involved!
EMAIL JOHN: [email protected]
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/about
RESOURCES:
BENGIO/NG DAVOS VIDEO
https://www.youtube.com/watch?v=w5iuHJh3_Gk&t=8s
STUART RUSSELL VIDEO
https://www.youtube.com/watch?v=KnDY7ABmsds&t=5s
AL GREEN VIDEO (WATCH ALL 39 MINUTES THEN REPLAY)
https://youtu.be/SOrHdFXfXds?si=s_nlDdDpYN0RR_Yc
Check out our partner channel: Lethal Intelligence AI
Lethal Intelligence AI - Home
https://lethalintelligence.ai
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
/ @doomdebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.co...
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on...
Best Account on Twitter: AI Notkilleveryoneism Memes
/ aisafetymemes
****************
To learn more about protecting our children from AI risks such as deepfakes, please feel free to visit our YouTube channel.
In this video, we cover the following topics:
AI
AI risk
AI safety
What will 2025 bring? Sam Altman says AGI is coming in 2025. Agents will arrive for sure. Military use will expand greatly. Will we get a warning shot? Will we survive the year? In Episode #57, host John Sherman interviews AI Safety Research Engineer Max Winga about the latest in AI advances and risks and the year to come.
FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km
Anthropic Alignment Faking Video: https://www.youtube.com/watch?v=9eXV64O2Xp8&t=1s
Neil DeGrasse Tyson Video: https://www.youtube.com/watch?v=JRQDc55Aido&t=579s
Max Winga's Amazing Speech: https://www.youtube.com/watch?v=kDcPW5WtD58
Get Involved!
EMAIL JOHN: [email protected]
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/about
Check out our partner channel: Lethal Intelligence AI
Lethal Intelligence AI - Home
https://lethalintelligence.ai
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9S...
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y...
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIgg...
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfc...
In Episode #56, host John Sherman travels to Washington DC to lobby House and Senate staffers for AI regulation along with Felix De Simone and Louis Berman of Pause AI. We unpack what we saw and heard as we presented AI risk to the people who have the power to make real change.
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/about
EMAIL JOHN: [email protected]
Check out our partner channel: Lethal Intelligence AI
Lethal Intelligence AI - Home
https://lethalintelligence.ai
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
/ @doomdebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.co...
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on...
Best Account on Twitter: AI Notkilleveryoneism Memes
/ aisafetymemes
In a special episode of For Humanity: An AI Risk Podcast, host John Sherman travels to San Francisco. Episode #55, "Near Midnight in Suicide City," is a set of short pieces from our trip out west, where we met with Pause AI, Stop AI, and Liron Shapira, and stopped by OpenAI, among other events. Big, huge, massive thanks to Beau Kershaw, Director of Photography, and my biz partner and best friend who made this journey with me through the work side and the emotional side of this. The work is beautiful and the days were wet and long and heavy. Thank you, Beau.
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/about
FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y...
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIgg...
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfc...
EMAIL JOHN: [email protected]
Check out our partner channel: Lethal Intelligence AI
Lethal Intelligence AI - Home
https://lethalintelligence.ai
@lethal-intelligence-clips
In Episode #54, John Sherman interviews Connor Leahy, CEO of Conjecture.
(FULL INTERVIEW STARTS AT 00:06:46)
DONATION SUBSCRIPTION LINKS:
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y...
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIgg...
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfc...
EMAIL JOHN: [email protected]
Check out Lethal Intelligence AI:
Lethal Intelligence AI - Home
https://lethalintelligence.ai
@lethal-intelligence-clips
In Episode #53, John Sherman interviews Michael DB Harvey, author of The Age of Humachines. The discussion covers the coming spectre of humans putting digital implants inside their own bodies to try to compete with AI.
DONATION SUBSCRIPTION LINKS:
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y...
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIgg...
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfc...
In Episode #52, host John Sherman looks back on the first year of For Humanity. Select shows are featured, as well as a very special celebration of life at the end.
In Episode #51, host John Sherman talks with Tom Barnes, an Applied Researcher with Founders Pledge, about the reality of AI risk funding, and about the need for emergency planning for AI to be much more robust and detailed than it is now. We are currently woefully underprepared.
Learn More About Founders Pledge: https://www.founderspledge.com/
No celebration of life this week!! Youtube finally got me with a copyright flag, had to edit the song out.
THURSDAY NIGHTS--LIVE FOR HUMANITY COMMUNITY MEETINGS--8:30PM EST
Join Zoom Meeting: https://storyfarm.zoom.us/j/816517210... Passcode: 829191
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhu...
EMAIL JOHN: [email protected]
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
****************
RESOURCES:
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!! / @doomdebates
Join the Pause AI Weekly Discord Thursdays at 2pm EST / discord
Max Winga’s “A Stark Warning About Extinction”
For Humanity Theme Music by Josef Ebner
Youtube: / @jpjosefpictures
Website: https://josef.pictures
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.co...
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on...
Best Account on Twitter: AI Notkilleveryoneism Memes
/ aisafetymemes
In Episode #51 Trailer, host John Sherman talks with Tom Barnes, an Applied Researcher with Founders Pledge, about the reality of AI risk funding, and about the need for emergency planning for AI to be much more robust and detailed than it is now. We are currently woefully underprepared.
Learn More About Founders Pledge: https://www.founderspledge.com/
THURSDAY NIGHTS--LIVE FOR HUMANITY COMMUNITY MEETINGS--8:30PM EST
Join Zoom Meeting: https://storyfarm.zoom.us/j/816517210... Passcode: 829191
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhu...
EMAIL JOHN: [email protected]
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
****************
RESOURCES:
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!! / @doomdebates
Join the Pause AI Weekly Discord Thursdays at 2pm EST / discord
Max Winga’s “A Stark Warning About Extinction”
For Humanity Theme Music by Josef Ebner
Youtube: / @jpjosefpictures
Website: https://josef.pictures
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.co...
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on...
Best Account on Twitter: AI Notkilleveryoneism Memes
/ aisafetymemes
***************************
If you want to learn more, follow us on our social media platforms, where we share additional tips, resources, and stories.
YouTube: / @forhumanitypodcast
Website: http://www.storyfarm.com/
In Episode #50, host John Sherman talks with Deger Turan, CEO of Metaculus, about what his prediction market reveals about the AI future we are all heading towards.
THURSDAY NIGHTS--LIVE FOR HUMANITY COMMUNITY MEETINGS--8:30PM EST
Join Zoom Meeting: https://storyfarm.zoom.us/j/816517210... Passcode: 829191
LEARN MORE– www.metaculus.com
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhu...
EMAIL JOHN: [email protected]
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
****************
RESOURCES:
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!! / @doomdebates
Join the Pause AI Weekly Discord Thursdays at 2pm EST / discord
Max Winga’s “A Stark Warning About Extinction”
For Humanity Theme Music by Josef Ebner
Youtube: / @jpjosefpictures
Website: https://josef.pictures
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.co...
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on...
Best Account on Twitter: AI Notkilleveryoneism Memes
/ aisafetymemes
**********************
Explore our other video content here on YouTube, where you'll find more insights into AI risk and forecasting, along with relevant social media links.
YouTube: / @forhumanitypodcast
Website: http://www.storyfarm.com/
In Episode #50 TRAILER, host John Sherman talks with Deger Turan, CEO of Metaculus, about what his prediction market reveals about the AI future we are all heading towards.
LEARN MORE–AND JOIN STOP AI
www.stopai.info
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhu...
EMAIL JOHN: [email protected]
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
In Episode #49, host John Sherman talks with Sam Kirchner and Remmelt Ellen, co-founders of Stop AI. Stop AI is a new AI risk protest organization, coming at it with different tactics and goals than Pause AI.
LEARN MORE–AND JOIN STOP AI
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: [email protected]
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
RESOURCES:
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
In Episode #49 TRAILER, host John Sherman talks with Sam Kirchner and Remmelt Ellen, co-founders of Stop AI. Stop AI is a new AI risk protest organization, coming at it with different tactics and goals than Pause AI.
LEARN MORE–AND JOIN STOP AI
www.stopai.info
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhu...
EMAIL JOHN: [email protected]
*********************
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
*********************
RESOURCES:
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!! / @doomdebates
Join the Pause AI Weekly Discord Thursdays at 2pm EST / discord
Max Winga’s “A Stark Warning About Extinction”
For Humanity Theme Music by Josef Ebner
Youtube: / @jpjosefpictures
Website: https://josef.pictures
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.co...
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on...
Best Account on Twitter: AI Notkilleveryoneism Memes
/ aisafetymemes
********************
Discover more of our video content on YouTube, where you'll find additional insights into AI risk along with relevant social media links.
YouTube: / @forhumanitypodcast
Website: http://www.storyfarm.com/
In Episode #48, host John Sherman talks with Pause AI US Founder Holly Elmore about the limiting origins of the AI safety movement. Polls show 60-80% of the public are opposed to building artificial superintelligence. So why is the movement to stop it still so small? The roots of the AI safety movement have a lot to do with it. Holly and John explore the present-day issues created by the movement’s origins.
Let's build community! Live For Humanity Zoom Community Meeting via Zoom Thursdays at 8:30pm EST...explanation during the full show!
USE THIS LINK: https://storyfarm.zoom.us/j/88987072403
PASSCODE: 789742
LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE
https://pauseai.info/local-organizing
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhu...
EMAIL JOHN: [email protected]
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!! / @doomdebates
Join the Pause AI Weekly Discord Thursdays at 2pm EST / discord
Max Winga’s “A Stark Warning About Extinction”
For Humanity Theme Music by Josef Ebner
Youtube: / @jpjosefpictures
Website: https://josef.pictures
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.co...
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on...
Best Account on Twitter: AI Notkilleveryoneism Memes
/ aisafetymemes
********************
Discover more of our video content on the origins of the AI safety movement, along with relevant social media links.
YouTube: / @forhumanitypodcast
In Episode #48 Trailer, host John Sherman talks with Pause AI US Founder Holly Elmore about the limiting origins of the AI safety movement. Polls show 60-80% of the public are opposed to building artificial superintelligence. So why is the movement to stop it still so small? The roots of the AI safety movement have a lot to do with it. Holly and John explore the present-day issues created by the movement’s origins.
Let's build community! Live For Humanity Zoom Community Meeting via Zoom Thursdays at 8:30pm EST...explanation during the full show!
USE THIS LINK: https://storyfarm.zoom.us/j/88987072403
LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE
https://pauseai.info/local-organizing
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: [email protected]
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
In Episode #47, host John Sherman talks with Buck Shlegeris, CEO of Redwood Research, a non-profit working on technical AI risk challenges. The discussion includes Buck’s thoughts on the new OpenAI o1-preview model, but centers on two questions: is there a way to control AI models before alignment is achieved, if it even can be, and how would the system that’s supposed to save the world actually work if an AI lab found a model scheming? Check out these links to Buck’s writing on these topics below:
https://redwoodresearch.substack.com/p/the-case-for-ensuring-that-powerful
https://redwoodresearch.substack.com/p/would-catching-your-ais-trying-to
Senate Hearing:
https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-insiders-perspectives
Harry Mack’s YouTube Channel
https://www.youtube.com/channel/UC59ZRYCHev_IqjUhremZ8Tg
LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE
https://pauseai.info/local-organizing
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: [email protected]
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
In Episode #47 Trailer, host John Sherman talks with Buck Shlegeris, CEO of Redwood Research, a non-profit working on technical AI risk challenges. The discussion includes Buck’s thoughts on the new OpenAI o1-preview model, but centers on two questions: is there a way to control AI models before alignment is achieved, if it even can be, and how would the system that’s supposed to save the world actually work if an AI lab found a model scheming? Check out these links to Buck’s writing on these topics below:
https://redwoodresearch.substack.com/p/the-case-for-ensuring-that-powerful
https://redwoodresearch.substack.com/p/would-catching-your-ais-trying-to
Senate Hearing:
https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-insiders-perspectives
Harry Mack’s YouTube Channel
https://www.youtube.com/channel/UC59ZRYCHev_IqjUhremZ8Tg
LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE
https://pauseai.info/local-organizing
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: [email protected]
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
In Episode #46, host John Sherman talks with Daniel Faggella, Founder and Head of Research at Emerj Artificial Intelligence Research. Dan has been speaking out about AI risk for a long time but comes at it from a different perspective than many. Dan thinks we need to talk about how we can make AGI and whatever comes after become humanity’s worthy successor.
More About Daniel Faggella
https://danfaggella.com/
LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE
https://pauseai.info/local-organizing
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: [email protected]
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
In Episode #46 Trailer, host John Sherman talks with Daniel Faggella, Founder and Head of Research at Emerj Artificial Intelligence Research. Dan has been speaking out about AI risk for a long time but comes at it from a different perspective than many. Dan thinks we need to talk about how we can make AGI and whatever comes after become humanity’s worthy successor.
More About Daniel Faggella
https://danfaggella.com/
LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE
https://pauseai.info/local-organizing
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: [email protected]
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
In Episode #45, host John Sherman talks with Dr. Mike Brooks, a psychologist focusing on kids and technology. The conversation is broad-ranging, touching on parenting, happiness and screens, the need for human unity, and the psychology of humans facing an ever more unknown future.
(FULL INTERVIEW STARTS AT 00:05:28)
Mike’s book: Tech Generation: Raising Balanced Kids in a Hyper-Connected World
An article from Mike in Psychology Today: The Happiness Illusion: Facing the Dark Side of Progress
Find Dr. Brooks on Social Media: LinkedIn | X/Twitter | YouTube | TikTok | Instagram | Facebook
https://www.linkedin.com/in/dr-mike-brooks-b1164120
https://x.com/drmikebrooks
https://www.youtube.com/@connectwithdrmikebrooks
https://www.tiktok.com/@connectwithdrmikebrooks?lang=en
https://www.instagram.com/drmikebrooks/?hl=en
Chris Gerrby’s Twitter: https://x.com/ChrisGerrby
LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE
https://pauseai.info/local-organizing
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: [email protected]
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
In Episode #45 TRAILER, host John Sherman talks with Dr. Mike Brooks, a Psychologist focusing on kids and technology. The conversation is broad-ranging, touching on parenting, happiness and screens, the need for human unity, and the psychology of humans facing an ever more unknown future.
Mike’s book: Tech Generation: Raising Balanced Kids in a Hyper-Connected World
An article from Mike in Psychology Today: The Happiness Illusion: Facing the Dark Side of Progress
LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE
https://pauseai.info/local-organizing
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: [email protected]
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
In Episode #44, host John Sherman brings back friends of For Humanity Dr. Roman Yampolskiy and Liron Shapira. Roman is an influential AI Safety researcher, thought leader, and Associate Professor at the University of Louisville. Liron is a tech CEO and host of the excellent Doom Debates podcast. Roman famously holds a 99.999% p-doom, Liron has a nuanced 50%. John starts out at 75%, unrelated to their numbers. Where are you? Did Roman or Liron move you in their direction at all? Let us know in the comments!
LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE
https://pauseai.info/local-organizing
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: [email protected]
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
RESOURCES:
BUY ROMAN’S NEW BOOK ON AMAZON
https://a.co/d/fPG6lOB
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
In Episode #43, host John Sherman talks with DevOps Engineer Aubrey Blackburn about the vague, elusive case the big AI companies and accelerationists make for a good AI future.
LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE
https://pauseai.info/local-organizing
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: [email protected]
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
In Episode #44 Trailer, host John Sherman brings back friends of For Humanity Dr. Roman Yampolskiy and Liron Shapira. Roman is an influential AI Safety researcher, thought leader, and Associate Professor at the University of Louisville. Liron is a tech CEO and host of the excellent Doom Debates podcast. Roman famously holds a 99.999% p-doom, Liron has a nuanced 50%. John starts out at 75%, unrelated to their numbers. Where are you? Did Roman or Liron move you in their direction at all? Watch the full episode and let us know in the comments.
LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE
https://pauseai.info/local-organizing
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: [email protected]
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
BUY ROMAN’S NEW BOOK ON AMAZON
https://a.co/d/fPG6lOB
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
In Episode #43 TRAILER, host John Sherman talks with DevOps Engineer Aubrey Blackburn about the vague, elusive case the big AI companies and accelerationists make for a positive AI future.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: [email protected]
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
In Episode #42, host John Sherman talks with actor Erik Passoja about AI’s impact on Hollywood, the fight to protect people’s digital identities, and the vibes in LA about existential risk.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: [email protected]
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
In Episode #42 Trailer, host John Sherman talks with actor Erik Passoja about AI’s impact on Hollywood, the fight to protect people’s digital identities, and the vibes in LA about existential risk.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: [email protected]
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
In Episode #41, host John Sherman begins with a personal message to David Brooks of the New York Times. Brooks wrote an article titled “Many People Fear AI: They Shouldn’t,” and in full candor it pissed John off quite a bit. During this episode, John and Doom Debates host Liron Shapira go line by line through David Brooks’s 7/31/24 piece in the New York Times.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: [email protected]
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
In Episode #41 TRAILER, host John Sherman previews the full show with a personal message to David Brooks of the New York Times. Brooks wrote something, and in full candor it pissed John off quite a bit. During the full episode, John and Doom Debates host Liron Shapira go line by line through David Brooks’s 7/31/24 piece in the New York Times.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: [email protected]
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
In Episode #40, host John Sherman talks with James Norris, CEO of Upgradable and longtime AI safety proponent. James has been concerned about AI x-risk for 26 years. He now lives in Bali and has become an expert in prepping for a very different world after a warning shot or other major AI-related disaster, and he’s helping others do the same. James shares his insight, long-time awareness, and expertise in helping others find a way to survive and rebuild after a post-AGI warning-shot disaster.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: [email protected]
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
In Episode #40 TRAILER, host John Sherman talks with James Norris, CEO of Upgradable and longtime AI safety proponent. James has been concerned about AI x-risk for 26 years. He now lives in Bali and has become an expert in prepping for a very different world after a warning shot or other major AI-related disaster, and he’s helping others do the same.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: [email protected]
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
Timestamps
Prepping Perspectives (00:00:00) Discussion on how to characterize preparedness efforts, ranging from common sense to doomsday prepping.
Personal Experience in Emergency Management (00:00:06) Speaker shares background in emergency management and Red Cross, reflecting on past preparation efforts.
Vision of AGI and Societal Collapse (00:00:58) Exploration of potential outcomes of AGI development and societal disruptions, including chaos and extinction.
Geopolitical Safety in the Philippines (00:02:14) Consideration of living in the Philippines as a safer option during global conflicts and crises.
Self-Reliance and Supply Chain Concerns (00:03:15) Importance of self-reliance and being off-grid to mitigate risks from supply chain breakdowns.
Escaping Potential Threats (00:04:11) Discussion on the plausibility of escaping threats posed by advanced AI and the implications of being tracked.
Nuclear Threats and Personal Safety (00:05:34) Speculation on the potential for nuclear conflict while maintaining a sense of safety in the Philippines.
In Episode #39, host John Sherman talks with Matthew Taber, Founder, advocate and expert in AI-risk legislation. The conversation starts out with the various state AI laws coming up and moves into the shifting political landscape around AI-risk legislation in America in July 2024.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: [email protected]
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
Timestamps
**GOP's AI Regulation Stance (00:00:41)**
**Welcome to Episode 39 (00:01:41)**
**Trump's Assassination Attempt (00:03:41)**
**Partisan Shift in AI Risk (00:04:09)**
**Matthew Taber's Background (00:06:32)**
**Tennessee's "ELVIS" Law (00:13:55)**
**Bipartisan Support for ELVIS (00:15:49)**
**California's Legislative Actions (00:18:58)**
**Overview of California Bills (00:20:50)**
**Lobbying Influence in California (00:23:15)**
**Challenges of AI Training Data (00:24:26)**
**The Original Sin of AI (00:25:19)**
**Congress and AI Regulation (00:27:29)**
**Investigations into AI Companies (00:28:48)**
**The New York Times Lawsuit (00:29:39)**
**Political Developments in AI Risk (00:30:24)**
**GOP Platform and AI Regulation (00:31:35)**
**Local vs. National AI Regulation (00:32:58)**
**Public Awareness of AI Regulation (00:33:38)**
**Engaging with Lawmakers (00:41:05)**
**Roleplay Demonstration (00:43:48)**
**Legislative Frameworks for AI (00:46:20)**
**Coalition Against AI Development (00:49:28)**
**Understanding AI Risks in Hollywood (00:51:00)**
**Generative AI in Film Production (00:53:32)**
**Impact of AI on Authenticity in Entertainment (00:56:14)**
**The Future of AI-Generated Content (00:57:31)**
**AI Legislation and Political Dynamics (01:00:43)**
**Partisan Issues in AI Regulation (01:02:22)**
**Influence of Celebrity Advocacy on AI Legislation (01:04:11)**
**Understanding Legislative Processes for AI Bills (01:09:23)**
**Presidential Approach to AI Regulation (01:11:47)**
**State-Level Initiatives for AI Legislation (01:14:09)**
**State vs. Congressional Regulation (01:15:05)**
**Engaging Lawmakers (01:15:29)**
**YouTube Video Views Explanation (01:15:37)**
**Algorithm Challenges (01:16:48)**
**Celebration of Life (01:18:08)**
**Final Thoughts and Call to Action (01:19:13)**
In Episode #39 Trailer, host John Sherman talks with Matthew Taber, Founder, advocate and expert in AI-risk legislation. The conversation addresses the shifting political landscape around AI-risk legislation in America in July 2024.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: [email protected]
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
Timestamps
Republican Party's AI Regulation Stance (00:00:41) The GOP platform aims to eliminate existing AI regulations, reflecting a shift in political dynamics.
Bipartisanship in AI Issues (00:01:21) AI is initially a bipartisan concern, but quickly becomes a partisan issue amidst political maneuvering.
Tech Companies' Frustration with Legislation (00:01:55) Major tech companies express dissatisfaction with California's AI bills, indicating a push for regulatory rollback.
Public Sentiment vs. Party Platform (00:02:42) Discrepancy between GOP platform on AI and average voter opinions, highlighting a disconnect in priorities.
Polling on AI Regulation (00:03:26) Polling shows strong public support for AI regulation, raising questions about political implications and citizen engagement.
In Episode #38, host John Sherman talks with Maxime Fournes, Founder, Pause AI France. With the third AI “Safety” Summit coming up in Paris in February 2025, we examine France’s role in AI safety, revealing France to be among the very worst when it comes to taking AI risk seriously. How deep is madman Yann LeCun’s influence in French society and government? And would France even join an international treaty? The conversation covers the potential for international treaties on AI safety, the psychological factors influencing public perception, and the power dynamics shaping AI’s future.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: [email protected]
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
RESOURCES:
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
TIMESTAMPS:
**Concerns about AI Risks in France (00:00:00)**
**Optimism in AI Solutions (00:01:15)**
**Introduction to the Episode (00:01:51)**
**Max Winga's Powerful Clip (00:02:29)**
**AI Safety Summit Context (00:04:20)**
**Personal Journey into AI Safety (00:07:02)**
**Commitment to AI Risk Work (00:21:33)**
**France's AI Sacrifice (00:21:49)**
**Impact of Efforts (00:21:54)**
**Existential Risks and Choices (00:22:12)**
**Underestimating Impact (00:22:25)**
**Researching AI Risks (00:22:34)**
**Weak Counterarguments (00:23:14)**
**Existential Dread Theory (00:23:56)**
**Global Awareness of AI Risks (00:24:16)**
**France's AI Leadership Role (00:25:09)**
**AI Policy in France (00:26:17)**
**Influential Figures in AI (00:27:16)**
**EU Regulation Sabotage (00:28:18)**
**Committee's Risk Perception (00:30:24)**
**Concerns about France's AI Development (00:32:03)**
**International AI Treaties (00:32:36)**
**Sabotaging AI Safety Summit (00:33:26)**
**Quality of France's AI Report (00:34:19)**
**Misleading Risk Analyses (00:36:06)**
**Comparison to Historical Innovations (00:39:33)**
**Rhetoric and Misinformation (00:40:06)**
**Existential Fear and Rationality (00:41:08)**
**Position of AI Leaders (00:42:38)**
**Challenges of Volunteer Management (00:46:54)**
In Episode #38 TRAILER, host John Sherman talks with Maxime Fournes, Founder, Pause AI France. With the third AI “Safety” Summit coming up in Paris in February 2025, we examine France’s role in AI safety, revealing France to be among the very worst when it comes to taking AI risk seriously. How deep is madman Yann LeCun’s influence in French society and government? And would France even join an international treaty?
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: [email protected]
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
RESOURCES:
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
TIMESTAMPS
Trust in AI Awareness in France (00:00:00) Discussion on France being uninformed about AI risks compared to other countries with AI labs.
International Treaty Concerns (00:00:46) Speculation on France's reluctance to sign an international AI safety treaty.
Personal Reflections on AI Risks (00:00:57) Speaker reflects on the dilemma of believing in AI risks and choosing between action or enjoyment.
Underestimating Impact (00:01:13) The tendency of people to underestimate their potential impact on global issues.
Researching AI Risks (00:01:50) Speaker shares their journey of researching AI risks and finding weak counterarguments.
Critique of Counterarguments (00:02:23) Discussion on the absurdity of opposing views on AI risks and societal implications.
Existential Dread and Rationality (00:02:42) Connection between existential fear and irrationality in discussions about AI safety.
Shift in AI Safety Focus (00:03:17) Concerns about the diminishing focus on AI safety in upcoming summits.
Quality of AI Strategy Report (00:04:11) Criticism of a recent French AI strategy report and plans to respond critically.
Optimism about AI Awareness (00:05:04) Belief that understanding among key individuals can resolve AI safety issues.
Power Dynamics in AI Decision-Making (00:05:38) Discussion on the disproportionate influence of a small group on global AI decisions.
Cultural Perception of Impact (00:06:01) Reflection on societal beliefs that inhibit individual agency in effecting change.
In Episode #37, host John Sherman talks with writer Peter Biles. Peter is a Christian who often writes from that perspective. He is a prolific fiction writer and has written stories and essays for a variety of publications. He was born and raised in Ada, Oklahoma, and is a contributing writer and editor for Mind Matters. The conversation centers on the intersection between Christianity and AGI, exploring questions like: What is the role of faith in a world where no one works? And could religions unite to oppose AGI?
Some of Peter Biles related writing:
https://mindmatters.ai/2024/07/ai-is-becoming-a-mass-tool-of-persuasion/
https://mindmatters.ai/2022/10/technology-as-the-new-god-before-whom-all-others-bow/
https://substack.com/@peterbiles
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: [email protected]
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
RESOURCES:
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
Matt Andersen - 'Magnolia' (JJ Cale Cover) LIVE at SiriusXM
JJ Cale Magnolia Flagstaff, AZ 2004
TIMESTAMPS:
**Christianity versus AGI (00:00:39)**
**Concerns about AI (00:02:45)**
**Christianity and Technology (00:05:30)**
**Interview with Peter Biles (00:11:09)**
**Effects of Social Media (00:18:03)**
**Religious Perspective on AI (00:23:57)**
**The implications of AI on Christian faith (00:24:05)**
**The Tower of Babel metaphor (00:25:09)**
**The role of humans as sub-creators (00:27:23)**
**The impact of AI on human culture and society (00:30:33)**
**The limitations of AI in storytelling and human connection (00:32:33)**
**The intersection of faith and AI in a future world (00:41:35)**
**Religious Leaders and AI (00:45:34)**
**Human Exceptionalism (00:46:51)**
**Interfaith Dialogue and AI (00:50:26)**
**Religion and Abundance (00:53:42)**
**Apocalyptic Language and AI (00:58:26)**
**Hope in Human-Oriented Culture (01:04:32)**
**Worshipping AI (01:07:55)**
**Religion and AI (01:08:17)**
**Celebration of Life (01:09:49)**
In Episode #37 Trailer, host John Sherman talks with writer Peter Biles. Peter is a Christian who often writes from that perspective. He is a prolific fiction writer and has written stories and essays for a variety of publications. He was born and raised in Ada, Oklahoma, and is a contributing writer and editor for Mind Matters. The conversation centers on the intersection between Christianity and AGI, exploring questions like: What is the role of faith in a world where no one works? And could religions unite to oppose AGI?
Some of Peter Biles related writing:
https://mindmatters.ai/2024/07/ai-is-becoming-a-mass-tool-of-persuasion/
https://mindmatters.ai/2022/10/technology-as-the-new-god-before-whom-all-others-bow/
https://substack.com/@peterbiles
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
RESOURCES:
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
TIMESTAMPS:
The impact of technology on human dignity (00:00:00) The speaker discusses the potential negative impact of technology on human dignity and the divine image.
The embodiment of souls and human dignity (00:01:00) The speaker emphasizes the spiritual nature of human beings and the importance of human dignity, regardless of religion or ethnicity.
The concept of a "sand god" and technological superiority (00:02:09) The conversation explores the cultural and religious implications of creating an intelligence superior to humans and the reference to a "sand god."
The Tower of Babel and technology (00:03:25) The speaker references the story of the Tower of Babel from the book of Genesis and its metaphorical implications for technological advancements and human hubris.
The impact of AI on communication and storytelling (00:05:26) The discussion delves into the impersonal nature of AI in communication and storytelling, highlighting the absence of human intention and soul.
Human nature, materialism, and work (00:07:38) The conversation explores the deeper understanding of human nature, the restlessness of humans, and the significance of work and creativity.
In Episode #36, host John Sherman talks with Jeremie and Eduard Harris, CEO and CTO of Gladstone AI. Gladstone AI is the private company working most closely with the US government on assessing AI risk. The Gladstone Report, published in February, was the first public acknowledgment of AI risk reality by the US government in any way. These are two very important people doing incredibly important work. The full interview lasts more than 2 hours and will be broken into two shows; this is the second of the two.
Gladstone AI Action Plan
https://www.gladstone.ai/action-plan
TIME MAGAZINE ON THE GLADSTONE REPORT
https://time.com/6898967/ai-extinction-national-security-risks-report/
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
RESOURCES:
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
TIMESTAMPS:
**The whistleblower's concerns (00:00:00)**
**Introduction to the podcast (00:01:09)**
**The urgency of addressing AI risk (00:02:18)**
**The potential consequences of falling behind in AI (00:04:36)**
**Transitioning to working on AI risk (00:06:33)**
**Engagement with the State Department (00:08:07)**
**Project assessment and public visibility (00:10:10)**
**Motivation for taking on the detective work (00:13:16)**
**Alignment with the government's safety culture (00:17:03)**
**Potential government oversight of AI labs (00:20:50)**
**The whistleblowers' concerns (00:21:52)**
**Shifting control to the government (00:22:47)**
**Elite group within the government (00:24:12)**
**Government competence and allocation of resources (00:25:34)**
**Political level and tech expertise (00:27:58)**
**Challenges in government engagement (00:29:41)**
**State department's engagement and assessment (00:31:33)**
**Recognition of government competence (00:34:36)**
**Engagement with frontier labs (00:35:04)**
**Whistleblower insights and concerns (00:37:33)**
**Whistleblower motivations (00:41:58)**
**Engagements with AI Labs (00:42:54)**
**Emotional Impact of the Work (00:43:49)**
**Workshop with Government Officials (00:44:46)**
**Challenges in Policy Implementation (00:45:46)**
**Expertise and Insights (00:49:11)**
**Future Engagement with US Government (00:50:51)**
**Flexibility of Private Sector Entity (00:52:57)**
**Impact on Whistleblowing Culture (00:55:23)**
**Key Recommendations (00:57:03)**
**Security and Governance of AI Technology (01:00:11)**
**Obstacles and Timing in Hardware Development (01:04:26)**
**The AI Lab Security Measures (01:04:50)**
**Nvidia's Stance on Regulations (01:05:44)**
**Export Controls and Governance Failures (01:07:26)**
**Concerns about AGI and Alignment (01:13:16)**
**Implications for Future Generations (01:16:33)**
**Personal Transformation and Mental Health (01:19:23)**
**Starting a Nonprofit for AI Risk Awareness (01:21:51)**
In Episode #36 Trailer, host John Sherman talks with Jeremie and Eduard Harris, CEO and CTO of Gladstone AI. Gladstone AI is the private company working most closely with the US government on assessing AI risk. The Gladstone Report, published in February, was the first public acknowledgment of AI risk reality by the US government in any way. These are two very important people doing incredibly important work. The full interview lasts more than 2 hours and will be broken into two shows; this is the second of the two.
Gladstone AI Action Plan
https://www.gladstone.ai/action-plan
TIME MAGAZINE ON THE GLADSTONE REPORT
https://time.com/6898967/ai-extinction-national-security-risks-report/
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
RESOURCES:
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
TIMESTAMPS:
The assignment from the State Department (00:00:00) Discussion about the task given by the State Department team regarding the assessment of safety and security in frontier AI and advanced AI systems.
Transition to detective work (00:00:30) The transition to a detective-like approach in gathering information and engaging with whistleblowers and clandestine meetings.
Assessment of the AI safety community (00:01:05) A critique of the lack of action orientation and proactive approach in the AI safety community.
Engagement with the Department of Defense (DoD) (00:02:57) Discussion about the engagement with the DoD, its existing safety culture, and the organizations involved in testing and evaluations.
Shifting control to the government (00:03:54) Exploration of the need to shift control to the government and regulatory level for effective steering of the development of AI technology.
Concerns about weaponization and loss of control (00:04:45) A discussion about concerns regarding weaponization and loss of control in AI labs and the need for more ambitious recommendations.
In Episode #35, host John Sherman talks with Jeremie and Eduard Harris, CEO and CTO of Gladstone AI. Gladstone AI is the private company working most closely with the US government on assessing AI risk. The Gladstone Report, published in February, was the first public acknowledgment of AI risk reality by the US government in any way. These are two very important people doing incredibly important work. The full interview lasts more than 2 hours and will be broken into two shows.
Gladstone AI Action Plan
https://www.gladstone.ai/action-plan
TIME MAGAZINE ON THE GLADSTONE REPORT
https://time.com/6898967/ai-extinction-national-security-risks-report/
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
RESOURCES:
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
TIMESTAMPS:
Sincerity and Sam Altman (00:00:00) Discussion on the perceived sincerity of Sam Altman and his actions, including insights into his character and motivations.
Introduction to Gladstone AI (00:01:14) Introduction to Gladstone AI, its involvement with the US government on AI risk, and the purpose of the podcast episode.
Doom Debates on YouTube (00:02:17) Promotion of the "Doom Debates" YouTube channel and its content, featuring discussions on AI doom and various perspectives on the topic.
YC Experience and Sincerity in Startups (00:08:13) Insight into the Y Combinator (YC) experience and the emphasis on sincerity in startups, with personal experiences and observations shared.
OpenAI and Sincerity (00:11:51) Exploration of sincerity in relation to OpenAI, including evaluations of the company's mission, actions, and the challenges it faces in the AI landscape.
The scaling story (00:21:33) Discussion of the scaling story related to AI capabilities and the impact of increasing data, processing power, and training models.
The call about GPT-3 (00:22:29) Eduard Harris receiving a call about the scaling story and the significance of GPT-3's capabilities, leading to a decision to focus on AI development.
Transition from Y Combinator (00:24:42) Jeremie and Eduard Harris leaving their previous company and transitioning from Y Combinator to focus on AI development.
Security concerns and exfiltration (00:31:35) Discussion about the security vulnerabilities and potential exfiltration of AI models from top labs, highlighting the inadequacy of security measures.
Government intervention and security (00:38:18) Exploration of the potential for government involvement in providing security assets to protect AI technology from exfiltration and the need for a pause in development until labs are secure.
Resource reallocation for safety and security (00:40:03) Discussion about the need to reallocate resources for safety, security, and alignment technology to ensure the responsible development of AI.
OpenAI's computational resource allocation (00:42:10) Concerns about OpenAI's failure to allocate computational resources for safety and alignment efforts, as well as the departure of a safety-minded board member.
China's Strategic Moves (00:43:07) Discussion on potential aggressive actions by China to prevent a permanent disadvantage in AI technology.
China's Sincerity in AI Safety (00:44:29) Debate on the sincerity of China's commitment to AI safety and the influence of the CCP.
Taiwan Semiconductor Manufacturing Company (TSMC) (00:47:47) Explanation of TSMC's role in fabricating advanced semiconductor chips and its impact on the AI race.
US and China's Power Constraints (00:51:30) Comparison of the constraints faced by the US and China in terms of advanced chips and grid power.
Nuclear Power and Renewable Energy (00:52:23) Discussion on the power sources being pursued by China and the US to address their respective constraints.
Future Scenarios (00:56:20) Exploration of potential outcomes if China overtakes the US in AI technology.
In Episode #35 TRAILER, host John Sherman talks with Jeremie and Eduard Harris, CEO and CTO of Gladstone AI. Gladstone AI is the private company working most closely with the US government on assessing AI risk. The Gladstone Report, published in February, was the first public acknowledgment of AI risk reality by the US government in any way. These are two very important people doing incredibly important work. The full interview lasts more than 2 hours and will be broken into two shows.
TIME MAGAZINE ON THE GLADSTONE REPORT
https://time.com/6898967/ai-extinction-national-security-risks-report/
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
RESOURCES:
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
TIMESTAMPS:
Sam Altman's intensity (00:00:10) Sam Altman's intense demeanor and competence, as observed by the speaker.
Security risks of superintelligent AI (00:01:02) Concerns about the potential loss of control over superintelligent systems and the security vulnerabilities in top AI labs.
Silicon Valley's security hubris (00:02:04) Critique of Silicon Valley's overconfidence in technology and lack of security measures, particularly in comparison to nation-state level cyber threats.
China's AI capabilities (00:02:36) Discussion about the security deficiency in the United States and the potential for China to have better AI capabilities due to security leaks.
Foreign actors' capacity for exfiltration (00:03:08) Foreign actors' incentives and capacity to exfiltrate frontier models, leading to the need to secure infrastructure before scaling and accelerating AI capabilities.
In Episode #34, host John Sherman talks with Charbel-Raphaël Segerie, Executive Director, Centre pour la sécurité de l'IA. Among the very important topics covered: autonomous AI self-replication, the potential for warning shots to go unnoticed due to a public and journalist class that is uneducated on AI risk, and the potential for a disastrous Yann LeCun-ification of the upcoming February 2025 Paris AI Safety Summit.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
RESOURCES:
Charbel-Raphaël Segerie’s Less Wrong Writing, much more on many topics we covered!
https://www.lesswrong.com/users/charbel-raphael
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
TIMESTAMPS:
**The threat of AI autonomous replication (00:00:43)**
**Introduction to France's Center for AI Security (00:01:23)**
**Challenges in AI risk awareness in France (00:09:36)**
**The influence of Yann LeCun on AI risk perception in France (00:12:53)**
**Autonomous replication and adaptation of AI (00:15:25)**
**The potential impact of autonomous replication (00:27:24)**
**The dead internet scenario (00:27:38)**
**The potential existential threat (00:29:02)**
**Fast takeoff scenario (00:30:54)**
**Dangers of autonomous replication and adaptation (00:34:39)**
**Difficulty in recognizing warning shots (00:40:00)**
**Defining red lines for AI development (00:42:44)**
**Effective education strategies (00:46:36)**
**Impact on computer science students (00:51:27)**
**AI safety summit in Paris (00:53:53)**
**The summit and AI safety report (00:55:02)**
**Potential impact of key figures (00:56:24)**
**Political influence on AI risk (00:57:32)**
**Accelerationism in political context (01:00:37)**
**Optimism and hope for the future (01:04:25)**
**Chances of a meaningful pause (01:08:43)**
In Episode #34, host John Sherman talks with Charbel-Raphaël Segerie, Executive Director, Centre pour la sécurité de l'IA. Among the very important topics covered: autonomous AI self-replication, the potential for warning shots to go unnoticed due to a public and journalist class that is uneducated on AI risk, and the potential for a disastrous Yann LeCun-ification of the upcoming February 2025 Paris AI Safety Summit.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
RESOURCES:
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
TIMESTAMPS:
The exponential growth of AI (00:00:00) Discussion on the potential exponential growth of AI and its implications for the future.
The mass of AI systems as an existential threat (00:01:05) Exploring the potential threat posed by the sheer mass of AI systems and its impact on existential risk.
The concept of warning shots (00:01:32) Elaboration on the concept of warning shots in the context of AI safety and the need for public understanding.
The importance of advocacy and public understanding (00:02:30) The significance of advocacy, public awareness, and the role of the safety community in creating and recognizing warning shots.
OpenAI's super alignment team resignation (00:04:00) Analysis of the resignation of OpenAI's super alignment team and its potential significance as a warning shot.
In episode 33, host John Sherman talks with Dustin Burham, who is a dad, an anesthetist, an AI risk realist, and a podcast host himself, about being a father while also understanding the realities of AI risk and the precarious moment we are living in.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
Check out Dustin Burham’s fatherhood podcast: https://www.youtube.com/@thepresentfathers
BUY STEPHEN HANSON’S BEAUTIFUL BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
TIMESTAMPS
**The threat of AI to humanity (00:00:22)**
**Pope Francis's address at the G7 summit on AI risk (00:02:31)**
**Starting a dialogue on tough subjects (00:05:44)**
**The challenges and joys of fatherhood (00:10:47)**
**Concerns and excitement about AI technology (00:15:09)**
**The Present Fathers Podcast (00:16:58)**
**Personal experiences of fatherhood (00:18:56)**
**The impact of AI risk on future generations (00:21:11)**
**Elon Musk's Concerns (00:21:57)**
**Impact of Denial (00:23:40)**
**Potential AI Risks (00:24:27)**
**Psychopathy and Decision-Making (00:26:28)**
**Personal and Societal Impact (00:28:46)**
**AI Risk Awareness (00:30:12)**
**Ethical Considerations (00:31:46)**
**AI Technology and Human Impact (00:34:28)**
**Exponential Growth and Risk (00:36:06)**
**Emotion and Empathy in AI (00:37:58)**
**Antinatalism and Ethical Debate (00:41:04)**
**The antinatalist ideas (00:42:20)**
**Psychopathic tendencies among CEOs and decision making (00:43:27)**
**The power of social media in influencing change (00:46:12)**
**The unprecedented threat of human extinction from AI (00:49:03)**
**Teaching large language models to love humanity (00:50:11)**
**Proposed measures for AI regulation (00:59:27)**
**China's approach to AI safety regulations (01:01:12)**
**The threat of open sourcing AI (01:02:50)**
**Protecting children from AI temptations (01:04:26)**
**Challenges of policing AI-generated content (01:07:06)**
**Hope for the future and engaging in AI safety (01:10:33)**
**Performance by YG Marley and Lauryn Hill (01:14:26)**
**Final thoughts and call to action (01:22:28)**
In episode 33 Trailer, host John Sherman talks with Dustin Burham, who is a dad, an anesthetist, an AI risk realist, and a podcast host himself, about being a father while also understanding the realities of AI risk and the precarious moment we are living in.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
TIMESTAMPS
Parental Concerns (00:00:00) A parent expresses worries about AI risks and emphasizes the need for cautious progress.
Risk Acceptance Threshold (00:00:50) The speaker discusses the acceptability of doom and risk in AI and robotics, drawing parallels with medical risk assessment.
Zero Risk Standard (00:01:34) The speaker emphasizes the medical industry's zero-risk approach and contrasts it with the AI industry's acceptance of potential doom.
Human Denial and Nuclear Brinksmanship (00:02:25) The power of denial and its impact on decision-making, including the tendency to ignore catastrophic possibilities.
Doom Prediction (00:03:17) The speakers express high levels of concern about potential doom in the future, with a 98% doom prediction for 50 years.
RESOURCES:
Check out Dustin Burham’s fatherhood podcast: https://www.youtube.com/@thepresentfathers
BUY STEPHEN HANSON’S BEAUTIFUL BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
Could humans and AGIs live in a state of mutual symbiosis, like the ecosystem of a coral reef?
(FULL INTERVIEW STARTS AT 00:23:21)
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
In episode 32, host John Sherman interviews BioComm AI CEO Peter Jensen. Peter is working on a number of AI-risk related projects. He believes it’s possible humans and AGIs can co-exist in mutual symbiosis.
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
BUY STEPHEN HANSON’S BEAUTIFUL BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
NYT: OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance
Dwarkesh Patel Interviews Another Whistleblower
Leopold Aschenbrenner - 2027 AGI, China/US Super-Intelligence Race, & The Return of History
Roman Yampolskiy on Lex Fridman
Roman Yampolskiy: Dangers of Superintelligent AI | Lex Fridman Podcast #431
Gladstone AI on Joe Rogan
Joe Rogan Experience #2156 - Jeremie & Edouard Harris
Peter Jensen's Videos:
HOW can AI Kill-us-All? So Simple, Even a Child can Understand (1:25)
WHY do we want AI? For our Humanity (1:00)
WHAT is the BIG Problem? Wanted: SafeAI Forever (3:00)
FIRST do no harm. (Safe AI Blog)
DECK. On For Humanity Podcast “Just the FACTS, please. WHY? WHAT? HOW?” (flip book)
https://discover.safeaiforever.com/
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
TIMESTAMPS:
**The release of products that are safe (00:00:00)**
**Breakthroughs in AI research (00:00:41)**
**OpenAI whistleblower concerns (00:01:17)**
**Roman Yampolskiy's appearance on Lex Fridman podcast (00:02:27)**
**The capabilities and risks of AI systems (00:03:35)**
**Interview with Gladstone AI founders on Joe Rogan podcast (00:08:29)**
**OpenAI whistleblower's interview on Hard Fork podcast (00:14:08)**
**Peter Jensen's work on AI risk and media communication (00:20:01)**
**The interview with Peter Jensen (00:22:49)**
**Mutualistic Symbiosis and AI Containment (00:31:30)**
**The Probability of Catastrophic Outcome from AI (00:33:48)**
**The AI Safety Institute and Regulatory Efforts (00:42:18)**
**Regulatory Compliance and the Need for Safety (00:47:12)**
**The hard compute cap and hardware adjustment (00:47:47)**
**Physical containment and regulatory oversight (00:48:29)**
**Viewing the issue as a big business regulatory issue vs. a national security issue (00:50:18)**
**Funding and science for AI safety (00:49:59)**
**OpenAI's power allocation and ethical concerns (00:51:44)**
**Concerns about AI's impact on employment and societal well-being (00:53:12)**
**Parental instinct and the urgency of AI safety (00:56:32)**
Could humans and AGIs live in a state of mutual symbiosis, like the ecosystem of a coral reef?
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
In episode 32, host John Sherman interviews BioComm AI CEO Peter Jensen. Peter is working on a number of AI-risk related projects. He believes it’s possible humans and AGIs can co-exist in mutual symbiosis.
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Peter Jensen’s Video: HOW can AI Kill-us-All? So Simple, Even a Child can Understand (1:25) https://www.youtube.com/watch?v=8yrIfCQBgdE
In Episode #31 John Sherman interviews a 29-year-old American truck driver about his concerns over human extinction and artificial intelligence. They discuss the urgency of raising awareness about AI risks, the potential job displacement in industries like trucking, and the geopolitical implications of AI advancements. Leighton shares his plans to start a podcast and possibly use filmmaking to engage the public in AI safety discussions. Despite skepticism from others, they stress the importance of community and dialogue in understanding and mitigating AI threats, with Leighton highlighting the risk of a "singleton event" and ethical concerns in AI development.
Full Interview Starts at (00:10:18)
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Timestamps
- Leighton's Introduction (00:00:00)
- Introduction to the Podcast (00:02:19)
- Power of the First Followers (00:03:24)
- Leighton's Concerns about AI (00:08:49)
- Leighton's Background and AI Awareness (00:11:11)
- Challenges in Spreading Awareness (00:14:18)
- Distrust of Government and Family Involvement (00:23:20)
- Government Imperfections (00:25:39)
- AI Impact on National Security (00:26:45)
- AGI Decision-Making (00:28:14)
- Government Oversight of AGI (00:29:32)
- Geopolitical Tension and AI (00:31:51)
- Job Loss and AGI (00:37:20)
- AI, Mining, and Space Race (00:38:02)
- Public Engagement and AI (00:44:34)
- Philosophical Perspective on AI (00:49:45)
- The existential threat of AI (00:51:05)
- Geopolitical tensions and AI risks (00:52:05)
- AI's potential for global dominance (00:53:48)
- Ethical concerns and AI welfare (01:01:21)
- Preparing for AI risks (01:03:02)
- The challenge of raising awareness (01:06:42)
- A hopeful outlook (01:08:28)
RESOURCES:
Leighton’s Podcast on YouTube:
https://www.youtube.com/@UrNotEvenBasedBro
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
Episode #31 TRAILER - “Trucker vs. AGI” For Humanity: An AI Risk Podcast
In Episode #31 TRAILER, John Sherman interviews a 29-year-old American truck driver about his concerns over human extinction and artificial intelligence.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Timestamps
The challenge of keeping up (00:00:00) Discussion about the difficulty of staying informed amidst busy lives and the benefit of using podcasts to keep up.
The impact of social media bubbles (00:01:22) Exploration of how social media algorithms create bubbles and the challenge of getting others to pay attention to important information.
Geopolitical implications of technological advancements (00:02:00) Discussion about the potential implications of technological advancements, particularly in relation to artificial intelligence and global competition.
Potential consequences of nationalizing AGI (00:04:21) Speculation on the potential consequences of nationalizing artificial general intelligence and the potential use of a pandemic to gain a competitive advantage.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
In episode 30, John Sherman interviews Professor Olle Häggström on a wide range of AI risk topics. At the top of the list is the super-instability and the super-exodus from OpenAI's super alignment team following the resignations of Jan Leike and Ilya Sutskever.
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
The world is waking up to the existential danger of unaligned AGI. But we are racing against time. Some heroes are stepping up, people like this week's guest, Chris Gerrby. Chris was successful in organizing people against AI in Sweden. In early May he left Sweden, moved to England, and is now spending 14 hours a day, 7 days a week, to stop AGI. Learn how he plans to grow Pause AI as its new Chief Growth Officer and his thoughts on how to make the case for pausing AI.
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Timestamps:
Dropping Everything to Stop AGI (00:00:00) Chris Gerrby's dedication to working 14 hours a day to pause AI and the challenges he faces.
OpenAI's Recent Events (00:01:11)
Pause AI and Chris Gerrby's Involvement (00:05:28)
Chris Gerrby's Journey and Involvement in AI Safety (00:06:44)
Coping with the Dark Outlook of AI Risk (00:19:02)
Beliefs About AGI Timeline (00:24:06)
The pandemic risk (00:25:30)
Losing control of AGI (00:26:40)
Stealth control and treacherous turn (00:28:38)
Relocation and intense work schedule (00:30:20)
Growth strategy for Pause AI (00:33:39)
Marketing and public relations (00:35:35)
Tailoring communications and gaining members (00:39:41)
Challenges in communicating urgency (00:44:36)
Path to growth for Pause AI (00:48:51)
Joining the Pause AI community (00:49:57)
Community involvement and support (00:50:33)
Pause AI's role in the AI landscape (00:51:22)
Maintaining work-life balance (00:53:47)
Adapting personal goals for the cause (00:55:50)
Probability of achieving a pause in AI development (00:57:50)
Finding hope in personal connections (01:00:24)
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
The world is waking up to the existential danger of unaligned AGI. But we are racing against time. Some heroes are stepping up, people like this week's guest, Chris Gerrby. Chris was successful in organizing people against AI in Sweden. In early May he left Sweden, moved to England, and is now spending 14 hours a day, 7 days a week, to stop AGI. Learn how he plans to grow Pause AI as its new Chief Growth Officer and his thoughts on how to make the case for pausing AI.
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Timestamps:
Tailoring Communication (00:00:51) The challenge of convincing others about the importance of a cause and the need to tailor communications to different audiences.
Audience Engagement (00:02:13) Discussion on tailoring communication strategies to different audiences, including religious people, taxi drivers, and artists.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
Episode #28 - “AI Safety Equals Emergency Preparedness” For Humanity: An AI Safety Podcast
Full Interview Starts At: (00:09:54)
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
BIG IDEA ALERT: This week's show has something really big and really new. What if AI Safety didn't have to carve out a new space in government–what if it could fit into already existing budgets? Emergency Preparedness–in the post-9/11 era–is a massively well-funded area of federal and state government here in the US. There are agencies and organizations and big budgets already created to fund the prevention of and recovery from disasters of all kinds: asteroids, pandemics, climate-related, terrorist-related, the list goes on and on.
This week’s guest, AI Policy Researcher Akash Wasil, has had more than 80 meetings with congressional staffers about AI existential risk. In Episode #28, he goes over his framing of AI Safety as Emergency Preparedness, the US vs. China race dynamic, and the vibes on Capitol Hill about AI risk. What does Congress think of AI risk?
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
JOIN THE PAUSE AI PROTEST MONDAY MAY 13TH
https://pauseai.info/2024-may
TIMESTAMPS:
**Emergency Preparedness in AI (00:00:00)**
**Introduction to the Podcast (00:02:49)**
**Discussion on AI Risk and Disinformation (00:06:27)**
**Engagement with Lawmakers and Policy Development (00:09:54)**
**Control AI's Role in AI Risk Awareness (00:19:00)**
**Engaging with congressional offices (00:25:00)**
**Establishing AI emergency preparedness office (00:32:35)**
**Congressional focus on AI competitiveness (00:37:55)**
**Expert opinions on AI risks (00:40:38)**
**Commerce vs. national security (00:42:41)**
**US AI Safety Institute's placement (00:46:33)**
**Expert concerns and raising awareness (00:50:34)**
**Influence of protests on policy (00:57:00)**
**Public opinion on AI regulation (01:02:00)**
**Silicon Valley Culture vs. DC Culture (01:05:44)**
**International Cooperation and Red Lines (01:12:34)**
**Eliminating Race Dynamics in AI Development (01:19:56)**
**Government Involvement for AI Development (01:22:16)**
**Compute-Based Licensing Proposal (01:24:18)**
**AI Safety as Emergency Preparedness (01:27:43)**
**Closing Remarks (01:29:09)**
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
BIG IDEA ALERT: This week's show has something really big and really new. What if AI Safety didn't have to carve out a new space in government–what if it could fit into already existing budgets? Emergency Preparedness–in the post-9/11 era–is a massively well-funded area of federal and state government here in the US. There are agencies and organizations and big budgets already created to fund the prevention of and recovery from disasters of all kinds: asteroids, pandemics, climate-related, terrorist-related, the list goes on and on.
This week’s guest, AI Policy Researcher Akash Wasil, has had more than 80 meetings with congressional staffers about AI existential risk. In the Episode #28 trailer, he goes over his framing of AI Safety as Emergency Preparedness, the US vs. China race dynamic, and the vibes on Capitol Hill about AI risk. What does Congress think of AI risk?
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
JOIN THE PAUSE AI PROTEST MONDAY MAY 13TH
https://pauseai.info/2024-may
TIMESTAMPS:
The meetings with congressional staffers (00:00:00) Akash discusses his experiences and strategies for engaging with congressional staffers and policymakers regarding AI risks and national security threats.
Understanding AI risks and national security (00:00:14) Akash highlights the interest and enthusiasm among policymakers to learn more about AI risks, particularly in the national security space.
Messaging and communication strategies (00:01:09) Akash emphasizes the importance of making less intuitive threat models understandable and getting the time of day from congressional offices.
Emergency preparedness in AI risk (00:02:45) Akash introduces the concept of emergency preparedness in the context of AI risk and its relevance to government priorities.
Preparedness approach to uncertain events (00:04:17) Akash discusses the preparedness approach to dealing with uncertain events and the significance of having a playbook in place.
Prioritizing AI in national security (00:06:08) Akash explains the strategic prioritization of engaging with key congressional offices focused on AI in the context of national security.
Policymaker concerns and China's competitiveness (00:07:03) Akash addresses the predominant concern among policymakers about China's competitiveness in AI and its impact on national security.
AI development and governance safeguards (00:08:15) Akash emphasizes the need to raise awareness about AI research and development misalignment and loss of control threats in the context of China's competitiveness.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
Episode #27 - “1800 Mile AGI Protest Road Trip” For Humanity: An AI Safety Podcast
Please Donate Here To Help Promote This Show
https://www.paypal.com/paypalme/forhumanitypodcast
In episode #27, host John Sherman interviews Jon Dodd and Rev. Trevor Bingham of the World Pause Coalition about their recent road trip to San Francisco to protest outside the gates of OpenAI headquarters. A group of six people drove 1800 miles to be there. We hear firsthand what happens when OpenAI employees meet AI risk realists.
This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
JOIN THE PAUSE AI PROTEST MONDAY MAY 13TH
https://pauseai.info/2024-may
TIMESTAMPS:
The protest at OpenAI (00:00:00) Discussion on the non-violent protest at the OpenAI headquarters and the response from the employees.
The Road Trip to Protest (00:09:31) Description of the road trip to San Francisco for a protest at OpenAI, including a video of the protest and interactions with employees.
Formation of the World Pause Coalition (00:15:07) Introduction to the World Pause Coalition and its mission to raise awareness about AI and superintelligence.
Challenges and Goals of Protesting (00:18:31) Exploration of the challenges and goals of protesting AI risks, including education, government pressure, and environmental impact.
The smaller countries' stakes (00:22:53) Highlighting the importance of smaller countries' involvement in AI safety negotiations and protests.
San Francisco protest (00:25:29) Discussion about the experience and impact of the protest at the OpenAI headquarters in San Francisco.
Interactions with OpenAI workers (00:26:56) Insights into the interactions with OpenAI employees during the protest, including their responses and concerns.
Different approaches to protesting (00:41:33) Exploration of peaceful protesting as the preferred approach, contrasting with more extreme methods used by other groups.
Embrace Safe AI (00:43:47) Discussion about finding a position for the company that aligns with concerns about AI and the need for safe AI.
Suffering Risk (00:48:24) Exploring the concept of suffering risk associated with superintelligence and the potential dangers of AGI.
Religious Leaders' Role (00:52:39) Discussion on the potential role of religious leaders in raising awareness and mobilizing support for AI safety.
Personal Impact of AI Concerns (01:03:52) Reflection on the personal weight of understanding AI risks and maintaining hope for a positive outcome.
Finding Catharsis in Taking Action (01:08:12) How taking action to help feels cathartic and alleviates the weight of the issue.
Weighing the Impact on Future Generations (01:09:18) The heavy burden of concern for future generations and the motivation to act for their benefit.
RESOURCES:
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Please Donate Here To Help Promote This Show
https://www.paypal.com/paypalme/forhumanitypodcast
In episode #27 Trailer, host John Sherman interviews Jon Dodd and Rev. Trevor Bingham of the World Pause Coalition about their recent road trip to San Francisco to protest outside the gates of OpenAI headquarters. A group of six people drove 1800 miles to be there. We hear firsthand what happens when OpenAI employees meet AI risk realists.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Please Donate Here To Help Promote This Show
https://www.paypal.com/paypalme/forhumanitypodcast
In episode #26, host John Sherman and Pause AI US Founder Holly Elmore talk about AI risk. They discuss how AI surprised everyone by advancing so fast, what it’s like for employees at OpenAI working on safety, and why it’s so hard for people to imagine what they can’t imagine.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
Azeem Azhar + Connor Leahy Podcast
Debating the existential risk of AI, with Connor Leahy
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 3pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Please Donate Here To Help Promote This Show
https://www.paypal.com/paypalme/forhumanitypodcast
In episode #26 TRAILER, host John Sherman and Pause AI US Founder Holly Elmore talk about AI risk. They discuss how AI surprised everyone by advancing so fast, what it’s like for employees at OpenAI working on safety, and why it’s so hard for people to imagine what they can’t imagine.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
TIMESTAMPS:
The surprise of rapid progress in AI (00:00:00) Former OpenAI employee's perspective on the unexpected speed of AI development and its impact on safety.
Concerns about OpenAI's focus on safety (00:01:00) The speaker's decision to start his own company due to the lack of sufficient safety focus within OpenAI and the belief in the inevitability of advancing AI technology.
Differing perspectives on AI risks (00:01:53) Discussion about the urgency and approach to AI development, including skepticism and the limitations of human imagination in understanding AI risks.
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 3pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Episode #25 - “Does The AI Safety Movement Have It All Wrong?” Dr. Émile Torres Interview, For Humanity: An AI Safety Podcast
FULL INTERVIEW STARTS AT (00:08:20)
DONATE HERE TO HELP PROMOTE THIS SHOW
https://www.paypal.com/paypalme/forhumanitypodcast
In episode #25, host John Sherman and Dr. Émile Torres explore the concept of humanity's future and the rise of artificial general intelligence (AGI) and machine superintelligence. Dr. Torres lays out his view that the AI safety movement has it all wrong on existential threat. Concerns are voiced about the potential risks of advanced AI, questioning the effectiveness of AI safety research and the true intentions of companies like OpenAI. Dr. Torres supports a full "stop AI" movement, doubting the benefits of pursuing such powerful AI technologies and highlighting the potential for catastrophic outcomes if AI systems become misaligned with human values. The discussion also touches on the urgency of solving AI control problems to avoid human extinction.
Émile P. Torres is a philosopher whose research focuses on existential threats to civilization and humanity. They have published widely in the popular press and scholarly journals, with articles appearing in the Washington Post, Aeon, Bulletin of the Atomic Scientists, Metaphilosophy, Inquiry, Erkenntnis, and Futures.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
TIMESTAMPS:
**The definition of human extinction and AI Safety Podcast Introduction (00:00:00)**
**Paul Christiano's perspective on AI risks and debate on AI safety (00:03:51)**
**Interview with Dr. Émile Torres on transhumanism, AI safety, and historical perspectives (00:08:17)**
**Challenges to AI safety concerns and the speculative nature of AI arguments (00:29:13)**
**AI's potential catastrophic risks and comparison with climate change (00:47:49)**
**Defining intelligence, AGI, and unintended consequences of AI (00:56:13)**
**Catastrophic Risks of Advanced AI and perspectives on AI Safety (01:06:34)**
**Inconsistencies in AI Predictions and the Threats of Advanced AI (01:15:19)**
**Curiosity in AGI and the ethical implications of building superintelligent systems (01:22:49)**
**Challenges of discussing AI safety and effective tools to convince the public (01:27:26)**
**Tangible harms of AI and hopeful perspectives on the future (01:37:00)**
**Parental instincts and the need for self-sacrifice in AI risk action (01:43:53)**
RESOURCES:
THE TWO MAIN PAPERS ÉMILE LOOKS TO IN MAKING THEIR CASE:
Against the singularity hypothesis by David Thorstad:
https://philpapers.org/archive/THOATS-5.pdf
Challenges to the Omohundro—Bostrom framework for AI motivations by Olle Häggström: https://www.math.chalmers.se/~olleh/ChallengesOBframeworkDeanonymized.pdf
Paul Christiano on Bankless
How We Prevent the AI’s from Killing us with Paul Christiano
Emile Torres TruthDig Articles:
https://www.truthdig.com/author/emile-p-torres/
https://www.amazon.com/Human-Extinction-Annihilation-Routledge-Technology/dp/1032159065
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
JOIN THE FIGHT, help Pause AI!!!!
DONATE HERE TO HELP PROMOTE THIS SHOW
Episode #25 TRAILER - “Does The AI Safety Movement Have It All Wrong?” Dr. Émile Torres Interview, For Humanity: An AI Safety Podcast
In episode #25 TRAILER, host John Sherman and Dr. Émile Torres explore the concept of humanity's future and the rise of artificial general intelligence (AGI) and machine superintelligence. Dr. Torres lays out his view that the AI safety movement has it all wrong on existential threat. Concerns are voiced about the potential risks of advanced AI, questioning the effectiveness of AI safety research and the true intentions of companies like OpenAI. Dr. Torres supports a full "stop AI" movement, doubting the benefits of pursuing such powerful AI technologies and highlighting the potential for catastrophic outcomes if AI systems become misaligned with human values. The discussion also touches on the urgency of solving AI control problems to avoid human extinction.
Émile P. Torres is a philosopher whose research focuses on existential threats to civilization and humanity. They have published widely in the popular press and scholarly journals, with articles appearing in the Washington Post, Aeon, Bulletin of the Atomic Scientists, Metaphilosophy, Inquiry, Erkenntnis, and Futures.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
TIMESTAMPS:
Defining Humanity and Future Descendants (00:00:00) Discussion on the concept of humanity, future descendants, and the implications of artificial general intelligence (AGI) and machine superintelligence.
Concerns about AI Safety Research (00:01:11) Expressing concerns about the approach of AI safety research and skepticism about the intentions of companies like OpenAI.
Questioning the Purpose of Building Advanced AI Systems (00:02:23) Expressing skepticism about the purpose and potential benefits of building advanced AI systems and being sympathetic to the "stop AI" movement.
RESOURCES:
Emile Torres TruthDig Articles:
https://www.truthdig.com/author/emile-p-torres/
Emile Torres Latest Book:
Human Extinction (Routledge Studies in the History of Science, Technology and Medicine) 1st Edition
https://www.amazon.com/Human-Extinction-Annihilation-Routledge-Technology/dp/1032159065
Best Account on Twitter: AI Notkilleveryoneism Memes
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 3pm EST
22 Word Statement from Center for AI Safety
In episode #24, host John Sherman and Nonlinear Co-founder Kat Woods discuss the critical need for prioritizing AI safety in the face of developing superintelligent AI. In this conversation, Kat and John discuss the topic of AI safety and the potential risks associated with artificial superintelligence. Kat shares her personal transformation from being a skeptic to becoming an advocate for AI safety. They explore the idea that AI could pose a near-term threat rather than just a long-term concern.
They also discuss the importance of prioritizing AI safety over other philanthropic endeavors and the need for talented individuals to work on this issue. Kat highlights potential ways in which AI could harm humanity, such as creating super viruses or starting a nuclear war. They address common misconceptions, including the belief that AI will need humans or that it will be human-like.
Overall, the conversation emphasizes the urgency of addressing AI safety and the need for greater awareness and action. The conversation delves into the dangers of AI and the need for AI safety. The speakers discuss the potential risks of creating superintelligent AI that could harm humanity. They highlight the ethical concerns of creating AI that could suffer and the moral responsibility we have towards these potential beings. They also discuss the importance of funding AI safety research and the need for better regulation. The conversation ends on a hopeful note, with the speakers expressing optimism about the growing awareness and concern regarding AI safety.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
TIMESTAMPS:
AI Safety Urgency (00:00:00) Emphasizing the immediate need to focus on AI safety.
Superintelligent AI World (00:00:50) Considering the impact of AI smarter than humans.
AI Safety Charities (00:02:37) The necessity for more AI safety-focused charities.
Personal AI Safety Advocacy Journey (00:10:10) Kat Woods' transformation into an AI safety advocate.
AI Risk Work Encouragement (00:16:03) Urging skilled individuals to tackle AI risks.
AI Safety's Global Impact (00:17:06) AI safety's pivotal role in global challenges.
AI Safety Prioritization Struggles (00:18:02) The difficulty of making AI safety a priority.
Wealthy Individuals and AI Safety (00:19:55) Challenges for the wealthy in focusing on AI safety.
Superintelligent AI Threats (00:23:12) Potential global dangers posed by superintelligent AI.
Limits of Imagining Superintelligent AI (00:28:02) The struggle to fully grasp superintelligent AI's capabilities.
AI Containment Risks (00:32:19) The problem of effectively containing AI.
AI's Human-Like Risks (00:33:53) Risks of AI with human-like qualities.
AI Dangers (00:34:20) Potential ethical and safety risks of AI.
AI Ethical Concerns (00:37:03) Ethical considerations in AI development.
Nonlinear's Role in AI Safety (00:39:41) Nonlinear's contributions to AI safety work.
AI Safety Donations (00:41:53) Guidance on supporting AI safety financially.
Effective Altruism and AI Safety (00:49:43) The relationship between effective altruism and AI safety.
AI Safety Complexity (00:52:12) The intricate nature of AI safety issues.
AI Superintelligence Urgency (00:53:52) The critical timing and power of AI superintelligence.
AI Safety Work Perception (00:56:06) Changing the image of AI safety efforts.
AI Safety and Government Regulation (00:59:23) The potential for regulatory influence on AI safety.
Entertainment's AI Safety Role (01:04:24) How entertainment can promote AI safety awareness.
AI Safety Awareness Progress (01:05:37) Growing recognition and response to AI safety.
AI Safety Advocacy Funding (01:08:06) The importance of financial support for AI safety advocacy.
Effective Altruists and Rationalists Views (01:10:22) The stance of effective altruists and rationalists on AI safety.
AI Risk Marketing (01:11:46) The case for using marketing to highlight AI risks.
RESOURCES:
Nonlinear: https://www.nonlinear.org/
Best Account on Twitter: AI Notkilleveryoneism Memes
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 3pm EST
22 Word Statement from Center for AI Safety
In episode #24, host John Sherman and Nonlinear Co-founder Kat Woods discuss the critical need for prioritizing AI safety in the face of developing superintelligent AI. She compares the challenge to the Titanic's course towards an iceberg, stressing the difficulty in convincing people of the urgency. Woods argues that AI safety is a matter of both altruism and self-preservation. She uses human-animal relations to illustrate the potential consequences of a disparity in intelligence between humans and AI. She notes a positive shift in the perception of AI risks, from fringe to mainstream concern, and shares a personal anecdote from her time in Africa, which informed her views on the universal aversion to death and the importance of preventing harm. Woods's realization of the increasing probability of near-term AI risks further emphasizes the immediate need for action in AI safety.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
Nonlinear: https://www.nonlinear.org/
Best Account on Twitter: AI Notkilleveryoneism Memes
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 3pm EST
22 Word Statement from Center for AI Safety
FULL INTERVIEW STARTS AT (00:22:26)
Episode #23 - “AI Acceleration Debate” For Humanity: An AI Safety Podcast
e/acc: Suicide or Salvation? In episode #23, AI Risk-Realist John Sherman and Accelerationist Paul Leszczynski debate AI accelerationism, the existential risks and benefits of AI, questioning the AI safety movement and discussing the concept of AI as humanity's child. They talk about whether AI should align with human values and the potential consequences of alignment. Paul has some wild views, including that AI safety efforts could inadvertently lead to the very dangers they aim to prevent. The conversation touches on the philosophy of accelerationism and the influence of human conditioning on our understanding of AI.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
TIMESTAMPS:
TRAILER (00:00:00)
INTRO: (00:05:40)
INTERVIEW:
Paul Leszczynski Interview (00:22:36) John Sherman interviews AI advocate Leszczynski.
YouTube Channel Motivation (00:24:14) Leszczynski's pro-acceleration channel reasons.
AI Threat Viewpoint (00:28:24) Leszczynski on AI as existential threat.
AI Impact Minority Opinion (00:32:23) Leszczynski's take on AI's minority view impact.
Tech Regulation Need (00:33:03) Regulatory oversight on tech startups debated.
Post-2008 Financial Regulation (00:34:16) Financial regulation effects and big company influence discussed.
Tech CEOs' Misleading Claims (00:36:31) Tech CEOs' public statement intentions.
Social Media Influence (00:38:09) Social media's advertising effectiveness.
AI Risk Speculation (00:41:32) Potential AI risks and regulatory impact.
AI Safety Movement Integrity (00:43:53) AI safety movement's motives challenged.
AI Alignment: Business or Moral? (00:47:27) AI alignment as business or moral issue.
AI Doomsday Believer Types (00:53:27) Four types of AI doomsday believers.
AI Doomsday Belief Authenticity (00:54:22) Are AI doomsday believers genuine?
Geoffrey Hinton's AI Regret (00:57:24) Hinton's regret over AI work.
AI's Self-Perception (00:58:57) Will AI see itself as part of humanity?
AGI's Conditioning Debate (01:00:22) AGI's training vs. human-like start.
AGI's Independent Decisions (01:11:33) Risks of AGI's autonomous actions.
AGI's View on Humans (01:15:47) AGI's potential post-singularity view of humans.
AI Safety Criticism (01:16:24) Critique of AI safety assumptions.
AI Engineers' Concerns (01:19:15) AI engineers' views on AI's dangers.
AGI's Training Impact (01:31:49) Effect of AGI's training data origin.
AI Development Cap (01:32:34) Theoretical limit of AI intelligence.
Intelligence Types (01:33:39) Intelligence beyond academics.
AGI's National Loyalty (01:40:16) AGI's allegiance to its creator nation.
Tech CEOs' Trustworthiness (01:44:13) Tech CEOs' trust in AI development.
Reflections on Discussion (01:47:12) Thoughts on the AI risk conversation.
Next Guest & Engagement (01:49:50) Introduction of next guest and call to action.
RESOURCES:
Paul’s Nutty Youtube Channel: Accel News Network
Best Account on Twitter: AI Notkilleveryoneism Memes
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 3pm EST
22 Word Statement from Center for AI Safety
Suicide or Salvation? In episode #23 TRAILER, AI Risk-Realist John Sherman and Accelerationist Paul Leszczynski debate AI accelerationism, the existential risks and benefits of AI, questioning the AI safety movement and discussing the concept of AI as humanity's child. They ponder whether AI should align with human values and the potential consequences of such alignment. Paul suggests that AI safety efforts could inadvertently lead to the very dangers they aim to prevent. The conversation touches on the philosophy of accelerationism and the influence of human conditioning on our understanding of AI.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
TIMESTAMPS:
Is AI an existential threat to humanity? (00:00:00) Debate on the potential risks of AI and its impact on humanity.
The AI safety movement (00:00:42) Discussion on the perception of AI safety as a religion and the philosophy of accelerationism.
Human conditioning and perspectives on AI (00:02:01) Exploration of how human conditioning shapes perspectives on AI and the concept of AGI as a human creation.
Aligning AI and human values (00:04:24) Debate on the dangers of aligning AI with human ideologies and the potential implications for humanity.
RESOURCES:
Paul’s Youtube Channel: Accel News Network
Best Account on Twitter: AI Notkilleveryoneism Memes
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 3pm EST
22 Word Statement from Center for AI Safety
In Episode #22, host John Sherman critically examines Sam Altman's role as CEO of OpenAI, focusing on the ethical and safety challenges of AI development. The discussion critiques Altman's lack of public accountability and the risks his decisions pose to humanity. Concerns are raised about the governance of AI, the potential for AI to cause harm, and the need for safety measures and regulations. The episode also explores the societal impact of AI, the possibility of AI affecting the physical world, and the importance of public awareness and engagement in AI risk discussions. Overall, the episode emphasizes the urgency of responsible AI development and the crucial role of oversight.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
Business Insider: Sam Altman’s Act May Be Wearing Thin
Best Account on Twitter: AI Notkilleveryoneism Memes
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 3pm EST
22 Word Statement from Center for AI Safety
Timestamps:
The man who holds the power (00:00:00) Discussion about Sam Altman's power and its implications for humanity.
The safety crisis (00:01:11) Concerns about safety in AI technology and the need for protection against potential risks.
Sam Altman's decisions and vision (00:02:24) Examining Sam Altman's role, decisions, and vision for AI technology and its impact on society.
Sam Altman's actions and accountability (00:04:14) Critique of Sam Altman's actions and accountability regarding the release of AI technology.
Reflections on getting fired (00:11:01) Sam Altman's reflections and emotions after getting fired from OpenAI's board.
Silencing of concerns (00:19:25) Discussion about the silencing of individuals concerned about AI safety, particularly Ilya Sutskever.
Relationship with Elon Musk (00:20:08) Sam Altman's sentiments and hopes regarding his relationship with Elon Musk amidst tension and legal matters.
Legal implications of AI technology (00:22:23) Debate on the fairness of training AI under copyright law and its legal implications.
The value of data (00:22:32) Sam Altman discusses the compensation for valuable data and its use.
Safety concerns (00:23:41) Discussion on the process for ensuring safety in AI technology.
Broad definition of safety (00:24:24) Exploring the various potential harms and impacts of AI, including technical, societal, and economic aspects.
Lack of trust and control (00:27:09) Sam Altman's admission about the power and control over AGI and the need for governance.
Public apathy towards AI risk (00:31:49) Addressing the common reasons for public inaction regarding AI risk awareness.
Celebration of life (00:34:20) A personal reflection on the beauty of music and family, with a message about the celebration of life.
Conclusion (00:38:25) Closing remarks and a preview of the next episode.
In episode #22, host John Sherman critically examines Sam Altman's role as CEO of OpenAI, focusing on the ethical and safety challenges of AI development. The discussion critiques Altman's lack of public accountability and the risks his decisions pose to humanity. Concerns are raised about the governance of AI, the potential for AI to cause harm, and the need for safety measures and regulations. The episode also explores the societal impact of AI, the possibility of AI affecting the physical world, and the importance of public awareness and engagement in AI risk discussions. Overall, the episode emphasizes the urgency of responsible AI development and the crucial role of oversight.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 3pm EST
22 Word Statement from Center for AI Safety
In this AI Safety Podcast episode, host John Sherman critically examines Sam Altman's role as CEO of OpenAI, focusing on the ethical and safety challenges of AI development. The discussion critiques Altman's lack of public accountability and the risks his decisions pose to humanity. Concerns are raised about the governance of AI, the potential for AI to cause harm, and the need for safety measures and regulations. The episode also explores the societal impact of AI, the possibility of AI affecting the physical world, and the importance of public awareness and engagement in AI risk discussions. Overall, the episode emphasizes the urgency of responsible AI development and the crucial role of oversight.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 3pm EST
22 Word Statement from Center for AI Safety
“Why AI Killing You Isn’t On The News” For Humanity: An AI Safety Podcast Episode #21
Interview starts at 20:10
Some highlights of John’s news career start at 9:14
In Episode #21, “Why AI Killing You Isn’t On The News” Casey Clark Interview, host John Sherman and WJZY-TV News Director Casey Clark explore the significant underreporting of AI's existential risks in the media. They recount a disturbing incident where AI bots infiltrated a city council meeting, spewing hateful messages. The conversation delves into the challenges of conveying the complexities of artificial general intelligence to the public and the media's struggle to present such abstract concepts compellingly. They predict job losses will be the first major AI-related news story to break through, and speculate on the future of AI-generated news anchors, emphasizing the continued need for human reporters in the field.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 3pm EST
See more of John’s Talk in Philly:
https://x.com/ForHumanityPod/status/1772449876388765831?s=20
FOLLOW DAVID SHAPIRO ON YOUTUBE!
22 Word Statement from Center for AI Safety
In Episode #21 TRAILER “Why AI Killing You Isn’t On The News” Casey Clark Interview, John Sherman interviews WJZY-TV News Director Casey Clark about TV news coverage of AI existential risk.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 3pm EST
In Episode #20, “AI Safety Debate: Risk Realist vs Coding Cowboy,” John Sherman debates AI risk with lifelong coder and current Chief AI Officer Mark Tellez. The full conversation covers questions like: can AI systems be contained to the digital world, should we build data centers with explosives lining the walls just in case, and are the AI CEOs just big liars? Mark believes we are on a safe course, and that when that changes we will have time to react. John disagrees. What follows is a candid and respectful exchange of ideas.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Community Note: So, after much commentary, I have done away with the Doom Rumble during the trailers. I like(d) it, I think it adds some drama, but the people have spoken and it is dead. RIP Doom Rumble, 2023--2024. Also I had a bit of a head cold at the time of some of the recording and sound a little nasal in the open and close, my apologies lol, but a few sniffles can’t stop this thing!!
RESOURCES:
Time Article on the New Report:
AI Poses Extinction-Level Risk, State-Funded Report Says | TIME
John's Upcoming Talk in Philadelphia!
It is open to the public; you will need to make a free account at meetup.com.
https://www.meetup.com/philly-net/eve...
FOLLOW DAVID SHAPIRO ON YOUTUBE!
Dave Shapiro’s New Video where he talks about For Humanity
AGI: What will the first 90 days be like? And more VEXING questions from the audience!
22 Word Statement from Center for AI Safety
Pause AI
Join the Pause AI Weekly Discord Thursdays at 3pm EST
In Episode #20 TRAILER, “AI Safety Debate: Risk Realist vs Coding Cowboy,” John Sherman debates AI risk with a lifelong coder and current Chief AI Officer. The full conversation covers questions like: can AI systems be contained to the digital world, should we build data centers with explosives lining the walls just in case, and are the AI CEOs just big liars? Mark believes we are on a safe course, and that when that changes we will have time to react. John disagrees. What follows is a candid and respectful exchange of ideas.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Community Note: So after much commentary I have done away with the Doom Rumble during the trailers. I like(d) it, I think it adds some drama, but the people have spoken and it is dead. RIP Doom Rumble, 2023--2024. Also I had a bit of a head cold at the time of some of the recording and sound a little nasal in the open and close, my apologies lol, but a few sniffles can’t stop this thing!!
RESOURCES:
Time Article on the New Report:
AI Poses Extinction-Level Risk, State-Funded Report Says | TIME
FOLLOW DAVID SHAPIRO ON YOUTUBE!
Dave Shapiro’s New Video where he talks about For Humanity
AGI: What will the first 90 days be like? And more VEXING questions from the audience!
22 Word Statement from Center for AI Safety
Pause AI
In Episode #19, “David Shapiro Interview,” John talks with AI/Tech YouTube star David Shapiro. David has several successful YouTube channels. His main channel (link below: go follow him!), with more than 140k subscribers, is a constant source of new video content on AI, AGI, and the post-labor economy. Dave does a great job breaking things down.
But a lot of Dave’s content is about a post-AGI future, and this podcast’s main concern is that we won’t get there, because AGI will kill us all first. So this show is a two-part conversation: first, about whether we can live past AGI, and second, about the issues we’d face in a world where humans and AGIs are co-existing.
John and David discuss how humans can stay in control of a superintelligence, what their p(doom) estimates are, and what happens to the energy companies if fusion is achieved.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years.
This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
FOLLOW DAVID SHAPIRO ON YOUTUBE!
In Episode #19 TRAILER, “David Shapiro Interview,” John talks with AI/Tech YouTube star David Shapiro. David has several successful YouTube channels; his main channel (link below: go follow him!), with more than 140k subscribers, is a constant source of new video content on AI, AGI, and the post-labor economy. Dave does a great job breaking things down. But a lot of Dave’s content is about a post-AGI future, and this podcast’s main concern is that we won’t get there, because AGI will kill us all first. So this show is a two-part conversation: first, about whether we can live past AGI, and second, about the issues we’d face in a world where humans and AGIs are co-existing. In this trailer, Dave gets to the edge of giving his p(doom).
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
FOLLOW DAVID SHAPIRO ON YOUTUBE!
https://youtube.com/@DaveShap?si=o_USH-v0fDyo23fm
In Episode #18 TRAILER, “Worse Than Extinction, CTO vs. S-Risk” Louis Berman Interview, John talks with tech CTO Louis Berman about a broad range of AI risk topics centered around existential risk. The conversation goes to the darkest corner of the AI risk debate, S-risk, or suffering risk.
This episode has a lot in it that is very hard to hear, and to say. The tech CEOs are spinning visions of abundance and utopia for the public. Someone needs to fill in the full picture of the realm of possibilities, no matter how hard it is to hear.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
John's Upcoming Talk in Philadelphia!
It is open to the public; you will need to make a free account at meetup.com. https://www.meetup.com/philly-net/events/298710679/
Excellent background on S-risk, with supporting links: https://80000hours.org/problem-profiles/s-risks/
Join the Pause AI Weekly Discord Thursdays at 2pm EST
In Episode #18 TRAILER, “Worse Than Extinction, CTO vs. S-Risk” Louis Berman Interview, John talks with tech CTO Louis Berman about a broad range of AI risk topics centered around existential risk. The conversation goes to the darkest corner of the AI risk debate: S-risk, or suffering risk.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
In Episode #17, “AI Risk + Jenga,” Liron Shapira Interview, John talks with tech CEO and AI risk activist Liron Shapira about a broad range of AI risk topics centered around existential risk. Liron likens AI risk to a game of Jenga: there are a finite number of pieces, and each one you pull out leaves you one step closer to collapse. He says something like Sora, seemingly just a video innovation, could actually end all life on earth.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Resources:
Liron’s YouTube Channel: https://youtube.com/@liron00?si=cqIo5...
More on rationalism: https://www.lesswrong.com/
More on California State Senate Bill SB-1047: https://leginfo.legislature.ca.gov/fa...
https://thezvi.substack.com/p/on-the-...
Warren Wolf: https://youtu.be/OZDwzBnn6uc?si=o5BjlRwfy7yuIRCL
In Episode #17 TRAILER, "AI Risk=Jenga," Liron Shapira Interview, John talks with tech CEO and AI risk activist Liron Shapira about a broad range of AI risk topics centered around existential risk. Liron likens AI risk to a game of Jenga: there are a finite number of pieces, and each one you pull out leaves you one step closer to collapse. He explains how something like Sora, seemingly just a video tool, is actually a significant, real Jenga piece, and could actually end all life on earth.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
In Episode #16, AI Risk Denier Down, things get weird. This show did not have to be like this. Our guest in Episode #16 is Timothy Lee, a computer scientist and journalist who founded and runs understandingai.org. Tim has written about AI risk many times, including these two recent essays:
https://www.understandingai.org/p/why...
https://www.understandingai.org/p/why...
Tim was not prepared to discuss this work, which is when things started to get off the rails.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
MY QUESTIONS FOR TIM (we didn’t even get halfway through lol; YouTube won’t let me put all of them, so I'm just putting the second-essay questions)
OK, let’s get into your second essay, "Why I'm not afraid of superintelligent AI taking over the world," from 11/15/23.
-You point to chess as a striking example of how AI will not take over the world. But I’d like to talk about AI safety researcher Steve Omohundro’s take on chess. He says if you had an unaligned AGI you asked to get better at chess, it would first break into other servers to steal computing power so it would be better at chess. Then, when you discover this and try to stop it by turning it off, it sees your turning it off as a threat to its improving at chess, so it murders you. Where is he wrong?
-You wrote: “Think about a hypothetical graduate student. Let’s say that she was able to reach the frontiers of physics knowledge after reading 20 textbooks. Could she have achieved a superhuman understanding of physics by reading 200 textbooks? Obviously not. Those extra 180 textbooks contain a lot of words, they don’t contain very much knowledge she doesn’t already have. So too with AI systems. I suspect that on many tasks, their performance will start to plateau around human-level performance. Not because they “run out of data,” but because they reached the frontiers of human knowledge.”
-In this you seem to assume that any one human is capable of mastering all the knowledge in a subject area better than any AI, because you seem to believe that one human is capable of holding ALL of the knowledge available on a given subject. This is ludicrous to me. You think humans are far too special. AN AGI WILL HAVE READ EVERY BOOK EVER WRITTEN. MILLIONS OF BOOKS. ACTIVELY CROSS-REFERENCING ACROSS EVERY DISCIPLINE. How could any human possibly compete with an AGI system that never sleeps and can read every word ever written in any language? No human could ever do this. Are you saying humans are the most perfect vessels of knowledge consumption possible in the universe? A human who has read 1,000 books in one area can compete for knowledge with an AGI that has read millions of books in thousands of areas? Really?
-You wrote: “AI safetyists assume that all problems can be solved with the application of enough brainpower. But for many problems, having the right knowledge matters more. And a lot of economically significant knowledge is not contained in any public data set. It’s locked up in the brains and private databases of millions of individuals and organizations spread across the economy and around the world.”
-Why do you assume an unaligned AGI would not raid every private database on earth in a very short time and take in all this knowledge you find so special? Does this claim rest on the security protocols of the big AI companies? Security protocols, even at OpenAI, are seen to be highly vulnerable to large-scale nation-state hacking. If China could hack into OpenAI, an AGI could surely hack into anything. An AGI’s ability to spot and exploit vulnerabilities in human-written code is widely predicted.
-Let’s see if we can leave this conversation with a note of agreement. Is there anything you think we can agree on?
In Episode #16 TRAILER, AI Risk Denier Down, things get weird. This show did not have to be like this. Our guest in Episode #16 is Timothy Lee, a computer scientist and journalist who founded and runs understandingai.org. Tim has written about AI risk many times, including these two recent essays:
https://www.understandingai.org/p/why-im-not-afraid-of-superintelligent
https://www.understandingai.org/p/why-im-not-worried-about-ai-taking
Tim was not prepared to discuss this work, which is when things started to get off the rails.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
In Episode #15, AI Risk Superbowl I: Conner vs. Beff, Highlights and Post-Game Analysis, John takes a look at the recent debate on the Machine Learning Street Talk podcast between AI safety hero Connor Leahy and acceleration cult leader Beff Jezos, aka Guillaume Verdon. The epic three-hour debate took place on 2/2/24. With a mix of highlights and analysis, John, with Beff’s help, reveals the truth about the e/acc movement: it’s anti-human at its core.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Resources:
Machine Learning Street Talk - YouTube
Full Debate: e/acc Leader Beff Jezos vs Doomer Connor Leahy
How Guillaume Verdon Became BEFF JEZOS, Founder of e/acc
Guillaume Verdon: Beff Jezos, E/acc Movement, Physics, Computation & AGI | Lex Fridman Podcast #407
Next week’s guest Timothy Lee’s website and related writing:
https://www.understandingai.org/
https://www.understandingai.org/p/why...
https://www.understandingai.org/p/why...
In Episode #15 TRAILER, AI Risk Super Bowl I: Conner vs. Beff, Highlights and Post-Game Analysis, John takes a look at the recent debate on the Machine Learning Street Talk podcast between AI safety hero Connor Leahy and acceleration cult leader Beff Jezos, aka Guillaume Verdon. The epic three-hour debate took place on 2/2/24. With a mix of highlights and analysis, John, with Beff’s help, reveals the truth about the e/acc movement: it’s anti-human at its core.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
In Episode #14, John interviews Joep Meindertsma, founder of Pause AI, a global AI safety policy and protest organization. Pause AI was behind the first-ever AI safety protests on the planet. John and Joep talk about what's being done, how it all feels, how it all might end, and even broach the darkest corner of all of this: suffering risk. This conversation embodies a spirit this movement needs: we can be upbeat and positive as we talk about the darkest subjects possible. It's not "optimism" to race to build suicide machines, but it is optimism to assume the best, and to believe we can and must succeed no matter what the odds.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Resources:
https://pauseai.info/
https://discord.gg/pVMWjddaW7
Sample Letter to Elected Leaders:
Dear XXXX- I'm a constituent of yours; I have lived in your district for X years. I'm writing today because I am gravely concerned about the existential threat to humanity from Artificial Intelligence. It is the most important issue in human history; nothing else is close.
Have you read the 22-word statement from the Center for AI Safety on 5/31/23 that Sam Altman and all the big AI CEOs signed? It reads: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Do you believe them? If so, what are you doing to prevent human extinction? If not, why don't you believe them?
Most prominent AI safety researchers say the default outcome, if we do not make major changes right now, is that AI will kill every living thing on earth within 1-50 years. This is not science fiction or hyperbole. This is our current status quo. It's like a pharma company saying they have a drug they say can cure all diseases, but it hasn't been through any clinical trials and it may also kill anyone who takes it. Then, with no oversight or regulation, they have put the new drug in the public water supply.
Big AI is making tech they openly admit they cannot control, do not understand how it works, and could kill us all. Their resources are 99:1 on making the tech stronger and faster, not safer. And yet they move forward, daily, with no oversight or regulation.
I am asking you to become a leader in AI safety. Many policy ideas could help, and you could help them become law. Things like liability reform so AI companies are liable for harm, hard caps on compute power, and tracking and reporting of all chip locations at a certain level.
I'd like to discuss this with you or someone from your office over the phone or a Zoom. Would that be possible?
Thanks very much.
XXXXXX
Address
Phone
In Episode #14 TRAILER, John interviews Joep Meindertsma, founder of Pause AI, a global AI safety policy and protest organization. Pause AI was behind the first-ever AI safety protests on the planet.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Resources:
https://pauseai.info/
https://discord.com/channels/1100491867675709580/@home
Sample Letter to Elected Leaders:
Dear XXXX- I'm a constituent of yours; I have lived in your district for X years. I'm writing today because I am gravely concerned about the existential threat to humanity from Artificial Intelligence. It is the most important issue in human history; nothing else is close.
Have you read the 22-word statement from the Center for AI Safety on 5/31/23 that Sam Altman and all the big AI CEOs signed? It reads: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Do you believe them? If so, what are you doing to prevent human extinction? If not, why don't you believe them?
Most prominent AI safety researchers say the default outcome, if we do not make major changes right now, is that AI will kill every living thing on earth within 1-50 years. This is not science fiction or hyperbole. This is our current status quo. It's like a pharma company saying they have a drug they say can cure all diseases, but it hasn't been through any clinical trials and it may also kill anyone who takes it. Then, with no oversight or regulation, they have put the new drug in the public water supply.
Big AI is making tech they openly admit they cannot control, do not understand how it works, and could kill us all. Their resources are 99:1 on making the tech stronger and faster, not safer. And yet they move forward, daily, with no oversight or regulation.
I am asking you to become a leader in AI safety. Many policy ideas could help, and you could help them become law. Things like liability reform so AI companies are liable for harm, hard caps on compute power, and tracking and reporting of all chip locations at a certain level.
I'd like to discuss this with you or someone from your office over the phone or a Zoom. Would that be possible?
Thanks very much.
XXXXXX
Address
Phone
In Episode #13, “Uncontrollable AI” TRAILER, John Sherman interviews Darren McKee, author of Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World. In this trailer, Darren starts off on an optimistic note by saying AI safety is winning. You don’t often hear it, but Darren says the world has moved on AI safety with greater speed, focus, and real promise than most in the AI community had thought possible. Apologies for the laggy cam on Darren! Darren’s book is an excellent resource; like this podcast, it is intended for the general public.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Resources:
Darren’s Book: https://www.amazon.com/Uncontrollable...
My Dad's Favorite Messiah Recording (3:22-6:55 only lol!!): https://www.youtube.com/watch?v=lFjQ7...
Sample letter/email to an elected official:
Dear XXXX- I'm a constituent of yours; I have lived in your district for X years. I'm writing today because I am gravely concerned about the existential threat to humanity from Artificial Intelligence. It is the most important issue in human history; nothing else is close.
Have you read the 22-word statement from the Center for AI Safety on 5/31/23 that Sam Altman and all the big AI CEOs signed? It reads: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Do you believe them? If so, what are you doing to prevent human extinction? If not, why don't you believe them?
Most prominent AI safety researchers say the default outcome, if we do not make major changes right now, is that AI will kill every living thing on earth within 1-50 years. This is not science fiction or hyperbole. This is our current status quo. It's like a pharma company saying they have a drug they say can cure all diseases, but it hasn't been through any clinical trials and it may also kill anyone who takes it. Then, with no oversight or regulation, they have put the new drug in the public water supply.
Big AI is making tech they openly admit they cannot control, do not understand how it works, and could kill us all. Their resources are 99:1 on making the tech stronger and faster, not safer. And yet they move forward, daily, with no oversight or regulation.
I am asking you to become a leader in AI safety. Many policy ideas could help, and you could help them become law. Things like liability reform so AI companies are liable for harm, hard caps on compute power, and tracking and reporting of all chip locations at a certain level.
I'd like to discuss this with you or someone from your office over the phone or a Zoom. Would that be possible?
Thanks very much.
XXXXXX
Address
Phone
In Episode #13, “Uncontrollable AI” TRAILER, John Sherman interviews Darren McKee, author of Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World. In this trailer, Darren starts off on an optimistic note by saying AI safety is winning. You don’t often hear it, but Darren says the world has moved on AI safety with greater speed, focus, and real promise than most in the AI community had thought possible. Darren’s book is an excellent resource; like this podcast, it is intended for the general public.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Resources:
Darren’s Book
In Episode #12, we have our first For Humanity debate!! John talks with Theo Jaffee, a fast-rising AI podcaster who is a self-described “techno-optimist.” The debate covers a wide range of topics in AI risk.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Resources
Theo’s YouTube Channel : https://youtube.com/@theojaffee8530?si=aBnWNdViCiL4ZaEg
Glossary: First definitions by ChatGPT-4. I asked it to give answers simple enough for an elementary school student to understand (lol, I find this helpful often!)
Reinforcement Learning with Human Feedback (RLHF):
Definition: RLHF, or Reinforcement Learning with Human Feedback, is like teaching a computer to make decisions by giving it rewards when it does something good and telling it what's right when it makes a mistake. It's a way for computers to learn and get better at tasks with the help of guidance from humans, just as a teacher helps students learn. So, it's like teamwork between people and computers to make the computer really smart!
Model Weights
Definition: Model weights are like the special numbers that help a computer understand and remember things. Imagine it's like a recipe book, and these weights are the amounts of ingredients needed to make a cake. When the computer learns new things, these weights get adjusted so that it gets better at its job, just like changing the recipe to make the cake taste even better! So, model weights are like the secret ingredients that make the computer really good at what it does.
Foom/Fast Take-off:
Definition: "AI fast take-off" or "foom" refers to the idea that artificial intelligence (AI) could become super smart and powerful really quickly. It's like imagining a computer getting super smart all of a sudden, like magic! Some people use the word "foom" to talk about the possibility of AI becoming super intelligent in a short amount of time. It's a bit like picturing a computer going from learning simple things to becoming incredibly smart in the blink of an eye! Foom comes from cartoons, it’s the sound a super hero makes in comic books when they burst off the ground into flight.
Gradient Descent: Gradient descent is like a treasure hunt for the best way to do something. Imagine you're on a big hill with a metal detector, trying to find the lowest point. The detector beeps louder when you're closer to the lowest spot. In gradient descent, you adjust your steps based on these beeps to reach the lowest point on the hill, and in the computer world, it helps find the best values for a task, like making a robot walk smoothly or a computer learn better.
Orthogonality: Orthogonality is like making sure things are independent and don't mess each other up. Think of a chef organizing ingredients on a table – if each ingredient has its own space and doesn't mix with others, it's easier to work. In computers, orthogonality means keeping different parts separate, so changing one thing doesn't accidentally affect something else. It's like having a well-organized kitchen where each tool has its own place, making it easy to cook without chaos!
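For anyone who wants to see the gradient descent "treasure hunt" above as actual code, here is a minimal Python sketch. It is our own illustrative example, not something from the episode: the function f(x) = (x - 3)^2, the starting point, and the learning rate are all arbitrary choices made just to show the idea of repeatedly stepping downhill along the gradient.

# Minimal gradient descent sketch: find the x that minimizes f(x) = (x - 3)^2.
# The gradient f'(x) = 2 * (x - 3) plays the role of the "beeps" in the
# treasure-hunt analogy: it tells us which way is downhill and how steep it is.

def gradient(x):
    return 2 * (x - 3)

x = 0.0              # start anywhere on the "hill"
learning_rate = 0.1  # how big a step to take on each beep

for step in range(50):
    x = x - learning_rate * gradient(x)  # step downhill

print(round(x, 4))   # converges toward 3.0, the lowest point

Real machine learning systems do the same loop, just with millions of model weights instead of a single number x.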
In Episode #12 TRAILER, we have our first For Humanity debate!! John talks with Theo Jaffee, a fast-rising AI podcaster who is a self-described “techno-optimist.” The debate covers a wide range of topics in AI risk.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Resources:
Theo’s YouTube Channel: https://youtube.com/@theojaffee8530?s...
In Episode #11, we meet Stephen Hanson, a painter and digital artist from Northern England. Stephen first became aware of AI risk in December 2022, and has spent 12+ months carrying the weight of it all. John and Steve talk about what it's like to have a family and how to talk to them about AI risk, what the future holds, and what we the AI Risk Realists can do to change the future, while keeping our sanity at the same time.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Resources:
STEVE'S ART! stephenhansonart.bigcartel.com
Get ahead for next week and check out Theo Jaffee's YouTube Channel: https://youtube.com/@theojaffee8530?s...
In Episode #11 TRAILER, we meet Stephen Hanson, a painter and digital artist from Northern England. Stephen first became aware of AI risk in December 2022, and has spent 12+ months carrying the weight of it all. John and Steve talk about what it's like to have a family and how to talk to them about AI risk, what the future holds, and what we the AI Risk Realists can do to change the future, while keeping our sanity at the same time.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Resources:
STEVE'S ART! stephenhansonart.bigcartel.com
In Episode #10, AI safety research icon Eliezer Yudkowsky updates his AI doom predictions for 2024. After For Humanity host John Sherman tweeted at Eliezer, he revealed new timelines and predictions for 2024. Be warned, this is a heavy episode. But there is some hope and a laugh at the end. Most important among his updated views, he believes:
-Humanity no longer has 30-50 years to solve the alignment and interpretability problems; our broken processes just won't allow it
-Human augmentation is the only viable path for humans to compete with AGIs
-We have ONE YEAR, THIS YEAR, 2024, to mount a global WW2-style response to the extinction risk of AI
-This battle is EASIER to win than WW2 :)
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
In Episode #10 TRAILER, AI safety research icon Eliezer Yudkowsky updates his AI doom predictions for 2024. After For Humanity host John Sherman tweeted at Eliezer, he revealed new timelines and predictions for 2024.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Do you believe the big AI companies when they tell you their work could kill every last human on earth? You are not alone. You are part of a growing general public that opposes unaligned AI capabilities development. In Episode #9, we meet Sean Bradley, a veteran Marine who served his country for six years, including as a helicopter door gunner. Sean left the service as a sergeant and now lives in San Diego, where he is married, working, and in college. Sean is a viewer of For Humanity and a member of our growing community of the AI risk aware.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
More on the little robot: https://themessenger.com/tech/rob-rob...
Do you believe the big AI companies when they tell you their work could kill every last human on earth? You are not alone. You are part of a growing general public that opposes unaligned AI capabilities development. In Episode #9 TRAILER, we meet Sean Bradley, a Marine Corps veteran who served his country for six years, including as a helicopter door gunner. Sean left the service as a sergeant and now lives in San Diego, where he is married, working and in college. Sean is a viewer of For Humanity and a member of our growing community of the AI risk aware. This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity. RESOURCES: More on the little robot: https://themessenger.com/tech/rob-rob...
Who are the most dangerous "doomers" in AI? It's the people bringing the doom threat to the world, not the people calling them out for it. In Episode #8, host John Sherman points fingers and lays blame. How is it possible we're actually discussing a zero-humans-on-earth future? Meet the people making it happen, the real doomers. This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity. #samaltman #darioamodei #yannlecun #ai #aisafety
Who are the most dangerous "doomers" in AI? It's the people bringing the doom threat to the world, not the people calling them out for it. In Episode #8 TRAILER, host John Sherman points fingers and lays blame. How is it possible we're actually discussing a zero-humans-on-earth future? Meet the people making it happen, the real doomers. This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity. #samaltman #darioamodei #yannlecun #ai #aisafety
You've heard all the tech experts. But what do regular moms think about AI and human extinction? In our Episode #7, "Moms Talk AI Extinction Risk," host John Sherman moves the AI Safety debate from the tech world to the real world. 30-something tech dudes believe they somehow have our authorization to toy with killing our children. And our children's yet unborn children too. They do not have this authorization. So what do regular moms think of all this? Watch and find out. This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
You've heard all the tech experts. But what do regular moms think about AI and human extinction? In our Episode #7 TRAILER, "Moms Talk AI Extinction Risk," host John Sherman moves the AI Safety debate from the tech world to the real world. 30-something tech dudes believe they somehow have our authorization to toy with killing our children. And our children's yet unborn children too. They do not have this authorization. So what do regular moms think of all this? Watch and find out. This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
In Episode #6, Team Save Us vs. Team Kill Us, host John Sherman weaves together highlights and analysis of The Munk Debate on AI Safety to show the case for and against AI as a human extinction risk. The debate took place in Toronto in June 2023, and it remains entirely current and relevant today, standing alone as one of the most well-produced, well-argued debates on AI Safety anywhere. All of the issues debated remain unsolved. All of the threats debated only grow in urgency. In this Munk Debate, you’ll meet two teams: Max Tegmark and Yoshua Bengio on Team Save Us (John’s title, not theirs), and Yann Lecun and Melanie Mitchell on Team Kill Us (they’re called pro/con in the debate, Kill v Save is all John). Host John Sherman adds in some current events and colorful analysis (and language) throughout. This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. Let’s call it facts and analysis. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity. RESOURCES THE MUNK DEBATES: https://munkdebates.com Max Tegmark ➡️X: https://twitter.com/tegmark ➡️Max's Website: https://space.mit.edu/home/tegmark ➡️Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/... ➡️Future of Life Institute: https://futureoflife.org Yoshua Bengio ➡️Website: https://yoshuabengio.org/ Melanie Mitchell ➡️Website: https://melaniemitchell.me/ ➡️X: https://x.com/MelMitchell1?s=20 Yann Lecun ➡️Google Scholar: https://scholar.google.com/citations?... ➡️X: https://x.com/ylecun?s=20 #AI #AISAFETY #AIRISK #OPENAI #ANTHROPIC #DEEPMIND #HUMANEXTINCTION #YANNLECUN #MELANIEMITCHELL #MAXTEGMARK #YOSHUABENGIO
Want to see the most important issue in human history, extinction from AI, robustly debated, live and in person? It doesn’t happen nearly often enough. In our Episode #6, Team Save Us vs. Team Kill Us, TRAILER, John Sherman weaves together highlights and analysis of The Munk Debate on AI Safety to show the case for and against AI as a human extinction risk. The debate took place in June 2023, and it remains entirely current and relevant today, standing alone as one of the most well-produced, well-argued debates on AI Safety anywhere. All of the issues debated remain unsolved. All of the threats debated only grow in urgency. In this Munk Debate, you’ll meet two teams: Max Tegmark and Yoshua Bengio on Team Save Us (John’s title, not theirs), and Yann Lecun and Melanie Mitchell on Team Kill Us (they’re called pro/con in the debate, Kill v Save is all John). Host John Sherman adds in some current events and colorful analysis (and language) throughout. This is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. Let’s call it facts and analysis. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity. RESOURCES THE MUNK DEBATES: https://munkdebates.com Max Tegmark ➡️X: https://twitter.com/tegmark ➡️Max's Website: https://space.mit.edu/home/tegmark ➡️Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/... ➡️Future of Life Institute: https://futureoflife.org Yoshua Bengio ➡️Website: https://yoshuabengio.org/ Melanie Mitchell ➡️Website: https://melaniemitchell.me/ ➡️X: https://x.com/MelMitchell1?s=20 Yann Lecun ➡️Google Scholar: https://scholar.google.com/citations?... ➡️X: https://x.com/ylecun?s=20 #AI #AISAFETY #AIRISK #OPENAI #ANTHROPIC #DEEPMIND #HUMANEXTINCTION #YANNLECUN #MELANIEMITCHELL #MAXTEGMARK #YOSHUABENGIO
In Episode #5 Part 2: John Sherman interviews Dr. Roman Yampolskiy, Director of the Cyber Security Lab at the University of Louisville, and renowned AI safety researcher. Among the many topics discussed in this episode:
-what is at the core of AI safety risk skepticism
-why AI safety research leaders themselves are so all over the map
-why journalism is failing so miserably to cover AI safety appropriately
-the drastic step the federal government could take to really slow Big AI down
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity. ROMAN YAMPOLSKIY RESOURCES ➡️Roman Yampolskiy's Twitter: https://twitter.com/romanyam ➡️Roman's YouTube Channel: https://www.youtube.com/c/RomanYampolskiy ➡️Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/... ➡️Roman on Medium: https://romanyam.medium.com/ #ai #aisafety #airisk #humanextinction #romanyampolskiy #samaltman #openai #anthropic #deepmind
In Episode #5 Part 2, TRAILER: John Sherman interviews Dr. Roman Yampolskiy, Director of the Cyber Security Lab at the University of Louisville, and renowned AI safety researcher. Among the many topics discussed in this episode:
-what is at the core of AI safety risk skepticism
-why AI safety research leaders themselves are so all over the map
-why journalism is failing so miserably to cover AI safety appropriately
-the drastic step the federal government could take to really slow Big AI down
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity. ROMAN YAMPOLSKIY RESOURCES ➡️Roman Yampolskiy's Twitter: https://twitter.com/romanyam ➡️Roman's YouTube Channel: https://www.youtube.com/c/RomanYampol... ➡️Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/... ➡️Roman on Medium: https://romanyam.medium.com/ #ai #aisafety #airisk #humanextinction #romanyampolskiy #samaltman #openai #anthropic #deepmind
In Episode #4 Part 1: John Sherman interviews Dr. Roman Yampolskiy, Director of the Cyber Security Lab at the University of Louisville, and renowned AI safety researcher. Among the many topics discussed in this episode:
-why more average people aren't more involved and upset about AI safety
-how frontier AI capabilities workers go to work every day knowing their work risks human extinction, and go back to work the next day
-how we can talk to our kids about these dark, existential issues
-what if AI safety researchers concerned about human extinction from AI are just somehow wrong?
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
In Episode #4 Part 1, TRAILER: John Sherman interviews Dr. Roman Yampolskiy, Director of the Cyber Security Lab at the University of Louisville, and renowned AI safety researcher. Among the many topics discussed in this episode:
-why more average people aren't more involved and upset about AI safety
-how frontier AI capabilities workers go to work every day knowing their work risks human extinction, and go back to work the next day
-how we can talk to our kids about these dark, existential issues
-what if AI safety researchers concerned about human extinction from AI are just somehow wrong?
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Episode #3: The Interpretability Problem. In this episode we'll hear from AI Safety researchers including Eliezer Yudkowsky, Max Tegmark, Connor Leahy, and many more, discussing how current AI systems are black boxes; no one has any clue how they work inside.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
This is the trailer for Episode #3: The Interpretability Problem. In this episode we'll hear from AI Safety researchers including Eliezer Yudkowsky, Max Tegmark, Connor Leahy, and many more, discussing how current AI systems are black boxes; no one has any clue how they work inside. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity. #AI #airisk #alignment #interpretability #doom #aisafety #openai #anthropic #eliezeryudkowsky #maxtegmark #connorleahy
Did you know the makers of AI have no idea how to control their technology? They have no clue how to align it with human goals, values and ethics. You know, stuff like, don't kill humans.
This is the AI safety podcast for all people, no tech background required. We focus only on the threat of human extinction from AI.
In Episode #2, The Alignment Problem, host John Sherman explores how alarmingly far AI safety researchers are from finding any way to control AI systems, much less their superintelligent children, who will arrive soon enough.
Did you know the makers of AI have no idea how to control their technology, while they admit it has the power to cause human extinction? In For Humanity: An AI Safety Podcast, Episode #2, The Alignment Problem, we look into the fact that no one has any clue how to align an AI system with human values, ethics and goals. Such as: don't kill all the humans. Episode #2 drops Wednesday; this is the trailer.
How bout we choose not to just all die? Are you with me?
For Humanity, An AI Safety Podcast is the AI Safety Podcast for regular people. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2-10 years. This podcast is solely about the threat of human extinction from AI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what we can do to help save humanity.
For Humanity, An AI Safety Podcast is the AI Safety Podcast for regular people. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2-10 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
The makers of AI have no idea how to control their technology or why it does what it does. And yet they keep making it faster and stronger. In Episode #1, we introduce the two biggest unsolved problems in AI safety: alignment and interpretability.
This podcast is your wake-up call, and a real-time, unfolding plan of action.