
For Humanity: An AI Safety Podcast

Episode #35 TRAILER: “The AI Risk Investigators: Inside Gladstone AI, Part 1”

5 min • 1 July 2024

In the Episode #35 trailer, host John Sherman talks with Jeremie and Eduard Harris, the CEO and CTO of Gladstone AI. Gladstone AI is the private company working most closely with the US government on assessing AI risk. The Gladstone Report, published in February, was the first public acknowledgment by the US government, in any form, of the reality of AI risk. These are two very important people doing incredibly important work. The full interview runs more than two hours and will be released as two shows.

TIME MAGAZINE ON THE GLADSTONE REPORT

https://time.com/6898967/ai-extinction-national-security-risks-report/


Please Donate Here To Help Promote For Humanity

https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it’s not opinion either. It is a long-form public service announcement. The show simply strings together the existing facts and underscores the unthinkable but probable outcome: the end of all life on Earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as two years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

For Humanity Theme Music by Josef Ebner

Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg

Website: https://josef.pictures

RESOURCES:

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!

https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

JOIN THE FIGHT, help Pause AI!!!!

Pause AI

Join the Pause AI Weekly Discord, Thursdays at 2 PM EST


https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety

Statement on AI Risk | CAIS

https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes 

https://twitter.com/AISafetyMemes

TIMESTAMPS:

Sam Altman's intensity (00:00:10) Sam Altman's intense demeanor and competence, as observed by the speaker.

Security risks of superintelligent AI (00:01:02) Concerns about the potential loss of control over superintelligent systems and the security vulnerabilities in top AI labs.

Silicon Valley's security hubris (00:02:04) Critique of Silicon Valley's overconfidence in technology and lack of security measures, particularly in comparison to nation-state-level cyber threats.

China's AI capabilities (00:02:36) Discussion of security deficiencies in the United States and the potential for China to gain better AI capabilities through security leaks.

Foreign actors' capacity for exfiltration (00:03:08) Foreign actors' incentives and capacity to exfiltrate frontier models, leading to the need to secure infrastructure before scaling and accelerating AI capabilities.


