
Regulating AI: Innovate Responsibly

Existential Risk in AI with Otto Barten

38 min • 28 February 2024

In a world racing toward the development of Artificial General Intelligence (AGI), the balance between innovation and existential risk becomes a pivotal conversation. In this episode, I’m joined by Otto Barten, Founder of the Existential Risk Observatory. We focus on AGI and its potential to pose existential risks to humanity. Otto shares valuable insights into the necessity of global policy innovation and raising public awareness to navigate these uncharted waters responsibly.


Key Takeaways:


(00:18) Public awareness of AI risks is rising rapidly.

(01:39) The Existential Risk Observatory’s mission is to mitigate human extinction risks.

(02:51) The European Union’s political consensus on the EU AI Act.

(04:11) Otto explains multiple AI threat models leading to existential risks.

(07:01) Why distinguish between AGI and current AI capabilities?

(09:18) Recent statements on AGI from Sam Altman and Mark Zuckerberg.

(12:15) The potential dangers of open-sourcing AGI.

(14:17) The current regulatory landscapes and potential improvements.

(17:01) Introducing the concept of a “pause button” for AI development.

(20:13) Balancing AI development with ethical considerations and existential risks.

(23:51) Increasing public and legislative awareness of AI risks.

(29:01) The significance of transparency and accountability in AI development.


Resources Mentioned:


Otto Barten - https://www.linkedin.com/in/ottobarten/?originalSubdomain=nl

Existential Risk Observatory - https://www.linkedin.com/company/existential-risk-observatory/

European Union AI Act -

The Bletchley Process for global AI safety summits -

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard