
DataFramed

#157 Is AI an Existential Risk? With Trond Arne Undheim, Research Scholar in Global Systemic Risk at Stanford University

47 min • 2 October 2023

It's been almost a year since ChatGPT was released, mainstreaming AI into the collective consciousness in the process. Since that moment, we've seen a spirited debate emerge within the data & AI communities, and in public discourse at large. The focal point of this debate is whether AI poses, or will lead to, an existential risk for the human species.

We've seen thinkers such as Eliezer Yudkowsky, Yuval Noah Harari, and others sound the alarm bell on how AI is as dangerous as, if not more dangerous than, nuclear weapons. We've also seen AI researchers and business leaders sign petitions and lobby governments for strict regulation of AI.

On the flip side, we've also seen luminaries within the field, such as Andrew Ng and Yann LeCun, calling for, not against, the proliferation of open-source AI. So how do we navigate this debate, and where does AI actually sit on the risk spectrum? More importantly, how can we contextualize the risk of AI alongside the other systemic risks humankind faces, such as climate change and the threat of nuclear war? And how can we regulate AI without falling into the trap of regulatory capture, where a select and mighty few benefit from regulation while drowning out the competition?

Trond Arne Undheim is a Research Scholar in Global Systemic Risk, Innovation, and Policy at Stanford University, a Venture Partner at Antler, and CEO and co-founder of Yegii, an insight network with experts and knowledge assets on disruption. He is a nonresident Fellow at the Atlantic Council with a portfolio spanning artificial intelligence, the future of work, data ethics, emerging technologies, and entrepreneurship. He is a former director of MIT Startup Exchange and has helped launch over 50 startups. In a previous life, he was a Senior Lecturer at the MIT Sloan School of Management, an executive at WPP and Oracle, and an EU National Expert.

In this episode, Trond and Adel explore the multifaceted risks associated with AI, the cascading-risks lens, and the debate over the likelihood of runaway AI. Trond discusses the role of governments and organizations in shaping AI's future, the need for both global and regional regulatory frameworks, and the importance of educating decision-makers on AI's complexities. He also shares his views on the contrasting philosophies behind open- and closed-source AI, the risk of regulatory capture, and more.

Links mentioned in the show:

