Speaker
Siméon Campos is the president and founder of SaferAI, an organization developing the infrastructure for general-purpose AI auditing and risk management. He has worked on large language models for the past two years and is deeply committed to making AI safer.
Session Summary
“I think safe AGI can both prevent a catastrophe and offer a very promising pathway into a eucatastrophe.”
This week we are dropping a special episode of the Existential Hope podcast, where we sit down with Siméon Campos, president and founder of SaferAI and a Foresight Institute fellow in the Existential Hope track. Siméon shares his experience working on AI governance, discusses the current state and future of large language models, and explores the crucial measures needed to guide AI toward the greater good.
Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcasts
Existential Hope was created to collect positive, plausible scenarios for the future, so that more people commit to creating a brighter future, and to begin mapping out the main developments and challenges that must be navigated to reach it. Existential Hope is a Foresight Institute project.
Hosted by Allison Duettmann and Beatrice Erkers
Follow Us: Twitter | Facebook | LinkedIn | Existential Hope Instagram