Sweden's 100 most popular podcasts

Future of Life Institute Podcast

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.

Subscribe

iTunes / Overcast / RSS

Website

futureoflife.org

Episodes

Annie Jacobsen on Nuclear War - a Second by Second Timeline

Annie Jacobsen joins the podcast to lay out a second by second timeline for how nuclear war could happen. We also discuss time pressure, submarines, interceptor missiles, cyberattacks, and concentration of power. You can find more on Annie's work at https://anniejacobsen.com Timestamps: 00:00 A scenario of nuclear war 06:56 Who would launch an attack? 13:50 Detecting nuclear attacks 19:37 The first critical seconds 29:42 Decisions under time pressure 34:27 Lessons from insiders 44:18 Submarines 51:06 How did we end up like this? 59:40 Interceptor missiles 1:11:25 Nuclear weapons and cyberattacks 1:17:35 Concentration of power
2024-04-05
Link to episode

Katja Grace on the Largest Survey of AI Researchers

Katja Grace joins the podcast to discuss the largest survey of AI researchers conducted to date, AI researchers' beliefs about different AI risks, capabilities required for continued AI-related transformation, the idea of discontinuous progress, the impacts of AI from either side of the human-level intelligence threshold, intelligence and power, and her thoughts on how we can mitigate AI risk. Find more on Katja's work at https://aiimpacts.org/. Timestamps: 0:20 AI Impacts surveys 18:11 What AI will look like in 20 years 22:43 Experts' extinction risk predictions 29:35 Opinions on slowing down AI development 31:25 AI 'arms races' 34:00 AI risk areas with the most agreement 40:41 Do 'high hopes and dire concerns' go hand-in-hand? 42:00 Intelligence explosions 45:37 Discontinuous progress 49:43 Impacts of AI crossing the human-level intelligence threshold 59:39 What does AI learn from human culture? 1:02:59 AI scaling 1:05:04 What should we do?
2024-03-14
Link to episode

Holly Elmore on Pausing AI, Hardware Overhang, Safety Research, and Protesting

Holly Elmore joins the podcast to discuss pausing frontier AI, hardware overhang, safety research during a pause, the social dynamics of AI risk, and what prevents AGI corporations from collaborating. You can read more about Holly's work at https://pauseai.info Timestamps: 00:00 Pausing AI 10:23 Risks during an AI pause 19:41 Hardware overhang 29:04 Technological progress 37:00 Safety research during a pause 54:42 Social dynamics of AI risk 1:10:00 What prevents cooperation? 1:18:21 What about China? 1:28:24 Protesting AGI corporations
2024-02-29
Link to episode

Sneha Revanur on the Social Effects of AI

Sneha Revanur joins the podcast to discuss the social effects of AI, the illusory divide between AI ethics and AI safety, the importance of humans in the loop, the different effects of AI on younger and older people, and the importance of AIs identifying as AIs. You can read more about Sneha's work at https://encodejustice.org Timestamps: 00:00 Encode Justice 06:11 AI ethics and AI safety 15:49 Humans in the loop 23:59 AI in social media 30:42 Deteriorating social skills? 36:00 AIs identifying as AIs 43:36 AI influence in elections 50:32 AIs interacting with human systems
2024-02-16
Link to episode

Roman Yampolskiy on Shoggoth, Scaling Laws, and Evidence for AI being Uncontrollable

Roman Yampolskiy joins the podcast again to discuss whether AI is like a Shoggoth, whether scaling laws will hold for more agent-like AIs, evidence that AI is uncontrollable, and whether designing human-like AI would be safer than the current development path. You can read more about Roman's work at http://cecs.louisville.edu/ry/ Timestamps: 00:00 Is AI like a Shoggoth? 09:50 Scaling laws 16:41 Are humans more general than AIs? 21:54 Are AI models explainable? 27:49 Using AI to explain AI 32:36 Evidence for AI being uncontrollable 40:29 AI verifiability 46:08 Will AI be aligned by default? 54:29 Creating human-like AI 1:03:41 Robotics and safety 1:09:01 Obstacles to AI in the economy 1:18:00 AI innovation with current models 1:23:55 AI accidents in the past and future
2024-02-02
Link to episode

Special: Flo Crivello on AI as a New Form of Life

On this special episode of the podcast, Flo Crivello talks with Nathan Labenz about AI as a new form of life, whether attempts to regulate AI could lead to regulatory capture, how a GPU kill switch could work, and why Flo expects AGI in 2-8 years. Timestamps: 00:00 Technological progress 07:59 Regulatory capture and AI 11:53 AI as a new form of life 15:44 Can AI development be paused? 20:12 Biden's executive order on AI 22:54 How would a GPU kill switch work? 27:00 Regulating models or applications? 32:13 AGI in 2-8 years 42:00 China and US collaboration on AI
2024-01-19
Link to episode

Carl Robichaud on Preventing Nuclear War

Carl Robichaud joins the podcast to discuss the new nuclear arms race, how much world leaders and ideologies matter for nuclear risk, and how to reach a stable, low-risk era. You can learn more about Carl's work here: https://www.longview.org/about/carl-robichaud/ Timestamps: 00:00 A new nuclear arms race 08:07 How much do world leaders matter? 18:04 How much does ideology matter? 22:14 Do nuclear weapons cause stable peace? 31:29 North Korea 34:01 Have we overestimated nuclear risk? 43:24 Time pressure in nuclear decisions 52:00 Why so many nuclear warheads? 1:02:17 Has containment been successful? 1:11:34 Coordination mechanisms 1:16:31 Technological innovations 1:25:57 Public perception of nuclear risk 1:29:52 Easier access to nuclear weapons 1:33:31 Reaching a stable, low-risk era
2024-01-06
Link to episode

Frank Sauer on Autonomous Weapon Systems

Frank Sauer joins the podcast to discuss autonomy in weapon systems, killer drones, low-tech defenses against drones, the flaws and unpredictability of autonomous weapon systems, and the political possibilities of regulating such systems. You can learn more about Frank's work here: https://metis.unibw.de/en/ Timestamps: 00:00 Autonomy in weapon systems 12:19 Balance of offense and defense 20:05 Killer drone systems 28:53 Is autonomy like nuclear weapons? 37:20 Low-tech defenses against drones 48:29 Autonomy and power balance 1:00:24 Tricking autonomous systems 1:07:53 Unpredictability of autonomous systems 1:13:16 Will we trust autonomous systems too much? 1:27:28 Legal terminology 1:32:12 Political possibilities
2023-12-14
Link to episode

Darren McKee on Uncontrollable Superintelligence

Darren McKee joins the podcast to discuss how AI might be difficult to control, which goals and traits AI systems will develop, and whether there's a unified solution to AI alignment. Timestamps: 00:00 Uncontrollable superintelligence 16:41 AI goals and the "virus analogy" 28:36 Speed of AI cognition 39:25 Narrow AI and autonomy 52:23 Reliability of current and future AI 1:02:33 Planning for multiple AI scenarios 1:18:57 Will AIs seek self-preservation? 1:27:57 Is there a unified solution to AI alignment? 1:30:26 Concrete AI safety proposals
2023-12-01
Link to episode

Mark Brakel on the UK AI Summit and the Future of AI Policy

Mark Brakel (Director of Policy at the Future of Life Institute) joins the podcast to discuss the AI Safety Summit in Bletchley Park, objections to AI policy, AI regulation in the EU and US, global institutions for safe AI, and autonomy in weapon systems. Timestamps: 00:00 AI Safety Summit in the UK 12:18 Are officials up to date on AI? 23:22 Objections to AI policy 31:27 The EU AI Act 43:37 The right level of regulation 57:11 Risks and regulatory tools 1:04:44 Open-source AI 1:14:56 Subsidising AI safety research 1:26:29 Global institutions for safe AI 1:34:34 Autonomy in weapon systems
2023-11-17
Link to episode

Dan Hendrycks on Catastrophic AI Risks

Dan Hendrycks joins the podcast again to discuss X.ai, how AI risk thinking has evolved, malicious use of AI, AI race dynamics between companies and between militaries, making AI organizations safer, and how representation engineering could help us understand AI traits like deception. You can learn more about Dan's work at https://www.safe.ai Timestamps: 00:00 X.ai - Elon Musk's new AI venture 02:41 How AI risk thinking has evolved 12:58 AI bioengineering 19:16 AI agents 24:55 Preventing autocracy 34:11 AI race - corporations and militaries 48:04 Bulletproofing AI organizations 1:07:51 Open-source models 1:15:35 Dan's textbook on AI safety 1:22:58 Rogue AI 1:28:09 LLMs and value specification 1:33:14 AI goal drift 1:41:10 Power-seeking AI 1:52:07 AI deception 1:57:53 Representation engineering
2023-11-03
Link to episode

Samuel Hammond on AGI and Institutional Disruption

Samuel Hammond joins the podcast to discuss how AGI will transform economies, governments, institutions, and other power structures. You can read Samuel's blog at https://www.secondbest.ca Timestamps: 00:00 Is AGI close? 06:56 Compute versus data 09:59 Information theory 20:36 Universality of learning 24:53 Hard steps in evolution 30:30 Governments and advanced AI 40:33 How will AI transform the economy? 55:26 How will AI change transaction costs? 1:00:31 Isolated thinking about AI 1:09:43 AI and Leviathan 1:13:01 Informational resolution 1:18:36 Open-source AI 1:21:24 AI will decrease state power 1:33:17 Timeline of a techno-feudalist future 1:40:28 Alignment difficulty and AI scale 1:45:19 Solving robotics 1:54:40 A constrained Leviathan 1:57:41 An Apollo Project for AI safety 2:04:29 Secure "gain-of-function" AI research 2:06:43 Is the market expecting AGI soon?
2023-10-20
Link to episode

Imagine A World: What if AI advisors helped us make better decisions?

Are we doomed to a future of loneliness and unfulfilling online interactions? What if technology made us feel more connected instead? Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year. In the eighth and final episode of Imagine A World we explore the fictional worldbuild titled 'Computing Counsel', one of the third-place winners of FLI's worldbuilding contest. Guillaume Riesen talks to Mark L, one of the three members of the team behind the entry. Mark is a machine learning expert with a chemical engineering degree, as well as an amateur writer. His teammates are Patrick B, a mechanical engineer and graphic designer, and Natalia C, a biological anthropologist and amateur programmer. This world paints a vivid, nuanced picture of how emerging technologies shape society. We have advertisers competing with ad-filtering technologies in an escalating arms race that eventually puts an end to the internet as we know it. There is AI-generated art so personalized that it becomes addictive to some consumers, while others boycott media technologies altogether. And corporations begin to throw each other under the bus in an effort to redistribute the wealth of their competitors to their own customers. While these conflicts are messy, they generally end up empowering and enriching the lives of the people in this world. New kinds of AI systems give them better data, better advice, and eventually the opportunity for genuine relationships with the beings these tools have become. The impact of any technology on society is complex and multifaceted. This world does a great job of capturing that. While social networking technologies become ever more powerful, the networks of people they connect don't necessarily just get wider and shallower. Instead, they tend to be smaller and more intimately interconnected. The world's inhabitants also have nuanced attitudes towards AI tools, embracing or avoiding their applications based on their religious or philosophical beliefs. Please note: This episode explores the ideas created as part of FLI's worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI endorsed positions. Explore this worldbuild: https://worldbuild.ai/computing-counsel The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email [email protected]. You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.
2023-10-17
Link to episode

Imagine A World: What if narrow AI fractured our shared reality?

Let's imagine a future where AGI is developed but kept at a distance from practically impacting the world, while narrow AI remakes the world completely. Most people don't know or care about the difference and have no idea how they could distinguish between a human and an artificial stranger. Inequality sticks around and AI fractures society into separate media bubbles with irreconcilable perspectives. But it's not all bad. AI markedly improves the general quality of life, enhancing medicine and therapy, and those bubbles help to sustain their inhabitants. Can you get excited about a world with these tradeoffs? Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year. In the seventh episode of Imagine A World we explore a fictional worldbuild titled 'Hall of Mirrors', which was a third-place winner of FLI's worldbuilding contest. Michael Vassar joins Guillaume Riesen to discuss his imagined future, which he created with the help of Matija Franklin and Bryce Hidysmith. Vassar was formerly the president of the Singularity Institute, and co-founded Metamed; more recently he has worked on communication across political divisions. Franklin is a PhD student at UCL working on AI Ethics and Alignment. Finally, Hidysmith began in fashion design and passed through fortune-telling before winding up in finance and policy research, at places like Numerai, the Median Group, Bismarck Analysis, and Eco.com. Hall of Mirrors is a deeply unstable world where nothing is as it seems. The structures of power that we know today have eroded away, survived only by shells of expectation and appearance. People are isolated by perceptual bubbles and struggle to agree on what is real. This team put a lot of effort into creating a plausible, empirically grounded world, but their work is also notable for its irreverence and dark humor. In some ways, this world is kind of a caricature of the present. We see deeper isolation and polarization caused by media, and a proliferation of powerful but ultimately limited AI tools that further erode our sense of objective reality. A deep instability threatens. And yet, on a human level, things seem relatively calm. It turns out that the stories we tell ourselves about the world have a lot of inertia, and so do the ways we live our lives. Please note: This episode explores the ideas created as part of FLI's worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI endorsed positions. Explore this worldbuild: https://worldbuild.ai/hall-of-mirrors The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email [email protected].
You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.
2023-10-10
Link to episode

Steve Omohundro on Provably Safe AGI

Steve Omohundro joins the podcast to discuss Provably Safe Systems, a paper he co-authored with FLI President Max Tegmark. You can read the paper here: https://arxiv.org/pdf/2309.01933.pdf Timestamps: 00:00 Provably safe AI systems 12:17 Alignment and evaluations 21:08 Proofs about language model behavior 27:11 Can we formalize safety? 30:29 Provable contracts 43:13 Digital replicas of actual systems 46:32 Proof-carrying code 56:25 Can language models think logically? 1:00:44 Can AI do proofs for us? 1:09:23 Hard to prove, easy to verify 1:14:31 Digital neuroscience 1:20:01 Risks of totalitarianism 1:22:29 Can we guarantee safety? 1:25:04 Real-world provable safety 1:29:29 Tamper-proof hardware 1:35:35 Mortal and throttled AI 1:39:23 Least-privilege guarantee 1:41:53 Basic AI drives 1:47:47 AI agency and world models 1:52:08 Self-improving AI 1:58:21 Is AI overhyped now?
2023-10-05
Link to episode

Imagine A World: What if AI enabled us to communicate with animals?

What if AI allowed us to communicate with animals? Could interspecies communication lead to new levels of empathy? How might communicating with animals lead humans to reimagine our place in the natural world? Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year. In the sixth episode of Imagine A World we explore the fictional worldbuild titled 'AI for the People', a third-place winner of the worldbuilding contest. Our host Guillaume Riesen welcomes Chi Rainer Bornfree, part of this three-person worldbuilding team alongside her husband Micah White, and their collaborator, J.R. Harris. Chi has a PhD in Rhetoric from UC Berkeley and has taught at Bard, Princeton, and NY State Correctional facilities, in the meantime writing fiction, essays, letters, and more. Micah, best-known as the co-creator of the 'Occupy Wall Street' movement and the author of 'The End of Protest', now focuses primarily on the social potential of cryptocurrencies, while Harris is a freelance illustrator and comic artist. The name 'AI for the People' does a great job of capturing this team's activist perspective and their commitment to empowerment. They imagine social and political shifts that bring power back into the hands of individuals, whether that means serving as lawmakers on randomly selected committees, or gaining income by choosing to sell their personal data online. But this world isn't just about human people. Its biggest bombshell is an AI breakthrough that allows humans to communicate with other animals. What follows is an existential reconsideration of humanity's place in the universe. This team has created an intimate, complex portrait of a world shared by multiple parties: AIs, humans, other animals, and the environment itself. As these entities find their way forward together, their goals become enmeshed and their boundaries increasingly blurred. Please note: This episode explores the ideas created as part of FLI's worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI endorsed positions. Explore this worldbuild: https://worldbuild.ai/ai-for-the-people The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email [email protected].
You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects. Media and resources referenced in the episode: https://en.wikipedia.org/wiki/Life_3.0 https://en.wikipedia.org/wiki/1_the_Road https://ignota.org/products/pharmako-ai https://en.wikipedia.org/wiki/The_Ministry_for_the_Future https://www.scientificamerican.com/article/how-scientists-are-using-ai-to-talk-to-animals/ https://en.wikipedia.org/wiki/Occupy_Wall_Street https://en.wikipedia.org/wiki/Sortition https://en.wikipedia.org/wiki/Iroquois https://en.wikipedia.org/wiki/The_Ship_Who_Sang https://en.wikipedia.org/wiki/The_Sparrow_(novel) https://en.wikipedia.org/wiki/After_Yang
2023-10-03
Link to episode

Imagine A World: What if some people could live forever?

If you could extend your life, would you? How might life extension technologies create new social and political divides? How can the world unite to solve the great problems of our time, like AI risk? What if AI creators could agree on an inspection process to expose AI dangers before they're unleashed? Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year. In the fifth episode of Imagine A World, we explore the fictional worldbuild titled 'To Light'. Our host Guillaume Riesen speaks to Mako Yass, the first-place winner of the FLI Worldbuilding Contest we ran last year. Mako lives in Auckland, New Zealand. He describes himself as a 'stray philosopher-designer', and has a background in computer programming and analytic philosophy. Mako's world is particularly imaginative, with richly interwoven narrative threads and high-concept sci-fi inventions. By 2045, his world has been deeply transformed. There's an AI-designed miracle pill that greatly extends lifespan and eradicates most human diseases. Sachets of this life-saving medicine are distributed freely by dove-shaped drones. There's a kind of mind uploading which lets anyone become whatever they wish, live indefinitely and gain augmented intelligence. The distribution of wealth is almost perfectly even, with every human assigned a share of all resources. Some people move into space, building massive structures around the sun where they practice esoteric arts in pursuit of a more perfect peace. While this peaceful, flourishing end state is deeply optimistic, Mako is also very conscious of the challenges facing humanity along the way. He sees a strong need for global collaboration and investment to avoid catastrophe as humanity develops more and more powerful technologies. He's particularly concerned with the risks presented by artificial intelligence systems as they surpass us. An AI system that is more capable than a human at all tasks - not just playing chess or driving a car - is what we'd call an Artificial General Intelligence - abbreviated 'AGI'. Mako proposes that we could build safe AIs through radical transparency. He imagines tests that could reveal the true intentions and expectations of AI systems before they are released into the world. Please note: This episode explores the ideas created as part of FLI's worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI endorsed positions. Explore this worldbuild: https://worldbuild.ai/to-light The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email [email protected].
You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects. Media and concepts referenced in the episode: https://en.wikipedia.org/wiki/Terra_Ignota https://en.wikipedia.org/wiki/The_Transparent_Society https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer https://en.wikipedia.org/wiki/The_Elephant_in_the_Brain https://en.wikipedia.org/wiki/The_Matrix https://aboutmako.makopool.com/
2023-09-26
Link to episode

Johannes Ackva on Managing Climate Change

Johannes Ackva joins the podcast to discuss the main drivers of climate change and our best technological and governmental options for managing it. You can read more about Johannes' work at http://founderspledge.com/climate Timestamps: 00:00 Johannes's journey as an environmentalist 13:21 The drivers of climate change 23:00 Oil, coal, and gas 38:05 Solar, wind, and hydro 49:34 Nuclear energy 57:03 Geothermal energy 1:00:41 Most promising technologies 1:05:40 Government subsidies 1:13:28 Carbon taxation 1:17:10 Planting trees 1:21:53 Influencing government policy 1:26:39 Different climate scenarios 1:34:49 Economic growth and emissions 1:37:23 Social stability References: Emissions by sector: https://ourworldindata.org/emissions-by-sector Energy density of different energy sources: https://www.nature.com/articles/s41598-022-25341-9 Emissions forecasts: https://www.lse.ac.uk/granthaminstitute/publication/the-unconditional-probability-distribution-of-future-emissions-and-temperatures/ and https://www.science.org/doi/10.1126/science.adg6248 Risk management: https://www.youtube.com/watch?v=6JJvIR1W-xI Carbon pricing: https://www.cell.com/joule/pdf/S2542-4351(18)30567-1.pdf Why not simply plant trees?: https://climate.mit.edu/ask-mit/how-many-new-trees-would-we-need-offset-our-carbon-emissions Deforestation: https://www.science.org/doi/10.1126/science.ade3535 Decoupling of economic growth and emissions: https://www.globalcarbonproject.org/carbonbudget/22/highlights.htm Premature deaths from air pollution: https://www.unep.org/interactives/air-pollution-note/
2023-09-21
Link to episode

Imagine A World: What if we had digital nations untethered to geography?

How do low-income countries affected by climate change imagine their futures? How do they overcome these twin challenges? Will all nations eventually choose or be forced to go digital? Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year. In the fourth episode of Imagine A World, we explore the fictional worldbuild titled 'Digital Nations'. Conrad Whitaker and Tracey Kamande join Guillaume Riesen on 'Imagine a World' to talk about their worldbuild, 'Digital Nations', which they created with their teammate, Dexter Findley. All three worldbuilders were based in Kenya while crafting their entry, though Dexter has just recently moved to the UK. Conrad is a Nairobi-based startup advisor and entrepreneur, Dexter works in humanitarian aid, and Tracey is the co-founder of FunKe Science, a platform that promotes interactive learning of science among school children. As the name suggests, this world is a deep dive into virtual communities. It explores how people might find belonging and representation on the global stage through digital nations that aren't tied to any physical location. This world also features a fascinating and imaginative kind of artificial intelligence that they call 'digital persons'. These are inspired by biological brains and have a rich internal psychology. Rather than being trained on data, they're considered to be raised in digital nurseries. They have a nuanced but mostly loving relationship with humanity, with some even going on to found their own digital nations for us to join. In an incredible turn of events, last year the South Pacific state of Tuvalu was the first to 'go virtual' in response to sea levels threatening the island nation's physical territory. This happened in real life just months after it was written into this imagined world in our worldbuilding contest, showing how rapidly ideas that seem 'out there' can become reality. Will all nations eventually go digital? And might AGIs be assimilated, 'brought up' rather than merely trained, as 'digital people', citizens to live communally alongside humans in these futuristic states? Please note: This episode explores the ideas created as part of FLI's worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI endorsed positions. Explore this worldbuild: https://worldbuild.ai/digital-nations The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email [email protected].
You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects. Media and concepts referenced in the episode: https://www.tuvalu.tv/ https://en.wikipedia.org/wiki/Trolley_problem https://en.wikipedia.org/wiki/Climate_change_in_Kenya https://en.wikipedia.org/wiki/John_von_Neumann https://en.wikipedia.org/wiki/Brave_New_World https://thenetworkstate.com/the-network-state https://en.wikipedia.org/wiki/Culture_series
2023-09-19
Link to episode

Imagine A World: What if global challenges led to more centralization?

What if we had one advanced AI system for the entire world? Would this lead to a world 'beyond' nation states - and do we want this? Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year. In the third episode of Imagine A World, we explore the fictional worldbuild titled 'Core Central'. How does a team of seven academics agree on one cohesive imagined world? That's a question the team behind 'Core Central', a second-place prizewinner in the FLI Worldbuilding Contest, had to figure out as they went along. In the end, this entry's realistic sense of multipolarity and messiness reflects well on its organic formulation. The team settled on one core, centralised AGI system as the governance model for their entire world. This eventually moves their world 'beyond' nation states. Could this really work? In this third episode of 'Imagine a World', Guillaume Riesen speaks to two of the academics in this team, John Burden and Henry Shevlin, representing the team that created 'Core Central'. The full team includes seven members, three of whom (Henry, John and Beba Cibralic) are researchers at the Leverhulme Centre for the Future of Intelligence, University of Cambridge, and five of whom (Jessica Bland, Lara Mani, Clarissa Rios Rojas and Catherine Richards, alongside John) work with the Centre for the Study of Existential Risk, also at Cambridge University. Please note: This episode explores the ideas created as part of FLI's worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI endorsed positions. Explore this imagined world: https://worldbuild.ai/core-central The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email [email protected]. You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects. Media and concepts referenced in the episode: https://en.wikipedia.org/wiki/Culture_series https://en.wikipedia.org/wiki/The_Expanse_(TV_series) https://www.vox.com/authors/kelsey-piper https://en.wikipedia.org/wiki/Gratitude_journal https://en.wikipedia.org/wiki/The_Diamond_Age https://www.scientificamerican.com/article/the-mind-of-an-octopus/ https://en.wikipedia.org/wiki/Global_workspace_theory https://en.wikipedia.org/wiki/Alien_hand_syndrome https://en.wikipedia.org/wiki/Hyperion_(Simmons_novel)
2023-09-12
Link to episode

Tom Davidson on How Quickly AI Could Automate the Economy

Tom Davidson joins the podcast to discuss how AI could quickly automate most cognitive tasks, including AI research, and why this would be risky. Timestamps: 00:00 The current pace of AI 03:58 Near-term risks from AI 09:34 Historical analogies to AI 13:58 AI benchmarks VS economic impact 18:30 AI takeoff speed and bottlenecks 31:09 Tom's model of AI takeoff speed 36:21 How AI could automate AI research 41:49 Bottlenecks to AI automating AI hardware 46:15 How much of AI research is automated now? 48:26 From 20% to 100% automation 53:24 AI takeoff in 3 years 1:09:15 Economic impacts of fast AI takeoff 1:12:51 Bottlenecks slowing AI takeoff 1:20:06 Does the market predict a fast AI takeoff? 1:25:39 "Hard to avoid AGI by 2060" 1:27:22 Risks from AI over the next 20 years 1:31:43 AI progress without more compute 1:44:01 What if AI models fail safety evaluations? 1:45:33 Cybersecurity at AI companies 1:47:33 Will AI turn out well for humanity? 1:50:15 AI and board games
2023-09-08
Link to episode

Imagine A World: What if we designed and built AI in an inclusive way?

How does who is involved in the design of AI affect the possibilities for our future? Why isn't the design of AI inclusive already? Can technology solve all our problems? Can human nature change? Do we want either of these things to happen? Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year. In this second episode of Imagine A World we explore the fictional worldbuild titled 'Crossing Points', a second-place entry in FLI's worldbuilding contest. Joining Guillaume Riesen on the Imagine a World podcast this time are two members of the Crossing Points team, Elaine Czech and Vanessa Hanschke, both academics at the University of Bristol. Elaine has a background in art and design, and is studying the accessibility of technologies for the elderly. Vanessa is studying responsible AI practices of technologists, using methods like storytelling to promote diverse voices in AI research. Their teammates in the contest were Tashi Namgyal, a University of Bristol PhD studying the controllability of deep generative models, Dr. Susan Lechelt, who researches the applications and implications of emerging technologies at the University of Edinburgh, and Nicol Ogston, a British civil servant. There's an emphasis on the unanticipated impacts of new technologies on those who weren't considered during their development. From urban families in Indonesia to anti-technology extremists in America, we're shown that there's something to learn from every human story. This world emphasizes the importance of broadening our lens and empowering marginalized voices in order to build a future that would be bright for more than just a privileged few. The world of Crossing Points looks pretty different from our own, with advanced AIs debating philosophy on TV and hybrid 3D-printed meats and grocery stores. But the people in this world are still basically the same. Our hopes and dreams haven't fundamentally changed, and neither have our blind spots and shortcomings. Crossing Points embraces humanity in all its diversity and looks for the solutions that human nature presents alongside the problems. It shows that there's something to learn from everyone's experience and that even the most radical attitudes can offer insights that help to build a better world. Please note: This episode explores the ideas created as part of FLI's worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI endorsed positions. Explore this worldbuild: https://worldbuild.ai/crossing-points The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email [email protected].
You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects. Works referenced in this episode: https://en.wikipedia.org/wiki/The_Legend_of_Zelda https://en.wikipedia.org/wiki/Ainu_people https://www.goodreads.com/book/show/34846958-radicals http://www.historyofmasks.net/famous-masks/noh-mask/
2023-09-05
Link to episode

Imagine A World: What if new governance mechanisms helped us coordinate?

Are today's democratic systems equipped well enough to create the best possible future for everyone? If they're not, what systems might work better? And are governments around the world taking the destabilizing threats of new technologies seriously enough, or will it take a dramatic event, such as an AI-driven war, to get their act together? Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year. In this first episode of Imagine A World we explore the fictional worldbuild titled 'Peace Through Prophecy'. Host Guillaume Riesen speaks to the makers of 'Peace Through Prophecy', a second-place entry in FLI's Worldbuilding Contest. The worldbuild was created by Jackson Wagner, Diana Gurvich and Holly Oatley. In the episode, Jackson and Holly discuss just a few of the many ideas bubbling around in their imagined future. At its core, this world is arguably about community. It asks how technology might bring us closer together, and allow us to reinvent our social systems. Many roads are explored: a whole garden of governance systems bolstered by Artificial Intelligence and other technologies. Overall, there's a shift towards more intimate and empowered communities. Even the AI systems eventually come to see their emotional and creative potentials realized. While progress is uneven, and littered with many human setbacks, a pretty good case is made for how everyone's best interests can lead us to a more positive future. Please note: This episode explores the ideas created as part of FLI's worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI endorsed positions. Explore this imagined world: https://worldbuild.ai/peace-through-prophecy The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email [email protected]. You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects. Media and concepts referenced in the episode: https://en.wikipedia.org/wiki/Prediction_market https://forum.effectivealtruism.org/ 'Veil of ignorance' thought experiment: https://en.wikipedia.org/wiki/Original_position https://en.wikipedia.org/wiki/Isaac_Asimov https://en.wikipedia.org/wiki/Liquid_democracy https://en.wikipedia.org/wiki/The_Dispossessed https://en.wikipedia.org/wiki/Terra_Ignota https://equilibriabook.com/ https://en.wikipedia.org/wiki/John_Rawls https://en.wikipedia.org/wiki/Radical_transparency https://en.wikipedia.org/wiki/Audrey_Tang https://en.wikipedia.org/wiki/Quadratic_voting#Quadratic_funding
2023-09-05
Link to episode

New: Imagine A World Podcast [TRAILER]

Coming Soon… The year is 2045. Humanity is not extinct, nor living in a dystopia. It has averted climate disaster and major wars. Instead, AI and other new technologies are helping to make the world more peaceful, happy and equal. How? This was what we asked the entrants of our Worldbuilding Contest to imagine last year. Our new podcast series digs deeper into the eight winning entries, their ideas and solutions, the diverse teams behind them and the challenges they faced. You might love some; others you might not choose to inhabit. FLI is not endorsing any one idea. Rather, we hope to grow the conversation about what futures people get excited about. Ask yourself, with each episode, is this a world you'd want to live in? And if not, what would you prefer? Don't miss the first two episodes coming to your feed at the start of September! In the meantime, do explore the winning worlds, if you haven't already: https://worldbuild.ai/
2023-08-29
Link to episode

Robert Trager on International AI Governance and Cybersecurity at AI Companies

Robert Trager joins the podcast to discuss AI governance, the incentives of governments and companies, the track record of international regulation, the security dilemma in AI, cybersecurity at AI companies, and skepticism about AI governance. We also discuss Robert's forthcoming paper International Governance of Civilian AI: A Jurisdictional Certification Approach. You can read more about Robert's work at https://www.governance.ai Timestamps: 00:00 The goals of AI governance 08:38 Incentives of governments and companies 18:58 Benefits of regulatory diversity 28:50 The track record of anticipatory regulation 37:55 The security dilemma in AI 46:20 Offense-defense balance in AI 53:27 Failure rates and international agreements 1:00:33 Verification of compliance 1:07:50 Controlling AI supply chains 1:13:47 Cybersecurity at AI companies 1:21:30 The jurisdictional certification approach 1:28:40 Objections to AI governance
2023-08-20
Link to episode

Jason Crawford on Progress and Risks from AI

Jason Crawford joins the podcast to discuss the history of progress, the future of economic growth, and the relationship between progress and risks from AI. You can read more about Jason's work at https://rootsofprogress.org Timestamps: 00:00 Eras of human progress 06:47 Flywheels of progress 17:56 Main causes of progress 21:01 Progress and risk 32:49 Safety as part of progress 45:20 Slowing down specific technologies? 52:29 Four lenses on AI risk 58:48 Analogies causing disagreement 1:00:54 Solutionism about AI 1:10:43 Insurance, subsidies, and bug bounties for AI risk 1:13:24 How is AI different from other technologies? 1:15:54 Future scenarios of economic growth
2023-07-21
Link to episode

Special: Jaan Tallinn on Pausing Giant AI Experiments

On this special episode of the podcast, Jaan Tallinn talks with Nathan Labenz about Jaan's model of AI risk, the future of AI development, and pausing giant AI experiments. Timestamps: 0:00 Nathan introduces Jaan 4:22 AI safety and Future of Life Institute 5:55 Jaan's first meeting with Eliezer Yudkowsky 12:04 Future of AI evolution 14:58 Jaan's investments in AI companies 23:06 The emerging danger paradigm 26:53 Economic transformation with AI 32:31 AI supervising itself 34:06 Language models and validation 38:49 Lack of insight into evolutionary selection process 41:56 Current estimate for life-ending catastrophe 44:52 Inverse scaling law 53:03 Our luck given the softness of language models 55:07 Future of language models 59:43 The Moore's law of mad science 1:01:45 GPT-5 type project 1:07:43 The AI race dynamics 1:09:43 AI alignment with the latest models 1:13:14 AI research investment and safety 1:19:43 What a six-month pause buys us 1:25:44 AI passing the Turing Test 1:28:16 AI safety and risk 1:32:01 Responsible AI development. 1:40:03 Neuralink implant technology
2023-07-06
Link to episode

Joe Carlsmith on How We Change Our Minds About AI Risk

Joe Carlsmith joins the podcast to discuss how we change our minds about AI risk, gut feelings versus abstract models, and what to do if transformative AI is coming soon. You can read more about Joe's work at https://joecarlsmith.com. Timestamps: 00:00 Predictable updating on AI risk 07:27 Abstract models versus gut feelings 22:06 How Joe began believing in AI risk 29:06 Is AI risk falsifiable? 35:39 Types of skepticisms about AI risk 44:51 Are we fundamentally confused? 53:35 Becoming alienated from ourselves? 1:00:12 What will change people's minds? 1:12:34 Outline of different futures 1:20:43 Humanity losing touch with reality 1:27:14 Can we understand AI sentience? 1:36:31 Distinguishing real from fake sentience 1:39:54 AI doomer epistemology 1:45:23 AI benchmarks versus real-world AI 1:53:00 AI improving AI research and development 2:01:08 What if transformative AI comes soon? 2:07:21 AI safety if transformative AI comes soon 2:16:52 AI systems interpreting other AI systems 2:19:38 Philosophy and transformative AI
2023-06-22
Link to episode

Dan Hendrycks on Why Evolution Favors AIs over Humans

Dan Hendrycks joins the podcast to discuss evolutionary dynamics in AI development and how we could develop AI safely. You can read more about Dan's work at https://www.safe.ai Timestamps: 00:00 Corporate AI race 06:28 Evolutionary dynamics in AI 25:26 Why evolution applies to AI 50:58 Deceptive AI 1:06:04 Competition erodes safety 1:17:40 Evolutionary fitness: humans versus AI 1:26:32 Different paradigms of AI risk 1:42:57 Interpreting AI systems 1:58:03 Honest AI and uncertain AI 2:06:52 Empirical and conceptual work 2:12:16 Losing touch with reality
2023-06-08
Link to episode

Roman Yampolskiy on Objections to AI Safety

Roman Yampolskiy joins the podcast to discuss various objections to AI safety, impossibility results for AI, and how much risk civilization should accept from emerging technologies. You can read more about Roman's work at http://cecs.louisville.edu/ry/ Timestamps: 00:00 Objections to AI safety 15:06 Will robots make AI risks salient? 27:51 Was early AI safety research useful? 37:28 Impossibility results for AI 47:25 How much risk should we accept? 1:01:21 Exponential or S-curve? 1:12:27 Will AI accidents increase? 1:23:56 Will we know who was right about AI? 1:33:33 Difference between AI output and AI model
2023-05-26
Link to episode

Nathan Labenz on How AI Will Transform the Economy

Nathan Labenz joins the podcast to discuss the economic effects of AI on growth, productivity, and employment. We also talk about whether AI might have catastrophic effects on the world. You can read more about Nathan's work at https://www.cognitiverevolution.ai Timestamps: 00:00 Economic transformation from AI 11:15 Productivity increases from technology 17:44 AI effects on employment 28:43 Life without jobs 38:42 Losing contact with reality 42:31 Catastrophic risks from AI 53:52 Scaling AI training runs 1:02:39 Stable opinions on AI?
2023-05-11
Link to episode

Nathan Labenz on the Cognitive Revolution, Red Teaming GPT-4, and Potential Dangers of AI

Nathan Labenz joins the podcast to discuss the cognitive revolution, his experience red teaming GPT-4, and the potential near-term dangers of AI. You can read more about Nathan's work at https://www.cognitiverevolution.ai Timestamps: 00:00 The cognitive revolution 07:47 Red teaming GPT-4 24:00 Coming to believe in transformative AI 30:14 Is AI depth or breadth most impressive? 42:52 Potential near-term dangers from AI
2023-05-04
Link to episode

Maryanna Saenko on Venture Capital, Philanthropy, and Ethical Technology

Maryanna Saenko joins the podcast to discuss how venture capital works, how to fund innovation, and what the fields of investing and philanthropy could learn from each other. You can read more about Maryanna's work at https://future.ventures Timestamps: 00:00 How does venture capital work? 09:01 Failure and success for startups 13:22 Is overconfidence necessary? 19:20 Repeat entrepreneurs 24:38 Long-term investing 30:36 Feedback loops from investments 35:05 Timing investments 38:35 The hardware-software dichotomy 42:19 Innovation prizes 45:43 VC lessons for philanthropy 51:03 Creating new markets 54:01 Investing versus philanthropy 56:14 Technology preying on human frailty 1:00:55 Are good ideas getting harder to find? 1:06:17 Artificial intelligence 1:12:41 Funding ethics research 1:14:25 Is philosophy useful?
2023-04-27
Link to episode

Connor Leahy on the State of AI and Alignment Research

Connor Leahy joins the podcast to discuss the state of AI. Which labs are in front? Which alignment solutions might work? How will the public react to more capable AI? You can read more about Connor's work at https://conjecture.dev Timestamps: 00:00 Landscape of AI research labs 10:13 Is AGI a useful term? 13:31 AI predictions 17:56 Reinforcement learning from human feedback 29:53 Mechanistic interpretability 33:37 Yudkowsky and Christiano 41:39 Cognitive Emulations 43:11 Public reactions to AI
2023-04-20
Link to episode

Connor Leahy on AGI and Cognitive Emulation

Connor Leahy joins the podcast to discuss GPT-4, magic, cognitive emulation, demand for human-like AI, and aligning superintelligence. You can read more about Connor's work at https://conjecture.dev Timestamps: 00:00 GPT-4 16:35 "Magic" in machine learning 27:43 Cognitive emulations 38:00 Machine learning VS explainability 48:00 Human data = human AI? 1:00:07 Analogies for cognitive emulations 1:26:03 Demand for human-like AI 1:31:50 Aligning superintelligence
2023-04-13
Link to episode

Lennart Heim on Compute Governance

Lennart Heim joins the podcast to discuss options for governing the compute used by AI labs and potential problems with this approach to AI safety. You can read more about Lennart's work here: https://heim.xyz/about/ Timestamps: 00:00 Introduction 00:37 AI risk 03:33 Why focus on compute? 11:27 Monitoring compute 20:30 Restricting compute 26:54 Subsidising compute 34:00 Compute as a bottleneck 38:41 US and China 42:14 Unintended consequences 46:50 Will AI be like nuclear energy?
2023-04-06
Link to episode

Lennart Heim on the AI Triad: Compute, Data, and Algorithms

Lennart Heim joins the podcast to discuss how we can forecast AI progress by researching AI hardware. You can read more about Lennart's work here: https://heim.xyz/about/ Timestamps: 00:00 Introduction 01:00 The AI triad 06:26 Modern chip production 15:54 Forecasting AI with compute 27:18 Running out of data? 32:37 Three eras of AI training 37:58 Next chip paradigm 44:21 AI takeoff speeds
2023-03-30
Länk till avsnitt

Liv Boeree on Poker, GPT-4, and the Future of AI

Liv Boeree joins the podcast to discuss poker, GPT-4, human-AI interaction, whether this is the most important century, and building a dataset of human wisdom. You can read more about Liv's work here: https://livboeree.com Timestamps: 00:00 Introduction 00:36 AI in Poker 09:35 Game-playing AI 13:45 GPT-4 and generative AI 26:41 Human-AI interaction 32:05 AI arms race risks 39:32 Most important century? 42:36 Diminishing returns to intelligence? 49:14 Dataset of human wisdom/meaning Social Media Links: WEBSITE: https://futureoflife.org TWITTER: https://twitter.com/FLIxrisk INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/ META: https://www.facebook.com/futureoflifeinstitute LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
2023-03-23
Länk till avsnitt

Liv Boeree on Moloch, Beauty Filters, Game Theory, Institutions, and AI

Liv Boeree joins the podcast to discuss Moloch, beauty filters, game theory, institutional change, and artificial intelligence. You can read more about Liv's work here: https://livboeree.com Timestamps: 00:00 Introduction 01:57 What is Moloch? 04:13 Beauty filters 10:06 Science citations 15:18 Resisting Moloch 20:51 New institutions 26:02 Moloch and WinWin 28:41 Changing systems 33:37 Artificial intelligence 39:14 AI acceleration Social Media Links: WEBSITE: https://futureoflife.org TWITTER: https://twitter.com/FLIxrisk INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/ META: https://www.facebook.com/futureoflifeinstitute LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
2023-03-16
Länk till avsnitt

Tobias Baumann on Space Colonization and Cooperative Artificial Intelligence

Tobias Baumann joins the podcast to discuss suffering risks, space colonization, and cooperative artificial intelligence. You can read more about Tobias' work here: https://centerforreducingsuffering.org. Timestamps: 00:00 Suffering risks 02:50 Space colonization 10:12 Moral circle expansion 19:14 Cooperative artificial intelligence 36:19 Influencing governments 39:34 Can we reduce suffering? Social Media Links: WEBSITE: https://futureoflife.org TWITTER: https://twitter.com/FLIxrisk INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/ META: https://www.facebook.com/futureoflifeinstitute LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
2023-03-09
Länk till avsnitt

Tobias Baumann on Artificial Sentience and Reducing the Risk of Astronomical Suffering

Tobias Baumann joins the podcast to discuss suffering risks, artificial sentience, and the problem of knowing which actions reduce suffering in the long-term future. You can read more about Tobias' work here: https://centerforreducingsuffering.org. Timestamps: 00:00 Introduction 00:52 What are suffering risks? 05:40 Artificial sentience 17:18 Is reducing suffering hopelessly difficult? 26:06 Can we know how to reduce suffering? 31:17 Why are suffering risks neglected? 37:31 How do we avoid accidentally increasing suffering? Social Media Links: WEBSITE: https://futureoflife.org TWITTER: https://twitter.com/FLIxrisk INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/ META: https://www.facebook.com/futureoflifeinstitute LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
2023-03-02
Länk till avsnitt

Neel Nanda on Math, Tech Progress, Aging, Living up to Our Values, and Generative AI

Neel Nanda joins the podcast for a lightning round on mathematics, technological progress, aging, living up to our values, and generative AI. You can find his blog here: https://www.neelnanda.io Timestamps: 00:00 Introduction 00:55 How useful is advanced mathematics? 02:24 Will AI replace mathematicians? 03:28 What are the key drivers of tech progress? 04:13 What scientific discovery would disrupt Neel's worldview? 05:59 How should humanity view aging? 08:03 How can we live up to our values? 10:56 What can we learn from a person who lived 1,000 years ago? 12:05 What should we do after we have aligned AGI? 16:19 What important concept is often misunderstood? 17:22 What is the most impressive scientific discovery? 18:08 Are language models better learning tools than textbooks? 21:22 Should settling Mars be a priority for humanity? 22:44 How can we focus on our work? 24:04 Are human-AI relationships morally okay? 25:18 Are there aliens in the universe? 26:02 What are Neel's favourite books? 27:15 What is an overlooked positive aspect of humanity? 28:33 Should people spend more time prepping for disaster? 30:41 Neel's advice for teens 31:55 How will generative AI evolve over the next five years? 32:56 How much can AIs achieve through a web browser? Social Media Links: WEBSITE: https://futureoflife.org TWITTER: https://twitter.com/FLIxrisk INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/ META: https://www.facebook.com/futureoflifeinstitute LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
2023-02-23
Länk till avsnitt

Neel Nanda on Avoiding an AI Catastrophe with Mechanistic Interpretability

Neel Nanda joins the podcast to talk about mechanistic interpretability and how it can make AI safer. Neel is an independent AI safety researcher. You can find his blog here: https://www.neelnanda.io Timestamps: 00:00 Introduction 00:46 How early is the field of mechanistic interpretability? 03:12 Why should we care about mechanistic interpretability? 06:38 What are some successes in mechanistic interpretability? 16:29 How promising is mechanistic interpretability? 31:13 Is machine learning analogous to evolution? 32:58 How does mechanistic interpretability make AI safer? 36:54 Does mechanistic interpretability help us control AI? 39:57 Will AI models resist interpretation? 43:43 Is mechanistic interpretability fast enough? 54:10 Does mechanistic interpretability give us a general understanding? 57:44 How can you help with mechanistic interpretability? Social Media Links: WEBSITE: https://futureoflife.org TWITTER: https://twitter.com/FLIxrisk INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/ META: https://www.facebook.com/futureoflifeinstitute LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
2023-02-16
Länk till avsnitt

Neel Nanda on What is Going on Inside Neural Networks

Neel Nanda joins the podcast to explain how we can understand neural networks using mechanistic interpretability. Neel is an independent AI safety researcher. You can find his blog here: https://www.neelnanda.io Timestamps: 00:00 Who is Neel? 04:41 How did Neel choose to work on AI safety? 12:57 What does an AI safety researcher do? 15:53 How analogous are digital neural networks to brains? 21:34 Are neural networks like alien beings? 29:13 Can humans think like AIs? 35:00 Can AIs help us discover new physics? 39:56 How advanced is the field of AI safety? 45:56 How did Neel form independent opinions on AI? 48:20 How does AI safety research decrease the risk of extinction? Social Media Links: WEBSITE: https://futureoflife.org TWITTER: https://twitter.com/FLIxrisk INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/ META: https://www.facebook.com/futureoflifeinstitute LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
2023-02-09
Länk till avsnitt

Connor Leahy on Aliens, Ethics, Economics, Memetics, and Education

Connor Leahy from Conjecture joins the podcast for a lightning round on a variety of topics ranging from aliens to education. Learn more about Connor's work at https://conjecture.dev Social Media Links: WEBSITE: https://futureoflife.org TWITTER: https://twitter.com/FLIxrisk INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/ META: https://www.facebook.com/futureoflifeinstitute LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
2023-02-02
Länk till avsnitt

Connor Leahy on AI Safety and Why the World is Fragile

Connor Leahy from Conjecture joins the podcast to discuss AI safety, the fragility of the world, slowing down AI development, regulating AI, and the optimal funding model for AI safety research. Learn more about Connor's work at https://conjecture.dev Timestamps: 00:00 Introduction 00:47 What is the best way to understand AI safety? 09:50 Why is the world relatively stable? 15:18 Is the main worry human misuse of AI? 22:47 Can humanity solve AI safety? 30:06 Can we slow down AI development? 37:13 How should governments regulate AI? 41:09 How do we avoid misallocating AI safety government grants? 51:02 Should AI safety research be done by for-profit companies? Social Media Links: WEBSITE: https://futureoflife.org TWITTER: https://twitter.com/FLIxrisk INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/ META: https://www.facebook.com/futureoflifeinstitute LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
2023-01-26
Länk till avsnitt

Connor Leahy on AI Progress, Chimps, Memes, and Markets

Connor Leahy from Conjecture joins the podcast to discuss AI progress, chimps, memes, and markets. Learn more about Connor's work at https://conjecture.dev Timestamps: 00:00 Introduction 01:00 Defining artificial general intelligence 04:52 What makes humans more powerful than chimps? 17:23 Would AIs have to be social to be intelligent? 20:29 Importing humanity's memes into AIs 23:07 How do we measure progress in AI? 42:39 Gut feelings about AI progress 47:29 Connor's predictions about AGI 52:44 Is predicting AGI soon betting against the market? 57:43 How accurate are prediction markets about AGI?
2023-01-19
Länk till avsnitt

Sean Ekins on Regulating AI Drug Discovery

On this special episode of the podcast, Emilia Javorsky interviews Sean Ekins about regulating AI drug discovery. Timestamps: 00:00 Introduction 00:31 Ethical guidelines and regulation of AI drug discovery 06:11 How do we balance innovation and safety in AI drug discovery? 13:12 Keeping dangerous chemical data safe 21:16 Sean's personal story of voicing concerns about AI drug discovery 32:06 How Sean will continue working on AI drug discovery
2023-01-12
Länk till avsnitt

Sean Ekins on the Dangers of AI Drug Discovery

On this special episode of the podcast, Emilia Javorsky interviews Sean Ekins about the dangers of AI drug discovery. They talk about how Sean discovered an extremely toxic chemical (VX) by reversing an AI drug discovery algorithm. Timestamps: 00:00 Introduction 00:46 Sean's professional journey 03:45 Can computational models replace animal models? 07:24 The risks of AI drug discovery 12:48 Should scientists disclose dangerous discoveries? 19:40 How should scientists handle dual-use technologies? 22:08 Should we open-source potentially dangerous discoveries? 26:20 How do we control autonomous drug creation? 31:36 Surprising chemical discoveries made by black-box AI systems 36:56 How could the dangers of AI drug discovery be mitigated?
2023-01-05
Länk till avsnitt

Anders Sandberg on the Value of the Future

Anders Sandberg joins the podcast to discuss various philosophical questions about the value of the future. Learn more about Anders' work: https://www.fhi.ox.ac.uk Timestamps: 00:00 Introduction 00:54 Humanity as an immature teenager 04:24 How should we respond to our values changing over time? 18:53 How quickly should we change our values? 24:58 Are there limits to what future morality could become? 29:45 Could the universe contain infinite value? 36:00 How do we balance weird philosophy with common sense? 41:36 Lightning round: mind uploading, aliens, interstellar travel, cryonics
2022-12-29
Länk till avsnitt