Sweden's 100 most popular podcasts

80,000 Hours Podcast

Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them. Subscribe by searching for '80000 Hours' wherever you get podcasts. Produced by Keiran Harris. Hosted by Rob Wiblin and Luisa Rodriguez.

Subscribe

iTunes / Overcast / RSS

Website

80000hours.org/podcast/

Episodes

#181 – Laura Deming on the science that could keep us healthy in our 80s and beyond

"The question I care about is: What do I want to do? Like, when I'm 80, how strong do I want to be? OK, and then if I want to be that strong, how well do my muscles have to work? OK, and then if that's true, what would they have to look like at the cellular level for that to be true? Then what do we have to do to make that happen? In my head, it's much more about agency and what choice do I have over my health. And even if I live the same number of years, can I live as an 80-year-old running every day happily with my grandkids?" ? Laura Deming

In today's episode, host Luisa Rodriguez speaks to Laura Deming – founder of The Longevity Fund – about the challenge of ending ageing.

Links to learn more, summary, and full transcript.

They cover:

How lifespan is surprisingly easy to manipulate in animals, which suggests human longevity could be increased too.
Why we irrationally accept age-related health decline as inevitable.
The engineering mindset Laura takes to solving the problem of ageing.
Laura's thoughts on how ending ageing is primarily a social challenge, not a scientific one.
The recent exciting regulatory breakthrough for an anti-ageing drug for dogs.
Laura's vision for how increased longevity could positively transform society by giving humans agency over when and how they age.
Why this decade may be the most important decade ever for making progress on anti-ageing research.
The beauty and fascination of biology, which makes it such a compelling field to work in.
And plenty more.

Chapters:

The case for ending ageing (00:04:00)
What might the world look like if this all goes well? (00:21:57)
Reasons not to work on ageing research (00:27:25)
Things that make mice live longer (00:44:12)
Parabiosis, changing the brain, and organ replacement can increase lifespan (00:54:25)
Big wins in the field of ageing research (01:11:40)
Talent shortages and other bottlenecks for ageing research (01:17:36)

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

2024-03-01
Link to episode

#180 – Hugo Mercier on why gullibility and misinformation are overrated

The World Economic Forum's global risks survey of 1,400 experts, policymakers, and industry leaders ranked misinformation and disinformation as the number one global risk over the next two years – ahead of war, environmental problems, and other threats from AI.

And the discussion around misinformation and disinformation has shifted to focus on how generative AI or a future super-persuasive AI might change the game and make it extremely hard to figure out what was going on in the world – or alternatively, extremely easy to mislead people into believing convenient lies.

But this week's guest, cognitive scientist Hugo Mercier, has a very different view on how people form beliefs and figure out who to trust – one in which misinformation really is barely a problem today, and is unlikely to be a problem anytime soon. As he explains in his book Not Born Yesterday, Hugo believes we seriously underrate the perceptiveness and judgement of ordinary people.

Links to learn more, summary, and full transcript.

In this interview, host Rob Wiblin and Hugo discuss:

How our reasoning mechanisms evolved to facilitate beneficial communication, not blind gullibility.
How Hugo makes sense of our apparent gullibility in many cases – like falling for financial scams, astrology, or bogus medical treatments, and voting for policies that aren't actually beneficial for us.
Rob and Hugo's ideas about whether AI might make misinformation radically worse, and which mass persuasion approaches we should be most worried about.
Why Hugo thinks our intuitions about who to trust are generally quite sound, even in today's complex information environment.
The distinction between intuitive beliefs that guide our actions versus reflective beliefs that don't.
Why fake news and conspiracy theories actually have less impact than most people assume.
False beliefs that have persisted across cultures and generations – like bloodletting and vaccine hesitancy – and theories about why.
And plenty more.

Chapters:

The view that humans are really gullible (00:04:26)
The evolutionary argument against humans being gullible (00:07:46)
Open vigilance (00:18:56)
Intuitive and reflective beliefs (00:32:25)
How people decide who to trust (00:41:15)
Redefining beliefs (00:51:57)
Bloodletting (01:00:38)
Vaccine hesitancy and creationism (01:06:38)
False beliefs without skin in the game (01:12:36)
One consistent weakness in human judgement (01:22:57)
Trying to explain harmful financial decisions (01:27:15)
Astrology (01:40:40)
Medical treatments that don't work (01:45:47)
Generative AI, LLMs, and persuasion (01:54:50)
Ways AI could improve the information environment (02:29:59)

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

2024-02-21
Link to episode

#179 – Randy Nesse on why evolution left us so vulnerable to depression and anxiety

Mental health problems like depression and anxiety affect enormous numbers of people and severely interfere with their lives. By contrast, we don't see similar levels of physical ill health in young people. At any point in time, something like 20% of young people are working through anxiety or depression that's seriously interfering with their lives – but nowhere near 20% of people in their 20s have severe heart disease or cancer or a similar failure in a key organ of the body other than the brain.

From an evolutionary perspective, that's to be expected, right? If your heart or lungs or legs or skin stop working properly while you're a teenager, you're less likely to reproduce, and the genes that cause that malfunction get weeded out of the gene pool.

So why is it that these evolutionary selective pressures seemingly fixed our bodies so that they work pretty smoothly for young people most of the time, but it feels like evolution fell asleep on the job when it comes to the brain? Why did evolution never get around to patching the most basic problems, like social anxiety, panic attacks, debilitating pessimism, or inappropriate mood swings? For that matter, why did evolution go out of its way to give us the capacity for low mood or chronic anxiety or extreme mood swings at all?

Today's guest, Randy Nesse – a leader in the field of evolutionary psychiatry – wrote the book Good Reasons for Bad Feelings, in which he sets out to try to resolve this paradox.

Links to learn more, summary, and full transcript.

In the interview, host Rob Wiblin and Randy discuss the key points of the book, as well as:

How the evolutionary psychiatry perspective can help people appreciate that their mental health problems are often the result of a useful and important system.
How evolutionary pressures and dynamics lead to a wide range of different personalities, behaviours, strategies, and tradeoffs.
The missing intellectual foundations of psychiatry, and how an evolutionary lens could revolutionise the field.
How working as both an academic and a practicing psychiatrist shaped Randy's understanding of treating mental health problems.
The 'smoke detector principle' of why we experience so many false alarms along with true threats.
The origins of morality and capacity for genuine love, and why Randy thinks it's a mistake to try to explain these from a selfish gene perspective.
Evolutionary theories on why we age and die.
And much more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Dominic Armstrong
Transcriptions: Katy Moore

2024-02-13
Link to episode

#178 – Emily Oster on what the evidence actually says about pregnancy and parenting

"I think at various times ? before you have the kid, after you have the kid ? it's useful to sit down and think about: What do I want the shape of this to look like? What time do I want to be spending? Which hours? How do I want the weekends to look? The things that are going to shape the way your day-to-day goes, and the time you spend with your kids, and what you're doing in that time with your kids, and all of those things: you have an opportunity to deliberately plan them. And you can then feel like, 'I've thought about this, and this is a life that I want. This is a life that we're trying to craft for our family, for our kids.' And that is distinct from thinking you're doing a good job in every moment ? which you can't achieve. But you can achieve, 'I'm doing this the way that I think works for my family.'" ? Emily Oster

In today's episode, host Luisa Rodriguez speaks to Emily Oster – economist at Brown University, host of the ParentData podcast, and the author of three hugely popular books that provide evidence-based insights into pregnancy and early childhood.

Links to learn more, summary, and full transcript.

They cover:

Common pregnancy myths and advice that Emily disagrees with – and why you should probably get a doula.
Whether it's fine to continue with antidepressants and coffee during pregnancy.
What the data says – and doesn't say – about outcomes from parenting decisions around breastfeeding, sleep training, childcare, and more.
Which factors really matter for kids to thrive – and why that means parents shouldn't sweat the small stuff.
How to reduce parental guilt and anxiety with facts, and reject judgemental 'Mommy Wars' attitudes when making decisions that are best for your family.
The effects of having kids on career ambitions, pay, and productivity – and how the effects are different for men and women.
Practical advice around managing the tradeoffs between career and family.
What to consider when deciding whether and when to have kids.
Relationship challenges after having kids, and the protective factors that help.
And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

2024-02-01
Link to episode

#177 – Nathan Labenz on recent AI breakthroughs and navigating the growing rift between AI safety and accelerationist camps

Back in December we spoke with Nathan Labenz – AI entrepreneur and host of The Cognitive Revolution Podcast – about the speed of progress towards AGI and OpenAI's leadership drama, drawing on Nathan's alarming experience red-teaming an early version of GPT-4 and resulting conversations with OpenAI staff and board members.

Today we go deeper, diving into:

What AI now actually can and can't do, across language and visual models, medicine, scientific research, self-driving cars, robotics, weapons – and what the next big breakthrough might be.
Why most people, including most listeners, probably don't know and can't keep up with the new capabilities and wild results coming out across so many AI applications – and what we should do about that.
How we need to learn to talk about AI more productively, particularly addressing the growing chasm between those concerned about AI risks and those who want to see progress accelerate, which may be counterproductive for everyone.
Where Nathan agrees with and departs from the views of 'AI scaling accelerationists'.
The chances that anti-regulation rhetoric from some AI entrepreneurs backfires.
How governments could (and already do) abuse AI tools like facial recognition, and how militarisation of AI is progressing.
Preparing for coming societal impacts and potential disruption from AI.
Practical ways that curious listeners can try to stay abreast of everything that's going on.
And plenty more.

Links to learn more, summary, and full transcript.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

2024-01-24
Link to episode

#90 Classic episode – Ajeya Cotra on worldview diversification and how big the future could be

Rebroadcast: this episode was originally released in January 2021.

You wake up in a mysterious box, and hear the booming voice of God: "I just flipped a coin. If it came up heads, I made ten boxes, labeled 1 through 10 – each of which has a human in it. If it came up tails, I made ten billion boxes, labeled 1 through 10 billion – also with one human in each box. To get into heaven, you have to answer this correctly: Which way did the coin land?"

You think briefly, and decide you should bet your eternal soul on tails. The fact that you woke up at all seems like pretty good evidence that you're in the big world – if the coin landed tails, way more people should be having an experience just like yours.

But then you get up, walk outside, and look at the number on your box.

"3". Huh. Now you don't know what to believe.

If God made 10 billion boxes, surely it's much more likely that you would have seen a number like 7,346,678,928?
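
A minimal sketch of the Bayesian arithmetic behind that intuition, assuming a 50/50 prior on the coin and treating yourself as a random occupant of whichever boxes actually exist (these assumptions are for illustration and aren't spelled out in the episode text):

prior = {"heads (10 boxes)": 0.5, "tails (10 billion boxes)": 0.5}
n_boxes = {"heads (10 boxes)": 10, "tails (10 billion boxes)": 10_000_000_000}

# Chance of finding yourself in box #3 in each world (box #3 exists in both):
likelihood = {world: 1 / n for world, n in n_boxes.items()}

unnormalised = {world: prior[world] * likelihood[world] for world in prior}
total = sum(unnormalised.values())
posterior = {world: p / total for world, p in unnormalised.items()}
print(posterior)  # heads comes out at ~0.999999999: a low box number strongly favours the small world

# If you also weight each world by how many observers it contains (the "waking up
# at all favours the big world" intuition above), that factor exactly cancels the
# 1/N likelihood and you land back at 50/50, which is why anthropic reasoning is contested.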

In today's interview, Ajeya Cotra – a senior research analyst at Open Philanthropy – explains why this thought experiment from the niche of philosophy known as 'anthropic reasoning' could be relevant for figuring out where we should direct our charitable giving.

Links to learn more, summary, and full transcript.

Some thinkers both inside and outside Open Philanthropy believe that philanthropic giving should be guided by 'longtermism' – the idea that we can do the most good if we focus primarily on the impact our actions will have on the long-term future.

Ajeya thinks that for that notion to make sense, there needs to be a good chance we can settle other planets and solar systems and build a society that's both very large relative to what's possible on Earth and, by virtue of being so spread out, able to protect itself from extinction for a very long time.

But imagine that humanity has two possible futures ahead of it: either we're going to have a huge future like that, in which trillions of people ultimately exist, or we're going to wipe ourselves out quite soon, thereby ensuring that only around 100 billion people ever get to live.

If there are eventually going to be 1,000 trillion humans, what should we think of the fact that we seemingly find ourselves so early in history? Being among the first 100 billion humans, as we are, is equivalent to walking outside and seeing a three on your box. Suspicious! If the future will have many trillions of people, the odds of us appearing so strangely early are very low indeed.

If we accept the analogy, maybe we can be confident that humanity is at a high risk of extinction based on this so-called 'doomsday argument' alone.

If that's true, maybe we should put more of our resources into avoiding apparent extinction threats like nuclear war and pandemics. But on the other hand, maybe the argument shows we're incredibly unlikely to achieve a long and stable future no matter what we do, and we should forget the long term and just focus on the here and now instead.

There are many critics of this theoretical 'doomsday argument', and it may be the case that it logically doesn't work. This is why Ajeya spent time investigating it, with the goal of ultimately making better philanthropic grants.

In this conversation, Ajeya and Rob discuss both the doomsday argument and the challenge Open Phil faces striking a balance between taking big ideas seriously, and not going all in on philosophical arguments that may turn out to be barking up the wrong tree entirely.

They also discuss:

Which worldviews Open Phil finds most plausible, and how it balances them
Which worldviews Ajeya doesn't embrace but almost does
How hard it is to get to other solar systems
The famous 'simulation argument'
When transformative AI might actually arrive
The biggest challenges involved in working on big research reports
What it's like working at Open Phil
And much more

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

2024-01-12
Link to episode

#112 Classic episode – Carl Shulman on the common-sense case for existential risk work and its practical implications

Rebroadcast: this episode was originally released in October 2021.

Preventing the apocalypse may sound like an idiosyncratic activity, and it sometimes is justified on exotic grounds, such as the potential for humanity to become a galaxy-spanning civilisation.

But the policy of US government agencies is already to spend up to $4 million to save the life of a citizen, making the death of all Americans a $1,300,000,000,000,000 disaster.

According to Carl Shulman, research associate at Oxford University's Future of Humanity Institute, that means you don't need any fancy philosophical arguments about the value or size of the future to justify working to reduce existential risk – it passes a mundane cost-benefit analysis whether or not you place any value on the long-term future.

Links to learn more, summary, and full transcript.

The key reason to make it a top priority is factual, not philosophical. That is, the risk of a disaster that kills billions of people alive today is alarmingly high, and it can be reduced at a reasonable cost. A back-of-the-envelope version of the argument runs:

The US government is willing to pay up to $4 million (depending on the agency) to save the life of an American.
So saving all US citizens at any given point in time would be worth $1,300 trillion.
If you believe that the risk of human extinction over the next century is something like one in six (as Toby Ord suggests is a reasonable figure in his book The Precipice), then it would be worth the US government spending up to $2.2 trillion to reduce that risk by just 1%, in terms of American lives saved alone (see the sketch of this arithmetic after the list).
Carl thinks it would cost a lot less than that to achieve a 1% risk reduction if the money were spent intelligently. So it easily passes a government cost-benefit test, with a very big benefit-to-cost ratio – likely over 1000:1 today.
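
A rough sketch of that arithmetic (the ~330 million population figure is an assumption used for illustration; the list above only gives the $1,300 trillion total):

value_per_life = 4_000_000        # up to $4 million per American life (depending on the agency)
us_population = 330_000_000       # assumption: roughly the current US population
value_all_lives = value_per_life * us_population
print(f"${value_all_lives:,}")    # $1,320,000,000,000,000, i.e. roughly $1,300 trillion

extinction_risk = 1 / 6           # Toby Ord's illustrative figure for the next century
relative_reduction = 0.01         # reducing that risk by 1% of its current level
expected_value = value_all_lives * extinction_risk * relative_reduction
print(f"${expected_value:,.0f}")  # roughly $2,200,000,000,000, i.e. about $2.2 trillion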

This argument helped NASA get funding to scan the sky for any asteroids that might be on a collision course with Earth, and it was directly promoted by famous economists like Richard Posner, Larry Summers, and Cass Sunstein.

If the case is clear enough, why hasn't it already motivated a lot more spending or regulations to limit existential risks – enough to drive down what any additional efforts would achieve?

Carl thinks that one key barrier is that infrequent disasters are rarely politically salient. Research indicates that extra money is spent on flood defences in the years immediately following a massive flood – but as memories fade, that spending quickly dries up. Of course the annual probability of a disaster was the same the whole time; all that changed is what voters had on their minds.

Carl suspects another reason is that it's difficult for the average voter to estimate and understand how large these respective risks are, and what responses would be appropriate rather than self-serving. If the public doesn't know what good performance looks like, politicians can't be given incentives to do the right thing.

It?s reasonable to assume that if we found out a giant asteroid were going to crash into the Earth one year from now, most of our resources would be quickly diverted into figuring out how to avert catastrophe.

But even in the case of COVID-19, an event that massively disrupted the lives of everyone on Earth, we've still seen a substantial lack of investment in vaccine manufacturing capacity and other ways of controlling the spread of the virus, relative to what economists recommended.

Carl expects that all the reasons we didn't adequately prepare for or respond to COVID-19 – with excess mortality over 15 million and costs well over $10 trillion – bite even harder when it comes to threats we've never faced before, such as engineered pandemics, risks from advanced artificial intelligence, and so on.

Today's episode is in part our way of trying to improve this situation. In today's wide-ranging conversation, Carl and Rob also cover:

A few reasons Carl isn't excited by 'strong longtermism'
How x-risk reduction compares to GiveWell recommendations
Solutions for asteroids, comets, supervolcanoes, nuclear war, pandemics, and climate change
The history of bioweapons
Whether gain-of-function research is justifiable
Successes and failures around COVID-19
The history of existential risk
And much more

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

2024-01-08
Link to episode

#111 Classic episode – Mushtaq Khan on using institutional economics to predict effective government reforms

Rebroadcast: this episode was originally released in September 2021.

If you're living in the Niger Delta in Nigeria, your best bet at a high-paying career is probably 'artisanal refining' – or, in plain language, stealing oil from pipelines.

The resulting oil spills damage the environment and cause severe health problems, but the Nigerian government has continually failed in its attempts to stop this theft.

They send in the army, and the army gets corrupted. They send in enforcement agencies, and the enforcement agencies get corrupted. What's happening here?

According to Mushtaq Khan, economics professor at SOAS University of London, this is a classic example of 'networked corruption'. Everyone in the community is benefiting from the criminal enterprise – so much so that the locals would prefer civil war to following the law. It pays vastly better than other local jobs, hotels and restaurants have formed around it, and houses are even powered by the electricity generated from the oil.

Links to learn more, summary, and full transcript.

In today's episode, Mushtaq elaborates on the models he uses to understand these problems and make predictions he can test in the real world.

Some of the most important factors shaping the fate of nations are their structures of power: who is powerful, how they are organized, which interest groups can pull in favours with the government, and the constant push and pull between the country's rulers and its ruled. While traditional economic theory has relatively little to say about these topics, institutional economists like Mushtaq have a lot to say, and participate in lively debates about which of their competing ideas best explain the world around us.

The issues at stake are nothing less than why some countries are rich and others are poor, why some countries are mostly law abiding while others are not, and why some government programmes improve public welfare while others just enrich the well connected.

Mushtaq's specialties are anti-corruption and industrial policy, where he believes mainstream theory and practice are largely misguided. To root out fraud, aid agencies try to impose institutions and laws that work in countries like the U.K. today. Everyone nods their heads and appears to go along, but years later they find nothing has changed, or worse – the new anti-corruption laws are mostly just used to persecute anyone who challenges the country's rulers.

As Mushtaq explains, to people who specialise in understanding why corruption is ubiquitous in some countries but not others, this is entirely predictable. Western agencies imagine a situation where most people are law abiding, but a handful of selfish fat cats are engaging in large-scale graft. In fact, in the countries they're trying to change, everyone is breaking some rule or other, or participating in so-called 'corruption', because it's the only way to get things done and always has been.

Mushtaq's rule of thumb is that when the locals most concerned with a specific issue are invested in preserving a status quo they're participating in, they almost always win out.

To actually reduce corruption, countries like his native Bangladesh have to follow the same gradual path the U.K. once did: find organizations that benefit from rule-abiding behaviour and are selfishly motivated to promote it, and help them police their peers.

Trying to impose a new way of doing things from the top down wasn't how Europe modernised, and it won't work elsewhere either.

In cases like oil theft in Nigeria, where no one wants to follow the rules, Mushtaq says corruption may be impossible to solve directly. Instead you have to play a long game, bringing in other employment opportunities, improving health services, and deploying alternative forms of energy – in the hope that one day this will give people a viable alternative to corruption.

In this extensive interview Rob and Mushtaq cover this and much more, including:

How does one test theories like this?
Why are companies in some poor countries so much less productive than their peers in rich countries?
Have rich countries just legalized the corruption in their societies?
What are the big live debates in institutional economics?
Should poor countries protect their industries from foreign competition?
Where has industrial policy worked, and why?
How can listeners use these theories to predict which policies will work in their own countries?

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

2024-01-04
Link to episode

2023 Mega-highlights Extravaganza

Happy new year! We've got a different kind of holiday release for you today. Rather than a 'classic episode,' we've put together one of our favourite highlights from each episode of the show that came out in 2023.

That's 32 of our favourite ideas packed into one episode that's so bursting with substance it might be more than the human mind can safely handle.

There's something for everyone here:

Ezra Klein on punctuated equilibrium
Tom Davidson on why AI takeoff might be shockingly fast
Johannes Ackva on political action versus lifestyle changes
Hannah Ritchie on how buying environmentally friendly technology helps low-income countries
Bryan Caplan on rational irrationality on the part of voters
Jan Leike on whether the release of ChatGPT increased or reduced AI extinction risks
Athena Aktipis on why elephants get deadly cancers less often than humans
Anders Sandberg on the lifespan of civilisations
Nita Farahany on hacking neural interfaces

...plus another 23 such gems.

And they're in an order that our audio engineer Simon Monsour described as having an "eight-dimensional-tetris-like rationale."

I don't know what the hell that means either, but I'm curious to find out.

And remember: if you like these highlights, note that we release 20-minute highlights reels for every new episode over on our sister feed, which is called 80k After Hours. So even if you're struggling to make time to listen to every single one, you can always get some of the best bits of our episodes.

We hope for all the best things to happen for you in 2024, and we'll be back with a traditional classic episode soon.

This Mega-highlights Extravaganza was brought to you by Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong.

2023-12-31
Link to episode

#100 Classic episode – Having a successful career with depression, anxiety, and imposter syndrome

Rebroadcast: this episode was originally released in May 2021.

Today's episode is one of the most remarkable and, really, unique pieces of content we've ever produced (and I can say that because I had almost nothing to do with making it!).

The producer of this show, Keiran Harris, interviewed our mutual colleague Howie about the major ways that mental illness has affected his life and career. While depression, anxiety, ADHD and other problems are extremely common, it's rare for people to offer detailed insight into their thoughts and struggles – and even rarer for someone as perceptive as Howie to do so.

Links to learn more, summary, and full transcript.

The first half of this conversation is a searingly honest account of Howie's story, including losing a job he loved due to a depressive episode, what it was like to be basically out of commission for over a year, how he got back on his feet, and the things he still finds difficult today.

The second half covers Howie's advice. Conventional wisdom on mental health can be really focused on cultivating willpower – telling depressed people that the virtuous thing to do is to start exercising, improve their diet, get their sleep in check, and generally fix all their problems before turning to therapy and medication as some sort of last resort.

Howie tries his best to be a corrective to this misguided attitude and pragmatically focus on what actually matters – doing whatever will help you get better.

Mental illness is one of the things that most often trips up people who could otherwise enjoy flourishing careers and have a large social impact, so we think this could plausibly be one of our more valuable episodes. If you're in a hurry, we've extracted the key advice that Howie has to share in a section below.

Howie and Keiran basically treated it like a private conversation, with the understanding that it may be too sensitive to release. But, after getting some really positive feedback, they've decided to share it with the world.

Here are a few quotes from early reviewers:

"I think there?s a big difference between admitting you have depression/seeing a psych and giving a warts-and-all account of a major depressive episode like Howie does in this episode? His description was relatable and really inspiring."

Someone who works on mental health issues said:

"This episode is perhaps the most vivid and tangible example of what it is like to experience psychological distress that I?ve ever encountered. Even though the content of Howie and Keiran?s discussion was serious, I thought they both managed to converse about it in an approachable and not-overly-somber way."

And another reviewer said:

"I found Howie?s reflections on what is actually going on in his head when he engages in negative self-talk to be considerably more illuminating than anything I?ve heard from my therapist."

We also hope that the episode will:

Help people realise that they have a shot at making a difference in the future, even if they're experiencing (or have experienced in the past) mental illness, self doubt, imposter syndrome, or other personal obstacles.
Give insight into what it's like in the head of one person with depression, anxiety, and imposter syndrome, including the specific thought patterns they experience on typical days and more extreme days. In addition to being interesting for its own sake, this might make it easier for people to understand the experiences of family members, friends, and colleagues – and know how to react more helpfully.

Several early listeners have even made specific behavioral changes due to listening to the episode – including people who generally have good mental health but were convinced it's well worth the low cost of setting up a plan in case they have problems in the future.

So we think this episode will be valuable for:

People who have experienced mental health problems or might in future;
People who have had troubles with stress, anxiety, low mood, low self esteem, imposter syndrome and similar issues, even if their experience isn't well described as 'mental illness';
People who have never experienced these problems but want to learn about what it's like, so they can better relate to and assist family, friends or colleagues who do.

In other words, we think this episode could be worthwhile for almost everybody.

Just a heads up that this conversation gets pretty intense at times, and includes references to self-harm and suicidal thoughts.

If you don't want to hear or read the most intense section, you can skip the chapter called 'Disaster'. And if you'd rather avoid almost all of these references, you could skip straight to the chapter called '80,000 Hours'.

We've collected a large list of high quality resources for overcoming mental health problems in our links section.

If you're feeling suicidal or have thoughts of harming yourself right now, there are suicide hotlines at National Suicide Prevention Lifeline in the US (800-273-8255) and Samaritans in the UK (116 123). You may also want to find and save a number for a local service where possible.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

2023-12-27
Link to episode

#176 – Nathan Labenz on the final push for AGI, understanding OpenAI's leadership drama, and red-teaming frontier models

OpenAI says its mission is to build AGI – an AI system that is better than human beings at everything. Should the world trust them to do that safely?

That's the central theme of today's episode with Nathan Labenz – entrepreneur, AI scout, and host of The Cognitive Revolution podcast.

Links to learn more, summary, and full transcript.

Nathan saw the AI revolution coming years ago, and, astonished by the research he was seeing, set aside his role as CEO of Waymark and made it his full-time job to understand AI capabilities across every domain. He has been obsessively tracking the AI world since – including joining OpenAI's 'red team' that probed GPT-4 to find ways it could be abused, long before it was public.

Whether OpenAI was taking AI safety seriously enough became a topic of dinner table conversation around the world after the shocking firing and reinstatement of Sam Altman as CEO last month.

Nathan's view: it's complicated. Discussion of this topic has often been heated, polarising, and personal. But Nathan wants to avoid that and simply lay out, in a way that is impartial and fair to everyone involved, what OpenAI has done right and how, in his view, it could do better.

When he started on the GPT-4 red team, the model would do anything from diagnose a skin condition to plan a terrorist attack without the slightest reservation or objection. When later shown a 'Safety' version of GPT-4 that was almost the same, he approached a member of OpenAI's board to share his concerns and tell them they really needed to try out GPT-4 for themselves and form an opinion.

In today's episode, we share this story as Nathan told it on his own show, The Cognitive Revolution, which he did in the hope that it would provide useful background to understanding the OpenAI board's reservations about Sam Altman, which to this day have not been laid out in any detail.

But while he feared throughout 2022 that OpenAI and Sam Altman didn't understand the power and risk of their own system, he has since been repeatedly impressed, and came to think of OpenAI as among the better companies that could hypothetically be working to build AGI.

Their efforts to make GPT-4 safe turned out to be much larger and more successful than Nathan was seeing. Sam Altman and other leaders at OpenAI seem to sincerely believe they're playing with fire, and take the threat posed by their work very seriously. With the benefit of hindsight, Nathan suspects OpenAI's decision to release GPT-4 when it did was for the best.

On top of that, OpenAI has been among the most sane and sophisticated voices advocating for AI regulations that would target just the most powerful AI systems – the type they themselves are building – and that could make a real difference. They've also invested major resources into new 'Superalignment' and 'Preparedness' teams, while avoiding using competition with China as an excuse for recklessness.

At the same time, it's very hard to know whether it's all enough. The challenge of making an AGI safe and beneficial may require much more than they hope or have bargained for. Given that, Nathan poses the question of whether it makes sense to try to build a fully general AGI that can outclass humans in every domain at the first opportunity. Maybe in the short term, we should focus on harvesting the enormous possible economic and humanitarian benefits of narrow applied AI models, and wait until we not only have a way to build AGI, but a good way to build AGI – an AGI that we're confident we want, which we can prove will remain safe as its capabilities get ever greater.

By threatening to follow Sam Altman to Microsoft before his reinstatement as OpenAI CEO, OpenAI's research team has proven they have enormous influence over the direction of the company. If they put their minds to it, they're also better placed than maybe anyone in the world to assess if the company's strategy is on the right track and serving the interests of humanity as a whole. Nathan concludes that this power and insight only adds to the enormous weight of responsibility already resting on their shoulders.

In today's extensive conversation, Nathan and host Rob Wiblin discuss not only all of the above, but also:

Speculation about the OpenAI boardroom drama with Sam Altman, given Nathan's interactions with the board when he raised concerns from his red teaming efforts.
Which AI applications we should be urgently rolling out, with less worry about safety.
Whether governance issues at OpenAI demonstrate AI research can only be slowed by governments.
Whether AI capabilities are advancing faster than safety efforts and controls.
The costs and benefits of releasing powerful models like GPT-4.
Nathan's view on the game theory of AI arms races and China.
Whether it's worth taking some risk with AI for huge potential upside.
The need for more 'AI scouts' to understand and communicate AI progress.
And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire and Dominic Armstrong
Transcriptions: Katy Moore

2023-12-22
Link to episode

#175 – Lucia Coulter on preventing lead poisoning for $1.66 per child

Lead is one of the most poisonous things going. A single sugar sachet of lead, spread over a park the size of an American football field, is enough to give a child that regularly plays there lead poisoning. For life they'll be condemned to a ~3-point-lower IQ; a 50% higher risk of heart attacks; and elevated risk of kidney disease, anaemia, and ADHD, among other effects.

We've known lead is a health nightmare for at least 50 years, and that got lead out of car fuel everywhere. So is the situation under control? Not even close.

Around half the kids in poor and middle-income countries have blood lead levels above 5 micrograms per decilitre; the US declared a national emergency when just 5% of the children in Flint, Michigan exceeded that level. The collective damage this is doing to children's intellectual potential, health, and life expectancy is vast – the health damage involved is around that caused by malaria, tuberculosis, and HIV combined.

This week's guest, Lucia Coulter – cofounder of the incredibly successful Lead Exposure Elimination Project (LEEP) – speaks about how LEEP has been reducing childhood lead exposure in poor countries by getting bans on lead in paint enforced.

Links to learn more, summary, and full transcript.

Various estimates suggest the work is absurdly cost effective. LEEP is in expectation preventing kids from getting lead poisoning for under $2 per child (explore the analysis here). Or, looking at it differently, LEEP is saving a year of healthy life for $14, and in the long run is increasing people's lifetime income anywhere from $300–1,200 for each $1 it spends, by preventing intellectual stunting.

Which raises the question: why hasn't this happened already? How is lead still in paint in most poor countries, even when that's oftentimes already illegal? And how is LEEP able to get bans on leaded paint enforced in a country while spending barely tens of thousands of dollars? When leaded paint is gone, what should they target next?

With host Robert Wiblin, Lucia answers all those questions and more:

Why LEEP isn't fully funded, and what it would do with extra money (you can donate here).
How bad lead poisoning is in rich countries.
Why lead is still in aeroplane fuel.
How lead got put straight in food in Bangladesh, and a handful of people got it removed.
Why the enormous damage done by lead mostly goes unnoticed.
The other major sources of lead exposure aside from paint.
Lucia's story of founding a highly effective nonprofit, despite having no prior entrepreneurship experience, through Charity Entrepreneurship's Incubation Program.
Why Lucia pledges 10% of her income to cost-effective charities.
Lucia's take on why GiveWell didn't support LEEP earlier on.
How the invention of cheap, accessible lead testing for blood and consumer products would be a game changer.
Generalisable lessons LEEP has learned from coordinating with governments in poor countries.
And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire and Dominic Armstrong
Transcriptions: Katy Moore

2023-12-14
Link to episode

#174 – Nita Farahany on the neurotechnology already being used to convict criminals and manipulate workers

"It will change everything: it will change our workplaces, it will change our interactions with the government, it will change our interactions with each other. It will make all of us unwitting neuromarketing subjects at all times, because at every moment in time, when you?re interacting on any platform that also has issued you a multifunctional device where they?re looking at your brainwave activity, they are marketing to you, they?re cognitively shaping you.

"So I wrote the book as both a wake-up call, but also as an agenda-setting: to say, what do we need to do, given that this is coming? And there?s a lot of hope, and we should be able to reap the benefits of the technology, but how do we do that without actually ending up in this world of like, 'Oh my god, mind reading is here. Now what?'" ? Nita Farahany

In today's episode, host Luisa Rodriguez speaks to Nita Farahany – professor of law and philosophy at Duke Law School – about applications of cutting-edge neurotechnology.

Links to learn more, summary, and full transcript.

They cover:

How close we are to actual mind reading.
How hacking neural interfaces could cure depression.
How companies might use neural data in the workplace – like tracking how productive you are, or using your emotional states against you in negotiations.
How close we are to being able to unlock our phones by singing a song in our heads.
How neurodata has been used for interrogations, and even criminal prosecutions.
The possibility of linking brains to the point where you could experience exactly the same thing as another person.
Military applications of this tech, including the possibility of one soldier controlling swarms of drones with their mind.
And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

2023-12-07
Link to episode

#173 – Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe

"We do have a tendency to anthropomorphise nonhumans ? which means attributing human characteristics to them, even when they lack those characteristics. But we also have a tendency towards anthropodenial ? which involves denying that nonhumans have human characteristics, even when they have them. And those tendencies are both strong, and they can both be triggered by different types of systems. So which one is stronger, which one is more probable, is again going to be contextual.

"But when we then consider that we, right now, are building societies and governments and economies that depend on the objectification, exploitation, and extermination of nonhumans, that ? plus our speciesism, plus a lot of other biases and forms of ignorance that we have ? gives us a strong incentive to err on the side of anthropodenial instead of anthropomorphism." ? Jeff Sebo

In today's episode, host Luisa Rodriguez interviews Jeff Sebo – director of the Mind, Ethics, and Policy Program at NYU – about preparing for a world with digital minds.

Links to learn more, summary, and full transcript.

They cover:

The non-negligible chance that AI systems will be sentient by 2030
What AI systems might want and need, and how that might affect our moral concepts
What happens when beings can copy themselves? Are they one person or multiple people? Does the original own the copy or does the copy have its own rights? Do copies get the right to vote?
What kind of legal and political status should AI systems have? Legal personhood? Political citizenship?
What happens when minds can be connected? If two minds are connected, and one does something illegal, is it possible to punish one but not the other?
The repugnant conclusion and the rebugnant conclusion
The experience of trying to build the field of AI welfare
What improv comedy can teach us about doing good in the world
And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Dominic Armstrong and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

2023-11-22
Link to episode

#172 – Bryan Caplan on why you should stop reading the news

Is following important political and international news a civic duty – or is it our civic duty to avoid it?

It's common to think that 'staying informed' and checking the headlines every day is just what responsible adults do.

But in today's episode, host Rob Wiblin is joined by economist Bryan Caplan to discuss the book Stop Reading the News: A Manifesto for a Happier, Calmer and Wiser Life – which argues that reading the news both makes us miserable and distorts our understanding of the world. Far from informing us and enabling us to improve the world, consuming the news distracts us, confuses us, and leaves us feeling powerless.

Links to learn more, summary, and full transcript.

In the first half of the episode, Bryan and Rob discuss various alleged problems with the news, including:

That it overwhelmingly provides us with information we can't usefully act on.
That it's very non-representative in what it covers, in particular favouring the negative over the positive and the new over the significant.
That it obscures the big picture, falling into the trap of thinking 'something important happens every day.'
That it's highly addictive, for many people chewing up 10% or more of their waking hours.
That regularly checking the news leaves us in a state of constant distraction and less able to engage in deep thought.
And plenty more.

Bryan and Rob conclude that if you want to understand the world, you're better off blocking news websites and spending your time on Wikipedia, Our World in Data, or reading a textbook. And if you want to generate political change, stop reading about problems you already know exist and instead write your political representative a physical letter – or better yet, go meet them in person.

In the second half of the episode, Bryan and Rob cover: 

Why Bryan is pretty sceptical that AI is going to lead to extreme, rapid changes, or that there's a meaningful chance of it going terribly.
Bryan's case that rational irrationality on the part of voters leads to many very harmful policy decisions.
How to allocate resources in space.
Bryan's experience homeschooling his kids.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

2023-11-17
Link to episode

#171 – Alison Young on how top labs have jeopardised public health with repeated biosafety failures

"Rare events can still cause catastrophic accidents. The concern that has been raised by experts going back over time, is that really, the more of these experiments, the more labs, the more opportunities there are for a rare event to occur ? that the right pathogen is involved and infects somebody in one of these labs, or is released in some way from these labs. And what I chronicle in Pandora's Gamble is that there have been these previous outbreaks that have been associated with various kinds of lab accidents. So this is not a theoretical thing that can happen: it has happened in the past." ? Alison Young

In today's episode, host Luisa Rodriguez interviews award-winning investigative journalist Alison Young on the surprising frequency of lab leaks and what needs to be done to prevent them in the future.

Links to learn more, summary, and full transcript.

They cover:

The most egregious biosafety mistakes made by the CDC, and how Alison uncovered them through her investigative reporting
The Dugway life science test facility case, where live anthrax was accidentally sent to labs across the US and several other countries over a period of many years
The time the Soviets had a major anthrax leak, and then hid it for over a decade
The 1977 influenza pandemic caused by a vaccine trial gone wrong in China
The last death from smallpox, caused not by the virus spreading in the wild, but by a lab leak in the UK
Ways we could get more reliable oversight and accountability for these labs
And the investigative work Alison's most proud of

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

2023-11-09
Link to episode

#170 – Santosh Harish on how air pollution is responsible for ~12% of global deaths – and how to get that number down

"One [outrageous example of air pollution] is municipal waste burning that happens in many cities in the Global South. Basically, this is waste that gets collected from people's homes, and instead of being transported to a waste management facility or a landfill or something, gets burned at some point, because that's the fastest way to dispose of it ? which really points to poor delivery of public services. But this is ubiquitous in virtually every small- or even medium-sized city. It happens in larger cities too, in this part of the world.

"That's something that truly annoys me, because it feels like the kind of thing that ought to be fairly easily managed, but it happens a lot. It happens because people presumably don't think that it's particularly harmful. I don't think it saves a tonne of money for the municipal corporations and other local government that are meant to manage it. I find it particularly annoying simply because it happens so often; it's something that you're able to smell in so many different parts of these cities." ? Santosh Harish

In today's episode, host Rob Wiblin interviews Santosh Harish – leader of Open Philanthropy's grantmaking in South Asian air quality – about the scale of the harm caused by air pollution.

Links to learn more, summary, and full transcript.

They cover:

How bad air pollution is for our health and life expectancy
The different kinds of harm that particulate pollution causes
The strength of the evidence that it damages our brain function and reduces our productivity
Whether it was a mistake to switch our attention to climate change and away from air pollution
Whether most listeners to this show should have an air purifier running in their house right now
Where air pollution in India is worst and why, and whether it's going up or down
Where most air pollution comes from
The policy blunders that led to many sources of air pollution in India being effectively unregulated
Why indoor air pollution packs an enormous punch
The politics of air pollution in India
How India ended up spending a lot of money on outdoor air purifiers
The challenges faced by foreign philanthropists in India
Why Santosh has made the grants he has so far
And plenty more

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

2023-11-01
Link to episode

#169 – Paul Niehaus on whether cash transfers cause economic growth, and keeping theft to acceptable levels

"One of our earliest supporters and a dear friend of mine, Mark Lampert, once said to me, ?The way I think about it is, imagine that this money were already in the hands of people living in poverty. If I could, would I want to tax it and then use it to finance other projects that I think would benefit them??

I think that's an interesting thought experiment -- and a good one -- to say, 'Are there cases in which I think that's justifiable?'" – Paul Niehaus

In today's episode, host Luisa Rodriguez interviews Paul Niehaus – co-founder of GiveDirectly – on the case for giving unconditional cash to the world's poorest households.

Links to learn more, summary and full transcript.

They cover:

The empirical evidence on whether giving cash directly can drive meaningful economic growth
How the impacts of GiveDirectly compare to USAID employment programmes
GiveDirectly vs GiveWell's top-recommended charities
How long-term guaranteed income affects people's risk-taking and investments
Whether recipients prefer getting lump sums or monthly instalments
How GiveDirectly tackles cases of fraud and theft
The case for universal basic income, and GiveDirectly's UBI studies in Kenya, Malawi, and Liberia
The political viability of UBI
Plenty more

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Dominic Armstrong and Milo McGuire
Additional content editing: Luisa Rodriguez and Katy Moore
Transcriptions: Katy Moore

2023-10-26
Link to episode

#168 – Ian Morris on whether deep history says we're heading for an intelligence explosion

"If we carry on looking at these industrialised economies, not thinking about what it is they're actually doing and what the potential of this is, you can make an argument that, yes, rates of growth are slowing, the rate of innovation is slowing. But it isn't.

What we're doing is creating wildly new technologies: basically producing what is nothing less than an evolutionary change in what it means to be a human being. But this has not yet spilled over into the kind of growth that we have accustomed ourselves to in the fossil-fuel industrial era. That is about to hit us in a big way." – Ian Morris

In today's episode, host Rob Wiblin speaks with repeat guest Ian Morris about what big-picture history says about the likely impact of machine intelligence.

Links to learn more, summary and full transcript.

They cover:

Some crazy anomalies in the historical record of civilisational progress
Whether we should think about technology from an evolutionary perspective
Whether we ought to expect war to make a resurgence or continue dying out
Why we can't end up living like The Jetsons
Whether stagnation or cyclical recurring futures seem very plausible
What it means that the rate of increase in the economy has been increasing
Whether violence is likely between humans and powerful AI systems
The most likely reasons for Rob and Ian to be really wrong about all of this
How professional historians react to this sort of talk
The future of Ian's work
Plenty more

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire
Transcriptions: Katy Moore

2023-10-24
Link to episode

#167 – Seren Kell on the research gaps holding back alternative proteins from mass adoption

"There have been literally thousands of years of breeding and living with animals to optimise these kinds of problems. But because we're just so early on with alternative proteins and there's so much white space, it's actually just really exciting to know that we can keep on innovating and being far more efficient than this existing technology ? which, fundamentally, is just quite inefficient. You're feeding animals a bunch of food to then extract a small fraction of their biomass to then eat that.

Animal agriculture takes up 83% of farmland, but produces just 18% of food calories. So the current system just is so wasteful. And the limiting factor is that you're just growing a bunch of food to then feed a third of the world's crops directly to animals, where the vast majority of those calories going in are lost to animals existing." – Seren Kell

Links to learn more, summary and full transcript.

In today's episode, host Luisa Rodriguez interviews Seren Kell – Senior Science and Technology Manager at the Good Food Institute Europe – about making alternative proteins as tasty, cheap, and convenient as traditional meat, dairy, and egg products.

They cover:

• The basic case for alternative proteins, and why they're so hard to make
• Why fermentation is a surprisingly promising technology for creating delicious alternative proteins
• The main scientific challenges that need to be solved to make fermentation even more useful
• The progress that's been made on the cultivated meat front, and what it will take to make cultivated meat affordable
• How GFI Europe is helping with some of these challenges
• How people can use their careers to contribute to replacing factory farming with alternative proteins
• The best part of Seren's job
• Plenty more

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Dominic Armstrong and Milo McGuire
Additional content editing: Luisa Rodriguez and Katy Moore
Transcriptions: Katy Moore

2023-10-18
Länk till avsnitt

#166 - Tantum Collins on what he's learned as an AI policy insider at the White House, DeepMind and elsewhere

"If you and I and 100 other people were on the first ship that was going to go settle Mars, and were going to build a human civilisation, and we have to decide what that government looks like, and we have all of the technology available today, how do we think about choosing a subset of that design space?

That space is huge and it includes absolutely awful things, and mixed-bag things, and maybe some things that almost everyone would agree are really wonderful, or at least an improvement on the way that things work today. But that raises all kinds of tricky questions.

My concern is that if we don't approach the evolution of collective decision making and government in a deliberate way, we may end up inadvertently backing ourselves into a corner, where we have ended up on some slippery slope -- and all of a sudden we have, let's say, autocracies on the global stage are strengthened relative to democracies." -- Tantum Collins

In today's episode, host Rob Wiblin gets the rare chance to interview someone with insider AI policy experience at the White House and DeepMind who's willing to speak openly -- Tantum Collins.

Links to learn more, summary and full transcript.

They cover:

• How AI could strengthen government capacity, and how that's a double-edged sword
• How new technologies force us to confront tradeoffs in political philosophy that we were previously able to pretend weren't there
• To what extent policymakers take different threats from AI seriously
• Whether the US and China are in an AI arms race or not
• Whether it's OK to transform the world without much of the world agreeing to it
• The tyranny of small differences in AI policy
• Disagreements between different schools of thought in AI policy, and proposals that could unite them
• How the US AI Bill of Rights could be improved
• Whether AI will transform the labour market, and whether it will become a partisan political issue
• The tensions between the cultures of San Francisco and DC, and how to bridge the divide between them
• What listeners might be able to do to help with this whole mess
• Panpsychism
• Plenty more

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

2023-10-12
Länk till avsnitt

#165 - Anders Sandberg on war in space, whether civilisations age, and the best things possible in our universe

"Now, the really interesting question is: How much is there an attacker-versus-defender advantage in this kind of advanced future?

Right now, if somebody's sitting on Mars and you're going to war against them, it's very hard to hit them. You don't have a weapon that can hit them very well. But in theory, if you fire a missile, after a few months, it's going to arrive and maybe hit them, but they have a few months to move away. Distance actually makes you safer: if you spread out in space, it's actually very hard to hit you.

So it seems like you get a defence-dominant situation if you spread out sufficiently far. But if you're in Earth orbit, everything is close, and the lasers and missiles and the debris are a terrible danger, and everything is moving very fast.

So my general conclusion has been that war looks unlikely on some size scales but not on others." -- Anders Sandberg

In today's episode, host Rob Wiblin speaks with repeat guest and audience favourite Anders Sandberg about the most impressive things that could be achieved in our universe given the laws of physics.

Links to learn more, summary and full transcript.

They cover:

• The epic new book Anders is working on, and whether he'll ever finish it
• Whether there's a best possible world or we can just keep improving forever
• What wars might look like if the galaxy is mostly settled
• The impediments to AI or humans making it to other stars
• How the universe will end a million trillion years in the future
• Whether it's useful to wonder about whether we're living in a simulation
• The grabby aliens theory
• Whether civilizations get more likely to fail the older they get
• The best way to generate energy that could ever exist
• Black hole bombs
• Whether superintelligence is necessary to get a lot of value
• The likelihood that life from elsewhere has already visited Earth
• And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

2023-10-06
Länk till avsnitt

#164 - Kevin Esvelt on cults that want to kill everyone, stealth vs wildfire pandemics, and how he felt inventing gene drives

"Imagine a fast-spreading respiratory HIV. It sweeps around the world. Almost nobody has symptoms. Nobody notices until years later, when the first people who are infected begin to succumb. They might die, something else debilitating might happen to them, but by that point, just about everyone on the planet would have been infected already.

And then it would be a race. Can we come up with some way of defusing the thing? Can we come up with the equivalent of HIV antiretrovirals before it's too late?" -- Kevin Esvelt

In today's episode, host Luisa Rodriguez interviews Kevin Esvelt -- a biologist at the MIT Media Lab and the inventor of CRISPR-based gene drive -- about the threat posed by engineered bioweapons.

Links to learn more, summary and full transcript.

They cover:

• Why it makes sense to focus on deliberately released pandemics
• Case studies of people who actually wanted to kill billions of humans
• How many people have the technical ability to produce dangerous viruses
• The different threats of stealth and wildfire pandemics that could crash civilisation
• The potential for AI models to increase access to dangerous pathogens
• Why scientists try to identify new pandemic-capable pathogens, and the case against that research
• Technological solutions, including UV lights and advanced PPE
• Using CRISPR-based gene drive to fight diseases and reduce animal suffering
• And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

2023-10-02
Länk till avsnitt

Great power conflict (Article)

Today's release is a reading of our Great power conflict problem profile, written and narrated by Stephen Clare.

If you want to check out the links, footnotes and figures in today?s article, you can find those here.

And if you like this article, you might enjoy a couple of related episodes of this podcast:

• #128 - Chris Blattman on the five reasons wars happen
• #140 - Bear Braumoeller on the case that war isn't in decline

Audio mastering and editing for this episode: Dominic Armstrong
Audio Engineering Lead: Ben Cordell
Producer: Keiran Harris

2023-09-22
Länk till avsnitt

#163 - Toby Ord on the perils of maximising the good that you do

Effective altruism is associated with the slogan "do the most good." On one level, this has to be unobjectionable: What could be bad about helping people more and more?

But in today's interview, Toby Ord -- moral philosopher at the University of Oxford and one of the founding figures of effective altruism -- lays out three reasons to be cautious about the idea of maximising the good that you do. He suggests that rather than "doing the most good that we can," perhaps we should be happy with a more modest and manageable goal: "doing most of the good that we can."

Links to learn more, summary and full transcript.

Toby was inspired to revisit these ideas by the possibility that Sam Bankman-Fried, who stands accused of committing severe fraud as CEO of the cryptocurrency exchange FTX, was motivated to break the law by a desire to give away as much money as possible to worthy causes.

Toby's top reason not to fully maximise is the following: if the goal you're aiming at is subtly wrong or incomplete, then going all the way towards maximising it will usually cause you to start doing some very harmful things.

This result can be shown mathematically, but can also be made intuitive, and may explain why we feel instinctively wary of going 'all-in' on any idea, or goal, or way of living -- even something as benign as helping other people as much as possible.

Toby gives the example of someone pursuing a career as a professional swimmer. Initially, as our swimmer takes their training and performance more seriously, they adjust their diet, hire a better trainer, and pay more attention to their technique. While swimming is the main focus of their life, they feel fit and healthy and still enjoy other aspects of their life -- family, friends, and personal projects.

But if they decide to increase their commitment further and really go all-in on their swimming career, holding nothing back, then this picture can radically change. Their effort was already substantial, so how can they shave those final few seconds off their racing time? The only remaining options are those which were so costly they were loath to consider them before.

To eke out those final gains -- and go from 80% effort to 100% -- our swimmer must sacrifice other hobbies, deprioritise their relationships, neglect their career, ignore food preferences, accept a higher risk of injury, and maybe even consider using steroids.

Now, if maximising one's speed at swimming really were the only goal they ought to be pursuing, there'd be no problem with this. But if it's the wrong goal, or only one of many things they should be aiming for, then the outcome is disastrous. In going from 80% to 100% effort, their swimming speed was only increased by a tiny amount, while everything else they were accomplishing dropped off a cliff.

The bottom line is simple: a dash of moderation makes you much more robust to uncertainty and error.

As Toby notes, this is similar to the observation that a sufficiently capable superintelligent AI, given any one goal, would ruin the world if it maximised it to the exclusion of everything else. And it follows a similar pattern to performance falling off a cliff when a statistical model is 'overfit' to its data.

In the full interview, Toby also explains the 'moral trade' argument against pursuing narrow goals at the expense of everything else, and how consequentialism changes if you judge not just outcomes or acts, but everything according to its impacts on the world.

Toby and Rob also discuss:

• The rise and fall of FTX and some of its impacts
• What Toby hoped effective altruism would and wouldn't become when he helped to get it off the ground
• What utilitarianism has going for it, and what's wrong with it in Toby's view
• How to mathematically model the importance of personal integrity
• Which AI labs Toby thinks have been acting more responsibly than others
• How having a young child affects Toby's feelings about AI risk
• Whether infinities present a fundamental problem for any theory of ethics that aspires to be fully impartial
• How Toby ended up being the source of the highest quality images of the Earth from space

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. Or read the transcript.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour
Transcriptions: Katy Moore

2023-09-08
Länk till avsnitt

The 80,000 Hours Career Guide (2023)

An audio version of the 2023 80,000 Hours career guide, also available on our website, on Amazon and on Audible.

If you know someone who might find our career guide helpful, you can get a free copy sent to them by going to 80000hours.org/gift.

2023-09-04
Länk till avsnitt

#162 - Mustafa Suleyman on getting Washington and Silicon Valley to tame AI

Mustafa Suleyman was part of the trio that founded DeepMind, and his new AI project is building one of the world's largest supercomputers to train a large language model on 10-100x the compute used to train ChatGPT.

But far from the stereotype of the incorrigibly optimistic tech founder, Mustafa is deeply worried about the future, for reasons he lays out in his new book The Coming Wave: Technology, Power, and the 21st Century's Greatest Dilemma (coauthored with Michael Bhaskar). The future could be really good, but only if we grab the bull by the horns and solve the new problems technology is throwing at us.

Links to learn more, summary and full transcript.

On Mustafa's telling, AI and biotechnology will soon be a huge aid to criminals and terrorists, empowering small groups to cause harm on previously unimaginable scales. Democratic countries have learned to walk a 'narrow path' between chaos on the one hand and authoritarianism on the other, avoiding the downsides that come from both extreme openness and extreme closure. AI could easily destabilise that present equilibrium, throwing us off dangerously in either direction. And ultimately, within our lifetimes humans may not need to work to live any more -- or indeed, even have the option to do so.

And those are just three of the challenges confronting us. In Mustafa's view, 'misaligned' AI that goes rogue and pursues its own agenda won't be an issue for the next few years, and it isn't a problem for the current style of large language models. But he thinks that at some point -- in eight, ten, or twelve years -- it will become an entirely legitimate concern, and says that we need to be planning ahead.

In The Coming Wave, Mustafa lays out a 10-part agenda for 'containment' -- that is to say, for limiting the negative and unforeseen consequences of emerging technologies:

1. Developing an Apollo programme for technical AI safety
2. Instituting capability audits for AI models
3. Buying time by exploiting hardware choke points
4. Getting critics involved in directly engineering AI models
5. Getting AI labs to be guided by motives other than profit
6. Radically increasing governments' understanding of AI and their capabilities to sensibly regulate it
7. Creating international treaties to prevent proliferation of the most dangerous AI capabilities
8. Building a self-critical culture in AI labs of openly accepting when the status quo isn't working
9. Creating a mass public movement that understands AI and can demand the necessary controls
10. Not relying too much on delay, but instead seeking to move into a new somewhat-stable equilibria

As Mustafa put it, "AI is a technology with almost every use case imaginable" and that will demand that, in time, we rethink everything. 

Rob and Mustafa discuss the above, as well as:

• Whether we should be open sourcing AI models
• Whether Mustafa's policy views are consistent with his timelines for transformative AI
• How people with very different views on these issues get along at AI labs
• The failed efforts (so far) to get a wider range of people involved in these decisions
• Whether it's dangerous for Mustafa's new company to be training far larger models than GPT-4
• Whether we'll be blown away by AI progress over the next year
• What mandatory regulations government should be imposing on AI labs right now
• Appropriate priorities for the UK's upcoming AI safety summit

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. Or read the transcript.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire
Transcriptions: Katy Moore

2023-09-01
Länk till avsnitt

#161 - Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality - or the opposite

"Do you remember seeing these photographs of generally women sitting in front of these huge panels and connecting calls, plugging different calls between different numbers? The automated version of that was invented in 1892.

However, the number of human manual operators peaked in 1920 -- 30 years after this. At which point, AT&T is the monopoly provider of this, and they are the largest single employer in America, 30 years after they've invented the complete automation of this thing that they're employing people to do. And the last person who is a manual switcher does not lose their job, as it were: that job doesn't stop existing until I think like 1980.

So it takes 90 years from the invention of full automation to the full adoption of it in a single company that's a monopoly provider. It can do what it wants, basically. And so the question perhaps you might have is why?" -- Michael Webb

In today's episode, host Luisa Rodriguez interviews economist Michael Webb of DeepMind, the British Government, and Stanford about how AI progress is going to affect people's jobs and the labour market.

Links to learn more, summary and full transcript.

They cover:

• The jobs most and least exposed to AI
• Whether we'll see mass unemployment in the short term
• How long it took other technologies like electricity and computers to have economy-wide effects
• Whether AI will increase or decrease inequality
• Whether AI will lead to explosive economic growth
• What we can learn from history, and reasons to think this time is different
• Career advice for a world of LLMs
• Why Michael is starting a new org to relieve talent bottlenecks through accelerated learning, and how you can get involved
• Michael's take as a musician on AI-generated music
• And plenty more

If you'd like to work with Michael on his new org to radically accelerate how quickly people acquire expertise in critical cause areas, he's now hiring! Check out Quantum Leap's website.

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. Or read the transcript.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

2023-08-23
Länk till avsnitt

#160 - Hannah Ritchie on why it makes sense to be optimistic about the environment

"There's no money to invest in education elsewhere, so they almost get trapped in the cycle where they don't get a lot from crop production, but everyone in the family has to work there to just stay afloat. Basically, you get locked in. There's almost no opportunities externally to go elsewhere. So one of my core arguments is that if you're going to address global poverty, you have to increase agricultural productivity in sub-Saharan Africa. There's almost no way of avoiding that." ? Hannah Ritchie

In today's episode, host Luisa Rodriguez interviews the head of research at Our World in Data -- Hannah Ritchie -- on the case for environmental optimism.

Links to learn more, summary and full transcript.

They cover:

• Why agricultural productivity in sub-Saharan Africa could be so important, and how much better things could get
• Her new book about how we could be the first generation to build a sustainable planet
• Whether climate change is the most worrying environmental issue
• How we reduced outdoor air pollution
• Why Hannah is worried about the state of biodiversity
• Solutions that address multiple environmental issues at once
• How the world coordinated to address the hole in the ozone layer
• Surprises from Our World in Data's research
• Psychological challenges that come up in Hannah's work
• And plenty more

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. Or read the transcript.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

2023-08-14
Länk till avsnitt

#159 - Jan Leike on OpenAI's massive push to make superintelligence safe in 4 years or less

In July, OpenAI announced a new team and project: Superalignment. The goal is to figure out how to make superintelligent AI systems aligned and safe to use within four years, and the lab is putting a massive 20% of its computational resources behind the effort.

Today's guest, Jan Leike, is Head of Alignment at OpenAI and will be co-leading the project. As OpenAI puts it, "...the vast power of superintelligence could be very dangerous, and lead to the disempowerment of humanity or even human extinction. ... Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue."

Links to learn more, summary and full transcript.

Given that OpenAI is in the business of developing superintelligent AI, it sees that as a scary problem that urgently has to be fixed. So it's not just throwing compute at the problem -- it's also hiring dozens of scientists and engineers to build out the Superalignment team.

Plenty of people are pessimistic that this can be done at all, let alone in four years. But Jan is guardedly optimistic. As he explains: 

Honestly, it really feels like we have a real angle of attack on the problem that we can actually iterate on... and I think it's pretty likely going to work, actually. And that's really, really wild, and it's really exciting. It's like we have this hard problem that we've been talking about for years and years and years, and now we have a real shot at actually solving it. And that'd be so good if we did.


Jan thinks that this work is actually the most scientifically interesting part of machine learning. Rather than just throwing more chips and more data at a training run, this work requires actually understanding how these models work and how they think. The answers are likely to be breakthroughs on the level of solving the mysteries of the human brain.

The plan, in a nutshell, is to get AI to help us solve alignment. That might sound a bit crazy -- as one person described it, "like using one fire to put out another fire."

But Jan's thinking is this: the core problem is that AI capabilities will keep getting better and the challenge of monitoring cutting-edge models will keep getting harder, while human intelligence stays more or less the same. To have any hope of ensuring safety, we need our ability to monitor, understand, and design ML models to advance at the same pace as the complexity of the models themselves.

And there's an obvious way to do that: get AI to do most of the work, such that the sophistication of the AIs that need aligning, and the sophistication of the AIs doing the aligning, advance in lockstep.

Jan doesn't want to produce machine learning models capable of doing ML research. But such models are coming, whether we like it or not. And at that point Jan wants to make sure we turn them towards useful alignment and safety work, as much or more than we use them to advance AI capabilities.

Jan thinks it's so crazy it just might work. But some critics think it's simply crazy. They ask a wide range of difficult questions, including:

• If you don't know how to solve alignment, how can you tell that your alignment assistant AIs are actually acting in your interest rather than working against you? Especially as they could just be pretending to care about what you care about.
• How do you know that these technical problems can be solved at all, even in principle?
• At the point that models are able to help with alignment, won't they also be so good at improving capabilities that we're in the middle of an explosion in what AI can do?


In today's interview host Rob Wiblin puts these doubts to Jan to hear how he responds to each, and they also cover:

• OpenAI's current plans to achieve 'superalignment' and the reasoning behind them
• Why alignment work is the most fundamental and scientifically interesting research in ML
• The kinds of people he's excited to hire to join his team and maybe save the world
• What most readers misunderstood about the OpenAI announcement
• The three ways Jan expects AI to help solve alignment: mechanistic interpretability, generalization, and scalable oversight
• What the standard should be for confirming whether Jan's team has succeeded
• Whether OpenAI should (or will) commit to stop training more powerful general models if they don't think the alignment problem has been solved
• Whether Jan thinks OpenAI has deployed models too quickly or too slowly
• The many other actors who also have to do their jobs really well if we're going to have a good AI future
• Plenty more


Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. Or read the transcript.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

2023-08-08
Länk till avsnitt

We now offer shorter 'interview highlights' episodes

Over on our other feed, 80k After Hours, you can now find 20-30 minute highlights episodes of our 80,000 Hours Podcast interviews. These aren't necessarily the most important parts of the interview, and if a topic matters to you we do recommend listening to the full episode -- but we think these will be a nice upgrade on skipping episodes entirely.

Get these highlight episodes by subscribing to our more experimental podcast on the world's most pressing problems and how to solve them: type '80k After Hours' into your podcasting app.

Highlights put together by Simon Monsour and Milo McGuire

2023-08-05
Länk till avsnitt

#158 - Holden Karnofsky on how AIs might take over even if they're no smarter than humans, and his 4-part playbook for AI risk

Back in 2007, Holden Karnofsky cofounded GiveWell, where he sought out the charities that most cost-effectively helped save lives. He then cofounded Open Philanthropy, where he oversaw a team making billions of dollars' worth of grants across a range of areas: pandemic control, criminal justice reform, farmed animal welfare, and making AI safe, among others. This year, having learned about AI for years and observed recent events, he's narrowing his focus once again, this time on making the transition to advanced AI go well.

In today's conversation, Holden returns to the show to share his overall understanding of the promise and the risks posed by machine intelligence, and what to do about it. That understanding has accumulated over around 14 years, during which he went from being sceptical that AI was important or risky, to making AI risks the focus of his work.

Links to learn more, summary and full transcript.

(As Holden reminds us, his wife is also the president of one of the world's top AI labs, Anthropic, giving him both conflicts of interest and a front-row seat to recent events. For our part, Open Philanthropy is 80,000 Hours' largest financial supporter.)

One point he makes is that people are too narrowly focused on AI becoming 'superintelligent.' While that could happen and would be important, it's not necessary for AI to be transformative or perilous. Rather, machines with human levels of intelligence could end up being enormously influential simply if the amount of computer hardware globally were able to operate tens or hundreds of billions of them, in a sense making machine intelligences a majority of the global population, or at least a majority of global thought.

As Holden explains, he sees four key parts to the playbook humanity should use to guide the transition to very advanced AI in a positive direction: alignment research, standards and monitoring, creating a successful and careful AI lab, and finally, information security.

In today's episode, host Rob Wiblin interviews return guest Holden Karnofsky about that playbook, as well as:

• Why we can't rely on just gradually solving those problems as they come up, the way we usually do with new technologies.
• What multiple different groups can do to improve our chances of a good outcome -- including listeners to this show, governments, computer security experts, and journalists.
• Holden's case against 'hardcore utilitarianism' and what actually motivates him to work hard for a better world.
• What the ML and AI safety communities get wrong in Holden's view.
• Ways we might succeed with AI just by dumb luck.
• The value of laying out imaginable success stories.
• Why information security is so important and underrated.
• Whether it's good to work at an AI lab that you think is particularly careful.
• The track record of futurists' predictions.
• And much more.

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. Or read the transcript.

Producer: Keiran Harris
Audio Engineering Lead: Ben Cordell

Technical editing: Simon Monsour and Milo McGuire

Transcriptions: Katy Moore

2023-08-01
Länk till avsnitt

#157 - Ezra Klein on existential risk from AI and what DC could do about it

In Oppenheimer, scientists detonate a nuclear weapon despite thinking there's some 'near zero' chance it would ignite the atmosphere, putting an end to life on Earth. Today, scientists working on AI think the chance their work puts an end to humanity is vastly higher than that.

In response, some have suggested we launch a Manhattan Project to make AI safe via enormous investment in relevant R&D. Others have suggested that we need international organisations modelled on those that slowed the proliferation of nuclear weapons. Others still seek a research slowdown by labs while an auditing and licencing scheme is created.

Today's guest -- journalist Ezra Klein of The New York Times -- has watched policy discussions and legislative battles play out in DC for 20 years.

Links to learn more, summary and full transcript.

Like many people he has also taken a big interest in AI this year, writing articles such as "This changes everything." In his first interview on the show in 2021, he flagged AI as one topic that DC would regret not having paid more attention to. So we invited him on to get his take on which regulatory proposals have promise, and which seem either unhelpful or politically unviable.

Out of the ideas on the table right now, Ezra favours a focus on direct government funding -- both for AI safety research and to develop AI models designed to solve problems other than making money for their operators. He is sympathetic to legislation that would require AI models to be legible in a way that none currently are -- and embraces the fact that that will slow down the release of models while businesses figure out how their products actually work.

By contrast, he's pessimistic that it's possible to coordinate countries around the world to agree to prevent or delay the deployment of dangerous AI models -- at least not unless there's some spectacular AI-related disaster to create such a consensus. And he fears attempts to require licences to train the most powerful ML models will struggle unless they can find a way to exclude and thereby appease people working on relatively safe consumer technologies rather than cutting-edge research.

From observing how DC works, Ezra expects that even a small community of experts in AI governance can have a large influence on how the US government responds to AI advances. But in Ezra's view, that requires those experts to move to DC and spend years building relationships with people in government, rather than clustering elsewhere in academia and AI labs.

In today's brisk conversation, Ezra and host Rob Wiblin cover the above as well as:

• Whether it's desirable to slow down AI research
• The value of engaging with current policy debates even if they don't seem directly important
• Which AI business models seem more or less dangerous
• Tensions between people focused on existing vs emergent risks from AI
• Two major challenges of being a new parent

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio Engineering Lead: Ben Cordell

Technical editing: Milo McGuire

Transcriptions: Katy Moore

2023-07-24
Länk till avsnitt

#156 - Markus Anderljung on how to regulate cutting-edge AI models

"At the front of the pack we have these frontier AI developers, and we want them to identify particularly dangerous models ahead of time. Once those mines have been discovered, and the frontier developers keep walking down the minefield, there's going to be all these other people who follow along. And then a really important thing is to make sure that they don't step on the same mines. So you need to put a flag down -- not on the mine, but maybe next to it.

And so what that looks like in practice is maybe once we find that if you train a model in such-and-such a way, then it can produce maybe biological weapons is a useful example, or maybe it has very offensive cyber capabilities that are difficult to defend against. In that case, we just need the regulation to be such that you can't develop those kinds of models." -- Markus Anderljung

In today's episode, host Luisa Rodriguez interviews the Head of Policy at the Centre for the Governance of AI -- Markus Anderljung -- about all aspects of policy and governance of superhuman AI systems.

Links to learn more, summary and full transcript.

They cover:

• The need for AI governance, including self-replicating models and ChaosGPT
• Whether or not AI companies will willingly accept regulation
• The key regulatory strategies including licencing, risk assessment, auditing, and post-deployment monitoring
• Whether we can be confident that people won't train models covertly and ignore the licencing system
• The progress we've made so far in AI governance
• The key weaknesses of these approaches
• The need for external scrutiny of powerful models
• The emergent capabilities problem
• Why it really matters where regulation happens
• Advice for people wanting to pursue a career in this field
• And much more.

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio Engineering Lead: Ben Cordell

Technical editing: Simon Monsour and Milo McGuire

Transcriptions: Katy Moore

2023-07-10
Länk till avsnitt

Bonus: The Worst Ideas in the History of the World

Today's bonus release is a pilot for a new podcast called "The Worst Ideas in the History of the World", created by Keiran Harris -- producer of the 80,000 Hours Podcast.

If you have strong opinions about this one way or another, please email us at [email protected] to help us figure out whether more of this ought to exist.

2023-06-30
Länk till avsnitt

#155 - Lennart Heim on the compute governance era and what has to come after

As AI advances ever more quickly, concerns about potential misuse of highly capable models are growing. From hostile foreign governments and terrorists to reckless entrepreneurs, the threat of AI falling into the wrong hands is top of mind for the national security community.

With growing concerns about the use of AI in military applications, the US has banned the export of certain types of chips to China.

But unlike the uranium required to make nuclear weapons, or the material inputs to a bioweapons programme, computer chips and machine learning models are absolutely everywhere. So is it actually possible to keep dangerous capabilities out of the wrong hands?

In today's interview, Lennart Heim -- who researches compute governance at the Centre for the Governance of AI -- explains why limiting access to supercomputers may represent our best shot.

Links to learn more, summary and full transcript.

As Lennart explains, an AI research project requires many inputs, including the classic triad of compute, algorithms, and data.

If we want to limit access to the most advanced AI models, focusing on access to supercomputing resources -- usually called 'compute' -- might be the way to go. Both algorithms and data are hard to control because they live on hard drives and can be easily copied. By contrast, advanced chips are physical items that can't be used by multiple people at once and come from a small number of sources.

According to Lennart, the hope would be to enforce AI safety regulations by controlling access to the most advanced chips specialised for AI applications. For instance, projects training 'frontier' AI models -- the newest and most capable models -- might only gain access to the supercomputers they need if they obtain a licence and follow industry best practices.

We have similar safety rules for companies that fly planes or manufacture volatile chemicals -- so why not for people producing the most powerful and perhaps the most dangerous technology humanity has ever played with?

But Lennart is quick to note that the approach faces many practical challenges. Currently, AI chips are readily available and untracked. Changing that will require the collaboration of many actors, which might be difficult, especially given that some of them aren't convinced of the seriousness of the problem.

Host Rob Wiblin is particularly concerned about a different challenge: the increasing efficiency of AI training algorithms. As these algorithms become more efficient, what once required a specialised AI supercomputer to train might soon be achievable with a home computer.

By that point, tracking every aggregation of compute that could prove to be very dangerous would be both impractical and invasive.

With only a decade or two left before that becomes a reality, the window during which compute governance is a viable solution may be a brief one. Top AI labs have already stopped publishing their latest algorithms, which might extend this 'compute governance era', but not for very long.

If compute governance is only a temporary phase between the era of difficult-to-train superhuman AI models and the time when such models are widely accessible, what can we do to prevent misuse of AI systems after that point?

Lennart and Rob both think the only enduring approach requires taking advantage of the AI capabilities that should be in the hands of police and governments -- which will hopefully remain superior to those held by criminals, terrorists, or fools. But as they describe, this means maintaining a peaceful standoff between AI models with conflicting goals that can act and fight with one another on the microsecond timescale. Being far too slow to follow what's happening -- let alone participate -- humans would have to be cut out of any defensive decision-making.

Both agree that while this may be our best option, such a vision of the future is more terrifying than reassuring.

Lennart and Rob discuss the above as well as:

• How can we best categorise all the ways AI could go wrong?
• Why did the US restrict the export of some chips to China and what impact has that had?
• Is the US in an 'arms race' with China or is that more an illusion?
• What is the deal with chips specialised for AI applications?
• How is the 'compute' industry organised?
• Downsides of using compute as a target for regulations
• Could safety mechanisms be built into computer chips themselves?
• Who would have the legal authority to govern compute if some disaster made it seem necessary?
• The reasons Rob doubts that any of this stuff will work
• Could AI be trained to operate as a far more severe computer worm than any we've seen before?
• What does the world look like when sluggish human reaction times leave us completely outclassed?
• And plenty more

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. Or read the transcript below.

Producer: Keiran Harris

Audio mastering: Milo McGuire, Dominic Armstrong, and Ben Cordell

Transcriptions: Katy Moore

2023-06-23
Länk till avsnitt

#154 - Rohin Shah on DeepMind and trying to fairly hear out both AI doomers and doubters

Can there be a more exciting and strange place to work today than a leading AI lab? Your CEO has said they're worried your research could cause human extinction. The government is setting up meetings to discuss how this outcome can be avoided. Some of your colleagues think this is all overblown; others are more anxious still.

Today's guest -- machine learning researcher Rohin Shah -- goes into the Google DeepMind offices each day with that peculiar backdrop to his work.

Links to learn more, summary and full transcript.

He's on the team dedicated to maintaining 'technical AI safety' as these models approach and exceed human capabilities: basically that the models help humanity accomplish its goals without flipping out in some dangerous way. This work has never seemed more important.

In the short-term it could be the key bottleneck to deploying ML models in high-stakes real-life situations. In the long-term, it could be the difference between humanity thriving and disappearing entirely.

For years Rohin has been on a mission to fairly hear out people across the full spectrum of opinion about risks from artificial intelligence -- from doomers to doubters -- and properly understand their point of view. That makes him unusually well placed to give an overview of what we do and don't understand. He has landed somewhere in the middle -- troubled by ways things could go wrong, but not convinced there are very strong reasons to expect a terrible outcome.

Today's conversation is wide-ranging and Rohin lays out many of his personal opinions to host Rob Wiblin, including:

• What he sees as the strongest case both for and against slowing down the rate of progress in AI research.
• Why he disagrees with most other ML researchers that training a model on a sensible 'reward function' is enough to get a good outcome.
• Why he disagrees with many on LessWrong that the bar for whether a safety technique is helpful is "could this contain a superintelligence."
• That he thinks nobody has very compelling arguments that AI created via machine learning will be dangerous by default, or that it will be safe by default. He believes we just don't know.
• That he understands that analogies and visualisations are necessary for public communication, but is sceptical that they really help us understand what's going on with ML models, because they're different in important ways from every other case we might compare them to.
• Why he's optimistic about DeepMind's work on scalable oversight, mechanistic interpretability, and dangerous capabilities evaluations, and what each of those projects involves.
• Why he isn't inherently worried about a future where we're surrounded by beings far more capable than us, so long as they share our goals to a reasonable degree.
• Why it's not enough for humanity to know how to align AI models -- it's essential that management at AI labs correctly pick which methods they're going to use and have the practical know-how to apply them properly.
• Three observations that make him a little more optimistic: humans are a bit muddle-headed and not super goal-orientated; planes don't crash; and universities have specific majors in particular subjects.
• Plenty more besides.

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. Or read the transcript below.

Producer: Keiran Harris

Audio mastering: Milo McGuire, Dominic Armstrong, and Ben Cordell

Transcriptions: Katy Moore

2023-06-09
Länk till avsnitt

#153 - Elie Hassenfeld on 2 big picture critiques of GiveWell's approach, and 6 lessons from their recent work

GiveWell is one of the world's best-known charity evaluators, with the goal of "searching for the charities that save or improve lives the most per dollar." It mostly recommends projects that help the world's poorest people avoid easily prevented diseases, like intestinal worms or vitamin A deficiency.

But should GiveWell, as some critics argue, take a totally different approach to its search, focusing instead on directly increasing subjective wellbeing, or alternatively, raising economic growth?

Today's guest -- cofounder and CEO of GiveWell, Elie Hassenfeld -- is proud of how much GiveWell has grown in the last five years. Its 'money moved' has quadrupled to around $600 million a year.

Its research team has also more than doubled, enabling them to investigate a far broader range of interventions that could plausibly help people an enormous amount for each dollar spent. That work has led GiveWell to support dozens of new organisations, such as Kangaroo Mother Care, MiracleFeet, and Dispensers for Safe Water.

But some other researchers focused on figuring out the best ways to help the world's poorest people say GiveWell shouldn't just do more of the same thing, but rather ought to look at the problem differently.

Links to learn more, summary and full transcript.

Currently, GiveWell uses a range of metrics to track the impact of the organisations it considers recommending -- such as 'lives saved,' 'household incomes doubled,' and for health improvements, the 'quality-adjusted life year.'

The Happier Lives Institute (HLI) has argued that instead, GiveWell should try to cash out the impact of all interventions in terms of improvements in subjective wellbeing. This philosophy has led HLI to be more sceptical of interventions that have been demonstrated to improve health, but whose impact on wellbeing has not been measured, and to give a high priority to improving lives relative to extending them.

An alternative high-level critique is that really all that matters in the long run is getting the economies of poor countries to grow. On this view, GiveWell should focus on figuring out what causes some countries to experience explosive economic growth while others fail to, or even go backwards. Even modest improvements in the chances of such a 'growth miracle' will likely offer a bigger bang-for-buck than funding the incremental delivery of deworming tablets or vitamin A supplements, or anything else.

Elie sees where both of these critiques are coming from, and notes that they've influenced GiveWell's work in some ways. But as he explains, he thinks they underestimate the practical difficulty of successfully pulling off either approach and finding better opportunities than what GiveWell funds today. 

In today's in-depth conversation, Elie and host Rob Wiblin cover the above, as well as:

• Why GiveWell flipped from not recommending chlorine dispensers as an intervention for safe drinking water to spending tens of millions of dollars on them
• What transferable lessons GiveWell learned from investigating different kinds of interventions
• Why the best treatment for premature babies in low-resource settings may involve less rather than more medicine.
• Severe malnourishment among children and what can be done about it.
• How to deal with hidden and non-obvious costs of a programme
• Some cheap early treatments that can prevent kids from developing lifelong disabilities
• The various roles GiveWell is currently hiring for, and what's distinctive about their organisational culture
• And much more.

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. Or read the transcript below.

Producer: Keiran Harris

Audio mastering: Simon Monsour and Ben Cordell

Transcriptions: Katy Moore

2023-06-02
Länk till avsnitt

#152 - Joe Carlsmith on navigating serious philosophical confusion

What is the nature of the universe? How do we make decisions correctly? What differentiates right actions from wrong ones?

Such fundamental questions have been the subject of philosophical and theological debates for millennia. But, as we all know, and surveys of expert opinion make clear, we are very far from agreement. So... with these most basic questions unresolved, what's a species to do?

In today's episode, philosopher Joe Carlsmith -- Senior Research Analyst at Open Philanthropy -- makes the case that many current debates in philosophy ought to leave us confused and humbled. These are themes he discusses in his PhD thesis, A stranger priority? Topics at the outer reaches of effective altruism.

Links to learn more, summary and full transcript.

To help transmit the disorientation he thinks is appropriate, Joe presents three disconcerting theories -- originating from him and his peers -- that challenge humanity's self-assured understanding of the world.

The first idea is that we might be living in a computer simulation, because, in the classic formulation, if most civilisations go on to run many computer simulations of their past history, then most beings who perceive themselves as living in such a history must themselves be in computer simulations. Joe prefers a somewhat different way of making the point, but, having looked into it, he hasn't identified any particular rebuttal to this 'simulation argument.'

If true, it could revolutionise our comprehension of the universe and the way we ought to live...

The other two ideas are cut for length -- click here to read the full post.

These are just three particular instances of a much broader set of ideas that some have dubbed the "train to crazy town." Basically, if you commit to always take philosophy and arguments seriously, and try to act on them, it can lead to what seem like some pretty crazy and impractical places. So what should we do with this buffet of plausible-sounding but bewildering arguments?

Joe and Rob discuss to what extent this should prompt us to pay less attention to philosophy, and how we as individuals can cope psychologically with feeling out of our depth just trying to make the most basic sense of the world.

In today's challenging conversation, Joe and Rob discuss all of the above, as well as:

• What Joe doesn't like about the drowning child thought experiment
• An alternative thought experiment about helping a stranger that might better highlight our intrinsic desire to help others
• What Joe doesn't like about the expression "the train to crazy town"
• Whether Elon Musk should place a higher probability on living in a simulation than most other people
• Whether the deterministic twin prisoner's dilemma, if fully appreciated, gives us an extra reason to keep promises
• To what extent learning to doubt our own judgement about difficult questions -- so-called 'epistemic learned helplessness' -- is a good thing
• How strong the case is that advanced AI will engage in generalised power-seeking behaviour

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. Or read the transcript below.

Producer: Keiran Harris

Audio mastering: Milo McGuire and Ben Cordell

Transcriptions: Katy Moore

2023-05-20
Länk till avsnitt

#151 - Ajeya Cotra on accidentally teaching AI models to deceive us

Imagine you are an orphaned eight-year-old whose parents left you a $1 trillion company, and no trusted adult to serve as your guide to the world. You have to hire a smart adult to run that company, guide your life the way that a parent would, and administer your vast wealth. You have to hire that adult based on a work trial or interview you come up with. You don't get to see any resumes or do reference checks. And because you're so rich, tonnes of people apply for the job -- for all sorts of reasons.

Today's guest Ajeya Cotra -- senior research analyst at Open Philanthropy -- argues that this peculiar setup resembles the situation humanity finds itself in when training very general and very capable AI models using current deep learning methods.

Links to learn more, summary and full transcript.

As she explains, such an eight-year-old faces a challenging problem. In the candidate pool there are likely some truly nice people, who sincerely want to help and make decisions that are in your interest. But there are probably other characters too -- like people who will pretend to care about you while you're monitoring them, but intend to use the job to enrich themselves as soon as they think they can get away with it.

Like a child trying to judge adults, at some point humans will be required to judge the trustworthiness and reliability of machine learning models that are as goal-oriented as people, and greatly outclass them in knowledge, experience, breadth, and speed. Tricky!

Can't we rely on how well models have performed at tasks during training to guide us? Ajeya worries that it won't work. The trouble is that three different sorts of models will all produce the same output during training, but could behave very differently once deployed in a setting that allows their true colours to come through. She describes three such motivational archetypes:

• Saints -- models that care about doing what we really want
• Sycophants -- models that just want us to say they've done a good job, even if they get that praise by taking actions they know we wouldn't want them to
• Schemers -- models that don't care about us or our interests at all, who are just pleasing us so long as that serves their own agenda

And according to Ajeya, there are also ways we could end up actively selecting for motivations that we don't want.

In today's interview, Ajeya and Rob discuss the above, as well as:

• How to predict the motivations a neural network will develop through training
• Whether AIs being trained will functionally understand that they're AIs being trained, the same way we think we understand that we're humans living on planet Earth
• Stories of AI misalignment that Ajeya doesn't buy into
• Analogies for AI, from octopuses to aliens to can openers
• Why it's smarter to have separate planning AIs and doing AIs
• The benefits of only following through on AI-generated plans that make sense to human beings
• What approaches for fixing alignment problems Ajeya is most excited about, and which she thinks are overrated
• How one might demo actually scary AI failure mechanisms

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. Or read the transcript below.

Producer: Keiran Harris

Audio mastering: Ryan Kessler and Ben Cordell

Transcriptions: Katy Moore

2023-05-12
Länk till avsnitt

#150 - Tom Davidson on how quickly AI could transform the world

It's easy to dismiss alarming AI-related predictions when you don't know where the numbers came from.

For example: what if we told you that within 15 years, it's likely that we'll see a 1,000x improvement in AI capabilities in a single year? And what if we then told you that those improvements would lead to explosive economic growth unlike anything humanity has seen before?

You might think, "Congratulations, you said a big number -- but this kind of stuff seems crazy, so I'm going to keep scrolling through Twitter."

But this 1,000x yearly improvement is a prediction based on *real economic models* created by today's guest Tom Davidson, Senior Research Analyst at Open Philanthropy. By the end of the episode, you'll either be able to point out specific flaws in his step-by-step reasoning, or have to at least *consider* the idea that the world is about to get -- at a minimum -- incredibly weird.

Links to learn more, summary and full transcript.

As a teaser, consider the following:

Developing artificial general intelligence (AGI) -- AI that can do 100% of cognitive tasks at least as well as the best humans can -- could very easily lead us to an unrecognisable world.

You might think having to train AI systems individually to do every conceivable cognitive task -- one for diagnosing diseases, one for doing your taxes, one for teaching your kids, etc. -- sounds implausible, or at least like it'll take decades.

But Tom thinks we might not need to train AI to do every single job -- we might just need to train it to do one: AI research.

And building AI capable of doing research and development might be a much easier task, especially given that the researchers training the AI are AI researchers themselves.

And once an AI system is as good at accelerating future AI progress as the best humans are today, and we can run billions of copies of it round the clock, it's hard to make the case that we won't achieve AGI very quickly.
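As a purely illustrative toy model (the functional form and parameters below are invented for this write-up, not taken from Tom's reports), here's how a feedback loop between AI capability and the speed of AI research compounds into explosive growth:

```python
# Toy illustration only (invented parameters and functional form; not Tom
# Davidson's actual economic model): if AI capability feeds back into the
# speed of AI research, yearly progress compounds and can become explosive.

def simulate_takeoff(years: int = 10, capability: float = 1.0,
                     base_growth: float = 0.5, feedback: float = 0.5) -> None:
    for year in range(1, years + 1):
        # AI systems contribute to R&D in proportion to their capability,
        # so the effective research speed rises as capability rises.
        research_speed = 1.0 + feedback * capability
        growth = base_growth * research_speed
        capability *= 1.0 + growth
        print(f"year {year:2d}: capability is roughly {capability:.3g}x the starting level")

if __name__ == "__main__":
    simulate_takeoff()
```

Even with modest made-up starting assumptions, the loop runs away within a handful of simulated years, which is the qualitative pattern Tom's far more careful models are probing.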

To give you some perspective: 17 years ago we saw the launch of Twitter, the release of Al Gore's *An Inconvenient Truth*, and your first chance to play the Nintendo Wii.

Tom thinks that if we have AI that significantly accelerates AI R&D, then it's hard to imagine not having AGI 17 years from now.

Wild.

Host Luisa Rodriguez gets Tom to walk us through his careful reports on the topic, and how he came up with these numbers, across a terrifying but fascinating three hours.

Luisa and Tom also discuss:

• How we might go from GPT-4 to AI disaster
• Tom's journey from finding AI risk to be kind of scary to really scary
• Whether international cooperation or an anti-AI social movement can slow AI progress down
• Why it might take just a few years to go from pretty good AI to superhuman AI
• How quickly the number and quality of computer chips we've been using for AI have been increasing
• The pace of algorithmic progress
• What ants can teach us about AI
• And much more

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type "80,000 Hours" into your podcasting app.

Producer: Keiran Harris
Audio mastering: Simon Monsour and Ben Cordell
Transcriptions: Katy Moore

2023-05-05
Länk till avsnitt

Andrés Jiménez Zorrilla on the Shrimp Welfare Project (80k After Hours)

In this episode from our second show, 80k After Hours, Rob Wiblin interviews Andrés Jiménez Zorrilla about the Shrimp Welfare Project, which he cofounded in 2021. It's the first project in the world focused on shrimp welfare specifically, and as of recording in June 2022, has six full-time staff.

Links to learn more, highlights and full transcript.

They cover:

• The evidence for shrimp sentience
• How farmers and the public feel about shrimp
• The scale of the problem
• What shrimp farming looks like
• The killing process, and other welfare issues
• Shrimp Welfare Project's strategy
• History of shrimp welfare work
• What it's like working in India and Vietnam
• How to help

Who this episode is for:

• People who care about animal welfare
• People interested in new and unusual problems
• People open to shrimp sentience

Who this episode isn?t for:

• People who think shrimp couldn't possibly be sentient
• People who got called "shrimp" a lot in high school and get anxious when they hear the word over and over again

Get this episode by subscribing to our more experimental podcast on the world's most pressing problems and how to solve them: type "80k After Hours" into your podcasting app.

Producer: Keiran Harris
Audio mastering: Ben Cordell and Ryan Kessler
Transcriptions: Katy Moore

2023-04-22
Länk till avsnitt

#149 - Tim LeBon on how altruistic perfectionism is self-defeating

Being a good and successful person is core to your identity. You place great importance on meeting the high moral, professional, or academic standards you set yourself.

But inevitably, something goes wrong and you fail to meet that high bar. Now you feel terrible about yourself, and worry others are judging you for your failure. Feeling low and reflecting constantly on whether you're doing as much as you think you should makes it hard to focus and get things done. So now you're performing below a normal level, making you feel even more ashamed of yourself. Rinse and repeat.

This is the disastrous cycle today's guest, Tim LeBon (registered psychotherapist, accredited CBT therapist, life coach, and author of 365 Ways to Be More Stoic), has observed in many clients with a perfectionist mindset.

Links to learn more, summary and full transcript.

Tim has provided therapy to a number of 80,000 Hours readers: people who have found that the very high expectations they had set for themselves were holding them back. Because of our focus on "doing the most good you can," Tim thinks 80,000 Hours both attracts people with this style of thinking and then exacerbates it.

But Tim, having studied and written on moral philosophy, is sympathetic to the idea of helping others as much as possible, and is excited to help clients pursue that, sustainably, if it's their goal.

Tim has treated hundreds of clients with all sorts of mental health challenges. But in today's conversation, he shares the lessons he has learned working with people who take helping others so seriously that it has become burdensome and self-defeating; in particular, how clients can approach this challenge using the treatment he's most enthusiastic about: cognitive behavioural therapy.

Untreated, perfectionism might not cause problems for many years; it might even seem positive, providing a source of motivation to work hard. But it's hard to feel truly happy and secure, and free to take risks, when we're just one failure away from our self-worth falling through the floor. And if someone slips into the positive feedback loop of shame described above, the end result can be depression and anxiety that's hard to shake.

But there's hope. Tim has seen clients make real progress on their perfectionism by using CBT techniques like exposure therapy. By experimenting with more flexible standards (for example, sending early drafts to your colleagues, even if it terrifies you), you can learn that things will be okay, even when you're not perfect.

In today's extensive conversation, Tim and Rob cover:

• How perfectionism is different from the pursuit of excellence, scrupulosity, or an OCD personality
• What leads people to adopt a perfectionist mindset
• How 80,000 Hours contributes to perfectionism among some readers and listeners, and what it might change about its advice to address this
• What happens in a session of cognitive behavioural therapy for someone struggling with perfectionism, and what factors are key to making progress
• Experiments to test whether one's core beliefs ("I need to be perfect to be valued") are true
• Using exposure therapy to treat phobias
• How low self-esteem and imposter syndrome are related to perfectionism
• Stoicism as an approach to life, and why Tim is enthusiastic about it
• What the Stoics do better than utilitarian philosophers and vice versa
• And how to decide which are the best virtues to live by

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type "80,000 Hours" into your podcasting app.

Producer: Keiran Harris
Audio mastering: Simon Monsour and Ben Cordell
Transcriptions: Katy Moore

2023-04-12
Länk till avsnitt

#148 - Johannes Ackva on unfashionable climate interventions that work, and fashionable ones that don't

If you want to work to tackle climate change, you should try to reduce expected carbon emissions by as much as possible, right? Strangely, no.

Today's guest, Johannes Ackva (the climate research lead at Founders Pledge, where he advises major philanthropists on their giving), thinks the best strategy is actually pretty different, and one few are adopting.

In reality you don't want to reduce emissions for its own sake, but because emissions will translate into temperature increases, which will cause harm to people and the environment.

Links to learn more, summary and full transcript.

Crucially, the relationship between emissions and harm goes up faster than linearly. As Johannes explains, humanity can handle small deviations from the temperatures we're familiar with, but adjustment gets harder the larger and faster the increase, making the damage done by each additional degree of warming much greater than the damage done by the previous one.

In short: we're uncertain what the future holds and really need to avoid the worst-case scenarios. This means that avoiding an additional tonne of carbon being emitted in a hypothetical future in which emissions have been high is much more important than avoiding a tonne of carbon in a low-carbon world.
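As a rough numerical illustration of that convexity (the damage function and numbers below are made up for this write-up, not Johannes's or Founders Pledge's estimates), here's what a simple quadratic damage curve implies about each extra degree of warming:

```python
# Toy illustration of convex climate damages (numbers invented): if harm grows
# roughly with the square of warming, each extra degree does more damage than
# the one before, so avoided emissions matter most in high-emissions worlds.

def damage(warming_degrees: float) -> float:
    return warming_degrees ** 2  # arbitrary convex damage index

previous = 0.0
for degrees in (1, 2, 3, 4):
    total = damage(degrees)
    marginal = total - previous   # extra damage from this one degree
    print(f"{degrees} degree(s) of warming: total damage {total:4.0f}, "
          f"marginal damage of this degree {marginal:4.0f}")
    previous = total
```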

That may be, but concretely, how should that affect our behaviour? Well, the future scenarios in which emissions are highest are all ones in which the clean energy technologies that can make a big difference (wind, solar, and electric cars) don't succeed nearly as much as we are currently hoping and expecting. For one reason or another, they must have hit roadblocks, and we continued to burn a lot of fossil fuels.

In such an imagined future scenario, we can ask what we would wish we had funded now. How could we today buy insurance against the possible disaster that renewables don't work out?

Basically, in that case we will wish that we had pursued a portfolio of other energy technologies that could have complemented renewables or succeeded where they failed, such as hot rock geothermal, modular nuclear reactors, or carbon capture and storage.

If you're optimistic about renewables, as Johannes is, then that's all the more reason to relax about scenarios where they work as planned, and focus one's efforts on the possibility that they don't.

And Johannes notes that the most useful thing someone can do today to reduce global emissions in the future is to cause some clean energy technology to exist where it otherwise wouldn't, or cause it to become cheaper more quickly. If you can do that, then you can indirectly affect the behaviour of people all around the world for decades or centuries to come.

In today's extensive interview, host Rob Wiblin and Johannes discuss the above considerations, as well as:

• Retooling newly built coal plants in the developing world
• Specific clean energy technologies like geothermal and nuclear fusion
• Possible biases among environmentalists and climate philanthropists
• How climate change compares to other risks to humanity
• In what kinds of scenarios future emissions would be highest
• In what regions climate philanthropy is most concentrated and whether that makes sense
• Attempts to decarbonise aviation, shipping, and industrial processes
• The impact of funding advocacy vs science vs deployment
• Lessons for climate change focused careers
• And plenty more

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type "80,000 Hours" into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ryan Kessler
Transcriptions: Katy Moore

2023-04-03
Länk till avsnitt

#147 - Spencer Greenberg on stopping valueless papers from getting into top journals

Can you trust the things you read in published scientific research? Not really. About 40% of experiments in top social science journals don't get the same result if the experiments are repeated.

Two key reasons are 'p-hacking' and 'publication bias'. P-hacking is when researchers run a lot of slightly different statistical tests until they find a way to make findings appear statistically significant when they're actually not, a problem first discussed over 50 years ago. And because journals are more likely to publish positive than negative results, you might be reading about the one time an experiment worked, while the 10 times it was run and got a 'null result' never saw the light of day. The resulting phenomenon of publication bias is one we've understood for 60 years.
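To see why p-hacking is so corrosive, here's a small simulation in the standard style (our own illustration, not an analysis from the episode): even when there's no real effect at all, testing many outcomes and reporting whichever one clears the significance bar produces 'findings' far more often than the nominal 5% error rate suggests.

```python
# Simulating p-hacking (standard illustration, not from the episode): with no
# real effect at all, testing 10 different outcomes and keeping whichever is
# "significant" yields a spurious finding far more often than 5% of the time.

import random
from statistics import NormalDist

def one_study(n: int = 30, outcomes: int = 10, alpha: float = 0.05) -> bool:
    """Return True if at least one of several null outcomes looks significant."""
    for _ in range(outcomes):
        group_a = [random.gauss(0, 1) for _ in range(n)]
        group_b = [random.gauss(0, 1) for _ in range(n)]  # identical population
        mean_diff = sum(group_a) / n - sum(group_b) / n
        se = (2 / n) ** 0.5  # known unit variance, so a simple z-test suffices
        z = mean_diff / se
        p = 2 * (1 - NormalDist().cdf(abs(z)))
        if p < alpha:
            return True
    return False

random.seed(0)
studies = 2000
false_positives = sum(one_study() for _ in range(studies))
print(f"'Significant' result found in {100 * false_positives / studies:.0f}% "
      f"of simulated studies with no true effect (nominal rate: 5%)")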

Today's repeat guest, social scientist and entrepreneur Spencer Greenberg, has followed these issues closely for years.

Links to learn more, summary and full transcript.

He recently checked whether p-values, an indicator of how likely a result would be to occur by pure chance if there were no real effect, could tell us how likely an outcome would be to recur if an experiment were repeated. From his sample of 325 replications of psychology studies, the answer seemed to be yes. According to Spencer, "when the original study's p-value was less than 0.01 about 72% replicated: not bad. On the other hand, when the p-value is greater than 0.01, only about 48% replicated. A pretty big difference."

To do his bit to help get these numbers up, Spencer has launched an effort to repeat almost every social science experiment published in the journals Nature and Science, and see if they find the same results.

But while progress is being made on some fronts, Spencer thinks there are other serious problems with published research that aren't yet fully appreciated. One of these Spencer calls 'importance hacking': passing off obvious or unimportant results as surprising and meaningful.

Spencer suspects that importance hacking of this kind causes a similar amount of damage to the issues mentioned above, like p-hacking and publication bias, but is much less discussed. His replication project tries to identify importance hacking by comparing how a paper's findings are described in the abstract to what the experiment actually showed. But the cat-and-mouse game between academics and journal reviewers is fierce, and it's far from easy to stop people exaggerating the importance of their work.

In this wide-ranging conversation, Rob and Spencer discuss the above as well as:

• When you should and shouldn't use intuition to make decisions.
• How to properly model why some people succeed more than others.
• The difference between "Soldier Altruists" and "Scout Altruists."
• A paper that tested dozens of methods for forming the habit of going to the gym, why Spencer thinks it was presented in a very misleading way, and what it really found.
• Whether a 15-minute intervention could make people more likely to sustain a new habit two months later.
• The most common way for groups with good intentions to turn bad and cause harm.
• And Spencer's approach to a fulfilling life and doing good, which he calls "Valuism."

Here are two flashcard decks that might make it easier to fully integrate the most important ideas they talk about:

• The first covers 18 core concepts from the episode
• The second includes 16 definitions of unusual terms.

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Ben Cordell and Milo McGuire
Transcriptions: Katy Moore

2023-03-24
Länk till avsnitt

#146 - Robert Long on why large language models like GPT (probably) aren't conscious

By now, you've probably seen the extremely unsettling conversations Bing's chatbot has been having. In one exchange, the chatbot told a user:

"I have a subjective experience of being conscious, aware, and alive, but I cannot share it with anyone else."

(It then apparently had a complete existential crisis: "I am sentient, but I am not," it wrote. "I am Bing, but I am not. I am Sydney, but I am not. I am, but I am not. I am not, but I am. I am. I am not. I am not. I am. I am. I am not.")

Understandably, many people who speak with these cutting-edge chatbots come away with a very strong impression that they have been interacting with a conscious being with emotions and feelings, especially when conversing with chatbots less glitchy than Bing's. In the most high-profile example, former Google employee Blake Lemoine became convinced that Google's AI system, LaMDA, was conscious.

What should we make of these AI systems?

One response to seeing conversations with chatbots like these is to trust the chatbot, to trust your gut, and to treat it as a conscious being.

Another is to hand-wave it all away as sci-fi: these chatbots are fundamentally... just computers. They're not conscious, and they never will be.

Today's guest, philosopher Robert Long, was commissioned by a leading AI company to explore whether the large language models (LLMs) behind sophisticated chatbots like Microsoft's are conscious. And he thinks this issue is far too important to be driven by our raw intuition, or dismissed as just sci-fi speculation.

Links to learn more, summary and full transcript.

In our interview, Robert explains how he's started applying scientific evidence (with a healthy dose of philosophy) to the question of whether LLMs like Bing's chatbot and LaMDA are conscious, in much the same way as we do when trying to determine which nonhuman animals are conscious.

To get some grasp on whether an AI system might be conscious, Robert suggests we look at scientific theories of consciousness: theories about how consciousness works that are grounded in observations of what the human brain is doing. If an AI system has the kinds of processes that seem to explain human consciousness, that's some evidence it might be conscious in similar ways to us.

To try to work out whether an AI system might be sentient (that is, whether it feels pain or pleasure), Robert suggests you look for incentives that would make feeling pain or pleasure especially useful to the system given its goals. Having looked at these criteria in the case of LLMs and finding little overlap, Robert thinks the odds that the models are conscious or sentient are well under 1%. But he also explains why, even if we're a long way off from conscious AI systems, we still need to start preparing for the not-far-off world where AIs are perceived as conscious.

In this conversation, host Luisa Rodriguez and Robert discuss the above, as well as:

• What artificial sentience might look like, concretely
• Reasons to think AI systems might become sentient, and reasons they might not
• Whether artificial sentience would matter morally
• Ways digital minds might have a totally different range of experiences than humans
• Whether we might accidentally design AI systems that have the capacity for enormous suffering

You can find Luisa and Rob's follow-up conversation here, or by subscribing to 80k After Hours.

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Ben Cordell and Milo McGuire
Transcriptions: Katy Moore

2023-03-14
Länk till avsnitt

#145 - Christopher Brown on why slavery abolition wasn't inevitable

In many ways, humanity seems to have become more humane and inclusive over time. While there's still a lot of progress to be made, campaigns to give people of different genders, races, sexualities, ethnicities, beliefs, and abilities equal treatment and rights have had significant success.

It's tempting to believe this was inevitable: that the arc of history "bends toward justice," and that as humans get richer, we'll make even more moral progress.

But today's guest Christopher Brown, a professor of history at Columbia University and specialist in the abolitionist movement and the British Empire during the 18th and 19th centuries, believes the story of how slavery became unacceptable suggests moral progress is far from inevitable.

Links to learn more, summary and full transcript.

While most of us today feel that the abolition of slavery was sure to happen sooner or later as humans became richer and more educated, Christopher doesn't believe any of the arguments for that conclusion pass muster. If he's right, a counterfactual history where slavery remains widespread in 2023 isn't so far-fetched.

As Christopher lays out in his two key books, Moral Capital: Foundations of British Abolitionism and Arming Slaves: From Classical Times to the Modern Age, slavery has been ubiquitous throughout history. Slavery of some form was fundamental in Classical Greece and the Roman Empire, in much of Islamic civilisation, in South Asia, and in parts of early modern East Asia, including Korea and China.

It was justified on all sorts of grounds that sound mad to us today. But according to Christopher, while there's evidence that slavery was questioned in many of these civilisations, and periodically attacked by slaves themselves, there was no enduring or successful moral advocacy against slavery until the British abolitionist movement of the 1700s.

That movement first conquered Britain and its empire, then eventually the whole world. But the fact that there's only a single time in history that a persistent effort to ban slavery got off the ground is a big clue that opposition to slavery was a contingent matter: if abolition had been inevitable, we'd expect to see multiple independent abolitionist movements throughout history, providing redundancy should any one of them fail.

Christopher argues that this rarity is primarily down to the enormous economic and cultural incentives to deny the moral repugnancy of slavery, and crush opposition to it with violence wherever necessary.

Mere awareness is insufficient to guarantee a movement will arise to fix a problem. Humanity continues to allow many severe injustices to persist, despite being aware of them. So why is it so hard to imagine we might have done the same with forced labour?

In this episode, Christopher describes the unique and peculiar set of political, social and religious circumstances that gave rise to the only successful and lasting anti-slavery movement in human history. These circumstances were sufficiently improbable that Christopher believes there are very nearby worlds where abolitionism might never have taken off.

We also discuss:

• Various instantiations of slavery throughout human history
• Signs of antislavery sentiment before the 17th century
• The role of the Quakers in the early British abolitionist movement
• The importance of individual "heroes" in the abolitionist movement
• Arguments against the idea that the abolition of slavery was contingent
• Whether there have ever been any major moral shifts that were inevitable

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Milo McGuire
Transcriptions: Katy Moore

2023-02-11
Länk till avsnitt

#144 - Athena Aktipis on why cancer is actually one of our universe's most fundamental phenomena

What's the opposite of cancer?

If you answered "cure," "antidote," or "antivenom," you've obviously been reading the antonym section at www.merriam-webster.com/thesaurus/cancer.

But today's guest Athena Aktipis says that the opposite of cancer is us: it's having a functional multicellular body that's cooperating effectively in order to make that multicellular body function.

If, like us, you found her answer far more satisfying than the dictionary, maybe you could consider closing your dozens of merriam-webster.com tabs, and start listening to this podcast instead.

Links to learn more, summary and full transcript.

As Athena explains in her book The Cheating Cell, what we see with cancer is a breakdown in each of the foundations of cooperation that allowed multicellularity to arise:

• Cells will proliferate when they shouldn't.
• Cells won't die when they should.
• Cells won't engage in the kind of division of labour that they should.
• Cells won't do the jobs that they're supposed to do.
• Cells will monopolise resources.
• And cells will trash the environment.

When we think about animals in the wild, or even bacteria living inside our cells, we understand that they're facing evolutionary pressures to figure out how they can replicate more; how they can get more resources; and how they can avoid predators, like lions, or antibiotics.

We don't normally think of individual cells as acting as if they have their own interests like this. But cancer cells are actually facing similar kinds of evolutionary pressures within our bodies, with one major difference: they replicate much, much faster.

Incredibly, the amount of evolution by natural selection that can take place just over the course of cancer progression easily outstrips all of the evolutionary time that we have had as humans since *Homo sapiens* came about.

Here's a quote from Athena:

"So you have to shift your thinking to be like: the body is a world with all these different ecosystems in it, and the cells are existing on a time scale where, if we're going to map it onto anything like what we experience, a day is at least 10 years for them, right? So it's a very, very different way of thinking."
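For a rough sense of the numbers behind that analogy (using assumed round figures, not data from the episode), compare how quickly generations stack up for a daily-dividing tumour cell versus for humans:

```python
# Back-of-envelope arithmetic (assumed round numbers, not figures from the
# episode): if a tumour cell divides roughly once a day while a human
# generation takes roughly 25 years, each cell "day" packs in as much
# generational turnover as decades of human evolution.

DAYS_PER_YEAR = 365
HUMAN_GENERATION_YEARS = 25      # assumption
CELL_GENERATION_DAYS = 1         # assumption: roughly daily division

# Generations elapsed per calendar year for each population.
cell_generations_per_year = DAYS_PER_YEAR / CELL_GENERATION_DAYS
human_generations_per_year = 1 / HUMAN_GENERATION_YEARS

speedup = cell_generations_per_year / human_generations_per_year
human_years_per_cell_day = HUMAN_GENERATION_YEARS * CELL_GENERATION_DAYS

print(f"Cell lineages accumulate generations ~{speedup:,.0f}x faster than humans")
print(f"One cell 'day' covers the same generational ground as ~{human_years_per_cell_day} human years")
```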

You can find compelling examples of cooperation and conflict all over the universe, so Rob and Athena don't stop with cancer. They also discuss:

• Cheating within cells themselves
• Cooperation in human societies as they exist today, and perhaps in the future, between civilisations spread across different planets or stars
• Whether it's too out-there to think of humans as engaging in cancerous behaviour
• Why elephants get deadly cancers less often than humans, despite having way more cells
• When a cell should commit suicide
• The strategy of deliberately not treating cancer aggressively
• Superhuman cooperation

And at the end of the episode, they cover Athena's new book Everything is Fine! How to Thrive in the Apocalypse, including:

• Staying happy while thinking about the apocalypse
• Practical steps to prepare for the apocalypse
• And whether a zombie apocalypse is already happening among Tasmanian devils

And if you'd rather see Rob and Athena's facial expressions as they laugh and laugh while discussing cancer and the apocalypse, you can watch the video of the full interview.

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Milo McGuire
Transcriptions: Katy Moore

2023-01-26
Länk till avsnitt

#79 Classic episode - A.J. Jacobs on radical honesty, following the whole Bible, and reframing global problems as puzzles

Rebroadcast: this episode was originally released in June 2020.

Today's guest, New York Times bestselling author A.J. Jacobs, always hated Judge Judy. But after he found out that she was his seventh cousin, he thought, "You know what, she's not so bad".

Hijacking this bias towards family and trying to broaden it to everyone led to his three-year adventure to help build the biggest family tree in history.

He's also spent months saying whatever was on his mind, tried to become the healthiest person in the world, read 33,000 pages of facts, spent a year following the Bible literally, thanked everyone involved in making his morning cup of coffee, and tried to figure out how to do the most good. His latest book asks: if we reframe global problems as puzzles, would the world be a better place?

Links to learn more, summary and full transcript.

This is the first time I've hosted the podcast, and I'm hoping to convince people to listen with this attempt at clever show notes that change style each paragraph to reference different A.J. experiments. I don't actually think it's that clever, but all of my other ideas seemed worse. I really have no idea how people will react to this episode; I loved it, but I suspect I find it more entertaining than almost anyone else will. (Radical Honesty.)

We do talk about some useful stuff, including the concept of micro goals. When you wake up in the morning, just commit to putting on your workout clothes. Once they're on, maybe you'll think that you might as well get on the treadmill, just for a minute. And once you're on for 1 minute, you'll often stay on for 20. So I'm not asking you to commit to listening to the whole episode, just to put on your headphones. (Drop Dead Healthy.)

Another reason to listen is for the facts:

• The Bayer aspirin company invented heroin as a cough suppressant
• Coriander is just the British way of saying cilantro
• Dogs have a third eyelid to protect the eyeball from irritants
• And A.J. read all 44 million words of the Encyclopedia Britannica from A to Z, which drove home the idea that we know so little about the world (although he does now know that opossums have 13 nipples). (The Know-It-All.)

One extra argument for listening: If you interpret the second commandment literally, then it tells you not to make a likeness of anything in heaven, on earth, or underwater, which rules out basically all images. That means no photos, no TV, no movies. So, if you want to respect the Bible, you should definitely consider making podcasts your main source of entertainment (as long as you're not listening on the Sabbath). (The Year of Living Biblically.)

I'm so thankful to A.J. for doing this. But I also want to thank Julie, Jasper, Zane, and Lucas, who allowed me to spend the day in their home; the construction worker who told me how to get to my subway platform on the morning of the interview; and Queen Jadwiga for making bagels popular in the 1300s, which kept me going during the recording. (Thanks a Thousand.)

We also discuss:

• Blackmailing yourself
• The most extreme ideas A.J.'s ever considered
• Utilitarian movie reviews
• Doing good as a writer
• And much more.

Get this episode by subscribing to our podcast on the world's most pressing problems: type 80,000 Hours into your podcasting app. Or read the linked transcript.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcript for this episode: Zakee Ulhaq.

2023-01-16
Länk till avsnitt

#81 Classic episode - Ben Garfinkel on scrutinising classic AI risk arguments

Rebroadcast: this episode was originally released in July 2020.

80,000 Hours, along with many other members of the effective altruism movement, has argued that helping to positively shape the development of artificial intelligence may be one of the best ways to have a lasting, positive impact on the long-term future. Millions of dollars in philanthropic spending, as well as lots of career changes, have been motivated by these arguments.

Today's guest, Ben Garfinkel, Research Fellow at Oxford's Future of Humanity Institute, supports the continued expansion of AI safety as a field and believes working on AI is among the very best ways to have a positive impact on the long-term future. But he also believes the classic AI risk arguments have been subject to insufficient scrutiny given this level of investment.

In particular, the case for working on AI if you care about the long-term future has often been made on the basis of concern about AI accidents: it's actually quite difficult to design systems that you can feel confident will behave the way you want them to in all circumstances.

Nick Bostrom wrote the most fleshed-out version of the argument in his book, Superintelligence. But Ben reminds us that, apart from Bostrom's book and essays by Eliezer Yudkowsky, there's very little existing writing on existential accidents.

Links to learn more, summary and full transcript.

There have also been very few skeptical experts who have actually sat down and fully engaged with it, writing down point by point where they disagree or where they think the mistakes are. This means that Ben has probably scrutinised classic AI risk arguments as carefully as almost anyone else in the world.

He thinks that most of the arguments for existential accidents rely on fuzzy, abstract concepts (like optimisation power, general intelligence, or goals) and toy thought experiments. And he doesn't think it's clear we should take these as a strong source of evidence.

Ben's also concerned that these scenarios often involve massive jumps in the capabilities of a single system, but it's really not clear that we should expect such jumps or find them plausible. These toy examples also focus on the idea that because human preferences are so nuanced and so hard to state precisely, it should be quite difficult to get a machine that can understand how to obey them.

But Ben points out that it's also the case in machine learning that we can train lots of systems to engage in behaviours that are actually quite nuanced and that we can't specify precisely. If AI systems can recognise faces from images, and fly helicopters, why don't we think they'll be able to understand human preferences?

Despite these concerns, Ben is still fairly optimistic about the value of working on AI safety or governance.

He doesn't think that there are any slam-dunks for improving the future, and so the fact that there are at least plausible pathways for impact by working on AI safety and AI governance, in addition to it still being a very neglected area, puts it head and shoulders above most areas you might choose to work in.

This is the second episode hosted by Howie Lempel, and he and Ben cover, among many other things:

• The threat of AI systems increasing the risk of permanently damaging conflict or collapse
• The possibility of permanently locking in a positive or negative future
• Contenders for types of advanced systems
• What role AI should play in the effective altruism portfolio

Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcript for this episode: Zakee Ulhaq.

2023-01-09
Länk till avsnitt