Mind the Shift

107. What We Owe the Future – William MacAskill

51 min • 19 June 2023

The human species has been around for some 300,000 years. A typical mammalian species lasts about a million years. We are not typical.

"You might think we are in the middle of history. But given the grand sweep, we are the ancients; we are at the very beginning of time. We live in the distant past compared to everything that will ever happen," says William MacAskill, associate professor in philosophy at Oxford University.

MacAskill is one of the founders of the Effective Altruism movement, which is about doing the most good you can for the world.

In his latest book, What We Owe the Future, he discusses how we should think and act to plan for an extremely long human future.

The book is basically optimistic: MacAskill thinks we have immense opportunities to improve the world significantly. But it also dwells on the risks and threats we must deal with.

MacAskill highlights four categories of risk: extinction (everyone dies), collapse (so much is destroyed that civilization doesn’t recover), lock-in (a long future, but one governed by bad values) and stagnation (which may lead to one of the former).

As for extinction, he concludes that newer risks that are less under control tend to be the largest, such as pandemics caused by man-made pathogens and catastrophes set off by artificial intelligence. Known risks like nuclear war and direct asteroid hits also have the potential to wipe out humankind, but since we are more aware of them, we have some understanding of how to mitigate them or at least prepare for them.

Climate change tops the global agenda today, and although it is a problem we need to address, MacAskill argues it is not an existential threat.

Artificial intelligence could lead to an intense concentration of power and control. But AI could also bring huge benefits: it could speed up science and automate away monotonous work, giving us more time for family, friends and creativity.

"The scale of the upside is as big as our imagination can take us."

Humans have invented dangerous technology before without using it to its full destructive capacity.

"It is a striking thing about the world how much destruction could be wreaked if people wanted to. That is actually a source of concern, because AI systems might not have those human safeguards."

One prerequisite for achieving a better future is actively changing our values. There has been tremendous moral progress over the last couple of centuries, but we need to expand our sphere of moral concern, according to MacAskill.

"We care about family and friends and perhaps the nation, but I think we should care as much about everyone, and much more than we do about non-human animals. A hundred billion land animals are killed every year for food, and the vast majority of them are kept in horrific suffering."

William MacAskill thinks some aspects of the course of history are inevitable, such as population growth and technological advancement, but when it comes to moral change he is not so sure.

"We shouldn’t be complacent. Moral collapse can happen again."

William thinks we are at a crucial juncture.

"The stakes are much higher than before: the level of prosperity or doom that we could face."

William and I discuss the possibility that alien civilizations are monitoring us or have visited Earth. William is not convinced that the recent Pentagon disclosures actually prove an alien presence, but he is open to it, and he has some thoughts on what a close encounter would entail.

We also talk briefly about the possibility of a lost human civilization and the cause of the extinction of the megafauna during the Younger Dryas. We have some differing views on that.

My final question is a biggie: could humankind's next big leap be an inward leap, a rise in consciousness?

"It is a possibility. Maybe the best thing is not to spread out and become ever bigger, but instead to have a life of spirituality."


