29 episodes • Length: 55 min • Monthly
Audio narrations of academic papers by Nick Bostrom.
The podcast Radio Bostrom is created by Team Radio Bostrom. The podcast and its artwork are embedded on this page via the public podcast feed (RSS).
By Nick Bostrom.
Abstract:
There may well exist a normative structure, based on the preferences or concordats of a cosmic host, and which has high relevance to the development of AI. In particular, we may have both moral and prudential reason to create superintelligence that becomes a good cosmic citizen—that is, conforms to cosmic norms and contributes positively to the cosmopolis. An exclusive focus on promoting the welfare of the human species and other terrestrial beings, or an insistence that our own norms must at all costs prevail, may be objectionable and unwise. Such attitudes might be analogized to the selfishness of one who exclusively pursues their own personal interest, or the arrogance of one who acts as if their own convictions entitle them to run roughshod over social norms—though arguably they would be worse, given our present inferior status relative to the membership of the cosmic host. An attitude of humility may be more appropriate.
Read the full paper:
https://nickbostrom.com/papers/ai-creation-and-the-cosmic-host.pdf
More episodes at:
https://radiobostrom.com
Nick Bostrom’s latest book, Deep Utopia: Life and Meaning in a Solved World, will be published on 27th March, 2024. It’s available to pre-order now:
https://nickbostrom.com/deep-utopia/
The publisher describes the book as follows:
A greyhound catching the mechanical lure—what would he actually do with it? Has he given this any thought?
Bostrom’s previous book, Superintelligence: Paths, Dangers, Strategies, changed the global conversation on AI and became a New York Times bestseller. It focused on what might happen if AI development goes wrong. But what if things go right?
Suppose that we develop superintelligence safely, govern it well, and make good use of the cornucopian wealth and near-magical technological powers that this technology can unlock. If this transition to the machine intelligence era goes well, human labor becomes obsolete. We would thus enter a condition of “post-instrumentality”, in which our efforts are not needed for any practical purpose. Furthermore, at technological maturity, human nature becomes entirely malleable.
Here we confront a challenge that is not technological but philosophical and spiritual. In such a solved world, what is the point of human existence? What gives meaning to life? What do we do all day?
Deep Utopia shines new light on these old questions, and gives us glimpses of a different kind of existence, which might be ours in the future.
By Nick Bostrom, Thomas Douglas & Anders Sandberg.
Abstract:
In some situations a number of agents each have the ability to undertake an initiative that would have significant effects on the others. Suppose that each of these agents is purely motivated by an altruistic concern for the common good. We show that if each agent acts on her own personal judgment as to whether the initiative should be undertaken, then the initiative will be undertaken more often than is optimal. We suggest that this phenomenon, which we call the unilateralist’s curse, arises in many contexts, including some that are important for public policy. To lift the curse, we propose a principle of conformity, which would discourage unilateralist action. We consider three different models for how this principle could be implemented, and respond to an objection that could be raised against it.
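To make the abstract's model concrete, here is a minimal Monte Carlo sketch (our illustration; the value prior, noise level, and agent counts are assumptions, not figures from the paper). Each agent observes the initiative's true value plus independent noise and acts unilaterally if their own estimate is positive; the initiative occurs if any agent acts.

```python
# Minimal Monte Carlo sketch of the unilateralist's curse.
# Assumptions (ours, not the paper's): value of the initiative ~ N(0, 1),
# each agent's estimate = true value + N(0, 1) noise, act iff estimate > 0.
import random

def simulate(num_agents, trials=100_000, noise_sd=1.0):
    undertaken = optimal = harmful = 0
    for _ in range(trials):
        true_value = random.gauss(0.0, 1.0)
        acted = any(true_value + random.gauss(0.0, noise_sd) > 0
                    for _ in range(num_agents))   # any single agent suffices
        undertaken += acted
        optimal += true_value > 0                 # what an omniscient judge would pick
        harmful += acted and true_value <= 0      # mistaken unilateral actions
    return undertaken / trials, optimal / trials, harmful / trials

for n in (1, 3, 10):
    u, o, h = simulate(n)
    print(f"N={n:2d}: undertaken {u:.2f} vs optimal {o:.2f} (harmful: {h:.2f})")
```

As the number of independent judges grows, the probability that at least one noisy estimate is positive rises well above the optimal rate, which is the curse the paper describes.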
Read the full paper:
https://nickbostrom.com/papers/unilateralist.pdf
More episodes at:
https://radiobostrom.com/
---
Outline:
(00:00) Intro
(01:20) 1. Introduction
(10:02) 2. The Unilateralist's Curse: A Model
(11:31) 3. Lifting the Curse
(13:54) 3.1. The Collective Deliberation Model
(15:21) 3.2. The Meta-rationality Model
(18:15) 3.3. The Moral Deference Model
(33:24) 4. Discussion
(37:53) 5. Concluding Thoughts
(41:04) Outro & credits
By Nick Bostrom.
Abstract:
Positions on the ethics of human enhancement technologies can be (crudely) characterized as ranging from transhumanism to bioconservatism. Transhumanists believe that human enhancement technologies should be made widely available, that individuals should have broad discretion over which of these technologies to apply to themselves, and that parents should normally have the right to choose enhancements for their children-to-be. Bioconservatives (whose ranks include such diverse writers as Leon Kass, Francis Fukuyama, George Annas, Wesley Smith, Jeremy Rifkin, and Bill McKibben) are generally opposed to the use of technology to modify human nature. A central idea in bioconservatism is that human enhancement technologies will undermine our human dignity. To forestall a slide down the slippery slope towards an ultimately debased ‘posthuman’ state, bioconservatives often argue for broad bans on otherwise promising human enhancements. This paper distinguishes two common fears about the posthuman and argues for the importance of a concept of dignity that is inclusive enough to also apply to many possible posthuman beings. Recognizing the possibility of posthuman dignity undercuts an important objection against human enhancement and removes a distortive double standard from our field of moral vision.
Read the full paper:
https://nickbostrom.com/ethics/dignity
More episodes at:
https://radiobostrom.com/
---
Outline:
(00:02) Introduction
(00:21) Abstract
(01:57) Transhumanists vs. bioconservatives
(06:42) Two fears about the posthuman
(19:44) Is human dignity incompatible with posthuman dignity?
(29:03) Why we need posthuman dignity
(34:38) Outro & credits
By Nick Bostrom.
Abstract:
Rarely does philosophy produce empirical predictions. The Doomsday argument is an important exception. From seemingly trivial premises it seeks to show that the risk that humankind will go extinct soon has been systematically underestimated. Nearly everybody's first reaction is that there must be something wrong with such an argument. Yet despite being subjected to intense scrutiny by a growing number of philosophers, no simple flaw in the argument has been identified.
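For listeners who want the mechanics, here is the standard Bayesian rendering of the argument. The Self-Sampling step is the paper's; the round numbers are invented for the sketch.

```python
# Standard Bayesian rendering of the Doomsday argument (numbers invented).
# By the Self-Sampling Assumption, P(my birth rank = r | N humans ever) = 1/N
# for r <= N. Compare two equally likely hypotheses about N.
rank = 1e11                                           # ~100 billion humans born so far
hypotheses = {"doom_soon": 2e11, "doom_late": 2e14}   # total humans ever (assumed)
prior = {h: 0.5 for h in hypotheses}

likelihood = {h: (1.0 / n if rank <= n else 0.0) for h, n in hypotheses.items()}
evidence = sum(prior[h] * likelihood[h] for h in hypotheses)
posterior = {h: prior[h] * likelihood[h] / evidence for h in hypotheses}

for h, p in posterior.items():
    print(f"P({h} | my birth rank) = {p:.4f}")
# The small-total hypothesis jumps from 0.5 to ~0.999: the systematic shift
# toward earlier extinction that the argument claims we underestimate.
```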
Read the full paper:
https://anthropic-principle.com/q=anthropic_principle/doomsday_argument/
More episodes at:
https://radiobostrom.com/
By Nick Bostrom and Carl Shulman.
Draft version 1.10.
Abstract:
AIs with moral status and political rights? We'll need a modus vivendi, and it’s becoming urgent to figure out the parameters for that. This paper makes a load of specific claims that begin to stake out a position.
Read the full paper:
https://nickbostrom.com/propositions.pdf
More episodes at:
https://radiobostrom.com/
---
Outline:
(00:00) Introduction
(00:36) Disclaimer
(01:07) Consciousness and metaphysics
(06:48) Respecting AI interests
(21:41) Security and stability
(32:04) AI-empowered social organization
(38:07) Satisfying multiple values
(42:23) Mental malleability, persuasion, and lock-in
(47:20) Epistemology
(53:36) Status of existing AI systems
(59:52) Recommendations regarding current practices and AI systems
(01:07:08) Impact paths and modes of advocacy
(01:11:11) Closing credits
By Nick Bostrom.
Draft version 0.9
Abstract:
New theoretical ideas for a big expedition in metaethics.
Read the full paper:
https://nickbostrom.com/papers/mountethics.pdf
More episodes at:
https://radiobostrom.com/
---
Outline:
(00:17) Metametaethics/preamble
(02:48) Genealogy
(09:41) Metaethics
(21:30) Value representors
(26:56) Moral motivation
(30:02) The weak
(33:25) Hedonism
(41:38) Hierarchical norm structure and higher morality
(55:30) Questions for future research
By Nick Bostrom.
Abstract:
Transhumanism is a way of thinking about the future that is based on the premise that the human species in its current form does not represent the end of our development but rather a comparatively early phase. We formally define it as follows:
(1) The intellectual and cultural movement that affirms the possibility and desirability of fundamentally improving the human condition through applied reason, especially by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities.
(2) The study of the ramifications, promises, and potential dangers of technologies that will enable us to overcome fundamental human limitations, and the related study of the ethical matters involved in developing and using such technologies.
Transhumanism can be viewed as an extension of humanism, from which it is partially derived. Humanists believe that humans matter, that individuals matter. We might not be perfect, but we can make things better by promoting rational thinking, freedom, tolerance, democracy, and concern for our fellow human beings. Transhumanists agree with this but also emphasize what we have the potential to become. Just as we use rational means to improve the human condition and the external world, we can also use such means to improve ourselves, the human organism. In doing so, we are not limited to traditional humanistic methods, such as education and cultural development. We can also use technological means that will eventually enable us to move beyond what some would think of as “human”.
Read the full paper:
https://nickbostrom.com/views/transhumanist.pdf
More episodes at:
https://radiobostrom.com/
---
Outline:
(00:25) 1 GENERAL QUESTIONS ABOUT TRANSHUMANISM
(00:31) 1.1 What is transhumanism?
(05:48) 1.2 What is a posthuman?
(10:11) 1.3 What is a transhuman?
(12:57) 2 TECHNOLOGIES AND PROJECTIONS
(13:02) 2.1 Biotechnology, genetic engineering, stem cells, and cloning – what are they and what are they good for?
(19:51) 2.2 What is molecular nanotechnology?
(31:24) 2.3 What is superintelligence?
(39:58) 2.4 What is virtual reality?
(44:52) 2.5 What is cryonics? Isn’t the probability of success too small?
(49:52) 2.6 What is uploading?
(57:26) 2.7 What is the singularity?
(01:00:26) 3 SOCIETY AND POLITICS
(01:00:34) 3.1 Will new technologies only benefit the rich and powerful?
(01:03:50) 3.2 Do transhumanists advocate eugenics?
(01:10:17) 3.3 Aren’t these future technologies very risky? Could they even cause our extinction?
(01:19:57) 3.4 If these technologies are so dangerous, should they be banned? What can be done to reduce the risks?
(01:27:47) 3.5 Shouldn’t we concentrate on current problems such as improving the situation of the poor, rather than putting our efforts into planning for the “far” future?
(01:31:17) 3.6 Will extended life worsen overpopulation problems?
(01:40:53) 3.7 Is there any ethical standard by which transhumanists judge “improvement of the human condition”?
(01:45:25) 3.8 What kind of society would posthumans live in?
(01:48:43) 3.9 Will posthumans or superintelligent machines pose a threat to humans who aren’t augmented?
(01:53:38) 4 TRANSHUMANISM AND NATURE
(01:53:44) 4.1 Why do transhumanists want to live longer?
(01:56:51) 4.2 Isn’t this tampering with nature?
(02:00:01) 4.3 Will transhuman technologies make us inhuman?
(02:01:48) 4.4 Isn’t death part of the natural order of things?
(02:07:28) 4.5 Are transhumanist technologies environmentally sound?
(02:09:49) 5 TRANSHUMANISM AS A PHILOSOPHICAL AND CULTURAL VIEWPOINT
(02:09:56) 5.1 What are the philosophical and cultural antecedents of transhumanism?
(02:27:36) 5.2 What currents are there within transhumanism? Is extropianism the same as transhumanism?
(02:33:07) 5.3 How does transhumanism relate to religion?
(02:35:59) 5.4 Won’t things like uploading, cryonics, and AI fail because they can’t preserve or create the soul?
(02:38:02) 5.5 What kind of transhumanist art is there?
(02:41:17) 6 PRACTICALITIES
(02:41:21) 6.1 What are the reasons to expect all these changes?
(02:44:46) 6.2 Won’t these developments take thousands or millions of years?
(02:48:10) 6.3 What if it doesn’t work?
(02:49:45) 6.4 How can I use transhumanism in my own life?
(02:51:27) 6.5 How could I become a posthuman?
(02:53:23) 6.6 Won’t it be boring to live forever in a perfect world?
(02:58:16) 6.7 How can I get involved and contribute?
(03:00:41) ACKNOWLEDGEMENTS AND DOCUMENT HISTORY
By Nick Bostrom.
Abstract:
Evolutionary development is sometimes thought of as exhibiting an inexorable trend towards higher, more complex, and normatively worthwhile forms of life. This paper explores some dystopian scenarios where freewheeling evolutionary developments, while continuing to produce complex and intelligent forms of organization, lead to the gradual elimination of all forms of being that we care about. We then consider how such catastrophic outcomes could be avoided and argue that under certain conditions the only possible remedy would be a globally coordinated policy to control human evolution by modifying the fitness function of future intelligent life forms.
Read the full paper:
https://nickbostrom.com/fut/evolution
More episodes at:
https://radiobostrom.com/
---
Outline:
(00:16) Abstract
(01:06) 1. The Panglossian view
(07:50) 2. Two dystopian "upward" evolutionary scenarios
(17:35) 3. Ours is an evolutionary disequilibrium
(24:55) 4. Costly signaling and flamboyant display?
(30:07) 5. Two senses of outcompeted
(32:31) 6. Could we control our own evolution?
(35:30) 7. Preventing non-eudaemonic agents from arising
(39:56) 8. Modifying the fitness function
(43:34) 9. Policies for evolutionary steering
(49:48) 10. Detour
(52:44) 11. Only a singleton could control evolution
(59:22) 12. Conclusion
By Nick Bostrom.
Abstract:
The purpose of this paper, boldly stated, is to propose a new type of philosophy, a philosophy whose aim is prediction. The pace of technological progress is increasing very rapidly: it looks as if we are witnessing exponential growth, the growth rate being proportional to the size already obtained, with scientific knowledge doubling every 10 to 20 years since the Second World War, and with computer processor speed doubling every 18 months or so. It is argued that this technological development makes urgent many empirical questions which a philosopher could be well-suited to help answering. I try to cover a broad range of interesting problems and approaches, which means that I won't go at all deeply into any of them; I only try to say enough to show what some of the problems are, how one can begin to work with them, and why philosophy is relevant. My hope is that this will whet your appetite to deal with these questions, or at least increase general awareness that they are worthy tasks for first-class intellects, including ones which might belong to philosophers.
Read the full paper:
https://nickbostrom.com/old/predict
More episodes at:
https://radiobostrom.com/
---
Outline:
(00:19) Abstract
(01:52) 1. The Polymath
(08:41) 2. The Carter-Leslie Doomsday Argument and the Anthropic Principle
(15:28) 3. The Fermi Paradox
(43:49) 4. Superintelligence
(58:27) 5. Uploading, Cyberspace and Cosmology
(01:16:14) 6. Attractors and Values
(01:28:41) 7. Transhumanism
By Nick Bostrom.
Translated by Jill Drouillard.
Abstract:
The good life: just how good could it be? A vision of the future from the future.
Read the full paper:
https://www.nickbostrom.com/translations/utopie.pdf
More episodes at:
https://radiobostrom.com/
By Nick Bostrom.
Abstract:
This note introduces the concept of a "singleton" and suggests that this concept is useful for formulating and analyzing possible scenarios for the future of humanity.
Read the full paper:
https://nickbostrom.com/fut/singleton
More episodes at:
https://radiobostrom.com/
---
Outline:
(00:18) Abstract
(00:32) 1. Definition
(01:35) 2. Examples and Elaboration
(05:42) 3. Advantages with a Singleton
(08:02) 4. Disadvantages with a Singleton
(10:14) 5. The Singleton Hypothesis
By Carl Shulman and Nick Bostrom.
Abstract:
Human capital is an important determinant of individual and aggregate economic outcomes, and a major input to scientific progress. It has been suggested that advances in genomics may open up new avenues to enhance human intellectual abilities genetically, complementing environmental interventions such as education and nutrition. One way to do this would be via embryo selection in the context of in vitro fertilization (IVF). In this article, we analyze the feasibility, timescale, and possible societal impacts of embryo selection for cognitive enhancement. We find that embryo selection, on its own, may have significant (but likely not drastic) impacts over the next 50 years, though large effects could accumulate over multiple generations. However, there is a complementary technology – stem cell-derived gametes – which has been making rapid progress and which could amplify the impact of embryo selection, enabling very large changes if successfully applied to humans.
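As a rough illustration of why gains grow with the number of embryos, here is a toy order-statistics simulation. Every parameter is an assumption for the sketch; the paper's own model treats sibling genetic variance and predictor accuracy more carefully.

```python
# Toy order-statistics simulation of the gain from selecting 1 embryo in n.
# All parameters (variance explained, trait SD, embryo counts) are assumed.
import random, statistics

def selection_gain(n_embryos, var_explained=0.25, sd_points=15, trials=20_000):
    """Mean advantage, in IQ-like points, of picking the embryo with the
    highest predicted score over picking one at random, when the predictor
    captures `var_explained` of the trait variance among embryos."""
    gains = []
    for _ in range(trials):
        # trait = predicted component + unpredicted residual (independent normals)
        embryos = [(random.gauss(0, var_explained ** 0.5),
                    random.gauss(0, (1 - var_explained) ** 0.5))
                   for _ in range(n_embryos)]
        best = max(embryos, key=lambda e: e[0])        # select on prediction
        baseline = random.choice(embryos)              # no-selection baseline
        gains.append((sum(best) - sum(baseline)) * sd_points)
    return statistics.mean(gains)

for n in (2, 10, 100):
    print(f"1-in-{n} selection: ~{selection_gain(n):+.1f} points expected gain")
```

On these assumptions the expected gain grows only slowly with n (roughly with the square root of the logarithm of n), which fits the abstract's picture: modest single-generation effects, with large effects requiring accumulation over generations or technologies like stem cell-derived gametes that allow iterated selection.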
Read the full paper:
https://nickbostrom.com/papers/embryo.pdf
More episodes at:
https://radiobostrom.com/
---
Outline:
(00:28) Abstract
(01:42) Policy Implications
(03:27) From carrier-screening to cognitive enhancement
(07:42) Impact of cognitive ability
(11:20) How much cognitive enhancement from embryo selection?
(13:56) (Text Resumes)
(19:16) Stem-cell derived gametes could produce much larger effects
(21:55) Rate of adoption and public opinion
(24:35) (Text Resumes)
(27:17) Total impacts on human capital
(32:30) (Text Resumes)
(35:32) Conclusions
By Nick Bostrom.
Abstract:
The good life: just how good could it be? A vision of the future from the future.
Read the full paper:
https://nickbostrom.com/utopia
More episodes at:
https://radiobostrom.com/
By Nick Bostrom.
Abstract:
Technological revolutions are among the most important things that happen to humanity. Ethical assessment in the incipient stages of a potential technological revolution faces several difficulties, including the unpredictability of their long‐term impacts, the problematic role of human agency in bringing them about, and the fact that technological revolutions rewrite not only the material conditions of our existence but also reshape culture and even – perhaps – human nature. This essay explores some of these difficulties and the challenges they pose for a rational assessment of the ethical and policy issues associated with anticipated technological revolutions.
Read the full paper:
https://nickbostrom.com/revolutions.pdf
More episodes at:
https://radiobostrom.com/
---
Outline:
(01:22) 1. Introduction
(06:28) 2. ELSI research, and public concerns about science and technology
(16:29) 3. Unpredictability
(32:50) 4. Strategic considerations in S&T policy
(47:37) 5. Limiting the scope of our deliberations?
(01:04:59) 6. Expanding the scope of our deliberations?
By Nick Bostrom and Julian Savulescu.
Abstract:
Are we good enough? If not, how may we improve ourselves? Must we restrict ourselves to traditional methods like study and training? Or should we also use science to enhance some of our mental and physical capacities more directly?
Over the last decade, human enhancement has grown into a major topic of debate in applied ethics. Interest has been stimulated by advances in the biomedical sciences, advances which to many suggest that it will become increasingly feasible to use medicine and technology to reshape, manipulate, and enhance many aspects of human biology even in healthy individuals. To the extent that such interventions are on the horizon (or already available) there is an obvious practical dimension to these debates. This practical dimension is underscored by an outcrop of think tanks and activist organizations devoted to the biopolitics of enhancement.
Read the full paper:
https://nickbostrom.com/ethics/human-enhancement-ethics.pdf
More episodes at:
https://radiobostrom.com/
---
Outline:
(00:20) 1. Background
(08:46) 2. Enhancement in General
(24:14) 3. Enhancements of Certain Kinds
(43:48) 4. Enhancement as Practical Challenge
(47:48) 5. Conclusion
By Nick Bostrom and Matthew van der Merwe.
Abstract:
Sooner or later a technology capable of wiping out human civilisation might be invented. How far would we go to stop it?
Read the full paper:
https://aeon.co/essays/none-of-our-technologies-has-managed-to-destroy-humanity-yet
Links:
- The Vulnerable World Hypothesis (2019) (original academic paper)
- The Vulnerable World Hypothesis (2019) (narration by Radio Bostrom)
Notes:
This article is an adaptation of Bostrom's academic paper "The Vulnerable World Hypothesis" (2019).
The article was first published in Aeon Magazine. The narration was provided by Curio. We are grateful to Aeon and Curio for granting us permission to re-use the audio. Curio are offering Radio Bostrom listeners a 25% discount on their annual subscription.
By Nick Bostrom.
Abstract:
Within a utilitarian context, one can perhaps try to explicate [crucial considerations] as follows: a crucial consideration is a consideration that radically changes the expected value of pursuing some high-level subgoal. The idea here is that you have some evaluation standard that is fixed, and you form some overall plan to achieve some high-level subgoal. This is your idea of how to maximize this evaluation standard. A crucial consideration, then, would be a consideration that radically changes the expected value of achieving this subgoal, and we will see some examples of this. Now if you stop limiting your view to some utilitarian context, then you might want to retreat to these earlier more informal formulations, because one of the things that could be questioned is utilitarianism itself. But for most of this talk we will be thinking about that component.
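A minimal numeric illustration of that definition (invented numbers, not from the talk): a crucial consideration flips the expected value of a high-level subgoal rather than merely nudging it.

```python
# Invented numbers illustrating the definition above: a crucial consideration
# flips the sign of a plan's expected value instead of merely adjusting it.
def expected_value(outcomes):
    """outcomes: iterable of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# Naive model of some high-level subgoal (e.g. accelerating a technology):
naive = [(0.9, +100), (0.1, -50)]
print("naive EV:", expected_value(naive))        # +85.0 -> pursue it

# A crucial consideration reweights the downside radically:
revised = [(0.9, +100), (0.1, -2000)]
print("revised EV:", expected_value(revised))    # -110.0 -> the plan reverses
```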
Read the full paper:
https://www.effectivealtruism.org/articles/crucial-considerations-and-wise-philanthropy-nick-bostrom
More episodes at:
https://radiobostrom.com/
---
Outline:
(00:14) What is a crucial consideration?
(04:27) Should I vote in the national election?
(08:18) Should we favor more funding for x-risk tech research?
(14:32) Crucial considerations and utilitarianism
(18:52) Evaluation Functions
(19:03) Some tentative signposts
(20:35) (Text resumes)
(27:28) Possible areas with additional crucial considerations
(30:03) Some partial remedies
By Nick Bostrom.
Abstract:
With very advanced technology, a very large population of people living happy lives could be sustained in the accessible region of the universe. For every year that development of such technologies and colonization of the universe is delayed, there is therefore an opportunity cost: a potential good, lives worth living, is not being realized. Given some plausible assumptions, this cost is extremely large. However, the lesson for utilitarians is not that we ought to maximize the pace of technological development, but rather that we ought to maximize its safety, i.e. the probability that colonization will eventually occur.
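The opportunity-cost arithmetic can be sketched in a few lines. All three parameters below are deliberately round assumptions of ours; the paper argues for its own figures.

```python
# Back-of-the-envelope sketch of the opportunity-cost argument (assumed numbers).
stars_accessible = 1e13     # stars in the accessible region (assumed)
lives_per_star   = 1e10     # happy lives sustainable per star (assumed)
duration_years   = 1e10     # how long the colonized era could last (assumed)

population = stars_accessible * lives_per_star      # simultaneous lives
total_life_years = population * duration_years      # value if colonization happens
lost_per_year_of_delay = population                 # life-years forgone per year

print(f"life-years forgone per year of delay: {lost_per_year_of_delay:.0e}")

# The abstract's utilitarian twist: raising the probability that colonization
# ever happens dominates hastening it. One percentage point of probability is
# worth this many years of delay on these assumptions:
print(f"+1% success probability ~ "
      f"{0.01 * total_life_years / lost_per_year_of_delay:.0e} years of delay")
```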
Read the full paper:
https://nickbostrom.com/astronomical/waste
More episodes at:
https://radiobostrom.com/
By Nick Bostrom.
Abstract:
This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor‐simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.
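The bookkeeping behind the trilemma can be stated compactly. In the paper's (slightly simplified) notation, with f_p the fraction of human-level civilizations that reach a posthuman stage and N the average number of ancestor-simulations such a civilization runs, the fraction of human-type observers who live in simulations is f_sim = f_p·N / (f_p·N + 1). The example values below are ours.

```python
# The trilemma's bookkeeping: fraction of human-type observers who are simulated.
def fraction_simulated(f_p, n_sims):
    """f_p: fraction of human-level civilizations reaching a posthuman stage;
    n_sims: average number of ancestor-simulations such a civilization runs."""
    return (f_p * n_sims) / (f_p * n_sims + 1)

# Unless f_p is tiny (proposition 1) or n_sims is tiny (proposition 2),
# the fraction is close to 1 (proposition 3):
for f_p, n in [(1e-9, 1e6), (0.1, 1e-3), (0.1, 1e6)]:
    print(f"f_p={f_p:g}, N={n:g} -> f_sim = {fraction_simulated(f_p, n):.6f}")
```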
Read the full paper:
https://www.simulation-argument.com/simulation.pdf
More episodes at:
https://radiobostrom.com/
---
Outline:
(00:19) Abstract
(01:11) Section 1. Introduction
(04:08) Section 2. The Assumption of Substrate-independence
(06:32) Section 3. The Technological Limits of Computation
(15:53) Section 4. The Core of the Simulation Argument
(16:58) Section 5. A Bland Indifference Principle
(22:57) Section 6. Interpretation
(35:22) Section 7. Conclusion
(36:53) Acknowledgements
By Nick Bostrom and Eliezer Yudkowsky.
Abstract:
The possibility of creating thinking machines raises a host of ethical issues. These questions relate both to ensuring that such machines do not harm humans and other morally relevant beings, and to the moral status of the machines themselves. The first section discusses issues that may arise in the near future of AI. The second section outlines challenges for ensuring that AI operates safely as it approaches humans in its intelligence. The third section outlines how we might assess whether, and in what circumstances, AIs themselves have moral status. In the fourth section, we consider how AIs might differ from humans in certain basic respects relevant to our ethical assessment of them. The final section addresses the issues of creating AIs more intelligent than human, and ensuring that they use their advanced intelligence for good rather than ill.
Read the full paper:
https://nickbostrom.com/ethics/artificial-intelligence.pdf
More episodes at:
https://radiobostrom.com/
---
Outline:
(01:19) Ethics in Machine Learning and Other Domain‐Specific AI Algorithms
(07:23) Artificial General Intelligence
(17:01) Machines with Moral Status
(28:32) Minds with Exotic Properties
(42:45) Superintelligence
(56:39) Conclusion
(57:59) Author biographies
By Nick Bostrom.
Abstract:
Information hazards are risks that arise from the dissemination or the potential dissemination of true information that may cause harm or enable some agent to cause harm. Such hazards are often subtler than direct physical threats, and, as a consequence, are easily overlooked. They can, however, be important. This paper surveys the terrain and proposes a taxonomy.
Read the full paper:
https://nickbostrom.com/information-hazards.pdf
More episodes at:
https://radiobostrom.com/
---
Outline:
(00:18) Abstract
(00:51) Introduction
(05:30) Six information transfer modes
(14:42) Table 1. Typology of Information Hazards
(16:57) Adversarial risks
(30:54) Risks to social organization and markets
(48:50) Risks of irrationality and error
(01:05:27) Risks to valuable states and activities
(01:17:33) Risks from information systems
(01:26:51) Risks from development
(01:38:20) Discussion
(01:47:50) Acknowledgements
By Nick Bostrom, Anders Sandberg, and Matthew van der Merwe.
This is an updated version of The Wisdom of Nature, first published in the book Human Enhancement (Oxford University Press, 2009).
Abstract:
Human beings are a marvel of evolved complexity. When we try to enhance poorly-understood complex evolved systems, our interventions often fail or backfire. It can appear as if there is a “wisdom of nature” which we ignore at our peril. A recognition of this reality can manifest as a vaguely normative intuition, to the effect that it is “hubristic” to try to improve on nature, or that biomedical therapy is ok while enhancement is morally suspect. We suggest that one root of these moral intuitions may be fundamentally prudential rather than ethical. More importantly, we develop a practical heuristic, the “evolutionary optimality challenge”, for evaluating the plausibility that specific candidate biomedical interventions would be safe and effective. This heuristic recognizes the grain of truth contained in “nature knows best” attitudes while providing criteria for identifying the special cases where it may be feasible, with present or near-future technology, to enhance human nature.
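As a rough schematic of how the heuristic is applied, here is the outline's three-category structure rendered as a checklist. This restructuring is ours, for illustration; the paper develops each category in detail.

```python
# Schematic checklist version of the Evolutionary Optimality Challenge
# ("why haven't we already evolved this way?"). Our restructuring.
CATEGORIES = {
    "altered_tradeoffs": "our environment or resources differ from the "
                         "ancestral ones, so evolution's tradeoff no longer binds",
    "evolutionary_incapacity": "evolution could not reach the trait "
                               "(fundamental inability, local optima, lags)",
    "value_discordance": "evolution optimized fitness, not what we value "
                         "(e.g. happiness, compassion)",
}

def eoc_verdict(plausible: dict) -> str:
    """plausible maps each category to whether we have a credible story
    placing the proposed enhancement in it."""
    met = [c for c, ok in plausible.items() if ok]
    if met:
        return "Challenge met via: " + ", ".join(met)
    return "No answer to the challenge: treat the enhancement as suspect."

# Example: a lactase-persistence-style case, where resources changed recently.
print(eoc_verdict({"altered_tradeoffs": True,
                   "evolutionary_incapacity": False,
                   "value_discordance": False}))
```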
Read the full paper:
https://www.nickbostrom.com/evolutionary-optimality.pdf
More episodes at:
https://radiobostrom.com/
---
Outline:
(00:31) Abstract
(01:58) Introduction
(07:22) The Evolutionary Optimality Challenge
(11:13) Altered tradeoffs
(12:18) Evolutionary incapacity
(13:33) Value discordance
(14:47) Altered tradeoffs
(17:50) Changes in resources
(23:24) Changes in demands
(28:44) Evolutionary incapacity
(30:54) Fundamental inability
(32:54) Local optima
(34:17) Example: the appendix
(36:37) Example: the ε4 allele
(37:52) Example: the sickle-cell allele
(42:33) Lags
(46:26) Example: lactase persistence
(47:18) Value discordance
(49:05) Example: contraceptives
(50:55) Good for the individual
(55:22) Example: happiness
(56:40) Good for society
(58:18) Example: compassion
(01:00:03) The heuristic
(01:00:30) Current ignorance prevents us from forming any plausible idea about the evolutionary factors at play
(01:01:43) We come up with a plausible idea about the relevant evolutionary factors, and they suggest that the intervention would be harmful
(01:02:31) We come up with several different plausible ideas about the relevant evolutionary factors
(01:03:26) We develop a plausible idea about the relevant evolutionary factors, and they imply we wouldn’t have evolved the enhanced capacity even if it were beneficial
(01:08:23) Conclusion
(01:09:11) References
(01:09:18) Thanks to
By Nick Bostrom.
Abstract:
When water was discovered on Mars, people got very excited. Where there is water, there may be life. Scientists are planning new missions to study the planet up close. NASA’s next Mars rover is scheduled to arrive in 2010. In the decade following, a Mars Sample Return mission might be launched, which would use robotic systems to collect samples of Martian rocks, soils, and atmosphere, and return them to Earth. We could then analyze the sample to see if it contains any traces of life, whether extinct or still active. Such a discovery would be of tremendous scientific significance. What could be more fascinating than discovering life that had evolved entirely independently of life here on Earth? Many people would also find it heartening to learn that we are not entirely alone in this vast cold cosmos.
But I hope that our Mars probes will discover nothing. It would be good news if we find Mars to be completely sterile. Dead rocks and lifeless sands would lift my spirit.
Conversely, if we discovered traces of some simple extinct life form—some bacteria, some algae—it would be bad news. If we found fossils of something more advanced, perhaps something looking like the remnants of a trilobite or even the skeleton of a small mammal, it would be very bad news. The more complex the life we found, the more depressing the news of its existence would be. Scientifically interesting, certainly, but a bad omen for the future of the human race.
Read the full paper:
https://nickbostrom.com/extraterrestrial.pdf
More episodes at:
https://radiobostrom.com
By Nick Bostrom and Toby Ord.
Abstract:
In this article we argue that one prevalent cognitive bias, status quo bias, may be responsible for much of the opposition to human enhancement in general and to genetic cognitive enhancement in particular. Our strategy is as follows: first, we briefly review some of the psychological evidence for the pervasiveness of status quo bias in human decision making. This evidence provides some reason for suspecting that this bias may also be present in analyses of human enhancement ethics. We then propose two versions of a heuristic for reducing status quo bias. Applying this heuristic to consequentialist objections to genetic cognitive enhancements, we show that these objections are affected by status quo bias. When the bias is removed, the objections are revealed as extremely implausible. We conclude that the case for developing and using genetic cognitive enhancements is much stronger than commonly realized.
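The paper's Reversal Test can be summarized as a short decision procedure (our paraphrase of its logic, with an invented example):

```python
# The Reversal Test as a short decision procedure (our paraphrase).
def reversal_test(opposes_increase: bool, opposes_decrease: bool,
                  argues_current_is_optimal: bool) -> str:
    """If a parameter change is opposed in BOTH directions, the burden is to
    show the current value is (locally) optimal for us; failing that, the
    paper diagnoses status quo bias."""
    if not (opposes_increase and opposes_decrease):
        return "No diagnosis: change in some direction is accepted."
    if argues_current_is_optimal:
        return "Burden met: an argument that we sit at a local optimum."
    return "Suspect status quo bias."

# Example: opposing cognitive enhancement while also opposing a (hypothetical)
# intervention that slightly lowers cognition, with no optimality argument.
print(reversal_test(opposes_increase=True, opposes_decrease=True,
                    argues_current_is_optimal=False))
```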
Read the full paper:
https://nickbostrom.com/ethics/statusquo.pdf
More episodes at:
https://radiobostrom.com/
---
Outline:
(00:32) I. INTRODUCTION
(06:27) II. PSYCHOLOGICAL EVIDENCE OF STATUS QUO BIAS
(20:55) III. A HEURISTIC FOR REDUCING STATUS QUO BIAS
(27:32) Fig. 1
(29:11) The Argument from Evolutionary Adaptation
(34:27) The Argument from Transition Costs
(37:28) The argument from risk
(37:53) Fig. 2
(43:43) The Argument from Person-Affecting Ethics
(50:18) IV. THE DOUBLE REVERSAL TEST
(57:11) V. APPLYING THE REVERSAL TESTS TO OTHER CASES
(01:14:55) Thanks to...
By Nick Bostrom.
Abstract:
Existential risks are those that threaten the entire future of humanity. Many theories of value imply that even relatively small reductions in net existential risk have enormous expected value. Despite their importance, issues surrounding human-extinction risks and related hazards remain poorly understood. In this paper, I clarify the concept of existential risk and develop an improved classification scheme. I discuss the relation between existential risks and basic issues in axiology, and show how existential risk reduction (via the maxipok rule) can serve as a strongly action-guiding principle for utilitarian concerns. I also show how the notion of existential risk suggests a new way of thinking about the ideal of sustainability.
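The scale behind the abstract's first claim is easy to sketch with assumed round numbers (the paper defends much larger estimates):

```python
# Why "relatively small reductions in net existential risk have enormous
# expected value", with assumed round numbers.
future_lives = 1e16      # potential future human lives (assumed, conservative)
risk_reduction = 1e-6    # a one-in-a-million absolute reduction in x-risk

print(f"expected lives saved: {risk_reduction * future_lives:.0e}")
# ~1e10 expected lives from a micro-reduction in risk. On such numbers the
# maxipok rule (maximize the probability of an OK outcome) becomes strongly
# action-guiding for the utilitarian perspectives the abstract mentions.
```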
Read the full paper:
https://existential-risk.org/concept
More episodes at:
https://radiobostrom.com/
---
Outline:
(00:15) Abstract
(01:10) Policy Implications
(02:43) The maxipok rule
(10:45) Qualitative risk categories
(17:00) Magnitude of expected loss in existential catastrophe
(26:48) Maxipok
(29:13) Classification of existential risk
(31:19) Human extinction
(34:18) Permanent stagnation
(42:00) Flawed realisation
(45:37) Subsequent ruination
(48:44) Capability and value
(54:16) Some other ethical perspectives
(01:01:57) Existential risk and normative uncertainty
(01:06:07) Keeping our options alive
(01:14:38) Outlook
(01:15:15) Barriers to thought and action
(01:24:36) Grounds for optimism
(01:31:40) Author Information
By Nick Bostrom.
Abstract:
Recounts the Tale of a most vicious Dragon that ate thousands of people every day, and of the actions that the King, the People, and an assembly of Dragonologists took with respect thereto.
Read the full paper:
https://nickbostrom.com/fable/dragon
More episodes at:
https://radiobostrom.com/
By Carl Shulman & Nick Bostrom.
Abstract:
The minds of biological creatures occupy a small corner of a much larger space of possible minds that could be created once we master the technology of artificial intelligence. Yet many of our moral intuitions and practices are based on assumptions about human nature that need not hold for digital minds. This points to the need for moral reflection as we approach the era of advanced machine intelligence. Here we focus on one set of issues, which arise from the prospect of digital minds with superhumanly strong claims to resources and influence. These could arise from the vast collective benefits that mass-produced digital minds could derive from relatively small amounts of resources. Alternatively, they could arise from individual digital minds with superhuman moral status or ability to benefit from resources. Such beings could contribute immense value to the world, and failing to respect their interests could produce a moral catastrophe, while a naive way of respecting them could be disastrous for humanity. A sensible approach requires reforms of our moral norms and institutions along with advance planning regarding what kinds of digital minds we bring into existence.
Read the full paper:
https://nickbostrom.com/papers/digital-minds.pdf
More episodes at:
https://radiobostrom.com/
---
Outline:
(00:39) Abstract
(02:22) Introduction
(07:10) Paths to realizing super-beneficiaries
(09:31) Reproductive capacity
(13:24) Cost of living
(14:55) Subjective speed
(16:54) Hedonic skew
(19:23) Hedonic range
(21:37) Inexpensive preferences
(24:56) Preference strength
(27:49) Objective list goods and flourishing
(31:14) Mind scale
(35:07) Moral and political implications of digital super-beneficiaries
(37:18) Creating super-beneficiaries
(42:13) Sharing the world with super-beneficiaries
(48:37) Discussion
(59:24) Author information
By Nick Bostrom.
Abstract:
Scientific and technological progress might change people’s capabilities or incentives in ways that would destabilize civilization. For example, advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions; novel military technologies could trigger arms races in which whoever strikes first has a decisive advantage; or some economically advantageous process may be invented that produces disastrous negative global externalities that are hard to regulate. This paper introduces the concept of a vulnerable world: roughly, one in which there is some level of technological development at which civilization almost certainly gets devastated by default, i.e. unless it has exited the ‘semi-anarchic default condition’. Several counterfactual historical and speculative future vulnerabilities are analyzed and arranged into a typology. A general ability to stabilize a vulnerable world would require greatly amplified capacities for preventive policing and global governance. The vulnerable world hypothesis thus offers a new perspective from which to evaluate the risk-benefit balance of developments towards ubiquitous surveillance or a unipolar world order.
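The paper's urn-of-inventions metaphor can be made numerical with one assumed parameter: if each new technology independently has a small chance of being a civilization-devastating "black ball", sustained drawing makes eventually pulling one nearly certain.

```python
# The urn of inventions, made numerical. One assumed parameter: the chance
# that any given new technology is a civilization-devastating "black ball".
p_black = 0.001
for draws in (100, 1_000, 10_000):
    p_at_least_one = 1 - (1 - p_black) ** draws
    print(f"{draws:>6} inventions -> P(at least one black ball) = {p_at_least_one:.3f}")
# Absent an exit from the "semi-anarchic default condition", continued drawing
# makes eventually pulling a black ball almost certain.
```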
Read the full paper:
https://nickbostrom.com/papers/vulnerable.pdf
More episodes at:
https://radiobostrom.com/
---
Outline:
(00:20) Abstract
(01:45) Policy Implications
(03:48) Is there a black ball in the urn of possible inventions?
(09:05) A thought experiment: easy nukes
(23:21) The vulnerable world hypothesis
(35:44) Typology of vulnerabilities
(35:54) Type-1 (‘easy nukes’)
(41:26) Type-2a (‘safe first strike’)
(52:27) Type-2b (‘worse global warming’)
(59:21) Type-0 (‘surprising strangelets’)
(01:10:11) Achieving stabilization
(01:11:28) Technological relinquishment
(01:20:10) Preference modification
(01:30:26) Some specific countermeasures and their limitations
(01:39:18) Governance gaps
(01:42:09) Preventive policing
(01:55:53) Global governance
(02:01:32) Discussion
(02:24:08) Conclusions
(02:29:16) References
(02:29:25) Author Information