Future of Life Institute Podcast

AIAP: Moral Uncertainty and the Path to AI Alignment with William MacAskill

57 min • 18 September 2018
How are we to make progress on AI alignment given moral uncertainty? What are the ideal ways of resolving conflicting value systems and views of morality among persons? How ought we to go about AI alignment given that we are unsure about our normative and metaethical theories? How should preferences be aggregated and persons idealized in the context of our uncertainty?

Moral Uncertainty and the Path to AI Alignment with William MacAskill is the fifth podcast in the new AI Alignment series, hosted by Lucas Perry. For those of you who are new, this series covers and explores the AI alignment problem across a wide variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across areas such as machine learning, AI safety, governance, coordination, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following us or subscribing to our podcasts on YouTube, SoundCloud, or your preferred podcast site/application. If you're interested in exploring the interdisciplinary nature of AI alignment, we suggest you take a look here at a preliminary landscape which begins to map this space.

In this podcast, Lucas spoke with William MacAskill. Will is a professor of philosophy at the University of Oxford and a co-founder of the Center for Effective Altruism, Giving What We Can, and 80,000 Hours. Will helped to create the effective altruism movement, and his writing focuses mainly on issues of normative and decision-theoretic uncertainty, as well as general issues in ethics.

Topics discussed in this episode include:
- Will’s current normative and metaethical credences
- The value of moral information and moral philosophy
- A taxonomy of the AI alignment problem
- How we ought to practice AI alignment given moral uncertainty
- Moral uncertainty in preference aggregation
- Moral uncertainty in deciding where we ought to be going as a society
- Idealizing persons and their preferences
- The most neglected portion of AI alignment