From Max Tegmark's Life 3.0 to Stuart Russell's Human Compatible and Nick Bostrom's Superintelligence, much has been written and said about the long-term risks of powerful AI systems. When considering concrete actions one can take to help mitigate these risks, governance- and policy-related solutions become an attractive area of consideration. But just what can anyone do in the present-day policy sphere to help ensure that powerful AI systems remain beneficial and aligned with human values? Do today's AI policies matter at all for AGI risk? Jared Brown and Nicolas Moës join us on today's podcast to explore these questions and why those concerned about AGI risk should be involved in present-day AI policy discourse.
Topics discussed in this episode include:
-The importance of current AI policy work for long-term AI risk
-Where we currently stand in the process of forming AI policy
-Why people worried about existential risk should care about present-day AI policy
-AI and the global community
-The rationality and irrationality around AI race narratives
You can find the page for this podcast here: https://futureoflife.org/2020/02/17/on-the-long-term-importance-of-current-ai-policy-with-nicolas-moes-and-jared-brown/
Timestamps:
0:00 Intro
4:58 Why it’s important to work on AI policy
12:08 Our historical position in the process of AI policy
21:54 For long-termists and those concerned about AGI risk, how is AI policy today important and relevant?
33:46 AI policy and shorter-term global catastrophic and existential risks
38:18 The Brussels and Sacramento effects
41:23 Why is racing on AI technology bad?
48:45 The rationality of racing to AGI
58:22 Where is AI policy currently?
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.