A post by Michael Nielsen that I found quite interesting. I have reproduced the full essay here, since I believe Michael is fine with that; let me know if I should only excerpt it instead.
This is the text of a talk exploring why experts disagree so strongly about whether artificial superintelligence (ASI) poses an existential risk to humanity. I review some key arguments on both sides, emphasizing that the fundamental danger isn't whether "rogue ASI" gets out of control: it's the raw power ASI will confer, and the lower barriers to creating dangerous technologies. This point is not new, but it has two underappreciated consequences. First, many people find rogue ASI implausible, and this has led them to mistakenly dismiss existential risk. Second, much work on AI alignment, while well-intentioned, speeds progress toward catastrophic capabilities without addressing our world's potential vulnerability to dangerous technologies.
[...]
---
Outline:
(06:37) Biorisk scenario
(17:42) The Vulnerable World Hypothesis
(26:08) Loss of control to ASI
(32:18) Conclusion
(38:50) Acknowledgements
The original text contained 29 footnotes which were omitted from this narration.
---
First published:
April 15th, 2025
Narrated by TYPE III AUDIO.