[This is one of the finalists in the 2023 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]
I'll begin with a contentious statement that I believe to be true, and that I've no interest in defending here: new books—at least new nonfiction books—are not meant to be read. In truth, a new book is a Schelling point for the transmission of ideas. So while the nominal purpose of a book review like this is to answer the question Should I read this book?, its real purpose is to answer Should I pick up these ideas?
I set out to find the best book-length argument—one that really engages with the technical issues—against imminent, world-dooming, Skynet-and-Matrix-manifesting artificial intelligence. I arrived at Why Machines Will Never Rule the World by Jobst Landgrebe and Barry Smith, published by Routledge just last year. Landgrebe, an AI and biomedicine entrepreneur, and Smith, an eminent philosopher, are connected by their study of Edmund Husserl, and the influence of Husserl and phenomenology is clear throughout the book. (“Influence of Husserl” is usually a good enough reason to stop reading something.)
Should you read Why Machines Will Never Rule the World? If you're an AI safety researcher or have a technical interest in the topic, then you might enjoy it. It's sweeping and impeccably researched, but it's also academic and at times demanding, and for long stretches the meat-to-shell ratio is poor. But should you pick up these ideas?
My aim here isn’t to summarize the book, or marinate you in its technical details. It is heady stuff. Rather, I simply want to give you a taste of the key arguments, enough to decide the question for yourself.
https://astralcodexten.substack.com/p/your-book-review-why-machines-will