For Humanity: An AI Safety Podcast
In Episode #16, AI Risk Denier Down, things get weird. This show did not have to be like this. Our guest in Episode #16 is Timothy Lee, a computer scientist and journalist who founded and runs understandingai.org. Tim has written about AI risk many times, including these two recent essays: https://www.understandingai.org/p/why... https://www.understandingai.org/p/why... Tim was not prepared to discuss this work, which is when things started to go off the rails.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

MY QUESTIONS FOR TIM (We didn't even get halfway through, lol. YouTube won't let me include all of them, so I'm just including the questions about the second essay.)

OK, let's get into your second essay, "Why I'm not afraid of superintelligent AI taking over the world," from 11/15/23.

-You cite chess as a striking example of how AI will not take over the world. But I'd like to talk about AI safety researcher Steve Omohundro's take on chess.
-He says that if you had an unaligned AGI and asked it to get better at chess, it would first break into other servers to steal computing power so it could get better at chess. Then, when you discover this and try to stop it by turning it off, it sees your turning it off as a threat to its improving at chess, so it murders you.
-Where is he wrong?
-You wrote: "Think about a hypothetical graduate student. Let's say that she was able to reach the frontiers of physics knowledge after reading 20 textbooks. Could she have achieved a superhuman understanding of physics by reading 200 textbooks? Obviously not. Those extra 180 textbooks contain a lot of words, they don't contain very much knowledge she doesn't already have. So too with AI systems. I suspect that on many tasks, their performance will start to plateau around human-level performance. Not because they "run out of data," but because they reached the frontiers of human knowledge."
-In this you seem to assume that any one human is capable of mastering all the knowledge in a subject area better than any AI, because you seem to believe that one human is capable of holding ALL of the knowledge available on a given subject.
-This is ludicrous to me. You think humans are far too special.
-AN AGI WILL HAVE READ EVERY BOOK EVER WRITTEN. MILLIONS OF BOOKS. ACTIVELY CROSS-REFERENCING ACROSS EVERY DISCIPLINE.
-How could any human possibly compete with an AGI system that never sleeps and can read every word ever written in any language? No human could ever do this.
-Are you saying humans are the most perfect vessels of knowledge consumption possible in the universe?
-A human who has read 1,000 books in one area can compete on knowledge with an AGI that has read millions of books across thousands of areas? Really?
-You wrote: "AI safetyists assume that all problems can be solved with the application of enough brainpower. But for many problems, having the right knowledge matters more. And a lot of economically significant knowledge is not contained in any public data set. It's locked up in the brains and private databases of millions of individuals and organizations spread across the economy and around the world."
-Why do you assume an unaligned AGI would not raid every private database on Earth in a very short time and take in all this knowledge you find so special?
-Does this claim rest on the security protocols of the big AI companies?
-Security protocols, even at OpenAI, are widely seen as highly vulnerable to large-scale nation-state hacking. If China could hack into OpenAI, an AGI could surely hack into OpenAI, or anything else. An AGI's ability to spot and exploit vulnerabilities in human-written code is widely predicted.
-Let's see if we can leave this conversation on a note of agreement. Is there anything you think we can agree on?