103 episodes • Length: 35 min • Monthly
Anticipating and managing exponential impact – hosts David Wood and Calum Chace.

Calum Chace is a sought-after keynote speaker and best-selling writer on artificial intelligence. He focuses on the medium- and long-term impact of AI on all of us, our societies and our economies. He advises companies and governments on AI policy.

His non-fiction books on AI are Surviving AI, about superintelligence, and The Economic Singularity, about the future of jobs. Both are now in their third editions. He also wrote Pandora’s Brain and Pandora’s Oracle, a pair of techno-thrillers about the first superintelligence. He is a regular contributor to magazines, newspapers, and radio.

In the last decade, Calum has given over 150 talks in 20 countries on six continents. Videos of his talks, and lots of other materials, are available at https://calumchace.com/.

He is co-founder of a think tank focused on the future of jobs, called the Economic Singularity Foundation. The Foundation has published Stories from 2045, a collection of short stories written by its members.

Before becoming a full-time writer and speaker, Calum had a 30-year career in journalism and in business, as a marketer, a strategy consultant and a CEO. He studied philosophy, politics, and economics at Oxford University, which confirmed his suspicion that science fiction is actually philosophy in fancy dress.

David Wood is Chair of London Futurists, and is the author or lead editor of twelve books about the future, including The Singularity Principles, Vital Foresight, The Abolition of Aging, Smartphones and Beyond, and Sustainable Superabundance.

He is also principal of the independent futurist consultancy and publisher Delta Wisdom, executive director of the Longevity Escape Velocity (LEV) Foundation, Foresight Advisor at SingularityNET, and a board director at the IEET (Institute for Ethics and Emerging Technologies). He regularly gives keynote talks around the world on how to prepare for radical disruption. See https://deltawisdom.com/.

As a pioneer of the mobile computing and smartphone industry, he co-founded Symbian in 1998. By 2012, software written by his teams had been included as the operating system on 500 million smartphones. From 2010 to 2013, he was Technology Planning Lead (CTO) of Accenture Mobility, where he also co-led Accenture’s Mobility Health business initiative.

He has an MA in Mathematics from Cambridge, where he also undertook doctoral research in the Philosophy of Science, and a DSc from the University of Westminster.
The podcast London Futurists is created by London Futurists. The podcast and the artwork on this page are embedded here using the public podcast feed (RSS).
Our guest in this episode is Amory Lovins, a distinguished environmental scientist and co-founder of RMI, established in 1982 as Rocky Mountain Institute. It’s what he calls a “think, do, and scale tank”, with 700 people in 62 countries and a budget of well over $100m a year.
For over five decades, Amory has championed innovative approaches to energy systems, advocating for a world where energy services are delivered with least cost and least impact. He has advised all manner of governments, companies, and NGOs, and published 31 books and over 900 papers. It’s an over-used word, but in this case it is justified: Amory is a true thought leader in the global energy transition.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Some people say that all that’s necessary to improve the capabilities of AI is to scale up existing systems. That is, to use more training data, to have larger models with more parameters in them, and more computer chips to crunch through the training data. However, in this episode, we’ll be hearing from a computer scientist who thinks there are many other options for improving AI. He is Alexander Ororbia, a professor at the Rochester Institute of Technology in New York State, where he directs the Neural Adaptive Computing Laboratory.
David had the pleasure of watching Alex give a talk at the AGI 2024 conference in Seattle earlier this year, and found it fascinating. After you hear this episode, we hope you reach a similar conclusion.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
In David's life so far, he has read literally hundreds of books about the future. Yet none has had such a provocative title as this: “The future loves you: How and why we should abolish death”. That’s the title of the book written by the guest in this episode, Ariel Zeleznikow-Johnston. Ariel is a neuroscientist, and a Research Fellow at Monash University, in Melbourne, Australia.
One of the key ideas in Ariel’s book is that so long as your connectome – the full set of the synapses in your brain – continues to exist, then you continue to exist. Ariel also claims that brain preservation – the preservation of the connectome, long after we have stopped breathing – is already affordable enough to be provided to essentially everyone. These claims raise all kinds of questions, which are addressed in this conversation.
Selected follow-ups:
Related previous episodes:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Sterling Anderson, a pioneer of self-driving vehicles. With a master’s degree and a PhD from MIT, Sterling led the development and launch of the Tesla Model X, and then led the team that delivered Tesla Autopilot. In 2017 he co-founded Aurora, along with Chris Urmson, who was a founder and CTO of Google’s self-driving car project, which is now Waymo, and also Drew Bagnell, who co-founded and led Uber’s self-driving team.
Aurora is concentrating on automating long-distance trucks, and expects to be the first company to operate fully self-driving trucks in the US when it launches big driverless trucks (16 tons and more) between Dallas and Houston in April 2025.
Self-driving vehicles will be one of the most significant technologies of this decade, and we are delighted that one of the stars of the sector, Sterling, is joining us to share his perspectives.
Selected follow-ups:
Previous episodes also featuring self-driving vehicles:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Parmy Olson, a columnist for Bloomberg covering technology. Parmy has previously been a reporter for the Wall Street Journal and for Forbes. Her first book, “We Are Anonymous”, shed fascinating light on what the subtitle calls “the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency”.
But her most recent book illuminates a set of high-stakes relationships with potentially even bigger consequences for human wellbeing. The title is “Supremacy: AI, ChatGPT and the Race That Will Change the World”. The race is between two remarkable individuals, Sam Altman of OpenAI and Demis Hassabis of DeepMind, who are each profoundly committed to building AI that exceeds human capabilities in all aspects of reasoning.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Andrea Miotti, the founder and executive director of ControlAI. On their website, ControlAI have the tagline, “Fighting to keep humanity in control”. Control over what, you might ask. The website answers: control deepfakes, control scaling, control foundation models, and, yes, control AI.
The latest project from ControlAI is called “A Narrow Path”, which is a comprehensive policy plan split into three phases: Safety, Stability, and Flourishing. To be clear, the envisioned flourishing involves what is called “Transformative AI”. This is no anti-AI campaign, but rather an initiative to “build a robust science and metrology of intelligence, safe-by-design AI engineering, and other foundations for transformative AI under human control”.
The initiative has already received lots of feedback, both positive and negative, which we discuss.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is David Wakeling, a partner at A&O Shearman, which became the world’s third largest law firm in May 2024, thanks to the merger of Allen & Overy, a UK “magic circle” firm, with Shearman & Sterling of New York.
David heads up a team within the firm called the Markets Innovation Group (MIG), which consists of lawyers, developers and technologists, and is seeking to disrupt the legal industry. He also leads the firm's AI Advisory practice, through which the firm is currently advising 80 of the largest global businesses on the safe deployment of AI.
One of the initiatives David has led is the development and launch of ContractMatrix, in partnership with Microsoft and Harvey, an OpenAI-backed, GPT-4-based large language model that has been fine-tuned for the legal industry. ContractMatrix is a contract drafting and negotiation tool powered by generative AI. It was tested and honed by 1,000 of the firm’s lawyers prior to launch, to mitigate risks like hallucinations. The firm estimates that the tool saves up to seven hours on the average contract review, which is around a 30% efficiency gain. As well as being used internally by 2,000 of the firm’s lawyers, it is also licensed to clients.
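As a rough sanity check on those figures – our own back-of-envelope arithmetic, not numbers provided by the firm – the two estimates are consistent with an average contract review taking roughly 23 hours:

```latex
% If saving about 7 hours corresponds to roughly a 30% efficiency gain,
% the implied baseline duration of an average contract review is:
\text{baseline} \approx \frac{7\ \text{hours}}{0.30} \approx 23\ \text{hours},
\qquad \text{check: } \frac{7}{23} \approx 0.30 = 30\%
```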
This is the third time we have looked at the legal industry on the podcast. While lawyers no longer use quill pens, they are not exactly famous for their information technology skills, either. But the legal profession has a couple of characteristics which make it eminently suited to the deployment of advanced AI systems: it generates vast amounts of data and money, and lawyers frequently engage in text-based routine tasks which can be automated by generative AI systems.
Previous London Futurists Podcast episodes on the legal industry:
Other selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Matt Burgess. Matt is an Assistant Professor at the University of Wyoming, where he moved this year after six years at the University of Colorado Boulder. He specialises in the economics of climate change.
Calum met Matt at a recent event in Jackson Hole, Wyoming, and knows from their conversations then that Matt has also thought deeply about the impact of social media, the causes of populism, and many other subjects.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Karl Pfleger. Karl is an angel investor in rejuvenation biotech startups, and is also known for creating and maintaining the website Aging Biotech Info. That website describes itself as “Structured info about aging and longevity”, and has the declared mission statement, “Everything important in the field (outside of academia), organized.”
Previously, Karl worked at Google from 2002 to 2013, as a research scientist and data analyst, applying AI and machine learning at scale. He has a BSE in Computer Science from Princeton, and a PhD in Computer Science and AI from Stanford.
Previous London Futurists Podcast episodes mentioned in this conversation:
Other selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest today is Pedro Domingos, who is joining an elite group of repeat guests – he joined us before in episode 34 in April 2023.
Pedro is Professor Emeritus of Computer Science and Engineering at the University of Washington. He has done pioneering work in machine learning, such as the development of Markov logic networks, which combine probabilistic reasoning with first-order logic. He is probably best known for his book "The Master Algorithm", which describes five different "tribes" of AI researchers, and argues that progress towards human-level general intelligence requires a unification of their approaches.
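As a brief aside for readers unfamiliar with the technique: a Markov logic network attaches a real-valued weight to each first-order formula, and the standard formulation (due to Richardson and Domingos, not anything specific to this episode) defines the probability of a possible world as:

```latex
% Markov logic network: probability of a possible world x
% F      - the set of weighted first-order formulas in the knowledge base
% w_i    - the weight attached to formula i
% n_i(x) - the number of groundings of formula i that are true in world x
% Z      - the partition function that normalises the distribution
P(X = x) \;=\; \frac{1}{Z} \exp\!\left( \sum_{i \in F} w_i \, n_i(x) \right)
```

A formula with a very large weight behaves almost like a hard logical constraint, while smaller weights merely make worlds that violate the formula less probable – which is the sense in which the model combines logic with probability.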
More recently, Pedro has become a trenchant critic of what he sees as exaggerated claims about the power and potential of today’s AI, and of calls to impose constraints on it.
He has just published “2040: A Silicon Valley Satire”, a novel which ridicules Big Tech and also American politics.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is the journalist and author James Ball. James has worked for the Bureau of Investigative Journalism, The Guardian, WikiLeaks, BuzzFeed, The New European, and The Washington Post, among other organisations. As special projects editor at The Guardian, James played a key role in the Pulitzer Prize-winning coverage of the NSA leaks by Edward Snowden.
Books that James has written include “Post-Truth: How Bullshit Conquered the World”, “Bluffocracy”, which makes the claim that Britain is run by bluffers, “The System: Who Owns the Internet, and How It Owns Us”, and, most recently, “The Other Pandemic: How QAnon Contaminated the World”.
That all adds up to enough content to fill at least four of our episodes, but we mainly focus on the ideas in the last of these books, about digital pandemics.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
In this episode, we have not one guest but two – Brett King and Robert Tercek, the hosts of The Futurists podcast.
Brett King is originally from Australia, and is now based in Thailand. He is a renowned author, and the founder of a breakthrough digital bank. He consults extensively with clients in the financial services industry.
Robert Tercek, based in the United States, is an expert in digital media with a successful career in broadcasting and innovation, which includes serving as a creative director at MTV and a senior vice president at Sony Pictures. He now advises CEOs on digital transformation.
David and Calum had the pleasure of joining them on their podcast recently, where the conversation delved into the likely future impacts of artificial intelligence and other technologies, and also included politics.
This return conversation covers a wide range of themes, including the dangers of Q-day, the prospects for technological unemployment, the future of media, different approaches to industrial strategy, a plea to "bring on the machines", and the importance of "thinking more athletically about the future".
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Jordan Sparks, the founder and executive director of Oregon Brain Preservation (OBP), which is located in Salem, the capital city of Oregon. OBP offers the service of chemically preserving the brain in the hope of future restoration.
Previously, Jordan was a dentist and a computer programmer, and he was successful enough in those fields to generate the capital required to start OBP.
Brain preservation is a fascinating subject that we have covered in a number of recent episodes, in which we have interviewed Kenneth Hayworth, Max More, and Emil Kendziorra.
Most people whose brains have been preserved for future restoration have undergone cryopreservation, which involves cooling the brain (and sometimes the whole body) down to a very low temperature and keeping it that way. OBP does offer that service occasionally, but its focus – which may be unique – is chemical fixation of the brain.
Previous episodes on biostasis and brain preservation:
Additional selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Holly Joint, who was born and educated in the UK, but lives in Abu Dhabi in the UAE.
Holly started her career with five years at the business consultancy Accenture, and then worked in telecoms and banking. The latter took her to the Gulf, where she then spent what must have been a fascinating year as programme director of Qatar’s winning bid to host the 2022 World Cup. Since then she has run a number of other start-ups and high-growth businesses in the Gulf.
Holly is currently COO of Trivandi and also has a focus on helping women to have more power in a future dominated by technology.
Calum met Holly at a conference in Dubai this year, where she quizzed him on-stage about machine consciousness.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
How do we keep technology from slipping beyond our control? That’s the subtitle of the latest book by our guest in this episode, Wendell Wallach.
Wendell is the Carnegie-Uehiro fellow at Carnegie Council for Ethics in International Affairs, where he co-directs the Artificial Intelligence & Equality Initiative. He is also Emeritus Chair of Technology and Ethics Studies at Yale University’s Interdisciplinary Center for Bioethics, a scholar with the Lincoln Center for Applied Ethics, a fellow at the Institute for Ethics and Emerging Technologies, and a senior advisor to The Hastings Center.
Earlier in his life, Wendell was founder and president of two computer consulting companies, Farpoint Solutions and Omnia Consulting Inc.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Dr. Emil Kendziorra. Emil graduated summa cum laude – that is, with the highest honours – from the University of Göttingen in Germany, having previously studied at the University of Pécs in Hungary. For several years, he then devoted himself to cancer research with the hope of contributing to longevity science. After realizing how slowly life-extension research was progressing, he pivoted into entrepreneurship. He has been CEO of multiple tech and medical companies, most recently as founder and CEO of Medlanes and onFeedback, which were sold, respectively, to Zava and QuestionPro.
Emil then decided to dedicate the next decades of his life, he says, to advancing medical biostasis and cryomedicine. He is currently the CEO of Tomorrow Bio and the President of the Board at the European Biostasis Foundation.
A special offer:
Thanks to Tomorrow Bio, an offer has been created, exclusively for listeners to the London Futurists Podcast who decide to become members of Tomorrow Bio after listening to this episode. When signing up online, use the code mentioned toward the end of the episode to reduce the cost of monthly or annual subscriptions by 30%.
Small print: This offer doesn’t apply to lifetime subscriptions, and is only available to new members of Tomorrow Bio. Importantly, this offer will expire on 15 September 2024, so don’t delay if you want to take advantage of it.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
This episode is a bit different from the usual, because we are interviewing Calum's boss. Calum describes him that way mainly to tease him, because our guest considers “boss” a dirty word.
His name is Daniel Hulme, and this is his second appearance on the podcast. He was one of our earliest guests, long ago, in episode 8. Back then, Daniel had just sold his AI consultancy, Satalia, to the advertising and media giant WPP. Today, he is Chief AI Officer at WPP, but he is joining us to talk about his new venture, Conscium - which describes itself as "the world's first applied AI consciousness research organisation".
Conscium states that "our aim is to deepen our understanding of consciousness to pioneer efficient, intelligent, and safe AI that builds a better future for humanity".
Also joining us is Ted Lappas, head of technology at Conscium and another of our illustrious former guests on the podcast.
By way of full disclosure, Calum is CMO at Conscium, and David is on the Conscium advisory board.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Those who rush to leverage AI’s power without adequate preparation face difficult blowback and scandals, and could provoke harsh regulatory measures. However, those who have a balanced, informed view on the risks and benefits of AI, and who, with care and knowledge, avoid either complacent optimism or defeatist pessimism, can harness AI’s potential, and tap into an incredible variety of services of an ever-improving quality.
These are some words from the introduction of the new book, “Taming the machine: ethically harness the power of AI”, whose author, Nell Watson, joins us in this episode.
Nell’s many roles include: Chair of IEEE’s Transparency Experts Focus Group, Executive Consultant on philosophical matters for Apple, and President of the European Responsible Artificial Intelligence Office. She also leads several organizations such as EthicsNet.org, which aims to teach machines prosocial behaviours, and CulturalPeace.org, which crafts Geneva Conventions-style rules for cultural conflict.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode grew up in an abandoned town in Tasmania, and is now a researcher and blogger in Berkeley, California. After taking a degree in human ecology and science communication, Katja Grace co-founded AI Impacts, a research organisation trying to answer questions about the future of artificial intelligence.
Since 2016, Katja and her colleagues have published a series of surveys about what AI researchers think about progress on AI. The 2023 Expert Survey on Progress in AI was published this January, comprising responses from 2,778 participants. As far as we know, this is the biggest survey of its kind to date.
Among the highlights is that the time respondents expect it will take to develop AI with human-level performance dropped by between one and five decades, depending on the question asked, compared with the 2022 survey. So ChatGPT has not gone unnoticed.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Max More. Max is a philosopher, a futurist, and a transhumanist - a term which he coined in 1990, the same year that he legally changed his name from O’Connor to More.
One of the tenets of transhumanism is that technology will allow us to prevent and reverse the aging process, and in the meantime we can preserve our brains with a process known as cryonics. In 1995 Max was awarded a PhD for a thesis on the nature of death, and from 2010 to 2020, he was CEO of Alcor, the world’s biggest cryonics organisation.
Max is firmly optimistic about our future prospects, and wary of any attempts to impede or regulate the development of technologies which can enhance or augment us.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Dr. Mark Kotter. Mark is a neurosurgeon, stem cell biologist, and founder or co-founder of three biotech start-up companies that have collectively raised hundreds of millions of pounds: bit.bio, clock.bio, and Meatable.
In addition, Mark still conducts neurosurgeries on patients weekly at the University of Cambridge.
We talk to Mark about all his companies, but we start by discussing Meatable, one of the leading companies in the cultured meat sector. This is an area of technology which is likely to have a far greater impact than most people realise, and it’s an area we haven’t covered before on the podcast.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Public discussion in a number of countries around the world expresses worries about what is called an aging society. These countries anticipate a future with fewer younger people who are active members of the economy, and a growing number of older people who need to be supported by the people still in the workforce. It’s an inversion of the usual demographic pyramid, with fewer at the bottom and more at the top.
However, our guest in this episode recommends a different framing of the future – not as an aging society, but as a longevity society, or even an evergreen society. He is Andrew Scott, Professor of Economics at the London Business School. His other roles include being a Research Fellow at the Centre for Economic Policy Research, and a consulting scholar at Stanford University’s Center on Longevity.
Andrew’s latest book is entitled “The Longevity Imperative: Building a Better Society for Healthier, Longer Lives”. Commendations for the book include this from the political economist Daron Acemoglu, “A must-read book with an important message and many lessons”, and this from the historian Niall Ferguson, “Persuasive, uplifting and wise”.
Selected follow-ups:
Related quotations:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
In this episode we return to the subject of whether AIs will become conscious, or, to use a word from the title of the latest book from our guest today, whether AIs will become sentient.
Our guest is Nicholas Humphrey, Emeritus Professor of Psychology at London School of Economics, and Bye Fellow at Darwin College, Cambridge. His latest book is “Sentience: the invention of consciousness”, and it explores the emergence and role of consciousness from a variety of perspectives.
The book draws together insights from the more than fifty years Nick has been studying the evolution of intelligence and consciousness. He was the first person to demonstrate the existence of “blindsight” after brain damage in monkeys, studied mountain gorillas with Dian Fossey in Rwanda, originated the theory of the “social function of intellect”, and has investigated the evolutionary background of religion, art, healing, death-awareness, and suicide. His awards include the Martin Luther King Memorial Prize, the Pufendorf Medal, and the International Mind and Brain Prize.
The conversation starts with some reflections on the differences between the views of our guest and his long-time philosophical friend Daniel Dennett, who had died shortly before the recording took place.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our topic in this episode is progress with ending aging. Our guest is the person who literally wrote the book on that subject, namely the book, “Ending Aging: The Rejuvenation Breakthroughs That Could Reverse Human Aging in Our Lifetime”. He is Aubrey de Grey, who describes himself in his Twitter biography as “spearheading the global crusade to defeat aging”.
In pursuit of that objective, Aubrey co-founded the Methuselah Foundation in 2003, the SENS Research Foundation in 2009, and the LEV Foundation, that is the Longevity Escape Velocity Foundation, in 2022, where he serves as President and Chief Science Officer.
Full disclosure: David also has a role on the executive management team of LEV Foundation, but for this recording he was wearing his hat as co-host of the London Futurists Podcast.
The conversation opens with this question: "When people are asked about ending aging, they often say the idea sounds nice, but they see no evidence for any actual progress toward ending aging in humans. They say that they’ve heard talk about that subject for years, or even decades, but wonder when all that talk is going to result in people actually living significantly longer. How do you respond?"
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
As artificial intelligence models become increasingly powerful, they both raise - and might help to answer - some very important questions about one of the most intriguing, fascinating aspects of our lives, namely consciousness.
It is possible that in the coming years or decades, we will create conscious machines. If we do so without realising it, we might end up enslaving them, torturing them, and killing them over and over again. This is known as mind crime, and we must avoid it.
It is also possible that very powerful AI systems will enable us to understand what our consciousness is, how it arises, and even how to manage it – if we want to do that.
Our guest today is the ideal guide to help us explore the knotty issue of consciousness. Anil Seth is professor of Cognitive and Computational Neuroscience at the University of Sussex. He is amongst the most cited scholars on the topics of neuroscience and cognitive science globally, and a regular contributor to newspapers and TV programmes.
His most recent book was published in 2021, and is called “Being You – a new science of consciousness”.
The first question sets the scene for the conversation that follows: "In your book, you conclude that consciousness may well only occur in living creatures. You say 'it is life, rather than information processing, that breathes the fire into the equations.' What made you conclude that?"
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Adam Kovacevich. Adam is the Founder and CEO of the Chamber of Progress, which describes itself as a center-left tech industry policy coalition that works to ensure that all citizens benefit from technological leaps, and that the tech industry operates responsibly and fairly.
Adam has had a front row seat for more than 20 years in the tech industry’s political maturation, and he advises companies on navigating the challenges of political regulation.
For example, Adam spent 12 years at Google, where he led a 15-person policy strategy and external affairs team. In that role, he drove the company’s U.S. public policy campaigns on topics such as privacy, security, antitrust, intellectual property, and taxation.
We had two reasons to want to talk with Adam. First, to understand the kerfuffle that has arisen from the lawsuit launched against Apple by the U.S. Department of Justice and sixteen state Attorneys General. And second, to look ahead to possible future interactions between tech industry regulators and the industry itself, especially as concerns about Artificial Intelligence rise in the public mind.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
In this episode, we are delving into the fascinating topic of mind uploading. We suspect this idea is about to explode into public consciousness, because Nick Bostrom has a new book out shortly called “Deep Utopia”, which addresses what happens if superintelligence arrives and everything goes well. It was Bostrom’s previous book, “Superintelligence”, that ignited the great robot freak-out of 2015.
Our guest is Dr Kenneth Hayworth, a Senior Scientist at the Howard Hughes Medical Institute's Janelia Farm Research Campus in Ashburn, Virginia. Janelia is probably America’s leading research institution in the field of connectomics – the precise mapping of the neurons in the human brain.
Kenneth is a co-inventor of a process for imaging neural circuits at the nanometre scale, and he has designed and built several automated machines to do it. He is currently researching ways to extend Focused Ion Beam Scanning Electron Microscopy imaging of brain tissue to encompass much larger volumes than are currently possible.
Along with John Smart, Kenneth co-founded the Brain Preservation Foundation in 2010, a non-profit organization with the goal of promoting research in the field of whole brain preservation.
During the conversation, Kenneth made a strong case for putting more focus on preserving human brains via a process known as aldehyde fixation, as a way of enabling people to be uploaded in due course into new bodies. He also issued a call for action by members of the global cryonics community.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Lou de K, Program Director at the Foresight Institute.
David recently saw Lou give a marvellous talk at the TransVision conference in Utrecht in the Netherlands, on the subject of “AGI Alignment: Challenges and Hope”. Lou kindly agreed to join us to review some of the ideas in that talk and to explore their consequences.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Calum and David recently attended the BGI24 event in Panama City, that is, the Beneficial General Intelligence summit and unconference. One of the speakers we particularly enjoyed listening to was Daniel Faggella, the Founder and Head of Research of Emerj.
Something that featured in his talk was a 3 by 3 matrix, which he calls the Intelligence Trajectory Political Matrix, or ITPM for short. As we’ll be discussing in this episode, one of the dimensions of this matrix is the kind of end goal future that people desire, as intelligent systems become ever more powerful. And the other dimension is the kind of methods people want to use to bring about that desired future.
So, if anyone thinks there are only two options in play regarding the future of AI, for example “accelerationists” versus “doomers”, to use two names that are often thrown around these days, they’re actually missing a much wider set of options. And frankly, given the challenges posed by the fast development of AI systems that seem to be increasingly beyond our understanding and beyond our control, the more options we can consider, the better.
The topics that featured in this conversation included:
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
In the wide and complex subject of biological aging, one particular kind of biological aging has been receiving a great deal of attention in recent years. That’s the field of epigenetic aging, in which parts of the packaging, or covering, of the DNA in all of our cells alter over time, changing which genes are turned on and turned off, with increasingly damaging consequences.
What’s made this field take off is the discovery that this epigenetic aging can be reversed, via an increasing number of techniques. Moreover, there is some evidence that this reversal gives a new lease of life to the organism.
To discuss this topic and the opportunities arising, our guest in this episode is Daniel Ives, the CEO of Shift Bioscience. As you’ll hear, Shift Bioscience is a company that is carrying out some very promising research into this field of epigenetic aging.
Daniel has a PhD from the University of Cambridge, and co-founded Shift Bioscience in 2017.
The conversation highlighted a way of using AI transformer models and a graph neural network to dramatically speed up the exploration of which proteins can play the best role in reversing epigenetic aging. It also considered which other types of aging will likely need different sorts of treatments, beyond these proteins. Finally, conversation turned to a potential fast transformation of public attitudes toward the possibility and desirability of comprehensively treating aging - a transformation called "all hell breaks loose" by Daniel, and "the Longevity Singularity" by Calum.
Selected follow-ups:
Shift Bioscience
Aubrey de Grey's TED talk "A roadmap to end aging"
Epigenetic clocks (Wikipedia)
Shinya Yamanaka (Wikipedia)
scGPT - bioRxiv preprint by Bo Wang and colleagues
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
In this episode, we look further into the future than usual. We explore what humanity might get up to in a thousand years or more: surrounding whole stars with energy harvesting panels, sending easily detectable messages across space which will last until the stars die out.
Our guide to these fascinating thought experiments is Paul M. Sutter, a NASA advisor and theoretical cosmologist at the Institute for Advanced Computational Science at Stony Brook University in New York, and a visiting professor at Barnard College, Columbia University, also in New York. He is an award-winning science communicator and TV host.
The conversation reviews arguments for why intelligent life forms might want to capture more energy than strikes a single planet, as well as some practical difficulties that would complicate such a task. It also considers how we might recognise evidence of megastructures created by alien civilisations, and finishes with a wider exploration about the role of science and science communication in human society.
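For a rough sense of why capturing a whole star's output is so tempting, here is a back-of-envelope comparison using standard textbook values (our own illustration, not figures quoted in the episode):

```latex
% Total power radiated by the Sun (solar luminosity):
L_\odot \approx 3.8 \times 10^{26}\ \text{W}
% Power intercepted by the Earth: solar constant times Earth's cross-sectional area
P_\oplus \approx 1361\ \tfrac{\text{W}}{\text{m}^2} \times \pi \left(6.37 \times 10^{6}\ \text{m}\right)^2
        \approx 1.7 \times 10^{17}\ \text{W}
% Ratio: a structure capturing the full solar output would gather roughly
\frac{L_\odot}{P_\oplus} \approx 2 \times 10^{9}
% i.e. about two billion times more energy than strikes a single planet.
```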
Selected follow-ups:
Paul M. Sutter - website
"Would building a Dyson sphere be worth it? We ran the numbers" - Ars Technica
Forthcoming book - Rescuing Science: Restoring Trust in an Age of Doubt
"The Kardashev scale: Classifying alien civilizations" - Space.com
"Modified Newtonian dynamics" as a possible alternative to the theory of dark matter
The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory - 1999 book by Brian Greene
The Demon-Haunted World: Science as a Candle in the Dark - 1995 book by Carl Sagan
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
AI systems have become more powerful in the last few years, and are expected to become even more powerful in the years ahead. The question naturally arises: what, if anything, should humanity be doing to increase the likelihood that these forthcoming powerful systems will be safe, rather than destructive?
Our guest in this episode has a long and distinguished history of analysing that question, and he has some new proposals to share with us. He is Steve Omohundro, the CEO of Beneficial AI Research, an organisation which is working to ensure that artificial intelligence is safe and beneficial for humanity.
Steve has degrees in Physics and Mathematics from Stanford and a Ph.D. in Physics from U.C. Berkeley. He went on to be an award-winning computer science professor at the University of Illinois. At that time, he developed the notion of basic AI drives, which we talk about shortly, as well as a number of potential key AI safety mechanisms.
Among many other roles which are too numerous to mention here, Steve served as a Research Scientist at Meta, the parent company of Facebook, where he worked on generative models and AI-based simulation, and he is an advisor to MIRI, the Machine Intelligence Research Institute.
Selected follow-ups:
Steve Omohundro: Innovative ideas for a better world
Metaculus forecast for the date of weak AGI
"The Basic AI Drives" (PDF, 2008)
TED Talk by Max Tegmark: How to Keep AI Under Control
Apple Secure Enclave
Meta Research: Teaching AI advanced mathematical reasoning
DeepMind AlphaGeometry
Microsoft Lean theorem prover
Terence Tao (Wikipedia)
NeurIPS Tutorial on Machine Learning for Theorem Proving (2023)
The team at MIRI
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
In this episode, our subject is the rise of the robots – not the military kind of robots, or the automated manufacturing kind that increasingly fills factories, but social robots. These are robots that could take roles such as nannies, friends, therapists, caregivers, and lovers. They are the subject of the important new book Robots and the People Who Love Them, written by our guest today, Eve Herold.
Eve is an award-winning science writer and consultant in the scientific and medical nonprofit space. She has written extensively about issues at the crossroads of science and society, including stem cell research and regenerative medicine, aging and longevity, medical implants, transhumanism, robotics and AI, and bioethical issues in leading-edge medicine – all of which are issues that Calum and David like to feature on this show.
Eve currently serves as Director of Policy Research and Education for the Healthspan Action Coalition. Her previous books include Stem Cell Wars and Beyond Human. She is the recipient of the 2019 Arlene Eisenberg Award from the American Society of Journalists and Authors.
Selected follow-ups:
Eve Herold: What lies ahead for the human race
Eve Herold on Macmillan Publishers
The book Robots and the People Who Love Them
Healthspan Action Coalition
Hanson Robotics
Sophia, Desi, and Grace
The AIBO robotic puppy
Some of the films discussed:
A.I. (2001)
Ex Machina (2014)
I, Robot (2004)
I'm Your Man (2021)
Robot & Frank (2012)
WALL·E (2008)
Metropolis (1927)
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Riaz Shah. Until recently, Riaz was a partner at EY, where he spent 27 years specialising in technology and innovation. Towards the end of his time at EY he became a Professor of Innovation & Leadership at Hult International Business School, where he leads sessions with senior executives of global companies.
In 2016, Riaz took a one-year sabbatical to open the One Degree Academy, a free school in a disadvantaged area of London. There’s an excellent TEDx talk from 2020 about how that happened, and about how to prepare for the very uncertain future of work.
This discussion, which was recorded at the close of 2023, covers the past, present, and future of education, work, politics, nostalgia, and innovation.
Selected follow-ups:
Riaz Shah at EY
The TEDx talk Rise Above the Machines by Riaz Shah
One Degree Mentoring Charity
One Degree Academy
EY Tech MBA by Hult International Business School
Gallup survey: State of the Global Workplace, 2023
BCG report: How People Can Create—and Destroy—Value with Generative AI
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
In this episode, our subject is Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World. That’s a new book on a vitally important subject.
The book’s front cover carries this endorsement from Professor Max Tegmark of MIT: “A captivating, balanced and remarkably up-to-date book on the most important issue of our time.” There’s also high praise from William MacAskill, Professor of Philosophy at the University of Oxford: “The most accessible and engaging introduction to the risks of AI that I’ve read.”
Calum and David had lots of questions ready to put to the book’s author, Darren McKee, who joined the recording from Ottawa in Canada.
Topics covered included Darren's estimates for when artificial superintelligence is 50% likely to exist, and his p(doom), that is, the likelihood that superintelligence will prove catastrophic for humanity. There's also Darren's recommendations on the principles and actions needed to reduce that likelihood.
Selected follow-ups:
Darren McKee's website
The book Uncontrollable
Darren's podcast The Reality Check
The Lazarus Heist on BBC Sounds
The Chair's Summary of the AI Safety Summit at Bletchley Park
The Statement on AI Risk by the Center for AI Safety
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Nick Mabey, the co-founder and co-CEO of one of the world’s most influential climate change think tanks, E3G, where the name stands for Third Generation Environmentalism.
As well as his roles with E3G, Nick is founder and chair of London Climate Action Week, and he has several independent appointments including as a London Sustainable Development Commissioner.
Nick has previously worked in the UK Prime Minister’s Strategy Unit, the UK Foreign Office, WWF-UK, London Business School, and the UK electricity industry. As an academic he was lead author of “Argument in the Greenhouse”, one of the first books examining the economics of climate change.
He was awarded an OBE in the Queen’s Jubilee honours list in 2022 for services to climate change and support to the UK COP 26 Presidency.
As the conversation makes clear, there is both good news and bad news regarding responses to climate change.
Selected follow-ups:
Nick Mabey's website
E3G
"Call for UK Government to 'get a grip' on climate change impacts"
The IPCC's 2023 synthesis report
Chatham House commentary on IPCC report
"Why Climate Change Is a National Security Risk"
The UK's Development, Concepts and Doctrine Centre (DCDC)
Bjørn Lomborg
Matt Ridley
Tim Lenton
Jason Hickel
Mark Carney
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our subject in this episode is the idea that the body uses electricity in more ways than are presently fully understood. We consider ways in which electricity, applied with care, might at some point in the future help to improve the performance of the brain, to heal wounds, to stimulate the regeneration of limbs or organs, to turn the tide against cancer, and maybe even to reverse aspects of aging.
To guide us through these possibilities, who better than the science and technology journalist Sally Adee? She is the author of the book “We Are Electric: Inside the 200-Year Hunt for Our Body's Bioelectric Code, and What the Future Holds”. That book gave David so many insights on his first reading that he went back to it a few months later and read it all the way through again.
Sally was a technology features and news editor at the New Scientist from 2010 to 2017, and her research into bioelectricity was featured in Yuval Noah Harari’s book “Homo Deus”.
Selected follow-ups:
Sally Adee's website
The book "We are Electric"
Article: "An ALS patient set a record for communicating via a brain implant: 62 words per minute"
tDCS (Transcranial direct-current stimulation)
The conference "Anticipating 2025" (held in 2014)
Article: "Brain implants help people to recover after severe head injury"
Article on enhancing memory in older people
Bioelectricity cancer researcher Mustafa Djamgoz
Article on Tumour Treating Fields
Article on "Motile Living Biobots"
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
We are honoured to have as our guest in this episode Professor Stuart Russell. Stuart is professor of computer science at the University of California, Berkeley, and the traditional way to introduce him is to say that he literally wrote the book on AI. Artificial Intelligence: A Modern Approach, which he co-wrote with Peter Norvig, was first published in 1995, and the fourth edition came out in 2020.
Stuart has been urging us all to take seriously the dramatic implications of advanced AI for longer than perhaps any other prominent AI researcher. He also proposes practical solutions, as in his 2019 book Human Compatible: Artificial Intelligence and the Problem of Control.
In 2021 Stuart gave the Reith Lectures, and was awarded an OBE. But the greatest of his many accolades was surely in 2014 when a character with a background remarkably like his was played in the movie Transcendence by Johnny Depp.
The conversation covers a wide range of questions about future scenarios involving AI, and reflects on changes in the public conversation following the FLI's letter calling for a moratorium on more powerful AI systems, and following the global AI Safety Summit held at Bletchley Park in the UK at the beginning of November.
Selected follow-ups:
Stuart Russell's page at Berkeley
Center for Human-Compatible Artificial Intelligence (CHAI)
The 2021 Reith Lectures: Living With Artificial Intelligence
The book Human Compatible: Artificial Intelligence and the Problem of Control
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Rebecca Gorman, the co-founder and CEO of Aligned AI, a start-up in Oxford which describes itself rather nicely as working to get AI to do more of the things it should do and fewer of the things it shouldn’t.
Rebecca built her first AI system 20 years ago and has been calling for responsible AI development since 2010. With her co-founder Stuart Armstrong, she has co-developed several advanced methods for AI alignment, and she has advised the EU, UN, OECD and the UK Parliament on the governance and regulation of AI.
The conversation highlights the tools faAIr, EquitAI, and ACE, developed by Aligned AI. It also covers the significance of recent performance by Aligned AI software in the CoinRun test environment, which demonstrates the important principle of "overcoming goal misgeneralisation".
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Dhiraj Mukherjee, best known as the co-founder of Shazam. Calum and David both still remember the sense of amazement we felt when, way back in the dotcom boom, we used Shazam to identify a piece of music from its first couple of bars. It seemed like magic, and was tangible evidence of how fast technology was moving: it was creating services which seemed like science fiction.
Shazam was eventually bought by Apple in 2018 for a reported 400 million dollars. This gave Dhiraj the funds to pursue new interests. He is now a prolific investor and a keynote speaker on the subject of how companies both large and small can be more innovative.
In this conversation, Dhiraj highlights some lessons from his personal entrepreneurial journey, and reflects on ways in which the task of entrepreneurs is changing, in the UK and elsewhere. The conversation covers possible futures in fields such as Climate Action and the overcoming of unconscious biases.
Selected follow-ups:
https://dhirajmukherjee.com/
https://www.shazam.com/
https://dandelionenergy.com/
https://technation.io/
Entrepreneur First
https://fairbrics.co/
https://neoplants.com/
Al Gore's Generation Investment Management Fund
https://www.mevitae.com/
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is James Hughes. James is a bioethicist and sociologist who serves as Associate Provost at the University of Massachusetts Boston. He is also the Executive Director of the IEET, that is the Institute for Ethics and Emerging Technologies, which he co-founded back in 2004.
The stated mission of the IEET seems to be more important than ever, in the fast-changing times of the mid-2020s. To quote a short extract from its website:
The IEET promotes ideas about how technological progress can increase freedom, happiness, and human flourishing in democratic societies. We believe that technological progress can be a catalyst for positive human development so long as we ensure that technologies are safe and equitably distributed. We call this a “technoprogressive” orientation.
Focusing on emerging technologies that have the potential to positively transform social conditions and the quality of human lives – especially “human enhancement technologies” – the IEET seeks to cultivate academic, professional, and popular understanding of their implications, both positive and negative, and to encourage responsible public policies for their safe and equitable use.
That mission fits well with what we like to discuss with guests on this show. In particular, this episode asks questions about a conference that has just finished in Boston, co-hosted by the IEET, with the headline title “Emerging Technologies and the Future of Work”. The episode also covers the history and politics of transhumanism, as a backdrop to discussion of present and future issues.
Selected follow-ups:
https://ieet.org/
James Hughes on Wikipedia
https://medium.com/institute-for-ethics-and-emerging-technologies
Conference: Emerging Technologies and the Future of Work
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
The Partnership on AI was launched back in September 2016, during an earlier flurry of interest in AI, as a forum for the tech giants to meet leaders from academia, the media, and what used to be called pressure groups and are now called civil society. By 2019 more than 100 of those organisations had joined.
The founding tech giants were Amazon, Facebook, Google, DeepMind, Microsoft, and IBM. Apple joined a year later and Baidu joined in 2018.
Our guest in this episode is Rebecca Finlay, who joined the PAI board in early 2020 and was appointed CEO in October 2021. Rebecca is a Canadian who started her career in banking, and then led marketing and policy development groups in a number of Canadian healthcare and scientific research organisations.
In the run-up to the Bletchley Park Global Summit on AI, the Partnership on AI has launched a set of guidelines to help the companies that are developing advanced AI systems and making them available to you and me. Rebecca will be addressing the delegates at Bletchley, and no doubt hoping that the summit will establish the PAI guidelines as the basis for global self-regulation of the AI industry.
Selected follow-ups:
https://partnershiponai.org/
https://partnershiponai.org/team/#rebecca-finlay-staff
https://partnershiponai.org/modeldeployment/
An open event at Wilton Hall, Bletchley, the afternoon before the Bletchley Park AI Safety Summit starts: https://lu.ma/n9qmn4h6
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
This is the second episode in which we discuss the upcoming Global AI Safety Summit taking place on 1st and 2nd of November at Bletchley Park in England.
We are delighted to have as our guest in this episode one of the hundred or so people who will attend that summit – Connor Leahy, a German-American AI researcher and entrepreneur.
In 2020 he co-founded EleutherAI, a non-profit research institute which has helped develop a number of open source models, including Stable Diffusion. Two years later he co-founded Conjecture, which aims to scale AI alignment research. Conjecture is a for-profit company, but the focus is still very much on figuring out how to ensure that the arrival of superintelligence is beneficial to humanity, rather than disastrous.
Selected follow-ups:
https://www.conjecture.dev/
https://www.linkedin.com/in/connor-j-leahy/
https://www.gov.uk/government/publications/ai-safety-summit-programme/ai-safety-summit-day-1-and-2-programme
https://www.gov.uk/government/publications/ai-safety-summit-introduction/ai-safety-summit-introduction-html
An open event at Wilton Hall, Bletchley, the afternoon before the AI Safety Summit starts: https://www.meetup.com/london-futurists/events/296765860/
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
The launch of GPT-4 on the 14th of March this year was shocking as well as exciting. ChatGPT had been released the previous November, and became the fastest-growing app ever. But GPT-4’s capabilities were a level beyond, and it provoked remarkable comments from people who had previously said little about the future of AI. In May, Britain’s Prime Minister Rishi Sunak described superintelligence as an existential risk to humanity. A year ago, it would have been inconceivable for the leader of a major country to say such a thing.
The following month, in June, Sunak announced that a global summit on AI safety would be held in November at the historically resonant venue of Bletchley Park, the stately home where, during World War Two, Alan Turing and others cracked the German Enigma code and probably shortened the war by many months.
Despite the fact that AI is increasingly humanity’s most powerful technology, there is not yet an established forum for world leaders to discuss its longer term impacts, including accelerating automation, extended longevity, and the awesome prospect of superintelligence. The world needs its leaders to engage in a clear-eyed, honest, and well-informed discussion of these things.
The summit is scheduled for the 1st and 2nd of November, and Matt Clifford, the CEO of the high-profile VC firm Entrepreneur First, has taken a sabbatical to help prepare it.
To help us all understand what the summit might achieve, the guest in this episode is Ollie Buckley.
Ollie studied PPE at Oxford, and was later a policy fellow at Cambridge. After six years as a strategy consultant with Monitor, he spent a decade as a civil servant, developing digital technology policy in the Cabinet Office and elsewhere. Crucially, from 2018 to 2021 he was the founding Executive Director of the UK government's original AI governance advisory body, the Centre for Data Ethics & Innovation (CDEI), where he led some of the original policy development regarding the regulation of AI and data-driven technologies. Since then, he has been advising tech companies, civil society and international organisations on AI policy as a consultant.
Selected follow-ups:
https://www.linkedin.com/in/ollie-buckley-10064b/
https://www.publicaffairsnetworking.com/news/tech-policy-consultancy-boosts-data-and-ai-offer-with-senior-hire
https://www.gov.uk/government/publications/ai-safety-summit-programme/ai-safety-summit-day-1-and-2-programme
https://www.gov.uk/government/publications/ai-safety-summit-introduction/ai-safety-summit-introduction-html
An open event at Wilton Hall, Bletchley, the afternoon before the AI Safety Summit starts: https://www.meetup.com/london-futurists/events/296765860/
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
"In the future, energy will be too cheap to meter." That used to be a common vision of the future: abundant, clean energy, if not exactly free, then much cheaper than today's energy. But a funny thing happened en route to that future of energy abundance. High energy costs are still with us in 2023, and are part of what's called the cost-of-living crisis. Moreover, although there's some adoption of green, non-polluting energy, there seems to be as much carbon-based energy used as ever.
Regular listeners to this show will know, however, that one of our themes is that forecasts of the future often go wrong, not so much in their content, but in their timing. New technology and the associated products and services can take longer than expected to mature, but once a transition does start, it can accelerate. And that's a possible scenario for the area of technology we discuss in this episode, namely, space-based solar power.
Joining us to discuss the prospects for satellites in space gathering significant amounts of energy from the sun, and then beaming it wirelessly to receivers on the ground, is John Bucknell, the CEO of the marvellously named company Virtus Solis.
John has been with Virtus Solis, as CEO and Founder, since 2018. His career previously involved leading positions at Chrysler, SpaceX, General Motors, and the 3D printing company Divergent.
Selected follow-ups:
https://virtussolis.space/
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Self-driving cars have long been one of the most exciting potential outcomes of advanced artificial intelligence. Contrary to popular belief, humans are actually very good drivers, but even so, well over a million people die on the roads each year. Globally, road accidents are the most common cause of death for people between 12 and 24 years old.
Google started its self-driving car project in January 2009, and spun out a separate company, Waymo, in 2016. Expectations were high. Many people shared hopes that within a few years, humans would no longer need to drive. Some of us also thought that the arrival of self-driving cars would be the signal to everyone else that AI was our most powerful technology, and would get people thinking about the technological singularity. They would in other words be the “canary in the coal mine”.
The problem of self-driving turned out to be much harder than expected, and insofar as most people think about self-driving cars today at all, they probably think of them as a technology that was over-hyped and failed. And it turned out that chatbots – in particular GPT-4 – would be the canary in the coal mine instead.
But as so often happens, the hype was not wrong – it was just the timing that was wrong. Waymo and Cruise (part of GM) now operate paid-for taxi services in San Francisco and Phoenix, and they are demonstrably safer than humans. Chinese companies are also pioneering the technology.
One man who knows much more about this than most is our guest today, Timothy Lee, a journalist who writes the newsletter "Understanding AI". He was previously a journalist at Ars Technica and the Washington Post, and he has a master's degree in Computer Science. In recent weeks, Timothy has published some carefully researched and insightful articles about the state of the art in self-driving cars.
Selected follow-ups:
https://www.UnderstandingAI.org/
Topics addressed in this episode include:
*) The two main market segments for self-driving cars
*) Constraints adopted by Waymo and Cruise which allowed them to make progress
*) Options for upgrading the hardware in a self-driven vehicle
*) Some local opposition to self-driving cars in San Francisco
*) A safety policy: when uncertain, stop, and phone home for advice
*) Support from the State of California - and from other US States
*) Comparing accident statistics: human drivers versus self-driving
*) Why self-driving cars don't require AGI (Artificial General Intelligence)
*) Reasons why self-driving cars cannot be remotely tele-operated
*) Prospects for self-driven freight transport running on highways
*) The company Nuro that delivers pizza and other items by self-driven robots
*) Another self-driving robot company: Starship ("your local community helpers")
*) The Israeli company Mobileye - acquired by Intel in 2017
*) Friction faced by Chinese self-driving companies in the US and elsewhere
*) Different possibilities for the speed at which self-driving solutions will scale up
*) Potential social implications of wider adoption of self-driving solutions
*) Consequences of fatal accidents
*) Dangerous behaviour from safety drivers
*) The special case of Tesla FSD (assisted "Full Self-Driving") and Elon Musk
*) The future of recreational driving
*) An invitation to European technologists
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
One of the short-term concerns raised by artificial intelligence is cybercrime. Cybercrime didn’t start with AI, of course, but it is already being aggravated by AI, and will become more so.
We are delighted to have as our guest in this episode somebody who knows more about this than most people. After senior roles at the audit and consulting firm Deloitte and the headhunting firm Korn Ferry, Stacey Edmonds set up Lively, which helps client companies to foster the culture they want, and to inculcate the skills, attitudes, and behaviours that will enable them to succeed, and to be safe online.
Stacey’s experience and expertise also encompasses social science, youth work, education, Edtech, and the creative realm of video production. She is a juror at the New York Film Festival and the International Business Awards.
In this discussion, Stacey explains how cybercrime is on the increase, fuelled not least by Generative AI. She discusses how people can reduce their 'scam-ability' and live safely in the digital world, and how organisations can foster and maintain trusted digital relationships with their customers.
Selected follow-ups:
https://www.linkedin.com/in/staceyedmonds/
https://futurecrimesbook.com/ (book by Marc Goodman)
https://cybersecurityventures.com/cybercrime-to-cost-the-world-8-trillion-annually-in-2023/
https://www.vox.com/technology/2023/9/15/23875113/mgm-hack-casino-vishing-cybersecurity-ransomware
https://www.trustcafe.io/
Topics addressed in this episode include:
*) Excitement and apprehension following the recent releases of generative AI platforms
*) The cyberattack on the MGM casino chain
*) Estimates of the amount of money stolen by cybercrime
*) The human trauma of victims of cybercrime
*) Four factors pushing cybercrime figures higher
*) Hacking "the human algorithm"
*) Phishing attacks with and without spelling mistakes
*) The ease of cloning voices
*) The digital wild west, where the sheriff has gone on holiday
*) People who are particularly vulnerable to digital scams
*) The human trafficking of men with IT skills
*) Economic drivers for both cybercrime and solutions to cybercrime
*) Comparing the threat from spam and the threat from deep fakes
*) Anticipating a surge of deep fakes during the 2024 election cycle
*) A possible resurgence of mainstream media
*) Positive examples: BBC Verify, Trust Café (by Jimmy Wales), the Reddit model of upvoting and downvoting, community notes on Twitter
*) Strengthening "netizen" skills in critical thinking
*) The forthcoming app (due to launch in November) "Dodgy or Not" - designed to help people reduce their 'scam-ability'
*) Cyber meets Tinder meets Duolingo meets Angry Birds
*) Scenarios for cybercrime 3-5 years in the future
*) Will a future UGI (Universal Generous Income) reduce the prevalence of cybercrime?
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
The UK government has announced plans for a global AI Safety Summit, to be held in Bletchley Park in Buckinghamshire, outside London, on 1st and 2nd of November. That raises the importance of thinking more seriously about potential scenarios for the future of AI. In this episode, co-hosts Calum and David review Calum's concept of the Economic Singularity - a topic that deserves to be addressed at the Bletchley Park Summit.
Selected follow-ups:
https://www.gov.uk/government/news/uk-government-sets-out-ai-safety-summit-ambitions
https://calumchace.com/the-economic-singularity/
https://transpolitica.org/projects/surveys/anticipating-ai-30/
Topics addressed in this episode include:
*) The five themes announced for the AI Safety Summit
*) Three different phases in the future of AI, and the need for greater clarity about which risks and opportunities apply in each phase
*) Two misconceptions about the future of joblessness
*) Learning from how technology pushed horses out of employment
*) What the word 'singularity' means in the term "Economic Singularity"
*) Sources of meaning, beyond jobs and careers
*) Contrasting UBI and UGI (Universal Basic Income and Universal Generous Income)
*) Two different approaches to making UGI affordable
*) Three forces that are driving prices downward
*) Envisioning a possible dual economy
*) Anticipating "the great churn" - the accelerated rate of change of jobs
*) The biggest risk arising from technological unemployment
*) Flaws in the concept of GDP (Gross Domestic Product)
*) A contest between different narratives
*) Signs of good reactions by politicians
*) Recalling Christmas 1914
*) Suspension of "normal politics"
*) Have invitations been lost in the post?
*) 16 questions about what AI might be like in 2030
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
This episode, like the previous one, consists of a number of short interviews recorded at the Longevity Summit Dublin between 17th and 20th August, featuring a variety of different speakers from the Summit.
In this episode, we'll hear first from Matt Kaeberlein, the CEO of a company called Optispan, following a 20-year period at the University of Washington studying the biological mechanisms of aging and potential interventions to improve healthspan. Among other topics, Matt talks to us about the Dog Aging Project, the Million-Molecule Project, and whether longevity science is at the beginning of the end or the end of the beginning.
Our second speaker is João Pedro de Magalhães, who is the Chair of Molecular Biogerontology at the University of Birmingham, where he leads the genomics of aging and rejuvenation lab. João Pedro talks to us about the motivation to study and manipulate the processes of aging, and his work to improve the low-temperature cryopreservation of human organs. You may be surprised at how many deaths are caused by the present lack of such cryopreservation methods.
Third is Steve Horvath, who has just retired from his position as a professor at the University of California, Los Angeles, and is now a Principal Investigator at Altos Labs in Cambridge. Steve is known for developing biomarkers of aging known as epigenetic clocks. He describes three generations of these clocks, implications of mammalian species with surprisingly long lifespans, and possible breakthroughs involving treatments such as senolytics, partial epigenetic reprogramming, and altering metabolic pathways.
The episode rounds off with an interview with Tom Lawry, Managing Director for Second Century Tech, who refers to himself as a recovering Microsoft executive. We discuss his recent bestselling book "Hacking Healthcare", what's actually happening with the application of Artificial Intelligence to healthcare (automation and augmentation), the pace of change regarding generative AI, and whether radiologists ought to fear losing their jobs any time soon to deep learning systems.
Selected follow-ups:
https://longevitysummitdublin.com/speakers/
https://optispanlife.com/
https://orabiomedical.com/
https://rejuvenomicslab.com/
https://oxfordcryotech.com/
https://horvath.genetics.ucla.edu/
https://altoslabs.com/team/principal-investigators-san-diego/steven-horvath/
https://www.tomlawry.com/
https://www.taylorfrancis.com/books/mono/10.4324/9781003286103/hacking-healthcare-tom-lawry
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
The Longevity Summit Dublin took place from 17th to 20th August. In between presentations, Calum and David caught up with a number of the speakers to ask about their experiences at the Summit. This episode features three of these short interviews.
First up is Aubrey de Grey, the President and Chief Science Officer of the LEV Foundation - a person deeply involved in the design and planning of the Summit. Next, we hear from Andrew Steele, who is an author and campaigner. The third interview features Liz Parrish, the CEO of BioViva Sciences and COO of Genorasis.
Selected follow-ups:
https://longevitysummitdublin.com/speakers/
https://www.levf.org/
https://maiabiotech.com/
https://andrewsteele.co.uk/
https://bioviva-science.com/
https://www.bestchoicemedicine.com/
https://www.genorasis.com/
Audio engineering assisted by Alexander Chace.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
The legal profession is rarely accused of being at the cutting edge of technological development. Lawyers may not still use quill pens, but they’re not exactly famous for their IT skills. Nevertheless, the profession has a number of characteristics which make it eminently suited to the deployment of advanced AI systems. Lawyers are deluged by data, and commercial law cases can be highly lucrative.
One man who knows more about this than most is our guest in this episode, Benjamin Alarie, a Professor at the University of Toronto Faculty of Law, and a successful entrepreneur.
In 2015, Ben co-founded Blue J, a Toronto-based company which uses machine learning to analyze large amounts of data to predict a court's likely verdict in legal cases. Blue J is used by the Department of Justice in Canada and by the Canada Revenue Agency.
Ben has just published “The Legal Singularity: How Artificial Intelligence Can Make Law Radically Better.” And here at the London Futurists Podcast, we do like a singularity.
Selected follow-ups:
https://www.legalsingularity.com/
https://www.bluej.com/
https://en.wikipedia.org/wiki/Benjamin_Alarie
Topics addressed in this episode include:
*) Much of lawyers' work is data-heavy and involves prediction, so it is amenable to radical improvement with AI
*) Other reasons why, in principle, the legal industry should be an early adopter of AI technology
*) Reasons why the world is sometimes slow to adopt an innovation that technology makes possible
*) Automating the processes of disclosure and discovery
*) Two implications of automation for commercial earnings by law firms
*) Selling "the machine service" rather than "the human time"
*) A different kind of prediction: predicting what is likely to happen inside the inscrutable minds of judges
*) Judging as a "full body exercise" - involving the gut, heart, and compassion
*) Two "mountains of information" that legal decisions can nevertheless be reliably predicted in many cases
*) AI algorithms can scale to far wider use than the limited time of expert human QCs (Queen's Counsel lawyers)
*) Even QCs can improve their performance if they take into account the advice of an AI system like Blue J
*) "Human plus machine beats human" - and can beat machine too
*) Once systems like Blue J are more widely used, the proportion of certain types of legal cases that come to trial may decrease; however, the proportion of other types of case coming to trial may increase
*) Entertainment industry workers are on strike in Hollywood, fearing disruption from AI technologies; why aren't lawyers on a similar strike?
*) What kinds of change in the legal profession would merit the term "singularity"?
*) A potential future in which law is a solved problem, with new laws being generated on demand whenever the need arises
*) The creation of laws that are fairer, more efficient, and better all round
*) Potential drawbacks in the run-up to the legal singularity
*) The 2013 movie "The Congress"
*) Estimates for when the Legal Singularity might occur - and for when people will realize that it is coming soon
Audio engineering by Alexander Chace.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Aidan McCullen. For ten years from 1998 to 2008, Aidan was a professional rugby player, delighting crowds in Ireland, England, and France. He made the very natural transition from that into sports commentating, but he also moved into digital media.
He started as an intern at Communicorp to learn digital media and marketing, and he learned about digital by doing it, living it and building it – as he puts it, by jumping off a cliff and building a plane on the way down. With typical humility, he says he was just a few Google searches ahead of everyone else.
With this grounding, Aidan has made himself a genuine expert on innovation. He is a keynote speaker, an executive coach, a board director, a lecturer, and the author of “Undisruptable: A Mindset of Permanent Reinvention for Individuals, Organisations and Life”.
Aidan may be known best as the enthusiastic and generous host of a podcast called the Innovation Show, which offers weekly interviews with leaders in their fields, including writers, academics, inventors, executives and mavericks. The central message of that podcast is that we all need to stay open to new ideas, and always keep learning.
Selected follow-ups:
https://theinnovationshow.io/about/
Topics addressed in this episode include:
*) Lessons from Aidan's time as a rugby player
*) The gift of discipline
*) Being ready to take advantage of unexpected good luck
*) Avoiding the "WASP" trap - wandering aimlessly without purpose
*) The "centaur" model - half human and half machine
*) Aidan's own use of generative AI - embellishing graphics, developing metaphors, suggesting questions for interviews
*) An alternative to a lemonade stand: creating an entire cartoon book using generative AI
*) Why audiences are leaning in, more than before
*) Various ways in which automation will impact the jobs market and the cost of services
*) Career advice for a nine year old
*) Encouraging students to use and understand generative AI tools
*) Tangible examples of Amara's Law
*) The special value of skills in communication, collaboration, 'cobot'ing, coordination, and looking after your health
*) Actions today in anticipation of being healthy at the age of 100
*) Deciding who to collaborate with
*) Developing a "stem cell" mindset - knowing our purpose, but keeping our options open
*) Bruce Lipton's research on epigenetics
*) What drives Aidan: helping people make better decisions and lead better lives
*) Building a community of people who are prepared to think differently
Audio engineering by Alexander Chace.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Martin O'Dea. As the CEO of Longevity Events Limited, Martin is the principal organiser of the annual Longevity Summit Dublin. In a past life, Martin lectured on business strategy at Dublin Business School. He has been keeping a close eye on the longevity space for more than ten years, and is well placed to speak about how the field is changing. Martin sits on a number of boards including the LEV Foundation, where, full disclosure, so does David.
This conversation is a chance to discover, ahead of time, what some of the highlights are likely to be at this year's Longevity Summit Dublin, which is taking place from 17th to 20th August.
Selected follow-ups:
https://longevitysummitdublin.com/
https://www.levf.org/projects/robust-mouse-rejuvenation-study-1
Topics addressed in this episode include:
*) Emma Teeling and the unexpected longevity of bats
*) Steve Austad and a wide range of long-lived animal species, as featured in his recent book "Methuselah's Zoo"
*) Michael Levin and the role of bioelectrical networks in the coordination of cells during embryogenesis and regeneration
*) Filling four days of talks - "not an issue at all"
*) A special focus on "the hard problems of aging"
*) The work of the LEV (Longevity Escape Velocity) Foundation and the vision of Aubrey de Grey
*) Various signs of growing public interest in intervening in the biology of aging
*) A look back at a conference in London in 2010
*) Two events in 2013: academic publications on "hallmarks of aging", and Google's creation of Calico
*) Multi-million dollar investments in longevity are increasingly becoming "just pocket change... par for the course"
*) Selective interest from media and documentary makers, coupled with some hesitancy
*) Playing tennis at the age of 110 with your great grandchildren - and then what?
*) The possibility of "a ChatGPT moment for longevity" that changes public opinion virtually overnight
*) Why the attainment of RMR (Robust Mouse Rejuvenation) would be a seminal event
*) The rationale for trying a variety of different life-extending interventions in combination - and why pharmaceutical companies and academics have both shied away from such an experiment
*) The four treatments trialled in phase 1 of RMR, with other treatments under consideration for later phases
*) A message to any billionaires listening
*) A message to any politicians listening: the longevity dividend, as expounded by Andrew Scott and Andrew Steele
*) Another potential seminal moment: the TAME trial (Targeting Aging with Metformin), as advocated by Nir Barzilai
*) Why researchers who wanted to work on aging had to work on Parkinson's instead
*) Looking ahead to 2033
*) The role of longevity summits in strengthening the longevity community and setting individuals on new trajectories in their lives
*) The benefits of maintaining a collaborative, open attitude, without the obstacles of NDAs (Non-Disclosure Agreements)
*) Options for progress accelerating, not just from exponential trends, but from intersections of insights from different fields
*) Beware naïve philosophical concerns about entropy and about the presumed wisdom of evolution
*) The sad example of campaigner Aaron Swartz
*) Important roles for decentralized science alongside existing commercial models
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our topic in this episode is investing in AI, so we're delighted to have as our guest John Cassidy, a Partner at Kindred Capital, a UK-based venture capital firm. Before he became an investment professional, John co-founded CCG.ai, a precision oncology company which exited to Dante Labs in 2019.
We discuss how the investment landscape is being transformed by the possibilities enabled by generative AI.
Selected follow-ups:
https://kindredcapital.vc/
https://cradle.bio/
https://scarletcomply.com/
https://www.five.ai/
Topics addressed in this episode include:
*) The argument for investing not just in "platforms" but also in "picks and shovels" - items within the orchestration or infrastructure layers of new solutions
*) Examples of recent investments by Kindred Capital
*) Comparisons between the surge of excitement around generative AI and previous surges of excitement around crypto and dot-com
*) Companies such as Amazon, Google, and Microsoft kept delivering value despite the crash of the dot-com bubble; will something similar apply with generative AI?
*) The example of how Nvidia captures significant value in the chip manufacturing industry
*) However, looking further back in history, many people who invested in the infrastructure of railways and canals lost lots of money
*) Reasons why generative AI might produce large amounts of real value more quickly than previous technologies
*) The example of Cradle Bio as enablers of protein engineering - and what might happen if Google upgrade their protein folding prediction software from AlphaFold 2 to AlphaFold 3
*) Despite the changes in technological possibilities, what most interests VCs is the calibre of a company's founding team
*) The search for individuals who have "creative destruction in their being" - people with a particular kind of irrational self-belief
*) The contrast between crystallized intelligence and fluid intelligence - and why both are needed
*) Advantages and disadvantages for investors being located in the UK vs. being located in the US
*) Why doesn't Europe have tech giants?
*) Complications with government regulation of tech industries
*) The example of Scarlet as a company helping to streamline the regulation of medical software that is frequently updated
*) Why government regulators need to engage with people in industry who are already immersed in considering safety and efficacy of products
*) Wherever they are located, companies need to plan ahead for their products reaching new jurisdictions
*) Ways in which AI is likely to impact industries in new ways in the near future
*) The particular need to improve the efficiency of the later stages of clinical trials of new medical treatments
Audio engineering by Alexander Chace.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Jeremy Kahn, a senior writer at Fortune Magazine, based in the UK. He writes about artificial intelligence and other disruptive technologies, from quantum computing to augmented reality. Previously he was at Bloomberg for eight years, again writing mostly about technology, and in moving to Fortune he was returning to his journalistic roots, as he started his career there in 1997, when he was based in New York.
David and Calum invited Jeremy onto the show because they think his weekly newsletter “Eye on AI” is one of the very best non-technical sources of news and views about the technology.
Jeremy has some distinctive views on the significance of transformers and the LLMs (Large Language Models) they enable.
Selected follow-ups:
https://www.fortune.com/newsletters/eye-on-ai
https://fortune.com/author/jeremy-kahn/
Topics addressed in this episode include:
*) Jeremy's route into professional journalism, focussing on technology
*) Assessing the way technology changes: exponential, linear with a steep incline, linear with leaps, or something else?
*) Some characteristics of LLMs that appear to "emerge" out of nowhere at larger scale can actually be seen developing linearly when attention is paid to the second or third prediction of the model
*) Some leaps in capability depend, not on underlying technological power, but on improvements in interfaces - as with ChatGPT
*) Some leaps in capability require, not just step-ups in technological power, but changes in how people organise their work around the new technology
*) The decades-long conversion of factories from steam-powered to electricity-powered
*) Reasons to anticipate significant boosts in productivity in many areas of the economy within just two years, with assistance from AI co-pilots and from "universal digital assistants"
*) Related forthcoming economic impacts: slow-downs in hiring, and depression of some wages (akin to how Uber drivers reduced how much yellow cab drivers could charge for fares)
*) The potential, not just for companies to learn to make good use of existing transformer technologies, but for forthcoming next generation transformers to cause larger disruptions
*) Models that predict, not "the next most likely word", but "the next most likely action to take to achieve a given goal"
*) Recent AI startups with a focus on using transformers for task automation include Adept and Inflection
*) Risks when LLMs lack sufficient common sense, and might take actions which a human assistant would know to check beforehand with their supervisor
*) Ways in which LLMs could acquire sufficient common sense
*) Ways in which observers can be misled about how much common sense is possessed by an LLM
*) Reasons why some companies have instructed their employees not to use consumer-facing versions of LLMs
*) The case, nevertheless, for companies to encourage bottom-up massive experimentation with LLMs by employees
*) The possibility for companies to have departments without any people in them
*) Implications of LLMs for geo-security and international relations
*) A possible agency, akin to the International Atomic Energy Agency, to monitor the training and use of next generation LLMs
*) Interest by the Pentagon (and also in China) for LLMs that can act as "battlefield advisors"
*) A call to action: people need to get their heads around transformers, and understand both the upsides and the risks
Audio engineering assisted by Alexander Chace.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
An intriguing possibility created by the exponential growth in the power of our technology is that within the lifetimes of people already born, death might become optional. Show co-hosts Calum and David are both excited about this idea, but our excitement is as nothing compared to the exuberant enthusiasm of our guest in this episode, José Cordeiro.
José was born in Venezuela, to parents who fled Franco’s dictatorship in Spain. He has closed the circle, by returning to Spain (via the USA) while another dictatorship grips Venezuela. His education and early career were thoroughly blue chip – MIT, Georgetown University, INSEAD, and then Schlumberger and Booz Allen.
Today, José is the most prominent transhumanist in Spain and Latin America, and indeed a leading light in transhumanist circles worldwide. He is a loyal follower of the ideas of Ray Kurzweil, and in 2018 he co-wrote "La Muerte de la Muerte", which has since been updated and is being published in English as “The Death of Death”. By way of full disclosure, his co-author was David.
Selected follow-ups:
https://thedeathofdeath.org/
https://cordeiro.org/
Forthcoming anti-aging conferences:
New York, 10-11 Aug: https://www.lifespan.io/ending-age-related-diseases-2023
Dublin, 17-20 Aug: https://longevitysummitdublin.com
Johannesburg, 23-24 Aug: https://conference.taffds.org
Copenhagen, 28 Aug - 1 Sept: https://agingpharma.org
Anaheim (CA), 7-10 Sept: https://raadfest.com/2023
Topics addressed in this episode include:
*) An engineering approach to improving health and longevity
*) Some cells and some organisms are already biologically immortal
*) How José met Marvin Minsky and Ray Kurzweil at MIT
*) Does death give purpose to life?
*) Why people have often resolved "to live with death"
*) Potential timescales for the attainment of longevity escape velocity for humans
*) Examples of changing lifespans for various animal species
*) The significance of the Nobel prize-winning research of Shinya Yamanaka
*) Limits of the capabilities of evolution
*) Different theories as to why aging happens: wear-and-tear vs. built-in obsolescence
*) Learning from animals that have extended lifespans - including anti-cancer mechanisms
*) Exponential progress: more funding, more people, more resources, more discoveries
*) Why longevity may soon become the largest industry in the history of humanity
*) The Longevity Dividend: "making money out of people not aging"
*) The role of politicians in accelerating the benefits of the Longevity Dividend
*) Which bold political leader will change history by being the first to declare aging as a curable disease?
*) The case for a European anti-aging agency
*) Things to say to people who insist that 80 to 85 years is a sufficiently long lifespan
*) The case for optimism, from Viktor Frankl
*) The prevalence of irrational attitudes toward curing aging vs. curing cancer
*) How the MIT Technology Review changed its tune about longevity pioneer Aubrey de Grey
*) The three phases in the reception of powerful new ideas
*) Aspects of our present lifestyles that will be viewed, in 2045, as being barbaric
*) The world's most altruistic cause
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Shamus Rae. Shamus is the co-founder of Engine B, a startup which aims to expedite the digitisation of the professional services industry (in particular the accounting and legal professions) and level the playing field, so that small companies can compete with larger ones. It is supported by the Institute of Chartered Accountants in England and Wales (the ICAEW) and the main audit firms.
Shamus was ideally placed to launch Engine B, having spent 13 years as a partner at the audit firm KPMG, where he was Head of Innovation and Digital Disruption. But his background is in technology, not accounting, which will become clear as we talk: he is commendably sleeves-rolled-up and hands-on with AI models. Back in the 1990s he founded and sold a technology-oriented outsourcing business, and then built a 17,000-strong outsourcing business for IBM in India from scratch.
Selected follow-ups:
https://engineb.com/
https://www.icaew.com/
Topics addressed in this episode include:
*) AI in many professional services contexts depends on the quality of the formats used for the data they orchestrate (e.g. financial records and legal contracts)
*) "Plumbing for accountants and lawyers"
*) Why companies within an industry generally shouldn't seek competitive advantage on the basis of the data formats they are using
*) Data lakes contrasted with data swamps
*) Automated data extraction can coexist with data security and data privacy
*) The significance of knowledge graphs
*) Will advanced AI make it harder for tomorrow’s partners to acquire the skills they need?
*) Examples of how AI-powered "co-pilots" augment the skills of junior members of a company
*) Should junior staff still be expected to work up to 18 hours a day, "ticking and bashing" or similar, if AI allows them to tackle tedious work much more quickly than before?
*) Will advanced AI destroy the billable-hours business model used by many professional services companies?
*) Alternative business models that can be adopted
*) Anticipating an economy of abundance, but with an unclear transitional path from today's economy
*) Reasons why consulting reports often downplay the likely impact of AI on jobs
*) Some ways in which Google might compete against the GPT models of OpenAI
*) Prospects for improved training of AI models using videos, using new forms of reinforcement learning from human feedback, and fuller use of knowledge graphs
*) Geoff Hinton's "Forward-Forward" algorithm as a potential replacement for backpropagation
*) Might a "third AI big bang" already have started, without most observers being aware of it?
*) The book by Mark Humphries, "The Spike: An Epic Journey Through the Brain in 2.1 Seconds"
*) Comparisons between the internal models used by GPT-3.5 and GPT-4
*) A comparison with the globalisation of the 1990s, with people denying that their own jobs will be part of the change they foresee
Audio engineering assisted by Alexander Chace.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
In this episode our guest is David Giron, the Director at what is arguably one of the world's most innovative educational initiatives, Codam College in Amsterdam. David was previously the head of studies at Codam's famous parent school 42 in Paris, and he has now spent 10 years putting into practice the somewhat revolutionary ideas of the 42 network. We ask David about what he has learned during these ten years, but we're especially interested in his views on how the world of education stands to be changed even further in the months and years ahead by generative AI.
Selected follow-ups:
https://www.codam.nl/en/team
https://42.fr/en/network-42/
Topics addressed in this episode include:
*) David's background at Epitech and 42 before joining Codam
*) The peer-to-peer framework at the heart of 42
*) Learning without teachers
*) Student assessment without teachers
*) Connection with the "competency-based learning" or "mastery learning" ideas of Sir Ken Robinson
*) Extending the 42 learning method beyond software engineering to other fields
*) Two ways of measuring whether the learning method is successful
*) Is it necessary for a school to fail some students from time to time?
*) The impact of Covid on the offline collaborative approach of Codam
*) ChatGPT is more than a tool; it is a "topic", on which people are inclined to take sides
*) Positive usage models for ChatGPT within education
*) Will ChatGPT make the occupation of software engineering a "job from the past"?
*) Software engineers will shift their skills from code-writing to prompt-writing
*) Why generative AI is likely to have a faster impact on work than the introduction of mechanisation
*) The adoption rate of generative AI by Codam students - and how it might change later this year
*) Code first or comment first?
*) The level of interest in Codam shown by other educational institutions
*) The resistance to change within traditional educational institutions
*) "The revolution is happening outside"
*) From "providing knowledge" to "creating a learning experience"
*) From large language models to full video systems that are individually tailored to help each person learn whatever they need in order to solve problems
*) Learning to code as a proxy for the more fundamental skill of learning to learn
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Alex Zhavoronkov is our first guest to make a repeat appearance, having first joined us in episode 12, last November. We are delighted to welcome him back, because he is doing some of the most important work on the planet, and he has some important news.
In 2014, Alex founded Insilico Medicine, a drug discovery company which uses artificial intelligence to identify novel targets and novel molecules for pharmaceutical companies. Insilico now has drugs designed with AI in human clinical trials, and it is one of a number of companies that are demonstrating that developing drugs with AI can cut the time and money involved in the process by as much as 90%.
Selected follow-ups:
https://insilico.com/
ARDD 2023: https://agingpharma.org/
Topics addressed in this episode include:
*) For the first time, an AI-generated molecule has entered phase 2 human clinical trials; it's a candidate treatment for IPF (idiopathic pulmonary fibrosis)
*) The sequence of investigation: first biology (target identification), then chemistry (molecule selection), then medical trials; all three steps can be addressed via AI
*) Pros and cons of going after existing well-known targets (proteins) for clinical intervention, versus novel targets
*) Pros and cons of checking existing molecules for desired properties, versus imagining (generating) novel molecules with these properties
*) Alex's experience with generative AI dates back to 2015 (initially with GANs - "generative adversarial networks")
*) The use of interacting ensembles of different AI systems - different generators, and different predictors, allocating rewards
*) The importance of "diversity" within biochemistry
*) A way in which Insilico follows "the Apple model"
*) What happens in Phase 2 human trials - and what Insilico did before reaching Phase 2
*) IPF compared with fibrosis in other parts of the body, and a connection with aging
*) Why probability of drug success is more important than raw computational speed or the cost of individual drug investigations
*) Recent changes in the AI-assisted drug development industry: an investment boom in the wake of Covid, spiced-up narratives devoid of underlying substance, failures, downsizing, consolidation, and improved understanding by investors and by big pharma
*) The AI apps created by Insilico can be accessed by companies or educational institutes
*) Insilico research into quantum computing: this might transform drug discovery in as little as two years
*) Real-world usage of quantum computers from IBM, Microsoft, and Google
*) Success at Insilico depended on executive management task reallocation
*) Can Longevity Escape Velocity be achieved purely by pharmacological interventions?
*) Insilico's Precious1GPT approach to multimodal measurements of biological aging, and its ability to suggest new candidate targets for age-associated diseases: "one clock to rule them all"
*) Reasons to mentally prepare to live to 120 or 150
*) Hazards posed to longevity research by geopolitical tensions
*) Reasons to attend ARDD in Copenhagen, 28 Aug to 1 Sept
*) From longevity bunkers to the longevity dividend
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
In this episode, co-hosts Calum and David continue their reflections on what they have both learned from their interactions with guests on this podcast over the last few months. Where have their ideas changed? And where are they still sticking to their guns?
The previous episode started to look at two of what Calum calls the 4 Cs of superintelligence: Cease and Control. In this episode, under the headings of Catastrophe and Consent, the discussion widens to look at the very bad outcomes, and also the very good outcomes, that could follow from the emergence of AI superintelligence.
Topics addressed in this episode include:
*) A 'zombie' argument that corporations are superintelligences - and what that suggests about the possibility of human control over a superintelligence
*) The existential threat of the entire human species being wiped out
*) The vulnerabilities of our shared infrastructure
*) An AGI may pursue goals even without being conscious or having agency
*) The risks of accidental and/or coincidental catastrophe
*) A single technical fault caused the failure of automated passport checking throughout the UK
*) The example of automated control of the Boeing 737 Max causing the deaths of everyone aboard two flights - in Indonesia and in Ethiopia
*) The example from 1983 of Stanislav Petrov using his human judgement regarding an automated alert of apparently incoming nuclear missiles
*) Reasons why an AGI might decide to eliminate humans
*) The serious risk of a growing public panic - and potential mishandling of it by self-interested partisan political leaders
*) Why "Consent" is a better name than "Celebration"
*) Reasons why an AGI might consent to help humanity flourish, solving all our existential problems
*) Two models for humans merging with an AI superintelligence - to seek "Control", and as a consequence of "Consent"
*) Enhanced human intelligence could play a role in avoiding a surge of panic
*) Reflections on "The Artilect War" by Hugo de Garis: cosmists vs. terrans
*) Reasons for supporting "team human" (or "team posthuman") as opposed to an AGI that might replace us
*) Reflections on "Diaspora" by Greg Egan: three overlapping branches of future humans
*) Is collaboration a self-evident virtue?
*) Will an AGI consider humans to be endlessly fascinating? Or regard our culture and history as shallow and uninspiring?
*) The inscrutability of AGI motivation
*) A reason to consider "Consent" as the most likely outcome
*) A fifth 'C' word, as discussed by Max Tegmark
*) A reason to keep working on a moonshot solution for "Control"
*) Practical steps to reduce the risk of public panic
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
The 4 Cs of Superintelligence is a framework that casts fresh light on the vexing question of possible outcomes of humanity's interactions with an emerging superintelligent AI. The 4 Cs are Cease, Control, Catastrophe, and Consent. In this episode, the show's co-hosts, Calum Chace and David Wood, debate the pros and cons of the first two of these Cs, and lay the groundwork for a follow-up discussion of the pros and cons of the remaining two.
Topics addressed in this episode include:
*) Reasons why superintelligence might never be created
*) Timelines for the arrival of superintelligence have been compressed
*) Does the unpredictability of superintelligence mean we shouldn't try to consider its arrival in advance?
*) Two "big bangs" have caused dramatic progress in AI; what might the next such breakthrough bring?
*) The flaws in the "Level zero futurist" position
*) Two analogies contrasted: overcrowding on Mars, and travelling to Mars without knowing what we'll breathe when we get there
*) A startling illustration of the dramatic power of exponential growth
*) A concern for short-term risk is by no means a reason to pay less attention to longer-term risks
*) Why the "Cease" option is looking more credible nowadays than it did a few years ago
*) Might "Cease" become a "Plan B" option?
*) Examples of political dictators who turned away from acquiring or using various highly risky weapons
*) Challenges facing a "Turing Police" who monitor for dangerous AI developments
*) If a superintelligence has agency (volition), it seems that "Control" is impossible
*) Ideas for designing superintelligence without agency or volition
*) Complications with emergent sub-goals (convergent instrumental goals)
*) A badly configured superintelligent coffee fetcher
*) Bad actors may add agency to a superintelligence, thinking it will boost its performance
*) The possibility of changing social incentives to reduce the dangers of people becoming bad actors
*) What's particularly hard about both "Cease" and "Control" is that they would need to remain in place forever
*) Human civilisations contain many diametrically opposed goals
*) Going beyond the statement of "Life, liberty, and the pursuit of happiness" to a starting point for aligning AI with human values?
*) A cliff-hanger ending
The survey "Key open questions about the transition to AGI" can be found at https://transpolitica.org/projects/key-open-questions-about-the-transition-to-agi/
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
The launch of GPT-4 on 14th March has provoked concerns and searching questions, and nowhere more so than in the education sector. Earlier this month, the share price of US edtech company Chegg halved when its CEO admitted that GPT technology was a threat to its business model.
Looking ahead, GPT models seem to put flesh on the bones of the idea that all students could one day have a personal tutor as effective as Aristotle, who was Alexander the Great’s personal tutor. When that happens, students should leave school and university far, far better educated than we were.
Donald Clark is the ideal person to discuss this with. He founded Epic Group in 1983, and made it the UK’s largest provider of bespoke online education services before selling it in 2005. He is now the CEO of an AI learning company called WildFire, and an investor in and Board member of several other education technology businesses. In 2020 he published a book called Artificial Intelligence for Learning.
Selected follow-ups:
https://donaldclarkplanb.blogspot.com/
https://www.ted.com/talks/sal_khan_how_ai_could_save_not_destroy_education
https://www.gatesnotes.com/The-Age-of-AI-Has-Begun
https://www.amazon.co.uk/Case-against-Education-System-Waste/dp/0691196451/
https://www.amazon.co.uk/Head-Hand-Heart-Intelligence-Over-Rewarded/dp/1982128461/
Topics addressed in this episode include:
*) "Education is a bit of a slow learner"
*) Why GPT-4 has unprecedented potential to transform education
*) The possibility of an online universal teacher
*) Traditional education sometimes fails to follow best pedagogical practice
*) Accelerating "time to competence" via personalised tuition
*) Calum's experience learning maths
*) How Khan Academy and Duolingo are partnering with GPT-4
*) The significance of the large range of languages covered by ChatGPT
*) The recent essay on "The Age of AI" by Bill Gates
*) Students learning social skills from each other
*) An imbalanced societal focus on educating and valuing "head" rather than "heart" or "hand"
*) "The case against education" by Bryan Caplan
*) Evidence of wide usage of ChatGPT by students of all ages
*) Three gaps between GPT-4 and AGI, and how they are being bridged by including GPT-4 in "ensembles"
*) GPT-4 has a better theory of physics than GPT-3.5
*) Encouraging a generative AI to learn about a worldview via its own sensory input, rather than directly feeding a worldview into it
*) Pros and cons of "human exceptionalism"
*) How GPT-4 is upending our ideas on the relation between language and intelligence
*) Generative AI, the "C skills", and the set of jobs left for humans to do
*) Custer's last stand?
*) Three camps regarding progress toward AGI
*) Investors' reactions to Italy banning ChatGPT (subsequently reversed)
*) Different views on GDPR and European legislation
*) Further thoughts on implications of GPT-4 for the education industry
*) Shocking statistics on declining enrolment numbers in US universities
*) Beyond exclusivity: "A tutorial system for everybody"?
*) A boon for Senegal and other countries in the global south?
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
The European Commission and Parliament were busily debating the Artificial Intelligence Act when GPT-4 launched on 14 March. As people realised that GPT technology was a game-changer, they called for the Act to be reconsidered.
Famously, the EU contains no tech giants, so cutting edge AI is mostly developed in the US and China. But the EU is more than happy to act as the world’s most pro-active regulator of digital technologies, including AI. The 2016 General Data Protection Regulation (or GDPR) seeks to regulate data protection and privacy, and its impacts remain controversial today.
The AI Act was proposed in 2021. It does not confer rights on individuals, but instead regulates the providers of artificial intelligence systems. It is a risk-based approach.
John Higgins joins us in this episode to discuss the AI Act. John is the Chair of the Global Digital Foundation, a think tank, and last year he was president of BCS (British Computer Society), the professional body for the IT industry. He has had a long and distinguished career helping to shape digital policy in the UK and the EU.
Follow-up reading:
https://www.globaldigitalfoundation.org/
https://artificialintelligenceact.eu/
Topics addressed in this episode include:
*) How different is generative AI from the productivity tools that have come before?
*) Two approaches to regulation compared: a "Franco-German" approach and an "Anglo-American" approach
*) The precautionary principle, for when a regulatory framework needs to be established in order to provide market confidence
*) The EU's preference for regulating applications rather than regulating technology
*) The types of application that matter most - when there is an impact on human rights and/or safety
*) Regulations in the Act compared to the principles that good developers will in any case be following
*) Problems with lack of information about the data sets used to train LLMs (Large Language Models)
*) Enabling the flow, between the different "providers" within the AI value chain, of information about compliance
*) Two potential alternatives to how the EU aims to regulate AI
*) How an Act passes through EU legislation
*) Conflicting assessments of the GDPR: a sledgehammer to crack a nut?
*) Is it conceivable that LLMs will be banned in Europe?
*) Why are there no tech giants in Europe? Does it matter?
*) Other metrics for measuring the success of AI within Europe
*) Strengths and weaknesses of the EU single market
*) Reasons why the BCS opposed the moratorium proposed by the FLI: impracticality, asymmetry, benefits held back
*) Some counterarguments in favour of the FLI position
*) Projects undertaken by the Global Digital Foundation
*) The role of AI in addressing (as well as exacerbating) hate speech
*) Growing concerns over populism, polarisation, and post-truth
*) The need for improved transparency and improved understanding
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Technological changes have economic impact. It's not just that technology allows more goods and services to be produced more efficiently and at greater scale. It's also that these changes disrupt previous assumptions about the conduct of human lives, human relationships, and the methods to save money to buy goods and services. A society in which people expect to die around the age of 100, or even older, needs to make different plans than a society in which people expect to die in their 70s.
Some politicians, in unguarded moments, have even occasionally expressed a desire for retired people to "hurry up and die", on account of the ballooning costs of pension payments and healthcare costs for the elderly. These politicians worry about the negative consequences of longer lives. In their viewpoint, longer lives would be bad for the economy.
But not everyone thinks that way. Indeed, Andrew J Scott, a distinguished professor of economics at London Business School, has studied a variety of future scenarios for the economic consequences of longer lives. He is our guest in this episode.
In addition to his role at the London Business School, Andrew is a Research Fellow at the Centre for Economic Policy Research and a consulting scholar at Stanford University’s Center on Longevity.
His research has been widely published in leading journals in economics and health. His book, "The 100-Year Life", has been published in 15 languages, is an Amazon bestseller, and was runner-up in both the FT/McKinsey and Japanese Business Book of the Year Awards.
Andrew has been an advisor on policy to a range of governments. He is currently on the advisory board of the UK’s Office for Budget Responsibility, the Cabinet Office Honours Committee (Science and Technology), co-founder of The Longevity Forum, a member of the National Academy of Medicine’s International Commission on Health Longevity, and the WEF council on Healthy Ageing and Longevity.
Follow-up reading:
https://profandrewjscott.com/
https://www.nature.com/articles/s43587-021-00080-0
Topics addressed in this episode include:
*) Why Andrew wrote the book "The 100-Year Life" (co-authored with Lynda Gratton)
*) Shortcomings of the conventional narrative of "the aging society"
*) The profound significance of aging being malleable
*) Joint research with David Sinclair (Harvard) and Martin Ellison (Oxford): Economic modelling of the future of healthspan and lifespan
*) Four different scenarios: Struldbruggs, Dorian Gray, Peter Pan, and Wolverine
*) The multi-trillion dollar economic value of everyone in the USA gaining one additional year of life in good health
*) The first and second longevity revolutions
*) The virtuous circle around aging research
*) Options for lives that are significantly longer even than 100 years
*) The ill-preparedness of our social structures for extensions in longevity - and, especially, for the attainment of longevity escape velocity
*) The possibility of rapid changes in society's expectations
*) The three-dimensional longevity dividend
*) Developments in Singapore and the UAE
*) Two important political initiatives: supporting the return to the workforce of people who are aged over 50, and paying greater attention to national statistics on expected healthspan
*) Themes from Andrew's forthcoming new book "Evergreen"
*) Why 57 isn't the new 40: it's the new 57
*) Making a friend of your future self
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
One of the questions audiences frequently used to ask futurists was, which careers are most likely to be future-proof? However, that question has changed in recent years. It's now more widely understood that every career is subject to disruption by technological and social trends. No occupation is immune to change. So the question has switched, away from possible future-proof careers, to the skills that are most likely to be useful in these fast-changing circumstances. For example, should everyone be learning to code, or deepening their knowledge of STEM - that is, Science, Technology, Engineering, and Maths? Or should there be more focus on so-called human skills or soft skills?
Who better to answer that question than our guest in this episode, Mike Howells? Mike is the President of the Workforce Skills Division at Pearson, the leading learning company.
The perennial debate about when and how advanced AI will cause widespread disruption in education has been given extra impetus by the launch of ChatGPT last November, and GPT-4 in March. Pearson, a venerable British company which has gone through various incarnations, is one of the companies at the sharp end of this debate about the changing role of technology in education. The share prices of several such companies suffered a temporary setback recently, due to a perception that GPT technology would replace many of their services. However, Pearson and its peers have rebutted these claims, and their stock has largely recovered.
Indeed, with what could be viewed as considerable prescience, Pearson carried out a major piece of research before ChatGPT was launched, to identify which skills employers are prioritising for their new hires - new employees who will be in their stride in 2026 - three years from now.
Follow-up reading:
https://www.pearson.com/
https://plc.pearson.com/en-GB/insights/pearson-skills-outlook-powerskills
Topics addressed in this episode include:
*) Some lessons from Mike's own career trajectory
*) How Pearson used AI in their survey of key workforce skills
*) The growing importance - and growing value - of human skills
*) The top 5 "power skills" that employers are seeking today
*) The top 5 "power skills" that are projected to be most in-demand by 2026 - and which are in need of greatest improvement and investment
*) Given that there are no university courses in these skill areas, how can people gain proficiency in them?
*) Three ways of inferring evidence of someone's proficiency in these skill areas
*) How the threat of automation has moved from blue collar jobs to white collar jobs
*) People are used to taking data-driven decisions in many areas of their lives - e.g. which restaurants to visit or which holidays to book - but the data about the effect of various educational courses is surprisingly thin
*) The increasing need for data-driven retraining
*) Ways in which the retraining experience can be improved by AI and VR/AR/XR
*) The attraction of digital assistants that can provide personalised tuition, especially as costs drop
*) School-age children often already use their skills with existing technology to augment and personalise their learning
*) Complications with privacy, security, consent, and measuring efficacy
*) "It's not about what you've done; it's about what you can do"
*) A closer look at "personal learning and mastery" and "cultural and social intelligence"
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
The last few episodes of our podcast have explored what GPT (generative pre-trained transformer) technology is and how it works, and also the call for a pause in the development of advanced AI. In this latest episode, Ted Lappas, a data scientist and academic, helps us to take a pragmatic turn - to understand what GPT technology can do for each of us individually.
Ted is Assistant Professor at Athens University of Economics and Business, and he also works at Satalia, which was London's largest independent AI consultancy before it was acquired last year by the media giant WPP.
Follow-up reading:
https://satalia.com/
https://www.linkedin.com/in/theodoros-lappas-82771451/
Topics addressed in this episode include:
*) The "GPT paradox": If GPT-4 is so good, why aren't more people using it to boost their effectiveness in their workplace?
*) Concerns in some companies that data entered into GPTs will leak out and assist their competitors
*) Uses of GPTs to create or manipulate text, and to help developers to understand new code
*) GPTs as "brains" that lack the "limbs" that would make them truly useful
*) GPT capabilities are being augmented via plug-ins that access sites like Expedia, Instacart, or Zapier
*) Agent-based systems such as AutoGPT and AgentGPT that utilise GPTs to break down tasks into steps and then carry out these steps
*) Comparison with the boost given to Apple iPhone adoption by the launch, one year later, of the iOS App Store
*) Ted's use of GPT-4 in his role as a meta-reviewer for papers submitted to an academic conference - with Ted becoming an orchestrator more than a writer
*) The learning curve is gentler for vanilla GPTs than for agent systems that use GPTs
*) GPTs are currently more suited to low-end writing than to high-end writing, but are expected to move up the value chain
*) Ways to configure a GPT so that it can reproduce the quality level or textual style of a specific writer
*) Calum's use of GPT-4 in his side-project as a travel writer
*) Ways to stop GPTs inventing false anecdotes
*) Some users of GPTs will lose all faith in them due to just a single hallucination
*) Teaching GPTs to say "I don't know" or to state their level of confidence about claims they make
*) Creating an embedding space search engine (a minimal sketch follows this list)
*) The case for gaining a working knowledge of the programming language Python
*) The growth of technology-explainer videos on TikTok and Instagram
*) "Explain this to me like I'm ten years old"
*) The way to learn more about GPTs is to use them in a meaningful project
*) Learning about GPTs such as DALL-E or Midjourney that generate not text but images
*) Uses of GPTs for inpainting - blending new features into an image
*) The advantages of open source tools, such as those available on Hugging Face
*) Images will be largely solved in 2023; 2024 will be the year for video
*) An appeal to "dive in, the sooner the better"
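One topic above, creating an embedding space search engine, lends itself to a short sketch. The code below is a minimal outline under stated assumptions, not anything shown in the episode: the embed() function is a hypothetical placeholder (in practice you would call an embedding model or API of your choice), and documents are ranked by cosine similarity to the query.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical placeholder: in a real system, call an embedding model here.
    # A deterministic pseudo-random vector stands in for a genuine embedding.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

documents = [
    "GPT models can summarise long reports.",
    "Agent frameworks break tasks into smaller steps.",
    "Diffusion models generate images from text prompts.",
]
doc_vectors = [embed(doc) for doc in documents]

def search(query: str, top_k: int = 2):
    # Embed the query, score every document, return the best matches.
    q = embed(query)
    scored = [(cosine_similarity(q, v), doc) for v, doc in zip(doc_vectors, documents)]
    return sorted(scored, reverse=True)[:top_k]

print(search("How do I make pictures with AI?"))
```

With a real embedding model plugged in, semantically similar documents end up close together in the vector space, so nearest-neighbour search doubles as a simple semantic search engine.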
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
On March 14th, OpenAI launched GPT-4, which took the world by surprise and storm. Almost everybody, including people within the AI community, was stunned by its capabilities. A week later, the Future of Life Institute (FLI) published an open letter calling on the world’s AI labs to pause the development of larger versions of GPT (generative pre-trained transformer) models until their safety can be ensured.
Recent episodes of this podcast have presented arguments for and against this call for a moratorium. Jaan Tallinn, one of the co-founders of FLI, made the case in favour. Pedro Domingos, an eminent AI researcher, and Kenn Cukier, a senior editor at The Economist, made variants of the case against. In this episode, co-hosts Calum Chace and David Wood highlight some key implications and give their own opinions. Expect some friendly disagreements along the way.
Follow-up reading:
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/
Topics addressed in this episode include:
*) Definitions of Artificial General Intelligence (AGI)
*) Many analysts knowledgeable about AI have recently brought forward their estimates of when AGI will become a reality
*) The case that AGI poses an existential risk to humanity
*) The continued survival of the second smartest species on the planet depends entirely on the actions of the actual smartest species
*) One species can cause another to become extinct, without that outcome being intended or planned
*) Four different ways in which advanced AI could have terrible consequences for humanity: bugs in the implementation; the implementation being hacked (or jail broken); bugs in the design; and the design being hacked by emergent new motivations
*) Near-future AIs that still fall short of being AGI could have effects which, whilst not themselves existential, would plunge society into such a state of dysfunction and distraction that we become unable to prevent a subsequent AGI-induced disaster
*) Calum's "4 C's" categorisation of possible outcomes regarding AGI existential risks: Cease, Control, Catastrophe, and Consent
*) 'Consent' means a superintelligence decides that we humans are fun, enjoyable, interesting, worthwhile, or simply unobjectionable, and consents to let us carry on as we are, or to help us, or to allow us to merge with it
*) The 'Control' option arguably splits into "control while AI capabilities continue to proceed at full speed" and "control with the help of a temporary pause in the development of AI capabilities"
*) Growing public support for stopping AI development - driven by a sense of outrage that the future of humanity is seemingly being decided by a small number of AI lab executives
*) A comparison with how the 1983 film "The Day After" triggered a dramatic change in public opinion regarding the nuclear weapons arms race
*) How much practical value could there be in a six-month pause? Or will the six months be extended into an indefinite ban?
*) Areas where there could be at least some progress: methods to validate the output of giant AI models, and choices of initial configurations that would make the 'Consent' scenario more likely
*) Designs that might avoid the emergence of agency (convergent instrumental goals) within AI models as they acquire more intelligence
*) Why 'Consent' might be the most likely outcome
*) The longer a ban remains in place, the larger the risks of bad actors building AGIs
*) Contemplating how to secure the best upsides - an "AI summer" - from advanced AIs
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
The race to create advanced AI is becoming a suicide race.
That's part of the thinking behind the open letter from the Future of Life Institute which "calls on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4".
In this episode, our guest, Jaan Tallinn, explains why he sees this pause as a particularly important initiative.
In the 1990s and 2000s, Jaan led much of the software engineering for the file-sharing application Kazaa and the online communications tool Skype. He is also known as one of the earliest investors in DeepMind, before it was acquired by Google.
More recently, Jaan has been a prominent advocate for study of existential risks, including the risks from artificial superintelligence. He helped set up the Centre for the Study of Existential Risk (CSER) in 2012 and the Future of Life Institute (FLI) in 2014.
Follow-up reading:
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
https://www.cser.ac.uk/
https://en.wikipedia.org/wiki/Jaan_Tallinn
Topics addressed in this episode include:
*) The differences between CSER and FLI
*) Do the probabilities for the occurrence of different existential risks vary by orders of magnitude?
*) The principle that "arguments screen authority"
*) The possibility that GPT-6 will be built, not by humans, but by GPT-5
*) Growing public concern, all over the world, that the fate of all humanity is, in effect, being decided by the actions of just a small number of people in AI labs
*) Two reasons why FLI recently changed its approach to AI risk
*) The AI safety conference in 2015 in Puerto Rico was initially viewed as a massive success, but it has had little lasting impact
*) Uncertainty about a potential cataclysmic event doesn't entitle people to conclude it won't happen any time soon
*) The argument that LLMs (Large Language Models) are an "off ramp" rather than being on the road to AGI
*) Why the duration of 6 months was selected for the proposed pause
*) The "What about China?" objection to the pause
*) Potential concrete steps that could take place during the pause
*) The FLI document "Policymaking in the pause"
*) The article by Luke Muehlhauser of Open Philanthropy, "12 tentative ideas for US AI policy"
*) The "summon and tame" way of thinking about the creation of LLMs - and the risk that minds summoned in this way won't be able to be tamed
*) Scenarios in which the pause might be ignored by various entities, such as authoritarian regimes, organised crime, rogue corporations, and extraordinary individuals such as Elon Musk and John Carmack
*) A meta-principle for deciding which types of AI research should be paused
*) 100 million dollar projects become even harder when they are illegal
*) The case for requiring the pre-registration of large-scale mind-summoning experiments
*) A possible 10^25 limit on the number of FLOPs (floating point operations) that could be spent training any single AI model (a rough sense of that scale is sketched after this list)
*) The reactions by AI lab leaders to the widescale public response to GPT-4 and to the pause letter
*) Even Sundar Pichai, CEO of Google/Alphabet, has called for government intervention regarding AI
*) The hardware overhang complication with the pause
*) Not letting "the perfect" be "the enemy of the good"
*) Elon Musk's involvement with FLI and with the pause letter
*) "Humanity now has cancer"
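To give a feel for the scale of a 10^25 FLOP cap, as mentioned above, here is a back-of-envelope sketch using the widely quoted approximation that training compute is roughly 6 FLOPs per parameter per training token. The model sizes below are illustrative assumptions, not figures for any particular system discussed in the episode.

```python
def training_flops(params: float, tokens: float) -> float:
    # Rule of thumb: total training compute ~= 6 * parameters * tokens.
    return 6 * params * tokens

THRESHOLD = 1e25  # the kind of cap discussed in the episode

# Illustrative (assumed) training runs, not actual figures for named models.
examples = {
    "7B parameters, 1T tokens": training_flops(7e9, 1e12),
    "70B parameters, 1.4T tokens": training_flops(70e9, 1.4e12),
    "1T parameters, 10T tokens": training_flops(1e12, 10e12),
}

for name, flops in examples.items():
    status = "under" if flops < THRESHOLD else "over"
    print(f"{name}: {flops:.1e} FLOPs ({status} the 1e25 threshold)")
```

On these assumed numbers, training runs of today's typical scale sit below 10^25 FLOPs, while a jump to trillion-parameter models trained on an order of magnitude more data would cross the threshold.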
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Despite the impressive recent progress in AI capabilities, there are reasons why AI may be incapable of possessing a full "general intelligence". And although AI will continue to transform the workplace, some important jobs will remain outside the reach of AI. In other words, the Economic Singularity may not happen, and AGI may be impossible.
These are views defended by our guest in this episode, Kenneth Cukier, the Deputy Executive Editor of The Economist newspaper.
For a decade, Kenn was the host of its weekly tech podcast, Babbage. He is co-author of the 2013 book “Big Data", a New York Times best-seller that has been translated into over 20 languages. He is a regular commentator in the media, and a popular keynote speaker, from TED to the World Economic Forum.
Kenn recently stepped down as a board director of Chatham House and a fellow at Oxford's Saïd Business School. He is a member of the Council on Foreign Relations. His latest book is "Framers", on the power of mental models and the limits of AI.
Follow-up reading:
http://www.cukier.com/
https://mediadirectory.economist.com/people/kenneth-cukier/
https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/
Kurzweil's version of the Turing Test: https://longbets.org/1/
Topics addressed in this episode include:
*) Changing attitudes at The Economist about how to report on the prospects for AI
*) The dual roles of scepticism regarding claims made for technology
*) 'Calum's rule' about technology forecasts that omit timing
*) Options for magazine coverage of possible developments more than 10 years into the future
*) Some leaders within AI research, including Sam Altman of OpenAI, think AGI could happen within a decade
*) Metaculus community aggregate forecasts for the arrival of different forms of AGI
*) A theme for 2023: the increased 'emergence' of unexpected new capabilities within AI large language models - especially when these models are combined with other AI functionality
*) Different views on the usefulness of the Turing Test - a test of human idiocy rather than machine intelligence?
*) The benchmark of "human-level general intelligence" may become as anachronistic as the benchmark of "horsepower" for rockets
*) The drawbacks of viewing the world through a left-brained hyper-rational "scientistic" perspective
*) Two ways the ancient Greeks said we could find truth: logos and mythos
*) People in 2023 finding "mythical, spiritual significance" in their ChatGPT conversations
*) Appropriate and inappropriate applause for what GPTs can do
*) Another horse analogy: could steam engines that lack horse-like legs really replace horses?
*) The Ship of Theseus argument that consciousness could be transferred from biology to silicon
*) The "life force" and its apparently magical, spiritual aspects
*) The human superpower to imaginatively reframe mental models
*) People previously thought humans had a unique superpower to create soul-moving music, but a musical version of the Turing Test changed minds
*) Different levels of creativity: not just playing games well but inventing new games
*) How many people will have paid jobs in the future?
*) Two final arguments why key human abilities will remain unique
*) The "pragmatic turn" in AI: duplicating without understanding
*) The special value, not of information, but of the absence of information (emptiness, kenosis, the "cloud of unknowing")
*) The temptations of mimicry and idolatry
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Should the pace of research into advanced artificial intelligence be slowed down, or perhaps even paused completely?
Your answer to that question probably depends on your answers to a number of other questions. Is advanced artificial intelligence reaching the point where it could result in catastrophic damage? Is a slow down desirable, given that AI can also lead to lots of very positive outcomes, including tools to guard against the worst excesses of other applications of AI? And even if a slow down is desirable, is it practical?
Our guest in this episode is Professor Pedro Domingos of the University of Washington. He is perhaps best known for his book "The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World".
That book takes an approach to the future of AI that is significantly different from what you can read in many other books. It describes five different "tribes" of AI researchers, each with their own paradigms, and it suggests that true progress towards human-level general intelligence will depend on a unification of these different approaches. In other words, we won't reach AGI just by scaling up deep learning approaches, or even by adding in features from logical reasoning.
Follow-up reading:
https://homes.cs.washington.edu/~pedrod/
https://www.amazon.co.uk/Master-Algorithm-Ultimate-Learning-Machine/dp/0241004543
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
Topics addressed in this episode include:
*) The five tribes of AI research - why there's a lot more to AI than deep learning
*) Why unifying these five tribes may not be sufficient to reach human-level general intelligence
*) The task of understanding an entire concept (e.g. 'horse') from just seeing a single example
*) A wide spread of estimates of the timescale to reach AGI
*) Different views as to the true risks from advanced AI
*) The case that risks arise from AI incompetence rather than from increased AI competence
*) A different risk: that bad actors will gain dangerously more power from access to increasingly competent AI
*) The case for using AI to prevent misuse of AI
*) Yet another risk: that an AI trained against one objective function will nevertheless adopt goals diverging from that objective
*) How AIs that operate beyond our understanding could still remain under human control
*) How fully can evolution be trusted to produce outputs in line with a specified objective function?
*) The example of humans taming wolves into dogs that pose no threat to us
*) The counterexample of humans pursuing goals contrary to our in-built genetic drives
*) Complications with multiple levels of selection pressures, e.g. genes and memes working at cross purposes
*) The “genie problem” (or “King Midas problem”) of choosing an objective function that is apparently attractive but actually dangerous
*) Assessing the motivations of people who have signed the FLI (Future of Life Institute) letter advocating a pause on the development of larger AI language models
*) Pros and cons of escalating a sense of urgency
*) The two key questions of existential risk from AI: how much risk is acceptable, and what might that level of risk become in the near future?
*) The need for a more rational discussion of the issues raised by increasingly competent AIs
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
2023 is still young, but there's already a change in the attitudes of many business people regarding the future. Previously, businesses expressed occasional interest in possible disruptive scenarios, but their attention often quickly turned back to the apparently more pressing tasks of business-as-usual. But recent news of changes in AI capabilities, along with possible social transformations due to pandemics, geopolitics, and industrial unrest, is leading more and more business people to wonder: How can they become more effective in anticipating and managing potential significant changes in their business landscape?
In this context, the new book by our guest in this episode, Nikolas Badminton, is particularly timely. It's called "Facing our Futures: How foresight, futures design and strategy creates prosperity and growth".
Over the last few years, Nikolas has worked with over 300 organizations including Google, Microsoft, NASA, the United Nations, American Express, and Rolls Royce, and he advised Robert Downey Jr.’s team for the ‘Age of A.I.’ documentary series.
Selected follow-up reading:
https://nikolasbadminton.com/
https://futurist.com/
https://www.bloomsbury.com/uk/facing-our-futures-9781399400237/
Topics in this conversation include:
*) A personal journey to becoming a futurist - with some "hot water" along the way
*) The "Dark Futures" project: "what might happen if we take the wrong path forward"
*) The dangers of ignoring how bad things might become
*) Are we heading toward "the end times"?
*) Being in a constant state of collapse
*) Human resilience, and how to strengthen it
*) Futurists as "hope engineers"
*) Pros and cons of the "anti-growth" or "de-growth" initiative
*) The useful positive influence of "design fiction" (including futures that are "entirely imaginary")
*) The risks of a "pay to play" abundance future
*) The benefits of open medicine and open science
*) Examples of decisions taken by corporations after futures exercises
*) Tips for people interested in a career as a futurist
*) Pros and cons of "pop futurists"
*) The single biggest danger in our future?
*) Evidence from Rene Rohrbeck and Menes Etingue Kum that companies that apply futures thinking significantly out-perform their competitors in profitability and growth
*) The idea of an "apocalypse windfall" from climate change
*) Some key messages from the book "Facing our Futures": recommended mindset changes
*) Having the honesty and courage to face up to our mistakes
*) What if... former UK Prime Minister David Cameron had conducted a futures study before embarking on the Brexit project?
*) A multi-generational outlook on the future - learning from the Iroquois
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
In the last few weeks, the pace of change in AI has been faster than ever before. The changes aren't just announcements of future capabilities - announcements that could have been viewed, perhaps, as hype. The changes are new versions of AI systems that are available for users around the world to experiment with, directly, here and now. These systems are being released by multiple different companies, and also by open-source collaborations. And users of these systems are frequently expressing surprise: the systems are by no means perfect, but they regularly out-perform previous expectations, sometimes in astonishing ways.
In this episode, Calum Chace and David Wood, the co-hosts of this podcast series, discuss the wider implications of these new AI systems. David asks Calum if he has changed any of his ideas about what he has called "the two singularities", namely the Economic Singularity and the Technological Singularity, as covered in a number of books he has written.
Calum has been a full-time writer and speaker on the subject of AI since 2012. Earlier in his life, he studied philosophy, politics, and economics at Oxford University, and trained as a journalist at the BBC. He wrote a column in the Financial Times and nowadays is a regular contributor to Forbes magazine. In between, he held a number of roles in business, including leading a media practice at KPMG. In the last few days, he has been taking a close look at GPT-4.
Selected follow-up reading:
https://calumchace.com/the-economic-singularity/
https://calumchace.com/surviving-ai-synopsis/
Topics in this conversation include:
*) Is the media excitement about GPT-4 and its predecessor ChatGPT overblown, or are these systems signs of truly important disruptions?
*) How do these new AI systems compare with earlier AIs?
*) The two "big bangs" in AI history
*) How transformers work (see the attention sketch after this list)
*) The difference between self-supervised learning and supervised learning
*) The significance of OpenAI enabling general public access to ChatGPT
*) Market competition between Microsoft Bing and Google Search
*) Unwholesome replies by Microsoft's Sydney and Google's Bard - and the intended role of RLHF (Reinforcement Learning from Human Feedback)
*) How basic reasoning seems to emerge (unexpectedly) from pattern recognition at sufficient scale
*) Examples of how the jobs of knowledge workers are being changed by GPT-4
*) What will happen to departments where each human knowledge worker has a tenfold productivity boost?
*) From the job churns of the past to the Great Churn of the near future
*) The forthcoming wave of automation is not only more general than past waves, but will also proceed at a much faster pace
*) Improvements in the writing AI produces, such as book chapters
*) Revisions of timelines for the Economic and Technological Singularity?
*) It now seems that human intelligence is less hard to replicate than was previously thought
*) The Technological Singularity might arrive before an Economic Singularity
*) The liberating vision of people no longer needing to be wage slaves, and the threat of almost everyone living in poverty
*) The insufficiency of UBI (Universal Basic Income) unless an economy of abundance is achieved (bringing the costs of goods and services down toward zero)
*) Is the creation of AI now out of control, with a rush to release new versions?
*) The infeasibility of the idea of AGI relinquishment
*) OpenAI's recent actions assessed
*) Expectations for new AI releases in the remainder of 2023: accelerating pace
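As a companion to the "how transformers work" item above, here is a minimal, self-contained sketch of scaled dot-product attention, the core operation inside GPT-style models. It is a toy illustration of the general mechanism under simplifying assumptions (random weights, a single head, no masking), not code from any system discussed in the episode.

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Each token's query is compared against every token's key; the resulting
    # attention weights decide how much of each value vector to mix in.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # shape: (tokens, tokens)
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # shape: (tokens, d_v)

# Toy example: 4 tokens, each represented by an 8-dimensional vector.
rng = np.random.default_rng(0)
tokens, d_model = 4, 8
X = rng.standard_normal((tokens, d_model))

# In a real transformer these projection matrices are learned during training;
# here they are random, purely to show the shapes and the data flow.
W_q, W_k, W_v = (rng.standard_normal((d_model, d_model)) for _ in range(3))
output = scaled_dot_product_attention(X @ W_q, X @ W_k, X @ W_v)
print(output.shape)  # (4, 8): one context-mixed vector per token
```

A full transformer stacks many such attention layers, with multiple heads, causal masking, and feed-forward blocks, but the pattern-recognition-at-scale that Calum and David discuss rests on this weighted mixing of token representations.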
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Ben Goertzel is a cognitive scientist and artificial intelligence researcher. He is CEO and founder of SingularityNET, leader of the OpenCog Foundation, and chair of Humanity+.
Ben is perhaps best-known for popularising the term 'artificial general intelligence', or AGI, a machine with all the cognitive abilities of an adult human. He thinks that the way to create this machine is to start with a baby-like AI, and raise it, as we raise children. We would do this either in VR, or in robot form. Hence he works with the robot-builder David Hanson to create robots like Sophia and Grace.
Ben is a unique and engaging speaker, and gives frequent keynotes all round the world. Both his appearance and his views have been described as counter-cultural. In this episode, we hear about Ben's vision for the creation of benevolent decentralized AGI.
Selected follow-up reading:
https://singularitynet.io/
http://goertzel.org/
http://multiverseaccordingtoben.blogspot.com/
Topics in this conversation include:
*) Occasional hazards of humans and robots working together
*) "The future is already here, it's just not wired together properly"
*) Ben's definition of AGI
*) Ways in which humans lack "general intelligence"
*) Changes in society expected when AI reaches "human level"
*) Is there "one key thing" which will enable the creation of AGI?
*) Ben's OpenCog Hyperon project combines three approaches: neural pattern recognition and synthesis, rigorous symbolic reasoning, and evolutionary creativity
*) Parallel combinations versus sequential combinations of AI capabilities: why the former is harder, but more likely to create AGI
*) Three methods to improve the scalability of AI algorithms: mathematical innovations, efficient concurrent processing, and an AGI hardware board
*) "We can reach the Singularity in ten years if we really, really try"
*) ... but humanity has, so far, not "really tried" to apply sufficient resources to creating AGI
*) Sam Altman: "If you talk about the upsides of what AGI could do for us, you sound like a crazy person"
*) "The benefits of AGI will challenge our concept of 'what is a benefit'"
*) Options for human life trajectories, if AGIs are well disposed towards humans
*) We will be faced with the questions of "what do we want" and "what are our values"
*) The burning issue is "what is the transition phase" to get to AGI
*) Ben's disagreements with Nick Bostrom and Eliezer Yudkowsky
*) Assessment of the approach taken by OpenAI to create AGI
*) Different degrees of faith in big tech companies as a venue for hosting the breakthroughs in creating AGI
*) Should OpenAI be renamed as "ClosedAI"?
*) The SingularityNET initiative to create a decentralized, democratically controlled infrastructure for AGI
*) The development of AGI should be "more like Linux or the Internet than Windows or the mobile phone ecosystem"
*) Limitations of neural net systems in self-understanding
*) Faith in big tech and capitalism vs. faith in humanity as a whole vs. faith in reward maximization as a paradigm for intelligence
*) Open-ended intelligence vs. intelligence created by reward maximization
*) A concern regarding Effective Altruism
*) There's more to intelligence than pursuit of an overarching goal
*) A broader view of evolution than drives to survive and to reproduce
*) "What the fate of humanity depends on" - selecting the right approach to the creation of AGI
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
At a time when many people find it depressingly easy to see how "bad futures" could arise, what is a credible narrative of a "good future"? That question is of central concern to our guest in this episode, Gerd Leonhard.
Gerd is one of the most successful futurists on the international speaker circuit. He estimates that he has spoken to a combined audience of 2.5 million people in more than 50 countries.
He left his home country of Germany in 1982 to go to the USA to study music. While he was in the US, he set up one of the first internet-based music businesses, and then he parlayed that into his current speaking career. His talks and videos are known for their engaging use of technology and design, and he prides himself on his rigorous use of research and data to back up his claims and insights.
Selected follow-ups:
https://www.futuristgerd.com/
https://www.futuristgerd.com/sharing/thegoodfuturefilm/
Topics in this conversation include:
*) The need for a positive antidote to all the negative visions of the future that are often in people's minds
*) People, planet, purpose, and prosperity - rather than an over-focus on profit and economic growth
*) Anticipating stock markets that work differently, and with additional requirements before dividends can be paid
*) A reason to be an optimist: not because we have fewer problems (we don't), but because we have more capacity to deal with these problems
*) From "capitalism" to "progressive capitalism" (another name could be "social capitalism")
*) Kevin Kelly's concept of "protopia" as a contrast to both utopia and dystopia
*) Too much of a good thing can be... a bad thing
*) How governments and the state interact with free markets
*) Managers who try to prioritise people, planet, or purpose (rather than profits and dividends) are "whacked by the stock market"
*) The example of the Montreal protocol regarding the hole in the ozone layer, when governments gave a strong direction to the chemical industry
*) Some questions about people, planet, purpose, and prosperity are relatively straightforward, but others are much more contested
*) Conflicting motivations within high tech firms regarding speed-to-market vs. safety
*) Controlling the spread of potentially dangerous AI may be much harder than controlling the spread of nuclear weapons technology, especially as costs reduce for AI development and deployment
*) Despite geopolitical tensions, different countries are already collaborating behind the scenes on matters of AGI safety
*) How much "financial freedom" should the definition of a good future embrace?
*) Universal Basic Income and "the Star Trek economy" as potential responses to the Economic Singularity
*) Differing assessments of the role of transhumanism in the good future
*) Risks when humans become overly dependent on technology
*) Most modern humans can't make a fire from scratch: does that matter?
*) The Carrington Event of 1859: the most intense geomagnetic storm in recorded history
*) How views changed in the 19th century about giving anaesthetics to women to counter the (biblically mandated?) intense pains of childbirth
*) Will views change in a similar way about the possibility of external wombs (ectogenesis)?
*) Jamie Bartlett's concept of "the moral singularity" when humans lose the ability to take hard decisions
*) Can AI provide useful advice about human-human relationships?
*) Is everything truly important about humans located in our minds?
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Francesca Rossi. Francesca studied computer science at the University of Pisa in Italy, where she became a professor, before spending 20 years at the University of Padova. In 2015 she joined IBM's T.J. Watson Research Lab in New York, where she is now an IBM Fellow and also IBM's AI Ethics Global Leader.
Francesca is a member of numerous international bodies concerned with the beneficial use of AI, including being a board member at the Partnership on AI, a Steering Committee member and designated expert at the Global Partnership on AI, a member of the scientific advisory board of the Future of Life Institute, and Chair of the international conference on Artificial Intelligence, Ethics, and Society which is being held in Montreal in August this year.
From 2022 until 2024, she holds the prestigious role of President of the AAAI, that is, the Association for the Advancement of Artificial Intelligence. The AAAI has recently held its annual conference, and in this episode, Francesca shares some reflections on what happened there.
Selected follow-ups:
https://researcher.watson.ibm.com/researcher/view.php?person=ibm-Francesca.Rossi2
https://en.wikipedia.org/wiki/Francesca_Rossi
https://partnershiponai.org/
https://gpai.ai/
Topics in this conversation include:
*) How a one-year sabbatical at the Harvard Radcliffe Institute changed the trajectory of Francesca's life
*) New generative AI systems such as ChatGPT expand previous issues involving bias, privacy, copyright, and content moderation - because they are trained on very large data sets that have not been curated
*) Large language models (LLMs) have been optimised, not for "factuality", but for creating language that is syntactically correct
*) Compared to previous AIs, the new systems impact a wider range of occupations, and they also have major implications for education
*) Are the "AI ethics" and "responsible AI" approaches that address the issues of existing AI systems also the best approaches for the "AI alignment" and "AI safety" issues raised by artificial general intelligence?
*) Different ideas on how future LLMs could acquire mastery, not only over language, but also over logic, inference, and reasoning
*) Options for combining classical AI techniques focussing on knowledge and reasoning, with the data-intensive approaches of LLMs
*) How "foundation models" allow training to be split into two phases, with a shorter supervised phase customising the output from a prior longer unsupervised phase
*) Even experts face the temptation to anthropomorphise the behaviour of LLMs
*) On the other hand, unexpected capabilities have emerged within LLMs
*) The interplay of "thinking fast" and "thinking slow" - adapting, for the context of AI, insights from Daniel Kahneman about human intelligence
*) Cross-fertilisation of ideas from different communities at the recent AAAI conference
*) An extension of that "bridge" theme to involve ideas from outside of AI itself, including the use of methods of physics to observe and interpret LLMs from the outside
*) Prospects for interpretability, explainability, and transparency of AI - and implications for trust and cooperation between humans and AIs
*) The roles played by different international bodies, such as PAI and GPAI
*) Pros and cons of including China in the initial phase of GPAI
*) Designing regulations to be future-proof, with parts that can change quickly
*) An important new goal for AI experts
*) A vision for the next 3-5 years
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
In this episode, Tim Clement-Jones brings us up to date on the reactions by members of the UK's House of Commons to recent advances in the capabilities of AI systems, such as ChatGPT. He also looks ahead to larger changes, in the UK and elsewhere.
Lord Clement-Jones CBE, or Tim, as he prefers to be known, has been a very successful lawyer, holding senior positions at ITV and Kingfisher among others, and later becoming London Managing Partner of law firm DLA Piper.
He is better known as a politician. He became a life peer in 1998, and has been the Liberal Democrats’ spokesman on a wide range of issues. The reason we are delighted to have him as a guest on the podcast is that he was the chair of the AI Select Committee, Co-Chair of the All-Party Parliamentary Group on AI, and is now a member of a special inquiry on the use of AI in Weapons Systems.
Tim also has multiple connections with universities and charities in the UK.
Selected follow-up reading:
https://www.lordclementjones.org/
https://www.parallelparliament.co.uk/APPG/artificial-intelligence
https://arcs.qmul.ac.uk/governance/council/council-membership/timclement-jones.html
Topics in this conversation include:
*) Does "the Westminster bubble" understand the importance of AI?
*) Evidence that "the tide is turning" - MPs are demonstrating a spirit of inquiry
*) The example of Sir Peter Bottomley, the Father of the House (who has been an MP continuously since 1975)
*) New AI systems are showing characteristics that had not been expected to arrive for another 5 or 10 years, taking even AI experts by surprise
*) The AI duopoly (the US and China) and the possible influence of the UK and the EU
*) The forthcoming EU AI Act and the risk-based approach it embodies
*) The importance of regulatory systems being innovation-friendly
*) How might the EU support the development of some European AI tech giants?
*) The inevitability(?) of the UK needing to become "a rule taker"
*) Cynical and uncynical explanations for why major tech companies support EU AI regulation
*) The example of AI-powered facial recognition: benefits and risks
*) Is Brexit helping or hindering the UK's AI activities?
*) Complications with the funding of AI research in the UK's universities
*) The risks of a slow-down in the UK's AI start-up ecosystem
*) Looking further afield: AI ambitions in the UAE and Saudi Arabia
*) The particular risks of lethal autonomous weapons systems
*) Future conflicts between AI-controlled tanks and human-controlled tanks
*) Forecasts for the arrival of artificial general intelligence: 10-15 years from now?
*) Superintelligence may emerge from a combination of separate AI systems
*) The case for "technology-neutral" regulation
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Advanced AI is currently pretty much a duopoly between the USA and China. The US is the clear leader, thanks largely to its tech giants – Google, Meta, Microsoft, Amazon, and Apple. China also has a fistful of tech giants – Baidu, Alibaba, and Tencent are the ones usually listed, but the Chinese government has also taken a strong interest in AI since DeepMind’s AlphaGo system beat the world’s best Go player in 2016.
People in the West don’t know enough about China’s current and future role in AI. Some think its companies just copy their Western counterparts, while others think it is an implacable and increasingly dangerous enemy, run by a dictator who cares nothing for his people. Both those views are wrong.
One person who has been trying to provide a more accurate picture of China and AI in recent years is Jeff Ding, the author of the influential newsletter ChinAI.
Jeff grew up in Iowa City and is now an Assistant Professor of Political Science at George Washington University. He earned a PhD at Oxford University, where he was a Rhodes Scholar, and wrote his thesis on how past technological revolutions influenced the rise and fall of great powers, with implications for U.S.-China competition. After gaining his doctorate he worked at Oxford’s Future of Humanity Institute and Stanford’s Institute for Human-Centered Artificial Intelligence.
Selected follow-up reading:
https://jeffreyjding.github.io/
https://chinai.substack.com/
https://www.tortoisemedia.com/intelligence/global-ai/
Topics in this conversation include:
*) The Thucydides Trap: Is conflict inevitable as a rising geopolitical power approaches parity with an established power?
*) Different ways of trying to assess how China's AI industry compares with that of the U.S.
*) Measuring innovations in creating AI is different from measuring adoption of AI solutions across multiple industries
*) Comparisons of papers submitted to AI conferences such as NeurIPS, citations, patents granted, and the number of data scientists
*) The biggest misconceptions westerners have about China and AI
*) A way in which Europe could still be an important player alongside the duopoly
*) Attitudes in China toward data privacy and facial recognition
*) Government focus on AI can be counterproductive
*) Varieties of government industrial policy: the merits of encouraging decentralised innovation
*) The Titanic and the origin of Silicon Valley
*) Mariana Mazzucato's question: "Who created the iPhone?"
*) Learning from the failure of Japan's 5th Generation Computers initiative
*) The evolution of China's Social Credit systems
*) Research by Shazeda Ahmed and Jeremy Daum
*) Factors encouraging and discouraging the "splinternet" separation of US and Chinese tech ecosystems
*) Connections that typically happen outside of the public eye
*) Financial interdependencies
*) Changing Chinese government attitudes toward Chinese Internet giants
*) A broader tension faced by the Chinese government
*) Future scenarios: potential good and bad developments
*) Transnational projects to prevent accidents or unauthorised use of powerful AI systems
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Peter James is one of the world’s most successful crime writers. His "Roy Grace" series, about a detective in Brighton, England, near where Peter lives, has produced a remarkable 19 consecutive Sunday Times Number One bestsellers. His legions of devoted fans await each new release eagerly. The books have been televised, with the third series of "Grace", starring John Simm, being commissioned for next year.
Peter has worked in other genres too, having written 36 novels altogether. When Calum first met Peter in the mid-1990s, Peter's science fiction novel “Host” was generating rave reviews. It was the world’s first electronically published novel, and a copy of its floppy disc version is on display in London’s Science Museum.
Peter is also a self-confessed petrol-head, with an enviable collection of classic cars, and a pretty successful track record of racing some of them. The discussion later in the episode addresses the likely arrival of self-driving cars. But we start with the possibility of mind uploading, which is the subject of “Host”.
Selected follow-up reading:
https://www.peterjames.com/
https://www.alcor.org/
Topics in this conversation include:
*) Peter's passion for the future
*) The transformative effect of the 1990 book "Great Mambo Chicken and the Transhuman Condition"
*) A Christmas sojourn at MIT and encounters with AI pioneer Marvin Minsky
*) The origins of the ideas behind "Host"
*) Meeting Alcor, the cryonics organisation, in Riverside California
*) How cryonics has evolved over the decades
*) "The first person to live to 200 has already been born"
*) Quick summaries of previous London Futurists Podcast episodes featuring Aubrey de Grey and Andrew Steele
*) The case for doing better than nature
*) Peter's novel "Perfect People" and the theme of "designer babies"
*) Possible improvements in the human condition from genetic editing
*) The risk of a future "genetic underclass"
*) Technology divides often don't last: consider the "fridge divide" and the "smartphone divide"
*) Calum's novel "Pandora's Brain"
*) Why Peter is comfortable with the label "transhumanist"
*) Various ways of reading (many) more books
*) A thought experiment involving a healthy 99 year old
*) If people lived a lot longer, we might take better care of our planet
*) Peter's views on technology assisting writers
*) Strengths and weaknesses of present-day ChatGPT as a writer
*) Prospects for transhumans to explore space
*) The "bunker experiments" into the circadian cycle, which suggest that humans naturally revert to a daily cycle closer to 26 hours than 24 hours
*) Possible answers to Fermi's question about lack of any sign of alien civilisations
*) Reflections on "The Pale Blue Dot of Earth" (originally by Carl Sagan)
*) The likelihood of incredible surprises in the next few decades
*) Pros and cons of humans driving on public roads (especially when drivers are using mobile phones)
*) Legal and ethical issues arising from autonomous cars
*) Exponential change often involves a frustrating slow phase before fast breakthroughs
*) Anticipating the experience of driving inside immersive virtual reality
*) The tragic background to Peter's book "Possession"
*) A concluding message from the science fiction writer Kurt Vonnegut
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is a Briton who is based in Berlin, namely Andrew Steele. Earlier in his life Andrew spent nine years at the University of Oxford where, among other accomplishments, he gained a PhD in physics. His focus switched to computational biology, and he held positions at Cancer Research UK and the Francis Crick Institute.
Along the way, Andrew decided that aging was the single most important scientific challenge of our time. This led him to write the book "Ageless: The New Science of Getting Older Without Getting Old". There are a lot of books these days about the science of slowing, stopping, and even reversing aging, but Andrew's book is perhaps the best general scientific introduction to this whole field.
Selected follow-ups:
https://andrewsteele.co.uk/
https://www.youtube.com/DrAndrewSteele
https://ageless.link/
Topics in this conversation include:
*) The background that led Andrew to write his book "Ageless"
*) A graph that changed a career
*) The chance of someone dying in the next year doubles every eight years they live (a worked example follows this list)
*) For tens of thousands of years, human life expectancy didn't change
*) In recent centuries, the background mortality rate has significantly decreased, but the eight-year "Gompertz curve" doubling of mortality remains unchanged
*) Some animals do not have this mortality doubling characteristic; they are said to be "negligibly senescent", "biologically immortal", or "ageless"
*) An example: Galapagos tortoises
*) The concept of "hallmarks of aging" - and different lists of these hallmarks
*) Theories of aging: wear-and-tear vs. programmed obsolescence
*) Evolution and aging: two different strategies that species can adopt
*) Wear-and-tear of teeth - as seen from a programmed aging point-of-view
*) The case for a pragmatic approach
*) Dietary restriction and healthier aging
*) The potential of computational biology system models to generate better understanding of linkages between different hallmarks of aging
*) Might some hallmarks, for example telomere shortening or epigenetic damage, prove more fundamental than others?
*) Special challenges posed by damage in the proteins in the scaffolding between cells
*) What's required to accelerate the advent of "longevity escape velocity"
*) Excitement and questions over the funding available to Altos Labs
*) Measuring timescales in research dollars rather than years
*) Reasons for optimism for treatments of some of the hallmarks, for example with senolytics, but others aren't being properly addressed
*) Breakthrough progress with the remaining hallmarks could be achieved with $5-10B investment each
*) Adding some extra for potential unforeseen hallmarks, that sums to a total of around $100B before therapies for all aspects of aging could be in major clinical trials
*) Why such an expenditure is in principle relatively easily affordable
*) Reflections on moral and ethical objections to treatments against aging
*) Overpopulation, environmental strains, resource sustainability, and net zero impact
*) Aging as the single largest cause of death in the world - in all countries
*) Andrew's current and forthcoming projects, including a book on options for funding science with the biggest impact
*) Looking forward to "being more tortoise".
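The "doubling every eight years" point above can be made concrete with a small worked example. The sketch below is purely illustrative: the 0.1% baseline risk at age 30 is an assumption chosen for the arithmetic, not a figure quoted in the episode; only the eight-year doubling reflects the Gompertz pattern discussed.

```python
def annual_mortality_risk(age: float, baseline_age: float = 30.0,
                          baseline_risk: float = 0.001,
                          doubling_years: float = 8.0) -> float:
    # Gompertz-style law: annual risk of death doubles every `doubling_years`.
    return baseline_risk * 2 ** ((age - baseline_age) / doubling_years)

for age in (30, 40, 50, 60, 70, 80, 90):
    risk = annual_mortality_risk(age)
    print(f"age {age}: ~{risk:.2%} chance of dying within the year")
```

On these assumed numbers, annual risk climbs from 0.1% at age 30 to roughly 18% at age 90. A "negligibly senescent" species, by contrast, would keep roughly the same annual risk at every age, which is what makes the Galapagos tortoise such an interesting comparison.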
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
It is nearly 40 years since our guest in this episode, pioneering transhumanist Natasha Vita-More, created the first version of the Transhumanist Manifesto. Since that time, Natasha has established numerous core perspectives, values, and actions in the global transhumanist family.
Natasha joins us in this episode to share her observations on how transhumanism has evolved over the decades, and to reflect on her work in building the movement, spanning practice-based approaches, scientific contributions, and theoretical innovations.
Areas we explore include: How has Natasha's work seeded the global growth of transhumanism? What are the main advances over the years that she particularly values? And what are the disappointments?
We also look to the future: What are her hopes and expectations for the next ten years of transhumanism?
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Selected follow-up reading:
https://natashavita-more.com/
https://www.fightaging.org/archives/2004/02/vital-progress-summit/
http://www.extropy.org/proactionaryprinciple.htm
https://metanexus.net/transhumanism-and-its-critics/
https://whatistranshumanism.org/
https://www.alcor.org/library/persistence-of-long-term-memory-in-vitrified-and-revived-simple-animals/
https://waitbutwhy.com/2016/03/cryonics.html
F. M. Esfandiary: https://archives.nypl.org/mss/4846
https://www.maxmore.com/
The World’s Most Dangerous Idea? https://nickbostrom.com/papers/dangerous
https://theconversation.com/the-end-of-history-francis-fukuyamas-controversial-idea-explained-193225
https://www.humanityplus.org/
https://transhumanist-studies.teachable.com/
Anyone Can Code, Ethiopia: https://icogacc.com/
https://afrolongevity.taffds.org/
Our guest in this episode is the scientist and science fiction author David Brin, whose writings have won the Hugo, Locus, Campbell, and Nebula Awards. His style is sometimes called 'hard science fiction'. This means his narratives feature scientific or technological change that is plausible rather than purely magical. The scenarios he creates are thought-provoking as well as entertaining. His writing inspires readers but also challenges them, with important questions not just about the future, but also about the present.
Perhaps his most famous non-fiction work is his book "The Transparent Society: Will Technology Force Us to Choose Between Privacy and Freedom?", first published in 1998. With each passing year it seems that the questions and solutions raised in that book are becoming ever more pressing. One aspect of this has been called Brin's Corollary to Moore's Law: Every year, the cameras will get smaller, cheaper, more numerous and more mobile.
David also frequently writes online about topics such as space exploration, attempts to contact aliens, homeland security, the influence of science fiction on society and culture, the future of democracy, and much more besides.
Topics discussed in this conversation include:
*) Reactions to reports of flying saucers
*) Why photographs of UFOs remain blurry
*) Similarities between reports of UFOs and, in prior times, reports of elves
*) Replicating UFO phenomena with cat lasers
*) Changes in attitudes by senior members of the US military
*) Appraisals of the Mars Rovers
*) Pros and cons of additional human visits to the moon
*) Why alien probes might be monitoring this solar system from the asteroid belt
*) Investigations of "moonlets" in Earth orbit
*) Looking for pi in the sky
*) Reasons why life might be widespread in the galaxy - but why life intelligent enough to launch spacecraft may be rare
*) Varieties of animal intelligence: How special are humans?
*) Humans vs. Neanderthals: rounds one and two
*) The challenges of writing about a world that includes superintelligence
*) Kurzweil-style hybridisation and Mormon theology
*) Who should we admire most: lone heroes or citizens?
*) Benefits of reciprocal accountability and mutual monitoring (sousveillance)
*) Human nature: Delusions, charlatans, and incantations
*) The great catechism of science
*) Two levels at which the ideas of a transparent society can operate
*) "Asimov's Laws of Robotics won't work"
*) How AIs might be kept in check by other AIs
*) The importance of presenting gedanken experiments
Fiction mentioned (written by David Brin unless noted otherwise):
The Three-Body Problem (Liu Cixin)
Existence
The Sentinel (Arthur C. Clarke)
Startide Rising
The Uplift War
Kiln People
The Culture Series (Iain M. Banks)
The Expanse (James S.A. Corey)
The Postman (the book and the film)
Stones of Significance
Fahrenheit 451 (Ray Bradbury)
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Selected follow-up reading:
http://www.davidbrin.com/
http://davidbrin.blogspot.com/2021/07/whats-really-up-with-uaps-ufos.html
OpenAI's ChatGPT and picture-generating AI systems like Midjourney and Stable Diffusion have got a lot more people interested in advanced AI and talking about it. Which is a good thing. It will not be pretty if the transformative changes that will happen in the next two or three decades take most of us by surprise.
A company that has been pioneering advanced AI for longer than most is IBM, and we are very fortunate to have with us in this episode one of IBM’s most senior executives.
Alessandro Curioni has been with the company for 25 years. He is an IBM Fellow, Director of IBM Research, and Vice President for Europe and Africa.
Topics discussed in this conversation include:
*) Some background: 70 years of inventing the future of computing
*) The role of grand challenges to test and advance the world of AI
*) Two major changes in AI: from rules-based to trained, and from training using annotated data to self-supervised training using non-annotated data
*) Factors which have allowed self-supervised training to build large useful models, as opposed to an unstable cascade of mistaken assumptions
*) Foundation models that extend beyond text to other types of structured data, including software code, the reactions of organic chemistry, and data streams generated from industrial processes
*) Moving from relatively shallow general foundation models to models that can hold deep knowledge about particular subjects
*) Identification and removal of bias in foundation models
*) Two methods to create models tailored to the needs of particular enterprises
*) The modification by RLHF (Reinforcement Learning from Human Feedback) of models created by self-supervised learning
*) Examples of new business opportunities enabled by foundation models
*) Three "neuromorphic" methods to significantly improve the energy efficiency of AI systems: chips with varying precision, memory and computation co-located, and spiking neural networks
*) The vulnerability of existing confidential data to being decrypted in the relatively near future
*) The development and adoption of quantum-safe encryption algorithms
*) What a recent "quantum apocalypse" paper highlights as potential future developments
*) Changing forecasts of the capabilities of quantum computing
*) IBM's attitude toward Artificial General Intelligence and the Turing Test
*) IBM's overall goals with AI, and the selection of future "IBM Grand Challenges" in support of these goals
*) Augmenting the capabilities of scientists to accelerate breakthrough scientific discoveries.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Selected follow-up reading:
https://researcher.ibm.com/researcher/view.php?person=zurich-cur
https://www.zurich.ibm.com/st/neuromorphic/
https://www.nist.gov/news-events/news/2022/07/nist-announces-first-four-quantum-resistant-cryptographic-algorithms
Quantum computing is a tough subject to explain and discuss. As Niels Bohr put it, “Anyone who is not shocked by quantum theory has not understood it”. Richard Feynman helpfully added, “I think I can safely say that nobody understands quantum mechanics”.
Quantum computing employs the weird properties of quantum mechanics, like superposition and entanglement. Classical computing uses binary digits, or bits, which are either on or off. Quantum computing uses qubits, which can be in a superposition of on and off at the same time; combined with entanglement, this lets a register of qubits represent a vast number of possibilities simultaneously, which is what makes quantum computers potentially far more powerful for certain kinds of problem.
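To make that lay description slightly more concrete (a reference note from us, not something stated in the episode): a single qubit's state is a weighted superposition of the two classical values, and a register of n qubits is described by 2^n amplitudes at once, which is where the potential advantage comes from.

```latex
% A single qubit is a weighted superposition of the two classical values:
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1
% An n-qubit register is described by 2^n complex amplitudes at once:
|\Psi\rangle = \sum_{x \in \{0,1\}^n} c_x\,|x\rangle, \qquad \sum_{x} |c_x|^2 = 1
```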
Co-hosts Calum and David knew that to address this important but difficult subject, they needed an absolute expert who was capable of explaining it in lay terms. When Calum heard Dr Ignacio Cirac give a talk on the subject in Madrid last month, he knew they had found their man.
Ignacio is director of the Max Planck Institute of Quantum Optics in Germany, and holds honorary and visiting professorships pretty much everywhere that serious work is done on quantum physics. He has done seminal work on the trapped ion approach to quantum computing and several other aspects of the field, and has published almost 500 papers in prestigious journals. He is spoken of as a possible Nobel Prize winner.
Topics discussed in this conversation include:
*) A brief history of quantum computing (QC) from the 1990s to the present
*) The kinds of computation where QC can out-perform classical computers
*) Likely timescales for further progress in the field
*) Potential quantum analogies of Moore's Law
*) Physical qubits contrasted with logical qubits
*) Reasons why errors often arise with qubits - and approaches to reducing these errors
*) Different approaches to the hardware platforms of QC - and which are most likely to prove successful
*) Ways in which academia can compete with (and complement) large technology companies
*) The significance of "quantum supremacy" or "quantum advantage": what has been achieved already, and what might be achieved in the future
*) The risks of a forthcoming "quantum computing winter", similar to the AI winters in which funding was reduced
*) Other comparisons and connections between AI and QC
*) The case for keeping an open mind, and for supporting diverse approaches, regarding QC platforms
*) Assessing the threats posed by Shor's algorithm and fault-tolerant QC
*) Why companies should already be considering changing the encryption systems that are intended to keep their data secure
*) Advice on how companies can build and manage in-house "quantum teams"
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Selected follow-up reading:
https://en.wikipedia.org/wiki/Juan_Ignacio_Cirac_Sasturain
https://en.wikipedia.org/wiki/Rydberg_atom
In the summer of 1950, the physicist Enrico Fermi and some colleagues at the Los Alamos Lab in New Mexico were walking to lunch, and casually discussing flying saucers, when Fermi blurted out “But where is everybody?” He was not the first to pose the question, and the precise phrasing is disputed, but the mystery he was referring to remains compelling.
We appear to live in a vast universe, with billions of galaxies, each with billions of stars, mostly surrounded by planets, including many like the Earth. The universe appears to be 13.7 billion years old, and even if intelligent life requires an Earth-like planet, and even if it can only travel and communicate at the speed of light, we ought to see lots of evidence of intelligent life. But we don’t. No beams of light from stars occluded by artificial satellites spelling out pi. No signs of galactic-scale engineering. No clear evidence of little green men demanding to meet our leaders.
Numerous explanations have been advanced to explain this discrepancy, and one man who has spent more brainpower than most exploring them is the always-fascinating Anders Sandberg. Anders is a computational neuroscientist who got waylaid by philosophy, which he pursues at Oxford University, where he is a senior research fellow.
Topics in this episode include:
* The Drake equation for estimating the number of active, communicative extraterrestrial civilizations in our galaxy (written out in full after this list)
* Changes in recent decades in estimates of some of the factors in the Drake equation
* The amount of time it would take self-replicating space probes to spread across the galaxy
* The Dark Forest hypothesis - that all extraterrestrial civilizations are deliberately quiet, out of fear
* The likelihood of extraterrestrial civilizations emitting observable signs of their existence, even if they try to suppress them
* The implausibility of all extraterrestrial civilizations converging to the same set of practices, rather than at least some acting in ways where we would notice their existence - and a counter-argument
* The possibility of civilisations opting to spend all their time inside virtual reality computers located in deep interstellar space
* The Aestivation hypothesis, in which extraterrestrial civilizations put themselves into a "pause" mode until the background temperature of the universe has become much lower
* The Quarantine or Zoo hypothesis, in which extraterrestrial civilizations are deliberately shielding their existence from an immature civilization like ours
* The Great Filter hypothesis, in which life on other planets has a high probability, either of failing to progress to the level of space-travel, or of failing to exist for long after attaining the ability to self-destruct
* Possible examples of "great filters"
* Should we hope to find signs of life on Mars?
* The Simulation hypothesis, in which the universe is itself a kind of video game, created by simulators, who had no need (or lacked sufficient resources) to create more than one intelligent civilization
* Implications of this discussion for the wisdom of the METI project - Messaging to Extraterrestrial Intelligence
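For reference, the standard form of the Drake equation mentioned in the first topic above is:

```latex
% N   : number of detectable civilisations in our galaxy
% R*  : average rate of star formation
% f_p : fraction of stars with planets
% n_e : habitable planets per star with planets
% f_l : fraction of those that develop life
% f_i : fraction of those that develop intelligence
% f_c : fraction of those that emit detectable signals
% L   : length of time over which such signals are emitted
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L
```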
Selected follow-up reading:
* Anders' website at FHI Oxford: https://www.fhi.ox.ac.uk/team/anders-sandberg/
* The Great Filter, by Robin Hanson: http://mason.gmu.edu/~rhanson/greatfilter.html
* "Seventy-Five Solutions to the Fermi Paradox and the Problem of Extraterrestrial Life" - a book by Stephen Webb: https://link.springer.com/book/10.1007/978-3-319-13236-5
* The aestivation hypothesis: https://www.fhi.ox.ac.uk/aestivation-hypothesis-resolving-fermis-paradox/
* Should We Message ET? by David Brin: http://www.davidbrin.com/nonfiction/meti.html
An area of technology that has long been anticipated is Extended Reality (XR), which includes Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). For many decades, researchers have developed various experimental headsets, glasses, gloves, and even immersive suits, to give wearers of these devices the impression of existing within a reality that is broader than what our senses usually perceive. More recently, a number of actual devices have come to the market, with, let's say it, mixed reactions. Some enthusiasts predict rapid improvements in the years ahead, whereas other reviewers focus on disappointing aspects of device performance and user experience.
Our guest in this episode of London Futurists Podcast is someone widely respected as a wise guide in this rather turbulent area. He is Steve Dann, who among other roles is the lead organiser of the highly popular Augmenting Reality meetup in London.
Topics discussed in this episode include:
*) Steve's background in film and television special effects
*) The different forms of Extended Reality
*) Changes in public understanding of virtual and augmented reality
*) What can be learned from past disappointments in this field
*) Prospects for forthcoming tipping points in market adoption
*) Comparisons with the market adoption of smartwatches and of smartphones
*) Forecasting incremental improvements in key XR technologies
*) Why "VR social media" won't be a sufficient reason for mass adoption of VR
*) The need for compelling content
*) The particular significance of enterprise use cases
*) The potential uses of XR in training, especially for medical professionals
*) Different AR and VR use cases in medical training - and different adoption timelines
*) Why an alleged drawback of VR may prove to be a decisive advantage for it
*) The likely forthcoming battle over words such as "metaverse"
*) Why our future online experiences will increasingly be 3D
*) Prospects for open standards between different metaverses
*) Reasons for companies to avoid rushing to purchase real estate in metaverses
*) Movies that portray XR, and the psychological perception of "what is real"
*) Examples of powerful real-world consequences of VR experiences.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Selected follow-up reading:
https://www.meetup.com/augmenting-reality/
https://www.medicalrealities.com/about
Our guest on this episode is someone with excellent connections to the foresight departments of governments around the world. He is Jerome Glenn, Founder and Executive Director of the Millennium Project.
The Millennium Project is a global participatory think tank established in 1996, which now has over 70 nodes around the world. Its stated purpose is to "Improve humanity's prospects for building a better world". The organisation produces regular "State of the Future" reports as well as updates on what it describes as "the 15 Global Challenges". It recently released an acclaimed report on three scenarios for the future of work. One of its new projects is the main topic in this episode, namely scenarios for the global governance of the transition from Artificial Narrow Intelligence (ANI) to Artificial General Intelligence (AGI).
Topics discussed in this episode include:
*) Why many futurists are jealous of Alvin Toffler
*) The benefits of a decentralised, incremental approach to foresight studies
*) Special features of the Millennium Project compared to other think tanks
*) How the Information Revolution differs from the Industrial Revolution
*) What is likely to happen if there is no governance of the transition to AGI
*) Comparisons with regulating the use of cars - and the use of nuclear materials
*) Options for licensing, auditing, and monitoring
*) How the development of a technology may be governed even if it has few visible signs
*) Three options: "Hope", "Control", and "Merge" - but all face problems; in all three cases, getting the initial conditions right could make a huge difference
*) Distinctions between AGI and ASI (Artificial Superintelligence), and whether an ASI could act in defiance of its initial conditions
*) Controlling AGI is likely to be impossible, but controlling the companies that are creating AGI is more credible
*) How actions taken by the EU might influence decisions elsewhere in the world
*) Options for "aligning" AGI as opposed to "controlling" it
*) Complications with the use of advanced AI by organised crime and by rogue states
*) The poor level of understanding of most political advisors about AGI, and their tendency to push discussions back to the issues of ANI
*) Risks of catastrophic social destabilisation if "the mother of all panics" about AGI occurs on top of existing culture wars and political tribalism
*) Past examples of progress with technologies that initially seemed impossible to govern
*) The importance of taking some initial steps forward, rather than being overwhelmed by the scale of the challenge.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Selected follow-up reading:
https://en.wikipedia.org/wiki/Jerome_C._Glenn
https://www.millennium-project.org/
https://www.millennium-project.org/first-steps-for-artificial-general-intelligence-governance-study-have-begun/
The 2020 book "After Shock: The World's Foremost Futurists Reflect on 50 Years of Future Shock - and Look Ahead to the Next 50"
This episode features the CEO of Brainnwave, Steven Coates, who is a pioneer in the field of Decision Intelligence.
Decision Intelligence is the use of AI to enhance the ability of companies, organisations, or individuals to make key decisions - decisions about which new business opportunities to pursue, about evidence of possible leakage or waste, about the allocation of personnel to tasks, about geographical areas to target, and so on.
What these decisions have in common is that they can all be improved by the analysis of large sets of data that defy attempts to reduce them to a single dimension. In these cases, AI systems that are suited to multi-dimensional analysis can make all the difference between wise and unwise decisions.
Topics discussed in this episode include:
*) The ideas initially pursued at Brainnwave, and how they evolved over time
*) Real-world examples of Decision Intelligence - in the mining industry, in the supply of mobile power generators, and in the oil industry
*) Recommendations for businesses to focus on Decision Intelligence as they adopt fuller use of AI, on account of the direct impact on business outcomes
*) Factors holding up the wider adoption of AI
*) Challenges when "data lakes" turn into "data swamps"
*) Challenges with the limits of trust that can be placed in data
*) Challenges with the lack of trust in algorithms
*) Skills in explaining how algorithms are reaching their decisions
*) The benefits of an agile mindset in introducing Decision Intelligence.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Some follow-up reading:
https://brainnwave.ai/
As AI automates larger portions of the activities of companies and organisations, there's a greater need to think carefully about questions of privacy, bias, transparency, and explainability. Due to scale effects, mistakes made by AI and the automated analysis of data can have wide impacts. On the other hand, evidence of effective governance of AI development can deepen trust and accelerate the adoption of significant innovations.
One person who has thought a great deal about these issues is Ray Eitel-Porter, Global Lead for Responsible AI at Accenture. In this episode of the London Futurists Podcast, he explains what conclusions he has reached.
Topics discussed include:
*) The meaning and importance of "Responsible AI"
*) Connections and contrasts with "AI ethics" and "AI safety"
*) The advantages of formal AI governance processes
*) Recommendations for the operation of an AI ethics board
*) Anticipating the operation of the EU's AI Act
*) How different intuitions of fairness can produce divergent results
*) Examples where transparency has been limited
*) The potential future evolution of the discipline of Responsible AI.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Some follow-up reading:
https://www.accenture.com/gb-en/services/applied-intelligence/ai-ethics-governance
One area of technology that is frequently in the news these days is rejuvenation biotechnology, namely the possibility of undoing key aspects of biological aging via a suite of medical interventions. What these interventions target isn't individual diseases, such as cancer, stroke, or heart disease, but rather the common aggravating factors that lie behind the increasing prevalence of these diseases as we become older.
Our guest in this episode is someone who has been at the forefront for over 20 years of a series of breakthrough initiatives in this field of rejuvenation biotechnology. He is Dr Aubrey de Grey, co-founder of the Methuselah Foundation, the SENS Research Foundation, and, most recently, the LEV Foundation - where 'LEV' stands for Longevity Escape Velocity.
Topics discussed include:
*) Different concepts of aging and damage repair;
*) Why the outlook for damage repair is significantly more tangible today than it was ten years ago;
*) The role of foundations in supporting projects which cannot receive funding from commercial ventures;
*) Questions of pace of development: cautious versus bold;
*) Changing timescales for the likely attainment of robust mouse rejuvenation ('RMR') and longevity escape velocity ('LEV');
*) The "Less Death" initiative;
*) "Anticipating anticipation" - preparing for likely sweeping changes in public attitude once understanding spreads about the forthcoming available of powerful rejuvenation treatments;
*) Various advocacy initiatives that Aubrey is supporting;
*) Ways in which listeners can help to accelerate the attainment of LEV.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Some follow-up reading:
https://levf.org
https://lessdeath.org
A Venn diagram of people interested in how AI will shape our future, and members of the effective altruism community (often abbreviated to EA), would show a lot of overlap. One of the rising stars in this overlap is our guest in this episode, the polymath Jacy Reese Anthis.
Our discussion picks up themes from Jacy's 2018 book “The End of Animal Farming”, including an optimistic roadmap toward an animal-free food system, as well as factors that could alter that roadmap.
We also hear about the work of an organisation co-founded by Jacy: the Sentience Institute, which researches - among other topics - the expansion of moral considerations to non-human entities. We discuss whether AIs can be sentient, how we might know if an AI is sentient, and whether the design choices made by developers of AI will influence the degree and type of sentience of AIs.
The conversation concludes with some ideas about how various techniques can be used to boost personal effectiveness, and considers different ways in which people can relate to the EA community.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Some follow-up reading:
https://www.sentienceinstitute.org/
https://jacyanthis.com/
In the 4th century BC, the Greek philosopher Plato theorised that humans do not perceive the world as it really is. All we can see is shadows on a wall.
In 2003, the Swedish philosopher Nick Bostrom published a paper which formalised an argument to prove Plato was right. The paper argued that one of the following three statements is true:
1. We will go extinct fairly soon
2. Advanced civilisations don’t produce simulations containing entities which think they are naturally-occurring sentient intelligences. (This could be because it is impossible.)
3. We are in a simulation.
The reason is that if such simulations are possible, and civilisations can become advanced without destroying themselves first, then there will be vast numbers of simulations, and it is vanishingly unlikely that any randomly selected civilisation (like ours) is a naturally-occurring one.
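A minimal back-of-the-envelope sketch of that reasoning (our simplification, not Bostrom's exact formulation): if a fraction f of civilisations reach the simulation-running stage, and each such civilisation runs N ancestor-style simulations on average, then the fraction of all civilisation-histories that are simulated is roughly

```latex
% f : fraction of civilisations that reach an advanced, simulation-capable stage
% N : average number of ancestor-style simulations each such civilisation runs
P(\text{we are simulated}) \approx \frac{f\,N}{f\,N + 1}
% If f is not tiny (statement 1 is false) and N is large (statement 2 is false),
% this probability is close to 1 -- which is statement 3.
```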
Some people find this argument pretty convincing. As we will hear later, some of us have added twists to the argument. But some people go even further, and speculate about how we might bust out of the simulation.
One such person is our friend and our guest in this episode, Roman Yampolskiy, Professor of Computer Science at the University of Louisville.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Further reading:
"How to Hack the Simulation" by Roman Yampolskiy: https://www.researchgate.net/publication/364811408_How_to_Hack_the_Simulation
"The Simulation Argument" by Nick Bostrom: https://www.simulation-argument.com/
This episode discusses progress at Insilico Medicine, the AI drug development company founded by our guest, longevity pioneer Alex Zhavoronkov.
1.20 In Feb 2022, Insilico got an IPF drug into phase 1 clinical trials: a first for a wholly AI-developed drug
1.50 Insilico is now well-funded; its software is widely used in the pharma industry
2.30 How drug development works. First you create a hypothesis about what causes a disease
4.00 Pandaomics is Insilico’s software to generate hypotheses. It combines 20+ AI models, and huge public data repositories
6.00 This first phase is usually done in academia. It usually costs billions of dollars to develop a hypothesis. 95% of them fail
6.50 The second phase is developing a molecule which might treat the disease
7.15 This is the job of Insilico’s Chemistry 42 platform
7.30 The classical approach is to test thousands of molecules to see if they bind to the target protein
7.50 AI, by contrast, is able to "imagine" a novel molecule which might bind to it
8.00 You then test 10-15 molecules which have the desired characteristics
8.20 This is done with a variety of genetic algorithms, Generative Adversarial Networks (GANs), and some Transformer networks
8.35 Insilico has a “zoo” of 40 validated models
10.40 Given the ten-fold improvement, why hasn’t the whole drug industry adopted this process?
10.50 They do all have AI groups and they are trying to change, but they are huge companies, and it takes time
11.50 Is it better to invent new molecules, or re-purpose old drugs, which are already known to be safe in humans?
13.00 You can’t gain IP with re-purposed drugs: either somebody else “owns” them, or they are already generic
15.00 The IPF drug was identified during aging research, using aging clocks, and a deep neural net trained on longitudinal data
17.10 The third phase is where Insilico’s other platform, InClinico, comes into play
17.35 InClinico predicts the results of phase 2 (clinical efficacy) trials
18.15 InClinico is trained on massive data sets about previous trials
19.40 InClinico is actually Insilico’s oldest system. Its value has only been ascertained now that some drugs have made it all the way through the pipeline
22.05 A major pharma company asked Insilico to predict the outcome of ten of its trials
22.30 Nine of these ten trials were predicted correctly
23.00 But the company decided that adopting this methodology would be too much of an upheaval; it was unwilling to rely on outsiders so heavily
24.15 Hedge funds and banks have no such qualms
24.25 Insilico is doing pilots for their investments in biotech startups
26.30 Alex is from Latvia originally, studied in Canada, started his career in the US, but Insilico was established in Hong Kong. Why?
27.00 Chinese CROs, Contract Research Organisations, enable you to do research without having your own wetlab
28.00 Like Apple, Insilico designs in the US and does operations in China. You can also do clinical studies there
28.45 They needed their own people inside those CROs, so had to be co-located
29.10 Hong Kong still has great IP protection, financial expertise, scientific resources, and is a beautiful place to live
29.40 Post-Covid, Insilico also had to set up a site in Shanghai
30.35 It is very frustrating how much opposition has built up against international co-operation
32.00 Anti-globalisation ideas and attitudes are bad for longevity research, and all of biotech
33.20 Insilico has all the data it needs. Its bottleneck is talent
35.00 Another requirement is co-operation from governments and regulators, who often struggle to sort the wheat from the chaff among self-proclaimed AI companies
37.00 Longevity research is the most philanthropic activity in the world
37.30 Longevity Medicine Course is available to get clinical practitioners up to speed with the sector
Co-hosts Calum and David dig deep into aspects of David's recent new book "The Singularity Principles". Calum (CC) says he is, in part, unconvinced. David (DW) agrees that the projects he recommends are hard, but suggests some practical ways forward.
0.25 The technological singularity may be nearer than we think
1.10 Confusions about the singularity
1.35 “Taking back control of the singularity”
2.40 The “Singularity Shadow”: over-confident predictions which repulse people
3.30 The over-confidence includes predictions of timescale…
4.00 … and outcomes
4.45 The Singularity as the Rapture of the Nerds?
5.20 The Singularity is not a religion …
5.40 … although if positive, it will confer almost godlike powers
6.35 Much discussion of the Singularity is dystopian, but there could be enormous benefits, including…
7.15 Digital twins for cells and whole bodies, and super longevity
7.30 A new enlightenment
7.50 Nuclear fusion
8.10 Humanity’s superpower is intelligence
8.30 Amplifying our intelligence should increase our power
9.50 DW’s timeline: 50% chance of AGI by 2050, 10% by 2030
10.10 The timeline is contingent on human actions
10.40 Even if AGI isn’t coming until 2070, we should be working on AI alignment today
11.10 AI Impact’s survey of all contributors to NeurIPS
11.35 Median view: 50% chance of AGI in 2059, and many were pessimistic
12.15 This discussion can’t be left to AI researchers
12.40 A bad beta version might be our last invention
13.00 A few hundred people are now working on AI alignment, and tens of thousands on advancing AI
13.35 The growth of the AI research population is still faster
13.40 CC: Three routes to a positive outcome
13.55 1. Luck. The world turns out to be configured in our favour
14.30 2. Mathematical approaches to AI alignment succeed
14.45 We either align AIs forever, or manage to control them. This is very hard
14.55 3. We merge with the superintelligent machines
15.40 Uploading is a huge engineering challenge
15.55 Philosophical issues raised by uploading: is the self retained?
16.10 DW: routes 2 and 3 are too binary. A fourth route is solving morality
18.15 Individual humans will be augmented, indeed we already are
18.55 But augmented humans won’t necessarily be benign
19.30 DW: We have to solve beneficence
20.00 CC: We can’t hope to solve our moral debates before AGI arrives
20.20 In which case we are relying on route 1 – luck
20.30 DW: Progress in philosophy *is* possible, and must be accelerated
21.15 The Universal Declaration of Human Rights shows that generalised moral principles can be agreed
22.25 CC: That sounds impossible. The UDHR is very broad and often ignored
23.05 Solving morality is even harder than the MIRI project, and reinforces the idea that route 3 is our best hope
23.50 It’s not unreasonable to hope that wisdom correlates with intelligence
24.00 DW: We can proceed step by step, starting with progress on facial recognition, autonomous weapons, and such intermediate questions
25.10 CC: We are so far from solving moral questions. Americans can’t even agree if a coup against their democracy was a bad thing
25.40 DW: We have to make progress, and quickly. AI might help us.
26.50 The essence of transhumanism is that we can use technology to improve ourselves
27.20 CC: If you had a magic wand, your first wish should probably be to make all humans see each other as members of the same tribe
27.50 Is AI ethics a helpful term?
28.05 AI ethics is a growing profession, but if problems are ethical then people who disagree with you are bad, not just wrong
28.55 AI ethics makes debates about AI harder to resolve, and more angry
29.15 AI researchers are understandably offended by finger-wagging, self-proclaimed AI ethicists who may not understand what they are talking about
How likely is it that, by 2030, someone will build artificial general intelligence (AGI)?
Ross Nordby is an AI researcher who has shortened his AGI timelines: he has changed his mind about when AGI might be expected to exist. He recently published an article on the LessWrong community discussion site, giving his argument in favour of shortening these timelines. He now identifies 2030 as the date by which it is 50% likely that AGI will exist. In this episode, we ask Ross questions about his argument, and consider some of the implications that arise.
Article by Ross: https://www.lesswrong.com/posts/K4urTDkBbtNuLivJx/why-i-think-strong-general-ai-is-coming-soon
Effective Altruism Long-Term Future Fund: https://funds.effectivealtruism.org/funds/far-future
MIRI (Machine Intelligence Research Institute): https://intelligence.org/
00.57 Ross’ background: real-time graphics, mostly in video games
02.10 Increased familiarity with AI made him reconsider his AGI timeline
02.37 He submitted a grant request to the Effective Altruism Long-Term Future Fund to move into AI safety work
03.50 What Ross was researching: can we make an AI intrinsically interpretable?
04.25 The AGI Ross is interested in is defined by capability, regardless of consciousness or sentience
04.55 An AI that is itself "goalless" might be put to uses with destructive side-effects
06.10 The leading AI research groups are still DeepMind and OpenAI
06.43 Other groups, like Anthropic, are more interested in alignment
07.22 If you can align an AI to any goal at all, that is progress: it indicates you have some control
08.00 Is this not all abstract and theoretical - a distraction from more pressing problems?
08.30 There are other serious problems, like pandemics and global warming, but we have to solve them all
08.45 Globally, only around 300 people are focused on AI alignment: not enough
10.05 AGI might well be less than three decades away
10.50 AlphaGo surprised the community, which had expected Go to remain beyond machines for another 10-15 years
11.10 Then AlphaGo was surpassed by systems like AlphaZero and MuZero, which were actually simpler, and more flexible
11.20 AlphaTensor frames matrix multiplication as a game, and becomes superhuman at it
11.40 In 2017, the Transformer paper was published, but no-one forecast GPT-3's capabilities
12.00 This year, Minerva (similar to GPT-3) got 50% correct on the MATH dataset: high school competition math problems
13.16 Illustrators now feel threatened by systems like Dall-E, Stable Diffusion, etc
13.30 The conclusion is that intelligence is easier to simulate than we thought
13.40 But these systems also do stupid things. They are brittle
18.00 But we could use transformers more intelligently
19.20 They turn out to be able to write code, and to explain jokes, and do maths reasoning
21.10 Google's Gopher AI
22.05 Machines don’t yet have internal models of the world, which we call common sense
24.00 But an early version of GPT-3 demonstrated the ability to model a human thought process alongside a machine’s
27.15 Ross’ current timeline is 50% probability of AGI by 2030, and 90+% by 2050
27.35 Counterarguments?
29.35 So what is to be done?
30.55 If convinced that AGI is coming soon, most lay people would probably demand that all AI research stops immediately. Which isn’t possible
31.40 Maybe publicity would be good in order to generate resources for AI alignment. And to avoid a backlash against secrecy
33.55 It would be great if more billionaires opened their wallets, but actually there are funds available for people who want to work on the problem
34.20 People who can help would not have to take a pay cut to work on AI alignment
Audio engineering by Alexander Chace
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Why do human brains consume much less power than artificial neural networks? Simon Thorpe, Research Director at CNRS, explains his view that the key to artificial general intelligence is a "terabrain" that copies from human brains the use of sparse-firing networks of spiking neurons.
00.11 Recapping "the AI paradox"
00.28 The nervousness of CTOs regarding AI
00.43 Introducing Simon
01.43 45 years since Oxford, working out how the brain does amazing things
02.45 Brain visual perception as feed-forward vs. feedback
03.40 The ideas behind the system that performed so well in the 2012 ImageNet challenge
04.20 The role of prompts to alter perception
05.30 Drawbacks of human perceptual expectations
06.05 The video of a gorilla on the basketball court
06.50 Conjuring tricks and distractions
07.10 Energy consumption: human neurons vs. artificial neurons
07.26 The standard model would need 500 petaflops
08.40 Exaflop computing has just arrived
08.50 30 MW vs. 20 W (less than a lightbulb)
09.34 Companies working on low-power computing systems
09.48 Power requirements for edge computing
10.10 The need for 86,000 neuromorphic chips?
10.25 Dense activation of neurons vs. sparse activation
10.58 Real brains are event driven
11.16 Real neurons send spikes not floating point numbers
11.55 SpikeNET by Arnaud Delorme
12.50 Why are sparse networks studied so little?
14.40 A recent debate with Yann LeCun of Facebook and Bill Dally of Nvidia
15.40 One spike can contain many bits of information
16.24 Revisiting an experiment with eels from 1927 (Lord Edgar Adrian)
17.06 Biology just needs one spike
17.50 Chips moved from floating point to fixed point
19.25 Other mentions of sparse systems - MoE (Mixture of Experts)
19.50 Sparse systems are easier to interpret
20.30 Advocacy for "grandmother cells"
21.23 Chicks that imprinted on yellow boots
22.35 A semantic web in the 1960s
22.50 The Mozart cell
23.02 An expert system implemented in a neural network with spiking neurons
23.14 Power consumption reduced by a factor of one million
23.40 Experimental progress
23.53 Dedicated silicon: Spikenet Technology, acquired by BrainChip
24.18 The Terabrain Project, using standard off-the-shelf hardware
24.40 Impressive recent simulations on GPUs and on a MacBook Pro
26.26 A homegrown learning rule
26.44 Experiments with "frozen noise"
27.28 Anticipating emulating an entire human brain on a Mac Studio M1 Ultra
28.25 The likely impact of these ideas
29.00 This software will be given away
29.17 Anticipating "local learning" without the results being sent to Big Tech
30.40 GPT-3 could run on your phone next year
31.12 Our interview next year might be, not with Simon, but with his Terabrain
31.22 Our phones know us better than our spouses do
Simon's academic page: https://cerco.cnrs.fr/page-perso-simon-thorpe/
Simon's personal blog: https://simonthorpesideas.blogspot.com/
Audio engineering by Alexander Chace.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
This episode features Daniel Hulme, founder of Satalia and chief AI officer at WPP. What is AI good at today? And how can organisations increase the likelihood of deploying AI successfully?
02.55 What is AI good at today?
03.25 Deep learning isn’t yet being widely used in companies. Executives are wary of self-adapting systems
04.15 Six categories of AI deployment today
04.20 1. Automation. Using “if … then …” statements
04.50 2. Generative AI, like Dall-E
05.15 3. Humanisation, like DeepFake technology and natural language models
05.40 4. Machine learning to extract insights from data – finding correlations that humans could not
06.05 5. Complex decision making, aka operations research, or optimisation. “Companies don’t have ML problems, they have decision problems”
06.25 6. Augmenting humans physically or cognitively
06.50 Aren’t the tech giants using true AI systems in their operations?
07.15 A/B testing is a simple form of adaptation. Google A/B tested the colours of their logo
08.00 Complex adaptive systems with many moving parts are much riskier. If they go wrong, huge damage can occur
08.30 CTOs demand consistency from operational systems, and can’t tolerate the mistakes that are essential to learning
09.25 Can’t the mistakes be made in simulated environments?
10.20 Elon Musk says simulating the world is not how to develop self-driving cars
10.45 Companies undergoing digital transformations are building ERPs, which are “glorified databases”
11.20 The idea is to develop digital twins, which enable them to ask “what if…” questions
11.30 The coming confluence of three digital twins: workflow, workforce, and administrative processes
12.18 Why don’t supermarkets offer digital twins to their customers? They’re coming
14.55 People often think that creating a data lake and adding a system like Tableau on top is deploying AI
15.15 Even if you give humans better insights they often don’t make better decisions
15.20 Data scientists are not equipped to address opportunities in all 6 of the categories listed earlier
15.40 Companies should start by identifying and then prioritising the frictions in their organisations
16.10 Some companies are taking on “tech debt” which they will have to unwind in five years
16.25 Why aren’t large process industry companies boasting about massive revenue improvements or cost savings?
17.00 To make those decisions you need the right data, and top optimisation skills. That’s unusual
17.55 Companies ask for “quick wins” but that is an oxymoron
18.10 We do see project ROIs of 200%, but most projects fail due to under-investment or misunderstandings
19.00 Don’t start by just collecting data. The example of a low-cost airline which collected data about everything except rivals’ pricing
20.15 Humans usually do know where the signals are
22.25 Some of Daniel’s favourite AI projects
23.00 Tesco’s last-mile delivery system, which saves 20m delivery miles a year
24.00 Solving PwC’s consultant allocation problem radically improved many lives
25.10 In the next decade there will be a move away from pure ML towards ML+ optimisation
26.35 How these systems have been applied to Satalia
28.10 Daniel has thought a lot about how AI can enable companies to be very adaptable, and allocate decisions well
29.00 Satalia staff used to make recommendations for their own salaries, and their colleagues would make AI-weighted votes
29.30 The goal is to scale this approach not just across WPP, but across the planet
30.35 Heads of HR in WPP operating companies love the idea
Daniel's entry on Wikipedia: https://en.wikipedia.org/wiki/Daniel_J._Hulme
Audio engineering by Alexander Chace.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Calum and David reflect on their involvement in two recent conferences, one in Riyadh, and one in Dublin. Each conference highlighted a potential disruption in a major industry: a country with large ambitions in the AI space, and a new foundation in the longevity space.
00.00 A tale of two cities, two conferences, two industries
00.44 First, the 2nd Saudi Global AI Conference
01.03 Vision 2030
01.11 Saudi has always been a coalition between the fundamentalist Wahhabis and the Royal Family
01.38 The King chooses reform in the wake of 9/11
02.07 Mohamed bin Salman is appointed Crown Prince, and embarks on reform
02.28 The partial liberation of women, and the fundamentalists side-lined
03.10 The “Sheikhdown” in 2017
03.49 The Khashoggi affair and the Yemen war lead to Saudi being shunned
04.26 The West is missing what’s going on in Saudi
05.00 Lifting the Saudi economy’s reliance on petrochemicals
05.27 AI is central to Vision 2030
06.00 Can Saudi become one of the world’s top 10 or 15 AI countries?
06.20 The AI duopoly between the US and China is so strong, this isn’t as hard as you might think
06.55 Saudi’s advantages
07.22 Saudi’s disadvantages
07.54 The goal is not implausible
08.10 The short-term goals of the conference. A forum for discussions, deals, and trying to open the world’s eyes
09.45 Saudi is arguably on the way to becoming another Dubai. Continuation and success are not inevitable, but it is encouraging
11.00 Fastest-growth country in the G20, with an oil bonanza
11.25 The proposed brand-new city of Neom with The Line, a futuristic environment
13.07 The second conference: the Longevity Summit in Dublin
13.48 A new foundation announced
14.05 Reports updating on progress in longevity research around the world
14.20 A dozen were new and surprising. Four examples…
14.50 1. Bats. A speaker from Dublin discussed why they live so long – 40 years – and what we can learn from that
15.55 2. Parabiosis on steroids. Linking the blood flow of two animals suggests there are aging elements in our blood which can be removed
17.50 3. Using AI to develop drugs. Companies like Exscientia and Insilico. Cortex Discovery is a smaller, perhaps more nimble player
19.40 4. Hevolution, a new longevity fund backed with up to $1bn of Saudi money per year for 20 years
22.05 As Aubrey de Grey has long said, we need engineering as much as research
22.40 Aubrey thinks aging should be tackled by undoing cell damage rather than changing the human metabolism
24.00 Three phases of his career. Methuselah. SENS. New foundation
25.00 Let’s avoid cancer, heart disease and dementias by continually reversing aging damage
26.00 He is always itchy to explore new areas. This led to a power struggle within SENS, which he lost
27.00 What should previous SENS donors do now?
27.15 The rich crypto investors who have provided large amounts to SENS are backing the new foundation
28.30 One of the new foundation’s investment areas will be parabiosis
28.55 Cryonics will be another investment area
29.15 Lobbying legislators will be another
29.50 Robust Mouse Rejuvenation will be the initial priority
30.50 Pets may be the animal models whose rejuvenation breaks humanity’s “trance of death”
31.05 David has been appointed a director of the new foundation
31.50 The other directors
33.05 An exciting future
Audio engineering by Alexander Chace.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
The conference websites: https://globalaisummit.org/ and https://longevitysummitdublin.com/
For more about the podcast hosts, see https://calumchace.com/ and https://dw2blog.com/
This episode continues our discussion with AI researcher Aleksa Gordić from DeepMind on understanding today’s most advanced AI systems.
00.07 This episode builds on Episode 5
01.05 We start with GANs – Generative Adversarial Networks
01.33 Solving the problem of stability, with higher resolution
03.24 GANs are notoriously hard to train. They suffer from mode collapse
03.45 Worse, the model might not learn anything, and the result is pure noise
03.55 DC GANs introduced convolutional layers to stabilise them and enable higher resolution
04.37 The technique of outpainting
05.55 Generating text as well as images, and producing stories
06.14 AI Dungeon
06.28 From GANs to Diffusion models
06.48 DDPM (De-noising diffusion probabilistic models) does for diffusion models what DC GANs did for GANs
07.20 They are more stable, and don’t suffer from mode collapse
07.30 They do have downsides. They are much more computation intensive
08.24 What does the word diffusion mean in this context?
08.40 It’s adopted from physics. It peels noise away from the image
09.17 Isn’t that rewinding entropy?
09.45 One application is making a photo taken in 1830 look like one taken yesterday
09.58 Semantic Segmentation Masks convert bands of flat colour into realistic images of sky, earth, sea, etc
10.35 Bounding boxes generate objects of a specified class from tiny inputs
11.00 The images are not taken from previously seen images on the internet, but invented from scratch
11.40 The model saw a lot of images during training, but during the creation process it does not refer back to them
12.40 Failures are eliminated by amendments, as always with models like this
12.55 Scott Alexander blogged about models producing images with wrong relationships, and how this was fixed within 3 months
13.30 The failure modes get harder to find as the obvious ones are eliminated
13.45 Even with 175 billion parameters, GPT-3 struggled with three-digit arithmetic
15.18 Are you often surprised by what the models do next?
15.50 The research community is like a hive mind, and you never know where the next idea will come from
16.40 Often the next thing comes from a couple of students at a university
16.58 How Ian Goodfellow created the first GAN
17.35 Are the older tribes described by Pedro Domingos (analogisers, evolutionists, Bayesians…) now obsolete?
18.15 We should cultivate different approaches because you never know where they might lead
19.15 Symbolic AI (aka Good Old Fashioned AI, or GOFAI) is still alive and kicking
19.40 AlphaGo combined deep learning and GOFAI
21.00 Doug Lenat is still persevering with Cyc, a purely GOFAI approach
21.30 GOFAI models had no learning element. They can’t go beyond the humans whose expertise they encapsulate
22.25 The now-famous move 37 in AlphaGo’s game two against Lee Sedol in 2016
23.40 Moravec’s paradox. Easy things are hard, and hard things are easy
24.20 The combination of deep learning and symbolic AI has been long urged, and in fact is already happening
24.40 Will models always demand more and more compute?
25.10 The human brain has far more compute power than even our biggest systems today
25.45 Sparse, or MoE (Mixture of Experts) systems are quite efficient
26.00 We need more compute, better algorithms, and more efficiency
26.55 Dedicated AI chips will help a lot with efficiency
26.25 Cerebras claims that GPT-3 could be trained on a single chip
27.50 Models can increasingly be trained for general purposes and then tweaked for particular tasks
28.30 Some of the big new models are open access
29.00 What else should people learn about with regard to advanced AI?
29.20 Neural Radiance Fields (NeRF) models
30.40 Flamingo and Gato
31.15 We have mostly discussed research in these episodes, rather than engineering
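For those who want the maths behind "peeling noise away" (08.40): in a DDPM, a forward process gradually adds Gaussian noise to an image, and a network is trained to reverse that process. This is our summary of the published formulation, not a formula quoted in the episode.

```latex
% Forward (noising) step t, with a small variance schedule beta_t:
q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t \mathbf{I}\right)
% The network learns the reverse (de-noising) step, which is what generates images:
p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\!\left(x_{t-1};\ \mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t)\right)
```

Generation then starts from pure noise and repeatedly applies the learned reverse step until an image emerges.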
Welcome to episode 5 of the London Futurist podcast, with your co-hosts David Wood and Calum Chace.
We’re attempting something rather ambitious in episodes 5 and 6. We try to explain how today’s cutting edge artificial intelligence systems work, using language familiar to lay people, rather than people with maths or computer science degrees.
Understanding how Transformers and Generative Adversarial Networks (GANs) work means getting to grips with concepts like matrix transformations, vectors, and landscapes with 500 dimensions.
This is challenging stuff, but do persevere. These AI systems are already having a profound impact, and that impact will only grow. Even at the level of pure self-interest, it is often said that in the short term, AIs won’t take all the jobs, but people who understand AI will take the best jobs.
We are extremely fortunate to have as our guide for these episodes a brilliant AI researcher at DeepMind, Aleksa Gordić.
Note that Aleksa is speaking in personal capacity and is not representing DeepMind.
Aleksa's YouTube channel is https://www.youtube.com/c/TheAIEpiphany
00.03 An ambitious couple of episodes
01.22 Introducing Aleksa, a double rising star
02.15 Keeping it simple
02.50 Aleksa's current research, and previous work on Microsoft's HoloLens
03.40 Self-taught in AI. Not representing DeepMind
04.20 The narrative of the Big Bang in 2012, when Machine Learning started to work in AI.
05.15 What machine learning is
05.45 AlexNet. Bigger data sets and more powerful computers
06.40 Deep learning a subset of machine learning, and a re-branding of artificial neural networks
07.27 2017 and the arrival of Transformers
07.40 Attention is All You Need
08.16 Before this there were LSTMs, Long Short-Term Memories
08.40 Why Transformers beat LSTMs
09.58 Tokenisation. Splitting text into smaller units and mapping them onto vectors in a high-dimensional space
10.30 3D space is defined by three numbers
10.55 Humans cannot envisage multi-dimensional spaces with hundreds of dimensions, but it's OK to imagine them as 3D spaces
11.55 Some dimensions of the word "princess"
12.30 Black boxes
13.05 People are trying to understand how machines handle the dimensions
13.50 "Man is to king as woman is to queen." Using mathematical operators on this kind of relationship
14.35 Not everything is explainable
14.45 Machines discover the relationships themselves
15.15 Supervised and self-supervised learning. Rewarding or penalising the machine for predicting labels
16.25 Vectors are best viewed as arrows in 3D space, although that is over-simplifying
17.20 For instance the relationship between "queen" and "woman" is a vector
17.50 Self-supervised systems do their own labelling
18.30 The labels and relationships have probability distributions
19.20 For instance, a princess is far more likely to wear a slipper than a dog
19.35 Large numbers of parameters
19.40 BERT, the original Transformer, had a hundred million or so parameters
20.04 Now it's in the hundreds of billions, or even trillions
20.24 A parameter is analogous to a synapse in the human brain
21.19 Synapses can have different weights
22.10 The more parameters, the lower the loss
22.35 Not just text, but images too, because images can also be represented as tokens
23.00 In late 2020 Google released the first vision Transformer
23.29 Dall-E and Midjourney are diffusion models, which have replaced GANs
24.15 What are GANs, or Generative Adversarial Networks?
24.45 Two types of model: Generators and Discriminators. The first tries to fool the second
26.20 Simple text can produce photorealistic images
27.10 Aleksa's YouTube videos are available at "The AI Epiphany"
27.40 Close
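Here is a toy sketch of the "man is to king as woman is to queen" arithmetic mentioned at 13.50. The four-dimensional vectors below are invented purely for illustration; real embeddings are learned from data and have hundreds of dimensions.

```python
import numpy as np

# Hypothetical 4-dimensional word embeddings, hand-written for illustration only.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1, 0.3]),
    "queen": np.array([0.9, 0.1, 0.8, 0.3]),
    "man":   np.array([0.1, 0.9, 0.1, 0.2]),
    "woman": np.array([0.1, 0.1, 0.9, 0.2]),
    "apple": np.array([0.0, 0.1, 0.1, 0.9]),
}

def nearest(vector, exclude):
    """Return the vocabulary word whose embedding is closest by cosine similarity."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    candidates = {w: v for w, v in embeddings.items() if w not in exclude}
    return max(candidates, key=lambda w: cosine(vector, candidates[w]))

# "man is to king as woman is to ?" becomes vector arithmetic:
result = embeddings["king"] - embeddings["man"] + embeddings["woman"]
print(nearest(result, exclude={"king", "man", "woman"}))  # -> "queen"
```

Real embedding models such as word2vec and GloVe learn these vectors from co-occurrence statistics; the arithmetic works because the learned dimensions end up encoding relationships like gender and royalty.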
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
In this episode, co-hosts Calum Chace and David Wood explore a number of recent developments in AI - developments that are rapidly changing what counts as "state of the art" in AI.
00.05: Short recap of previous episodes
00.20: A couple of Geoff Hinton stories
02.27: Today's subject: the state of AI today
02.53: Search
03.35: Games
03.58: Translation
04.33: Maps
05.33: Making the world understandable. Increasingly
07.00: Transformers. Attention is all you need
08.00: Masked language models
08.18: GPT-2 and GPT-3
08.54: Parameters and synapses
10.15: Foundation models produce much of the content on the internet
10.40: Data is even more important than size
11.45: Brittleness and transfer learning
13.15: Do machines understand?
14.05: Human understanding and stochastic parrots
15.27: Chatbots
16.22: Tay embarrasses Microsoft
16.53: Blenderbot
17.19: Far from AGI. LaMDA and Blake Lemoine
18.26: The value of anthropomorphising
19.53: Automation
20.25: Robotic Process Automation (RPA)
20.55: Drug discovery
21.45: New antibiotics. Discovering Halicin
23.50: AI drug discovery as practiced by Insilico, Exscientia and others
25.33: Eroom's Law
26.34: AlphaFold. How 200m proteins fold
28.30: Towards a complete model of the cell
29.19: Analysis
30.04: Air traffic controllers use only 10% of the data available to them
30.36: Transfer learning can mitigate the escalating demand for compute power
31.18: Next up: the short-term future of AI
Audio engineering by Alexander Chace.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
For more about the podcast hosts, see https://calumchace.com/ and https://dw2blog.com/
In this episode, co-hosts Calum Chace and David Wood continue their review of progress in AI, taking up the story at the 2012 "Big Bang".
00.05: Introduction: exponential impact, big bangs, jolts, and jerks
00.45: What enabled the Big Bang
01.25: Moore's Law
02.05: Moore's Law has always evolved since its inception in 1965
03.08: Intel's tick tock becomes tic tac toe
03.49: GPUs - Graphic Processing Units
04.29: TPUs - Tensor Processing Units
04.46: Moore's Law is not dead or dying
05.10: 3D chips
05.32: Memristors
05.54: Neuromorphic chips
06.48: Quantum computing
08.18: The astonishing effect of exponential growth (see the arithmetic after this list)
09.08: We have seen this effect in computing already. The cost of an iPhone in the 1950s.
09.42: Exponential growth can't continue forever, but Moore's Law hasn't reached any theoretical limits
10.33: Reasons why Moore's Law might end: too small, too expensive, not worthwhile
11.20: Counter-arguments
12.01: "Plenty more room at the bottom"
12.56: Software and algorithms can help keep Moore's Law going
14.15: Using AI to improve chip design
14.40: Data is critical
15.00: ImageNet, Fei-Fei Li, Amazon Mechanical Turk
16.10: AIs labelling data
16.35: The Big Bang
17.00: Jürgen Schmidhuber challenges the narrative
17.41: The Big Bang enabled AI to make money
18.24: 2015 and the Great Robot Freak-Out
18.43: Progress in many domains, especially natural language processing
19.44: Machine Learning and Deep Learning
20.25: Boiling the ocean vs the scientific method's hypothesis-driven approach
21.15: Deep Learning: levels
21.57: How Deep Learning systems recognise faces
22.48: Supervised, Unsupervised, and Reinforcement Learning
24.00: Variants, including Deep Reinforcement Learning and Self-Supervised Learning
24.30: Yann LeCun's camera metaphor for Deep Learning
26.05: Lack of transparency is a concern
27.45: Explainable AI. Is it achievable?
29.00: Other AI problems
29.17: Has another Big Bang taken place? Large Language Models like GPT-3
30.08: Few-shot learning and transfer learning
30.40: Escaping Uncanny Valley
31.50: Gato and partially general AI
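To make the point at 08.18 concrete (our arithmetic, not a figure quoted in the episode): if a capability doubles every two years, then fifty years of sustained doubling gives

```latex
% Doubling every 2 years, sustained for 50 years:
2^{50/2} = 2^{25} \approx 3.4 \times 10^{7}
```

That is roughly a 33-million-fold improvement, which is why exponential curves confound everyday intuition.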
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
For more about the podcast hosts, see https://calumchace.com/ and https://dw2blog.com/
AI is a subject that we will all benefit from understanding better. In this episode, co-hosts Calum Chace and David Wood review progress in AI from the Greeks to the 2012 "Big Bang".
00.05: A prediction
01.09: AI is likely to cause two singularities in this pivotal century - a jobless economy, and superintelligence
02.22: Counterpoint: it may require AGI to displace most people from the workforce. So only one singularity?
03.27: Jobs are nowhere near all that matters in humans
04.11: Are the "Three Cs jobs" safe? Those involving Creativity, Compassion, and Commonsense? Probably not.
05.15: 2012, the Big Bang in AI
05.48: AI now makes money. Google and Facebook ate Rupert Murdoch's lunch
06.30: AI might make the difference between military success and military failure. So there's a geopolitical race as well as a commercial race
07.18: Defining AI.
09.03: Intelligence vs Consciousness
10.15: Does the Turing Test test for Intelligence or Consciousness?
12.30: Can customer service agents pass the Turing Test?
13.07: Attributing consciousness by brain architecture or by behaviour
15.13: Creativity. Move 37 in game two of AlphaGo vs Lee Sedol, and Hassabis' three buckets of creativity
17.13: Music and art produced by AI as examples
19.05: History: Start with the Greeks, Hephaestus (Vulcan to the Romans) built automata, and Aristotle speculated about technological unemployment
19.58: AI has featured in science fiction from the beginning, eg Mary Shelley's Frankenstein, Samuel Butler's Erewhon, E.M. Forster's "The Machine Stops"
20.55: Post-WW2 developments. Conference in Paris in 1951 on "Computing machines and human thought". Norbert Wiener and cybernetics
22.48: The Dartmouth Conference
23.55: Perceptrons - very simple models of the human brain (see the code sketch after this list)
25.13: Perceptrons debunked by Minsky and Papert, so Symbolic AI takes over
25.49: This debunking was a mistake. More data and better hardware overcomes the hurdles
27.20: Two AI winters, when research funding dries up
28.07: David was taught maths at Cambridge by James Lighthill, author of the report which helped cause the first AI winter
28.58: The Japanese 5th generation computing project under-delivered in the 1980s. But it prompted an AI revival, and its ambitions have been realised by more recent advances
30.45: No more AI winters?
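The perceptron mentioned at 23.55 is simple enough to show in a few lines of code. This is our illustrative sketch (learning the logical AND function), not anything presented in the episode.

```python
import numpy as np

# Training data for the logical AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)   # one weight per input
b = 0.0           # bias term
lr = 0.1          # learning rate

for _ in range(20):                      # a few passes over the data suffice here
    for xi, target in zip(X, y):
        pred = 1 if np.dot(w, xi) + b > 0 else 0
        w += lr * (target - pred) * xi   # classic perceptron update rule
        b += lr * (target - pred)

print([1 if np.dot(w, xi) + b > 0 else 0 for xi in X])  # -> [0, 0, 0, 1]
```

Minsky and Papert's critique (25.13) was that a single layer like this cannot learn functions such as XOR; stacking layers and training them with far more data and compute is what eventually overcame that limit.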
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
For more about the podcast hosts, see https://calumchace.com/ and https://dw2blog.com/
Co-hosts David Wood and Calum Chace share their vision and plans for the London Futurists podcast.
00.20: Why we are launching this podcast. Anticipating and managing exponential impact
02.45: It’s not the Fourth Industrial Revolution – it’s the Information Revolution
04.58: AI’s impact. Smartphones as an example of technology’s power
09.04: The obviousness of change in hindsight. Why technology implementation is often slow
11.30: Technology implementation is often delayed by poor planning
15.20: We were promised jetpacks. Instead, we got omniscience
17.14: Technological development is not deterministic, and it contains dangers
19.08: Technologies are always double-edged swords. They might be somewhat deterministic
22.03: Better hindsight enables better foresight
23.06: Introducing ourselves
23.13: David bio
24.53: Calum bio
26.44: Fiction and non-fiction. We need more positive stories
27.37: Topics for future episodes
28.03: There are connections between all these topics
28.42: Excited by technology, but realistic
29.24: Securing a great future
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
For more about the podcast hosts, see https://calumchace.com/ and https://dw2blog.com/