Coffee table conversations with people thinking about foundational issues. Multiverses explores the limits of knowledge and technology. Does quantum mechanics tell us that our world is one of many? Will AI make us intellectually lazy, or expand our cognitive range? Is time a thing in itself or a measure of change? Join James Robinson as he tries to find out.
The podcast MULTIVERSES is created by James Robinson. The podcast and the artwork are embedded on this page using the public podcast feed (RSS).
Scientific discoveries can often be codified in simple laws, neatly stated in textbooks with directions on applying them. But the enterprise of science is embedded in society. It depends on individuals and economies. It is far from simple to answer the question: How did we get these laws?
Patricia Fara is an Emeritus Fellow of Clare College, Cambridge. She is a former president of the British Society for the History of Science and has written Science: A Four Thousand Year History, Newton: The Making of Genius, and numerous other books.
Patricia discusses the way we often mythologize individual scientists and how the notion of genius has changed over the centuries. She also highlights lesser-known figures such as Hertha Ayrton, whose contributions should be measured not merely in scientific breakthroughs but in how they paved the way for the women scientists who followed.
AI can do many things as well as humans, such as writing plausible prose or answering exam questions. In certain domains, AI goes far beyond human capabilities — playing chess for instance.
We might expect that nothing prevents machines from one day besting humans at every task. Indeed, it is often asserted that, in principle, everything (and more) within the range of human cognition will one day fall within the ken of AI.
But what if there are concepts and ways of thinking that are off-limits to any machine, yet not to humans? Selmer Bringsjord, Professor of Cognitive Science at RPI, joins us this week and argues we need to rethink human thought.
Selmer argues that humans have been able to grasp problems that machines cannot — humans are capable of hypercomputation. Hypercomputation is computation above the Turing limit; as such, it can solve problems beyond the power of any machine we can currently conceive.
In particular, Turing computation cannot encompass infinitary logic, yet humans have been able to reason effectively about the infinite. Similarly, Gödel's theorem points to a class of riddles that machines cannot solve, yet that human genius has identified.
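The "Turing limit" here is the boundary drawn by the halting problem. As a rough illustration (the episode itself contains no code, and `halts` and `diagonal` are hypothetical names invented for this sketch), the classic diagonal argument can be written out in Python:

```python
# Sketch of why no program can decide halting for all programs.
# `halts` is a hypothetical oracle; the argument shows it cannot exist.

def halts(program, argument) -> bool:
    """Hypothetical oracle: True iff program(argument) would halt."""
    raise NotImplementedError("no total, correct halting decider can exist")

def diagonal(program):
    """Do the opposite of whatever the oracle predicts about self-application."""
    if halts(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    return "halted"   # predicted to loop, so halt

# diagonal(diagonal) halts exactly when halts(diagonal, diagonal) says it
# does not: a contradiction, so no such oracle exists. Hypercomputation,
# in Selmer's sense, means computation beyond this limit.
```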
This is a huge claim: accepting Selmer's arguments entails accepting that human minds work in a way that evades our understanding, their mechanisms obeying mechanics of which we are wholly ignorant.
Whether or not you agree with Selmer’s conclusions, this is a brilliant exploration of the boundaries of thought.
There is no consensus on what minds are, but there is plenty of agreement on where they can be found: in humans. Yet human consciousness may account for only a small proportion of the consciousness on our planet.
Our guest, Kristin Andrews, is a Professor of Animal Minds at York University in Toronto, Canada. She is a philosopher working in close contact with biologists and cognitive scientists and has spent time living in the jungle to observe research on orangutans.
Kristin notes that comparative psychology has historically resisted attributing such things as intentions, learning, consciousness, and minds to animals. Yet she argues that this is misguided in the light of the evidence: often the best way to make sense of the complexity of animal behavior is to invoke minds and intentional concepts.
Recently, Kristin has proposed that the default assumption — the null hypothesis — should be that animals have minds. Currently, biologists examine markers of consciousness on a species-by-species basis, for example looking for the presence of pain receptors and for preferential tradeoffs in behavior. But everywhere we have looked, even in tiny nematode worms, we find multiple markers present. Kristin reasons that switching the focus from asking "where are the minds?" to "what sorts of minds are there?" would prove more fruitful.
The question of consciousness and AI is at the forefront of popular discourse, but to make progress on a scientific theory of mind we should draw on the richer data of the natural world.
Things happen. Or they don’t. How then should we make sense of claims that something might happen?
If all these claims do is express doubt, then the puzzle can be easily resolved. But if the claims capture some objective feature of the world, what is it?
Our guest is Alastair Wilson, a professor of philosophy at the University of Leeds. He takes chance seriously; in particular, he is a realist about modal claims, holding that claims like "either candidate could win" or "if Szilard hadn't got Spanish flu, the atom bomb would not have been invented" may be true or false, not just opinions or expressions of ignorance.
Alastair does this by connecting our modal talk to Everettian quantum mechanics. He argues that modal claims are assertions about the many worlds within the universal wavefunction. If in all worlds where Szilard did not succumb to Spanish flu, the atom bomb was never invented, then this claim would be true.
It is a bold and fascinating way of bringing physics and metaphysics together. What can happen, what is possible, what could have been? These become questions for natural science.
The launch of ChatGPT was a "Sputnik moment". In making tangible decades of progress it shot AI to the fore of public consciousness. This attention is accelerating AI development as dollars are poured into scaling models.
What is the next stage in this journey? And where is the destination?
My guest this week, Nell Watson, offers a broad perspective on the possible trajectories. She sits on several IEEE groups looking at AI ethics, safety, and transparency, has founded AI companies, and is a consultant to Apple on philosophical matters. Nell makes a compelling case that we can expect to see agentic AI widely adopted soon. We might even see whole AI corporations. In the context of these possible developments, she reasons that the concerns of AI ethics and AI safety — so often siloed within different communities — should be understood as continuous.
Along the way we talk about the perils of hamburgers and the good things that could come from networking our minds.
Physics helps get stuff done. Its application has put rockets in space, semiconductors in phones, and eclipses on calendars.
For some philosophers, this is all physics offers. It is a mere instrument, albeit one of great power, giving us control over tangible things. It is a set of gears and widgets (wavefunctions, strings, even electrons) to crank out predictions. In contrast to instrumentalists, scientific realists argue that the success of theories shows that they map onto the structure of the world: the symbols in equations carry the imprint of real entities.
This is an old debate in the philosophy of science. While we touch on some arguments for either position, this episode focuses on the phenomenology of physics researchers. What do physicists believe?
Céline Henne is a philosopher at the University of Bologna. Alongside Hannah Tomczyk and Christopher Sperber she has fielded the most comprehensive survey of the attitudes of physicists towards the reality of the objects of their study. From looking at the answers to dozens of questions from several hundred physicists, they have distinguished several camps of belief.
It's an elegantly designed survey; simply reading the questions forces a consideration of one's own position.
It can be tempting to consider language and thought as inextricably linked. As such, we might conclude that LLMs' human-like capabilities for manipulating language indicate a corresponding level of thinking.
However, neuroscience research suggests that thought and language can be teased apart; perhaps language is more akin to an input-output interface, or an area of triage for problem-solving. Language is a medium into which we can translate and transport concepts.
Our guest this week is Anna Ivanova, Assistant Professor of Psychology at Georgia Institute of Technology. She's conducted experiments that demonstrate how subjects with severe aphasia (large-scale damage to the language area of their brains) remain able to reason socially. She's also studied how the brains of developers work when reading code. Again the language network is largely bypassed.
Anna's work and other research in cognitive science suggest that the modularity of brains is central to their ability to handle diverse tasks.
Brains are not monolithic neural nets like LLMs but contain networked specialized regions.
Words. (Huh? Yeah!) What are they good for? Absolutely everything.
At least this was the view of some philosophers early in the 20th century: that the world was bounded by language. ("The limits of my language mean the limits of my world", to use Wittgenstein's formulation over the Edwin Starr adaptation.)
My guest this week is Nikhil Krishnan, a philosopher at the University of Cambridge and a frequent contributor to The New Yorker. His book A Terribly Serious Adventure traces the path of Ordinary Language Philosophy through the 20th century.
We discuss the logical positivists (the word/world limiters), their high optimism that the intractable problems of philosophy could be dissolved by analysis, and their contention that the great questions of metaphysics were nonsense, having no empirical or logical content.
That program failed, but its spirit of using data and aiming for progress lived on in the ordinary language philosophers, who put practices with words under the microscope, hoping to find in this data clues to the nuances of the world.
This enterprise left us with beautiful examples of the subtleties of language. But more importantly, it is a practice that continues today, of paying close attention to our everyday behaviors and holding our grand systems of philosophy accountable to these.
Listen to discover things you know, but didn't know you knew — like the difference between doing something by accident vs by mistake.
Do check out Nikhil's own podcast, Minor Books, on iTunes or Acast
(00:00) Intro
(02:49) Start of conversation: Philosophical background and history
(04:47) The Evolution of Philosophy: From Ancient Texts to Modern Debates
(16:46) The Impact of Logical Positivism and the Quest for Scientific Philosophy
(38:35) J.L. Austin's Revolutionary Approach to Philosophy and Language
(48:43) The Power of Everyday Language vs the Abstractions of Philosophy
(49:11) Why is ordinary language so effective — Language Evolution?
(52:30) Philosophical Perspectives on Language's Utility
(53:28) The Intricacies of Language and Perception
(54:48) Scientific and Philosophical Language: A Comparative Analysis
(57:14) Legal Language and Its Precision
(01:07:33) LLMs: The Future of Language in Technology and AI
(01:10:33) Intentionality and the Philosophy of Actions
(01:18:27) Bridging Analytic and Continental Philosophy
(01:33:46) Final Thoughts on Philosophy and Its Practice
Music may be magical. But it is also rooted in the material world. As such it can be the subject of empirical inquiry.
How does what we are told of a performer influence our appreciation of the performance? Does sunshine change our listening habits? How do rhythms and melodies change as they are passed along, as in a game of Chinese whispers?
Our guest is Manuel Anglada Tort, a lecturer at Goldsmiths, University of London. He has investigated all those topics. We discuss the fields of Empirical Aesthetics and cultural evolution experiments as applied to music.
Chapters
(00:00) Intro
(03:35) Start of conversation: Music Psychology and Empirical Aesthetics
(07:54) Genomics and Musical Ability
(18:25) Weather's Influence on Music Preferences
(31:57) The Repeated Recording Illusion
(43:24) Empirical Aesthetics: Does Analysis Boost or Deflate Wonder?
(49:59) Music Evolution and Cultural Systems
(52:18) Simulating Music Evolution in the Lab
(1:01:27) The Role of Memory and Cognitive Biases in Music
(1:05:33) Comparing Language and Music Evolution
(1:20:37) The Impact of Physical and Cognitive Constraints on Music
(1:31:37) Audio Appendix
If all my beliefs are correct, could I still be prejudiced?
Philosophers have spent a lot of time thinking about knowledge. But their efforts have focussed on only certain questions. What makes it such that a person knows something? What styles of inquiry deliver knowledge?
Jessie Munton is a philosopher at the University of Cambridge. She is one of several people broadening the scope of epistemology to ask: what sort of things do we (and should we) inquire about and how should we arrange our beliefs once we have them?
Her lens on this is in terms of salience structures. These describe the features and beliefs that an individual is likely to pay attention to in a situation. They are networks that depend on the physical, social, and mental worlds.
In a supermarket aisle, what is salient to me depends both on how the products are arranged and on my food preferences. Very central nodes in my salience structure (for example, this podcast) might be awkwardly linked to many things (multigrain rice ... multiverses).
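Since salience structures are described as networks, the idea can be sketched as a tiny weighted graph. This is a toy illustration only; the nodes and weights are invented (riffing on the episode's multigrain rice joke), not anything from Jessie's work:

```python
# Toy salience structure: nodes are concepts, weighted edges are
# associative links. A node's total link weight crudely approximates
# how readily it comes to mind. All values are invented.

salience = {
    "multiverses": {"multigrain rice": 0.2, "quantum mechanics": 0.9, "podcast": 0.8},
    "multigrain rice": {"multiverses": 0.2, "supermarket": 0.7},
    "quantum mechanics": {"multiverses": 0.9},
    "podcast": {"multiverses": 0.8},
    "supermarket": {"multigrain rice": 0.7},
}

def centrality(graph, node):
    """Sum of a node's link weights: a crude salience measure."""
    return sum(graph[node].values())

most_salient = max(salience, key=lambda n: centrality(salience, n))
print(most_salient)  # "multiverses": highest total link weight (1.9)
```

Network measures like this are one way the "Applying Network Science" discussion in the chapters below could be made concrete.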
This is a rare and wonderful thing. Philosophy that is at once interesting and useful.
Chapters
(04:20) Welcome and Introduction to the Discussion
(04:53) Exploring the Essence of Epistemology
(06:31) Expanding the Boundaries of Traditional Epistemology
(10:50) Understanding vs. Knowledge: Diving Deeper into Epistemology
(12:42) The Role of Evidence and Justification in Beliefs
(23:59) Salience Structures: A New Perspective on Information Processing
(34:22) Applying Network Science to Understand Salience Structures
(43:41) Exploring Social Salience Structures and the Impact of Cities
(48:15) Exploring the Complexity of Attention and Salience
(48:30) The Challenge of Modeling Attention Mathematically
(48:57) Linking Attention to Real-world Outcomes
(50:01) Differentiating Causes of Attention and Their Impacts
(50:53) The Role of Individual and Social Responsibility in Shaping Attention
(52:19) Influence of Media and Technology on Salience Structures
(55:44) The Potential of Augmented Reality and Large Language Models
(00:47) The Personalization Dilemma of Search Engines and Social Media
(01:05:38) Exploring the Ethical and Practical Implications of Information Access
(01:22:53) Concluding Thoughts on Salience and Information Consumption
Why do whales live longer than hummingbirds? What makes megacities more energy efficient than towns? Is the rate of technological innovation sustainable?
Though apparently disparate, the answers to these questions can be found in the work of theoretical physicist Geoffrey West. Geoffrey is the Shannan Distinguished Professor at the Santa Fe Institute, where he was formerly president.
By looking at the network structure of organisms, cities, and companies, Geoffrey was able to explain mathematically the peculiar ways in which many features scale. For example, the California sea lion weighs twice as much as an emperor penguin but consumes only about 75% more energy. This sub-linear scaling is remarkably regular, following the same pattern across many species and an epic range of sizes: an example of a scaling law.
The heart of the explanation is this: optimal space-filling networks are fractal-like in nature and scale as if they acquire an extra dimension. A 3D fractal network scales as if it is 4D.
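The sub-linear pattern above is Kleiber's law: metabolic rate scales roughly as mass to the 3/4 power, the exponent that the fractal-network argument explains. A quick sketch of the arithmetic (the exact factor for a doubling is 2^(3/4) ≈ 1.68, i.e. roughly 70-75% more energy rather than 100% more):

```python
# Kleiber's law: metabolic rate B scales with body mass M roughly as
# B ∝ M ** (3/4). Doubling mass therefore multiplies energy demand by
# 2 ** 0.75, not by 2: the sub-linear sea lion / penguin pattern.

def metabolic_ratio(mass_ratio: float, exponent: float = 0.75) -> float:
    """Factor by which metabolic rate grows when mass grows by mass_ratio."""
    return mass_ratio ** exponent

doubling = metabolic_ratio(2.0)
print(f"2x the mass needs {doubling:.2f}x the energy")  # 1.68x
```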
Chapters
(00:00) Introduction
(02:56) Start of conversation: Geoffrey's Career Journey
(03:25) Transition from High Energy Physics to Biology
(09:05) Exploring the Origin of Aging and Death
(11:20) Discovering Scaling Laws in Biology
(12:30) Understanding the Metabolic Rate and its Scaling
(25:40) The Impact of the Molecular Revolution on Biology
(28:39) The Role of Networks in Biological Systems
(49:07) The Connection between Fractals and Biological Systems
(01:00:29) Understanding the Growth and Supply of Cells
(01:01:07) The Impact of Size on Energy Consumption
(01:01:46) The Role of Networks in Growth and Supply
(01:02:30) The Universality of Growth in Organisms
(01:03:13) Exploring the Dynamics of Cities
(01:06:12) The Scaling of Infrastructure and Socioeconomic Factors in Cities
(01:07:36) The Implications of Superlinear Scaling in Cities
(01:11:50) The Future of Cities and the Need for Innovation
It's easy to recognize the potential of incremental advances — more efficient cars or faster computer chips for instance. But when a genuinely new technology emerges, often even its creators are unaware of how it will reshape our lives. So it is with AI, and this is where I start my discussion with Peter Nixey.
Peter is a serial entrepreneur, angel investor, developer, and startup advisor. He reasons that large language models are poised to bring enormous benefits, particularly in enabling far faster & cheaper development of software. But he also argues that their success in this field will undermine online communities of knowledge sharing — sites like StackOverflow — as users turn away from them and to LLMs. Effectively ChatGPT will kick away one of the ladders on which its power is built.
This migration away from common forums to sealed and proprietary AI models could mark a paradigm shift in the patterns of knowledge sharing that extends far beyond the domain of programming.
We also talk about the far future and whether conflict with AI can be avoided.
Chapters
(00:00) Introduction
(02:44) Start of Conversation
(03:20) The Lag Period in Technology Adoption
(06:48) The Impact of the Internet on Productivity
(11:30) The Curious UX of AI
(19:25) The Future of AI in Coding
(29:06) The Implications of AI on Information Sharing
(41:27) AI and Socratic learning
(46:57) The Evolution of Textbooks and Learning Materials
(49:01) The Future of AI in Software Development
(51:11) The Existential Questions Surrounding AI
(01:05:16) Evolutionary Success as a lens on AI
(01:13:29) The Potential Conflict Between Humans and AI
(01:14:24) An (almost) Optimistic Outlook on AI and Humanity
Are philosophy and science entirely different paradigms for thinking about the world? Or should we think of them as continuous: overlapping in their concerns and complementary in their tools?
David Papineau is a professor at King's College London and the author of over a dozen books. He's thought about many topics — consciousness, causation, the arrow of time, the interpretation of quantum mechanics — and in all of these he advocates engagement with science. The philosopher should take their cue from our best theories of nature. For example, a philosophical account of causation must pay attention to the way this concept is used in the sciences.
But the philosopher can also be a servant of science. Philosophers are undaunted, excited even, by apparent paradoxes and where such thorny problems pop up in science this is where philosophical tools can be brought to bear. For instance, when quantum mechanics appears to suggest cats are alive and dead, the philosopher's interest is piqued (even as the physicist's attention may wane).
Chapters
(00:00) Intro
(02:41) Start of conversation
(02:46) Unraveling the Mystery of Scientific Methods
(03:45) The Shift in Philosophy of Science
(04:03) The Role of Truth in Scientific Investigation
(05:34) The Evolution of Scientific Methodologies
(06:32) The Arrogance of Philosophy in Science
(08:58) The Progress of Science and its Challenges
(10:21) The Role of Data in Scientific Disputes
(11:26) The Struggle of Early Modern Science
(14:52) The Continuity of Philosophy and Science
(15:28) The Role of Philosophy in Resolving Theoretical Contradictions
(18:08) The Replication Crisis in Science
(32:15) The Asymmetry of Time & Thermodynamics
(42:45) The Everlasting Role of Philosophy in Science?
(42:53) Philosophy and Its Puzzling Subjects
(43:55) Artificial Intelligence & Philosophy
(44:39) The Turing Test and AI
(45:18) The Consciousness of AI
(46:11) The Mystery of Consciousness
(46:51) Is there a fact of the matter to consciousness?
(48:59) The Consciousness of Machines
(50:13) Different takes on consciousness
(51:43) The Consciousness of Artificial Intelligence
(53:23) Consciousness & Emergence
(53:59) The Moral Standing of AI
(01:05:23) The Future of Causation Studies
Why do men do less housework? What happens when an apology is offered? What are we looking for when we ask for advice?
These are the sorts of problems drawn from everyday experience that Paulina Sliwa intends to resolve and in doing so make sense of the ways we negotiate blame and responsibility.
Paulina is a Professor of Moral & Political Philosophy at the University of Vienna. She looks carefully at evidence accessible to us all — daily conversations, testimony from shows like This American Life, and our own perceptions — and uses these to unravel our moral practices. The results are sometimes surprising yet always grounded. For example, Paulina argues that remorse is not an essential feature of an apology, nor is accepting that behavior was unjustified.
This is illuminating for its insights into moral problems, but I also really enjoyed seeing how Paulina thinks; it's a wonderful example of philosophical tools at work.
Milestones
(0:00) Intro
(3:00) Start of conversation: grand systems vs ordinary practices of morality
(5:30) Philosophy and evidence
(6:39) Apologies
(8:40) Anne of Green Gables: an overblown apology
(10:50) Remorse is not an essential feature of apologies
(12:00) Apologies involve accepting some blame
(15:30) Why apology is not saying I won’t do it again
(17:17) Essential vs non-essential features of apologies
(18:12) Apologies occur in many different shapes, is a unified account possible?
(20:00) Moral footprints
(24:10) Apologies and politeness
(26:20) Tiny apologies as a commitment to moral norms
(29:50) Moral advice — verdictive vs hermeneutic (making sense)
(33:30) Moral advice doesn’t need to get us to the right answer but it should get us closer
(36:30) Perspectives, affordances and options
(38:40) Perspectives vs facts
(46:45) Housework: Gendered Domestic Affordance Perception
(49:40) Evidence that affordances are directly perceived (and not inferred)
(52:00) Convolutional neural networks as a model of perception
(53:00) Environmental dependency syndrome
(54:30) Perceptions are not fixed
(59:30) Perception is not a transparent window on reality
(1:01:00) Tools of a philosopher
(1:03:20) A Terribly Serious Adventure - Philosophy at Oxford 1900-60 — Nikhil Krishnan
(1:04:50) Philosophy as continuous with science
(1:06:17) Philosophy is not a neutral enterprise:
(1:09:00) Santa: Read letters!
(1:10:10) Apologise less
Life. What is it? How did it start? Is it unique to Earth, rare or abundantly distributed throughout the universe?
While biology has made great strides in the last two hundred years, these foundational questions remain almost as mysterious as ever. However, in the last three decades, astrobiology has emerged as an academic discipline focused on their resolution. Already we have seen progress, if not aliens. The success of the space telescope Kepler in discovering exoplanets may come to mind. Equally important is the work to understand how we can demarcate biological from abiotic patterns — when we can be sure something is a genuine biosignature (evidence of life) and not a biomorph (looks like life, but is the product of other processes).
Our guest this week is Sean McMahon, a co-director of the UK Centre for Astrobiology. Sean takes us through the field in general and gives particularly thoughtful insights into these epistemological problems. He also cautions that we may need a certain psychological resilience in this quest: it may require generations of painstaking work to arrive at firm answers.
Corrections
In the intro, I say Enceladus is a moon of Jupiter. Nope, it's one of Saturn's moons.
Milestones
(00:00) Intro
(3:22) Start of discussion: astrobiology as where biology meets the physical sciences
(6:00) What is life?
(9:30) Life is a self-sustaining chemical system capable of Darwinian evolution — NASA 94
(10:44) Life is emergent, therefore hard to define
(12:00) Assembly theory — beer, the pinnacle of life?
(14:22) Schrödinger & DNA
(15:45) Von Neumann machine behavior as defining life
(17:00) All life on Earth we know comes from one source
(22:55) How did life emerge on Earth
(26:40) The most important meal in history — emergence of eukaryotes
(28:20) The difficulty of delineating life from non-life
(33:30) How spray paint looks like life
(35:30) ALH84001
(39:00) How false positives invigorated exobiology
(44:05) The abiotic baseline
(46:30) Chemical gardens
(49:30) Is natural selection the only way to high complexity?
(54:55) Sci-fi & life as we don’t know it
(58:45) Kepler & exoplanets
(1:00:00) It may take generations
(1:03:40) Sagan’s dictum: Extraordinary claims require extraordinary evidence
(1:08:50) Technosignatures: Gömböc, Obelisk, not Pulsar
(1:12:00) Can we prove the null hypothesis (no life)
Many animals play. But why?
Play has emerged in species as distinct as rats, turtles, and octopuses, although they are separated by hundreds of millions of years of evolution.
While some behaviors — hunting or mating for example — are straightforwardly adaptive, play is more subtle. So how does it help animals survive and procreate? Is it just fun? Or, as Huizinga put it, is it the primeval soil of culture?
Our guest this week is Gordon Burghardt, a professor at The University of Tennessee and the author of the seminal The Genesis of Animal Play: Testing the Limits, in which he introduced criteria for recognizing animal play.
Gordon has spent his career trying to understand the experience of animals. He advocates for frameworks such as critical anthropomorphism and the umwelt so we can judiciously adjust our perspectives. We can play at being other.
This week Multiverses is brought to you by ... the internet.
Milestones
(00:00) Introduction
(2:20) Why study play?
(4:00) Criteria for play
(5:00) Fish don’t smile
(5:50) The five criteria: 1. incompletely functional
(7:40) 2. Fun (endogenous reward)
(8:20) 3. Incomplete
(9:45) 4. Repeated
(10:50) 5. Healthy, stress free
(13:30) Play as a way of dealing with stress (but not too much)
(16:40) Parental care creating a space for play
(17:45) Delayed vs immediate benefits
(20:45) Primary, secondary and tertiary play
(26:00) Role reversal, imitation, self-handicapping: imagining the world otherwise
(31:00) Secondary process: play as a way of maintaining systems
(33:37) Tertiary process: play as a way of going beyond
(34:45) Komodo dragons with buckets on their heads
(39:22) Critical anthropomorphism
(42:40) Umwelt — Jakob von Uexküll
(49:18) Anthropomorphism by omission
(53:00) Play evolved independently — it is not homologous
(53:45) Do aliens play?
(1:00:10) Play signals — how to play with dogs and bears
(1:04:00) Inter species play
(1:09:00) Final thoughts
Language is the ultimate Lego. With it, we can take simple elements and construct them into an edifice of meaning. Its power is not only in mapping signs to concepts but in that individual words can be composed into larger structures.
How did this systematicity arise in language?
Simon Kirby is the head of Linguistics and English Language at The University of Edinburgh and one of the founders of the Centre for Language Evolution and Change. Over several decades, he and his collaborators have run many elegant experiments showing that this property of language emerges inexorably as a system of communication is passed from generation to generation.
Experiments with computer simulations, humans, and even baboons demonstrate that as a language is learned, mistakes are made, much like mutations in genes. Crucially, the mistakes that better match the language to the structure of the world (as conceived by the learner) are the ones most likely to be passed on.
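The transmission-chain idea can be sketched in a few lines of code. This is a toy model, not Kirby's actual experimental design: meanings are (shape, color) pairs, each generation sees only part of the lexicon (the "bottleneck"), and the learner's bias to generalize by recombining signal parts is an assumption built in for illustration.

```python
import random

# Toy iterated learning: a language maps meanings to signals; each
# generation learns from a subset and must generalize the rest.
SHAPES = ["square", "circle", "triangle"]
COLORS = ["red", "blue", "green"]
MEANINGS = [(s, c) for s in SHAPES for c in COLORS]

def random_language(rng):
    """Initial holistic language: an arbitrary 4-letter signal per meaning."""
    return {m: "".join(rng.choice("abcd") for _ in range(4)) for m in MEANINGS}

def learn(language, rng, bottleneck=6):
    """Memorize a sample; generalize unseen meanings by reusing the prefix
    seen with the same shape and the suffix seen with the same color."""
    sample = dict(rng.sample(sorted(language.items()), bottleneck))
    learned = {}
    for shape, color in MEANINGS:
        if (shape, color) in sample:
            learned[(shape, color)] = sample[(shape, color)]
        else:
            prefix = next((sig[:2] for (s, _), sig in sample.items() if s == shape), "xx")
            suffix = next((sig[2:] for (_, c), sig in sample.items() if c == color), "yy")
            learned[(shape, color)] = prefix + suffix
    return learned

def compositionality(language):
    """Fraction of same-shape meaning pairs whose signals share a prefix."""
    hits = total = 0
    for i, m1 in enumerate(MEANINGS):
        for m2 in MEANINGS[i + 1:]:
            if m1[0] == m2[0]:
                total += 1
                hits += language[m1][:2] == language[m2][:2]
    return hits / total

rng = random.Random(0)
lang = random_language(rng)
start = compositionality(lang)
for _ in range(20):
    lang = learn(lang, rng)
print(f"compositionality: {start:.2f} then {compositionality(lang):.2f}")
```

In runs of this toy, structure tends to accumulate down the chain because only generalizable (compositional) forms survive the bottleneck, which is the qualitative effect the real experiments demonstrate.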
Outline
(00:00) Introduction
(2:45) What makes language special?
(5:30) Language extends our biological bounds
(7:55) Language makes culture, culture makes language
(9:30) John Searle: world to word and word to world
(13:30) Compositionality: the expressivity of language is based on its Lego-like combinations
(16:30) Could unique genes explain the fact of language compositionality?
(17:20) … Not fully, though they might make our brains able to support compositional language
(18:20) Using simulations to model language learning and search for the emergence of structure
(19:35) Compositionality emerges from the transmission of representations across generations
(20:18) The learners need to make mistakes, but not random mistakes
(21:35) Just like biological evolution, we need variation
(27:00) When, by chance, linguistic features echo the structure of the world these are more memorable
(33:45) Language experiments with humans (Hannah Cornish)
(36:32) Sign language experiments in the lab (Yasamin Motamedi)
(38:45) Spontaneous emergence of sign language in populations
(41:18) Communication is key to making language efficient, while transmission gives structure
(47:10) Without intentional design these processes produce optimized systems
(50:39) We need to perceive similarity in states of the world for linguistic structure to emerge
(57:05) Why isn’t language ubiquitous in nature …
(58:00) … why do only humans have cultural transmissions
(59:56) Over-imitation: Victoria Horner & Andrew Whiten, humans love to copy each other
(1:06:00) Is language a spandrel?
(1:07:10) How much of language is about information transfer? Partner-swapping conversations (Gareth Roberts)
(1:08:49) Language learning = play?
(1:12:25) Iterated learning experiments with baboons (& Tetris!)
(1:17:50) Endogenous rewards for copying
(1:20:30) Art as another angle on the same problems
To stop global warming, it is not enough to stop atmospheric CO2 rising. That is not the meaning of net zero.
Despite net zero being a core concept in the Paris Agreement, it appears to be much misunderstood. The idea of net zero can be traced back to the work of Myles Allen, Professor of Geosystem Science at Oxford and a veteran of several IPCC assessments.
Myles explains the original intent of net zero and what we really need to aim for: zero transfer of carbon between the geosphere (Earth's crust) and everywhere else (oceans, land, atmosphere).
Myles also makes a strong case that, if we want to hit the 2050 goals, we need to invest more heavily in large-scale geological carbon capture and storage (CCS). Many climate activists worry that such a policy would detract from the progress of renewables and give the fossil fuel industry carte blanche to continue emitting. But Myles points out that our reliance on fossil fuels is not falling as quickly as we need, and CCS is technologically viable, economically feasible, and essential to reaching true geological net zero.
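The definition of geological net zero above reduces to a simple balance: carbon leaving the geosphere (fossil extraction) must equal carbon returned to it (geological storage). A minimal sketch, with figures invented purely for illustration:

```python
# Geological net zero: net transfer of carbon out of the Earth's crust
# must be zero. Stabilising atmospheric CO2 alone does not achieve this.

def geological_net_flux(extracted_gt: float, stored_gt: float) -> float:
    """Net carbon leaving the geosphere, in gigatonnes per year (illustrative)."""
    return extracted_gt - stored_gt

print(geological_net_flux(10.0, 2.0))   # 8.0: carbon still leaving the crust
print(geological_net_flux(10.0, 10.0))  # 0.0: geological net zero
```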
(00:00) Intro
(2:29) What is net zero?
(4:12) Net zero is not a stable state but dynamical
(6:20) If we stabilise concentrations of CO2 we would see half as much warming again
(9:10) The meaning of net zero is often confused
(12:20) The danger of carbon accounting double counting
(16:56) The difficulty of establishing additionality
(19:52) Geological net zero is what was originally meant by net zero
(21:30) There are no significant natural sources or sinks of carbon between the biosphere and geosphere
(27:25) COP 28: the fossil fuel industry has got to be part of the solution
(30:50) “It is almost dangerous to claim it’s possible to solve the climate crisis without getting rid of CO2 on a very large scale … injecting it back into the Earth’s crust”
(32:30) Phasing out fossil fuels altogether is effectively letting the industry off the hook
(32:45) To what extent can we trust the fossil fuel industry? The potential dangers of CCS
(35:30) “The cost with today’s technology of recapturing CO2 from the atmosphere and storing under the North Sea … “ is such that the natural gas industry could recapture all emissions and still be profitable at current prices
(40:10) Carbon pricing has failed: people do the cheapest thing first and the costly, slow-to-develop things (e.g. CCS) are not coming fast enough
(42:20) The difficulty of getting a carbon capture flywheel going
(45:05) Intermittent energy supply is not a problem for carbon capture
(45:45) Is biochar a viable alternative to geological carbon capture?
(47:08) Biochar can’t hit the scale we need
(48:55) Extended producer responsibility
(50:10) eFuels (synthetic fuels)
(50:44) Final comments: we have the technology but we need to be realistic, we need to start taking carbon back
Can we trust our emotions as a guide to right and wrong?
This week's guest James Hutton is a philosopher at Delft University of Technology who argues that emotions provide a way of testing our moral beliefs — similar to the way observations are used in the natural sciences as evidence for or against theories.
This is not to say that emotions are infallible, nor that they are not themselves influenced by our moral beliefs, but that they do have a place in our moral inventory. In particular, the destabilizing power they can have — their capability to clash with our beliefs — is an important counterpoint to the entrenchment of poorly justified beliefs.
I found myself revising my own views throughout this discussion. It feels right that emotions play a role in our decision-making. Perhaps that feeling is justified.
Outline
(00:00) Intro
(2:28) Start of conversation: Metaethical frameworks
(4:45) Reason alone cannot provide moral premises
(6:30) Are moral principles self-evident? Or do we have a moral sense?
(11:00) Is emotion antithetical to reason?
(12:00) Emotional senses: Amia Srinivasan’s example of Nour, a case where emotions are trustworthy
(23:00) Antonio Damasio & Descartes’ Error: the importance of emotion as a motivating force
(29:30) … But should it be a motivating force?
(30:30) Tolstoy’s emotional reaction to an execution and how it disrupted his moral theory of progress
(34:50) Emotions can cause us to revise our moral beliefs
(37:25) This does not mean emotion is infallible as a guide to morality
(40:25) The tension between reasoning from principles and emotional reaction creates a useful instability
(42:00) The analogy between science and moral reasoning: sometimes observations (and emotions) should be ignored, but sometimes we should pay attention to them
(46:00) Is it possible to have a no-holds-barred ethics incorporating principles and emotions? (Not really!)
(49:40) Observations and theories are perennially in conflict, sometimes we reject the observation
(50:40) Utilitarianism: elegant but easy to find cases where it clashes with our intuitions
(51:50) Harvesting organs — where the greatest good for the greatest number does not feel right
(53:20) Ethics and Intuition — Peter Singer: we shouldn’t trust our emotions
(54:20) But why trust the utilitarian principle over our intuitions?
(57:45) Situations in which we need to be wary of our emotions: burning a teddy vs releasing tonnes of CO2
(1:03:00) Emotional blind spots: abstract, global, probabilistic, outgroup vs ingroup
(1:08:00) Partiality: should we treat everyone equally, or do we have special obligations to friends and family?
(1:10:28) Heckled by a doorbell
(1:11:50) Partiality is a litmus case for utilitarian principles vs intuition
(1:15:30) Given emotions are fallible how do we make good use of them?
(1:17:30) Unreliable emotions and ethical knowledge: blood sugar, mood &c. cause emotional noise
(1:19:30) How do we deal with noisy information in other areas — the analogy with testimony
(1:23:50) Defeaters — cues that give us pause to double check our emotional responses
(1:25:40) Negative meta-emotions: e.g. shame at being angry
(1:26:25) Should we expand our emotional repertoire?
(1:30:20) Flight shame as an example of a new emotional response
(1:34:25) Should we expect evolution to have created morally fitting emotional responses?
(1:38:15) The problems with evolutionary debunking arguments
(1:46:43) This is work in progress — google James Hutton Delft to get in touch
Could AI's ability to make us fall in love with it be our downfall? Will AI be like cars, machines that encourage us to be sedentary, or will we use it like a cognitive bicycle — extending our intellectual range while still exercising our minds?
These are some of the questions raised by this week's guest Santiago Bilinkis. Santiago is a serial entrepreneur who's written several books about the interaction between humanity and technology. Artificial, his latest book, has just been released in Spanish.
It's startling to reflect on how human intelligence has shaped the Earth. AI's effects may be much greater.
Links:
Outline:
(00:00) Intro
(2:31) Start of conversation — a decade of optimism and pessimism
(4:45) The coming AI tidal wave
(7:45) The right reaction to the AI rollercoaster: we should be excited and afraid
(9:45) Nuclear equilibrium was chosen, but the developer of the next superweapon could prevent others from developing it
(12:35) OpenAI has created a kind of equilibrium by putting AI in many hands
(15:45) The prosaic dangers of AI
(17:05) Hacking the human love system: AI’s greatest threat?
(19:45) Humans falling in love may not only be possible but inevitable
(21:15) The physical manifestations of AI have a strong influence over our view of it
(23:00) AI bodyguards to protect us against AI attacks
(23:55) Awareness of our human biases may save us
(25:00) Our first interactions with sentient AI will be critical
(26:10) A sentient AI may pretend to not be sentient
(27:25) Perhaps we should be polite to ChatGPT (I, for one, welcome our robot overlords)
(29:00) Does AGI have to be conscious?
(32:30) Perhaps sentience in AI can save us? It may make AI reasonable
(34:40) An AGI may have a meaningful link to us in virtue of humanity being its progenitor
(37:30) ChatGPT is like a smart employee but with no intrinsic motivation
(42:20) Will more data and more compute continue to pay dividends?
(47:40) Imitating nature may not necessarily be the best way of building a mind
(49:55) Is my job safe? How will AI change the landscape of work?
(52:00) Authorship and authenticity: how to do things meaningfully, without being the best
(54:50) Imperfection can make things more perfect (but machines might learn this)
(57:00) Bernard Suits’ definition of a game: meaning can be related to the means, not ends.
(58:30) The Cognitive Bicycle: will AI make us cognitively sedentary or will it be a new way of exercising our intellect and extending its range?
(1:01:24) Cognitive prosthetics have displaced some intellectual abilities but nurtured others
(1:06:00) Without our cognitive prosthetics, we’re pretty dumb
(1:12:33) Will AI be a leveller in education?
(1:15:00) The business model of exploiting human weaknesses is powerful. This must not happen with AI
(1:24:25) Using AI to backup the minds of people
The Gömböc is a curious shape. So curious that many mathematicians thought it could not exist. Even to the untrained eye it looks alien: the product of neither human nor natural processes.
This week Gábor Domokos relates his decade-long quest to prove the existence of a (convex, homogenous) shape with only two balance points.
The Gömböc is not just a mathematical curio: its discovery led to a theory of how "things fall apart", of the processes of abrasion that — whether on Earth, Mars, or in deep space — ineluctably reduce the number of balance points of objects.
The Gömböc is the shape all pebbles want to be, but can never reach.
Show notes at multiverses.xyz
(00:00) Intro
(2:40) Start of conversation — what is a Gömböc?
(4:30) The Gömböc is the “ultimate shape”: it has only two balance points
(5:30) The four vertex theorem: why a 2D shape must have 4 balance points
(6:30) (almost) nobody thought a Gömböc existed
(8:30) Vladimir Arnold’s conjecture
(9:00) Hamburg 1995, the beginning of a quest
(10:30) “Mathematics is a part of physics where experiments are cheap”
(11:50) A hungry scholar sits next to a mathematical superstar
(13:00) Ten years of searching
(15:00) Domokos and Varkonyi's gift for Arnold
(15:30) Arnold’s response: “good, but now do something serious”
(16:50) We cannot easily speak about shapes.
(18:00) A system for naming shapes
(21:00) “The evolution of shapes is imprinted in these numbers”
(21:50) Pebbles evolve towards the Gömböc, but never get there
(24:50) How to find the balance points of shapes by hand
(30:00) Physical intuition and empirical exploration can inform mathematics
(30:30) A beach holiday (and a marital bifurcation point)
(34:00) “No, this was not fun, it was a Markov process”
(36:40) Working with NASA to understand the age of martian pebbles
(38:20) An asteroid, or a spaceship?
(43:00) The mechanisms of abrasion
(45:50) The isoperimetric ratio does not evolve monotonically …
(47:50) … but the drift to fewer balance points is monotonic
(49:00) The process of abrasion is a process of simplifying
(50:00) We can name the shape of Oumuamua because it is so simple
(51:00) Relationship between the Gömböc and (one way of thinking about) entropy
(55:00) Abrasion and the heat equation — curvature is “like” heat and gets smoothed out
(58:00) The soap bar model — why pointy bits become smooth
(1:00:00) Richard Hamilton, the Poincaré conjecture and pebbles
(1:04:00) The connection between the Ricci flow and pebble evolution
(1:09:00) Turning the lights on in a darkened labyrinth
(1:12:00) The importance of geometric objects in physics (string theory)
(1:13:30) Another way of naming natural shapes: the average number of faces and vertices
(1:15:00) “Earth is made of cubes” — it turns out Plato was right
(1:16:30) Could Plato’s claim have been empirically inspired?
(1:17:50) “Everything happens between 20 and 6”
(1:18:30) The Cube and the Gömböc are the bookends of natural shapes
(1:19:30) The Obelisk in 2001 — an unnatural, but almost natural shape
(1:22:00) Poincaré on dreaming: genius taps the subconscious
From what human need does philosophy emerge? And where can it lead us?
Simon Critchley is Hans Jonas professor of Philosophy at the New School in New York, and a scholar of Heidegger, Pessoa, Football (Liverpool FC), and humour — among other things. He crosses over between analytic and continental traditions and freely draws on quotes from Hume, poetry and British pop bands.
Simon argues that philosophy begins in disappointment, not wonder. But its goals can be wisdom, knowledge, enlightenment, and freedom. It can play many roles: as a tool for developing scientific theories, for exposing ideology, for tracing the underpinnings of language and experience. Wherever other fields fear to tread, philosophers step in.
Show notes and links to books at Multiverses.xyz
(00:00) Intro
(3:00) Beginning of conversation: disappointment as the start of the journey
(7:55) Punk & Philosophy
(11:20) Trauma and tabula rasa
(12:30) Not making it in a band, becoming a philosopher
(19:30) Wittgenstein as a bridge between analytic and continental philosophy
(21:50) Mill and the origin of the label “continental philosophy”
(24:30) Philosophy has a duty to be part of culture
(28:00) The difficulty with philosophy being an academic tradition
(29:30) The Stone
(32:30) Football as a phenomenon for study that invites people into philosophy
(35:00) Philosophy as pre-theoretic & Pessoa’s Ultimatum
(39:00) Will analytic philosophy run out of road and be subsumed into science?
(41:00) Two lines of human imagination
(42:00) Should philosophy ever be a single honours subject, or should it always aid other realms of thought?
(43:00) Philosophy as pre-science
(44:30) Phenomenology as reflection on the lived world
(47:00) Alberto Caeiro (Pessoa) and anti-poetry
(48:50) The saying of ordinary things to fascinate angels
(54:00) Impossible objects will keep philosophers busy
(57:00) The task of philosophy as deflationary, as not making progress
(1:00:00) Should philosophy of physics be part of physics?
(1:04:30) Context: Why can’t I read Descartes like I’m talking to you right now?
(1:06:00) Is context colour or is it inseparable from ideas?
(1:15:30) Rorty: Continental philosophy as proper names vs problems in analytic philosophy
(1:19:20) Trying to walk the line between two traditions of philosophy
(1:20:00) Obscurantism vs scientism
(1:23:00) Permission to think on their own, to expose ideology
(1:26:00) The internet has been good for philosophy
(1:26:30) Audio as a new platform or agora for philosophy
Large language models, such as ChatGPT, are poised to change the way we develop, research, and perhaps even think. But how do we best understand LLMs to get the most from our prompting?
Thinking of LLMs as deep neural networks, while correct, is not very useful in practical terms. It doesn't help us interact with them, much as thinking of human behavior as nothing more than neurons firing won't win you many friends. But thinking of LLMs as search engines is also faulty: they are notoriously unreliable on facts.
Our guest this week is James Intriligator. James trained as a cognitive neuroscientist at Harvard, but then gravitated towards design and is currently Professor of the Practice in Human Factors Engineering and Director of Strategic Innovation at Tufts University.
James proposes viewing ChatGPT not as a search engine but as a "glider" that journeys through knowledge. By guiding it through diverse domains, you teach it your interests and it tailors its answers accordingly. Dimensional prompts activate specific areas such as medicine or economics.
I like this playful way of thinking of LLMs. Maybe gliding (LLMs) is the new surfing (of the web).
Links:
The physical solidity of books encourages notions of "the text" or "the canonical edition". The challenges to this view from post-modernist thought are well known. But there are other ways in which this model of a static text may fail.
Our guest this week is Peter Robinson (my dad!) who takes us through his work on Chaucer's Canterbury Tales. This is a paradigmatic case of a work of literature that defies understanding as fixed text. Originally it would have been read, or performed. What exists now are fragments of transcripts of performances. And copies of those fragments. And copies of copies.
Using techniques from phylogenetics, Peter has led efforts to piece together the relationships between these manuscripts. By tracing how transcription errors (or edits) appear to propagate, we can create a family tree of the texts, just as we can trace the propagation of biological traits through generations.
Sounds simple? "After 30 years of working on this, we're really just beginning to understand what a representation of a textual tradition using these tools gives us"
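The core move can be sketched in a few lines: treat each manuscript as a list of readings at variant sites, and let shared readings suggest kinship. The toy example below (invented readings, and a crude pairwise distance rather than the project's actual phylogenetic software) shows the idea:

```python
# Toy sketch of the phylogenetic idea: manuscripts that agree at many
# variant sites are likely close kin. Readings are invented; real
# analyses use proper phylogenetic tools, not this crude distance.

# Each manuscript is a tuple of readings at five variant sites.
manuscripts = {
    "A":  ("whan", "aprille", "shoures", "soote", "droghte"),
    "A2": ("whan", "aprille", "shoures", "sote",  "droghte"),  # one copying change
    "B":  ("when", "aprill",  "showres", "soote", "drought"),
    "B2": ("when", "aprill",  "showres", "sote",  "drought"),
}

def distance(m1, m2):
    """Count the variant sites where two manuscripts disagree."""
    return sum(a != b for a, b in zip(manuscripts[m1], manuscripts[m2]))

# Sorting all pairs by distance reveals two families: {A, A2} and {B, B2}.
names = list(manuscripts)
pairs = sorted(
    (distance(x, y), x, y)
    for i, x in enumerate(names) for y in names[i + 1:]
)
print(pairs[0])  # (1, 'A', 'A2') — the closest pair
```

Grouping the closest pairs first and repeating is essentially how distance-based tree-building methods construct a family tree from such a matrix.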
For hundreds of years, things changed slowly. Innovations were infrequent and spread inchmeal. Population, culture, the atmosphere: all were static decade to decade. We now see rapid change.
It's hard to contemplate "what now?", let alone "what next?"
Peter Schwartz is a futurist, SVP for Scenario Planning at Salesforce, author of The Art of the Long View, and a founder of the Long Now Foundation. He thinks about the future, both envisioning its many possibilities and harnessing these scenarios to answer the question: what do we do now?
In this conversation, we discuss the Clock of the Long Now, working with Steven Spielberg, what the future may hold (the ISS becoming a hotel?), what it almost certainly will (accelerating climate change is something we cannot avoid; we must adapt as well as drive down emissions), and how we should approach thinking about it.
AI is already changing the world. It's tempting to assume that AI will be so transformative that we'll inevitably fail to harness it correctly, succumbing to its Promethean flames.
While caution is due, it's instructive to note that in many respects AI does not create entirely new challenges but rather exacerbates or uncovers existing ones. This is one of the key themes that emerge in this discussion with John Zerilli. John is a philosopher specializing in AI, Data, and the Rule of Law at the University of Edinburgh, and he also holds positions at the Oxford Institute for Ethics in AI and the Centre for the Future of Intelligence in Cambridge.
For instance, John points out that some of the demands we make of AI with respect to fairness are simply impossible to fulfill — not due to some technological or moral failing on the part of AI, but that our demands are in mathematical conflict. No procedure, whether executed by a human or a machine, can consistently meet these requirements. We have AI research to thank for illuminating this.
In contrast, concerns over a 'responsibility gap' in AI seem to overlook the legal and social progress made over the last centuries, which has, for example, allowed us to detach culpability from individuals and assign it to corporations instead.
John also notes that some of the dangers of AI may be more commonplace than we imagine — such as the use of deep fakes to supercharge hacking, or our psychological tendency to become complacent with processes that mostly work, leading us to an unwarranted reliance on AI.
Notes:
(00:00) Intro
(3:25) Discussion starts: risk
(12:36) Robots are scary, embedded AI is anodyne
(15:00) But robots failing is cute
(16:50) Should we build errors into AI? — catch trials
(26:62) Responsibility
(29:11) There is no responsibility gap
(42:40) Should we move faster to introduce self-driving cars?
(45:22) Fairness
(1:05:00) AI as a cognitive prosthetic
(1:18:14) Will we lose ourselves among all our cognitive prosthetics?
Plants have transformed the surface of the earth and the contents of our atmosphere. To do this they've developed elaborate systems of roots and branches which (sometimes) follow uncanny mathematical patterns such as the Fibonacci sequence.
Our guest this week, Sandy Hetherington, leads Edinburgh's Molecular Palaeobotany and Evolution Group. They take a no-holds-barred approach to understanding plant development by combining genomics, fossil records, herbaria, and 3D modeling.
Dig in!
Does the Earth contain enormous clean energy reserves? For many years the received logic was that hydrogen does not occur naturally in significant quantities without being bound to other atoms (such as in H2O, water, or CH4, methane). To obtain the gas — whether as a fuel or for use in fertilizers — we need to strip it from those molecules, typically by electrolysis or steam reforming. But our understanding may be ripe for change.
Rūta Karolytė is at the vanguard of prospectors looking for large, naturally occurring reservoirs of hydrogen. She’s a researcher from Oxford specializing in the geochemistry of the Earth and she enlightens us to the mechanisms that are likely to be producing hydrogen in the crust: radiolysis and serpentinization.
In reviewing the evidence for naturally occurring hydrogen, Rūta leads us through exotic terrain: a Soviet-era theory of hydrocarbon production, fairy rings, hydrothermal vents, and chemosynthetic life.
If Rūta and her colleagues are correct, the tapping of natural hydrogen could have transformative consequences for the Hydrogen economy such as cutting out the substantial fossil fuel emissions associated with deriving fertilizers from methane or creating a cheap basis for building synthetic fuels.
In the first half of the show, we also delve into carbon sequestration — another cool climate topic. But I got so excited writing up the hydrogen story that I'll leave it here.
Notes:
Outline:
(00:00) Introduction
(3:00) Geological carbon sequestration
(50:40) Natural hydrogen
Thought experiments have played a starring role in physics. They seem, sometimes, to pluck knowledge out of thin air. This is the starting point for my discussion this week with the philosopher Harald Wiltsche: what are thought experiments?
How do they function — are they platonic laboratories with no moorings in observations or a way of supercharging our reasoning about phenomena?
What do they deliver? Much emphasis has been put on the paradigm-shattering insights of Einstein where thought experiments appear like midwives in the production of new theories. But they can also function in explaining those theories, in ensuring they are understood.
This leads to the question: what is understanding? Harald argues that it’s the ability to manipulate or deploy knowledge backed up by a mental model. He reasons that thought experiments can help us to make sense of the abstraction of mathematical models.
We discuss many topics: the mathematization of science with Galileo, where thought experiments go wrong, transcendental arguments, the danger of losing sight of physical phenomena, and the links between Husserl and Mach. The theme that we dance to is this: there are lots of forms of reasoning that can work well within physics and we need equal pluralism in our philosophy of science to understand its startling, uncanny success in modeling nature.
Genomics is leading a revolution in our understanding of disease. But the ways we pursue genomics research and the use we make of that knowledge demand careful thinking.
Anna is a researcher at The Edmond & Lily Safra Center for Ethics at Harvard. She holds a PhD in Systems Biology from Oxford (where we met) and has worked in medtech startups. As someone who has looked at genomics from multiple perspectives, she's an excellent guide to this rocky terrain.
Anna emphasizes the challenges and importance of polygenic traits and Polygenic Risk Scores (PRS). While PRSs are key tools in understanding and predicting traits, they are subject to misinterpretation and misuse if not properly defined. The concepts of 'race' and, more recently, 'continental ancestry group', often used in the calculation of PRSs, can lead to misguided or even harmful assumptions, potentially propagating racist ideologies. Instead, Anna suggests using Ancestral Recombination Graphs (ARGs) to better represent an individual's genetic ancestry.
Through ARG, we can achieve a more scientifically accurate and ethically sound basis for research. As we continue to make leaps in genomics and potentially influence traits like intelligence or strength, the importance of ethical, legal, and social implications becomes increasingly crucial. As we learn to wield our scientific tools, we need to understand how we should use them.
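In its simplest form, a polygenic risk score is just a weighted sum: for each variant, an effect size estimated from association studies multiplied by the number of risk alleles a person carries. A minimal sketch, with invented variant IDs, effect sizes, and genotypes:

```python
# Minimal sketch of a polygenic risk score (PRS). In its simplest form,
# PRS = sum over variants of (effect size x risk-allele count).
# All numbers and variant IDs below are invented for illustration.

effect_sizes = {"rs1": 0.12, "rs2": -0.05, "rs3": 0.30}  # per-allele weights (betas)
genotype = {"rs1": 2, "rs2": 1, "rs3": 0}                # risk-allele count: 0, 1, or 2

def polygenic_risk_score(betas, counts):
    """Weighted sum of risk-allele counts across variants."""
    return sum(betas[v] * counts[v] for v in betas)

score = polygenic_risk_score(effect_sizes, genotype)
print(round(score, 3))  # 0.12*2 + (-0.05)*1 + 0.30*0 = 0.19
```

The pitfalls Anna describes enter upstream of this arithmetic: the effect sizes are estimated within a reference population, so applying them across poorly defined ancestry categories is where misuse creeps in.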
Christian Bök is an award-winning poet pushing the boundaries of the medium and exploring the capabilities of language itself. Rather than focusing on self-expression, Christian uses poetry as a laboratory for understanding language — probing its plasticity and character.
His notable work, the bestseller Eunoia, draws inspiration from the avant-garde rules of Oulipo and takes it a step further by restricting each chapter to only one vowel. This constraint leads to the creation of such singular phrases as "Writing is inhibiting. Sighing, I sit, scribbling in ink this pidgin script."
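Eunoia's constraint is mechanical enough to check by machine. A small sketch of such a checker (my own toy, not a tool Christian used, and his actual rules are stricter still):

```python
# Toy univocalic checker in the spirit of Eunoia's constraint:
# a chapter may use only one designated vowel. (Bök's real rules
# are stricter — e.g. the letter y is also policed.)

VOWELS = set("aeiou")

def is_univocalic(text, vowel):
    """True if the only vowel appearing in `text` is `vowel`."""
    used = {c for c in text.lower() if c in VOWELS}
    return used <= {vowel}

line = "Writing is inhibiting. Sighing, I sit, scribbling in ink this pidgin script."
print(is_univocalic(line, "i"))            # True — chapter I uses only 'i'
print(is_univocalic("hello world", "o"))   # False — 'e' sneaks in
```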
For the past two decades, Christian has pursued an even more ambitious project, The Xenotext. This project involves enciphering an "alien text" within the DNA of a resilient bacterium, Deinococcus radiodurans. One goal of The Xenotext is to create a text that could outlast human civilization. To add to the genomic challenge, Christian has set a remarkable rule: the symbols of the text should be interpretable in two different ways, resulting in two poems encoded within the same string.
Christian combines scientific techniques, trial and error, and computer programming to construct his poems, adhering to the rules he has established within his own poetic universe. Furthermore, he transforms art back into science by employing gene-editing to inscribe his poetic creation into the "book of life," the DNA of a living organism.
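The mutual-cipher rule can be illustrated with a self-inverse letter substitution: pair letters up so that applying the cipher twice returns the original string. The pairing below is invented for illustration; Bök's actual pairing was painstakingly chosen so that both a poem and its enciphered twin read as English verse:

```python
# Sketch of the Xenotext's mutual cipher: an involution on letters,
# so enciphering any text twice returns the original. This pairing is
# invented; Bök's real pairing makes both outputs readable poems.

PAIRS = [("t", "n"), ("a", "o"), ("d", "g")]
CIPHER = {}
for x, y in PAIRS:
    CIPHER[x], CIPHER[y] = y, x  # each letter maps to its partner

def encipher(text):
    """Swap each letter for its partner; unpaired characters pass through."""
    return "".join(CIPHER.get(c, c) for c in text)

print(encipher("tad"))                     # "nog" — one string, two readings
print(encipher(encipher("tad")) == "tad")  # True — the cipher is its own inverse
```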
Instead of looking back and inwards (the ideal of “emotion recollected in tranquility”), Christian looks outwards and to the future, fusing science and art to produce uncanny, unforgettable verse.
References:
Is the fate of the universe predetermined? Many physicists and philosophers argue it is, particularly those who adopt the Many Worlds interpretation of quantum mechanics.
Our guest this week is Ruediger Schack. With Christopher Fuchs and Carlton Caves, he is one of the originators of a new way of interpreting quantum mechanics, QBism, according to which we — as agents — are co-creators of the world. Destiny is shaped by our hands.
Ruediger is a professor of mathematics at Royal Holloway, University of London, and works on problems in quantum information and quantum cryptography, but also seeks to understand what the equations say about the world.
One of the central claims of QBism is that the wavefunction is a representation of knowledge, not of physical reality. As such, the “collapse of the wavefunction” due to agent interactions is nothing more than Bayesian updating: observations lead us to update our knowledge.
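The updating rule QBism appeals to is ordinary probability theory. A plain, non-quantum sketch of Bayesian updating (the scenario and numbers here are invented, not from the conversation):

```python
# "Collapse" read as belief revision: posterior ∝ likelihood × prior.
# A classical toy example — the preparations and probabilities are invented.

def bayes_update(prior, likelihood):
    """Update hypothesis probabilities given P(observed data | hypothesis)."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# An agent is 50/50 about which of two preparations a device used...
prior = {"prep_A": 0.5, "prep_B": 0.5}
# ...then sees an outcome that prep A yields 80% of the time, prep B 20%.
posterior = bayes_update(prior, {"prep_A": 0.8, "prep_B": 0.2})
print(posterior)  # {'prep_A': 0.8, 'prep_B': 0.2}
```

On the QBist reading, nothing physical "collapsed" here: the agent's probabilities simply changed on receipt of new data, and the quantum case is the same move in a richer formalism.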
We unpack the ideas of QBism (that reality is not objective but intersubjective) using ideas from phenomenology, best summarised in Merleau-Ponty’s comment that “there is no world without a being in the world”. We also dive into some of the objections to QBism.
This was a foray into foreign waters for me; I hope you enjoy it as much as I did.
Notes:
Science and poetry are sometimes caricatured as opposing paradigms: the emotional expression of the self versus the objective representation of nature. But science can be poetic, and poetry scientific. Our guest this week, Sam Illingworth, bridges these worlds. He’s researched scientists who were also poets, and organized workshops for scientists and laypeople using the medium of poetry to create an equitable and open dialogue.
In addition to being an Associate Professor at Edinburgh Napier University, Sam is the founder of Consilience, a peer-reviewed journal publishing poetry (which presents such beautifully titled gems as What You Don't See on David Attenborough is All the Waiting) and hosts the Poetry of Science podcast where, each week, he writes a poem in response to recent scientific research.
Space and time appear in charts as axes oblivious to the points they demarcate. Similarly, we may feel that we, and all the objects of our worlds, are like such points — and spacetime is a container in which we sit.
Julian Barbour is a physicist who has spent six decades arguing against this. He takes the relationist approach of Leibniz and Mach: there is no space without objects and no time without change. Rather space is just the geometric relationships between things.
Julian has pioneered theories that recover the predictions of Newtonian mechanics and General Relativity while dropping their invocation of imperceptible space, time, and spacetime. Recently he has taken an iconoclastic approach to the arrow of time — looking to a new measure of structure, complexity, and the expansion of the universe instead of the traditional accounts in terms of entropy.
Find more at:
We live in a branching universe. If it can happen, it does happen.
These are the almost incredible claims of the Many Worlds Interpretation of quantum mechanics. Yet today’s guest, David Wallace, makes a case that this is the most grounded way of reading our best theory of nature.
While at first sight quantum mechanics seems to say that things (famously, cats!) can occupy impossible states, David argues that a careful reading shows we can take seriously “superpositions” (these apparently weird states) not only at the microscopic level but all the way up to the scale of the universe.
This way of thinking about quantum mechanics was first proposed in 1957 by Hugh Everett. David has made important contributions of his own — particularly to the “preferred basis” or “counting” problem, which asks how many worlds there are, and to understanding how a deterministic theory of the world can appear indeterministic — probabilistic — to agents.
David has PhDs in both physics and philosophy from the University of Oxford and currently holds the Mellon Chair in Philosophy of Science at the University of Pittsburgh.
Casey Handmer is the founder of Terraform Industries (TI).
TI is pioneering air-to-fuel technology to manufacture methane (natural gas) from the air. Currently, we continue to extract enormous quantities of hydrocarbons from the crust, burn them, and release carbon dioxide. Instead, TI wants to mine the air: displacing the transport of carbon from the crust to the atmosphere.
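The standard chemistry for turning captured CO2 into methane is the Sabatier reaction, CO2 + 4 H2 → CH4 + 2 H2O (whether this is exactly TI's process is an assumption on my part). A back-of-envelope mass balance shows how much CO2 each tonne of synthetic methane accounts for:

```python
# Back-of-envelope Sabatier mass balance: CO2 + 4 H2 -> CH4 + 2 H2O.
# Ideal stoichiometry only; assumes this is the methanation route used.

M_CO2 = 44.01  # g/mol
M_CH4 = 16.04  # g/mol
M_H2 = 2.016   # g/mol

# One mole of CH4 consumes one mole of CO2 and four moles of H2.
co2_per_tonne_ch4 = M_CO2 / M_CH4      # tonnes CO2 per tonne CH4
h2_per_tonne_ch4 = 4 * M_H2 / M_CH4    # tonnes H2 per tonne CH4

print(f"{co2_per_tonne_ch4:.2f} t CO2 and {h2_per_tonne_ch4:.2f} t H2 per t CH4")
# roughly 2.74 t of CO2 drawn from the air per tonne of methane made
```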
Casey Handmer has a PhD in theoretical astrophysics from Caltech; he's worked at NASA's JPL and on Hyperloop One.
His blog is at caseyhandmer.wordpress.com
Twitter at @CJHandmer
Terraform Industries website: terraformindustries.com
For a transcription and further references see multiverses.xyz
My thanks to Mark Shilton, Sam Westwood and Maciej Pomykala for help with this episode.