Join us as we spend each episode talking with a mathematical professional about their favorite result. And since the best things in life come in pairs, find out what our guest thinks pairs best with their theorem.
The podcast My Favorite Theorem is created by Kevin Knudson & Evelyn Lamb.
Evelyn Lamb: Hello and welcome to my favorite theorem, the math podcast with no quiz at the end. I'm Evelyn Lamb, a freelance math and science writer in Salt Lake City, Utah, and I am joined, as always, by our other host. Will you introduce yourself?
On this episode of My Favorite Theorem, we had the pleasure of talking with Robin Wilson, a mathematician at Loyola Marymount University, about the Poincaré-Hopf index theorem and the importance of math education. Below are some links you may enjoy after the episode.
An interview with Wilson for Meet a Mathematician
More on the Poincaré-Hopf index theorem
The 2021 SLMath Workshop on Mathematics and Racial Justice and its follow-up, to be held in May 2025
Storytelling for Mathematics
The Algebra Project
The 2025 Critical Issues in Mathematics Education workshop, to be held in April 2025, focusing on mathematical literacy for citizenship
Evelyn Lamb: Hello and welcome to my favorite theorem, the math podcast with no quiz at the end. I'm Evelyn Lamb, a freelance writer in Salt Lake City, Utah, where it is gorgeous spring weather, perfect weather to be sitting in my basement talking to people on Zoom. This is your other host.
For this episode, we were excited to talk to Kate Stange from the University of Colorado, Boulder about the bijection between quadratic forms and ideal classes. Below are some links you might find interesting as you listen.
Stange's website
The Illustrating Mathematics website and seminar, which meets monthly on the second Friday
An Illustrated Theory of Numbers by Martin Weissman
The Buff Classic bike ride in Boulder
Kevin Knudson: Welcome to my favorite theorem, the math podcast with no quiz at the end. I'm Kevin Knudson, professor of mathematics at the University of Florida, and I am joined, as always, by my fabulous co-host.
In this episode, we enjoyed talking with Karen Saxe about her work as the director of the American Mathematical Society's Office of Government Relations and her favorite theorem, the isoperimetric theorem. Below are a few links you might find relevant as you listen:
Saxe's website and the homepage of the AMS Office of Government Relations
A survey of the history of the isoperimetric problem by Richard Tapia
The 1995 proof by Peter Lax
Evelyn's blog post about 50 pence coins and other British objects of constant width
The Polsby-Popper test to measure gerrymandering
A public lecture by mathematician Moon Duchin about mathematics and redistricting
The 1927 Journal of Paleontology article that first uses the Polsby-Popper metric (though not with that name)
An Atomic Frontier video about squeaky sand
Our episode with fellow tennis-enjoyer Carina Curto
Evelyn Lamb: Hello and welcome to My Favorite Theorem, the math podcast with no quiz at the end. I'm your host Evelyn Lamb. I'm a freelance math and science writer in Salt Lake City. And this is your other host.
On this episode, we enjoyed talking with mathematician and playwright-performer Corrine Yap about Mantel's theorem in graph theory. Below are some related links you may find interesting.
Yap's website
MathILy-Er, a summer math program for high schoolers
Wikipedia on Turán's theorem, the generalization of Mantel's theorem
The Korean Vegan
Kevin Knudson: Welcome to My Favorite Theorem, the math podcast with no quiz at the end. I'm one of your hosts, Kevin Knudson, professor of mathematics at the University of Florida, and I'm joined as always by your other (and, let's be honest, better) host.
On this episode, we talked with our delightful guest Allison Henrich, a mathematician at Seattle University, about the region crossing change theorem in knot theory. Here are some links to things we mentioned that might be interesting for you.
Henrich's website
MAA Focus magazine
Ayaka Shimizu's paper about the region crossing change theorem
Region Select, a game you can play where you try to unknot a knot using region crossings
An article Henrich coauthored about the region unknotting game
Nancy Scherich's YouTube channel, where she shares videos of her dances about math
James Whetzel's song I Want to Go About My Day on Bandcamp
The signup form for the mathematics storytelling event Henrich is cohosting at the Joint Mathematics Meetings in January 2024
Flow into Authenticity, the podcast she cohosts with artist Esther Loopstra
Evelyn Lamb: Hello and welcome to My Favorite Theorem, the math podcast with no quiz at the end. I'm your host Evelyn Lamb, a freelance math and science writer in Salt Lake City, Utah. And this is your other host.
On this episode of the podcast, we chatted with Tom Edgar of Pacific Lutheran University about the formula for the sum of integers between 1 and n. Here are some links you may enjoy:
His website and Twitter profile
Math Horizons
His collection, with Enrique Treviño, of proofs of the sum formula
His YouTube channel, Mathematical Visual Proofs, including his video on the 8-triangle proof of the sum formula
His article about proving the square root of two is irrational using centers of mass
His article about using centers of mass to prove the arithmetic-geometric mean inequality
Also, Brian Hayes’s article about Gauss: https://www.americanscientist.org/article/gausss-day-of-reckoning
Kevin Knudson: Welcome to My Favorite Theorem, the math podcast with no quiz at the end. I am one of your hosts, Kevin Knudson, professor of mathematics at the University of Florida, and your other host is…
In this episode, we were happy to talk with Tatiana Toro, mathematician at the University of Washington and director of the Simons Laufer Mathematical Sciences Institute (SLMath, formerly known as MSRI), about the Pythagorean theorem. Here are some links that you may find interesting.
Toro's website and the SLMath website
Our episodes with Henry Fowler and Fawn Nguyen, who also love the Pythagorean theorem
The analyst's traveling salesman problem on Wikipedia
Naber and Valtorta's work on singular sets of minimizing varifolds
Evelyn Lamb: Hello and welcome to My Favorite Theorem, the math podcast with no quiz at the end. Or perhaps today we should say the maths podcast with no quiz at the end. My name is Evelyn Lamb. I'm a freelance math and science writer in Salt Lake City, Utah. And this is your other host.
In this episode, we were delighted to talk with Sarah Hart, the Gresham Professor of Geometry at Gresham College in London, about the serendipitous cycloid. Below are some links you might enjoy as you listen.
Hart's website and Twitter profile
Her book Once Upon a Prime and its review in the New York Times
Hart's article Ahab's Arithmetic about mathematics in Moby-Dick
The Wikipedia entry for the cycloid, which has links to many of the people we discussed
Kevin Knudson: Welcome to My Favorite Theorem, the math podcast with no quiz at the end. I'm Kevin Knudson, professor of mathematics at the University of Florida, and I am joined today as always by my fabulous co-host.
On this episode, we were delighted to talk with Matthew Kahle of the Ohio State University about Euler's polyhedral formula, also known as V−E+F=2. Here are some links you might find useful as you listen to the episode.
Kahle's website
His paper about torsion in homology groups of random simplicial complexes
The Erdős–Rényi model of random graphs
Euclid's Elements, book 13, is devoted to the classification of Platonic solids. Also found here, starting on page 438.
The Jordan curve theorem has made a previous appearance on the podcast in our episode with Susan D'Agostino.
David Eppstein's website with 21 different proofs of Euler's formula. Thurston's proof is here.
Kevin Knudson: Welcome to My Favorite Theorem, the math podcast with no quiz at the end. I'm Kevin Knudson, professor of mathematics at the University of Florida. And today I am flying solo while I am at Texas Christian University in Fort Worth, where I'm serving as the Green Honors Chair for the week. I've been giving some talks and meeting the fine folks here at TCU. And today, I have the pleasure of talking with some of their students. They're going to tell us about their favorite theorems and what they pair well with. And we're just going to jump right in. So my first guest is Aaryan. Can you introduce yourself?
In another Very Special Episode of My Favorite Theorem, Kevin had the privilege of asking a group of nine TCU students about their favorite theorems. We loved the variety of theorems and pairings they picked! Below are some links to more information about their favorites.
Aaryan Dehade led off with the fundamental theorem of calculus.
Duc Toan Nguyen's favorite is the mean value theorem, which some would argue is the real fundamental theorem of calculus. It was also a hit with our past guests Amie Wilkinson and Aris Winger.
Maiyu Diaz shared Stirling's formula for approximating factorials.
Hope Sage chose Bell's theorem from physics, which was a response to a paper by Einstein, Podolsky, and Rosen.
Jonah Morgan shared his love for Gödel's incompleteness theorems, which also came up when we talked with math students from CSULA last year.
Anna Long chose the invertible matrix theorem, a behemoth of a theorem that gives scads of ways to show that a matrix is invertible.
Matthew Bolding highlighted the four-color theorem.
Brandon Isensee chose Sharkovskii's theorem, which was also the favorite of past guest Kimberly Ayers.
Julia Goldman finished out the episode with a perennial MFT favorite, the Brouwer fixed point theorem. We have talked about it on past episodes, most recently with Priyam Patel. See if you agree with Julia that it is the mathematics underlying the TV show Love Island!
Evelyn Lamb: Hello, and welcome to My Favorite Theorem, the math podcast with no quiz at the end. I'm Evelyn Lamb, one of your co-hosts, coming to you from snowy Salt Lake City, Utah, where I feel like I've said that the past few times we've been taping. Which is great, because we really need the water. It is beautiful today, and I am ever so grateful that the life of a freelance writer does not require me to drive in conditions like this, especially as someone who grew up in Texas where conditions like this did not exist, and so I am extremely unconfident in snow and ice. So yeah, coming to you from the opposite side of the weather spectrum is our other host.
On this episode, we were excited to talk with Cihan Bahran about the undecidability of the matrix mortality problem. Here are some related links you might enjoy:
Bahran's website and Twitter account, where he posts "cursed math facts"
The 2014 paper establishing the undecidability of the matrix mortality problem for, among other cases, six 3 × 3 matrices
The word problem in group theory
We recorded this episode before the devastating earthquake in Turkey and Syria. Our hearts go out to all who have been affected. If you would like to contribute to relief efforts, Doctors Without Borders and Ahbap Derneği are two organizations doing work in the area.
Evelyn Lamb: Hello and welcome to My Favorite Theorem, the math podcast with no quiz at the end. I'm your host Evelyn Lamb. I'm a freelance math and science writer in Salt Lake City, Utah. And this is your other host.
On this episode, we were happy to talk with Juliette Bruce, a mathematician at Brown University, about Petri's theorem. Here are some links you might enjoy as you listen to the episode.
Her website and Twitter profile
The canonical bundle and Petri's theorem on Wikipedia
Robin Hartshorne's (in)famous Algebraic Geometry textbook
Spectra, the association for LGBTQ+ mathematicians
Kevin Knudson: Welcome to My Favorite Theorem, the math podcast with no quiz at the end. I'm Kevin Knudson, professor of mathematics at the University of Florida and I am joined as always by my fabulous other host, co-host? I don't know,
On this episode, we had the pleasure of talking with Christopher Danielson, who works for Desmos and is involved with several programs to help kids have rich, creative mathematical experiences. Here are a few links you might find useful after you listen.
Danielson's Twitter account
Talking Math With Your Kids
Math on a Stick
Public Math
Math Anywhere
Evelyn's Page-a-Day math calendar, which takes inspiration for August 8's page from Danielson's book Which One Doesn't Belong?
Evelyn Lamb: Hello and welcome to My Favorite Theorem, the math podcast with no quiz at the end. My name is Evelyn Lamb. I'm a freelance math and science writer in beautiful Salt Lake City, Utah, where fall is just gorgeous and everyone who's on this recording, which means no one listening to it, gets to see this cute Zoom background I have from this fall hike I did recently with this mountain goat, like, posing for me in the back. It kind of looks like a bodybuilder, honestly, like really beefy. But yeah, super cute mountain goat. So yeah, that's really helpful for everyone at home. Here is our other host.
On this episode, we were happy to have Kimberly Ayers of California State University San Marcos on the podcast to talk about Sharkovskii's theorem. Here are some links you might enjoy perusing after you listen to the episode.
Ayers' website and Twitter account
Her interview with LGBT Tech
Tien-Yien Li and James A. Yorke's article Period Three Implies Chaos
Our "flash favorite theorem" episode, where Michelle Manes also professed her love of Sharkovskii's theorem
Evelyn's Smithsonian article about the mathematics of taffy pullers
Kevin Knudson: Welcome to My Favorite Theorem, the math podcast with no quiz at the end. I'm Kevin Knudson, professor of mathematics at the University of Florida, and here is your other host.
In this episode, we talked with Philip Ording, a mathematician at Sarah Lawrence College, about the Erlangen program. Attached are some related resources you might enjoy.
Ording's website
The website for his book, 99 Variations on a Proof
John Baez's links related to the Erlangen program, including Klein's original paper on the topic
Royce Nelson's page about 3-dimensional hyperbolic geometry
Jessica Wynne's book Do Not Erase about mathematicians and their chalkboards
Evelyn Lamb: Hello and welcome to My Favorite Theorem, the math podcast with no quiz at the end. I like how we're always besmirching other math podcasts, which, as far as I know, also don't have quizzes at the end. I am your host Evelyn Lamb. I am a freelance math and science writer in Salt Lake City, Utah. And this is your other host.
On this episode of My Favorite Theorem, we were pleased to talk with Daina Taimina, recently retired from Cornell University, about Desargues's theorem. Here are some links you might find interesting after you listen.
Her website, blog, and Twitter account
Desargues's theorem on Wikipedia
Our episode with Annalisa Crannell, who also loves Desargues's theorem
Taimina's book Crocheting Adventures with Hyperbolic Planes, which won the Diagram Prize for oddest book title and the Euler Prize from the Mathematical Association of America
Experiencing Geometry by Taimina and David Henderson on Project Euclid
Evelyn Lamb: Hello, and welcome to My Favorite Theorem, the math podcast with no quiz at the end. I'm Evelyn Lamb, one of your hosts. I'm a freelance math and science writer in Salt Lake City, Utah, currently enjoying very beautiful spring mountains, which my guest and my co-host can see behind me in my Zoom background. And this is my co-host.
In this episode, we talked with Tien Chih, who will soon be starting a position at Emory University's Oxford College, about mathematical induction. Here are some links you might enjoy with the episode.
Chih's website and Twitter profile
Talk Math With Your Friends, the online math colloquium series he co-organizes (and with which My Favorite Theorem has collaborated!)
A Wikipedia page dedicated to the proof by induction of the statement that all horses are the same color
Domino Masters, a TV show about dominoes
Kevin Knudson: Welcome to my favorite theorem, a math podcast with no quiz at the end. I'm Kevin Knudson, professor of mathematics at the University of Florida, and I am joined by your other co-host person.
In this Very Special Episode of My Favorite Theorem, we were excited to welcome nine students from California State University Los Angeles, along with their professor Mike Krebs. Each student shared their favorite theorem and a pairing with us. Below are some links to more information about their favorites.
Pablo Martinez Gutierrez talked about Euler's formula and identity, which connect trigonometric functions with the complex exponential.
Holly Kim told us about Ore's theorem on Hamiltonian cycles in graph theory. Check out A Capella Science's Hamilton parody video!
Bryce Van Ross shared the Hales–Jewett theorem from game theory.
Alvin Lew chose Cantor's diagonalization proof of the uncountability of the reals, which was also a favorite of our previous guest Adriana Salerno.
Judith Landau shared the fundamental theorem of Markov chains, which relates to her work in bioinformatics.
Kevin Alfaro talked about Archimedes' approximation of pi.
Francisco Leon chose a theorem from point-set topology about regular spaces.
Marlene Enriquez chose Kevin's favorite, the ham sandwich theorem.
Daniel Argueta finished off the episode with Gödel's incompleteness theorems.
Kevin Knudson: Welcome to my favorite theorem, a math podcast with no quiz at the end. I'm Kevin Knudson, a professor of mathematics at the University of Florida. And I'm joined by my other host.
On this episode of My Favorite Theorem, we welcomed Dave Kung from the Dana Center at the University of Texas at Austin to talk about the Banach-Tarski paradox/theorem. Here are some links you might enjoy:
Kung's website and Twitter account
The Dana Center website
Leonard Wapner's book The Pea and the Sun about the Banach-Tarski paradox
A shorter article by Max Levy explaining the theorem
A primer on the axiom of choice from the Stanford Encyclopedia of Philosophy
The Tychonoff product theorem
Kung's course How Music and Mathematics Relate from the Great Courses
Evelyn's article The Saddest Thing I Know about the Integers, mourning the fact that no power of 3 is also a power of 2
Evelyn Lamb: Hello, and welcome to My Favorite Theorem, the math podcast with no quiz at the end. I'm your host Evelyn Lamb. I'm a freelance math and science writer in Salt Lake City, Utah. And this is your other host.
On this episode of My Favorite Theorem, we're revisiting the popular Brouwer fixed-point theorem with Priyam Patel of the University of Utah. Below are some links you might enjoy after you listen.
Patel's website and Twitter profile
Our previous episodes about the Brouwer fixed point theorem with Francis Su and Holly Krieger
A pdf of Allen Hatcher's algebraic topology book (available, legally, for free!)
The Lefschetz fixed-point theorem
Douglas Dunham's page about Escher and hyperbolic geometry
A blog post Evelyn wrote about putting pictures into the hyperbolic plane
Information about the Roots of Unity workshop (application deadline: February 15, 2022; if you're listening to this in later years, poke around and see if it's happening again!)
Kevin Knudson: Welcome to My Favorite Theorem, a math podcast with no quiz at the end. I'm Kevin Knudson, professor of mathematics at the University of Florida, and I'm joined today by my fabulous co-host.
On this episode of My Favorite Theorem, we were delighted to talk with Courtney Gibbons, a mathematician at Hamilton College, about Emmy Noether's isomorphism theorems. Below are some related links you might find useful.
Courtney Gibbons's website and Twitter account
The Wikipedia article on Noether's isomorphism theorems, which includes a helpful chart describing differences in labeling the theorems
An article about Emmy Noether by astrophysicist Katie Mack and her biography on the MacTutor History of Mathematics Archive
Evelyn's 2017 article in Undark about the effect of Nazism on German mathematics in the 1930s
Our episode with Kameryn Williams
Active Calculus, a free, open-source resource for teaching calculus
Kevin Knudson: Welcome to My Favorite Theorem, a math podcast with no quiz at the end. I'm Kevin Knudson, professor of mathematics at the University of Florida. And here is your other host.
On this episode of My Favorite Theorem, we had the pleasure of talking with Kameryn Williams from Sam Houston State University about Gödel's condensation lemma. Here are some links you might find interesting after you listen to the episode.
Their website and Twitter account
The Skolem paradox in the Stanford Encyclopedia of Philosophy
Gödel's incompleteness theorems in the Stanford Encyclopedia of Philosophy
Akihiro Kanamori's paper about Gödel and set theory
A shorter and more accessible paper by Kanamori and Juliet Floyd on the same topic
Before the Law by Franz Kafka
Evelyn Lamb: Hello and welcome to my favorite theorem, a math podcast with no quiz at the end. I'm Evelyn Lamb, a freelance math and science writer based in Salt Lake City but currently podcasting from my parents’ house in Dallas, which is actually not any warmer than Salt Lake City right now, unfortunately. This is your other host.
On this episode of the podcast, we were delighted to talk to composer Emily Howard, who uses her mathematics background in her compositions, about her favorite mathematical object, the torus, and the orchestral work it inspired. Below are some links you may enjoy after you listen to (or read) the episode.
Emily Howard's website
Her page about the composition Torus, including a recording by the BBC Radio Orchestra
Her article Orchestra Geometries
The November 11, 2021 BBC Radio concert featuring Howard's composition Sphere
PRiSM, the Royal Northern College of Music Centre for Practice and Research in Science and Music, which Howard directs
A website visualizing the eight Thurston geometries for 3-dimensional space
An article by Evelyn about the pseudosphere (or antisphere)
Our episode with Emily Riehl, who is relevant to this episode because she is both an Emily and a violist
Evelyn Lamb: Hello, and welcome to my favorite theorem, a math podcast with no quiz at the end. I'm Evelyn Lamb, one of your hosts. I'm a freelance math and science writer in Salt Lake City, Utah, and this is your other host.
In this episode of the podcast, we were happy to talk with Joel David Hamkins, a mathematician and philosopher (or is that philosopher and mathematician?) at the University of Oxford, about the fundamental theorem of finite games. Here are some links you might enjoy perusing after you listen to the episode.
His website, Twitter, and Mathoverflow pages
On his website, check out Math for Kids for some fun activities for all ages
His books Proof and the Art of Mathematics and Lectures on the Philosophy of Mathematics
The Wikipedia page about the fundamental theorem of finite games
The PBS Infinite Series episode on infinite chess
The Mathoverflow question and answers about legal chess board positions
Kevin Knudson: Welcome to My Favorite Theorem, a math podcast with no quiz at the end. I'm Kevin Knudson, professor of mathematics at the University of Florida. And here is your other host.
On this episode of My Favorite Theorem, we had the pleasure of talking to Ranthony Edmonds from The Ohio State University about the fundamental theorem of arithmetic. Here are some links you might enjoy after you listen to the episode:
Edmonds' website and Twitter account
An interview with NPR about her Hidden Figures-based course about mathematics and society
Math Alliance, a program that supports mentorship for early-career mathematicians from underrepresented groups
Ohio History Connection and the National Afro-American Museum and Cultural Center
An article by Evelyn about why 1 isn't a prime number, which mentions the distinction between prime and irreducible
The Metric Geometry and Gerrymandering Group (MGGG)
Ohio Organizing Collaborative
Ohio Citizens Redistricting Commission
Common Cause
Evelyn Lamb: Hello, and welcome to My Favorite Theorem, a math podcast with no quiz at the end. I'm Evelyn Lamb, one of your hosts. I'm a freelance math and science writer in Salt Lake City, Utah, and this is your other host.
On this episode of the podcast, we were excited to talk to Rekha Thomas, a mathematician at the University of Washington, about the Eckart-Young-Mirsky theorem from linear algebra. Here are some links you might find interesting after you listen to the show:
Thomas's website
Our episode with Tai-Danae Bradley, whose favorite theorem is related to Thomas's
Stewart's article about the history of singular value decomposition
Kevin Knudson: Welcome to My Favorite Theorem, a math podcast with no quiz at the end. I'm Kevin Knudson, professor of mathematics at the University of Florida. I am joined today by your fabulous and glorious other host.
On this episode, we were happy to talk with Liz Munch, an applied mathematician at Michigan State University, about the max-flow min-cut theorem. Here are some links you might enjoy after you listen to the episode.
Munch's website and Twitter account
The Women in Computational Topology Network
The Max-flow Min-cut theorem at Brilliant.org
The Ford-Fulkerson algorithm on Wikipedia
The cross-strung harp on Wikipedia
Harp.com's history of the harp
Evelyn Lamb: Hello and welcome to my favorite theorem, the math podcast with no quiz at the end. I'm Evelyn Lamb. I am a freelance math and science writer in Salt Lake City, Utah. And this is your other host.
On this episode of My Favorite Theorem, we had the pleasure of talking with Érika Roldán, a Marie Curie fellow at the Technical University of Munich and École Polytechnique Fédérale de Lausanne, about shuffling cards and Tetris pieces.
To read about the mathematics of riffle shuffles, this article by Persi Diaconis is a good place to start. To get a copy of the American Mathematical Society page-a-day calendar, click here. (If you already have a calendar, check out Dr. Roldán's puzzles on August 28 and October 12.)
Dr. Roldán shared some other links to and explanations of some of the apps and videos she mentioned in the episode:
The COVID crisis has allowed me to start developing digital material for my research, teaching, and outreach on mathematics and its applications. It has also allowed me to collaborate as a developer of digital material (in Germany) with artists whose projects promote gender equality, and diversity & inclusiveness awareness. Here, I share some of the links to explore this digital playground (new digital material created will be posted soonish at my website: http://www.erikaroldan.net/):
1) https://000612693.deployed.codepen.website
Follow the link above to find Extremal Animals, that is, polyforms with maximally many holes. A polyform is built by gluing together squares or triangles (in the case of this app) by their edges. A hole in a polyform is a finite connected component of the complement of the polyform (in this 2D case, the number of holes is what mathematicians call the first Betti number). To get some intuition, just build a polyform with 7 square tiles and one hole, or a polyiamond with 9 triangles and one hole. Could you create one hole with fewer tiles in either of these two cases? (A minimal hole-counting code sketch follows the paper links below.)
Have a look at these papers for the maths behind this (Extremal Topological Combinatorics) puzzle of finding polyforms with maximally many holes:
https://arxiv.org/pdf/1807.10231.pdf
https://www.combinatorics.org/ojs/index.php/eljc/article/view/v27i2p56/pdf
https://arxiv.org/abs/1906.08447v1
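For readers who want to play with the puzzle away from the app, here is a minimal Python sketch (an editorial addition, not Dr. Roldán's code; the function name count_holes is just illustrative) of one way to count the holes of a square polyform: flood-fill the complement from outside the bounding box, and every empty region the fill cannot reach is a hole.

```python
from collections import deque

def count_holes(cells):
    """Count the holes of a polyomino given as a set of (x, y) cells, where a
    hole is a finite connected component of the complement (edge-adjacency).
    The number of holes is the first Betti number in this 2D square-tile case."""
    if not cells:
        return 0
    xs = [x for x, _ in cells]
    ys = [y for _, y in cells]
    x0, x1 = min(xs) - 1, max(xs) + 1   # pad the bounding box by one cell so
    y0, y1 = min(ys) - 1, max(ys) + 1   # the outside forms a single component

    def empty_neighbors(x, y):
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if x0 <= nx <= x1 and y0 <= ny <= y1 and (nx, ny) not in cells:
                yield (nx, ny)

    def flood(start, forbidden):
        """All empty cells reachable from start without entering forbidden cells."""
        seen = {start}
        queue = deque([start])
        while queue:
            for n in empty_neighbors(*queue.popleft()):
                if n not in seen and n not in forbidden:
                    seen.add(n)
                    queue.append(n)
        return seen

    outside = flood((x0, y0), set())    # the one unbounded component
    holes, assigned = 0, set(outside)
    for x in range(x0, x1 + 1):
        for y in range(y0, y1 + 1):
            if (x, y) not in cells and (x, y) not in assigned:
                holes += 1              # a new finite component of the complement
                assigned |= flood((x, y), assigned)
    return holes

# A 7-tile polyomino with one hole: the 3x3 ring of cells around (1, 1),
# with the corner (2, 2) removed; one answer to the 7-tile puzzle above.
ring = {(0, 0), (1, 0), (2, 0), (0, 1), (2, 1), (0, 2), (1, 2)}
print(count_holes(ring))  # prints 1
```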
2) https://000612976.deployed.codepen.website
Here is a link to an app with a model that generates a random polyform by a cell-growth process called the Eden model. Pay attention to how the holes are created and destroyed as time (the number of tiles) evolves. Do you have any conjectures about how the number of holes changes over time? (A small simulation sketch follows the paper link below.) Have a look at this link to see if your conjectures are stated and/or proved in this paper:
https://arxiv.org/abs/2005.12349
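As a rough companion to the app (again an editorial sketch, not Dr. Roldán's code; eden_growth is an illustrative name), here is one common version of the Eden model: start from a single tile and repeatedly attach a new tile chosen uniformly at random from the empty cells that share an edge with the cluster. Feeding each snapshot to a hole counter like the sketch above lets you watch holes appear and disappear over time.

```python
import random

def eden_growth(steps, seed=0):
    """Grow a random polyomino by an Eden-type process: start from one tile
    and repeatedly attach a new tile chosen uniformly at random from the
    empty cells that share an edge with the current cluster."""
    rng = random.Random(seed)
    cluster = {(0, 0)}
    frontier = {(1, 0), (-1, 0), (0, 1), (0, -1)}  # empty cells touching the cluster
    snapshots = []
    for _ in range(steps):
        cell = rng.choice(sorted(frontier))  # sorted() keeps runs reproducible
        cluster.add(cell)
        frontier.discard(cell)
        x, y = cell
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if n not in cluster:
                frontier.add(n)
        snapshots.append(set(cluster))  # copy, so earlier snapshots stay intact
    return snapshots

# Grow a 200-tile cluster; applying a hole counter (like the sketch above) to
# each snapshot shows holes being created and later filled in as time evolves.
history = eden_growth(200)
print(len(history[-1]))  # 201 tiles: the seed tile plus 200 growth steps
```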
3) My first proto-game with Unity was developed for the film “Broken Brecht,” directed and produced by Caroline Kapp and Manon Haase for the Brechtfestival Augsburg, Germany (Mar 2021). This is a project that will be extended during 2021! Here are some links to the festival, the proto-game, and an extract from the film that takes place within the proto-game.
Brecht Festival
https://brechtfestival.de/brokenbrecht/
Extract from Broken Brecht
https://vimeo.com/542287814
Link to the 3 min Archive Video Game
https://simmer.io/@ErikaRoldanRoa/~56f30f68-048c-c027-7aa0-aeaca82508fc
4) Some 3D models created with Python & Maya to explore (random) cubical complexes.
https://sketchfab.com/erikaroldan
Evelyn Lamb: Hello, and welcome to My Favorite Theorem, a math podcast where there's no quiz at the end. I remember we did that tagline, like, I don't know, probably two years ago or something. And I forgot that I wanted to keep doing it. But I did it today. I'm Evelyn Lamb, one of your hosts. I'm a freelance math and science writer in Salt Lake City, Utah. And this is your other host.
Kevin Knudson: Hi. I’m Kevin Knudson, professor of mathematics at the University of Florida. I forgot that tagline, too, and it's a pretty good one. Let's, let’s—look.
EL: We’ll see if we remember later.
KK: After our last recording session, we agreed we needed a real tagline. So yeah. We're recording this on February 18, which means that Texas is largely without power and frozen.
EL: Yeah.
KK: And it's 82 degrees in Florida today.
EL: Oh wow. Yeah. Most of my family is in Texas, and it is not great.
KK: Do they have power? Or? No?
EL: Most of them do. All of them do sometimes.
KK: Right. Actually, I think water is getting to be a problem now. Right?
EL: Yeah. I haven't heard about any problems with that from my family. But yeah, it's not great. I hope that it warms up there soon and everything can come back online. But yeah, today, we're very happy to be talking with Howard Masur, who is in a place that is very used to being cold and snowy. So yeah, Howard, do you want to introduce yourself? Tell us a little bit about yourself?
Howard Masur: Okay, thank you. First of all, thank you very much for inviting me to do this. I've been very excited thinking about it. Yes, I'm on the math faculty at the University of Chicago. And I've, I guess, been working in mathematics for quite a long time and still enjoy it a great deal. It's a major part, a very big part of my life. And your invitation to talk about my favorite theorem led me to, you know, think about what that would be and why I chose what I did. And it made me think that, yes, what I really like the most in mathematics, or one of the things, is mathematics that connects different fields of mathematics. And maybe unexpectedly connects different fields. And I personally have worked on and off in complex analysis and geometry and dynamical systems, another field. And I love the part of mathematics that sort of connects them.
EL: Well, that's perfect. Because, I mean, you're a frequent collaborator with my husband, Jon Chaika, but also with my advisor, Mike Wolf, who, you know, isn't quite in the same area of math generally. So yeah, you have worked in a lot of different fields; I feel like your name pops up in a very wide range of things related to geometry, analysis, dynamics. But yeah, you've got your finger in a lot of pots.
KK: Right. Well, okay, so what is it? What's your favorite theorem?
HM: Okay. It's called the Riemann mapping theorem.
KK: Yes.
HM: So, let me give a little bit of background. The first thing: it involves subsets of the plane which are called simply connected. And this is a notion from topology. And let me just say, I looked at one of your podcasts and someone else talked about the Jordan curve theorem, where if you have a simple closed curve in the plane — it could be very, very complicated — then it has an inside and an outside, and the inside is simply connected. And a way of thinking about what simply connected means, heuristically, is that it has no holes. But as also has been pointed out, Jordan curves can be very complicated. Certainly they can be simple, looking like a circle. The inside of a circle is simply connected, the inside of a rectangle. But on the other hand, the Jordan curve can be very complicated, like a snowflake, a Koch — I never remember how to pronounce that; is it “coke” snowflake?
KK: Let’s go with Koch [pronounced “coke”].
HM: Pardon me?
KK: Let’s go with that.
HM: Okay. And so that's very complicated. The boundary — the curve is a fractal. So already simply connected domains can be very complicated, but they don't even have to be just the inside of a Jordan curve. You could take the plane itself, there's a very simple example. You could take all the positive real numbers, include zero, and take it away from the complex plane. So the plane minus the positive real axis, also subtracting the origin, that's simply connected, it doesn't have any holes. And it's not the inside of a curve. You could also, on the other hand, here's something that isn't simply connected: you could remove the interval [0,1], including zero and one, from your plane, just that interval on the real axis. And that is not simply connected, because the plane minus that interval has a hole, which is the interval [0,1] itself; it can be thought of as a hole. So that's the notion of simply connected. I don't know whether I should say more. I mean, that's what I thought to say about what simply connected means.
KK: That’s great. Yeah, yeah, that's a good explanation.
HM: Okay, and so that's a topological notion. And then the other thing that goes into this theorem is a notion from geometry, well, actually a notion from geometry and a notion from complex analysis. But let me take a basic notion from geometry, which is called conformal. And the idea is that if you have two domains in the plane, and you have a transformation from one to the other, you say it's conformal if it's angle-preserving. So that means that if in the first domain you have a pair of arcs — or maybe you prefer to think of them as straight lines, but it's better to think of a couple of arcs — that meet at a point, and then you apply the transformation, you get a pair of arcs that meet in the image under the transformation. And you could measure the angle that you started with between the pair of arcs and the angle between the images of the pair of arcs, and if the angles are equal at every point for every pair of arcs at those points, then you say the transformation is conformal, angle-preserving. Now, in some ways, the nicest — so let me give some examples that are and are not. The nicest transformations, certainly of the plane, are linear transformations.
KK: Sure.
HM: Given by two by two matrices, and they turn out not to be typically conformal. There are some that are, for example, a rotation about the origin is conformal. You know, if you have two lines and you rotate them, the angle they make after rotation is the same as the angle they started with. If you — this isn't strictly a linear transformation, it’s called affine — if you take a translation of the plane, if you take every point and you add the same vector, think of them as vectors, that's angle-preserving, that's a conformal transformation. Here's another one that's back to linear. If you take, for example, every point, which has, say, coordinates (x,y), and you multiply x by 2 and y by 2, so you multiply the coordinates by the same number, 2. That's called a scaling. And that's angle-preserving. One can sort of check that out. What that transformation does is, for example, it takes a square with one vertex at the origin, a unit square, and then another vertex on the x-axis at the point (1,0) and another at the point (0,1), last point at (1,1), and it takes a unit square to a two by two square, and that's angle preserving. But that's it — well, and the composition — but typical linear transformations are not angle-preserving. So, for example, if you took (x,y) and the transformation took (x, y) to (2x, y/2), so it multiplies in the x direction by 2 and multiplies in the y direction by a half, it takes a unit square into a rectangle, and that's not angle-preserving. It preserves the right angles, but it doesn't preserve other angles.
EL: Yeah, you can imagine the diagonal is, you know, [demonstrates with arm gestures that are very helpful to podcast listeners].
HM: The diagonal moves closer to the x-axis, so the diagonal, which made an angle of 45 degrees, will be moved toward the x-axis. The x-axis goes to itself, and the image of the diagonal is moved closer to the x-axis. Yeah, exactly.
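As an editorial aside, here is a small Python sketch (not from the episode; the helper names are just illustrative) that checks the examples Howard describes numerically: a rotation and a uniform scaling leave the 45-degree angle between the x-axis and the diagonal unchanged, while the map (x, y) -> (2x, y/2) does not.

```python
import math

def angle_between(u, v):
    """Angle, in degrees, between two nonzero vectors in the plane."""
    dot = u[0] * v[0] + u[1] * v[1]
    cos = dot / (math.hypot(*u) * math.hypot(*v))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def apply(matrix, v):
    """Apply the 2x2 matrix (a, b, c, d), read row by row, to v = (x, y)."""
    a, b, c, d = matrix
    return (a * v[0] + b * v[1], c * v[0] + d * v[1])

# Two tangent directions meeting at a point: the x-axis and the diagonal.
u, v = (1.0, 0.0), (1.0, 1.0)
print(round(angle_between(u, v), 2))  # 45.0 before any transformation

theta = 0.7
rotation = (math.cos(theta), -math.sin(theta), math.sin(theta), math.cos(theta))
scaling = (2.0, 0.0, 0.0, 2.0)   # multiply both coordinates by 2
squeeze = (2.0, 0.0, 0.0, 0.5)   # stretch in x by 2, shrink in y by 1/2

for name, m in [("rotation", rotation), ("scaling", scaling), ("squeeze", squeeze)]:
    print(name, round(angle_between(apply(m, u), apply(m, v)), 2))
# rotation and scaling report 45.0; the squeeze pulls the diagonal toward
# the x-axis, so the angle drops to about 14.04 degrees.
```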
So maybe there aren't so many conformal linear transformations of the plane to itself. And so let me tell you what the theorem is, and this is a beautiful, beautiful theorem, I think, and it was really a cornerstone, in the 19th century, of the beginnings of complex analysis. Oh yes, I'm sorry. Before I do that, let me also connect conformal, as I had mentioned, to complex analysis. One can also think of the Euclidean plane as the complex plane, where (x,y) becomes x+iy, becomes a complex number z. And then conformal, another way of saying it, is that the map, the transformation from some region in the plane to some other region in the plane, has a complex derivative. It's what you call complex analytic. It has a derivative and the derivative is not zero. Again, I looked at your podcast. Someone talked about the Cauchy-Riemann equations, and that's exactly what complex analytic means: the Cauchy-Riemann equations hold. Where w is u+iv and z is x+iy, it's complex analytic if u_x = v_y and −u_y = v_x. Those are the Cauchy-Riemann equations, and that's from complex analysis. It has the names Cauchy and Riemann, who were in some sense the founders of complex analysis. And that's equivalent to conformal. So even there, just in this, there's already kind of an amazing theorem that relates — I think you had somebody on your podcast talk about this — that relates complex analysis to geometry, conformal meaning angle-preserving and complex analytic meaning, let's say, the Cauchy-Riemann equations hold.
KK: Right.
HM: Okay, so the theorem is that if I take any simply connected domain in the complex plane, other than the complex plane itself, okay? And I take the unit disc — so that's the inside of the circle of radius one, so that's simply connected — I can find a conformal transformation from the unit disc to this simply connected domain, and, maybe thinking about the inverse, it's a conformal transformation from that (maybe crazy) simply connected domain to the unit disc. And so that's the Riemann mapping theorem.
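For readers who want to see on paper what Howard has just described, here is one standard way to write it down (an editorial sketch in LaTeX, not a quote from the episode):

```latex
% Cauchy--Riemann equations: writing f(x + iy) = u(x,y) + i\,v(x,y),
% the map f is complex analytic exactly when
\[
\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y},
\qquad
\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x},
\]
% and it is conformal wherever, in addition, f'(z) \neq 0.
%
% Riemann mapping theorem: if U is a nonempty, open, simply connected
% subset of the complex plane with U \neq \mathbb{C}, then there is a
% conformal bijection from the open unit disc onto U:
\[
f \colon \mathbb{D} \to U, \qquad \mathbb{D} = \{\, z \in \mathbb{C} : |z| < 1 \,\}.
\]
```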
EL: Yeah, and it's just amazing. I mean, I think there's part of me that still doesn't believe that it's true. Actually, a month or two ago, I don't know exactly when, I was brushing my teeth or something and just thinking, why hasn't someone picked the Riemann mapping theorem yet for My Favorite Theorem?
HM: Okay, all right.
KK: It's a really mind-blowing theorem. So when I teach the undergraduate complex analysis course that we have, I don't get to it until the very end.
HM: Yeah.
KK: And it's kind of hard. You can't even really prove it, especially at that level, but students just look at me like, there's no way this is true. This just can't be true. So it's really remarkable that anything — I mean, you're right. These simply connected domains can be bizarre. But they're conformally equivalent to the unit disc. That just blows my mind still. Yeah.
EL: Yeah. It's just hard to imagine, like, this fractal snowflake, you know, how can you straighten that out enough to just be like a circle?
HM: Let me contrast it — and this also goes back kind of to the founding mathematicians of the subject. If I take what's called an annulus, let's say I take the circle of radius 1, and I take the circle of radius R, where R is bigger than 1, and I take the region between them. So the region between two concentric circles, that's not simply connected because it has a hole, namely, the inside of the unit circle. And if I take one annulus with inner radius 1 and outer radius R, and I take another one with inner radius 1 and outer radius R’, where R’ is not equal to R, so it's a different outer radius, they are not conformally equivalent, even though they have very simple boundaries; they're circles. So there is something very, very special about simply connected. And that's also kind of what makes the theorem amazing. And then the fact that it doesn't work for something not simply connected started a whole field of mathematics that has been going on for close to 200 years.
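For reference, the contrast Howard is drawing can be stated compactly; this is the standard classification of round annuli, added editorially rather than quoted from the episode:

```latex
% The annulus A(r, R) = { z : r < |z| < R } is conformally equivalent to
% A(r', R') if and only if R/r = R'/r'.  The conformal invariant is the modulus
\[
\operatorname{mod} A(r, R) = \frac{1}{2\pi} \log \frac{R}{r},
\]
% so two annuli with inner radius 1 and different outer radii R \neq R'
% are never conformally equivalent, exactly as described above.
```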
EL: And so was this kind of a love at first sight theorem for you the first time you saw it?
HM: You know, I guess I'm not 100% sure. I was in college a little while ago, and I don't think I had complex analysis in college. And so I may not have run into it then. But certainly I learned it as a first-year graduate student at the University of Minnesota, and my professor, who within a year became my PhD thesis advisor, that was somehow his field. And that led me — again, I can't exactly say it led me to what I do — but, you know, it certainly had a big influence, and things that I do sort of have grown out of this whole history, from the Riemann mapping theorem.
KK: So, is this one of those theorems that is actually named correctly? Did Riemann actually prove it?
HM: I don't know, I'm not a historian. You know, I mean, I could ask. For that matter, are the Cauchy-Riemann equations named after the right people? Yeah. I mean, I know the modern proof that one sees in books on the Riemann mapping theorem is not due to Riemann. It's, I think, early 20th century.
EL: Is it Poincaré maybe?
HM: You know, my mind is going blank here for a second.
EL: It’s someone.
HM: I don't know. I'm not a historian, and I did not look it up to say “Does Riemann really deserve credit?”
KK: But wait, I looked at Wikipedia. I’m cheating. The first rigorous proof of the theorem was given by William Fogg Osgood in 1900.
HM: Oh, okay. Okay. Yeah.
KK: So apparently Riemann, this is in his thesis, actually. But there were some issues, it depended on the Dirichlet principle. And Hilbert sort of fixed it enough that it was okay. But Osgood is credited with the first rigorous proof.
HM: Well, isn’t it also somehow the case again, that mathematicians 200 years ago did not quite have the rigor that we have now?
KK: That’s true. Cauchy sort of put limits on the right footing more or less, but I think it still took a little while to get it cleaned up, right? So are there any really interesting applications of this theorem that you like? Or is it just beauty for its own sake?
HM: Gosh, you know, I'm not sure. I think beauty for its own sake, mostly. But also, to my mind, it opened up a whole branch of mathematics where you study, well, for example, surfaces. Or maybe it's the difference between topologists and geometers. A topologist says, famously, that a doughnut is the same as a coffee cup with a handle and so forth. And geometers say, well, we could put different ways of measuring angles, different metrics, on a torus that are not conformally equivalent, that there's no transformation from one to the other that preserves angles.
KK: Right.
HM: And this Riemann mapping theorem says, no, you can't do that for simply connected domains. They are conformally the same. But as soon as we move to topologically more complicated things like a torus, or even these annuli, or surfaces with more holes, higher genus, then there are different ways of putting metrics on them and measuring angles and so forth. And so it opened up something that also has Riemann's name on it, the Riemann moduli space, the study of all such metrics on a space. And so, yeah, again, I haven't thought of an application so much to other fields, but it's a beautiful and unexpected theorem that opened up whole vistas of mathematics, I think, in the last whatever. I don't remember when Riemann stated this problem. When did he live, in the 1840s?
KK: Yeah, middle 1800s.
HM: So it's been 175 years or something that people have been studying consequences, in some sense, of this, or analogs of this?
EL: Yeah. Well, and so this is something I never wonder about at the right time to check and see: is there a place where you can go and say, like, this is my domain 1 — and maybe it's a square or maybe it's the flag of Nepal or something — and this is my domain 2, or just the unit circle, and here is the conformal map between them? Is that something that exists?
HM: Typically not. There are certainly examples where you can, but it's very, very rare that you can write down an explicit formula for the map. That's, again, maybe why it's such a beautiful theorem. But you cannot in general. Let's see, I hope I'm not wrong — maybe you can do it for a circle to an ellipse. Maybe. I'm not 100% sure. There are people who obviously know much, much more about finding something explicit. But in general, no. If you take some crazy Jordan curve, no way do you know an explicit formula.
EL: You just know it's there.
HM: You know it's there.
KK: Well, that's important, though, right? If you're going to go looking for a needle in the haystack, you do, in fact, want to know there's a needle in it.
EL: Yeah.
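As a footnote to this exchange: one of the few simply connected domains for which the Riemann map from the disc can be written down in closed form is the upper half-plane, via the Cayley transform w = i(1 + z)/(1 - z). Here is a small numerical sanity check in Python (an editorial sketch; the function name cayley is just illustrative):

```python
import cmath

def cayley(z):
    """Cayley transform w = i(1 + z)/(1 - z): an explicit conformal map
    sending the open unit disc onto the upper half-plane."""
    return 1j * (1 + z) / (1 - z)

# Points inside the disc should land in the upper half-plane (Im w > 0)...
for r, theta in [(0.0, 0.0), (0.5, 1.0), (0.9, 2.5), (0.99, -1.2)]:
    z = r * cmath.exp(1j * theta)
    w = cayley(z)
    print(f"|z| = {abs(z):.2f}  ->  Im(w) = {w.imag:.4f}")

# ...and boundary points (other than z = 1, where the map blows up)
# should land on the real axis, up to floating-point error.
for theta in [0.3, 1.0, 2.0, 3.0]:
    w = cayley(cmath.exp(1j * theta))
    print(f"boundary point at angle {theta}  ->  Im(w) = {w.imag:.2e}")
```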
KK: All right. So another fun part of our podcast is we ask our guests to pair their theorem with something. So we're dying to know what pairs well with the Riemann mapping theorem?
HM: Well, I thought about that a lot.
KK: This is the harder part.
HM: I tried desperately to find food, but I couldn't think of really the right thing. So I do love music. And this is maybe crazy far-fetched, but I paired it with Stravinsky's Rite of Spring only because to me, this Riemann mapping theorem revolutionized geometry and complex analysis. And I think of the Rite of Spring of Stravinsky, which was premiered in the early 20th century, revolutionized modern music, contemporary music. That's the best I can do.
EL: I like that.
KK: Well, I do too. And for all we know, there were riots after Riemann published his theorem.
HM: Could have been.
KK: You know, “there’s no way this is true!” Mathematicians stormed out.
HM: Maybe he gave a lecture and people threw tomatoes at him.
EL: Yeah, well, I must say, when I was thinking about asking you to be on the podcast, I did think about the many wonderful meals that we have shared together, and I know that Howie is a great appreciator of the finer things in life, including music. I think we've gone to a concert together. So I thought that this would be an excellent thing. I was talking with Jon earlier about what Howie was going to pair with it, and my first thought was actually pancakes, which I think are a little pedestrian, but you can make them into so many different shapes. There are people who, if they pour the batter on in a certain way, can get these beautiful pictures, because part of the batter cooks longer than the rest of it, and so you've got shading. I've seen Yoda and, I don't know, all sorts of different things; there's an Instagram account. I'll say that Jon first suggested jigsaw puzzles, but there's only one right way to do those. Then he said tangrams, you know, those things with all the shapes, a square and triangles and so on, that you can rearrange to make all these different shapes, although those are non-continuous maps, so it wouldn't be quite as good. But I do like the Rite of Spring. And it means that Stravinsky is doing really great on My Favorite Theorem, because Eriko Hironaka actually picked Stravinsky also.
KK: Firebird.
HM: She picked the Firebird. I'll have to look at her episode. Maybe I'll set up a Zoom meeting with her and we can compare the music and the math.
EL: Yes. But I like that. And I am now going to ret-con in some riots following Riemann declaring that you can make these conformal maps.
KK: Well, this has been great fun. I do love the Riemann mapping theorem, and Howie, thanks for joining us.
HM: Well, thank you for having me. It was a pleasure.
On this episode of My Favorite Theorem, we were happy to talk with Howard Masur, a math professor at the University of Chicago, about the Riemann mapping theorem. Here are some links you might find interesting as you listen.
Masur's website
Evelyn's article about the Koch snowflake
Jeremy Gray's article about the history of the Riemann mapping theorem (pdf)
A recording of Stravinsky conducting the Rite of Spring
Did the Rite of Spring really cause a riot at its premiere?
Kevin Knudson: Welcome to My Favorite Theorem, a math podcast. We need a better tagline, but I'm not going to come up with one today. I'm Kevin Knudson, professor of mathematics at the University of Florida. Here is your other host.
Evelyn Lamb: Hi, I’m Evelyn Lamb, a freelance math and science writer in Salt Lake City. And I think that our guests might be able to help us with that tagline. But we'll get to that in a moment because I have to share with you a big kitchen win I had recently.
KK: Okay.
EL: Which is that that I successfully worked with phyllo dough! It was really exciting. I made these little pie pocket things with a potato and olive filling. It was so good. And the phyllo dough didn't make me want to tear out my hair. It was just like, best day ever.
KK: Did you make it from scratch?
EL: No, I mean, I bought frozen phyllo dough.
KK: Okay, all right.
EL: Yeah, yeah, I’m not at that level.
KK: I've never worked with that stuff. Although my son and I made gyoza last month, which, again, you know, is a lot of work too, because you start folding up these dumplings, and you know. They're fantastic. It's much better. So, yeah, enough. Now I'm getting hungry. Okay. It's mid-afternoon. It's not time for supper yet. So today we have a twofer. This is going to be great fun. It's like a battle royale going on here. This will be so much fun. So today we are joined by Pamela Harris and Aris Winger. And why don't you two introduce yourselves? Let's start with Pamela.
Pamela Harris: Hi, everyone. I like how we're on Zoom, and so I get to wave. But that’s really only to the people on the call. So for those listening, imagine that I waved at you. So I am super excited to be here with you all today. I'm an associate professor of mathematics at Williams College. And I have gotten the pleasure to work with Dr. Aris Winger on a variety of projects, but I'll let him introduce himself too.
Aris Winger: Hey everybody, I’m Aris Winger. I'm assistant professor at Georgia Gwinnett College. I've been here for a few years now. Yeah, no, we, Pamela and I have been all over the place together. I've been the honored one, to just be her sidekick on a lot of things.
PH: Ha, ha, stop that!
EL: So we're very excited to have you here. You've worked on several things together, and the reason I thought it would be great to have you on is that one of those things is a podcast called Mathematically Uncensored. It's a really nice podcast, and I think it has a fantastic tagline. I was telling Aris earlier that it just made me very jealous. We've never quite gotten, like, a snappy tagline of our own. So tell us what your podcast tagline is, and a little bit about the podcast.
PH: Maybe I can do the tagline. So our tagline is “Where our talk is real and complex, but never discrete.”
AW: Yeah, that's right. That is the tagline. And yeah, it's a good one. And sometimes I have to come back to it time and again to remember, so that we live up to that during the podcast. We're taping the podcast later today, actually. And so it should be out on Wednesday. So yeah, the show is about really creating a space for people of color in the mathematical sciences and in mathematics in general, I think. And so one of the ways—I think for us the only way that can happen—is we have to start having hard conversations. Right. And so a realization that comfort and staying on the surface level of our discussions doesn't allow for us to have the true visibility that all people in mathematics should have. And so for too long, we've been talking surface-level and saying, “Oh, we have diversity issues. Oh, we should work harder on inclusion.” No, actually, people are suffering. No, actually, here's our opinion. And stop talking about us; start talking to us. So it really is a space where we're just like, you know what, screw it. Let us say what we think needs to be said. Listen to us. Listen to people who look like us. And yeah, it's hard. It's hard to do the podcast sometimes because when you go deeper and start to talk about harder topics, then there are risks that come with that. Pamela and I, week after week, say, “Oh, I don't know if I really should have said that.” But, you know, it's what needs to be said, because we're not doing it just for us. We're doing it to model what needs to happen from everybody in this discipline, to really say the things that need to be said.
KK: Have you gotten negative feedback? I hope not.
AW: Yeah, that's a good question. So I mean, I think that the emails we've gotten have been great and supportive. But I think, so for me, I'm expecting no one to say — I'm expecting the usual game as it is, right, that people aren't going to say anything, but of course there's going to be backlash when you start saying things that go against white privilege and go against the current power structures. You know, I'm expecting to be fired this year.
PH: Yeah, those are the conversations that we have constantly — that we’re having on the podcast are things that Aris and I are having conversations about privately. And so part of what's been really eye-opening for me in terms of doing a podcast is that I forget people are listening. There are times Aris and I are having just a conversation, and I forget we're recording. And I say things that I normally would censor. If I were in a mixed crowd, if I were in a department meeting, if I were at a committee meeting for, you know, X organization. And I think it's not so much that we would receive an email that says, “Hey, you shouldn't have said X, Y, and Z,” it’s that we are actually getting targeted. For example, I was just virtually visiting Purdue University giving a talk about a book that Aris and I wrote, supporting students of color. And accidentally, the link got shared to the wrong people. And all of a sudden, I'm getting Zoom-bombed at a conversation. That's targeted, right? So those are the kinds of things that we are experiencing as people of color, and we have to have conversations about how are we ensuring that this isn't the experience when you bring a Black or brown mathematician to talk virtually at your colloquium. And if we're not talking about that, then no one is talking about that, because people are trying to hide their dirty laundry. Purdue University is not putting out an email to their alumni saying, “By the way, we invited Pamela Harris to show up and talk about how we best support students of color. And then we got Zoom-bombed, and somebody was writing the N-word and saying f BLM.” Right? Like, that's not happening.
AW: Yes. Wait, they didn't say anything about it?
PH: Well, they're actively doing things about it. But you know they're not putting out the message.
AW: Right. So then it gets sanitized, right? So a traumatic attack gets sanitized to be something else. There are two things about the podcast that Pamela and I, and the Center for Minorities in the Mathematical Sciences, are really trying to do: call out these things, but then not center them, right, because the podcast itself is supposed to be about our experiences. But in a lot of ways there's a significant part of our experiences that is tied to having to continuously fight against this type of oppression against us.
EL: Yeah. And I think it's really important to have that. And it's so important that it decenters— I think I was listening to an episode recently where you talked about the white gaze and what you have to deal with all the time in trying to present things to a majority white audience. And I think it's really important for us white people to listen to this and realize that not everything is about and for us. And I mean, there are so many things in life where this is true: movies, TV shows, books and stuff. And yeah, I think it's great that your voices are there and having these conversations, and I think that people should listen to your podcast.
AW: I appreciate that. Yeah. Because it requires a deep interrogation, a self-interrogation by white people to really deal with the feelings. Let me just step back and give the usual disclaimer. Everybody's nice. Everybody's good. Nobody's mean. Nobody is a bad person. Let me just say all that to get that out of the way, right? But what we're talking about is that when I say something on the show, when Pamela says something on the show and you get this feeling like “Wow, that doesn't feel good to me,” then you need to take some time and figure out why it is that you're feeling this way. And it's tied to your privilege, something that you need to interrogate, and it will make you a better person, and it will be better for everybody.
KK: I don't know. I can't wrap my head around people, like, Zoom-bombing. This is nothing that would ever come to my mind. “You know what, I'm going to go Zoom-bomb this person.” I just…
EL: Well, I mean, it’s just a bad way to spend your time, but not everyone has the same time priorities.
AW: Well, no. So I think that's a great question. And let me just say that that's how deep and pervasive it is in people, right, that people grow up being raised by other people who have ingrained within them something that, fundamentally, in some sense, just burns their soul: to have somebody who does not look like them, someone who is “lesser” than them, take the center stage, be deemed the expert. And so again, I'm not calling these people bad. But there is something within some of us that says — and it’s called white supremacy, by the way — that we all have, that we all have to fight, that is so ingrained in some people that they feel compelled to do it. And so, again, no one's going to fix that for them. And the person who did this to Pamela has it in spades, right? And so when we say that, I think too often we make it an intellectual exercise, right? We say that it just makes no sense. Right? It doesn't make any sense because white supremacy makes absolutely no sense. But it is a thing. And it's there. And that's what it is, right? So I've been working a lot on calling, naming things so that we don't get confused, because as long as we don't name it, then it just gets to be out there. Like, “Oh, I don't understand.” We understand this exactly. It's called white supremacy. And we need to fight it in our discipline, and across the board.
PH: And it doesn't always just show its face via Zoom-bombing with the N-word in the chat, right? It shows up with who you invite to your podcast. It shows up with who's winning awards from our big national organizations. It shows up with who gets tenure, who even lands into a tenure track position, who even gets to go into graduate school, who actually majors as a mathematician, who actually goes to college, who actually graduates high school, who actually gets told that they're a mathematician. Right? So this is showing its ugly head in very visual ways that we all feel a huge sense of, “Oh, no, this is terrible. I'm sorry, this happened to you.” But the truth is that white supremacy is in everything within the mathematical sciences. And so you know, we got to pull it at its root, my friends. At its root!
AW: Yes.
PH: So this was just one way in which it showed itself, but I want to make it clear that it is pervasive.
KK: Sure. Right.
EL: So what I love about hosting this podcast is that we get to know both people and their math and their relationship to their math. And so we're gonna pivot a little bit now, maybe pivot a lot now, and say, Okay, what are your favorite theorems? And, yeah, I don't know who wants to go first. But, yeah, what's your favorite theorem?
KK: Yeah, let’s hear it.
PH: I’ll do it. I’ll go first. I always like hearing Aris talk. So I'm just like “Aris, go,” right? But no, I’m going to take the lead today. Alright, so I wanted to tell you about this theorem called Zeckendorf’s theorem. I don't know if you know about it.
KK: I do not.
PH: And it goes like this. So start with the Fibonacci numbers without the repeated 1. So 1, 2, and then start adding the previous two, so 3, 5, 8, and so on. Alright. So if you start with that sequence, his theorem says the following: if you give me any positive integer N, I can write it uniquely as a sum of non-consecutive Fibonacci numbers.
AW: Oh, wow!
EL: Uniquely?
PH: Yes. And this is why you need to get rid of the 1, 1. Because otherwise you have a choice. But yeah. So it's hard to do off the top of my head, because I'm not someone who, like, holds numbers. But say, for example, we wanted to do 20. Maybe we wanted to write the number 20 as a sum of Fibonacci numbers that are not consecutive. So what would you do? You would find the largest Fibonacci number that fits inside of 20. So in this case, it would be 13.
AW: Yeah.
PH: 13 fits in there. Okay, so we subtract 13. We're left with 7. Repeat the pattern.
KK: Ah, five and two.
KK: Five and two! They're non-consecutive.
KK: Okay.
PH: Yeah.
AW: Wow!
PH: Three is in between them, and eight is in between the others. And so you can do this uniquely. And so this is using what's known as the greedy algorithm because you just do that process that I just said, and it terminates because you started with a finite number.
KK: Sure.
PH: And so the proof, of course, there's the, you know, “Can you do it?” but then “Can you do it uniquely?” So the thing that you would do there is assume that you have two different ways of writing it, each of which uses non-consecutive Fibonacci numbers, and then you would argue that they end up being exactly the same thing. So that, in fact, they use the same number of Fibonacci numbers and that those numbers are actually the exact same.
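[Editor’s note: for readers who want to play along at home, here is a minimal Python sketch of the greedy decomposition Pamela describes. The function name and presentation are ours, not from the episode.]

```python
def zeckendorf(n):
    """Greedy Zeckendorf decomposition of a positive integer n.

    Returns a list of non-consecutive Fibonacci numbers (from the
    sequence 1, 2, 3, 5, 8, ...) summing to n, largest term first.
    """
    # Build the Fibonacci sequence 1, 2, 3, 5, 8, ... up past n.
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])

    parts = []
    remaining = n
    for f in reversed(fibs):
        if f <= remaining:  # take the largest Fibonacci number that still fits
            parts.append(f)
            remaining -= f
    return parts

print(zeckendorf(20))  # [13, 5, 2], the example from the episode
```

Because the remainder after subtracting a Fibonacci number is always smaller than the previous Fibonacci number, the greedy choice automatically skips the consecutive one, which is why the output never uses two consecutive Fibonacci numbers.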
KK: Sure, okay.
EL: Yeah. Like I'm trying to figure out — and I don't, I also am not super great at working with numbers in my head just on the fly. But yeah, I'm trying to figure out, like, what would have gone wrong if I had picked eight instead of 13 to start with, or something? And I feel like that will help me understand, but I probably need to go sit quietly by myself and think about it. Because there’s a little pressure.
PH: Yeah, it's a little subtle. And it might be that you don't get big enough, you end up having to repeat something.
EL: Yeah, I feel like there's not enough left below eight to get me there without being consecutive.
PH: Yeah. Right.
AW: Right. Because you’ve got to get 12. Yeah, yes. Yeah.
KK: Yeah, it makes sense, right? Like, I guess, you know, if you pick the largest one less than your number, then it's more than halfway there. That's sort of the point, right? So that's how you prove it terminates, but also the non-uniqueness seems like the hard part to me somehow, but also the non-consecutive. Wait a minute, I don't know which it is.
AW: Well it sounds hard, period.
KK: Yeah. I like this theorem. This is good. What attracts you about this theorem? What gets you there?
PH: So, in part of my dissertation, I found a new place where the Fibonacci numbers showed up. And so once I found Fibonacci numbers somewhere new, I was like, what else is known about these beautiful numbers? And so this was one of those results that I found, you know, just kind of looking at the literature. And then I later on started doing some research generalizing this theorem. So meaning, in what other ways could you create a sequence of numbers that allows you to uniquely write any positive integer in this kind of flavor, right, that you don't use things consecutively, and consecutively, really, in quotes, because you can define that differently. And so it led me to new avenues of research that then I got to do. Those were the first few research projects with some of my undergraduate students at the Military Academy. And then I learned through them — they looked him up — that he actually came up with this theorem while he was a prisoner of war.
AW: Oh, wow!
PH: This is when Zeckendorf worked on this theorem. And to me, this was really surprising that, you know, my students found this out. And then I was like, “See, mathematics, you can just take it anywhere.” Like this poor man was a prisoner of war, and he's proving a theorem in his cell.
KK: Well Jean Leray figured out spectral sequences in a German POW camp.
PH: I did not know that!
AW: Anything to pass the time.
KK: What else are you going to do?
EL: I mean, Messiaen composed the Quartet for the End of Time — I was about to say string quartet, but it's a quartet for a slightly different instrumentation, in a concentration camp, or a work camp. I'm not sure. [Editor’s note: it was a German prisoner-of-war camp, Stalag VIII-A.] But yeah, I'm always amazed by people who can do that kind of creative work in those environments, because I feel like, you know, I've been stuck in my house because of a pandemic, and I'm, like, falling apart. And my house is very comfortable. I have a comfortable life. I am not as resilient as people who are doing this. But yeah, that is such a cool theorem. I'm so glad that you said that. And I'm trying to think, like, Lucas numbers are another number sequence that are kind of built this way. And so is there anything that you can tell us about the sequences that you were looking at, like, I don't know, does this work for Lucas numbers? I don't know if you've looked at that specifically, or did you look at ways to build sequences that would do this?
PH: Yeah. So we started from the construction point of view. So rather than give me a sequence, and then tell me how you can uniquely decompose a number into a sum of elements in that sequence, we worked backwards. So one of the research projects that we started with is what we called — there's a few of them — but one, it was a “Generacci” sequence. And so what we would do is, instead of thinking of the numbers themselves, imagine that you have buckets, an infinite number of buckets, you know, starting from the first bucket all the way to infinity, and you get to put numbers into the sequence in the following way. So you input the number 1 to begin with, because you need a number to start the sequence. And since you want to write all positive integers, well, you’ve got to start with 1 somewhere. So you stick the number 1 in the first bucket. And then you set up some system of rules for which buckets you can use to pull numbers from that then you add together to create new numbers. Well, you only have one bucket, and you only put the number 1 in it. So then you move to the next bucket. Well, okay, you want to build the number two, and you only have the number one, and as soon as you pull it from the bucket, you don't have any other numbers to use. So let's stick the number two in the second bucket. Oh, well, now maybe in my rule I could grab a number from two buckets, and add them together to get the next number. Oh, that starts looking familiar. The third bucket will not have the number three, because you were able to build it. So what next number could you grab? Well, maybe you can stick the 4 in there. And so by thinking of buckets, and of the numbers you can fill the buckets with (the numbers you couldn't create by grabbing numbers out of previous buckets under certain rules), you now start constructing a sequence. And provided that you very meticulously set up the rules under which you can grab numbers out of the buckets to add together to build new numbers, then you do not need to add that number into the buckets, because you've already built it.
EL: So what rules you have about the buckets will determine what goes in the buckets.
PH: Exactly, exactly. So you might say okay, maybe our buckets can contain three numbers. And you're not allowed to take numbers out of consecutive buckets, or neighboring buckets, or you must leave five buckets in between. So what must go into the buckets to guarantee that you can create every single number and you can do so only uniquely? And so these are these bin decompositions of numbers. But you are working backwards. You start with all the numbers, and then you decide how you can place them in the buckets and how you can pull them from the buckets to add together. So I'm being vague on purpose, because it depends on the rules. And actually it's quite an open area of research, how do you build these sequences? You set up some capacity for your bucket, some rules for where you can pull from to add together. And the nice thing is that it's very accessible, and then it leads to really beautiful generalizations of these kinds of results like that of Zeckendorf.
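[Editor’s note: here is a small Python sketch of the work-backwards construction Pamela outlines, under one specific toy rule chosen for illustration (each bucket holds a single number, you may take at most one number from each bucket, and no two chosen buckets may be adjacent). The rule and the function names are our assumptions, not the definitions from the published papers; under this particular rule the construction recovers the Fibonacci numbers 1, 2, 3, 5, 8, ..., which is exactly the Zeckendorf setting.]

```python
from itertools import combinations

def reachable_sums(buckets):
    """All totals you can form by taking at most one number from each bucket,
    never using two adjacent buckets (the assumed toy rule)."""
    sums = set()
    indices = range(len(buckets))
    for r in range(1, len(buckets) + 1):
        for combo in combinations(indices, r):
            # combinations() yields sorted index tuples, so adjacency is easy to check
            if all(j - i >= 2 for i, j in zip(combo, combo[1:])):
                sums.add(sum(buckets[i] for i in combo))
    return sums

def build_sequence(num_buckets):
    """Work backwards: each new bucket receives the smallest positive integer
    that is not already reachable from the earlier buckets."""
    buckets = []
    for _ in range(num_buckets):
        sums = reachable_sums(buckets)
        candidate = 1
        while candidate in sums:
            candidate += 1
        buckets.append(candidate)
    return buckets

print(build_sequence(7))  # [1, 2, 3, 5, 8, 13, 21] under this rule
```

Changing the bucket capacity or the spacing rule changes which numbers get forced into the buckets, which is the game Pamela describes.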
EL: This is very cool.
AW: Fantastic.
EL: All right. So Aris, I feel like the gauntlet has been thrown.
KK: Yeah.
AW: Yeah, well mine is simple. This is not a competition. Yeah, no. I guess mine is influenced — I’ve been thinking about a bunch of different things, but I keep coming back to the same one, which I think is influenced by my identity as a teacher first and foremost: the fundamental theorem of calculus. I just keep coming back to that one. And I don't know how many people have used this one with you on this podcast before, but for me, it hits so many of the check marks of my identity in terms of thinking about myself as a mathematician and a teacher, in the sense that for a lot of students who get to calculus, it's one of the first major, major theorems that will show up in their faces that we actually call out and say this is a theorem. And we call it fundamental, right? We don't often bring up the fundamental theorem of algebra in college algebra, right? Or in other places, or the fundamental theorem of arithmetic, right? But so it's one of these first fundamental theorems. And so it also helps to tell the story of a course, right? And so that really hits the teacher part of me, where too often people in the calculus sequence spend all this time talking about derivatives, and all of a sudden, we just switch to anti-derivatives. And we don't really say why. You'll figure it out in the next couple of sections, and then we start adding rectangles, and we don't say why. And so, at least the way that the order of calculus has gone and how it's taught, in my experience, it really is this combination, like, oh, this is why we've been doing this. And this is the genius of relating two things. Sometimes I've gone in, and I've talked about, like, I put up a sine curve and a cosine curve, and we talk about how one of them measures the area under the curve. And then I pretend to bump my head and get amnesia. And then I'll come and say, “Oh, look, looks like we've been talking about derivatives. Right?” And they’re like, “Wait, what do you mean we're talking about derivatives?” “This is the derivative of this one.” And they’ll go, “What? We were measuring the area under the curve.” “Well, we’re also measuring the derivative, right?” This is the derivative, but this is measuring the area. And it's like, oh, right, and so it's just one of these “aha” moments, where if people have been paying attention, it's like, oh, that's actually pretty cool, right? And then also in terms of the subject itself in relationship to high school, just really thinking about — because I get a lot of students who know all the rules, right? And they look at the anti-derivative with the integral sign and say “That's the integral.” Well, that's an anti-derivative, right?
EL: We’re not there yet.
AW: Yeah, that the anti-derivative and the integral are actually different. And so just having that conversation. And it also is a place to talk about the history of the subject and stuff like this.
EL: Yeah, I love it. And, at least for me, I feel like it's a slow burn kind of theorem. The first time you see it, you're like, “Okay, it's called the fundamental theorem of calculus. I guess some people think it's really important.” So that might be your Calculus I class. And then you see it again, maybe in an introductory real analysis class. And you're like, “Oh, there's more here.” And then you teach calculus, and you’re like, “Ohhhh!”
AW: Oh right, yes!
EL: Your brain explodes. You're like, “This is so cool!” And then your students are where you were several steps ago. And they’re like, “Okay, I guess it’s all right.”
AW: In terms of success rate, I've had like three or four people go, “Whoa!” And it's like, okay, yeah, you're with me. And so this is out of hundreds of people.
EL: If you can get a few people that do that the first time they see it, that’s awesome.
AW: Yes. Yeah. No, it's been fulfilling for sure. And so then the proof itself, you know, it's also great, because then it culminates all of the theorems that you've been talking about beforehand. Depending on the proof, of course, but like, there's the intermediate value theorem. There's the mean value theorem for integrals. There's uniform continuity, at least in this version of it, in order for it to work. So yeah, it's great.
KK: So when you teach calculus, there are always two parts to the fundamental theorem. And so I like the one where the derivative of the integral is the function back, right? That's the fun, like for the mathematician in me, this is the fun part. Your students never remember that. Right? They always remember the other one, where we evaluate definite integrals by finding the anti-derivative. So I was going to ask, if you had to pick one of the two, which one is your favorite?
AW: I mean, part of it is because at least the way that I've taught it, we're coming out of the mire of Riemann sums.
KK: Right.
AW: And so people have suffered through doing rectangles so much. And then I just get to say, “Oh, you don't have to do this anymore.” I mean, I've had a few students go, you know, now that we do — I always use the antiderivative of x squared on zero to 10, or the area under the curve of x squared from zero to 10. And like, sometimes I'll say, “Oh, that's 1000 over 3, right?” And then they're like, “Well, how did you do that so quickly?” We'll see. Right? But then, at the end, I'll say okay, and then we do another one again. And then I show how to apply the theorem, and people say, “Well, why didn't you just say that?” And then we have a great conversation there about how this isn't about the answer, that this is about a process and understanding the impact of mathematical ideas, that the theorem, as with all theorems, but this one is my favorite, is an expression of deep human intellect. And that if we reframe what theorems are, we get a chance to rehumanize mathematics. And so I think that too often in our math classes, and our math discourse, we remove the theorems from the humanity of the people who created them. And so people get deified, like Newton and Leibniz, but you know, these same people had to sit down and work hard at it and figure it out.
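[Editor’s note: the example Aris mentions, written out. By the evaluation form of the fundamental theorem of calculus,
$$\int_0^{10} x^2\,dx \;=\; \left[\frac{x^3}{3}\right]_0^{10} \;=\; \frac{10^3}{3} - 0 \;=\; \frac{1000}{3},$$
which is exactly the area that the Riemann-sum rectangles were approximating.]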
KK: No, it’s certainly a classic, but it is surprising how little it has come up on our podcast. It was the very first episode.
AW: Oh, okay.
KK: Yeah. Amie Wilkinson chose it. And then this will be episode 60-something.
AW: Okay. Yeah.
PH: Wow.
EL: We've talked, we've mentioned it in some other episodes. But it isn’t — I mean, there are just — I love this podcast, obviously, I keep doing it. And there are just so many types of theorems. And I love that you two picked different types. Yours, Aris, is one of these classics. Everyone who gets to a certain point in math has seen it, hopefully has appreciated it also. And Pamela, you picked one that none of us had ever heard of and made us say, “Whoa, that's so cool!” And people just have so many relationships with yours. And that's what this podcast is really about. Actually it's not about theorems. It's about human relationships with theorems and what makes humans enjoy these theorems. And so you picked two different ways that we enjoy theorems. And I just love that. So yeah.
KK: Yeah, that is what we're about here, actually. I mean, yeah, the theorem. But actually what I like most about our podcast, so let's toot our own horn here. We’re trying to humanize mathematics. I think everybody has this idea that mathematicians are a very monolithic bunch of weird people who just — well, in movies we’re always portrayed as either being insane, or just completely antisocial. And I mean, there’s some truth in every stereotype, I suppose, but we are people, and we love this thing. We think it's so cool. And sharing that with everyone is really what's so much fun.
AW: Yeah. And I think also that, for me, the theorem itself, and what it reveals, touches something that’s inside of us. There’s something about it, right? There’s the “Whoa” part that is indescribable and that I think really touches our humanity. There is a eureka moment where you're just like, “Oh, I understand this now.” Or this connection is amazing, right? Yeah, it's indescribable.
KK: So we all agree these things are beautiful. So here's a question. Where do people lose this? I mean, I have a theory, but — because we've all had this experience, right? You're at a cocktail party and someone finds out you're a mathematician and, like, oh, record scratch: “I hate math.” Okay.
AW: Yes, yes, yes.
PH: But I don’t think they hate math, though, Kevin.
KK: No, they don’t. Nobody hates math. Nobody hates math when they're a kid. That's exactly right. So I think when they say that they mean that the algebra caused them trouble. When x’s started showing up.
PH: I don't even think that's it.
KK: Okay. Good. Enlighten me because I want an answer to this that I can’t find.
PH: I don't think it's that people hate math or that they hate that the alphabet showed up all of a sudden in math. It's that they hate how people have made them feel when they struggle with math. Math is an inanimate object. Math is not going out there and, like, punching people in the face. It's the way that people react to other people's math. Right? The second that you don't use the language in the way that somebody expects you to use it and you're trying to communicate properly and somebody says, “That’s not how you say it. It's not FOILing. It's called distributing!” Right? But you knew what I meant when I said FOIL the binomial!
KK: Of course I did.
PH: FOILing this gives you the middle term, blah, blah, right? So it’s again about human interactions. And if you make someone feel dumb, they'll never like what it is that they're trying to learn.
AW: Amen to that. And they will conflate the two, which is what always happens.
PH: That’s exactly it!
AW: They will replace the experience with the subject itself, when in fact, they're talking about the experience. Yeah. So yeah, we've been working a lot on this in the last few years, Pamela and I and Dr. Michael Young, on how when people say they hate mathematics, they’re really talking about their mathematical experience. So my immediate response to your question is just bad teaching. Let's just call it what it is.
PH: Right.
AW: I don't want to get on my podcast too early. We're recording later.
PH: We’re recording in a bit, yeah.
AW: But yeah, we're talking about people. And I say this as a loving critique of the greatest discipline in the history of people. I truly believe that, but I believe that the way we teach it, and the cultural norms we take with it, devalues people. And so I want every person who's listening to this now, the next time they hear somebody say they hate it, to look at them as an innocent person who had a bad mathematical experience. Because too often I see people in my community say they hate having these conversations with people who say they hate it. And I think we need to return innocence back to that person. And say that this is not a person who hates you or even hates the subject. This is a hurt person. Yes, this is a person who has been damaged in our subject. And by the way, I go much farther than that. It's our responsibility to try and help repair that, because this person is going to impact their cousin, their child, their relative, by bringing this hate of the subject, when in fact, it doesn't have anything to do with the subject.
EL: Yeah. It’s about the traumatic experiences. And actually, I think mathematicians often have a bit of a persecution complex and think this is the only place where people have this reaction. But one of my hobbies is singing, and in particular, singing with large groups of untrained people who are just singing because we love singing. And the baggage that people bring to singing is similar. I’m not saying it's entirely the same, but people have been made to feel like their voice isn't good enough.
AW: Yes.
EL: They have this trauma associated with trying to go out and do this sometimes. Obviously a lot of people love to sing and will do it in public. A lot of people love to sing at home and are scared of doing it in public because they're worried about, you know, their fourth grade music teacher, who told them to sing quieter, or whatever happened.
PH: Yes.
AW: That’s right. That's right. And the connection is similar, because what are we saying? We're saying that if you don't hit this right note, then it doesn't count. As opposed to if you don't get the answer seven, then we're not going to value you because the answer is seven, right? Because we have this obsession with the correct answer in mathematics. Right.
PH: And not only that, but also doing it fast.
AW: Yes.
PH: You and I have talked about this before, that — maybe in singing, this is different. I'm not sure. I definitely can relate to the trauma of never singing out loud in public. But is there this same sentiment that you must get it perfect the first time and pretend that it doesn't actually take you hours of training?
EL: I mean, it comes up. There’s definitely some of that: people can feel more valued if they're quicker at picking things up than others, although, you know, it's not the same. There's no isomorphism between these two, I don't know, to bring a little silly math lingo in. But there are definitely a lot of similarities, and I think about this a lot, because two things I love in my life are math and singing with my friends. And, you know, I just see these relationships. But yeah, I could go on a whole rant, and I want to not do that.
AW: No, no, no, I appreciate you bringing it up.
EL: But I think it's a really interesting correspondence.
AW: And then the final one is that, you know, in the music space, what is it that we really should be trying to do? Value everybody's voice. And in mathematics, we should be valuing everybody's contribution. Right? This is all we're saying. And what does each discipline look like when we value people's voices, no matter where they are on the keys? And we value everyone's contribution to trying to solve a problem.
EL: Yeah, yeah. And how can we help people, you know, grow in the way they want to? You can say, like, “Oh, I am not as good a sight reader as I want to be. How can I get better?” How can we help people grow in that way without feeling cut down?
AW: Yeah.
EL: Yeah, it is true for math, too. Yeah. It's just, everything is connected. Woo.
AW: Yes. But you know, we've been talking about, you know, these human relationships we all have with math. And so another part of our podcast that we love is forcing you to make one more human connection between math and something else with the pairing. So what goes well, Pamela, with this theorem about uniquely writing the numbers in terms of the Fibonacci sequence?
PH: So I was trying to think about my favorite food, and when it was the epitome of perfection, and I came up with, okay, so if we're going to pair it with something to drink, I was like, I want to think about happy moments. Because this feels like a happy theorem. And so I want to go with some champagne.
KK: Okay.
PH: Okay, I was like, “We're gonna go fancy with it!” But then for food, I'm thinking about, oh, this is hilarious. So I went to a conference in Colombia, we visited Tayrona which is a beach in Colombia. And on the side of the beach, I paid to have ceviche, fresh ceviche. And I've never been happier eating anything in my life. And so I imagine myself learning Zeckendorf’s theorem at the beach in Tayrona in Colombia, with some champagne and the ceviche.
EL: Oh man.
AW: Wow.
PH: Beat that, Aris! Beat. That.
AW: There’s no way. So wait, so I want to make sure I understand. So is this while you're reading the proof? Or is this while you’re—
PH: This is like the gold standard. If I were to put all the, like, uniqueness of my favorite food, my favorite drink and my favorite theorem, I would put them in a location which is Tayrona in Colombia, at the beach, eating ceviche sipping on some champagne, learning Zeckendorf’s theorem.
AW: Okay.
KK: Is this the Pacific coast or the Caribbean?
PH: You’re asking questions I should know the answer to, and I believe it’s the Caribbean.
KK: Okay.
PH: Nobody Google that. [Editor’s note: I Googled that. It is the Caribbean.] I have no idea where they took me in Colombia. I just went.
KK: Sure.
EL: Yeah, that sounds so lovely as I look out of my window where there's snow and mud from some melted snow.
PH: Ditto.
AW: So, yeah, I think for the fundamental theorem of calculus, I think this is something that's just classic. Like you're just having a nice pizza and some ginger ale. You're just sitting down and you're enjoying something that hopefully everybody likes and that connects with everybody, that everybody hopefully gets to see if they get that far. So yeah, I mean, my daughter recently — I didn't realize this. She's 9. And we were talking. We visited my aunt in DC. My aunt raised me. And my daughter was much younger at that time, but then every time she thinks about going to visit, she thinks about the ginger ale that my aunt got her, because that was the only time she ever got ginger ale. So she’s like, “Oh, I like your aunt, Daddy, because you know, I had ginger ale there.” And I was like, oh, I should have ginger ale more often. So that made me think of that.
PH: That’s adorable.
EL: I can really relate to that feeling of, like, when you're a kid, something that is totally normal for someone else isn't what's normal for your family. So you think it's a super special thing.
AW: It’s amazing.
EL: I think I had this with, like, Rice-a-Roni or something at my aunt's house, and my mom didn't use Rice-a-Roni, and I was like, “Whoa, Mom, you should see if you can find Rice-a-Roni.”
PH: Amazing.
EL: She was like, “Yeah, they have Rice-a-Roni here.”
AW: Rice-a-Roni’s the best.
KK: I haven't had that in years. I should go get some.
AW: Me either. All right.
PH: That’s how you know you made it.
KK: You know what? You know, single mom and all that, and I lived on Kraft macaroni and cheese when I was a kid. And yeah, you would think I don't like it any more. But, aw man.
PH: Listen, that thing is delicious. So good.
AW: I was about to say.
EL: They know what they’re doing. Yeah. Well, that's great. And I mean, pizza is my favorite food. As great as ceviche on the beach sounds, pizza, just, when you come down to it, it's my favorite food. And so I love that you paired the fundamental theorem of calculus with my favorite food.
KK: So I'm curious, there must be a human who doesn't like pizza, but have you ever met one? I've never met one.
PH: No.
EL: I know people who don't like cheese. And cheese is not — I mean, to me cheese is essential to the pizza experience, but you can definitely do a pizza without cheese.
AW: Yeah. No, my wife also always says that for her it's about the sauce. So I think she might be a person who can get rid of the cheese if the sauce is right. Yeah.
KK: But the crust better be good too.
AW: Of course, of course. It's a full package here.
EL: But okay, so you say that, but on the other hand, I would say that bad pizza is still really good.
KK: Sure.
EL: I mean, you can have pizza that you're like, “I wish I didn't eat that.” But I have very rarely in my life encountered a slice of pizza that was like, “Oh, I wish I had done something else other than eat that pizza.”
AW: It’s actually a pretty unbeatable combination, right? Tomato sauce, cheese and bread.
PH: Yeah. It kind of can't go wrong. Yeah.
KK: When I was in college, there was a place in town. It was called Crusty’s Pizza, and I don't think it exists anymore. And it was decidedly awful. But we still got it because it was cheap. So we would occasionally splurge on the good pizza. But you could get a Crusty’s pie for like five bucks.
AW: Absolutely.
KK: This is dating myself. But yeah, absolutely. Always. All right, so we've got theorems, we’ve got pairings. You've plugged your podcast pretty well, although you can talk about it more if you'd like. Anything else that either of you want to plug, websites, the Twitter?
EL: Yeah, but can you say a little more about the book that you mentioned?
AW: Yeah, the book is a series of dialogues that was an extension of an AMS webinar series that we gave about advocating for students of color in mathematics. And so we had just decided, you know, there was so much momentum, we had hundreds of people coming every time to the four-part series. And so we were like, you know, we've gotten to a place where we've given all these talks, and then you give talks, create momentum, and then it just ends. And we're just like, you know what, not this time. Let's create a product out of this. And so, we decided quickly to get the book together, just answering some of the unanswered questions from the webinar series. So we had the motivation, in terms of answering their questions. And yeah, we got it together. And it was an honor. So it really is just a list of our dialogues, a transcription of our dialogues, answering some of the unanswered questions from that webinar series. And so it's gotten some really good reviews, and people are using it in their departments. And so it's been fantastic so far.
PH: Yeah, I think that's the part that I'm really enjoying, getting the emails from people who have purchased the book. And so maybe I should say the full title, so it is Asked and Answered: Dialogues On Advocating For Students of Color in Mathematics. And the things that I hear from folks who have purchased the book — so thank you all so much for the support — is that they didn't expect that there is part of a workbook involved in the book. So it isn't just Aris and I going back and forth telling you things. I mean, a lot of that is there; that is part of the content. But there's also a piece about doing some pre-reflection before we start hearing some of the dialogue that we have, and then also the post part of it. So how are you going to change? And how are you going to be a better advocate for students of color in mathematics? And so it leaves the reader with really a set of tools to come back to time and time again. That's really what I see as a benefit of the book. And people are purchasing it as a department to actually hold some kind of book club and really think about which of the things that we suggest professors implement in their departments, in their classrooms, in their institutions they can actually do. And so the reception has been really wonderful. And I'm just super thankful that people purchase the book, and we're supporting our future work.
EL: Yeah. And can you also mention, is it minoritymath.org, the website that hosts Mathematically Uncensored?
AW: That’s correct. That's right. So yeah, that's the home of the podcast. And that's a place where we're trying to create voices for underrepresented minorities in the mathematical sciences. And so you can go there not just for the podcast, but for other content as well that centers around that experience.
KK: Okay.
EL: Fantastic. Thank you so much for joining us.
KK: Yeah.
EL: I had a blast.
PH: Thank you.
KK: This was a really good time.
EL: Yeah. Over lunch today, I'm going to be writing down numbers and writing them in terms of Fibonacci numbers. It’s great.
AW: It will be fantastic.
PH: Awesome.
AW: Thanks.
PH: Bye, everyone.
KK: Thanks, guys.
On this very special episode, we had not one but two guests, Pamela Harris from Williams College and Aris Winger from Georgia Gwinnett College, to talk about their podcast, Mathematically Uncensored, and of course their favorite theorems. Here are some links you might be interested in as you listen to the episode.
Winger's profile on Mathematically Gifted and Black
Mathematically Uncensored, the podcast they cohost
Minoritymath.org, the Center for Minorities in the Mathematical Sciences, a website with information and resources for people of color in mathematics
Asked and Answered: Dialogues On Advocating For Students of Color in Mathematics, their book
Zeckendorf's theorem and a biography of Edouard Zeckendorf
Jean Leray, a French mathematician who worked on spectral sequences as a prisoner of war
Olivier Messiaen's Quartet for the End of Time, composed when he was a prisoner of war
A paper generalizing the Zeckendorf theorem by Harris and coauthors
Our episode with Amie Wilkinson, who also chose the Fundamental Theorem of Calculus, making it 2 for 2 among mathematicians with the initials AW.
Evelyn Lamb: Hello, and welcome to My Favorite Theorem, the podcast from 2021. I don't know why I said that, just, it's a math podcast, and it is currently being taped in 2021. I'm your host Evelyn Lamb. I'm a freelance math writer in Salt Lake City, Utah. And this is your other host.
Kevin Knudson: Hi, I'm Kevin Knudson, professor of mathematics at the University of Florida. No, look, it's important to say it's 2021 because 2020 lasted for about six years. It was—I couldn't wait for 2020 to be over. I don't think 2021 is much better yet. It's January 5. I'll leave our listeners to figure out what's going on right now that might be disturbing. And, yeah, but anyway, no, happy new year. And I had a very nice holiday. My son has been home for nine months now. He's going to go back to school finally next month to finish up his senior year in college. And I did nothing for a week. I mean, like when I say nothing, I mean nothing. Just get up, watch some TV, like we’re watching old reruns of Frasier, that's the level of nothing we're talking about. It was fantastic.
EL: Very nice.
KK: How about you guys? Did you have a nice holiday?
EL: Um, I had a bad bike accident right before Christmas. So I had some enforced rest. But I'm mostly better now. I have gotten on my bike a couple times, and nothing terrible has happened. So still a little more anxious than usual on the bike. We were taking a ride yesterday and I could tell I was just like, not angry, but just, you know, nervous and worried. And it's just like, Okay, I'm just at the scene of the trauma, which is my bike seat, and getting over it. But I hope I will continue to not fall off my bike and keep going.
KK: That’s the only thing to do. Back in my competitive cycling days when I was a postdoc, I had some pretty nasty crashes. But yeah, you just get back on. What else are you going to do? So anyway, enough of that. Let’s talk math.
EL: Yes. And today, we are very happy to welcome Lily Khadjavi to the show. Hi, will you introduce yourself and tell us a little bit about yourself?
Lily Khadjavi: Hi. Oh, thanks, Evelyn. It's so great to be here. I'm Lily Khadjavi, as you said. I'm a professor of mathematics at Loyola Marymount University, which is in Los Angeles, California. I'm a number theorist by training, but I'd say that I'm lucky to have taken some other mathematical journeys, especially since graduate school, and I don't know, for example, this past year, maybe my biggest excitement is that I was lucky to be appointed to a state board in California, by the Attorney General, Xavier Becerra: an advisory board looking at policing and law enforcement and the issue of profiling. And so that's an issue that's very important to me. And it was an unexpected mathematical journey.
EL: Yeah.
LK: If you’d asked me 20 years ago, what would I be up to, I might not have thought of that. And I've taken many a bike spill in my day, so I could feel some nice affinity being here today. You’ve just got to get back on and be careful, of course.
EL: Yeah. And that must be an especially important issue in LA, because I know the LAPD has been the subject of some, I guess, investigations and inquiries into their practices and things like that.
LK: That's exactly right. And over the years, it was under a consent decree, so an agreement between the US Department of Justice and the City of Los Angeles, with many aspects monitoring police practice. And actually, some of that included data collection efforts looking at traffic stops. And that, combined with teaching a statistics course, is what really gave me a window more into policing practice, into problems where I wanted authentic engagement for my students with the real world, and took me on, maybe I'll say, unexpected journeys to law conferences and elsewhere, as I started to learn more about the issues, the ways that as mathematicians, we can bring tools to bear on these social questions too.
EL: Yeah, very cool.
KK: Yeah.
EL: So what is your favorite theorem? And I know that's an unfair question, but I will ask it anyway. And then, you know, you can run away with it.
LK: Yeah. I know this podcast is not visual, but I'm already kind of smiling in a terrified way because I found this question so difficult, really an impossible task, because I thought it's like asking me what my favorite song is—I don't know, do you have a favorite song?
EL: That is hard to say. If you asked me, I would start listing things. I would not, probably, be able to tell you one thing.
LK: What do you think, Kevin?
KK: I, uh, Taxman?
LK: Okay, I thought you would name the opening music for the podcast as a favorite too.
EL: Oh, yeah.
LK: You know, shout out to that.
KK: I do like that. But now, you know, maybe What Is Life by George Harrison? Single?
LK: Oh, yeah. Okay, well, maybe I'll count that as listing, which is what Evelyn started to do. Because it's very difficult.
KK: It is.
LK: You know, I was really wrestling with this. And it got me kind of thinking about why we like certain theorems. I think I pivoted to what Evelyn said. I started wanting to make lists. And of course, it's fun to talk about things that are new to everyone. And, you know, it's been a remarkable podcast, and lots of people have staked out, I mean, they've grabbed those beautiful favorite theorems. But I started thinking, could you have a taxonomy? I really saw a taxonomy of theorems. Not by discipline. So not a topological statement or an analytic proof, but by how mathematicians feel about them, or the aesthetic of them. And so my first, you know, category had to be sort of the great workhorses, like those theorems that get so much done, but they also never cease to amaze you. And I mean, it’s hard not to point right away to the fundamental theorem of calculus, and I think maybe that was in your very first episode. That's right, who was that?
KK: Yeah, Amie Wilkinson.
EL: Yes, Amie Wilkinson just came in and snatched that one. Although as everyone knows, we do double theorems, you know, we don't have a rule that you can't use the same theorem again.
LK: No, because that's one we use again and again and again. You know, even this past semester, I was teaching multivariable calculus. And you know, we have this march through line integrals, double, triple integrals, and we build, of course, to Green’s theorem, Stokes’ theorem, the divergence theorem. So for these main theorems in calculus, the machinery is heavy enough for the students that even if I'm trying to put them in a context where, “Oh, this is really all the fundamental theorem of calculus,” I think that gets obscured for students first trying to get their heads around these theorems. Even though you relate them, you say, oh, but they've got the boundary of this, maybe endpoints of a curve or some other surface boundary, and you're relating it as the relationship between differentiation and integration, and it's beautiful stuff. But I think I'm not convinced my students thought of it as the same theorem, even if I tried to emphasize this perspective. But still, they, all of us can be blown away by how powerful the theorem is in all of its incarnations. And so that's a great workhorse. So we don't have to talk at length about that one. It's been here before, but you know, you just have to tip your hat to that one. But I was wondering, are there other great workhorses, something you'd put in that category?
KK: So I argue—I mean, so you mentioned the fundamental theorem—the workhorse there is actually the mean value theorem.
LK: Hmm.
KK: Because the fundamental theorem, at least for one variable, is almost a trivial corollary of the mean value theorem. And I didn't appreciate that until I taught that sort of undergraduate analysis course for the first time. And I said, “Wait a minute.” And then I sort of came up with this joke, I'm actually going to write a book. It's like a “Where's Waldo” style thing: Where's the mean value theorem? Because in every proof, it seemed like, Well, wait a minute, by the mean value theorem, I can pull this point out. Or there is one, I don't know where it is, but it's in there somewhere. So I really like that one.
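[Editor’s note: a quick sketch of the point Kevin is making, using one standard argument. If $F' = f$ on $[a,b]$ and $a = x_0 < x_1 < \cdots < x_n = b$ is any partition, the mean value theorem provides points $c_i \in (x_{i-1}, x_i)$ with
$$F(b) - F(a) \;=\; \sum_{i=1}^{n} \bigl(F(x_i) - F(x_{i-1})\bigr) \;=\; \sum_{i=1}^{n} f(c_i)\,(x_i - x_{i-1}),$$
which is a Riemann sum for $f$; if $f$ is integrable, letting the mesh go to zero gives $\int_a^b f = F(b) - F(a)$, the evaluation half of the fundamental theorem.]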
LK: That’s a really great perspective. I also will say that I did not happen on that feeling until teaching analysis for the first time, of course, versus, you know, first seeing these theorems or learning about them, and even learning them in analysis, not just using them in calculus. You know, that reminds me that it wasn't till grad school, maybe taking a differentiable manifolds class, and that's not really my area. But seeing, oh, you can define a wedge product, you can define these things in a certain way, oh, they really are literally all the same theorem. But I like this perspective, maybe that would have been a way to convince my students a little bit more, to kind of point to the mean value theorem, because it would put them on more familiar turf too. I really like that. Yeah. Are there other workhorses?
EL: So the first one that came to my mind was classification of surfaces, in topology, of, like, you know, the fact that you can do that—I feel like it's so internalized to me now. And yeah, I don't know, for some reason that came to mind, but it's been a long time since I did research and was keeping up with, you know, proving things. So yeah, it’s—but yeah, I think I would say that anyway.
KK: Yeah. And I would sort of think anything with fundamental in its name, right?
LK: Yeah, I was thinking that.
KK: So the fundamental theorem of arithmetic, okay, so that you can factor integers as products of primes, or the fundamental theorem of algebra, that every polynomial with complex coefficients has a root. But then more obscure things like the fundamental theorem of algebraic K-theory. You guys know that one?
LK: That one, I'm afraid does not trip off my tongue.
KK: All it is, is it's a little bit weird. It just says that if you have a ring (maybe it needs to be regular), then the K-theory of the ring and the K-theory of a polynomial ring in one variable over it are the same. And the topological idea of that is that, you know, it's a contractibility argument somehow. And so it's fundamental in that way.
LK: These are great workhorses. Yeah. And also, Evelyn, you mentioned the classification, like these results are just so fundamental. So whether they have fundamental in the name or not, they are.
EL: Like, naming it fundamental, it's almost like cheating that point. Or, maybe not cheating, maybe stealing everyone else's thunder. It’s like, “No, I already told you that this is the fundamental theorem of this.”
LK: My poor students, whenever I want them to conjure up the name and think of something that way, I make the same corny joke. I'm like, “It's time to put the fun back into…” and they’re like, “Ugh, now she's saying fundamental again.” So yeah, I was thinking, too, that in different fields, we reach back, even as we're doing different things in our own work, back to those disciplines that we were sort of steeped in. And I think for topologists, there are so many great theorems to reach to.
KK: Sure.
LK: But I was thinking even like the central limit theorem in statistics and probability, so this idea that you could have any kind of probability distribution—start with any distribution at all—but then when you start to look at samples, when the samples are large enough, the distribution of the sample mean is approximately normal. That somehow never ceases to amaze me, in the way that the fundamental theorem of calculus does, too. Like, “Oh, this is a really beautiful result!” But it's also a workhorse. There are so many questions in statistics and probability that you can get at by gleaning information from the standard normal distribution. So maybe I’d put that into a workhorse category.
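[Editor’s note: one standard statement of the central limit theorem Lily is describing: if $X_1, X_2, \ldots$ are independent draws from a distribution with mean $\mu$ and finite variance $\sigma^2$, and $\bar{X}_n$ is the sample mean, then
$$\frac{\bar{X}_n - \mu}{\sigma/\sqrt{n}} \;\xrightarrow{\ d\ }\; N(0,1) \quad \text{as } n \to \infty.$$
The finite-variance hypothesis is the fine print hiding behind “any distribution at all.”]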
KK: Sure.
EL: Actually, the Heine-Borel theorem, maybe, could be kind of a workhorse, although I'm sort of waiting for you to say that it's actually the mean value theorem too.
KK: No, it's just, it's just that, you know, compact sets are closed and bounded. That's it. Right?
EL: Yeah. Yeah, actually, yeah, that, once again, is such a workhorse that it's often the definition that people learn of compactness.
LK: That’s right.
EL: Like the first time they see it. Or, like, such an important theorem that it almost becomes a definition. Actually the Pythagorean theorem, in that case, is almost a definition.
KK: Sure.
EL: Slash how to measure distance in the Euclidean plane.
LK: Yeah, that's a good example. So maybe now we have so many workhorses, well, another category I was thinking of — it's beautiful stuff. I was thinking of those theorems where the subtlety of the situation kind of sneaks up on you. So maybe you hear the statement, and you kind of even think, “Oh yeah, I believe that,” like the Jordan curve theorem, I think you had a guest speak about this, too. So this, you know, idea of a simple closed curve. So you just draw it in the plane, there's an inside, and it divides the plane into an inside and outside. And I kind of really remember—I can't tell you what day of the week it was—but I remember the first time this came up in a class, and I thought, “Yeah.” But then we started thinking about how would you go about proving something like this, or even just being shown, someone drawing, a wild enough, crazy curve, where suddenly you can't just eyeball it and immediately see what's inside and what's outside. So I don't know what this category or set of theorems should be called, but the subtlety sneaks up on you even though the statement seems reasonable.
EL: “I can't believe I have to prove this.” Maybe that’s slightly different. Well, what I mean is like, I can't believe this is a—It seems so intuitive that understanding that there is something to prove is a challenge, in addition to then proving it.
LK: Yeah. And maybe you can't even prove it—Well, how about the four color theorem? So this map coloring theorem, this idea that four colors suffice: if you have states or counties or whatever regions you want to make your map of, and regions that share a common edge boundary must get different colors, then four colors are enough. I don’t know, has a human being ever proven that? My understanding is that it took computing power.
KK: It’s been verified.
EL: I think they’ve reduced the number of cases, also, that have to be done from the initial proof, but I still think it's not a human-producible proof.
KK: That’s right. But I think Tom Hales actually verified the proof using one of these proving software things. [Editor’s note: the formal, computer-checked verification of the four color theorem was carried out by Georges Gonthier in the Coq proof assistant; Hales led the formal verification of the Kepler conjecture.] So I mean, yeah, but that was controversial.
LK: That brings up a neat question about what constitutes proof in this day and age. I've seen interesting talks about statements, or journals, where something's given as this: “Okay, here's a theorem. And here's the paper that's been refereed.” And then later, oh, here's something that contradicts it. And people are left in a sort of limbo. Well, that's another discussion, things unproven, un-theorems, I don't know. Well, anyway, back to this category, the one where the subtlety of the situation sneaks up on you. If I start coloring maps, testing things out, after a while, I’d say, “Oh, there's a lot to this.” But the statement itself has an elegant simplicity.
KK: Well, it's not easy. So I curated a math and art exhibition at our local art museum, in the Before Times, and one of the pieces I chose was by a Mexican artist, and it's called Figuras Constructivas. And it was just two people standing there talking to each other, but it was sort of done in this—we’ve all done it, probably when you were a kid—you took a black crayon and scribbled all over a page, and then you fill in the various regions with different colors, right? It reminded me of that. And the artist used five colors. And so when I was talking about this to the docents, I said, “Well, why don't we create an activity for patrons to four-color this map?” So they did, they created it, because it was just a map. And they did it, and the docents were just blown away by how difficult it was to do a four-coloring. You know, five colors is fairly easy. But four was a real challenge.
LK: That sounds really fun. And what a great example of math and art coming together. And my understanding of the history of this, too, is that the five-color theorem was proved not just before four colors, but was kind of doable in the sense that
EL: I think it’s just not that hard.
LK: Certainly not that hard in the sense of firing up the computers and whatever else was done.
KK: Needing a supercomputer in 1976.
LK: Which is basically my phone, maybe. Well, I had another category in mind, which is theorems where the proofs are just so darn cute.
KK: Okay.
LK: And so what I was thinking of—I tried to have an example for each of these—which was the reals being uncountable.
EL: Yeah.
LK: And I think you've had guests talk about this. And you know, like a diagonalization argument, like say, just look at the reals only from 0 to 1. And suppose you claim that that is a countable set. Okay, go ahead and list them in order, in whatever ordering you've got for countability. And then you can construct a new element: whatever was in the first place of your first element, do something different in the first place of your new element; whatever was in the second place of the second element, do something different in the second place of your new element, and so on down the line. So you go along the diagonal, if you had listed these, and so, this is my crude description of a diagonalization argument, but you can construct a new element that wasn't on your original list and so contradict the countability. I don't know, I thought that's really cute.
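[Editor’s note: a compact version of the diagonal construction Lily describes. Suppose the reals in $(0,1)$ could be listed as $x_1, x_2, x_3, \ldots$, with decimal expansions $x_i = 0.d_{i1}d_{i2}d_{i3}\ldots$ Define a new number $y = 0.e_1e_2e_3\ldots$ by
$$e_n = \begin{cases} 5 & \text{if } d_{nn} \neq 5,\\ 6 & \text{if } d_{nn} = 5,\end{cases}$$
so $y$ differs from $x_n$ in the $n$th digit for every $n$ (and, using only 5s and 6s, avoids the trailing-9 ambiguity of decimal expansions). Then $y$ is a real number in $(0,1)$ missing from the list, contradicting countability.]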
EL: Yeah. And that was probably the first theorem that really knocked my socks off.
KK: Mm hmm. It's definitely a greatest hit on our show.
EL: Yeah.
LK: So I guess that’s right. We've had a Greatest Hits show, so I don't know, this taxonomy is kind of disintegrating, like “Workhorses,” “Just so darn cute,” “Situation sneaks up on you.” But yeah, I don't know if there are others that fit into the “Just so darn cute.” That was the one that came to mind because I kind of wanted it on my favorites, and then I was like, “Oh, someone's already talked about this on the show.”
KK: Well, I really like—so I'm a topologist. And I really like the theorem that there are only four division algebras over the reals. So the reals, the complexes, the quaternions, and the octonions. And it's a topological proof. Well, I mean, there's probably an algebraic proof. But my favorite proof is topological. So I don't know if it's cute.
EL: That isn't what you'd expect the proof of that to be, for sure.
KK: No. And it's sort of—I'm looking through it. So I taught this course last year, and I'm trying to remember the exact way the proof goes, not that our listeners really want to hear it. But it involves cohomology. And it's really pretty remarkable how this actually works. Oh, here it is. Oh, yeah. So it involves the cohomology rings of real projective spaces. And so if you had one of these division algebras, you look at some certain maps on cohomology, and you sort of realize that things can't happen. So I think that's very, well, I don’t know if it’s cute, but it's a pretty awesome application of something that we spend a lot of time on.
LK: Yeah, it's so neat when a different field comes into play. You know, we have these silos, historically: algebra, topology, and so on. So the idea that a topological proof gives you this algebraic result is already a delight, but then that's heavy machinery too. That sounds really neat.
KK: Or fundamental theorem of algebra, right?
LK: Well, that's what I was thinking when you started saying, “Oh, there's a topological proof.” I started thinking, “Oh, fundamental theorem of algebra.” You know, fire up your complex analysis. And yeah, neat stuff. Yeah.
EL: Well, and there's this proof of the Pythagorean theorem that I have seen attributed to Albert Einstein, I think, that has to do—Steve Strogatz wrote, I think, an article for The New Yorker about it. So rather than listening to my bad explanation of it, semi-remembered from several years ago, you can go read it. But it has to do basically with scaling. And it's kind of a surprising way to approach that statement.
KK: I think it was in the New York Times [editor’s note: Evelyn was right, it’s the New Yorker! [note to the editor’s note: Evelyn is the editor of this transcript]], or it's also in his book, The Joy of X, I think it's in there too. And yeah, I do sort of vaguely remember this, it is very clever. It's a nice one to record.
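(For readers who want the gist of that scaling argument without digging up the article, here is a rough sketch of the standard presentation, not a quote from the episode. Drop the altitude from the right angle to the hypotenuse. It splits the right triangle into two smaller triangles, each similar to the original, with hypotenuses \(a\) and \(b\), the legs of the original. The area of any right triangle of this shape is a fixed constant \(k\) times the square of its hypotenuse, and the two small areas add up to the whole, so
\[ k a^2 + k b^2 = k c^2 \quad\Longrightarrow\quad a^2 + b^2 = c^2. \]
)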
LK: Yeah, this makes me want to swing back to many things. It's also reminding me, so here we are in pandemic times. And so at the university I'm at, we're not spending time in the department, but you reminded me that when I wander around the department, sometimes we have students’ projects, or work from previous semesters, up here and there, along with other posters. And I'll look at something and say, “Oh, I haven’t thought about the Pythagorean theorem from that context, or in that way.” So just different representations of these. So maybe there should be a category where there are so many proofs that you can reach for, and they're each delightful in their own way, or you could start to ask people what their favorite proof is instead of a favorite theorem, maybe.
KK: I think we did that with Ken Ribet because he did the infinitude of primes. He gave us at least three proofs.
LK: And I think three pairings to boot. Yeah. Nice. I'm wondering about another one, so there was the “so darn cute,” how about something where the simplicity of the statement draws you in, but then the method of the proof may just open up all kinds of other problems or techniques. So in other words, I guess what I'm saying is some theorems, we really love the result of the theorem. Maybe the Fundamental Theorem of Calculus. That result itself is so useful. But on the other hand, Fermat’s Last Theorem, I don't know if anyone's even pointed to that on the show, but something in number theory where the statement was—I mean, this is how I got suckered into number theory. That's what I would say. So you have this statement. You mentioned the Pythagorean theorem, so this idea that you could find numbers where the sum of two squares is itself a square, like three squared plus four squared equals five squared, but what if you had cubes instead, could you find a cubed plus b cubed equals c cubed, or any a to the n plus b to the n equals c to the n. And, you know, although the machinery of number theory that was developed to ultimately prove this is so technical, and involves elliptic curves and modularity, all kinds of neat stuff, the statement itself was very simple. And of course, at some level, then it wasn't even just proving that statement. It was the tools and techniques we can develop from that. But I remember telling a roommate in college about it: “Oh, there's this theorem, it's not even proven.” So that was a question too. Why are we calling this a theorem? So back in the day, that was not a theorem, but it was still called Fermat’s Last Theorem. And in telling, you know, relating the story that Fermat was writing in the margin of his copy of, I don't know, Arithmetica or something in the 1600s. And that he said, “I had the most delightful proof for this, but the margin is too small to contain it.” And my roommate’s first reaction actually was “Has anyone looked through all of his papers to find the proof?” And that was nice, because, you know, coming from a different discipline, studying English and history and so on. Because to me that wasn't the first reaction. It was like, oh, if Fermat had a proof, can we figure it out too? Or can we figure out what he—maybe he had something, but what mistake might he have made? Because there's more to this one perhaps. But anyway, the category was “statements that draw you in with their simplicity.” Maybe the four-color theorem should have landed here.
EL: Yeah.
LK: I don’t know.
EL: Yeah, draw you in. It's kind of—I don't know if this is maybe a bad analogy to draw, but kind of catfishing. Yeah. There’s just this nice, well-behaved statement. And oh, yeah, now it's a giant mess to prove. Actually, maybe like the Jordan curve theorem.
LK: Yeah, maybe a lot of these end up there. Then there's that way, though, if something's finally— sometimes when you finally prove something, you're like, “Oh, why didn't I think of that earlier?” I don't know that Fermat will ever land there for me, but maybe the Jordan curve, maybe there are aspects of some of these that you just come to a different understanding on the other side of the hill.
EL: Yeah. So I think if I were doing this taxonomy, one of my categories—which is probably not a good category, but I think I would have a sentimental attachment to it and be unable to get rid of it—would be, like, theorems with weird numbers in them, or really big numbers in them, like the one that we talked about with Laura Taalman, where there’s this absurd bound for the number of Reidemeister moves you have to do for knots. Like there are some theorems where you've got some weirdness, it's like, oh, yeah, this theorem works for everything except the number 128. And it's just like, theorems with weird numbers in them, or weird numbers in their proofs, I think would be one of mine. Or, like the proof of the ternary Goldbach conjecture several years ago, which I only remember because I wrote about it, is basically proving that it works up through a certain very large number of just individual cases, and then having some argument that works above 10 to some large power, and like, that's just a little funny. It's like, “Oh, yeah, we checked the first 12 quadrillion. And then once we did that, we were made in the shade.” And I don't know, I think that goes a long way with me.
KK: How about theorems with silly names? Like, like the ham sandwich theorem.
LK: I think the topologists corner the market on this, right? Yeah? No? Maybe?
KK: We really do.
LK: Yeah, the ham sandwich. No, I like that. So we need to find one that's, like, unusual cases, or a funny number comes up and it has a funny name to boot. I love these categories. Well, how about something where the statement might surprise the casual listener. So in other words, like, the Brouwer fixed-point theorem, so when I’m chatting with my students, I say, “Oh, you toss a map of California onto the table (because I'm in California) and there's some point on the map that's lying above its point in the real world.” And then oh, I can do it all over again, toss it again, it doesn't land the same way. And then they start to realize, oh, there's something going on here. But I don't know if that's surprising. Maybe my students are a captive audience. I say surprising to the casual listener. Maybe it's surprising to the captive audience. I don't know.
EL: Yeah, well, that's definitely one where the theorem doesn't seem surprising, or, you know, the theorem doesn't seem that strange. And then it has these applications or examples that it gives you where you're like, oh, wow. For me, it's always the weather. What is it? That at any given time there are two antipodal points on the earth with the same, you know, wind speed, or temperature, whatever the thing is you want to measure?
KK: The Borsuk-Ulam theorem.
EL: Maybe the same of both? I don't remember how many dimensions you get.
KK: Well, you could do it in every dimension. So yeah, it's the Borsuk-Ulam theorem, which is that a continuous map from the n-sphere into R^n has to send a pair of antipodal points to the same point. Right.
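(For reference, the precise statement, in its standard textbook form rather than as a quote from the conversation: for every continuous map \(f : S^n \to \mathbb{R}^n\) there is a point \(x \in S^n\) with
\[ f(x) = f(-x). \]
The weather example is the case \(n = 2\) with \(f(x) = (\text{temperature at } x,\ \text{pressure at } x)\): assuming both quantities vary continuously over the earth, at any moment some pair of antipodal points agrees in both.)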
EL: So the theorem, when you read it, it doesn’t seem like it has anything weird going on. And then when you actually do it, you're like, “Whoa, that's a little weird.”
LK: Oh yeah, I like that. Maybe that's true of so many of the things we look at. So I guess I realized, as I was thinking about these, I was tipping towards theorems where there's also some kind of analogy or way to convey it without the technical details. Certainly, if the category is to draw in the casual listener, or to sucker someone in without the technical machinery. Yeah, so I don't know what would be next in the taxonomy of theorems. Do you have other ideas?
EL: I’m not sure. Yeah, I feel like I’d need to sit down for a little bit. Actually first go through our archives and like look at the theorems that people have picked, and see where I think they would land.
LK: I had a funny taxonomy category that's very narrow, but it could be “guess that theorem.” But I was thinking theorems with cute names or interesting funny names that have also been proven in popular films.
KK: Oh, the snake lemma.
LK: Ding-ding-ding, we have a winner.
KK: You know, don’t pin me down on what the movie is. I can't remember.
EL: I think it's called It’s My Turn.
KK: That’s it.
LK: Wow, the dynamic duo here has it exactly. And I have to admit, when I was thinking of it, I was like, “I don’t remember the movie.” And I had to look it up. But anyway, algebra comes to the rescue.
EL: Yeah, I’ve seen that scene from it, but I've never seen the rest of the movie for sure.
KK: Has anybody?
LK: As mathematicians, maybe we should.
EL: I don’t even know if it’s on DVD. It might never have been popular enough to get to the new format.
KK: And isn’t that the last time that there's any math in the movie? Like it's this opening scene, and she proves the theorem, and then that's it? Never any more?
LK: So it's really a tragedy, that film. But no, they say this is the year that people said, oh, they watched all of Netflix. I don't know if that's possible. So this is the year, then, to reach out, to expand. Or maybe we rise up and request more streaming options for the movie. I would like to show my students that. Yeah, but I also admit, I haven’t seen the film.
Maybe a big core category we're missing is those theorems that really bridge different areas or topics. So Kevin, you gave an example of a statement that could be algebraic, but it's proven topologically. But then I was thinking, are there theorems that kind of point to a dictionary between areas? And I only had one little example in mind, but maybe I'll call it my little unsung hero, a theorem that won't be as familiar to folks, but I was thinking of something called Belyi’s theorem, so not as well known as the others, perhaps, but one that number theorists and arithmetic geometers are really interested in. And then actually, I went ahead and printed out, ahead of time, these quotes from Grothendieck, who was so struck when this theorem was announced or proven because he'd been thinking along these lines, but was surprised at the simplicity of the proof. But my French is not very good, so I'm not going to read anything in French. But I don't know if you want to take a moment to talk about this theorem.
KK: Sure.
EL: Yeah.
KK: So what's the statement?
LK: Yeah, so maybe I'll say en route to the statement that number theorists and arithmetic geometers are interested in ramification, but I'm maybe I'm going to describe things in terms of covering maps, and whether you have branching over a covering so. So like, if you had a Riemann surface, you're mapping to Riemann surface, and you had a covering map, you might expect, okay, for every point down below, you'd expect the same number of preimages, or for every neighborhood down below, the same number of neighborhoods, if it's a degree D map, maybe a D-fold cover. And in fact, I remember my advisor first describing this to me by saying, if you had a pancake down below, you'd have D pancakes up above. And it really stuck in my head, frankly, because he was so precise and mathematical in his language at every moment, this was one of the most informal things I ever heard him say. Maybe he was hungry at the moment, he was thinking about pancakes. So as a concrete example where something different could happen, suppose I was mapping to the Riemann sphere, and I suppose I had a map, like I don't know, take a number and cube it, like x cubed, and started asking what kind of preimages points have. For example, x cubed equals 1, there are three roots of unity that map to 1, but something different is happening at zero, so only zero maps to zero. There's no other value that when you cube it, gives you zero. So now we no longer have, instead of a cover, maybe I'll say we have a cover, except at finitely many points. So somehow zero, and in that case, infinity, there's some point at infinity that behaves differently, but everything else has three distinct preimages. And maybe just to make a picture, let's take the interval from 0 to 1. So a little line segment, the real interval, and we could ask what its preimage looks like. And so above 1, there are three points up above. There are three roots of unity that map to 1, and on the other hand 0 was the only point that mapped to zero. And for the rest of the interval, all of those points have three preimages. So you could draw, maybe I'm picturing now a little graph on my original surface that's got a single vertex, say, at zero, and then three segments going out for each of the preimages of the real line, and ending at these three roots of unity, ending at the preimages of 1. And so now I'm not even thinking very precisely about what it looks like. I'm just picturing a graph. So I’m not worrying about how beautiful my drawing is. I just have one vertex over zero and then three branches. So what number theorists describe in terms of ramification, in this setting we might think of as branching. So these branch points. So I'm interested in saying when I have a map, say to the Riemann sphere, or number theorists might say to the projective line, I'm interested in what kind of branching is happening. And it turns out that — so now Belyi’s theorem — he realized that in the situation where you're branched over at most three points, so in the picture, we had over 0 and also infinity. I was kind of vague about what's happening at infinity. So that was two points. But if there are at most three points where branching happens, something very special is going on. So he was looking at maps from curves to the projective line. So in a nutshell, really what he proved was that a curve is algebraic if and only if there's one of these coverings that's branched at at most three points. So what is that saying? So saying a curve is algebraic? That's an algebraic statement. 
You're kind of saying, Well, if you had an equation for the curve — suppose I could write down an equation and then the solutions to that equation are the points of the curve — he’s saying that the coefficients have to be algebraic numbers. So they don't just have to be integers. I could have coefficients, like the square root of two could be a coefficient, or i, or your favorite algebraic number, but not pi, or e or any non-algebraic number. So that's an algebraic statement. But saying that that can happen if and only if, and now he has a map actually, from the curve, well I'm going to say from some Riemann surface to the Riemann sphere, that's branched over at most three points, that second statement is very topological. And it's actually sort of combinatorial too, because that graph I was describing earlier, people use those to kind of describe what's happening with these maps. And so the number of edges, the number of vertices, there's a lot of combinatorial information embedded in that picture. And so I don't know how much of the theorem really comes through in this oral description. But the point is, people were really surprised, including Grothendieck was surprised. He was so surprised and agitated, but excited, that he wrote a letter to the editor, and it's been published. Leila Schneps has done these amazing volumes about a topic called dessins d’enfants, or children's drawings, but I have to read a piece of this because he wrote something like “Oh, Belyi announced this very result.” So this idea, he says actually, “Deligne when consulted found it crazy indeed, but without having a counterexample at hand. Less than a year later, at the International Congress in Helsinki, the Soviet mathematician Belyi announced this very result, with a proof of disconcerting simplicity contained in two little pages of a letter of Deligne. Never was such a profound and disconcerting result proved in so few lines.” So Belyi had actually figured out not only a way to show that these maps exist, but he had a construction. And it reminds me of something you were saying earlier, Evelyn, where the construction exists, maybe it's an unwieldy construction, in the sense that if you really wanted to work with these maps, you might want to do better, and if you try to bound, something I tried to do earlier, you get these really huge degree bounds on maps that are not so practical, in a sense, but the fact that you could do it, so it was the fact not only of the existence, but also there was a constructive proof, opened the door to lots of other work that folks have done.
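(For reference, the statement being paraphrased here, in its usual modern form rather than as a quote from the conversation: a smooth projective curve \(X\) over \(\mathbb{C}\) can be defined over the algebraic numbers \(\overline{\mathbb{Q}}\) if and only if there is a non-constant map
\[ f : X \longrightarrow \mathbb{P}^1 \]
ramified over at most three points, which can be taken to be \(0\), \(1\), and \(\infty\). The cube map above is the toy picture: \(f(x) = x^3\) on the Riemann sphere is branched only over \(0\) and \(\infty\), and every other point has exactly three preimages.)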
And maybe I just want to say I was looking — so my French is not good enough to read and translate on the fly. But this “disconcerting result,” the word that was used, déroutant, can also mean strange and mysterious and unsettling. So even our taxonomy could include unsettling proofs or unsettling results. But I really wanted to put this in the category of something that bridges different areas, because this picture I was describing earlier really was just a graph with three edges and four vertices. It’s an example of what Grothendieck called, he nicknamed them dessins d’enfants, or children's drawings, the preimages of this interval. And yeah, so this is really a topic that's caught people's imagination, and Grothendieck was thinking “Are there ways to get at the absolute Galois group?” Because these curves I mentioned were algebraic, so something behind the scenes here is purely algebraic. You can look at Galois actions on the coefficients, for example. But meanwhile, you have this topological combinatorial object. And when you apply this action, we preserve features of the graph, we preserve the number of vertices and edges and so on. Can you start to look at conjugate drawings? And so these doors opened up to these fanciful routes, but it also pointed to these bridges between areas. Maybe algebraic topology is full of these, where you have some algebraic tools, but you're looking at something topological, just things that bridge or create dictionaries between areas of mathematics, I think are really neat. Yeah. So in the end, you could even bring a stick figure to life this way. So I described this funny-looking graph with just three edges, but you could actually draw a stick figure in this setting, labeling vertices and edges. So I'm picturing, I don't know, literally a little stick figure.
EL: Yeah.
LK: And give some mathematical meaning to it. And then through these, through Belyi’s theorem and through this dictionary, it's actually related to curves and so on. And then you can do all kinds of fun things. Like I mentioned some Galois action, although I wasn't specific about it. You could start to ask, are there little mutant figures in the same family as a stick figure? Maybe there's a stick figure with both arms on one side? And is that conjugate somehow to your original? And so somehow there was something elusive about this. The proof had eluded Grothendieck. But it opened this door to very fanciful mathematics. And there's really been kind of an explosion of work over the years looking at these dessins d’enfants. It's a podcast, but I saw you nodding when I mentioned these children’s drawings.
EL: Well, that's a term I've definitely seen. And then not really learned anything about it. Because I must admit, algebraic geometry is not something that my mind naturally wants to go and think about a whole lot.
LK: There’s a lot of machinery, and actually one direction of Belyi — I said this theorem is an if and only if — but one direction was sort of known and takes much more machinery. And it was this disconcerting direction, as Grothendieck said, that actually took less somehow. Some composition of maps and keeping track of ramification, or using calculus to see where you have multiple images of points, or preimages. Yeah, in fact, Grothendieck, there was one last sentence I found, culled from this great translation by Leila Schneps: “This deep result, together with the algebraic geometric interpretation of maps, opens the door to a new unexplored world, within reach of all, who pass without seeing it.” And you know, we really don't usually see mathematicians speaking in these terms about their work. So that's something I loved. I also loved that Belyi’s proof was constructive too, because even if it creates bounds I might not want to use, it becomes a lynchpin in other work that connects — the fact that it could be made effective, like not just that this map exists, but you can actually have some degree bound on a certain map, is a lynchpin. And maybe the funniest example takes me to a last category, which is how about theorems that may not be theorems? Like what counts as a theorem? And there's this statement called the ABC conjecture. Which is—
EL: A can of worms.
LK: Yeah, so is it proven or not?
KK: It depends on who you ask.
LK: Yeah, so there’s this volume of work by Shinichi Mochizuki, it’s 500-plus pages, and he's created this, I think it was called inter-universal Teichmüller theory. And I, you know, I can't speak to it, but experts are chipping away, chipping away. And maybe it's — I don't know if it's too political to say it's in kind of a limbo. There may be stuff there. There's a lot of machinery there. And yet, do lots of people understand and sort of verify this proof? I'm not sure we're there.
KK: I mean, he’s certainly a respected mathematician, so that's why people are taking it seriously. But that's right. Didn't Scholze point to one particular lemma that he thought wasn't true? And the explanations from Kyoto have not been satisfying?
LK: Yeah, I don't have my finger on the pulse. But it’s this funny thing where if you unravel a thread, does the whole thing come apart? And on the other hand, when Wiles proved Fermat’s last theorem, well, some people realized that he would need to do a little something more there. But then it happened. And it was consistent with the theory to be able to fill that in. Yeah. So this is — I don't know, it's exciting to me, but it's also daunting. But this ABC conjecture, so I mentioned Belyi’s theorem. So there's a paper showing that, assuming the ABC conjecture — we don't know if we have a proof, but going back to when we still just called it a conjecture — you get so many other results in number theory that people believe to be true. And Noam Elkies has this paper ABC implies Mordell, so Faltings’ theorem, so this theorem about numbers of points on curves. And there's this, I thought this is funny. So I’ll mention this last thing, but this paper has been nicknamed by Don Zagier: Mordell is as easy as ABC. And it's kind of funny, because they're quite difficult no matter how you slice it. You've got something that's still an open problem. And then something that had a very difficult proof. So to say one thing is as easy as the other is sort of perfect. Yeah, there's much more to say about the ABC conjecture, but maybe that's a topic for My Favorite Conjecture.
EL: Yeah. Or My Favorite Mathematical Can of Worms.
KK: Yeah, yeah. Okay, so.
EL: I like this.
KK: Yeah. Well, I was going to say it might be time for the pairing.
EL: I think it is.
KK: So I think I think maybe you're going to pair something with Belyi’s theorem, but maybe not. Maybe there’s something else.
LK: Yeah, I wanted to. I feel like I didn't do justice to Belyi’s theorem, and originally, I'll admit it, I was going to say a gingerbread man because I mentioned stick figures. And so I was like, okay, pairing, well, I love food, made me think of food, made me think of a gingerbread man because of this theory of dessins, or drawings, of Grothendieck. So you can attach a meaning to this little stick figure. And maybe when you're baking, you start making funny-looking figures and those are your Galois conjugates, I don't know. But actually, you know, I was so long on this list of theorems, I'll be short. I think I just have to go with coffee too. Maybe a gingerbread man and coffee because, you know, I wanted to be clever and delicious. But instead I’m just going with coffee because, well, I drink a lot of coffee. They say mathematicians turn coffee into theorems. So can't go wrong. And during the pandemic working at home, I would say I've consumed a lot of coffee in all its incarnations. And maybe it takes me back, too. When I was first hearing about Belyi’s theorem and elsewhere, I was very lucky to have the chance to spend some time in the Netherlands because my advisor Hendrik Lenstra was spending time there, and so as students, we got to go for periods of time. It was very influential to me to be there. But there's a coffee you can get in the Netherlands, which is probably sort of cafe au lait meets latte. But it's called something like koffie verkeerd, and I'm going to mispronounce it, but it basically means messed up coffee. And that's one of my favorite coffees, coffee with, it has too much milk in it. I guess that's what messes it up. So maybe that will be my pairing, just to stick with coffee.
KK: All right. Yeah.
EL: Well, I thought you might go with a pairing for this whole taxonomy and just go with, like, the taxonomy of animals, which, you know, I feel like we didn't do a great job of, like, getting theorems exactly into one category or another. And historically, that has also been true for our understanding of biology and, like, how many kingdoms there are, you know, in terms of, like, animals, plants, and then a bunch of other stuff.
LK: That’s right, I'm counting on someone to hopefully listen enough to this sprawling, fanciful discussion and say, “Oh, no, no, no, here's how we should do it,” and actually come up with a decent but entertaining, I hope, taxonomy.
EL: Well, we also like to give our guests a chance to plug anything. You know, if you have a website, books or projects that you're working on, that you want people to be able to find online, feel free to share those.
LK: Yeah, that's such a gracious door that you open to everyone. And I mean, maybe I do want to say, in honor of work with collaborators, that math sent me on sort of an unusual journey, as I mentioned in the beginning. So now, for example, looking at the issue of racial profiling and statistics and policy and law. And I do think that there are ways that mathematicians are very creative and can carry that creativity to all of their endeavors, including for the many of us who are spending a lot of time in the classroom. And so that interest has led to a collaboration with Gizem Karaali. She's at Pomona College. And so we do have some books that we've been lucky to co-edit, which so many creative people have contributed to. So these are books around mathematics for social justice. There are some essays. There are contributed materials of all sorts. The first volume came out in 2019, in the Before Times. The second volume is due out in 2021. But these represent the work of so many people. And actually, many of the theorems that have come up in your beautiful podcast have come up there, like Arrow’s impossibility theorem around voting theory. Kevin, I think you've been in talks about gerrymandering. And that’s, you can imagine, a topic of great interest. And these materials are more introductory, for folks to bring into the classroom. But as I said, I think mathematicians are very creative, and so it's neat to see what other people have done. And so I hope others will be inspired by those examples as they're creating authentic engagement and cultivating critical thinking for ourselves and all the students we work with.
EL: Yeah, well we’ll make sure to put links to that in the show notes.
KK: Sure.
LK: Yeah. Well, thank you for a sprawling conversation today.
KK: This has been a sprawl, but it has been a lot of fun, actually. I kind of felt like you were interviewing us a little more.
LK: Oh, that sounds fun to me.
KK: Yeah. This is a great one. I'm going to look forward to editing this one. This will be a good time.
LK: Well, maybe a lot will end up on the editing floor.
KK: I hardly ever cut anything out. I really don't.
LK: There’s always a first time.
EL: You’re on the hot seat!
KK: Lily, thanks so much for joining us. It's been a lot of fun.
LK: Thank you for your time.
On this episode, we talked with Lily Khadjavi, a mathematician at Loyola Marymount University in Los Angeles. Instead of choosing one favorite theorem, she led us through a parade of mathematical greatest hits and talked through a taxonomy of great theorems. Here are some links you might enjoy as you listen.
Khadjavi's academic website
Her website about mathematics and social justice, which includes the books she mentioned with co-editor Gizem Karaali
Leila Schneps's book The Grothendieck Theory of Dessins d'Enfants
Steve Strogatz's article about Einstein's proof of the Pythagorean theorem
Try your hand at four-coloring Joaquin Torres-Garcia’s Figuras Constructivas
And some past episodes of My Favorite Theorem about some of the theorems in this episode:
Adriana Salerno and Yoon Ha Lee on Cantor's diagonalization argument
Henry Fowler and Fawn Nguyen on the Pythagorean theorem
Susan D'Agostino on the Jordan curve theorem
Belin Tsinnajinnie on Arrow's impossibility theorem
Ruthi Hortsch on Faltings' theorem
Ken Ribet on the infinitude of primes
Francis Su and Holly Krieger on Brouwer's fixed point theorem
Evelyn Lamb: Welcome to my favorite theorem, a math podcast. I'm Evelyn Lamb, one of your hosts. And here's your other host.
Kevin Knudson: Hi. I’m Kevin Knudson, professor of mathematics at the University of Florida. It's been a while. I haven't seen your smiling face in a while.
EL: Yeah. I've started experimenting more with home haircuts. I don't know if you can see.
KK: I can. It's a little longer on top.
EL: Yeah.
KK: And it's more of a high and tight thing going on here. Yeah. All right. It looks good.
EL: Yeah, it's been kind of fun. And, like, depending on how long ago between washing it, it has different properties. So it's very, it's like materials science over here, too. So a lot of fun.
KK: Well, you probably can't tell, but I've gone from a goatee to a plague beard. And also, I've let my hair grow a good bit longer. I mean, now that I'm in my 50s, there's less of it than there used to be. But I am letting it grow longer, you know, because it's winter, right?
EL: Oh yeah. Your Florida winter. It's probably like, what? 73 degrees there?
KK: It is 66 today. It's chilly.
EL: Oh, wow. Yeah, gosh! Well, today we are very happy to invite Tai-Danae Bradley to the podcast. Hi, Tai-Danae. Will you tell us a little bit about yourself?
Tai-Danae Bradley: Yeah. Hi, Evelyn. Hi, Kevin. Thank you so much for having me here. So I am currently a postdoc at X. People may be more familiar with its former name, Google X. Prior to that, I recently finished my PhD at the CUNY Graduate Center earlier this year. And I also enjoy writing about math on a website called math3ma.
EL: Yes, and the E of that is a 3 if you're trying to spell it.
TDB: Yeah, m-a-t-h-3-m-a. That's right. I pronounce it mathema. Some people say math-three-ma, but you know.
EL: Yeah, I kind of like saying math-three-ma in my head. So, I guess, not to sound rude. But what does X want with a category theorist?
TDB: Oh, that's a great question. So yeah, first, I might say for all of the real category theorists listening, I may humbly not refer to myself as a category theorist. I'm more of, like, an avid fan of category theory.
KK: But you wrote a book!
TDB: Yeah, I did. I did. No, I really enjoy category theory, I guess I'll say. So at X, I work on a team of folks who are using ideas from—now this may sound left field—but they're using ideas from physics to tackle problems in machine learning. And when I was in graduate school at CUNY, my research was using ideas in mathematics, including category theory, to sort of tackle similar problems. And so you can see how those could kind of go hand in hand. And so now that I'm at X, I'm really just kind of continuing the same research interest I had, but, you know, in this new environment.
EL: Okay, cool.
KK: Very cool.
EL: Yeah, mostly, we've had academics on the podcast. We’ve had a few people who work in other industries, but it's nice to see what's out there, like, even a very abstract field can get you an applied job somewhere.
TDB: Yeah, that's right.
EL: Yeah, well, of course, we did invite you here to talk about your job. But we also invited you here to ask what your favorite theorem is.
TDB: Okay. Thank you for this question. I'm so excited to talk about this. But I will say, I tend to be very enthusiastic about lots of ideas in mathematics at lots of different times. And so my favorite theorem or result usually depends on the hour of the day. Like, whatever I’m reading at the time, like, this is so awesome! But today, I thought it'd be really fun to talk about the singular value decomposition in linear algebra.
KK: Awesome!
TDB: Yeah. So I will say, when I was an undergrad, I did not learn about SVD. So I think my undergrad class stopped just before that. And so I had to wait to learn about all of its wonders. So for people who are listening, maybe I could just say it's a fundamental result that says the following, simply put. Any matrix whatsoever can be written as a product of three matrices. And these three matrices have nice properties. Two of them, the ones on the left and the right, are unitary matrices, or orthogonal if your matrix is real. And then the middle matrix is a diagonal matrix. And the terminology is if you look at the columns of the two unitary matrices, these are called the singular vectors of your original matrix. And then the entries of the diagonal matrix, those are called the singular values of that matrix. So unlike something like an eigen decomposition, you don't have to make any assumptions about the matrix you started with. It doesn't have to have some special properties for this to work. It's just a blanket statement. Any matrix can be factored in this way.
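(A quick illustration for readers: a minimal numpy sketch, not part of the conversation, showing the factorization for an arbitrary rectangular matrix. The matrix here is made up.)

    import numpy as np

    A = np.array([[3.0, 1.0, 2.0],
                  [0.0, 4.0, 1.0]])          # any matrix, square or not

    U, s, Vh = np.linalg.svd(A)              # A = U @ Sigma @ Vh
    Sigma = np.zeros_like(A)
    Sigma[:len(s), :len(s)] = np.diag(s)

    print(np.allclose(A, U @ Sigma @ Vh))    # True: the three factors reconstruct A
    print(np.allclose(U.T @ U, np.eye(2)))   # True: U is orthogonal
    print(np.allclose(Vh @ Vh.T, np.eye(3))) # True: V is orthogonal
    print(s)                                 # the singular values, nonnegative and sorted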
EL: Yeah, and I, as we were saying, before we started recording, I also did not actually encounter this in any classes.
KK: Nor did I.
EL: And yeah, it’s something I've heard of, but never really looked into because I didn't ever do linear algebra, you know, as part of my thesis or something like that. But yeah, okay, so it seems a little surprising that there aren't any extra restrictions on what kind of matrices can do this. So why is that? I don't know if that question is too far from left field.
TDB: Maybe that's one of the, you know, many amazing things about SVD is that you don't have to make any assumptions. So number one, in mathematics, we usually say multiplying things is pretty easy, but factorizing is hard. Like, it's hard to factor something. But here in linear algebra, it's like, oh, things are really nice. You just have this matrix, and you get a factorization. That's pretty amazing. I think, maybe to connect why is that—to connect this with maybe something that's more familiar, we could ask, what are those singular vectors? Where do they come from? Or, you know, what's the proof sketch of this?
EL: Yeah.
TDB: And essentially, what you do is you take your matrix, you multiply it by its transpose. And that thing is going to be this nice real symmetric matrix, and that has eigenvectors. And so the eigenvectors of that matrix are actually the singular vectors of your original one. Now, depending on whether you multiply by the transpose of the matrix on the left or the right, that will determine, you know, whether you get the left or right singular vectors. So, you might think that SVD is, like, second best: “Oh, not every matrix is square, so, we can't talk about eigenvectors, oh, I guess singular vectors will have to do.” But actually, it's like picking up on this nice spectral decomposition theorem that we like. And I think when one looks out into the mathematical/scientific/engineering landscape, and you see SVD sort of popping up all over the place, it's pretty ubiquitous. And so that sort of suggests it’s not a second-class citizen. It's really a first-class result.
EL: Yeah. Well, that's funny, because I did, when I was reading it, I was like, “Oh, I guess this is a nice consolation prize for not being an invertible square matrix, is that you can do this thing.” But you're telling me that that was—that’s not a good attitude to have about this?
TDB: Well, yeah, I think SVD, I wouldn't think of it as a consolation prize, I think it is quite something really fundamental. You know, if you were to invite linear algebra onto this podcast and ask linear algebra, what its favorite theorem is, just based on the ubiquity and prevalence of SVD in nature, I'd probably bet linear algebra would say singular value decomposition.
EL: Yeah, can we get them next?
KK: Can we get linear algebra on? We’ll see. Okay, so I don't know if this question has—it must have an answer. So say your matrix is square in the first place. So you could talk about the eigenvalues, and you do this, I assume the singular values are different from the eigenvalues. So what would be the advantage of choosing the singular values over the eigenvalues, for example?
TDB: So I think if your matrix is square, and symmetric, or Hermitian, then the eigenvectors correspond to the singular vectors.
KK: Okay, that makes sense.
TDB: But, that's a good question, Kevin. And I don't have a good answer that I could confidently go on record with.
KK: That’s cool. Sorry. I threw a curveball.
TDB: That’s a great question.
KK: Because then singular values are important. The way I've always sort of heard it was that they sort of act like eigenvalues in the sense that you can line them up and that the biggest one matters the most.
TDB: Exactly, exactly. Right. And in fact, I mean, that sort of goes back to the proof that we were talking about. I was saying, oh, the singular vectors are the eigenvectors of this matrix multiplied by its transpose. And the singular values turn out to be the square roots of the eigenvalues of that square matrix that you got. So they're definitely related.
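(And a small numerical check of that relationship, again just a sketch with a made-up matrix: the eigenvalues of A times its transpose are the squares of the singular values of A, and its eigenvectors are the left singular vectors, up to sign.)

    import numpy as np

    A = np.array([[3.0, 1.0, 2.0],
                  [0.0, 4.0, 1.0]])

    U, s, _ = np.linalg.svd(A)
    eigvals, eigvecs = np.linalg.eigh(A @ A.T)   # A @ A.T is symmetric, so eigh applies

    # eigh sorts eigenvalues in ascending order; reverse to match the SVD's descending order.
    print(np.allclose(eigvals[::-1], s**2))                  # True: eigenvalues are squared singular values
    print(np.allclose(np.abs(eigvecs[:, ::-1]), np.abs(U)))  # True: eigenvectors match U up to sign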
KK: Okay. All right. Very cool. So what drew you to this theorem? Why this theorem in particular?
TDB: Yeah, why this theorem? So this kind of goes back to what we were talking about earlier. I really like this theorem because it's very parallel to a construction in category theory.
KK: Yes.
TDB: Maybe people find that very surprising. We're talking about SVD. And all of a sudden, here's this category theory, curveball.
EL: Yeah, because I really do feel like linear algebra almost feels like some of the most tangible math, and category theory, to me, feels like some of the least tangible.
KK: So wait, wait, are you going to tell us this is the Yoneda lemma for linear algebra?
TDB: No. Although that was going to be my other favorite theorem. Okay, so I'm excited to share this with you. I think this is a really nice story. So I'm going to try my best because it can get heavy, but I'm going to try to keep it really light. But I might omit details, but you know, people can maybe look further into this.
So to make the connection, and to keep things relatively understandable, let's forget for a second that I even mentioned category theory. So let’s empty our brains of linear algebra and category theory. I just want to think about sets for a second. So let me just give a really simple, simple construction. Suppose we have two sets. Let's say they're finite, for simplicity. And I'll call them a set X and a set Y. And suppose I have a relation between these two sets, so a subset of the cartesian product. And just for simplicity, or fun, let’s think of the elements of the set X as objects. So maybe animals: cat, dog, fish, turtle, blah, blah. And let's also think of elements in the set Y as features or attributes, like, “has four legs,” “is furry,” “eats bugs,” blah, blah, blah. Okay. Now, given any relation—any subset of a Cartesian product of sets—you can always ask the following simple question. Suppose I have a subset of objects. You can ask, “Hey, what are all the features that are common to all of those objects in my subset?” So you can imagine in your subset, you have an object, that object corresponds to a set of features, only the ones possessed by that object. And now just take the intersection over all objects in your subset? That's a totally natural question you could ask. And you can also imagine going in the other direction, and asking you the same question. Suppose you have a subset of features. And you want to know, “Hey, what are all of the objects that share all of those features in that subset I started with?” A totally natural question you could ask anytime you have a relation.
Now, this leads to a really interesting construction. Namely, if someone were to give me any subset of objects and any subset of features, you could ask, “Does this pair satisfy the property that these two sets are the answers to those two questions that I asked?” Like, I had my set of objects and, Oh, is this set of features that you gave me only the ones corresponding to this set of objects and vice versa? Pairs of subsets for which the answer is yes, that satisfy that property, they have a special name. They're called formal concepts. So you can imagine like, oh, the concept of, you know, “house pet” is like the set of all {rabbits, cats, dogs}, and, like, the features that they share is “furry,” “sits in your lap,” blah, blah, blah. So this is not a definition I made up, you know, you can go on Wikipedia and look at formal concept analysis. This is part of that. Or you can usually find this in books on lattice theory and order theory. So formal concepts are these nice things you get from a relation between two sets.
Now, what in the world does this have to do with linear algebra or category theory, blah, blah, blah? So here's the connection. Probably you can see it already. Anytime you have a relation, that’s basically a matrix. It's a matrix whose entries are 0 and 1. You can imagine a matrix where the rows are indexed by objects and the columns are indexed by your features. And there's a 1 and the little x little y entry if that object has that feature and 0 otherwise.
KK: Sure.
TDB: And it turns out that these formal concepts that you get are very much like the eigenvectors of that 0-1 matrix multiplied by its transpose. AKA, they're like the singular vectors of your relation. So I'm saying it turns out—so I'm kind of asking you to believe me, and I'm not giving you any reason to see why that should be true—but it's sort of, when you put pen to paper and you work out all of the details, you can sort of see this. But I say it's like because if you just do the naive thing, and think of your 0-1 matrix as a linear map, like as a linear transformation, you could say, okay, you know, should I view this as a matrix over the reals? Or maybe I want to think of 0 and 1 as, you know, the finite field with two elements. But if you try to work out the linear algebra and say, oh, formal concepts are eigenvectors, it doesn't work. And you can sort of see why that is: we started the conversation with sets, not vector spaces. So this formal concept story is not a story about linear algebra, i.e., the conversation is not occurring in the world of linear algebra. And so if you have mappings—you know, from sets of objects to sets of features—the kind of structure you want that to preserve is not linearity, because we started with sets. So we weren't talking about linear algebra.
So what is it? It turns out it's a different structure. Maybe for the sake of time, it's not really important what it is, or if you ask me, I'll be happy to tell you. But just knowing there's another kind of structure that you'd like this map to preserve, and under that right sort of context, when you're in the right context, you really do see, oh, wow, these formal concepts are really like eigenvectors or singular vectors in this new context.
Now, anytime you have a recipe, or a template, or a context, but you can just sort of substitute out the ingredients for something else, I mean, there's a bet that category theory is involved. And indeed, that's the case. So it turns out that this mapping, this sort of dual mapping from objects to features, and then going back features to objects, that, it turns out, is an example of adjunction in category theory. So there's a way to view sets as categories. And there's a way to view mappings between them as functors. And an adjunction in category theory is like a linear map and its adjoint, or like a matrix and its transpose. So in category theory, an adjunction is — let me say it this way, in linear algebra, an adjoint is defined by an equation involving an inner product. Linear adjoint, there's a special equation that your map and its adjoint must satisfy. And in category theory, it's very analogous. It's a functor that satisfies an “equation” that looks a lot like the adjoint equation in linear algebra. And so when you unravel all of this, it's almost like Mad Libs, you have, like, this Mad Lib template. And if you erase, you know, the word “matrix” and substitute in the whatever categorical version of that should be, you get the thing in category theory, but if you stick in “matrix,” oh, you get linear algebra. If you erase, you know, eigenvectors, you get formal concepts, or whatever the categorical version of that is, but if you if you have eigenvectors, then that's linear algebra. So it's almost like this mirror world between the linear algebra that we all know and love, and like, Evelyn, you were saying, it's totally concrete. But then if you just swap out some of the words, like you just substitute some of the ingredients in this recipe, then you recover a construction in category theory, and I am not sure if it's well known — I think among the experts in category theory it is — but it's something that I really enjoy thinking about. And so that's why I like SVD.
EL: So I think you may have had the unfortunate effect of me now thinking of category theory as the Mad Libs of math. Category theorists are just going and erasing whatever mathematical structure you had and replacing it with some other one.
KK: That’s what a category is supposed to do, right? I mean, it's this big structure that just captures some big idea that is lurking everywhere. That's really the beautiful thing, and the power, of the whole subject.
TDB: Yeah, and I really like this little Mad Lib exercise in particular, because it's kind of fun to think of singular vectors as analogous to concepts, which could sort of maybe explain why it's so ubiquitous throughout the scientific landscape. Because you have this matrix, and it’s sort of telling you what goes with what. I have these correlations, maybe I organize them into a matrix; I have data and organize it into a matrix. And SVD sort of nicely collects the patterns, or correlations, or concepts in the data that's represented by our matrix. And, I think, Kevin, earlier you were saying how singular values sort of convey the importance of things based on how big they are. And those things, I think, are a little bit like the concepts, maybe. That’s sort of reaching far, but I think it's kind of a funny heuristic that I have in mind.
KK: I mean, the company you work for is very famous for exploiting singular values, right?
TDB: Exactly. Exactly.
KK: Yep. So another fun part of this podcast is we ask our guests to pair their favorite theorem with something. So what pairs well with SVD?
TDB: Okay, great question. I thought a lot about this. But I, like, had this idea and then scratched it off, then I had another idea and scratched it off. So here's what I came up with. Before I tell you what I want to pair this with, I should say, for background reasons, this Mad Libs or ingredients-swapping recipe-type thing is a little bit mysterious to me. Because while the linear algebra is analogous to the category theory, the category theory doesn't really subsume the linear algebra. So usually, when you see the same phenomena occurring a bunch of places throughout mathematics, you think, “Oh, there must be some unifying thread. Clearly something is going on. We need some language to tell us why do I keep seeing the same construction reappearing?” And usually category theory lends a hand in that. But in this case, it doesn't. There's no—in other words, it's like I have two identical twins, and yet they don’t, I don’t know, come from the same parents or something.
KK: Separated at the birth or something?
TDB: Yeah. Something like that. Yeah, exactly. They’re, like, separated at birth, but you're like, “Oh, where are their parents? Where were they initially together?” But I don't know, that hasn't been worked out yet. So it's a little bit mysterious to me. So here it is: I'm going to pair SVD with, okay. You know, those dum-dum lollipops?
KK: Yeah, at the bank.
TDB: Okay. Yeah, exactly. Exactly. Just for listeners, that’s d-u-m, not d-u-m-b. I feel a little bit—anyway. Okay, so the dum-dum lollipops, they have this mystery flavor.
KK: They do.
TDB: Right, which is like, I can't remember, but I think it's wrapped up with a white wrapper with question marks all over it.
EL: Yeah.
TDB: And you're letting it dissolve in your mouth. You're like, well, I don't really know what this is. I think it’s, like, blueberry and watermelon? Or I don't know. Who knows what this is? Okay. So this mystery that I'm struggling to explain is a little bit like my mathematical dum-dum lollipop mystery flavor. So, you know, I like to think of this as a really nice, tasty mathematical treat. But it's shrouded in this wrapper with question marks over it. And I'm not quite really sure what's going on, but boy, is it cool and fun to think about!
EL: I like that. Yeah, it's been a while since I went to the bank with my mom, which was my main source of dum-dum lollipops.
TDB: Same, exactly. That's funny, with my mom as well.
EL: Yeah. That's just how children obtain dum-dums.
KK: Can you even buy them anywhere? I mean, that’s the only place that they actually exist.
EL: I mean, wherever, bank supply stores, you know, get a big safe, you can get those panic buttons for if there's a bank robber, and you can get dum-dum lollipops. This is what they sell.
TDB: That’s right.
KK: No, it must be possible to get them somewhere else, though. When I was a kid trick-or-treating back in the 70s, you know, there would always be that cheap family on the block that would either hand out bubblegum, or dum-dums. Or even worse, candy corn.
EL: I must admit I do enjoy candy corn. It's not unlike eating flavored crayons, but I’m into it. Barely flavored. Basically just “sweet” is the flavor.
KK: That’s right.
EL: Yeah, well, so actually, this raises a question. I have not had a dum-dum in a very long time. And so is the mystery flavor always the same? Or do they just wrap up some normal flavor?
KK: Oh, that’s a good question.
EL: Like, it falls off the assembly line and they wrap it in some other thing. I never paid enough attention. I also targeted the root beers, mostly. So I didn't eat a whole lot of mystery ones because root beer is the best dum-dum.
KK: You and me! I was always for the root beer. Absolutely.
EL: And butterscotch. Yeah.
TDB: Oh, yeah. The butterscotch are good. So Evelyn, I was asking that same question to myself just before we started recording. I did a quick google search. And I think what happens, at least in some cases, like maybe in the past—and also don't quote me on this because I don't work at a dum-dum factory—but I think it was like, oh, when we're making the, I don't know, cherry or butterscotch flavored ones, but then the next in line are going to be root beer or whatever, we’re not going to clean out all of the, you know, whatever. So if people get the transition flavor from one recipe into the other, we’ll just slap on the “mystery.” I don't know, someone should figure this out.
KK: Interesting.
EL: I don't want to find out the answer because I love that answer.
KK: I like that answer too.
EL: I don't want the possibility that it's wrong, I just want to believe in that. That is my Santa Claus.
KK: And of course, now I’m thinking of those standard problems in the differential equations course where you’re, like, you're doing those mixing problems, right? So you've got, you know, cherry or whatever, and then you start to infuse it with the next flavor. And so for a while, there's going to be this stretch of, you know, varying amounts of the two, and then finally, it becomes the next flavor.
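(For anyone who skipped differential equations: the standard mixing-tank model being alluded to looks something like this, with hypothetical inflow rate \(r\), inflow concentration \(c_{\text{in}}\), and vat volume \(V\). The amount \(x(t)\) of the new flavor satisfies
\[ \frac{dx}{dt} = r\,c_{\text{in}} - \frac{r}{V}\,x, \qquad x(0) = 0, \]
so \(x(t) = c_{\text{in}} V\bigl(1 - e^{-rt/V}\bigr)\): the old flavor fades exponentially, and the transition batch in between would be the mystery pop.)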
TDB: Exactly.
EL: Well, can you quantify, like, what amount and which flavor dominates and some kind of eigenflavor? I'm really reaching here.
TDB: I love that idea.
EL: Yeah. Oh, man. I kind of want to eat dum-dums now. That’s not one of my normal candies that I go to.
TDB: I know, I haven't had them for years, I think.
KK: Yeah, well, we still have the leftover Halloween candy. So this is, we can tell our listeners—What is this today? It's November 19?
EL: 19th, yeah.
KK: Right. So yeah, we bought one bag of candy because we never get very many trick-or-treaters anyway. And this year, we had one small group. And so we bought a bag of mini chocolate bars or whatever. And it's fun. We have a two-story house. We have a balcony on the front of our house. So this group of kids came up and we lowered candy from our balcony down. When I say “we” I mean my wife. I was cooking dinner. But we still have this bag. We're not candy-eaters. But you're right. I'm jonesing for a dum-dum now. I do need to go to the bank. But I feel a little cheap asking for one.
EL: Yeah. I feel like, you know, maybe 15, 16, is where you kind of start aging out of bank dum-dums.
KK: Yep, yeah. Sort of like trick-or-treating.
EL: Well, anyway, getting back to math. Have we allowed you to say what you wanted to say about the singular value decomposition?
TDB: Yeah. I mean, I could talk for hours about SVD and all the things, but I think for the sake of listeners’ brains, I don't want to cause anyone to implode. I think I shared a lot. Category theory can be tough. So I mean, it appears in lots and lots of places. I originally started thinking of this because it cropped up in my thesis work, my PhD work, which not only involved a mixture of category theory, but linear algebra for, essentially, things in quantum mechanics. And so you actually see these ideas appear in sort of, you know, “real-world” physical scenarios as well. Which is why, again, it was kind of drawing me to this mystery. Like, wow, why does it keep appearing in all of these cool places? What's going on? Maybe category theory has something to say about it. So just a treat for me to think about.
EL: Yeah. And if our listeners want to find out more about you and follow you online or anything, where can they look?
TDB: Yeah, so they can look in a few places. Primarily, my blog, math3ma.com. I'm also on Twitter, @math3ma, as well as Facebook and Instagram too.
EL: And what is your book? Please plug your book.
TDB: Thank you. Thank you so much. Right. So I recently co-authored a book. It’s a graduate-level book on point-set topology from the perspective of category theory. So the title of the book is Topology: A Categorical Approach. And so this is really—I wrote this with John Terilla, who was my PhD thesis advisor, and Tyler Bryson, who is also a student of John at CUNY. And we really wrote this for, you know, if you're in a first-semester topology course in your first year of graduate school. So basic topology, but we were kind of thinking, oh, what's a way to introduce category theory that’s sort of gentler than just: “Blah. Here’s a book. Read all about category theory!” We wanted to take something that people were probably already familiar with, like basic point-set. Maybe they learned that in undergrad or maybe from a real analysis course, and saying, “Hey, here's things you already know. Now, we're just going to reframe the thing you already know in sort of a different perspective. And oh, by the way, that perspective is called category theory. Look how great this is.” So giving folks new ways to think and contemplate things they already know, and sort of welcoming them or inviting them into the world of category theory in that way.
KK: Nice.
EL: Yeah. So definitely check that out if you're interested in—the way you said like "Blah, category theory"—the other day, for some reason, I was thinking about the Ice Bucket Challenge from, like, I don't know, five or six years ago, where people poured the ice on their head for ALS research. (You're also supposed to give money because pouring ice on your head doesn't actually help ALS research.)
TDB: Right.
EL: But yeah, it's like this is an alternative to the Ice Bucket Challenge of category theory.
TDB: That’s right. That's a great way to put it. Exactly.
EL: Yeah. Well, thank you so much for joining us. It was fun.
KK: This was great fun. Yeah.
On this episode, we had the pleasure of talking with Tai-Danae Bradley, a postdoc at X, about the singular value decomposition. Here are some links you might find relevant:
Bradley's website, math3ma.com
Her Twitter, Facebook, and Instagram accounts
The book she co-wrote, Topology: A Categorical Approach
Evelyn Lamb: Hello, and welcome to My Favorite Theorem, coming at you from the double hurricane part of 2020 today. I mean, I'm not near the Gulf Coast so it's not quite as relevant for my life, but that is the portion of the year we are in right now. I am one of your hosts, Evelyn Lamb. I'm a freelance math and science writer in Salt Lake City, Utah. And here's your other host.
Kevin Knudson: Hi. I’m Kevin Knudson, professor of mathematics at the University of Florida. It's just hot here. But you know, there have been, like, fire tornadoes, right, in California? This is all very on-brand for 2020. This year can’t end soon enough.
EL: Yeah, we say that. I feel like I've said that at the end of many previous years, and then it's not great.
Yoon Ha Lee: As a science fiction writer, I have to say never assume it's the worst. It can always get worse.
EL: Yes.
KK: Right, right, right.
EL: Yes. And that is our guest, Yoon Ha Lee. So yeah, would you like to introduce yourself, tell us a little bit about yourself, and maybe talk about your writing a little bit, how you got to writing from the degrees that you have in math.
YHL: So my name is Yoon Ha Lee. I'm from Houston, and I'm a science fiction and fantasy writer. I actually went to Cornell to get a degree in history, and then I realized that history majors starve on the street. So I switched to math, so that I could have an income and ended up not becoming a mathematician. My best-known books are probably the Machineries of Empire trilogy, which is Ninefox Gambit, Raven Stratagem and Revenant Gun. It's space opera, lots of ships blowing up everywhere. And then a kid's book, Dragon Pearl, which is out from Disney Hyperion in the Rick Riordan Presents series. And that one is also a space opera, because ships blowing up is just fun.
EL: Yeah, well, and that's funny. I think I just put together—I had seen the Rick Riordan publishing imprint before, and I just started reading Percy Jackson the other day. And so it's like, oh, that's who that guy is.
KK: And I think I might be the only one among us who is old enough to have seen the biggest space opera, Star Wars, in the theater in its first release.
YHL: Yeah, my parents let me see it on the television when I was six years old, and I was terrified at the point where Luke gets his hand cut off.
KK: That’s Empire.
YHL: I think the second one? I forget which movie it was, but he gets his hand cut off and I had nightmares for weeks. And I'm like, Mom and Dad, Why? Why? Why did you think this was an appropriate movie for a six-year-old? And then I got all the storybooks and I wanted the lightsaber and everything, so I guess it worked out.
KK: Of course, yeah. Well, my movie story—we're getting off track, but it's a good movie story. So when I was six years old, in 1975, my parents thought it would be a good idea to take me to the drive-in to see Jaws. And I had nightmares for months that there was a shark living under my bed, a huge shark that was going to get me.
EL: My husband was born I think right around the time one of them was released. I don't remember which one now. But we were talking with one of his colleagues one time and figured out that on the day he was born, that colleague was going to see that movie, like, the day it came out.
KK: I'm going to guess it was Jedi. I don't know exactly how old you guys are. But that's my guess.
EL: That sounds right. Yeah, I'm not a big Star Wars person. But yeah, I guess I've always not been sure about "space opera." The term is something where I feel like I know it when I see it. But I don't really know, like, how to describe it. Is it just—do you feel like a categorization of space opera is, like, ships blowing up?
YHL: Ships blowing up, generally bigger, larger-than-life characters, larger-than-life stakes, big galactic civilization types of things. It's basically the Star Wars genre.
EL: Yeah
KK: It works.
EL: Yeah. And the Machineries of Empire—the reason that I invited you on here is because I just read Ninefox Gambit a few weeks ago and just thought, you know, this person sure uses a lot of math terms for a novel! So mathematicians might be especially interested in reading this one; it has shenanigans with calendar systems that are based on math and arithmetic and stuff. So yeah, that was fun. So you, in addition to getting a bachelor's degree in math, you got a master's in math education, right?
YHL: Yes, at Stanford. And I ended up not using it for very long. I was a teacher for, like, half a year before I left the profession.
EL: Okay, and was it just that your writing was taking off and you wanted to do that more? Were there other reasons?
YHL: A kid came along. That was the big reason. Yeah.
EL: Oh. Yeah. That definitely can take a lot of time.
KK: Ah yeah, just a little bit.
EL: Well, that's great. So what is your favorite theorem?
YHL: My favorite theorem is Cantor's diagonalization proof. And I discovered it actually in high school as a footnote in Roger Penrose’s The Emperor's New Mind. It was really just sort of a sidelight to the extremely complicated and hard-to-follow argument that he was making in that book on the nature of consciousness and quantum physics, which, as a high schooler, you know, it basically went over my head. But I was sitting there staring at this footnote and going “I don't understand this at all.” He said in the footnote that Cantor had proven that the real numbers, the set of real numbers, has a cardinality greater than the set of natural numbers. And of course, I was a high schooler. I hadn't had a lot of math background. So my understanding of these concepts was very, very shaky. But he said if you make a list of, you know—pretend that you have a list of all the real numbers and you put them, you know, 1, 2, 3, 4, you put them in correspondence with the natural numbers, and then you go down diagonally, first digit of the first number, second digit of the second number, third digit of the third number, and so on. And then you shift it by one. So if the numeral in that place is two, it becomes three, if it's nine, it becomes zero, and so on. So you can construct a number that is not on the list, even though your premise is that you have everything on the list. And I think this was the first time that I really understood what a proof by contradiction was. My math teachers had attempted very hard to get this concept into my head. And it just did not go through until I read that proof and meditated upon it. And it's funny, because I spent most of my life as a kid thinking that I hated math. And yet there I was in the library reading books about math, so I guess I didn't hate it as much as I thought I did.
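[Editor's note: for readers who want to see the diagonal trick spelled out, here is a minimal sketch in Python. The listing below is a hypothetical, finite stand-in for an infinite enumeration; it is the editor's illustration, not something from the episode.]

```python
# A minimal sketch of Cantor's diagonal construction.
def diagonal_number(listed_reals, digits=10):
    """Given a purported enumeration of reals in [0, 1) as digit strings,
    build a number that differs from the n-th entry in its n-th digit."""
    new_digits = []
    for n in range(digits):
        d = int(listed_reals[n][n])           # n-th digit of the n-th number
        new_digits.append(str((d + 1) % 10))  # shift it by one; 9 wraps to 0
    # (A careful proof also sidesteps expansions ending in all 9s, which can
    # name the same real number two ways.)
    return "0." + "".join(new_digits)

# A hypothetical finite list of decimal expansions, just for illustration.
listing = ["1415926535", "7182818284", "4142135623", "5772156649",
           "6180339887", "3025850929", "6931471805", "0000000000",
           "9999999999", "1234567890"]
print(diagonal_number(listing))  # differs from every entry in the list
```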
EL: Yeah, I was thinking a high schooler reading that Penrose book is definitely—yeah, you had some natural curiosity about math, it sounds like.
KK: Yeah, I'm sort of surprised that your high school teachers were trying to teach you proofs by contradiction. That's kind of interesting. I don't remember seeing any of that until I got to university.
YHL: I don't know that they got into depth about it. But this was at Seoul Foreign School, which was a private, international school in South Korea. And they tried to make the curriculum more advanced, with mixed results.
KK: Sure. It’s worth a shot.
EL: Yeah, and this is really one of those Greatest Hits. Like if you're putting together the, like, record that you're going to send out or something, like, Math's Greatest Hits would include this diagonalization argument. It's so appealing. And we've had another guest select that too, Adriana Salerno, a few months ago. And yeah, I think a lot of people who eventually do become mathematicians, this is one of those first moments where they feel like they really understand some pretty high-concept math kind of stuff. So did you see this proof later in school?
YHL: No. Ironically, most of what I was interested in doing when I did my undergraduate degree was abstract algebra. So I didn't even take a set theory course at all. But I knew it was sort of out there in the water, and I don't know, one of the things I loved about math and that led me to switch my major to math was the idea that there were these beautiful ideas and these beautiful arguments, and just sort of the elegance of it, which was very different from history, where—I love history, and I love all the battles and things, like the defenestration of Prague and all the exciting things happening. But you can't really prove things in history. Like you can't go back and run the siege of Stalingrad again, and see what happens differently.
KK: Maybe we could though, right? We have the computing power now. Maybe we could do that. This sounds like your next novel, right? So simulation of Stalingrad, and this time, the Nazis win or something? I don't know.
YHL: Oh no. I mean, science fiction writers totally do that. There's this whole strand of alternate history science fiction or fantasy. Harry Turtledove is one author who likes to have the story where aliens invade during World War II and then the Nazis and the Allies have to team up against the aliens, those kinds of stories. There is a readership for these things. Sure.
EL: So you use a lot of math concepts in your writing, your fiction writing. So have you ever tried to work in diagonalization, or this kind of idea, into any of your stories?
YHL: This one? No. I mean, occasionally, I remember writing a story in college, actually, called Counting the Shapes. And it was just everything in the kitchen sink, because I was taking point-set topology, and so I used it as a metaphor for a kind of magic that worked that way, and other ideas, like, I don't know, I had recently read James Gleick’s Chaos. So I was really interested in chaos theory and fractals. And I don't know that I was super systematic about it, and I sort of suspect that a real mathematician would look at it and poke holes. You know, I'm using this as a magic system, not as rigorous math, more as a metaphor, I guess, or flavor.
KK: Oh, but I mean, writers do that all the time, right? So I taught a math and lit class with a friend of mine in languages a few years ago. And, you know, Borges, for example, you know, this sort of stuff is all over his work, these ideas of infinity, and it's even embedded in Kafka and all this stuff, and it can be a wonderful way to get your readers to think about something from a point of view they might not have thought of before.
YHL: Well, the interesting thing about Ninefox Gambit and the math terminology that I used for flavor is that 20 publishers turned the book down because they said it had too much math. And my joke about this is that they saw the word diagonalization in the linear algebra matrix context, and they didn't know what that meant, and they ran away from it. Which was extremely discouraging when my agent at the time, Jennifer Jackson, and I were going out on submission with this book. And it's like, it's basically a space opera adventure where people blow each other up. You don't have to worry about the occasional math term. It's just there as flavor for the magic system. But a lot of people—I'm sure you have encountered the fact that a lot of people in the US have math phobia, and this really does affect the readership as well.
KK: Really?
EL: Yeah, that's funny, because in some way, I mean, you definitely use the math language to give a certain flavor to the system that this universe is in, but you could sub it out for, like, any Star Trek term,
YHL: Exactly.
EL: It's just like, oh, yeah, you could put in tricorders and dilithium crystals, or, you know, anything, to serve that purpose, because, you know, it's not a math textbook. No one's learning linear algebra from reading Ninefox Gambit.
YHL: No, exactly. I actually, when I was originally writing the book, like the rough draft, I had my abstract algebra textbooks out and ready to go. And I was going to construct sort of a game engine, a combat engine of how these battles were going to work in an abstract algebra sense. And my husband who, he's not afraid of math, he's actually a gravitational astrophysicist, and he's arguably better at math than I am. But he sat me down and said, “Yoon Ha, you can't do this. You're not going to have any readers because science fiction readers who want to read about big spaceships blowing each other up do not want to have to wade through a math textbook to get to the action.” And I mean, it turned out that he was absolutely correct. So I ended up not doing that and just using it as, you know, “the force,” except with math flavor.
KK: Linear algebra is the force. All right!
EL: That’s so interesting. I noticed on your website that you have a section for games. So do you also like to design games?
YHL: I do design games. And by design games, I mean tiny little interactive fiction text adventures or really small tabletop RPGs in the indie sense. You know, three-page games for five people, no GM, that kind of thing. So I do enjoy doing that. And it is related to math, I think, but it's certainly not something that we learn to do in any of our math classes.
EL: Yeah, well, I mean, personally, I think it would be very cool. Have you written up this potential game, the abstract algebra game thing, into an actual game? Or was that kind of abandoned on the editing floor while you were putting the book together?
YHL: It got abandoned on the editing floor. Also because it would have been a tremendous time suck. And, you know, it would have been a fun idea. But if I wasn't going to use it in a book, and it certainly wasn't going to be used in like a computer game or some something like that, there just didn't seem to be enough incentive to go ahead and do it.
EL: Yeah, probably the market of mathematicians who read sci-fi is, you know, not a tiny market but maybe not quite the demographic you're looking for. But I'm just imagining, like, hauling out the Sylow theorems to, like, explode someone's battle cruiser or something. Just saying that, you know, if you were bored some time and wanted to sink a bunch of time into that.
YHL: If somebody else wrote it, I would definitely buy it and read it, I have to say.
KK: All right. The challenge is out there, everybody. Everybody should get on this.
EL: Yeah, very cool. Yep.
KK: So another thing we do on this podcast is we ask our guests to pair their theorem with something. So what pairs well, with Cantor's diagonalization argument?
YHL: Waffles.
KK: Waffles? Oh, well, yeah.
YHL: Because sort of that grid shape. I know, this is super visual. But the waffles I'm thinking of, my husband did his postdoc at Caltech, so we lived in Pasadena. And when we were there, there was this delightful Colombian hotdog place. And they also made the best waffles with berries and fruit and syrup and whipped cream. And those are the waffles I think of when I think of the diagonal slash proof.
KK: Right. And so the grid is actually fairly small. Is it one of those waffle makers?
YHL: Yeah.
KK: Yeah. Okay, so I have a Belgian waffle maker, and it's fine. It makes four at a time, but those holes are pretty big. Right? I'm thinking of, like, the small, Eggo style, right? You can put a lot of digits.
EL: You could also, like, I guess, maybe a berry is too big to fit in them, but I'm just thinking you can put different things in all of them, make sure no two waffles have the same arrangement of syrup and berries and cream.
KK: This is a good pairing. I'm into this one a lot.
YHL: I’m hungry now.
KK: Yeah.
EL: Yeah. I just had lunch, so for once I don't leave this ravenous. So would you like to let people know where they can find you online?
YHL: Online I’m at yoonhalee.com. I'm also on Twitter as @deuceofgears and also on Instagram as @deuceofgears.
KK: Deuce of gears. Is there a story there?
YHL: It’s the symbol of the crazy general in Ninefox Gambit. Okay. And also, because I'm Korean, there are five zillion other Yoon Ha Lees. So by the time I joined Twitter, all the obvious permutations of Yoon Ha Lee had already been taken, so I had to pick a different name.
EL: Yeah, and if I'm remembering correctly, there are sometimes cat pictures on your Twitter feed. Is that right?
YHL: Yes. So the thing that I post periodically to Twitter is that my Twitter feed is 90% cat pics by volume. There are people who, you know, they tweet about serious things, or politics, or so on, and these are very important, but I personally get stressed out really easily so I figure people could use an oasis of cheerful cat pictures.
EL: Yes, I just wanted to make sure our listeners have this vital information that if they are running low on cat pictures, this is a place they can go. It's definitely been an important part of my mental health to make sure to look at plenty of cat pictures during this—these stressful times as they say.
KK: Yeah, on Instagram, I follow a lot of bird watching accounts. So I just get a feed of birds all day. It's better for my mental health.
EL: Well, maybe Yoon's cat would like that.
KK: I suspect yes, that's right. That's right. Yep.
EL: Yeah, we were talking to a friend who said that they have some bird feeders outside, they just have indoor cats. And the cats will meow to get them to open the windows in the morning so they could watch the birds outside. It’s like, “Mom, turn on the TV.”
YHL: I tried putting on a YouTube video of birds, and my cat was just completely indifferent to the visuals. But she kept looking at the speaker where the bird sounds were coming from.
KK: Hmm.
EL: Interesting. I guess maybe hearing is like more of a dominant sense or something? Cats have pretty good vision, though, I think.
YHL: Yeah, I think she's just internalized that nothing interesting comes out of the moving pictures.
EL: Yeah. Well, thanks for joining us. I really enjoyed talking with you.
KK: This has been good.
YHL: It’s been an honor.
On this episode of My Favorite Theorem, we were happy to talk with Yoon Ha Lee, a sci-fi and fantasy writer with a math background, about his favorite theorem, Cantor's proof of the uncountability of the real numbers. Here are a few links to things we mentioned in the episode:
Yoon Ha Lee's website, Twitter account, and Instagram account
Our episode with Adriana Salerno, who also loves this theorem
Roger Penrose's book The Emperor's New Mind
James Gleick's book Chaos
Harry Turtledove
Kevin Knudson: Welcome to My Favorite Theorem, a math podcast for your quarantine life. I'm Kevin Knudson, professor of mathematics at the University of Florida. And here is your other host.
Evelyn Lamb: Hi, I'm Evelyn Lamb. I'm a freelance math and science writer in beautiful Salt Lake City, Utah.
KK: Yeah.
EL: How are you, Kevin?
KK: I'm okay. I had my—speaking of quarantines, I had my COVID swab test this morning.
EL: How was it?
KK: Well, you know, about as pleasant as it sounds. But yeah, I'm sure you've been to the pool and gotten water up your nose. That's what it feels like.
EL: Yeah.
KK: And then it's over. And it's no big deal. I should have the results within 48 hours. It’s part of the university's move to get everybody back to campus, although I don't expect to go back to the office in any serious way before August. But this is late May now for our listeners, who will probably be hearing this in December or something, right?
EL: Yeah. Who even knows? Time has no meaning.
KK: Hopefully this will all be irrelevant by the time our listeners hear this. [Editor's note: lolsob.] We'll have a vaccine and everything. It will be a brave new world and everything will be fine.
EL: It'll be a memory of that weird time early in the year.
KK: That’s right. The before times. So anyway, today, we are pleased to welcome Michael Barany. Michael, why don’t you introduce yourself and let us know who you are and what's up.
Michael Barany: Hi. So I'm a historian of mathematics. I'm super excited to be on this podcast. I feel like I've been listening long enough that the Gainesville percussionists must be in grad school by now.
KK: No. One of them is my son, and he just finished his third year of college.
MB: Okay, yeah. So older than he was anyway.
EL: Yeah.
MB: Yeah, so I’m a historian of mathematics. I'm based at the University of Edinburgh, where I'm in a kind of interdisciplinary social science of science and technology department. So I get to teach students from all over the university how to think about what science means when you step back and look at the people involved and how they relate to society, how ideas matter, how technology's changed the world, all that fun stuff that gets people to really rethink their place in the world and the kind of things they do with their science.
KK: That’s very cool.
EL: And I know some people who are historians of math will get a degree through a math department and some get it through a history department, I assume. And which are you? I always wonder what the benefits are of each approach.
MB: Yeah, that's great. History of mathematics is a really strange field. It’s actually, as a field, a lot older than history of science as a field, and even older than history as a profession.
EL: Huh.
MB: So history of mathematics started as a branch of mathematics in the early modern period. So we're talking like the 1500s, 1600s. There are always debates about what you classify as this or that. And it started as a way of trying to understand how mathematical theories came about, how they naturally fit together. The idea was that if you understood how mathematical theories emerged, you could come up with better mathematical theories, and you could understand the sort of natural order of numbers and the universe and everything else that you want to understand with mathematics. And then more toward the 19th and the 20th century, there are all these different variations of history of mathematics that branched out of fields like history and philosophy, and philosophy of science and history of science. So my undergrad training was in mathematics. My PhD is from a history department, but from a history of science program in that department. But it's possible to get a PhD in history of mathematics from a mathematics department, it's possible to sort of straddle between different departments. And it makes it a really rich and interesting field. Mathematics education departments or groups sometimes give PhDs in history of mathematics. And they really use the history for different purposes. So if your goal is to make mathematics better, you're taking the perspective of someone doing it from a mathematics department. If your goal is to become a better educator, then you can use history for that in a math education context. I tend to do history as a way of understanding how things fit together in the past and trying to make sense of social values and social structures and ideologies and ideas and how those fit together. And that's the approach that that you come at from a history or history of science perspective.
KK: Very cool. And how did you end up in Edinburgh of all places?
MB: Well, so the academic job market is bad enough in mathematics, right, but in history of mathematics, in a good year, there may be two to three openings in history of science jobs in general. So that's the cynical answer. The more idealistic answer is Edinburgh has this really important place in the sociology of science. In the 1970s and 80s especially, there was this group of kind of radical sociologists at the University of Edinburgh. It was called the Edinburgh School of the sociology of scientific knowledge, which is known for this sort of extreme relativist and constructivist view of how politics and ideology shape scientific knowledge. And I did a master's degree in that department many years later, in 2009-2010, sort of getting my feet wet and starting to learn that discipline. And that approach has been really formative for me and my scholarship. And so it was an incredible stroke of luck that they just happened to have an opening in my field while I was on the market. And I was even more lucky to have the chance to go there.
KK: Wow, that's great. I've always wanted to go there. I've never been to Edinburgh.
MB: It’s the most beautiful city in the world.
KK: Yeah, it looks great. All right, well, being a historian of math, you must know a lot of theorems. So the question is, do you actually have a favorite one? And if so, what is it?
MB: So my favorite theorem is more of a definition. But I guess the theorem is that the definition works.
KK: Okay, great.
EL: That works.
MB: Which, actually—saying what it means for a definition to work is actually a really hard problem, both historically and mathematically. So it's interesting in that regard. So the definition is the definition of the derivative of a distribution.
KK: Okay.
MB: So distributions, as you'll recall from analysis—I guess grad analysis I is usually when you meet them.
EL: Yeah, I think it wasn't until grad school for me at least.
KK: I don't know if I've ever met them, really.
MB: So distributions were invented in 1945, more or less. And in the early years, actually, people were saying you could teach this as a replacement for your basic calculus. So the idea was, this would be something that even beginning college students or even high school students would be learning. So it's interesting to see how people pitched the level of a theory or the relevant audience, and that's part of the story, too. But in earlier stages of one's calculus education, you learn that there are functions that are integrable but not continuous; continuous but not differentiable; differentiable but not continuously differentiable, and so on. And so a big problem is how do you know something's differentiable when you're studying a differential equation or trying to prove some theorem that involves derivatives. And distributions were the kind of magic wand that was invented in the middle of the 20th century to say that's not actually a problem. Basically, if you pretend everything's differentiable, then all the math works out. And when it really is differentiable, you get the correct differentiable answer, and when it's not, then you get another answer that's still mathematically meaningful. But it's sort of your magic passphrase to be able to ignore all of those problems.
So a distribution is this replacement for a function. Where functions have these sort of different degrees of differentiability, distributions are always differentiable and they always have antiderivatives, just like functions do, but every distribution can be differentiated ad nauseam for whatever differential equation you want to do. And the way you do that is through this definition—my favorite definition/theorem—which is you use integration by parts. So that's a technique you use in calculus class, too, as a sort of trick for resolving complicated integrals. And distributions actually don't tend to look at the things that make the calculus problems challenging or interesting, depending on what kind of student you are, or what kind of teacher you are. So you set them up in a way where you don't have to worry about boundary conditions, you don't have to worry about what the antiderivative things are, because you're working with things where you already know what the antiderivative is. And the definition of distribution uses this fact from integration by parts that you essentially move the derivative from one function to another. So we don't have an exact way of saying functionally what the derivative of a distribution is. You can still say if you multiply it by a function that's super-smooth and over a bounded domain—so you don't have any boundary conditions to worry about, and so you always know how to differentiate that—if you multiply that by a distribution, and take the integral, then if you want to take the derivative of that distribution, integration by parts says you can instead throw in a minus sign and take the derivative of that smooth function instead. And so using that kind of trick, of moving the derivative onto something that is always differentiable, you can calculate the effect of differentiating a distribution without ever having to worry about, say, what the values of that distribution are after you've taken the derivative, because distributions are often things that don't have sort of concrete values in the way that we expect functions to have.
EL: And I hope this question isn't very silly. But when you think about integration by parts—you know, if you took calculus at some point and learned this, there's the UV, and then there's minus the integral of something else. And so for this, we just choose a function that would be zero on the boundary, and that would get rid of that UV term. Is that right?
MB: Exactly. Yeah. So the definition of distribution sets up this whole space of really nice smooth functions. All of them eventually go to zero, and because you're always integrating over the entire domain, and it's always zero when you go far enough out into the domain, those boundary terms with that UV in the beginning just completely disappear, and you're just left with the negative integral, and then with the derivative flopped over.
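[Editor's note: in standard notation, the definition being described comes out as follows. This is the editor's sketch using the usual conventions (T a distribution, φ a smooth, compactly supported test function), not a formula quoted from the episode.]

```latex
% Integration by parts, with the boundary term vanishing because the test
% function \varphi has compact support:
\[
  \int_{\mathbb{R}} f'(x)\,\varphi(x)\,dx
  = \bigl[f(x)\varphi(x)\bigr]_{-\infty}^{\infty}
    - \int_{\mathbb{R}} f(x)\,\varphi'(x)\,dx
  = -\int_{\mathbb{R}} f(x)\,\varphi'(x)\,dx .
\]
% So the derivative of a distribution T is *defined* by moving the derivative
% onto the test function and throwing in a minus sign:
\[
  \langle T',\varphi\rangle := -\,\langle T,\varphi'\rangle
  \qquad \text{for all } \varphi \in C_c^{\infty}(\mathbb{R}).
\]
```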
EL: All right, great. So if anyone was worried about where their UV went, that's where it went. It was zero. Don't worry. Everything's okay. Yeah. Okay. So what is good about this? Or what do you like about this?
MB: Yeah. So I think this is a really interesting definition from a lot of different perspectives. One thing that I've been trying to understand in my research about the history of mathematics is what it means for mathematics to become a global discipline in the 20th century, so to have people around the world working on the same mathematical theory and contributing to the same research program. And this definition has really helped me understand what that even means and how to understand and analyze that historically. So we think, well, you know, a mathematical theory or a mathematical idea is the same wherever you look at it, and whoever's doing it. As long as they can manipulate the definitions or prove the theorem, it shouldn't matter where they are. But if you look historically, at actual mathematicians doing actual mathematics, where they are makes a huge difference in terms of what methods they're comfortable with, how they understand concepts, how they explain things to each other, how they make sense of new techniques. I mean, learning a new mathematics technique is actually really hard in a lot of cases. And so the question is, how do you form enough of an understanding to be able to work with someone who you can't go and have a conversation with over tea the next day to sort of work out your problems? And the answer is, basically, you use things like this definition and take something you're really comfortable with—integration by parts—and give it a new meaning. And by taking old meanings and reconfiguring them and relating them to other meanings, you make it possible for everyone to have their own sorts of mathematical universes where they're building up theories, but to interact in a way where they can all sensibly talk to each other and develop new ideas and share new ideas. So that's one of the things that's really exciting about the definition to me.
One of the other things is sort of how do you know what the significance of the definition is? I mean, a lot of people early on said, isn't this just like a pun? Isn't this just wordplay? Quite early on, when Schwartz was sharing this definition and some people were getting really excited about it, some people said, well, you know, it's a cool idea. But isn't this just basically integration by parts? What's new? What's interesting about this? And the history really shows this debate, almost, between people with different kinds of values and philosophies and goals for mathematics, for mathematics education, for the relationship between pure and applied mathematics, where they take different ideas of what's really going on with this definition. Is it something that's complex and difficult and profound and important in that way, or is it something that is utterly trivial and simple, and therefore really useful to people who may be, say, electrical engineers who are trying to work with the Heaviside calculus, and need some sort of magic way to make that all add up? And what made distributions and this definition really powerful is it could be these multiple things to multiple people. So you can have mathematicians in Poland, or in Manchester, or in Argentina come to these very, almost diametrically opposed views of what it is that's significant or challenging or easy about distributions, and they can all agree to talk to each other and agree that it's worth sharing their theories and inviting them to conferences, and reading their publications, and they can somehow all make a community out of these different understandings.
KK: I’ve never thought about the sociological aspects in that way. That's really interesting. So the theorem that basically says that this definition is a good one. Is that a difficult theorem to prove?
MB: So there are a lot of different parts. It’s not—I guess it doesn't even boil down to one statement.
KK: Yeah, sure. Yeah, that makes sense. Yeah.
MB: So there's the aspect that when you're dealing with a function, but dealing with it using the distributions definition, that anything you do is not going to ruin what's good about it being a function. So anything you do with a distribution, if you could have done it as though it were a regular function, you get the same answer. So that's one aspect of the theorem that sort of establishes this definition. Another aspect is that distributions are, in some sense, the smallest class of objects that includes functions where everything that is a normal function can be indefinitely differentiated. So that's one way of arguing that distributions are sort of the best generalization of functions, and this competition—I mean, there are a lot of different competing notions, or competing ideas for how you can solve this problem of differentiating functions that were circulating in the 1930s and 1940s. And distributions won out in this competing scene, in part by the aspects of the theorems about the definition that show it's sort of the most economical, the simplest, smallest, the best in that sense. And then you have all the usual theorems of functional analysis, like everything converges as you expect it to; if you start with something that's integrable, you're not going to lose integrability, in some sense.
EL: So this might be a little bit of a tangent, and we can definitely decide not to go down this path. But to make this really concrete—so when I think of a distribution, the example I think of—it's been a while since I've thought of distributions, actually—is the Dirac delta function. I naturally just call it a function, but it is really a distribution. And so this is a thing that, I always think of it, it's something that you can't really define what its value is, but it has a convenient property that if you integrate it, you get 1. Like, its area is 1 even though it's supported on only one point, and it is infinitely tall. And so zero times infinity, we want it to be 1 right here.
MB: And magically it turns out to be 1.
EL: Yeah. And basically, if you decide that this function, this distribution, has this property, then things work out, and it's great. Was that before or after Schwartz? Did this definition—was this kind of grandfathered into being a distribution? Or was it the inspiration?
MB: I love how you put that. Yeah. So this phrase that you said at the beginning, we call it a function, but it's really a distribution. I mean, that's evidence of Schwartz's success, right? The idea that what it really is, what it fundamentally is, is a distribution rather than a function, that's the result of this really sort of deliberate—I mean, it's not an exaggeration to call it propaganda in the second half of the 1940s by people like Laurent Schwartz and Marston Morse and Marshall Stone and Harald Bohr and all of these far-traveling advocates for the theory—to say, you think you've been working with functions, you think you've been working with measures, you think you've been working with operator calculus if you're an electrical engineer, for instance. Or you think you've been working with bra and ket, with Dirac calculus for quantum mechanics, but what you've really been doing ultimately, deep down without even knowing it, is working with distributions. And their ability to make that argument was part of their way of justifying why distributions were important. So people who had no problem just doing the math they were doing with whatever kind of language they were doing, all of a sudden, these advocates for distribution theory were able to make it a problem that they were doing this without having the kind of conceptual apparatus that distributions provided them. And so they were both creating a problem for old methods and then simultaneously solving it by giving them this distribution framework.
So, they did this to the Heaviside calculus, which is about 50 years older than distributions. They did this to the Dirac calculus, where the Dirac function comes from, which comes out of the 1920s and 30s. They did this to principal value calculus, which is also an interwar concept in analysis. Even among Schwartz's contemporaries, there were things like de Rham currents, which were—had Schwartz not come along, we would all be saying the Dirac function is really a de Rham current rather than a Schwartz distribution. But then there were even things that came after distributions, or sort of simultaneously and after, that Schwartz was able to successfully claim. Like there was this whole school of functional analysis and operator theory coming out of Poland associated with Jan Mikusiński. Where Schwartz was—because he was able to get this international profile so much more quickly and effectively—he was able to say all of this really clever research and theorems that Mikusiński is coming up with, that's a nice example of distribution theory, even though Mikusiński would have never put that in those terms. So a huge part of this history is how they're able to use these different views of what a distribution really is to sort of claim territory and grandfather things in and also sort of grandchild things, or adopt things into the theory and make this thing seem much bigger than the actual body of research that people who considered themselves distribution theorists themselves were doing.
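[Editor's note: to make the Dirac example concrete in the language just discussed, here is the editor's sketch in standard notation, assuming one real variable and writing H for the Heaviside step function; these are the usual textbook formulas, not anything quoted from the episode.]

```latex
% The delta "function" as a distribution: it just evaluates a test function
% at zero, and its distributional derivative follows from the definition above.
\[
  \langle \delta, \varphi \rangle = \varphi(0), \qquad
  \langle \delta', \varphi \rangle = -\,\varphi'(0).
\]
% The Heaviside step H (0 for x<0, 1 for x>0) has no classical derivative
% at 0, but distributionally H' = \delta:
\[
  \langle H', \varphi \rangle
  = -\int_{\mathbb{R}} H(x)\,\varphi'(x)\,dx
  = -\int_{0}^{\infty} \varphi'(x)\,dx
  = \varphi(0)
  = \langle \delta, \varphi \rangle .
\]
```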
EL: Okay. And so I think we also wanted to talk a little bit about—you mentioned in your email to us, I hope I'm not getting this confused with anything—how this theory goes with the history of the Fields Medal.
MB: Oh, exactly. Yeah. So this was a really surprising discovery, actually, in my research. I didn't set out—the Fields Medal kind of became one thing, one little bit of evidence that Schwartz was a big deal. I never expected in my research to come across some evidence that really changed how I understood what the Fields Medal historically meant. And this was just a case of stumbling into these really shocking documents, and then having built up all of this historical context to see what their historical implications were. So Schwartz was part of the second ever class of Fields Medalists in 1950. The first class was in 1936, then there's World War II, and then they sort of restart the International Congresses of Mathematicians after the war. And Schwartz is selected as part of that second class. The main reason he's part of that class is because the chair of that committee is Harald Bohr, who is the younger brother of Niels Bohr. Actually, in the early 1900s, Harald Bohr was the more famous Bohr because he was a star of the Danish Olympic soccer team.
KK: Oh!
EL: Wow!
MB: He was a striker. His PhD defense had many, many, many more soccer fans than mathematicians. He was this minor Danish celebrity. And he went on to be a quite respectable mathematician. He had his mathematics institute alongside his brother's physics institute in Copenhagen. And during the interwar period especially, he established himself as this safe haven for internationally-minded mathematics in this period of immensely divisive conflict among different national communities. And because he kind of had that role as this respected figure known for internationalism, he was selected by the Americans who organized the 1950 Congress at Harvard to chair the Fields Medal committee. And Bohr, shortly before being appointed to that committee, had encountered Schwartz in a conference that was sponsored by the Rockefeller Foundation and took place in Nancy in France, and he was just totally blown away by this charming, charismatic young Frenchman with this cool-sounding new theory that seemed like it could unite pure and applied mathematicians, that could be attractive to mathematicians all over the world. And so Bohr basically makes it his mission between 1947 and 1950 to tell the whole world about distributions. So he goes to the US and to Canada, and he writes letters all around the world, he shares it with all his friends. And when he gets selected to chair this committee, what you see him constantly doing in the committee correspondence is telling all of his colleagues on the committee what an exciting future of mathematics Schwartz was going to be.
So the problem is, then sort of the question is, what is the Fields Medal supposed to be for? And they didn't really have a very clear definition of what are the qualifications for the medal. There was a kind of vague guidance that Fields left before he died. The medal was created after John Charles Fields' death. And there was a lot of ambiguity over how to interpret that. So the committee basically had to decide, is this an award for the top mathematicians? Is this award an award for an up-and-coming mathematician? How should age play a factor? Should we only do it for work that was done since the last medal was awarded? That's a long time to consider, so it didn't really narrow it down very much in their case. And they go through this whole debate over what kinds of values they should apply to making this selection. And ultimately, what I was able to see in these letters, which were not saved by the International Mathematical Union, which hadn't even been formed at the time, they were kind of accidentally set aside by a secretary in the Harvard mathematics department. So they weren't meant to be saved. They just were in this unmarked file. And what those letters show is that Bohr basically constructs this idea of what the medal is supposed to be for in a strategic way to allow Schwartz to win. So there's this question, there's this kind of obvious pool of candidates, of outstanding early- to mid-career mathematicians, including people like Oscar Zariski and André Weil, and Schwartz's eventual co-medalist, Atle Selberg. And they are debating the merits of all of these different candidates, and basically, Bohr selects an idea of what the Fields Medal is for, to be prestigious enough to justify giving it to this exciting young French mathematician, but not so prestigious that he would have to give it to André Weil instead, who everyone agreed was a much better mathematician than Schwartz, and much more accomplished and much more successful and very close in age. He was about five years older than Schwartz.
KK: He never won the Fields Medal.
MB: And he never won the Fields Medal, right. And so what you see in the letters from the early years of the Fields Medal is actually this deliberate decision, not just by the 1950 committee, but I was also able to uncover letters for the 1958 committee, where they consider whether the award should go to the very best young mathematicians, and they deliberately decide in both cases that it shouldn't, that that would be a mistake, that that would be a misuse of the award. Instead, they should give it to a young mathematician, but not a young mathematician that was already so accomplished that they didn't need a leg up.
EL: Right.
MB: And that was my really surprising discovery in the archives, that it was never meant to crown someone who was already accomplished, and in fact, being accomplished could disqualify you. So Friedrich Hirzebruch in 1958, everyone agreed was the most exciting mathematician. He was in his early 30s, sort of a very close comparison to someone like Peter Scholze today. So already a full professor at a very young age, with a widely-recognized major breakthrough. And they considered Hirzebruch, and they said, No, he's too accomplished. He doesn't need this medal. We should give it to René Thom or someone like that.
EL: Yeah. And, of course, people like me, who only were aware of the Fields Medal once they started grad school in math—I wasn't particularly aware of anything before that—think of it as an award for the very best mathematicians under 40, because it has sort of morphed into that over the intervening decades.
MB: Yeah. And one of the cool side effects is you can now put an asterisk next to—Jean-Pierre Serre is known to brag about being the youngest-ever Fields Medalist. But the asterisk is that he won in a period when it was still a disqualification to be too accomplished at a young age.
KK: Yeah, but he still won.
MB: He did still win. He’s still a very important mathematician.
KK: You sort of couldn’t deny Serre, right?
MB: Well, they denied Weil, right?
KK: They did. But I think Serre is probably still—Anyway, we can argue about— we should have a ranking of best mathematicians of the ‘50s, right?
EL: I mean, yeah, because ranking mathematicians is so possible to do because it’s a well-ordered set.
KK: That’s right.
EL: Obviously in any field of life, there's no way to well-order people. I shouldn't say any field. I guess you can know how fast people can run some number of meters under certain conditions or something. But in general, especially in creative fields, it's sort of impossible to do. And so how do you choose?
MB: That's what I love about studying the sociology of science and technology, is that you get these tools for saying—you know, even in fields like running, we think of sprinting as this thing where everyone has a time and that's how fast they are. But look at all of the stuff the International Olympic Committee has to do for anti-doping and regulating what shoes you can wear, like there are all of these different things that affect how fast you are that have to be really debated and controlled. They're kind of ultimately arbitrary. So even in cases like that, you know, it seems sort of more rankable than mathematics or art or something, and you can tell a great sprinter from someone like me who can barely run 100 meters. But at the same time, there are all of these different social and technical decisions that are so interrelated that even things that seem super objective and incontestable end up being much more socially determined.
EL: Yeah.
KK: Yeah. All right. So part two of this podcast is you have to pair your theorem with something, or your definition or whatever we're going to call it, your distribution, whatever it is.
EL: Yeah. If you treat it as a distribution, it’ll work fine.
KK: That’s right.
MB: Exactly.
KK: So what have you chosen to pair with distributions?
MB: So what I thought I would pair distributions with is knock-knock jokes.
KK: Okay.
MB: So I did a little bit of research before coming on here, and I basically found there are no good math knock-knock jokes. I mean, someone please prove me wrong, like tweet at me. And yeah, tell me tell me.
KK: Are there good knock knock jokes, period?
EL: Oh, definitely.
MB: Yeah. So I did come up with one that sort of at least picks up on some of the historical themes. So Knock, knock.
KK and EL: Who’s there?
MB: Harold.
KK and EL: Harold who?
MB: Harold is the concept of a function anyway?
That's the best I could do.
EL: Okay.
MB: So why knock-knock jokes? They involve puns. So you're talking about shifting the meaning of something to come up with something new. They're dialogical: there’s a sort of fundamental interactive element. They sort of make communities. So sharing a knock-knock joke, getting a knock-knock joke, finding it funny or groan-inducing, tells you who your friends are, and who shares your sense of humor. And yeah, they fundamentally use this aspect of wordplay to to make something new and to make something social. And that's exactly what the theory of distributions does and what that definition does, just sort of expand your thinking. And they're also sort of seen as kind of elementary, or basic. It's kind of like a kid's joke.
EL: Right.
MB: It’s this question of distributions as this fundamental theory, your basic underlying theory. So I think it sort of brings together all of those aspects that I like about the definition.
KK: You thought hard about this. This is a really thoughtful, excellent pairing. I like this.
EL: Yeah, I like it. I'm trying to figure out what is the analogy to my favorite knock-knock joke, which is the banana and orange one, right, which is classic.
MB: It’s the only one I use in real life.
KK: Sure.
EL: It’s a great one!
KK: Yeah.
EL: Fantastic. But, like, what distribution is this knock-knock joke?
KK: The Dirac function, right? Excuse me, the Dirac distribution.
MB: Yeah. Aren't you glad I didn't say the Dirac distribution? Yeah, no, it's the only one you actually use all the time. Yeah, the Dirac distribution, or there's that theorem that any partial differential equation can be resolved as the sum of derivatives of these elementary distributions. That's your go-to: ubiquitous, uses a pun, but uses it in a way that kind of makes sense and is kind of groan-inducing, but also you just love to go back and use it over and over and over again.
KK: Right.
EL: Nice.
KK: I think back in the 70s—dating myself here—I had a book of knock-knock jokes, and it actually had the banana and orange one in it. I mean, it's like, this is how basic of a book this was. So I might be ragging on knock-knock jokes, but of course, I had a whole book of them. So anyway.
EL: Oh, they're great. And especially when a child tells you one.
KK: That’s right. That’s what they’re there for.
MB: The best is when you have a child who hasn't heard the knock-knock joke you’ve heard 10 million times, and you get to be the person to share the groan-inducing pun with the child. I mean, that's how I imagine Schwartz going to Montevideo and explaining distribution theory, like the experience of sharing this pun and having them go “Ohhh” and slapping their forehead. There's this cultural resonance, to introduce something that you immediately grasp. And yeah, that's a really special experience.
KK: Yeah.
EL: So at the end of the show, we like to invite our guests to plug things, and I'll actually plug a couple of your things because we've sort of mentioned them already. You had a really nice article in Nature. I don't remember, it was a couple years ago—
MB: 2018.
EL: —about this history of the Fields Medal, focusing on Olga Ladyzhenskaya, who was on the short list in ’58 and would have been the first woman to get the Fields Medal if she had gotten it, but it was really interesting because it touches on these things about how the Fields Medal became what it is thought of now and how they made that decision at that time. So go read that. And you also have an article about this distribution stuff that I am completely now blanking on the title of, but it has the word “wordplay” in it, and you probably know the title.
MB: There’s “Integration by Parts” as the title.
EL: Okay.
MB: And then there's a long subtitle. So this is the thing any historian does, is they have some kind of punny title and then this long subtitle. I think one of the reasons I empathize with the theory of distributions is, like, this is how I think as a historian. I come up with a pun, and then I work out how all of the things connect together afterwards. You see that in all of my titles and papers, basically. That's on my website, mbarany.com, and in the show notes.
EL: Yeah, we'll put those in the show notes. We'll link to your website and Twitter in the show notes. And yeah, anything else you want to mention?
MB: Yeah, so if you want all of this math and sociology and politics and stuff about academia and the values of mathematics, then my main Twitter account, @mbarany, is the one to follow. If you just want sort of parodies and irreverent observations about math history, then @mathhistfacts is my parody account that I started in August, but the key to that is that behind every thing that looks like it's just a silly joke is actually something quite subtle about historical interpretation. And I always leave that as an exercise to the reader. But I do try to—this was my response to, you know, St. Andrews has this MacTutor archive of biographies of mathematicians that has hundreds and hundreds of mathematicians, these sort of capsule biographies. And they have these little examples, or these little summaries, like so-and-so died on this day and contributed to this theory, and it's just kind of morbid to celebrate them for when they died. But then even the one that makes the rounds every year on Galileo's birthday, so Galileo is actually one of the—not Galileo, Galois. Galois is one of the few people who actually has an interesting death date, whose death is historically significant, and there's a Twitter account that tweets based on these little biographical snippets, and does it for his birthday rather than his death day and then says, like, "Galois made fundamental contributions to Galois theory." So this was my response to that account and those tweets from these biographical snippets, saying there's more to history than just when people died and what theory named after them they contributed to, and trying to do something a bit more creative with that.
EL: Yeah, that is fun. I felt slightly personally attacked because I did just publish a math calendar that has a bunch of mathematicians' birthdays on it, but I did choose to only do, like, a page about a mathematician on their birthday rather than their death day because it just seemed a lot less morbid.
MB: Very sensible. There are some mathematicians with interesting death days. So Galois, Cardano. Cardano used mathematics to predict his death day, so it's speculated that he also used some poison to make sure he got his answer right.
EL: Yikes! That’s a bit rough.
MB: But yeah, there are a few mathematically interesting death days. But yeah, I mean, birthdays are okay, I guess. I'm not super into mathematical birthdays anyway, but better than death days.
EL: Yeah. I mean, when you make a calendar, you've got to put it on some day. And it's weird to put it on not-their-birthday. But yeah, that's a fun account. So yeah, this was great. Thanks for joining us, Michael.
MB: Thanks. This was super fun.
On this episode of My Favorite Theorem, we were happy to talk with University of Edinburgh math historian Michael Barany. He told us about his favorite definition in mathematics: distributions. Here are some links you might find interesting.
Barany’s website and Twitter account
His article “Integration by Parts: Wordplay, Abuses of Language, and Modern Mathematical Theory on the Move” about the notion of the distribution
His Nature article about the history of the Fields Medal
Distributions in mathematics
The Dirac delta function (er, distribution?)
The Danish national team profile page of mathematician and footballer Harald Bohr
Kevin Knudson: Welcome to My Favorite Theorem, a math podcast and so much more. I'm Kevin Knudson, professor of mathematics at the University of Florida. Here is your other host.
Evelyn Lamb: Hi, I'm Evelyn Lamb. I'm a math and science writer in Salt Lake City, Utah. I have left the county two times since this all happened. We don't have a car, so when I leave my home, it is either on feet or bicycle, which is your feet moving in a different way. But I have biked out of our county now into two different other counties. So it's very exciting.
KK: Fantastic. Well, I do have a car. I bought gas yesterday for the first time since May 26, I think. And yesterday was June 30.
EL: Yes.
KK: And I've gotten two haircuts, but it looks like you've gotten none.
EL: Yes. That’s correct. I’m probably the shaggiest I’ve been in a while. Normally this time of year is buzzcut city, which I do at home anyway. But I don't know.
KK: I will say I’m letting it get a little longer actually. I know I said I got a haircut, but you know, Ellen likes it longer somehow. So here we go. This is where we are. My son's been home for three months, and we haven't killed each other. It's all right.
EL: Great. Yeah, everything's doing as well as can be expected, I suppose. If you're listening to this in the future, and somehow, everything is under control by the time we publish this, which seems unlikely, we are recording this during the 2020 COVID-19 pandemic, right, which—I guess it still stays COVID-19 even though it's 2020 now, to represent the way time has not moved forward.
KK: Right. Time has no meaning. And you know, Florida now is of course becoming a real hotspot, and cases are spiking. And I'm just staying home and, and I have four brands of gin, so I'm okay.
EL: Yeah. Anyway!
KK: Anyway, let's talk math. So we're pleased today to welcome Daniel Litt. Daniel, would you please introduce yourself?
Daniel Litt: Hey, thank you so much. It's really nice to be here. I'm Daniel Litt. I'm an assistant professor at the University of Georgia in Athens, Georgia, likewise, a COVID-19 hotspot. I also have not gotten gas, but I think I've beat your record, Kevin. I haven't gotten gas since the pandemic began.
KK: Wow. That’s pretty remarkable.
DL: I’ve driven, maybe the farthest away I've driven from home is about a 15-minute drive, but those are few and far between.
KK: Sure.
DL: So yeah, I'm really excited to be here and talk about math with both of you.
KK: Cool. All right. So I mean, this podcast is—actually, let’s talk about you first. So you just moved to Athens, correct?
DL: I started a year ago.
KK: A year ago, okay. But you just bought your house.
DL: That’s right. Yeah. So I actually live in northeast Atlanta, because my wife works at the CDC, which is a pretty cool place to work right now.
KK: Oh!
EL: Oh wow.
KK: All right. Is she an epidemiologist?
DL: She does evaluation science, so at least part of what she was doing was looking at the CDC’s interventions and deployments, seeing how effective they were being, and helping them to understand that.
KK: Very cool. Well, now it would be an interesting time to work there. I'm sure it's always interesting, but especially now. Yeah. All right. Cool. All right. So this podcast is called my favorite theorem. And you've told us what it is, but we can't wait for you to tell our listeners. So what is your favorite theorem?
DL: Yeah, so my favorite theorem is Dirichlet’s theorem on primes in arithmetic progressions. So maybe let me explain what that says.
KK: Please do.
EL: Yes, that would be great.
DL: Yeah. So a prime number is a positive integer, like 1, 2, 3, 4, etc, which is only divisible by one and by itself. So 2 is a prime, 3 is a prime, 5 is a prime, 7, 11, etc. Twelve is not a prime because it's 3 times 4. And part of what Dirichlet’s theorem on primes in arithmetic progressions tries to answer, part of the question it answered, is how are primes distributed? So there is a general principle of mathematics that says that if you have a bunch of objects, they're usually distributed in as random a way as possible. And Dirichlet’s theorem is one way of capturing that for primes. So it says if you look at an arithmetic progression—that’s, like, 2, 5, 8, 11, 14, etc. So there I started at 2 and I increased by 3 every time. Another example would be 3, 6, 9, 12, 15, etc—there I started at 3 and increased by 3 every time. So Dirichlet’s theorem says that if you have one of those arithmetic progressions, and it's possible for infinitely many primes to show up in it, then they do. So let me give you an example. So for 3, 6, 9, 12, etc, all of those numbers are divisible by 3. So it's only possible for one prime to show up there, namely 3.
EL: Right.
DL: But if you have an arithmetic progression, so a bunch of numbers which differ by all the same amount, and they're not all divisible by some single number, then Dirichlet’s theorem tells you that there are infinitely many primes in that sequence. So for example, in the sequence 2, 5, 8, 11, etc, there are infinitely many primes, 5 and 11 being the first two [editor’s note: the first primes after 2. But it’s just odd for an even number to be prime]. And it tells you something about the distribution of those primes, which maybe I won't get into, but just their bare existence is really an amazing theorem and incredible feat of mathematics.
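[Editor’s note: for anyone who wants to poke at this at home, here is a minimal Python sketch of the progressions Daniel describes. It assumes the sympy library is available; the function names and the search cap are ours, not anything from the episode.]

```python
# Walk along the progression a, a + d, a + 2d, ... and keep the terms that
# happen to be prime. Dirichlet's theorem says the supply never dries up
# as long as a and d share no common factor.
from math import gcd
from sympy import isprime  # assumed to be installed

def primes_in_progression(a, d, how_many, search_limit=10**5):
    """First `how_many` primes of the form a + k*d (capped so the search always stops)."""
    found, n = [], a
    while len(found) < how_many and n < search_limit:
        if isprime(n):
            found.append(n)
        n += d
    return found

print(primes_in_progression(2, 3, 8))   # [2, 5, 11, 17, 23, 29, 41, 47]
print(primes_in_progression(3, 3, 8))   # just [3]: every later term is divisible by 3
print(gcd(2, 3), gcd(3, 3))             # 1 versus 3, the coprimality condition in the theorem
```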
EL: So this theorem, I guess, for some of our listeners, and for me, it probably sort of reminds them in some ways of like twin primes or something, these other questions about distributions of primes. Of course, twin primes, you don't need a whole arithmetic progression, you just need two of them. That would be primes that are separated by two, which other than 2 and 3 is the smallest gap that primes can have. And, of course, twin primes is not solved yet.
DL: Yeah, we don’t know that there are infinitely many.
EL: Yeah, people think there are, but you know, who knows? We might have found the last one already. I guess that's unlikely. But Dirichlet’s theorem was proved a long time ago. So can you give me a sense for why this is a lot easier than twin primes?
DL: Yeah, so part of the reason, I think, is that twin primes are much sparser than primes in any given arithmetic progression. So just to give you an example, if you have a bunch of numbers, one way of measuring how big they are is you could take the sum of 1 over those numbers. So for example, the sum of 1/n, where n ranges over all positive integers, diverges; that sum goes to infinity. And the same is actually true for the primes in any fixed arithmetic progression. So if you take all the primes in the sequence 2, 5, 8, 11, etc, and take the sum of one over them, that goes to infinity, since there's a lot of them. On the other hand, we know that if you do the same thing for twin primes, that sum converges to a finite number. And that number is pretty small, actually. We know, up to quite a lot of accuracy, what it looks like. And that already tells you that they're sort of hard to find. And if you have things that are hard to find, it's going to be harder to show that there are infinitely many of them. I mention this sum of reciprocals point of view because it's actually crucial to the way Dirichlet’s theorem is proven. So when you prove Dirichlet’s theorem, it's one of these really amazing examples where you have a theorem that's about pure algebra. And you end up proving it using analysis. So in this case, the theory of Dirichlet L-functions. And understanding that sum of reciprocals is kind of key to understanding the analytic behavior of some of these L-functions, or at least it’s very closely related.
KK: So I didn't know that result about the reciprocals of the twin primes converging. So even though we don't know that there are infinitely many, somehow…
DL: Yeah, in fact, if there are finitely many then definitely that sum would converge, right?
KK: Yeah, right. That’s—and we even know an estimate of what the answer is? Okay. That’s fascinating.
DL: Yeah, and what you have to do to prove that is show that these primes are sufficiently sparse. And then you win.
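[Editor’s note: a rough numerical peek at the two sums being compared here, for the curious. It leans on sympy for the primes; the cutoff of one million and the variable names are ours, and the partial sums only hint at the limiting behavior rather than prove anything.]

```python
from sympy import primerange  # assumed to be installed

limit = 10**6
primes = list(primerange(2, limit))
prime_set = set(primes)

# primes in the progression 2, 5, 8, 11, ... are exactly the primes that are 2 mod 3
ap_sum = sum(1 / p for p in primes if p % 3 == 2)
# primes belonging to a twin-prime pair (each such prime counted once here)
twin_sum = sum(1 / p for p in primes if p + 2 in prime_set or p - 2 in prime_set)

print(f"sum of 1/p, primes = 2 mod 3, p < 10^6: {ap_sum:.4f}  (diverges, glacially)")
print(f"sum of 1/p, twin primes,      p < 10^6: {twin_sum:.4f}  (converges, by Brun's theorem)")
```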
EL: So once again, I am super not a number theorist. So I'm just going to bumble my way in here. But to me, if I'm trying to show that something diverges, I show that it's sort of like 1/n, and if it converges, it's sort of like 1/n² or worse, or better, or however you want to morally rank these things. So I guess I could imagine it not being that hard to show that twin primes are sort of bounded by n², or, like, that the reciprocals are bounded by 1/n². Would that be a way to do this? Or am I totally off?
DL: It’s something like that. You want to show they're very spread out. Yeah, with primes, I do want to mention, so you mentioned, like, you want to say something like between 1/n and 1/n². So primes are much, much rarer than integers, right? So it's really somewhere between those two.
EL: Yeah.
DL: So for example, understanding the growth rate of those numbers—the growth rate of the primes and the growth rate of the primes in a given arithmetic progression—is pretty hard. Like that's the prime number theorem, it’s one of the biggest accomplishments of 19th-century mathematics.
KK: Right. Does that help you prove that, though? Maybe it does, right? Maybe not?
DL: Yeah, so proving that the sum of the reciprocals of the primes diverges is much, much easier than the prime number theorem. You can prove that in, like, a page or a page and a half or something. But it's very closely related to the key input of the prime number theorem, which is that the Riemann zeta function, the subject of the Riemann hypothesis, has a pole at s=1.
KK: All right. Okay. So what's so compelling about this theorem for you?
DL: Yeah, so what I love about it is that it's maybe one of the earliest places, aside from the prime number theorem itself, where you see some really deep interactions between algebra and complex analysis. So the tools you bring in are these Dirichlet L-functions, which are kind of generalizations of the Riemann zeta function. And they're really mysterious and awesome objects. But for me, what I find really exciting about it is that it's like the classic oldie. And people have been kind of remaking it over and over again for the last, like, century. So there's now tons of different versions of the Dirichlet theorem on primes in arithmetic progressions in all kinds of different settings. So here's an example. In geometry, you have a Riemannian manifold, which is kind of a manifold with a notion of distance on it. There's a version of Dirichlet’s theorem for loops in a Riemannian manifold, the first cases of which are maybe due to Peter Sarnak in his thesis. There are versions over function fields. So I'm not going to be precise about what that means, but if you have some kind of geometric object that's kind of like the integers, you can understand it well and understand the behavior of primes in that kind of object, and how they behave in something analogous to an arithmetic progression. There's something called the Chebotarev density theorem, which tells you if you have a polynomial, and you take the remainder of that polynomial when you divide by a prime, how does its factorization behave as you vary the prime? So there's all kinds of versions of it, and it's a really exciting and cool sort of theme in mathematics.
EL: So kind of getting back to the more tangible number theory thing—which I guess it's kind of funny that we think of numbers as more tangible when they're sort of the first example of an incredibly abstract concept. But anyway, we'll pretend numbers are tangible. So how does this relate, I remember, and I don't even remember now, I must have been writing some article that related to this, but looking at primes that are 1 more than a multiple of 6 versus 1 less, and looking at whether there are more or fewer of these. So these are two different arithmetic progressions. The one that's like, you know, 7, 13, let's see if I can add by 6, 19, that progression, versus the 5, 11, etc, progression. So is this related to looking at whether there are more of the ones that are one more or one less, or things like that?
DL: For sure.
EL: I feel like there are all these interesting results about these biases and the distributions.
DL: Yeah, so people call this prime number races.
EL: Yeah.
DL: So what you might do is you might take two different arithmetic progressions and ask are there more prime numbers, like, less than a billion, say, in one of those progressions as opposed to the other? And there are actually pretty surprising properties of those races that I think are not totally well understood. So, like, even this recent work of Kannan Soundararajan and Robert Lemke Oliver on this kind of thing.
EL: Oh, yeah, that’s what I was writing about!
DL: Which, yeah, shows some sort of surprising biases. And so that's the reason people think those are cool, is exactly this principle I mentioned before, this general principle of math that things should be as random as they can be. And there are maybe some ways in which our random models of the primes are not always totally accurate. And so understanding the ways in which they're inaccurate and how to fix that inaccuracy, like how to come up with a better model of the primes, is a really big part of modern number theory.
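[Editor’s note: the race Evelyn and Daniel are describing is easy to watch directly. A small sketch, again assuming sympy; the cutoffs are arbitrary. Dirichlet’s theorem says both columns grow without bound; the surprising biases are about which column tends to be nosing ahead.]

```python
from sympy import primerange  # assumed to be installed

def race_mod_6(limit):
    """Count primes below `limit` that are 1 more vs. 1 less than a multiple of 6."""
    counts = {1: 0, 5: 0}
    for p in primerange(5, limit):   # every prime past 3 is either 1 or 5 mod 6
        counts[p % 6] += 1
    return counts

for limit in (10**3, 10**4, 10**5, 10**6):
    c = race_mod_6(limit)
    print(f"up to {limit:>7}:  6k+1 -> {c[1]:>6}    6k-1 -> {c[5]:>6}")
```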
EL: But I guess, the Dirichlet theorem is what you need before you start looking at any of these other things, is you need to know that you can even look at these sequences.
DL: Right. Exactly. Yeah. I mean, how do you study the statistics of a sequence you don't know is infinite? Yeah.
EL: Right.
DL: One thing I’ll mention, one cool thing about it, is it lets you—it’s not just an abstract existence result. Like, sometimes you just need a prime which is, like, 7 mod 23 to do some mathematical computation. Okay, and if it's 7 mod 23, then it's pretty easy to find one. You can take 7. But if you need a prime that's a mod b, its remainder upon division by b is a, it's sort of hard to make one in general. And the fact that Dirichlet’s theorem gives them to you is actually really useful. So at least for a mathematician who cares about primes, it's something that just comes up a lot in daily life.
KK: But it's not constructive, though.
DL: Yeah, that's, that's right. It does kind of guarantee that there will be one less than some explicit constant, so in some sense, it's constructive, but it doesn’t, like, hand one to you.
EL: But still, I guess a lot of the time, you probably don't actually need a particular one. You just kind of need to know that there is one.
DL: Yeah.
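[Editor’s note: Daniel’s "comes up in daily life" point can be made concrete in a few lines of Python. Dirichlet’s theorem guarantees the loop below terminates whenever a and b are coprime; sympy is assumed, and the function name is ours.]

```python
from math import gcd
from sympy import isprime  # assumed to be installed

def some_prime_congruent_to(a, b):
    """Smallest prime p with p % b == a, scanning the progression a, a+b, a+2b, ..."""
    assert gcd(a, b) == 1, "Dirichlet's theorem needs a and b to be coprime"
    n = a if a > 1 else a + b
    while not isprime(n):
        n += b
    return n

print(some_prime_congruent_to(7, 23))    # 7 itself, as in the example above
print(some_prime_congruent_to(10, 23))   # 79 (10, 33, and 56 are all composite)
```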
EL: And where did you first encounter this theorem?
DL: I guess it was, I was probably reading Apostol’s number theory book when I was in college. But I think for me, I didn't really grok it until some other more modern version of it, like one of these remakes showed up for me in my own work. So I wanted to make a certain construction of algebraic curves. So that's some kind of geometric objects defined by some polynomial equations, which have some special properties. And it turned out that for me, the easiest way to do that was to use some version of Dirichlet’s theorem in some kind of geometric context.
KK: Very cool.
DL: So that was really exciting.
KK: Yeah. Well, it's nice when, like you say, the oldies come up on your jukebox. They're useful.
DL: Yeah, exactly.
KK: So another fun thing about this podcast is that we ask our guests to pair their theorem with something. And I mean, I think Evelyn and I are just dying to know what pairs well with Dirichlet’s theorem on primes in arithmetic progressions.
DL: So for me, it's the Arthur Conan Doyle stories about Sherlock Holmes.
KK: Okay.
DL: For a couple different reasons. So first of all, because he's all about making connections between these sort of seemingly unrelated things, just like Dirichlet’s theorem is about making connections between, somehow for the proof, it's about connecting these things in algebra, primes, to things in complex analysis, these L-functions, but then also because it's an oldie that's been remade over and over again. It's still constantly being remade, like with the new BBC Sherlock show.
KK: It’s the best. Yeah, I remember when that was coming out. My wife and I were just so excited every time a new season came out, you know, just “Sherlock! Yes!”
DL: Yeah, just like I'm so excited every time a new version of Dirichlet’s theorem on primes in arithmetic progression comes out.
EL: Yeah, I haven't watched any of the Sherlock TV or movies yet. But we're watching a little more TV these days, and that might be a good one for us to go look at.
KK: It is so good. I mean, the first episode…
EL: Is that the one with Benedict Cumberbatch?
KK: Yeah, but the first one, just, I mean, it just grabs you. You can't not watch it after that. It's really, really well done.
DL: Yeah, they're really fun. Although—oh, go on.
KK: I was going to say the last one, the very last episode, I thought was a bit much.
DL: I don't know that I watched the last season.
KK: Yeah, it was a little…yeah. But you know, still good.
DL: I was reading a couple of the old short stories in preparation for this podcast. Those I also highly recommend.
KK: Which ones did you read?
DL: My favorite one that I read recently was, I think it's called The Adventure of the Speckled Band.
KK: Mm hmm.
EL: Oh, yeah.
DL: It's one of the classics.
KK: Right. Yeah. And I think they based one of the episodes on that one, too.
DL: Yeah. that’s right. Yeah.
EL: Yeah, that's a good one. I haven't read all of the Sherlock Holmes; it seems like there are practically infinitely many of them. But you know, I had this collection on my Nook and we were moving, so it was, like, light, and I could read it in the hotel room easily and stuff. And as we were moving to Utah, I think the very first Sherlock Holmes one is set in Utah, or like part of it is set in Utah.
DL: Yeah, maybe the Sign of Four?
EL: Yes, I think it’s the Sign of Four. [Editor’s note: the Utah story is actually A Study in Scarlet, the first Holmes novel.]
DL: Yeah, I think it's one of the first two novellas. So I read every single Sherlock Holmes when I was in high school or something.
EL: Okay. But I was just like, of all things. I didn't know, I hadn't ever read any Sherlock Holmes before. And, like, this British guy writing about this British detective, and it’s set in the state I’m about to move to. It just seemed incredibly improbable to me.
DL: Yeah, I guess he had some kind of fascination with the U.S. because there's that one, which is sort of set in Utah as it was being settled, I guess.
EL: Yeah.
DL: And then there's the case of the five orange pips or something, which actually in a timely way crucially involves the KKK. And so yeah, so there's a lot of sort of interesting interactions with American history.
EL: Yeah, I don't remember if I've read the orange pips.
KK: That figures in the TV series too.
EL: Okay. Yeah, I kind of forgot about those. Those might be a fun thing to go back to, since unlike you, I have not read all of them, and there always seem to be more that I could kind of dive into. I think I kind of tried to read too many at one time, and I just got fed up with what a jerk he is. Self righteous, smug guy.
DL: Yeah, definitely.
EL: Which doesn't make it not entertaining.
DL: If you like this stuff, there's another thing I was thinking of pairing with the theorem. There's a novel by Michael Chabon about a sort of very elderly Sherlock Holmes. I don't quite remember the name, but part of it is about, you know, what it's like to be Sherlock Holmes when you're 90 and all your friends have left you, and so maybe that might appeal to you if you find him sort of an annoying character.
EL: Yeah. Could that be the Yiddish Policeman's Union?
DL: I don't think so. It's a much shorter book.
EL: Okay. That’s the title I could remember.
DL: That one is also excellent. It just doesn't have Sherlock Holmes in it. [Editor’s note: the book is The Final Solution: A Story of Detection.]
EL: Okay. Well, when you were talking earlier about the theorem, you used the word, I think you used the word remake or sequel or something. So I was wondering if you were going to pick movies, or something like that, for your pairing. But this kind of works, too, because each one, it's not a remake exactly—I guess with the movies there are remakes, movies and TV shows. But the stories are all, like, some new sequel. Like, here's a slightly different adventure that Sherlock goes on. And slightly different clues that he finds.
DL: Yeah, exactly. That's one thing that I love about math in general is that so much of it is you look at something classic, and then you put a little spin on it. Like I do a little exercise with some of the grad students at UGA in one of our seminars where we take a classic theorem. I think most recently, we did Maschke’s theorem, which is something about representation theory. And then you highlight every word in the theorem that you could change, and then kind of come up with conjectures based on changing some of those words, or questions based on changing some of those words. That's a really fun exercise in, kind of, mathematical remakes.
EL: That does sound fun. And I mean, I think that's one of the things that you learn, especially in grad school, is just how to start looking at statements of theorems and stuff and seeing where might there be some wiggle room here? Or where could I sub out a different space or a different set of assumptions about a function or something and get something new.
DL: Right, exactly. Yeah, definitely. With Dirichlet’s theorem, that happens so many times.
EL: Yeah, well, that's very fun. Thanks for bringing that one up. Thinking about it, I’m a little surprised that we haven't had it already on the podcast.
DL: Yeah, it's classic.
EL: Yeah, it really is.
KK: So we also like to give our guests a chance to plug anything that they're working on. You're very active on Twitter.
DL: Yeah, that's right. You can follow me @littmath.
KK: Okay.
DL: So what do I want to plug? I think aside from Sherlock Holmes, who maybe needs no plugging, first of all, I would like to plug the Ava DuVernay documentary 13th, which I really liked and I think everyone should watch.
EL: Yeah, and I saw that's free on YouTube right now. I don't know if that's temporary, but I’m not a Netflix subscriber.
DL: Yeah, it is on Netflix. And yeah, I don't know if it'll be available for free on YouTube by the time this comes out, but probably at a nominal cost. In terms of things I've done that I think people who listen to this podcast might like, I did a Numberphile video about a year ago on one of Hilbert’s problems about cutting up polyhedra and rearranging them that someone who likes this podcast might enjoy. So if you google “Numberphile the Dehn invariant,” that’ll come up.
EL: Oh, great.
KK: Cool. All right.
EL: We’ll put links to those in the show notes. Yeah.
KK: All right. Well, thanks for joining us.
DL: Thank you guys so much for having me. This was a lot of fun.
KK: I learned something. I learn something every time, but I'm always surprised at what I'm going to learn. So this has been great. All right. Thanks, Daniel.
DL: All right. Thank you so much.
On this episode of My Favorite Theorem, we were happy to get to talk to Daniel Litt of the University of Georgia about Dirichlet's theorem on primes in arithmetic progressions. Here are some links you might find useful as you listen:
Litt's website
Litt's Twitter profile
More about the Dirichlet theorem from Wikipedia
Tom Apostol's number theory book
The article Evelyn wrote about surprising biases in the distributions of last digits of prime numbers
Michael Chabon's novella The Final Solution: A Story of Detection
Litt's Numberphile video about the Dehn invariant
Ava DuVernay's documentary 13th
Kevin Knudson: Welcome to My Favorite Theorem, a podcast about math and so much more. I'm one of your hosts, Kevin Knudson, professor of mathematics at University of Florida. And here is your other host.
Evelyn Lamb: Hi, I'm Evelyn Lamb. I'm a math and science writer in Salt Lake City, Utah. So how are you, Kevin?
KK: I’m fine. It's stay-at-home time. You know, my wife and son are here and we're sheltered against the coronavirus, and we've not really had any fights or anything. It's been okay.
EL: That’s great!
KK: Yeah, we're pretty good at ignoring each other. So that's pretty good. How about you guys?
EL: Yeah, an essential skill. Oh, things are good. I was just texting with a friend today about how to do an Easter egg hunt for a cat. So I think everyone is staying, you know, really mentally alert right now.
KK: Yeah.
EL: She’s thinking about putting bonito flakes in the little eggs and putting them out in the yard.
KK: That’s a brilliant idea. I mean, we were walking the dog earlier, and I was lamenting how I just sort of feel like I'm drifting and not doing anything. But then, you know, I've cooked a lot, and I'm still working. It's just sort of weird, you know. It's just very…
EL: Yeah, time has no meaning.
KK: Yeah, it's been March for weeks, at least. I saw something on Twitter; somebody said, “How is tomorrow finally March 30,000th?”
EL: Yeah.
KK: That’s exactly what it feels like. Anyway, today, we are pleased to welcome Susan D'Agostino to our show. Susan, why don't you introduce yourself?
Susan D’Agostino: Hi. Thanks so much for having me. I really appreciate being here. I’m a great fan of your show. So yeah, I'm Susan D’Agostino. I'm a writer and a mathematician. I have a forthcoming book, How to Free Your Inner Mathematician, which is coming out from Oxford University Press. Actually, it was just released in the UK last week and the US release will be in late May. And otherwise, I write for publications like Quanta, Scientific American, Financial Times, and others. And I'm currently working on an MA in science writing at Johns Hopkins University.
KK: Yeah, that's pretty cool. In fact, I pre-ordered your book. During the Joint Meetings, I think you tweeted out a discount code. So I took advantage of that.
SD: Yes. And actually, that discount code is still in effect, and it's on my website, which I'll mention later.
EL: Great. So you said you're at Hopkins, but you actually live in New Hampshire?
SD: Exactly. Yes. I'm just pursuing the program part-time, and it's a low-residency program. So I’m a full-time writer, and then just one class a semester. It creates community, and it's a great way to meet other mathematicians and scientists who are interested in writing about the subject for the general public.
EL: Nice. I went to Maine for the first time when I was living in Providence last semester and drove through New Hampshire, which I don't think is actually my first time in New Hampshire, but might have been. We did stop at one of the liquor stores there off the highway, which seems like a big thing in New Hampshire because I guess they don't have sales tax.
SD: No sales tax, no income tax, “Live Free or Die.” Yeah, and you probably passed right around where I live, because New Hampshire has a very short seacoast, about 18 miles, depending on how you measure it. We live right on the seacoast.
EL: Oh yeah, we did pass right there. Wonderful. Yeah, the coast is very beautiful out there.
SD: I love it. Absolutely love it. I'm feeling very lucky because there's lots of room to go outside these days. So, yeah, just taking walks every day.
EL: Wonderful.
KK: So you used to be a math professor, correct?
SD: Yes.
KK: And you just decided that wasn't for you anymore?
SD: Yeah, well, you know, life is short. There's a lot to do. And I love teaching. I had tenure and everything. And I did it for a decade. And then I thought, “You know, if I don't write the books I have in mind soon, then maybe they won't get done.” I've got my first one out already, only two years into this career pivot to writing, and I’m working on my next one. And I always had in mind, in fact, I have a PhD, but I also have an MFA. So I have terminal degrees in both math and writing. And I always had one foot in the math world and one foot in the writing world, and I realized I didn't want to only live in one. So this is my effort to live fully in both worlds.
KK: That’s awesome.
EL: Yeah. Nice. So the big question we have now of course, is what is your favorite theorem?
SD: Okay, great. My favorite theorem is the Jordan curve theorem.
KK: Nice.
SD: Yeah. It’s a statement about simple closed curves in a 2-d space. So before I talk about what the Jordan curve theorem is, let's just make sure we're abundantly clear about what a simple closed curve is.
EL: Yes.
SD: So, a curve—you can think about it as just a line you might draw on a piece of paper. It has a start point, it has an end point. It could be straight, it could be bent, it could be wiggly, it could intersect itself or not. The starting point and the end point may be different or not. And because this is audio, I thought maybe we could think about capital letters in a very simple font like Helvetica, or Arial. So for example, the capital letter O is a curve. When you draw it, it has a start point and an end point that are the same. The capital letter C is also a curve. That one has a different starting and end point, but that's okay. It satisfies our definition. Capital letter P also. That one intersects itself in the middle, but it's still a curve.
Okay, so a simple curve is a curve that doesn't intersect itself along the way. It may or may not have the same starting and end point, but it won't intersect itself along the way. So capital letter O and capital letter C are both simple. But for example, the capital letter B is not simple, because if you were to start at the bottom, go up in a vertical line, draw that first upper loop and then the second upper loop, between the first and second upper bubbles of the B, you will hit that initial vertical line that you drew. So it's not simple because it touches itself along the way.
And a closed curve is a curve that starts and ends at the same point. So the letter O is closed, but the letter C is not because that one starts in one place ends in another.
KK: Right.
SD: Moving forward as we talk about the Jordan curve theorem, let's just keep in mind two great examples of simple closed curves: the letter O, and even the capital letter D. It's fine that that D has some angles, in the bottom left and upper left. So corners are fine, but it needs to start and end in the same place and doesn't intersect itself other than where it starts and ends.
Okay, so the Jordan curve theorem tells us that every simple closed curve in the plane separates the plane into an inside and an outside. So a plane, you might just think of as a piece of paper, you know, an 8 1/2 by 11 piece of paper, let's draw the letter O on it. And when you draw that letter O, you are separating that piece of paper, the surface, into a region that you might call inside the letter O and another region that you might call outside the letter O. And the second part of the Jordan curve theorem tells you that the boundary between this inside and that outside formed by this letter O is actually the curve itself. So if you're standing inside the O, and you want to get to the outside of the O, you've got to cross that letter O, which is the curve.
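[Editor’s note: this inside-versus-outside picture is exactly what the classic "even-odd" point-in-polygon test relies on: shoot a ray out from your point and count how many times it crosses the curve. A minimal Python sketch for a very blocky letter O; the coordinates and names are ours. That the parity of crossings comes out the same no matter which ray you choose is the Jordan curve theorem quietly doing its job.]

```python
def inside(point, polygon):
    """Even-odd test: cast a ray to the right and count crossings with the closed curve."""
    x, y = point
    crossings = 0
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # the edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:                            # and crosses to the right of the point
                crossings += 1
    return crossings % 2 == 1                          # odd number of crossings: inside

blocky_o = [(0, 0), (2, 0), (2, 3), (0, 3)]            # a very blocky letter O
print(inside((1, 1), blocky_o))    # True: to escape, you would have to cross the curve
print(inside((5, 1), blocky_o))    # False: already outside
```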
Okay, so that doesn’t sound very profound.
KK: It’s obvious. It’s just completely obvious.
EL: Any of us who are big doodlers—like, when I was a kid, at church, I was always doodling inside the letters in the church bulletin. That’s the thing. I know that there's an inside and outside to the letter O.
SD: You do. Yes. And you could ask your kid brother, kid sister, whoever. Anyone—you probably didn't need a big mathematical theorem to assure you of this somewhat obvious statement when it comes to the letter O. Okay, so, I do want to tell you why I think it's really interesting beyond this fact that it seems obvious. But before I do, I just want to make two quick notes. And one is that you really do need the simple part, and you really do need the closed part of the theorem because, for example, if you think about a non-closed curve, like the letter C, and you're standing on the piece of paper around that letter C, maybe even inside, like where the C is surrounding you, it actually doesn't separate the piece of paper into an inside and an outside. And then you also need the simple part because if you think about the letter P, which is not simple because it intersects itself, if you think about the segment of the P that's not the loop, so the vertical bottom part of that P, that is part of the curve, the letter P, and that piece of the curve doesn't separate—so even though that P seems to have a little bit of a bubble up there, in the loop of the P, the bottom part of the P is part of the curve, and it's not the boundary between the inside, what you might consider the inside of the P, and the outside of the P. So you really do need the simple part and the closed part.
KK: Right, right.
SD: Okay, so the reason I think it's interesting, in spite of the fact that it seems obvious, is because it actually isn't very obvious. And it's not obvious when you talk about what mathematicians love to call pathological curves.
KK: Yeah. Okay. No, I know, I know the theorem. I just wanted to shrug my shoulders and say, “Oh, look, it's just a special case of Alexander duality.” Right? And so surely it works. But yeah, okay.
SD: And there are other poorly-behaved curves, or misbehaved curves. Another curve you might think about is the Koch snowflake. So one way of thinking about the Koch snowflake—again, I'm going to wave my hands a little bit here because we're in audio and I can't draw you a picture—is to think about the outline of a snowflake. There's a prescribed way to draw the Koch snowflake, but I'm going to simplify it a little bit. Imagine the outline of a snowflake, so not the inside or the outside of the snowflake, just the outline of it. And a Koch snowflake is going to have jagged edges. It's going to zig and zag as it goes along the outline of the snowflake. The Koch snowflake is actually an infinitely jagged curve. So it's not that it has 1,000 zigs and zags, or 1 million, or even 1 billion. It has an infinite number of zigs and zags going back and forth. So you know, it's a little bit easier to imagine what could loosely be defined as the inside of the Koch snowflake and the outside of the Koch snowflake when you imagine one being drawn on a piece of paper. Right in the heart of the very dead center of that Koch snowflake, you could probably feel pretty confident saying, “Hey, I'm inside the Koch snowflake.” And then far outside, you could be confident saying, “I'm outside of the snowflake.” But put yourself right up against the edge of this Koch snowflake. The boundary is supposed to be what separates the inside from the outside, but if you're right up close to that boundary, then in the process of drawing the infinite number of constructions it takes to get the ultimate Koch snowflake, you continue zigging and zagging, you add more zigs and zags every time. So even in the steps that it takes you to get to your drawing of the Koch snowflake, at some point it might seem like, “Hey, I'm inside. Oh wait, now they zigged and zagged and I’m outside. Oh, wait, they zigged and zagged some more. Now I'm inside again.” So even in the finite steps that you need to take to draw that Koch snowflake, to imagine what it is in its infinite form, it seems like that boundary is not really clear. So again, it's another place where it makes you stop and say, “Wait a minute, maybe the Jordan curve theorem is not as obvious as it first looked.”
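[Editor’s note: a rough Python sketch of the zigging and zagging, for anyone who wants to draw along. Each pass replaces every segment with four shorter ones, bending out a triangular bump; iterating forever gives the snowflake’s boundary. The setup and names are ours.]

```python
import math

def koch_step(points):
    """Replace each segment with four: the outer thirds plus a 60-degree bump in the middle."""
    new_points = []
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        dx, dy = (x2 - x1) / 3, (y2 - y1) / 3
        a = (x1 + dx, y1 + dy)                  # one third of the way along
        b = (x1 + 2 * dx, y1 + 2 * dy)          # two thirds of the way along
        cos60, sin60 = math.cos(math.pi / 3), math.sin(math.pi / 3)
        tip = (a[0] + dx * cos60 - dy * sin60,  # the middle third, rotated 60 degrees
               a[1] + dx * sin60 + dy * cos60)
        new_points += [(x1, y1), a, tip, b]
    new_points.append(points[-1])
    return new_points

side = [(0.0, 0.0), (1.0, 0.0)]                 # start from a single straight edge
for _ in range(5):
    side = koch_step(side)
print(len(side) - 1, "zigzag segments after 5 passes")   # 4**5 = 1024, and counting
```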
KK: Right. Why do you love this theorem so much?
SD: Yeah, so I love it. It actually kind of goes along with your question of what it pairs well with, so maybe I'll just jump ahead to the pairing. Yeah. So, um, because even in my book, in the chapter in which I discuss the Jordan curve theorem, I actually paired it with a poem. And the poem is by a New Hampshire native, Robert Frost, who actually went to Dartmouth, which is where I got my doctorate. And one of my favorite poems by Frost is called “The Road Not Taken.” And in the beginning of the poem, he's standing in front of this fork in the road, essentially, and he's looking at both options, realizing, “Okay, I've got to go left or I've got to go right.” You know, he starts off:
Two roads diverged in a yellow wood,
And sorry I could not travel both
And be one traveler, long I stood
And looked down one as far as I could
To where it bent in the undergrowth;
So he's standing here and he's saying, “Well, which path should I take?” And he notices one that he calls, you know, “it was grassy and wanted wear” and had no leaves—what was the line—“in leaves no step had trodden black.” And he ultimately comes to the conclusion that he's going to take the path less traveled. You know, at the very end of the poem, he says, “Two roads diverged in a wood and I—/ I took the one less traveled by,/ And that has made all the difference.” And it strikes me that what Frost is telling us, and what the Jordan curve theorem is telling us, is take the paths that are more unusual, that aren't well trodden, that people don't always look at first, that aren't as obvious or as paved for us. Maybe it's a path that's going to make you question whether you're inside or outside. Or maybe it’s going to have what feels like this amorphous boundary that you can't quite put your finger on. I guess it reminds me that sometimes making a non-traditional choice in life, or looking at pathological objects in math, is actually something very engaging to do, and can make a life a little bit more interesting.
You know, when I first heard about this theorem, I had the same reaction that most everybody else does: Okay, so I can just draw a curve—you know, you say a curve and you think, “Oh, I can just draw a curve.” I'm just going to do a squiggle on a piece of paper. And as long as I make it simple and closed, then it might be the letter O or it might be some blob that doesn't intersect, but at least ends where it started. You know, I remember thinking, wait, why does this theorem get its own name? Why isn’t it just lemma 113.7?
EL: An observation.
KK: Clearly.
SD: Why did it get its own name? And I remember asking a lot of people, and at first everybody was happy to recite the theorem and say what it was and laugh at how obvious it was, but then later, I kept searching and searching, and finally I ended up discovering that in fact, it wasn't as obvious, but in order to appreciate how it’s not that obvious, you needed to look at the paths not taken, the more unusual lines and curves.
EL: Yeah, so this is a theorem that, of course, I feel like I've known for a long time, not just in the “it's obvious” sense, but in the sense that it's been stated in classes that I took—and I feel entirely unconfident about knowing anything about its proof, at least in the general case. I feel like the difference between how much I have used it and relied on it and what I actually understand of how to prove it is very large.
SD: Yeah, honestly I can say the same thing. My background is in coding theory, definitely not topology. And honestly, I never saw topology as my strength. It was always something that I was in awe of, but also found extremely challenging or less intuitive to me. But I had looked at the proofs long ago. I haven't looked at them deeply recently. There are a number of different approaches. But yeah, I feel the same, that even—the statement sounds simple and it's not, and to my understanding, the proofs are also nontrivial.
KK: Yeah. I mean, I was sort of being glib earlier and saying it's just a special case of Alexander duality, like that's easy to prove.
EL: Yeah. Right.
KK: I mean, I was teaching topology this semester, and I was proving Poincaré duality, which is a similar sort of thing, and it's highly non-trivial. I mean, you break it into a bunch of steps, and it sort of magically pops out of it. And I think that's kind of the case here. It's like, you break it into enough discrete steps where each thing seems okay. But in the end, it is a lot of heavy machinery. And like even for Poincaré duality, in the end you use Zorn’s lemma, I mean, there's some kind of choice going on. I think when Jordan—actually, did Jordan even state this theorem? Or is this one of those things where Jordan gets the credit, but it wasn't really him?
SD: Actually, I don’t know, and now I need to know that answer.
EL: I think he did.
KK: Did he?
EL: Yeah, not to toot my own horn, but I’m gonna anyway. For the calendar that I published this year, the page-a-day calendar, still available for purchase, I think Camille Jordan’s birthday is pretty early. It's sometime in January, so I've actually even read about this not too long ago. And I think he did publish it and did have a proof of it. And there's an interesting article, I believe by Thomas Hales, about Jordan’s proof of the Jordan curve theorem, I guess maybe to some extent defending it from the claim some people have that he never had a rigorous proof of it. I did read that for doing the calendar, but it was over a year ago at this point and I don't quite remember. But yeah, you can find a reference to it on my calendar. I will also include that in the show notes.
KK: And also the same Jordan of Jordan canonical form.
SD: Right.
KK: Pretty serious contributions there from one person.
SD: Absolutely.
KK: Yeah. All right. I actually like this pairing a lot.
EL: Yeah.
KK: And and since you live in New Hampshire, it's perfect.
SD: Yes. I have a number of New Hampshire references in my book because I just feel like I wanted to humanize math to the extent that I could, while still tackling pretty substantial ideas. But any time I had an invitation to bring in something from left field that was actually meaningful to me, I just went for it.
EL: Yeah.
SD: I’m sure Evelyn, too, it sounds like you're up on all of the mathematicians’ birthdays at this point because of your calendar.
EL: I know a few of them now. More than I did two years ago.
SD: Right.
KK: So we also like to give our guests a chance to plug anything. You’ve already plugged your book. Any other places we can find you online?
SD: Yeah, well, lately, I've been writing for Quanta magazine, which has been very exciting. And in fact, I have a few math articles already out this year. And I have a very special one—I can't tell you the topic. I'm not supposed to—it should be coming out April 15. And I'm very excited about that article that I believe is going to be on April 15, assuming everything is fine with the publication schedule, given the pandemic. But yeah, listeners can find links to my articles on my website, which is just susandagostino.com. And you can find information about my books and my articles and what I'm up to there.
KK: Cool. Well, thanks so much for joining us, Susan. This was a good one.
EL: Yeah, lovely to chat.
SD: Great. Well, thank you so much. And you know, I love the show, and really, it was my honor to be here. Thank you.
KK: Thanks.
On this episode of My Favorite Theorem, we talked with mathematician and science writer Susan D'Agostino. Here are some links you might find interesting as you listen.
D'Agostino's website
How to Free Your Inner Mathematician, her new book (find a discount code on her website)
Evelyn's article about the Koch snowflake
Thomas Hales' article about Camille Jordan's proof of the Jordan curve theorem (pdf)
Evelyn's page-a-day math calendar
The article D'Agostino was excited about towards the end of the podcast was this interview with Donald Knuth
Evelyn Lamb: Welcome to My Favorite Theorem, joining forces today with Talk Math With Your Friends. I'm Evelyn Lamb. I co-host this podcast. I'm a freelance math and science writer in Salt Lake City, Utah. And this is your other host.
Kevin Knudson: Hi, I'm Kevin Knudson, professor of mathematics at the University of Florida, where it is boiling hot today, and I’m very happy to be in this—how would they put this on TV?—crossover event, right?
EL: Yeah.
KK: So like, I think last night on NBC, on Wednesday nights, there are all these shows that take place in Chicago: Chicago Med and Chicago PD and Chicago Fire, Chicago Uber, who knows what. Anyway, sometimes they'll just merge them all into one three-hour super show, right? So here we go. This is the math version of this, right?
EL: Yes. And I realized today that our very first episode of My Favorite Theorem, we published that in late July 2017. So this is our early third birthday! And we're so glad that people came to join us! And we are very happy today to have our guest Annalisa Crannell with us. Hi, Annalisa. Can you introduce yourself and tell us a little bit about yourself?
Annalisa Crannell: So hi, my name is Annalisa Crannell. I profess mathematics at Franklin and Marshall College, which is in south-central, southeastern Pennsylvania. It's a small liberal arts college. I got my PhD working in differential equations, partial differential equations, nonlinear differential equations, switched into discrete dynamical systems, topological dynamical systems, but for the past 10 or 15 years have been really thinking hard about projective geometry applied to perspective art.
KK: That’s quite the Odyssey.
AC: Yeah, I was really influenced by Paul Halmos saying that one of the marks of a really good mathematician is that they can change fields. And so yeah, I feel like I'm trying to enjoy so many different aspects of what this profession allows us to do.
EL: And a fun story, at least it was fun for me, is that one time you were here in Utah giving a talk at BYU, which is down the street. And we went to an art gallery and you pulled out your chopsticks and showed me how you use your chopsticks to help you know where to stand to best appreciate art, and it was just so amazing to me that that was this thing that you could do. So that was a lot of fun. And I think it just, to me, sums up the Annalisa experience.
AC: Thank you. Yeah, summing, I guess, is a good thing for mathematicians. I think everybody should carry chopsticks with them. I mean, it's great. It's frugal. It helps you avoid trash, but it also helps you do really cool mathematics. So what's not to love about them?
EL: Yeah. So what is your favorite theorem?
AC: So if you had asked me about five years ago, I would have said the intermediate value theorem. But today, I am going to say no, Desargues’ theorem. So Desargues’ theorem first came into human knowledge in the 1640s. And it's a theorem that sounds like it's sort of about planar geometry, but I really think of it as being about perspective. So is this when I'm supposed to tell you what the theorem says?
KK: Yes, please.
EL: Yeah. Okay, should we all get out our—so this is one, I feel like I always need like a piece of paper. (I’m trying to hold it up, but I’ve got a Zoom background.) But I got my piece of paper out so I can hopefully follow along at home.
AC: Yeah. If you had a piece of paper or a chalkboard right behind you, you could imagine that you would have a triangle, like, standing up on a glass pane. And then on one side of this glass pane would be maybe a magician or somebody holding a light. Maybe your granddaughter drew the magician. (Okay, for people in the podcast, I'm showing a picture that my granddaughter drew on the chalkboard.) If this light shines on the triangle, then it casts a shadow, and the shadow is also a triangle. And so we say those two triangles are perspective from a point; the point is the light source. And we say that because the individual corners, the corresponding corners, are collinear with the light source. So A and the shadow of A are collinear with the light. B and the shadow of B are collinear with the light. But it turns out that those shadows, the triangle and its shadow, are also perspective from a line. And what that means is that if you think not about the points on the triangles, but the three lines on the triangles, and you really think of them as lines, not line segments, so going on forever, then the corresponding lines will also intersect along a line. And you can think of that second line, which we call the axis, as the intersection between the plane of glass that's sitting up in the air and the ground. So the interesting thing to me about Desargues’ theorem is that it pretends like it's a theorem about planar geometry, because this theorem holds when the two triangles are both in the same plane, in R² or something, but the best ways of proving it, the most standard ways of proving it, are using essentially perspective, going out into three dimensions and proving it for two completely different planes and then pushing them back down into the regular plane. And so to me, this is a really interesting example of sort of how art informs math rather than the other way around. Or maybe they both inform each other.
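[Editor’s note: for anyone who wants to watch the theorem happen numerically, here is a small Python sketch using homogeneous coordinates, where the line through two points and the intersection of two lines are both cross products. The particular triangle and light source are ours, chosen arbitrarily.]

```python
import numpy as np

def line_through(p, q):   # line joining two points (homogeneous coordinates)
    return np.cross(p, q)

def meet(l, m):           # point where two lines intersect
    return np.cross(l, m)

# a triangle ABC and its shadow A2 B2 C2 cast from a light source O:
# each shadow vertex sits somewhere along the ray from O through its vertex
O = np.array([0.0, 0.0, 1.0])
A, B, C = np.array([1.0, 2.0, 1.0]), np.array([3.0, 1.0, 1.0]), np.array([2.0, 5.0, 1.0])
A2 = O + 2.5 * (A - O)
B2 = O + 1.7 * (B - O)
C2 = O + 3.1 * (C - O)

# corresponding sides meet in three points...
P = meet(line_through(A, B), line_through(A2, B2))
Q = meet(line_through(B, C), line_through(B2, C2))
R = meet(line_through(A, C), line_through(A2, C2))

# ...and Desargues' theorem says those three points lie on one line (the axis),
# so this determinant comes out as zero, up to floating-point noise
print(np.linalg.det(np.vstack([P, Q, R])))
```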
EL: So going back a little bit, when I've looked at Desargues’ theorem before, somehow there's this big conceptual leap to me between perspective from a point and perspective from a line. Perspective from a point seems really easy to think about, and perspective from a line, I just have trouble getting it into my brain.
AC: Yeah, I do think perspective from a point is so much more intuitive. And so the somewhat intuitive way of thinking of this axis is that you can sort of pretend it's a hinge. So if these two triangles will sort of fold on to each other from the hinge—the triangle on the glass and the triangle on the ground can fold along this hinge—then they’re perspective from a line. So if you think about something that's in the real world, a flat thing in the real world, and its mirror image, then those two, it's hard to say whether they're perspective from a point, but the lines in the real world thing and the lines in the mirror will intersect along the line where the mirror hits the ground. And so that's another way of thinking of this axis, sort of three-dimensionally.
KK: So I want to think about this in projective space, which probably isn't correct. Or maybe it is. I don't know. I mean, so these lines are points in projective space. Is this how one might go at this in some other way? I asked the wrong question. I'm sorry.
AC: So that's not exactly the way that I think of it because I think of the line as a line in projective space.
KK: Okay.
AC: And the point is a point in projective space. So the point comes from, you could say, from a one-dimensional line in R-whatever.
KK: Okay.
AC: And so here's one of the interesting things about this theorem and about me loving this theorem. In 2011, one of my coauthors and I wrote a book on the mathematics of perspective art, and we used Euclidean geometry all the way through. We were giving a MathFest mini-course on this and a young mathematician came up to us and said, “I just love how you use projective geometry in art because I learned projective geometry and felt like it had to have something to do with art. And you guys are the ones that explained to me how it does.” And Mark and I turned to each other. We're like, “What kind of geometry?” So neither of us had ever taken a projective geometry class. Neither of us had ever learned any projective geometry. We did not know that it existed. And so this young mathematician ended up changing our lives. We ended up working with her and really learning a bunch of projective geometry in order to come out with our most recent book, which came out last December. And so when you ask questions that get into really deep projective geometry, I'm like, “Ooh, I have to write that one down because that's something else I have to go learn.” So for those of you young mathematicians out there, I just want to say learning new stuff and not knowing stuff is really so much fun! Don't be afraid of starting something new, even if you don't know it all.
EL: And how did you first encounter Desargues’ theorem?
AC: Oh, man, so I first encountered Desargues’ theorem when Fumiko Futamura, this young mathematician, had convinced me I needed to learn it. So I bribed an undergraduate to go through Coxeter’s Projective Geometry with me because it seemed like that was the standard book. And Coxeter is, like, the famous guy in this realm, and he is completely non-intuitive. So I found Desargues’ theorem in there, and I'm like, “I have no idea what this means.” The notation is awful. The diagrams are awful. Everything about this is awful. And so I read through his book trying to say, “What does this have to do with art?” And that was a really fun way to read it. So we just decided Desargues’ theorem is about shadows.
EL: Well, I was wondering. So I remember you have also given a talk about squares that kind of blew my mind, where I guess the thesis of the talk is that all configurations of four points are a square, if you look at them the right way. Is Desargues’ theorem related to that theorem? I feel like when you said the word shadow that is what reminded me of that other talk.
AC: Yeah, thank you. So that's really cool. So most of us know what the fundamental theorem of calculus says. Most of us know what the fundamental theorem of algebra says. The fundamental theorem of projective geometry in one sense really ought to be Desargues’ theorem. So you can think about these triangles, these points, these lines as objects. For mathematicians, we care about verbs. So a verb is the function. So you can think of a perspective mapping as mapping one set of points and lines to another set of points and lines with this particular rule that says that corresponding points have to line up with the sun, which you call the center, and corresponding lines have to line up with the axis, this hinge. But there's other functions that take points to points and lines to lines. So we know in linear algebra, you can do this all the time, and in linear algebra sets of parallel lines go to other sets of parallel lines. But there's other kinds of functions that do this. They're called collineations. So the fundamental theorem of projective geometry says that if you have four points and their images, and you know that points go to points and lines go to lines, then the entire rest of the function is predetermined; we know that.
So Desargues’ theorem says that one kind of collineation is a perspective mapping, right? Just, like, a shadow, or mapping from the floor, this tiled floor, onto your canvas through a window. We know from linear algebra, there are these other affine transformations. And so one really cool theorem that I totally love is if you have something that's not a linear algebra one, that's not an affine transformation, then it's automatically a perspective transformation together with an isometry. So you took a photograph and you moved it. That's this notion that every single thing that you do with four points going to four other points determines a whole function. So yeah, so anytime you have four points connected by four lines, even if they look like a bow tie, or they look like Captain Kirk’s Star Trek logo, it turns out that's actually a weird perspective image of a square moved around somewhere.
EL: And you just have to figure out where you should stand to see it as a square.
AC: Yes, exactly.
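[Editor’s note: the "any four points are secretly a square" claim can be made concrete by solving for the projective transformation (a 3-by-3 homography, found here with the standard direct linear transform) that sends four given points to the corners of a unit square. A Python sketch; the bow-tie coordinates and names are ours.]

```python
import numpy as np

def homography(src, dst):
    """3x3 projective map taking four source points to four destination points."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    return vt[-1].reshape(3, 3)       # null vector of the 8x9 system, reshaped

def apply_map(H, p):
    x, y, w = H @ np.array([p[0], p[1], 1.0])
    return x / w, y / w

bow_tie = [(0, 0), (2, 1), (2, 0), (0, 1)]   # four points joined in a crossing, bow-tie order
square = [(0, 0), (1, 0), (1, 1), (0, 1)]    # where we would like to see them

H = homography(bow_tie, square)
print([tuple(round(c, 6) for c in apply_map(H, p)) for p in bow_tie])
# prints the unit square corners: the bow tie really is a perspective image of a square
```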
KK: Are you guaranteed to be able to—so if it's on the wall, say, would you have to, like, lift it up into a third dimension to be able to see it correctly?
AC: So one of the weird things that happens with a bow tie is that you would imagine the inside of the bow tie has to go to the inside of the square. And that is not actually the way it happens. So let's think about something that's much more familiar to us. Can you map a circle through perspective into other weird shapes, like an ellipse? Well, sure you can. So imagine that you've got a lampshade, a circular lampshade, and the shadow that it projects onto the wall is actually a hyperbola. We know that. And the light from the inside of the shade goes to the part of the hyperbola that goes off towards infinity. Well, if you have the bow tie, think about the area outside of the “x” as almost a hyperbola. So this is when it would be so wonderful if I could actually draw pictures, but it's a podcast. On the bow tie, there are two sides that are parallel to each other, and then there's this weird “x” in the middle. The parallel sides, extend them out towards infinity from the bow tie. Right? That turns out to be where the square goes. So if you had a square lampshade, it would cast a shadow that would look like this outside of the bow tie, the same way that a circular lampshade casts a shadow that looks like the outside of the hyperbola, the U shape of the hyperbola.
KK: My desk lamp is a rectangle, so I’m trying to see if it’s casting the right shadow here.
EL: Yeah, some experiments you can do. I feel like it's this “expand your mind on what a square is” kind of idea.
KK: Got to get rid of those old ideas, man.
EL: Yeah. I know that we traveled a little bit from Desargues’ theorem and I want to give you a chance to circle back and for me
KK: Or square back. Sorry.
EL: Or square back, or projectively bow tie back to Desargues’ theorem, and I guess what do you love so much about it?
AC: What do I love so much about Desargues’ theorem? One of the things that I love is that it really tightly connects mathematics informing art and art informing mathematics. So Desargues himself, we don't know if he actually wrote this up and published it. We don't have a copy of his original manuscript. We do have something that came out from one of his, sort of acolytes, one of his followers, a guy named Bosse. And if you look at Bosse, okay, to draw Desargues’ diagram, you need 10 points: the three on the first triangle, the three on the second triangle, the sun, that gives you seven, and then the three along the axis. You also need 10 lines: the three on the triangle, the three on the other triangle, the three light rays, that gives you nine, and then the axis. When Bosse first published his diagram, his diagram was incomprehensible. It had 20 lines and 14 points, and it was just really a mess. And it was hard to even figure out where the heck the triangles were.
KK: Yeah, I don’t see them.
AC: And he ended up proving this not using sort of standard geometry, but using numerical stuff called cross-ratios. But the proofs that make the most sense, that are convincing, are proofs that allow us to think about things in three dimensions and use art. So that's one of the cool things, is that actually drawing, if you go and you shade in Bosse’s diagram in a cool artistic way, all of a sudden it sort of pops into 3-d and you can see it, but his original diagram, not so much. The same is true of a lot of different proofs. If you try to imagine them as three-dimensional, if you draw them as three-dimensional, the proof becomes more obvious.
But also Desargues’ theorem is actually useful for artists because if you want to draw the shadow of something, if you want to draw the shadow of a kite, if you want to draw a reflection: shadows and reflections are projections, so projective geometry. And how do you know how to draw this? You have to use the fact that the shadow or the reflection, or this projective image, however you've made it, is perspective from a point and perspective from a line. So you're constantly using Desargues’ theorem to draw these images of images within your image. It just becomes so incredibly useful.
KK: My wife's an artist, but I can't imagine that she would use this. I mean, if you walked up to a typical artist, are they going to say, “Oh yeah, I use Desargues’ theorem all the time?” Or is it just a sort of an intuitive thing where people who are very good at drawing in perspective, can just kind of naturally draw it that way?
AC: Oh, yeah. So the truth is that Desargues’ theorem has really only pretty much been used by mathematicians, and occasionally misused by mathematicians. There's a description in a book by a guy named Dan Pedoe of using Desargues’ theorem to draw the image of a pentagon on the top of a square, and he just completely gets it wrong. And Mark and I think that's hilarious. This book has been reproduced zillions of times. Anyway.
So no, actually artists have this incredible skill. One time, we had asked mathematicians and artists at one of our workshops to try to divide the image of a flag into three equal pieces perspectively. So imagine you're drawing the Italian flag going back into the distance, right? How do you do this? In the real world, the three bars are evenly spaced, but in perspective, they're not. And the artists stood up and said, “Well, you just eyeball it, and you just put them here.” And I was horrified. This is not a proof. This is not correct. My colleague Mark said, “Okay, this is good. But for those of us who can't just eyeball it, let's see if we can come up with a construction.” And eventually somebody did. They came up with a really cool geometric construction. And Mark had them put this up over the artist’s solution and it was spot on. As a mathematician, I decided to go take an art class. And one of the things we were supposed to do was to draw cans. And so the top of a can is circular, and so the image was going to be an ellipse. And I could not get the proportions right. My ellipses were so awful. So I would say that Desargues’ theorem is an incredibly useful tool for drawing things that look very accurate for people who do not know art, but who are good at math.
KK: Right.
AC: That’s a really long answer to your question. Yeah, artists don't tend to use it, but it really is a useful thing for drawing things that look correct.
KK: Cool. All right.
EL: And you've incorporated this into a class that you teach to help people, I don't know if the purpose of the class is more math or more perspective drawing, but it seems like an interesting mix.
AC: Yeah, we have a course called Perspective and Projective Geometry. We actually have a book that's come out that has Desargues’ theorem right on the cover up there. And it's aimed at the intro to proofs level. So it really teaches students to make conjectures about what they're seeing in the world and then to try to prove those conjectures, but also to try to draw. So it's actually sort of an applied course. So the students, when we introduce them to Desargues’ theorem, they're actually drawing the shadow of the letter A, and then discovering Desargues’ theorem, and then proving it using many colors and, yeah, lots of cool lines.
It's so much fun! It's a course that really attracts a very unusual swath of students. They all are students who love math, and who are art-curious. Almost none of them are good at art. But I tend to get more women than men in the class. I have often had my class be highly diverse in terms of races and ethnicities. And so for me, it's a fun class. I didn't do it just for the sake of promoting diversity in the math major, but it sort of unintentionally has done that. And that's a really good feeling.
KK: Very cool. So another thing we like to do on this podcast is ask our guests to pair their theorem with something. So what pairs well with Desargues’ theorem?
AC: Yeah, so I think I already hinted at this, so anything that you can eat with chopsticks goes really well with Desargues’ theorem because chopsticks allow you to have wonderful food and do math at the same time, and what could be better?
KK: So basically, anything you can eat, then, right, you can eat anything with chopsticks?
AC: Soup is a little bit tricky, but yes.
KK: But you drink the soup, right? They give you the chopsticks, you’ve had ramen, right? There's the chopsticks for the noodles.
AC: Yes. Exactly.
EL: Do you have a favorite food to eat with chopsticks?
AC: Oh my goodness. Pretty much everything. I was just realizing ice cream is not so easy with chopsticks.
EL: Yeah.
AC: I think yesterday was national ice cream day. Yeah, I don't know. I take my chopsticks with me in my planner bag, and a spoon. And so when I go to restaurants, if they try to give me plastic things, I use my chopsticks. So basically, yes, anything I can eat with chopsticks, I will eat with chopsticks. If I can't, I'll use my spoon.
EL: Nice.
KK: We’re getting Thai takeout tonight. Now I'm really excited.
AC: I’m coming to your house.
KK: Sure, come on down. Although you know with all the COVID, I don't think Florida is really a place you want to be coming these days.
EL: So I guess this would be a good time to open the floor to questions. So Brian, I was thinking that I would be able to keep an eye on it, and I totally couldn't. So I'm glad that you were keeping an eye on it. So do you have any questions that our listeners would like to ask Annalisa?
Brian Katz: I’ve noticed three so far. One is from Joshua Holden: would Desargues’ theorem be useful for computer graphics?
AC: That’s a really good question. If I knew anything about computer graphics, I would be able to answer that better. I do know that my students who have gone on into computer engineering tell me that the course that I offered on projective geometry was one of their most useful courses, that this idea of ray tracing was super, super helpful. So I don't know if Desargues’ theorem itself is specifically useful, but the idea of projective geometry is certainly how we understand the world through videos.
BK: We got a request from Doug Birbrower asking for you to hold up the line drawing while I ask the next one. I was wondering, so when we're talking about triangles, we have these vertices that are special points. How does this idea translate when you're talking about, say, shadows of more complicated objects that might be smooth? You talked a little about circles, but is there something special that happens when you generalize beyond polygons?
AC: One of the things that makes triangles really awesome is the same reason why triangular stools are so useful: they're always stable, right? Whereas a four-legged stool can wobble. If you try to draw the perspective image of an object with four points, like a kite, it's really easy to make it be perspective from the sun without being perspective from a line. And if you do something like that, it'll look like maybe the kite is planar, but the shadow is curved, which might make sense on the ground. So in some ways, it's saying triangles really determine planes. Yeah, the question of drawing other curves is really interesting because of how you do or don't define curves in projective geometry. So one way you could think of a curve is as a collection of points. You could also think of it as a collection of tangent lines. And so I think a way to generalize Desargues’ theorem to those would be to be talking about those collections of points and those collections of tangent lines.
BK: And then the third one that got some answers in the chat was: I have a sense that, like, for parallel things, when they're perspective from a point, that means the point’s at infinity when we're talking about projective geometry. Is there geometric intuition about what it means for the line, perspective from a line, for that line to be at infinity? And TJ suggested that it was that the objects are translations of each other.
AC: So if the line, the axis, is at infinity, then either you could think of this as being a translation, or you can think of it as being a dilation. And so it's a translation if both the axis where the two triangles meet is at infinity and the center, that is, how you shine from one to another, is also off at infinity. And it's a dilation if the axis is off at infinity, but the center is what we call an ordinary point.
KK: This is new for us, having a Q and A. It's usually just the three of us, you know, me and Evelyn and whoever we're interviewing, but this is fun. I like this interaction.
EL: Yeah, I like that. And people have good questions. Yeah. Great. Thanks. Are there any more questions from the chat that we want to get to? Okay, looks like I'm seeing no. So I think this will sort of wrap up the…oh, Brian. Yeah.
BK: This one just appeared: Do cylindrical polar coordinates throw any light on this?
AC: Oh, so I was just about to say to everybody, “Thank you so much for asking me questions that I actually know the answers to!” And this one, I have no idea. I don't know. I don't know anything about cylindrical polar coordinates. Sorry. Now I'm going to write that one down and go check it out.
EL: But we can all appreciate the “throwing light” phrase of the question. That was very well done. Thank you.
KK: Clever.
EL: So, to wrap up the podcast portion of this, or the episode with Annalisa portion of this, we will have show notes that are available. Our podcast listeners probably know where to find those at Kevin's website. And those will include a link to your website and a link to the books that you have. Do you want to say the titles of the books that you've written that people might be interested in?
AC: So the first one, the one from 2011, is called Viewpoints, with the subtitle “Mathematical Perspective and Fractal Geometry in Art,” and that's suitable for, like, a first-year seminar in math and art. So students don't need to really know anything at all about mathematics. And then the other one is called Perspective and Projective Geometry, and it came out in 2019.
EL: Yeah, so thank you for joining us, Annalisa. And for doing it in this different fun format.
AC: I’m really flattered that you invited me to do this. Yeah, it's been so much fun trying to think about how to do this without drawing gazillions of pictures. I appreciate that.
EL: Yeah.
KK: Thanks so much.
We were delighted to have a crossover event with Talk Math With Your Friends, an online math seminar that runs on Thursdays at 4 pm Eastern time. You can watch a video of this episode, which includes a collection of "flash favorite" theorems from the audience, here. Our guest for this episode was Annalisa Crannell from Franklin and Marshall College, who talked about Desargues' theorem. Below are some links you might find handy after listening to the episode.
Crannell's academic website
Her collaborator Fumiko Futamura's website
Desargues' theorem on Wikipedia, which includes some helpful diagrams
The Image of a Square, a paper about the theorem that every quadrangle is a square if you look at it the right way. (Also available from Futamura's website.)
Viewpoints: Mathematical Perspective and Fractal Geometry in Art by Crannell and Mark Frantz
Perspective and Projective Geometry by Crannell, Frantz, and Futamura
During the episode, Crannell shared Bosse's original diagram for proving Desargues' theorem. It is here. Below is a version of the diagram colored in, making the triangles a little easier to see.
Evelyn Lamb: Hello, My Favorite Theorem listeners. This is Evelyn. Before we get to the episode, I wanted to let you know about a very special live virtual My Favorite Theorem taping. If you are listening to this episode before July 16, 2020, you’re in luck because you can join us. We will be recording an episode of the podcast on July 16 at 4 pm Eastern time as part of the Talk Math With Your Friends virtual seminar. Join us and our guest Annalisa Crannell to gush over triangles and Desargues’s theorem. You can find information about how to join us on the My Favorite Theorem twitter timeline, on the show notes for this episode at kpknudson.com, or go straight to the source: sites.google.com/southalabama.edu/tmwyf. That is, of course, for “talk math with your friends.” We hope to see you there!
[intro music]
Hello and welcome to my favorite theorem, the podcast that will not give you coronavirus…like every podcast, because they are podcasts. Just don't listen to it within six feet of anybody, and you'll be safe. I'm one of your hosts, Evelyn Lamb. I'm a freelance math and science writer in Salt Lake City, Utah. And this is your other host.
Kevin Knudson: Hi. I’m Kevin Knudson, professor of mathematics at the University of Florida. So if our listeners haven't figured out by now, we are recording this during peak COVID-19…I don’t want to use hysteria, but concern.
EL: Yeah, well, we'll see if it’s peak concern or not. I feel like I could be more concerned.
KK: I’m not personally that concerned, but being chair of a large department where the provost has suddenly said, “Yeah, you should think about getting all of your courses online.” Like all 8000 students taking our courses could be online anytime now… It's been a busy day for me. So I'm happy to be able to talk math a little bit.
EL: Yeah, you know, normally my job where I work by myself in my basement all day would be perfect for this, but I do have some international travel plans. So we'll see what happens with that.
KK: Good luck.
EL: But luckily, it does not impact video conferencing.
KK: That’s right.
EL: So yeah, we are very happy today to be chatting with Belin Tsinnajinnie. Hi, will you introduce yourself?
Belin Tsinnajinnie: Yes, hi. Yá’át’ééh. Shí éí Belin Tsinnajinnie yinishyé. Filipino nishłį́. Táchii’nii báshishchíín. Filipino dashicheii. Tsi'naajínii dashinalí. Hi, everyone. Hi, Evelyn. Hi, Kevin. My name is Belin Tsinnajinnie. I'm a full time faculty professor of mathematics at Santa Fe Community College in Santa Fe, New Mexico. I’m really excited to join you for today's podcast.
EL: Yeah, I'm always excited to talk with someone else in the mountain time zone because it's like, one less time zone conversion I have to do. We're the smallest, I mean, I guess the least populated of the four major US time zones, and so it's a little rare.
BT: Rare for the best timezone.
EL: Yeah, most elevated timezone, probably. Yeah, Santa Fe is just beautiful. I'm sure it's wonderful this time of year. I've only been there in the fall.
BT: Yeah, we're transitioning from our cold weather to weather where we can start using our sweaters and shorts if we want to. We're very excited for the warmer weather we had. We're always monitoring the snowfall that we get, and we had an okay to decent snowfall, and it was cold enough that we're looking forward to warm months now.
EL: Yeah, Salt Lake is kind of the same. We had kind of a warm February, but we had a few big snow dumps earlier. So tell us a little bit about yourself. Like, where are you from? How did you get here?
BT: Yeah. I am Navajo and Filipino. I introduced myself with the traditional greeting. My mother is Filipino, my father is Navajo, and I grew up here in New Mexico, in Na’Neelzhiin, New Mexico, which is over the Jemez mountains here in Santa Fe. I went to high school, elementary school, college here in New Mexico. I went to high school here in Santa Fe. I got my undergraduate degree from the University of New Mexico, and I ventured all the way out over to the next state over, to University of Arizona, to get my graduate degree. While I was over there, I got married and started a family with my wife. We’re both from New Mexico, and one of our biggest goals and dreams was to come back to New Mexico and live here and raise our families where our families are from and where we're from. And when the opportunity presented itself to take a position at the Institute of American Indian Arts here in Santa Fe, it's a tribal college serving indigenous communities from all over the nation and North America, I wanted to take that. I feel very blessed to have been able to work for eight years at a tribal college. And then an opportunity came to serve a broader Santa Fe, New Mexico community, where I also serve communities that are near and dear to my heart, where I've been here for over 30 years. And I'm really excited to have this opportunity to serve my community in a community college setting.
So, going into academia, and going into mathematics, it's not necessarily a typical track that a lot of people have opportunities to take on, but I feel very blessed to be doing math that I love serving communities that I love, and being able to raise my families around the communities that I love to. So I feel like you have a special kind of buy-in by engaging in a career that serves my communities and communities that are going to raise my families as well, too.
KK: That’s great.
EL: Nice. So I see over your shoulder a little bit of a Sierpinski triangle. Is that related to the kind of math you like to think about? Or is it just pretty?
BT: Yeah. One, it’s pretty. When I was at the Institute of American Indian Arts, most of the students there, they're there for art. They come from Native communities, and they're not there to do mathematics, necessarily. So part of my excitement was to think about ways to broaden the ideas of mathematics and to build off of their creative strengths. And that piece is a piece that one of my students did. They did their own take on a Sierpinski triangle. I have a few of those items in my office where they integrated visual arts and integrated creative aspects of mathematics from cultural aspects as well, too.
KK: So I always think of Native American art as being kind of geometric in nature. It feels that way to me, I mean, at least the limited bit that I've seen. Is that sort of generally true?
BT: The thing about Native art is that Native cultures are diverse in and of themselves too. So there are over 500 federally-recognized tribes, and in New Mexico alone there are over 20 tribes, 20 nations, and each of them have their own notions of geometry and their own notions of the kinds of mathematics that they engage in with respect to the place that their cultures, their identities, and their languages are rooted in. So, yeah, a lot of it is visual, and geometric, because that's what we see. But there's also many, I imagine, that we don't see, that's embedded in the languages and the practices. Part of my curiosity is seeing how we can recognize what we do and what our traditions are, how we can recognize that as mathematical. And it might be mathematical in the sense that we, as professional mathematicians, might not be accustomed to seeing or experiencing. And, you know, I'm still trying to understand my own cultures, languages and traditions too. So I know mathematics more than a lot of how I experience my own culture. So on one hand, I'm seeing things from a traditional mathematician brought through academia, but I’m also trying to understand things through the lens of someone who's trying to better understand my cultures and histories.
EL: So what is your favorite theorem?
BT: The theorem I chose today was Arrow’s impossibility theorem.
KK: Nice.
EL: Great. And this will be a timely one, at least for the US, because it will be airing—I mean, I guess the past two years basically have been part of the 2020 presidential season—but really in the thick of it. So yeah, tell us a little bit about what this is.
BT: So I'll say more about why I'm kind of drawn to this theorem. So it's a theorem that basically says that there is no perfect ranked voting system, or no perfect way of choosing a winner and, by extension, for me, it kind of brings up conversations about how democracy itself isn’t perfect and that it's really hard to say that a democratic system can accurately represent the will of the people. And I was drawn to this theorem because as I started thinking about the cultural aspects of mathematics and mathematics education, I'm also interested in the power dynamics and the political dynamics and the sociopolitical aspects of mathematics and math education. And a lot of what's out there and written about math education talks about using quantitative reasoning and quantitative analysis and statistical analysis to really engage in critical dialogues and examining inequities and injustices in the world. And all of that is rich and engaging and needed and necessary ways that we can use mathematics to view the world. But the mathematician part of me still misses the definition-proof-lemma aspect of engaging in mathematics. So this theorem kind of represents a way of engaging in politics through some of the theorem-definition-lemma aspects of it. So the way that I understand Arrow’s theorem, and I mentioned this to you before, that I don't know the ins and outs of this theorem, I just really like the ramifications of it and the discussions that it generates. But it basically starts with the idea that we can describe functions where we're considering a way of choosing a winner of an election from a list of candidates. And we're taking each voter’s ranked preference of those candidates. So one thing that we're assuming is that each voter can rank a list of n candidates, A1 through An, and if everyone can rank their preferences, then a voting system would be a way to take all of those, those ranks, or those ballots, and choose an overall ranking that is supposed to indicate an overall preference for the group of voters.
And what Arrow’s impossibility theorem talks about is that we have values, and we want to describe what a good voting system is. So we want to describe a list of criteria that shows that we have a good voting system. So the list of criteria involved in Arrow’s impossibility theorem talks about 1) an unrestricted domain; 2) social ordering; 3) weak Pareto, or unanimity; 4) a non-dictatorship; and 5) independence of irrelevant alternatives. And I'll go through what each one means. So basically, an unrestricted domain means that we want a voting system or a way of choosing a winner to be able to take any set of ballots with any number of candidates and be able to give some overall ordering, that these functions are well-defined. So the unanimity condition talks about if everyone prefers one candidate over another, where every single voter has one candidate ranked over another candidate, then the overall function that turns the ballots into an overall social ordering should indicate that that candidate is preferred over the other candidate. And we also don't want a dictatorship, right? And the idea of that mathematically defined is that we don't want one voter deciding exclusively what the overall social ordering is of the candidates. And so we don't want a dictatorship. And we want independence of irrelevant alternatives, and what a lot of people think about as an example of that is a “spoiler” candidate or a third party candidate, where even if everyone prefers one candidate over another, a change in the order of a third or other candidate, without disrupting that other order, shouldn't change the overall outcome of an election. They relate that to how sometimes third party candidates can be a spoiler for an election even though overall, it looks like a plurality of voters might prefer one candidate over another. But certain voting systems can have that characteristic where a third or other set of candidates can disrupt the outcome of that election.
KK: I’ve never heard of that.
EL: Wouldn’t it be terrible if that ever happened? [Note: These statements were delivered somewhat sarcastically, presumably referring to the 2000 Presidential election in the US]
BT: Right, right, right. So what Arrow’s impossibility theorem says is that those all may be desired characteristics of a voting system or a social choice function, but that it's impossible to have all of those criteria in a voting system. So the general outline of the proof is that if we have a system that has the unanimity criterion, and an independence of irrelevant alternatives, that if we have those two criteria in a social choice function, then the voting system must be a dictatorship. So if we add those assumptions, then we can go through and show that there is a voter whose sole ordering determines the overall ordering of the voting group, of the voters.
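(To make the independence-of-irrelevant-alternatives condition concrete, here is a small Python sketch, not from the conversation. It uses the Borda count as the ranked system and a made-up five-voter profile: nobody changes their mind about A versus B, only the "irrelevant" candidate C moves, and yet the societal A-versus-B outcome flips.)

def borda_scores(ballots):
    # Each ballot lists candidates from most to least preferred; a candidate
    # gets n-1 points for a first place, n-2 for second, and so on.
    scores = {}
    for ballot in ballots:
        n = len(ballot)
        for place, candidate in enumerate(ballot):
            scores[candidate] = scores.get(candidate, 0) + (n - 1 - place)
    return scores

profile_1 = 3 * [("A", "B", "C")] + 2 * [("B", "C", "A")]
profile_2 = 3 * [("A", "C", "B")] + 2 * [("B", "C", "A")]  # only C moved

print(borda_scores(profile_1))  # A: 6, B: 7, C: 2 -- society ranks B above A
print(borda_scores(profile_2))  # A: 6, B: 4, C: 5 -- now society ranks A above B

Arrow's theorem says this kind of failure isn't a quirk of the Borda count: with three or more candidates, no ranked system can satisfy all the criteria listed above at once.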
KK: That’s how I always learned this theorem, is that you set down these minimal criteria, and the only thing that works is a dictatorship, right?
BT: Right.
KK: These criteria are completely reasonable, right?
EL: You can’t have it all.
BT: Right, right. They're not outlandish. They're what we might think of as things that we might value in a democracy. And, of course, these, these things don't perfectly replicate what's going on in the real world, but the outcome is still fascinating to me that mathematically, we can show that we can’t have all these sets of what we think are reasonable criteria in a voting system.
KK: Recently, maybe in the last two years, I’ve been getting interested in gerrymandering questions. And there's a similar sort of theorem that got proved in the last year or two, which essentially says that, you know, people don't like these sort of weird-shaped districts, they think that's bad somehow, because it's unpleasing to the eye. But apparently — and there’s also this idea of the efficiency gap, where you sort of want to minimize wastage. So if you laid out some simple criteria, like you want compact districts, and you want the efficiency gap minimized, then the theorem is you have to have weird-shaped districts, right? So it’s sort of an impossibility theorem in that way too. So these kinds of ideas propagate through all of these kinds of systems.
EL: The real world is impossible.
BT: Right. And even by extension, you know, in many voting theory classes, there's a districting problem, which relates to a good metric for measuring compactness. But then there's the apportionment issue as well, that it's very hard, if not impossible, to find a fair way of apportioning a whole number of representatives that's proportionate to each state's population, relative to the overall population of the country.
KK: Yeah.
BT: And so yeah, this is one of my favorite theorems because it kind of opens the door to those conversations and gives me another way of thinking about when representatives, or people who talk about the outcomes of elections, say things like “the people have spoken,” “this is the will of the people,” “we have a mandate now,” that I think these outcomes really complicate those claims and should really give us a critical eye and a critical way of really discussing what the will of the people is, and how those discourses really perpetuate the idea that voting, and voting alone, can accurately indicate the will of the people and that that's to be accepted, and that we move forward with them.
EL: Yeah. So have you gotten to use Arrow’s paradox or any of these other things in classes?
BT: When I was at the Institute of American Indian Arts, I tried to develop a voting theory class. And we got into that and talked about that. And it interested me too because the voting system on the Navajo Nation, we vote for our own council and our own presidents too, and I use this as a way to think about how we have a certain candidate in Navajo Nation who's always running and is seemingly unpopular. And the voting system for president in Navajo Nation is that we have that two-party runoff system where we vote for our top choices and that the top two vote getters participate in a general runoff election. And for a few consecutive elections, this one candidate that is seemingly unpopular just gets enough votes to get into the top two for the runoff election and then gets overwhelmingly outvoted in the general election. So I think for me it was a fascinating way to engage in these kind of mathematical ideas, or mathematical discourses, while talking about some of the real outcomes that are going on in our nations, in our communities, in our efforts towards our self-determination and sovereignty. So I wanted to tie in something that's mathematical, where we can talk about mathematical discussions, with issues that are contemporary and real to our, our peoples.
EL: Something I always wonder about is, you know, we've got a theorem that says voting is impossible — or it says that, you know, it's impossible to actually say, like, this is the will of the people. But do you know if much research has been done about, like, real sets of choices that people have and what voting systems might be — do they really experience this paradox, or in the real world, do they have these strange orders of preferences that confound ranked choice voting rarely?
BT: I imagine that there is research out there and there are people who have engaged in it much more than I have. But something that makes me curious are some of the underlying assumptions that go into Arrow’s theorem and what has been mathematized as necessary criteria, and the values that those might be representative of for certain groups of people. For example, I guess you could call it an axiom of many of these voting theory theorems in mathematics is that one voter is one vote, and you know, there are systems where that might not be true. But one of your criteria is one person, one vote. And that one person votes for their own interests and their own interest only, and there are extensions of these criteria where if we have other non-ranked voting systems, then it can help.
But let me backtrack: one of the outcomes of Arrow’s theorem is that when people know that it's impossible for the outcome to really represent the will of the people, then it could result in people voting for candidates other than their first option, because they know that voting for their true first option could swing the election in favor of something that's not of their desire. So we have people voting against their own actual first choices. And that happens with ranked-choice voting, and some of the extensions of these conversations have been about voting systems that don't require ranked choice. So perhaps giving each candidate a rating, and it helps alleviate some of those issues with ranked-choice voting, and it helps alleviate those issues of third-party candidates, where you can still give your candidate five stars out of five, like an Amazon review, but still really give perhaps a better indication of your true view of the candidates, rather than a linear ranking. So it kind of reveals that there are some issues with just linear ranking of candidates, when the way that we think about and value and understand our preference of candidates might be much more complex than a simple 1 through n ranking. But kind of going back to what I think this could mean for communities and other societal perspectives: in many democracies, that one vote-one choice is kind of an assumption, that that's what we want. But for many communities, perhaps we want to vote for something that does benefit an overall view of the people. What would that look like as a criterion if we allowed for something like that? What would we do if we allowed criteria, or embedded in our definitions, some way of evaluating how, when we register a vote, we're all not only taking into account our own individual interests, but the interests of our land, of our communities, of our nations? So those are cultural values that are not assumed in the current conversations, but for many communities in many Indigenous nations, those are some things that are real and necessary to think about. What would that look like if we expanded those and then were critical of those assumptions that are underlying these current conversations on voting theory in mathematics?
EL: So one of the other things we do on this podcast is we ask our guests to pair their theorem with something. What have you chosen to pair with this theorem?
BT: I have a ranking of three pairings.
EL: Great. I’m so glad! Excellent.
BT: So I have 1-2-3. So I'll give my third choice first. The third out of three pairings: green chili cheeseburgers.
EL: Okay.
BT: And in New Mexico, everyone has their favorite place to get a green chili cheeseburger, and we take pride in our green chili, and every year any contest about the green chili cheeseburger and who has the best green chili cheeseburger causes some conversation, and it causes some controversy and rich discussions over who has the best green chili cheeseburger. So, I think about that as a food that has a lot of controversy as to who has the best green chili cheeseburgers in New Mexico. The second pairing is another food item, the Navajo taco.
EL: Oh yeah. Those are good.
KK: What’s in those?
BT: So, well, what we call a Navajo taco is a piece of frybread with toppings often involving meat and cheese, with lettuce and tomato and maybe some chili. And this is another controversial discussion in Native communities because we call it a Navajo taco, but it's not just Navajos who make this kind of dish, because many communities make their own versions of frybread. And so some places call it Indian tacos, and there's a lot of controversy over which community first introduced the Navajo taco and why some people call it the Navajo taco and others call it Indian tacos. And so in Native communities, there's a lot of controversy over what constitutes the best version of this dish. And the other reason I'm pairing that is the frybread itself comes from a time where it was created out of necessity for survival, where the flour that had been rationed out to our communities was rancid, and in order to actually make it edible, it was deep fried. And so on one hand, it represents a point in time where our communities were just fighting for survival, and it also represents their ingenuity, and became a part of our everyday practice. But at the same time, it's a reminder that that was something that was imposed on our communities, much like voting systems nowadays. It's an act of our survival and our sovereignty, the voting systems that we have in place. But I think there's also need to come back and have other conversations about what's good for our communities.
And the first-ranked pairing is mathematics itself with Arrow’s theorem. So we have a lot of conversations about how mathematics is universal, mathematics is for everyone, that everyone can do mathematics, and that everyone can participate in mathematics. But for many people, from equity, justice and diversity perspectives, we want to be critical about who has access to mathematics, whose ideas of mathematics are represented in our mainstream ways of thinking about mathematics. Just like we think about democracy as being the will of the people and being a representation of all the people, Arrow’s theorem is kind of a critique of that notion of democracy. And I think in mathematics, we can take a lesson from this theorem and think about what we mean when we say mathematics is universal or mathematics is for everyone or mathematics is for all, when this term itself is kind of a democratic take on mathematics, that everyone can do mathematics, and everyone can be an equal participant in mathematics. But, you know, we think the same thing about democracy, and this theorem says that there are some issues with that. So I'm interested in seeing how we can take this lesson and think about how we can be more critical about the ways we think about mathematics itself.
EL: Yeah, well, you know, Arrow’s paradox is not about this, but we have issues with people who can't vote for various reasons and should be able to vote, or places that shut down polling places in certain communities to make it so people have to stand in line for six hours. Which is, you know, not easy to do if you've got a job that you need to get to. So yeah, there's so much richness. I love that you paired a ranking of three things with this. And now I feel like we should also vote on these, but I just don't think it's fair for one of them to be math. I mean, you’ve got two mathematicians here, three mathematicians here in total. I think it's going to be a blowout.
KK: No, tacos win every time, don’t they?
EL: I should have known.
KK: This is a really good pairing. I like this a lot.
EL: Yeah.
KK: We also like to give our guests a chance if they want to plug anything. Where can we find you online for example, or can we?
BT: Probably the best way to find me is on Twitter. My Twitter handle is @lobowithacause.
EL: Yeah. You'll see him popping up everywhere. Is that the mascot for the University of New Mexico?
KK: It is, the lobos.
EL: And I believe a talk that you gave at the Joint Math Meetings, is there video of that available somewhere?
BT: I was told that there would be video. I haven't found it yet. There was a video recorded. And I'll follow up with that and see that it gets out. I'll make an announcement on Twitter.
KK: I’ve noticed those have been trickling out kind of slowly. It'll show up, I think.
EL: Yeah, we'll try to dig it up by the time we put the show notes together so people can watch that. Unfortunately, I was still making my way to Denver when that happened, so I didn't get to see it. So selfishly I very much want to see it. I heard really good things about it. So thank you so much for coming on here and giving us a lot to think about.
BT: Oh, it was an honor. And you know, I love your podcasts.
KK: Thanks so much.
BT: I love what you’re doing. I had fun listening to your other podcasts in preparation for this and loved hearing Henry Fowler, and shout out to Moon Duchin too. I heard that you, Kevin, went to that gerrymandering workshop in Boston a few years ago. I was there too. And I had a great week there.
EL: Oh, nice.
KK: That was a big workshop. There was no way to meet everybody. Yeah,
EL: Thanks for joining us, and have a good rest of your day.
BT: Thank you. Thank you. You too.
In this episode of the podcast, we were happy to talk with Belin Tsinnajinnie, a professor at Santa Fe Community College, about Arrow's impossibility theorem, which basically says that a perfect voting system is impossible. Below are some links you might enjoy as you listen to the episode.
Arrow's impossibility theorem
Cardinal voting, an alternative to voting systems that are based on ranking the options
Our episode with Henry Fowler, who was at the time on the faculty of Diné College and is now at Navajo Technical University
Our episode with Moon Duchin, who studies gerrymandering, among other things
Belin Tsinnajinnie on Twitter
Evelyn Lamb: Hello and welcome to my favorite theorem, a math podcast. I'm one of your hosts Evelyn Lamb. I'm a freelance math and science writer in Salt Lake City, Utah. And this is your other host.
Kevin Knudson: Hi. I’m Kevin Knudson, professor of mathematics at the University of Florida. How's it going?
EL: All right, it is a bright sunny winter day today, so I really like—I mean, I'm from Texas originally, so I'm not big on winter in general, but if winter has to exist, sunny winter is better than cloudy winter.
KK: Sure, sunny winter is great. I mean, it's a sunny winter day in Florida, too, which today means it is currently, according to my watch, 81 degrees.
EL: Oh, great. Yeah.
KK: Sorry to rub it in.
EL: Fantastic. It is a bit cooler than that here.
KK: I’d imagine so.
EL: So yeah. Anything new with you?
KK: No, no. Well, actually, I might be going to visit my son in a couple of weeks because he's studying music composition, right? And the orchestra at his university is going to play one of his pieces, and so I'm kind of excited about that.
EL: Very exciting! Yeah, that's awesome.
KK: Yeah, but that's about it. Otherwise, you know, just dealing with downed trees in the neighborhood. Not in our yard, luckily, but yeah, stuff like that. That's it.
EL: Yeah. Well, we are very happy today to have Rebecca Garcia as a guest. Hi, Rebecca. How are you?
Rebecca Garcia: Hi, Evelyn. Håfa ådai, I should say, håfa ådai, Evelyn, and håfa ådai, Kevin. Thanks for having me on the program.
EL: Okay, and what—håfa ådai, did you say?
RG: Yeah, that's right. That's how we, that's our greeting in Chamorro.
EL: Okay, so you are originally from Guam, and is Chamorro the name of a language or the name of a group of people, or I guess, both?
RG: It’s both actually. Yes. That's right. And so Chamorro is the native language in the island. But people there speak English mostly, and as far as I'm able to tell I think I'm the first Chamorro PhD in pure mathematics.
EL: Well, you’re definitely the first Chamorro guest on our show. I think the first Pacific Island guest also.
KK: I think that's correct. Yeah.
EL: So yeah, how did you—so you currently are not in Guam. You actually live in Texas, right?
RG: I do. I'm a professor at Sam Houston State University, which is in Huntsville, Texas, north of Houston. And I'm also one of five co-directors of the MSRI undergraduate program.
EL: Oh, nice. That seems like it is a great program. So how did you get from Guam to Huntsville?
RG: Oh my goodness. Wow. That is a long, long journey.
KK: Literally.
RG: I started out as an undergraduate at Loyola Marymount University, and I had the thought of becoming a medical doctor. And so I thought we were supposed to do some, you know, life science or, you know, chemistry or biology or something along those lines. And so I started out as one of those majors and had to take calculus and fell in love with calculus and the professors in the math department. And I was drawn to mathematics. And that's how I ended up on the mathematics side. And one of the things that I learned in my undergraduate career was these really crazy math facts about the rational numbers. And so that's one of the things that interested me in mathematics, was just the different types of infinities, the concept of countable, uncountable, those sorts of things.
EL: Yeah, those those seem to be the kinds of facts that draw a lot of people into this rich world of creativity and math that you might not initially think of as related to math when you're going through school. So I think this brings us to your favorite theorem, or at least the favorite theorem you want to talk about today.
KK: Sounds like it.
EL: Yeah, so what’s that?
RG: Yeah. So it’s more, I would say, more of a fun fact of mathematics that the rationals first of all are countable, meaning they are in one-to-one correspondence with the natural numbers. And so you can kind of, you know, label them, there's a first one and a second one in some way, not necessarily in the obvious way. But then, at the same time, they are dense in the real numbers. So that to me, just blows my mind, that between any two real numbers, there's a rational number.
EL: And yeah, so you can't like take a little chunk of the real line and miss all the rational numbers.
RG: That’s right.
KK: Right.
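(The density fact has a short standard argument, not spelled out in the conversation: given reals $a < b$, choose an integer $q > 1/(b-a)$, so that $1/q < b - a$, and let $p = \lfloor qa \rfloor + 1$. Then $qa < p \le qa + 1 < qa + q(b-a) = qb$, and dividing by $q$ gives $a < p/q < b$.)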
RG: That to me just blows my mind. Because—and then you just sort of start, you know, your brain just starts messing with you, you know, between zero and one there are infinitely many rational numbers and yet they're still countable. And it just, it just starts to mess with your mind a little bit. Right?
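(And the countability claim can be made just as concrete. A minimal Python sketch, not from the conversation, using the Calkin-Wilf recurrence, which is known to list every positive rational exactly once:)

from fractions import Fraction
from math import floor

def calkin_wilf():
    # Calkin-Wilf order: starting from 1, each term determines the next,
    # and every positive rational appears exactly once in the sequence.
    q = Fraction(1)
    while True:
        yield q
        q = 1 / (2 * floor(q) - q + 1)

rationals = calkin_wilf()
print([str(next(rationals)) for _ in range(10)])
# ['1', '1/2', '2', '1/3', '3/2', '2/3', '3', '1/4', '4/3', '3/5']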
EL: Yeah. Well, and we were talking about this a little bit before, and it's this weird thing. Like, yeah, countable is like a smallness thing. And yet dense is like, they're, you know, they fill up the whole interval this way. I mean, it is really weird. So where did you first encounter this?
RG: This was in a class in real analysis. And, yeah, so that's where I started to…I thought I was going to be a functional analyst. I thought that's what I wanted to do. I love real analysis. That didn't happen either. But it was in that class where we were talking about just these strange facts, like the Cantor set: that set is a subset of the reals that is uncountable and yet it’s sparse.
KK: Totally disconnected, as the topologists say.
RG: Totally disconnected. There you go. Yeah. Right. And so then all these weird things are happening. And you're just in this world where you thought you understood the real line, and then they throw these things at you like, the reals are dense. I mean, the rationals are dense in the reals, you have these weird uncountable sets that are totally disconnected. What's going on? Yeah, so that's where I started to hear about all these weird things happening.
KK: Right. So one of two things happens when people learn these things, right? It either blows their minds so much they can't keep going, or it intrigues them so much that they want to learn more. But not be an analyst, right?
RG: [laughing] That’s right. At some point I fell in love with computational algebraic geometry and these Gröbner bases, and how you can really get your hands on some of these things and their applications to combinatorics. So I ended up, I had an algebraist’s heart, but I was exposed to some really good analysts early in my career. And so I was very confused. But I've always, I stay true to my algebraic heart and follow that mostly.
EL: And so is this a fact that you get to teach to your students now ever?
RG: So no, this is not, but I do like to talk about the different infinities and things along those lines. And I like to, before class I come in early, and I'll have a little chat with them about just the fact that—you know, they don't understand that math is not “done.” So, there's still so much to do. And they have no idea that, you know, there's, what, what is research like? What does that mean? And so I talk about open questions. And I bring some of that in the beginning of class. And these concepts that had also drawn me in, about the different kinds of infinities and these weird concepts about the rationals being dense and, you know, just things like that. I do get to talk about it, but it's not in a class that I would teach the material in.
EL: Yeah, just going back to this idea that you've got the rationals that are dense, so it's this, like, measure zero small set, but it's like everywhere. And then you've got the Cantor set, which is uncountable and sparse. It's like, we've got these various ways of measuring these sets. And you think that they line up in some natural way. And yet they don't. It's just like, you know, the density is measuring a different type of property of the numbers than the measure is.
RG: That’s exactly it.
EL: And actually, I guess countability is a different thing. Also, I mean, it's, yeah, it's so weird. And it's hard to keep all these things straight. My husband does a lot of analysis and like has, yeah, all of these, like, what kinds of sets are what.
RG: And what properties they have. And yeah, I don’t have that completely straight.
KK: This is why I’m a topologist.
EL: But I mean, topology is like,
KK: Oh, it's weird too.
EL: It’s secretly analysis.
KK: Well…
EL: Analysis wishes it was topology, maybe.
KK: So my old undergraduate advisor—who passed away last summer, and I was really sad about that—but he always referred to topology as analysis done right.
EL: Shots fired.
KK: Which is cheap, of course, right? Because you prove all this stuff in topology: oh, the image of a connected set is connected. Yeah, that's easy. Now go off to the real line and prove that the connected sets are the intervals. That's the hard part. Right? So yeah, he's being disingenuous, but it was. It's a good line. Right.
RG: Right.
EL: So you said that you ran into this, was this an undergraduate class where you first saw these notions of countability and everything?
RG: Right, it was an undergraduate class where I ran into those notions, and I was a junior, well, I guess it was in my second semester as a junior, where we were talking about these strange sets. And that's when I had also thought about going on to graduate school and wanting to do mathematics for the rest of my life. I mean, I was a major by then, of course, but I just didn't know what I was going to do. But it wasn't until then, when I learned, well, this could be a career for you. This may be something you like to do. And of course, this was many, many years ago. And nowadays, you can do so much more with mathematics, obviously. I mean, we know that we can do so much more, I should say. We've always been able to do so much more. We just haven't been able to share that with our students so much. We never really spent the time to let them know there's so many careers in mathematics that one can do. But anyway, at that time I was drawn into really thinking about becoming a mathematician, and that was one of the experiences that made me think that there's so much more to this than I originally thought.
EL: Yeah, well, I talk to a lot of people, you know, in my job writing and doing podcasts and stuff about math, and there's so many people who don't realize that, like, math research is a career you can do.
RG: Right.
EL: And the more we can share these kinds of “aha” moments and insights, the better and, you know, just show like, well, you can use, you know, kind of the logic and the rules of the game to like, find out these really surprising aspects of numbers.
RG: Right. And I think also, one of the experiences that I've had as an undergrad that really just sort of sealed the deal—I’m going to go into mathematics—was doing an undergraduate research program as a student. Well, it wasn't really an undergraduate research program at the time, it was just another summer program. This is many years ago, almost before all of that. And I had the chance to spend a summer just thinking about mathematics at a higher level with a cohort of other students who were like-minded as well, you know. And it was really—it was like, “Oh, I can do this for the rest of my life? Like how amazing is that?” And so, I was part of a summer program as an undergrad. And then when I was a graduate student, my lifelong mentor, Herbert Medina, was running a program in Puerto Rico and asked me to be a TA while I was a grad student. And so these were some of the things that led me to do what I do now, working with undergraduates, doing research in mathematics.
EL: And so that ties in to the MSRI program that you are part of, right?
RG: Right.
EL: I guess I've seen it written like MSRI-UP. So I guess that's undergraduate program?
RG: Yes. Undergraduate Program. That's right. Yeah. Well, that's sort of like a different stage that I'm at now. But yeah, before that, I started my own undergraduate research program together with colleagues in Hawaii, at the University of Hawaii at Hilo. And we ran an undergraduate research program called PURE Math, and that was Pacific Undergraduate Research Experience in Mathematics. And we ran that for five years. And then I ended up moving into the co-director role at MSRI-UP.
EL: Nice.
KK: That’s a great program.
EL: Yeah. So the other thing we like to ask our guests to do, is to pair their theorem with something. You know, just like the right wine can enhance that meal, you know, what would you recommend enjoying the density of the rationals with?
RG: Well, I did think about this a bit. And one of the things that I think is, you know, you think the rationals are dense, but they really shouldn't be. So, I think of foods that are dense, but they really shouldn't be, and one of those foods that comes to mind, especially being here in Texas, but also being married to a mathematician who is from Mexico, is tamales. So tamales really should not be dense. They should be fluffy and sumptuous, but here in Texas, you find really dense ones most of the time, unfortunately. But it was strange to also discover that growing up in Guam, we also have our own version of tamales, and a lot of the foods are related in some way to foods from Mexico. So I feel like there's this huge rich connection between myself being from Guam and my husband being Mexican, and there's just this strange richness, that we share this culture, that, I don't know, it just blows my mind too. The same way that the rationals being dense in the reals blows my mind.
EL: All right, well, I have to ask more about this tamale-like creation from traditional Guam cuisine. What is that wrapped in, like, banana leaves or something like that?
RG: It ought to be, and maybe traditionally it was. I think that nowadays it's not that way. They usually serve it in aluminum foil, and it's made—it's a mixture like tamales. So tamales in Mexico are made with corn, right?
KK: I was about to ask this. What are they made of in Guam?
RG: Yeah, yeah. And so in Guam we actually use, like, a rice product.
EL: Okay.
RG: It's ground up just like corn. And so instead of corn, we're using rice, and it's flavored in different ways.
KK: Interesting.
EL: All right. I kind of have in my mind, because I'm more familiar with this: is it kind of like a mochi texture? Because, I mean, that's a rice product, but maybe it's not, maybe that's more gelatinous than this would be.
RG: Yeah, I guess mochi is really pounded and yeah, so yeah, that's more chewy. I think that the tamal, well, you wouldn't say it like that, but the tamales in Guam are very soft and, gosh, I don't know how to describe it. But it's a very soft textured food.
KK: I would imagine the rice could be softer, and I mean, corn can get very dense, especially when you start to put lard in it and things like that.
RG: Yes.
KK: I mean, it’s delicious.
RG: It is delicious. And oh my, I can’t get enough tamales. Oh, well.
KK: Yeah, maybe you can.
RG: Yeah, I should learn.
EL: Yeah, well, nice. Unfortunately…well, we do have a couple of restaurants in Salt Lake that are Pacific Island restaurants, but we have more people from Samoa and Tonga here. I don't know if we have a lot of people from Guam here. Yeah, there's actually, like, a surprising number of Samoans who live in Salt Lake. Who knew?
RG: Right.
EL: But yeah, it's because of, like, the history of Mormon missionaries.
KK: That’s what I was gonna say.
EL: Yeah, the world is very interesting, but yeah I don't know if I've seen this kind of food there. I will just have to, you know, if I'm ever in Huntsville I’ve got to get you to make me some of this. I’m just inviting myself over for dinner now. Hope you don't mind.
RG: That would be great. It would be wonderful to have you here.
EL: Is there anything else you'd like to share? We'd like to give our guests a chance to like, share, you know if they've got a website or blog or book or anything, but also if you want to share information about MSRI-UP, application information, anything like that for students? Anything you'd like to share?
RG: Oh, wow. That's a lot of stuff.
EL: Yeah, I know. I just rattled off a ton of things.
RG: Well, yes, I do have, I guess I would like to say for the undergraduate listeners in the audience, please consider applying to our MSRI-UP program, and just in general apply to a research program in the summer. These are paid opportunities for you to expand your mind and do some mathematics in a great environment, and so I highly recommend considering applying for that. And this is the time right now, although of course by the time the listeners hear this, I'm sure it will be over. But consider doing some undergraduate research or using your summer wisely.
KK: I parked cars in the summer in college. I did.
EL: Well, you never know the connections that might happen, though, because I was talking to someone one time whose big break to get to go to grad school came because, like, somehow he was involved in parking enforcement somewhere, and some math professor called in to complain about getting a ticket, and one thing led to another, and then he ended up in grad school. So really, you never know. Maybe that's not the ideal route to take. There are more direct routes, but yeah, there are many paths.
RG: Yes, there are. And there's another thing to flag: I contributed to a book that Dr. Pamela Harris and others have put together on undergraduate research. I guess that was just released. I'm not entirely sure now; I think it was accepted, and I don't know if one is able to purchase it yet. But if you consider working with your students on undergraduate research, this is a great resource to use to get you going, I guess.
KK: Great.
EL: Oh, awesome. So this is like a resource for like faculty who want to work with undergraduates? Oh, that's great.
RG: Yes.
EL: We will find a link to that and put that in the show notes for people.
RG: That sounds good.
EL: Okay, great. Thanks so much for joining us.
KK: It’s been great.
RG: Thank you so much.
On this episode of My Favorite Theorem, we were happy to talk with Rebecca Garcia, a mathematician at Sam Houston State University, about the density of the rational numbers in the reals. Here are some links you might find helpful.
Her website
A biography of Garcia for SACNAS
MSRI-UP
A Project-Based Guide to Undergraduate Research in Mathematics, the book she mentioned contributing to
Kevin Knudson: Welcome to My Favorite Theorem, a math podcast and so much more. I'm Kevin Knudson, professor of mathematics at the University of Florida, and here is your other host.
Evelyn Lamb: Hi, I'm Evelyn Lamb. I'm a freelance math and science writer based in Salt Lake City, where it is snowy, but I understand not as snowy as it is for our guest.
KK: I know, and we've been trying to make this one happen for a long time. So I'm super excited that this is finally going to happen. So today we are pleased to welcome Professor Steve Strogatz. Steve, why don't you introduce yourself?
Steve Strogatz: Well, wow, thank you. Hi, Kevin. Hey, Evelyn. Thanks for having me on. Yeah, I've wanted to be on the show for a very long time. And I think it's true what Evelyn just said, we have a very big snowstorm here today in not-so-sunny Ithaca, New York, upstate. I just took my dog out for a walk, and the snow was over my boots and going into them and making my feet wet.
KK: See, I have a Florida dog. She wouldn't know what to do. Actually, we were in North Carolina a few years ago at Christmas, and it snowed, and she was just alarmed. She had no idea what to do. And she's small, too, she just couldn't take it.
SS: Yeah, well, it would be more like tunneling than running.
KK: Right.
EL: Yeah. So we actually met quite a few years ago at this point — actually, I know the exact date because it was, like, two days before my brother's wedding the first time we met, because you were on the thesis committee for my sister-in-law, who is a physicist, many years ago. And so we have this weird connection. It was when I had just moved to New York to work at Scientific American for the first time. So it was at the very beginning of my life as a math writer. And I remember just being floored by how generous you were with being willing to meet with a nobody like me.
SS: Well that’s nice.
EL: At this time when I was first starting.
SS: But actually, I had a crystal ball, and I knew you were going to become the voice of mathematics for the country, practically. I mean, so let me brag on Evelyn’s behalf a little bit. If you go on Twitter, you—I wonder if you know this, Kevin, do you know this little factoid I'm going to unreel?
KK: I bet I do.
SS: You know where I'm going. On Twitter, if you ask “What mathematician do other mathematicians follow?” I think Evelyn is the number one person the last time I checked.
KK: She is indeed number one. That's right.
SS: Yeah.
EL: I like to say I'm the queen of math Twitter, although I don't actually like to say this because it feels really weird.
SS: Well that’s okay. You didn't say it. But yeah, I do remember our meeting that day in my office. And right, it was on this happy occasion of a family, of a wedding. Okay, sorry, I interrupted you, Kevin.
KK: Oh, I don't know. I was going to say with the Twitter thing. I think you're not far behind, right? Like, aren't you number two, probably?
SS: I think the last time I looked I was number two.
KK: Yeah.
SS: So look at that. Okay, so look at that, the two tweet monsters here.
KK: And now the funny thing is I'm not even on that list. So here we go.
SS: Okay. Yeah, well you could catch up. I'm sure you'll be coming right on our heels.
KK: Maybe. I have over 1000 followers now, but apparently not that many mathematicians. So this is how this goes. Anyway, what weird times we live in, right?
SS: It's very weird. I mean, I don't know what this can get us, a cup of coffee or what.
KK: Maybe, maybe. Okay. Let's talk theorems. So Steve, you must have a favorite theorem. What is it?
SS: Yeah, I have a very sentimental attachment to a theorem in complex analysis called Cauchy’s theorem, or sometimes called Cauchy’s integral theorem.
KK: Oh, I love that theorem.
SS: It’s a fantastic theorem. And so I don't know. I mean, I feel like I want to say what I like about it mathematically and what I like about it personally. Does that work?
EL: Yeah, that’s exactly what we want.
SS: Well, okay. So then, the scene is, it's my sophomore year of college. Maybe I'll start with the emotional.
KK: Okay.
SS: It’s my sophomore year of college. I've just gotten very demoralized in my freshman year, taking the honors linear algebra course that a lot of universities offer as a kind of first introduction to what college math is really going to be like. You know, a lot of kids in high school have done perfectly well in their precalculus and calculus courses, and then they get to college and suddenly it's all about proofs and abstraction. And it can be—I mean, we sometimes call it a transition course, right? It's a transition into the rigorous world of pure math. And so it was a shock for me. I had a lot of trouble with that course. I couldn't read the book very well, it didn't have pictures. And I'm kind of visual. And so I was always at a loss to figure out what was going on. And being a freshman I didn't have any sense about, why don't I look at a different book, you know, or maybe, maybe I should switch sections. Or I could ask my teaching assistant, or I could go to office hours. I didn't know to do any of that stuff.
So anyway, this is not my favorite theorem. I was very demoralized after this experience in linear algebra. And then when I took a second semester, also an honors course, that was a rigorous calculus course with the Heine-Borel theorem, and, you know, like, all kinds of—again, no formulas, it was all about, I remember hearing this stuff about “every open cover has a finite subcover,” and I thought, “I want to take a derivative! I can't do anything here. I don't know what to do!” So anyway, after that first year, I thought, “I don't have the right stuff to be a mathematician. And so maybe I'll try physics,” which I also always loved. I say all that as preamble to this complex analysis course that I was taking in sophomore year, which, you know, I still wanted to take math, I heard complex variables might be useful for physics, I thought it would be an interesting course. I don't know. Turned out it was a really great course for me because it really looked a lot like calculus, except it was f(z) instead of f(x).
KK: Right.
SS: You know, but everything else was kind of what I wanted. And so I was really happy. I had a great teacher, a famous person actually named Elias Stein.
KK: Oh.
SS: So Stein is a well-known mathematician, but I didn't know that. To me, he was a guy who wore Hush Puppies and, you know, had always kind of a rumpled appearance, came in with his notes. And he seemed nice, and I really liked his lectures. But so one day, he starts proving this thing, Cauchy’s theorem, and he draws a big triangle on the board. And he's going to prove that the integral of an analytic function f around this triangle is zero no matter what f is. All he needs is that it's analytic, meaning that it has a derivative in the sense of a function of a complex variable. It's a little more stringent condition—actually a lot more stringent than to say a function of a real variable is differentiable, but I didn't appreciate that at the time. I mean, that's sort of the big reveal of the whole subject.
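[Editor's note: For readers who want to see what "analytic" means on paper, here is the standard condition, in our notation rather than anything quoted from the episode. Writing f(z) = u(x, y) + iv(x, y) for z = x + iy, the existence of a complex derivative forces the Cauchy-Riemann equations,
\[
\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \qquad \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x},
\]
which tie the real and imaginary parts together far more tightly than real differentiability alone would.]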
KK: Right.
SS: That this is an unbelievably stringent condition. You can’t imagine how much stuff follows from this innocuous-looking assumption that you could take a derivative, but okay, so I'm kind of naive. Anyway, he says he's going to prove this thing, only assuming that f is analytic on this triangle and inside it. And that's enough. And then, you know, I feel like you don't have enough information, there's nothing to do! So then he starts drawing a little triangle inside the big triangle, and then little triangles inside the little triangle. And it starts making a pattern that today I would call a fractal, though I didn't know it at the time, and he didn't say the word fractal. And actually, nobody ever says that when they're doing this proof. But it’s—right, they don’t—but it's triangles inside of triangles in a self-similar way that doesn't actually play any particular role in the proof, other than it's just this bizarre move, like, What is going on? Why is he drawing these triangles inside of triangles? And by the end, I mean, I won't go into the details of the proof, but he got the whole thing to work out, and it was so magnificent that I started clapping.
And at that point, every kid in the room whipped their head around to look at me, and the professor looked at me, like what is wrong with you? You know, and yet, I thought, “Wow, why are you guys looking at me? This was the most amazing theorem and the most amazing proof.” You know, so anyway, to me, it was a very significant moment emotionally because it made me feel that math was, first of all, something I could do again, something I could appreciate and love, after having really been turned off for a year and having a kind of crisis of confidence. But also, you know, aside from any of that, it's just, I think people who know would regard this proof—this is actually by a mathematician named Goursat, a French mathematician who improved on Cauchy’s original proof. Goursat’s proof of Cauchy’s theorem is just one of the great—you know, it's from “The Book” in the words of Paul Erdős, right? If God had a proof of this theorem, it would be this proof. Do you guys have any thoughts about that? I mean, I'm assuming you know what I'm talking about with this theorem and this proof.
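[Editor's note: The statement Professor Stein was proving, in the triangle form due to Goursat and written here in standard notation rather than taken from the episode, is: if f is holomorphic on an open set containing a closed triangle T, then
\[
\oint_{\partial T} f(z)\, dz = 0.
\]
The nested-triangles argument Strogatz describes is the usual proof of exactly this statement.]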
KK: Well, this is one of my favorite classes to teach because everything works out so well. Right? Every answer is zero because of Cauchy’s theorem, or it's 2πi because you have a pole in the middle, right?
SS: Yeah.
KK: And so I sort of joke with my students that this is true. But then the things you can do with this one theorem, which does—you’re right, it's very innocuous-looking, you know, you integrate an analytic function on a closed curve, and you get zero. And then you can do all these wonderful calculations and these contour integrals and, like, the real indefinite integrals and all this stuff. I just love blowing students’ minds with that, and just how clean everything is.
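[Editor's note: The two computations Kevin mentions, written out in standard notation (ours, not from the episode): for a simple closed curve γ and a function f analytic on and inside γ,
\[
\oint_{\gamma} f(z)\, dz = 0,
\]
while the model calculation with a pole inside is
\[
\oint_{\gamma} \frac{dz}{z - z_0} = 2\pi i,
\]
when γ winds once counterclockwise around z_0.]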
EL: Yeah, I kind of—I feel like I go back and forth a little bit. I mean, like, in my Twitter bio, it does have “complex analysis fangirl.” And I think that's accurate. But sometimes, like you said, with so many of these, you know, you're, like, teaching it or reading it and you're like, “Oh, complex analysis is so powerful,” but in another way, it's like our definition of derivative in the complex plane is so restrictive that we're just plucking the very nicest, most well-behaved things to look at and then saying, “Oh, look what we can do when we only look at the very most well-behaved things!” So yeah, I kind of go back and forth: is it really powerful or are we just, like, limiting ourselves so much in what we think about?
KK: And I guess the real dirty secret is that when you try to go to two complex variables, all hell breaks loose.
SS: Ah, see, I've never done that subject, so I don't appreciate that. Is that right?
KK: I don't, either. Yeah. But I mean, apparently, once you get into two variables, like none of this works.
SS: Ohhh. But that's a very interesting comment you make there, Evelyn, that—you know, in retrospect, it's true. We've assumed, when we make this assumption that a function is analytic, that we are living in the best of all possible worlds, we just didn't realize we were assuming that. It seems like we're not assuming much. And yet, it turns out, it's enormously restrictive, as you say. And so then it's a question of taste in math. Do you like your math really surprising and really beautiful and everything works out the way it should? Or do you like it thorny and full of rich counterexamples and struggles and paradoxes? And I feel like that's sort of the essential difference between real analysis and complex analysis.
EL: Yeah.
SS: In complex analysis, everything you had dreamed to be true is true, and the proofs are relatively easy. Whereas in real analysis, sort of the opposite. Everything you thought was true is actually false. There are some nasty counterexamples, and the proofs of the theorems are really hard.
EL: Yeah, you kind of have to MacGyver things together. “Yeah, I got this terrible epsilon and, like, you know, it's got coefficients and exponents and stuff, but okay, here you go. I stuck it together.”
KK: But that's interesting, Steve, that this is your favorite theorem because, you know, you're very famous for studying kind of difficult, thorny mathematics, right? I mean, dynamics is not easy.
SS: Huh, I wouldn't have thought that, that's interesting that you think that. I don't think of myself as doing anything thorny.
KK: Okay.
SS: So that's interesting. I mean, yes, dynamical systems in the hands of some practitioners can be very subtle. I mean, those are people who have a taste for those kinds of issues. I've never been very sophisticated and haven't really understood a lot of the subtleties. So I like my math very intuitive. I’m on the very applied end of the applied-pure spectrum, so that sometimes people will think I'm not really a mathematician at all. I look more like a physicist to them, or maybe even, God forbid, a biologist or something. So yeah, I don't really have much taste for the difficult and the subtle. I like my math very cooperative and surprising. I like—well, not surprising for mathematical reasons, but more surprising for its power to mirror things in the real world. I like math that is somehow tapping into the order in the world around us.
EL: Yeah, so it's interesting to me also that you picked this because, yeah, as you say, you are a very applied mathematician. And I think of complex analysis as a very pure—I actually, I'm trying not to say “pure” math, because I think it's this weird, like, purity test or something. But you know, it's a very theoretical thing. So does it play into your field of research at all?
SS: Well, uh, not particularly. Yeah. So that's a good question. I mean, I have to say I was a little intimidated by the title of the podcast. If you ask me what's my favorite theorem, the truth is for me, theorems are not my favorite things.
KK: Okay.
SS: My favorite things are examples or mathematical models. Like there’s a model in my field called the Kuramoto model after a Japanese physicist Yoshiki Kuramoto. And if you asked me what's my favorite mathematical object, I would say the Kuramoto model, which is a set of differential equations that mirrors how fireflies can get their flashes in sync, or how crickets can chirp in sync, or how other things in nature can self-organize into cooperative, collective oscillation. So that's my favorite object. I've been studying that thing for 30 years. And I suppose there are theorems attached to it, but it's the set of equations themselves and what they do that is my favorite of all. So I don't know, maybe that's my real answer.
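[Editor's note: For the curious, the Kuramoto model Strogatz mentions is usually written, in standard notation (ours, not taken from the episode), as the system of coupled oscillators
\[
\frac{d\theta_i}{dt} = \omega_i + \frac{K}{N} \sum_{j=1}^{N} \sin(\theta_j - \theta_i), \qquad i = 1, \dots, N,
\]
where θ_i is the phase of the i-th oscillator, ω_i is its natural frequency, and K is the coupling strength; for large enough K the oscillators spontaneously synchronize.]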
KK: Well, that’s fine. So yeah, it's true. We've had people who've done that in the past, they didn't have a favorite theorem, but they had a favorite thing.
SS: But still, I mean, I am still a mathematician, part of me is, and I do have theorems that I love, and one of the things I love about Cauchy’s theorem is that in the proof, with this drawing of all the nested triangles inside the big triangle, you end up using a kind of internal cancellation. Adjacent triangles share a common edge, and sometimes you're going one way and sometimes you're going in the opposite direction along that same edge. And so those contributions end up cancelling. And the only thing that doesn't cancel is what's going on around the boundary. And then that can be sort of pulled all the way into a tiny triangle in the interior, which is where you end up using the local property that is the derivative condition to get everything that you need to prove the result about the big triangle on the outside.
But the reason I'm going into all that is that this is a principle, this internal cancellation, that is at the heart of another theorem that's been featured on your show, the fundamental theorem of calculus, which uses a telescoping sum to convert what's happening on the boundary to what's happening when you integrate over the interior. This idea of telescoping I think, is really deep. I mean, it's what we use to prove Stokes’ theorem. It's what we would use to prove all the theorems about line integrals. It comes up in topology when you're doing chains and cochains. So this is a principle that goes beyond any one part of math, this idea of telescoping. And I've been thinking I want to write an article, someday (I haven't written it yet) called “Calculus Through the Telescope” or “A Telescopic View of Calculus” or something like that, that brings out this one principle and shows its ramifications for many parts of math and analysis and topology. I think some people get it, people who really understand differential forms and topology know what I'm talking about. But no one ever really told me this, and I feel like maybe it should be mentioned, even though it is well-known to the people who know it.
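[Editor's note: The telescoping idea, written out in our notation rather than quoted from the episode: if a = x_0 < x_1 < … < x_n = b, then
\[
\sum_{k=0}^{n-1} \bigl[ F(x_{k+1}) - F(x_k) \bigr] = F(b) - F(a),
\]
because every interior value F(x_k) appears once with a plus sign and once with a minus sign. If F' = f, each increment is approximately f(x_k)(x_{k+1} - x_k), so the left side becomes a Riemann sum for the integral of f; that is the heuristic behind the fundamental theorem of calculus Strogatz is pointing to.]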
KK: Right, it's the air we breathe, right? So we don't think about it.
SS: I guess, but, like, I think there are probably high school teachers, or others who are teaching calculus—like for instance, when I learned about telescoping series in my first calculus course, that just seems like a trick to find an exact sum of a certain infinite series of numbers. You know, they show you, “Okay, you could do this one because it's a telescoping series.” And it seems like it's an isolated trick, but it's not isolated. This one idea—you can see the two-dimensional version of it in Cauchy’s theorem, and you can see the three-dimensional version of it in the divergence theorem, and so on. Anyway, so I like that. I feel like this idea has tentacles spreading in all directions.
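[Editor's note: A standard example of the classroom trick Strogatz mentions, in our notation: since 1/(k(k+1)) = 1/k − 1/(k+1),
\[
\sum_{k=1}^{n} \frac{1}{k(k+1)} = \sum_{k=1}^{n} \Bigl( \frac{1}{k} - \frac{1}{k+1} \Bigr) = 1 - \frac{1}{n+1} \longrightarrow 1,
\]
with all the interior terms cancelling in pairs, just like the shared triangle edges in Goursat's proof.]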
EL: Yeah. Well, this makes me want to go back and think about that idea more because, yeah, I wouldn't say that I would necessarily have thought to connect it to this many other things. I mean, you did preface your statement with “those who really understand differential forms,” and my dark secret is that the word “form” really scares me. It's a tough one. It's somehow, that was one of those really hard things, when I started doing more, like, hard real analysis. It's like, I feel like I always had to just kind of hold on to it and pray. And you get to the end of it. You're like, “Well, I guess I did it.” But I feel like I never really got that full deep understanding of forms.
SS: Huh. I don't claim that I have either. I'm reminded of a time I was a teaching assistant for a freshman course for the whiz kids that—you know, every university has this where you throw outrageous stuff at these freshmen, and then they rise to the occasion because they don't know what you're asking them to do is impossible. But so I remember being in a course, like I say, as a teaching assistant, where it was called A Course in Mathematics for Students of Physics, based on a book by Shlomo Sternberg, at Harvard, and Paul Bamberg, who's a physicist there too, and a very good teacher. And that book tried to teach Maxwell's equations and other parts of physics with the machinery of differential forms and homology and cohomology theory to freshmen. But what was amazing is it sort of worked, and the students could do it. And in the course of teaching it, I came to this appreciation of integrating forms, and how it really does amount to this telescoping sum trick. And, anyway, yeah, it's true, that maybe it's not super widely appreciated. I don't know. I don't know if it is, I don't want to insult people who already know what I'm talking about. But I do feel like there's a story to tell here.
KK: Okay. Well, we'll be looking for that.
EL: Yeah.
SS: Someday.
KK: In the New York Times, right?
SS: Well.
KK: So another thing we do on this podcast is we ask our guests to pair their theorem with something. And we might have sprung this on you, but you seem to have thought of a solution here. So what have you chosen to pair with Cauchy’s theorem?
SS: Cubist painting.
KK: Oh, excellent. Okay. Explain.
EL: Yeah, tell us why.
SS: Well, I'm thinking of Cubism. I don’t—look, I don't know much about art. So it might be a dumb pairing. But what I'm thinking is there's a painting. I think it's by Georges Braque of a guy, or maybe it's Picasso. Someone walking down stairs. And maybe it's called Nude Descending a Staircase, or something like that. You're nodding, do you know what I mean?
EL: I'm a little nervous about saying, I think it is Picasso, but I'm looking it up on my phone surreptitiously.
SS: I could try too. For some reason, I'm thinking it's Georges Braque, but that may be wrong. But so I'll describe the painting I have in my head, and it may be totally not—
EL: No, it’s Marcel Duchamp!
SS: Oh, it's Marcel Duchamp?
EL: Yeah.
SS: And what's the name of it?
EL: Nude Descending a Staircase, No. 2. I think.
SS: Yeah, that's the one. Would that be considered Cubism?
EL: Yeah.
SS: It says according to Wikipedia, it’s widely considered a modernist classic. Okay, I don't know if it's the best example of what I'm thinking. But it's, let me just blow it up and look at it here. So, what hits me about it is it's a lot of straight lines. It's very rectilinear. And you don't see anything that really looks curved like a human form. You know, people are made of curved surfaces, our faces, our cheeks are, you know. What I like is this idea that you can build up curved objects out of lots of things made of straight lines. You know what you can do? Mesh refinement on it. For instance, there's an old proof of the area of a circle where you chop it up into lots of pizza-shaped slices, right, and then you add up the areas of all those. And they can be approximated by triangles, and if you make the triangles thin enough, then those slivers can fill out more and more of the area, the method of exhaustion proof for the area of a circle. So this idea that you can approximate curved things with triangles reminds me of this idea in Cauchy’s theorem that first you prove it for the triangle, and then later Professor Stein proved the result for any smooth curve by approximating it with triangles, you know, a polygonal approximation to the curve, and then he could chop up the interior into lots of triangles. So I sort of think it pairs with this vision of the human form and its sinuous descent down. You know, this person is smooth, and yet they're being built out of these strange Cubist facets, or other shapes. I mean, think of other Cubist paintings: you represent smooth things with gem-like faceted structures. It sort of reminds me of Cauchy’s theorem.
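[Editor's note: The pizza-slice argument Strogatz describes can be written down concretely; this is our notation, not a quote from the episode. A regular n-gon inscribed in a circle of radius r consists of n thin isosceles triangles, each of area (1/2) r² sin(2π/n), so its total area is
\[
A_n = \tfrac{1}{2}\, n\, r^2 \sin\!\Bigl( \frac{2\pi}{n} \Bigr) \longrightarrow \pi r^2 \quad \text{as } n \to \infty,
\]
the straight-edged triangles filling out the curved disk in the limit.]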
KK: Okay, good pairing. Yup.
EL: Yeah, glad we got to the bottom of that before we made false statements about art on this math podcast.
SS: Yeah, it may not be the best Cubist example. But what are you gonna do? You invited a mathematician.
KK: So we also like to let our guests make pitches for things that they're doing. So you have a lot going on. You have a new podcast.
EL: Yeah, tell us about it.
SS: Okay. Yeah, thank you for mentioning it. I have a podcast with the confusing name Joy of X. Confusing because I also wrote a book by that name. And before that I had written an article by that name.
KK: Yes.
SS: So I did not choose that name for the podcast. But my producer felt like it sort of works for this podcast because it's a show where I interview scientists and mathematicians—in spirit, very similar to what we're doing here. And I talk to them about their lives and their work. And it's sort of the inner life of a scientist, but it could be a neuroscientist, it could be a person who studies astrophysics, or a mathematician. It's anything that is covered by Quanta Magazine. So Quanta Magazine, some of your listeners will know, is an online magazine that covers fundamental parts of math and science and computer science. Really, it's quite terrific. If people haven't read it, they might want to look at it online. It's free. And anyway, so Quanta wanted to start a podcast. And they asked me to host it, which was really fun because I get to explore all these parts of science. I've always liked all of the different parts of science, as well as math. And so yeah, that's the show. It's called the Joy of X where here, X takes on this generalized meaning of the unknown, not just the unknown in algebra, but anything that's unknown, and the joy of doing science and the scientific question. We'll be sure to link to that.
EL: Yeah.
KK: Also, I think Infinite Powers came out last year, right? 2019?
SS: That’s true. Yes, I had a book, Infinite Powers, about calculus. And that was an attempt to try to explain to the general public what's so special about calculus, why is it such a famous part of math. I try to make the case that it really did change the world and that it underpins a lot of modern science and technology as well as being a gateway to modern math. I really do think of it as one of the greatest ideas that human beings have ever come up with. Of course, that raises the question, did we discover it or invent it? But that’s a good one.
EL: Put that on a philosophy podcast somewhere. We don’t need that on this math podcast.
SS: Yeah, I don't really know what to say about that. That's a good timeless question. But anyway, yes, Infinite Powers was a real challenge to write because I'm trying to tell some of the history, but I'm not a historian of math. I wanted to really teach some of the big ideas for people who either have math phobia or who took calculus but didn't see the point of it, or just thought it was a lot of, you know, doing one integral after another without really understanding why they're doing it. So it's my love song to calculus. It really is one of my favorite parts of math, and I wanted other people to see what's so lovable and important about it.
KK: Yeah.
SS: The book, as I say, was hard because I tried to combine history and applications and big ideas without really showing the math.
KK: Yeah, that's hard.
SS: And make it fun to read.
KK: Right. It is. It's a very good book, though. I did read it.
SS: Oh, thanks.
KK: And I enjoyed it quite a bit.
EL: Well, it is on my table here under a giant pile of books to read, because people need to just stop publishing.
SS: That’s right.
EL: There’s too much. We just need to have a year to catch up, and then we could start going again. But what's, what's…
KK: What’s that Japanese word, sort of the joy of having unread books? [Editor’s note: Perhaps tsundoku, “acquiring reading materials but letting them pile up in one’s home without reading them.”] There's a Japanese concept of, like, these books that you’ll, well, maybe even never read. But that you should have stacks and stacks of books. Because, you know, maybe you'll read them. Maybe you won't. But the potential is there.
SS: Nice.
KK: So I have a nightstand, on the shelf of my nightstand there's probably 20 books there right now, and I haven't read them all. I've read half of them, maybe, but I'm going to read them. Maybe.
SS: Yeah, yeah.
KK: Actually, you know, when you were talking about your sort of emotional feelings about Cauchy’s theorem, it reminded me of your—I don't know if it was your first book, but The Calculus of Friendship, about your relationship with your high school teacher.
SS: Well, how nice of you to mention it.
KK: Yeah. That was interesting too, because it reminded me a lot of me, in the sense of, I thought I knew everything too when I was 18. Like, I thought, “Calculus is easy.” And then I get to university and math wasn't necessarily so easy. You know. And so these same sort of challenges, you know?
SS: Well, I appreciate that, especially because that book is pretty obscure. As far as I know, not many people read it. And it's very meaningful to me because I love my old teacher, Mr. Joffrey, who is now, let’s see, he's 90 years old. And I stayed in touch with him for about 35 years after college, and we wrote math problems to each other, and solutions. And it was really a friendship based on calculus. But over the course of those 35 years, a lot happened to both of us in our lives. And yet, we didn't tend to talk about that. It was like math was a sanctuary for us, a refuge to get away from some of the ups and downs of real life. But of course, real life has a way of making itself, you know, insinuating itself whether you like it or not. And so it's that story. The subtitle of the book is “what a teacher and a student learned about life while corresponding about math.” And I sometimes think of it as, like, there's a Venn diagram where one circle is people who want to read math books with all the formulas, because I include all the formulas from our letters.
KK: Yeah.
SS: And then there's people who want to read books about emotional friendships between men. And if you intersect those two circles, there's a tiny sliver that apparently you're one of the people in it.
KK: And your book might be the unique book in that Venn diagram too.
SS: Maybe. I don't know. But yeah, so it was clear it would not be a big hit in any way. But I felt like I couldn't do any other work until I wrote that book. I really wanted to write it. It was the easiest book to write. It poured out of me, and I would sometimes cry while I was writing it. It was almost like a kind of psychoanalysis for myself, I think, because I did have a lot of guilty feelings about that relationship, which, you know, if you do read the book, anyone listening, you'll see what I felt guilty about, and I deserved to feel guilty. I needed to grow up, and you see some of that evolution in the course of the book.
KK: Yeah. All right. Anything else you want to pitch? I mean?
SS: Well, how about I pitch this show? I mean, I'm very delighted to be on here. Really, I think you guys are doing a great thing helping to get the word out about math, our wonderful subject. And so God bless you for doing that.
KK: Well, this has been a lot of fun, Steve, we really appreciate you taking time out of your snow day. And so now do you have to shovel your driveway?
SS: Oh, yeah, that may be the last act I ever commit.
KK: Don’t you still have a teenager at home? Isn't that what they're for?
SS: My kids, I do have—you know what, that's a good point. I have one daughter who is still in high school and has not left for college yet, so maybe I could deploy her. She's currently making oatmeal cookies with one of her friends.
KK: Well, that's useful. I mean, that's helping out the family too, right? I mean,
SS: They’re both able-bodied, strong young women. So I should get them out there with me, and we could all shovel ourselves out. Yeah.
KK: Good luck with that. Thank you. Thanks for joining us.
SS: My pleasure. Thanks for having me.
On this episode of My Favorite Theorem, we were happy to talk with Steve Strogatz, an applied mathematician at Cornell University, about the Cauchy integral theorem. Here are some links you might find helpful.
Strogatz’s website, which includes links to information about his books and articles
The Joy of X, the podcast he hosts for Quanta Magazine
The Cauchy integral theorem on Wikipedia
The Kuramoto model
Nude Descending a Staircase no. 2 by Marcel Duchamp
Evelyn Lamb: Hello and welcome to My Favorite Theorem, the podcast that was already quarantined. I’m one of your hosts, Evelyn Lamb. I am holed up in my house in Salt Lake City, Utah, where I'm a freelance writer. So, honestly, I have worked in my basement, you know, every day for the past five years, and that hasn't changed. This is your other host.
Kevin Knudson: Hi, I’m Kevin Knudson, professor of mathematics at the University of Florida, which is open for business…But you can't go to campus.
EL: Okay.
KK: Yeah, we moved all of our classes online two weeks ago. I'm just teaching a graduate course this term, so that's sort of easier for me. I feel bad for the people who have to actually lecture and figure out how to do this all at once. My faculty have actually been great. They really stepped up. And, remarkably, I've had very few complaints from students, and I'm the chair, so you know, they would come to me. And it's just really not—I mean, everybody has really taken the whole thing in stride. A lot of anxiety out there, though, among our students. Really, this is a really challenging time for everybody. And I just encourage my faculty to, you know, be kind to their students and to themselves. So let’s shelter in place and get through this thing, right?
EL: Yup. Yeah, we had an earthquake a week and a half ago to just, like, shake things up, literally. So it's just like, oh, as if a pandemic sweeping through town was not enough. We'll just literally shake your house for a while.
KK: Yeah, well, you know, we can go outside. We have a Shelter in Place Order, but it's been 90 degrees every day for the last week. And so you know, I like to go bird watching, but my favorite bird watching spot is a city park, and it's closed. So I have to just kind of sit on my back porch and see what's up. Yeah. Oh, well,
EL: Well, yes, we're making it through it. And I hope—I mean by the time this is—we have a bit of a backlog in our past episodes, and so who even knows what's going to be happening when this is airing. [Editor’s note: We decided to publish this one out of order, so we actually recorded it pretty recently.] But whatever is happening, I know our listeners will be very thrilled to be listening to Ruthi Hortsch! Hi, Ruthi. How are you today?
Ruthi Hortsch: Hey, I'm managing.
EL: Yeah.
RH: It’s a weird time.
EL: Definitely. So what do you do, and where are you?
RH: Yeah, so I'm in New York City right now, which is kind of right now the hotbed of lots of new infections. But I've been in my apartment for the last two and a half weeks and haven't really directly been experiencing that.
I work for an organization called Bridge to Enter Advanced Mathematics. So we're an education nonprofit. We work with low-income and historically marginalized youth. And we're trying to create a realistic pathway for them to become mathematicians, scientists, engineers, programmers.
We start working with students when they're in middle school and we try to figure out, like, what are the things you need to get you to a place where you'll have a successful STEM career? And so we do a lot of different things, but they all are to that purpose.
EL: Yeah, and I'm so glad that we have you on the show to talk about this. Because, yeah, I've been thinking, like, we really need to get someone from BEAM on here, because I think BEAM is just such a great program. My spouse and I donate to it every year. I mean, obviously not every year, I don't even know how old it is. But you know, we've made that part of our yearly giving, and yeah, I just think it does great work. So, does BEAM have programs in both New York and LA now?
RH: Yes. So we started in New York City in 2011. And a few years ago, we expanded to LA. So the LA programs are still pretty new. They're building up, kind of starting with students in the first year of contact, and then adding in programming for the older students as that first class gets older. So they now have eighth graders, and that's their oldest class, and they'll continue to add in the ninth grade and the 10th grade program, et cetera, as it goes on. The other kind of exciting thing is, last year, we got a grant from the Gates Foundation. And that grant was to partner with other local programs and other cities to help them build up programs that could do some of the same things we do. So it's not the same comprehensive, really intensive support that we give our students in New York City and LA. But assuming summer camps don't get canceled this summer because of corona, there are going to be day camps in Albuquerque and Memphis that are advised by us.
EL: Oh, that's so great. Yeah, because the one thing about it is that it is so localized, and of course those are important places for it to be localized. But, you know, the wider the reach, the better. So that's awesome. And what's your role there? What do you do?
RH: Yeah, I have a hard time answering this question. So I work in programs, which is like, I work on things that are directly affecting students. I run one of our summer camps in the summer. So I run a sleepaway camp at Union College, in which students learn proof-based mathematics for the first time. The students at the sleepaway camp are all rising eighth graders, and so they get to learn number theory and combinatorics and group theory. They also do some modeling and programming and stuff.
During the year I do some managing of our programs team, so supporting other staff. I also do all of our faculty hiring. So certainly we hire a lot of people just for the summer, and most of them are—so we hire college and university students, we hire grad students, we hire professors in various different roles. And I handle all of the, like, hiring of people to teach math courses.
EL: Wow.
KK: That’s a lot. Are your programs sort of face to face, or are they online? Is it sort of a combination of stuff?
RH: Yeah, so we run six in-person summer camps each summer. There's two in upstate New York that are sleepaway, one in Southern California that's sleepaway, and then one day camp in LA and two day camps in New York City. And those are all in-person, face to face. And then during the school year, we also have Saturday classes, which are a mix of life skills and enrichment. And we also do in-person advising. So we have office hours where students can come ask us anything, and then also kind of more intensive advising: like, how do you apply to college? How do you get into other summer programs or other STEM opportunities? So most of our programs are face to face. Right now, we've had to cancel a bunch of our year-round stuff. So we don't have Saturday classes right now. We are doing one class for the eighth graders virtually, because we really thought it was critical. And at the moment, we're hoping the summer programs will still run, but it's really hard to say what's going to be going on in two weeks.
KK: Yeah, well, fingers crossed.
EL: But as wonderful as it is to talk about BEAM, what we're dying to know is what is your favorite theorem?
RH: Yeah, so this was actually really fast for me to think of. My favorite theorem is Faltings’s theorem. So Faltings’s theorem is also actually known as the Mordell conjecture, because Mordell originally conjectured it in the same paper in which he proved Mordell’s theorem, I believe, or at least during the same process of research for him.
EL: Yeah, and so for longtime listeners, was it Matilde Lalín whose favorite theorem that was?
RH: Mm-hmm.
EL: Okay, that's right. So we're kind of dovetailing right in.
RH: Yeah. So when you look at elliptic curves, they have a finitely-generated abelian group, and Mordell’s theorem is the theorem that proves that it actually is finitely generated.
KK: Right.
RH: So when I say the finitely-generated part, it's actually only looking at the rational points on the curve. So we care about algebraic curves, kind of in general. And then we want to think about, like, how do different algebraic curves behave differently? And because I'm trained as a number theorist, I also specifically care about how many rational points are on that curve and how they behave. So this intersects also with algebraic geometry. And in some sense, this is a statement about how the arithmetic part of the curves—the rational points—interacts with the geometry of it.
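[Editor's note: In standard notation (ours, not a quote from the episode), Mordell's theorem says that for an elliptic curve E over the rationals, the group of rational points is finitely generated:
\[
E(\mathbb{Q}) \cong \mathbb{Z}^r \oplus E(\mathbb{Q})_{\mathrm{tors}},
\]
a finite torsion part together with r independent points of infinite order, where r is called the rank.]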
So one thing that people care about a lot in geometry is the notion of a genus. This is one of the ways to classify things. And of course, when you're looking at visual shapes, one way of thinking about the genus is how many holes does it have? So if you're just looking at a shape that’s, like, a big sphere, there's no way of poking a hole through it without actually breaking it apart. And so that has genus zero because there are zero holes. But if you're looking at a doughnut, a torus, that has one hole because there's like one place where you can poke something through. And then you can generalize from there that having more holes is higher genus. And so that's kind of a wishy-washy way of looking at things, and a very visual way. There are ways to define that formally in the algebraic sense, but in the places where both definitions make sense, the definition is the same.
And so when you look at algebraic curves, we can ask ourselves, how do genus zero curves act differently than genus one curves, act differently than genus two curves, and does that tell us anything about the number of rational points? And so it turns out that with genus zero curves, genus zero curves are actually really just conic sections. So basically the nice lines that you study in like algebra in high school. And those have infinitely many rational points, right? So when I say rational point, you can kind of think of it as being like the points where the components have rational values.
And genus one curves are actually exactly elliptic curves. So in that case, that's when Mordell’s theorem kicks in and the rational points are this finitely generated abelian group. And sometimes they have infinitely many rational points, and sometimes they don't, and it kind of depends on what this algebraic structure, this algebraic group structure, looks like. So that's the most complicated, weird case. And for genus two or higher curves, it turns out to be true that there are only finitely many rational points on a genus two or higher curve. And that's the statement of Faltings’s theorem.
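[Editor's note: Written out in standard notation (ours, not from the episode), Faltings's theorem says: if C is a smooth projective curve of genus g ≥ 2 defined over the rationals (or any number field), then C has only finitely many rational points,
\[
g(C) \ge 2 \;\Longrightarrow\; \#\, C(\mathbb{Q}) < \infty.
\]
So the three genus ranges behave very differently: genus 0 can have infinitely many rational points, genus 1 is governed by Mordell's theorem, and genus 2 or higher always has finitely many.]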
EL: Okay, and so there's something that, you know, you hear, like, genus two or higher. And I always wonder, is there a limit to how high the genus can be of these curves? Or, like, is there a maximum complexity that these curves can have?
RH: So no. And actually, there's a statement in algebraic geometry that makes it really easy-ish—you know, “ish”—to calculate the genus, which is called Riemann-Roch. And it gives you a relationship between the degree of the equation defining it and the genus. And essentially, the genus grows quadratically with the degree. There's an asterisk on everything I'm saying. It’s mostly true.
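[Editor's note: The quadratic relationship she mentions can be made explicit; this is the standard genus-degree formula, in our notation rather than from the episode. For a smooth plane curve defined by a degree-d equation,
\[
g = \frac{(d-1)(d-2)}{2},
\]
so, for example, a smooth plane quartic (d = 4) has genus 3. The asterisk is real: singular curves and curves not sitting in the plane need adjustments.]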
KK: It’s mostly true.
EL: So if I'm remembering correctly, Mordell’s—let’s see, Mordell’s conjecture, Faltings’s theorem—was really important for proving Fermat’s Last Theorem. Is that correct?
RH: I don't think so, no. But all of these things are related to each other.
EL: Okay.
RH: A lot of the common definitions and theorems that play into all these things, they share a lot, but it's not directly, like, one thing implied the other.
EL: Okay, yeah.
RH: In particular, Fermat’s Last Theorem was reduced to a statement about elliptic curves, which is about genus one curves, while Faltings’s theorem is really a statement about genus two or higher curves.
EL: Okay.
KK: So was this a love at first sight kind of theorem?
RH: I think no. I think part of the reason that I really started appreciating it was because I had a mentor in undergrad who was really excited about it. And I didn't really understand the full implications and the context, but I was like, “Okay, this mentor I have is really about it, so I'm going to be really about it.”
And we actually used Faltings’s theorem as a black box for the REU project I was working on. So we assumed it was true and then used that to show other things. And then later on in grad school, I had a number of things that I was really interested in that Faltings’s theorem was related to. One of the things that I think is really cool that's being researched right now is there’s a bunch of, like, tropical geometry that is being studied. And this is, like, relating algebraic curves to kind of more combinatorial objects. So you can actually translate these curves that have a more—I don't want to say analytic, but a smooth structure, and turn them into a question about, like, counting more straight-edged structures instead.
One of the things about Faltings’s proof of Faltings’s theorem is that it doesn't actually give you a bound. So it tells you that there are only finitely many points, but it doesn't give you a constructive way of saying what the number of points is actually bounded by. And using tropical geometry, people have been able to make statements about bounds in certain situations, which is really cool.
KK: Okay, I always like these tropical pictures, you know, because suddenly everything just looks almost like Voronoi diagrams in the plane, these piecewise linear things. So I guess the idea of genus probably still makes sense there in some way, once you define it properly. Right?
RH: Yeah. And there's a correspondence between, there’s a notion of a tropical curve, which still looks like one of those Voronoi diagrams. There’s an actual correspondence, this curve in classical algebraic geometry gives you this particular diagram.
EL: Nice. And so you say it was very easy to choose this theorem. So what's your, like, elevator sales pitch for this theorem? Keeping in mind that no one is going to be in an elevator with anyone else anytime soon. We're staying far apart, but you know.
RH: Yeah. So, I think it’s kind of amazing that geometry can tell you something about the arithmetic of a curve. I think this is what drew me to arithmetic algebraic geometry, that there is this kind of relationship. When you think, okay, arithmetic, geometry, those are totally different fields, people study them in totally different ways, but in fact, it turns out that the geometry of a curve can tell you information about the arithmetic. And that's just bizarre, and also very powerful in that you can make a statement about how many rational solutions there are to an equation using correspondence in geometry.
The REU project that I worked on actually is a statement that I think is really easy to understand. If you have a rational polynomial, that gives you a function from the rationals to the rationals, right?
And so you can ask yourself: how many-to-one is that function? How many points get sent to the same point? And if you look at only rational points, our REU project showed that it can't be more than four-to-one off a finite number of points.
So if you are willing to ignore some finite number of points, then no rational polynomial is ever more than four-to-one.
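[Editor's note: A notational restatement of the result as described in the episode; these are our symbols, not a quotation from the paper. For a polynomial f with rational coefficients, for all but finitely many values y ∈ Q,
\[
\#\{ x \in \mathbb{Q} : f(x) = y \} \le 4.
\]
In other words, once finitely many exceptional values are thrown away, the map f: Q → Q is at most four-to-one.]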
KK: Interesting.
RH: And that feels like a very powerful statement. And it's because we had this hammer of Faltings’s theorem to just smash it in the middle.
KK: That’s really fascinating. So no matter how high the degree, it's no more than four-to-one? I wouldn’t have guessed that.
RH: Off a finite number of points.
KK: Yeah, sure. Generically. Yeah. Right. Interesting.
RH: I think the real powerful thing there is that Faltings’s theorem comes in.
KK: Yes.
RH: Oh, actually, higher degree means higher complexity means higher genus.
KK: Okay, cool. So another thing we like to do is ask our guests to pair their theorem with something. So what pairs well with Falting’s theorem?
RH: Yeah, so this is a maybe a little bit of a stretch, but I've been living in New York City for four years, and I love bagels. They’re definitely one of the best parts of living in New York City. I'm always two blocks away from a really good bagel. Traditionally, bagels are genus one, so it's actually not quite appropriate. You have to, I don't know, do the fancy cut to increase the genus—there’s a way to cut a bagel to get higher genus. But I still think since we're thinking about genuses, we're thinking about complexity of things.
EL: Yeah. Well, like, you cut the bagel in half, you know, to get, like, the cream cheese surface, and then just stick them together and you've got a genus two. Put a little cream cheese on the side. You know?
RH: Yeah. I mean, if we're cutting holes we can cut as we want.
EL: That’s true. So, are you more—what do you put on the bagel? What kind of bagel, also, do you prefer?
RH: So I mostly like everything bagels.
EL: Of course. Yeah. Great bagel.
RH: There is a weird thing that goes on where some bagel shops put salt on their everything bagel and some don't. And I feel like the salt is important.
KK: Yeah. Agree.
EL: As long as it's not too much. Like just the right amount of salt is—
RH: Yeah. It’s definitely important.
KK: Well a salt bagel is a pretzel.
EL: Yes.
RH: And I don't actually eat cream cheese. So I do eat fish sometimes, but I generally don't eat dairy. And so I usually get, like, tofu scallion spread. And the tofu spread that gets sold in the bagel shops here is actually really good.
KK: Well yeah, I'm not surprised. I can't get a decent bagel in Gainesville. I mean, there's a couple of bagel shops, but they're no good.
RH: Yeah. This is what you get for leaving New York City.
KK: Right, right.
EL: Yeah, it's funny, actually one of our quarantine projects we're thinking about is making bagels. I've made bagels one other time. But, yeah.
KK: It's kind of a nuisance, you know. That boiling step is really—I mean, it's crucial, but it just takes so much time and space.
EL: Yeah, I mean, they were not nearly as good as a real bagel shop bagel, but fun to play with.
KK: Yeah. So what's everyone doing to keep themselves occupied? So far I've got a batch of sauerkraut fermenting. I just started a batch of limoncello that'll be ready in a month. I made scones. Maybe that’s it. Yeah. How about you guys?
RH: Well, I'm still trying to work 40 hours a week.
KK: Yeah, I'm doing that too.
RH: We're still trying to help our students respond to the crisis and helping support them both academically, but holistically also.
KK: Yeah, it's very stressful.
RH: And at the moment, we're still doing all of our prep work for the summer, which is a huge undertaking. But when I have free time, I've been cooking more. And I'm actually also working on writing a puzzle hunt.
EL: Ooh, cool. Well, if that happens, we'll include a link to that in the show notes—if it's the kind of thing that you can do out of a particular geographical place.
RH: Yeah, so the puzzle hunt I'm helping write is actually for Math Camp.
EL: Okay.
RH: So before I worked for BEAM I worked for Canada-USA Math Camp, and in theory, they're running a camp this summer, and one of the traditional events there is [the puzzle hunt]. I think the puzzle hunt often gets put up after the summer, but I’m not sure.
EL: Oh, cool. The last library book that I got out—it was actually supposed to be due, like, the day after the library shut down here—was 660 Curries, which is an Indian cookbook. We don’t really cook meat at home, but it's got, I don't know, maybe a hundred-page section of legume curries and a bunch of vegetable curries, so we've been kind of working through that. We made one last night that was great. It was a mixture of moong dal and masoor dal. Yeah, we’ve been eating a lot of curry, and my early-this-year plan of, like, “Oh, I want to make more dal, so I've got to go stock up on lentils and rice,” has turned out to be a brilliant plan. It really has made this a lot easier. So yeah.
RH: I love dal, and I don't feel like anybody around me ever likes dal as much as I do.
KK: This is a dal-lover convention right here. It's one of my favorite things to eat. Yeah.
EL: Oh, yeah. Well, I can recommend, if you get a chance to get 660 Curries, I don't remember if it's called mixed red and lentil dal with garlic and curry leaves, or something like that.
KK: Yeah, I'm actually making curry tonight, but chicken curry, so we'll see.
EL: Yeah, so other than that, just panicking most of the time. It’s been a big pastime for me.
RH: I’ve had to, like, ban myself from reading the news in the evening.
KK: Good call.
EL: That is very smart.
RH: I haven’t done a good job keeping to it.
EL: Yeah, I have not done a good job with my self-control with that. So, I’m really trying to do that. I'm hoping to do some sewing projects too, maybe making some masks that I can leave out for people in the neighborhood to take. Obviously not medical grade, but maybe make people feel a little better.
KK: So yeah, Ellen, my wife, started doing that yesterday. She made, you know, probably 15 of them yesterday real quick.
EL: Nice.
KK: I went to the store yesterday and you know—
EL: Hopefully it gives people a little peace of mind and maybe decreases droplet transmission.
KK: Let’s hope.
EL: I’ve refrained from armchair epidemiology, which I encourage everyone to do. So yeah, I hope everyone stays safe and tries to keep a good spirit and help the people in your lives. I hope our listeners can do that too. And I hope they find some enjoyment in thinking about math for a little while with us.
KK: So yeah, thanks for joining us, Ruthi. We really appreciate it.
EL: Yeah, everyone go find BEAM online if you want to learn more about that.
RH: Yeah. Follow us on social media.
EL: Yeah. So what are the handles for that?
RH: Yeah, I should have this memorized. You can find it on our website. They're all linked to on our website, beammath.org. If you're in New York or LA, we have trivia night, which is a puzzle-y, mathy trivia, usually in the fall, that you can buy tickets to. So I definitely recommend that. And otherwise, sign up for our newsletter, which you can also do on our website.
EL: And you're on Twitter also, right?
RH: Yes, I am. You do have to know how to spell my last name, though.
EL: Okay.
RH: Yeah, I'm @ruthihortsch.
EL: All right. And that's H-O-R-T-S-C-H?
RH: Good job!
EL: Yeah, it’s funny, I was actually in a Zoom spelling bee last night. So yeah, I got second place.
KK: Good for you.
EL: Got knocked out on diaphoresis.
KK: Diaphoresis. Wow. Yeah, that's pretty—okay, anyway. All right. Well, thanks for joining us and take care everyone.
RH: Right. Yeah, it was nice to meet you.
EL: Bye.
[outro]
On today's episode of My Favorite Theorem, we had the privilege to talk with Ruthi Hortsch, a program coordinator at Bridge to Enter Advanced Mathematics (BEAM), a math program for low-income and historically marginalized middle- and high-school students. Dr. Hortsch lives in New York City, which is currently being hit hard by covid-19. We love all our listeners and guests, and right now we are especially thinking about those in New York and other virus hot spots. You may be sick, you may be worried about loved ones, you may be suddenly parenting or caregiving in ways you hadn't expected. We wish you the best, and we hope you enjoy thinking about math for a little bit instead of the news cycle. Stay strong and healthy, friends!
As you listen to this episode, you may find these links helpful.
The Bridge to Enter Advanced Mathematics website, Twitter, Facebook, and Instagram pages.
Ruthi Hortsch on Twitter
Faltings’s theorem, Dr. Hortsch's favorite theorem
Our episode with Matilde Lalín, whose favorite theorem was the closely-related Mordell's theorem.
660 Curries
Tropical geometry Wikipedia page
Kevin Knudson: Welcome to My Favorite Theorem, a math podcast. I'm Kevin Knudson, professor of mathematics at the University of Florida. And here is your other host.
Evelyn Lamb: Hi, I'm Evelyn Lamb. I'm a freelance math and science writer, usually based in Salt Lake City, but currently still in Providence. I'm finishing up this semester at ICERM in about a week. So I'm trying to eat the last oysters that remain in the state before I leave and then head back.
KK: Okay, so you actually like oysters.
EL: Oh, I love them. Yeah, they're fantastic.
KK: That is one of those, it’s a very binary food, right? You either love them—and I do not like them at all.
EL: Oh, I get that, I totally get it.
KK: Sure.
EL: They’re like, in some sense objectively gross, but I actually love them.
KK: Well, I'm glad you've gotten your fill in. Probably—I imagine they're a little more difficult to get in Salt Lake City.
EL: Yeah, you can, but it's not like what you can get over here.
KK: Might be slightly iffy. You don't know how long they've been out of the water, right?
EL: Yeah. So there's one place that we eat oysters sometimes there, yeah, that's the only place.
KK: Yeah, right. Okay. Well, today we are pleased to welcome Ben Orlin. Ben, why don't you introduce yourself?
Ben Orlin: Yeah, well, thanks so much for having me, Kevin and Evelyn. Yes, I'm Ben Orlin. I’m a math teacher, and I write books about math. So my first book was called Math with Bad Drawings, and my second one is called Change Is the Only Constant.
EL: Yeah, and you have a great blog of the same name as your first book, Math with Bad Drawings.
BO: Yeah, thank you. And I think our blogs almost share a birthday, not exactly, but we started them within months of each other, right? Roots of Unity and Math with Bad Drawings.
EL: Oh, yeah.
BO: They began in, like, spring of 2013, which was a fertile time for blogs to begin.
EL: Yeah. Well, a few years ago, you had some poll of readers about, like, what other things they read and stuff, and my blog was, like, considered the most similar to yours, by some metric.
BO: Yeah, I did a reader survey and asked people, right, what other sources they read, and mostly I was looking for reading recommendations. So what else do they consider similar? Overwhelmingly it was XKCD. Not so much—just because XKCD, it’s like if you have a little light that you're holding, a little candle you're holding up, and you're like, what does this remind you of? And a lot of people are going to say the sun, because they look up, and that’s where they see visible light.
KK: Sure.
BO: But I think in terms of actually similar writing, I think Roots of Unity is not so different, I think.
EL: Yeah. So I thought that was interesting because I have very few drawings on mine. Although the ones that I do personally create are definitely bad. So I guess there’s that similarity.
BO: That’s the key thing, committing to the low quality.
KK: Yeah, but that's just it. I would argue they're actually not bad. So if I tried to draw like you draw, it would be worse. So I guess my book should just be Math with Worse Drawings.
BO: Right.
KK: You actually get a lot of emotion out of your characters, even though they're simple stick figures, right? There’s some skill there.
BO: Yeah, yeah. So I tried. I tried to draw them with very expressive faces. Yeah, they're definitely still bad drawings, is my feeling. Sometimes people say, like, “Oh, but they've gotten so much better since you started the blog,” which is true, but it's one of these things where they could get a lot better every five-year interval for the next 50 years and still, I think, not look like professional drawings by the end of it.
EL: Right. You're not approaching Rembrandt or anything.
KK: All right, so we asked you on here, because you do have bad drawings, but you also have thoughts about mathematics and you communicate them very well through your drawings. So you must have a favorite theorem. What is it?
BO: Yeah. So this one is drawn from my second book, actually, the second book is about calculus. And I have to confess I already kind of strayed from the assignment because it's not so much a favorite theorem as a favorite construction.
KK: Oh, that’s cool.
EL: You know, we get rule breakers on here. So yeah, it happens.
BO: Yeah, I guess that's the nature of mathematicians, they like to bend the rules and imagine new premises. So pretending that this were titled My Favorite Construction, I would pick Weierstrass’s function. So that was, you know, first introduced in 1872. And the idea is it's this function which is continuous everywhere and differentiable nowhere.
EL: Yeah. Do you want to describe maybe what this looks like for anyone who might not have seen it yet?
BO: Yeah, sure. So when you're picturing a graph, right, you're probably picturing—it varies. I teach secondary school. So students are usually picturing a fairly small set of possibilities, right? Like you're picturing a line, maybe you're thinking of a parabola, maybe something with a few more squiggles, maybe as many squiggles as a sine wave going up and down. But they all have a few things in common. One is that almost anything that students are going to picture is continuous everywhere. So basically, it's made of one unbroken line. You can imagine drawing it with your pencil without picking the pencil up. And then the other feature that they have is that they—this one's a little subtler, but there will be almost no points that are jagged, or sort of crooked. You know, if I picture an absolute value graph, right, it sort of is a straight line going down to the origin from the left, and then there's a sharp corner at the origin, and then it rises away from that sharp corner. And so those kinds of sharp corners, you may have one or two in a graph a student would draw, but that's sort of it. You know, sharp corners are weird. You can't draw all sharp corners. It feels like between any two sharp corners on your graph, there's going to have to be some kind of non-sharp stuff connecting them, some kind of smooth bits going between them.
KK: Right.
BO: And so what's sort of wild about Weierstrass’s function is that you look at it, and it just looks very jagged. It’s got a lot of sharp corners. And you start zooming in, and you see that even between the sharp corners, there are more sharp corners. And you keep zooming in and there's just sharp corners all the way down. It's what we today would call a fractal, although back then that word wasn't around. And it's the entire thing: every single point along this curve is, in some sense, a sharp corner.
EL: Yeah, it kind of looks like an absolute value everywhere.
BO: Yeah, exactly. It has that cusp at every single point you could look at.
KK: Right? So very pathological in nature. And, you know, I'm sure I've seen the construction of this. Is it easy to say what the construction is? Or is this going to be too technical for an audio format?
BO: It’s actually not hard to construct. There are whole families of functions that have the same property, but Weierstrass’s is pretty simple. He starts with basically just a cosine curve. So you sort of have cosine of πx. So picture, you know, a cosine wave that has a period of two. And then you do another one that has a much shorter period. So you can sort of pick different numbers, but let's say the next one that you add on oscillates 21 times faster, so it's going up and down much quicker. And it's shorter, too; we've shrunk the amplitude also, so it's only about a third, let's say, as tall. And so you add that onto your first function. So now we've got—we started with just a nice, gentle wave, and now we've got a wave that has lots of little waves kind of coming off of it. And then you keep repeating that process. So the next one, the second in the iteration, has 21 cycles over those two units. The one after that has 21^2 cycles, and it's 1/9 the height of the original.
KK: Okay.
BO: And then after that, you're going to do, you know, 21^3 cycles in the same span, then 21^4 cycles. And so it goes—I don't know if you can hear my daughter crying in the background; I think she finds it sort of upsetting to imagine a function that has this kind of weird property.
EL: Fair.
BO: Especially because it's such a simple construction, right? It's just, like, little building blocks for her that we're putting together. And one of the things I like about the construction is that at no step do you have any non-differentiable points, actually. It's a wave with a little wave on top of it and lots of little waves on top of that, and then tons and tons of little waves on top of that, but these are all smooth, nice, curving waves. And then it's only in the limit, sort of at the end of that infinite bridge, that suddenly it goes from all these little waves to being differentiable nowhere.
KK: I mean, I could see why that would be true, right?
BO: Yeah, right. Right. It feels like it's getting worse. And you can do—Weierstrass’s function is really a whole family of functions. He came up with some conditions that you need; that's the basic idea. You need to pick an odd number for the number of cycles and then a geometric series for the amplitude.
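[Editor's note: If you want to watch the wiggles pile up yourself, here is a minimal sketch in Python, assuming the choices Ben describes: amplitude ratio a = 1/3 and frequency ratio b = 21. (Weierstrass's original argument asks for 0 < a < 1, b an odd integer, and ab > 1 + 3π/2, which these satisfy.) The helper name is just for illustration; each partial sum is perfectly smooth, and the nowhere-differentiability only appears in the limit.]

```python
import numpy as np

# Partial sums of a Weierstrass-type series: sum over k of a^k * cos(b^k * pi * x).
# The parameters a = 1/3 and b = 21 match the construction described above and are
# illustrative; Weierstrass's original conditions are 0 < a < 1, b odd, a*b > 1 + 3*pi/2.
def weierstrass_partial_sum(x, n_terms, a=1/3, b=21):
    x = np.asarray(x, dtype=float)
    total = np.zeros_like(x)
    for k in range(n_terms):
        total += (a ** k) * np.cos((b ** k) * np.pi * x)
    return total

# Each partial sum is smooth; plot these (or just compare values) to see the
# sampled range of the function as more and more terms are added.
xs = np.linspace(0, 2, 2001)
for n in range(1, 5):
    ys = weierstrass_partial_sum(xs, n)
    print(f"{n} term(s): sampled values lie in [{ys.min():.3f}, {ys.max():.3f}]")
```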
KK: So what's so appealing about this to you? It's just you can't draw it well, like you have to draw it badly?
BO: Yeah, that's one thing, right. Exactly. I try to push people into my corner, force them to have to draw badly. I do like that this is something—right, graphs of functions are so concrete. And yet this one you really can't draw. I've got it in my book, I have a picture of the first few iterations. And already, you can't tell the difference between the third step and the fourth step. So I had to, you know, do a little box and an inset picture and say, actually, in this fourth step, what looks like one little wave is really made up of 21 smaller waves. So I do sort of like that, how quickly we get into something kind of unimaginable and strange. And also, you know, I'm not a historian of mathematics, and so I always wind up feeling like I'm peddling sort of fairy tales about mathematical history more than the complicated truth that is history. But there's the role that this function played in going from a world where it felt like functions were kind of nice and were something we had a handle on, into opening up this world where, like, oh no, there are all these pathological things going on out there, and there are just these monsters that lurk in the world of possibility.
KK: Yeah.
EL: Right. And was this it—Do you know, was this maybe one of the first, or the first step towards realizing that in some measure sense, like, all functions are completely pathological? Do you know kind of where it fell there, or, like, what the purpose was of creating it in the first place?
BO: Yeah, I think that's exactly right. I don't know the ins and outs of that story. I do know that, right, if you look in spaces of functions, that they sort of all have this property, right, among continuous functions, I think it's only a set of measure zero that doesn't have this property. So the sort of basic narrative as I understand it, leading from kind of the start of the 19th century to the end of the 19th century, is basically thinking that we can mostly assume things are good, to realizing that sometimes things are bad (like this function), culminating in the realization that actually basically everything is bad. And the good stuff is just these rare diamonds.
EL: Yeah, I guess maybe this slight, I don't know, silver lining, is that often we can approximate with good things instead. I don't know if that's like the next step on the evolution or something.
BO: Right. Yeah, I guess that's right. Certainly, that's a nice way to salvage a silver lining, salvage a happy message. Because it's true, right? Even though, as a simpler example, the rationals are only a set of measure zero in the reals, you know, they're everywhere, they're dense. So at least, you know, if you have some weird number, you can at least approximate it with a rational.
EL: Yeah, I was just thinking when you were saying this, how it has a really nice analogy to the rationals, and even algebraic numbers and stuff. Like, “Okay, start naming numbers,” you'll probably name whole numbers, which are, you know, this sparse set of measure zero. It’s like, “Oh, be more creative,” like, “Okay, well, I'll name some fractions and some square roots and stuff.” But you're still just naming sets of measure zero; you’re never naming some weird transcendental number that you can't figure out a way to compute.
BO: Yeah, it is funny, right? Because in some sense, right? We've imagined these things called numbers and these things called functions. And then you ask us to pick examples. And we pick the most unlikely, nicest hand-picked, cherry-picked examples. And so the actual stuff—we’ve imagined this category called functions, and most of what's in that category that we developed, we came up with that definition, most of what's in there is stuff that's much too weird for us to begin to picture.
EL: Yeah.
BO: Which says something about, I guess, our reach exceeding our grasp or something. I don't really know, but our definitions can really outrun our intuition.
EL: Yeah. So where did you first encounter this function?
BO: That’s a good question. I feel like probably as a kind of folklore bit in maybe 12th grade math. I feel like when I was probably first learning calculus, it was sort of whispered about. You know, my teacher sort of mentioned it offhand. And that was very enticing, and in some sense, that's actually where my whole second book comes from, is all these little bits of folklore, not exactly the thing you teach in class, but the little, I don't know, the thing that gets mentioned offhand. And you go, “Wait, what, what was that?” “Oh, well, don't worry. You'll learn about that in your real analysis class in four years.” I don't want to learn about that in four years. Tell me about that now. I want to know about that weird function. And then I think the first proper reading I did was probably in William Dunham’s book The Calculus Gallery, which is a nice book going through different bits of historical mathematics, beginning with the beginnings of calculus through, like, the late 19th century. And he has a nice discussion there of the function and its construction.
KK: So when we were preparing for this, you also mentioned there are connections to Brownian motion here. Do you want to mention those for our audience?
BO: Yeah, I love that this turns out—so I have some quotes here from right when this function was sort of debuted, right when it was introduced to the world. You have Émile Picard, whose line was, “If Newton and Leibniz had thought that continuous functions do not necessarily have a derivative, the differential calculus would never have been invented.” Which I like. If Newton and Leibniz knew what you were going to do to their legacy, they would never have done this! They would have rejected the whole premise. And then Charles Hermite? [Pronounced “her might”; wonders if the pronunciation is correct]
KK: Hermite. [Pronounced “her meet”]
BO: That sounds better. Sounds good. Sure. Right. His line was, and I don't know what the context was, but, “I turn away with fright and horror from this lamentable evil of functions that do not have derivatives.” Which is really layering it on. I like the way people spoke in the 19th century. There was a lot more flavor to their language.
EL: Yeah.
BO: And Poincaré also, he was saying that 100 years prior to Weierstrass developing it, such a function would have been regarded as an outrage to common sense. Anyway, so I mention all those. You mentioned Brownian motion, right? The instinct when you see this function is that this is utterly pathological. This is math just completely losing touch with physical reality and giving us these weird intellectual puzzles and strange constructions that can't possibly mean anything to real human beings. And then it turns out that that's not true at all, that Brownian motion—so you look at pollen dancing around on the surface of some water, and it's jumping around in these really crazy aggressive ways. And it turns out our best models of that process, you know, of any kind of Brownian motion—you know, coal dust in the air or pollen on water—our best model, to a pretty good approximation, has the same property. The path is so jagged and surprising and full of jumps from moment to moment that it's nowhere differentiable, even though the particle obviously sort of has to be continuous. It can't be discontinuous, I mean, that would be jumping, like literally transporting from one place to another, so that's not really the right model. But it is non-differentiable everywhere, which means, weirdly, that it doesn't have a speed, right? Like, a derivative is a velocity.
EL: So that means maybe an average speed but not a speed at any time.
BO: Yeah, well, actually, even—I think it depends how you measure. I’d have to look back at this, because what it means, sort of between any two moments according to the model, between any two points in time, is that it's traversing an infinite distance. So I guess it could have an average velocity, but the average speed I think winds up being infinite. Over a given time interval, you can just take how far it travels in that time interval and divide by time, but I think the speed, if you take the absolute value of the magnitude? I think you sort of wind up with infinite speed, maybe? But really, it's just that you can’t—speed is no longer a meaningful notion. It's moving in such an erratic way that you can't even talk about speed.
KK: Well, because that tends to imply a direction. I mean, you know, it’s really velocity. That always struck me as that's the real problem, is that you can't figure out what direction it's going, because it's effectively moving randomly, right?
BO: Yeah, I think that's fair. Yeah. The only way I can build any intuition about it is to picture a single—imagine a baseball having a single non-differentiable moment. So like, you toss it up in the air. And usually what would happen is that it goes up in the air, it kind of slows down and slows down and slows down. There's that one moment when it's kind of not moving at all. And then it begins to fall. And so the non-differentiable version would be, like, you throw it up in the air, it's traveling up at 10 meters per second, and then a trillionth of a second later, it's traveling down at 10 meters per second. And what's happening at that moment? Well, it's just unimaginable. And now for Brownian motion, you've got to picture that that moment is every moment.
KK: Right. Yeah. Weird, weird world.
BO: Yeah.
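[Editor's note: A quick numerical illustration of the "no speed" point, as a sketch of the standard mathematical model rather than anything computed on the episode: a Brownian increment over a time step dt has typical size on the order of the square root of dt, so the naive speed estimate |increment| / dt grows roughly like 1/√dt as the step shrinks.]

```python
import numpy as np

rng = np.random.default_rng(0)

# A standard Brownian increment over a step of size dt is normal with mean 0 and
# variance dt, so its typical size is sqrt(dt).  The naive "speed" estimate
# |increment| / dt therefore blows up like 1 / sqrt(dt) as the time step shrinks.
for dt in (1e-1, 1e-2, 1e-3, 1e-4):
    increments = rng.normal(0.0, np.sqrt(dt), size=100_000)
    mean_speed = np.mean(np.abs(increments)) / dt
    print(f"dt = {dt:g}: average |dB|/dt is about {mean_speed:,.0f}")
```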
KK: So another thing we like to do on this podcast is ask our guests to pair their, well in your case construction, with something. What does the Weierstrass function pair with?
BO: Yeah. So I think, I have two things in mind, both of them constructions of new things that kind of opened up new possibilities that people could not have imagined before. So the first one, maybe I should have picked a specific dish, but I'm picturing basically just molecular gastronomy, this movement in cooking where you take—one example I just saw recently in a book was, I think it was WD-50, a sort of famous molecular gastronomy restaurant in New York, where they had taken, well, the dish comes to you and it looks like a small poppyseed bagel with lox. And then as it gets closer, you realize it's not a poppyseed bagel with lox, it's ice cream that looks almost identical to a poppyseed bagel with lox. So that's sort of weird enough already. And then you take a taste and you realize that actually, it tastes exactly like a poppyseed bagel with lox, because they've somehow worked all the flavors into the ice cream.
KK: Hmm.
BO: Anyway, so molecular gastronomy basically is about imagining very, very weird possibilities of food that are outside our usual traditions, much in the way that Weierstrass’s function kind of steps outside the traditional structures of math.
EL: Yeah, I like this a lot. It's a good one. Partly because I'm a little bit of a foodie. And like, when I lived in Chicago, we went to this restaurant that had this amazing, like, molecular gastronomy thing. I’m trying to remember: one of the things we had was this frozen sphere of blue cheese. And it was so weird and good. Yeah, you’d get, like, puffs of air that are something, and there was, like, a ham sandwich, but the bread was only the crust, and somehow there's, like, nothing inside. Yeah, it was all these weird things. A liquefied olive that was inside some little gelatin thing, so it was just, like, concentrated olive taste that bursts in your mouth. So good.
BO: That sounds awesome to me, the molecular gastronomy food. I have very little experience of it firsthand.
KK: So you mentioned a second possible pairing. What would that be?
BO: Yeah, so the other one I had in mind is music. It's a Beatles album, Revolver.
KK: Great album.
BO: One of my favorite albums. And much like molecular gastronomy shows that the foods we're eating are actually just a tiny subset of the possible foods that are out there, that's what Revolver did for pop music in ’65, or whenever it came out.
KK: ’66.
BO: Okay, ’66. All right, thank you for that.
EL: I am not well-versed in albums of The Beatles. You know, I am familiar with the music of the Beatles, don’t worry. But I don't know what's on what album. So what is this album?
BO: So Kevin and I can probably go track by track for you.
KK: I’d have to think about it, but it's got Norwegian Wood on it, for example.
BO: Oh, that's Rubber Soul, actually.
KK: Oh, that’s Rubber Soul. You're right. Yeah, I lost my Beatles cred. That's right. My bad. I mean, some would argue—so Revolver, some people argue, was the first album. Before that, albums had just been collections of singles, even in the case of the Beatles, but Revolver holds together as a piece.
BO: Yeah, that’s one thing. Which again, there's probably some analogy to Weierstrass’s function there. Also, it begins with this kind of weird countdown where, I don’t remember if it's John or George, but they’re saying “one, two, three, four” in the intro to Taxman.
KK: Yeah. Into Taxman, which is probably, it's not my favorite Beatles song, but it's certainly among the top four. Right.
BO: Yeah. So that one, already right there, it’s a pop song about taxes, which is already, so lyrically, we're exploring different parts of the possibility space than musicians were before. Track two is Eleanor Rigby, where the only instrumentation is strings, which again is something that you didn't really hear in pop. You know, Yesterday had brought in some strings, that was sort of innovative, and other bands have done similar things, but the idea of a song that’s all strings. And then I’m Only Sleeping as the third track, which has this backwards guitar. They recorded the guitar and just played it backwards. And then Yellow Submarine, which is, like, this weird Raffi song that somehow snuck onto a Beatles album. Yeah, and then For No One has this beautiful French horn solo. Yes, every track is drawn from sort of a distant corner of this space of possible popular music, these kinds of corners that had not been explored previously. Anyway, so my recommendation is, think about the Weierstrass function while eating, you know, a giant sphere of blue cheese and listening to Taxman.
EL: Great. Yeah. I strongly urge all of our listeners to go do that right now.
BO: Yeah, if anyone does it, it'll probably be the first time that that set of activities has been done in conjunction.
EL: Yeah. But hopefully not the last.
BO: Hopefully not the last. That's right. Yeah. And most experiences are like that, in fact.
KK: So we also like to let our guests plug things. You clearly have things to plug.
BO: I do. Yeah. I'm a peddler of wares. Yes, so the prominent thing is my blog, Math with Bad Drawings, and you're welcome to come read that. I try to post funny, silly things there. And then my two books are Math with Bad Drawings, which kind of explores how math pops up in lots of different walks of life, like, you know, thinking about lottery tickets, or thinking about the Death Star is another chapter. And then Change Is the Only Constant is my second book, and it's all about calculus, and it’s sort of calculus through stories. Yeah, that one just came out earlier this year, and I'm quite proud of that one. So you should check it out.
KK: Yeah, so I own both of them. I've only read Math with Bad Drawings. I've been too busy so far to get to Change Is the Only Constant.
EL: And there have been a slew of good pop—or I assume good, because I haven't read most of them yet—pop math books that have come out recently, so yeah, I feel like my stack is growing. It’s a fall of calculus or something.
BO: It’s been a banner year. And exactly, calculus has been really at the forefront. Steve Strogatz’s Infinite Powers was a New York Times bestseller, and then David Bressoud [Calculus Reordered] and others who I'm blanking on right now have had one. There was another graphic, like, cartoon calculus that came out earlier this year. So yeah, apparently calculus is kind of having a moment.
EL: Well, and I just saw one about curves.
KK: Curves for the Mathematically Curious. It's sitting on my desk. Many of these books that you've mentioned are sitting on my desk.
EL: So yeah, great year for reading about calculus, but I think Ben would prefer that you start that reading with Change Is the Only Constant.
BO: It's very frothy, it's very quick and light-hearted and should be—you can use it as your appetizer to get into the the, the cheesier balls of the later books.
KK: But it's highly non-trivial. I mean, you talk about really interesting stuff in these books. It's not some frothy thing. I mean it's lighthearted, but it's not simple.
BO: I appreciate that. Yeah, in the early draft of the book I was doing a pretty faithful march through the AP Calculus curriculum. And then that draft wasn't really working. And I realized that part of what I wasn't doing that I should have been doing was this: since I'm not teaching you, you know, how to execute calculus maneuvers, I'm not teaching how to take derivatives, I can talk about anything as long as I can explain the ideas. So we've got Weierstrass’s function in there. And there's a little bit even on Lebesgue integration, and some stuff on differential equations crops up. So since I'm not actually teaching a calculus course and I don't need to give tests on it, I just got to tell stories.
EL: Well, yeah, I hope people will check that out. And thanks for joining us today.
BO: Yeah, thanks so much for having me.
KK: Yeah. Thanks, Ben.
[outro]
Our guest on this episode, Ben Orlin, is a high school math teacher best-known for his blog and popular math books. He told us about Weierstrass’s construction of a function that is continuous everywhere but differentiable nowhere. Here is a short collection of links that might be interesting.
Ben’s Blog, Math with Bad Drawings
Math with Bad Drawings, the book
Evelyn Lamb: Hello, and welcome to My Favorite Theorem, the math theorem with no test at the end. I think I decided I liked that tagline. [Editor’s note: Nope, she really didn’t notice that slip of the tongue!]
Kevin Knudson: Okay.
EL: So we’re going to go with that. Yeah. I'm one of your hosts, Evelyn Lamb. I'm a freelance math and science writer in Salt Lake City, Utah. And this is the other host.
KK: I’m Kevin Knudson, a professor of mathematics at the University of Florida. How are you doing?
EL: I’m doing well. Yeah, not not anything too exciting going on here. My mother-in-law is coming to visit later today. So the fact that I have to record this podcast means my husband has to do the cleaning up to get ready.
KK: Wouldn’t he do that anyway? Since it’s his mom?
EL: Yeah, probably most of it. But now I've got a really good excuse.
KK: Yeah, sure. Well, Ellen and I had our 27th anniversary yesterday.
EL: Oh, congratulations.
KK: Yeah, we had a nice night out on the town. Got a hotel room just to sit around and watch hockey, as it turns out.
EL: Okay.
KK: But there's a pool at the hotel. And you know, it's hot in Florida, and we don't have a pool. And this is absurd—which Ellen reminds me of every day, that we need a pool—and I just keep telling her that we can either send the kid to college or have a pool. Okay.
EL: Yeah.
KK: I mean, I don't know. Anyway, we're not here talking about that, we're talking about math.
EL: Yes. And we're very excited today to have Carina Curto on the show. Hi, Carina, can you tell us a little bit about yourself?
Carina Curto: Hi, I'm Carina, and I'm a professor of mathematics at Penn State.
EL: Yeah, and I think I first—I don't think we've actually met. But I think the first time I saw you was at the Joint Meetings a few years ago. You gave a really interesting talk about, like, the topology of neural networks, and how your brain has these, like, basically kind of mental maps of spaces that you interact with. It was really cool. So is that the kind of research you do?
CC: Yeah, so that was—I remember that talk, actually, at the Joint Meetings in Seattle. So that was a talk about the uses of topology for understanding neural codes. And a lot of my research has been about that. And basically, everything I do is motivated in some way by questions in neuroscience. And so that was an example of work that's been motivated by neuroscience questions about how your brain encodes geometry and topology of space.
KK: Now, there's been a lot of TDA [topological data analysis] moving in that direction these last few years. People have been finding interesting uses of topology in neuroscience, studying the brain and imaging, stuff like that. Very cool stuff.
CC: Yeah.
EL: And did you come from more of a neuroscience background? Or have you been kind of picking that up as you go, coming from a math background?
CC: So I originally came from a mathematical physics background.
EL: Okay.
CC: I was actually a physics major as an undergrad. But I did a lot of math, so I was effectively a double major. And then I wanted to be a string theorist.
KK: Sure, yeah.
CC: I started grad school in 2000. So this is, like, right after Brian Greene’s The Elegant Universe came out.
EL: Right. Yeah.
CC: You know, I was young and impressionable. And so I kind of went that route because I loved physics, and I loved math. And it was kind of an area of physics that was using a lot of deep math. And so I went to grad school to do mathematical string theory in the math department at Duke. And I worked on Calabi-Yaus and, you know, extra dimensions and this kind of stuff. And it was, the math was mainly algebraic geometry, which is what my PhD thesis was in. So this had nothing to do with neuroscience.
EL: Right.
CC: Nothing. And so basically about halfway through grad school—I don't know how better to put it, but I got a little disillusioned with string theory. People laugh now when I say that because everybody is.
KK: Sure.
CC: But I started kind of looking for other—I always wanted to do applied things, interdisciplinary things. And so neuroscience just seemed really exciting. I kind of discovered it randomly and started learning a lot about it and became fascinated. And so then when I finished my PhD, I actually took a postdoc in a neuroscience lab that had rats and, you know, was recording from the cortex and all this stuff, because I just wanted to learn as much neuroscience as possible. So I spent three years working in a lab. I didn't actually do experiments. I did mostly computational work and data analysis. But it was kind of a total cultural immersion sort of experience, coming from more of a pure math and physics background.
EL: Right. Yeah, I bet that was a really different experience.
CC: It was really different. So I kind of left math in a sense for my first postdoc, and then I came back. So I did a second postdoc at Courant at NYU, and then started getting ideas of how I could tackle some questions in neuroscience using mathematics. And so ever since then, I've basically become a mathematical neuroscientist, I guess I would call myself.
KK: So 2/3 of this podcast is Duke alums. That's good.
CC: Oh yeah? Are you a Duke alum?
KK: I did my degree there too. I finished in ’96.
CC: Oh, yeah.
KK: Okay. Yeah.
CC: Cool.
EL: Nice. Well, so what is your favorite theorem?
CC: So I have many, but the one I chose for today is the Perron-Frobenius theorem.
KK: Nice.
EL: All right.
CC: And so you want to know about it, I guess?
KK: We do. So do our listeners.
CC: So it's actually really old. I mean, there are older theorems, but Perron proved it, I think in 1907, and Frobenius in 1912, so it carries both of their names. So it's over 100 years old. And it's a theorem in linear algebra. So it has to do with eigenvectors and eigenvalues of matrices.
KK: Okay.
CC: And so I'll just tell you quickly what it is. So, if you have a square matrix, so like an n×n square matrix with all positive entries—there are many variations of the theorem; I'm going to tell you the simplest one—so if all the entries of your matrix are positive, then you are guaranteed that your largest eigenvalue is unique and real and positive, so it has a positive real part. So eigenvalues can be complex. They can come in complex conjugate pairs, for example, but when we talk about the largest one, we mean the one that has the largest real part.
EL: Okay.
KK: All right.
CC: And so one part of the theorem is that that eigenvalue is unique and real and positive. And the other part is that you can pick the corresponding eigenvector for it to be all positive as well.
EL: Okay. And we were talking before we started taping that I'm not actually remembering for sure whether we've used the words eigenvector and eigenvalue yet on the podcast, which, I feel like we must have because we've done so many episodes, but yeah, can we maybe just say what those are for anyone who isn't familiar?
CC: Yeah. So when you have a matrix, like a square matrix, you have these special vectors. So the matrix operates on vectors. And so a lot of people have learned how to multiply a matrix by a vector. And so when you have a vector, so say your matrix is A and your vector is x, if A times x gives you a multiple of x back—so you basically keep the same vector, but maybe scale it—then x is called an eigenvector of A. And the scaling factor, which is often denoted λ, is called the eigenvalue associated to that eigenvector.
KK: Right. And you want x to be a nonzero vector in this situation.
CC: Yes, you want x to be nonzero, yes, otherwise it's trivial. And so I like to think about eigenvectors geometrically, because if you think of your matrix operating on vectors in some Euclidean space, for example, then what it does, what the matrix will do, is it will pick up a vector and then move it to some other vector, right? So there's an operation that takes vectors to vectors, called a linear transformation, that is manifested by the matrix multiplication. And so when you have an eigenvector, the matrix keeps the eigenvector on its own line and just scales it, or it can flip the sign. If the eigenvalue is negative, it can flip it to point the other direction, but it basically preserves that line, which is called the associated eigenspace. So it has a nice geometric interpretation.
EL: Yeah. So the Perron-Frobenius theorem, then, says that if your matrix only has positive entries, then there's some eigenvector that's stretched by a positive amount.
CC: So yeah, so it says there's some eigenvector where the entries of the vector itself are all positive, right, so it lies in the positive orthant of your space, and also that the corresponding eigenvalue is actually the largest in terms of absolute value. And the reason this is relevant is because there are many kinds of dynamic processes that you can model by iterating a matrix multiplication. So, you know, one simple example is things like Markov chains. So if you have, say, different populations of something, whether it be, say, animals in an ecosystem or something, then you can have these transition matrices that will update the population. And in that situation, whatever the leading eigenvalue is of the matrix that's updating your population is going to control somehow the long-term behavior of the population. So that top eigenvalue, the one with the largest absolute value, is really controlling the long-term behavior of your dynamic process.
EL: Right, it kind of dominates.
CC: It is dominating, right. And you can even see that just by hand when you sort of multiply, if you take a matrix times a vector, and then do it again, and then do it again. So instead of having A times x, you have A squared times x or A cubed times x. So it's like doing multiple iterations of this dynamic process. And you can see, then, what’s going to happen to the vector if it's the eigenvector. Well, if it's an eigenvector, what's going to happen is when you apply the matrix once, A times x, you're going to get λ times x. Now apply A again. So now you're applying A to the quantity λx, but the λ comes out, by the linearity of the matrix multiplication, and then you have Ax again, so you get another factor of λ, so you get λ^2 times x. And so if you keep doing this, you see that if I do A^k times x, I get λ^k times x. And so if that λ is something, you know, bigger than 1, right, my process is going to blow up on me. And if it's less than 1, it's going to converge to zero as I keep taking powers. And so anyway, the point is that that top eigenvector is really going to dominate the dynamics and the behavior. And so it's really important whether it's positive, and also whether it's bigger or less than 1, and the Perron-Frobenius theorem basically gives you control over what that top eigenvalue looks like and moreover associates it to an all-positive eigenvector, which is then a reflection of maybe the distribution of the population. So it's important that that be positive too, because lots of things we want to model are positive, like populations of things.
KK: Negative populations aren't good. Yeah.
CC: Yes, exactly. And so this is one of the reasons it's so useful, is because a lot of the things we want to model are—that vector that we apply the matrix to is reflecting something like populations, right?
KK: So already this is a very non-obvious statement, right? Because if I hand you an arbitrary matrix, I mean, even like a 2×2 rotation matrix, it doesn't have any eigenvalues, any real eigenvalues. But the entries aren't all positive, so you’re okay.
CC: Right. Exactly.
KK: But yeah, so a priori, it's not obvious that if I just hand you an n×n matrix with all real entries that it even has a real eigenvalue, period.
CC: Yeah. It's not obvious at all, and let alone that it's positive, and let alone that it has an eigenvector that's all positive. That's right. And the positivity of that eigenvector is really important, too.
EL: Yeah. So it seems like if you're doing some population model, just make sure your matrix has all positive entries. It’ll make your life a lot easier.
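[Editor's note: Here is a small numerical sketch of the statement, using a made-up 3×3 positive matrix (an illustration, not a proof): numpy's eigenvalue routine confirms that the eigenvalue of largest absolute value is real and positive, its eigenvector can be rescaled to be all positive, and repeatedly applying the matrix lines any positive starting vector up with that eigenvector.]

```python
import numpy as np

# A made-up 3x3 matrix with all positive entries (purely illustrative).
A = np.array([[0.5, 1.2, 0.3],
              [0.7, 0.4, 1.1],
              [0.2, 0.9, 0.6]])

eigenvalues, eigenvectors = np.linalg.eig(A)
top = np.argmax(np.abs(eigenvalues))                 # index of the dominant eigenvalue
perron_value = eigenvalues[top].real
perron_vector = eigenvectors[:, top].real
perron_vector = perron_vector / perron_vector.sum()  # rescale so all entries are positive

print("eigenvalues:", np.round(eigenvalues, 3))
print("dominant eigenvalue:", round(perron_value, 3))
print("dominant eigenvector, rescaled:", np.round(perron_vector, 3))

# Power iteration: repeatedly applying A lines any positive starting vector up
# with the Perron eigenvector, which is why that eigenvalue controls long-run behavior.
x = np.ones(3) / 3
for _ in range(50):
    x = A @ x
    x = x / x.sum()
print("after 50 applications of A:", np.round(x, 3))
```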
CC: So there's an interesting one. So, do you know what the most famous application of the Perron-Frobenius theorem is?
EL: I don't think I do.
KK: I might, but go ahead.
CC: You might, but I’ll go ahead?
KK: Can I guess?
CC: Sure.
KK: Is it Google?
CC: Yes. Good. Did you Google it ahead of time?
KK: No, this is sort of in the dark recesses of my memory that essentially they computed this eigenvector of the web graph.
CC: Right. Exactly. So back in the day, in the late ‘90s, when Larry Page and Sergey Brin came up with their original strategy for ranking web pages, they used this theorem. This is like, the original PageRank algorithm is based on this theorem, because they have, again, a Markov process where they imagine some web—some animal or some person—crawling across the web. And so you have this graph of websites and edges between them. And you can model the random walk across the web as one of these Markov processes, where there's some matrix that reflects the connections between web pages that you apply over and over again to update the position of the web crawler. And so now if you imagine a distribution of web crawlers, and you want to find out in the long run what pages they end up on, or what fraction of web crawlers end up on which pages, it turns out that the Perron-Frobenius theorem gives you precisely the existence of this all-positive eigenvector, which gives a positive probability, for every website, of ending up there. And so if you look at the eigenvector itself that you get from your web matrix, that will give you a ranking of web pages. So the biggest value will correspond to the most, you know, trafficked website, and smaller values will correspond to less popular websites, as predicted by this random walk model.
EL: Huh.
CC: And so it really is the basis of the original PageRank. I mean, they do fancier things now, and I'm sure they don't reveal it. But the original PageRank algorithm was really based on this. And this is the key theorem. So I think it's kind of a fun thing. When I teach linear algebra, I always tell students about this.
KK: Linear algebra can make you billions of dollars.
CC: Yes.
KK: That’ll catch students’ attention.
CC: Yes, it gets students’ attention.
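[Editor's note: For the curious, here is a toy version of the original PageRank idea on a made-up four-page web; the link graph is invented for illustration, the 0.85 damping factor is just the commonly cited choice, and the real system has many refinements beyond this sketch. The damping step makes every entry of the matrix positive, which is exactly what lets the Perron-Frobenius theorem guarantee an all-positive ranking vector.]

```python
import numpy as np

# A made-up four-page web: links[i, j] = 1 means page j links to page i.
links = np.array([[0, 1, 1, 0],
                  [1, 0, 0, 1],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)

# Column-normalize so column j is the probability distribution of where a random
# surfer on page j goes next (every page here has at least one outgoing link).
M = links / links.sum(axis=0)

# Damping: with probability 0.85 follow a link, otherwise jump to a random page.
# This makes every entry positive, so Perron-Frobenius guarantees a unique
# all-positive stationary vector: the PageRank scores.
n = M.shape[0]
G = 0.85 * M + 0.15 * np.ones((n, n)) / n

# Power iteration converges to that Perron eigenvector.
rank = np.ones(n) / n
for _ in range(100):
    rank = G @ rank
print("PageRank scores:", np.round(rank, 3))
```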
EL: Yes. So where did you first encounter the Perron-Frobenius theorem?
CC: Probably in an undergrad linear algebra class, to be honest. But I also encountered it many more times. So I remember seeing it in more advanced math classes as a linear algebra fact that becomes useful a lot. And now that I'm a math biologist, I see it all the time because it's used in so many biological applications. And so I told you about a population biology application before, but it also comes up a lot in neural network theory that I do. So in my own research, I study these competitive neural networks. And here I have matrices of interactions that are actually all negative. But I can still apply the theorem. I can just flip the sign.
EL: Oh, right.
CC: And apply the theorem, and I still get this, you know, dominant eigenvalue and eigenvector. But in that case, the eigenvalue is actually negative, and I still have this all-positive eigenvector that I can choose. And that's actually important for proving certain results about the behavior of the neural networks that I study. So it's a theorem I actually use in my research.
EL: Yeah. So would you say that your appreciation of it has grown since you first saw it?
CC: Oh for sure. Because now I see it everywhere.
EL: Right.
CC: It was one of those fun facts, and now it’s in, you know, so many math things that I encounter. It's like, oh, they're using the Perron-Frobenius theorem. And it makes me happy.
EL: Yeah, well, when I first read the statement of the theorem, it's not like it bowled me over, like, “Oh, this is clearly going to be so useful everywhere.” So probably, as you see how many places it shows up, your appreciation grows.
CC: Yeah, I mean, that's one of the things that I think is really interesting about the theorem, because, I mean, many things in math are like this. But you know, surely when Perron and Frobenius proved it over 100 years ago, they never imagined what kinds of applications it would have. You know, they didn't imagine Google ranking web pages, or the neural network theory, or anything like this. And so it's one of these things where it's like, it's so basic. Maybe it could look initially like a boring fact of linear algebra, right? If you're just a student in a class and you're like, “Okay, there's going to be some eigenvector, eigenvalue, and it's positive, whatever.” And you can imagine just sort of brushing it off as another boring fact about matrices that you have to memorize for the test, right? And yet, it's surprisingly useful. I mean, it has applications in so many fields of applied math and in pure math, and so it's just one of those things that gives you respect for even seemingly simple, not obviously powerful results. It doesn't bowl you over, right? You can see the statement and you're not like, “Wow, that's so powerful!” But it ends up that it's actually the key thing you need in so many applications. And so, you know, it's earned its place over time. It's aged nicely.
EL: And do you have a favorite proof of this theorem?
CC: I mean, I like the elementary proofs. I mean, there are lots of proofs. So I think there's an interesting proof by Birkhoff. There are some proofs that involve the Brouwer fixed point theorem, which is something maybe somebody has chosen already.
EL: Yes, actually. Two people have chosen it!
CC: Two people have chosen the Brouwer fixed point theorem. Yeah, I would imagine that's a popular choice. So, yeah, there are some proofs that rely on that, which I think is kind of cool. So those are more modern proofs of it. That's the other thing I like about it, is that it has kind of old-school elementary proofs that an undergrad in a linear algebra class could understand. And then it also has these more modern proofs. And so it's kind of an interesting theorem in terms of the variety of proofs that it admits.
KK: So one of the things we like to do on this podcast is we like to invite our guests to pair their theorem with something. So I'm curious, I have to know what pairs well with the Perron-Frobenius theorem?
CC: I was so stressed out about this pairing thing!
KK: This is not unusual. Everybody says this. Yeah.
CC: What is this?
KK: It’s the fun part of the show!
CC: I know, I know. And so I don't know if this is a good pairing, but I came up with this. So I went to play tennis yesterday. And I was playing doubles with some friends of mine. And I told them, I was like, I have to come up with a pairing for my favorite theorem. So we chatted about it for a while. And as I was playing, I decided that I will pair it with my favorite tennis shot.
EL: Okay.
CC: So, my favorite shot in tennis is a backhand down the line.
KK: Yes.
CC: Yeah?
KK: I never could master that!
CC: Yeah. The backhand down the line is one of the basic ground strokes. But it's maybe the hardest one for amateur players to master. I mean, the pros all do it well. But, you know, for amateurs, it's kind of hard. So usually people hit their backhand cross-court. But if you can hit that backhand down the line, especially when someone's at the net, like in doubles, and you pass them, it's just very satisfying; you kind of, like, win the point. And for my tennis game, when my backhand down the line is on, that's when I'm playing really well.
EL: Nice.
CC: And I like the linearity of it.
EL: Right, it does seem like, you know, you're pushing it down.
CC: Like I'm pushing that eigenvector.
KK: It’s very positive, everything's positive about it.
CC: Everything’s positive. The vector with the tennis ball, just exploding down the line. It's, sort of, maybe a stretch, but that's kind of what I decided.
EL: A…stretch? Like with an eigenvalue and eigenvector?
CC: Right, exactly. I needed to find a pairing that was a stretch.
EL: I think this is a really great pairing. And you know, something I love about the pairing thing that we do—other than the fact that I came up with it, so of course, I'm absurdly proud of it—is that I think, for me at least it's built all these bizarre connections with math and other things. It's like, now when I see the mean value theorem, I'm like, “Oh, I could eat a mango.” Or like, all these weird things. So now when I see people playing tennis, I'll be like, “Oh, the Perron-Frobenius theorem.”
CC: Of course.
EL: So are you a pretty serious tennis player?
CC: I mean, not anymore. I played in college for a little bit. So when I was a junior, I was pretty serious.
EL: Nice. Yeah, I’m not really a tennis person. I've never played or really followed it. But I guess there's, like, some tennis going on right now that's important?
CC: The French Open?
EL: That’s the one!
KK: Nadal really stuck it to Federer this morning. I played obsessively in high school, and I was never really any good, and then I kind of gave it up for a long time, and I picked up again in my 30s and did league tennis when I lived in Mississippi. And my team at our level—we were just sort of very intermediate players, you know—we won the state championship two years in a row.
CC: Wow.
KK: And then I gave it up again when I moved to Florida. My shoulder can't take it anymore. I was one of these guys with a big booming serve and a pretty good forehand and then nothing else, right?
CC: Yeah.
KK: So you know, if you work my backhand enough you're going to destroy me.
EL: Nice. Oh, yeah, that's a lot of fun. And I hope our other tennis-appreciator listeners will now have an extra reason to enjoy this theorem too. So yeah, we also like to give our guests a chance, like if they have a website or book or anything they want to mention—you know, if people want to find them online and chat about tennis or linear algebra—is there anything you want to mention?
CC: I mean, I don't have a book or anything that I can plug, but I guess I wanted to just plug linear algebra as a subject.
KK: Sure.
CC: I feel like linear algebra is one of the grand achievements of humanity in some ways. And it should really shine in the public consciousness at the same level as calculus, I think.
EL: Yeah.
KK: Maybe even more.
CC: Yeah, maybe even more. And now, everybody knows about calculus. Every little kid knows about calculus. Everyone is like, “Oh, when are you going to get to calculus?” You know, calculus, calculus. And linear algebra—it also has kind of a weird name, right, so it sounds very elementary somehow, linear and algebra—but it's such a powerful subject. And it's very basic, like calculus, and it's used widely, and so I just want to plug linear algebra.
EL: Right. I sometimes feel like there are basically—so math can boil down to, like, doing integration by parts really well or doing linear algebra really well. Like, I joked with somebody, like, I didn't end up doing a PhD in a field that used a lot of linear algebra, but I sort of got my PhD in applied integration by parts; it's just like, “Oh, yeah. Figure out an estimate based on doing this.” And I think linear algebra, especially now with how important social media and the internet are, is really an important field that, I agree, more people should know about. It's one of the classes that, when I took it in college—at that time, I was trying to get enough credits to finish my math minor—made me go, “Oh, yeah, actually, this is pretty cool. Maybe I should learn a little more of this math stuff.” So, yeah, great class.
CC: And you know, it's everywhere. And you know, there are all these people, almost more people have heard of algebraic topology than linear algebra outside the field, you know, because it's this fancy topology or whatever. But when it comes down to it, it's all linear algebra tricks, with some vision of how to package them together, of course; I’m not trying to diminish the field. But somehow linear algebra doesn't get its—it’s the workhorse behind so much cool math and, yeah, doesn't get its due.
EL: Yes, definitely agree.
KK: Yeah. All right. Well, here's to linear algebra.
EL: Thanks a lot for joining us.
CC: Thank you.
KK: It was fun.
[outro]
Our guest on this episode, Carina Curto, is a mathematician at Penn State University who specializes in applications in biology and neuroscience. She talked about the Perron-Frobenius theorem. Here are some links you may find useful as you listen to this episode.
Curto’s website
A short video of Curto talking about how her background in math and physics is useful in neuroscience and a longer interview in Quanta Magazine
An article version of Curto’s talk a few years ago about topology and the neural code
Curto ended the episode with a plug for linear algebra as a whole. If you’re looking for an engaging video introduction to the subject, check out this playlist from 3blue1brown.
Evelyn Lamb: Hello, and welcome to My Favorite Theorem, a math podcast. I'm one of your hosts, Evelyn Lamb. I'm a freelance math and science writer, usually in Salt Lake City, Utah, currently in Providence, Rhode Island. And this is your other host.
Kevin Knudson: Hi. I’m Kevin Knudson, professor of mathematics, almost always at the University of Florida these days. How's it going?
EL: All right. We had hours of torrential rain last night, which is something that just doesn't happen a whole lot in Utah but happens a little more often in Providence. So I got to go to sleep listening to that, which always feels so cozy, to be inside when it's pouring outside.
KK: Yeah, well, it's actually finally pleasant in Florida. Really very nice today and the sun's out, although it's gotten chilly—people can't see me doing the air quotes—it’s gotten “chilly.” So the bugs are trying to come into the house. So the other night we were sitting there watching something on Netflix and my wife feels this little tickle on her leg and it was one of those big flying, you know, Florida roaches that we have here.
EL: Ooh
KK: And our dog just stood there wagging at her like, “This is fun.” You know?
EL: A new friend!
KK: “Why did you scream?”
EL: Yeah, well, we’re happy today to invite aBa to the show. aBa, would you like to introduce yourself?
aBa Mbirika: Oh, hello. I’m aBa. I'm here in Wisconsin at the University of Wisconsin Eau Claire. And I have been here teaching now for six years. Should I tell them where I'm from?
EL: Yeah.
KK: Sure.
aM: Okay. I am from, I was born and raised in New York City. I prefer never to go back there. And then I moved to San Francisco, lived there for a while. Prefer never to go back there. And then I went up to Sonoma County to do some college and then moved to Iowa, and Iowa is really what I call home. I'm not a city guy anymore. Like Iowa is definitely my home.
EL: Okay.
KK: So Southwestern Wisconsin is also okay?
aM: Yeah, it's very relaxing. I feel like I'm in a very small town. I just ride my bicycle. I still don't know how to drive, like all my friends from New York and San Francisco. But I don't need a car here. There's nowhere to go.
EL: Yeah.
aM: But can I address why you just called me aBa, as I asked you to?
EL: Yeah.
aM: Yeah, because maybe I'll just put this on the record. I mean, I don't use my last name. I think the last time I actually said some version of my last name was grad school, maybe? The year 2008 or something, like 10 years ago, was the last time anyone's ever heard it said. And part of the issue is that it's pronounced differently depending on who's saying it in my family. And actually it's spelled differently depending on who’s in the family. Sometimes they have different letters. Sometimes there's no R. Sometimes it’s—so in any case, if I start to say one pronunciation, I know Americans are going to go to town and say this is the pronunciation. And that's not the case. I can't ask my dad. He's passed now, but he didn’t have a favorite. He said it five different ways my whole life, depending on context. So he doesn't have a preference, and I'm not going to impose one. So I'm just aBa, and I'm okay with that.
EL: Yeah, well, and as far as I know, you're currently the only mathematician named aBa. Or at least spelled the way yours is spelled.
aM: Oh yeah, on the arXiv and on MathSciNet, yeah, I'm the only one there. Recently someone invited me to a wedding and they were like, what's your address? And I said, “aBa and my address is definitely enough.”
EL: Yeah, so what theorem would you like to tell us about?
aM: Oh, okay, well I was listening actually to a couple of your shows recently, and Holly didn’t have a favorite theorem, Holly Krieger. I'm exactly the same way. I don't even have a theorem of, like, the week. She was lucky to have that. I have a theorem of the moment. I would like to talk about something I discovered when I was in college, that’s kind of the reason. But can I briefly say some of my, like, top hits just because?
EL: Oh yeah.
KK: We love top 10 lists. Yeah, please.
aM: Okay. So I'm in combinatorics, loosely defined, but I have no reason—I don't know why people throw me in that bubble. But that's the bubble that I've been thrown in. But my thesis—actually, I don’t ever remember the title, so I have to read it off a piece of paper—Analysis of symmetric function ideals towards a combinatorial description of the cohomology ring of Hessenberg varieties.
KK: Okay.
aM: Okay, all those words are necessary there. But my advisor said, “You're in combinatorics.” Essentially, my problem was, we were studying an object in algebraic geometry, this thing called a Hessenberg variety. To study this thing we used topology. We looked at the cohomology ring of it, but that was very difficult. So we looked at this graded ring through the lens of commutative algebra. And I studied the algebra of this ring by looking at symmetric functions, ideals of symmetric functions, and hence that's where my advisor said, “You're in combinatorics.” So combinatorics was the main tool used to study a problem in algebraic geometry that we looked at through topology. Whatever, so I don't know what I am. But in any case, for my top hits, not a top 10, but: diagram chasing. Love it. Love it.
EL: Wow, I really don't share that love, but I’m glad somebody does love it.
aM: Oh, it's just so fun for students.
KK: So the snake lemma, right?
aM: The snake lemma, yes. It's a little bit maybe above the level of our algebra two class that I teach here for undergrads, but of course I snuck it in anyways. And the short five lemma. Those are like, would be my favorites if the moment was, like, months ago. In number theory I have too many faves, but I’m going to limit it to Euler-Fermat’s theorem that if a and n are coprime, then a to the power of the Euler totient function of n is congruent to 1 mod n. But that leads to Gauss’s epically cool awesome theorem on the existence of primitive roots. Now, this is my current craze.
EL: Okay.
aM: And this is just looking at the group of units in Z mod nZ, or more simply the multiplicative group of units of integers modulo n. When is this group cyclic? And Gauss said it's only cyclic when n is 2, or 4, or an odd prime to a k-th power, or twice an odd prime to some k-th power. And basically, those are very few. I mean, those are very little numbers in the broad spectrum of the infinity of the natural numbers. So this is very cool. In fact, I'm doing a non-class right now with a professor who retired maybe 10 years ago from our university, and I emailed him and said, “Want to have fun on, like, my research day off?” And we’re studying primitive roots because I don't know anything about it. Like, my favorite things are things I know nothing about and I want to learn a lot about.
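[Editor's note: For readers who want to play with this, here is a small computational check (ours, not from the episode) of Gauss's characterization: the group of units modulo n is cyclic, that is, a primitive root exists, exactly when n is 2, 4, an odd prime power, or twice an odd prime power. The function names below are our own; this is just an illustration in Python.]

from math import gcd

def unit_orders(n):
    """Multiplicative orders of the units modulo n."""
    units = [a for a in range(1, n) if gcd(a, n) == 1]
    orders = []
    for a in units:
        x, k = a % n, 1
        while x != 1:
            x, k = (x * a) % n, k + 1
        orders.append(k)
    return units, orders

def is_cyclic(n):
    """(Z/nZ)* is cyclic iff some unit has order phi(n), the number of units."""
    units, orders = unit_orders(n)
    return max(orders) == len(units)

def in_gauss_list(n):
    """True if n is 2, 4, p^k, or 2*p^k for an odd prime p."""
    if n in (2, 4):
        return True
    m = n // 2 if n % 2 == 0 else n
    if m % 2 == 0:                      # n divisible by 4 and bigger than 4: no primitive root
        return False
    p = next(d for d in range(3, m + 1) if m % d == 0)   # smallest factor of m, necessarily prime
    while m % p == 0:
        m //= p
    return m == 1                       # m was a power of the single odd prime p

assert all(is_cyclic(n) == in_gauss_list(n) for n in range(2, 200))
print("Gauss's characterization checks out for 2 <= n < 200")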
EL: Yeah, I don't think I've heard that theorem before. So yeah, I'll have to look that up later.
aM: Yes. And then the last one is from analysis, and I did hear Adriana Salerno talk about it, and in fact, I think also someone before her on your podcast: Cantor’s theorem on the uncountability of the real numbers.
EL: Yeah, that's a real classic.
aM: I just taught that two days ago in analysis, and like, it's like waiting for their heads to explode. And I think, I don't know, my students’ heads weren't all exploding. But I was like, “This is so exciting! Why are you not feeling the excitement?” So yeah, yeah, it was only my second time teaching analysis. So maybe I have to work on my sell.
EL: Yeah, you'll get them next time.
aM: Yeah. It's so cool! I even mentioned it to my class that’s non-math majors, just looking at sets, basic set theory. And this is my non-math class. These students hate math. They're scared of math. And I say, “You know, the infinity you know, it's kind of small. I mean, you're not going to be tested on this ever. But can I please take five minutes to like, share something wonderful?” So I gave them the baby version of Cantor’s theorem. Yeah, but that's it. I just want to throw those out there before I was forced to give you my favorite theorem.
EL: Yes. So now…
KK: We are going to force you, aBa. What is your favorite theorem?
EL: We had the prelude, so now this is the main event.
aM: Okay, main event time. Okay, you were all young once, and you remember—oh, we’re all young, all the time, sorry—but divisibility by 9. I guess when we're in high school—maybe even before that—we know that the number 108 is divisible by 9 because 1+0+8 is equal to 9. And that's divisible by 9. And 81 is divisible by 9 because 8+1 is 9, and 9 is divisible by 9. But not just that, the number 1818 is divisible by 9 because 1+8+1+8 is 18. And that's divisible by 9. So when we add up the digits of a number, and if that sum is divisible by 9, then the number itself is divisible by 9. And students know this. I mean, everyone kind of knows that this is true. I guess I was a sophomore in college. That was maybe a good 4 to 6 years after I started college because, well, that was hard. It's a different podcast altogether, but I made some choices to meet friends who made it really hard for me to go to school consistently in San Francisco—part of the reason why I'm kind of okay not going back there much anymore. Friends got into trouble too much.
But I took a number theory course and learned a proof for that. And the proof just blew my mind because it was very simple. And I wasn't a full-blown math major yet. I think I was in physics— I had eight majors, different majors through the time—I wasn't a math person yet. And I was on a bus going from—Oh, this is in Sonoma County. I went to Sonoma State University as my fourth or fifth college that I was trying to have a stable environment in. And this one worked. I graduated from there in 2004. It definitely worked. So I was on a bus to visit some of my bad friends in San Francisco—who I love, by the way, I'm just saying of the bad habits—and I was thinking about this theorem of divisibility by 9 and saying, what about divisibility by 7? No one talks about that. Like, we had learned divisibility by 11. Like the alternating sum of the digits, if that's divisible by 11, then the number is divisible by 11. But what about 7? You know, is that doable? Or why is it not talked about?
EL: Yeah.
aM: So it was an hour and a half bus ride. And I figured it out. And it was extremely, like, the same exact proof as the divisibility by 9, but boiled down to one tiny little change. But it's not so much that I love this theorem. I actually haven't even told it to you yet. But that I did the proof, that it changed my life. I really—that’s the only thing I can go back to and say why am I an associate professor at a university in Wisconsin right now. It was the life-changing event. So let me tell you the theorem.
EL: Yeah.
aM: It’s hardly a theorem, and this is why I don't know if it even belongs on this show.
EL: Oh, it totally does!
aM: Okay, so I don't even think I had calc 2 yet when I discovered this little theorem. All right, so here we go. So look at the decimal representation of some natural number. Call it n.
EL: I’ve got my pencil out. I'm writing this down.
aM: Oh, okay. Oh, great. Okay, I'm reading off a piece of paper that I wrote down.
EL: Yeah, you said something about it to us earlier. And I was like, “I'm going to need to have this written down.” It’s funny that I do a podcast because I really like looking at things that are written down. That helps me a lot. But let's podcast this thing.
aM: Okay, so say we have a number with k+1 digits. And so I'm saying k+1 because I want to enumerate the digits as follows: the units digit I'm going to call a_0, the tens digit I’ll call a_1, the hundreds place digit a_2, etc., etc., down to the (k+1)-st digit, which we’ll call a_k. So read right to left, like in Hebrew: a_0, a_1, a_2, … (or \cdots, you LaTeX people), a_{k-1}, then the last, far-left digit a_k.
EL: Yeah.
aM: So that is a decimal representation of a number. I mean, we're just, you know, like the number 1008. There, a_0 is the number 8, a_1 is the number 0, a_2 is the number 0, a_3 is the number 1. So we just read right to left. So we can represent this number, and everybody knows this when you're in junior math, I guess in elementary school, that we can write the number—now I'm using a pen—123 as 3 times 1 plus—how many tens do we have? Well, we have two tens. So 2 times 10. How many hundreds do we have? Well, we have one of those. So 1 times 100. So just talking about, yeah, this is the mathematics of the place value system in base 10. No surprise here. But a nicer way to write it is as a fat sum, where i, the index, goes from 0 to k, of a_i times 10^i.
EL: Yeah.
aM: That’s how we in our little family of math nerds compactly write that. So when we think about when this number is divisible by 7, it suffices to think about the remainder when we divide each of these summands by 7, and then add up all those remainders and take that modulo 7. So the key and crux of this argument is: what is 10 congruent to mod 7? Well, 10 leaves a remainder of 3 when you divide by 7. In the great language of congruences—thank you, Gauss—10 ≡ 3 mod 7. So now we can look at all of these tens we have. We have a_0 × 10^0 + a_1 × 10^1 + a_2 × 10^2, etc., etc. When we divide this by 7, this number really is now a_0 × 3^0—because I can replace my 10^0 with 3^0—plus a_1 × 3^1 instead—because 10^1 is the same as 3^1 in modulo-7 land—plus a_2 × 3^2, etc., etc., to the last one, a_k × 3^k. Okay, here I am on the bus thinking, “This is only cool if I know all my powers of 3.”
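[Editor's note: In symbols, the step aBa is describing is that since 10 \equiv 3 \pmod{7}, every power satisfies 10^i \equiv 3^i \pmod{7}, so

n = \sum_{i=0}^{k} a_i \, 10^i \equiv \sum_{i=0}^{k} a_i \, 3^i \pmod{7}.

In other words, 7 divides n exactly when 7 divides the weighted digit sum a_0 + 3a_1 + 9a_2 + 27a_3 + \cdots + 3^k a_k.]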
EL: Yeah. Which are not really that much easier than figuring it out in the first place.
aM: Okay, but I'm young mathematically and I'm just really super excited. So one little example, I guess, I can't remember what I did on the bus, but 1008 is a number that's divisible by seven. And let's just perform this check on this number. So is 1008 really divisible by 7? What we can do, according to this, is take the far right digit, the units digit, and that's 8 × 3^0, so that's just the number 8, 8 × 1, plus 0 × 3^1. Well, that's just 0, thankfully. Then the next, the hundreds place, that’s 0 × 3^2. So that's just another 0. And then lastly, the thousands place, 1 × 3^3, and that's 27. Add up now my numbers: 8 + 0 + 0 + 27. And that's 35. And it's easy to know the divisibility of that: 7 divides 35, and thus 7 divides 1008. And, yeah, I don't know, I’m traveling back in time, and this is not a marvelous thing. But everybody, unfortunately, who I saw in San Francisco that day, and the next day, learned this. I just had to teach all my friends because I was like, “Well, this is not what I'm doing for college. This is something I figured out on the bus. This math stuff is great.”
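[Editor's note: Here is a tiny sketch of the bus test in Python for anyone who wants to try it; the code is ours, not aBa's.]

def divisible_by_7(n):
    """Divisibility-by-7 check: weight the decimal digits by powers of 3, since 10 = 3 (mod 7)."""
    digits = [int(d) for d in str(n)][::-1]            # a_0, a_1, ..., a_k, read right to left
    weighted_sum = sum(d * 3**i for i, d in enumerate(digits))
    return weighted_sum % 7 == 0                       # 7 divides n iff 7 divides this sum

print(divisible_by_7(1008))   # True: 8*1 + 0*3 + 0*9 + 1*27 = 35, and 7 divides 35
print(divisible_by_7(2021))   # False: 1*1 + 2*3 + 0*9 + 2*27 = 61, not a multiple of 7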
EL: Yeah, just the fact that you got to own that.
aM: Yeah. And that also it wasn't in the book, and actually it wasn't in subsequently any book I've ever looked in ever since. But it's still just cute. I mean, it's available. And what it did, I guess it just touched me in a way, where I guess I didn't know about research, I didn't know about a PhD program. My end goal was to get a job, continue at the photocopy place that was near the college, where I worked. I really told my boss that, and I really believed that I was going to do that. And our school never really sent people to graduate programs. I was one of the first. And I don't know, it just changed me. And there were a lot of troubles in my life before then. And this is something that I owned. And that's my favorite theorem on that bus that day.
KK: It’s kind of an origin story, right?
aM: Yes, because people ask me, how did you get interested in math? And I always say the classic thing. Forget this story, but I'm also not speaking to math people. My usual thing is the rave scene. I mean, that was what I was involved in in San Francisco, and then, I don't know if you know what that is, but electronic dance music parties that happen in beaches and fields and farms and houses.
EL: What, you don’t think we go to a lot of raves?
aM: I don’t know if raves still happen!
EL: You have accurately stereotyped me.
aM: Okay. Now, I have to admit my parents were worried about that. And they said, “Ecstasy! Clubs!” and I was like, “No, Mom. That's a different rave. My people are not indoors. We’re outdoors, and we're not paying for stuff, and there's no bar, and there's no drinking. We're just dancing and it's daytime.” It was a different thing. But that's really why I got involved in this math thing. In some sense, I wanted to know how all of that music worked, and that music was very mathematical.
EL: Oh.
aM: But then I kind of lost interest in studying the math of that because I just got involved in combinatorics and all the beautiful, theoretical math that fills my spirit and soul. But the origin story is a little bit rave, but mostly that bus.
EL: Yeah. A lot of good things happen on buses.
aM: You guys know about the art gallery theorem? Guarding a museum.
EL: Yeah. Yeah.
aM: What’s the minimum number of guards? Okay, I took the seat of someone—my postdoc was at Bowdoin College, and sadly the person who passed away shortly before I got the job was a combinatorialist named Steve Fisk (I hope I’ve got the name right). In any case, he's in Proofs from THE BOOK for coming up with a proof for that art gallery theorem. You know, the famous Proofs from THE BOOK, the idea that all the beautiful proofs are in some book? But yeah, guess where he came up with that? He told the chair of the math department when I started there: on a bus! And he was somewhere in Eastern Europe on a bus, and that's where he came up with it. And it's just like, yeah, things can happen on a bus, you know?
EL: Yeah. Now I want our listeners to, like, write in with the best math they've ever done on a bus or something. A list of bus math.
aM: You also have to include trains, I think, too.
EL: Yeah. Really long buses.
aM: All public transportation.
EL: Yeah. So something that we like to do on this podcast is ask our guests to pair their theorem with something. So what have you chosen to pair with your favorite theorem?
aM: Oh my gosh, I was supposed to think about that. Yes. Okay. Oh, 7.
EL: I feel like you have so many interests in life. You must have something you can think of.
aM: Oh, no, it's not a problem. I currently do a lot of mathematics. I'm in my office, sadly, a lot of hours of the day, but sometimes I leave my office and go to the pub down the road. And I call it a pub because it's really empty and brightly lit and not populated by students. It's kind of like a grown-up bar. But I do a lot of recreational math there, especially on primitive roots recently. So I think I would pair my 7 theorem with seven sips of Michelob Golden Draft Light. It's just a boring domestic beer. And then I would go across the street to the pizza place that's across from my tavern, and I would eat seven bites of a pizza with pepperoni, sausage, green pepper, and onion.
EL: Nice.
aM: I have a small appetite. So, seven: people would say yes, he can probably do seven bites before he’s full and needs to take a break.
EL: Or you could you could share it with seven friends.
aM: Yes. Oh, I'm often taking students down there and buying pizza for small sections of research students or groups of seven. Yes.
EL: Nice. So I know you wanted to share some other things with us on this podcast. So do you want to talk about those? Or that? I don't know exactly what form you would like to do this in.
aM: Oh, I wrote a poem. Yeah, I just want to share a poem that I wrote that maybe your listeners might find cute.
EL: Yeah. And I'd like to say I think the first time—I don't think we actually met in person that time, but the first time I saw you—was at the poetry reading at a Joint Math Meeting many years ago.
aM: Oh my gosh! I did this poem, probably.
EL: You might have. I’ll say I remember you. Many people might have seen you because you do stand out in a crowd. You know, you dress in a lot of bright colors and you have very distinctive glasses and hair and everything. So you were very memorable at the time. Yes, right now it's pink, red, and yeah, maybe just different shades of pink.
aM: Yes.
EL: But yeah, I remember seeing you do a poem at this this joint math poetry thing and then kept seeing you at various things and then we met, you know, a few years ago when I was at Eau Claire, I guess, we actually met in person then. But yeah, go ahead, please share your poem with us.
aM: Okay, this is part of the origin story again. This was just shortly after this seven thing from the bus. I was introduced to a proofs class, and they were teaching bijective functions. And I really didn't get the book. It was written by one of my teachers, and I was like, you know, I wrote a poem about it. And I think I understand my poem a little bit more than what you wrote in your book. And like, they actually sing this song now. So they recite it, or so say the teachers at Sonoma State, each year to students who are taking this same course. But here it is. I think it's sometimes called a rap because I kind of dance around the room when I sing it. So it's called the Bijection Function Poem. And here you go. Are you ready?
EL: Yes.
KK: Let’s hear it.
aM: All right.
And it clearly follows that the function is bijective
Let’s take a closer look and make this more objective
It bears a certain quality – that which we call injective
A lovin’ love affair, Indeed, a one-to-one perspective.
Injection is the stuff that bonds one range to one domain
For Mr. X in the domain, only Miss Y can take his name
But if some other domain fool should try to get Miss Y’s affection,
The Horizontal Line Police are here to check for 1 to 1 Injection.
(Okay, that’s a little racy.)
Observe though, that injection does not alone grant one bijection
A function of this kind must bear Injection AND Surjection
Surjection!? What is that? Another math word gone surreal
It’s just a simple concept we call “Onto”. Here’s the deal:
If for EVERY lady ‘y’ who walks the codomain of f
There exists at least one ‘x’ in the Domain who fancies her as his sweet best.
So hear the song that Onto sings – a simple mathful melody:
“There ain’t a Y in Codomain not imaged by some X, you see!”
So there you have it: 2 conditions that define a quality.
If it’s injective and surjective, then it’s bijective, by golly!
(So this is the last verse. And there's some homework problems in my last verse, actually.)
Now if you’re paying close attention to my math-poetic verse
I reckon that you’ve noticed implications of Inverse
Inverse functions blow the same tune – They biject oh so happily
By sheer existence, inverse functions mimic Onto qualities (homework problem 1)
And per uniqueness of solution, another inverse golden rule (homework problem 2)
By gosh, that’s one-to-one & Onto straight up out the Biject School!
Word!
aM: Yeah, I never tire of that one. I love teaching a proofs class.
EL: Yeah. And you said you use it in your class every time you teach it?
aM: Every time I have to say bijection. I mean, the song works, though. My only drawback in recent times is my wording long ago for “Mr. X in the domain” and “Miss Y can take his name” and the whole binary that this thing is doing. So I do have versions: I have a homosexual version, I have this version—this is the hetero version—then I have the yet-to-be-written binary-free version, which I don't know how to make yet, because I was thinking of “Person X in the domain, only Person Y can take his name,” but you know, “person” doesn't work. It's too long syllabically, so I'm working on that one.
EL: Yeah.
aM: I’m working on that one.
EL: Well, yeah, modernize it for for the times we live in now.
aM: Yes. I kind of dread reading and reciting this purely hetero version, you know? And also there's not necessarily only one Miss Y that can take Mr. X’s name. I mean, you know, there are whole different relation groups these days.
EL: Yeah.
aM: But I'm talking about the injection and surjection.
EL: Yeah, the polyamorous functions are a whole different thing.
KK: Those are just relations, they’re not functions. It’s a whole thing.
aM: Oh, yes, relations aren't necessarily functions, but certain ones can be called that, right?
EL: Yeah. Well, thank you so much for joining us. Is there anything else you would like to share? I mean, we often give our guests ways to find—give our listeners ways to find our guests online. So if there's anything, you know, a website, or anything you’d like to share.
aM: Can you just link my web page or should I tell you it? [Webpage link here] Actually googling “aBa UWEC math.” That's all it takes. UWEC aBa math. Whenever students can’t find our course notes, I just say like, “I don't know, Google it. There's no way you cannot find our course notes if you remember the name of your school, what you're studying and my name.” Yeah.
EL: We’ll put a link to that also in the show notes for people.
aM: Yeah, one B, aBa, for the listeners.
EL: Yes, that's right. We didn't actually—I said it was the only one spelled that way but we didn't spell it. It's aBa, and you capitalize the middle, the middle and not the first letter, right?
aM: No, yes, that's fine. It looks more symmetric that way.
EL: Yeah. You could even reverse one of them.
aM: I usually write the B backwards. Like the band, but I can't usually do that, though. I don't want to be overkill to the people that I work around. But yes, at the bottom of my webpage, I have the links to videos of me singing various songs to students, complex analysis raps, PhD level down to undergraduate level, just different raps that I wrote for fun.
And I wanted to plug one thing at JMM. I mean, not that it's hard to find it in the program, but I'm an MAA invited speaker this time, and I'm actually scared pooless a little bit to be speaking in one of those large rooms. I don't know how I got invited. But I said yes.
KK: Of course you said yes!
aM: Well, I'm excited to share two research projects that I've been doing with students. Because I like doing research just for the sheer joy of it. And I think the topic of my talk is “A research project birthed out of curiosity and joy” or something like that, because one of the projects I'm sharing wasn't even a paid research project. I just had a student that got really excited to study something I noticed in Pascal's triangle, and these tridiagonal real symmetric matrices. I mean, it was finals week, and I was like, “You want to have fun?” And we spent the next year and a half having fun, and now she's pursuing graduate school, and it's great. It's great, research for fun. But one thing I'm talking about that I'm really excited about is the Fibonacci sequence. And I know that's kind of overplayed at times, but I find it beautiful. And we're looking at the sequence modulo 10. So we're just looking at the last, the units digits.
EL: Yeah, last digits.
aM: And whenever you take the sequence mod anything, it's going to repeat. And that's an easy proof to do. And actually Lagrange knew that long, long ago. But more recently, in 1960, a paper came out studying these Fibonacci sequences modulo some natural number and proved the periodicity bit—there’s tons of papers in the Fibonacci Quarterly related to this thing. But what I'm looking at in particular is a connection to astrology—which actually might clear the room, but I'm hoping not—but the sequence mod 10 has a period of length 60. So if you lay that in a circle, it repeats, and every 15th Fibonacci number ends in 0. That's something you can see with the sequence itself, but it’s a lot easier to see when you're just looking at it mod 10. And that's something people probably didn't know until now: every 15th Fibonacci number ends in 0.
KK: No, I didn't know that.
aM: And if it ends in 0, it's a 15th Fibonacci number. And so, it’s an if and only if. And every 5th Fibonacci number is a multiple of five. So in astrology, we have the cardinal signs: Aries, Cancer, Libra and Capricorn. And you lay those on the zeros. Those are the zeros. And then the fixed and mutable signs, like Taurus, Gemini, etc., etc. As you move after the birth of the astrological seasons, those ones lay on the fives, and then you can look at aspects between them. Actually, I'm not going to say much astrology, by the way, in this talk. So people who are listening, please still come. It's only math! But I'm going to be looking at sub-sequences, but it got inspired by some videos online that I saw by a certain astrologer. And I—there was no mathematics in the videos and I was like, “Whoa, I can fill these gaps.” And it's just beautiful. Certain sub-sequences in the Fibonacci sequence mod 10 give the Lucas sequences mod 10. The Lucas sequence, and I don't know if your listeners or you guys know what the Lucas sequence is, but it's the Fibonacci sequence, but the starting values are 2 and then 1.
KK: Right.
aM: Instead of zero and one.
EL: Yeah.
aM: And Édouard Lucas is the person, actually, who named the Fibonacci sequence the Fibonacci sequence! So this is a big player. And I am really excited to introduce people to these beautiful sub-sequences that exist in this Fibonacci sequence mod 10. It's like, just so sublime, so wonderful.
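[Editor's note: A quick sketch (ours, not from the talk) that checks the facts aBa mentions about the Fibonacci sequence modulo 10.]

def fib_mod(m, length):
    """First `length` Fibonacci numbers reduced mod m, starting 0, 1, 1, 2, ..."""
    seq, a, b = [], 0, 1
    for _ in range(length):
        seq.append(a % m)
        a, b = b, a + b
    return seq

fib10 = fib_mod(10, 240)

# The last digits repeat with period 60 (the Pisano period of 10).
print(fib10[:60] == fib10[60:120] == fib10[120:180])      # True

# A Fibonacci number ends in 0 exactly when its index is a multiple of 15.
print([i for i in range(120) if fib10[i] == 0])           # [0, 15, 30, 45, 60, 75, 90, 105]

# Every 5th Fibonacci number is a multiple of 5.
print(all(fib10[i] % 5 == 0 for i in range(0, 240, 5)))   # True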
EL: I guess I never thought about last digits of Fibonacci numbers before, but yeah, I hope to see that, and we'll put some information about that in the show notes too. Yeah, have a good rest of your day.
aM: All right, you too, both of you. Thank you so much for this invitation. I’m happy to be invited.
EL: Yeah, we really enjoyed it.
KK: Thanks, aBa.
aM: All right. Bye-bye.
On this episode of My Favorite Theorem, we talked with aBa Mbirika, a mathematician at the University of Wisconsin Eau Claire. He told us about several favorite theorems of the moment before zeroing in on one of his first mathematical discoveries: a way to determine whether a number is divisible by 7.
Here are some links you may find interesting after listening to the episode.
aBa’s website at UWEC
Snake lemma
Short five lemma
Euler-Fermat’s theorem
Gauss’s primitive roots
Adriana Salerno’s episode of the podcast
Steve Fisk’s “book proof” of the art gallery theorem
Information on aBa’s MAA invited address at the upcoming Joint Mathematics Meetings
Kevin Knudson: Welcome to My Favorite Theorem, math podcast and so much more. I'm Kevin Knudson, professor of mathematics at the University of Florida, and I am joined today by your other host.
Evelyn Lamb: Hi, I'm Evelyn Lamb. I'm a freelance math and science writer, usually based in Salt Lake City, but today coming from the Institute for Computational and Experimental Research in Mathematics at Brown University in Providence, Rhode Island, where I am in the studio with our guest, Edmund Harriss.
KK: Yeah, this is great. I’m excited for this new format where there are only two feeds to keep up with instead of three.
EL: Yeah, he even had a headphone splitter available at a moment's notice.
KK: Oh, wow.
EL: So yeah, this is—we’re really professional today.
KK: That’s right.
EL: So yeah, Edmund, will you tell us a little bit about yourself?
Edmund Harriss: I was going to say I'm the consummate unprofessional. But I'm a mathematician at the University of Arkansas. And as Evelyn was saying, I'm currently at ICERM for the semester working on illustrating mathematics, which is an amazing program that's sort of—both a delightful group of people and a lot of very interesting work trying to get these ideas from mathematics out of our heads, and into things that people can put their hands on, people can see, whether they be research mathematicians or other audiences.
EL: Yeah. I figured before we actually got to your theorem, maybe you could say a little bit about what the exact—or some of the mathematical illustration that you yourself do.
EH: So, yeah, well, one of the big pieces of illustration I've done will come up with a theorem,
EL: Great.
EH: But I consider myself a mathematician and artist. And a part of the artistic aspect, the medium—well, both the medium but more than that, the content—is mathematics. And so thinking about mathematical ideas as something that can be communicated within artwork. And one of the main tools I've used for that is CNC machines. So these are basically robots that control a router, and they can move around, and you can tell it the path to move on and carve anything you like. So even controlling the machine is an incredibly geometric operation with lots of exciting mathematics to it. When I first came across—so one of the sorts of machine you can have is called a five-axis machine. That's where you control both the position, but also the direction that you're cutting in. So you could change the angle as it's cutting. And so that really brings in a huge amount of mathematics. And so when I first saw one of these machines, I did the typical mathematician thing, and sort of said, “Well, I understand some aspects of how this works really well. How hard can the stuff I don't understand be?” It took me several years to work out just how hard some of the other problems were. So I've written software that can control these machines and turn—in fact, even turn a hand-drawn path into something the machine can cut. And so to bring it back to the question, which was about illustrating mathematics: One of the nice things about that idea is it takes a sort of hand-drawn path—which is something that's familiar to everyone, especially people in architecture or art, who are often wanting to use these machines, but not sure how—and the mathematics comes from the notion that we take that hand-drawn path, and we make a representation of that on the computer. And so you've got a really interesting function there, going from the hand-drawn path through to the computer representation. You can then potentially manipulate it on the computer before then passing it again back to the machine. And so now the output of the machine is something in the real world. The initial hand-drawn path was in the real world, and we sort of saw this process of mathematics in the middle.
Amongst other things, I think this is a really sort of interesting view on a mathematical model. You have something in the real world, you pull it into an abstract realm, and then you take that back into the world and see what it can tell you. In this case, it's particularly nice because you get a sense of really what's happening. You can control things, both in the abstract and in the world. And I think, you know, to me that really speaks to the power of thinking and abstraction of mathematics. Of course, also controlling these machines allows you to make mathematical models and objects. And so a lot of my work is sort of creating mathematical models through that, but I think the process is, in many ways, a more interesting mathematical idea, illustration of mathematics, than the objects that come out.
KK: Okay, pop quiz. What's the configuration space of this machine? Do you know what it is?
EH: Well, it depends on which machine.
KK: The one you were describing, where you can where you can have the angles changing. That must affect the topology of the configuration space.
EH: So it’s R^3 crossed with a torus.
KK: Okay.
EH: And so even though you're changing the angle of the bit, you really need to think about a torus. It's really also a subset of a torus because you can't reach all angles.
KK: Sure, right.
EH: But it is a torus and not a sphere.
KK: Yeah. Okay.
EH: So if you think about how to get from one position of the machine to another, you really want to—if you think about moving on a sphere, it's going to give you a very odd movement for the machine, whereas moving along a torus gives the natural movement.
KK: Sure, right. All right. So, what's your favorite theorem?
EH: So my favorite theorem is the Gauss-Bonnet.
KK: All the way with Gauss-Bonnet!
EL: Yes. Great theorem. Yeah.
EH: And I think in many ways, because it speaks to what I was saying earlier about the question: as we move to abstraction, that starts to tell us things about the real world. And so the Gauss-Bonnet theorem comes at this sort of period where mathematics is becoming a lot more abstract. And it's thinking about how space works, how we can work with things. You're not just thinking about mathematics as abstracted from the world, but as sort of abstraction in its own right. On the artist side, a bit later you have discussion of concrete art, which is the idea that abstract art starts with reality and then strips things away until you get some sort of form, whereas concrete art starts from nothing and tries to build form up. And I think there's a huge, nice intersection with mathematics. And in the 19th century, you've got that distinction where people were starting to think about objects in their own right. And as that happens, suddenly this great insight, which is something that can really be used practically—you can think about the Gauss-Bonnet theorem, and it's something that tells you about the world. So I guess I should now say what it is.
EL: Yeah, that would be great. Actually, I guess it must have been almost two years ago at this point, we had another guest who did choose the Gauss-Bonnet theorem, but in case someone has not religiously listened to every single episode—
KK: Right, this was some time ago.
EL: Yeah, we should definitely say it again.
EH: So the Gauss-Bonnet theorem links the sort of behavior of a surface to what happens when you walk around paths on that surface. So the simplest example is this: I start off, I’m on a sphere, and I start at the North Pole and I walk to the equator. At the equator, I turn 90 degrees, I walk a quarter of the way around the Earth, I turn 90 degrees again, and I walk back to the North Pole. And if I turn a final 90 degrees, I’m now back where I started facing in the same direction that I started. But if I look at how much I turned, I didn't go through 360 degrees. So normally if we go around a loop on a nice flat sheet, if you come back to where you started pointing in the same direction, you've turned through 360 degrees. So in this path that I took on the sphere, I turned through 270 degrees, I turned through too little. And that tells me something about the surface that I'm walking on. So even if I knew nothing about the surface other than this particular loop, I would then know that the surface inside must be mostly positively curved, like a sphere.
And similarly if I did the same trick, but instead of doing it on the sphere, I took a piece of lettuce and started walking around the edge of a piece of lettuce, in fact, I’d find that when I got back to where I started, I’d turned a couple of hundred times round, instead of just once, or less than once, as in the case of the sphere. And so in that case, you've got too much turning. And that tells you that the surface inside is made up of a lot of saddles. It's a very negatively curved surface. And one of the motivations of creating this theorem for Gauss, I believe—I always find it dangerous to talk about history of mathematics in public because you never know what the apocryphal stories are—one of the questions Gauss was interested in was not whether or not the earth was a sphere. Well, actually, whether or not the earth was a sphere. So not whether or not it was round, or topologically a ball, but whether it was geometrically really a perfect sphere. And now we can go up into space and have a look back at the earth, and so we can sort of do a three-dimensional version of that, regard the earth as a three dimensional sphere, but Gauss was stuck on the surface of the earth. So he really had this sort of two dimensional picture. And what you can do is create different triangles and ask, for those triangles, what’s the average amount of curvature? So I look at that turning, I look at the total area, the size of the triangle, and ask does that average amount of curvature change as I draw triangles in different places around the earth? And at least to Gauss’s measurements—again, in the potentially apocryphal story I heard—the earth appeared to be a perfect sphere up to the level of measurement they were able to do then. I think now, we know that the earth is an oblate spheroid, in other words, going between the poles is a slightly shorter distance than across the equator.
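[Editor's note: For the North Pole walk Edmund describes, the numbers match the theorem exactly. That loop bounds one eighth of a sphere of radius R, whose Gaussian curvature is K = 1/R^2, so the enclosed curvature is

\int_{\text{octant}} K \, dA = \frac{1}{R^2} \cdot \frac{4\pi R^2}{8} = \frac{\pi}{2},

which is 90 degrees, exactly the turning that went missing: 360° − 270°.]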
KK: Right.
EH: I believe that it was only a couple of years ago that we managed to make spheres that were more perfect than the Earth. So it was sort of, yeah, the Earth is one of the most perfect spheres that anyone has experience of, but it's not quite a perfect sphere when your measurements are fine enough.
KK: So what's the actual statement of Gauss-Bonnet?
EH: So, the statement is that the holonomy, which is a fancy word for the amount of turning you do as you go around a path on the surface, is equal to—now I’m forgetting the precise details—so that turning is closely related to the integral of the Gaussian curvature as you go over the whole surface.
KK: Right.
EH: So it's relating going around that boundary—which is a single integral because you're just moving around a path—to the double integral, which is going over every point in the surface. And the Gaussian curvature is the notion of whether you're like a sphere, whether you're flat, or whether you're like a saddle at each individual point.
KK: And the Euler characteristic pops up in here somewhere if I remember right.
EH: Yeah. So the version I was giving was assuming that you’re bounding a disk in the surface, and you can do a more powerful version that allows you to do a loop around something that contains a donut.
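[Editor's note: For readers who want the precise statement being gestured at here: for a compact surface M with piecewise-smooth boundary ∂M, the Gauss-Bonnet theorem says

\int_M K \, dA + \int_{\partial M} k_g \, ds + \sum_j \theta_j = 2\pi \, \chi(M),

where K is the Gaussian curvature, k_g is the geodesic curvature of the boundary (the "turning" along the path), the θ_j are the exterior angles at any corners, and χ(M) is the Euler characteristic. For a loop bounding a disk, χ(M) = 1, which recovers the version described in the episode; for a closed surface with no boundary, the boundary terms vanish and the total curvature is 2πχ(M).]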
EL: Yeah, and it relates the topology of a surface, which seems like this very abstract thing, to geometry, which always seems more tangible.
EH: Yeah. Yeah, the notion that the total amount of curvature doesn't change as you shift things topologically.
EL: Right.
EH: Even though you can push it about locally.
KK: Yeah. So if you're pushing it in somewhere, it has to be pooching out somewhere else. Right? That's essentially what's going on, I guess. Right?
EH: Yeah. You know, another thing that's really nice about the Gauss-Bonnet theorem is that it links back to the Euler characteristic and that early topological work, and sort of pulls the topology in this lovely way back into geometric questions, as Evelyn said. And then the Euler characteristic has echoes back to Descartes. So you're seeing this sort of long development of the mathematics that's coming out. It’s not something that came from nowhere. It was slowly developed by insight after insight, of lots of different thinking on the nature of surfaces and polyhedra and objects like that.
EL: Yeah. And so where did you first encounter this theorem?
EH: So this is rather a confession, because—when I was an undergraduate, I absolutely hated my differential equations course. And I swore that I would never do any mathematics involving differential equations. And I had a very wise PhD advisor who said, “Okay, I'm not going to argue with you on this, but I predict that at some point, you will give me a phone call and say you were wrong. And I don't know when that will be. But that's my prediction.”
KK: Okay.
EH: It did take several years. And so yes, many years later, I'd learned a lot of geometry, and I wanted to get better control over the geometry. So I sort of got into doing differential geometry not through the normal route—which is you sort of push on through calculus—but through first understanding the geometry and then wanting to really control—specifically thinking about surfaces that have neither the geometry of the sphere, nor the plane, nor the hyperbolic plane. Those are three geometries that you can look at without these tools. But when you want to have surfaces that have saddles somewhere and positive curvature—I mean, this relates back to the CNC because you're needing to understand paths on surfaces there in order to take our tool and produce surfaces.
And so I realized that the answers to all my questions lay within differential equations, and actually differential equations were geometric, so I was foolish to dislike them. And I did call up my advisor and say, “Your prediction has come true. I'm calling you to say I was wrong.”
EL: Yeah.
EH: So basically, I came to it from looking at geometry and trying to understand paths on surfaces and realizing from from there that there was this lovely toolkit that I had neglected. And one of the real gems of this toolkit was this theorem. And I think it's a real shame that it's not something that's talked about more. I’ve said this is a bit like the Sistine Chapel of mathematics. You know, most people have heard of the Sistine chapel.
KK: Sure.
EH: Quite a lot of people can tell you something that's actually in it.
EL: Right.
EH: And still, only a few people have really seen it. And certainly very few people have studied it and really looked and can tell you all the details. But in mathematics, we tend to keep everything hidden until people are ready to hear the details. And so I think this is a theorem that you can really play with and see in the world. I mean, it's not a—there are some models and things you can build that are not great for podcasts, but it's something you can really see in the world. You can put items related to this theorem into the hands of people who are, you know, eight or nine years old, and they can understand it and do something with it and see what happens, because all you have to do is give people strips of paper and ask them to start connecting them together, just controlling how the angles work at the corners.
And depending on whether those angles add up to less than 360 degrees—well, not the angles at the corner—depending on whether the turning gives you less than 360, exactly 360, or more than 360, you're going to get different shapes. And then you can start putting those shapes together, and you build out different surfaces. And so you can then explore and discover a lot of stuff in a sort of naive way. You certainly don't need to understand what an integral is in order to have some experience of what the Gauss-Bonnet theorem is telling you. And so it's that aspect, that this is something that was always there in the world. The sort of experiments, the sort of geometry you can look at, through differential geometry and things like the Gauss-Bonnet, that was available to the whole history of mathematics, but we needed to make a break from just geometry as a representation of the world to then sort of step back and look at this result that is a very practical, hands-on one.
You know, if you really want to control things, then you do need to have solid multivariate calculus. So generally, the three-semester course of calculus is often meant to finish with Gauss-Bonnet, and it's the thing that's dropped by most people at the end of the semester, because you don't quite have time for it. And there's not going to be a question on the test. But it's one of those things that you could sort of put out there and have a greater awareness of in mathematics. Just as: this is an interesting, beautiful result. I would say, you know, it's one of humanity's greatest achievements to my mind. You don't have to really be able to understand it perfectly in order to appreciate it. You certainly—as I proved to you—can appreciate it without being able to state it exactly.
EL: Yeah, well, you've sold me—although, as we've learned on this podcast, I'm extremely open—susceptible to suggestion.
KK: That’s true. Evelyn's favorite theorem has changed multiple times now. That's right.
EL: Yeah. And I think you brought it back to Gauss-Bonnet. Because when we had Jeanne Clelland earlier, who said Gauss-Bonnet, I was like, “Well, yeah, I guess the uniformization theorem is trash now”—my previous favorite theorem, but now—it had been pulled over to Cantor again, but you’ve brought it back.
KK: Excellent. All right, so that's another thing we do on this podcast is ask our guest to pair their theorem with something. So Edmund, what pairs well with Gauss-Bonnet?
EH: Well, I have to go with a walnut and pear salad.
KK: Okay.
EL: All right.
KK: I’m intrigued.
EH: Well, I think I've already mentioned lettuce.
EL: Yes.
EH: Lettuce is an incredibly interesting curved surface. Yeah. And then you've got pears, which gives you—
KK: Spheres.
EH: A nice positively curved thing. But they're not just boring spheres.
EL: Yeah.
EH: They have some nice interesting changes of curvature. And then walnuts are also something with very interesting changing curvature. They have very sharply positively curved pieces where they're sort of coming in but then they've got all these sort of wrinkly saddley parts. In fact, one of the applications of the Gauss-Bonnet theorem in nature is how do you create a surface that sort of fits onto itself and fills a lot of space—or doesn't fill that much space but gives you a very high surface area to volume ratio. So walnut is an example—or brains or coral—you see the same forms coming up. And the way many of those things grow is by basically giving more turning as you grow to your boundary.
KK: Right.
EH: And that naturally sort of forces this negatively-curved thing. So I think the salad really shows you different ways in which this surface can—the theorem can affect the behaviors of the surfaces.
EL: Yeah, well, what I want now is something completely flat to put in the salad. Do you have any suggestions?
KK: Usually you put goat cheese in such a thing, but that doesn't really work.
EL: That’s—well, parmesan. You could shave parmesan.
EH: Yeah, shavings of parmesan. Or maybe some thin-cut salami.
EL: Okay.
EH: And so even though those things would bend over—I mean, we’re now on to a different theorem of Gauss, and I don’t mean to corrupt Evelyn away—but you know, when you thinly cut the salami, it can bend, but it doesn't actually change its curvature.
KK: Right.
EH: Your loops on that salami are going to have the same behavior that they had before. And I guess I should also say that I did create a toy that makes that paper model that I talked about easier to use. You don't have to use tape. You can hook together pieces. And so the toy is called Curvahedra.
KK: I was going to say, you should promote your toy. Yeah.
EH: I’m terrible at self-promotion, yes.
EL: We will help you. Yes, this is a very fun toy. I actually got to play with it for the first time a few weeks ago when you did a little short thing, and I think when I had seen pictures of it before, I thought it was not going to be as sturdy as it is. But this is—yeah, it's called Curvahedra—look it up. It’s these quite sturdy pieces—you know, you don't need to worry about ripping the pieces as you put them together—and you can create these things that look really intricate, and you can create positive curvature, or flat things, or negative curvature in all these different conformations. It's a very fun thing to play with.
EH: And it is a sort of physical version of exactly the Gauss-Bonnet theorem. As you hook together pieces, you're controlling what happens on a loop. And then as you put more of those loops together, you can get a variety of different surfaces, from hyperbolic planes to spheres to—of course, kids have made animals and creatures with it. So you get this sort of control. In fact, it's one of those things that, you put it into the hands of kids, and they do things that you didn't think were really possible with it because their ability to play with these ideas and be free is always so inspiring. So that's what I said, this is a theorem that you can—people can understand as something in the real world. And then you can tell the story of how this understanding of the world is linked directly back to abstract, esoteric mathematics, of the most advanced sort.
KK: Right. One of my favorite things about Curvahedra, though, is the video that you put online somewhere—I think was on Twitter—of it popping out of your suitcase, like you compressed it down into your suitcase to travel home one time?
EH: Yes, I have a model that's about a two-foot cube. And so you can’t travel with that easily, but it can compress very small. And that same object has been in my suitcase and other things several times, and it's now sitting in my office here.
KK: That’s great fun. And also you've made similar models out of metal, correct?
EH: Yes. So the basic system—not the big one you can crush down to put into suitcases.
KK: No, certainly not.
EH: I’ve made a couple of the spheres. And we're currently working on a proposal to go outside the Honors College at the University of Arkansas. That grew out of a course—it was a design that was created from Curvahedra and other inspirations—a course I taught with Carl Smith, who is a landscape architect in our landscape architecture school. And so there's going to be—hopefully at some point there's going to be a 12-foot-tall Curvahedra-style model outside the Honors College at the University of Arkansas.
KK: Very nice.
EL: Nice.
KK: Yeah, this has been great fun. Anything else we want to talk about?
EL: Yeah, well, do you want to say a website or Twitter account or anything where people can find you online?
EH: So I’m actually @Gelada on Twitter, and there is @Curvahedra, and my blog, which is very rarely updated, but has some nice stuff, is called Maxwell’s Demon.
EL: Yeah, and can you spell your Twitter?
EH: Yes, so Gelada is spelled G-E-L-A-D-A. They are baboons in Ethiopia, or it’s a cold beer in Brazil. I discovered that latter one after being on Twitter, and I regularly get @-ed by people in Brazil, who were not wanting to talk to me at all, but they're asking each other out for beers.
EL: Ah.
EH: And yeah, so then there's also curvahedra.com, where you can get that toy.
EL: Cool. Thanks for joining us.
KK: Yeah, thanks Edmund.
EH: Thank you.
[outro]
On today’s episode, we were pleased to talk with Edmund Harriss, a mathematician and mathematical artist at the University of Arkansas, who is our second guest to sing the praises of the Gauss-Bonnet theorem. Below are some links you might find useful as you listen to the episode.
Edmund’s Twitter account, @Gelada
His blog, Maxwell’s Demon
The website and Twitter account for Curvahedra, the toys he makes that help you explore the Gauss-Bonnet theorem and just have a lot of good fun with geometry
Our episode with Jeanne Clelland, who also chose the Gauss-Bonnet theorem
Edmund and Evelyn both attended the Illustrating Mathematics program at the Institute for Computational and Experimental Research in Mathematics (ICERM). The program website, which includes videos of some interesting talks at the intersection of math and art, is here.
Kevin Knudson: Welcome to My Favorite Theorem, a math podcast and so much more. I'm one of your hosts, Kevin Knudson. I'm a professor of mathematics at the University of Florida. And here is your other host.
Evelyn Lamb: Hi, I'm Evelyn Lamb. I'm a freelance writer, usually based in Salt Lake City, but currently coming to you from Providence, Rhode Island.
KK: Hooray! Yeah, you're at ICERM.
EL: Yes. The Institute for Computational and Experimental Research in Mathematics, an acronym that I am now good at remembering.
KK: I’m glad you told me. I was trying to remember what it stood for this morning because I'm going next week. We'll be in the same place for, like, only the second time ever.
EL: Yeah.
KK: And the universe didn't implode the first time. So I think we're safe.
EL: Yeah.
KK: So the ICERM thing is visualizing mathematics, I mean, we're sort of doing like—next week is about geometry and topology, which since both of us are nominally that, that's just the right place for us to be.
EL: Yeah, it's it's going to be a fun semester. I'm also very excited because I recently turned in—it feels weird to call it a manuscript, but it is being published by a place that publishes books. It is the final draft of a page-a-day calendar about math. And I hope that by the time we air this, I will be able to have a link where people can purchase this and give it to give it to themselves or to their favorite mathematician.
KK: Yeah.
EL: So that's just, every day you can have a little morsel of math to start your morning.
KK: I’m looking forward to that. That’s really exciting. Yeah, that's that's great. All right, so we're continuing a tradition in this episode.
EL: Yes.
KK: So Christian Lawson-Perfect organizes this thing through the Aperiodical called the Great Internet Math-Off [Editor’s note: Whoops, it’s called the Big Internet Math-Off!] of which you were a participant in the first one but not this one, not the second go-around. And we had the first winner on. The winner gets named the World's Most Interesting Mathematician (among those people who Christian could round up and who were free in July). And so we wanted to keep this trend going of getting the most interesting mathematicians in the world on this podcast. And we are pleased to welcome this year's winner, Sophie Carr. Sophie, you want to introduce yourself, please?
Sophie Carr: Oh, hello, thank you very much. Yeah, I'm Sophie Carr. I studied Bayesian networks at university, and now I own and run a data analytics company.
EL: Yeah, and you’re the most interesting mathematician!
SC: I am! For this year, I am the most interesting mathematician in the world. It's entirely Nira’s fault that I entered, because he suggested it and put me forward.
KK: That’s right. Nira Chamberlain was last year's winner. And so when we interviewed him he was sitting in his attic wearing a winter coat. It was wintertime and it seemed very cold where he was. You look very comfortable. It looks like you have a very lovely home in the background.
SC: Yes, I mean, I am in two jumpers. Autumn has definitely arrived. Summer has gone, and it's a little chilly at the moment.
KK: I can only dare to dream. Yeah.
EL: Yeah, Florida and UK have slightly different seasons.
KK: Just a little bit. So you own a consulting company? That’s correct?
SC: Yeah, I do. I set it up 10 years ago now. There’s me and two other people who work with me. We just have an awful lot of fun finding patterns in numbers. I still find it amazing that we're still going. It's just the best fun ever. We get to go and work on all sorts of different problems with all sorts of different people. It's fantastic.
KK: Yeah, that's great. I mean, I'm glad companies are starting to come around to the idea that mathematicians might actually have something to tell them. Right?
SC: Yes, it really is. When you explain to them that you're not going to do magic, that it's not a black box, and that you can tell them how it works and how it can really make a difference, they are coming around to that.
KK: That’s fantastic. All right, so we're here to talk about theorems.
EL: Yeah. What is your favorite theorem?
SC: My favorite theorem in the whole world is Bayes’ Theorem.
EL: Yay, I'm so glad that someone will be talking about this! Because I know that this is a great theorem and—confession: I just, I don't appreciate it that much.
KK: You know, same.
EL: I need to be told why it's great.
KK: Yeah, I taught probability one time and I said, “Okay, here's Bayes’ theorem.” I kind of went, all right, fine, but of course the question is, what's the prior, Mr. Bayes? So tell us. Tell us, please.
EL: Yeah, preach!
KK: Preach for Reverend Bayes.
SC: You know, I don't think there's any preaching needed. Because I always say this. I mean, there are two bits of statistics: there’s the frequentist and the Bayesian. And I always liken it to rugby union and rugby league, which are two types of rugby in England. It's different codes, but it's the same thing. So to me, Bayes’ theorem, it's just the way that we naturally think. And it's beautifully simple, and all it does is let you take everything that you know and every piece of information that you have, and use that to update the overall outcome. And you're right that the really big arguments come about from what the prior is. What is the background information that we have, and can we actually, genuinely, have a true prior? And some people say no, because you might not have any information. But that's the great bit! Because then you can go and find out what the prior is. You have to be absolutely open about what you're putting in there. I think the really big debate comes around whether people are happy with uncertainty. Are they happy for you to not give an exact answer? If you go and you say, well, this is the prior, this is what we think the information is as well. And we combine these all, combine these priors, and this is the answer. Let's have a debate. Let's start talking about what we can have. Because at its simplest, you've got two things you’re timesing together. Just two numbers. Something that runs your mobile phone. I mean, that’s quite nifty.
KK: So can we can we remind our listeners what Bayes’ theorem actually says?
SC: Okay, so Bayes’ theorem takes two things. It takes the initial, or the prior distribution. Okay, and that's the bit where the argument is. And that might be just, what's the chance of something happening? What do you think the probability is of something happening? And you combine that with something called the likelihood ratio. And it's real simple. The likelihood ratio is just a ratio of the probability of the information, or the evidence you have, assuming one hypothesis, divided by the probability of that information assuming another hypothesis. So you just have to have those two values.
And then all you have to do is times them together! That really is it, and when you start to say to people, it's just two numbers—Now, you can turn that into three numbers if you want. You can turn the likelihood ratio bit into its two separate parts. And you can show Bayes’ theorem very, very simply with decision trees, and that was part of the reason I used decision trees in the Math-Off, was just to show the power of something that is really quite simple, that can drive so, so far. And that's what I love about Bayes’ theorem. I always describe it as something that is stunningly elegant, but unbelievably powerful. And I always liken it to Audrey Hepburn. I think if it were to be a person, it would be Audrey Hepburn. Quite small! I'd say it's this amazing little thing that has two simple numbers. But goodness me, getting those numbers, well, I mean, you can just have so much fun! I think you can.
And maybe it's just me that likes finding the patterns in the numbers and finding those distributions. Coming up with the priors. So come on, Kevin, you said, you sat there and your class said, “Well, what's the prior?”
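In odds form, the step described above is: posterior odds = prior odds × likelihood ratio. Here is a minimal sketch in Python of that one multiplication; the hypotheses and every number in it are made up purely for illustration.

# Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio.
# All numbers below are invented for illustration.
prior_h1 = 0.30             # prior probability of hypothesis H1
prior_h2 = 1 - prior_h1     # prior probability of the alternative H2

p_evidence_given_h1 = 0.80  # probability of the evidence if H1 is true
p_evidence_given_h2 = 0.20  # probability of the evidence if H2 is true

prior_odds = prior_h1 / prior_h2
likelihood_ratio = p_evidence_given_h1 / p_evidence_given_h2

posterior_odds = prior_odds * likelihood_ratio        # the "times them together" step
posterior_h1 = posterior_odds / (1 + posterior_odds)  # convert odds back to a probability
print(f"posterior P(H1 | evidence) = {posterior_h1:.3f}")  # about 0.632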
KK: Yeah.
SC: What do you say? How would you tell people to go about finding a prior? Are they going to use their subjective opinion? Are they going to try and find it from data?
KK: Well, that is the question, isn't it? Right? So, I mean, often, the problem with probability sometimes is that—at least, like, in political forecasting, right—people tend to round up probabilities to 1 or lop them off to zero. Right? So for example, when, you know, when Trump won the election in 2016, everybody thought it was a huge shock. But you know, 538 had it as, you know, Hillary Clinton was a two-to-one favorite. But two-to-one favorites lose all the time, right?
SC: Yeah.
KK: And so the question then is, yeah, people like to think about one-off events. And then the question is, how do you estimate the probability of a one-time event? And you have to make some guess, right, at the prior. And that’s—I think that's where people get suspicious of Bayes’ theorem, or Bayesian statistics, because how do you make this estimate? So how do you make estimates in your daily work as a consultant?
SC: Okay, so we do it in a variety of different ways. So if we're really lucky, there’s some historical data we can go looking at.
KK: Sure.
SC: And often just mining that historical data gives you a good starting point. I always get slightly suspicious of flat distributions. Because if we really, really don't know anything other than that, I think maybe a bit of research before you settle on the prior is always a good thing. My favorite priors are when we go and talk to people and start to get out of them their subjective opinion. Because I like statistics, I genuinely love statistics, because of the debate that goes on around it. And I think one of the things that people forget about math is that it's such a living subject. And there are so many brilliant debates—and you can call some of them arguments—people are prepared to go and say, “Look, this is my opinion and this is what I think the shape is.” And then we can do the analysis. Inevitably somebody will stand up and go, “Well, that bit is wrong.” Okay, so tell me why!
EL: Yeah.
SC: What evidence have you got for us to change the shape, or why do you think it should be skewed, or Poisson, or whatever we're using? And sometimes, if we haven't got time to do that, we can start to put in flat distributions. We can say, “Well, we think it's about normal.” Or “We think on average, it'll be shoved a little bit to the right or a little bit to the left.” Those are the three main ways we go about doing it. And I think the ability to be absolutely open and up front about what you know and what you don’t know helps you find that prior. And I don't really understand why people would be scared of that, why they would run away from it. Why you would not want to say what the uncertainty is or what you're not sure about. But I think that might have a lot to do with people thinking that math is certain.
EL: Yeah.
SC: That when you say the answer is 12, well it’s 12. And not, “Well, it’s 12 because we kind of do it like this, and actually if something changes, that number might change.” And I think getting comfortable with uncertainty and being uncomfortable, is really the crux for developing those priors.
EL: Yeah. Well, I guess for me, it's hard to reason about statistics in a non-frequentist way. Meaning—you know, I'm comfortable with non-frequentist statistics to a certain degree. But just, like you were saying, what does a 30% chance mean if it's not that we could do this 10 times and have it happen three times? But you can't have a presidential election—the same election—10 times, or you can't run Monday’s weather 10 times, or something like that. It's just hard for me to interpret what it means if there isn't a frequentist interpretation.
SC: Yeah. One of the things we found that works really well is if you start showing patterns—and that's why I always talk about patterns, that we find patterns. When you're doing Bayesian stats with priors, if you start to show the changes as curves, and I don't mean the distribution, but just that rising and falling of numbers, people start to understand what's driving the priors, what assumptions are changing those priors. And then you start to see the impact of that, how the final answer changes. That can be incredibly powerful. Often people don't want that set answer. They want to know what the range is, they want to understand how that changes. And showing that impact as a shape—because I think most people are visual. When you show somebody a surface or, you know, a graph, or whatever it is, that's something you can really get a grip on. And actually I come from a Bayesian belief network background. So I kind of found out about Bayes’ theorem by chance. I never set off to learn Bayes’ theorem. I set off to design [unintelligible]. That’s what I grew up wanting to do. But I ended up working on Bayesian networks. That’s the short version of what happened.
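As a rough illustration of watching the answer move as the prior moves, here is a small Python sketch using a conjugate Beta prior on an unknown probability and invented data (7 successes in 10 trials). The Beta-Binomial update rule is standard; the particular priors and data are not from the episode.

# With a Beta(a, b) prior on a probability p and k successes in n trials,
# the posterior is Beta(a + k, b + n - k), with mean (a + k) / (a + b + n).
k, n = 7, 10  # hypothetical data: 7 successes in 10 trials

priors = {
    "flat Beta(1, 1)": (1, 1),
    "skewed toward small p, Beta(2, 8)": (2, 8),
    "skewed toward large p, Beta(8, 2)": (8, 2),
}

for name, (a, b) in priors.items():
    posterior_mean = (a + k) / (a + b + n)
    print(f"{name}: posterior mean for p = {posterior_mean:.2f}")
# Prints roughly 0.67, 0.45, and 0.75: same data, different priors, different answers.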
EL: So, how—was this a “love at first sight” theorem? Or what was your initial encounter with this theorem? And how did you feel about it? Since this is all about subjective feelings anyway!
SC: Well, my PhD was part-time. I spent eight years collecting subjective opinions. So I started a PhD in Bayesian networks, and there was this brilliant representation of a great big probability table. And this is a while ago now. And I’ve moved on a lot into [unintelligible]. But I've got this Bayesian network, and my supervisor said, “Here we go,” and I went, “Ah, it’s just lots of ovals connected with arrows.”
And I went, “There must be something more to this.” And he went, “There’s this thing called Bayes’ theorem that underpins it and look at how it flows. It’s how the information affects it.” And I went, “Okay!” And so, as with all PhDs, you have this pile of reading, which is apparently going to be really, really good for you.
So I got my pile of reading. I went, “Okay.” And genuinely I just thought, “Yeah, it's just kind of how we all work, isn't it?” And I really had not liked statistics at university at all because I’d only really done frequentist statistics. And it’s not like I dislike frequentist statistics. I just didn’t fall in love with it. But when there was something I could see—and I genuinely think it’s because it's visual. I could see the shapes move, I could see the numbers flow, I could see the information flow. I thought, “Oh, this is cool stuff. I understand this. I can get my head around this.” And I could start to see how to put things in and how they changed. And I think also I've got at times a very short attention span. So running millions of replicates never really did it for me.
EL: Yeah.
SC: So I had a bit of an issue with frequentist statistics, where we just have to run lots and lots and lots of replicates.
EL: Right.
SC: Can we not assume it's kind of like this shape and see what happens? Then change that shape. Look, that’s great. That's much better for me.
EL: Yeah. So it was kind of a conversion experience there.
SC: I think, for people my age, probably. Because I don’t think Bayesian statistics, years ago, was taught that commonly. It's only really in the past decade or so that I think it's become really mainstream and been taught in the way it is now. Certainly with its wide applications. That's why I think people just go, wow, something that they've never heard of is now all over the AI world, and it’s in your mobile phone, and it's in your medicine, and it's in your spam filters. And when it suddenly becomes really popular, people start to see what it can do. That's when it's taught more. And then you get all these other debates.
KK: So the other fun thing we like to do on this podcast is ask our guests to pair their theorem with something. So what pairs well with Bayes’ theorem?
SC: So this caused a lot of debate in our household.
KK: It always does.
SC: Yeah. And I am going to pair Bayes’ theorem with my favorite food, which is risotto, because risotto only takes three things. It only needs rice and onions and a good stock.
KK: Yes.
SC: And Bayes’ theorem is classically taught with three numbers. And it’s really powerful and gorgeous. And risotto only takes three ingredients, and it’s really gorgeous.
KK: And also, the outcome is uncertain sometimes, right?
SC: Oh, frequently uncertain. And if you change those prior proportions, you will get a very different outcome.
KK: That’s right. You might get soup, or it might burn.
SC: So, I am going to say that Bayes' theorem is like a risotto.
EL: And you mentioned Audrey Hepburn earlier so maybe it’s even more like sharing a risotto with Audrey Hepburn.
SC: That would be brilliant. How cool would that be?
EL: I know!
SC: I will have my Bayes’ theorem discussion with Audrey Hepburn over risotto. That would be a pretty good day.
EL: Yeah, you could probably get a cardboard cutout. Just, like, invite her to dinner.
SC: Yeah, I'll do that. I'll try and set up a photo, superimpose them.
EL: Yeah.
KK: But Audrey Hepburn should be breakfast somewhere right?
EL: But you can eat risotto for breakfast.
SC: Yeah, you can eat risotto any time of the day.
KK: Sure.
SC: There’s never a bad time for risotto.
KK: No, there isn't. Yeah. My wife actually doesn't like risotto very much, so I never make it.
EL: So is that one of your restaurant foods? We have this whole category of foods that you tend to order at a restaurant because your partner doesn't like them. Like, I don't really like mushrooms, so my partner often will order a mushroom thing at a restaurant.
KK: Yeah, so for me, I don't go out for Italian food because I can make it at home.
EL: Okay.
KK: So as a general rule, I don't eat Italian out. There’s kind of no point, I think.
SC: So you’re right that risotto is my restaurant food because my husband doesn't like it.
KK: Oh.
EL: Aw.
SC: It's my most favorite thing in the world, so yeah, every time we go out, the kids go, “Mom, just don't get the menu. There’s no point. We know what you’re getting.”
EL: Yeah. So you said this caused a debate. Did he have a different opinion about what your pairing should be?
SC: Well, there were discussions about whether it was my favorite drink with [a bag of crisps?], and what things could be combined together. And I said, “No, it just has to be risotto.”
KK: Okay. Excellent.
EL: Yeah, we do make that at home. And actually the funny thing is I don't really like mushrooms, but I do like the mushroom risotto that we make.
SC: Oh.
EL: Yeah.
SC: So you've not got a flat prior. You've actually got a little bit of a skew on there.
EL: Yeah, I guess. I’m trying to figure out how to quantify this. Yeah, like my prior distribution for mushroom preference is going to depend on whether it is cooked with arborio rice or not.
SC: See, there we go, and you don’t have to worry about numbers, you just draw a shape.
EL: Yeah, nice.
KK: Cool. So we also like to give our guests a chance to plug anything they want to plug. Do you have things out there in the world that you want people to know about?
SC: So the only thing I think that's worth mentioning is I do some Royal Institution maths masterclasses, where we go out and we take our favorite bit of math, and we go and take it to students who are between the ages of about 14 and 17. And that's really what I'm doing coming up in the near future, and they are a brilliant way for lots of people to engage with maths.
EL: Oh, nice.
KK: That’s very cool.
SC: Yeah. They are really good fun.
KK: Have you been doing that for very long?
SC: I’ve been doing them for about two years now. And the first one I ever did was on Bayes’ theorem. And I've never been so terrified, because I don’t teach. And then you have this group of students, and they come up with just the best and most fantastic questions. Every time you do it, you go, “I hadn’t thought of that.”
KK: Yeah.
SC: “And I don't know how to answer that question straight away.” So it's brilliant, and I love doing them. So that's kind of what we've got coming up. And you know, work is just going to be keeping me nicely busy.
EL: Nice.
SC: Yeah.
KK: Well, this has been great fun. Thank you for joining us, and congratulations on being the world's most interesting mathematician for this year.
EL: Yes. Yeah, thanks a lot.
SC: Thank you. I’ve been so excited to do this. I've been listening to your podcast for quite a long time, and I couldn't believe it when you emailed.
Okay, thank you very much.
Okay. Thanks.
On this episode, we had the pleasure of talking with Sophie Carr, a statistics consultant and winner of Christian Lawson-Perfect’s Big Internet Math-Off last summer. Here are some links you may enjoy as you listen to this episode.
As we mentioned at the top of the show, Evelyn’s math page-a-day calendar is available for purchase in the AMS bookstore!
Sophie Carr’s Twitter account
The Big Internet Math-Off at the Aperiodical
Royal Institution Masterclasses
Sophie Carr is this year’s World’s Most Interesting Mathematician. We also had last year’s World’s Most Interesting Mathematician, Nira Chamberlain, on the show in January. Find his episode here.
Kevin Knudson: Welcome to My Favorite Theorem, a podcast about mathematics and all kinds of crazy stuff, and I have no idea what it's going to be today. It is a tale of two very different weather formats today. So I am Kevin Knudson, professor of mathematics at the University of Florida. Here's your other host.
Evelyn Lamb: Hi, I'm Evelyn Lamb. I'm a math and science writer in Salt Lake City, Utah, where I am using the heater on May 28.
KK: Yes, and it's 100 degrees in Gainesville today, and I'm miserable. So this is bad news. Anyway, so today, we are pleased to welcome Judy Walker. Judy, why don't you introduce yourself?
Judy Walker: Hello. Thank you for having me. I'm Judy Walker. I'm a professor of mathematics at the University of Nebraska.
KK: And what else? You're like—
JW: And I am Associate Vice Chancellor for faculty and academic affairs, so that’s, like, Vice Provost for faculty.
KK: That sounds—
EL: Yeah, that does sound very official!
JW: It does sound very official, doesn't it?
KK: That’s right. Like you're weighing T & P decisions in your hands. It's like, you're like Caesar, right? With the thumbs up and the—
JW: I have no official power whatsoever.
KK: Right.
JW: So yes.
KK: But, well, your power is to make sure procedures get followed, right?
JW: Yes. And I have a lot of influence on other things.
KK: Yeah. Right. Yeah. That sounds like a very challenging job.
JW: And for what it's worth, I will add that it is cloudy and windy today. But I think we're supposed to be, like, 67 degrees. So right in the middle.
KK: All right. Great.
EL: Okay, perfect.
KK: So if we could see the map of the US, there'd be these nice isoclines. And here we are. Right. So mine is very hot. Mine's red. So we're good. Anyway, we came to talk about math. You’re excited to talk about math for once, right?
JW: Exactly. I guess I'm kind of going to be talking about engineering, too. So—
EL: Great.
KK: That’s cool. We like it all here. So what's your favorite theorem?
JW: So my favorite theorem is the Tsfasman-Vladut-Zink theorem.
KK: Okay, that's a lot of words.
JW: It is—well, it’s a lot of names. It's three names. And it's a theorem that is in error-correcting codes, algebraic coding theory. And it's my favorite theorem, because it solves a problem, or maybe not solves a problem, but shows that something's possible that people didn't think necessarily was possible. And the way that it shows that it's possible is by using some pretty high-powered techniques from algebraic geometry, which had not previously been brought into the field at all.
EL: So what is the basic setting? Like what kind of codes can you correct with this theorem?
JW: Right. So the codes are what does the correcting. We don't correct the codes, we use the codes to correct. So I used to tell my — actually, my advisor told me and then I've told all my PhD students — that you have to have a sentence that you start everything with. And so my sentence is: whenever information is transmitted across a channel, errors are bound to occur. So that is the setting for coding theory. You've got information that you're transmitting. Maybe it's pictures from a satellite, or maybe it's just storing things on a computer, or whatever, but you're storing this information. Or you're transmitting this information, and then on the other end, or when you retrieve it, there's going to be some mistakes. And so it's the goal of coding theory to add redundancy in such a way that you can find those mistakes and fix them. Okay?
And we don't actually consider it an error if you fix the mistake. So an error is when so many mistakes happened in the transmission or in the storage and retrieval, that what you think was sent was not what was actually sent, if that makes sense.
KK: Sure. Okay.
JW: So that's the basic setting for coding theory, and coding theory kind of started in 1948 with Shannon's theorem.
KK: Right.
JW: So Shannon's theorem says that reliable communication is possible. So what it says really, is that whatever your channel is, whether it's transmitting satellite pictures, or storing data, or whatever—whatever your channel is, there is a kind of maximum efficiency that's possible on the channel. And so what Shannon’s theorem says is that for any efficiency up to that maximum, and for any epsilon greater than zero, you can find a code that is that efficient and has less than epsilon probability of error, meaning the probability that what you sent is not what you think was sent at the end. Okay?
So that's Shannon's theorem. Right? So that's a great theorem.
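Shannon's theorem is stated channel by channel. As one standard example that is not spelled out in the episode, the binary symmetric channel, which flips each transmitted bit with probability p, has capacity 1 − H2(p), where H2 is the binary entropy function. A quick Python sketch:

import math

def binary_entropy(p):
    # H2(p) = -p log2(p) - (1 - p) log2(1 - p), with H2(0) = H2(1) = 0
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    # Capacity of a binary symmetric channel with crossover probability p
    return 1 - binary_entropy(p)

# Shannon: for any rate below capacity there are codes with arbitrarily small
# error probability, but the proof does not say how to construct them.
for p in (0.01, 0.05, 0.11):
    print(f"crossover p = {p}: capacity = {bsc_capacity(p):.3f} bits per channel use")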
EL: Yeah.
JW: It’s not my favorite theorem. It’s not my favorite theorem because it actually kind of bothers me.
KK: Why does it bother you?
JW: Yeah, so there are two reasons it bothers me. One is that it doesn't tell us how to find these codes. It says good codes exist, but it doesn't tell us how to find them, which is kind of useless if you're actually trying to transmit data in a reliable way. But it's actually even worse than that. It's a probabilistic proof. And so it doesn't just say that good codes exist, it says they're everywhere, but you can't find them. Right? So it's like it's taunting us. So yeah. So that's Shannon's theorem. And that's why it's not my favorite theorem. But why it's a really great theorem is that it started this whole field. So the goal of the whole field of coding theory, or of channel coding at least, which is what we've been talking about, has been to find those codes, and not just find them, but find them along with efficient decoding algorithms for them. And so that's Shannon's challenge: to find the good codes with efficient decoding algorithms for those good codes. That's 1948, when that started. Right? Okay.
So just as a digression, let me say that most mathematicians and engineers will agree that at this point in time, a little more than 70 years after Shannon's theorem, Shannon's challenge has been met, so that we can find these good codes. They're not going to agree on how it's been met. But they'll all agree that it has been met. So on the one hand, in the mid-to-late ‘90s, engineers found turbo codes, and they rediscovered low-density parity-check codes. And these are codes that in simulations come very, very close to meeting Shannon's challenge. The theory around these codes is still being developed. So the understanding of why they meet Shannon's challenge is still being worked out. But the engineers will say that it's solved, that Shannon's challenge is met, because they've got these simulations, and they're so confident about it that these codes are actually being used in practice now.
EL: So I have a naive question, which is, does the existence of us talking over the internet on this call sort of demonstrate that this has been met? Like, we are hearing each other — I mean, not with perfect fidelity, but we're able to transmit messages. Is that it? Or is that just not even in the same realm?
JW: No, that's exactly what we're talking about, exactly what we're talking about. And not only that, but I don't know if you've noticed, but every once in a while, Kevin gets a little glitchy, and he doesn't move for a while. That's the code catching up and fixing the errors.
KK: Yeah, that's the irony is this this call has been very glitchy for me.
JW: Right.
KK: Which is why we each record our own channel.
EL: Yeah.
JW: Exactly. So in fact, low-density parity-check codes and turbo codes are being used now in mobile phones, in satellite communications, in digital TV, and in Wi-Fi. So that's exactly what we're using.
EL: Okay.
JW: But the mathematicians will say, “Well, it's not really—we’re not really done. Because we don't know why. We don't really understand these things. We don't have all the theoretical underpinnings of what's going on.” A lot of work has been done, and a lot of that is there. But it's still a work in progress. About 10 years ago, kind of on the flip side, polar codes were discovered. And polar codes are the first family of codes to provably achieve capacity. So they actually provably meet Shannon's challenge. But at this moment, they are unusable. There's just still a lot of work to understand how we can actually use polar codes. So the mathematicians say, “We've met the challenge, because we've got polar codes, but we can't use them.” And the engineers say, “We've met the challenge because we've got turbo codes and LDPC codes, but we don't know why.” Right? And that's an oversimplification, but that's kind of the current state. And so different people are working on different things now. And of course, there are other kinds of coding that aren't really channel coding. There are still all kinds of unsolved problems. So if anybody tells you that coding theory is dead, tell them they're wrong.
EL: Okay!
JW: It’s still very much alive. Okay, so we talked about Shannon's theorem from 1948. And we talked about the current status of coding theory. And my favorite theorem, this Tsfasman-Vladut-Zink, is from 1982. So in the middle.
EL: Halfway in between.
JW: Yes, yes. Just like my weather being halfway in between. Yes. So around this time, in the early ‘80s, and preceding that, the way that mathematicians were approaching Shannon's challenge was through the study of linear codes. So linear codes are just subspaces, and we might as well think of—in a lot of applications, the data is zeros and ones. But let's go to Fq instead of F2, so q is any prime power.
KK: Okay, so we're doing algebraic geometry now, right?
JW: We’re not yet. Right now, we’re just talking about finite fields.
KK: Okay.
JW: We will soon be doing algebraic geometry, but not yet. Is that okay?
EL: You’re just trying to transmit some finite set of characters.
JW: Yes, some finite string of characters. Order matters, right? So it's a string. And so the way that we think about it, we can think about it as a systematic code. So the first k characters are information, and then we're adding on n−k redundancy characters that are computed based on the first k.
KK: Okay.
JW: So if we're in a linear setting, then this collection of code words that includes the information and the redundancy, that collection of code words is a subspace, say it's a k-dimensional subspace, of Fq^n. So that's a linear code. And we can think about that ratio, k/n, as a measure of how efficient the code is.
KK: Right.
JW: Because it's the number of information bits divided by the total number of bits, or symbols, or characters. So, let's call that ratio R, for rate, right? k/n, we’ll call it R. And then how many errors can the code correct? Well, if you look at the Hamming distance—so that's the number of positions in which two code words differ—then the bigger that distance, the more errors you can make and still be closest to the code word that was sent. So then that's not really an error. Right? So maybe we say the number of mistakes goes up.
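A tiny illustration of the two quantities just described, the rate k/n and the Hamming distance, using the three-fold repetition code as a standard toy example (it is not a code discussed in the episode); a code with minimum distance d can correct up to (d − 1)/2 mistakes, rounded down.

def hamming_distance(x, y):
    # Number of positions in which two words of the same length differ
    return sum(a != b for a, b in zip(x, y))

# Toy example: the binary three-fold repetition code {000, 111}.
# k = 1 information bit, n = 3 transmitted bits, so the rate is R = k/n = 1/3.
code = ["000", "111"]
d = min(hamming_distance(c1, c2) for i, c1 in enumerate(code) for c2 in code[i + 1:])
print("minimum distance:", d)                 # 3
print("mistakes correctable:", (d - 1) // 2)  # 1: any single flipped bit can be fixed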
EL: Yeah.

JW: So again, let's normalize that minimum distance of the code by dividing by the length of the code. So we have a ratio, let's call that ∂. So that's our relative minimum distance for the code. So one way to phrase this is: if we want a certain error-correcting capability, so a certain ∂, how efficient can the code be? How big can R be? Okay, so there are a lot of bounds relating R and ∂, our information rate and our error-correcting capability, or our relative minimum distance. So the one I want to tell you about is the Gilbert-Varshamov bound.
So the Gilbert-Varshamov bound is from 1952. And it says that there's a sequence of codes, or a family of codes if you want, of increasing length, increasing dimension, increasing minimum distance, so that the rate converges to R and the relative minimum distance converges to ∂. And R is at least 1−Hq(∂), where Hq is this entropy function. So you may have heard of the binary entropy function; there's a q-ary entropy function, and that's what Hq(∂) is. So one such sequence is the so-called classical Goppa codes, and I want to say that that's from 1956, so just a little bit later. And those codes were the best-known codes from this point of view for about 30 years. Okay, so let me just say that again. So the Gilbert-Varshamov bound says that there's a sequence of codes with R at least 1−Hq(∂). The Goppa codes satisfy R = 1−Hq(∂). And for 30 years, we couldn't find any codes with R greater than.
EL: That were better than that.
JW: Right. That were greater than this 1−Hq(∂).
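For reference, the q-ary entropy function mentioned here is Hq(x) = x·logq(q − 1) − x·logq(x) − (1 − x)·logq(1 − x), and the Gilbert-Varshamov bound guarantees codes with R at least 1 − Hq(∂). A small Python sketch; the sample values of q and ∂ are chosen arbitrarily.

import math

def q_ary_entropy(x, q):
    # Hq(x) = x log_q(q - 1) - x log_q(x) - (1 - x) log_q(1 - x), for 0 < x < 1
    log_q = lambda t: math.log(t, q)
    return x * log_q(q - 1) - x * log_q(x) - (1 - x) * log_q(1 - x)

def gilbert_varshamov_rate(delta, q):
    # The rate the Gilbert-Varshamov bound guarantees at relative distance delta
    return 1 - q_ary_entropy(delta, q)

for delta in (0.1, 0.2, 0.3):
    print(f"q = 64, delta = {delta}: GV guarantees R >= {gilbert_varshamov_rate(delta, 64):.3f}")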
KK: Okay.
JW: So people at this point were starting to think that maybe the Gilbert-Varshamov bound wasn't a bound as much as it was the true value of how good can R be given ∂, how efficient can codes be given given their relative minimum distance. So this is where this Tsfasman-Vladut-Zink theorem comes in. So in 1978—and Kevin, now we can talk about algebraic geometry. I know you’ve been waiting for that.
KK: All right, awesome.
JW: Yes. Right. So in 1978, Goppa defined algebraic geometry codes. So the way that definition works: remember, a code is just a subspace of Fq^n, right? So how are we going to get a subspace of Fq^n? Well, what we're going to do is we're going to take a curve defined over Fq that has a lot of rational points, Fq-rational points, right? So we're going to take one of those points and take a multiple of it and call that our divisor on the curve. And then we're going to take the rest of them. And we're going to take the rational functions in this space L(D). D is our divisor, right? So these are the functions that only have poles at this chosen point, of multiplicity at most the degree that we've chosen.
KK: Okay.
JW: And we're going to evaluate all those functions at all the rest of those points. So remember, those functions form a vector space, and evaluation is a linear map. So what we get out is a vector space. So that's our code. And if we make some assumptions, so if we assume that the degree of that divisor, so that multiplicity that we've chosen, is bigger than twice the genus of the curve minus 2, then Riemann-Roch kicks in, and we can compute the dimension of L(D). But if we also assume that that degree is less than the number of points that we're evaluating at, then the map is injective. And so we have exactly what the dimension of the code is. The dimension of the code is the degree of the divisor, so that multiplicity that we chose, plus 1 minus the genus. And the minimum distance, it turns out, is at least n minus the degree of the divisor. So lots of symbols, lots of everything.
EL: Yeah, trying to hold this all in my mind, without you writing it on the board for me!
JW: I know, I’m sorry. But when you put it all together, and you normalize out by dividing by the length, what you get is that if you have a family of curves with increasing genus, and an increasing number of rational points, then we can end up with a family of codes, so that in the limit, R, our information rate, is at least 1−∂—that’s that relative minimum distance—minus the limit of the genus divided by the number of rational points. Okay. So g [the genus] and n are both growing. And so what's that limit? So that's that was Goppa’s contribution. I mean, not his only contribution. But that's the contribution of Goppa I want to talk about, just that definition of algebraic geometry code. So it's a pretty cool definition. It’s a pretty cool construction. It’s kind of brand new in the sense that nobody was using algebraic geometry in this very engineering-motivated piece of mathematics.
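To collect the bookkeeping Walker just walked through: for an algebraic geometry code built from a curve of genus g, with n evaluation points and a divisor of degree m satisfying 2g − 2 < m < n, Riemann-Roch gives dimension k = m + 1 − g and minimum distance at least n − m, which is where the limit R ≥ 1 − ∂ − g/n comes from. A small Python sketch with made-up parameters:

def ag_code_parameters(n, g, m):
    # Designed parameters of an algebraic geometry (Goppa) code from a curve of
    # genus g with n rational evaluation points and a divisor of degree m,
    # assuming 2g - 2 < m < n so Riemann-Roch applies and evaluation is injective.
    k = m + 1 - g       # dimension
    d_designed = n - m  # guaranteed minimum distance
    return k / n, d_designed / n  # rate R and designed relative distance delta

# Made-up numbers, just to see the trade-off:
n, g, m = 1000, 50, 600
R, delta = ag_code_parameters(n, g, m)
print(f"R = {R:.3f}, delta >= {delta:.3f}, and 1 - delta - g/n = {1 - delta - g / n:.3f}")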
EL: Right.
JW: So here is algebraic geometry, here is a way of defining codes, and the question is, are they any good? And it really depends on what—how fast can the number of points grow, given how fast the genus is growing? So what Drinfeld and Vladut proved—so this is not the TVZ theorem, not my favorite theorem, but one more theorem to get there—Drinfeld and Vladut proved that if you define Nq(g) to be the maximum number of Fq-rational points on any curve over Fq of genus g, then as you let g go to infinity, and for a fixed q, the limit superior, the lim sup, of the ratio Nq(g)/g is at most √q − 1. Equivalently, in the limit, g/Nq(g) is at least 1/(√q − 1). Okay, fine. Why do we care? Well, the reason we care is that the Tsfasman-Vladut-Zink theorem, which is again my favorite theorem, it says—so actually, my favorite theorem is a corollary of the Tsfasman-Vladut-Zink theorem. So the Tsfasman-Vladut-Zink theorem says that if q is a square prime power, then there's a sequence of curves over Fq of increasing genus that meets the Drinfeld-Vladut bound.
EL: Okay.
JW: Okay, so the Drinfeld-Vladut bound said you can be at most this good. And Tsfasman-Vladut-Zink says, hey, you can do that.
EL: Yeah, it's sharp.
JW: So if we put it all together, then the Gilbert-Varshamov bound gave us this curve, right? So it was a concave-up curve that intersects the vertical axis, which is the R-axis, at 1, and the horizontal axis, which is the ∂-axis, at 1−1/q. So it's this concave-up thing that's just kind of curving out. Then the Tsfasman-Vladut-Zink line—the theorem gives you a line that looks like R = 1−∂−1/(√q−1). Right? So it's just a line of slope −1, right, with y-intercept 1−1/(√q−1). So the question is, does that line intersect that curve? And it turns out that if you have a square prime power q at least 49, then the line intersects the curve in two points.
EL: Okay.
JW: So what that is really doing for us is it's telling us that in that interval between those two points, we have an improvement on the Gilbert-Varshamov bound. We have better codes than we thought were possible for 30 years.
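Putting the two bounds side by side numerically makes the improvement visible. This Python sketch uses q = 64, a square prime power that is at least 49, and scans a grid for the interval of ∂ where the Tsfasman-Vladut-Zink line sits above the Gilbert-Varshamov curve; the choice of q and the grid spacing are arbitrary.

import math

def q_ary_entropy(x, q):
    log_q = lambda t: math.log(t, q)
    return x * log_q(q - 1) - x * log_q(x) - (1 - x) * log_q(1 - x)

def gv_rate(delta, q):
    # Gilbert-Varshamov: codes exist with R >= 1 - Hq(delta)
    return 1 - q_ary_entropy(delta, q)

def tvz_rate(delta, q):
    # Tsfasman-Vladut-Zink (q a square prime power): R >= 1 - delta - 1/(sqrt(q) - 1)
    return 1 - delta - 1 / (math.sqrt(q) - 1)

q = 64
grid = [d / 1000 for d in range(1, 1000)]
better = [delta for delta in grid
          if delta < 1 - 1 / q and tvz_rate(delta, q) > gv_rate(delta, q)]
print(f"For q = {q}, TVZ beats GV for delta roughly between "
      f"{min(better):.3f} and {max(better):.3f}")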
EL: Wow!
JW: Yes. So that's my, that's my favorite theorem.
KK: I learned a lot.
EL: And where did you first encounter this theorem?
JW: In graduate school? Okay, in graduate school, which was not in 1982. It was substantially after that, but it was said to me by my advisor, “I think there's a connection between algebraic geometry and coding theory, go learn about that.”
KK: Oh.
JW: And I said, “Okay.”
KK: And so two years later.
JW: Right. Right, right. Actually, two years later, I graduated.
KK: Okay. All right. So you’re much faster than I am.
JW: Well, there was four years before that of doing other things.
EL: So was it kind of love at first sight theorem?
JW: Very much so. Because I mean, it's just so beautiful, right? Because here's this problem that nobody knew how to solve, or maybe everybody thought was solved. Because nobody had any techniques that could get any better than the Gilbert-Varshamov bound. And then here's this idea, just way out of left field saying, hey, let's use algebraic geometry to find some codes. And then, hey, let's look at curves with many points. And hey, that ends up giving us better codes than we thought were possible. It's really, really pretty. Right? It's why mathematicians are better than electrical engineers.
EL: Ooh, shots fired!
JW: Gauntlet thrown. I know.
EL: But it does make you wonder how many other things in math will eventually find something like this, like, will will find for these problems—you know, factoring integers or things like this— that we think are difficult, will someone swoop in with some completely new thing and throw it on its head?
JW: Yes. Exactly. I mean, I don't know anything about it. Maybe you do. But the idea that algebraic topology, right, is useful in big data.
KK: Yeah, sure. That's what I've been working on lately. Yeah. Right.
JW: I love that.
KK: Yeah. Sure.
JW: I love that. I don't know anything about it. But I love it.
KK: Well, the mantra is data has shape. Right? So let me just, you know, smack the statisticians here. So they want to put everything on a straight line, right? But a circle isn't a straight line. So what if your data’s a circle? So topology is very good at finding circles.
JW: Nice.
KK: Well, that's the mantra, at least. So yeah. All these unexpected connections really do come up. I mean, it's really—that’s part of why we keep doing what we're doing, right? I mean, we love it. But we never know what's out there. It's, you know, to boldly go where no one has gone before. Right?
JW: Exactly. And Evelyn, it's funny that you should bring up factoring integers, because you know that the form of cryptography that we use today to make it safe to use our credit cards on the internet, that’s very much at risk when quantum computers are developed.
EL: Right.
JW: And so, it turns out that algebraic geometry codes are not being used in practice, because LDPC codes and turbo codes are much more easily implementable. However, one of the very few known so far unbreakable methods for post-quantum cryptography is based on algebraic geometry codes.
KK: Excellent.
EL: Nice.
JW: So even if we can factor integers,
KK: I can still buy dog food at Amazon. Right?
JW: You can still shop at Amazon because of algebraic geometry codes.
EL: Yeah, the important things.
KK: That’s right.
EL: Well, so another thing we like to do on this podcast is invite our guests to pair their theorem with something, the way we would pair food with fine wines. So what have you chosen for this theorem?
JW: So that was very hard. Yeah. I mean, it's just kind of the most bizarre request.
EL: Yeah.
JW: So I mean, I guess the way that I think about this Tsfasman-Vladut-Zink theorem, I was looking for something that was just, you know, unexpected and exciting and beautiful. But I couldn't come up with anything. And so instead, what I'm going with is lemon zest.
KK: Okay.
EL: Okay.
JW: Which I guess can be unexpected and exciting in a dessert, but also because of the way that you just kind of scrape it off that curve of the lemon. And that's what the Tsfasman-Vladut-Zink theorem is doing, is it’s scraping off a little bit of that Gilbert-Varshamov curve.
KK: This is an excellent visual. I've got it. I zest lemons all the time. I understand now. This is it.
EL: Yeah.
JW: There you go.
KK: So all right. Well, we also like to give our guests a chance to plug anything. You wrote a book once. Is that still right? I have it on my shelf.
JW: Yeah. I did write a book once. So that book actually was—Yeah, so I wasn't going to plug anything, but I will plug the book a little bit, but more I'm going to plug a suite of programs. So the book is called, I think, Codes and Curves.
KK: That sounds right.
JW: You would think I would know that.
KK: I’d have to find it. But it is on my shelf.
JW: Yes. It's on mine too, surprisingly, which is right behind me, actually, if you have the video on.
So that book really just grew out of lecture notes from lectures I gave at the Program for Women and Mathematics at the Institute for Advanced Study. Okay, so I will take my opportunity to plug something to plug that program, to plug EDGE, to plug the Carleton program, to plug the Smith post-bac program, and to plug the Nebraska Conference for Undergraduate Women in Mathematics. So what do all these programs have in common? They have in common two things that are closely related. One is that they are all programs for women in mathematics. And the other is that they were all the subject of study of a recent NSF grant that I had with Ami Radunskaya and Deanna Haunsperger and Ruth Haas that studied what the most important or effective aspects of these programs are and how we can scale them.
EL: Oh, nice.
JW: Yes. And some of the results of that study, along with a lot of other information, are on our website. That is womendomath.org?
EL: I will be visiting it as soon as we get off this phone call.
JW: Right. Awesome. I hope it's functioning
KK: And because Judy won't promote herself, I will say, you know, she's been a significant leader in promoting programs for women in mathematics through the University of Nebraska’s math department there. There's a picture of her shaking Bill Clinton's hand somewhere.
JW: Well, that's also on my shelf. Okay. Yeah, I think it's online somewhere, too.
KK: Right. Their program won a national excellence award from the President. Really excellent stuff there at the University of Nebraska. Really a model nationally.
EL: Yeah, I’m familiar with that as one of the best graduate math programs for women.
JW: Thank you.
EL: Yeah. Great job!
EL: Yeah, well, we'll have links to all of those programs on the website. So if you didn't catch one, and you're listening, you can go to the website for the podcast and find all those. Yeah. Well, thank you so much for joining us, Judy.
JW: Thank you for the opportunity.
KK: Yeah, this has been great fun. Thanks.
JW: All right. Thank you.
On this episode, we were happy to talk with Judy Walker, who studies coding theory at the University of Nebraska. She told us about her favorite theorem, the Tsfasman-Vladut-Zink theorem. Here are some links to more information about topics we mentioned in the episode.
Goppa (algebraic geometry) code
Judy Walker’s book Codes and Curves
The Program for Women and Mathematics at the Institute for Advanced Study
The Carleton Summer Mathematics Program for women undergraduates
The Smith College post-baccalaureate program for women in math
The Nebraska Conference for Undergraduate Women in Mathematics (Evelyn will be speaking at the conference in 2020)
Evelyn Lamb: Hello, and welcome to My Favorite Theorem, a math podcast where there's no quiz at the end. I’m coming up with a new tagline for it.
Kevin Knudson: Good.
EL: I just thought I'd throw that in. Yeah, so I'm one of your hosts, Evelyn Lamb. I'm a freelance math and science writer from Salt Lake City—or in Salt Lake City, Utah, not originally from here. And here's your other host.
KK: I’m Kevin Knudson, professor of mathematics at the University of Florida in Gainesville, but not from Gainesville. This is part of being a mathematician, right? No one lives where they're from.
EL: Yeah, I guess probably a lot of professions could say this, too.
KK: Yeah, I don’t know. It’s also a sort of a generational thing, right? I think people used to just tend to, you know, live where they grew up, but now not so much. But anyway.
EL: Yeah.
KK: Oh, well, it's okay. I like it here.
EL: Yeah. I mean, it's great here right now it's spring, and I've been doing a ton of gardening, which always seems like such a chore and then I'm out smelling the dirt and looking at earthworms and stuff, and it's very nice.
KK: I’m bird watching like crazy these days. Yesterday, we went out and we saw the bobolinks were migrating through. They're not native here, they just come through for, like, a week, and then they're gone.
EL: The what?
KK: Bobolinks, B-O-B-O-L-I-N-K. They kind of fool you, they look a little bit like an oriole, but the orange is on the wrong side. It's on the back of the neck instead of underneath.
EL: Okay, I'll have to look up a picture of that later.
KK: And then this morning for the first time ever, we had a rose-breasted grosbeak at our feeder. Never seen one before and they're not native around here, they just migrate through. So this is
EL: Very nice. Yes.
KK: This is what I'm doing in my late middle age. This is what I do. I just took up bird watching, you know?
EL: Yeah. Well, I can see the appeal.
KK: Yeah, it's great.
EL: Yes. But we are excited today to be talking with Adriana Salerno. Do you want to introduce yourself?
Adriana Salerno: Hi. Yeah, I'm Adriana Salerno. Now I am an associate professor of math at Bates College in Maine. And I am also not from Maine. I live in Maine. I'm originally from Caracas, Venezuela, so quite a ways away.
EL: Yeah.
AS: Again, you don't choose where you live, but maybe you get to choose where you work. So that's nice.
EL: Yeah. And you're not only a professor there, but you're also the department chair right now, right?
AS: Oh, yeah. Yeah, I'm trying to forget. No, I’m kidding.
EL: Sorry!
KK: You know, speaking of, before we started recording here, I spent my afternoon writing annual faculty evaluations. I’m in my first year as chair. I have 58 of them to write.
AS: Oh, I don't have to do those, which I'm very happy about. But we are hiring a staff position, and I'm in charge of that. And that's been a lot.
EL: And we actually met because both of us have done this mass media fellowship for people interested in math or science and writing. And so you've done a lot of writing not for mathematicians as well, throughout your career path.
AS: Yeah, yeah. I mean, I did the mass media fellowship in 2007. And since then, I've been trying to write more and more about mathematics for a general audience. These days, I mostly spend time writing for blogs for the AMS. And right now I'm editing and writing for inclusion/exclusion. I wish I had more time to write than I do. It's one of those things that I really like to do, and I don't think I do enough of, but these opportunities are great because I get to use those—or scratch that itch, I guess, by talking to you all.
EL: Yes.
KK: Well, so speaking of, we assume you have a favorite theorem that you want to tell us about. What is it?
AS: Well, so it's always hard to decide, right? But I guess I was inspired by a conversation I had with Evelyn at the Joint Math Meetings. So I've decided my favorite theorem is Cantor's diagonalization argument that the real numbers do not have the same cardinality as the natural numbers.
EL: Yes, and I’m so excited about this! Ever since we talked at the Joint Meetings, I’ve been very excited about getting you to talk about this.
AS: Good. Good.
EL: Because really, it’s such a great theorem.
AS: Yeah. Well, I was thinking about it today. And I'm like, how am I going to explain this? But I have chosen that, and I'm sticking with it. Yeah.
EL: Yes.
KK: Good.
AS: So yeah, one of the coolest things about it is that it’s this first experience that you have as a math student—at least it was for me—where you realize that there are different sizes of infinity. And so another way of saying that is that this theorem shows, without a doubt, I believe—although some students still doubt me after we go over it—it shows that you can have different sizes of infinity. And so the first step, even, is to say, “How do you decide if two things have the same size of infinity?” Right? And so it's a very, very lovely sort of succession of ideas. And so the first thing is, how do you decide that two things are the same size? Well, if they're finite, you count them, and you see that you have the same number of things. But even when things are finite—and say, you're a little kid, and you don't know how to count—another way of saying there's the same number of things is if you can match them up in pairs, right? So you know, if you want to say I have the same number of crayons as I have apples, you can match a crayon to an apple and see that you don't have anything left over, right?
EL: Yeah.
AS: And so it's just a very natural idea. And so when you think about infinite sets—or not even infinite sets—but you can think of this idea of size by saying two things are the same size if I can match every element in one set to every element in another set, just one by one. And so I really like, I'm borrowing from Kelsey Houston-Edwards’ PBS show, but what I really like is that she said you have two sets, and every element has a buddy, right? And so then I love that language, and so I'm borrowing from her. But then that works for finite sets, but you can extend it to an infinite set. You can say, for example, that two infinite sets are the same size if I can find a matching between every element in the first set and every element in the second set. It’s very hard to picture in your head, I think, but we're going to try to do this. So for example, you can say that the natural numbers, the counting numbers, 1, 2, 3, 4, etc., have the same size as the even numbers, because you can make a matching where you say, “Match the number 1 with the number 2 on the other side. And then the number 2 with the number 4 on the other side.” And you have all the counting numbers, and for every counting number, you have two times that number as the even buddy.
EL: Yeah. And I think this is, it's a simple example that you started with, but it even hints at the weirdness of infinity.
AS: Yeah.
EL: You’ve got this matching, but the even numbers are also a subset of the natural numbers. Ooh, things are going to get a little weird here.
KK: Clearly, there aren’t as many even numbers, right?
AS: Yeah.
KK: This is where you fight with your students all the time.
AS: That’s exactly—so when you're teaching this, the first thing you do is talk about things that have the same cardinality. And then everybody, it can take a while, you know, because infinity is so weird, comes around to the idea that you can actually do these matchings. So Hilbert’s infinite hotel is a really great way of doing this sort of more conceptually. So you have infinitely many rooms. And so for example, suppose that the rooms are numbered 1, 2, 3, and so on, to infinity. I mean, you have to be careful because infinity is not a number. You have to be careful with that. But say that all the rooms are occupied. And so then, you know, say someone shows up in the middle of the night, and they say, “I need a room.” And so what you do if you're the hotel manager is you tell everyone to move one room over. And so everyone moves one room over and you put this person in room number one. And so that's another way of seeing that. So the one-to-one pairing, or the matching here, is every person has a room. And so the number of rooms and the number of people are the same—the word is cardinality, because you don't want to say number, because you can't count that.
KK: Right.
AS: And so you say cardinality instead. But it's really weird, right? Because the first time you think about this, you say, “Well, you know, there's infinity, and there's infinity plus one.” That's the kind of thing that you would say as a kid, right? And they're the same! When you have the natural numbers, and the natural numbers and one extra thing, or like with zero, for example—unless you're in the camp that says zero is an actual number—but we're not going to get to that discussion right now.
KK: I’m camp zero is a natural number.
AS: Okay. I feel like maybe half the people I know say zero is a natural number and the other half say it's not. And I don't think anyone has good arguments other than, ah, it must be true! And so then the cool thing is, once you start doing that, then you start seeing, for example—and these are kind of tricky examples, it can get tricky. Like, you can say that the integers, the positive whole numbers, negative whole numbers and zero, that also has the same cardinality as the natural numbers. Because you can just start with zero—I mean, basically, when you want to say that something has the same cardinality as the natural numbers, what you're really trying to do is find a buddy, so you're trying to pair someone with one, or two, or three. But really, what you can do is just list them in order, right? Like you can have the first one, the second one, the third one, the fourth one, and you know that that's a good matching. It's like the hotel. You can put everyone in a room. And then you know they're the same number. Everyone has a room. So with the integers, for example, the whole numbers, positive, negative and zero, then you can say, “Okay, put zero first, then one, then negative one, then two, then negative two, then three, then negative three,” and then they're the same size, right? And so once you start thinking about this—I remember this pretty clearly from college—once you start thinking about this, then you're like, “Well, obviously, because infinity is infinity.” That's the next step. So the first step is like, well no, infinity plus one and infinity are different. But then you get convinced that there is a way of matching things where you can get things that seem pretty different, or a subset of a set, and they have the same cardinality. And then you go the other direction, which is, “Well, of course, anything infinite is going to be the same size as anything else that's infinite.” And so then it turns out that even the rationals are the same size as the natural numbers. And that's way more complicated than we have time for. But if you add real numbers, meaning irrationals as well, then you have a whole different situation.
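The matchings Salerno describes, the counting numbers with the evens and the counting numbers with all the integers, can be written down explicitly. A small Python sketch; the two functions below just encode the orderings mentioned in the conversation.

def even_buddy(n):
    # Matches the counting number n with the even number 2n
    return 2 * n

def integer_buddy(n):
    # Lists the integers as 0, 1, -1, 2, -2, 3, -3, ... and matches the
    # counting number n with the n-th entry of that list
    if n % 2 == 0:
        return n // 2
    return -((n - 1) // 2)

print([even_buddy(n) for n in range(1, 8)])     # [2, 4, 6, 8, 10, 12, 14]
print([integer_buddy(n) for n in range(1, 8)])  # [0, 1, -1, 2, -2, 3, -3]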
KK: You do indeed.
AS: It’s mind-blowing, right? And so if you just think about the real numbers between zero and one, just to go real simple. I mean, small, relatively. So you're just looking at decimal expansions. And so if those numbers had the same cardinality as the natural numbers, then you should be able to have a first one and a second one and a third one and a fourth one. Or you can pair one number with the number 1, one number with the number 2, etc. And that list should be complete, and in the words of Kelsey Houston-Edwards, everyone should have a buddy. And so then, here's the cool thing: this is a proof that these two sizes of infinity are not the same, and it's a proof by contradiction, which is, again, your favorite proof when you are learning how to prove things. I mean, when I was learning proofs, I wanted to do everything by contradiction. So proving something by contradiction means you want to assume, “Well, what if we can list all the real numbers?” There’s a first one, a second one, a third one, etc. So Cantor’s amazing insight was that you can always find a number that was not on that list. Every time you make this list: a first one, a second one, a third one… there is some missing element.
And so you line up all your decimals. So you have the first number in decimal. And so you have like, you know, 0.12345… or something like that. And then you have the next one. And the next one. And like, I mean, this is really hard to do verbally, but we're going to do it. And so you sort of line them up, and you have infinite decimals. So you have point, a whole bunch of decimals, point, a whole bunch of decimals. And so you can make a missing number by taking that first number's first decimal place and just changing that digit. Okay, so if it was a 1, you write down a 2. And so you know, because we’ve known how to compare decimals since we were little kids, that what you need to compare is decimal place by decimal place. So these are different because they're different in this one spot, right? And then you go to the second number, and the second decimal point. And then you say, “Well, whatever number I see there, I'm going to make the second decimal point of my new number different.” So if you had a 3, you change it to a 4, whatever it is, as long as it's not the original number. And this is why it's called the diagonalization argument, or the diagonal argument, because you have lined all those numbers up, and you can go through the diagonal, and for each one of those decimal points, at each decimal place, you just change the value. And what you're going to get is a number, another real number, infinitely many decimals, and it's going to be different from every number on your list, just by virtue of how you made it. And so then, what that shows is that the answer to the “what if” is: you can’t. If you have a list of all real numbers, it's not complete. So there is never going to be a way that you can make that list complete. And this is the part where every time I tell my students, at some point, they're like, “Wait, there are different sizes of infinity? What?” Then—and that’s sort of lovely, because it's just this mind-blowing moment where you've convinced yourself, by this point, that infinity is infinity, and then you realize that there's something bigger than the cardinality of the natural numbers. And then it's really fun when you tell them, “Well, is there something in between?” They’re like, “Of course! There must be!” And then you're like, “Wait, no one knows.”
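Here is a small Python sketch of the diagonal step on a finite piece of a purported list, just to make “go down the diagonal and change each digit” concrete. The listed digit strings are arbitrary, and a real proof works with full infinite decimal expansions, so this is only an illustration.

def diagonal_missing(listed):
    # Given digit strings for the first decimals of a purported list of reals in
    # [0, 1], build a number whose i-th decimal differs from the i-th entry's
    # i-th decimal, so it cannot appear anywhere on the list.
    new_digits = []
    for i, digits in enumerate(listed):
        d = int(digits[i])
        new_digits.append(str(d + 1 if d < 8 else 1))  # change the digit, avoiding 9s
    return "0." + "".join(new_digits)

purported_list = ["12345", "33333", "90210", "27182", "14142"]  # arbitrary digits
print(diagonal_missing(purported_list))  # 0.24313, differing from entry i in place i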
KK: Maybe not.
EL: Yeah.
AS: So yeah, I just love that argument. And I love how simple it is. And at the same time, it's simple, but it's very, very deep, right? You really have to understand how these numbers match up with each other. And it requires a big leap of imagination to just think of doing this and realizing that you could make a number that was not on this infinite list by just doing that simple trick.
EL: Yeah.
AS: And so I just think it's a really, really beautiful theorem. And then I also have a really personal connection to this theorem. But it's one of my favorite things to teach. And I'm going to be teaching it this term, and I’m really looking forward to seeing how that lands. Sometimes it lands really well. Sometimes people are like, “Eh, you’re just making stuff up.” Yeah.
EL: Yeah.
KK: Well, then you can really blow their minds then when you show them the Cantor set, right?
AS: Yeah, yeah.
KK: And say, “Well, look, I mean, here's this subset of the reals that has the same cardinality, but it's nothing.”
AS: Exactly. Yeah, there's nothing there. Yeah.
EL: Yeah. I remember, then, when I first saw this argument, really carefully talking myself through, “Like, okay, but what if I just added that number I just made to the end of the list? Why wouldn't that work?” And trying to go through, like, “Why can't I—Oh, and then there must be other numbers that don’t fit on the list either.” It's not like we got within 1 of being the right cardinality.
AS: Right.
EL: For these infinite numbers. So yeah, it's a really cool idea. But you said you had some personal connections to this. So do you want to talk more about those?
AS: Sure. So I am from Venezuela, and I went to college there. And I liked college, it was fine. I knew—Well, one thing that you do have to decide when you're a student in high school is, you don't really apply to college, you apply to a major within the college. And so then I knew I wanted to do math. And I signed up for math at a specific university. And so then the first year was very similar to what you would do in the States, which is sort of this general year where everybody's taking calculus, or everybody's taking—you have some subset of things that everybody takes. And then your second year, you start really going into the math major. And so this was my first real analysis class. This was my first serious proof-y class in my university. And we learned Cantor’s diagonalization argument, which was pretty early. But I loved this argument. I felt so mind-blown. You know, I was like, “This is why I want to do math,” you know, I was just so excited. And I knew I understood everything. And so I took the exam, and I got a horrible grade. And in particular, I got zero points on the “prove that the real and the natural numbers don't have the same cardinality.” And so I saw my exam, and I was really confused. And I went to the professor, and I said, “I really don't understand what's wrong with this problem. Could you help me understand?” Because I thought I understood this. And then—you know, that's a typical thing. I probably said it in a more obnoxious way than I remember now. But I felt like I was being pretty reasonable. I was not the kind of kid that would go up to my professors too often to ask for points. I really was like, “I don't know what I did wrong.” And especially because I felt like I really got it.
EL: Right.
AS: And so then he just looked at me and said, “If you don't understand what's wrong with this problem, you should not be a math major.” And that was it. That was the end of that conversation. Well, I still don't know what's wrong with this problem, and now you just told me I need to do something else. Just go do something else at a different school. Right? And I mean, I don't know that that was particularly sexist. But I do know that I was the only woman in that class, and I know that I felt it a lot. I think he probably would have said that—I really do think that he in particular would have said that to any student. I don’t think it was just me being female that affected that at all. But I do think that if I had been less stubborn about my math identity, I might have taken him up on that. But I was just like, “No, I'm going to show you!” And eventually I got an A in his class. He taught real analysis every semester, so I had to take the class with him every time, and at some point, I cracked his code. And he at some point respected me, and thought I deserved to be there. But he was just very old-fashioned. You know, I don't think it's even sexism. It's just very, very, like, this is how we do things. And then I went—eventually, I did talk to someone. I think it was a teaching assistant. And I was like, “I don't know what's wrong with this problem.” And he looked at it. And he said, “Well, here's the problem. When you were listing—so you needed to list all these generic numbers and their decimal expansions.” And I did: “Okay, the first number is point A1, A2, A3, etc. The second number is point B1, B2, B3, etc. The third one is point C1, C2, C3, etc., dot dot dot,” right? And he said, “You have listed 26 numbers. And that's not going to be an infinite list.” Right?
KK: That’s cheap.
AS: And I was just like, “Okay, but I got the idea, right?” I was like, “Okay, it's true.” He’s like, “The way you wrote it is incorrect.” And I'm like, sure.
EL: Sort of.
KK: I’ve written that same thing on a chalkboard.
AS: You know, this shows you—like, fine, you can be more careful, you can be more precise, but from this, you shouldn't be a math major? That’s pretty intense.
EL: Yeah.
AS: And I knew the mechanics, I knew what was supposed to be happening, I knew how to make the missing number, right? Like you just need A1: you change it to some other number, B2: you change it to some other number, C3: you change it to some other number. And so, I just thought—I mean, that was a moment where I was just literally told I should not be in math because I made a silly mistake. And it was a moment where I realized that—now looking back, I realize my math identity was pretty strong, because I just said, “Well, ask someone else to see what was wrong, and I'm not going to ask this guy anymore because it's clear what he thinks.”
EL: Yeah.
AS: And sort of the stubbornness of, “Well, I’ll show him that I do deserve to be here.” But I think of all the students who might have taken classes with him, who would have heard that and then been like, “Yeah, maybe I need to do something else.” I mean, it just makes me really sad to hear, especially now that I'm a professor, and teaching these kinds of things. It just makes me sad to see which people were just scared away by someone like that, you know?
EL: Yeah.
AS: So that was a big moment for me. Yeah.
EL: Yeah. Quite a disproportionate response to what’s basically a bookkeeping difficulty.
AS: Yeah.
EL: So, you know, we like to get our mathematicians to pair their theorems with something on this show. And what have you chosen as your pairing for Cantor's diagonalization argument?
AS: Well, now that you suggested music and other things, I'm maybe changing my mind.
EL: You could pair more than one thing.
AS: I was trying to find something that was just like—I need to sort of express the sort of mind-blowing nature of this, right? And so I was like, a tequila shot! You know, really just strong. And like, “Whoa, what just happened?” And so that was one thing that I thought about. And then—I don't know, just mind-blowing experiences, like, when I saw the Himalayas from an airplane, or when—you know, there are some moments where you're just like, “I can't believe this exists.” I can't believe this is a thing that I get to experience. So I guess, you know, there's been—most of these have been with traveling, where you just see something that you're just like, “I can't believe that I get to experience this.” And so I think Cantor's diagonalization argument is something like that, like seeing this amazing landscape where you're just like, “How does this even exist?”
EL: Yeah, I like that. I mean, I've had that experience looking out of airplane windows too. One time we were just flying by the coast of Greenland. And these fjords there. Of course, an airplane window is tiny and it's not exactly high-definition picture quality out of the thick plastic there, but it just took my breath away.
AS: Yeah.
EL: Yeah, I like that. And we can even invite our listeners to think of their own mind-blowing favorite experiences that they've had. Hopefully legal experiences in their jurisdiction.
KK: Well, oh wait, it's not 4/20 anymore. Oh, well. So we also like to invite our guests to plug anything they want to plug. So you write for the AMS, the inclusion/exclusion blog, are there other places where we might find your mathematical writing for the general public?
AS: Well, that's my main plug and outlet right now. But I do write for the MAA Focus magazine sometimes. That's sort of my main one, and sometimes the AWM newsletter. So you might find some of my writing there. And the blog. I mean, again, now that I'm chair and doing a lot of other things, I'm not writing as much, but I definitely like to—I’ve gotten really into, and maybe this is a weird plug, storytelling.
EL: Oh yeah, you’ve been on Story Collider?
AS: Yeah, I was on one Story Collider. I've done some of the local stuff. But you can find me on the internet telling stories about being a mathematician. Some of them about some pretty fantastic experiences, and some not so great experiences.
EL: Yeah. Okay. Yeah. Well, we'll link to your Twitter, and that can help people find you too.
AS: Oh, yeah. Cool.
EL: Thanks a lot for joining us.
AS: Yeah. Thanks for having me and listening to me ramble about infinity.
EL: Oh, I just love this theorem so much.
KK: Yeah, we could talk about infinity all day. Thanks, Adriana.
AS: Yeah. Thank you so much.
We were excited to have Bates College mathematician Adriana Salerno on the show. She is also the chair of the department at Bates and a former Mass Media Fellow (just like Evelyn). Here are some links you might enjoy along with this episode.
Salerno on Twitter
AAAS Mass Media Fellowship for graduate students in math and science who are interested in writing about math and science for non-experts
Hilbert’s Infinite Hotel
Evelyn’s blog post about the Cantor set
Salerno’s StoryCollider episode
The inclusion/exclusion blog, an AMS blog about diversity, inclusion, race, gender, biases, and all that fun stuff
Kevin Knudson: 1-2-3
Kevin Knudson and Evelyn Lamb: Welcome to My Favorite Theorem!
KK: Okay, good.
EL: Yeah.
[Theme music]
KK: So we’re at the JMM.
EL: Yeah, we’re here at the Joint Math Meetings. They’re in Baltimore this year. The last time I was at the Joint Meetings in Baltimore I got really sick, but so far I seem to not be sick.
KK: That’s good. You’ve only been here a couple of days, though.
EL: Yeah. There’s still time.
KK: Yeah, so I’ve only been to the Joint Meetings one other time in my life, 20 years ago as a postdoc in Baltimore. I’ve just got a thing for Baltimore, I guess.
EL: Yeah, I guess so.
KK: So people may have seen this on Twitter. Fun fact: this is our first time meeting in person.
EL: Yeah.
KK: And you’re every bit as charming in real life as you are over video.
EL: And you’re taller than I expected because my first approximation of all humans is that they are my height, and you are not my height.
KK: But you’re not exceptionally short.
EL: No.
KK: You’re actually above average height, right?
EL: I’m about average for a woman, which makes me below average for humans.
KK: Well, if we’re going to the Netherlands, for example, I’m below average for the Netherlands.
EL: Yes.
KK: So I’m actually leaving today. I was only here for a couple of days. I was here for the department chairs workshop. You’re here through when?
EL: I’m leaving on Friday, tomorrow. Yeah, while we’ve been here we’ve been collecting flash favorite theorems where people have been telling us about their favorite theorem in a small amount of time. So yeah, we’re excited to share those with you.
KK: Yeah, this is going to be a good compilation. I’m going to try to get a couple more before I leave town. We’ll see what happens.
EL: Yeah. All right.
KK: Enjoy.
EL: I am here with Eric Sullivan. Can you tell us a little bit about yourself?
Eric Sullivan: Yeah, I'm an associate professor at Carroll College in Helena, Montana, lover of all things mathematics.
EL: And here with me in the Salt Lake City Airport, I assume catching a connecting flight to the Joint Math Meetings.
ES: You got it.
EL: All right, and what is your favorite theorem, or the favorite theorem you'd like to tell me about right now?
ES: Oh, I have many favorite theorems, but the one that's really coming to mind right now, especially since I'm teaching complex analysis this semester, are the Cauchy-Riemann equations.
EL: Very nice.
ES: Giving us a beautiful connection between analytic functions, and ultimately, harmonic functions. Really lovely. And it seems like a mystery to my students when they first see it, but it's beautiful math.
EL: Yeah, it is. They are kind of mysterious, even after you've seen them for a while. It's like, why does this balance so beautifully?
ES: Right? And the way you get there with the limit, so I'm just going to take the limit going one way, then I’ll take the limit going the other way and voila, out comes these beautiful partial differential equations.
EL: Yeah, very lovely. And I know I'm putting you on the spot. But do you have a pairing for this theorem?
ES: Ooh, a pairing? Oh boy, something with a very complex taste. Maybe chili.
EL: Okay.
ES: I’ll say chili because there's all sorts of flavors mixed in with chili, and complex analysis seems to mix all sorts of flavors together.
EL: All right, I like it. Well, thank you. This is the first lightning My Favorite Theorem I'm recording so far at the Joint Meetings, or even before, on the way, so yeah, thanks for joining me.
Courtney Gibbons: I'm Courtney Gibbons. I'm a professor at Hamilton College in upstate New York. And my favorite theorem is Hilbert’s Nullstellensatz, which translates to zero point theorem, but if you run it through Google Translate, it's actually quite beautiful. It's like the “empty star theorem” or something like that. It's very astronomical. And I love this theorem because it's one of those magical theorems that connect one area that I love, algebra, to another area that I don't really understand, but would like to love, geometry. And I find that in my classes, when I ask someone, “What's a parabola?” I have a handful of students who do some sort of interpretive dance. And I have a handful of students who are like, “Oh, it's like y equals some x squared stuff.” And I'm like, “I'm with you.” I think of the equation. And some people think of the curve, the plot, and that's the geometric object, and the Nullstellensatz tells you how to take ideals and relate them to varieties. So it connects algebra and geometry. And it's just gorgeous, and the proof is gorgeous, and everything about it is wonderful, and David Hilbert was wonderful. And if I were going to pair it with something, I’d probably pair it with a trip to an observatory, so that you could go appreciate the beauty of the stars, and think about the wonderful connectedness of all of mathematics and the universe. And maybe you should have, like, a beer or something too.
EL: Why not?
CG: Yeah. Why not? Exactly.
EL: Good. Well, thank you. Absolutely.
KK: All right, JMM flash theorem time. Introduce yourself, please.
Shelley Kandola: Hi. My name is Shelly Kandola. I'm a grad student at the University of Minnesota.
KK: And it’s warmer here than where you are usually.
SK: Yeah, it's 15 degrees in Minnesota right now.
KK: That’s awful.
SK: Yeah.
KK: Well anyway, we’ve got to be quick here. What's your favorite theorem?
SK: The Banach-Tarski paradox.
KK: This is an amazing result that I still don't really understand and I can't wrap my head around.
SK: Yeah, you've got a solid sphere, a filled-in S2, and you can cut it into four pieces using rigid motions, and then put them back together and get two solid spheres that are the same size as the original.
KK: Well, theoretically, you can do this, right? This isn't something you can actually do, is it?
SK: Physically no, but with the power of group theory, yes.
KK: With the power of group theory.
SK: The free group on two generators.
KK: Why do you like this theorem so much?
SK: So I like it because it was the basis of my senior research project in college.
KK: It just seems so weird it was something you should think about?
SK: Yeah, it intrigued me. It's a paradox. And it's the first theorem I dove really deep into, and we found a way to generalize it to arbitrarily many dimensions with one tweak added.
KK: Cool. So what does one pair with the Banach-Tarski paradox?
SK: One of my favorite Futurama episodes. There's this one episode where there's a Banach-Tarski duplicator, and Bender jumps into the duplicator, and he makes two more, and he wants to build an army of himself.
KK: Sure.
SK: But every time he jumps in, the two copies that come out are half the size of the original. He ends up with an army of nanobots. It contradicts the whole statement of the paradox that you're getting two things back that are the same size as the original.
KK: Although an army of Benders might be fun.
SK: Yeah, they certainly wreak havoc.
KK: Don’t we all have a little inner Bender?
SK: Oh yeah. He's powered by beer.
KK: Well, thanks for joining us. You gave a really good talk this morning.
SK: Thanks.
KK: Good luck.
SK: Thank you for having me.
KK: Sure.
David Plaxco: My name is David Plaxco. I'm a math education researcher at Clayton State University. And my favorite theorem is really more of an exercise, I think most people would think. It's proving that the set of all elements in a group that conjugate with a fixed element is a subgroup of the group. I'll tell you why. Because in my dissertation, that exercise was the linchpin in understanding how students can learn by proving.
EL: Okay.
DP: So I was working with a student. He had read ahead in the textbook and knew that not all groups are commutative, so you can't always commute any two elements you feel like. And he generalized this to thinking about inverses. He didn’t think that every inverse was necessarily two-sided, which in a group it is. Anyway, so he was trying to prove that that set was a subgroup and came to this impasse because he wanted to left cancel and right cancel with inverses and could only do them on one side. And then he started to question, like, maybe I'm just crazy, like maybe you can use the same inverse on both sides. And then he proved it himself using associativity. So he made, like, what I call John’s lemma, he came up with this kind of side proof to show that, well, if you're associative and you have a right inverse and a left inverse, then those have to be the same. And then he came back and was able to left and right cancel at will with any inverse, and then proved that it was a subgroup, so through his own proof activity, he was able to change his own conceptual understanding about what it means to be an inverse, like how groups work, all these things, and it gave him so much more power moving forward. So that's how that theorem became my favorite theorem because it gave me insight into how individuals can learn.
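The side lemma Plaxco describes, that a left inverse and a right inverse of the same element must agree, comes down to one line of associativity. Written here in standard notation, with e the identity (this write-up is ours, not the student's):

```latex
\text{If } ba = e \text{ and } ac = e, \text{ then } b = be = b(ac) = (ba)c = ec = c.
```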
EL: Nice. And do you have a pairing for this theorem?
DP: My diploma. Because it helped me get it.
EL: That seems appropriate. Thanks.
DP: Thanks.
Terence Tsui: So I'm Terence, and I'm currently a final year undergraduate studying in Oxford. My favorite theorem is actually a really elegant proof of Euler’s identity on the Riemann zeta function. We all know that the Riemann zeta function is defined as the sum of 1/k^s, where k runs across all the natural numbers. But at the same time, Euler has given a really good other formulation: zeta of s is the same as the infinite product of 1/(1 − 1/p^s), where p runs across the primes. And then it's really interesting, because you see, on one hand, an infinite sum, and on the other hand, you have an infinite product. And it’s very rare that we see that infinite sums and infinite products actually coincide. And it holds for every s larger than 1. And that means that this beautiful, elegant identity actually holds for infinitely many values. And the most interesting thing about this theorem is that the proof of it can be done probabilistically, where we consider certain particular events, and we realize that the Riemann zeta series sum is actually equivalent to finding a certain intersection of infinitely many independent events, which is just an infinite product of the probabilities of certain events. And from that we have the Riemann zeta function equalling a particular infinite product. And I think that is something that is really out of our imagination, because not only does it link two things, a sum to an infinite product, but at the same time, the way that it's proved comes from somewhere we could not even imagine, which is from probability. So if I need to pair this theorem with something, I would say it’s like a spider web, because you can see that there are very intricate connections and that things connect to each other, but in the most mysterious ways.
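For reference, the two formulations Tsui compares, written out in the usual notation (the probabilistic proof he alludes to is not reproduced here):

```latex
\zeta(s) \;=\; \sum_{k=1}^{\infty} \frac{1}{k^{s}}
\;=\; \prod_{p\ \mathrm{prime}} \frac{1}{1 - p^{-s}},
\qquad s > 1.
```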
EL: Cool. Well, thanks.
TT: Thank you.
Courtney Davis: So Hi, I'm Courtney Davis. I am an associate professor at Pepperdine University out in LA.
EL: Okay. And I hear that we have a favorite model, not a favorite theorem from you.
CD: Yes. So I'm a math biologist. So I'm going to say the obvious one, which is SIR modeling, because it is the entry way into getting to do this cool stuff. It’s the way that I get to show students how to write models. It's the first model I ever saw that had biology in it. And it's something that is ubiquitous and used widely. And so despite being the first thing everyone learns, it's still the first thing everyone learns. And that's what makes it interesting to me.
EL: Yeah. And and can you kind of just sum up in a couple sentences what this model is, what SIR means?
CD: Yeah. So with SIR you are modeling the spread of disease from a susceptible (S) population through infected (I) and into recovered (R) or immune, and you can change that up quite a lot. There are a lot of different ways to do it. It's not one fixed model. And it's all founded on the very simple premise that when two individuals run into each other in a population, that looks like multiplication. And so you can take multiplication, and with that build all the interactions that you really need, in order to capture what's actually happening in a population that at least is well mixed, so that you have a big room of people moving around in it, for instance.
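A minimal Python sketch of the kind of SIR model Davis describes; the rates beta and gamma, the initial conditions, and the time step are invented illustrative values, not numbers from the episode:

```python
# A well-mixed SIR model: the S*I product is the "running into each other
# looks like multiplication" idea, with invented rates beta and gamma.
def sir_step(S, I, R, beta, gamma, dt):
    new_infections = beta * S * I * dt   # contacts between susceptible and infected
    recoveries = gamma * I * dt          # infected individuals recovering
    return S - new_infections, I + new_infections - recoveries, R + recoveries

S, I, R = 0.99, 0.01, 0.0                # fractions of the population (illustrative)
beta, gamma, dt = 0.3, 0.1, 1.0          # illustrative rates and a one-day time step
for day in range(160):
    S, I, R = sir_step(S, I, R, beta, gamma, dt)
print(round(S, 3), round(I, 3), round(R, 3))   # S + I + R still sums to 1
```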
EL: Okay. And I'm going to spring something on you, which is that usually we pair something with our theorem, or in this case model, so we have our guests, you know, choose a food, beverage, piece of art, or anything. Is there anything that you would suggest that pairs well with SIR?
CD: With an SIR model, I would say, a paint gun.
EL: Okay.
CD: I don't know that that's what you're looking for.
EL: That’s great.
CD: Simply because running around and doing pandemic games or other such things is also a common way to get data on college campuses so that you can introduce students, and they can parameterize their models by paint guns or water guns or something like that.
EL: Oh, cool. I like it. Thank you.
CD: Absolutely. Thank you.
Jenny Kenkel: I’m Jenny Kenkel. I'm a graduate student at the University of Utah. I study commutative algebra. My favorite theorem is this isomorphism between a particular local cohomology module and an injective module: The top local cohomology of a Gorenstein ring is isomorphic to the injective hull of its residue field. But I was thinking that maybe it would pair really well with like, a dark chocolate and a sharp cheddar, because these two things are isomorphic, and you would never expect that. But then they go really well together, just in the same way that I think a dark chocolate and a sharp cheddar seem kind of like a weird pairing, but then it's amazing. Also, they're both beautiful.
EL: Nice, thank you.
JK: Thank you.
Dan Daly: My name is Dan Daly. And I am the interim chair of the Department of Mathematics at Southeast Missouri State University.
KK: Southeast—is that in the boot?
DD: That is close to the boot heel. It's about two hours south of St. Louis.
KK: Okay. I'm a Cardinals fan. So I'm ready, we’ve got something here. So what's your favorite theorem?
DD: So my favorite theorem is actually the classification of finite simple groups.
KK: That’s a big theorem.
DD: That is a very big, big,
KK: Like 10,000 pages of theorem.
DD: At least
KK: Yeah. So what draws you to this? Is it your area?
DD: So I am interested in algebraic combinatorics, and I am generally interested in all things related to permutations.
KK: Okay.
DD: And one of the things that drew me to this theorem is that it's such an amazing, collaborative effort and one of the landmarks of 20th century mathematics.
KK: Big deal. Yeah.
DD: And, you know, it just to me, it seems such a such an amazing result that we can classify these building blocks of finite groups.
KK: Right. So what does one pair this with?
DD: So I think since it's such a collaborative effort, I'm going to pair it with the Louvre museum.
KK: The Louvre, okay.
DD: Because it's a collection of all of these different results that are paired together to create something that is really, truly one of a kind.
KK: I’ve never been. Have you?
DD: I have. It’s a wonderful place. Yeah. It’s a fabulous place. One of my favorite places.
KK: I’m going to wait until I can afford to rent it out like Beyonce and Jay Z.
DD: Yeah, right.
KK: All right, well thanks, Dan. Enjoy your time at the Joint Math Meetings.
DD: All right, thank you much.
Charlie Cunningham: My name's Charlie Cunningham. I'm a visiting assistant professor at Haverford College. And my area of research originally is, or still is, geometric group theory. But the theorem that I want to talk about is a little bit closer to set theory: I want to talk about the existence of solutions to Cauchy’s functional equation.
EL: Okay. And what is Cauchy’s functional equation?
CC: So Cauchy’s functional equation is a really basic sort of thing you can ask about a function. It's asking, all right, you take the real numbers, and you ask is there—what are the functions from the real numbers to the real numbers where if you add two numbers together, and then apply the function, it's the same thing as applying the function to both of those numbers and then adding them together?
EL: Okay. So kind of like your naive student’s idea of how a function should behave.
CC: Yes. Right. So this would come up in a couple of places. So if you’ve taken linear algebra, that's the first axiom of a linear function. It doesn't ask about the scaling part. It's just the additive part. And if you've done group theory, a fancy way to say it is that it's all the homomorphisms from the real numbers to themselves as an additive group. So the theorem, basically, is that, well, first of all, the question is, well, there are some obvious ones. There are all the functions where you just multiply by a fixed number, all the linear functions you’d know from linear algebra, like 2 times x, 3 times x, or π times x, any real number times x. So the question is, are there any others? Or are those the only functions that exist at all that satisfy this equation? And it turns out that the answer depends on the fundamental axioms you take for mathematics.
EL: Wow. Okay.
CC: Right. So the answer is, just to use a little bit of set theory, that if you are working in a set theory, which most mathematicians do, that has something called the axiom of choice in it, then the answer is no, there are lots and lots and lots of other functions that satisfy this equation, other than those obvious ones, but they're almost impossible to think about or write down. They're not continuous anywhere, they are not differentiable anywhere. They're not measurable, if anyone knows what that means. Their graphs, if you tried to draw them, are dense in the entire plane, which means any little circle you draw on the plane intersects the graph somewhere. They still pass the vertical line test. They’re still functions that are well-defined. And I really like this theorem. One reason is because it's a really great place for math students to learn that there isn't always one right answer in math. Sometimes the answer to very reasonably posed questions isn't true or false. It depends on the fundamental universe we’re working in. It depends on what we all sit down and agree are the starting rules of our system. And it's a sort of question where you wouldn't realize that those sorts of considerations would come up. It also comes up when I've asked linear algebra students: it's equivalent to the question, are both parts of the definition of a linear function actually necessary? We usually give them to you as two pieces: one, it satisfies this, and the other is scalars pull out. Do we actually need that second part? Can we prove that scalars pull out just from the first part? And this is the only way to prove the answer's no. It's a good exercise to try yourself to prove just from this axiom that rational scalars pull out, that any rational number has to pull out of that function. But real numbers, not necessarily. And these are the counterexamples. So it's a good place at that level, when you're first learning math, to realize that there are really subtle issues of what we really think truth means when we're beginning to have these conversations.
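For reference, a minimal sketch of the rational-scalars exercise Cunningham mentions (just the statement and the one-line idea; the notation is ours, not the episode's):

```latex
f(x+y)=f(x)+f(y)\ \ \forall x,y\in\mathbb{R}
\quad\Longrightarrow\quad
f(qx)=q\,f(x)\ \ \forall q\in\mathbb{Q},
```

since repeated addition gives f(nx) = n f(x) for whole numbers n, and then n · f((m/n)x) = f(mx) = m f(x) forces f((m/n)x) = (m/n) f(x). Real scalars need not pull out without extra hypotheses such as continuity, which is where the axiom-of-choice examples live.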
EL: Nice. And what is your theorem pairing?
CC: My theorem pairing, I'm going to pair it with artichokes.
EL: Okay.
CC: I think that artichokes also had a bad rap for a long time. You should also look at the artichoke war, if you've never heard of it, a great piece of history of New York City, and it took a long time for people to really understand that these prickly, weird-looking vegetables can actually be delicious if approached from the right perspective.
EL: Nice. Well, thank you.
Ellie Dannenberg: So I'm Ellie Dannenberg, and I am a visiting assistant professor at Pomona College in Claremont, California. And my favorite theorem is the Koebe-Andreev-Thurston circle packing theorem, which says that if you give me a triangulation of a surface, I can find you exactly one circle packing where the vertices of your triangulation correspond to circles, and an edge between two vertices says that those circles are tangent.
EL: Okay, so this seems kind of related to Voronoi things? Maybe I'm totally going in a wrong direction.
ED: So, I know that these are—so I don't think they're exactly related.
EL: Okay. Nevermind. Continue!
ED: Okay. But, right, it’s cool because the theorem says you can find a circle packing if I hand you a triangulation. But what is more exciting is you can only find one. So that's it.
EL: Oh, huh. Cool. All right. And do you have something that you would like to pair with this theorem?
ED: So I will pair this theorem with muhammara, which is this excellent Middle Eastern dip made from walnuts and red peppers and pomegranate molasses that is delicious and goes well with anything.
EL: Okay. Well, it's a good pairing. My husband makes a very good version. Yeah. Thank you.
ED: Thank you.
Manuel González Villa: This is Manuel González Villa. I'm a researcher at CIMAT [Centro de Investigación en Matemáticas] in Guanajuato, Mexico, and my favorite theorem is the Newton-Puiseux theorem. This is a generalization of the implicit function theorem, but for singular points of algebraic curves. That means you can parameterize a neighborhood of a singular point on an algebraic curve with a power series expansion, but with rational exponents, and the denominators of those exponents are bounded. The amazing thing about this theorem is that it’s very old. It goes back to Newton. But people still use it in research. I learned this theorem in Madrid, where I did my PhD, from a professor called Antonia Díaz-Cano. And I also learned from the topologist José María Montesinos how to apply this theorem. It has some high-dimensional generalizations for some types of singularities, which are called quasi-ordinary.
The exponents—so you get a power series, so you get an infinite number of exponents. But there is a finite subset of those exponents which are the important ones, because they codify all the topology around the singular point of the algebraic curve. And this is why this theorem is very important. And the book I learned it from is Robert Walker’s Algebraic Curves. And if you want a more recent reference, I recommend you to look at Eduardo Casas-Alvero’s book on singularities of plane curves. Thank you very much.
EL: Okay.
EL: Yeah. So can you introduce yourself?
JoAnne Growney: My name is JoAnne Growney. I'm a retired math professor and a poet.
EL: And what is your favorite theorem?
JG: Well, the last talk I went to has had me debating about it. What I was prepared to say an hour ago was that it was the proof by contradiction that the real numbers are uncountable, and Cantor's diagonal proof. I like proofs by contradiction because I kind of like to think that way: on the one hand, and then the opposite. But I just returned from listening to a program on math and art. And I thought, wow, the Pythagorean theorem is something that I use every day. And maybe I'm being unfair to take something about infinity instead of something practical, but I like both of them.
EL: Okay, so we've got a tie there. And have you chosen something to pair with either of your theorems? We like to do, like, a wine and food pairing or, you know, but with theorems, you know, is there something that you think goes especially well, for example a poem, if you’ve got one.
JG: Well, actually, I was thinking of—the Pythagorean theorem, and it's probably a sound thing, made me think of a carrot.
EL: Okay.
JG: And oh, the theorem about infinity, it truly should make me think of a poem, but I don't have a pairing in mind.
EL: Okay. Well, thank you.
JG: Thank you.
Mikael Vejdemo-Johansson: I’m Mikael Vejdemo-Johansson. I'm from the City University of New York.
KK: City University of New York. Which one?
MVJ: College of Staten Island and the Graduate Center.
KK: Excellent. All right, so we're sitting in an Afghan restaurant at the JMM. And what is your favorite theorem?
MVJ: My favorite theorem is the nerve lemma.
KK: Okay, so remind everyone what this is.
MVJ: So the nerve lemma says—well, it’s basically a family of theorems, but the original one as I understand it says that if you have a covering of a topological space where all the cover elements and all arbitrary intersections of cover elements are simple enough, then the intersection complex, the nerve complex of the covering that inserts a simplex for each nonempty intersection, is homotopy equivalent to the whole space.
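One common way to state the version Vejdemo-Johansson describes, added here for reference (the "good cover" formulation, where "simple enough" means contractible):

```latex
\text{If } \mathcal{U}=\{U_i\} \text{ is an open cover of } X \text{ such that every nonempty finite
intersection } U_{i_0}\cap\cdots\cap U_{i_k} \text{ is contractible, then the nerve } N(\mathcal{U})
\text{ is homotopy equivalent to } X.
```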
KK: Right. This is extremely important in topology.
MVJ: It fuels most of topological data analysis one way or another.
KK: Absolutely. Very important theorem. So what pairs well, with the nerve lemma?
MVJ: I’m going to go with cotton candy.
KK: Cotton candy. Okay, why is that?
MVJ: Because the way that you end up collapsing a large and fluffy cloud of sugar into just thick, chewy fibers if you handle it right.
KK: That's right. Okay. Right. This pairing makes total sense to me. Of course, I’m a topologist, so that helps. Thanks for joining us, Mikael.
MVJ: Thank you for having me.
Michelle Manes: I’m Michelle Manes. I'm a professor at the University of Hawaii. And my favorite theorem is Sharkovskii’s theorem, which is sometimes called period three implies chaos. So the statement is very simple. You have a weird ordering of the natural numbers. So 3 is bigger than 5 is bigger than 7 is bigger than 9, etc., all the odd numbers. And then those are all bigger than 2 times 3 is bigger than 2 times 5 is bigger than 2 times 7, etc. And then down a row, 4 times every odd number, and you get the idea. And then everything with an odd factor is bigger than every power of 2. And the powers of 2 are listed in decreasing order. So 2^3 is bigger than 2^2 is bigger than 2 is bigger than 1.
EL: Okay.
MM: So 1 is the smallest, 3 is the biggest, and you have this big weird array. And the statement says that if you have a continuous function on the real line, and it has a point of period n, for n somewhere in the Sharkovskii ordering, so put your finger down on n, it’s got a point of period everything less than n in that ordering. So in particular, if it has a point of period 3, it has points of every period, every integer. So I mean, I like the theorem, because the hypothesis is remarkable. The hypothesis is continuity. It's so minimal.
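Written out, the ordering Manes describes looks like this (standard notation; a continuous map of the real line with a point of period n also has points of every period appearing later in the ordering):

```latex
3 \succ 5 \succ 7 \succ \cdots \succ
2\cdot 3 \succ 2\cdot 5 \succ 2\cdot 7 \succ \cdots \succ
2^{2}\cdot 3 \succ 2^{2}\cdot 5 \succ \cdots \succ
2^{3} \succ 2^{2} \succ 2 \succ 1.
```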
EL: Yeah.
MM: And you have this crazy ordering. And the conclusion is so strong. And the proof is just really lovely. It basically uses the intermediate value theorem and pretty pictures of folding the real line back on itself and things like that.
EL: Oh, cool.
MM: So yeah, it's my favorite theorem. Absolutely.
EL: Okay. And do you have something that you would suggest pairing with this theorem?
MM: So for me, because when I think of the theorem, I think of the proof of it, which involves this, like stretching and wrapping and stretching and wrapping, and an intermediate value theorem, it feels very kinetic to me. And so I feel like it pairs with one of these kind of moving sculptures that moves in the wind, where things sort of flow around.
EL: Oh, nice.
MM: Yeah, it feels like a kinetic theorem to me. So I'm going to start with the kinetic sculpture.
EL: Okay. Thank you.
MM: Thanks.
John Cobb: Hey there, I’m John Cobb, and I'm going to tell you my favorite theorem.
EL: Yeah. And where are you?
JC: I’m at College of Charleston applying for PhD programs right now.
EL: Okay.
JC: Okay. So I picked one I thought was really important, and I'm surprised it isn't on the podcast already. I have to say it's Gödel’s incompleteness theorems, partly for personal reasons. I'm in a logic class right now that covers the mechanics of the actual proof. But when I heard it, I was becoming aware of the power of mathematics, and hearing the power of math to talk about its own limitations, mathematics about mathematics, was something that really solidified my journey into math.
EL: And so what have you chosen to pair with your theorems?
JC: Yeah, I was unprepared for this question. So I’m making up on the spot.
EL: So you would say your preparation was…incomplete?
JC: [laughing] I would say that! Man. I'll go with the crowd favorite pizza for no reason in particular.
EL: Well pizza is the best food and it's good with everything.
JC: Yeah.
EL: So that's a reason enough.
JC: Awesome. Well, thank you for the opportunity.
EL: Yeah, thanks.
Talia Fernós: My name is Talia Fernós, and I'm an associate professor at the University of North Carolina at Greensboro. My favorite theorem is Riemann’s rearrangement theorem. And basically, what it says is that if you have a conditionally convergent series, you can rearrange the terms in the series so that the series converges to your favorite number.
EL: Oh, yeah. Okay, when you said the name of it earlier, I didn't remember, I didn't know that was the name of the theorem. But yes, that's a great theorem!
TF: Yeah. So the proof basically goes as follows. So if you do this with, for example, the series which is 1/n times -1 to, say, the n+1, so that looks like 1-1/2+1/3-1/4, and so on. So when you try to see why this is itself convergent, what you'll see is that you jump forward 1, then back a half, and then forward a third, back a fourth, so if you kind of draw this on the board, you get this spiral. And you see that it very quickly, kind of zooms in or spirals into whatever the limit is.
So now, this is conditionally convergent, because if you sum just 1/n, this diverges. And you can use the integral test to show that. So now, if you have a conditionally convergent series, you will have necessarily that it has infinitely many positive terms and infinitely many negative terms, and that each of those series independently also diverges. So when you want to show that a rearrangement is possible, so that it converges to your favorite number, what you're going to do is, let's say that you're trying to make this converge to 1, okay? So you're going to add up as many positive terms as necessary, until you overshoot 1, and then as many negative terms as necessary until you undershoot, and you continue in this way until you kind of have again, this spiraling effect into 1. And now the reason why this does converge is that the fact that it's conditionally convergent also tells you that the terms go to zero. So you can add sort of smaller and smaller things.
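A minimal Python sketch of the greedy rearrangement Fernós describes, applied to the alternating harmonic series; the target value 1 and the cutoff of 10,000 terms are invented for illustration:

```python
# Greedily rearrange 1 - 1/2 + 1/3 - 1/4 + ... so the partial sums spiral in on
# a chosen target: add positive terms until you overshoot, negative until you undershoot.
target = 1.0
positives = (1.0 / n for n in range(1, 10**6, 2))    # 1, 1/3, 1/5, ...
negatives = (-1.0 / n for n in range(2, 10**6, 2))   # -1/2, -1/4, -1/6, ...
total = 0.0
for _ in range(10000):
    total += next(positives) if total <= target else next(negatives)
print(round(total, 4))   # close to 1.0; the over/undershoots shrink because terms go to 0
```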
EL: Yeah, and you you don't run out of things to use.
TF: Right.
EL: Yeah. Cool. And what have you chosen to pair with this theorem?
TF: For its spiraling behavior, escargot, which I don't eat.
EL: Yeah, I have eaten it. I don't seek it out necessarily. But it is very spiraly.
TF: Okay. What does it taste like?
EL: It tastes like butter and parsley.
TF: Okay. Whatever it’s cooked in.
EL: Basically. It's a little chewy. It's not unpleasant. I don't find it terribly unpleasant, but I don't
TF: I think it's a delicacy.
EL: Yeah. But I'm not very French. So I guess that's fair. Well, thanks.
TF: Sure.
This episode of My Favorite Theorem is a whirlwind of “flash favorite theorems” we recorded at the Joint Mathematics Meetings in Baltimore in January 2019. We had 16 guests, so we’ll keep this brief. Below is a list of our guests and their theorems with timestamps for each guest in case you want to skip around in the episode. We hope you enjoy this festival of theorem love as much as we enjoyed talking to all of these mathematicians!
1:58 Eric Sullivan from Carroll College in Montana loves the Cauchy-Riemann equations.
3:48 Courtney Gibbons from Hamilton College in New York loves Hilbert’s Nullstellensatz.
5:08 Shelley Kandola from the University of Minnesota loves the Banach-Tarski paradox.
7:20 David Plaxco from Clayton State University in Georgia loves a group theory exercise that helped him with his dissertation.
9:40 Terence Tsui from Oxford University in the UK loves a probabilistic proof of the equivalence of two forms of the Riemann zeta function.
12:14 Courtney Davis from Pepperdine University in California loves the SIR model.
14:25 Jenny Kenkel from the University of Utah loves the isomorphism between the top local cohomology of a Gorenstein ring and the injective hull of its residue field.
15:14 Dan Daly of Southeast Missouri State University loves the classification of finite simple groups.
16:42 Charlie Cunningham of Haverford College in Pennsylvania loves Cauchy’s functional equation.
20:38 Ellie Dannenberg of Pomona College in California loves the Koebe-Andreev-Thurston circle packing theorem.
22:15 Manuel González Villa of CIMAT (Centro de Investigación en Matemáticas) in Guanajuato, Mexico loves the Newton-Puiseux theorem.
24:08 JoAnne Growney, a retired math professor and current math poet, loves the Pythagorean theorem.
25:54 Mikael Vejdemo-Johansson of CUNY loves the nerve lemma.
27:27 Michelle Manes of the University of Hawaii loves Sharkovskii’s theorem.
29:38 John Cobb of the College of Charleston loves Gödel’s incompleteness theorems.
31:04 Talia Fernós of the University of North Carolina at Greensboro loves Riemann’s rearrangement theorem.
Evelyn Lamb: Hello, and welcome to My Favorite Theorem, a math podcast. I'm Evelyn Lamb. I'm a freelance math and science writer in Salt Lake City, Utah. And this is your other host.
Kevin Knudson: I’m Kevin Knudson, professor of mathematics at the University of Florida. How's it going?
EL: All right, yeah. Up early today for me. You know, you’re on the East Coast, and I'm in the Mountain Time Zone. And actually, when my husband is on math trips — sorry, if I'm on math trips on the East Coast, and he's in the mountain time zone, then we have, like, the same schedule, and we can talk to each other before we go to bed. I'm sort of a night owl. So yeah, it's early today. And I always complain about that the whole time.
KK: Sure. Is he a morning person?
EL: Yes, very much.
KK: So Ellen and I are decidedly not. I mean, I'd still be in bed, really, if I had my way. But you know, now that I'm a responsible adult chair of the department, I have to—even in the summer—get in here to make sure that things are running smoothly.
EL: But yeah, other than the ungodly hour (it’s 8am, so everyone can laugh at me), everything is great.
KK: Right. Cool. All right, I’m excited for this episode.
EL: Yes. And today, we're very happy to have Jim Propp join us. Hi, Jim, can you tell us a little bit about yourself?
Jim Propp: Yeah, I'm a math professor at UMass Lowell. My research is in combinatorics, probability, and dynamical systems. And I also blog and tweet about mathematics.
KK: You do. Your blog’s great, actually.
EL: Yeah.
KK: I really enjoy it, and you know, you're smart. Once a month.
EL: Yes. That was a wise choice. Most months, I think on the 17th, Jim has an excellent post at Math Enchantments.
KK: Right.
EL: So that’s a big treat. I think somehow I didn't realize that you did some things with dynamical systems too. I feel like I'm familiar with you in, like, the combinatorics kind of world. So I learned something new already.
KK: Yup.
JP: Yeah, I actually did my PhD work in ergodic theory. And after a few years of doing a postdoc in that field, I thought, “No, I'm going to go back to combinatorics," which was sort of my first love. And then some probability mixed into that.
KK: Right. And actually, we had some job candidates this year in combinatorics, and one of them was talking about—you have a list of problems, apparently, that's famous. I don't know.
JP: Oh, yes. Tilings. Enumeration of tilings.
KK: That’s right. It was a talk about tilings. Really interesting stuff.
JP: Yeah, actually, I should say, I have gone back to dynamical systems a little bit, combining it with combinatorics. And that's a big part of what I do these days, but I won't be talking about that at all.
EL: Okay. And what is your favorite theorem?
JP: Ah, well, I've actually been sort of leading you on a bit because I'm not going to tell you my favorite theorem, partly because I don't have a favorite theorem.
KK: Sure.
JP: And if I did, I wouldn't tell you about it on this podcast, because it would probably have a heavy visual component, like most of my favorite things in math, and it probably wouldn't be suited to the purely auditory podcast medium.
KK: Okay, so what are you gonna tell us?
JP: Well, I could tell you about one theorem that I like that doesn't have much geometric content. But I'm not going to do that either.
EL: Okay, so what bottom of the barrel…
JP: I’m going to tell you about two theorems that I like, okay, they’re sort of like twins. One is in continuous mathematics, and one is in discrete mathematics.
KK: Great.
JP: The first one, the one in continuous mathematics, is pretty obscure. And the second one, in discrete mathematics, is incredibly obscure. Like nobody’s named it. And I've only found it actually referred to, stated as a result in the literature once. But I feel it's kind of underneath the surface, making a lot of things work, and also showing resemblances between discrete and continuous mathematics. So these are, like, my two favorite underappreciated theorems.
EL: Okay.
KK: Oh, excellent. Okay, great. So what have we got?
JP: Okay, so for both of these theorems, the underlying principle, and this is going to sound kind of stupid, is if something doesn't change, it’s constant.
EL: Okay. Yes, that is a good principle.
JP: Yeah. Well, it sounds like a tautology, because, you know, doesn't “not changing” and “being constant” mean the same thing? Or it sounds like a garbled version of “change is the only constant.” But no, this is actually a mathematical idea. So in the continuous realm, when I say “something,” what I mean is some differentiable function. And when I say “doesn't change,” I mean, has derivative zero.
KK: Sure.
JP: Derivatives are the way you measure change for differentiable functions. So if you’ve got a differentiable function whose derivative is zero—let’s assume it's a function on the real line, so its derivative is zero everywhere—then it's just a constant function.
KK: Yes. And this is a corollary of the mean value theorem, correct?
JP: Yes! I should mention that the converse is very different. The converse is almost a triviality. The converse says if you've got a constant function, then its derivative is zero.
KK: Sure.
JP: And that just follows immediately from the definition of the derivative. But the constant value theorem, as you say, is a consequence of the mean value theorem, which is not a triviality to prove.
KK: No.
JP: In fact, we'll come back later to the chain of implications that lead you to the constant value theorem, because it's surprisingly long in most developments.
KK: Yes.
JP: But anyway, I want to point out that it's kind of everywhere, this result, at least in log tables— I mean, not log tables, but anti-differentiation tables. If you look up anti-derivatives, you'll always see this “+C” in the anti-derivative in any responsible, mathematically rigorous table of integrals.
EL: Right.
JP: Because for anti-derivatives, there's always this ambiguity of a constant. And those are the only anti-derivatives of a function that's defined on the whole real line. You know, you just add a constant to it; no other way of modifying the function will leave its derivative alone. And more generally, when you've got a theorem that says what all the solutions to some differential equation are, the theorem that guarantees there aren't any other solutions you aren't expecting is usually proved by appealing to the constant value theorem at some level. You show that something has derivative zero, you say, “Oh, it must be constant.”
KK: Right.
JP: Okay. So before I talk about how the constant value theorem gets proved, I want to talk about how it gets used, especially in Newtonian physics, because that's sort of where calculus comes from. So Newtonian physics says that if you know the initial state of a system, you know, of a bunch of objects—you know their positions, you know their velocities—and you know the forces that act on those objects as the system evolves, then you can predict where the objects will be later on, by solving a differential equation. And if you know the initial state and the differential equation, then you can predict exactly what's going to happen, the future of the system is uniquely determined.
KK: Right.
JP: Okay. So for instance, take a simple case: you’ve got an object moving at a constant velocity. And let's say there are no forces acting on it at all. Okay? Since there are no forces, the acceleration is zero. The acceleration is the rate of change of the velocity, so the velocity has derivative zero everywhere. So that means the velocity will be constant. And the object will just keep on moving at the same speed. If the constant value theorem were false, you wouldn't really be able to make that assertion that, you know, the object continues traveling at constant velocity just because there are no forces acting on it.
KK: Sure.
JP: So, kind of, pillars of Newtonian physics are that when you know the derivative, then you really know the function up to an ambiguity that can be resolved by appealing to initial conditions.
EL: Yeah.
KK: Sure.
JP: Okay. So this is actually telling us something deep about the real numbers, which Newton didn't realize, but which came out, like, in the 19th century, when people began to try to make rigorous sense of Newton's ideas. And there's actually a kind of deformed version of Newton's physics that's crazy, in the sense that you can't really predict things from their derivatives and from their initial conditions, which no responsible physicist has ever proposed, because it's so unrealistic. But there are some kind of crazy mathematicians who don't like irrational numbers. I won't name names. But they think we should purge mathematics of the real number system and all of these horrible numbers that are in it. And we should just do things with rational numbers. And if these people tried to do physics just using rational numbers, they would run into trouble.
EL: Right.
JP: Because you can have a function from the rational numbers to itself, whose derivative is zero everywhere—with derivative being defined, you know, in the natural way for functions from the rationals to itself—that isn't a constant function.
KK: Okay.
JP: So I don't know if you guys have heard this story before.
KK: This is making my head hurt a little, but okay. Yeah.
EL: Yeah, I feel like I have heard this, but I cannot recall any details. So please tell me.
JP: Okay, so we know that the square root of two is irrational, so every rational number, if you square it, is either going to be less than two, or greater than two.
KK: Yes.
JP: So we could define a function from the rational numbers to itself that takes the value zero if the input value x satisfies the inequality x squared is less than 2 and takes the value 1 if x squared is bigger than two.
EL: Yes.
JP: Okay. So this is not a constant function.
KK: No.
JP: Right. Okay. But it actually is not only continuous, but differentiable as a function of the
EL: Of the rationals…
JP: From the rationals to itself.
KK: Right. The derivative is zero, but it's not constant. Okay.
JP: Yeah. Because take any rational number, okay, it's going to have a little neighborhood around it avoiding the square—avoiding the hole in the rational number line where the square root of 2 would be. And it's going to be constant on that little interval. So the derivative of that function is going to be zero.
KK: Sure.
JP: At every rational number. So there you have a non-constant function whose derivative is zero everywhere. Okay. And that's not good.
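Propp's example, written out for reference (the function lives only on the rationals, which is exactly why the constant value theorem cannot see it):

```latex
f:\mathbb{Q}\to\mathbb{Q},\qquad
f(x)=\begin{cases} 0 & \text{if } x^{2}<2,\\ 1 & \text{if } x^{2}>2,\end{cases}
\qquad\text{so } f'(x)=0 \text{ for every } x\in\mathbb{Q},
\text{ yet } f \text{ is not constant.}
```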
KK: No.
JP: It’s not good. for math. It's terrible for physics. So you really need the completeness property of the reals in a key way to know that the constant value theorem is true. Because it just fails for things like the set of rational numbers.
EL: Right.
KK: Okay.
JP: This is part of the story that Newton didn't know, but people like Cauchy figured it out, you know, long after.
KK: Right.
JP: Okay. So let's go back to the question of how you prove the constant value theorem.
EL: Yeah.
JP: Actually, I wanted to jump back, though, because I feel like I wanted to sell a bit more strongly this idea that the constant value theorem is important. Because if you couldn't predict the motions of particles from forces acting on those particles, no one would be interested in Newton's ideas, because the whole story there is that it is predictive of what things will do. It gives us a sort of clockwork universe.
KK: Sure.
JP: So Newton's laws of motion are kind of like the rails that the Newtonian universe runs on, and the constant value theorem is what keeps the universe from jumping off those rails.
KK: Okay. I like that analogy. That’s good.
JP: That’s the note I want to end on for that part of the discussion. But now getting back to the math of it. So how do you prove the constant value theorem? Well, you told me you prove it from the mean value theorem. Do you remember how you prove the mean value theorem?
KK: You use Rolle’s theorem?
EL: Just the mean value theorem turned sideways!
KK: Sort of, yeah. And then I always joke that it’s the Forrest Gump proof. Right? You draw the mean value theorem, you draw the picture on the board, and then you tilt your head, then you see that it's Rolle’s theorem. Okay, but Rolle’s theorem requires, I guess what we sometimes call in calculus books Fermat’s theorem, that if you have a differentiable function, and you're at a local max or min, the derivative is equal to zero. Right?
JP: Yup. Okay, actually, the fact that there exists even such a point at all is something.
KK: Right.
JP: So I think that's called the Extreme Value Theorem.
KK: Maybe? Well, the Extreme Value Theorem I always think of as—well, I'm a topologist—that the image of a compact set is compact.
JP: Okay.
KK: Right. Okay. So they need to know what the compact sets of the real line are.
JP: You need to know about boundedness, stuff like that, closedness.
KK: Closed and bounded, right. Okay. You're right. This is an increasingly long chain of things that we never teach in Calculus I, really.
JP: Yeah. I've tried to do this in some honors classes with, you know, varying levels of success.
KK: Sure.
JP: There’s the boundedness theorem, which says that, you know, a continuous function is bounded on a closed interval. But then how do you prove that? Well, you know, Bolzano-Weierstrass would be a natural choice if you're teaching a graduate class, maybe you prove that from the monotone convergence theorem. But ultimately, everything goes back to the least upper bound property, or something like it.
KK: Which is an axiom.
JP: Which is an axiom, that’s right. But it sort of makes sense that you'd have to ultimately appeal to some heavy-duty axiom, because like I said, for the rational numbers, the constant value theorem fails. So at some point, you really need to appeal to the the completeness of the reals.
EL: Yeah, the structure of the real numbers.
KK: This is fascinating. I've never really thought about it in this much detail. This is great.
JP: Okay. Well, I'm going to blow your mind…
KK: Good!
JP: …because this is the really cool part. Okay. The constant value theorem isn't just a consequence of the least upper bound property. It actually implies the least upper bound property.
KK: Wow. Okay.
JP: So all these facts that this, this chain of implications, actually closes up to become a loop.
KK: Okay.
JP: Each of them implies all the others.
KK: Wow. Okay.
JP: So the precise statement is that if you have an ordered field, so that’s a number system that satisfies the field axioms: you've got the four basic operations of pre-college math, as well as inequality, satisfying the usual axioms there. And it has the Archimedean property, which we don't teach at the pre-college level. But informally, it just says that nothing is infinitely bigger or infinitely smaller than anything else in our number system. Take any positive thing, add it to itself enough times, it becomes as big as you like.
KK: Okay.
JP: You know, enough mice added together can outweigh an elephant.
KK: Sure.
EL: Yeah.
JP: That kind of thing. So if you've got an ordered field that satisfies the Archimedean property, then each of those eight propositions is equivalent to all the others.
KK: Okay.
JP: So I really like that because, you know, we tend to think of math as being kind of linear in the sense that you have axioms, and from those you prove theorems, and from those you prove more theorems—it's a kind of a unidirectional flow of the sap of implication. But this is sort of more organic, there's sort of a two-way traffic between the axioms and the theorems. And sometimes the theorems contain the axioms hidden inside them. So I kind of like that.
KK: Excellent.
JP: Yeah.
KK: So math’s a circle, it's not a line.
JP: That’s right. Anyway, I did say I was going to talk about two theorems. So that was the continuous constant value theorem. So I want to tell you about something that I call the discrete constant value theorem that someone else may have given another name to, but I've never seen it. Which also says that if something doesn't change, it's constant. But now we're talking about sequences and the something is just going to be some sequence. And when I say doesn't change, it means each term is equal to the next, or the difference between them is zero.
EL: Okay.
JP: So how would you prove that?
EL: Yeah, it really feels like something you don't need to prove.
KK: Yeah.
JP: If you pretend for the moment that it's not obvious, then how would you convince yourself?
KK: So you're trying to show that the sequence is eventually constant?
JP: It’s constant from the get-go, every term is equal to the next.
EL: Yeah. So the definition of your sequence is—or part of the definition of your sequence is—a sub n equals a sub n+1.
JP: That’s right.
EL: Or minus one, right?
JP: Right.
EL: So I guess you'd have to use induction.
KK: Right.
JP: Yeah, you’d use mathematical induction.
KK: Right.
JP: Okay. So you can prove this principle, or theorem, using mathematical induction. But the reverse is also true.
KK: Sure.
JP: You can actually prove the principle of mathematical induction from the discrete constant value theorem.
EL: And maybe we should actually say what the principle of mathematical induction is.
KK: Sure.
JP: Sure.
EL: Yeah. So that would be, you know, if you want to prove that something is true for, you know, the entire set of whole numbers, you prove it for the first one—for 1—and then prove that if it's true for n, then it's true for n+1. So I always have this image in my mind of, like, someone hauling in a chain, or like a big rope on a boat or something. And each pull of their arm is the next number. And you just pull it in, and the whole thing gets into the boat. Apparently, that's where you want to be. Yeah, so that's induction.
JP: Yeah. So you can use mathematical induction to prove the discrete constant value theorem, but you can also do the reverse.
EL: Okay.
JP: So just as the continuous constant value theorem could be used as an axiom of completeness for the real number system, the discrete constant value theorem could be used as an axiom for, I don't want to say completeness, but the heavy-duty axiom for doing arithmetic over the counting numbers, to replace the axiom of induction.
EL: Yeah, it has me putting in my mind, like, oh, how could I rephrase, you know, my standard induction proof—that at this point, kind of just runs itself once I decided to try to prove something by induction—like how to make that into a sequence, a statement about sequences?
JP: Yeah, for some applications, it's not so natural. But one of the applications we teach students mathematical induction for is proving formulas, right? Like, the sum of the first n positive integers is n times n+1 over 2.
KK: Right.
JP: And so we do a base case. And then we do an induction step. And that's the format we usually use.
KK: Right.
JP: Okay. Well, proving formulas like that has been more or less automated these days. Not completely, but a lot of it has been. And the way computers actually prove things like that is using something more like the discrete constant value theorem.
EL: Okay.
JP: So for example, say you've got a sequence whose nth term is defined as the sum of the first n positive integers.
KK: Okay.
JP: So it’s 1, 1+2, 1+2+3,…. Then you have another sequence whose nth term is defined by the formula, n times n+1 over 2.
KK: Right.
JP: And you ask a computer to prove that those two sequences are equal to each other term by term. The way these automated systems will work, is they will show that the two sequences differ by a constant,
EL: and then show that the constant is zero.
JP: And then they’ll show that the constant is zero. So you show that the two sequences at each step increase by the same amount. So whatever the initial offset was, it’s going to be the same. And then you see what that offset is.
EL: Yeah.
KK: Okay, sure.
JP: So this is looking a lot more like what we do in differential equations classes, where, you know, if you try and solve a differential equation, you determine a solution up to some unknown real parameters, and then you solve for them from initial conditions. There's a real strong analogy between solving difference equations in discrete math and solving differential equations in continuous math. But somehow, the way we teach the subjects hides that.
EL: Yeah.
JP: The way we teach mathematical induction, by sort of having the base case come first, and then the induction step come later, is the reverse order from what we do with differential equations. But there's a way to, you know, change the way we present things so they're both mathematically rigorous, but they're much more similar to each other.
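[ed. note: For readers who want to see the two-step structure Propp describes, here is a minimal Python sketch of our own. It is only a finite numerical check of the pattern for the sum formula, not the symbolic argument an automated prover would actually run, and the function names are ours.]

from fractions import Fraction

def lhs(n):
    # the sequence whose nth term is 1 + 2 + ... + n
    return sum(range(1, n + 1))

def rhs(n):
    # the candidate closed form n(n+1)/2
    return Fraction(n * (n + 1), 2)

# Step 1: the two sequences increase by the same amount at each step,
# so their difference is constant.
assert all(lhs(n + 1) - lhs(n) == rhs(n + 1) - rhs(n) for n in range(1, 50))

# Step 2: the constant offset is zero (check the first term).
assert lhs(1) - rhs(1) == 0

print("1 + 2 + ... + n = n(n+1)/2 checks out for n up to 50")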
KK: Yeah, we've got this bad habit of compartmentalizing in math, right? I mean, the lower levels of the curriculum, you know, it's like, “Okay, well, in this course, you do derivatives and optimization. And in this course, you learn how to plow through integration techniques. And this course is multi-variable stuff. And in this course, we're going to talk about differential equations.” Only later do you do the more interesting things like induction and things like that. So are you arguing that we should just, you know, scrap it all in and start with induction on day one?
JP: Start with induction? No.
KK: Sure, why not?
JP: I’ve given talks about why we should not teach mathematical induction.
KK: Really?
JP: Yeah. Well, I mean, it’s not entirely serious. But I argue that we should basically teach the difference calculus, as a sort of counterpart to the differential calculus, and give students the chance to see that these ideas of characteristic polynomials and so forth, that work in differential equations, also work with difference equations. And then like, maybe near the end, we can blow their mind with that wonderful result that Robert Ghrist talked about.
KK: Yeah.
JP: Where you say that one of these operators, the difference operator, is e to the power of the derivative operator.
KK: Right.
EL: Yeah.
JP: They’re not just parallel theories. They're linked in a profound way.
EL: Yeah, I was just thinking this episode reminded me a lot of our conversation with him, just linking those two things that, yeah, they are in very different places in my mental map of how I think about math.
KK: All right. So what does one pair with these theorems?
JP: Okay, I'm going to pair the potato chip.
EL: Okay, great. I love potato chips.
KK: I do too.
JP: So I think potato chips sort of bridge the gap between continuous mathematics and discrete mathematics.
EL: Okay.
JP: So the potato chip as an icon of continuous mathematics comes by way of Stokes’ theorem.
KK: Sure.
JP: So if you’ve ever seen these books like Purcell’s Electromagnetism that sort of illustrate what Stokes’ theorem is telling you, you have a closed loop and a membrane spanning it,
EL: Right.
JP: …a little like a potato chip.
KK: Sure. Right.
JP: And the potato chip as an icon of discrete mathematics comes from the way it resembles mathematical induction.
KK: You can't eat just one.
JP: That’s right. You eat a potato chip, and then you eat another, and then another, and you keep saying, “This is the last one,” but there is no last potato chip.
EL: Yeah.
JP: And if there’s no last potato chip, you just keep eating them.
KK: That’s right. You need another bag. That's it.
EL: Yeah.
JP: But the other reason I really like the potato chip as sort of a unifying theme of mathematics, is that potato chips are crisp, in the way that mathematics as a whole is crisp. You know, people complain sometimes that math is dry. But that's not really what they're complaining about. Because people love potato chips, which are also dry. What they really mean is that it’s flavorless, that the way it's being taught to them lacks flavor.
KK: That’s valid, actually. Yeah.
JP: So I think what we need to do is, you know, when the math is too flavorless, sometimes we have to dip it into something.
EL: Yeah, get your onion dip.
JP: Yeah, the onion dip of applications, the salsa of biography, you know, but math itself should not be moist, you know?
EL: So, do you prefer like the plain—like, salted, obviously—potato chips, or do you like the flavors?
JP: Yeah, I don't like the flavors so much.
EL: Oh.
JP: I don’t like barbecue or anything like that. I just like salt.
KK: I like the salt and vinegar. That’s…
EL: Yeah, that's a good one. But Kettle Chips makes this salt and pepper flavor.
KK: Oh, yeah. I’ve had those. Those are good.
EL: It’s great. Their Honey Dijon is one of my favorites too. And I love barbecue. I love every—I love a lot of flavors of chips. I shouldn't say “every.”
KK: Well yeah, because Lay's always has this deal every year with the competition, like, with these crazy flavors. So, they had the chicken and waffles one year.
EL: Yeah, I think there was a cappuccino one time. I didn’t try that one.
KK: Yeah, no, that’s no good.
JP: I just realized, though, potato chips have even more mathematical content than I was thinking. Because there's the whole idea of negative curvature of surfaces.
EL: Yes, the Pringles is the ur-example of a negatively curved surface.
JP: Yeah. And also, there's this wacky idea of varifolds, limits of manifolds, where you have these corrugated surfaces and you make the corrugations get smaller and smaller, like, I think it’s Ruffles.
EL: Yeah, right.
JP: So a varifold is, like, the limit of a Ruffles potato chip as the Ruffles shrink, and the angles don't decrease to zero. There’s probably a whole curriculum.
EL: Yeah, we need a spinoff podcast. Make this—tell what this potato chip says about math.
KK: Right.
EL: Just give everyone a potato chip and go for it.
KK: Excellent.
EL: Very nice. I like this pairing a lot.
KK: Yeah.
EL: Even though it's now, like, 8:30-something, I'll probably go eat some potato chips before I have breakfast, or as breakfast.
JP: I want to thank you, Evelyn, because I know it wasn't your choice to do it this early in the morning. I had childcare duties, so thank you for your flexibility.
EL: I dug deep. Well, it was a sunny day today. So actually the light coming in helped wake me up. It's been really rainy this whole month, and that's not great for me getting out of bed before, you know, 10 or 11 in the morning.
KK: Sure. So we also like to give our guests a chance to plug things. You have some stuff coming up, right?
JP: I do. Well, there's always my Mathematical Enchantments essays. And I think my July essay, which will come out on the 17th, as always, will be about the constant value theorems. And I'll include links to stuff I have written on the subject. So anyone who wants to know more, should definitely go first to my blog. And then in early August, I'll be giving some talks in New York City. And they'll be about a theorem with some visual content called the Wall of Fire theorem, which I love and which was actually inspired by an exhibit at the museum. So it's going to be great actually to give a talk right next to the exhibit that inspired it.
EL: Oh, yeah, very nice.
KK: This is at the Museum of Math, the National Museum of Math, right? Okay.
JP: Yeah, I’ll actually give a bunch of talks. So the first talk is going to be, like, a short one, 20 minutes. That's part of a conference called MOVES, which stands for the Mathematics of Various Entertaining Subjects.
KK: Yeah.
JP: It’s held every two years at the museum, and I don't know if my talk will be on the fourth, fifth or sixth of August, but it'll be somewhere in that range. And then the second talk will be a bit longer, quite a bit longer. And it's for the general public. And I'll give it twice on August 7th, first at 4pm, and then at 7pm. And it'll feature a hands-on component for audience members. So it should be fun. And that's part of the museum's Math Encounters series, which is held every month. And for people who aren't able to come to the talk, there'll be a video on the Math Encounters website at some point.
EL: Oh, good. I've been meaning to check that because I'm on their email list, and so I get that, but obviously living in Salt Lake City, I don't end up in New York a whole lot. So yeah, I'm always like, “Oh, that would have been a nice one to go to.”
KK: Yeah.
EL: But I'll have to look for the videos.
KK: So, Jim, thanks for joining us.
JP: Thank you for having me.
KK: Thanks for making me confront that things go backwards in mathematics sometimes.
EL: Yes.
KK: Thanks again.
EL: Yeah, lots of fun.
JP: Thank you very much. Have a great day.
[outro]
In this episode of My Favorite Theorem, we were happy to talk with Jim Propp, a mathematician at the University of Massachusetts Lowell. He told us about the constant value theorem and the way it unites continuous and discrete mathematics.
Here are some links you might find interesting after listening to the episode:
Propp’s companion essay to this episode
Propp’s mathematical homepage
Propp’s blog Math Enchantments (home page, wordpress site)
His list of problems about enumeration of tilings that we mentioned
Our previous My Favorite Theorem episode with guest Robert Ghrist, who also talked about a link between continuous and discrete math
Propp’s article “Real Analysis in Reverse”
Mean Value Theorem
Rolle’s Theorem
Fermat’s Theorem
Varifold
MOVES (Mathematics of Various Entertaining Subjects), a conference he will be speaking at in August
Math Encounters, a series at the Museum of Mathematics (he will be speaking there in August)
Kevin Knudson: Welcome to My Favorite Theorem, a podcast about theorems and math and all kinds of things. I'm one of your hosts, Kevin Knudson, professor of mathematics at the University of Florida. Here is your other host.
Evelyn Lamb: Hi, I'm Evelyn Lamb. I'm a freelance math and science writer in Salt Lake City. How are you today?
KK: I have a sunburn.
EL: Yeah. Can’t sympathize.
KK: No, no, I was, you know, Ellen and I went out birdwatching on Saturday, and it didn't seem like it was sunny at all, and I didn't wear a hat. So I got my head a little sunburned. And then yesterday, she was doing a print festival down in St. Pete. And even though I thought we were in the shade—look at my arms. They're like totally red. I don’t know. This is what happens.
EL: You know, March in Florida, you really can’t get away without SPF.
KK: No, you really can't. You would think I would have learned this lesson after 10 years of living here, but it just doesn't work. So anyway. Yeah. How are you?
EL: Oh, I'm all right. Yeah. Not sunburned.
KK: Okay. Good for you. Yeah. I'm on spring break. So, you know, I'm feeling pretty good. I got some time to breathe at least. So anyway, enough about us. This is actually a podcast where we invite guests on instead of boring the world with our chit chat. Today, we're pleased to welcome Matilde Lalín, you want to introduce yourself?
Matilde Lalín: Hi. Okay. Thank you for having me here. So I'm originally from Argentina. I grew up in Buenos Aires, and I did my undergraduate there. And then I moved to the US to do my Ph.D., mostly at the University of Texas at Austin. And then I moved to Canada for postdocs, and I stayed in Canada. So right now, I'm a professor at the University of Montreal, and I work in number theory.
EL: And I'm guessing you do not have a sunburn, being in Montreal in March.
ML: So maybe I should say we are celebrating that we are very close to zero Celsius.
KK: Oh, okay.
EL: Yeah, so exciting times.
ML: Yeah. So some of the snow actually is melting.
KK: Oh, okay. I haven’t seen snow in quite a while. I kind of miss it sometimes. But anyway.
EL: Oh, it is very pretty.
KK: Yeah, it is. It’s lovely. Until you have to shovel it every week for six months. But yeah, so Matilde, what is your favorite theorem?
ML: Okay, so I wanted to talk about a problem more than theorem. Well, it will lead to some theorems eventually, and a conjecture. So my favorite problem, let's say, is the congruent number problem.
KK: Okay.
ML: So okay, so basically, a positive integer number is called congruent if it is the area of a right triangle with rational sides.
EL: All three sides, right?
ML: Exactly, exactly. So the question will be, you know, how can you tell that a particular number is congruent? But more generally, can you give a list of all congruent numbers? So for example, six is congruent, because it is the area of the right triangle with sides three, four, and five. So that's easy, but then seven is congruent because it’s the area of the triangle with sides 24/5, 35/12, and 337/60.
KK: Ah, okay.
EL: So that’s not quite as obvious.
ML: Not quite as obvious, exactly. And in fact, there is an example, due to Zagier: 157 is congruent, and the sides of the triangle are fractions where—okay, so the hypotenuse has 46 and 47 digits in the numerator and denominator. And so it can be very big. Okay, let me clarify: for a congruent number, there are actually infinitely many triangles that satisfy this. But the example I'm giving you is the smallest, in a sense.
EL: Okay.
ML: So actually it can be very complicated, a priori, to decide whether a number is congruent or not.
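[ed. note: The two triangles mentioned above are easy to check with exact rational arithmetic. Here is a small Python sketch of our own; the function name is ours.]

from fractions import Fraction as F

def is_right_triangle_with_area(a, b, c, n):
    # legs a, b and hypotenuse c form a right triangle of area n
    return a * a + b * b == c * c and F(1, 2) * a * b == n

print(is_right_triangle_with_area(F(3), F(4), F(5), 6))                  # True: 6 is congruent
print(is_right_triangle_with_area(F(24, 5), F(35, 12), F(337, 60), 7))   # True: 7 is congruent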
KK: Sure.
ML: So this problem appears for the first time in an Arab manuscript in the 10th century, and then it was—
EL: Oh, wow, that's shocking!
ML: Yes. Well, because triangles—I mean, it's a very natural question. But then it was picked up by Fibonacci, who was actually looking at this question from a different point of view. So he was studying arithmetic sequences. So he posed the question of whether you can have a three-term arithmetic sequence whose terms are all squares. So basically, let me give you an example. So 1-25-49. Okay? So those are three squares, and 25−1 is 24. And 49−25 is 24. So that makes it an arithmetic sequence. And each of the three members are squares.
EL: Yeah.
ML: And he said that the difference—so in this case it would be 24, okay? 25−1 is 24, 49−25 is 24—so the difference is called a congruum, if you can build a sequence with this difference, basically. So it turns out that this problem is essentially equivalent to the congruent number problem, so that's where the name, the word congruent, comes from. Fibonacci was calling this congruum. So congruent has to do with things that sort of congregate.
EL: Okay.
ML: And so kind of this difference of the arithmetic sequence. And you can prove that from such a sequence you can build your triangle. So in the example I gave you, this is a sequence that shows that six is congruent. Well, technically it shows that 24 is congruent, but 24 is a square times six. And so if you have a triangle, you can always multiply the sides by a constant, and that would be equivalent to multiplying the area by some square.
KK: Sure, yeah,
EL: Right. Right, and so if it has a square in it, then there's a rational relationship that will still be preserved.
ML: Exactly. So Fibonacci actually managed to prove that seven is congruent. And then he posed as a question, as a conjecture, that one wasn't congruent. So when you say that one is not congruent, you are also saying that the squares are not congruent. The square of any rational number.
EL: Oh.
KK: Okay.
ML: It’s actually kind of a nicer statement, in a sense. It's like a very special case. And then, like 400 years later, Fermat came along, and so he actually managed to solve Fibonacci’s question. So he actually proved, using his famous descent, he proved that one is not congruent. And also that two and three are not congruent. So basically, he settled the question for those. And five is known to be congruent, also six and seven. So well, that takes care of the first few numbers. Because four is one in this case.
KK: Four is one, that’s right.
ML: Yeah, exactly. And well, one thing that happens with this problem is that actually, if you go in the direction that Fibonacci was looking, okay, so this sequence of three squares, actually, you can think of them as—say you call the middle square x, and then one is x−n and the other is x+n. So when you multiply these three together, it gives you a square. And what this is telling you is that it’s actually giving you a solution to an equation that you could write as, say, y² = x(x−n)(x+n). And that's what is called an elliptic curve.
EL: Oh, okay.
ML: Yes. So basically, an elliptic curve in this context is more general. You could think of it as y² equals a cubic polynomial in x. And so basically, the congruent number problem is asking whether, for such an equation, you have a solution such that y is different from 0. So then you can study the problem from that point of view. There is a lot; there is a big theory about elliptic curves.
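[ed. note: For the curious, here is one common form of the triangle-to-curve correspondence, in a short Python sketch of our own. The specific formulas for x and y are our choice of the standard map, included for illustration: a right triangle with legs a, b, hypotenuse c, and area n gives a rational point on y² = x(x − n)(x + n).]

from fractions import Fraction as F

def curve_point(a, b, c, n):
    # map a rational right triangle of area n to a point on y^2 = x(x - n)(x + n)
    x = n * b / (c - a)
    y = 2 * n * n / (c - a)
    assert y * y == x * (x - n) * (x + n)   # the point really lies on the curve
    return x, y

print(curve_point(F(3), F(4), F(5), F(6)))   # (12, 36) on y^2 = x^3 - 36x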
EL: Right. And so I've been wondering, like, is this where people got the idea to bring elliptic curves into number theory? That's always seemed mysterious to me—like, when you first learn about Fermat’s Last Theorem, and you learn there's all this elliptic curve stuff involved in proving that, like, how do people think to bring elliptic curves in this way?
ML: As a matter of fact, okay, so elliptic curves in general, it’s actually a very natural object to study. So I don't know if it came exactly via the congruent number problem, because essentially—okay, so essentially, a natural problem more generally is Diophantine equations. So basically, I give you a polynomial with integer coefficients, and I am asking you about solutions that are either integers or rational. And we understand very well what happens when the degree is one, say an equation of a form ax+by=c. Okay? So those we understand completely. We actually understand very well what happens when the degree is two and actually, degree three is elliptic curves. So it's a very natural progression.
EL: Okay.
ML: So it doesn't necessarily have to come with congruent numbers. However, it is true that many people choose to introduce elliptic curves via congruent numbers, because it's such a natural question, such a natural problem. But of course, it leads you to a very specific family of elliptic curves. I mean, it's not the whole story. So what is known about elliptic curves that can help understand this question of the congruent number problem: So in 1922 Mordell actually proved that the solutions of an elliptic curve—Actually, I should have said this before. So the solutions of an elliptic curve, say, over the rationals, so if you look at all the rational numbers that are solutions to an equation like that, y² equals some cubic polynomial in x, they form a group. And actually an abelian group.
And as I was saying, Mordell proved that this group actually is finitely generated. So you can actually give a finite list of elements in the group, and then every element in the group is a combination of those. Okay? So basically, it's very tempting to say, “Well, I mean, if you give me an elliptic curve, I want to find what the group is. So I just give the generators. So this should be very easy, okay?” Yeah. [laughter] But actually, it's not easy. So there's no systematic way to find all the generators to determine what the group is. And even—so, you will always have, you may have, points of finite order. So, elements such that, if you take some multiple, you get back to 0. So those are easy to find. But the question of whether there are elements of infinite order, and if there are, how many there are, or how many generators you need, all these questions are difficult in general for an elliptic curve. And so, my favorite theorem actually—so the way I ended up coming up with the idea of talking about the congruent number problem, is actually Mordell’s theorem. So I really like Mordell’s theorem.
KK: And that theorem’s not at all obvious. I mean, so you sort of, I'm not sure if I’ve ever even seen a proof. I mean, I remember, this is one of the first things I learned in algebraic geometry, you draw the picture, you know, of the elliptic curve. And the group law, of course, is given by: take two points, draw the line, and where it intersects the curve in the third point is the sum of those things, right—actually, then you reflect, it’s minus that, right? Yeah. Those three points add to zero. That's right.
EL: We’ll put a picture of this up. Because Kevin's helpful air drawing is not obvious to our listeners.
ML: That’s right.
KK: Yeah. And from that, somehow, the idea that this is a finitely-generated group is really pretty remarkable. But the picture gives you no clue of where to find these generators, right?
ML: Well, the first issue, actually, is to prove that this is an associative law. So that statement is annoyingly complicated to prove in elementary ways.
KK: Yeah, commutativity is kind of obvious, right?
ML: But, yes, already to prove that it's a group in the sense that associativity, yeah. And then Mordell’s theorem, actually, it follows, it does some descent. So it follows in the spirit of Fermat’s descent, actually. But I mean, in a more complicated context. But it's very beautiful, yeah.
So as I was saying, the number of generators that have infinite order, that's called the rank, and already knowing whether the rank is zero, or what the value is, that's a very difficult question. And so in 1965, Birch and Swinnerton-Dyer came up with a conjecture that relates the rank to the order of vanishing of a certain function that you build from the elliptic curve. It’s called the L-function. So, in principle, with this conjecture, one can predict the value of the rank. That doesn't mean that we can find easily the generators, but at least we can answer, for example, whether there are infinitely many solutions or not and say that.
EL: Yeah.
ML: So basically, that's kind of the most exciting conjecture associated to this question. And I mean, it goes well beyond this question, and it's one of the Millennium Problems from the Clay.
EL: Right. Yeah. So it’s a high dollar-value question.
ML: Yes. And it's interesting, because for this question, it is known that—if the L-function doesn’t vanish, then the rank is zero. So it's known for R[ank] zero, one direction, and the same for R[ank] 1. But not much more is known on average. So there's this very recent result, relatively recent result by Bhargava and Shankar, where they prove that, if you take all the elliptic curves and order them in a certain way, the rank on average is bounded by 7/6. And so that means that there is a positive proportion of elliptic curves that actually satisfy BSD. Okay. But I mean, the question would be what BSD tells us about the original question that I posed.
EL: Right, yeah, so when we were chatting earlier, you said that a lot of questions or theorems about congruent numbers were basically—the theorems were proved as partial solutions to BSD. Am I getting that right?
ML: Yeah, okay. Some progress that is being done nowadays has to do with proving BSD for some particular families, I mean for these elliptic curves that are attached to congruent numbers. But if I go back to the first connection, there is this famous theorem by Tunnell that was published in 1983, where he basically ties the property of being a congruent number to two quadratic equations in three variables, with one having double the number of solutions of the other, somehow. So Tunnell’s result came, obviously, in ’83, so much earlier than most advances in BSD.
EL: Okay.
ML: And basically what Tunnell gives is like an algorithm to decide whether a number is congruent or not. And for the case where it’s non-congruent, actually it is conclusive, because this is a case, okay, so it depends on BSD, but this is a case where we know. And then the problem is the case where it will tell you that the number is congruent. So that is assuming BSD. So for now, like I said, many cases will just be the cases that—for example, there is some very recent result by Tian, where basically he proved that BSD applies to certain curves. And so for example, it is known that primes congruent to five or seven modulo eight are congruent. So this is a result that goes back to Heegner and Monsky in ’52, for Heegner. So that's for primes. So that's an infinite family of numbers that satisfy that they are congruent. But every question attached to this problem has to do with, okay, can you generalize this for all natural numbers that are congruent to six, five, or seven modulo eight? For example, that’s some direction of research going on now.
EL: So you could disprove the BSD conjecture, if you could find some number that Turner’s [ed. note: Evelyn misremembered the name of the theorem; this should be “Tunnell’s”] theorem said was congruent, but was actually not congruent?
ML: Yeah, yeah. So you could disprove—say you find a number that is congruent to six mod eight that is not a congruent number, you disprove BSD, yes.
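[ed. note: Here is a rough Python sketch of our own of the counting criterion Lalín describes, in the form we recall Tunnell’s theorem for odd squarefree n; please check it against Tunnell’s 1983 paper or Koblitz’s book before relying on it. The idea: count integer solutions of n = 2x² + y² + 32z² and of n = 2x² + y² + 8z². If the first count is not half the second, n is unconditionally not congruent; if it is exactly half, n is congruent assuming BSD.]

from math import isqrt

def num_y(m):
    # number of integers y with y*y == m
    if m < 0:
        return 0
    r = isqrt(m)
    if r * r != m:
        return 0
    return 1 if m == 0 else 2

def count(n, c):
    # number of integer triples (x, y, z) with n == 2x^2 + y^2 + c*z^2
    bx, bz = isqrt(n // 2), isqrt(n // c)
    return sum(num_y(n - 2 * x * x - c * z * z)
               for x in range(-bx, bx + 1) for z in range(-bz, bz + 1))

for n in [1, 3, 5, 7, 11, 13, 41]:   # odd squarefree examples
    a, b = count(n, 32), count(n, 8)
    print(n, "congruent if BSD holds" if 2 * a == b else "not congruent")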
EL: All right. Yeah, so our listeners, I'm sure they'll go—I’m sure no one has ever searched a lot of numbers to check on this. So yeah, that's our assignment for you. So something we like to do on this show then is to ask our guest to pair their theorem with something. So what do you think enhances enjoyment of congruent numbers, the congruent number problem, the Birch and Swinnerton-Dyer Conjecture, all of these things?
ML: Well, for me it is really how I pair my mathematics with things, right? And so I would pair it with chocolate because I am a machine of transforming chocolate into theorems instead of coffee. I will also pair it with mate, which is an infusion from South America. That's my source of caffeine instead of coffee. So it's a very interesting drink that we drink a lot in Argentina, but especially in Uruguay.
KK: Do you have the special straw with the filter and everything?
ML: Yeah, yeah, I have the metal straw. So you put the metal straw in, and then you put just the leaves in your special cup, and you drink from the straw that filters the leaves. So yeah, that's right. And you share it with friends. So it's a very collaborative thing, like mathematics.
EL: So I’ve never tried this. Does this—what does it taste like it? I mean, I know it's hard to describe tastes that you've never actually tasted before. But does it taste kind of like tea, kind of like coffee, kind of like something else entirely?
ML: I would say it tastes like tea. You could think of it as a special tea.
KK: Okay.
EL: There’s a coffee shop near us that has that. But I haven't tried it yet.
KK: Oh, come on. Give it a shot, Evelyn.
EL: Yeah, I will.
KK: You have to report back in a future episode. Actually, I’m going to hold you to it the next time we meet. Before then, have some mate. Do you have a chocolate preference? Are you a dark chocolate, milk chocolate?
ML: Milk chocolate, I would say. I'm not super gourmet with chocolate. But I do have my favorite place in Montreal to go drink a good cup of hot chocolate.
KK: All right, I've learned a lot.
EL: Yeah.
KK: This is very informative. In fact, while you were describing the congruent number problem, I was sort of sitting here sketching out equations that I might try to actually solve. Of course, it wasn't elliptic curves, it was sort of the naive things that you might try. But this is a fascinating problem. And I could see how you could get hooked.
EL: Yeah, well, it does seem like it just has all these different branches. And all these weird dependencies where you can follow these lines around.
KK: I mean, the best mathematics is like that, right? I mean, it's sort of kind of simple, it’s a simple question to ask, you could explain this to a kid. And then the mathematics is so deep, it goes in so many directions. Yeah, it's really, really interesting.
EL: Yeah. Thanks a lot. Are there any places people can find you online? Your website, other things you'd like to share?
ML: Well, yeah, my website. Shall I say the address?
EL: We can just put a link to that.
ML: Yeah, definitely my website. I actually will be giving a talk in the math club at my university on congruent numbers in a couple of weeks. So I’m going to try to post the slides online, but they are going to be in French.
EL: Okay, well, that'll be good. Our Francophone listeners can check that out.
ML: I really like some notes that Keith Conrad wrote. And actually, I have to say, he has a bunch of expository papers in different areas that I always find super useful for, you know, going a little bit beyond my classes. And so in general, I recommend his website for that, and in particular, the notes on the congruent number problem, if you're more interested. And then of course, there are some books that discuss congruent numbers and elliptic curves. So for example, a classic reference is Koblitz’s book on, I guess it’s called [Introduction to] Elliptic Curves and Modular Forms.
EL: Oh, yeah, I actually have that book. Because as a grad student, my second or third year, I, for some reason—I was not interested in number theory at all, but I think I liked this professor, so I took this class. So I have this book. And I remember, I just felt like I was swimming in that class.
KK: I have this book too, sitting on my shelf.
EL: The one number theory book two topologists have.
ML: So for me, I got this book before knowing I was going to be a number theorist.
EL: Yeah. No, but it is a nice book. But yeah, well, we'll link to those. We'll make sure to get those all in the show notes so people can find them easily.
ML: Yeah.
EL: Well, thanks so much for joining me.
KK: Yeah.
EL: Us. Sorry, Kevin!
ML: Thank you for having us—for having me, now I’m confused! Thanks a lot. It's such a pleasure to be here.
EL: Yeah.
KK: Thanks.
On this episode, we were excited to welcome Matilde Lalín, a math professor at the University of Montreal. She talked about the congruent number problem. A congruent number is a positive integer that is the area of a right triangle with rational side lengths.
Our discussion took us from integers to elliptic curves, which are defined by equations of the form y² = x³ + ax + b. As we mention in the episode, solutions to equations of this form satisfy what is known as a group law. That is a fancy way of saying there is a way to “add” two points on the curve to get another point. The diagram Kevin mentioned is here:
Here are links to some other things we talked about on the podcast and resources for diving deeper:
Slides (in French) for her talk about congruent numbers
Keith Conrad’s notes about the congruent number problem (pdf)
John Coates has written about Tian’s recent work on the congruent number problem for Acta Mathematica Vietnamica
Introduction to Elliptic Curves and Modular Forms by Neal Koblitz
For more on Mordell’s Theorem, try Elliptic Curves by Anthony Knapp
[Bhargava and Shankar]
Andrew Wiles’ expository article about the Birch and Swinnerton-Dyer conjecture
To go even deeper with BSD, try The Arithmetic of Elliptic Curves by Joseph Silverman
Kevin Knudson: Welcome to My Favorite Theorem, a podcast about math and…I don't even know what it's going to be today. We'll find out. I'm one of your hosts, Kevin Knudson, professor of mathematics at the University of Florida. Here is your other host.
Evelyn Lamb: Hi, I'm Evelyn Lamb. I'm a freelance math and science writer in Salt Lake City, Utah. So yeah, how are you today?
KK: I’ve had a busy week. Both of my PhD students defended on Monday.
EL: Wow, congratulations.
KK: Yeah, and so, through some weird quirk of my career, these are my first two PhD students. And it was a nice time, slightly nerve-wracking here and there. But everybody went through, everything's good.
EL: Great.
KK: So we have two new professors out there. Well, one guy's going to go into industry. But yeah, how about you?
EL: I’m actually—once we're done with this, I need to go pack for a trip I'm leaving on today. I'm teaching a math writing workshop at Ohio State.
KK: Right. I saw that, yeah.
EL: I mean, if it goes well, then we'll leave this part in the thing. And if it doesn't, no one else will know. But yeah, I'm looking forward to it.
KK: Good. Well, Ellen and I are going to Seattle this weekend.
EL: Fun.
KK: She got invited to be on a panel at the Bainbridge Island Art Museum. And I thought, “I'm going along,” because I like Seattle.
EL: Yeah, it's beautiful there.
KK: Love it there. Anyway, enough about us. Today, we are pleased to welcome Moon Duchin to the show. Moon, why don’t you introduce yourself?
Moon Duchin: Hi, I'm Moon. I am a mathematician at Tufts University, where I'm also affiliated with the College of Civic Life. It’s a cool thing Tufts has that not everybody has. And in math, my specialties are geometric group theory, topology (especially low-dimensional topology), and dynamics.
KK: Very cool.
EL: Yeah, and so how does this civil life thing work?
MD: Civic life, yeah.
EL: Sorry.
MD: So that's sort of because in the last couple years, I've become really interested in politics and in applications—I think of it as applications of math to civil rights. So that's that's sort of mathematics engaging with civics, it’s kind of how we do government. So that's become a pretty strong locus of my energy in the last couple years.
EL: Yeah.
KK: And I'll vouch for Moon's work here. I mean, I've gone to a couple of the workshops that she's put together. Big one at Tufts in 2017, I guess it was, and then last December, this meeting at Radcliffe. Really cool stuff. Really important work. And I've gotten interested in it too. And let's hope we can begin to turn some tides here. But anyway, enough about that. So, Moon, what's your favorite theorem?
MD: Alright, so I want to tell you about what I think is a really beautiful theorem that is known to some as Gromov’s gap.
KK: Okay.
EL: Which also sounds like it could be the name of a mountain pass in the Urals or something.
MD: I was thinking it sounds like it could be, you know, in there with the Mines of Moria in Middle Earth.
EL: Oh, yeah, that too.
MD: Just make sure you toss the dwarf across the gap. Right, it does sound like that. But of course, it's Mischa Gromov, who is the very prolific Russian-French mathematician who works in all kinds of geometry, differential geometry, groups, and so on.
So what the theorem is about is, what kinds of shapes can you see in groups? So let me set that up a little with—you know, let me set the stage, and then I'll tell you the result.
EL: Okay.
MD: So here's the setting. Suppose you want to understand—the central objects in geometric group theory are, well, groups. So what are groups? Of course, those are sets where you can do an operation. So you can think of that as addition, or multiplication, it's just some sort of composition that tells you how to put elements together to get another element. And geometric group theory is the idea that you can get a handle on the way groups work—they’re algebraic objects, but you can study them in terms of shape, geometrically. So there are two basic ways to do that. Either you can look at those spaces that they act on, in other words, spaces where that group tells you how to move around. Or you can look at the group itself as a network, and then try to understand the shape of that network. So let's stick with that second point of view for a moment. So that says, you know, the group has lots of elements and instructions for how to put things together to move around. So I like to think of the network—a really good way to wrap your mind around that is to think about chess pieces. So if I have a chessboard, and I pick a piece—maybe I pick the queen, maybe I pick the knight—there are instructions for how it can move. And then imagine the network where you connect two squares if your piece can get between them in one step. Right?
KK: Okay.
MD: So, of course that's going to make a different network for a knight than it would for a queen and so on, right?
EL: Yeah.
MD: Okay. So that's how to visualize a group, especially an infinite—that works particularly well for infinite groups. That's how to visualize a group as a bunch of points and a bunch of edges. So it's some big graph or network. And then GGT, geometric group theory, says, “What's the shape of that network?” Especially when you view it from a distance, does it look flat? Does it look negatively curved, like a saddle surface? Or does it kind of curve around on itself like a sphere? You know, what's the shape of the group?
And actually, just a cool observation from, you know, a hundred plus years ago, an observation of Felix Klein is that actually the two points of view—the spaces that the group acts on or the group itself—those really are telling the same story. So the shape of the space is about the same as the shape of the group. That's become codified as kind of a fundamental observation in GGT. Okay, great. So that's the space I want to think about. What is the shape—what are the possible shapes of groups? Okay, and that's where Gromov kicks in. So the theorem is about the relationship of area to perimeter. And here's what I mean by that.
Form a loop in your space, in your network. And here, a loop just means you start at a point, you wander around with a path, and you end up back where you started. Okay? And then look at the efficient ways to fill that in with area. So visualize that, like, first you have an outline, and then you try to fill it in with maybe some sort of potato chip-y surface that kind of interpolates around that boundary. Okay, so the question is, if you look at shapes that have the extremal relationship of area to perimeter, then what is that relationship of area to perimeter?
So let's do that in Euclidean space first, because it's really familiar. So we know that the extremal shapes there are circles, and you fill those in with discs. And the relationship is that area looks about like perimeter squared, right?
KK: Right.
MD: Okay, great. So now here's the theorem, then. Get ready for it. I love this theorem. In groups, you can find groups where area looks like perimeter to the K power. It can look like perimeter to the 1, or 2, or 3, or 4, and so on. You can build designer groups with any of those exponents. But furthermore, you can also get rational exponents. You can get pretty much any rational exponent you want. You can get 113/5, you can get, you know, 33/10. Pick your favorite exponent, and you can do it.
EL: Can you get less than one?
MD: Well, let's come back to that.
EL: Okay. Sorry.
MD: So let me state Gromov’s theorem in this level of generality. So here's the theorem. You can get pretty much any exponent that you want, as long as it's not between 1 and 2.
KK: Wow.
EL: Oh.
MD: Isn’t that cool? That's Gromov’s gap.
KK: Okay.
EL: Okay.
MD: So there's this wasteland between 1 and 2 that's unachievable.
KK: Wow.
MD: Yeah. And then you can, see—past 2, you can see anything. Um, it actually turns out, it's not just rationals. You can see lots of other kinds of algebraic numbers too.
KK: Sure.
MD: And the closure up there is everything from two to infinity! But nothing between one and two. It's a gap.
EL: Oh, wow. That's so cool!
MD: That’s neat, right? Evelyn, to answer your question, under one turns out not to really be well defined, for reasons we could talk about. But yeah.
KK: This is remarkable. This sounds like something Gromov would prove, right? I mean, just these weird theorems out of nowhere. I mean—how could that be true? And then there it is. Yeah.
MD: Or that Gromov would state and leave other people to prove.
KK: That—yeah, that's really more accurate. Yeah. So. Okay, so you can't get area to the—I mean, perimeter to the 3/2. I mean, that's, that's really…Okay. Is there any intuition for why you can't get things between one and two?
MD: Yeah, there kind of is, and it's beautiful. It is that the stuff that sits at the exponent 1, in other words, where area is proportional to the perimeter, is just really qualitatively different from everything else. Hence the gulf. And what is that stuff? That is hyperbolic groups. So this comes back to Evelyn's wheelhouse, I believe.
EL: It’s been a while since I thought in a research way about this, but yes, vaguely at the distance of my memory.
MD: Let me refresh your memory. Yes, so negatively curved things, things that are saddle-shaped, those are the ones where area is proportional to perimeter. And everything else is just in a different regime. And that's really what this theorem is telling you.
So that's one beautiful point of view, and kind of intuition, that there's this qualitative difference happening there. But there's something—there’s so many things I love about this theorem. It's just the gateway to lots of beautiful math. But one of the things I love about this theorem is that it fails in higher dimensions, which is really neat. So if you, instead of filling a loop with area, if you were to fill a shell with volume, there would be no gap.
EL: Oh.
MD: Cool, right?
EL: So this is, like, the right way to measure it if you want to find this difference in how these groups behave.
MD: Absolutely. And, you know, another way to say it, is this is an alternative definition of hyperbolic group from the usual one. It's like, the right way to pick out these special groups from everything else is specifically to look at filling loops.
KK: Right. And I might be wrong here, but aren't most groups hyperbolic? Is that?
MD: Yeah, so that's definitely the kind of religious philosophy that’s espoused. But you know, to talk about most groups usually the way people do that is they talk about random constructions of groups. And a lot of that is pretty sensitive to the way you set up what random means. But yeah, that's definitely the, kind of, slogan that you hear a lot in geometric group theory, is that hyperbolic groups are special, but they're also generic.
EL: Yeah.
KK: So are there explicit constructions of groups with say, exponent 33/10, to pick an example?
MD: Yes, there are. Yeah. And actually, if you're going to end up writing this up, I can send you some links to beautiful papers.
EL: Yeah, yeah. But there’s, like, a recipe, kind of, where you're like, “Oh, I like this exponent. I can cook up this group.”
MD: Yeah. And that's why I kind of call them designer groups.
EL: Right, right. Yeah. Your bespoke groups here.
MD: Yeah, there are constructions that do these things.
KK: That’s remarkable. So I was going to guess that your favorite theorem was the isoperimetric inequality. But I guess this kind of is, right?
MD: I mean, exactly. Right? So the isoperimetric inequality is all about asking, what is the extremal relationship of area to perimeter? And so this is exactly that, but it's in the setting of groups.
KK: Yeah, yeah.
EL: So how did you first come across this theorem?
MD: Well, I guess, in—when you're in the areas of geometric topology, geometric group theory, there's this one book that we sometimes call the Bible—here I'm leaning on this religious metaphor again—which is this great book by Bridson and Haefliger called Metric Spaces of Non-Positive Curvature. And it really does feel like a Bible. It's this fat volume, you always want it around, you flip to the stuff you need, you don't really read it cover to cover.
KK: Just like the Bible. Yeah.
MD: Exactly. Great. And that's certainly where I first saw it proved. But, yeah, I mean, the ideas that circulate around this theorem are really the fundamental ideas in GGT.
KK: Okay, great. Does this come up in your own work a lot? Do you use this for things you do? Or is this just like, something that you love, you know, for its own sake?
MD: Yeah, no, it does come up in my own work in a couple of ways. But one is I got interested in the relationship between curvature—curvature in the various senses that come from classical geometry—I got interested in the relationship between that and other notions of shape in networks. So this theorem takes you right there. And so for instance, I have a paper with Lelièvre and Mooney where we look at something really similar, which is, we call it sprawl. It's how spread out do you get when you start at a point and you look at all the different positions you can get to within a certain distance. So you look at a kind of ball around the point. And then you ask how far apart are the points in that ball from each other? So that's actually a pretty fun question. And it turns out, here's another one of these theorems where hyperbolic stuff, there's just a gap between that and everything else.
KK: Right.
MD: So let’s follow that through for a minute. So suppose you start at a base point, and you take the ball of radius R around that base point. And then you ask, “How far apart are the points in that ball from each other?” Well, of course, by the triangle inequality, the farthest apart they could possibly be is 2R, because you can connect them through the center to each other, right, 2R. Okay, so then you could ask, “Hm, I wonder if there's a space that’s so sprawling, so spread out, so much like, you know, Houston, right, so sprawling that the average distance is actually the maximum?”
EL: Yeah.
MD: Right. What if the average distance between two points is actually equal to 2R?
And that’s, so that's something that we proved. We proved that when you're negatively curved, and you have, you know, a few other mild conditions, basically—but certainly true for negatively curved groups, just like the setting of Gromov’s theorem—so for negatively curved groups, the average is the maximum. You’re as sprawling as you can be. Yeah, isn't that neat? So that's very much in the vein of this kind of result.
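[ed. note: A small numerical illustration of our own of the sprawl idea, comparing the average distance between points in a ball of radius R in a tree (a toy stand-in for a hyperbolic group) with the same quantity in a grid (a flat group); the ratio to the maximum possible value 2R comes out noticeably larger for the tree. The choice of graphs and radius here are illustrative assumptions, not taken from the paper Duchin mentions.]

import networkx as nx

def average_ball_distance(graph, center, R):
    # average distance between distinct points of the ball of radius R around center
    ball = set(nx.single_source_shortest_path_length(graph, center, cutoff=R))
    total, pairs = 0, 0
    for u in ball:
        lengths = nx.single_source_shortest_path_length(graph, u)
        for v in ball:
            if u != v:
                total += lengths[v]
                pairs += 1
    return total / pairs

R = 4
tree = nx.balanced_tree(3, 2 * R)                 # a tree, rooted at node 0
grid = nx.grid_2d_graph(4 * R + 1, 4 * R + 1)     # a patch of the square grid
print("tree, average / 2R:", average_ball_distance(tree, 0, R) / (2 * R))
print("grid, average / 2R:", average_ball_distance(grid, (2 * R, 2 * R), R) / (2 * R))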
KK: Oh, that's very cool. All right.
EL: Yeah. Kind of like the SNCF metric, also, where you have to go to Paris to go anywhere else. Slightly different, but still, basically you have to go in to the center to get to the other side.
MD: It’s exactly the same collection of ideas. And I'm just back from Europe, where I can attest that it's really true. You want to get from point to point on the periphery of France, you’d better be going through Paris if you want to do it fast. But yeah, it’s precisely the same idea, right? So the average distance between points on the periphery of France will be: get to Paris and get to the other point. So there's a max there that's also realized.
KK: All right, so France is hyperbolic.
MD: France is hyperbolic. Yup, in terms of travel time.
EL: Very appropriate. It’s such a great country. Why wouldn't it be hyperbolic?
KK: All right, so the other fun thing on this podcast is we ask our guests to pair their theorem with something. So what pairs well with Gromov’s gap theorem?
MD: So I'm actually going to claim that it pairs beautifully with politics. Right? True to form, true to form.
EL: Okay.
KK: Right, yeah, sure.
MD: All right, so let me try and make that connection. So, well, I got really interested in the last few years in gerrymandering in voting districts. And classically, one of the ways that we know that a district is problematic is exactly this same way, that it's built very inefficiently. It has too much perimeter, it has too much boundary, a long, wiggly, winding boundary without enclosing very much area. That's been a longstanding measurement of kind of the fairness or the reasonableness of the district. So I got interested in that through this kind of network curvature stuff, with the idea that maybe the problem is in the relationship between area and perimeter.
And so what does that make you want to do, if you're me? It makes you want to take a state and look at it as a network. And you can do that with census data. You sort of take the little chunks of census geography and connect the ones that are next to each other and presto! You have a network. And it's a pretty big network, but it's finite.
KK: Yeah.
MD: So Pennsylvania's got about 9000 precincts. So you can make a graph out of that. But it's got a whole lot more census blocks. Virginia—we were just looking at Virginia recently—300,000 census blocks. So that's a pretty big network, but you know, still super duper finite, right?
EL: Yeah.
MD: And so you can sort of ask the same question, what's the shape of that network? And does that—you know, maybe the idea is, if the network itself, which is neutral, no one's doing any gerrymandering, that's just where the people are.
EL: Yeah.
MD: If the network itself is negatively curved, in some sense, then maybe that explains large perimeters for a reason that isn't due to political malfeasance, you know?
EL: Right.
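[ed. note: A minimal sketch of our own of the dual-graph picture Duchin describes, using a toy grid in place of real census geography; none of this is MGGG's actual pipeline (their tools live at mggg.org). A district's “area” becomes the number of units inside it, and its discrete “perimeter” the number of adjacency edges it cuts.]

import networkx as nx

# toy stand-in for a state: a 6 x 6 grid of census units,
# with an edge between units that share a boundary
state = nx.grid_2d_graph(6, 6)

# a hypothetical district: the left three columns of units
district = {node for node in state if node[1] < 3}

area = len(district)
perimeter = sum(1 for u, v in state.edges if (u in district) != (v in district))
print("area (units):", area, " discrete perimeter (cut edges):", perimeter)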
MD: So I think this is a way of thinking about shape and possibility that lends itself to lots of problems. But I like to pair everything with politics these days.
EL: Yeah, well, I really think—so I went to your talk at the Joint Math Meetings a couple of years ago, and I know you've given similar talks, talking about gerrymandering. I think it's really important for people not to take too simplistic a view and just say, “Oh, here's a weird shape.” And you did a really great job of showing, like, there are sometimes good reasons for weird shapes. Obvious things like there's a river here and people end up grouping like this around the river for this reason. But there are a lot of different reasons for this. And if we want to talk about this in a way that can actually be productive, we have to be very nuanced about this and understand all of those subtleties. We can't just divorce the math from the world, from, you know, the underlying civil rights and, you know, politics, historic inequalities in different groups and things like that.
MD: Yeah, absolutely. That's definitely the point of view that I've been preaching, to stick with the religious metaphor. It’s the one that says, if you want to understand what should be out of bounds, because it's unreasonable when it comes to redistricting, first you have to understand the lay of the land. You have to spec out the landscape of what's possible. And like you're saying, you know, that landscape can have lots of built-in structure that districting has to respect. So, yeah, you should really—that could be physical geography, like you mentioned rivers—but it could also be human geography. People distribute themselves in very particular ways. And districting isn't done with respect to, like, imaginary people, it’s done with respect to the real, actual people and where they live.
EL: Yeah.
MD: And that's why I really, you know, I think more and more that some of those same tools that we use to study the networks of infinite groups, we can bring those to bear to study the large finite networks of people and how they live and how we want to divide them up.
EL: Yeah, that's, that's a nice pairing, maybe one of the weightier pairings we’ve had.
KK: Yeah, right.
MD: It was either that or a poem. I was thinking Gromov’s gap, maybe I could pair that with The Waste Land.
EL: Oh.
MD: Because you can’t get in the wasteland between exponents one and two. Nah, let's go politics.
EL: Well, I've tried to read that poem a few times, and I always feel like I need someone to hold my hand and, like, point everything out to me. It's like, I know there's something there but I haven't quite grabbed on to it yet.
MD: Yeah. Poetry is like math, better with a tour guide.
EL: Yes.
KK: Well, we also like to give our guests a chance to plug things. You want to tell everyone where they can find you online and and maybe about the MGGG?
MD: Sure, yeah, absolutely. So I co-lead a working group called MGGG, the Metric Geometry and Gerrymandering Group, together with a computer scientist named Justin Solomon, who's over at MIT. You can visit us online at mggg.org, where we have lots of cool things to look at, such as the brief we filed with the Supreme Court a couple weeks ago, which just yesterday was actually quoted in oral argument, which was pretty exciting, if quoted in a surprising way.
You can also find cool software tools that we've been developing, like our tool called Districtr, which lets you draw your own districts and kind of see—try your own hand at either gerrymandering or fair districting, and gives you a sense of how hard that is. We think it's one of the more user-friendly districting tools out there. Lots of different research links and software tools and resources on our site. So that'd be fun if people want to check that out and give us feedback.
Other things I want to mention: Oh, I guess I'm going to do the 538 Politics podcast tomorrow, talking about this new Supreme Court case.
EL: Nice.
MD: Yeah. So I think that'll be fun. Those are some smart folks over there who’ve thought a lot about some of the different ways of measuring gerrymandering, so I think that'll be a pretty high-level conversation.
KK: Yeah, I'm sure. They turn around real fast, like this will be months from now.
MD: Right. I see. Okay, cool. Yeah, by the time this comes out, maybe we'll have yet another Supreme Court decision on gerrymandering that will…
KK: Yeah, fingers crossed.
MD: We’ll all be handling the fallout from.
EL: Yeah.
KK: All right. Well, this has been great fun, Moon. Thanks for joining us.
MD: Oh, it's a pleasure.
[outro]
Evelyn Lamb: Welcome to My Favorite Theorem, a podcast about theorems where we ask mathematicians to tell us about theorems. I'm one of your hosts, Evelyn Lamb. I'm a freelance math and science writer in Salt Lake City, Utah. This is your other host.
Kevin Knudson: Hi, I’m Kevin Knudson. I’m a professor of mathematics at the University of Florida. It seems like I just saw you yesterday.
EL: Yeah, but I looked a little different yesterday.
KK: You did!
EL: In between when I talked with you and this morning, I dyed my hair a new color, so I'm trying out bright Crayola crayon yellow right now.
KK: It looks good. Looks good.
EL: Yeah, it's been fun in this, I don't know, like 18 hours I've had it so far.
KK: Well, you know what, it's sort of dreary winter, right? You feel the need to do something to snap you out of it, although it's sunny in Salt Lake, right, it’s just cold? No big deal.
EL: Yeah, we’ve had some sun. We’ve had a lot of snow recently, as our guest knows, because our guest today also lives in Salt Lake City. So I'm very happy to introduce Suresh Venkatasubramanian. So Hi. Can you tell us a little bit about yourself?
Suresh Venkatasubramanian: Hey, thanks for having me. First of all, I should say if it weren't for your podcast, my dishes would never get done. I put the podcast on, I start doing the dishes and life is good.
EL: Glad to be of service.
SV: So I'm in the computer science department here. I have many names. It depends on who's asking and when. Sometimes I'm a computational geometer, sometimes a data miner. Sometimes occasionally a machine learner, though people raise their eyebrows at that. But the two names I like the most right now are bias detective and computational philosopher.
EL: Yeah, yeah. Because you've been doing a lot of work on algorithmic bias. Do you want to talk about that a little bit?
SV: Sure. So one of the things that we're dealing with now as machine learning and associated tools go out into the world is that they're being used for not just, you know, predicting what podcast you should listen to, or what music you should listen to, but they're also being used to decide whether you get a job somewhere, whether you get admission to college, whether you get surveilled by the police, what kind of sentence you might get, whether you get a loan. All of these are places where machine learning is now being used, just because we have lots of data to collect and can seemingly make better decisions. But along with that comes a lot of challenges, because what we're finding is that a lot of the human bias in the way we make decisions is being transferred to machine bias. And that causes all kinds of problems, both because of the speed at which these decisions get made and the relative obscurity of automated decision making. So trying to piece together, piece out what's going on, how this changes the way we think about the world, the way we think about knowledge about society, has been taking up most of my time of late.
EL: Yeah, and you've got some interesting papers that you've worked on, right, on how people who design algorithms can help combat some of these biases that can creep in.
SV: Yeah, so there are many, many levels of questions, right? One basic question is how do you even detect whether there’s—so first of all, I mean, I think as a mathematical question, how do you even define what it means for something to be biased? What does that word even mean? These are all loaded terms. And, you know, once you come up with a bunch of different definitions for maybe what is a relatively similar concept, how do they interact? How do you build systems that can sort of avoid these kinds of bias the way you've defined it? And what are the consequences of building such systems? What kind of feedback loops do you have in the systems that you use? There’s a whole host of questions from the mathematical to the social to the philosophical. So it's very exciting. But it also means every day I feel even more dumb than when I started the day, so.
EL: Yeah.
KK: So I think the real challenge here is that, you know, people who aren't particularly mathematically inclined just assume that because a computer spit out an answer, it must be valid, it must be correct. And that, in some sense, you know, it's cold, that the machine made this decision, and therefore, it must be right. How do you think we can overcome that idea that, you know, actually, bias can be built into algorithms?
SV: So this is the “math is not racist” argument, basically.
KK: Right.
SV: That comes up time and time again. And yeah, one thing that I think is encouraging is that we've moved relatively quickly, in a span of, say, three to four years, from “math is not racist” to “Well, duh, of course algorithms are biased.”
EL: Yeah.
SV: So I guess that's a good thing. But I think the problem is that there’s a lot of commercial and other incentives bound up with the idea that automated systems are more objective. And like most things, there's a kernel of truth to it in the sense that you can avoid certain kinds of obvious biases by automating decision making. But then the problem is you can introduce others, and you can also amplify them. So it's tricky, I think. You're right, it's getting away from the notion where commercially, there's more incentive to argue that way. But also saying, “Look, it's not all bad, you just need more nuance.” You know, arguments for more nuance tend not to go as well as, you know, “Here's exactly how things work,” so it's hard.
KK: Everything’s black or white, we know this, right?
SV: With a 50% probability everything is either true or not, right.
EL: So we invited you on to hear about your favorite theorem. And what is that?
SV: So it's technically an inequality. But I know that in the past, you've allowed this sort of deviation from the rules, so I'm going to propose it anyway.
EL: Yes, we are very flexible.
SV: Okay, good good good. So the inequality is called Fano’s inequality after Robert Fano, and it comes from information theory. And it’s one of those things where, you know, the more I talk about it, the more excited I get about it. I'm not even close to being an expert on the ins and outs of this inequality. But I just love it so much. So I need to tell you about that. So like all good stories, right, this starts with pigeons.
EL: Of course.
SV: Everything starts with pigeons. Yes. So, you may have heard of the pigeonhole principle.
KK: Sure.
SV: Okay. So the pigeonhole principle, for those in the audience who may not have heard of it, basically says: if you have ten pigeons and you have nine pigeonholes, someone's going to get a roommate, right? It’s one of the most obvious statements one can make, but also one of the more powerful ones, because if you unpack it a little bit, it's not telling you where to find that pigeonhole with two pigeons, it's merely saying that you are guaranteed that this thing must exist, right? It's a sort of an existence proof that can be stated very simply.
But the pigeonhole principle, which is used in many, many parts of computer science to prove that, you know, some things are impossible or not, can be restated, can be used to prove another thing. So we all know that if, okay, I need to store a set of numbers, and the numbers range from 1 to n, then I need something like log(n) bits to store them. Well, why do I need log(n) bits? One way to think about this is that this is the pigeonhole principle in action, because you're saying, I have n things. If I have log(n) bits to describe a hole, like an address, then there are 2^log(n) different holes, which is n. And if I don't have log(n) bits, then I don't have n holes, and therefore by the pigeonhole principle, two of my things will get the same name. And that's not a good thing. I want to be able to name everything differently.
So immediately, you get the simple statement that if you want to store n things you need log(n) bits, and of course, you know, n could be whatever you want it to be, which means that now—in theoretical computer science, you want to prove lower bounds, you want to say that something is not possible, or some algorithm must take a certain amount of time, or you must need to store so many bits of information to do something. These are typically very hard things to prove because you have to reason about any possible imaginary way of storing something or doing something, and that's very hard. But with things like the pigeonhole principle and the log(n) bit idea, you can do surprisingly many things by saying, “Look, I have to store this many things, I'm going to need at least log of that many bits no matter what you do.” And that's great.
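[Editor's note: a tiny sketch of the counting fact being described here; the numbers are just for illustration.]

```python
import math

# To give each of n items its own address ("pigeonhole"), you need b bits with
# 2**b >= n, i.e. b >= ceil(log2(n)); with one bit fewer, two items must collide.
n = 1000
bits_needed = math.ceil(math.log2(n))          # 10 bits are enough for 1000 items
holes_with_one_fewer = 2 ** (bits_needed - 1)  # only 512 addresses with 9 bits
print(bits_needed, holes_with_one_fewer < n)   # 10 True: a collision is forced
```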
KK: So that's the inequality.
EL: No, not yet.
KK: Okay, all right.
SV: I was stopping because I thought Evelyn was going to say something. I'm building up. It's like, you know, a suspense story here.
KK: Okay, good.
EL: Yes.
KK: Chapter two.
SV: So if you now unpack this log(n) bit thing, what it's really saying is that I can encode elements and numbers in such a way, using a certain number of bits, so that I can decode them perfectly because there aren't two pigeons living in the same pigeonhole. There's no ambiguity. Once I have a pigeonhole, who lives there is very clear, right? It's a perfect decoding. So now you've gone from just talking about storage to talking about an encoding process and a decoding process. And this is where Fano’s inequality comes into play.
So information theory—so going back to Shannon, right—is this whole idea of how you transmit information, how you transmit information efficiently, right, so the typical object of study in information theory is a channel. So you have some input, some sort of bits going into a channel, something happens to it in the channel, maybe some noise, maybe something else, and then something comes out. And one of the things you'd like to do is looking at, so x comes in, y comes out, and given y you'd like to decode and get back to x, right? And it turns out that, you know, you can talk about the ideas of mutual information entropy, they start coming up very naturally, where you start saying if x is stochastic, it's a random variable, and y is random, then the channel capacity, in some sense the amount of information the channel can send through, relates to what is called the mutual information between x and y, which is this quantity that captures, roughly speaking, if I know something about x, what can I say about y and vice versa. This is not quite accurate, but this is more or less what it says.
So information theory at a broader level—and this is where Fano’s inequality really connects to so many things—is really about how to quantify the information content that one thing has about another thing, right, through a channel that might do something mysterious to your variable as it goes through. So now what does Fano’s inequality say? Fano’s inequality, which you can think of now in the context of bits and decoding and encoding, says something like this. If you have a channel, you take x, you push it into the channel and out comes y, and now you try to reconstruct what x was, right? And let's say there's some error in the process. The error in the process of reconstructing x from y relates to a term that is a function of the mutual information between x and y.
More precisely, if you look at how much entropy, essentially, you have left in x once I tell you y— So for example, let me see if this is a good example for this. So I give you someone's name, let's say it's an American Caucasian name. With a certain degree of probability, you'll be able to predict what their gender is. You won't always get it right, but there will be some probability of this. So you can think of this as saying, okay, there's a certain error in predicting that person's name from the—for predicting the person's gender from the name as you went through the channel. The reason why you have an error is because there's a certain amount of noise. Some names are sort of gender-ambiguous, and it's not obvious how to tell. And so there's a certain amount of entropy left in the system, even after I've told you the name of the person. There's still an amount of uncertainty. And so your error in predicting that person's gender from the name is related to the amount of entropy left in the system. And this is sort of intuitively reasonable. But what it's doing is connecting two things that you wouldn’t normally expect to be connected. It's connecting a computational metaphor, this process of decoding, right, and the error in decoding, with a basic information theoretic statement about the relationship between two variables.
And because of that—Fano’s inequality, or even the basic log(n)-bits-needed-to-store-n-things idea—for me, it's pretty much all of computer science. Because if we want to prove a lower bound on how we compute something, at the very least we can say, look, I need at least this much information to do this computation. I might need other stuff, but I need at least this much information. That clearly will be a lower bound. Can I prove a lower bound? And that has been a surprisingly successful endeavor, in reasoning about lower bounds for computations, where it would otherwise be very hard to think about, okay, what does this computation do? Or what does that computation do? Have I imagined every possible algorithm I can design? I don't care about any of that, because Fano’s inequality says it doesn't matter. Just analyze the amount of information content you have in your system. That's going to give you a lower bound on the amount of error you’re going to see.
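[Editor's note: since the inequality itself is awkward to read aloud, here is the standard written form for reference, with X̂ any estimate of X computed from Y, P_e = Pr[X̂ ≠ X], and H_b the binary entropy function.]

```latex
% Fano's inequality: for any estimator \hat{X} of X computed from Y,
% with P_e = \Pr[\hat{X} \neq X] and H_b the binary entropy function,
\[
  H(X \mid Y) \;\le\; H_b(P_e) + P_e \,\log_2\!\bigl(|\mathcal{X}| - 1\bigr),
\]
% so a small decoding error forces the conditional entropy H(X|Y) to be small,
% and, read the other way, a large H(X|Y) forces a large error probability.
```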
EL: Okay, so I was reading—you wrote a lovely post about this, and I was reading that this morning, before we started talking. And I think this is what you just said, but it's one of these things that is very new to me, I'm not used to thinking like a computer scientist or information theorist or anything. So something that I was having a little bit of trouble understanding is how much this inequality depends on the particular relationship you have between the two variables, x and y, that you're looking at.
SV: So one way to answer this question is to say that all you need to know is the conditional entropy of x given y. That's it. You don't need to know anything else about how y was produced from x. All that you need to know, to put a bound on the decoding error, is the amount of entropy that's left in the system.
KK: Is that effectively computable? I mean, is that easy to compute?
SV: For the cases where you apply Fano’s inequality, yes. Typically it is. In fact, you will often construct examples where you can show what the conditional entropy is, and therefore be able to reason, directly use Fano’s inequality, to argue for the probability of error. So let me give an example in computer science of how this works.
KK: Okay, great.
SV: Suppose I want to build a—so one of the things we have to do sometimes is build a data structure for a problem, which means you're trying to solve some problem, you want to store information in a convenient way so you can access the information quickly. So you want to build some lookup table or a dictionary so that when someone comes up with the question—“Hey, is this person's name in the dictionary?”—you can quickly give an answer to the question. Ok. So now I want to say, look, I have to process these queries. I want to know how much information I need to store my data structure so that I can answer queries very, very quickly. Okay, and so you have to figure out—so one thing you'd like to do is, okay, I built this awesome data structure, but is it the best possible? I don't know, let me see if I can prove a lower bound on how much information I need to store.
So the way you would use Fano’s inequality to prove that you need a certain amount of information would be to say something like this. You would say, I'm going to design a procedure. I'm going to prove to you that this procedure correctly reconstructs the answer for any query a user might give me. So there's a query, let’s say “Is there an element in the database?” And I have my algorithm, which I will prove correctly returns this with some unknown number of bits stored. And given that it correctly returns this answer up to some error probability, I will use Fano’s inequality to say, because of that fact, it must be the case that there is a large amount of mutual information between the original data and the data structure you stored, which is, essentially, the thing that you've converted the data into through your channel. And so if that mutual information is large, the number of bits you need to store this must also be large. And therefore, with small error, you must pay at least this many bits to build this data structure. And so this idea sort of goes through a lot of, in fact, more recent work in lower bounds in data structure design and also communication. So, you know, two parties want to communicate with each other, and they want to decide whether they have the same bit string. And you want to argue that you need this many bits of information for them to communicate, to show that they have the same bit string. You use, essentially, either Fano’s inequality or even simpler versions of it to make this argument that you need at least a certain number of bits. This is used in statistics in a very different way. But that's a different story. But in computer science, this is one of the ways in which the inequality is used.
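[Editor's note: as a sketch of the "even simpler versions" of this kind of counting argument, here is a toy example of my own, not one from the episode: if an exact membership structure's query answers determine the stored set, the structure must have at least as many bits as it takes to name every possible set.]

```python
import math

# Hypothetical setup: store a k-element subset of an n-element universe and
# answer membership queries exactly. The answers to all n queries recover the
# set, so the structure needs at least log2(C(n, k)) bits, whatever its design.
n, k = 10_000, 100
lower_bound_bits = math.log2(math.comb(n, k))
print(f"any exact structure needs at least {lower_bound_bits:.0f} bits")  # roughly 800 bits
```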
EL: Okay, getting back to the example that you used, of names and genders. Can you kind of—I don't know if this is going to work exactly—but can you kind of walk us through how this might help us understand that? Obviously not having complete information about how gender does correspond to names, but—
SV: Right, so let's say you have a channel, okay? What’s going into the channel, which is x, is the full information about the person, let's say, including their gender. And the channel, you know, absorbs a lot of this information, and just spits out a name, the name of the person. That's y, and now the task is to predict, or reconstruct, the person's gender from the name. Okay.
So now you have x, you have y. You want to reconstruct x. And so you have some procedure, some algorithm, some unknown algorithm that you might want to design, that will predict the gender from the name. Okay? And now this procedure will have a certain amount of error, let's call this error p, right, the probability of error, the probability of getting it wrong, basically, is p. And so what Fano’s inequality says, roughly speaking—so I mean, I could read out the actual inequality, but on the air, it might not be easy to say—but roughly speaking, it says that this probability p, times a few other terms that are not directly relevant to this discussion, is greater than or equal to the entropy of the random variable x, which you can think of as drawn from the population of people. So I drew uniformly from the population of people in the US, right, or of Caucasians in the US, because we are limiting ourselves to that thing. So, that's our random variable x. And going through this, I get the value y. So I compute the entropy of the distribution on x conditioned on the name being, say, a particular value. So I look at how much—and basically that probability of error is greater than or equal to, you know, with some extra constants attached, this entropy.
So in other words, what it's saying is very intuitive, right? If I tell you the person is female—and we're sort of limiting ourselves to this binary choice of gender—but let's just say this person is female, you know—sorry, this person's name is so-and-so. Right? What is the range of gender? What does the gender distribution look like conditioned on having that name? So let's say the name—we think of a name that would be—let’s say Dylan, so Dylan is the name. So there's going to be some, you know, probability of the person being male and some probability of being female. And in the case of a name like Dylan, those probabilities might not be too far apart, right? So you can actually compute this entropy now. You can say, okay, what is the entropy of x given y? You just write the formula for entropy for this distribution. It’s just −(p1·log(p1) + p2·log(p2)) in this case, where p1 + p2 = 1. And so the probability of error, no matter how clever your procedure is, is going to be lower bounded by this quantity with some other constants attached to make it all make sense.
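[Editor's note: here is a small numerical version of that walkthrough, as a sketch. The 60/40 split for the name "Dylan" is invented just to have numbers, and since gender is treated as binary here, the extra log(|X|−1) term in Fano's inequality vanishes, so the bound reduces to H_b(P_e) ≥ H(X|Y).]

```python
import math

def h(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def fano_error_lower_bound(cond_entropy):
    """Smallest p in [0, 1/2] with h(p) >= cond_entropy, found by bisection.
    For a binary variable, Fano's inequality says the error probability is at least this."""
    lo, hi = 0.0, 0.5
    for _ in range(60):
        mid = (lo + hi) / 2
        if h(mid) < cond_entropy:
            lo = mid
        else:
            hi = mid
    return hi

# Invented numbers: suppose 60% of people named "Dylan" are male, 40% female.
H_given_dylan = h(0.6)   # ~0.971 bits of uncertainty left after seeing the name
print(round(H_given_dylan, 3), round(fano_error_lower_bound(H_given_dylan), 3))
# -> 0.971 0.4: no predictor, however clever, errs less than 40% of the time on "Dylan"
```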
EL: Okay.
SV: Does that help?
EL: Yeah.
KK: Right.
SV: I made this completely up on the fly. So I'm not even sure it’s correct. I think it's correct.
KK: It sounds correct. So it's sort of, the noisier your channel, the probability is going to go up, the entropy is going to be higher, right?
SV: Right, yeah.
KK: Well, you're right, that's intuitively obvious, right? Yeah. Right.
SV: Right. And the surprising thing about this is that you don't have to worry about the actual reconstruction procedure, that the amount of information is a limiting factor. No matter what you do, you have to deal with that basic information thing. And you can see why this now connects to my work on algorithmic fairness and bias now, right? Because, for example, one of the things that is often suggested is to say, Oh, you know—like in California they just did a week ago, saying, “You are not allowed to use someone's gender to give them driver’s insurance.”
KK: Okay.
SV: Now, there are many reasons why there are problems with the policy as implemented versus the intent. I understand the intent, but the policy has issues. But one issue is this: well, just removing gender may not be sufficient, because there may be signal in other variables that might allow me, allow my system, to predict gender. I don't know what it's doing, but it could internally be predicting gender. And then it would be doing the thing you're trying to prohibit by saying, just remove the gender variable. So while the intention of the rule is good, it's not clear that it will, as implemented, succeed in achieving the goals it set out to achieve. But you can't reason about this unless you can reason about information versus computation. And that's why Fano’s inequality turns out to be so important. I sort of used it in spirit in some of my work, and some people in follow-up have used it explicitly in their work to sort of show limits on, you know, to what extent you can actually reverse-engineer, you know, protected variables of this kind, like gender, from other things.
EL: Oh, yeah, that would be really important to understand where you might have bias coming in.
SV: Right. Especially if you don't know what the system is doing. And that's what's so beautiful about Fano’s inequality. It does not care.
EL: Right. Oh, that's so cool. Thanks for telling us about that. So, is this something that you'd kind of learn in one of your first—maybe not first computer science courses, but very early on in in your education, or did you pick this up along the way?
SV: Oh, no, you don’t—I mean, it depends. It is very unlikely that you will hear about Fano’s inequality in any kind of computer science, theoretical computer science class, even in grad school.
EL: Okay.
SV: Usually you pick it up from papers, or if you take a course in information theory, it's a core concept there. So if you take any course in information theory, it'll come up very quickly.
EL: Okay.
SV: But in computer science, it comes up usually when you're reading a paper, and they use a magic lower bound trick. Like, where did they get that from? That's what happened to me. It's like, where do they get this from? And then you go down a rabbit hole, and you come back up three years later with this enlightened understanding of… I mean, Fano’s inequality generalizes in many ways. I mean, there's a beautiful, there's a more geometric interpretation of what Fano’s inequality really is when you go to more advanced versions of it. So there's lots of very beautiful—that’s the thing about a lot of these inequalities, they are stated very simply, but they have these connections to things that are very broad and very deep and can be expressed in different languages, not just in information theory, also in geometry, and that makes them really cool. So there's a nice geometric analog to Fano’s inequality as well that people use in differential privacy and other places.
KK: So what does one pair with Fano’s inequality?
SV: Ah! See, when I first started listening to My Favorite Theorem, I said, “Okay, you know, if one day they ever invite me on, I’m going to talk about Fano’s inequality, and I'm going to think about what to pair with it.” So I spent a lot of time thinking about this.
KK: Good.
SV: So see you have all these fans now, that's a cool thing.
So my choice for the pairing is goat cheese with jalapeño jam spread on top of it on a cracker.
EL: Okay.
KK: Okay.
SV: And the reason I chose this pairing: because they are two things that you wouldn't normally think go together well, but they go together amazingly well on a cracker.
KK: Well of course they do.
SV: And that sort of embodies for me what Fano’s inequality is saying, that two things that you don't expect to go together go together really well.
KK: No, no, no, the tanginess of the cheese and the saltiness of the olives. Of course, that's good.
SV: Not olives, jalapeño spread, like a spicy spread.
KK: Oh, okay, even better. So this sounds very southern. So in the south what we do, I mean, I grew up in North Carolina, you take cream cheese and pepper jelly, hot pepper jelly, and you put that on.
SV: Perfect. That's exactly it.
KK: Okay. All right. Good. Okay, delicious.
EL: So do you make your own jalapeños for this, or you have a favorite brand?
SV: Oh boy, no, no. I'm a glutton, not a gourmet, so I'll eat whatever someone gives me, but I don't know how to make these things.
EL: Okay. We had a CSA last fall and had a surplus of hot peppers. And I am unfortunately not very spice-tolerant—or spiciness-tolerant. I love spices, but, you know, I can't handle the heat very well. But my spouse loves it. So I've been making these slightly sweet pickled jalapeños. I made those, and since then he's been asking me for more, so I've just been going out and getting more and making those. So I think I will be serving this to him around the time we air this episode.
KK: Good.
SV: So since you can get everything horrifying on YouTube, one horrifying thing you can watch on YouTube is the world chili-eating competitions.
EL: Oh, no.
SV: Let’s just say it involves lots of milk and barf bags.
KK: I can imagine.
SV: But yeah, I do like my chilies. I like the habañeros and things like that.
EL: Yeah, I just watched the first part of a video where everyone in an orchestra eats this really hot pepper and then they're supposed to play something, but I just couldn't make myself watch the rest. Including the brass and winds and stuff. I was just thinking, “This is so terrible.” I felt too bad for them.
SV: It’s like the Drunk History show.
KK: We’ve often joked about having, you know, drunk favorite theorem.
SV: That would be awesome. That'd be so cool.
KK: We should do that.
EL: Yeah, well, sometimes when I transcribe the episodes, I play the audio slower because then I can kind of keep up with transcribing it. And it really sounds like people are stoned. So we joked about having “Higher Mathematics.”
KK: That’s right.
SV: That’s very good.
EL: Because they’re talking, like, “So…the mean…value…theorem.”
SV: I would subscribe to that show.
EL: Note to all federal authorities: We are not doing this.
KK: No, we’re not doing it.
EL: Yeah. Well, thanks a lot for joining us. So if people want to find you online, they can find you on Twitter, which was I think how we first got introduced to each other. What is your handle on that?
SV: It's @geomblog, so Geometry Blog, that's my first blog. So g-e-o-m, b-l-o-g. I maintain a blog as well. And so that's among the places—I’m a social media butterfly. So you can find me anywhere.
EL: Yeah.
SV: So yeah, but web page is also a good place, my University of Utah page.
EL: We’ll include that, along with the link to the post about Fano’s inequality, so people can see it. You know, it really helped me to read that before talking with you about it, to get some of the actual inequalities and the terms that appear in there straight in my head. So, yeah, thanks a lot for joining us.
KK: Thanks, Suresh.
SV: Thanks for having me.
[outro]
Evelyn Lamb: Hello, and welcome to my favorite theorem, a math podcast where we ask mathematicians to tell us about their favorite theorems. I'm Evelyn Lamb. I'm a freelance math and science writer in Salt Lake City, Utah. And I am joined as usual by my co-host Kevin. Can you introduce yourself?
Kevin Knudson: Sure. I'm still Kevin Knudson, professor of mathematics at the University of Florida. How are things going?
EL: All right. Yeah.
KK: Well, we just talked yesterday, so I doubt much has changed, right? Except I seem to have injured myself between yesterday and today. I think it's a function of being—not 50, but not able to say that for much longer.
EL: Yeah, it happens.
KK: It does.
EL: Yeah. Well, hopefully, your podcasting muscles have not been injured.
KK: I just need a few fingers for that.
EL: Alright, so we are very happy today to welcome Ursula Whitcher to the show. Hi, can you tell us a little bit about yourself?
Ursula Whitcher: Hi, my name is Ursula. I am an associate editor at Mathematical Reviews, which, if you've ever used MathSciNet to look up a paper or check your Erdős number or any of those exciting things, there are actually 16 associate editors like me checking all the math that gets posted on MathSciNet and trying to make sure that it makes sense. I got my PhD at the University of Washington in algebraic geometry. I did a postdoc in California and spent a while as a professor at the University of Wisconsin Eau Claire, and then moved here to Ann Arbor where it's a little bit warmer, to start a job here.
EL: Ann Arbor being warmer is just kind of a scary proposition.
UW: It’s barely even snowing. It's kind of weird.
EL: Yeah. Well, and yeah, you mentioned Mathematical Reviews. I—before you got this job, I was not aware that, you know, there were, like, full time employees just of Mathematical Reviews, so that's kind of an interesting thing.
UW: Yeah, it's a really cool operation. We actually go back to sometime in probably the ‘40s.
KK: I think that’s right, yeah.
EL: Oh wow.
UW: So it used to be a paper operation where you could sign up and subscribe to the journal. And at some point, we moved entirely online.
KK: I’m old enough to remember in grad school, when you could get the year’s Math Reviews on CD ROM before MathSciNet was a thing. And you know, I remember pulling the old Math Reviews, physical copies, off the shelf to look up reviews.
UW: We actually have in the basement this set of file cards that our founder, who came from Germany around the Second World War, he had a collection of handwritten cards of all the potential reviewers and their possible interests. And we've still got that floating around. So there's a cool archival project.
KK: I’m ashamed to admit that I'm a lapsed reviewer. I used to review, and then I kind of got busy doing other things and the editors finally wised up and stopped sending me papers.
UW: I try to tell people to just be really picky and only accept the papers that you're really excited to read.
KK: I feel really terrible about this. So maybe I should come back. I owe an apology to you and the other editors.
UW: Yeah, come back. And then just be really super, super picky and only take things that you are truly overjoyed to read. We don't mind. You know, part of my day, every day, is reading apologies from people for not reviewing. So I’ve become sort of a connoisseur of the apology letter.
KK: Sure. So is part of your position also that you have some sort of visiting scholar deal at the University of Michigan? Does that come with this?
UW: Yeah, that is. So I get to hang out at the University of Michigan and go to math seminars and learn about all kinds of cool math and use the library card. I'm a really heavy user of my University of Michigan library card. So yeah.
EL: Those are excellent.
KK: It’s a great campus. That’s a great department, a lot of excellent people there.
EL: Yeah. So, what is your favorite theorem, or the favorite theorem you would like to talk about today?
UW: So I decided that I would talk about mirror theorems as a genre.
EL: Okay.
UW: I don't know that I have a single favorite mirror theorem, although I might have a favorite mirror theorem of the past year or two. But as this kind of class of theorems, these are a weird thing, because they run kind of backwards.
Like, typically there's this thing that happens where mathematicians are just hanging out and doing math because math is cool. And then at some point, somebody comes along and is like, “Oh, I see a practical use for this. And maybe I can spin it off into biology or physics or engineering or what have you.” Mirror theorems came the other way. They started with a physical observation that there were two ways of phrasing a theoretical physics idea about possible extra dimensions and string theory and gravity and all kinds of cool things. And then people chewed on that physical duality and figured out how to turn it into precise mathematical statements. So there are lots of different precise mathematical statements, each encapsulating maybe something different about the way these physical theories were phrased, or maybe building on, sort of chaining off of, the mathematics and saying something that no longer directly relates to something you could state about a possible physical world. But there is still, like, a neat mathematical relationship you wouldn't have figured out without having the underlying physical intuition.
EL: Yeah. And so the general area this belongs to is called mirror symmetry. And when I first heard that phrase, I assumed it was something about, like, group theory that was looking at, you know, more tangible things that I would consider symmetric, like what it looks like when you look in a mirror. But that's not what it is, I learned.
UW: So I can tell you why it's called mirror symmetry, although it's kind of a silly reason. The first formulations of mirror symmetry, people were looking at these spaces called Calabi-Yau three-folds, which are—so there are three complex dimensions, six real dimensions, they could maybe be the extra dimensions of the universe, if you're doing string theory. And associated with a Calabi-Yau three-fold, you have a bunch of numbers that tell you about its topological information, sort of general stuff about what this six-dimensional shape looks like. And you can arrange those numbers in a diamond that's called the Hodge diamond. And then you can draw a little diagonal line through the Hodge diamond. And some of the mirror theorems predict that if I hand you one Calabi-Yau three-fold with a certain Hodge diamond, there should be somewhere out there in the universe another Calabi-Yau three-fold with another Hodge diamond. And if you flip across this diagonal axis, one Hodge diamond should turn into the other Hodge diamond.
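[Editor's note: for reference, the standard way to write that flip for a Calabi-Yau three-fold M with mirror M°, added here rather than quoted from the episode, is that the two independent Hodge numbers get swapped.]

```latex
% Mirror symmetry for Calabi-Yau three-folds swaps the two independent Hodge numbers:
\[
  h^{1,1}(M) = h^{2,1}(M^{\circ}), \qquad h^{2,1}(M) = h^{1,1}(M^{\circ}).
\]
% For the quintic three-fold discussed later in the episode,
% (h^{1,1}, h^{2,1}) = (1, 101), so its mirror has (h^{1,1}, h^{2,1}) = (101, 1).
```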
EL: Okay.
UW: So there is a mirror relationship there. And there is a really simple reflection there. But it's like you have to do a whole bunch of topology, and you have to do a whole bunch of geometry and you, like, convince yourself that Hodge diamonds are a thing. And then you have to somehow—like, once you've convinced yourself Hodge diamonds are a thing, you also have to convince yourself that you can go out there and find another space that has the right numbers in the diamond.
EL: So the mirror is, like, the very simplest thing about this. It’s this whole elaborate journey to get to the mirror.
UW: Yeah.
EL: Okay, interesting. I didn't actually know that that was where the mirror came from. So yeah. So can you tell us what these mirror theorems are here?
UW: Sure. So one version of it might be what I said, that given a Calabi-Yau manifold, with this information, that it has a mirror.
Or so then this diamond of information is telling you something about the way that the space changes. And there are different types of information that you could look at. You could look at how it changes algebraically, like if you wrote down an equation with some polynomials, and you changed those coefficients on the polynomials just a little bit, sort of how many different sorts of things, how many possible deformations of that sort could you have? That's one thing that you can measure using, like, one number in this diamond.
EL: Okay.
UW: And then you can also try to measure symplectic structure, which is a related more sort of physics-y information that happens over in a different part of the diamond. And so another type of mirror theorem, maybe a more precise type of mirror theorem, would say, okay, so these deformations measured by this Hodge number on this manifold are isomorphic in some sense to these other sorts of deformations measured by this other Hodge number on this other mirror manifold.
KK: Is there some trick for constructing these mirror manifolds if they exist?
UW: Yeah, there are. There are sort of recipes. And one of the games that people play with mirror symmetry is trying to figure out where the different recipes overlap, when you’ve, like, really found a new mirror construction, and when you’ve found just another way of looking at an old mirror construction. If I hand you one manifold, does it only have a unique mirror or does it have multiple mirrors?
KK: So my advisor tried to teach me Hodge theory once. And I can't even remember exactly what goes on, except there's some sort of bi-grading in the cohomology right?
UW: Right.
KK: And is that where this diamond shows up?
UW: Yeah, exactly. So you think back to when you first learned complex analysis, and there was, like, d/dz direction and there was the d/dz̅ direction.
KK: Right.
UW: And we're working in a setting where we can break up the cohomology really nicely and say, okay, these are the parts of my cohomology that come from a certain number of holomorphic d/dz kinds of things. And these are the other pieces of cohomology that can be decomposed and look like dz̅. And since it’s a Kähler manifold, everything fits together in a nice way.
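[Editor's note: the decomposition being described can be written compactly as follows; this is standard Hodge theory for compact Kähler manifolds, added for reference.]

```latex
% The k-th cohomology splits according to how many dz's versus dz-bar's appear:
\[
  H^{k}(X, \mathbb{C}) \;=\; \bigoplus_{p+q=k} H^{p,q}(X),
  \qquad h^{p,q} := \dim_{\mathbb{C}} H^{p,q}(X),
\]
% and the numbers h^{p,q}, arranged by (p, q), form the Hodge diamond.
```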
KK: Right. Okay, there. That's all I needed to know, I think. That's it, you summarized it, you're done.
EL: So, I have a question. When you talk about like mirror theorems, I feel like some amount of mirror symmetry stuff is still conjectural—or “I feel like”—my brief perusal of Wikipedia on this indicates that there are some conjectures involved. And so how much of these theorems are that in different settings, these mirror relationships hold, and how much of them are small steps in this one big conjectural picture. Does that question make sense?
UW: Yeah. So I feel like we know a ton of stuff about Calabi-Yau three-folds that are realized in sort of the nice, natural ways that physicists first started writing down things about Calabi-Yau three-folds.
When you start getting more general on the mathematical side—for instance, there's a whole flavor of mirror symmetry that's called homological mirror symmetry that talks about derived categories and the Fukaya category—a lot of that stuff has been very conjectural. And it's at the point where people are starting to write down specific theorems about specific classes of examples. And so that's maybe one of the most exciting parts of mirror symmetry right now.
And then there are also generalizations to broader classes of spaces, where it's not just Calabi-Yau three-folds where maybe you're allowing a more general kind of variety or relaxing things, or you're starting to look at, what if we went back to the physics language about potentials, instead of talking about actual geometric spaces? Those start having more conjectural flavor.
EL: Okay, so a lot of this is in the original thing, but then there are different settings where mirror symmetry might be taking place?
UW: Yeah.
EL: Okay. And I assume if you're such a connoisseur of mirror theorems, that this is related to your research also. What kinds of questions are you looking at in mirror symmetry?
UW: Yeah, so I spend some time just playing around with different mirror constructions and seeing if I can match them up, which is always a fun game, trying to see what you know. Lately, what I've been really excited about is taking the sort of classical old-fashioned hands-on mirror constructions where I can hand you a space, and I can take another space, and I can say these two things are mirror manifolds. And then seeing what knowing that tells me, maybe about number theory, about maybe doing something over a finite field in a setting that is less obviously geometry, but where maybe you can still exploit this idea that you have all of this extra structure that you know about because of the mirror and start trying to prove theorems that way.
EL: Oh, wow. I did not know there is this connection in number theory. This is like a whole new tunnel coming out here.
UW: Yeah, no, it's super awesome. We were able to make predictions about zeta functions of K3 surfaces. And in fact we have a theorem about a factor of the zeta function for Calabi-Yau manifolds of any dimension. And it's a very specific kind of Calabi-Yau manifold, but it's so hard to prove anything about zeta functions! In part because if you're a connoisseur of zeta functions, you know they are controlled by the size of the cohomology, so once your cohomology starts getting really big, it’s really difficult to compute anything directly.
EL: So, like, how tangible are these? Like, here is a manifold and here is its mirror? Are there some manifolds you can really write down and, like, have a visual picture in your mind of what these things actually look like?
UW: Yeah, definitely. So I'm going to tell you about two mirror constructions. I think one of these is maybe more friendly to someone who likes geometry. And one of these is more friendly to someone who likes linear algebra.
EL: Okay.
UW: So the oldest, oldest mirror symmetry construction was—it's due to Greene and Plesser, who were physicists. And they knew that they were looking for things with certain symmetries. So they took the diagonal quintic in projective four-space. I have to get my dimensions right, because I actually often think about four dimensions instead of six.
So you're taking x^5 + y^5 + z^5 plus, then, v^5 and w^5, because we ran out of letters, we had to loop around.
EL: Go back.
UW: Yeah. And you say, okay, well, these are complex numbers, I could multiply any of them by a fifth root of unity, and I would have preserved my total space, right?
Except we're working in projective space, so I have to throw away one of my overall fifth roots of unity, because if I multiply by the same fifth root of unity on every coordinate, that doesn't do anything. And then they wanted to maybe fit this into a family where they deformed by the product of all the variables. And if you want to have symmetries of that entire family, you should also make sure that the product of all of your roots of unity, I think, multiplies to 1? So anyway, you throw out a couple of fifth roots of unity, because you have these other symmetries from your ambient space and things that you're interested in, and you end up with basically three copies of the fifth roots of unity that you can multiply by.
So I've got x^5 + y^5 + z^5 + v^5 + w^5, and I'm modding out by (Z/5Z)^3. Right? So I’m identifying all of these points in this space, right? I've just like got, like, 125 different things, and I’m shoving all these 125 different things together. So when I do that, this space—which was all nice and smooth and friendly, and it's named after Fermat, because Fermat was interested in equations like that—all of a sudden, I'd made it like, really stuck together and messy, and singular.
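[Editor's note: a quick computational check of that counting, as a sketch of my own using the standard description of the Greene-Plesser symmetry group: phase tuples summing to zero mod 5, taken modulo the overall projective rescaling, leave exactly 125 elements, which is (Z/5Z)^3.]

```python
from itertools import product

# Each tuple (a1,...,a5) in (Z/5)^5 acts by x_i -> zeta^{a_i} x_i for a fixed
# fifth root of unity zeta. Every tuple preserves x^5+y^5+z^5+v^5+w^5; keeping
# the deformation term x*y*z*v*w as well forces a1+...+a5 = 0 mod 5.
preserving = [a for a in product(range(5), repeat=5) if sum(a) % 5 == 0]

# The diagonal tuple (k,k,k,k,k) acts trivially in projective space, so quotient
# it out by normalizing the first coordinate to 0.
def representative(a):
    return tuple((x - a[0]) % 5 for x in a)

quotient = {representative(a) for a in preserving}
print(len(preserving), len(quotient))  # 625 125 -> the (Z/5Z)^3 of the episode
```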
KK: Right.
UW: So I go in as a geometer, and I start blowing up, which is what algebraic geometers call this process of going in with your straw and your balloon, and blowing and smoothing out and making everything all nice and shiny again, right?
KK: Right.
UW: And when you do that, you've got a new space, and that's your mirror.
EL: Okay.
KK: So you blow up all the singularities?
UW: Yeah, you resolve the singularities.
KK: That’s a lot.
UW: Yeah. So what you had was, you had something which is floating around in P4. And because we picked a special example, it happens to have a lot of algebraic classes. But for a thing in P4, the only algebraic piece you really know about in it is, like, intersecting with a hyperplane.
So it has lots and lots of different ways you can vary all of its different complex parameters, but only this one algebraic piece that you know about. And then when you go through this procedure, you end up with something which has very few algebraic ways to modify it. It actually naturally has only a one-parameter algebraic deformation space. But then there are all of these cool new classes that you know about, because you just blew up all of these things. So you're trading around the different types of information you have. You go from lots of deformations on one side to very few deformations on the other.
KK: Okay, so that was the geometry. What's the linear algebra one?
UW: Okay, so the linear algebra one is so much fun. Let's go back to that same space.
EL: I wish our listeners could see how big your smile is right now.
KK: That’s right. It’s really remarkable.
EL: It is truly amazing.
UW: Right. So we've got this polynomial, right, x^5 + y^5 + z^5 + w^5 + v^5. And that thing I was telling you about, finding the different fifth roots of unity that we could multiply things by, that’s, like, a super tedious algebraic process, right, where you just sit down and you're like, gosh, I can multiply the different variables by fifth roots of unity. And then I throw away some of my fifth roots of unity. So you start with that, the equation and the little algebraic crank that you run to get a group associated with it.
And then you convert your polynomial equation to a matrix. In this case, my matrix is just going to be like all fives down the diagonal.
KK: Okay.
UW: But you can do this more generally with other types of polynomials. The ones that work well for this procedure have all kinds of fancy names, like loops and chains of Fermats. So, like, Fermats are just the different pure powers of variables. Loops would be if I did something like x^5 + y^5 + z^5 + …, and then I looped back around and used an x again.
EL: Okay.
UW: Or, sorry, it should have been like x^4·y + y^4·z, and so on. So you can really see the looping about to happen.
And then chains are a similar thing. Anyway, so given one of these things, you can just read off the powers on your polynomial, and you can stick each one of those into a matrix. And then to get your mirror, you transpose the matrix.
EL: Oh, of course!
UW: And then you run this little crank, to tell you about an associated group.
EL: Okay.
UW: So figuring out which group goes with your transposed matrix is kind of a little bit more work. But I love the fact that you have this, like, huge, complicated physics thing with all this stuff, like the Hodge diamond, and then you're like, oh, and now we transpose a matrix! And, you know, it’s a really great duality, right, because if you transpose the matrix again, you get back where you started.
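[Editor's note: a small illustration in the spirit of the transposition rule being described, in the style of the Berglund-Hübsch construction. The loop polynomial x^4·y + y^4·z + z^4·x is my own example of the allowed type, and the group bookkeeping is left out.]

```python
# Each row of the matrix records the exponents of one monomial; the Fermat
# quintic would just be fives down the diagonal, which is its own transpose.
variables = ["x", "y", "z"]
exponent_matrix = [
    [4, 1, 0],  # x^4 * y
    [0, 4, 1],  # y^4 * z
    [1, 0, 4],  # z^4 * x
]

def polynomial(matrix, names):
    """Turn an exponent matrix back into a readable polynomial string."""
    terms = []
    for row in matrix:
        terms.append("*".join(v if e == 1 else f"{v}^{e}" for v, e in zip(names, row) if e))
    return " + ".join(terms)

mirror_matrix = [list(col) for col in zip(*exponent_matrix)]  # transpose
print("original:", polynomial(exponent_matrix, variables))
print("mirror:  ", polynomial(mirror_matrix, variables))      # the loop runs the other way
```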
KK: Sure.
EL: Right. Yeah. Well, and it seems like so many questions in math are, “How can we make this question into linear algebra?” It's just, like, one of the biggest hammers mathematicians have.
UW: Yeah.
EL: So another part of this podcast is that we ask our guest to pair their theorem, or in your case, you know, set of theorems, or flavor of theorems, with something. So what have you chosen as you're pairing?
UW: I decided that we should pair the mirror theorems with really fancy ramen.
EL: Okay. So yeah, tell me why.
UW: Okay. So really fancy ramen, like, the good Japanese-style, where you've simmered the broth down for hours and hours, and it's incredibly complex, not the kind that you just go buy in a packet, although that also has its use.
EL: Yeah, no, Top Ramen.
UW: Right. So it's complex. It has, like, a million different variants, right? You can get it with miso, you can get it spicy, you can put different things in it, you can decide whether you want an egg in it that gets a little bit melty or not, all of these different little choices that you get. And yeah, it seems like it's this really simple thing, it’s just noodle soup. And we all know what Top Ramen is. But there's so much more to it. The other reason is that I just personally, historically associate fancy ramen with mirror theorems. Because there was a special semester at the Fields Institute in Toronto, and Toronto has a bunch of amazing ramen. So a lot of the people who were there for that special semester grew to associate the whole thing with fancy ramen, to the point where one of my friends, who's an Italian mathematician, we were some other place in Canada, I think it was Ottawa, and she was like, “Well, why don't we just get ramen for lunch?” And I was like, “Sorry, it turns out that Canada is not a uniform source of amazing ramen.” That was special to Toronto.
KK: Yeah, Ottawa is more about the poutine, I think.
UW: Yeah, I mean, absolutely. There's great stuff in Ottawa. It just, like, didn't have this beautiful ramen-mirror symmetry pairing that we had all—
EL: Right, I really liked this pairing. It works on multiple levels.
KK: Sure. It's personal, but it also works conceptually, it's really good. Yeah. Well, so how long have you been at Math Reviews?
UW: I think I'm in my third year.
KK: Okay.
KK: Do you enjoy it?
UW: I do. It’s a lot of fun.
KK: Is it a permanent gig? Or are these things time limited?
UW: Yeah, it's permanent. And in fact, we are hiring a number theorist. So if you know any number theorists out there who are really interested in, you know, precise editing of mathematics and reading about mathematics and cool stuff like that, tell them to look at our ad on Math Jobs. We're also hiring in analysis and math physics. And we've been hiring in combinatorics as well, although that was a faster hiring process.
EL: Yeah. And we also like to, you know, plug things that you're doing. I know, in addition to math, you have many other creative outlets, including some poetry, right, related to math?
UW: That’s right.
EL: Where can people find that?
UW: Ah, well, you can look at my website. Let's see, if you want the poetry you should look at my personal website, which is yarntheory.net.
There's one poem that was just up recently on JoAnne Growney’s blog.
EL: Yeah, that's right.
UW: And I have a poem that's coming out soon—soon, I’m not sure how soon—in the Journal of Humanistic Mathematics. Yeah, it's a really goofy thing where I made up some form involving the group of units, the multiplicative group associated to the field of seven elements, and then played around with that.
EL: I'm really, really looking forward to getting into that. Do you have a little bit of explanation of the mathematical structure in there?
UW: Just the very smallest. I mean, I think what I did was I listed, I found the generators of this group, and then I listed out where they would go as you generated them, and then I looked for the ones that seemed like they were repeating in an order that would make a cool poem structure.
EL: Okay, cool. Yeah. Well, thanks a lot for joining us. We'll be sure to share all that and hopefully people can find some of your work and enjoy it.
UW: Cool.
KK: Thanks, Ursula.
UW: Thanks so much for having me.
[outro]
Evelyn Lamb: Hello and welcome to My Favorite Theorem, a math podcast where we get mathematicians to tell us about theorems. I'm one of your hosts, Evelyn Lamb. I'm a freelance math and science writer in Salt Lake City, Utah. And this is your other host.
Kevin Knudson: Hi, I’m Kevin Knudson, professor of mathematics at the University of Florida. It's Friday night.
EL: Yeah, yeah, I kind of did things in a weird order today. So there's this concert at the Utah Symphony that I wanted to go to, but I can't go tonight or tomorrow night, which are the only two nights they're doing it. But they had an open rehearsal today. So I went to a symphony concert this morning. And now I'm doing work tonight, so it's kind of a backwards day.
KK: Yeah. Well, I got up super early to meet with my financial advisor.
EL: Oh, aren’t you an adult?
KK: I do want to retire someday. I’ve got 20 years yet, but you know, now it's nighttime and my wife is watching Drag Race and I'm talking about math. So.
EL: Cool. Yeah.
KK: Life is good.
EL: Yes. Well, we're very happy today to have Fawn Nguyen on the show. Hi, Fawn. Could you tell us a little bit about yourself?
Fawn Nguyen: Hi, Evelyn. Hi, Kevin. I was thinking, “How nerdy can we be? It’s Friday.”
So my name is Fawn Nguyen. I teach at Mesa Union School in Somis, California. And it's about 60 miles north of LA.
EL: Okay.
FN: 30 miles south of Santa Barbara. I teach—the school I'm at is a K-8, one-school district, of about 600 students. Of those 600, about 190 of them are in the junior high, 6-8, but it's a unique one-school district. So we're a family. It's nice. It's my 16th year there but 27th overall.
EL: Okay, and where did you teach before then?
FN: I was in Oregon. And I was actually a science teacher.
KK: Is your current place on the coast or a little more inland?
FN: Coast, yeah, about 10 miles from the coast. I think we have perfect weather, the best weather in the world. So.
KK: It’s beautiful there, it’s really—it’s hard to complain. Yeah.
FN: Yeah. It’s reflected in the mortgage, or rent.
EL: Yeah
KK: Right.
FN: Big time.
EL: So yeah, what theorem have you decided to share with us today?
FN: You mean, not everyone else chose the Pythagorean Theorem?
EL: It is a very good theorem.
FN: Yeah. I chose the Pythagorean Theorem. I have some reasons, actually five reasons.
KK: Five, good.
FN: I was thinking, yeah, that's a lot of reasons considering I don't have, you know, five reasons to do anything else! So I don't know, should I just talk about my reasons?
EL: Yeah, what’s your first reason?
FN: Jump right in?
EL: Well, actually, we should probably actually at least say what the Pythagorean theorem is. It's a very familiar theorem, and everyone should have heard about it in a middle school math class from a teacher as great as Fawn. But yeah, so could you summarize the theorem for us?
FN: Well, that's just it. Yeah, I chose it for one, it’s the first and most obvious reason, because I am at the middle school. And so this is a big one for us, if not the only one. And it's within my pay grade. I can wrap my head around this one. Yeah, it's one of our eighth grade Common Core standards. And the theorem states that when you have a right triangle, the sum of the squares of the two legs is equal to the square of the hypotenuse.
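[Editor's note: in symbols, for a right triangle with legs a and b and hypotenuse c, the theorem says \(a^2 + b^2 = c^2\).]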
KK: Right.
FN: How did I do, Kevin?
KK: Well, yeah, that's perfect. In fact, it just so happens that today I was looking through the 1847 Oliver Byrne edition of Euclid’s Elements, this sort of very famous one with the pictures, where it's all colors and shapes and all of that, and I just happened to look at that theorem, and the proof of it, which is really very nice.
FN: Yeah. So it being you know, the middle school one for us, and also, when I talk about my students doing this theorem—I just want to make sure that you understand that I no longer teach eighth grade, though. This is the first year actually at Mesa, and I've been there 16 years, that I do not get eighth graders. I'm teaching sixth graders. So when I refer to the lessons, I just want to make sure that you understand these are my former students.
EL: Okay.
FN: Yeah. And once upon a time, we tracked our eighth graders at Mesa. So we had a geometry class for the eighth graders. And so of course, we studied the Pythagorean theorem then.
KK: So you have reasons.
FN: I have reasons. So that was the first reason, it’s a big one because, yeah. The second reason is there are so many proofs for this theorem, right? They're mainly algebraic or geometric proofs, but there are more of them than for any other theorem. So it's very well known. And, you know, if you ever Google it, you get plenty of different proofs. And I had to look this up, but there was a book published in 1940 that already had 370 proofs in it.
KK: Yeah.
FN: Yeah. Even one of our presidents, I don't know if you know this, but yeah, this is some nice little trivia for the students.
EL: Yeah.
FN: One of our presidents, Garfield, submitted a proof back in 18-something.
EL: Yeah.
FN: He used trapezoids to do that.
KK: He was still in Congress at the time, I think the story is that, you know, that he was in the House of Representatives. And like, it was sort of slow on the floor that day, and he figured out this proof, right.
FN: Yeah. And then people continue to submit, and the latest one that I know of was submitted just over a year ago, back in November 2017. That's the latest one I know. Maybe there was one just submitted two hours ago, who knows? And his was rearranging the a-squared and b-squared, the smaller squares, into a parallelogram. So I thought that was interesting. Yeah. And what's interesting is Pythagoras, even though it's the Pythagorean Theorem and he was given credit for it, it was known long before him. And I guess there's evidence to suggest that it was developed by a Hindu mathematician around 800 BC.
EL: Okay.
FN: And Pythagoras was what? 500-something.
KK: Something like that.
EL: Yeah.
FN: Yeah, something like that. But he was the first, I guess he got credit, because he was the first to submit a proof. He wasn't just talking about it, I guess it was official, it was a formal proof. And his was rearrangement. And I think that's a diagram that a lot of us see. And the kids see it. It's the one where you got the big, the big c square in the middle with the four right triangles around it, four congruent right triangles. Yeah. And then just by rearranging that big c squared became two smaller squares, your a squared and b squared. Yeah.
EL: Yeah. And I think it was known by, or—you know, I'm not a math historian. And I don't want to make up too much history today. But I think it has been known by a lot of different people, even as far back as Egyptians and Babylonians and things, but maybe not presented as a mathematical theorem, in the same kind of way that we might think about theorems now. But yeah, I think this is one of these things that like pretty much every human culture kind of comes up with, figuring out that this is true, this relationship.
KK: Yeah, I think it was recently, wasn’t it? Or last year? There's this Babylonian tablet. And I remember seeing on Twitter or something, there was some controversy about someone claiming that this proved that the Babylonians knew all kinds of stuff, but really—
EL: Well, they definitely knew Pythagorean triples.
KK: Yeah, they knew lots of triples. Maybe you wrote about this, Evelyn.
EL: I did write about it, but we won’t derail it this way. We can put a link to that. I’ll get too bothered.
FN: Now that you brought up Pythagorean triples, how many do you know? How many of those can you get the kids to figure out? Of course, Einstein also submitted a proof. And I thought it was funny that people consider Einstein’s proof to be the most elegant. And I'm thinking, “Well, duh, it’s Einstein.” Yeah. And I guess I would have to agree, because there were a lot of rearrangements in the proofs, but Einstein, you know, is like, “Yeah, I don't need no stinking rearrangement.” So he stayed with the right triangle. And what he did was draw in the altitude from the 90 degree angle to the hypotenuse, and used similar triangles. So there was no rearrangement. By drawing in that altitude, he got himself three similar triangles. And then he drew squares off of the hypotenuse of each one of those triangles, and then just wrote up an equation. You divide everything by a common factor, and you end up with a squared plus b squared equals c squared. It's hard to do without the image of it, but yeah.
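[Editor's note: a sketch of the similar-triangle argument described here. The altitude from the right angle splits the triangle with hypotenuse c into two smaller triangles, each similar to the original, with hypotenuses a and b. For right triangles of this common shape, the area is a fixed constant k times the square of the hypotenuse, and the two pieces together make up the whole, so
\[ k a^2 + k b^2 = k c^2, \qquad\text{hence}\qquad a^2 + b^2 = c^2. \]]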
EL: But yeah, it is really a lovely one.
FN: Yeah. And this is something I didn't know. And it was interesting. I didn't know this until I was teaching it to my eighth graders. I mean, normally, we just see those squares coming off of the right triangle. And then we were using Geometer’s Sketchpad at the time, and one of the high school students made an animated sketch of the Pythagorean theorem. And, you know, he was literally drawing Harry Potter coming off of the three sides. And I just said, oh yeah, you don't have to have squares, as long as they’re similar figures coming off the edges. That would be fine. So that was fun to do. Yeah. So now I have my kids just draw circles, or anything but a square, coming off of the sides, you know, do other stuff.
EL: Well, now I’m trying to—my brain just went to Harry Potter-nuse.
KK: Boo.
EL: I’m sorry, I’m sorry.
FN: That’s a good one.
KK: So I have forgotten when I actually learned this in life. You know, it's one of those things that you internalize so much that you can't remember what stage of your education you actually learned it in. So this is this is an actual Common Core eighth grade standard?
FN: Yes, yes. In the eighth grade, yeah.
KK: I grew up before the Common Core, so I don't really remember when we learned this.
FN: I don't know. Yeah, prior to Common Core, I was teaching it in geometry. And I don't think it was—it wasn't in algebra, you know, prior to these things we had algebra and then geometry. So yeah.
My third reason—I’m actually keeping track, so that was the second, lots of proofs. So the third reason I love the Pythagorean theorem is one fine day it led me to ask one of the best questions I'd asked of my geometry students. I said to them, “I wonder if you know how to graph an irrational number on the number line.” I mean, the current eighth grade math standard is for the kids to approximate where an irrational number is on the number line. That's the extent of the standard. So I went further and just asked my kids to locate it exactly. You know, what the heck?
EL: Nice. Yeah.
FN: And I actually wrote a blog post about it, because it was one of those magical lessons where you didn't want the class to end. And so I titled the post with “The question was mine, but the answer was all his.” And so I just threw it out to the class, I began with just, “Hey, where can we find—how do you construct the square root of seven on the number line?” And so, you know, they did the usual struggle and just playing around with it, but one of the kids towards the end of class, he got it, he came up with a solution. And I think when I saw it, and heard him explain it, it made me tear up, because it's like, so beautiful. And I'm so glad I did, because it was not, you know, a standard at all. And it was just something at the spur of the moment. I wanted to know, because we'd been working a lot with the Pythagorean theorem. And, yeah, so what he did was he drew two concentric circles, one with radius three and one with radius four on the coordinate plane, and the center is at (0,0). If you can imagine two concentric circles. And then he drew in y=-3, a line y=-3. And then he drew a line perpendicular to that line, that horizontal line, so that it intersects the—right, perpendicular to the horizontal line at negative three. And it intersects the larger circle, the one with radius four.
KK: Yeah, okay.
FN: So eventually what he did was he created, yeah, so you would have a right triangle created with one of the corners at (0,0). And the triangle would have legs of—the hypotenuse would be, what, four, the hypotenuse is four. One of the legs is three, and the other leg must be √7.
EL: Oh.
KK: Oh, yeah, okay.
FN: Yeah, yeah. So it's just so beautiful.
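[Editor's note: checking the construction: the point where the line y = −3 meets the circle of radius 4 centered at the origin satisfies
\[ x^2 + (-3)^2 = 4^2, \qquad\text{so}\qquad x = \sqrt{16 - 9} = \sqrt{7}, \]
and the vertical line through that point crosses the number line exactly at √7.]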
EL: That is very clever!
FN: Yeah, it really was. So every time I think about the Pythagorean theorem, I think back on that lesson. The kids really tried. And then from √7, we tried other roots. And we had a great time and continued to the next day.
EL: Oh, nice. I really liked that. That brought a big smile to my face.
KK: Yeah.
FN: The fourth reason I love the Pythagorean theorem is it always makes me think of Fermat’s Last Theorem. You know, it looks familiar, similar enough, where it states that no three positive integers a, b, and c can satisfy the equation a^n + b^n = c^n for any integer value of n greater than 2. So it works for the Pythagorean theorem, where n is 2, but no exponent greater than two works. So I love—whenever I can, I love the history of mathematics, and I try to bring that in with the kids. So I read the book on Fermat’s Last Theorem, and I bring it up with the students for them to realize, oh my gosh, this man, Andrew Wiles, who solved it—and it's, you know, it's an over-300-year-old theorem. And yeah, for him to first learn about the theorem when he was 10, and then to spend his life devoted to it. I mean, I can't think of a more beautiful love story than that. And yeah, so I bring that to the kids. And I actually showed them the first 10 minutes of the BBC documentary on Andrew Wiles. And right when he tears up, you know, I cannot stop tearing up at the same time because it—I don't know, it's just, it's that kind of dedication and perseverance. It's magical, and it's what mathematicians do. And so, you know, hopefully that supports all this productive struggle, and just the love for mathematics. So I kind of get all geeky on the kids.
EL: Yeah, that is a lovely documentary.
FN: Yeah. Yeah. It's beautiful. All right. My fifth and final reason for loving this theorem is Pythagoras himself. What a nut!
EL: Yeah, I was about to say, he was one weird dude.
FN: Yeah, yeah. So, I mean, he was a mathematician and philosopher, astronomer and who knows what else. And the whole mystery wrapped up in the Pythagorean school, right? He has all these students, devotees. I don't know, it's like a cult! It really is like a cult because they had a strict diet, their clothing, their behaviors are a certain way. They couldn't eat meat or beans, I heard.
EL: Yeah.
FN: Yeah. And something about farting. And they believed that each time you pass gas, part of your soul is gone.
EL: That’s pretty dire for a lot of us, I think.
FN: Yeah. And what's remarkable also was that the very theorem that's named after him, you know, that's where I guess one of his students—I don't remember his name—apparently discovered, you know, the hypotenuse of √2 on the simple 1-by-1 isosceles triangle, and look what that did to him. The story goes he was thrown overboard for speaking up. He said, hey, there might be this possibility. So that's always fun, right? Death and mathematics, right?
EL: Dire consequences. Give your students a good gory story to go with it.
FN: I always like that. Yeah. But it's the start of irrational numbers. And of course, the Greek geometry—that mathematics is continuous and not as discrete as they had thought.
EL: Well, and it is an interesting irony, then, that the Pythagorean theorem is one really easy way to generate examples of irrational numbers, where you find rational sides and a whole lot of them give you irrational hypotenuses.
FN: Yes.
EL: And then, you know, this theorem is the downfall of this idea that all numbers must be rational.
FN: Right. And I mean, the whole cult, I mean, that revelation just completely, you know, turned their belief upside down, turned the mathematical world at that time upside down. It jeopardized and just humiliated their thinking and their entire belief system. So I can just imagine at that time what that did. So I don't know of any modern story that has that kind of equivalent.
EL: Yeah, no one really based their religion on Fermat’s Last Theorem being untrue. Or something like this.
FN: Right, right. Exactly.
EL: Yeah. I like all of your reasons. And you've touched on some really great—like, I will definitely share some links to some of those proofs of the Pythagorean theorem you mentioned.
So another part of this podcast is that we ask our guests to pair their theorem with something. So what have you chosen for your pairing?
FN: I chose football.
KK: Okay, all right.
FN: I chose football. It's my love. I love all things football. And the reason I chose football is simply because of this one video. And I don't know if you've seen it. I don't know if anyone's mentioned it. But I think a lot of geometry teachers may have shown it. It's by Benjamin Watson doing a touchdown-saving tackle. So again, his name is Benjamin Watson. I don't know how many years ago this was, but he’s a tight end for the New England Patriots. So what happened was, he came out of nowhere. Well, there was an interception. So he came out of nowhere to stop a potential pick-six at the one-yard line.
EL: Oh, wow.
FN: I mean, it's the most beautiful thing! So yeah, so if you look at that clip, even the coach said it's something to remember—anybody who sees it will remember it for the rest of their life, just because he never gave up, obviously. But you know, the whole point is he ran the diagonal of the field, is what happened.
KK: Okay.
EL: Yeah, so you’ve got the hypotenuse.
FN: You’ve got the hypotenuse going. The shortest distance is still that straight line, and he never gave up. Oh, I mean, this guy ran the whole way, 90 yards, whatever he needed, from the one end to the other. No one saw Ben Watson coming, because, like we say, he was literally out of nowhere. Didn’t expect it. And the camera, what's cool is, you know, the camera is just watching the runner, right, just following the runner. And so the camera didn’t see it until later. Later when they did film, yeah, they zoomed out and said, Oh, my God, that's where he was coming from, the other hypotenuse, I mean, the other end of the hypotenuse. Yeah. But I pair everything, every mathematics activity I do, I try to pair it with a nice Cabernet. How's that?
EL: Okay.
KK: Not during school, I hope.
FN: Absolutely not.
EL: Don’t share it with your students.
FN: I’m a one-glass drinker anyway. I'm very, very much a lightweight. I talk about drinking, but I'm a wuss, Asian flush. Yeah.
EL: Yeah. Well, so, I’m not really a football person. But my husband is a Patriots fan. And I must admit, I'm a little disappointed that you picked an example with the Patriots because he already has a big enough head about how good the Patriots are, and I take a lot of joy in them not doing well, which unfortunately doesn’t happen very much these days.
KK: Never happens.
EL: There are certain recent Super Bowls that I am not allowed to talk about.
FN: Oh, okay.
KK: I can think of one in particular.
EL: There are few. But I'll say no more. And now I'm just going to say it on this podcast that will be publicly available, and I'll instruct him not to listen to this episode.
FN: Yeah, now my new favorite team actually, pro—well, college is the Ducks, of course, but pro would be the Dallas Cowboys. Just because that's my fiance's favorite team. So we actually, yeah, for Christmas, this past Christmas, I gave him that gift. We flew to Dallas to watch a Cowboys game.
EL: Oh, wow. We might have been in Dallas around the same time. So I grew up in Dallas.
FN: Oh!
EL: And so if I were a football fan, that would be my team because I definitely have a strong association with Dallas Cowboys and my dad being in a good mood.
FN: There we go.
EL: And I grew up in in the Troy Aikman era, so luckily the Cowboys did well a lot.
FN: Well, they’re doing well this year, too. So this Saturday, big game, right? Is it? Yeah.
KK: I feel old. So when I was growing up, I used to, I loved pro football growing up, and I've sort of lost interest now. But growing up in the ‘70s, it was either you're a Cowboys fan or a Steelers fan. That was the big rivalry.
FN: Yeah.
KK: I was not a Cowboys fan, I’m sorry to say.
FN: I never was either until recently.
KK: I was born in Wisconsin, and my mother grew up there, so I’m contractually obligated to be a Green Bay fan. I mean, I’m not allowed to do anything else.
EL: Well, it's very good hearted, big hearted of you, Fawn, to support your fiance's team. I admire that. I, unfortunately, I'm not that good a person.
FN: I definitely benefit because yeah, the stadium. What an experience at AT&T Stadium. Amazing.
EL: Yeah, it is quite something. We went to a game for my late grandfather's birthday a few years before he passed away. My cousins, my husband and I, my dad and uncle, a ton of people went to a game there. And that was our first game at that stadium. And yeah, that is quite an experience. I just, I don't even understand—like the screen that they've got so you can watch the game bigger than the game is like the biggest screen I've ever seen in my life. I don't even understand how it works.
FN: Same here, it’s huge. And yet somehow the camera, when you watch the game on television, that screen’s not there, and then you realize that it's really high up. Yeah.
KK: Cool. Well, we learned some stuff, right?
EL: Yeah.
KK: And this has been great fun.
EL: Yeah, we want to make sure to plug your stuff. So Fawn is active on Twitter. You can find her at—what is your handle?
FN: fawnpnguyen. So my middle initial, Fawn P Nguyen.
EL: And Nguyen is spelled N-G-U-Y-E-N?
FN: Very good. Yes.
EL: Okay. And you also have a blog? What's the title of that?
FN: fawnnguyen.com. It’s very original.
EL: But it's just lovely. Your writing on there is so lovely. And it, yeah, it's just such a human picture. Like when you read it, you really see the feeling you have for your students and everything, and it's really beautiful.
FN: Thank you. They are my love. And I just want to say, Evelyn, when you asked me to do this, I was freaking out, like oh my god, Evelyn the math queen. I mean, I was thinking, God, can you ask me to do something else, like washing the windows? Make you some pho?
KK: Wait, we could have had pho?
FN: We could have had pho. Because this was terrifying. But you know, it's a joy. Pythagorean theorem, I can take on this one. Because it's just so much fun. I mean, I've been in the classroom for a long time, but I don't see myself leaving it anytime soon because yeah, I don't know what else I would be doing because this is my love. My love is to be with the kids.
KK: Well, bless you. It's hard work. My sister-in-law teaches eighth grade math in suburban Atlanta, and I know how hard she works, too. It's really—
FN: We’re really saints, I mean—
KK: You are. It’s a real challenge. And middle school especially, because, you know, the material is difficult enough, and then you're dealing with all these raging hormones. And it's really, it's a challenge.
EL: Well, thanks so much for joining us. I really enjoyed it.
KK: Thanks, Fawn.
FN: Thank you so much for asking me. It was a pleasure. Thank you so much.
[outro]
Kevin Knudson: Welcome to My Favorite Theorem, a podcast that starts with math and goes all kinds of weird places. I'm Kevin Knudson, professor of mathematics at the University of Florida and here is your other host.
Evelyn Lamb: Hi. I'm Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City. Happy New Year, Kevin.
KK: And to you too, Evelyn. So this will happen, this will get out there in the public later. But today is January 1, 2019.
EL: Yes, it is.
KK: And Evelyn tells me that it's cold in Utah, and I have my air conditioning on.
EL: Yes.
KK: That seems about right.
EL: Yeah. Our high is supposed to be 20 today. And the low is 6 or 7. So we really, really don't have the air conditioner on.
KK: Yeah, it's going to be 82 here today in Gainesville. [For our listeners outside of the USA: temperatures are in Fahrenheit. Kevin does not live in a thermal vent.] I have flip flops on.
EL: Yeah. Yeah, I’m a little jealous.
KK: This is when it's not so bad to live in Florida, I’ve got to say.
EL: Yeah.
KK: Anyway. Well, today, we are pleased to welcome Robert Ghrist. Rob, you want to introduce yourself?
RG: Hello, this is Robert Ghrist.
KK: And say something about yourself a little, like who you are.
RG: Okay. All right, so I am a professor of mathematics and electrical and systems engineering at the University of Pennsylvania. This is in Philadelphia, Pennsylvania. I've been in this position at this wonderful school for a decade now. Previous to that I had tenured positions at the University of Illinois at Urbana-Champaign and the Georgia Institute of Technology.
KK: So you've been around.
RG: A little bit.
EL: Oh, and so I was just wondering, so it that a joint appointment between two different departments, or is it all in the math department?
RG: This is a split appointment, not only between two different departments, but between two different schools.
EL: Oh, wow.
RG: The math appointment is in the School of Arts and Sciences, and the engineering appointment is in the School of Engineering. This is kind of a tricky sort of position to work out. One of the things that I love about the University of Pennsylvania is that there are very low walls between the disciplines, and a sort of creative position like this is very workable. And I love that.
KK: Yeah, and your undergraduate degree was actually in engineering, right?
RG: That’s correct. I got turned on to math by my calculus professor, a swell guy by the name of Henry Wente, a geometer.
KK: Excellent. Well, I see you’re continuing the tradition. Well, we’ll talk about that later. And you actually have an endowed chair named for someone rather famous, right?
RG: That’s true. The full title is the Andrea Mitchell PIK Professor of Mathematics, Electrical and Systems Engineering. This is Andrea Mitchell from NBC News. She and her husband Alan Greenspan funded this position. She did not intend to hire me specifically, or a mathematician. I think she was rather surprised when the chair that she endowed wound up going to a mathematician, but there it is, we get along swell. She's great.
KK: Nice.
EL: Nice. That's really interesting.
KK: Yup. So, Rob, what is your favorite theorem?
RG: My favorite theorem is, I don't know. I don't know the name. I don't know that this theorem has a name, but I love this theorem. I use this theorem all the time in all the classes I teach, it seems. It's a, it's a funny thing about basically Taylor expansion, or Taylor series, but in an operator theoretic language. And the theorem, roughly speaking, goes like this: if you take the differentiation operator on functions, let's say just single input, single output functions, the kind of things you do in basic calculus class. Call the differentiation operator D. Consider the relationship between that operator and the shift operator. I’m going to call the shift operator E. This is the operator that takes a function f, and then just shifts the input by 1. So E of f, evaluated at x, is really f evaluated at x+1. We use shift—
EL: I need a pencil and paper.
RG: Yeah, I know, right? We use the shift operator all the time in signal processing, in all kinds of things in both mathematics and engineering. And here's the theorem, here's the theorem. There's this wonderful relationship between these two operators. And it's the following. If you exponentiate the differentiation operator, if you take e to the D, you get the shift operator.
KK: This is remarkable.
RG: What does this mean? What does this mean?
KK: Yeah, what does it mean? I actually did work this out once. So what our listeners don't know is that you and I actually had this conversation once in a bar in Philadelphia, and the audio quality was so bad, we're having to redo this.
EL: Yeah.
KK: So I went home, and I worked this out. And it's true, it does work out. But what does this mean, Rob? Sort of, you know, in a manifestation physically?
RG: Yeah, so let me back up. The first question that I ask students, when they show up in calculus class at my university, is, what is e to the x? What does that even mean? What does that mean when x is an irrational number, or an imaginary number, or something like a square matrix, or an operator? And, of course, that takes us back to the interpretation of exponentiation in terms of the Taylor series at zero, that I take that infinite series, and I use that to define what exponentiation means. And because things like operators, things like differentiations, or shifts, you can take powers of those by composition, by iteration, and you can rescale them. Then you can exponentiate them.
So I can talk about what it means to exponentiate the differentiation operator by taking first D to the 0, which of course, is the identity, the do-nothing operator, and then adding to it D, and then adding to that D squared divided by 2 factorial [n factorial is the product of the integers 1 × 2 × … × n], that's the second derivative, then D cubed divided by 3 factorial, that's the third derivative. If I can keep going, I’ve exponentiated the differentiation operator. And the theorem is that this is the shift operator in disguise. And the proof is one line. It's Taylor expansion. And there you go.
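[Editor's note: a minimal sketch of this identity in Python with SymPy, not from the episode: for a polynomial the Taylor series terminates, so exponentiating D and applying the shift give exactly the same answer. The polynomial chosen here is arbitrary.]

# Check that exponentiating the differentiation operator D gives the shift operator E
# on a polynomial, where the series sum_k D^k f / k! has only finitely many nonzero terms.
from sympy import symbols, diff, factorial, expand

x = symbols('x')
f = x**3 - 2*x + 5  # any polynomial works; the series below terminates

# exp(D) applied to f: the sum of the k-th derivatives divided by k!
exp_D_f = sum(diff(f, x, k) / factorial(k) for k in range(10))

shifted = f.subs(x, x + 1)  # the shift operator E applied to f

print(expand(exp_D_f))             # x**3 + 3*x**2 + x + 4
print(expand(shifted))             # x**3 + 3*x**2 + x + 4
print(expand(exp_D_f - shifted))   # 0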
Now, this isn't your typical sort of my favorite theorem, in that I haven't listed all the hypotheses. I haven't been careful with anything at all. But one of the reasons that this is my favorite theorem is because it's so useful when I'm teaching calculus to students, when I'm teaching basic dynamical systems to students where, you know, in a more advanced class, yeah, we'd have a lot of hypotheses. And, oh, let's be careful. But when you're first starting out, first trying to figure out what is differentiation, what is exponentiation, this is a great little theorem.
EL: Yeah, this conceptual trip going between the Taylor series, or going between the idea of e to the x, or 2 to the x or something where we really have a, you know, a fairly good grasp of what exponentiation means in that case, if we, if we're talking about squares or something like that, and going then to the Taylor series, this very formal thing, I think that's a really hard conceptual shift. I know that was really hard for me.
RG: Agreed.
KK: Yeah. So I, I wonder, though, I mean, so what's a good application of this theorem, like, in a dynamics class, for example? Where does this pop up sort of naturally? And I can see that it works. And I also agree that this idea of—I start calculus there, too, by the way, when I say, you know, what does e to the .1 mean, what does that even mean?
RG: What does that even mean?
KK: Yeah, and that’s a good question that students have never really thought about. They’re just used to punching .1 into a calculator and hitting the e to the x key and calling it a day. So, but where would this actually show up in practice? Do you have a good example?
RG: Right. So when I teach dynamical systems, it's almost exclusively to engineering students. And they're really interested in getting to the practical applications, which is a great way to sneak in a bunch of interesting mathematics and really give them some good mathematics education. When doing dynamical systems from an applied point of view, stability is one of the most important things that you care about. And one of the big ideas that one has to ingest is that of stability criteria for, let's say, equilibria in a dynamical system. Now, there are two types of dynamical systems that people care about, depending on what notion of time you're using—continuous time or discrete time. Most books on the subject are written for one or the other type of system. I like to teach them both at once, but one of the challenges of doing that is that the stability criteria are different, very different-looking. In continuous time, what characterizes a stable equilibrium is when you look at all of the eigenvalues of the linearization, the real parts are less than zero. When you move to a discrete time dynamical system, that is, a mapping, then again, you're looking at eigenvalues of the linearization, but now you want the modulus to be less than 1. And I find that students always struggle with “Why is it different?” “Why is it this way here, and that way there?” And of course, of course, the reason is my favorite little theorem, because if I look at the evolution operator in continuous-time dynamics—that’s the derivative—versus the evolution operator in discrete-time dynamics—that is the shift, move forward one step in time—then if I want to know the relationship between the stability and, pardon me, the stable and unstable regions, it is exponentiation. If I exponentiate the left half of the complex plane, what do I get? I get the region in the plane with modulus less than 1.
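[Editor's note: the connection in symbols, as a sketch: for a linear system \(\dot{x} = \lambda x\), evolving for one unit of time multiplies the state by \(e^{\lambda}\), and
\[ |e^{\lambda}| = e^{\operatorname{Re}\lambda} < 1 \iff \operatorname{Re}\lambda < 0, \]
so exponentiation carries the continuous-time stability region (the open left half-plane) into the discrete-time one (the open unit disk).]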
KK: Right.
RG: I find that students have a real “aha” moment when they see that relationship, and when they can connect it to the relationship between the evolution operators.
EL: I’m having an “aha” moment about this right now, too. This isn't something I had really thought about before. So yeah, this is a really neat observation or theorem.
RG: Yeah, I never really see this written down in books.
KK: That’s—clearly now you should write a book.
RG: Another one?
KK: Well, we'll talk about how you spend your time in a little while here. But, no, Rob, I mean, so Rob has this—I don't know if it's famous, but well known—massive open online course through Coursera where he does calculus, and it's spectacular. If our listeners haven't seen it, is it on YouTube, Rob? Can you actually get it at YouTube now?
RG: Yes, yes. The University of Pennsylvania has all the lectures posted on a YouTube channel.
KK: Well, I actually downloaded it to my machine. I took the MOOC a few years ago, just for fun. And I passed! Remarkably.
RG: With flying colors, with flying colors.
KK: Yeah, I'm sure you graded my exam personally, Rob.
RG: Personally.
KK: And anyway, this is evidence for how lucky our students are, I think. Because, you know, you put so much time into this, and these little “aha” moments. And the MOOC is full of these things. Just really remarkable stuff, especially that last chapter, which is so next level, the digital calculus stuff, which sort of reminds me of what we're talking about. Is there some connection there?
RG: Oh yes, it was creating that portion of the MOOC that really, really got me to do a deep dive into discrete analogs of continuous calculus, looking at the falling powers notation that is popular in computer science in Knuth’s work and others, thinking in terms of operators. Yeah, that portion of the MOOC really got me thinking a lot about these sorts of things.
KK: Yeah, I really can't recommend this highly enough. It’s really great.
EL: Yeah, so I have not had the benefit of this MOOC yet. So digital calculus, is that meaning, like, calculus for computers? Or what exactly is that? What does that mean?
RG: One of the things that I found students really got confused about in a basic single variable calculus class is, as soon as you hit sequences and series, their heads just explode because they get sequences and series confused with one another, and it all seems unmotivated. And why are we bothering with all these convergence tests? And where’d they come from? All this sort of thing.
EL: And why is it in a calculus class?
RG: Why is it even in a calculus class after all these derivatives and integrals? So the way that I teach it is when we get to sequences and series, you know, in the last quarter of the semester, I say, Okay, we've done calculus for functions with an analog input and an analog output. Now, we want to redo calculus for functions with a digital input and an analog output. And such functions we're going to call sequences. But I'm really just going to think of it as a function. How would you differentiate such a thing? How would you integrate such a thing? That leads one to think about finite differences, which leads to some nice approaches to numerical methods. That leads one to looking at sums and numerical integration. And when you get to improper integrals over an unbounded domain? Well, that's series, and convergence tests matter.
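[Editor's note: a small illustration of the "digital calculus" idea in Python, not from the MOOC itself; the function names are just for this sketch. The forward difference plays the role of the derivative, and summing it telescopes back to the original function, a discrete analog of the fundamental theorem of calculus.]

# Digital calculus: the forward difference is the "derivative" of a function with
# digital input, and the sum of differences telescopes to f(b) - f(a).
def forward_difference(f):
    return lambda n: f(n + 1) - f(n)

def f(n):
    return n**3  # any sequence, viewed as a function of a digital input

Df = forward_difference(f)

a, b = 2, 10
print(sum(Df(k) for k in range(a, b)))  # 992
print(f(b) - f(a))                      # 992, the same by telescoping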
KK: Yeah, it's super interesting. We will provide links to this. We’ll find the YouTube links and provide them.
EL: Yeah.
KK: So another fun part of this podcast, Rob, is that we ask our guests to pair their theorem with something, and I assume you're going to go with the same pairing from our conversation back in Philadelphia.
RG: Oh yes, that’s right.
KK: What is it?
RG: My work is fueled by a certain liquid beverage.
KK: Yeah.
RG: It’s not wine. It's not beer. It's not whiskey. It's not even coffee, although I drink a whole lot of coffee. What really gets me through to that next-level math is Monster. That's right. Monster Energy Drink, low carb if you please, because sugar is not so good for you. Monster, on the other hand, is pretty great for me, at any rate. I do not recommend it for people who are pregnant or have health problems, problems with hearts, anything like this, people under the age of 18, etc, etc. But for me, yeah, Monster.
KK: Yeah. There's lots of empties in your office, too, like, up on the shelf there, which I'm sure have some significance.
RG: The wall of shame, that’s right. All those empty monster cans.
KK: See, I can't get into the energy drinks. I don't know. I mean, I know you're also fond of scotch. But does that help bring you down from the Monster, or is it…
RG: That’s a rare treat. That's a rare treat.
KK: Yeah, it should be. So when did your obsession with Monster start? Does this go back to grad school, or did it even exist when we were in grad school? Rob and I are roughly the same age. Were energy drinks a thing when we were in grad school? I don't remember.
RG: No, no. I didn't have them until, gosh, what is it, sometime within the past decade? I think it was when I was first working on that old calculus MOOC, like, what was that, six years ago? Six, seven years ago, is when I was doing that.
KK: Yup.
RG: That was difficult. That was difficult work. I had to make a lot of videos in a short amount of time. And, yup, the Monster was great. I would love to get some corporate sponsorship from them. You know, maybe, maybe try to pitch extreme math? I don't know. I don't think that's going to work.
KK: I don't know. I think it's a good angle, right? I mean, you know, they have this monster truck business, right? So there is this sort of whole extreme sports kind of thing. So why not? You know?
EL: Yeah, I'm sure they're just looking for a math podcast to sponsor. That's definitely next on their branding strategy.
KK: That’s right. Yeah. But not us. They should sponsor you, Rob. Because you're the true consumer.
RG: You know, fortune favors the bold. I'd be willing to hold up a can and say, “If you're not drinking Monster, you're only proving lemmas,” or something like that.
EL: You’ve thought this through. You've got their pitch already, or their slogan already made.
RG: That’s right. Yup.
KK: All right. Excellent. So we always like to give our guests a chance to to pitch their projects. Would you like to tell us about Calculus Blue?
RG: Oh, absolutely! This is—the thing that I am currently working on is a set of videos for multivariable calculus. I'm viewing this as something like a video text, a v-text instead of an e-text, where I have a bunch of videos explaining topics in multivariable calculus that are arranged in chapters. They’re broken up into small chunks, you know, roughly five minutes per video. These are up on my YouTube channel. There's another, I don't know, five or six hours' worth of videos that are going to drop some time in the next week covering multivariate integration. This is a lot of fun. I'm having a ton of fun doing some 3D drawing, 3D animation. Multivariable calculus is just great for that kind of visualization. This semester, I'm going to use the videos to teach multivariable calculus at Penn in a flipped manner and experiment with how well that works. And then it'll be available for anyone to use.
KK: Yeah, I'm looking forward to these. I see the previews on Twitter, and they really are spectacular. How long does any one of those videos take you? It seems like, I mean, I know you've gotten really good at the graphics packages that you need to create those things. But, you know, like a 10 minute video. How long does one of those things take to produce?
RG: I don't even want to say.
KK: Okay.
RG: I do not even want to say, no. I've been up since four o'clock this morning rendering video and compositing. Yeah, this is my day, pretty much. It's not easy. But it is worthwhile. Yeah.
KK: Well, I agree. I mean, I think, you know, so many of our colleagues, I think, kind of view calculus as this drudgery. But I still love teaching it. And I know you do, too.
RG: Absolutely.
KK: And I think it's important, because this is really a lot of what our job is, as academics, as professional mathematicians. Yes, we're proving theorems, all that stuff, that's great. But, you know, day in, day out, we're teaching undergraduates introductory mathematics. That's a lot of what we do. And I think it's really important to do it well.
EL: Well, and it can help, you know, bring people into math like it did for Rob.
KK: That’s right.
RG: Exactly. That's exactly right. Controversial opinion, but, you know, you get these people out there who say, oh, calculus, this is outdated, we don't need that anymore, just teach people data analysis or statistics. I think that's a colossal error. And that it's possible to take all of these classical ideas in calculus and just make them current, make them relevant, connect them to modern applications, and really reinvigorate the subject that you need to have a strong foundation in in order to proceed.
KK: Absolutely. And I, you know, I try to mix the two, I try to bring data into calculus and say, you know, look, engineering students, you’re mostly going to have data, but this stuff still applies. You know, calculus for me is a lot about approximation, right? That's what the whole Taylor Series business, that's what it's for.
RG: Definitely.
KK: And really trying to get students to understand that is one of my main goals. Well, this has been great fun. Thanks for taking time out from rendering video.
EL: Yeah, video rendering.
RG: Yes. I'm going to turn around and go right back to rendering as soon as we're done.
KK: That’s right, you basically have a professional quality studio in your basement, right? Is this how this works?
RG: This is how it works. Been renovating, oh, I don't know. It's about a year ago I started renovations and got a nice little studio up and running.
KK: Excellent. Do you have, like, foam on the walls and stuff like that?
RG: Yes, I'm touching the foam right now.
KK: All right. Yeah. So Evelyn and I aren't that high-tech. We've just now gotten to the sort of, like, multi-channel recording kind of thing.
RG: Ooh.
KK: Well, yeah, well, we're doing this now, right, where we’re each recording our own audio. I'm pleased with the results so far. Well, Rob, thanks again, and we appreciate your joining.
EL: Thanks for joining us.
RG: Thank you. It's been a pleasure chatting.
Evelyn Lamb: Hello and welcome to My Favorite Theorem, a podcast where we ask mathematicians to tell us about their favorite theorems. I'm Evelyn Lamb. I'm one of your hosts. I am a freelance math and science writer in Salt Lake City, Utah. Here's your other host.
Kevin Knudson: Hi, I'm Kevin Knudson, professor of mathematics at the University of Florida. How's it going, Evelyn?
EL: All right. I am making some bread right now, and it smells really great in my house. So slightly torturous because I won't be able to eat it for a while.
KK: Sure. So I make my own pizza dough. But I always stop at bread. I never take that extra step. I don't, I don't know why. Are you making baguettes? Are you doing the whole...
EL: No. I do it in the bread machine.
KK: Oh.
EL: Because I'm not going to make, yeah, I'm not going to knead and shape a loaf. So that's the compromise.
KK: Oh. That's the fun part. So I've had a sourdough starter, the same one, for at least three or four years now. I've kept this thing going. And I make my own pizza crust, but I'm just lazy with bread. I don't eat a lot of bread.
EL: So yeah. And joining us to talk about--on BreadCast today--I'm very delighted to welcome Cynthia Flores. So hi. Tell us a little bit about yourself.
Cynthia Flores: Oh, thanks for having me on the show. I'm so grateful to join you today, Kevin and Evelyn. Well, I'm an assistant professor of mathematics and applied physics at California State University Channel Islands in the Department of Math and Physics. The main campus is not located on the Channel Islands.
EL: Oh, that's a bummer.
CF: It's actually located in Camarillo, California. It's one hour south of Santa Barbara, one hour north of downtown Los Angeles, roughly.
But the math department does get to have an annual research retreat at the research station located on Santa Rosa Island. So that's kind of neat.
KK: Oh, how terrible for you.
EL: Yeah. That must be so beautiful.
KK: I was in Laguna Beach about a week and a half ago, which is, of course, further south from there, but still just spectacularly beautiful. Really nice.
CF: Yeah, I feel really fortunate to have the opportunity to stay in the Southern California area. I did my PhD at UC Santa Barbara, where I studied the intersections of mathematical physics, partial differential equations, and harmonic analysis, and that has motivated what I'm going to talk about today.
KK: Good, good, good.
EL: Yeah. Well, and Cynthia was on another podcast I host, the Lathisms podcast. And I really enjoyed talking with her then about some of the research that she does. And she had some fun stories. So yeah, what is your favorite theorem? What do you want to share with us today?
CF: I'm glad you asked. I have several favorite theorems, and it was really hard to pick, and my students have heard me say repeatedly that my favorite theorem is the fundamental theorem of calculus.
EL: Great theorem.
KK: Sure.
CF: It's also a very, I find, intimidating theorem to talk about on this series, especially with so many creative individuals pairing their favorite theorems with awesome foods and activities. And so I just thought that one was maybe a lot to live up to. And I wanted to start with something that's a little closer to my research area. So I found myself thinking of other favorites, and there was one in particular that does happen to lie at the intersection of my research area, which is mathematical physics, PDEs and harmonic analysis. And it's known as Heisenberg's Uncertainty Principle. That's how it's really known by the physics community. And in mathematics, it's most often referred to as Heisenberg's Uncertainty Inequality.
EL: Okay.
CF: So, is it familiar? I don't know.
EL: I feel like I've heard of it, but I don't--I feel like I've only heard of it from kind of the pop science point of view, not from the more technical point of view. So I'm very excited to learn more about it.
KK: So I actually have a story here. I taught a course in mathematics and literature a couple years back with a friend of mine in the foreign languages department. And we watched A Serious Man, this Coen Brothers movie, which, if you haven't seen it, is really interesting. But anyway, one of the things I made sure to talk about was Heisenberg's Uncertainty Principle, because that's sort of one of the themes, and of course now I've forgotten what the inequality is. But I mean, I remember it involves Planck's constant, and there's some probability distribution, so let's hear it.
CF: Mm hmm. Yeah, yeah. So I was, I was like, this is what I'm going to pair it with. Like, I'm going to pair the conversation, like the mathematics description, physical description, with, basically I was thinking of pairing it with something Netflix and chill-like. I'm really glad that you brought that up, and I'll tell you more in a little bit about what I'm pairing it with. But first, I'll start mathematically. Mathematically, the theorem could be stated as follows. Given a function with sufficient regularity and decay assumptions, the L2 norm of the function is less than or equal to 2 over the dimension the function's defined on, multiplied by the product of the L2 norm of its first moment and the L2 norm of its gradient. And so mathematically, that's the inequality.
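[Editor's note: written out, a standard textbook form of the inequality for a suitably smooth and decaying function f on \(\mathbb{R}^n\) is
\[ \|f\|_{L^2(\mathbb{R}^n)}^2 \;\le\; \frac{2}{n}\, \big\|\, |x|\, f \,\big\|_{L^2(\mathbb{R}^n)} \, \big\| \nabla f \big\|_{L^2(\mathbb{R}^n)}, \]
with the square of the L² norm on the left; it follows from integration by parts and the Cauchy-Schwarz inequality.]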
This wasn't stated this way by Heisenberg in the 1920s, which I believe he was recognized with a Nobel Prize for later on. Physically, Heisenberg described this in different ways it could be understood. Uncertainty might be understood as the lack of knowledge of a quantity taken by an observer, for example, or to the uncertainty due to experimental inaccuracy, or ambiguity in some definition, or statistical spread, as, as Kevin mentioned.
And actually, I'm going to recommend to the listeners to go to YouTube. There's a YouTuber named Veritasium, I'm not sure if I'm pronouncing that correctly.
EL: Oh, yeah, yeah.
CF: Yeah, he has a four minute demonstration of the original thought by Heisenberg and an experiment having to do with lasers that basically tells us it's impossible to simultaneously measure the position and momentum of a particle with infinite precision. The infinite precision part would be referring to something that we might call certainty. So in the experiment that the YouTuber is recreating, a laser is shone through two plates that form a slit, and the slit is becoming narrower and narrower. The laser is shone through the slit and then projected onto a screen. And as the slit is made narrower, the spot on the screen, as expected, is also becoming narrower and narrower. And at some point--you know, Veritasium does a really good job of creating this sort of like little "what's going to happen" excitement--just when the slit seems to completely disappear and become infinitesimally small, the expectation might be that the laser projecting onto the screen would disappear too, but actually at a certain point, when the slit is so narrow, it's about to close, the spot on the screen becomes wider. We see spread. And this is because the photons of light have become so localized at the slit that their horizontal momentum has to become less well defined. And this is a demonstration of Heisenberg's Uncertainty Principle. And so according to Heisenberg--and this is from one of his manuscripts, and I wish I would have written down which one, I'm just going to read it--"at the instant of time when the position is determined, that is, at the instant when the photon is scattered by the electron, the electron undergoes a discontinuous change in momentum, and this change is the greater the smaller the wavelength of the light employed, in other words, the more exact determination of the position. At the instant at which the position of the electron is known, its momentum, therefore, can be known only up to magnitudes which correspond to that discontinuous change; thus, the more precisely the position is determined, the less precisely the momentum is known, and conversely." And there's also, so momentum and position were sort of the original context in which Heisenberg's Uncertainty Principle was stated, mainly for quantum mechanics. The inequality theorem that I presented was really from an introductory book to nonlinear PDEs, which is really what I study, nonlinear dispersive PDEs specifically. So you could use a lot of Fourier transforms and stuff like that.
But it has multiple variations, one of which is Heisenberg's Uncertainty Principle of energy and time, which more or less is going to tell you the same thing, which is going to bring me to my pairing. Can I can I share my pairing?
KK: Sure.
CF: My pairing--I want to pair this with a Netflix and chill evening with friends who enjoy the Adult Swim animated show Rick and Morty.
KK: Okay.
CF: And an evening where you also have the opportunity to discuss sort of deep philosophical questions about uncertainty and chaos. And so really the show's on Hulu, so you could watch the show. I don't know if either one of you are familiar with the show.
KK: Oh, yeah. Yeah. I have a 19 year old son, how could I not be?
CF: And actually my students were the ones that brought this show and this specific episode to my attention, and I watched it, and so I'll say a little bit about the show. It's a bit, it's somewhat inspired by the Back to the Future movies. It's a comedy about totally reckless and epic space adventures with lots of dark humor and a lot of almost real science. I mean, Rick is a mad scientist and Morty is in many ways opposite to Rick. Morty is Rick's grandson and sidekick. Rick uses a portal gun that he created. It allows him and Morty to travel to different realities where they go on some fun adventures. And there are several references to formulas and theorems describing our everyday life. And so particularly in the season premiere of season two, Rick and Morty pays homage to Heisenberg's Uncertainty Principle as well as Schrodinger's cat paradox and includes a mathematical proof of Rick's impression of his grandkids.
I won't spoil what he proves about them. I'll let the reader, the listeners, go check that out. But basically season one ended with Rick freezing time for everyone except for him and his two grandkids Morty and Summer. In this premiere, Rick unfreezes time and causes a disturbance in their reference timeline, and any uncertainty introduced by the three individuals gets them removed entirely from time and causes a split in reality into multiple simultaneous realities. The entire episode is following Heisenberg's Uncertainty Principle for energy and time and alluding to the concept from the quantum world that chaos is found in the distribution of energy levels of certain atomic systems. So I'm going to back it up a little bit. We talked about Heisenberg's Uncertainty Principle in terms of momentum and position. Heisenberg's Uncertainty Principle for energy and time is for simultaneous measurements of energy and time, and specifically that the distribution of energy levels and uncertainty and its measurement is a metaphor to chaos within the system. So within a time interval, it's not possible to measure chaos precisely. There has to be uncertainty in the measure, so that the product of the uncertainty and energy and the uncertainty and the time remains larger than h over 4π, the Planck's constant. In other words, you cannot have both, you cannot simultaneously have both small uncertainty in both measurements. In other words, lots of certainty, right? You have small uncertainty, you have lots of certainty.
So you can't have that both happening. So less chaos leads to more uncertainty and vice versa. Less than certainty (or more certainty) leads to more chaos. And so this, you know, this episode, if you watch it, seems to present the common misconception that more uncertainty leads to more chaos. And this is where I've thought about this really hard and even tried to find someone who put it nicely, maybe on a video, I couldn't. But I think--this is just my opinion--I think the writers really got it right on this episode, because the moment that the timeline merges in the episode is the moment when the main character Rick has given up on his chances of fixing a broken tool he was counting on for fixing the timeline. So in fact, in this episode, he's shown doing something which is unlike him. He's shown praying and asking God, or his maker, for forgiveness, you know, as the timeline is, as all of these realities are collapsing. And in my opinion of Rick, this is the largest amount of uncertainty he's ever displayed throughout this series. And this happens right at the moment that he restores the timeline and therefore reduces the chaos. So I really think the writers got it right on that.
EL: That's really neat.
CF: Yeah, yeah, I loved it. And so for me, I also find this as a perfect time to, you know, hang out with friends, Netflix and chill it up, and then afterwards talk, you know. I would really like to challenge the listeners to observe this phenomenon in their real life. And for some people, it might be a stretch. But to some extent, I think we observe Heisenberg's Uncertainty Principle in our daily lives, like in the sense that the more sure that we are about something, or the more plans we've made for something, the more likely we are to observe chaos, right? The more things we've gotten planned out, the more things that are actually likely to go wrong. I get to see this at the university, right, with so many young minds planning out their future. And I really see that the more certain a student feels about their plan, the more likely they're going to feel chaos in their life, if things don't go according to plan. So I really enjoy Heisenberg's Uncertainty Principle on so many levels, mathematically, physically, maybe even philosophically, and observing it in our real lives.
EL: Yeah, I really like the metaphorical aspect you brought here. And if I can reveal how naive I am about Heisenberg's Uncertainty Principle, I didn't know that it was applicable in these different things other than just position and momentum. Maybe I'm the only one. But that's really interesting. So are there a lot of other places where this is also the case?
CF: Well, mathematically, it's just a function defined, for example, on R^n that has regularity and decay properties, and so that function's L2 norm--so the statement of the theorem is under those decay and regularity assumptions, that function's L2 norm has to remain less than or equal to 2 over n multiplied by the product of the L2 norm of the first moment of the function times the L2 norm of the gradient of the function. And so in some sense, we can view that as talking about momentum and position, and so that has applications to various physical systems, both momentum and position. But in some sense, whenever you have a gradient of a function, it can also relate to some system's energy. And so mathematically, I think we have the position to view this in a more abstract way, whereas physically, you tend to only read about the momentum and position version and less about the energy and time version. So that's why it took me a long time to think about did the writers of Rick and Morty get it right, because they're basically relying on, it seems they're giving the impression that uncertainty is leading to chaos. Because every time someone feels uncertainty, the timeline gets split and multiple, simultaneous versions of reality are going on at the same time, introducing more chaos into the system. And I kept thinking about it: "But mathematically, that's not what I learned, like what's happening?" And so I really think it's at the end where Rick merges all the timelines together and basically reduces the chaos in the system. I really think that's the moment where we're seeing Heisenberg's Uncertainty Principle at play. We're seeing that the moment where Rick was the least certain about himself and his ability to fix this is the moment where the timeline was fixed. I really think someone had to know about the energy and time version of Heisenberg's Uncertainty Principle.
KK: I need to go back and watch this. I've seen all of the first two seasons, but I don't remember this one in particular. My son should be here. He could tell you all about it. It's a bit raw of a show, though, so listener warning, if you don't like obscenities and--
EL: Delicate ears beware.
KK: Really not politically correct humor very often. It's, you know, it's
CF: I agree.
KK: it's a little raw.
CF: Yeah.
KK: It's entertaining. But it's
CF: Yeah, it's definitely dark humor and lots of references to sci-fi horror. And, you know, some references are done well, some are just a little, I don't know.
KK: Yeah.
CF: But I definitely learned about this show from undergraduate students during a conference where we were, you know, stuck in a long commute. And students found themselves talking to me about all sorts of things. And they mentioned this episode where something was proven mathematically. I'm a huge fan of Back to the Future, but I hadn't watched--I only recently, you know, watched the show, even though it's been around for some time, apparently. And they're telling me that there's mathematical proofs. And so of course I'm like, "Well, I'm gonna have to check out the mathematical proofs." Any mathematician that watches the show could see that the mathematical proof--well, I'm not sure that it's much of a mathematical proof.
So it got it got me to watch the episode. And once I was watching the episode, what really drew my attention was that I realized they're talking about chaos and uncertainty.
EL: So going back to the theorem itself, where did you encounter that the first time?
CF: First year grad school at UC Santa Barbara. And actually--I never told him this--the professor who was teaching the course turned out to eventually become my PhD advisor. And that first year that I was a graduate student at UC Santa Barbara, I was much more interested in differential geometry and topology than I was in analysis. And this was in one of our homework assignments, sort of buried in there. And I don't remember exactly who it was, if it was the professor himself, or maybe one of his current graduate students, or a TA for the course, that explained that inequality and its physical relationship to chaos and uncertainty. And I'm pretty sure that the conversation with whoever it was, it was about chaos and uncertainty. And it wasn't about momentum and position, which I think would have turned me off at the time. But we were talking about this, relating it to uncertainty and measurements and chaos present in the system. And for me, since that moment, I think I've lived by this sort of mantra that the more planning I do, the more things are going to go wrong. But, you know, I kind of have to keep in mind that I can only plan so much without introducing some chaos into the system. And so it made a huge impression on me. And I asked this professor who assigned this homework assignment, I'm sure it was the first semester, I mean, the first quarter, of graduate real analysis, if he had more reading for me to do. And he became my advisor, and I went into this area, mathematical physics, PDEs, and harmonic analysis. So it had a huge influence on me. And that's why I wanted to include it as my favorite theorem.
EL: Yeah, that's such a great story. It's like your superhero origin story, is this theorem.
KK: Yeah. So surely this L2 norm business, though, came after the fact. Like Heisenberg just sort of figured it out in the physics sense, and then some mathematician must have come up with the L2 norm business.
CF: Right. I actually think Heisenberg came up with it in the physical sense. There was someone who wrote down something mathematically, and I actually haven't gone--I should--I haven't gone through the literature to find out which mathematician wrote down the L2 norm statement of the inequality.
But in the book Introduction to Nonlinear Dispersive Equations by Felipe Linares and Gustavo Ponce--Gustavo's my advisor--on page 60, it's Exercise 3.14. It has you prove Heisenberg's inequality, stated the way I've stated it here in this podcast, and it's a really neat analysis exercise. You know, you have to use the density of the Schwartz class functions, you use integration by parts. It's a really neat exercise and really helps you use those tools that PDE people use. And yeah, my advisor doesn't know how much that exercise influenced my decision to study mathematical physics, PDEs, and harmonic analysis.
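For the curious, here is a rough sketch of how an argument along those lines typically goes, assuming f is a Schwartz function on R^n (density then extends the inequality); this is one standard route, and the book's exercise may package it differently. Since the divergence of the vector field x is n, integration by parts followed by the Cauchy-Schwarz inequality gives

\[ n\|f\|_{L^2}^2 = \int_{\mathbb{R}^n} (\nabla\cdot x)\,|f|^2\,dx = -\int_{\mathbb{R}^n} x\cdot\nabla\big(|f|^2\big)\,dx = -2\,\mathrm{Re}\int_{\mathbb{R}^n} \bar f \,\big(x\cdot\nabla f\big)\,dx \le 2\,\big\|\,|x|\, f\,\big\|_{L^2}\,\big\|\nabla f\big\|_{L^2}, \]

and dividing by n yields the inequality stated above.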
KK: Good, then now our listeners have an exercise, too. So,
EL: Yeah.
CF: That's right. Yeah. So my recommendations are watch Rick and Morty, try exercise 3.14 from Introduction to Nonlinear Dispersive Equations, and have a deep philosophical conversation about uncertainty and chaos with your good friends as you Netflix and chill it out.
EL: Nice. Yeah, wise words, definitely. Thanks a lot for joining us. I really need to brush up on some of my physics, I think, and think about this stuff.
CF: I'm happy to talk about it anytime you like. Thank you so much for the invitation. I've really enjoyed talking to you all.
KK: Thanks, Cynthia.
Kevin Knudson: Welcome to My Favorite Theorem, a special Valentine’s Day edition this year.
Evelyn Lamb: Yes.
KK: I’m one of your hosts, Kevin Knudson, professor of mathematics at the University of Florida. This is your other host.
EL: Hi. I’m Evelyn Lamb. I’m a math and science writer in Salt Lake City, Utah.
KK: How’s it going, Evelyn.
EL: It’s going okay. We got, we probably had 15 inches of snow in the past day and a half or so.
KK: Oh. It’s sunny and 80 in Florida, so I’m not going to rub it in. This is a Valentine’s Day edition. Are you and your spouse doing anything special?
EL: We’re not big Valentine’s Day people.
KK: So here’s the nerdy thing I did. So Ellen, my wife, is an artist, and she loves pens and pencils. There’s this great website called CW Pencil Enterprise, and they have a little kit where you can make a bouquet of pencils for your significant other, so this is what I did. Because we’ve been married for almost 27 years. I mean, we don’t have to have the big show anymore. So that’s our Valentine’s Day.
EL: Yeah, we’re not big Valentine’s Day people, but I got very excited about doing a Valentine’s Day episode of My Favorite Theorem because of the guests that we have. Will you introduce them?
KK: I know! We’re pleased to have Nikita Nikolaev and Beatriz Navarro, and they had some popular press. So why don’t you guys introduce yourselves and tell everybody about you?
Nikita Nikolaev: Hi. My name is Nikita. I’m a postdoctoral fellow at the University of Geneva in Switzerland. I study algebraic geometry of singular differential equations.
Beatriz Navarro Lameda: And I’m Beatriz. I’m currently a Ph.D. student at the University of Toronto, but I’m doing an exchange program at the University of Geneva, so that’s why we’re here together. And I’m studying probability, in particular directed polymers in random environments.
EL: Okay, cool! So that is actually applicable in some way.
BNL: Yes, it is.
EL: Oh, great!
KK: So why don’t we talk about this whole thing with the wedding? So we had this conversation before we started recording, but I’m sure our listeners would love to hear this. So what exactly happened at your wedding?
NN: Both of us being mathematicians, of course, almost everybody was either a mathematician or somehow mathematically connected, most of our guests. So we decided to have a little bit of fun at the wedding, sprinkle a little bit of maths here and there. And one of the ideas was that when the guests arrived at the dinner, in order for them to find which table they were sitting at, they would have to solve a small mathematical problem. They would arrive at the venue there, and they would open their name card, and the name card would contain a first coordinate.
BNL: And a question.
NN: And a question. And the questions were very bespoke. It really depended on what we know their mathematical background to be. We had many people in my former research group, so I pulled questions from their papers or some of the talks they’ve given.
EL: This is so great! Yeah.
NN: And there were some people who are, maybe they’re not mathematicians, they’re engineers or chemists or something, and we would have questions which are more mathematically flavored rather than actual mathematical questions just to make everyone feel like they’re at a wedding of mathematicians.
EL: Right.
NN: So right. They had to find out two coordinates. All the tables were named after regular polyhedra, and they had to find out what their polyhedron of the night was.
EL: Okay.
NN: In order to do that, there was a matrix of polyhedra. Each one had two coordinates, and once you find out what the two coordinates are, you look at that matrix, and it gives you what polyhedron you’ve got. So as a guest, you would open the name card, and it would contain your first coordinate and a question and a multiple choice answer. And the answers were,
BNL: Usually it was one right answer and two crazy answers that had nothing to do with the question. Most of them were 2019 because that’s the year we got married, and then some other options. And then once you choose your answer, you would be directed to some other card that had a name of some mathematical term or some theorem, and that one would give you the second coordinate.
NN: So we made this cool thing we called the maths tree. We had several of these manzanita trees, and we put little cards on them with names, with these mathematical terms, with the answers, with people's questions, and we just had this tree with cards hanging down, more than 100 cards hanging down. What I liked was that in mathematicians it induced this kind of hunting instinct. You somehow look at this tree, and there are all these terms that you recognize and have seen before in your mathematical career, and you're searching for that one that you know is the correct one.
BNL: And of course we wanted to make sure everyone found their table, so if they for any reason chose the wrong answer, they would also be directed to some card with a mathematical term. And when they opened it, it would say, “Oops, try again.” So that way you knew, okay, I just have to go and try again and find what the correct coordinate would be.
KK: This is amazing.
NN: And then to foolproof the whole thing, during the cocktail hour, they would do this kind of hunting for mathematical terminology, but then the doors would open into the dinner room, and just like most textbooks, when you open the back of the textbook, there’s the answer key, in the dinner room we had the answer key, which was a poster with everyone’s names
BNL: And their polyhedra
NN: And their polyhedron of the night.
EL: Yeah.
NN: So it was foolproof. I think some commentators on the internet were very concerned that some guests wouldn’t find their seat and starve to death.
EL: Leave hungry.
NN: No, it was all thought through completely.
KK: Some of the internet comments, people were just incredulous. Like, “I can’t believe these people forced their guests to do this!” They don’t understand, we would think this was incredible. This is amazing!
EL: Yeah! So delightfully nerdy and thoughtful. So, yeah, we’ve mentioned, this did end up on the internet in some way, which is how I heard about it, because I sadly was not invited to your wedding. (Since this is the first time I have looked at you at all.) So yeah, how did it end up making the rounds on some weird corner of the internet?
NN: So basically a couple of weeks before the wedding, I made a post on Facebook. It was a private Facebook post, just to my Facebook friends. You know, a Facebook friend is a very general notion.
EL: Right.
NN: I kind of briefly explained that all our guests are mathematicians, so we’re going to do this cool thing, we’re going to come up with mathematical questions, and one of my Facebook “friends,” a Facebook acquaintance, I later found out who it was, they didn’t like it so much, and they did a screengrab, and then they posted, with our names redacted and everything redacted, they made a post on Reddit which was like, “Maths shaming, look, these people are forcing their guests to solve a mathematical question to find their seat, maths shaming.”
BNL: It was in the “bridezilla” thread. “This crazy bride is forcing their guests to solve mathematical problems,” and how evil she is. Which is funny, because Nikita was the one who wrote that Facebook post.
NN: I actually was the one.
BNL: So it was not a bridezilla, it’s what I like to call a Groom Kong.
NN: That’s right. So then this Reddit thread kind of got very popular, and later some newspaper in Australia picked it up, and then it just snowballed from there. Fox News, Daily Mail, yeah.
KK: Well, this is great. This is good, now you’ve had your 15 minutes of fame, and now life can get back to normal.
EL: Yeah.
KK: This is a great story. Okay, but this is a podcast about theorems, so what is your favorite theorem?
NN: Right, yeah. So we kind of actually thought long and hard about what theorem to choose. Like, what is our favorite theorem is such a difficult question, actually.
KK: Sure.
NN: It's kind of like, you know, what is your favorite music piece? And it's, I mean, it's so many variables. Depends on the day, right?
EL: Yeah.
NN: But we ended up deciding that we were going to choose the intermediate value theorem.
KK: Oh, nice.
NN: As our favorite theorem.
KK: Good.
BNL: So, yes, the intermediate value theorem is probably one of the first theorems that you learn when you go to university, right? Like, you start learning basic calculus, and it's one of the first theorems that you see. Well, what it says is: suppose you start with a continuous function f, and you look at some interval [a, b], so the function f sends a to f(a) and b to f(b), and then you pick any value y that is between f(a) and f(b). And then you know that you will find a point c that is between a and b, such that f(c) equals y.
So it looks like an incredibly simple statement. Obvious.
KK: Sure.
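For reference, the statement just described, written in symbols:

\[ f \in C([a,b]),\ y \text{ between } f(a) \text{ and } f(b) \;\Longrightarrow\; \text{there exists } c \in [a,b] \text{ with } f(c) = y. \]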
BNL: Right, but it is a quite powerful statement. Most students believe it without proof. They don't need it. It's, yes, absolutely obvious. But, well, we have lots of things that we like about the theorem.
NN: Yeah, I mean, it feels incredibly simple and completely obvious. You look at it, and, you know, it's the only thing that could possibly be true. And the cool thing about it, of course, is that it represents kind of the essence of what we mean by continuous function.
KK: Sure.
EL: Yeah.
NN: In fact, if you look at the history, before our modern formal definition of continuity, this intermediate value property was one of the properties people used as part of the definition of continuity. Before we formalized the definition, people were very confused about what a continuous function actually should mean. And many thought, erroneously, that this intermediate value property was equivalent to continuity. In some sense, it's what you would want to believe, because it really is the property that more or less formalizes what we normally tell our students: heuristically, a continuous function is one that, if you want to draw it, you can draw it without taking your pencil off of the paper. And that's what the intermediate value theorem, this intermediate value property, really represents.
But for all its simplicity and triviality, if you actually look at it properly, if you look at our modern definition of continuity using epsilon-delta, then it becomes not obvious at all.
BNL: Yes. So if you look at the definition of continuity, with its epsilons and deltas, how is it possible that from this thing you get such an obvious statement as the intermediate value theorem? So what the intermediate value theorem is telling us is that, well, continuous functions do exactly what we want them to do. They are what we intuitively think of as a continuous function. So in a sense, what the intermediate value theorem is doing for us is serving as a bridge between the formal definition that we encounter in university and the intuitive picture from high school. So we start first-year calculus, and then our professor gives us this epsilon-delta definition of continuity. And it's like, oh, but in high school, I learned that a continuous function is one that I can draw without lifting my pencil. Well, the intermediate value theorem is precisely that. It's connecting the two ideas in a very powerful way.
NN: Yeah. And also, you know, it cannot be overstated how useful it is. I mean, we use it all the time. As a geometer, of course, you know, you use some generalization of it, that continuous functions send connected sets to connected sets, and we use it all the time, absolutely without thinking; we take it absolutely for granted.
BNL: So even if you do analysis, you are using it all the time, because you can see that the intermediate value theorem is also equivalent to the least upper bound property, that is, the completeness axiom of the real numbers. Which is quite incredible, to see that just having the intermediate value theorem could be equivalent to such a fundamental axiom for the real numbers, right? So it appears everywhere. It's surprising. We know it's very easy: when you see the proof of the intermediate value theorem, you see that it is a consequence of this least upper bound property, but the converse is also true. So in a sense, we have that very powerful notion there.
KK: I don't think I knew the converse. So I'm a topologist, right? So to me, this is just a statement that the continuous image of a connected set is connected. But then, of course, the hard part is showing that the connected subsets of the real line are precisely the intervals, which I guess is where the least upper bound property comes in.
BNL: Yes, indeed, yes. Exactly yes.
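A standard sketch of the direction just mentioned, assuming f(a) ≤ y ≤ f(b): the least upper bound property lets you define

\[ c = \sup\{\, x \in [a,b] : f(x) \le y \,\}, \]

and continuity of f at c rules out both f(c) > y (then f would exceed y on a small interval to the left of c, so a smaller number would already be an upper bound of the set) and f(c) < y (then f would stay below y slightly to the right of c, so c would not be an upper bound at all), leaving f(c) = y.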
KK: Okay. I haven't thought about analysis in a while. As it turns out, we're hiring several people this year. And, for some of them, we've asked them to do a teaching demonstration. And we want them all to do the same thing. And as it so happens, it's a calculus one demonstration about continuity and the intermediate value theorem.
BNL: Oh.
EL: Nice.
KK: So in the last month, I've seen, you know, 10 presentations about the intermediate value theorem. And I've come to really appreciate it as a result. My favorite application is, though, that you can use it to prove that you can always make any table level, or not level, but all four legs touch the ground at the same time.
BNL: Yes.
KK: Yeah, that's, that's great fun. The table won't be level necessarily, but all four feet will be on the ground, so it won't wobble.
BNL: Yes.
EL: Right.
NN: If only it were actually applied in classrooms, right?
KK: Right.
EL: Yeah.
NN: The first thing you always do when you come to sit at a desk somewhere is to pull out a piece of paper to actually level it.
EL: Yeah. So was this a theorem that you immediately really appreciated? Or do you feel like your appreciation has grown as you have matured mathematically?
BNL: In my case, I definitely learned to appreciate it more and more as I matured mathematically. The first time I saw the theorem, it's like, "Okay, yes, interesting, very cool theorem." But I didn't realize at the moment how powerful that theorem was. Then as my mathematical learning continued, I realized, "Oh, this is happening because of the intermediate value theorem. And this is also a consequence of it." So there are so many important things that are a consequence of the intermediate value theorem. That really makes me appreciate it.
NN: Well, there's also somehow, I think this also comes with maturity, when you realize that some very, what appear to be very hard theorems, if you strip away all the complexity, you realize that they may be really just some clever application of the intermediate value theorem.
BNL: Like Sharkovskii's theorem, for example, is a theorem about periodic points of continuous functions. And it just introduces some new ordering in the natural numbers. And it tells you that if you have a periodic point of some period m, then you will have periodic points of any period that comes after m in that ordering. You can also look at the famous "period three implies chaos."
KK: Right.
BNL: A big component of it is period three implies all other periods. And the proof of it is really just a clever use of the intermediate value theorem. It's so interesting, that such an important and famous theorem is just a very kind of immediate--though, you know, it takes some work to get it--but you can definitely do it with just the Intermediate Value Theorem. And I actually like to present that theorem to students in high school because they can believe the Intermediate Value Theorem.
EL: Yeah.
BNL: That's something that if you tell someone, "This is true," no one is going to question it. It is definitely true.
KK: Sure.
BNL: And then you tell them, "Oh, using this thing that is obvious, we can also prove these other things." And I've actually done this with high school students, you know, proving Sharkovskii's theorem just starting from the fact that they believe the intermediate value theorem. So they can get to higher-level theorems just from something very simple. I think that's beautiful.
NN: Yeah, that's kind of a very astonishing thing, that from something so simple, and what looks obvious, you can get statements which really are not obvious at all, like what she just explained, Sharkovskii's theorem, that's kind of a mind blowing thing.
EL: Yeah, you're making a pretty good case, I must say.
KK: That's right.
EL: So when we started this podcast, our very first episode was Kevin and I just talking about our own favorite theorems. And I have already since re-, you know, one of our other guests has taken my loyalty elsewhere. And I think you're kind of dragging me. So I think, I think my theorem love is quite fickle, it turns out. I can be persuaded.
KK: You know, in the beginning of our conversation, you pointed out, you know, how does one choose a favorite theorem, right? And, and it's sort of like, your favorite theorem du jour. It has to be.
BNL: Exactly, yes.
EL: Yeah.
KK: All right, so what does one pair with the intermediate value theorem?
BNL: So we thought about it. And to continue with the Valentine's Day theme, we want to pair the intermediate value theorem with love in a relationship.
KK: Ah, okay, good.
BNL: The reason why we want to pair it with love is because when you love someone, it's completely obvious to you. You just know it's true, you know you love someone.
KK: That's true.
BNL: You just feel like there's no proof required. It's just, you know it, you love this person.
NN: It's the only thing that can possibly be true, there's no reason to prove it.
BNL: But also, just like any good theorem, you can also prove, you can provide a proof of love, right? You can show someone that you love them.
NN: Any good mathematical theorem can always be supplied with a rigorous proof, detailed to whatever level is required. And if you truly, really truly love someone, you can prove it. And if someone questions any part of that proof, you can always supply more details and a more detailed explanation for why you love that person. And that's why there's a similarity between the intermediate value theorem and love in a relationship.
EL: Yeah, well, I'm thinking of the poem now, "How do I love thee? Let me count the ways," This is a slightly mathematically-flavored poem.
KK: But I think there must be at least, you know, the, the continuity of the continuum ways, right? Or the cardinality of the continuum ways.
NN: Absolutely.
KK: That's an excellent pairing.
EL: Yeah.
BNL: We also thought that love is something that we feel, we take it as an obvious statement, and then from love, we can build so many other things, right? Like in the intermediate value theorem case, we start from a theorem that looks obvious, and using it, we can prove so many other theorems. So it's the same, right, in a relationship. You start from love, and then you can build so many other great things.
EL: Yeah, a marriage for example.
BNL and NN: For example, yes.
EL: Yeah. And a ridiculously amazing wedding game as part of that.
NN: There were some other mathematical tidbits in the wedding. So one of them I'll mention is our rings. Our wedding bands are actually Möbius bands.
KK: Oh, I see.
EL: Okay, very nice.
NN: We had to work with a jeweler. And there's a bit of a trick, because if you just take a wedding band and you do the twist to make it a Möbius band, then the place where it twists would stick out too much.
EL: Yeah.
NN: So the idea is to try to squish it. And that, of course, is a bit challenging if you want to make a good-looking ring, so that was part of the problem to be solved.
EL: Yeah. Well, my wedding ring is also--it's not a Möbius band. But it's one that I helped design with a particular somewhat math-ish design.
KK: My wife and I are on our second set of wedding bands. The first ones, because we were, I was a graduate student and poor, we got silver ones. And silver doesn't last as long, so we're on our second ones. But the first ones were handmade, and they were, they had sort of like a similar to Evelyn's sort of little crossing thing. So they were a little bit mathy, too. I guess that's a thing that we do, right?
EL: Yeah.
NN: It's inevitable.
KK: Yeah, excellent.
KK: So we like to give our guests a chance to plug anything. Do you have any websites, books, wedding registries that you want to plug?
NN: Actually, in terms of the wedding registry, lots of our guests, of course, were asking. We didn't have a wedding registry because given the career of a postdoc, where you travel from place to place every few years, a wedding registry isn't the most practical thing. Yeah, difficult.
BNL: Yes. So we said, well, you can just give us anything you like, we'll have a box where you can leave envelopes. And some of our guests were very creative. They gave us, some of them decided to give us money. But the amounts they chose were very interesting, because they were, like, some integer times e or times π, or some combination. They wrote the number and then they explained how they came up with that number. And that was very interesting and sweet.
NN: Some of them didn't explain it. But we kind of understood. We cracked the code, essentially, except one. So one of our friends wrote us a check with a very strange number. And to this day,
BNL: We still don't know what the number is.
NN: We kept trying to guess what it could be. But no, I don't know. Maybe eventually I'll just have to ask. I'd like to know.
KK: Maybe it was just random.
NN: Maybe it was just random.
BNL: Yeah, I think one of the best gifts people gave us was their reaction right after seeing their card. In particular, there is a very nice story of a guest who really, really loved the way we set up everything and maybe you can tell us about that.
NN: Yeah, so at the dinner we would approach tables to say hi to some guests, and this particular guest, he's actually Bea's teaching mentor.
BNL: So I'm very much into teaching. And he's the one who taught me most of the things I know.
NN: So we approached him, and he looked at us, and he pulled the name card out of his breast pocket like, "This. This is the most beautiful thing I've ever seen. This is incredible. It's from my last paper, isn't it?" Yes. Yeah, that's right. He's like, "I have to send it to my collaborator. He's going to love it." And just seeing that reaction, him telling us how much he loved the card, made all those hours that Bea and I spent reading through papers and trying to come up with some kind of, you know, short-sounding question to put into multiple choice worthwhile.
EL: Yeah, I'm just imagining it. Like, usually you don't have to, like, cram for your wedding. But yeah, you've got all these papers you've got to read.
BNL: Yeah, we spent days going through everyone's papers and trying to find questions that were short enough to put in a small card and also easy to answer as a multiple choice question.
NN: Yeah, some were easy. So for example, my former PhD advisor came to our wedding, and I basically gave him a question from my thesis, you know, just to make sure he'd read it.
EL: Yeah.
NN: So when we approached him at the dinner and I said, "Oh, did you like the question?" and he just looked at me like, "Yeah, well I gave you that question two years ago!"
EL: Yeah
NN: So, yes, some questions were easy to come up with. Some questions were a bit more difficult. So we had a number of people from set theory, and neither of us are in set theory. I'd never, ever before opened a paper in set theory. It was all very, very new to me.
EL: Nice.
KK: Well, this has been great fun. Thanks for being such good sports on short notice.
EL: Yeah.
KK: Thank you for joining us.
BNL: Yeah.
EL: Yeah, really fun to talk to you about this. It's so much better than even the Reddit post and weird news stories led me to believe.
KK: Well, congratulations. it's fun meeting you guys. And let me tell you, it's fun being married for 27 years, too.
NN: We're looking forward to that.
KK: All right, take care.
NN: Thank you. Bye bye.
BNL: Bye.
Kevin Knudson: Welcome to My Favorite Theorem, a podcast about mathematics, favorite theorems, and other random stuff that we never know what it’ll be. I’m one of your hosts, Kevin Knudson. I’m a professor of mathematics at the University of Florida. This is your other host.
Evelyn Lamb: Hi, I'm Evelyn Lamb. I'm a freelance math and science writer in Salt Lake City, Utah, where I forgot to turn on the heat when I first woke up this morning. I've got separate systems. So it is very cold in the basement here where I am recording.
KK: Well, yeah, it's cold in Florida this morning. It was, you know, in the mid-60s. It's very pleasant. I'm still in short sleeves. Our listeners can't see this, but I'm in short sleeves. Evelyn’s in a sweater. And our guest is in a jacket in his attic.
EL: Yes.
KK: So today we are happy to welcome Nira Chamberlain all the way from the UK. Can you tell everyone about yourself a little bit?
Nira Chamberlain: Yes, hello. My name is Dr. Nira Chamberlain. I’m a professional mathematical modeler. I'm also the president-designate of the Institute of Mathematics and its Applications.
KK: Fantastic. So tell us about the IMA a little bit. So we have one of those here, but it's a different thing. So what is it?
NC: Right. I mean, the Institute of Mathematics and its Applications is a professional body of professional mathematicians, and it's a learned society. It's been around since 1964. And it actually exists to make sure that the UK has a strong mathematical culture and to look after the interests of mathematicians in industry, government, and academia.
KK: Oh, that's great. Maybe we should have one of them here. So the IMA here is something else. It’s a mathematics institute. But maybe the US should have one of these. We have the AMS, right, the American Mathematical Society.
EL: Or SIAM might be more similar because it does applications, applied math.
KK: Yeah, maybe.
EL: Yeah, we’ve kind of got some.
KK: So we asked you on for lots of reasons. One is, you know, you're just sort of an interesting guy. Two, because you’re an applied mathematician, and we like to have applied mathematicians on as much as we can. Three, you actually won something called the Big Internet Math-Off this summer, of which Evelyn was a participant.
EL: Yes. So he has been ruled—he’s not just an interesting guy, he has been officially ruled—the most interesting mathematician in the world…among people who were in the competition. The person who ran it always put this very long disclaimer asterisk, but I think Nira definitely has some claim on the title here. So, yeah. Do you want to talk a little bit about the big, Great Internet Math-Off?
NC: Yes, there's, let's say, an organization, a group of mathematicians who do a blog, The Aperiodical, and they decided to start this competition called the Big Internet Math-Off. It's a knockout tournament: 16 mathematicians, and they each put up something interesting about mathematics. It was put up on the internet, it was there for 48 hours, the general public would vote for what they found the most interesting, and the winner would progress to the next round, and it was four rounds all together. And if you reach the final, and you win it, you get the title “World's Most Interesting Mathematician.” And when I was invited, I thought, “Oh, isn't this really for those mathematicians that are pure mathematicians and those public communicators and those into puzzles? I mean, I'm a mathematical modeler, I’m in applied mathematics, so what am I really going to talk about?” And then I saw that I was actually introduced as the applied mathematician, and everybody else was, let's say, the public communicator, and here's the applied mathematician. It was almost like: here's the villain—boo!
I thought, “Okay, there you go.” I’m thinking, “All right, what we're going to do is I'm actually going to stick to being an applied mathematician.” So three out of the four topics I actually introduced were about applied mathematics, and yes, the fourth topic was actually about the history of mathematics. And I was fortunate enough to get through each of the rounds and win the overall competition. It was very interesting and very good.
EL: Yeah, and I do wish — I think you look very interesting right now—I wish our listeners could see that you've got headphones on that make you look a little bit like a pilot, and behind you are these V-shaped beams, I guess in the attic, where I can totally imagine you, like, piloting some ship here, so you're really looking the part this morning, or this afternoon for you.
NC: Thank you very much indeed. I mean that’s what I call my mathematics attack room, which is the attic, and I have 200 math books behind me. And I’ve got three whiteboards in front of me, quite a number of computer screens. And I’ve got all my mathematical resources all in one place.
KK: Okay, so I just took a screenshot. So maybe with your permission, we’ll put this up somewhere.
So this is a podcast about theorems. So, Nira what is your favorite theorem?
NC: Okay, my favorite theorem is actually to do with the Lorenz equation, the Lorenz attractor. Now it was done in the 1960s by a meteorologist called Edward Lorenz. And what he wanted to do was to take a partial differential equation, see if he could make some simplifications to it, and he came up with three nonlinear ordinary differential equations to actually look at, let's say, the convection and the movement, to see whether we can actually use that to do some meteorological predictions. And then he got this set of equations, went to work solving it numerically, and then he decided, “Actually, I’d better restart my computer game because I've done something wrong.” So he went back, he restarted the computer, but he actually changed the initial conditions by a little bit. And then when he came back, he actually saw that the trajectory of the solution was different from what he had started with. When he went back and started checking, he actually saw that the initial conditions had only changed by a little bit, and what was this? It was probably one of the first examples of the “butterfly effect.” The butterfly effect is saying that if, let's say, a butterfly flaps its wings, then that will prevent a hurricane going into Florida — topical.
KK: Yeah, it’s been a rough month.
NC: Yeah, or, if, let's say, another butterfly flaps its wings, then maybe another hurricane may go into Salt Lake City, for example. And this is, like I said, an example of chaotic behavior once you choose certain parameters. Now, the reason why I like this theorem so much is that I was actually introduced to this topic when I was in my final year of my mathematics degree. And it was probably one of my introductions to the field of mathematical modeling, recognizing that when you actually model reality, mathematics is powerful, but also has its limitations. And you’re just trying to find that boundary between what can be done and what can't be done. Mathematical modeling has a part to play in that.
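For reference, the three equations Lorenz arrived at are usually written as

\[ \dot x = \sigma (y - x), \qquad \dot y = x(\rho - z) - y, \qquad \dot z = xy - \beta z, \]

where x, y, and z track the state of the simplified convection model and σ, ρ, β are parameters; the classic chaotic choice is σ = 10, ρ = 28, β = 8/3. The episode doesn't state the equations explicitly, so treat this as the standard form from the literature.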
KK: Right. What's so interesting about meteorological modeling is that I've noticed that forecasts are really good for about two days.
NC: Yeah.
KK: So with modern computing power, I mean, of course, as you pointed out, everything is so sensitive to initial conditions, that if you have good initial data, you can get a good forecast for a couple of days, but I never believe them beyond that. It's not because the models are bad. It's because the computation is so precise now that the errors can propagate, and you sort of get these problems. Do you have any sense of how we might extend those models out better, or is it just a lost cause, is it hopeless?
NC: It's probably a lost cause. I agree with you to a certain extent. But it's a case of, when we're dealing with, let's say, meteorological equations that have chaotic behavior, if you put down initial conditions and they change a little, it just shows that, yeah, we may have good predictions to begin with, but as we go on into the future, those rounding errors will come, those differences will come. And it's almost like, let's use an analogy: let's say you go to whatever computer algebra software you have, and you get π, and let's say you square root it 10 times and then square it 10 times, and then you square root it 100 times and then square it 100 times, and if you keep on repeating that, then actually, when you come back to the figure, you're thinking, "Is this actually π?" No, it's not. And also, different calculators and different computer algebra software will actually give you different results. It's at that point, when we're predicting a weather system, that the chaotic behavior of the actual nonlinear differential equations, coupled with those rounding errors, makes it very difficult to do that long-term weather forecast. So nobody can really say to me, "By the way, in five years' time, on the 17th of June, the weather will be this." That's very much a nonsense.
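A minimal Python sketch of the calculator experiment just described (an illustration of floating-point round-off, not code from the episode): take the square root of π fifty times, square the result fifty times, and compare with π.

    import math

    x = math.pi
    n = 50  # number of square roots, then the same number of squarings

    # Repeatedly take square roots: x becomes pi**(1/2**n), a value very close to 1.
    for _ in range(n):
        x = math.sqrt(x)

    # Squaring n times would undo the square roots exactly in exact arithmetic,
    # but each step roughly doubles the accumulated floating-point rounding error.
    for _ in range(n):
        x = x * x

    print("pi         =", math.pi)
    print("round trip =", x)  # typically differs noticeably from pi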
KK: Sure, sure. Well, I guess orbital mechanics are that way too, right? I mean, the planetary orbits. I mean, we understand them, but we also can't predict anything in some sense.
NC: Yeah.
KK: Right, right. Living in Florida, I pay a lot of attention to hurricane models. And it's actually really fascinating to go to these various sites. So windy.com is a good one of these. They show the wind field over the whole planet if you want. And they'll also, when there are hurricanes, they have the separate models. So the European model actually turns out to be better than the American one a lot, which is sort of interesting because hurricanes affect us a lot more than— I mean the remnants get to the UK and all of that. But so you’re right, it's sort of interesting, the different implementations—the same equations, essentially, right, that underlie everything get built into different models. And then different computing systems have different rounding errors. And the models, they’re sort of, they're usually pretty close, but they do diverge. It's really very fascinating.
NC: Yeah, I mean, over in the United Kingdom, we had an interesting case in 1987 where the French meteorology office said, “By the way, people in the north of France should be aware that there's going to be a hurricane approaching.” While the British meteorological office was saying, “Oh, there's no way that there's going to be a hurricane. There's no hurricane. Our model says there’s going to be no hurricane.” So the French are saying there’s going to be a hurricane. The British say there’s not going to be a hurricane. And guess what? The French were right and a hurricane hit the United Kingdom.
And because of that, what they did is that now the Met Office, which is the main weather place in Britain, has put quite a number of boats out in the Atlantic to come up with a much more accurate measure of the weather system, so that they can actually feed their models, and they also use more powerful models, because the equation itself remains the same; it’s the information that actually goes into it which is the difference, yeah? So in terms of what you said about the American models, it's all dependent on who you get the measurements from, because you may not get exactly the measurement from the same boat. You may get it from a different boat, from different boats in a different location, different people. This is where you come to that human factor. Some people will say, “Oh, round it to this significant figure,” while someone else will say, “Round it to that significant figure,” and guess what? All of that actually affects your final results.
EL: Yeah, that matters.
NC: Yes.
KK: So do you do this kind of modeling yourself, or are you in other applications?
NC: Oh, I'm very much in other applications. I mean, I'm still very much a mathematical modeler. I mean, my research now is to do with minimizing the probability of an artificial intelligence takeover. That’s the current research I'm doing at Loughborough University.
EL: Well, you know, the robots will have you first in line or something in the robot uprising.
NC: Well, we talk about robots, but this is quite interesting. When we're talking about, let's say, artificial intelligence takeover, everybody thinks about Hollywood: Terminator, The Matrix, I, Robot, you know, robots marching down the street. But there are different types of AI takeovers, and some of them are much more subtle than that. For instance, one scenario is, let's say, you have a company, and they decide to really upgrade their artificial intelligence, their machine learning, to the point that it's more advanced than their competition's. And by doing so, they actually put all their competitors out of business. And so what you have is this one company almost running the world economy. Now the question is, would that company make decisions (based on its AI) that are conducive to social cohesion? And you can't put your hand on your heart and say, “Absolutely, yes,” because a machine, it’s largely, like, 1-0; it doesn't really care about the consequences for social cohesion. So henceforth, we can actually do a model of that, asking: could we ever get to a situation where one company actually dominates all the different industrial sectors and ends up, let’s say, running the world economy? And if that's the case, what strategies can we actually implement to try and minimize that risk?
EL: It sounds not entirely hypothetical.
KK: No, no. Well, you know, of course the conspiracy theorists types in the US would have you believe that this already exists, right? The Deep State and the Illuminati run everything, right?
EL: But getting back to the Lorenz system and everything, you were saying that this is one of the earliest examples of mathematical modeling you saw. Was it one of the things that inspired you to go that direction when you got your PhD?
NC: Yes, so I was doing that as part of my final year mathematics degree, and I thought, well, here’s this whole idea of applied mathematics, using mathematics in the real world, saying that there are problems where some people say it's impossible, you can't use mathematics. And you're just trying to push the boundaries of mathematics and say, “This is how we actually model reality.” It was one of the things that actually did inspire me, so Edward Lorenz actually inspired me, just saying, wait a minute, applied mathematics is not necessarily about: here’s a problem, here’s an equation, put the numbers in the right places, and here's a solution. It's about gaining that insight into the real world, learning more about the world around you, learning more about the universe around you, through mathematics. And that's what inspired me.
KK: And it's very imprecise, but that's sort of what makes it intriguing, right? I mean, you have to come up with simplifying assumptions to even build a model, and then how much information can we extract from that?
NC: That’s one of the key things about mathematical modeling. I mean, you're looking at the world. The world is complex, full of uncertainty, and it’s messy. And you are making some simplifying assumptions, but the key thing is: do you make simplifying assumptions to an extent that actually corrupts and compromises your solution, or do you make simplifying assumptions that say, “Actually, this gives me insight into how the world actually works”? It's recognizing which factors you include, which factors you exclude, and arriving at a model that is what I call useful.
KK: Right. That’s the art, right?
NC: Yeah, that's the art. That's the art of mathematical modeling.
KK: So another thing we do on this podcast is we ask our guests to pair their theorem with something. So what pairs well with the Lorenz equation?
NC: I chose to pair it with the Jamaican dish called ackee yam and saltfish. Now the reason why is that with ackee yam and saltfish, if you cook it right, it is delicious, but if you cook it wrong, the ackee turns out to be poisonous, and that’s a bit like the Lorenz equation.
KK: What is ackee? I don't think I know what this is.
NC: Okay. Ackee is actually a vegetable, but if you actually were to look at it, it looks like scrambled egg, but it's actually a vegetable. It's like a yellow vegetable.
EL: Huh.
KK: Interesting.
NC: And yam, it’s like an overgrown, very hard potato. It looks like a very overgrown, hard potato.
KK: Sure, yeah.
NC: And saltfish is just a Jamaican way of saying cod. Even though you could really say ackee yam and cod, they don't call it cod, they call it saltfish.
KK: Okay. All right. So I've never heard ackee.
EL: Yeah, me neither.
KK: I mean, I knew that yams, so in the United States, most people will call sweet potatoes yams, they’ll use those two words interchangeably. But of course, yams are distinct, and I think they can be poisonous if you don't cook them right, right? Or some varieties can. But so ackee is something separate from the yam.
NC: Yeah.
EL: Also poisonous if you don't cook it right.
NC: Absolutely.
KK: So can you actually access this in England, or do you have to go to Jamaica to get this?
NC: Yes, we can access this in England, because in England we have a large West Indian diaspora community.
KK: Sure, right.
NC: And also we do get lots of variety of foods from different countries around the world. So it's relatively easy to access ackee yam. And also we’ve got quite a number of Caribbean restaurants, so definitely there, they are going to cook it right.
KK: So it's interesting, we have a Caribbean restaurant here in town in Gainesville, which of course we're not as far away as you are, but they don't try to poison us. The food is delicious.
EL: That you know of.
KK: Well that's right. I love eating there. The food is really spectacular. But this is interesting.
EL: And is this a family recipe? Do you have roots in in the West Indies, or…
NC: Yes, my parents were from Jamaica. I still have relatives in Jamaica, and my wife is of Jamaican descent. Now and again we do have that Caribbean meal. I thought, “Well, what shall I say as a food? Should I go for the British fish and chips?” And I thought, “No, let's go for ackee yam and saltfish.”
KK: Sure, well and actually I think your jacket looks like a Jamaican-influenced thing, right? With the black, green, and yellow, right?
NC: Yes, absolutely. And that's because it's quite cold in the attic. This is the same style of jacket as the Jamaican bobsled team, so I decided to wear it, as it’s quite cold up here.
EL: Yeah, Cool Runnings, the movie about that, was an integral part of my childhood. My brother and sister and I watched that movie a lot.
So I’m curious about this ackee vegetable, like how sensitive are we talking for the dependence on initial conditions, the dependence on cooking this correctly to be safe? Is it pretty good, or do you have to be pretty careful?
NC: You have to be pretty good, you have to be pretty careful. As long as you follow the instructions you’re okay, but in this case, if you don't cook it long enough, you don't cook it at a high enough temperature, whatever you do, please do not eat it cold, do not eat it raw.
EL: Okay.
KK: Like actually it might kill you, or it just makes you really sick?
NC: It will make you really sick. I haven't heard— well let’s put it this way, I do not wish to carry out the experiments to see what would happen.
KK: Understood.
EL: Yes.
KK: Well this has been great fun. I've learned a lot.
EL: Yeah
KK: Thanks for joining us, Nira.
NC: Thank you very much indeed for inviting me.
Kevin Knudson: Welcome to My Favorite Theorem, a podcast about mathematics, theorems, and, I don't know, just about anything else under the sun, apparently. I'm Kevin Knudson. I'm one of your hosts. I'm a professor of mathematics at the University of Florida. This is your other host.
Evelyn Lamb: Hi, I'm Evelyn Lamb. I'm a freelance math and science writer based in Salt Lake City. So how are things going?
KK: It's homecoming weekend. We're recording this on a Friday, and for people who might not be familiar with Southeastern Conference football, it is an enormous thing here. And so today is a university holiday. Campus is closed. In fact, the local schools are closed. There's a big parade that starts in about 20 minutes. My son marched in it for four years. So I've seen it. I don't need to go again.
EL: Yeah.
KK: I had brunch at the president's house this morning, you know. It's a big festive time. I hope it doesn't get rained out, though. It's looking kind of gross outside. How are things for you?
EL: All right. Yeah, thankfully, no parades going on near me. Far too much of a misanthrope to enjoy that. Things are fine here. My alarm clock-- we're also recording in the week in between the last Sunday of October and the first Sunday of November.
KK: Right.
EL: In 2007, the US moved the date when it switches from Daylight Saving Time back to Standard Time to the first Sunday of November. But my alarm clock, which automatically adjusts, was manufactured before 2007.
KK: I have one of those too.
EL: Yeah, so it's constantly teasing me this week. Like, "Oh, wouldn't it be nice if it were only 7am now?" So yeah.
KK: All right. Well, yeah, first world problems, right?
EL: Yes. Very, very much.
KK: All right. So today, we are thrilled to have Skip Garibaldi join us. Skip, why don't you introduce yourself?
Skip Garibaldi: My name is Skip Garibaldi. I'm the director at the Center for Communications Research in La Jolla.
KK: You're from San Diego, aren't you?
SG: Well, I got my PhD there.
KK: Ish?
SG: Yeah, ish.
KK: Okay.
SG: So I actually grew up in Northern California. But once I went to San Diego to get my degree, I decided that that was really the place to be.
KK: Well, who can blame you, really?
EL: Yeah, a lot to love there.
KK: It's hard to argue with San Diego. Yeah. So you've been all over. For a while you're at the Institute for Pure and Applied Math at UCLA.
SG: Yeah, that was my job before I came to the Center for Communications Research. I was associate director there. That was an amazing experience. So their job is to host conferences and workshops which bring together mathematicians in areas where there's application, or maybe mathematicians with different kinds of mathematicians where the two groups don't really talk to each other. And so the fact that they have this vision of how to do that in an effective way is pretty amazing. So that was a great experience for me.
KK: Yeah, and you even got in the news for a while. Didn't you and a reporter, like, uncover some crime syndicate? What am I remembering?
SG: That's right. Somehow, I became known for writing things about the lottery. And so a reporter who was doing an investigative piece on lottery crime in Florida contacted me, and I worked closely with him and some other mathematicians, and some people got arrested. The FBI got involved and it was a big adventure.
KK: So Florida man got arrested. Never heard of that. That's so weird.
SG: There's a story about someone in Gainesville in the newspaper article. You could take a look.
KK: It wasn't me. It wasn't me, I promise.
EL: Whoever said math wasn't an exciting field?
KK: That's right.
Alright, so, you must have a favorite theorem, Skip, what is it?
SG: I do. So you know, I listened to some of your other podcasts. And I have to confess, my favorite theorem is a little bit different from what your other guests picked.
EL: Good. We like the great range of things that we get on here.
SG: So my favorite theorem for this podcast answers a question that I had when I was young. It's not something that is part of my research today. It's never helped me prove another theorem. But it answers some question I had from being in junior high. And so the way it goes, I'm going to call it the unknowability of irrational numbers.
So let me explain. When you're a kid, and you're in school, you probably had a number line on the wall in your classroom. And so it's just a line going left to right on the wall. And it's got some markings on it for your integers, your 0,1,2,3, your -1,-2,-3, maybe it has some rational numbers, like 1/2 and 3/4 marked, but there's all these other points on that number line. And we know some of them, like the square root of two or e. Those are irrational, they're decimals that when you write them down as a number-- like π is 3.14, we know that you can't really write it down that way because the decimal keeps on going, it never repeats. So wherever you stop writing, you still haven't quite captured π.
So what I wondered about was like, "Can we name all those points on the number line?"
EL: Yeah.
SG: Are π and e and the square root of two special? Or can we get all of them? And it comes up because your teacher assigns you these math problems. And it's like "x^2+3x-5=0. Tell me what x is." And then you name the answer. And it's something involving a square root and division and addition, and you use the quadratic formula, and you get the answer.
So that's the question. How many of those irrationals can you actually name? And the answer is, well, it's hard.
EL: Yeah.
SG: Right?
KK: Like weirdly, like a lot of them, but not many.
SG: Exactly!
EL: Yeah.
SG: So if we just think about it, what would it mean to name one of those numbers? It would mean that, well, you'd have to write down some symbols into a coherent math problem, or a sentence or something, like π is the circumference of a circle over its diameter. And when you think about that, well, there's only finitely many choices for that first letter and finitely many choices for that second letter. So it doesn't matter how many teachers there are, and students, or planets with people on them, or alternate universes with extra students. There's only so many of those numbers you can name. And in fact, there's countably many.
EL: Right.
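To make the counting precise (a sketch of the argument just given): a name is a finite string over some fixed finite alphabet Σ, so the set of all possible names is

\[ \bigcup_{k=1}^{\infty} \Sigma^k, \]

a countable union of finite sets, which is countable. Since every nameable number has at least one name, there are at most countably many nameable numbers.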
KK: Right. Yeah. So are we talking about just the class of algebraic numbers? Or are we even thinking a little more expansively?
SG: Absolutely more expansive than that. So for your audience members with more sophisticated tastes, you know, maybe you want to talk about periods where you can talk about the value of any integral over some kind of geometric object.
KK: Oh, right. Okay.
SG: You still have to describe the object, and you have to describe the function that you're integrating. And you have to take the integral. So it's still a finite list of symbols. And once you end up in that realm, numbers that we can describe explicitly with our language, or with an alien language, you're stuck with only a countable number of things you can name precisely.
EL: Yeah.
KK: Well, yeah, that makes sense, I suppose.
SG: Yeah. And so, Kevin, you asked about algebraic numbers. There are other classes of numbers you can think about, which, the ones I'm talking about include all of those. You can talk about something called closed form numbers, which means, like, you can take roots of polynomials and take exp and log.
KK: Right.
SG: That doesn't change the setup. That doesn't give you anything more than what I'm talking about.
EL: Yeah. And just to back up a sec, algebraic numbers, basically, it's like roots of polynomials, and then doing, like, multiplication and division with them. That kind of thing. So, like, closed form, then you're expanding that a little bit, but still in a sort of countable way.
SG: Yes. Like, what kinds of numbers could you express precisely if you had a calculator with sort of infinite precision, right? You're going to start with an integer. You can take its square root, maybe you can take its sine, you know. You can think about those kinds of numbers. That's another notion, and you still end up with a countable list of numbers.
KK: Right. So this sounds like a logic problem.
SG: Yes, it does feel that way.
KK: Yeah.
SG: So, Kevin and Evelyn, I can already imagine what you're thinking. But let me say it for the benefit of the people for whom the word "countable" is maybe a new thing. It means that you can imagine there's a way to order these in a list so that it makes sense to talk about the next one. And if you march down that list, you'll eventually reach all of them. That's what it means. But the interesting thing is, if you think about the numbers on the number line, we know going back to Cantor in the 1800s that those are not countable. You use the so-called diagonalization argument, if you happen to have seen that.
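[Ed. note: a toy rendering of the diagonal argument SG mentions; the function names and the fake "list" are our own illustration, not something from the episode.]

```python
def cantor_diagonal(digit):
    """digit(i, j) is the j-th decimal digit of the i-th number in some
    purported list of all reals in [0, 1).  The returned function gives the
    digits of a real that differs from the i-th listed number at digit i,
    so it cannot appear anywhere in the list."""
    return lambda n: 5 if digit(n, n) != 5 else 6

def toy_digit(i, j):
    """A fake 'complete list': the i-th number's j-th digit is (i + j) % 10."""
    return (i + j) % 10

escaped = cantor_diagonal(toy_digit)
print([escaped(n) for n in range(10)])  # differs from row n at position n
```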
KK: Right.
EL: Yeah. Which is just a beautiful, beautiful thing. Just, I have to put a plug in for diagonalization.
KK: Oh, it's wonderful.
SG: I've been thinking about it a lot in preparation for this podcast. I agree.
KK: Sure.
SG: So what that means, and that's the statement: these irrational numbers, you can't name all of them, because there are uncountably many of them, but only countably many numbers you can name.
It sort of has a hideous consequence that I want to mention. And it's why this is my favorite theorem. Because it says, it's not just that you can't name all of them. It's just much worse than that. So the reason I love this theorem is not just that it answers a question from my childhood. But it tells you something kind of shocking about the universe. So when you--if you could somehow magically pick a specific point on the number line, which you can't, because you know, there's--
KK: Right.
SG: You have finite resolution when you pick points in the real world. But pretend you could. Then the statement is that the chance that the number you picked was a number you could name precisely is very low. In fact, it's essentially zero.
KK: Yeah.
SG: So the technical way to say this is that a countable subset of the real numbers has Lebesgue measure zero.
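[Ed. note: the standard covering argument behind that statement, sketched here in our own notation rather than quoted from the episode.]

```latex
% Cover the n-th point of a countable set by an interval of length epsilon / 2^n:
\[
  A=\{x_1,x_2,x_3,\dots\},\qquad
  A \subseteq \bigcup_{n\ge 1}\Bigl(x_n-\tfrac{\varepsilon}{2^{n+1}},\;x_n+\tfrac{\varepsilon}{2^{n+1}}\Bigr),
  \qquad
  \text{total length} \;\le\; \sum_{n\ge 1}\frac{\varepsilon}{2^{n}}=\varepsilon .
\]
% This works for every epsilon > 0, so the countable set A has measure zero.
```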
KK: Right.
SG: So I was feeling a little awkward about using this as my theorem for your podcast, because, you know, the proof is not much. If you know about countable and uncountable, I just told you the whole proof. And you might ask, "What else can I prove using this fact?" And the answer is, I don't know. But we've just learned something about irrational numbers that I think some of your listeners didn't know before. And I think it's a little shocking.
EL: Yeah, yeah. Well, it sounds like I was maybe more of a late bloomer on thinking about this than you, because I remember being in grad school, and just feeling really frustrated one day. I was like, you know, transcendental numbers, the non-algebraic numbers are, you know, 100% of the number line, Lebesgue measure one, and I know like, three of them, essentially. I know, like, e, π, and natural log two. And, you know, really, two of them are already kind of, in a relationship with each other. They're both related to e or the natural log idea. It's just like, okay, 2π. Oh, that's kind of a cheap transcendental number.
Like there's, there's really not that much difference. I mean, I guess then, in a sense, I only know, like, one irrational number, which is square root of 2, like, any other roots of things are non-transcendental, and then I know the rationals, but yeah, it's just like, there are all these numbers, and I know so few of them.
SG: Yeah.
KK: Right. And these other things, of course, when you start dealing with infinite series, and you know, you realize that, say, the Sierpinski carpet has area zero, right? But it's uncountable, and you're like, wait a minute, this can't be right. I mean, this is, I think, why Cantor was so ridiculed in his time, because it does just seem ridiculous. So you were sitting around in middle school just thinking about this, and your teacher led you down this path? Or was it much later that you figured this out?
SG: Well, I figured out the answer much later. But I worried about it a lot as a child. I used to worry about a lot of things like, your classic question is--if you really want to talk about things I worried about as a child--back in seventh grade, I was really troubled about .99999 with all the nines and whether or not that was one.
EL: Oh yeah.
SG: And I have a terrible story about my eighth grade education regarding that. But in the end, I discovered that they are actually equal.
KK: Well, if you make some assumptions, right? I mean, there are number systems, where they're not equal.
SG: Ah, yeah, I'd be happy--I'm not prepared to get into a detailed discussion of the hyperreals.
KK: Neither am I. But what's nice about that idea is that, of course, a lot depends on our assumptions. We set up rules, and then with the rules that we're used to, .999 repeating is equal to one. But you know, mathematicians like sandboxes, right? Okay, let's go into this sandbox and throw out this rule and see what happens. And then you get non-Euclidean geometry, right, or whatever.
SG: Right.
KK: Really beautiful stuff.
SG: I have an analogy for this statement about real numbers that I don't know if your listeners will find compelling or not, but I do, so I'm going to say it unless you stop me.
KK: Okay.
EL: Go for it.
SG: Exactly. So one of the things I find totally amazing about geology is that, you know, we can see rocks that are on the surface of the earth and inspect them, and we can drill down in mines, and we can look at some rocks down there. But fundamentally, most of the geology of the earth, we can't see directly. We've never seen the mantle, we're never going to see the core. And that's most of the Earth. Nonetheless, there's a lot of great science you can do indirectly, by analyzing things as an aggregate, by studying the way earthquake waves propagate and so on. But we're not able to look at things directly. And I think that has an analogy here with the number line, where the rocks we can see on the surface are the integers and rationals. You drill down, and you can find some gems or something, and there's your irrational numbers you can name. And then all the ones you'll never be able to name, no matter how hard you try, how much time there is, how many alternate universes filled with people there are, somehow that's like the core, because you can't ever actually get directly at them.
EL: Yeah. I like this analogy a lot, because I was just reading about Inge Lehmann who is the Danish seismologist (who I think of as an applied mathematician) who was one of the people who found these different seismic waves that showed that the inner core had the liquid part--or I guess the core had the liquid part and then the solid inner core. She determined that it couldn't all be uniform, basically by doing inverse problems where, like, "Oh, these waves would not have come from this." So that's very relevant to something I just read. Christiane Rousseau actually wrote a really cool article about Inge Lehmann.
SG: Yes, that's a great article.
EL: So yeah, people should look that up.
KK: I'll have to find this.
EL: Great analogy. Yeah.
KK: So, we know now that this one has been with you a long time, so that's another question we've already answered. So, okay, what does one pair with this unknowability?
SG: Ah, so I think I'm going to have to pair it with one of my favorite TV shows, which is Twin Peaks.
EL: Okay.
SG: So I watch the show, I really enjoy it. But there's a lot of stuff in there that just is impossible to understand.
And you can go read the stuff the people wrote about it on the side, and you can understand a little bit of it. But you know, most of it's clearly never meant to be understood. You're supposed to enjoy it as an aggregate.
KK: That's true. So you and I are the same age, roughly. We were in college when Twin Peaks was a thing. Did you watch it then?
SG: No, I just remember personal ads in the school paper saying, "Anyone who has a video recording of Twin Peaks last week, please tell me. I'll bring doughnuts."
EL: You grew up in a dark time.
SG: Before DVRs, yeah.
KK: That's right. Well, yeah. Before Facebook or anything like that. You had to put an ad in the paper for stuff like this, yeah.
EL: Yeah, I'm really, really understanding the angst of your generation now.
KK: You know what, I kind of preferred it. I kind of like not being reached. Cell phones are kind of a nuisance that way. Although I don't miss paying for phone calls. Remember that, staying up till 11 to not have to pay long distance?
SG: Yeah.
KK: Alright, so Twin Peaks. So you like pie.
SG: Yeah, clearly. And coffee.
KK: And coffee.
SG: And Snoqualmie.
KK: Very good.
SG: I don't know if you--
KK: Sure. I only sort of vaguely remember-- what I remember most about that show is just being frustrated by it, right? Sometimes you'd watch it and a lot would happen. It's like, "Wow, this is bizarre and weird, and David Lynch is a genius." And then there'd be other shows where nothing would happen.
SG: Yes.
KK: I mean, nothing! And, you know, also see Book II of Game of Thrones, for example, where nothing happens, right? Yeah. And David Lynch, of course, was sort of at his peak at that time.
SG: Right.
KK: All right. So Twin Peaks. That's a good pairing because you're right, you'll never figure that out. I think a lot of it was meant to be unknowable.
SG: Yes. Yeah. Have you seen season three of Twin Peaks? The one that was out recently?
KK: No, I don't have cable anymore.
SG: About halfway through that season, there's an episode that is intensely hard to watch because so little happens in it. And if you look at the list of viewership ratings for each episode, there's a steep drop-off in the series at that episode. So this is like the most unknowable part of the number line, if you follow the analogy.
KK: Okay. All right. That's interesting. So I assume that these knowable numbers are probably fairly evenly distributed. I guess the rationals are pretty evenly distributed. So yeah.
So our listeners might wonder if there's some sort of weird distribution to these things, like the ones that you can't name, do they live in certain parts? And the answer is no, they live everywhere.
SG: Yes. That's absolutely right.
EL: I wonder, though, if you can kind of--I'm thinking of continued fraction representations, where there is an explicit characterization of well-approximable versus badly-approximable numbers. I guess those are approximable by rationals, not by finite operations or closed form. So maybe that's a bad analogy.
KK: Mm hmm.
SG: Well, if you or your listeners are interested in thinking about this question some more, then you can Google closed-form number. There's a Wikipedia entry to get people started. And there are a couple of references in there to some really well-written articles on the subject, one by my friend Tim Chow that was in the American Mathematical Monthly, and another one by Borwein and Crandall that was in the Notices of the AMS and is free on the internet.
EL: Oh, great.
KK: Okay, great. We'll link to those.
EL: And actually, here's a question, I'm not sure, so I'll just ask: is this the same as computable, or is closed form a different thing from computable numbers?
SG: Yeah, that's a good question. So there's not a widely-agreed upon definition of the term closed form number. So that's already a question. And then I'm not sure what your definition of computable is.
EL: Me neither.
SG: Okay.
EL: No, I've just heard of the term computable. But yeah, I guess the nice thing is no matter how you define it, your theorem will still be true.
SG: That's right. Exactly.
EL: There are still only countably many.
KK: And now we've found something else unknowable: are these the same thing?
SG: Those are really hard questions in general. Yeah. That's the main question plumbed in the articles I referred to: if you define them in these different ways, how different are they?
EL: Oh, cool.
SG: If you take a particular number, which set does it sit in? Those kinds of questions. Yeah, those are usually really hard, much like you said: whether certain numbers are transcendental or not can be a hard question to answer.
EL: Yeah, yeah, even if you think, "Oh yeah, this certainly has to be transcendental," it takes a while to actually prove it.
SG: Yes.
KK: Or maybe you can't. I wonder if some of those statements are even actually undecidable, but again, we don't know. All right, we're going down weird rabbit holes here. Maybe David Lynch could just do a show.
SG: That would be great.
KK: Yeah, there would just be a lot of mathematicians, and nothing would happen.
SG: And maybe owls.
KK: And maybe owls. Well, this has been great fun. Thanks for joining us before you head off to work, Skip. Our listeners don't know this, but it's now nine in the morning where you are. So thanks for joining us, and I hope your traffic isn't so bad in La Jolla today.
SG: Every day's a great day here. Thank you so much for having me.
KK: Yeah. Thanks, Skip.
Evelyn Lamb: Hello and welcome to My Favorite Theorem, a math podcast where we ask mathematicians what their favorite theorem is. I’m one of your hosts, Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City, Utah. And this is your other host.
EL: I’m all right. It’s fall here, or hopefully getting to be fall soon.
KK: Never heard of it.
EL: Yeah. Florida doesn’t have that so much. But yeah, things are going well here. We had a major plumbing emergency earlier this month that is now solved.
KK: My big news is that I’m now the chair of the math department here at the university.
EL: Oh yes, that’s right.
KK: So my volume of email has increased substantially, but it's an exciting time. We're hiring more people, and I'm really looking forward to this new phase of my career. So good times.
EL: Great.
KK: But let’s talk about math.
EL: Yes, let’s talk about math. We’re very happy today to have Michèle Audin. Yeah, welcome, Michèle. Can you tell us a little bit about yourself?
Michèle Audin: Hello. I’m Michèle Audin. I used to be a mathematician. I’m retired now. But I was working on symplectic geometry, mainly, and I was interested also in the history of mathematics. More precisely, in the history of mathematicians.
EL: Yeah, and I came across you through, I was reading about Kovalevskaya, and I just loved your book about Kovalevskaya. It took me a little while to figure out what it was. It’s not a traditional biography. But I just loved it, and I was like, “I really want to talk to this person.” Yeah, I loved it.
MA: I wanted to write a book where there would be history and mathematics and literature also. Because she was a mathematician, but she was also a novelist. She wrote novels and things like that. I thought her mathematics were very beautiful. I love her mathematics very much. But she was a very complete kind of person, so I wanted to have a book like that.
KK: So now I need to read this.
EL: Yeah. The English title is Remembering Sofya Kovalevskaya. Is that right?
MA: Yeah.
KK: I’ll look this up.
EL: Yeah. So, what is your favorite theorem?
MA: My favorite theorem today is Stokes’ formula.
EL: Oh, great!
KK: Oh, Stokes’ theorem. Great.
EL: Can you tell our listeners a little bit about it?
MA: Okay, so, it’s a theorem, okay. Why I love this theorem: Usually when you are a mathematician, you are forced to face the question, what is it useful for? Usually I’ll try to explain that I’m doing very pure mathematics and maybe it will be useful someday, but I don’t know when and for what. And this theorem is quite the opposite in some sense. It just appeared at the beginning of the 19th century as a theorem on hydrodynamics and electrostatics, some things like that. It was very applied mathematics at the very beginning. The theorem became, after one century, became a very abstract thing, the basis of abstract mathematics, like algebraic topology and things like that. So this just inverts the movement of what we are thinking usually about applied and pure mathematics. So that’s the reason why I like this theorem. Also the fact that it has many different aspects. I mean, it’s a formula, but you have a lot of different ways to write it with integrals, so that’s nice. It’s like a character in a novel.
KK: Yeah, so the general version, of course, is that the integral of, what, d-omega over the manifold is the same as the integral of omega over the boundary. But that’s not how we teach it to students.
MA: Yeah, sure. That’s how it became at the very end of the story. But at the very beginning of the story, it was not like that. It was three integrals with a very complicated thing. It is equal to something with a different number of integrals. There are a lot of derivatives and integrals. It’s quite complicated. At the very end, it became something very abstract and very beautiful.
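[Ed. note: for reference, the modern statement Kevin quotes, together with the classical special cases discussed next; the notation here is ours, not Dr. Audin's.]

```latex
\[
  \int_{M} d\omega \;=\; \int_{\partial M} \omega
  \qquad \text{($M$ a compact oriented $n$-manifold with boundary, $\omega$ an $(n-1)$-form).}
\]
% Classical special cases:
\[
  \text{Green:}\quad \oint_{\partial D} P\,dx + Q\,dy
    = \iint_{D} \Bigl(\frac{\partial Q}{\partial x}-\frac{\partial P}{\partial y}\Bigr)\,dA,
  \qquad
  \text{Kelvin--Stokes:}\quad \oint_{\partial S} \mathbf{F}\cdot d\mathbf{r}
    = \iint_{S} (\nabla\times\mathbf{F})\cdot d\mathbf{S},
\]
\[
  \text{Divergence (Gauss--Ostrogradsky):}\quad
  \iint_{\partial V} \mathbf{F}\cdot d\mathbf{S}
    = \iiint_{V} \nabla\cdot\mathbf{F}\,dV .
\]
```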
KK: So I don’t know that I know my history. When we teach this to calculus students anymore, we show them Green’s theorem, and there are two versions of Green’s theorem that we show them, even though they’re the same. Then we show them something we call Stokes’ theorem, which is about surface integrals and then the integral around the boundary. And then there’s Gauss’s divergence theorem, which relates a triple integral to a surface integral. The fact that Gauss’s name is attached to that is probably false, right? Did Gauss do it first?
MA: Gauss had this theorem about the flux—do you say flux?
KK: Yeah.
MA: The flux of the electric—there are charges inside the surface, and you have the flux of the electric field. This was a theorem of Gauss at the very beginning. That was the first occurrence of the Stokes’ formula. Then there was this Ostrogradsky formula, which is related to water flowing from somewhere. So he just replaced the electric charges by water.
KK: Sort of the same difference, right? Electricity, water, whatever.
MA: Yes, it’s how you come to abstraction.
KK: That’s right.
MA: Then there was the Green theorem, then there is Stokes' formula, that Stokes never proved. There was this very beautiful history. And then in the 20th century, it became the basis for de Rham theory. That's very interesting, and moreover there were very interesting people working on that in the various countries in Europe. At the time mathematics were made in Europe, I'm sorry about that.
KK: Well, that’s how it was.
MA: And so there are many interesting mathematicians, many characters, different characters. So it's like a novel. The main character is the formula, and the others are the mathematicians.
EL: Yeah. And so who are some of your favorite mathematicians from that story? Anyone that stands out to you?
MA: Okay, there are two of them: Ostrogradsky and Green. Do you know who was Green?
EL: I don't know about him as a person really.
MA: Yeah, really? Do you know, Kevin? No.
KK: No, I don't.
MA: Okay. So nobody knows, by the way. So he was just the son of a baker in Nottingham. And this baker became very rich and decided to buy a mill and then to put his son to be the miller. The son was Green. Nobody knows where he learned anything. He spent one year in primary school in Nottingham, and that's it. And he was a member of some kind of, you know, there are books…it's not a library, but morally it's a library. Okay. And that's it. And then appears a book, which is called, let me remember how it is called. It's called An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism. And this appears in 1828.
EL: And this is just out of nowhere?
MA: Out of nowhere. And then the professors in Cambridge say, "Okay, it's impossible. We have to bring that guy here." So they take the miller from his mill and they put him in the University of Cambridge. So he was about, I don't know, 30 or 40. And of course, it was not very convenient for the son of a baker to be a student with the sons of the gentlemen of England.
KK: Sure.
MA: Okay. So he didn't stay there. He left, and then he died, and nobody knew about that. There was this book, and that's it.
KK: So he was 13 or 14 years old when he wrote this? [Ed. note: Kevin and Evelyn had misheard Dr. Audin. Green was about 35 when he wrote it. The joys of international video call reception!]
MA: Yeah. And then he died, and nobody knew except that—
KK: Wow.
MA: Wow. And then appears a guy called Thomson, Lord Kelvin later. This was a very young guy, and he decided to go to Paris to speak with French mathematicians like Cauchy and Liouville.
And then it was a very long trip, and he took with him a few books to read during the journey. And among these books was this Green book, and he was completely excited about that. And he arrived in Paris and decided to speak of this Green theorem and this work to everybody in Paris. There are letters and lots of documentation about that. And then this is how the Green formula appeared in the mathematics.
EL: Interesting! Yeah, I didn't know about that story at all. Thanks.
KK: It’s fascinating.
MA: Nobody knows. Yeah, that's very interesting.
KK: Isn’t what we know about Stokes’ theorem, wasn't it set as an exam problem at Cambridge?
MA: Yeah, exactly. So it began with a letter of Lord Kelvin to Stokes. They were very friendly together, about the same age, doing, say, mathematics and physics, but they were not at the same place in the world, so they were writing letters. And once Thomson, Kelvin, sent a letter to Stokes speaking of mathematics, and at the very end there was a postscript where he said: you know, this formula should be very interesting. And he writes something which is what we now know as the Stokes theorem.
And then the guy Stokes, he had to make a problem for an exam, and he gave this as an examination. You know, it was in Cambridge, they have to be very strong.
KK: Sure.
MA: And this is why it’s called the Stokes’ formula.
EL: Wow.
KK: Wow. Yeah, I sort of knew that story. I didn't know exactly how it came to be. I knew somewhere in the back of my mind that it had been set as an exam problem.
MA: It’s written in a book of Maxwell.
KK: Okay.
EL: And so the second person you mentioned, I forget the name,
MA: Ostrogradsky. Well, I don't know how to pronounce it in Russian, and even in English, but Ostrogradsky, something like that. So he was a student in mathematics in Ukraine, which was part of Russia at that time, by the way. And he was passing his exams, and among the examination topics there was religion. So he didn't go for that, so he was expelled from the university, and he decided to go to Paris. So it was in 1820, something like that. He went to Paris. He arrived there, had no exams, and he knew nobody, and he made connections with a lot of people, especially with Cauchy, who was not a very nice guy, but he was very nice to Ostrogradsky.
And then he came back to Russia, and he was the director of all the professors teaching mathematics in military schools in Russia. So it was quite important. And he wrote books about differential calculus—what we call differential calculus in France but you call calculus in the U.S. He wrote a book like that, and for instance, because we were speaking of Kovalevskaya: when she was a child, the sheets of the course of Ostrogradsky were on the walls of her bedroom, and she read that when she was a little girl. She was very good in calculus.
This is another story, I’m sorry.
KK: No, this is the best part.
MA: And so, next question.
KK: Okay, so now I’ve got to know: What does one pair with Stokes’ theorem?
MA: Ah, a novel, of course.
EL: Of course!
KK: A novel. Which one?
MA: Okay, I wrote one, so I’m doing my own advertisement.
EL: Yeah, I was hoping we could talk about this. So yeah, tell us more about this novel.
MA: Okay, this is called Stokes' Formula, a novel—La formule de Stokes, roman. I mean, the word "novel" is in the title. In this book I tell lots of stories about the mathematicians, but also about the formula itself, the theorem itself. How to say that? It's not written the way the historians of mathematics like, or want you to write. There are people speaking and dialogues and things like that. For instance, at the end of the book there is the first meeting of the Bourbaki mathematicians, the Bourbaki group. They are in a restaurant, and they are having small talk, like you have in a restaurant. There are six of them, and they order the food and they discuss mathematics. It looks like just small talk like that, but actually everything they say comes from the Bourbaki archives.
EL: Oh wow.
MA: Well, this is a way to write. And also this is a book. How to say that? I decided it would be very boring if the history of Stokes’ formula was told from a chronological point of view, so it doesn’t start at the beginning, and it does not end at the end of the story. All the chapters, the title is a date: first of January, second of January, and they are ordered according to the dates. So you have for instance, it starts with the first of January, and then you have first of February, and so on, until the end, which is in December, of course. But it’s not during the same year.
EL: Right.
MA: Well, the first of January is in 1862, and the fifth of January is in 1857, and so on. I was very, very fortunate, I was very happy, that the very end of the story is in December because the first Bourbaki meeting was in December, and I wanted to have the end there.
Okay, so there are different stories, and they are told on different dates, but not using the chronology. And also in the book I explain what the formula means. You are comparing things inside the volume and what happens on the surface of the volume. I tried to explain the mathematics.
Also, in every chapter there is a formula, a different formula. I think it's very important to show that formulas can be beautiful. And some are more beautiful than others. And the reader can just skip the formula, or look at it and just notice that it's beautiful, even if they don't understand it completely.
There were different constraints I used to write the book, and one of them was to have a formula, exactly one formula in every chapter.
EL: Yeah, and one of the reasons we wanted to talk to you—not just that I read your book about Kovalevskaya and kind of fell in love with it—but also because since leaving math academia, you've been doing a lot more literature, including being part of the Oulipo group, right, in France?
MA: Yes. You want me to explain what it is?
EL: Yeah, I don't really know what it is, so it'd be great if you could tell us a little more about that.
MA: Okay. It's a group—for mathematicians, I should say it's a set—of writers and a few mathematicians. It was founded in 1960 by Raymond Queneau and François Le Lionnais. The idea is to find constraints to write some literary texts. For instance, the most famous may be the novel by Georges Perec, La Disparition. It was translated into English with the title A Void; it's a rather long novel which doesn't use the letter e. In French, it is really very difficult.
EL: Yeah.
MA: In English also, but in French even more.
EL: Oh, wow.
MA: Because you cannot use the feminine, for instance.
EL: Oh, right. That is kind of a problem.
MA: Okay, so some of the constraints have a mathematical background. For instance, this is not the case for La Disparition, but this is the case for some other constraints, like, I don't know, using permutations or graph theory to construct a text.
KK: I actually know a little about this. I taught a class in mathematics and literature a few years ago, and I did talk about Oulipo. We did some of these—there are these generators on the internet where, one rule is, you pick a number, say five, and you look at every noun and replace it by the one that is five entries later than that in the dictionary, for example. And there are websites where you feed it text, and it's a bit imperfect because it doesn't always classify things as nouns properly, but it's an interesting exercise. Or there was another one with sonnets. So you would create sonnets. Sonnets have 14 lines, but you would do it sort of as an Exquisite Corpse, where you would write all these different lines for sonnets, and then you could mix and match them to get a really large number of sonnets, I forget now how many you get. So, yeah, things like that, right?
MA: Yeah, this is cent mille milliards, which is 10 to the 14.
KK: That’s right, yeah. So 10 different sonnets. But yeah, it’s really really interesting.
MA: The first example you gave, then, which is called in French "X plus sept," X plus seven: you start from a substantive, a noun, and you take the seventh in a dictionary following it.
KK: That’s right.
MA: It depends on the dictionary you use, of course.
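[Ed. note: a toy version of the "X plus seven" procedure described above, using a made-up mini-dictionary; a real run would need a full noun list and part-of-speech tagging, which is the imperfection Kevin mentions.]

```python
def n_plus_seven(text, nouns, offset=7):
    """Replace each word found in the sorted noun list by the noun `offset`
    entries later, wrapping around at the end of the list."""
    nouns = sorted(nouns)
    position = {word: i for i, word in enumerate(nouns)}
    replaced = [
        nouns[(position[word] + offset) % len(nouns)] if word in position else word
        for word in text.split()
    ]
    return " ".join(replaced)

# toy dictionary of ten nouns, already in alphabetical order
toy_nouns = ["apple", "bridge", "castle", "dragon", "engine",
             "forest", "garden", "harbor", "island", "jacket"]
print(n_plus_seven("the dragon crossed the bridge", toy_nouns))
# -> "the apple crossed the island"
```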
KK: Sure.
EL: Right.
MA: So that's what they did at the beginning, but now they're all different.
KK: Sure.
EL: Yeah, it's a really neat creative exercise to try to do that kind of constraint writing.
MA: That form of constraint, the calendar constraint I used in this book, is based on books by Michelle Grangaud, who is a poet from the Oulipo also, and she wrote Calendars, which were books of poetry. That's where the idea comes from.
EL: Yeah, and I assume this, your novel has been translated into English?
MA: Not yet.
EL: Oh, okay.
MA: Somebody told me she would do it, and she started, and I have no news now. I don't know if she was thinking of a publisher or not. If she can do something, I will be very grateful.
EL: Yeah, so it’s a good reason to brush up your French, then, to read this novel.
And where can people find—is there writing work of yours that people can find on a website or something that has it all together?
MA: Okay, there is a website of the Oulipo, first of all, oulipo.net or something like that. Very easy to find.
KK: We’ll find it.
MA: Also, I have a webpage myself, but what I write is usually on the Oulipo site. I have also a site, a history site. It’s about history but not about mathematics. It’s about the Paris Commune in 1871. It has nothing to do with mathematics, but this is one of the things I am working on.
EL: Okay. Yeah, we'll share that with people so they can find out more of this stuff.
MA: Thank you.
KK: Alright, this has been great fun. I learned a lot today. This is the best part of doing this podcast, actually, that Evelyn and I really learn all kinds of cool stuff and talk to interesting people. So we really appreciate you taking the time to talk to us today, and thanks for persevering through the technical difficulties.
MA: Yes. So we are finished? Okay. Goodbye.
EL: Bye.
Evelyn Lamb: Hello, and welcome to my favorite theorem, a math podcast where we ask mathematicians to tell us about their favorite theorems. I'm one of your hosts, Evelyn Lamb. I'm a freelance math and science writer in Salt Lake City, Utah. And this is your other host,
Kevin Knudson: Hi, I'm Kevin Knudson, professor of mathematics at the University of Florida. How you doing, Evelyn?
EL: I’m all right. I had a really weird dream last night where I couldn't read numbers. And I was like, trying to find the page numbers in this book. And I kept having to ask someone, "Oh, is this 370?" Because it looked like 311 to me. For some reason those are two of the numbers that like somehow, yeah, those numbers don't look the same. But yeah, it was so weird. I woke up, and I opened a book. And I was like, "Okay, good. I can read numbers. Life is ok." But yeah, it was a bit disorienting.
KK: That's weird. I’ve never had anything like that.
EL: So how about you?
KK: Well, I don't know. I was in California earlier this week, so I'm trying to readjust to Florida after what was really nice in California. It’s just gruesomely hot here and gross. But anyway, enough about that. Yeah.
EL: Yeah. So today, we're very happy to have Anil Venkatesh joining us. Hi, Anil, can you tell us a little bit about yourself?
Anil Venkatesh: Hi, Evelyn. Hi, Kevin. Yes, I am an applied mathematician. I teach at a school called Ferris State University in Michigan. And I am also a musician, I play soccer, and I’m the lead Content Developer for a commercial video game.
EL: Oh, wow. And how I ran across your name is through the music connection. Because you sometimes give talks at the Joint Math Meetings and things like that. And I think I remember seeing one of your talks there. But I didn't know about the game developing. What game is that?
AV: It's called Star Sonata. And I'll plug it maybe at the end of the episode. But it actually relates because the theorem I'm going to talk about, well, I ran across it in my development work, actually.
EL: Oh, cool. So let's get right to it.
AV: Okay. Well, I'm going to talk about the Shapley value, which is due to Lloyd Shapley. The paper came out in 1953, and there's a theorem in that paper. It did not come to be known as the Shapley theorem, because that's a different theorem. But it's an amazing theorem, and I think the reason the theorem didn't gain that much recognition is that the value that it kind of proved something about is what really took off.
So should I tell you a little bit about what the Shapley value is, and why it's cool?
KK: Yeah, let’s have it.
AV: Well, so actually, I picked up this book that came out in ’88, so quite a long time after the Shapley value was originally introduced. And this book is amazing. It's got like 15 chapters. And each chapter is a paper by some mathematician or economist talking about how they use the Shapley value. So it's just this thing that really caught on in a bunch of different disciplines. But it is an econ result, which is why I took a while to actually track it down once I came up with the math behind it.
EL: RIght.
AV: So putting this into context, in 1953 people were thinking a lot about diplomacy, they were thinking about the Cold War, or the ensuing Cold War. And so here's a great application of the Shapley value. So you have the United Nations. It's got, in the Security Council, five permanent members who can veto resolutions and then 10 rotating members. So for a resolution to pass, I don't know if this is exactly how it works now, but at least when the paper was written, you needed nine out of 15 to vote in favor.
KK: That’s still correct.
AV: And of those nine, you needed all five of the permanent members. So you couldn’t have any of those vetoes. So you might ask, How powerful is it to have the veto? Can we quantify the negotiating strength of possessing a veto in this committee?
KK: Okay.
EL: Okay.
AV: Okay. And yes, you can with the Shapley value, and it comes down to, well, do you want to hazard a guess? Like, how many times better is it to have a veto?
KK: Like a million.
AV: It's a lot better. You know, I didn't really have a frame of reference for guessing. It's about 100.
EL: Yeah, I don't know… Oh, how much?
AV: A hundred.
KK: I was only off by four orders of magnitude! That’s pretty good.
AV: Yeah.
EL: Yeah, not bad.
AV: So the way the Shapley value carries this out is you imagine out of 100 percent, let's apportion pieces of that to each of the 15 members according to how much power they have in the committee.
KK: Okay.
AV: And so if it was 20% to each of the permanent members, there wouldn't be any left for the remaining 10 voting members, right? In actuality, it's 19.6% to each of the five permanent members.
KK: Okay.
EL: Wow.
AV: And then that last sliver gets apportioned 10 ways to the rotating members. And that's how we come up with roughly 100 times more powerful with the veto.
EL: Okay.
AV: I will tell you how this value is computed, and I'll tell you about the theorem. But I'll give you one more example, which I thought was pretty neat and timely. So in the US, laws get made when the House of Representatives and the Senate both vote with the majority in favor of the bill, and then the President does not veto that bill.
KK: Yes.
AV: Or if the president vetoes, then we need a two-thirds majority in both houses to override that veto. So you could ask, well, if you think of it just as the House, the Senate and the President, how much of the negotiating power gets apportioned to each of those three bodies when it comes to finally creating a law? And if you apply the Shapley value, you get ratios of 5:5:2, which means the president alone has a one-sixth say in the creation of a law.
EL: Okay. Yeah, when you said that, I was thinking, I mean, if you do the Security Council one, the people with vetoes had almost one fifth each, so I was thinking maybe, being one third of the things that could veto, it would be about a third for the president, but that seemed too high.
AV: Yes. So if it was not possible to override the veto, then it would be a little bigger, right?
EL: Right, right. Okay.
AV: Yes. Now, if you actually break this down on an individual basis, you might think, okay, well, the House gets 5 out of 12 of the power, but there are so many people in the House, so each individual person in the House doesn't have as much power, right?
KK: Yes
AV: When it breaks down that way, going individual representative, individual senator, and President, the ratio goes like 2 to 9 to 350.
EL: Okay.
AV: So the President actually has way more power than any one individual lawmaker.
KK: Well, that makes sense, right?
AV: Yes, it does. And so, yeah. The great thing about the Shapley value is that it's not telling you things you don't know exactly, but it's quantifying things. So we know precisely what the balance of power is. Of course, you've got to ask, "Okay, so this sounds like a fun trick. But how is it done anyway?"
EL: Yeah.
AV: The principle behind the Shapley value is just, it's beautiful in its simplicity. The theory is this—and actually when I tell you this, it's going to remind you of a lemma that's already been on this podcast.
EL: Okay.
AV: More than one, actually. This is just a very standard kind of technique. So imagine all the possible orderings of voters. So suppose they come in one at a time and cast their vote. Under how many of these arrangements is a particular person casting the pivotal vote? The more arrangements in which Person A casts the pivotal vote, the more power Person A is allotted.
EL: Okay.
AV: That's it. So we actually just take an average over all possible orderings of votes and basically count up however many of those orderings involve a particular person casting the pivotal vote, and that's how we derive this breakdown of power.
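[Ed. note: a small brute-force sketch of the computation Anil just described, applied to the Security Council example from earlier in the episode; the code and labels are ours, and under the nine-of-fifteen-with-all-five-vetoes rule it reproduces the roughly 19.6% share per permanent member that he mentions.]

```python
from itertools import combinations
from math import factorial

PERMANENT = range(5)        # the five veto-holding members (our labels 0-4)
ROTATING = range(5, 15)     # the ten rotating members (labels 5-14)
PLAYERS = list(PERMANENT) + list(ROTATING)
N = len(PLAYERS)

def wins(coalition):
    """A coalition can pass a resolution iff it has at least 9 members
    and contains every permanent member (so no veto is cast)."""
    members = set(coalition)
    return len(members) >= 9 and all(p in members for p in PERMANENT)

def shapley(player):
    """Weighted count of coalitions where `player` is pivotal, i.e. turns a
    losing coalition into a winning one (the ordering average in closed form)."""
    others = [p for p in PLAYERS if p != player]
    value = 0.0
    for k in range(len(others) + 1):
        weight = factorial(k) * factorial(N - k - 1) / factorial(N)
        for coalition in combinations(others, k):
            if not wins(coalition) and wins(coalition + (player,)):
                value += weight
    return value

print(f"permanent member: {shapley(0):.4f}")   # about 0.196
print(f"rotating member:  {shapley(5):.4f}")   # about 0.002, roughly 100x less
```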
EL: So this is a lot like having everyone at a vertex and looking at symmetries of this object, which is kind of reminding me of Mohammed Omar's episode about Burnside's lemma. I assume that's the one that you were thinking about.
AV: Yes, that’s the one I was thinking about.
EL: But you said another one as well.
AV: The other one hasn’t actually been on this podcast yet. And I could have talked about this one instead. But the Cohen-Lenstra heuristics for the frequency of ideal class groups of imaginary quadratic extensions also involves an idea, now this one gets a little deeper. but essentially, if you dig into the Shapley value, you notice that the bigger the group is, the less power each person has in it. And yeah, so there are various other twists you can ask using the Shapley value. So in the Cohen-Lenstra heuristics, you essentially divide by the automorphisms of a group, you weight things inversely by the number of automorphisms they have. Anyway, that one also evoked because you take sort of an average across all the groups of the same size. So, I'm not claiming that there's some kind of categorical equivalence between the Cohen-Lenstra heuristics and the Shapley value, but this idea of averaging over an entire space comes up in a bunch of different branches of mathematics.
KK: Sure.
EL: Yeah. Very cool. So, we've got the Shapley value now, and what is the theorem?
AV: The theorem, and this is what makes it all really pop, the theorem is why the Shapley value is so ubiquitous. There is no other logical apportionment of 100 percent than the Shapley value's algorithm.
EL: Okay.
AV: There is no other sensible way to quantify the power of a person in the committee.
EL: Interesting.
KK: What’s the definition of sensible?
AV: I’ll give it to you, and and when you hear—this is how weak the the assumptions are that already gave you this theorem, and that's why it's amazing.
KK: Sure.
AV: Efficiency: you must apportion all one hundred percent.
KK: Okay.
AV: Of course. Symmetry: if you rename the people but you don't change their voting rules, the Shapley value is not affected by that kind of game.
KK: Sure.
AV: Null player: if a person has no voting power at all, they get zero percent.
KK: All right.
AV: Obviously. And finally, additivity. That one takes a little bit more thinking about, but it's nothing crazy. It's just saying, like, if there are two different votes happening, then your power in the total situation is the sum of your power in the one vote and your power on the other vote. If there's more than one game being played, basically, the Shapley value is additive over those games.
KK: That's the weirdest one, but yeah, okay.
AV: Yeah, I looked at it. I thought a little bit about what to say. And then honestly, if you dig into it, you realize it's just, like, not saying anything amazing. You have to think about this: the Shapley value, it's a function, right? So we're working in the space of functions, and weird things can happen there. So this is just asserting you don't have any really wild and woolly functions. We're not considering that.
EL: Okay.
AV: So you just have these assumptions. And then there's only one. And the way they prove it is by construction. They basically write down a basis of functions, and they write down a formula using that basis, and there can only be one because it's from a basis, and then they prove that formula has the properties desired. It’s a really short paper, it's like a 10 page paper with four references. It's amazing.
EL: You said this is the 1953 paper by Shapley?
AV: Yes, by Shapley.
EL: Yeah, was there another author too, or just?
AV: No, Shapley collaborated with many people on related projects, but the original paper was just by him.
EL: Yeah. So I assume people have maybe looked at Shapley values of individual voters, like in the US or in an individual state or local election. We're recording this in election season, a little bit before the midterm elections.
KK: Yeah, can’t end soon enough here.
EL: Yeah, I guess. Oh, I guess actually, that wouldn't be that interesting, because it would just be, I mean, within a state or something. But I guess, the Shapley value of someone in one state versus another state might be a fairly interesting question.
AV: Oh, yes. But even the Shapley value for one person in a certain district or another district, this gets into gerrymandering, for sure.
KK: Right.
AV: I don't know to what extent people have thought about the Shapley value applied in this way. I imagine they have, although I haven't personally seen it mentioned, or anything that looks like it in the gerrymandering math groups that have been doing the work.
KK: No, I mean, I've been working with them a little bit, too. I mean, not really. And yeah, of course, it sort of gets to things like, you know, the Senate is sort of fundamentally undemocratic.
EL: Right.
KK: I mean, the individual senators kind of have a lot of power. But you know, the voters in Wyoming have a lot more, you know, their vote counts more than a voter's in, say, Florida.
EL: Right? Or the voter in Utah versus the voter in Florida.
AV: I'm thinking about within a specific state, if you look at the different districts. I mean, I read a little bit about this. And I see that they're, they're trying to resolve kind of the tension between the ability to cast a pivotal vote and the ability to be grouped with people who are like-minded. I don't know, it seems like, I wonder whether there's some extent to which they're reinventing the wheel, and we already have a way to quantify the ability to cast a pivotal vote. There's only one way to do it.
EL: Interesting.
AV: I don't know. Yeah, I'm not super informed on that. But it feels like it would apply.
KK: Yeah. So what drew you to this? I mean, okay. So fun fact: Anil and I actually had the same PhD advisor, albeit a couple of decades apart, and neither of us works in this area, really. So what drew you to this?
AV: Well, that's why I mentioned my game development background. So this game, Star Sonata, is one of those massively multiplayer online role-playing games. It actually was created back in 2004, when World of Warcraft had just started. And basically the genre of game had just been created. So that's why the game started the way it did. But it's kind of just an indie game that stuck around and had its loyal followers since then.
And I also played the game myself, but several years ago, I just kind of got involved in the development side. I think initially, they wanted—Well, I was kind of upset as a player, I felt they’d put some stuff in the game that didn't work that well. So I said, “Listen, why don't you just bring me on as a volunteer, and I'll do quality assurance for you.” But after some time, I started finding a niche for myself in the development team, because I have these quantitative skills that no one else on the team really had that background in. So a little later, I also noticed that I actually had pretty decent managing skills. So here I am, I'm now basically managing the developers of the game.
And one of my colleagues there asked me an interesting question. And he was kind of wrestling with it in a spreadsheet, and he didn't know how to do it. So the question is this: suppose you're going to let the player have, like, six pieces of equipment, and each piece of equipment, let's say it increases their power in the game by some percentage. Power could be, like, you know, your ability to kill monsters or something.
EL: Yeah.
AV: So the thing is, each piece of equipment multiplicatively increases your power. So your overall power is given by some product, let's say (1+a)(1+b)(1+c), and so on, one letter for each piece of equipment. So you write down this product, and you have to use the distributive property to work out the final answer. And it looks like 1 plus the sum of those letters plus a bunch of cross-terms.
KK: So symmetric functions, right?
AV: Yes, exactly. So his question was, "Okay, now that we're carrying all six of these pieces of equipment, how much of that total power is due to each piece of equipment?"
EL: Okay.
AV: How much did each item contribute to the overall power of the player? The reason we want to know this is if we create a new piece of equipment the player can obtain, and we put that in, and then suddenly we discover that everyone in the game is just using that, that's not good game design. It's boring, right? We want there to be some variety. So we need a way to quantify ahead of time whether that will happen, whether a new thing in the game is going to just become the only thing anyone cares about, and they'll eschew all alternatives. So he asked me, basically, how can I quantify whether this will happen? And I thought about it. And as you can tell, what this is asking about is the Shapley value in a special case where all the actors contribute multiplicatively to the total. And I didn't know that at the time because I'd never learned about the Shapley value. I didn't really learn much econ.
KK: Sure.
AV: So I just derived it, as it turns out, independently, in this special case. And it works out to a very beautiful formula involving essentially the harmonic means of all those letters. So reciprocals of sums of reciprocals. The idea there—and I mean, I can give a real simple example—like, suppose you have two items. One of them increases your power by 20%, and one increases it by 30%. So your overall power is 1.2 times 1.3. So what does that get to? 1.56. So of that 56% increase, 20% goes to the one item, 30 goes to the other, but 6% is left over. And how should that be apportioned?
EL: Right.
AV: Well, if you think about it, you might think, “Well, okay, the 30 percent should get the lion's share.” And maybe so, maybe so. But then there's a competing idea: because that 30% was pretty big, the 20 percent’s effect is amplified, right? So it's not, there's not an immediately obvious way to split it. But you can kind of do it in a principled fashion. So once I wrote this down, you know, I gave it to my colleague, he implemented it, it improved our ability to make the game fun. But then I also started wondering, like, look, this is, this is nice and all, but someone must have thought of this before, you know? So I don't actually remember now, how I came across it, whether I just found it or somebody sent it to me. But one way or another, I found the Shapley value on Wikipedia. I read about it, and I immediately recognized it as the generalization of what I'd done. So, yeah.
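[Ed. note: a quick sketch of the special case Anil describes, with made-up item names and bonuses; it averages marginal contributions over all orderings (the definition of the Shapley value) rather than using the harmonic-mean formula he derived, but it reproduces the 20%/30% example above.]

```python
from itertools import permutations

def split_multiplicative_gain(bonuses):
    """bonuses: item name -> fractional bonus, e.g. 0.2 for a +20% item.
    Returns each item's Shapley share of the total multiplicative gain."""
    items = list(bonuses)
    orderings = list(permutations(items))
    share = {item: 0.0 for item in items}
    for order in orderings:
        power = 1.0
        for item in order:
            boosted = power * (1.0 + bonuses[item])
            share[item] += boosted - power     # marginal contribution here
            power = boosted
    return {item: share[item] / len(orderings) for item in items}

shares = split_multiplicative_gain({"helmet": 0.2, "sword": 0.3})
print({k: round(v, 2) for k, v in shares.items()})  # {'helmet': 0.23, 'sword': 0.33}
print(round(sum(shares.values()), 2))               # 0.56, since 1.2 * 1.3 = 1.56
```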
EL: Oh, yeah. Well, and this seems like the kind of thing that would come up in a lot of different settings, too. A friend of mine one time was talking about a problem where, you know, they had sold more units and also increased the price, or something. And, you know, how do you allocate the value of the increased unit sales versus the increased price or something, which might be a slightly different thing; the Shapley value might not apply completely there.
AV: No, it does.
EL: Okay.
AV: Yes, that’s called the Aumann-Shapley pricing rule.
EL: Okay, yeah.
AV: Yeah. So, questions of fair division and cost allocation are definitely applications of the Shapley value. So, yeah.
EL: Neat. Thanks.
KK: Very cool. The other fun part of this podcast is that we ask our guests to pair their theorem with something. What have you chosen to pair this with?
AV: Well, like many of your guests, I really struggled with this question.
KK: Good.
AV: And the first thing I thought of, which won't be my choice, was a pie, because you have to, you know, fairly divide the pie. I told this to one of my friends, and I explained what the Shapley value was, and she was like, "No, that's a terrible idea, because you want to divide the pie equally." But the Shapley value is this prescription for dividing unequally, but according to some other principle. So it won't be a pie. So I actually decided this morning, I'm going to pair it with a nice restaurant you go to with your friends, but then they don't let you split the bill.
KK: Ah.
EL: Okay. Yeah, so you have to figure out what numbers to write on the back of the receipt for them to charge to your credit cards. Or for the added challenge, you could decide, like, given the available cash in each person's wallet, can you do that?
AV: Oh, don't even get me started.
KK: This is the problem, right? Nobody has cash. So when you're trying to figure out how to split the bill…People think that mathematicians are really good at this kind of thing, and in my experience, when you go to a seminar dinner or whatever, nobody can figure out how to split the bill.
AV: If I'm out with a bunch of people, and we have to split a bill, let it not be mathematicians, that’s what I say. Let it be anyone else.
KK: Yeah, because some people want to be completely exact: each person ordered a certain thing and it cost so much and you pay that, then you divide the tip proportionally, all this stuff. Whereas I'm more, you know, especially the older I get, the less I care about five or ten dollars one way or the other.
AV: Yeah, well, I find it's good if I go out with a bunch of people who are kind of scared of math, because then they just let me do it. You know, I become the benevolent dictator of the situation.
KK: That’s happened to me too, yeah.
EL: So, I don't remember what city Ferris State is in.
AV: Well, it's in the town of Big Rapids, which is a little ways from Grand Rapids, which is a little bit more well-known.
EL: Slightly grander. So, yeah, you're the slightly lesser rapids?
AV: So, there are at least five rapids in Michigan, like five different places named something rapids.
KK: Sure.
EL: So do you have a Big Rapids restaurant in mind for this pairing?
AV: You know, they're all really nice about splitting the bills there. So I was thinking something maybe in New York City or Boston.
KK: College towns are pretty good about this. In fact, they'll let you hand them five cards, and they'll just deal with it.
AV: Yeah, totally.
KK: Yeah, yeah, very nice. So your rapids are big but Grand Rapids’ rapids are grander.
AV: They’re much grander. Don’t get me started about Elk Rapids. I don't know how to compare that to the other two.
KK: Elk Rapids?
EL: Yeah, Big, Elk, and Grand, not clear what order those go in. [I guess Iowa’s got the Cedar Rapids.]
AV: Yes. I don't remember the other two rapids, but I know I identified them at some point.
EL: Well, thank you so much for joining us.
AV: Thanks for inviting me. It was great.
EL: Yeah, I learned something new today for sure.
KK: Math and a civics lesson, right?
EL: Yes. Everybody go vote. Although this episode will already be out. [Ed. note: Evelyn said this backwards. Voting occurred before the episode, not vice versa.] But get ready to vote in the next election!
KK: Yeah, well, it's never ending, right? I mean, as soon as one election's over, they start talking about the next one. Thanks, Anil.
EL: All right, bye.
AV: Thank you.
[outro]
Evelyn Lamb: Welcome to My Favorite Theorem. I'm your host Evelyn Lamb. I'm a freelance math and science writer based in Salt Lake City. Today I am by myself because I'm on location. I am in Washington DC right now for the Science Writers conference. That's the conference for the National Association of Science Writers, and I'm really happy to be joined by Yen Duong, who is also a science writer with a math background. So yeah, can you tell us a little bit about yourself?
Yen Duong: Yeah, so I am in Charlotte, North Carolina, and I work part time for North Carolina Health News. And the rest of my time, I am a freelance math and science writer like you.
EL: Yeah.
YD: And I just finished the AAAS Mass Media Fellowship this summer, and before that I got my Ph.D. at UIC in geometric group theory.
EL: Yeah, and the AAAS fellowship is the one, the way I started doing science writing as well. A lot of people, when you come to conferences like these, you find out a lot of people who are more senior in the field have also gone through this. So it's really great. The application deadline, I believe is in January. So we'll try to air this at a time when people can look into that and apply for it. But yeah, it's a great program that brings grad students in science, you know, math and other sciences, into newsrooms to learn a little bit about how the news gets made and how to report on science for a broader audience. So it was a great experience for me. It sounds like it was a great experience for you.
YD: Yeah, it's fantastic. It's 10 weeks, I think this coming year, the stipend will be $6,000. So that's great. It is paid. And for me, at least it jump started the rest of my career as a math and science writer.
EL: Yeah, definitely. And it’s nice to hear that it's being paid a little more. I lived in New York City for less than that. And that was difficult. Okay, so do you want to tell us about your favorite theorem?
YD: I've been listening to this podcast for a while. And it's like, okay, I'll do a really fancy one to be really impressive. And people will think I'm fancy. But I decided not to do that. Because I'm not that fancy. And I think it's silly to be that pretentious. So I'm going with one of the first theorems I learned, like, as an undergrad, which was Ramsey theory: that the Ramsey number R(3,3) equals six.
EL: Okay, great. So, yeah, tell us what a Ramsey number is.
YD: Okay, so this is from graph theory. And the idea of saying, R(3,3)=6, I’ll just do the whole spiel.
EL: Yeah, yeah. And please use your hands a lot. It's really helpful for the podcast medium when you’re—Yeah, I know. Like Ramsey theory. I’m, like, moving my hands all around to show you what everything is.
YD: I will attempt to not grab pen and paper and start drawing things. Luckily, we don't have any available right now. Yeah. So the idea is that, let's say that you are trying to put together a committee of three people. And you either want all three people to pairwise know each other and have worked together before, or you want all three people to be relative strangers. What you don't want is one person in the middle and everyone talks to them. And then the other two people don't talk to each other. That's a bad committee. Yeah. So the question is, how many people do you need to look at to guarantee that you can find such a committee?
EL: Right, so how big is your pool going to be of people you're choosing?
YD: Exactly. So like, if I look at three people? Well, that's not great, because it's me, you and someone in the next room. And there you go. We don't have a good committee. And if I look at 100 people, like okay, I'm pretty sure I can find this with 100 people. So what Ramsey theory does is use graph theory to answer this question. And so like I said, the giveaway was that the number is 6, and something that I really love about this theorem is that you can teach it to literally—I think I taught it to 10-year-olds this summer.
EL: Nice.
YD: And it's just a really nice basic introduction to, in my opinion, the fun parts of math. These kids who are like, "Ugh, I have to memorize equations, and I hate doing this." And then I start drawing pictures and I explain the pigeonhole principle, and they're like, "Oh, I get it, like, I can do this." I'm like, "Yes, you can! Everyone can do math!"
EL: Yay. Yeah. So the, the proof for that is, is kind of like you, you take a hexagon, right? Or the vertices of a hexagon and try to build—what do you do to denote whether you have friends or strangers?
YD: So graph theory is when you have vertices, which are dots, and edges, which are lines in between dots, and you use it to describe data and information systems. So in this case, we can make each person a dot, so we'll put six dots on a piece of paper. I do not have paper. I am using my hands. So we'll have six dots on a piece of paper, and we'll draw a blue line for friends, and we can draw a red line for strangers. So now our question becomes, how many dots do I need to make either a red triangle or a blue triangle? So if you have six dots, let's look at one person, and that person will be me. And I look out at this crowd of five people. So for at least three of those people, I will have the same color line going to them. So they might all be strangers, so I'll have five red lines, or one might be a stranger and four friends—one red and four blue—but in that case, I have three, at least three, blue ones. So I can just assume that color is blue. So we'll just say, "Okay, I've got three blue lines going out." So now I look at those three friends of mine. And I look at the relationships that they have with each other. This is really hard without pen and paper.
EL: Yeah, but luckily, our listeners have all gotten out pens, two colors of pens, and they are drawing this at home. So it's fine.
YD: Excellent. Good job, listeners! So now you've got your three dots. And you've got three blue lines coming out of them to one common dot. So you've got four dots on your piece of paper. So if between any two of those three dots I draw a blue line, we've got our blue triangle, and we're done. We've got our committee.
EL: Yeah.
YD: Therefore, if I want to avoid a blue triangle, I'd better draw red lines. Yeah, I should draw red lines. Yeah. So now I've got three dots, and I've got red lines between all three of them. But that's a red triangle. And there's my committee. So that's it. That's the entire proof. You can do it in a podcast in a few minutes. You can teach it to 10-year-olds. You can teach it to 60-year-olds. And I love it because it's like the gateway drug of mathematics proofs.
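[Ed. note: For listeners who would rather check this with a computer than with two colored pens, here is a short Python sketch, not from the episode, that brute-forces both halves of R(3,3) = 6: there is a 2-coloring of the edges of K_5 with no one-color triangle, but every 2-coloring of the edges of K_6 contains one. The function names are just ours.]

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """Return True if this 2-coloring of the edges of K_n contains a
    monochromatic triangle. `coloring` maps each edge (i, j), i < j,
    to 0 (red, say) or 1 (blue, say)."""
    for a, b, c in combinations(range(n), 3):
        if coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]:
            return True
    return False

def every_coloring_has_triangle(n):
    """Brute-force check over all 2-colorings of the edges of K_n."""
    edges = list(combinations(range(n), 2))
    for bits in product([0, 1], repeat=len(edges)):
        coloring = dict(zip(edges, bits))
        if not has_mono_triangle(n, coloring):
            return False  # found a coloring with no one-color triangle
    return True

print(every_coloring_has_triangle(5))  # False, so R(3,3) > 5
print(every_coloring_has_triangle(6))  # True, so R(3,3) <= 6
```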
EL: Yeah, it’s really fun. And yeah, you can just sit down at home and do this. And—spoiler alert: to do this for four, to get a committee of four people, it's a little harder to sit down at home and do this, right? Do you—I should have looked up
YD: Oh, the Erdos quote, right? Is that what you're talking about?
EL: Well, well, you can do four. Yeah, there's an Erdos quote about, I think, getting to six. Or five.
YD: So the Erdos quote is, paraphrased: if aliens come to the earth, and they tell us that they're going to destroy us unless we calculate R(5,5), then we should get all of the greatest minds in the world together and try to calculate it and solve it. But if the aliens say that we should try to compute R(6,6), then we should just try to destroy the aliens first.
EL: Yeah, so I think R(4,4) is like something like 18. Like, it's doable. I mean, by a computer, I think, not by a person, unless you really like drawing very large graphs. But yeah, it's kind of amazing. The Ramsey numbers just grow so fast. And we've been saying R(3,3) or R(4,4), having the same number twice in those. There are also Ramsey numbers, right, where it’s not symmetric.
YD: Like R(2,3) or R(2,4). Okay, so, well, two is maybe not the greatest number for this. But yeah, you can do things where you say, oh, I'm going to have either a complete—so I'll either have a triangle of red, or I'll have four dots in blue, and they'll all be connected to each other with blue lines, a complete graph on four dots or however many dots.
EL: Yeah. So they don't have to be the same number. Although, you know, usually the same number is sort of a nicer one to look at. So how did you learn this theorem?
YD: Let's see. So I learned this through—I’ll just tag another great program—Budapest semesters in mathematics.
EL: Nice
YD: From a combinatorics professor. So BSM is when college students in the U.S. and Canada can go to Budapest for a semester and learn math from people there and hang out with all these other students. It's a nice study abroad program for math. So that's when I first learned it. But since then, I think I've taught it to just like a hundred people, hundreds of people. I tell it to people in coffee shops, I break it out at cocktail parties, it's just, like, my "math is fun, I promise!" little theorem. I think I've blogged about it.
EL: So watch out. If you're in a room with Yen, you will likely be told about this theorem.
YD: Yeah, that's my cocktail party theorem, that and Cantor’s diagonalization.
EL: Yeah, well, and cocktail parties are a place where people often, like, describe this theorem. Like, if you're having a party, and you want to make sure that any [ed. note: Evelyn stated this wrong; there shouldn't have been an "any"] three people are mutual acquaintances, or mutual strangers, although the committee one actually makes a lot more sense. Because like, who thinks through a cocktail party that way? It's just a little contrived, like, "Oh, I must make sure the graph theory of my cocktail party is correct." Like, I know a lot of mathematicians, and I go to a lot of their parties, but even I have never been to a party where someone did that. So on this podcast, we also like to ask you to pair your theorem with something. And what have you chosen for R(3,3)?
YD: I thought really hard about it, by the way.
EL: Yes. This is a hard part.
YD: Yeah. So I decided on broccoli with cheese sauce.
EL: Okay. Tell us why.
YD: Because it is the gateway vegetable, just like this theorem is the gateway theorem.
EL: Okay.
YD: Yeah. Like, my kids sometimes eat broccoli with cheese sauce. And it's sort of like trying to introduce them to the wonderful world of Brussels sprouts and carrots and delicious things. I feel like the cheese sauce is sort of this veneer of applicability that I threw on with the committee thing.
EL: Oh, very nice. Yeah.
YD: Even with the situation of the committee, like no one has ever tried to make a committee of three people who’ve all worked together or three people who didn’t. But, you know, it makes it more palatable than just plain broccoli.
EL: Yeah, okay. Well, and honestly, I could kind of see that, right. Because, like, it can really be that third-wheel feeling when you're hanging out with two people who know each other better than you know either of them or something. Yeah. So actually, I feel, yeah, if you were making a committee for something, I could see why you might want to do this. I feel like a lot of people are not so thoughtful about making their committees that they would actually be like, "Will the social dynamics of this committee be conducive to…?"
YD: This is why my husband and I don't host cocktail parties, because my way of doing it is like, let's just invite everyone we know. And he's like, no, but what if someone feels left out? And then he gets stuck in the graph theory of our cocktail party and then it doesn't happen.
EL: And he's not even a mathematician, right?
YD: Yeah.
EL: Should have been, turns out.
YD: Yes, that's true. Stupid computers.
EL: Yeah. So when you make broccoli with cheese sauce, how do you make it? Are you a broccoli steamer? Do you roast it?
YD: We're definitely, if it's going to have cheese sauce on it, you've got to steam it. But generally, we're more roasters because I prefer it roasted with garlic and olive oil.
EL: Okay.
YD: So delicious. Broccoli with cheese sauce is really a last resort. It's like, man, the kids have not eaten anything green in like a week
EL: They need a vitamin.
YD: Let’s give them some broccoli.
EL: So one of our favorite recipes is roasted broccoli with this raisin vinaigrette thing. You put vinegar and raisins, and maybe some garlic, a couple other things in a blender.
YD: Wait, so you blend the raisins?
EL: Yeah, you make a gloppy sauce out of the raisins and everything. And I don't think you plump them first or anything. I mean, usually I kind of get in a hurry, and I’ll put them all in, the ingredients, and then go do something else, and then come back. So maybe they plump a little from the vinegar. But yeah, it makes like a pasty kind of thing. It kind of looks like olive tapenade. And I have actually accidentally mistaken the leftover sauce in the fridge for olive tapenade and have been a bit disappointed. You know, if you're expecting olives, and you’re eating raisins instead, you’re just not as happy. But yeah, it's a really good recipe. If you want to expand your broccoli horizons, maybe not as kid friendly.
YD: Actually, my kids do love raisins. So maybe if I put raisins on top of broccoli, they would like it more.
EL: Yeah, I think there's some cumin in it too, something? And we're talking about recipes, because both of us like to cook a lot. And in fact Yen's blog is called Baking and Math. And it's not like baking with math. Like, there's baking, and there's math.
YD: Yeah, it’s a disjoint union. It doesn’t make that much sense, but I'm still a big fan of it. And it's actually how we met.
EL: Yes.
YD: Yeah. Because you found me on the internet.
EL: Yeah, I found you on the internet. And it was when I was writing for the AMS Blog on Math Blogs. And I was like, this is a cool blog. And yeah, then we became internet friends. And then I realized a couple of years later like, I feel like I know this person, but we've never actually met. We met at Cornell, at the Cornell topology festival, and I was like, "Wow, you're tall!" I just realized I always think people are either shorter than I think or taller than I think unless they're exactly my height because I think my
YD: You expect everyone to be your height?
EL: Yeah, my default, the blank slate version is like, “Oh, this person is the same height as I am.” So yeah, I was like oh, you're taller than I am. And I expected you to be exactly my height because I have no imagination
YD: I’m trying to think if I was surprised by, maybe, no, I don't think you had blue hair, maybe you did? No.
EL: No, I probably had blond hair at that point, yeah.
YD: I remember we did acro yoga when we first met. That's a good thing to do when you first meet someone.
EL: Yeah.
YD: It was very scary. It was a bit of a leap of faith, but so is meeting a stranger on the internet.
EL: Yeah. But luckily we’re both great people.
YD: Yeah. I also signed up for that conference because you tweeted that you were going to go, and I thought, "Oh, I might as well sign up and then I can meet you."
EL: I should have asked for a commission from the festival, although they probably paid for your travel, so it'd be like a reverse commission. So people can find your writing at your blog Baking and Math. They can find you on Twitter, you're yenergy. And where else can they find your science and health writing?
YD: So I post a lot of my clips on my website, my professional website, so that's yenduong.com, and then I also write for North Carolina Health News if you're interested in exactly what it sounds like, North Carolina health news.
EL: Yeah, I'm sure a lot of people are. I read them, and I'm not in North Carolina, but I have a body, so I am interested in health news.
YD: Yeah.
EL: So thanks a lot for joining me.
YD: Thanks for having me. It was super fun. Fun fact for podcast listeners: Evelyn and I did not know where to look during this conversation. We couldn’t tell, should we look at each other or at the recording device?
EL: Yeah, so we did some of both. All right. Bye.
YD: Bye.
Evelyn Lamb: Hello and welcome to My Favorite Theorem. I’m your host Evelyn Lamb, and I’m a freelance math and science writer in Salt Lake City, Utah. And this is your other host.
Kevin Knudson: Hi. I’m Kevin Knudson, professor of mathematics at the University of Florida. How are you doing, Evelyn?
EL: I’m all right. I had a lovely walk today. And there are, there’s a family of quail that is living in our bushes outside and they were parading around today, and I think they're going to have babies soon. And that's very wonderful.
KK: Speaking of babies, today is my son's birthday.
EL: Who’s not a baby anymore.
KK: He's 19. Yeah, so still not the fun birthday, right? That's another two years out.
EL: Yes, in this country.
KK: In this country, yes. But our guest, however, doesn't understand this, right?
EL: Yes. Today we are very happy to have Katie Steckles from Manchester, England, United Kingdom. So hi, Katie. Can you tell us a little about yourself?
Katie Steckles: Hi. Well yeah I'm a mathematician, I guess. So I did a PhD in maths and I finished about seven years ago. And now my job is to work in public engagement. So I do events and do talks about maths and do workshops and talk about maths on YouTube and on the TV and on the radio and basically anywhere.
KK: That sounds awesome.
EL: Yeah, you’re all over the place.
KK: Yeah, that sounds like great fun, like no grading papers, right?
KS: A minimal amount of, yeah, I don’t think I’ve had to grade anything, no.
EL: Yeah, and you have some great YouTube videos. We'll probably talk more about some of them later. Yeah. And I stayed at your apartment a few years ago, or your flat, in Manchester. Quite lovely. And yeah, it's great to have you on here and to talk with you again. So what is your favorite theorem?
KS: Okay, my favorite theorem is what's called the fold and cut theorem, which is a really, really nice piece of maths which, like the best bits of maths, is named exactly what it is. So it's about folding bits of paper and cutting them. So I first encountered this a couple years ago when I was trying to cut out a square. And I realize that's not a very difficult task, but I had a square drawn on a piece of paper and I needed to cut out just the square, and I also needed the outside bit of paper to still be intact as well. So I realized I wasn't going to be able to just cut in from the edge. So I realized that if I folded up the bit of paper I could cut the square out without kind of cutting in from the side, and then I realized that if I folded it enough I could do that in one cut, one straight line would cut out the whole square. And I thought, "That's kind of cool. I like that, that's a nice little bit of maths." And I showed this to another friend who's also a mathematician, and he was basically like, "Isn't there a theorem about this?" I thought, "Ooh, maybe there is," and I looked and the fold and cut theorem basically says that for any figure with straight line edges, you can always fold a piece of paper with that figure drawn on it so that you can cut out the whole thing with one cut, even if it's got more than one bit to it or a hole in it or anything like that. It's always possible with one cut, in theory.
EL: Yeah. So you discovered a special case of this theorem before even knowing this was a thing to mathematically investigate.
KS: Yeah, well, I was cutting out a square for math reasons, because that's everything I do. But I was actually trying to make a flexagon at the time, which, I'm sure you've all been there, but it was just because I needed this square hole. And I thought it was such a satisfying thing to see that it was possible in one cut. And my maths brain just suddenly went, "How can I extend this? Can I generalize this to other shapes?"
KK: Sure.
KS: And it was just a nice kind of extension of that.
EL: Yeah. So I have a question for you. Did you, was your approach to go for the, like diagonal folds, or the folds that are parallel to the sides?
KS: Yeah, this is the thing. There are actually kind of two ways to do a square. So you can do, like, a vertical and a horizontal fold, and then you get something that needs two cuts, and then you can make one diagonal fold and just end up with the thing that you can do in one cut, but you can actually do it in two folds if you do two diagonal folds, but then it's a longer cut. I don't know what the payoff is there. It depends on how much time you want to spend cutting, I don't know.
EL: Okay.
KK: I was thinking as you're doing this, I've never—I know about this theorem, but I've never actually done it in practice, never really tried, but as soon as you said the square, I started thinking, "Okay, what would I do here?" You know, and I immediately thought to sort of fold along the diagonals. But so in general, though, so you have some, you know, 75-sided figure, is there an algorithm for this?
KS: It's pretty horrible, depending on how horrible the thing is. Like simple things are nice, symmetrical things are really nice, because you just fold the whole thing in half and then, you know, just do the half of it. And so there are algorithms. So the proof is done by Erik Demaine and Martin Demaine. And they've essentially got, I think, at least two different algorithms for generating the full fold pattern given a particular shape. So I think one of them is based around what they call the straight skeleton, which is if you can imagine, you can shrink the shape in a very sort of linear way, so you shrink all of the edges down but keep them parallel to where they originally were, you'll eventually get to kind of a skeleton shape in the middle of the shape, and that's sort of the basis of constructing all the fold lines. And it sort of seems quite intuitive because if you think about, for example, the square, all your folds are going to need to either be bisecting an angle or perpendicular to a straight edge. Because if it bisects the angle, it puts one side of the shape on top of the other one. And if you go perpendicular to the edge, it's going to put the edge straight on top of the edge. And I always kind of think about it in terms of putting lines on top of where the lines are, because that's essentially what you're doing. If you've got a thin enough bit of paper and a thick enough line, you can actually physically see it happening. So it's beautiful. And then the other method they have involves disks in each corner of the shape, I think, and you expand the disks until they're as big as they can be and touch the other disks. And that then gives you a structure to generate a fold pattern. But they have got algorithms. I haven't yet managed to find a simple enough implementation that you can just upload the picture to a website and it will tell you the whole pattern, which is a shame because I've come across some really difficult shapes that I would really like to be able to fold but haven't quite been able to do it by hand. I'm just going, "Ah, I could just put some maths on this and throw it in the computer program!" But I actually asked Erik Demaine because I was in email contact with him about this. And then the thing that happened was, there's a TV show in the UK called Blue Peter. Their logo is like a giant boat that's called the Blue Peter. It's a big ship with about 20 sails on it. And they said we could talk about this nice piece of maths, and you could even maybe try and cut out our logo with one cut. And I said to myself, "Goodness me!" Because it's all curves as well, so I'd have to approximate it all by straight lines and then work out how to cut this whole thing, and I emailed Erik Demaine and I sent him the picture and asked him, "Like, do you have a program that you can use to just, you know, take a figure, even if I send you the shape, the edges or whatever?" And in his reply, he was like, "Wow, well, that looks, no."
I just love the fact that they asked me to do something that not even the mathematician that proved that it's possible for any shape was prepared to admit would be easy. And so yeah, I'm not sure if there is kind of a, I mean, I would love it. I’m not enough of a coder to be able to implement that kind of thing myself. I would love it if there was a way to, you know, put in a shape or word or picture and come up with a fold pattern. Yeah, no, I don't know if anyone's done that yet.
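[Ed. note: The "every fold either bisects an angle or is perpendicular to an edge" observation is easy to play with in code. The sketch below is ours, not the Demaines' algorithm: it just computes the interior angle-bisector direction at each corner of a convex polygon, which is where the first straight-skeleton creases run, and for a square those directions are exactly the diagonals.]

```python
import math

def angle_bisector_directions(vertices):
    """For each vertex of a convex polygon (listed in order, no straight
    angles), return a unit vector along the interior angle bisector there.
    These are the lines the first straight-skeleton creases follow."""
    n = len(vertices)
    directions = []
    for i in range(n):
        px, py = vertices[i - 1]          # previous vertex
        vx, vy = vertices[i]              # this vertex
        qx, qy = vertices[(i + 1) % n]    # next vertex
        # Unit vectors from this vertex toward each neighbour.
        a = ((px - vx) / math.hypot(px - vx, py - vy),
             (py - vy) / math.hypot(px - vx, py - vy))
        b = ((qx - vx) / math.hypot(qx - vx, qy - vy),
             (qy - vy) / math.hypot(qx - vx, qy - vy))
        sx, sy = a[0] + b[0], a[1] + b[1]  # their sum bisects the angle
        length = math.hypot(sx, sy)
        directions.append((sx / length, sy / length))
    return directions

# For the unit square, every bisector points along a diagonal toward the
# centre, which is why folding on the two diagonals stacks all four edges
# and one straight cut removes the whole square.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
for v, d in zip(square, angle_bisector_directions(square)):
    print(v, "->", (round(d[0], 3), round(d[1], 3)))
```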
KK: Well, this is how mathematicians are, right? We just proved that a solution exists, you know, and then we walk away.
EL: And so I seem to remember you've done a video about this theorem. And one of the things you did in it was make a whole alphabet, making all of those out of one-cut shapes.
KS: Yeah, well, this was, I guess this is my kind of Everest in terms of this theorem. This is one of the reasons why I love it so much, because I put so much time into this as a thing. So essentially in the paper that Demaine and Demaine have written about this, they've got a little intro bit where they talk about applications of this theorem and times when it's been used. So I think it was maybe Harry Houdini who used to do a five-pointed star with one cut as part of his actual magic show. It was really impressive. And people watch me do it. And they go, "Wow, how do you do that?" Such a lovely little demo. They also mentioned in there that they heard of someone who could cut out any letter of the alphabet, and I saw that and thought, "Wow, that would be a really nice thing to be able to do!" You know, that would impress people because it's kind of like if you can do any shape, then the proof of that should be whatever shape you tell me, I can do. And of course, a mathematician would know that 26 things is not infinity things, but it's still quite a lot of things. It's an impressive demo. So I thought I would try and work that out. And I literally had to sit down and kind of draw out the shapes and kind of work out where all the bits went and how to fold them. And some are easy, some are nice ones to start off with, like I and C and L. As long as you've got a square sort of version of it, they're pretty easy to imagine what you'd do. And then they get more difficult. So S is horrible, because there's no reflection symmetry at all. It's just rotation symmetry and you can't make any use of that at all. R is quite difficult, but not if you know how to do P, and P is quite difficult, but not if you know how to do F. And so it all kind of builds gradually. And I worked out all of these patterns and in fact, it was one of the reasons I was in communication with Erik Demaine. Because he'd seen the video and he said, "As well as being mathematicians, we collect fonts, like we just love different fonts, typefaces, and we wondered if you could send us your fold patterns for your letters so that we can make a font out of them."
EL: Oh wow.
KS: And I thought that was really nice, so they've got a list on their website of different fonts, and they’ve now got a fold-and-cut font which I’m credited for as well.
KK: Oh nice.
KS: So yeah, the video I did with Brady was for his channel Numberphile, which is as I understand it a hugely popular maths channel. I've done about five or six videos on there, and I've genuinely been recognized in the street.
EL: Oh wow. That’s amazing.
KS: I walked into a shop and the guy was like, "Are you Katie Steckles?" I said, "Yes?" Like, the customer service has gone way up in this place. And he said, "No, I've just been watching your video on YouTube." It's like, Oh, okay. So that was nice. So Brady asked me to come and do a few videos, and that was one of the things I wanted us to talk about. I said, "What do you want me to do? I mean, do you want me to spell out Numberphile or your name or whatever?" Brady, who's Australian, said, "No, do the whole alphabet." His exact words were, "If you're going to be a bear, be a grizzly." A very Australian thing to say. He was basically saying let's do the whole alphabet, it will be great. I think at that point it was early enough I wasn't 100 percent sure I would get them all right, but his kind of thing that he has about his videos is that they always write maths down on brown paper, so he had this big pile of brown paper there, and he cut it all into pieces for me, one for each letter. And it was such a wonderful kind of way to nod to that tradition of using brown paper. But I just sat there folding them all, and he filmed the whole thing, and he put it in as a time lapse, and then I cut each one, one cut on each bit of paper, and opened them all up, and they all worked, so it was good. But it was this very long day crouched over a little table cutting out all of these letters. But people genuinely come and ask me about it because of that video, so that's quite nice.
EL: Yeah, well I think after I watched that video, I tried to do—I didn't. H was my kryptonite. I was trying to fold that, and I just at some point gave up. Like I kept having these long spindles coming out of the middle bar that I couldn't seem to get rid of.
KS: I think somewhere I have a photograph of all of my early attempts at the S. It’s just ridiculous. Like it's just a Frankenstein's monster parade of villains, just horrific shapes that don't even look like an S, and like how did I get this?
But it kind of gave me a learning process, and I think it was maybe just a few weeks of solidly playing around with things. I think I had one night in a hotel room while I was away working, so there was no one else around. I just spent the whole evening folding bits of paper. I don't know what the maid who cleaned the room the next day thought. The bin was full of bits of cut up paper. I've got like a big stack of scrap paper at home that's like old printouts and things I don't need that I use for practicing the alphabet because I go through a lot of paper when I'm practicing.
KK: This is a really fun theorem. So you know, another thing we like to do on this podcast is ask our guests to pair their theorem with something. So what have you chosen to pair the fold-and-cut theorem with?
KS: Wow. So I know that you sometimes often pair things with foodstuffs, so I'm going to suggest that I would pair this with my husband's chili and cheddar waffles.
EL: Okay.
KS: And I'll tell you why, so my reasoning is that I kind of feel like this is a really nice example, as a theorem, about kind of the way that maths works and the way the theorems work. So my husband's chili is a recipe that he's been working on for years. He comes from a family where they do a lot of cooking, and it was natural for him when he moved out to just have his own kind of recipes. His chili recipe is so good that we've taken his chili to parties and people have asked for the recipe. And I'm just like, there isn't one. It's not written down anywhere. It's just in his head. He has this recipe. And he's obviously worked really hard on it and achieved this brilliant thing. And kind of the ability to do the alphabet, the ability to kind of make things using this theorem for me is my equivalent of that. It's my special skill I can show off to people with. Because, you know, I've put in that time and I've solved the problem. And one of my favorite things about maths is that it gives you that problem solving kind of brain, in that you will just keep working at something, you keep practicing until you get there. And then the reason why I've paired it with cheddar waffles is (a) because that is a delicious combo.
EL: That sounds amazing.
KK: Yeah.
KS: Yeah. As soon as we got a waffle maker, our first go at it was "What can we put with this chili that will make it even better?" And I just found the recipe for cheddar waffles on the internet, because we don't have that, you know, we don't do that many waffles. We don't really know how to make them. And the fact that you can go online and just find a recipe for something is a really nice kind of aspect of modern life.
This is one of the things about maths I appreciate is that once you prove the theory that kind of goes into a toolbox, and other people can then you know, look at that theorem and use it in whenever they're doing, and you kind of building your maths out of bits of things that other people have proved, and bits of things that you're proving, and it's sort of a nice analogy for that, I guess. So those are those are the two things about it. Now that we've got the fold-and-cut theorem, nobody needs to prove it again, and anyone can use it.
EL: Yeah. And I guess if it were a perfect analogy, in some ways, maybe the chili recipe is sort of like these algorithms for making them, they’re really—well maybe that’s not good because the algorithms seem really complicated and difficult. Here, it's more that the recipe is hidden in your husband's brain.
KK: Well, a lot of algorithms feel that way.
KS: It really is quite complex. So you get some more things out of the cupboards that I've never seen before and they all go back in again afterwards. There’s a lot to it that people don’t realize.
KK: It's a black box. My chili recipe is a black box, too. I can't tell you what's in it. I mean, it's probably not as good as your husband's, though.
KS: It's got roasted vegetables in it. Yeah, that's one of the main secrets if anyone's trying to recreate it. But then just a whole lot of other spices that only he can tell me.
EL: My husband doesn't like soups with tomatoes in them very much. I mean, sometimes he does. But I don't do chili very much, so yeah, I don't have a good chili recipe. We have a friend who's allergic to onions, and that's a nice exercise in: can you cook or modify your recipe and still have it taste like what it's supposed to be? Because without onions, yeah, there are a lot of things that don't work, and she must have a nightmare with it. Because, like, a lot of packaged foods, they've got it.
KK: Sure.
KS: They’ve got onion powder or stuff.
EL: Every restaurant.
KS: We made chili without, and it kind of works. It kind of works without onions. It was great. I think there was a bit more aubergine that went in and some new spices, just to give it a bit more oniony flavor, but it still works.
EL: Oh, nice. Yeah, cooking without onions is tough. Does it extend to garlic—does it generalize to other things in the allium family?
KS: Yeah, it's all alliums, so she can't really have garlic either. She can get away with a little bit of garlic, but not any reasonable amount. Yeah, it must be completely horrible. Actually it kind of reminds me of Eugenia Cheng, her first book was about maths and baking. But one of the really nice points that she makes about the analogy between recipes and maths, which we have apparently stumbled into, is that, you know, understanding something in a maths sense means that you can take bits of it out and replace it with other things. You've got a particular problem and you go, "Okay, well, do we need to make this assumption, do we need this particular constraint? What happens if we relax this and then put something else in?" And that's how you explore kind of where you go with things. And if you relax a constraint and then find the solution, that maybe tells us something about the solution to the constrained problem, and things like that. So, you know, tweaking a recipe helps you to understand the recipe a bit more. And as long as you know roughly what goes in there and you've got something that is, you know, recognizably a chili, then, you know, it doesn't matter what you've changed, I guess.
KK: Yeah, so we also give our guests a chance to plug anything you're working on. You want to plug videos, websites, anything?
KS: Oh, I'm always working on a million different things. I guess probably the nicest thing for people to have a look at would be the Aperiodical, which is a website where I blog with two of my colleagues, so we write—it's kind of a maths blog but aimed at the people who are already interested in maths, so it's one of the few things I do that is not an outreach project. It's essentially aimed at people who are already interested and want to find out what's going on, so we sometimes write, like, opinion pieces about things or, like, "Here's a nice bit of maths I found," and then sometimes we just write news. And there's a surprising amount of maths news, it turns out. It's not just "They've discovered a new Mersenne prime again." There are various other maths news stories that come up as well, so we write those up, and bits of competitions and puzzles and things as well, and it's at aperiodical.com. And we get submissions. So if anyone else wants to write an article and have it go out on a blog that's seen by, you know, a couple of thousand people a day or whatever, they're welcome to send us stuff, and we'll have a look at it.
EL: Yeah, it's a lovely blog, and you also organize and host the math blog carnival that is, like, every month a round-up of math blog posts and stuff like that.
KS: We sort of inherited that from whoever was running it before, the Carnival of Mathematics. Every month someone who has a maths blog takes it in turn to write a post, which is essentially just a bunch of blog posts that went out this month. And we have the submissions form and all the kind of machinery behind it is now hosted at Aperiodical and has been for a few years, so if you have a maths blog elsewhere, and you want to get an opportunity to put a post on your site that will be seen by a bunch of people because there's a bunch of people who just read it every month, then get in touch because we're always looking for hosts for future months. And essentially we just forward to your email address all the submissions that people put in during the month, and you can then write it up in kind of the first week of the next month.
EL: Yeah. And I always see something cool on there that I had missed during the month. So it's a nice resource.
KS: So one of the other non-outreach, I guess, maths things that I'm involved with is a thing called Maths Jam. Or in the U.S. the equivalent would be Math Jam. We do have both websites, basically. So I coordinate all the Math Jams in the world. So it's essentially a pub night for people who want to go and do maths in a pub with people. It's aimed at adults because a lot of kids already get a chance to go to math club at school and do maths puzzles and things in their classrooms, but adults who have finished school, finished university, don't often get that chance. So we basically go to the pub once a month, or to a bar or restaurant, somewhere that will allow us to sit around and drink and do maths. And there are now, I think, getting on for a hundred Maths Jams in the world. So we've got about 30 or 40 in the UK. And then they're popping up all over. We just picked up one in Brazil, we've got three in Italy now, three in Belgium, and there are a few in the U.S. But what I'm going to say is that I'm very sad that we don't have more because I feel like it would be really nice if we had a whole load of U.S. jams. I think we've got more in Canada than we have in the USA, which is interesting given the population sizes, or relative sizes.
EL: Right.
KS: I think Washington DC has just gone on hiatus because not enough people are coming along. So the organizer said, "I'm getting fed up of sitting in the pub on my own. No one else is coming. I'm just going to put it on hold for now." And so if you live somewhere in the U.S. and you want to go meet with people and do maths in an evening, essentially to start one you just need a couple of people that you know you can drag along with you, to sit around in case no one else turns up. And we send out a sheet with some ideas of puzzles and things to do. And you can play games, chat about maths, and do whatever. People can bring stuff along. And all you need to do to organize it is choose a bar and send the email once a month. And those are the only requirements. And go to the pub once a month, but I think that's probably not a big ask if that's the kind of thing you're into. So if anyone is interested, you can email to [email protected] and I can send you all the details of what's involved. You can have a look on the website, mathsjam.com, or math-jam.com, if you want to have a look at what there is already, what's near you.
EL: Yeah, it'd be nice to have more in the U.S.
KS: Yeah, well, I get a lot out of it. Even though it's kind of sort of my job, but also I always meet people and chat through things and share ideas and people always go, “Oh, that reminds me of this other thing I saw,” and they show me something I've not seen before. And it's such a nice way to share things. But also just to know that everyone else in the room is totally sympathetic to maths and will be quite happy for you to chat on about some theorem or whatever and not think you’re weird. It’s quite nice.
EL: Well thanks a lot for joining us. I enjoyed talking about the fold-and-cut theorem. It makes me want to go back and pick up that alphabet again and try to conquer Mount H, that felled me the last time.
KS: I can send you a picture of my fold pattern for each, but I'm sure you would much rather work it out for yourself. It's such a lovely puzzle. It's a really nice little challenge.
EL: Yeah, it’s fun.
Kevin Knudson: Welcome to My Favorite Theorem. I'm Kevin Knudson, professor of mathematics at the University of Florida, and this is your other host.
EL: Hi. I’m Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City, Utah.
KK: How’s it going, Evelyn?
EL: How are you today?
KK: I’m okay. I’m a little sleepy. So before I came to Florida, I was at Mississippi State University, and I still have a lot of good friends and colleagues there, and that basketball game last night. I don’t know if you guys saw it, but that last minute shot, last second shot that Notre Dame hit to win was just a crusher. I’m feeling bad for my old friends. But other than that, everything’s great. Nice sunny day.
EL: Yeah, it’s gray here, so I’m a little, I always have trouble getting moving on gray mornings.
KK: But you’ve got that nice big cup of tea, so you’re in good shape.
EL: Yes.
KK: That’s right. So today we are pleased to welcome Mike Lawler. Mike, why don’t you introduce yourself and tell everyone about yourself?
ML: Hi. I'm Mike Lawler. I work in the reinsurance division for Berkshire Hathaway studying large reinsurance deals. And I also spend a lot of my spare time doing math activities for kids, actually mostly my own kids.
KK: Yeah.
EL: Yeah.
KK: Yours is one of my favorite sites on the internet, actually. I love watching how you explain really complicated stuff to your kids. How old are they now? They’re not terribly old.
ML: They’re in eighth grade and sixth grade.
KK: But you’ve been doing this for quite a while.
ML: We started. Boy, it could have been 2011, maybe before that.
KK: Wow, right.
ML: I think all three of us on the podcast today, and probably everybody listening, loves math.
KK: One hopes.
ML: And I think there’s a lot of really exciting math that kids are really interested in when they see. It’s fun finding things that are interesting to mathematicians and trying to figure out ways to share them with kids.
EL: Yeah. Well I like, you always make videos of the things, so listening to your kids talking through what they’re thinking is really fun. Recently I watched one of the old ones, and I was like, “Oh my goodness! They’re just little babies there.” They’re so much bigger now. I don’t have kids of my own, so I don’t have that firsthand look at kids growing up the same way. They’re sweet kids, though.
ML: I have to say, one of the first, it wasn’t actually the first one we did, but it’s called Family Math 1, where we do the famous “How many times can you fold a piece of paper?” And, you know, they’re probably 4 and 6 at the time, or maybe 5 and 7, and yeah, it’s always fun to go back and watch that one.
EL: Yeah.
KK: I see videos of my son, he’s now 18, he’s off in college. When I see videos of him, he’s a musician, so when he was 10, figuring out how to play this little toy accordion we got him, I kind of get a little weepy. You know.
ML: It’s funny, I was picking him up somewhere the other day, and I confused him with a 20-year-old, my older son, and I just thought to myself: how did this happen?
KK: So, all right. Enough talking about kids, I guess. So, Mike, we asked you on to talk about your favorite theorem. So what is it?
ML: Well, it's not quite a theorem, but it's something that's been very influential to me. Not in sharing math with kids, but in my own work. It comes from a 1995 paper by a professor named Zvi Bodie at BU. And he was studying finance, and continues to study finance. And he published a paper showing that the cost of insurance for long holdings in the stock market actually increases with time. Specifically, if you want to buy insurance to guarantee your investments at least earn the risk-free rate, that cost of insurance goes up over time. And it just shocked me when I was just learning about finance, actually when I was just in grad school. And this paper has had a profound influence on me over the last 20 years. So that's what I want to talk about today.
KK: Okay. I know hardly any of those words. I have my retirement accounts and all that, but like most good quantitatively-minded people, I just ignore them.
ML: Well, let's take a simple example. Let's just take actually the most simple example. Say you wanted to invest $100 in the stock market, and you thought, because you've read or you've heard that the stock market gives you good returns, you thought, "Well, in 10 years from now, I think I'll probably have at least $150 in that account." And you said, "Well, what I want to do is go out and buy some insurance that guarantees me that at least I'll have that amount of money in the account." That's the problem. That's the math problem that Bodie studied.
KK: Right. So how does one price that insurance policy, I guess? So right, on the insurance side, how do they price that correctly, and on the consumer side, how do you know you’re getting a worthwhile insurance policy, I guess.
ML: Yeah, well this is kind of the fun of applied mathematics. So there's a lot of theory behind this, and I think like a lot of good theories, it's not named after the people who originally discovered it. So I think that's an important part of any theory. But then when you understand the theory, and you actually go into the financial markets, you have to start to ask yourself, "What parts of the theory apply here, and which ones don't?" So the theory itself goes back to the early 1900s with a French mathematician and his Ph.D. thesis. His last name is Bachelier, and I'm probably butchering that. But then people began to study random processes, and Norbert Wiener studied those. And eventually all of that math came into economics, I think in the late 60s, early 1970s, and something called the Black-Scholes formula came to exist. The Black-Scholes formula is what people use to price this kind of insurance, sometimes called options. So that's been around the financial markets since at least the early 1970s, so let's call it 50 years now. And if you're a consumer, I think you'd better be careful.
EL: Well I find, I don’t know a lot about financial math, but I’ve tried to read a few books about the financial crash, actually one of which you suggested to me, I think, All the Devils Are Here. And I find, even with my math background, it’s very confusing what they’re pricing and how they’re calculating these, how they’re batching all of these things. It just really seems like a black box that you’re just kind of hoping what’s in the box isn’t going to eat you.
ML: That's a pretty good description. Yeah, Bethany McLean's book All the Devils Are Here is absolutely phenomenal, and Roger Lowenstein's book, called When Genius Failed, is also an absolutely phenomenal book. You are absolutely right. The math is very heavy, and a lot of times, especially when you talk about the financial crisis, the math formulas get misused a little bit, and maybe are applied in situations where they might not necessarily apply.
KK: Really? Wall Street does that?
ML: So you really have to be careful. I think if you pull the original Black-Scholes paper, I think there are 7 or 8 assumptions that go into it. As long as these 7 or 8 things are true, then we can apply this theory. In theory we can apply the theory.
KK: Right.
ML: So when you go into the financial markets, a lot of times if you have that checklist of 7 things with you, you’re going to find maybe not all 7 are true. In fact, a lot of times, maybe you’re going to find not a single one of those things is true. And that is I think a problem that a lot of mathematicians have when they come into the markets, and they just think the theory applies directly, if you will.
KK: Right, and we’ve all taught enough students to know they’re not very good at checking assumptions, right? So if you have to check off a list of 6 or 7 things, then after the first couple, you’re like, “Eh, I think it’s fine.”
ML: Right. Maybe that seventh one really matters.
KK: Right.
EL: Yeah.
ML: Or maybe you’re in a situation where the theory sort of applies 95% of the time, but now you’re in that 5% situation where it really doesn’t apply.
KK: So should I buy investment insurance? I mean, I’ve never directly done such a thing.
ML: Well…
KK: I don’t know if it’s an option for me since I just have 401Ks, essentially.
ML: Well, it’s probably not a great idea to give investment advice over a podcast.
KK: Right, yeah, yeah.
ML: But from a mathematical point of view, the really interesting thing about Bodie's paper is Black-Scholes is indeed a very complicated mathematical idea, but the thing that Bodie found was a really natural question to ask about pricing this kind of insurance, ensuring that your portfolio would grow at the risk-free rate. In that situation, and you can see it in Bodie's paper, the math simplifies tremendously. And I think that is a common theme across mathematics. When someone finds exactly the right way to look at a problem, all of a sudden the problem simplifies. And I'm sure you can probably give me 3 or 4 examples in your own fields where that is the case.
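[Ed. note: A minimal Python sketch, not from Bodie's paper, of the simplification Mike is describing. If the strike is set so the position is guaranteed to earn at least the risk-free rate, K = S0 * exp(r*T), then the Black-Scholes put price collapses to S0 * (2*N(sigma*sqrt(T)/2) - 1), which only grows as the horizon T grows. The parameter values below are made up for illustration.]

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_put(S0, K, r, sigma, T):
    """Black-Scholes price of a European put on a non-dividend-paying stock."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return K * exp(-r * T) * norm_cdf(-d2) - S0 * norm_cdf(-d1)

def insurance_cost(S0, r, sigma, T):
    """Cost of guaranteeing that a stock position worth S0 earns at least
    the risk-free rate over T years: a put struck at S0 * exp(r*T).
    With that strike the price reduces to S0 * (2*norm_cdf(sigma*sqrt(T)/2) - 1)."""
    return bs_put(S0, S0 * exp(r * T), r, sigma, T)

# Illustrative numbers only: 20% volatility, 3% risk-free rate, $100 invested.
for T in (1, 5, 10, 20, 40):
    print(f"{T:>2} years: about ${insurance_cost(100.0, 0.03, 0.20, T):.2f} per $100 invested")
```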
KK: Sure. Well, I’m not going to, but yeah.
EL: So something when you told us that this was the theorem or quasi-theorem you were going to talk about, it got me wondering how much the financial world—I’ve been trying to think about how to phrase this question—but how much your natural tendencies as mathematicians actually carry over into finance. How much are you able to think about your work in finance and insurance as math questions and how much you really have to shift how you’re thinking about things to this more realistic point of view.
ML: I think it's a great question because, you know, the assumptions and a lot of times the mathematical simplifications that allow you to solve these differential equations that stand behind the Black-Scholes theorem and generally stochastic processes, you know, that doesn't translate perfectly to the real world. And you have to start asking questions like, "If this estimate is wrong, does it miss high? Does it miss low?" "In the 5% of the times it doesn't work, do I lose all my money?"
EL: Right.
ML: And so those, I can tell you as an undergrad I was also a physics major, and I spent a lot of time in the physics lab, and there’s not one single person who was ever in lab with me who misses me. I was a mathematician in the labs. But doing some of these physics experiments really teaches you that applying the theory directly, even in a lab situation, is very difficult.
KK: Right.
EL: Right. And your Ph.D. was in pure math, right?
ML: Right, it was sort of mathematical physics. In the late 90s, people were really excited about the Yang-Mills equations.
KK: Mirror symmetry.
ML: Work that Seiberg and Witten were doing. So I was interested in that.
EL: So your background is different from what you’re doing now.
ML: Oh, totally. You know, I, it’s kind of a hard story for me to tell, but I really loved math from the time I was in fifth grade all the way up through about my third year of graduate school.
EL: Yeah, I think that could be a painful story.
ML: I don’t know why, I really don’t know why, I just kind of lost interest in math then. I finished my Ph.D., and I even took an appointment at the University of Minnesota, but I just lost interest, and it was an odd feeling because from about fifth grade until—what grade is your third year of graduate school?
KK: Nineteenth.
ML: Nineteenth grade. I really got out of bed every morning thinking about math, and I sort of drifted away from it. But my kids have brought me back into it, so I’m actually really happy about that.
KK: Well that’s great. So, what have you chosen to pair with your quasi-theorem, we’re calling it?
ML: Well, you know, so for me, this paper of Bodie’s goes back, and it sort of opened a new world for me, and for the last 20 years I’ve been studying more about it and learning more about it and all these different things, so I got to thinking about a journey. I have books on my table right now about this paper. So the journey I want to highlight is—and I think a lot of people can understand who are outside of math—is an athletic journey. I’m going to bring up a woman named Anna Nazarov, who represents the United States on the national ultimate frisbee team, which is a sport I’ve been around. And four years ago, she made it almost to being on the national team and got cut in the last minute and wrote this very powerful essay about her feelings about getting cut and then turned around and worked hard and improved and won three world and national championships in the last four years as a result of that work.
KK: Wow.
ML: Yeah, you know, it’s hard to compare world championships to just your plain old work. I think people in math understand that you kind of roll up your sleeves and over a long period of time you come to understand mathematics, or you come to understand in this case how certain mathematics applies, and so I want to pair this with that kind of athletic journey, which I think, to the general public, people understand a little bit better.
EL: Yeah, so I played ultimate very recreationally in grad school. There was a math department pickup ultimate game every week, and playing with other math grad students is my speed in ultimate. I really miss it. When you, I can tell, follow ultimate, and I often read the links you post about ultimate frisbee, I’m like, oh, I kind of miss doing that. But a few years ago, I did get to, I happened to be in Vancouver at the same time that they were doing the world ultimate championships there and got to see a couple games there, and it’s really fun, and it’s been fun to follow the much-higher-level-than-math-grad-student ultimate playing thing through the things you’ve posted.
ML: Yeah, it’s neat to follow an amateur sport, or not as well-known a sport because the players work so hard, and they spend so much of their own money to travel all over the world. You know, I think a lot of people do that with math. Despite the topic of today’s conversation, most people aren’t going into math because of the money.
KK: Well this has been great fun. Thanks for joining us, Mike. Is there anything, we always want to give our guest a chance to plug anything. We already kind of plugged your website.
EL: We’ll put links to your blog in the show notes there, and your Twitter. But yeah, if there’s anything else you want to plug here, this is the time for it.
ML: No, that’s fine. If you want to follow Mike’s Math Page, it’s a lot of fun sharing math with kids. And like I said, I sort of lost interest in math in grad school, but sharing math with kids now is what gets me out of bed in the mornings.
KK: Great.
EL: Yeah.
KK: All right. Well, thanks again, Mike.
ML: Thank you.
Kevin Knudson: Welcome to My Favorite Theorem. I’m your cohost Kevin Knudson, professor of mathematics at the University of Florida. I am joined by cohost number 2.
Evelyn Lamb: I am Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City. So how are you?
KK: I’m okay. And by the way, I did not mean to indicate that you are number 2 in this.
EL: Only alphabetically.
KK: That’s right. Yeah. Things are great. How are things in Salt Lake?
EL: Pretty good. I had a fantastic weekend. Basically spent the whole thing reading and singing, so yeah, it was great.
KK: Good for you.
EL: Yeah.
KK: I didn’t do much. I mopped the floors.
EL: That’s good too. My floors are dirty.
KK: That’s okay. Dirty floors, clean…something. So today we are pleased to have Chawne Kimber on the show. Chawne, do you want to introduce yourself?
Chawne Kimber: Sure. Hi, I’m a professor at Lafayette College. I got my Ph.D. a long time ago at University of Florida.
KK: Go Gators!
CK: Yay, woo-hoo. I work in lattice-ordered groups.
KK: Lattice-ordered groups, very cool. I should probably know what those are, but maybe we’ll find out what they are today. So yeah, let’s get into it. What’s your favorite theorem, Chawne?
CK: Okay, so maybe you don’t like this, but it’s a suite of theorems.
KK: Even better.
EL: Go for it.
CK: So, right, a lattice-ordered group is a group, to begin with, in which any two elements have a sup and an inf, so that gives you your lattice order. They’re torsion-free, so they’re, once you get past countable ones, they’re enormous groups to work with. So my favorite theorems are the representation theorems that allow you to prove stuff because they get unwieldy due to their size.
EL: Oh cool. One of my favorite classes in grad school was a representation class. I mean, I had a lot of trouble with it. It was just representations of finite groups, and those were still really out there, but it was a lot of fun. Really algebraic thinking.
CK: Well actually these representations allow you to translate problems from algebra to topology, so it's pretty cool. The classical theorem is by Hahn in 1909. He proved the special case that any totally ordered Archimedean group can be embedded as a subgroup of the reals, and it kind of makes sense that you should be able to do that.
KK: Sure.
CK: And then he said that any ordered abelian group, so not necessarily lattice-ordered, can be embedded in what's called a lexicographical product of the reals. So we could get into what that is, but those are called Hahn groups. They're just huge products of the reals, ordered in dictionary order, whose elements only live on well-ordered sets. So this is actually a theorem, but then there's a conjecture that that theorem is actually equivalent to the axiom of choice.
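[Ed. note: A tiny illustration, not from the episode, of the kind of group Hahn's theorem is aimed at. Pairs of integers under addition, compared in dictionary order, form a totally ordered abelian group that is not Archimedean, so it cannot sit inside the reals alone, but it embeds naturally in a two-factor lexicographic product of copies of the reals.]

```python
def lex_leq(a, b):
    """Dictionary order on pairs: compare first coordinates, then second."""
    return a[0] < b[0] or (a[0] == b[0] and a[1] <= b[1])

def n_times(n, x):
    """Add the pair x to itself n times (the group operation is addition)."""
    return (n * x[0], n * x[1])

# The group Z x Z with the dictionary order is totally ordered and abelian,
# but not Archimedean: no multiple of eps = (0, 1) ever gets past one = (1, 0).
eps = (0, 1)
one = (1, 0)
print(all(lex_leq(n_times(n, eps), one) for n in range(1, 1_000_000)))  # True (spot check)
```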
KK: Wow.
EL: Oh wow.
CK: Right?
EL: Can we maybe back up a little bit, is it possible to, for me, I really like concrete examples, so maybe can you talk a little bit about a concrete example of one of these archimedean groups? I don’t know how concrete the concrete examples are.
CK: No, they’re just really weird ways of hacking at the reals, basically, so they’re just subgroups of the reals. Think of your favorite ones, and there you go, the ones that are archimedean. And as soon as you add two dimensions of ordering, it’s even more complex, right? So the classical example that I work with would be rings of continuous functions on a topological space, and then you can build really cool examples because we all understand continuous functions, so C(X), real-valued continuous functions on a Tychonoff space, so T-3 1/2, whatever.
KK: Metric space.
CK: The axioms so you have enough continuous functions. So Gillman and Jerison in the 1950s capitalized on a theorem from the 1930s by Gelfand and Kolmogorov that said that the maximal ideals of C(X), if you take them in the hull-kernel topology, are homeomorphic to the Stone-Čech compactification of the space that you’re working on. And so if you have a compact space to begin with, then your space is homeomorphic to its space of maximal ideals. So then, just build your favorite—so C(X) is lattice-ordered if you take the pointwise ordering, and since the reals have a natural order on them, you pick up your sups and infs pretty easily. So there you’re starting to touch some interesting examples of these groups. Have I convinced you, Evelyn?
EL: Yeah, yeah.
CK: Okay, good. So they’re huge. You have to have some complexity in order to be able to prove anything interesting about them. So then there the Hahn embedding is pretty obvious. You just take the images of the functions. There’s too much structure in a ring like that, so maybe you want to look at just an ordered group to get back to the Hahn environment. So how can you mimic Hahn in view of Gelfand-Kolmogorov? So can we get continuous functions as the representation of an ordered group? Because the lex products that Hahn was working with are intractable in a strong way. And so then you have to start finding units because you have to be able to define something called a maximal sub-object, so you want it to be maximal with respect to missing out on some kind of unit. And so then we get into a whole series of different embedding theorems that are trying to get you closer to being able to deal with the conjecture I mentioned before, that Hahn’s embedding theorem is equivalent to the axiom of choice.
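For reference, here is the Gelfand–Kolmogorov correspondence Kimber is invoking, stated in the form it is usually quoted (our paraphrase):

```latex
% Points of X correspond to maximal ideals of C(X).
\[
  x \;\longmapsto\; M_x \;=\; \{\, f \in C(X) : f(x) = 0 \,\}.
\]
% For X compact Hausdorff, this map X -> Max C(X) is a homeomorphism when
% Max C(X) carries the hull-kernel topology; for a general Tychonoff space X,
% Max C(X) is homeomorphic to the Stone-Cech compactification \beta X, which is
% the version described in the episode.
```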
EL: Yeah, I’m really fascinated by this conjecture. It kind of seems like it comes out of nowhere. Maybe we can say what the axiom of choice is and then, is there a way you can kind of explain how these might be related?
CK: Yes and no.
KK: Let’s start with the axiom of choice.
CK: Yeah, so the axiom of choice is equivalent to Zorn’s lemma, which says that maximal objects exist. So that’s the way that I deal with it. It allows me to say that maximal ideals exist, and if they didn’t exist, these theorems wouldn’t hold. You use this everywhere in order to prove Hahn’s theorem, and that’s why it’s conjectured to possibly be equivalent. This isn’t the part that I work on. I’m not a logician.
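Since Zorn’s lemma is doing the heavy lifting here, a quick statement and the standard application Kimber alludes to (our wording):

```latex
\textbf{Zorn's lemma.} If every chain in a nonempty partially ordered set $P$
has an upper bound in $P$, then $P$ contains a maximal element.

\textbf{Standard application.} For a ring $R$ with $1$ and a proper ideal $I$,
order the set of proper ideals containing $I$ by inclusion. The union of a chain
of proper ideals is again a proper ideal (it never contains $1$), so Zorn's
lemma produces a maximal ideal containing $I$.
```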
KK: So many things are equivalent to the axiom of choice. For example, the Tychonoff product theorem, which says that the product of compact spaces is compact, is actually equivalent to the axiom of choice, which seems a bit odd. I was actually reading last night, so Eugenia Cheng has this book Beyond Infinity, her most recent book, good bedtime reading. I learned something last night about the axiom of choice, which is that you need the axiom of choice to prove that certain collections you want to think of as countable really are countable. If the sets come with an order, then fine, but imagine pairs of socks, like an infinite collection of pairs of socks: is that countable? Are the socks countable? It’s an interesting question, these weird slippery things with the axiom of choice and logic. They make my head hurt a little bit.
CK: Mine too.
EL: So yeah, you’re saying that looking at the axiom of choice from the Zorn’s lemma point of view, that’s where these maximal objects are coming in in the Hahn conjecture, right?
CK: Absolutely.
KK: That makes sense.
CK: That’s kind of why I drew the parallel with this theorem about C(X), these maximal ideals being equivalent to the space you’re on. Pretty cool.
KK: Right. Because even to get maximal ideals in an arbitrary ring, you really need Zorn’s lemma.
CK: Right. And there’s a whole enterprise of people working to see how far you can peel that back. I did take a small foray into trying to understand gradations of the axiom of choice, and that hurts your head, definitely.
KK: Right, countable axiom of choice, all these different flavors.
CK: The Boolean prime ideal theorem, right.
KK: Yeah, okay.
EL: So what drew you to these theorems, or what makes you really excited about them?
CK: Well, you know, as a super newbie mathematician back in the day, I was super excited to see that these disparate fields of algebra and topology that everyone had told me were totally different could be connected in a dictionary way. So a characteristic of a ring turns out to be equivalent to a characteristic of a topological space. So all kinds of problems can be stated in these two different realms. They seem like different questions, but they turn out to be equivalent. So if you just know the way to cross the bridge, then you can answer either question depending on which realm gives you the easier approach to the theorem.
KK: I like that interplay too. I’m a topologist, but I’m a very algebraic one for exactly that reason. I think there are so many interesting ideas out there where you really need the other discipline to solve it, or looking through that lens makes it a lot clearer somehow.
EL: And was this in graduate school that you saw these, or as a new professor?
CK: Definitely grad school. I was working on my master’s.
KK: So I wonder, what does one pair with this suite of theorems?
CK: It’s a very hard question, actually.
KK: That’s typical. Most people find this the more difficult part of the show.
CK: Yeah. I think that if you were to ask my Ph.D. advisor Jorge Martinez what he would pair, he is very much a wine lover and an opera lover. So it would be both. You’d probably see him taking a flask into Lincoln Center while thinking about theorems. He loved to go to Tuscany, so I assume that’s where you get chianti. I don’t know, I could be lying.
KK: You do, yeah.
CK: Yeah, so let’s go with a good chianti, although that might make me sound like Hannibal Lecter.
KK: No fava beans.
CK: So we’ve got a chianti, and maybe a good opera because it’s got to be both with him. It’s hard for me to say. So he comes up to New York to do an opera orgy, just watching two operas per day until he falls down. I sometimes join him for that, and the last one I went to was Così fan tutte, and so let’s go with that because that’s the one I remember.
EL: If I remember correctly—it’s been a while since I saw or listened to that opera—there are pairs of couples who end up in different configurations, and it’s one of these “I’ll trick you into falling in love with the other couple’s person” that almost seems like the pairs being topology and algebra, and switching back and forth. I don’t know, maybe I’m putting ideas in your mind here.
CK: Or sort of the graph of the different couplings, the ordered graph could be the underlying object here. You never know.
EL: An homage to your advisor here with this pairing.
CK: Yeah, let’s do that.
EL: Well I must admit I was kind of hoping that you might pair one of your own quilt creations here. So I actually ran into you through a quilting blog you have called completely cauchy. Do you mind talking to us a little bit about how you started quilting and what you do there because it’s so cool.
CK: Yeah. Of course I chose that name because Cauchy is my favorite mathematician, and as a nerd I figured there would be no other quilt blog named after a dead mathematician. So I am a little mortified that when you google “Cauchy complete,” as many students do, mine is actually the first entry that comes up on Google.
KK: Excellent.
CK: I don’t know what that means, but okay. So yeah, when I applied for tenure, which is kind of a hazing process no matter where you are, no matter how good of a faculty member you are, I really wanted to have control, and you don’t have control at that point. And so I started sewing for fun, late at night, at 1 am, after everything kind of felt done for the day. I never imagined that I’d be doing what I’m doing today, which is using quilting to confront issues of social justice in the United States, and they’ve been picked up by museums and other venues. It’s this whole side hustle out there that I kept quiet for a long, long time. And then once I got promoted to full professor I came out of the closet.
KK: Were you concerned that having a side hustle, so to speak, would compromise your career? Because it shouldn’t.
CK: Yeah, I think something so gender-specific as quilting, something you associate with grandmas. At the end of the day, though, I must say half of my quilts have four-letter words on them, you know, the more interesting four-letter words, and as soon as the guys I work with saw them, they were totally on board with this enterprise. So I didn’t really need to be in the closet, but I didn’t want anybody to ever say, “Oh, she should have proved one more theorem instead of making that quilt.”
EL: Yeah.
KK: It’s unfortunate that we feel that way, right? I think that’s true of all mathematicians, but I imagine it’s worse for women, this idea that you have to work twice as hard to prove you’re half as good or something like that?
CK: Do we need to mention I’m also a black woman? So that’s actually how I was raised, you need to do three times as much to be seen as half as good, and that’s the way that I’ve lived my life, and it’s not sustainable in any way.
KK: No, absolutely not.
EL: But yeah, they are really cool quilts, so everyone should look at completely cauchy, and that’s spelled c-a-u-c-h-y, after the mathematician Cauchy. I actually have another mathematician friend who had a cat named Cauchy. I think the cat has passed away. Yeah, and I actually sew as well. I’ve somehow never had the patience for quilting. It just feels somehow too small-scale. I like the more immediate gratification of making a whole panel of a skirt or something. You do really intricate little piecing there, which I admire very much, and I’m glad people like you do it so I don’t have to.
KK: Sure, but Evelyn, you don’t have to make it little.
CK: You don’t.
KK: I’m sure you’ve seen these Gee’s Bend quilts, right? They’re really nice big pieces, and that can have a very dramatic effect too. But yeah, the intricate work is really remarkable. My wife has done a little quilting, and she always gets tired of it because of the fine stuff. But then she’s a book artist. She sets lead type in her printing press by hand, and that’s fine, but piecing together little pieces of cloth somehow doesn’t work.
CK: It seems more futile, you take a big piece of fabric and cut it into small pieces so that you can sew it back together. That is kind of dumb when you think about it.
KK: Well, but I don’t know, you’ve got this whole Banach-Tarski thing, maybe.
EL: Bring it back around to the axiom of choice again.
CK: You guys are good at this.
KK: It’s not our first podcast. Well this has been great fun. Anything else you want to promote?
CK: No, I’m good.
KK: Thanks for joining us, Chawne. This has really been interesting, and we appreciate you being on.
CK: Great. Thank you.
EL: Thanks.
Kevin Knudson: Welcome to My Favorite Theorem. I’m one of your hosts, Kevin Knudson, professor of mathematics at the University of Florida, and here is your jet-lagged other host.
Evelyn Lamb: Hi, I’m Evelyn Lamb, a freelance math and science writer in Salt Lake City. I’m doing pretty well right now, but in a few hours when it’s about 5 pm here, I think I will be suffering a bit. I just got back from Europe yesterday.
KK: I’m sure you will, but that’s the whole trick, right? Just keep staying up. In a couple of weeks, I’m off to that part of the world. I’ll be jet-lagged for one of the ones we have coming up right after that. That should be fun. I’ll feel your pain soon enough, I’m sure.
EL: Yeah.
KK: So today we are pleased to welcome James Tanton. James, why don’t you introduce yourself and tell everyone about yourself?
James Tanton: Hello. First of all, thank you for having me. This is such a delight. So who am I? I’m James Tanton. I’m the mathematician-at-large for the Mathematical Association of America, which is a title I’m very proud of. It’s a title I never want to give up because who doesn’t want to be a mathematician-at-large, wreaking havoc wherever one steps? But my life is basically doing outreach in the world and promoting joyous thinking and doing of mathematics. I guess my background is somewhat strange. You can probably tell I have an accent. I grew up in Australia and came to the US 30 years ago for my Ph.D., which was grand, and I liked it so much here I am 30 years later. My career has been kind of strange. I was in the university world for close to 10 years, and then I decided I was really interested in the state of mathematics education at all levels, and I decided to become a high school teacher. So I did that for 10 years. Now my life is actually working with teachers and college professors all across the globe, usually talking about let’s make the mathematics our kids experience, whatever level they’re at, really, truly joyous and uplifting.
EL: Yeah, I’ve wondered what “mathematician-at-large” entails. I’ve seen that as your title. It sounds like a pretty fun gig.
JT: So I was the MAA mathematician-in-residence for a good long while. They were very kind to offer me that position. But then I’m married to a very famous geophysicist, and my life is really to follow her career. She was off to a position at ASU in Phoenix, and then off we moved to Phoenix four years ago. So I said to the folks at the MAA, “Well, thanks very much. I guess I’m not your mathematician-in-residence anymore,” and they said, “Why don’t you be our mathematician at large?” That’s how that title came up, and of course I so beautifully, graciously said yes because that’s spectacular.
KK: Yeah, that sounds like a Michael Pearson idea, that he would just go, “No, no, we really want to keep you.”
JT: It’s so flattering. I’m so honored. It’s great because, you know, actually, it’s the work I was going to do in any case. I feel compelled to bring joyous mathematics to the world.
KK: Right. Okay, so this podcast is about theorems. So why don’t you tell us what your favorite theorem is?
JT: Okay, well first of all, I don’t actually have a theorem, even though I think it should be elevated to the status of one. I want to talk about Sperner’s lemma. So a lemma means, like, an auxiliary result, a result people use to get to other big ideas, but you know what? I think it’s charming in and of itself. So Sperner’s lemma. This was invented slash discovered back in the 1920s by a German mathematician by the name of Emanuel Sperner, who was playing with some combinatorial thinking in Euclidean geometry and came up with this little result.
Let me describe to you in one way first, not really the way he did it, because then I can actually explain a proof as well of the result. Imagine you have a big rubber ball, just a nice clean rubber surface, and you’ve got a marker. I’m going to suggest you just put dots all over the surface of the rubber ball, lots of dots all over the place. Once you’ve done that to your satisfaction, start connecting pairs of dots with little line segments. They’ll be little circular arcs, and make triangles, so three dots together make a triangle. Do that all over the surface of the sphere. Grand. So now you’ve got a triangulated sphere, a surface of a sphere completely covered with triangles. Each triangle for sure has only three dots on it, one in each corner, so no dots in the middle of the edges, please. All right? That’s step one.
Step two, just for kicks, go around and just label some of those dots with the letter A, randomly, and do some other dots with the letter B, randomly, and just some other dots with the letter C—why not?—until each dot has a label of some kind, A, B, or C. And then admire what you’ve done. I claim if you look at the various triangles you have, you have some labeled BBB, and some labeled BCA, and some labeled BBA, and whatever, but if you find one triangle that is fully labeled ABC, I bet if you kept looking, in fact I know you are guaranteed, you would find another triangle that’s labeled ABC. Sperner’s lemma says on the surface of a sphere that if there’s one fully labeled triangle, there’s guaranteed to be at least another.
EL: Interesting! I don’t think I knew that, or at least I don’t know that formulation of Sperner’s lemma.
JT: And the reason I said it that way, I can now actually describe to you why that is true because doing it on the surface of a sphere is a bit easier than doing it on a plane. Would you like to hear my little proof?
KK: Let’s hear it.
EL: Sure!
JT: Of course the answer to that has to be yes, I know. So imagine that these are really chambers. Each triangle is a room in a floor design on the surface of a sphere. So you’re in a room, an ABC room around you. You’ve got three walls around you: an AB wall, a BC wall, and an AC wall. Great. I’m going to imagine that some of these walls are actually doors. I’m going to say that any wall that has an AB label on it is actually a door you can walk through. So you’re in an ABC room, and you currently have one door you can walk through. So walk through it! That will take you to another triangle room. This triangle room has at least one AB edge on it, because you just walked through it, and that third vertex will have to be an A, B, or C. If it’s a C, you’re kind of stuck because there are no other AB doors to walk through, in which case you just found another ABC room. Woo-hoo, done!
EL: Right.
JT: If it’s either A or B, then it gives you a second AB door to walk through, so walk through it. In fact, just keep walking through every AB door you come to. Either you’ll get stuck, and in fact the only place you can possibly get stuck is if there’s exactly one AB door, in which case it was an ABC triangle, and you found an ABC triangle. Or it has another door to walk through, and you keep going. Since there’s only a finite number of triangles, you can’t keep going on indefinitely. You must eventually get stuck. You must get stuck in an ABC room. So if you start in one ABC room, you’ll be sure to be led to another.
EL: Oh, okay, and you can’t go back into the room you started in.
KK: That was my question, yeah.
JT: Could you possibly return to a room you’ve previously visited? Yes, there’s a subtlety there. Let’s argue our way through that. So think about the first room that you could possibly—if you do revisit a room, think of the first room you re-enter. That means you must have gone through an AB door to get in. In fact, if you’ve gone through that room before, you must have already previously used that AB door to go into and out of it. That is, you’ve used that AB door twice already. That is, the room you just came from was a previously revisited room. You argue, oh, if I think this is the first room I’ve visited twice, then the room you just came from, you’re wrong. It was actually that room that you first visited twice. Oh, no, actually it was the one before that that you first visited twice. There can be no first room that you first visited twice. And the only way out of that paradox is there can be no room that you visit twice.
EL: Okay.
JT: That’s the mind-bendy part right there.
EL: I feel like I need a balloon right now and a bunch of markers.
JT: You know, it’s actually fun to do it, it really is. But balloons are awkward. In fact, the usual way that Sperner’s lemma is presented, I’ll even not do it in the usual way. Sperner did it on a triangle. I’ll do it on any polygon. This time, this we can actually do with markers, and it’s really fun to actually do it. So draw a great big polygon on a page and then triangulate it. Fill its interior with dots and then fill in edges so you’ve got all these triangles filling up the polygon. And then randomly label the dots A, B, or C in a random, haphazard way. Make sure that you have an odd number of AB doors on the outside edge of that polygon. If you do that, no matter what you do, you cannot escape creating somewhere on the interior a fully labeled ABC triangle. The reason is, you just do this thing. Walk from the outside of the polygon through an AB door, an outside AB door, go along on a journey. If you get stuck, bingo! You’re on an ABC triangle. Or you might be led out another AB door back to the big space again. But if you have an odd number of AB doors on the outside, you’re guaranteed to have at least one of those doors not leading outside, meaning you’ve been stuck on the inside. It’s guaranteed to lead to an ABC triangle in the middle of the polygon.
EL: Okay, and this does require that you use all three—is it a requirement that you use all three letters, or does the odd number of things…I don’t know if my question makes sense yet.
JT: There are no rules on what you do, except on the outside, please give me an odd number of AB doors.
EL: Okay.
JT: And there’s nothing special about the letters A and B. You could do an odd number of BC doors or an odd number of AC doors.
EL: Right.
JT: What you do on the interior is up to you. Label them all A, I dare you, and you’ll still find an ABC triangle.
EL: Okay.
JT: Isn’t that crazy?
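If you want to play along at home without a balloon, here is a small Python sketch (the setup and function names are ours, not Tanton’s). It triangulates a big triangle, labels the dots at random, and checks the parity fact behind the door-walking argument: the number of fully labeled ABC triangles always has the same parity as the number of AB edges on the outer boundary, so an odd number of boundary AB doors forces at least one ABC triangle inside.

```python
import random

def triangulated_triangle(n):
    """Small triangles of the standard subdivision of a big triangle into n^2 cells.
    Vertices are the lattice points (i, j) with i + j <= n."""
    tris = []
    for i in range(n):
        for j in range(n - i):
            tris.append(((i, j), (i + 1, j), (i, j + 1)))               # "up" triangle
            if i + j <= n - 2:
                tris.append(((i + 1, j), (i, j + 1), (i + 1, j + 1)))   # "down" triangle
    return tris

def boundary_ab_edges(labels, n):
    """Count AB edges lying on the outer boundary of the big triangle."""
    def is_ab(p, q):
        return {labels[p], labels[q]} == {"A", "B"}
    count = 0
    for k in range(n):
        count += is_ab((k, 0), (k + 1, 0))               # bottom side
        count += is_ab((0, k), (0, k + 1))               # left side
        count += is_ab((k, n - k), (k + 1, n - k - 1))   # hypotenuse
    return count

n = 6
vertices = [(i, j) for i in range(n + 1) for j in range(n + 1 - i)]
labels = {v: random.choice("ABC") for v in vertices}   # haphazard labeling

abc = sum(1 for t in triangulated_triangle(n)
          if {labels[v] for v in t} == {"A", "B", "C"})
boundary = boundary_ab_edges(labels, n)
print("fully labeled ABC triangles:", abc)
print("AB edges on the boundary:   ", boundary)
assert abc % 2 == boundary % 2   # the parity version of Sperner's lemma
```

Run it a few times: whenever the random labeling happens to produce an odd number of boundary AB edges, the count of ABC triangles comes out odd, so at least one exists.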
KK: Okay, so why did Sperner care?
JT: Why did Sperner care? Well he was just playing around with this geometry, but then people realized, as one of your previous guests mentioned, the fabulous Francis Su, that this leads to some topological results, for example, the Brouwer fixed-point theorem, which people care about, and you should listen to his podcast because he explains the Brouwer fixed-point theorem beautifully.
EL: Yes, and he did actually mention to us in emails and stuff that he is actually quite fond of Sperner’s lemma also, so I’m sure he’ll be happy to listen to this episode.
JT: In some ways, Sperner’s lemma is kind of special because people knew Brouwer’s fixed-point theorem before Sperner, but they had very abstract, nonconstructive proofs of the theorem. Fixed points, when you crumple pieces of paper and throw them on top of themselves, fixed points exist, but you can know that and not know how to find them. Sperner’s lemma, if you think about it, is giving you a way to possibly find those ABC triangles. Just start on the outside and follow paths in. So it gives you a kind of hope of finding where those fixed points might actually lie, so it’s a very constructive type of thinking on a topological result that is usually proved abstractly.
One thing that Francis Su did not mention is the hairy ball theorem, which I think is a lovely little application of Sperner’s lemma, and which goes back to the spheres. Spheres—in my mind, this is how I was first thinking about Sperner’s lemma. So I don’t know if you know the hairy ball theorem. Take a tennis ball with the little fur, the little hairs, ideally at every point of the sphere, but that’s not really possible, so we imagine in our mind’s eye a hairy ball. If you try to comb those hairs flat, tangent to the surface all the way around—well, maybe there would be a little angle, something like that. But as long as you don’t do anything crazy, you know, it’s a nice, smooth, continuous vector field on the surface of the sphere, just these hairs, with nearby hairs pointing in roughly the same direction, very smoothly, nothing abrupt going on, then you are forced to have a cowlick, that is, one hair that sticks straight up. That is, you are forced to have a tangent vector that is actually the zero vector. You can actually prove that with Sperner’s lemma.
EL: Wow.
JT: Yeah, and the way you do that is: choose one point, like the North Pole, and imagine a little magnet there, and you can imagine the field lines it makes, the circular arcs of a dipole field, sorry, I have to think back to my physics days. So you’ve got these natural lines associated with that magnet all over the sphere. Then I suggest you just triangulate the sphere: draw lots of little triangles all over it. At each vertex of the triangulation you’ve got a hair from the vector field, and you can look at the direction the hair is pointing compared to the direction of the magnetic field line at that point. And you can label that vertex A, B, or C by doing the following. Basically you’ve got 360 degrees of possible differences of direction between those two things. So if the difference is in the first 0 to 120 degrees of counterclockwise motion, label it A. If it’s between the 120 and 240 mark, label it B. If it’s between the 240 and 360 mark, label it C. So there is a way to label the triangulation based on the directions of the hairs on the surface of the sphere. Bingo! Now, in any triangulation, you can arrange things at the pole so that there’s an ABC triangle there, and then there has to be some other ABC triangle somewhere else on the sphere. That is, there’s a little region where you’ve got three hairs trying to point in three quite different directions. Do finer and finer triangulations, and you can argue that the only way out of that predicament is that there’s got to be one hair that’s pointing in three directions at the same time, that is, the zero vector.
KK: That’s very cool.
EL: Yeah.
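To pin down the labeling rule Tanton just described, in our notation: at each vertex p of the triangulation, let θ(p) be the counterclockwise angle from the reference (dipole) field direction to the hair direction, assuming for contradiction that no hair is the zero vector, so the angle is always defined.

```latex
\[
  \text{label}(p) \;=\;
  \begin{cases}
    A & \text{if } \theta(p) \in [0^\circ, 120^\circ),\\[2pt]
    B & \text{if } \theta(p) \in [120^\circ, 240^\circ),\\[2pt]
    C & \text{if } \theta(p) \in [240^\circ, 360^\circ).
  \end{cases}
\]
```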
JT: I just love these. These things feel so tangible. I just want to play with them with my hands and make it happen. And you can to some degree. Try to comb a fuzzy ball. You have a hard time. Or look at a guinea pig. They’re basically fuzzy balls, and they always have a cowlick. Always.
KK: Are there higher-dimensional generalizations of this? This feels very much two-dimensional, but I feel there’s an Euler characteristic lurking there somewhere.
JT: Absolutely you can do this in higher dimensions. This works in any dimension. For example, to make it three-dimensional, stack tetrahedra together: take a polyhedron and triangulate it into tetrahedra, label the vertices A, B, C, or D, and if there’s an odd number of ABC faces on the outside, then there’s guaranteed to be some ABCD tetrahedron in the middle. And so on in higher dimensions. And people of course play with all sorts of variations. For example, going back to two dimensions and triangles for a moment: if three different people create their own labeling schemes, so you’ve got lots of ABC triangles around the place, then there’s guaranteed to be one triangle in the middle that is fully labeled in a mixed sense, where you read the first vertex in the first person’s scheme, the second vertex in the second person’s, and the third vertex in the third person’s, and the three labels are all different. They call these permutation versions of Sperner’s lemma and so forth. Just mind-bendy, and in higher dimensions too.
EL: So was this a love at first sight kind of theorem for you? What were your early experiences with it?
JT: So when did I first encounter it? I guess when I studied the Brouwer fixed-point theorem, and when I saw this lemma in and of itself—and I saw it in the light of proving Brouwer’s fixed-point theorem—it just appealed to me. It felt hands-on, which I kind of love. It felt immediately accessible. I could do it and experience it and play with it. And it seemed quirky. I liked the quirky. For some reason it just appealed to me, so yes, it appealed to all my sensibilities. And I also have this thing I’ve discovered about me and my life, which is that I like this notion that I’m nothing in the universe, that the universe has these dictates. For example, if there’s one ABC triangle, there’s got to be another one. I mean, that’s a fact. It’s a universal fact that despite my humanness I can do nothing about. ABC triangles just exist. And things like the “rope around the earth” puzzle: if you take a rope and wrap it around the Equator, then add 10 feet to the rope and re-wrap it, you’ve got 19 inches of space all the way around. What I love about that puzzle: do it around Mars’s equator, adding 10 feet, and it’s 19 inches of space. Do it for Jupiter: it’s 19 inches of space. Do it for a planet the size of a pea: it’s 19 inches of space. You cannot escape 19 inches. That sort of thing appeals to me. What can I say?
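The rope arithmetic is quick to check:

```latex
% Adding 10 feet (120 inches) of rope to a loop of any radius r:
\[
  2\pi(r + \Delta r) \;=\; 2\pi r + 120\ \text{in}
  \quad\Longrightarrow\quad
  \Delta r \;=\; \frac{120}{2\pi} \approx 19.1\ \text{inches},
\]
% independent of r, which is why the Earth, Mars, Jupiter, and a pea-sized
% planet all give the same 19-inch gap.
```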
KK: So you are a physicist?
JT: Don’t tell anyone. My first degree was actually in theoretical physics.
KK: So the other fun thing we do on this podcast is we ask our guest to pair their theorem, or lemma in this case, with something. So what have you chosen to pair Sperner’s lemma with?
JT: You know, I’m going with a good old Aussie pavlova.
EL: Excellent.
JT: And I’ve probably offended all the people from New Zealand because they claim it’s their pavlova. But Australians say it’s theirs, and I’ll go with that since I’m an Aussie. And why that, you might ask?
EL: Well first can we say what a pavlova is in case, so I only learned what this was a couple years ago, so I’m just assuming—I was one of the lucky people who learned about this in making one, which was delicious, so yeah.
JT: First of all, it’s the most delectable dessert devised by mankind, or invented, or discovered. I’m not sure if desserts are invented or discovered. That’s a good question there. So it’s a great big mound of meringue. You just build this huge blob of meringue, and you bake it for two hours and let it sit in the oven overnight so it becomes this hard, hard outer shell with a soft, gooey meringue center, and you just slather it with whipped cream and your favorite fresh fruits. And my favorite fruits for a pavlova are actually mango and blueberries together. That’s my dessert. But come to think of it, there are actually two reasons for the pairing. I happen to know pavlova was invented in the 1920s, the same time Sperner came up with this lemma, which is kind of nice. But any time I bake one—I bake these things, I really enjoy baking desserts—it kind of reminds me of a triangulated sphere, because you’ve got this mound of meringue, and you bring it out of the oven, and it’s got this crust that’s all cracked up, and it kind of looks like a triangulation of a polyhedron of some kind. So it has that parallel I really like. So pavlovas bring as much joy to my life as these quirky Sperner’s lemma type results, so that’s my pairing.
EL: So they’re not that hard to make, actually. I went to this Australia-themed potluck party a couple years ago, and I decided to bring this because I was looking for Australian foods. I was pretty intimidated when I saw the pictures, but I found a recipe that looked good, and it worked the first time, more or less. I think you can handle it.
JT: It is a showstopper, but it’s so easy to make. Don’t tell anyone, it’s ridiculously easy, and it looks spectacular.
KK: Yeah, meringues look like quite something, but really, you just have to be patient whipping the whites, and then that’s it. It works.
JT: Then you’re done. It kind of works. You can’t overcook it. You can undercook it, but then it’s just a goopy delicious mess.
KK: Right. So we also like to give our guests a chance to plug various things. I’m sure you’re excited to talk about the Global Math Project.
JT: Of course I’m going to talk about the Global Math Project. Oh my goodness. You know, when I mention I’m kind of a man on a mission to bring joyous, uplifting mathematics to the world, I’m kind of trying to live up to those words, which is kind of scary. But let me just say something marvelous, really marvelous and humbling, happened last October. We brought a particular piece of mathematics to the world, a team of seven of us, the Global Math Project team, not knowing what was going to happen. It was all volunteer, grassroots, next to no funding, we’re terrible at raising funding, it turns out. But it really was believing that teachers, given the opportunity to have a real joyous, genuine, human conversation about mathematics with their students, about mathematics that’s actually classroom-relevant, because classroom mathematics is a portal to the same mystery, delight, intrigue, and wonder, would take that opportunity. Teachers are our best advocates across the globe for espousing beautiful, joyous, uplifting mathematics. So we presented a piece of mathematics called Exploding Dots, and we invited teachers all around the globe to have just some experience on this topic with their students during Global Math Week last October, and they did. We had teachers from 170 different countries and territories, all of their own accord, reach about 1.77 million students just in this one week. Phenomenal. And this is school-relevant mathematics. So we’re doing it again! Why not?
EL: Oh, great!
JT: So this year it’s 10/10. We chose that date because it’s a universal date: no matter how you read it, it’s the tenth of October. We’re going to try to reach 10 million students with the same story of Exploding Dots. So I invite you, please look up the Global Math Project, go Google Exploding Dots, and see what we’re bringing to the world. And of its own accord, in the last number of months, it has now reached 4.6 million students across the globe, so 10 million students sounds outlandish, but you know what? We might actually do this. And it’s just letting the mathematics, the true, joyous mathematics, simply shine for itself, just getting out of its way. And you know what? It happens. Math can speak for itself. Welcome to the Global Math Project.
EL: Yeah. We’ll include that in the show notes, for sure.
KK: In fact, this is June that we’re taping this, recording this. Taping? I’m dating myself. We’re recording this in June. So just this weekend Jim Propp had a very nice essay on his Mathematical Enchantments blog about this, about Exploding Dots. I’d seen some things about it, so I knew a little about it. It’s really very lovely, and as you say relevant.
JT: I’m glad you mentioned Jim Propp. I was about to give a shout out to him as well because he wrote a beautiful piece, and it’s this Mathematical Enchantments blog piece for June 2018. Worth having a look at. Absolutely. What I love about this, it really shows, I mean, Exploding Dots is the story of place value, as simple as that. But it really connects to how you write numbers, what you’re experiencing in the early grades. It explains, if you think of it in one particular way, all the grade school algorithms one learns, goes through all of high school polynomial algebra, which is just a repeat of grade five, but no one tends to tell people that. Why stop at finite things? Go to infinite things, go to infinite series and so forth, and start getting quirky. Not just playing with 10-1 machines with base 10 and 2-1 machines with base 2, start playing with 3-2 machines and discover base 1 1/2, start playing with 2-negative 1 machines and discover base -2, and you get to unsolved research questions. So here’s this one simple little story, just playing with dots in boxes, literally—like me playing with dots on a sphere; I seem to be obsessed with dots in my life—takes you on a journey from K through 5 through 8 through 12 to 16, and on, all in one astounding fell swoop. This is mathematics for you. Think deeply about elementary ideas, and well, it’s a portal to a universe of wonder.
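For the curious, here is a small Python sketch of the dots-and-boxes machines Tanton describes; the code and names are ours, not Global Math Project material. In the transcript’s “a-b machine,” any a dots in a box explode into b dots one box to the left, so the ten-to-one machine recovers ordinary base-ten digits, the two-to-one machine gives binary, and the three-to-two machine produces the “base one and a half” representations he mentions.

```python
def explode(n_dots, a, b):
    """Run an 'a-b machine': whenever a box holds at least a dots, remove a of
    them and add b dots to the box on its left. Assumes b < a so the process
    terminates. For b = 1 this yields the base-a digits of n_dots."""
    boxes = [n_dots]          # index 0 is the rightmost box; all dots start there
    i = 0
    while i < len(boxes):     # boxes only ever send dots to their left,
        while boxes[i] >= a:  # so a single right-to-left sweep suffices
            boxes[i] -= a
            if i + 1 == len(boxes):
                boxes.append(0)
            boxes[i + 1] += b
        i += 1
    return boxes[::-1]        # leftmost box first, like ordinary place value

print(explode(273, 10, 1))   # [2, 7, 3]: base-ten digits of 273
print(explode(13, 2, 1))     # [1, 1, 0, 1]: 13 in binary
print(explode(10, 3, 2))     # [2, 1, 0, 1]: a "base 3/2" representation of 10
```

The last line checks out as a place-value statement: 2·(3/2)³ + 1·(3/2)² + 0·(3/2) + 1 = 27/4 + 9/4 + 4/4 = 10.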
KK: That’s why we’re all here, right?
JT: Indeed. So let’s help the world see that together, from the young’uns all the way up.
KK: All right. Well, this has been great fun. I knew Sperner’s lemma, sort of in the abstract, but I never really thought of it too closely. So I’m glad that I can now prove it, so thank you for that.
EL: Yeah, I’m going to sit down and make sure you’re not pulling my leg about that. I think the odd number of AB’s is key here.
JT: Absolutely. The odd number of AB outside edges is key.
EL: Right.
JT: Because you could walk through the door and out a door, so pairs of them could cancel each other out. So play with them.
EL: I bet if I start drawing, I’ve been restraining myself from going over here to the side.
JT: Well you know, sketches work so well on a podcast.
KK: That’s right.
JT: Absolutely, play. That’s all mathematics should be, an invitation to play. Go for it.
EL: Yeah, thanks a lot for being here.
JT: My pleasure. Thank you so much.
[outro]
Evelyn Lamb: Hello and welcome to My Favorite Theorem, a podcast where we ask mathematicians to tell us about their favorite theorems. I’m your host Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City, Utah. This is your other host.
Kevin Knudson: I’m Kevin Knudson, a professor of mathematics at the University of Florida. How’s it going?
EL: Great. I’m excited about a new project I’m working on that is appropriate to plug at the beginning of this, so I will. So I’ve been working on another podcast that will be coming out in the fall, may already be out by the time this episode is out. It’s with the folks at Lathisms, that’s L-A-T-H-I-S-M-S, which is a project to increase visibility and recognition of Hispanic and Latinx mathematicians. And our guest today is going to be a guest on that podcast too, so I’m very excited to introduce our guest, who is Erika Camacho. Hi, Erika. Can you tell us a little bit about yourself?
Erika Camacho: Sure. So I’m an associate professor at Arizona State University. I’m a professor of applied mathematics, and my concentration is mathematical physiology, mainly focusing on the retina: modeling the retina and the deterioration of photoreceptors. And I’m at the west campus of Arizona State University, which is both a research-focused and a student-focused institution, so it’s kind of like a hybrid between what you would call more of a research place and a liberal arts education.
EL: Cool.
KK: Very nice. Which city is that in?
EC: We’re in Glendale, the west valley of Arizona, Phoenix greater area.
EL: I was in Arizona not too long ago, and the time zone is always interesting there because it’s exactly south of Utah, but I was there after Utah and most of the country went to daylight saving time, and most of Arizona doesn’t observe that, so it was kind of fun. I also went through part of the Navajo Nation there that does observe daylight saving time, so I changed time zones multiple times just driving straight north, which was kind of a fun thing.
EC: It is very confusing. Let’s say you have an event that you’re going to, say somewhere in the Navajo Nation, and you don’t realize that you might miss part of your event because of the time change. You’re just driving and you cross the border into a different time zone. It takes a while to get adjusted to. I missed a flight one time for the same reason. I was not aware that Arizona didn’t observe daylight saving. Now I’m aware.
KK: I actually have a theory that someone could run a presidential campaign, and their sole platform is that they would get rid of daylight saving time, and they would win in a landslide.
EL: I mean, people have won on less.
KK: Clearly.
EL: So Erika, we invited you here not to chat about time zones or presidents but to chat about theorems. So what is your favorite theorem?
EC: Before I say my favorite theorem, like I said, I am an applied mathematician, so I focus on modeling. And in modeling there’s a lot of complexity, a lot of different layers and levels at which you’re trying to model things. Many of the systems you develop as you create these models tend to be nonlinear. Many times I’m looking at how different processes change over time, and many of the processes I work with are continuous, so I work with differential equations, and they tend to be nonlinear. That’s often where the complexity comes in: trying to analyze nonlinear systems in the way that gives us the most insight into the behavior we’re looking for. For the physiological systems that relate to the retina and retinal degeneration, one of the things we’re really asking is what happens in the long run. How do photoreceptors degenerate over time, and can we do something to stop the progression of blindness, or the progression of certain diseases that cause the photoreceptors to degenerate? So we’re really asking what the long-term solutions of the system are and how they evolve over time. We’re looking for steady states. We’re looking at their stability, and at how changes in the processes or mechanisms that govern those systems, which are usually encoded in the parameters, end up leading to a change in the stability of the equilibria. Those changes could take the system to another equilibrium that is now stable: in physiological terms, to another pathological state, or to another state where we could hopefully apply some strategy to prevent blindness. So that’s the setting I come from, and when you asked me what my favorite theorem is, it was hard, because as applied mathematicians we utilize different theory. All the theory is useful, and depending on what the question is, the mathematics that gets used is very different.
So I thought, “What is the theorem that is utilized the most in the case where we’re looking at nonlinear systems and we’re trying to analyze them?” And one of the most powerful theorems out there, one that has almost become addicting because you use it all the time, is the Hartman-Grobman theorem. I say addicting because it’s a very powerful theorem. It allows us to take a nonlinear system and, in certain cases, analyze it and get an accurate depiction of what’s happening around an equilibrium point: what the qualitative behavior of the system is, what the solutions of the system look like, and what their stability is. Because you’re looking at, in most cases, a continuous system, you can do this around each equilibrium and kind of piece the picture together.
EL: So it’s been a long time since I took any differential equations. I’m a little embarrassed, or did any differential equations.
KK: Me too.
EL: So can you tell us a little more about the setting of this theorem?
EC: So the Hartman-Grobman theorem, like I said, is a theorem that allows us to study dynamical systems in continuous time. It’s very powerful because it gives us an accurate portrayal of the flow, the solutions of the nonlinear system, in a neighborhood of a fixed point, the equilibrium, the steady state. So I’m going to be using fixed point, equilibrium, and steady state interchangeably. The cases where it helps are the ones where, at the equilibrium we’re looking at, the eigenvalues of the linearized system all have nonzero real part. In other words, we’re looking at hyperbolic equilibrium points. That’s when we can actually apply this theorem.
KK: Okay.
EC: Otherwise we cannot; it only applies in those cases. The standard technique is you look at your nonlinear system, you linearize it, and you shift the equilibrium point to the origin. Now you’re considering the linearized system, and if the Jacobian you obtain through the linearization has eigenvalues with nonzero real part, then you can apply the Hartman-Grobman theorem, which tells you that there is a homeomorphism taking the flow and the solutions of the nonlinear system, locally, to those of the linear system. And now everything you would normally be able to analyze in a linear system, you’re able to do for the nonlinear system locally. So that’s where the power comes in. Like I said, the gist of it is that the solutions of the nonlinear system can be approximated by a linear system, but only in a neighborhood of the equilibrium point, and only in the case where we have hyperbolic fixed points. But that is very powerful, because it allows us to really get a handle on what’s going on locally in a neighborhood of the steady state. For us, we’re looking at, say, how certain diseases progress in the long run. Where are we heading? Where is the patient heading, in terms of blindness? And it really allows us to move in that direction in terms of understanding what is going on. And like I said, it’s powerful not just because it tells us about stability; it tells us that the qualitative structure of the solutions and their behavior, locally, is the same in the linear case and the nonlinear case, because of this topological equivalence.
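As a concrete illustration of the recipe Camacho describes (linearize at an equilibrium, check the eigenvalues for nonzero real part, and read off the local behavior), here is a short Python sketch on a toy two-population system. The system, parameter values, and function names are ours, chosen only for illustration; this is not her retina model.

```python
import numpy as np

# Toy nonlinear planar system v' = f(v), chosen only for illustration.
def f(v):
    x, y = v
    return np.array([x * (1 - x - y),
                     y * (0.5 - y - 0.2 * x)])

def jacobian(f, v, h=1e-6):
    """Forward-difference numerical Jacobian of f at the point v."""
    J = np.zeros((len(v), len(v)))
    fv = f(v)
    for j in range(len(v)):
        dv = np.zeros(len(v))
        dv[j] = h
        J[:, j] = (f(v + dv) - fv) / h
    return J

# Coexistence equilibrium: solve 1 - x - y = 0 and 0.5 - y - 0.2x = 0.
eq = np.array([0.625, 0.375])
print("f(eq) ~ 0:", f(eq))

eigs = np.linalg.eigvals(jacobian(f, eq))
print("eigenvalues of the linearization:", eigs)

tol = 1e-8
if np.all(np.abs(eigs.real) > tol):   # hyperbolic: Hartman-Grobman applies
    verdict = "asymptotically stable" if np.all(eigs.real < 0) else "unstable or a saddle"
    print("hyperbolic equilibrium; locally the flow looks like its linearization:", verdict)
else:
    print("non-hyperbolic: the linearization alone does not decide the local behavior")
```

For this toy system the eigenvalues come out to roughly -0.25 and -0.75, so the equilibrium is hyperbolic and locally asymptotically stable, which is exactly the kind of conclusion the theorem licenses.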
KK: That’s pretty remarkable. But I guess the neighborhood might be pretty small, right?
EC: Right. The neighborhood is small.
KK: Sure.
EC: In nonlinear systems you can have plenty of different equilibrium points, each with its own neighborhood, but remember that your solutions in the phase space change continuously, so you are able to kind of piece together what is going on, more or less. For sure you know what’s going on in the long-term behavior, you know what’s going on around each of those neighborhoods, and you know it for given initial conditions, which is really key in applications, because sometimes we’re asking what happens for different initial conditions. What are the steady states? What do the solutions look like in the long run? What is going on for different initial conditions?
KK: So if you’re modeling the retina, how many equations are we talking? How big are these systems?
EC: Well, that’s the thing. In the very most simplified case, where you’re able to divide the photoreceptors into the rods and cones, then you have two populations.
KK: Okay.
EC: And in one of the cases we’re looking at the flow of nutrients, so we are also considering the retinal pigment epithelium cells, which is another population, so you have three equations in that case. So that’s a more simplistic situation, but it’s a situation where we have been able to really get a sense of what’s going on in terms of degeneration in these two classes of photoreceptors that undergo a mutation. So one of the diseases I work on is retinitis pigmentosa. The reason it is a very complicated case, one we haven’t been able to really get a handle on or come up with better therapies and better ways of stopping the degeneration of the photoreceptors—in fact there is no cure for stopping photoreceptors from degenerating—is that the mutation happens in the rods. The rods are the ones that are ill. Yet the cones, which are perfectly healthy, die too. Trying to understand how the rods communicate with the cones in a way that ends up also killing the cones is an important part. And with a very simplistic model for the undiseased case, we were able to show, before this link was discovered biologically, that the rods produce a protein, called the rod-derived cone viability factor, that helps the cones survive. We were able to show that mathematically just by analyzing the equilibria, looking at different things in the long run and at the invariant spaces, and using what we know from basic biology about what happens to the rods and the cones; from that we realized that the communication had to be a one-way interaction from the rods to the cones. So that’s one of the models we have. And then once we had that handled, we were able to introduce the disease and look at a four-dimensional system.
Now we’re looking internally at the metabolic process inside the cones, because there is a metabolic process there. So the rods produce this protein. How is that protein taken up by the cones, and what does it do once it’s inside the cones? For that we really need to look at the metabolic processes and the kinetics of the cones and also the rods. There, if you’re just considering the cones, you’re looking at 11 or 12 differential equations.
KK: Wow.
EL: Wow.
EC: With many parameters. So at that point we’re going to a much higher dimension. And that’s where we currently are. But that has given us a lot of insight, not just in how the rods help the cones but how is it that other processes are being influenced, getting affected? And again, where the Hartman-Grobman theorem applies is to autonomous systems, where time is not explicit in the equations.
EL: Okay.
KK: This is fascinating.
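For a feel of what such rod-and-cone systems look like on a computer, here is a deliberately tiny Python caricature of a one-way rods-to-cones interaction. The equations and parameter values are invented for illustration; this is not the published model or the 11-to-12-equation metabolic system described above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical two-population caricature: r = rods (diseased, decaying at rate mu),
# c = cones, whose growth depends on the support term a*r supplied by surviving rods.
mu, a, b = 0.3, 1.0, 0.8

def rhs(t, v):
    r, c = v
    dr = -mu * r              # sick rods die off on their own
    dc = c * (a * r - b * c)  # cones persist only while rod support lasts
    return [dr, dc]

sol = solve_ivp(rhs, (0, 50), [1.0, 0.5], dense_output=True)
for ti in np.linspace(0, 50, 6):
    r, c = sol.sol(ti)
    print(f"t = {ti:5.1f}   rods = {r:.4f}   cones = {c:.4f}")
```

The output shows the two-wave story in miniature: the rod population decays first, and once its support term vanishes, the cone population declines toward zero as well.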
EL: Math gets this kind of rap for being really hard, but then you think, like, math is so much simpler than this biological system. Your rods being sick make your cones die!
EC: But I think the mathematics is essential. There’s a big cost in taking certain experiments to the lab just to understand what is going on. There’s a cost, there’s a time dependence, and math bypasses some of that once you have a mathematical model that is able to predict things. That’s why you start with things that are already known. Many times the first set of models that I create are models that show what we already know. They’re not giving any new insight; it’s just to show that the foundation is solid and we can build on it, and now we can introduce some new things and be able to ask questions about things we don’t know. Because once we are able to do that, it really guides us, or at least indicates what kinds of lab studies and experiments should be run and what kinds of things should be focused on. And that’s one of the things we do. For example, one of my collaborations is with the Vision Institute of Paris, with the director there and the director of genetics as well. And I think working together has really helped guide their experiments and their understanding of where they should be looking, just as they helped me really understand what types of systems we need to consider and what things we can neglect and don’t have to focus on. And I think that’s the thing: mathematics is really powerful to have in a biological system.
EL: Yeah.
EC: And my favorite theorem can be used to gain insight into photoreceptor degeneration in very complicated systems. Another thing that’s really powerful about the Hartman-Grobman theorem is that you don’t have to find a solution to the nonlinear system to get an understanding of what’s going on and to get insight into the qualitative behavior of its solutions. And I think that is really powerful. Do you have any questions?
EL: So, I mean, a lot. But something I always think is interesting about applied mathematicians is that often they end up working in really different application areas. So did you start out looking at retinas and that kind of biological system, or did you start out somewhere else in applied math and gradually move your way over there?
EC: So when I started in applied math, what I really liked was dynamical systems. Yes, the first project I worked on as a graduate student was actually looking at the cornea and how different light intensities affect the developing cornea. And for that I really had to learn about the physiology of the eye and the physiology of the retina. But then I did that for graduate school, and initially once I went out of graduate school, in my postdoc I was working on how different fanatic groups get formed.
EL: Oh wow, really different application.
EC: Which was at Los Alamos. I was looking at what the sources of power are that allow groups that can become terrorists, for example, to really become strong. What are the competing forces? So it was more of a sociological application, but again using dynamical systems to try to understand it. Later on I moved to the more general area of math biology, looking at other systems and diseases, but then I came back to the eye through an undergraduate project in an REU. Usually the way I work with undergraduates is I make them be the ones who ask the question, who select the application. And I tell them, you have to go learn all about it because you have to come and teach me, and then from there I’m going to help you formulate the questions that can be put into a mathematical equation and modeled somehow. They were very interested, and they wanted to do something with a PDE. And they thought, well, something with the retina, and that retinitis pigmentosa would be perfect for modeling with a PDE and analyzing it that way. But as I learned more about the disease, the interesting thing is that when the cones begin to die, there is no spatial dependence anymore. The rods are the ones that are sick, but the cones don’t die in a way where you can see a spatial pattern. It’s really more random, and it’s more dependent on the fact that this protein is no longer being synthesized by the rods. Many times what happens is there is a first wave of death of photoreceptors where the rods die, and when most of them are gone, when 90 percent of them are gone, then the cones begin to die. And yes, you can think about wakes and their velocities, but there is not this spatial dependence. Initially there is, but that’s when only the rods are dying. When we are really interested in asking why the cones die, that’s not the case anymore.
KK: It’s sort of a uniformly distributed death pattern, as it were? What I love about this is, you know, here’s a problem that basically a second-year calculus student can understand in some sense. You have two populations. We teach them this all the time. You have two populations, and they’re interacting in some way. What’s the long-term behavior? But there are still so many sophisticated questions you can ask and complicated systems there. Yeah, I can see why your undergrads were interested in this, because they understood it immediately, at least that it could be applied. And then they brought this to you, and now you’re hopefully going to cure RP, right?
EC: Right, well the thing is that you can understand it, and you can use math that is not very high-level to start getting your hands dirty. And for example, now that we’re looking at this multi-level structure, where you’re looking at the molecular level and also at the cellular level, you’re really asking multi-scale questions: how can we better analyze the system when we have multiple scales, right? And then there are sometimes questions about delay. So the more focused and the more detailed the model becomes, the more difficult the mathematics becomes.
KK: Sure.
EC: And then there are also questions where, even without the biology, there’s a lot of interesting mathematics going on that you could analyze. We did a project like that with a collaborator where the parameter space was not really relevant biologically, but the mathematics was very interesting. We had all this different behavior: not just equilibrium points, but periodic solutions, tori, all of this, and what is going on? A lot of it happened in a very small region, and it became more of a mathematical kind of analysis rather than just a biological one.
EL: Yeah, very cool. So another part of this program is that we like to ask our guest to pair their theorem with something, you know, food, beverage, music, art, anything like that. So have you chosen a pairing for the Hartman-Grobman theorem?
EC: I thought about it a lot, because like I said it’s such a powerful theorem, and I go back to the idea of it’s addicting. I think anyone who’s worked in dynamical systems in the nonlinear case in a continuous timeframe definitely utilizes this theorem. It comes to a point where we are doing it automatically. So I thought, what is something that I consider very addicting, yet it looks very simple, right? It’s elegant but simple. But once you have it, it’s addicting. And I could not think of anything else but the Tennessee whiskey cake. Have you ever had it?
KK: No.
EL: No, but it sounds dangerous.
EC: It is delicious. It’s funny, I don’t like whiskey, and I had it when I went to San Antonio to give a talk one time. I was like, well, okay, everyone wanted it, so I decided to go with it. I usually pick chocolate because that is my favorite.
EL: Yeah, that’s my go-to.
EC: I love chocolate. So I said, well, let me try it. It was the most delicious thing. Now I want to be able to bake it, make it. I had a piece, and I want more.
KK: So describe this cake a little bit. Obviously I get that it has bourbon in it.
EC: The way it’s served is it’s served warm, and it has vanilla ice cream. It has nuts, and it has this kind of butterscotch or sometimes chocolate sauce over it. And it’s very moist. It has those different layers. I also think, right, in terms of complexity, it has these different layers. In order to get a sense of the power of it, you have to kind of go through all the layers and have all of them in the same bite. And I feel like that with the Hartman theorem, right, that the power of it is really to apply it to something that has nonlinearity, that is really complex, and something that you know you might not be able to get a handle on the solutions analytically, but you still want to be able to say what is going on, what is the behavior, where are we heading? To somehow be able to infer what the solutions are through a different means, to be able to go around, and it gives you that kind of ability.
KK: And this is where whiskey helps.
EC: Well the whiskey’s the addicting part, right?
EL: So have you made this cake at all, or do you usually order it when you’re out?
EC: Usually I order it when I’m out. But I want to make it. So my mom’s birthday is coming up on August 3rd, and I’m going to try to make it. I was telling my husband, “We’re going to have to make it throughout the next few days because I’m pretty sure we’re going to go through a few trials.”
KK: Absolutely.
EC: I can never get it right.
EL: Even the mistakes will be rewarding, just like math.
KK: And again the whiskey helps.
EC: But that’s an interesting question. I thought, what can I pair it with? And the only thing I could think of was something that’s addicting or something that has multiple layers but that all of them have to be taken at once, that you’re allowed to look at all of them at once.
EL: This fits perfectly.
KK: Sounds great. Well I’ve learned a lot today. I’ve never thought about modeling the eye through populations of rods and cones, but now that you say it, I guess sure, of course. And now I have to look up Tennessee whiskey cake.
EL: Yeah, it’s really good. You should try it.
KK: I’m going to go do that.
EL: It’s almost lunch here, so you’re definitely making me hungry.
KK: Well thanks a lot for joining us.
EL: Thanks a lot for being here.
EC: Well thank you so much for having me here. I really enjoyed it.
EL: Hello and welcome to My Favorite Theorem, a podcast where we ask mathematicians what their favorite theorem is. I’m your cohost Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City, Utah. This is your other cohost.
KK: Hi, I’m Kevin Knudson, professor of mathematics at the University of Florida. How are you doing, Evelyn?
EL: I’m all right. How about you?
KK: Okay. So one of our former guests, who I won’t name, was giving a big lecture here at the colloquium series this week, so I got to meet that person in person.
EL: Oh, excellent.
KK: So I might even have a better picture for the webpage, for the post to say, hey, our hosts and guests can actually be in the same place at the same time.
EL: Yeah, that would be exciting. And one of these days, maybe you and I will meet in person, which I’m pretty sure we have not yet.
KK: Maybe. I know we haven’t. I keep threatening to come to Salt Lake City, but I don’t think Salt Lake can handle me. I have actually been there once. Wonderful town. It’s a great city.
EL: I like it. So today we are very glad to have Holly Krieger on the show. So Holly, would you like to tell us a little bit about yourself?
HK: Sure, I’d be happy to. Thanks for having me, first of all. So I am a lecturer at the University of Cambridge. I’m also a fellow at one of the constituent colleges of Cambridge, Murray Edwards College, and the kind of math I’m most interested in is complex dynamics and number theory. So I do a lot of studying of the Mandelbrot set and the arithmetic properties of these kinds of things and related questions.
EL: And I see you and I have the same poster of the Mandelbrot set. Mine is not actually hanging up yet. You have been better at getting the full experience by hanging it up, but I see that poster behind you.
HK: That’s right, the Mandelmap. It’s amazing, this poster. I just found it on Kickstarter, and then I sent it to a bunch of mathematician friends, and so occasionally I will go to visit someone mathematically, and they have the same poster in their office. It’s very satisfying.
EL: Well, we have invited you here to ask you what your favorite theorem is. So what’s your favorite theorem?
HK: So here’s the thing: I shouldn’t be on this podcast because I don’t have a favorite theorem.
KK: No, no, no.
HK: I don’t have a favorite theorem, it’s true. Somehow I’m too much of a commitment-phobe, like I have a new favorite theorem every week or something like that. I can tell you this week’s favorite theorem.
EL: That’s good enough.
KK: That’s fine. Ours have probably changed too. Evelyn and I in Episode 0 stated our favorite theorems, and I’m pretty sure Evelyn might have changed her mind by now.
EL: Yeah, well, one of our other guests, Jeanne Clelland, made a pretty good case for the Gauss-Bonnet theorem.
KK: She really did.
EL: I think my allegiance has shifted.
HK: Maybe you can do a podcast retrospective, every 20 episodes or something, what are the hosts’ favorite theorems today?
KK: That’s a good idea, actually. Good.
HK: So, my favorite theorem for this week. I love this theorem because it is both mathematically sort of really heavy-hitting and also because it has this sort of delicious anti-establishment backstory to it. My favorite theorem this week is Brouwer’s fixed-point theorem.
KK: Nice.
HK: Maybe I should talk about it mathematically first, maybe the statement?
EL: Yeah.
HK: Okay. So I think the easiest way to state this is the way Brouwer would have thought about it, which is if you take a closed ball in Euclidean space, so you can think about an interval in the real line, that’s a closed ball in the one-dimensional Euclidean space, or you can think about a disc in two-dimensional space, or what we normally think of as a ball in three-dimensional space, and higher you don’t think about it because our brains don’t work that way. So if you take a closed ball in Euclidean space, and you take a continuous function from that closed ball to itself, that continuous function has to have a fixed point. In other words, a point that’s taken to itself by the function.
So that’s the statement of the theorem. Even just avoiding the word continuous, you can still state this theorem, which is that if you take a closed ball and morph it around and stretch it out and do crazy things to it, as long as you’re not tearing it apart, you’ll have a fixed point of your function.
KK: Or if you stir a cup of coffee, right?
HK: That’s right, so there’s this anecdote that what Brouwer was thinking about—I have no idea if this is accurate.
KK: Apocryphal stories are the best.
HK: Reading about him biographically, I almost feel like coffee would be too exciting for Brouwer. So I’m not actually sure about the accuracy of this story. So the story goes that he was stirring his coffee, and he noticed that there seemed to be a point at every point in time, a point where the coffee wasn’t moving despite the fact that he was stirring this thing. So that actually leads to one of the reasons I like this in terms of real-world applications. It’s a good—well, depending on who you hang out with, it’s a good—cocktail party theorem because if you’re making yourself a cocktail and you throw all the ingredients into your shaker and you start stirring them up, well, when you’re done stirring it, as long as you haven’t done anything crazy like disconnected the liquid inside of the shaker, then you’ve got to have some point in the liquid that’s returned to its original spot. And I think that’s a fun version of the coffee anecdote.
EL: But the cocktail would definitely be too exciting for Brouwer.
HK: I would be really surprised. He was a vegetarian, not that you can’t be a fun vegetarian. He was a vegetarian, and he was sort of a health nut in general, and that was back in a time—he proved this theorem in the early 1900s—back in a time when I don’t think that behavior was quite so common.
KK: It was more, like, on a commune. You’d go to some weird, well I shouldn’t say weird, you’d go to some rural place and hang out with other like-minded people.
HK: That’s right.
KK: And live this healthful lifestyle. You would eschew meat and sugar and all that stuff.
HK: Right, exactly. So the other way I like to describe this in terms of the real world, and I think this is a common way Brouwer himself actually described this, is that if you take a map, so take a map of somewhere that’s rectangularly shaped. You can either think the map itself is a rectangle, so whatever it pictures is a rectangle, or you can think of Colorado or something like that. If you take a map, and you’re in the place that’s indicated by the map, then there’s somewhere on the map that lies at precisely the point in the place that it represents. Namely, where you are. But you can get more specific than that. So those are two sort of nice ways to visualize this theorem.
One of the reasons I like it is that it basically touches every subfield of mathematics. It has implications for differential equations and almost any sort of applied mathematics that you might be interested in. Things like existence of equilibrium states and that kind of thing over to its generalizations, which touch on number theory and dynamical systems and these kinds of things through Lefschetz fixed-point theorem and trace formula and that kind of thing. So mathematically speaking, it’s sort of the precursor to the entire study of fixed-point theorems, which is maybe an underappreciated spine running through all of mathematics.
KK: Since you’re interested in dynamics, I can see why you might really be interested in this theorem.
HK: Yeah, that’s right. It comes up particularly in almost any kind of study of dynamical systems, where you’re interested in iteration, this comes up.
EL: I like to ask our guests if this was a love at first sight theorem or if it’s grown on you over time.
HK: That’s a good question. It’s definitely grown. I think when you first meet this thing, I mean let’s think about it a little bit. In one dimension, how do you think about this theorem? You think, well, I’ve got a map from, say, the unit interval to itself, right, which is a continuous map. I can draw its graph. And this is the statement essentially that that graph has to intersect the line y=x between 0 and 1.
KK: So it’s a consequence of the Intermediate Value Theorem.
HK: That’s right. This is one of those deals where we always tell the calc students, “Tilt your head,” and they always look at us like we’re crazy, but then they all do it and it works. I find this appealing because it’s sort of an intersection-theoretic way to think about it, which is in the direction of the generalizations that I’m interested in. But you don’t realize at first the scope of this perspective, viewing this as an intersection, and how it leads you into algebraic geometry versions of this theorem. Same with the applications to Banach spaces and to equilibrium states, so understanding the breadth of this theorem is not something that happens right away. The other thing, and really why I like this theorem, is the backstory. Can I tell you about the backstory?
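A minimal sketch of the one-dimensional argument just described: for a continuous f from [0, 1] to itself, the function g(x) = f(x) − x is nonnegative at 0 and nonpositive at 1, so the intermediate value theorem gives a zero, which bisection can locate. The particular f below is only an illustrative example.

```python
import math

def fixed_point_1d(f, lo=0.0, hi=1.0, tol=1e-10):
    """Locate a fixed point of a continuous f: [lo, hi] -> [lo, hi] by bisection on g(x) = f(x) - x."""
    g = lambda x: f(x) - x
    # g(lo) >= 0 and g(hi) <= 0 because f maps the interval into itself,
    # so the intermediate value theorem guarantees a zero of g in between.
    a, b = lo, hi
    while b - a > tol:
        m = 0.5 * (a + b)
        if g(m) >= 0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

# Illustrative continuous map of [0, 1] into itself.
f = lambda x: math.cos(x) / 2 + 0.25
print(fixed_point_1d(f))   # approximately 0.6485, where f(x) = x
```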
KK: Absolutely.
HK: So Brouwer, you can already tell I kind of don’t like him, right? So Brouwer was a Dutch mathematician, and he was essentially the founder of a school of mathematical philosophy known as intuitionism. What these people think, or perhaps thought—I don’t know who among us is one of them at this point—what these people think is that essentially mathematics is a result of the creator of mathematics, that there is no mathematics independent of the person who is creating the mathematics. So weird consequences of this are things like not believing in the law of the excluded middle. So they think a thing is only true if you can prove it and only not true if you can provide a counterexample. So something that is an open problem, for example, they consider to be a counterexample, or whatever you want to say, to the law of the excluded middle. So it’s in some sense a time-dependent mathematical philosophy. It’s not that everything is either true or not in the system, but true or not or not yet.
EL: That’s interesting. I don’t know very much about this part of math history. I’ve sort of heard of the fact that you don’t have to necessarily accept the law of the excluded middle, but I hadn’t heard people talk about this time-dependent aspect of it. I guess this is before we get into Cantor and Gödel, or more Gödel and Cohen’s, incompleteness theorems, which kind of seem like that would be a whole other wrench into things.
HK: That’s right. So this does predate Gödel, but it’s after Cantor. This was basically a knee-jerk reaction to Cantor. So the reason why I’m sort of anti-this philosophy is that I view Cantor as a true revolutionary in mathematics.
KK: Absolutely.
HK: Maybe I’ll have a chance to say a little bit about the connection between the Brouwer fixed-point theorem and some of what he did, but Cantor sat back, or took a step back, and said, “Here’s what the size of a set is, and I’m going to convince you that the real line and the real plane, this two-dimensional space, have the same size.” And everyone was so deeply unhappy with this that they founded schools of thought like intuitionism, essentially, which sort of forced you to exclude an argument like Cantor’s from being logically valid. And so anyone who was opposed to Cantor, I have a knee-jerk reaction to, and the reason I find this theorem so delicious, sort of appealing, is because it’s not constructivist. Brouwer’s fixed-point theorem doesn’t hand you the fixed point, which is what Brouwer says you should have to do if you’re actually proving something. He really believed, I mean, he worked on it from his thesis to his death, essentially, while he was active, he really believed in this philosophy of mathematics that you cannot say there exists a thing but I can’t ever tell you what it is. He thought you really had to hand over the mathematical object in order to convince somebody. And yet one of his most famous results fails to do exactly that. And the reason why is that his thesis advisor was like, “Hey, no one is going to listen to you unless you do some actual mathematics.” So he put aside the philosophy for a few years, proved some nice theorems in topology, in sort of the formalist approach, and went back to mathematical philosophy.
KK: I did not know any of this. This whole time-dependent mathematics, now I can’t stop thinking about Slaughterhouse-5, right, you’ve read Slaughterhouse-5? The Tralfamadorians would tell us, you know, that it’s already all there. It’s encased in amber. They can see it all, so they know what theorems we’re going to discover later.
HK: That’s right.
KK: So what’s your favorite proof of this theorem?
HK: So I think my favorite proof of this theorem is probably not Brouwer’s. It’s probably an algebraic topology proof, essentially.
KK: I thought you’d go with the iteration proof, but okay.
HK: No, I don’t think so because what it’s really about to me, it really is a topological statement about the nonexistence of retractions. So let’s just talk about the disc, let’s do the two-dimensional version. So if you had, so first of all, it’s a proof by contradiction, which already Brouwer is not on board with, but let’s do it anyways. So if you had a function which was a continuous map of the closed unit disc to itself which had no fixed point, then you could define a new function which maps the closed disc to its boundary, the circle, in the following way. If you have a point inside the disc, you look at where its image is. It’s somewhere else, right, because there are no fixed points. So you can draw the ray from its image through that point in the plane. That ray will hit the unit circle exactly once. That’s the value you assign the point in this new function. This will give you a new map, which maps the closed unit disc to its boundary, so this map is a retraction, which means it acts as the identity on the unit circle, and it maps the entire disc continuously onto the boundary circle. And such a thing can never exist.
KK: You’ve torn a hole in the disc.
HK: You’ve torn a hole in the disc. It’s really believable, I mean, rather than a rigorous proof, think about the interval. Take every point in the interval and assign it a value of either 0 or 1. You obviously have to tear it to do it. It’s totally clear in your head. With the disc, maybe it’s not quite so obvious. Usually the cleanest proof of the non-existence of a retraction like this goes through algebraic topology and understanding what the fundamental groups of these two objects are.
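A minimal sketch of the geometric step in that construction, in the two-dimensional case: given a point p of the closed unit disc and its image q = f(p) with q ≠ p, follow the ray from q through p until it meets the unit circle. The map f below is only an illustrative stand-in; since any actual continuous map of the disc does have a fixed point, the code shows the ray construction rather than the contradiction itself.

```python
import numpy as np

def ray_to_circle(p, q):
    """Point where the ray from q through p (p != q, both in the closed unit disc) meets the unit circle."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    d = p - q                                  # direction of the ray
    # Solve |q + t d|^2 = 1; because |q| <= 1 the roots have opposite signs,
    # so the larger root is the forward exit point of the ray.
    a = d @ d
    b = 2.0 * (q @ d)
    c = q @ q - 1.0
    t = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return q + t * d

# Illustrative map of the disc to itself (it does have fixed points,
# so this only demonstrates the geometric step, not the full proof).
f = lambda p: 0.5 * np.asarray(p)

p = np.array([0.3, 0.4])
r = ray_to_circle(p, f(p))
print(r, np.linalg.norm(r))                    # lies on the unit circle, norm = 1
```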
KK: That’s the proof I was thinking of, being a topologist.
HK: You thought maybe I’d be dynamical about it?
KK: Well, you could just pick a point and iterate, and since it’s a complete metric space, it converges to some point, and that thing has to be fixed. But that’s also not constructive, right?
HK: It’s also not constructive. But there are approximate construction versions.
KK: Right.
HK: One more thing I like about this theorem, in terms of its implications, is it’s one more tool Brouwer used in his theorem proving the topological invariance of dimension, that dimension is a well-defined notion under homeomorphisms. In particular, you don’t have a homeomorphism, just stretching, continuous in both directions, from, like, R^n to R, n-dimensional Euclidean real space to the real line. This doesn’t sound earth-shattering to us now. I think we kind of take it for granted. But at the time, this was not so long after Cantor was like, “Oh, but there is actually an injection, right, from n-dimensional Euclidean space to the real line.” So it’s not that it was surprising, but it was sort of reassuring, I think, that if you impose continuity this kind of terrible behavior can’t happen.
KK: Right. In other words, you need additional structure to get your sense that the plane is bigger than the line.
HK: That’s right. Although even taking into account continuity, topology is weird sometimes. There are space-filling curves, so in other words, there are surjective maps from the real line, (well, let’s just stick to intervals) from the unit interval to any-dimensional box that you want. And so somehow that’s really counterintuitive to most people. So it’s not so obvious that maybe what you think of as the reverse, an injection of a large space into a small space, maybe that would be problematic. But thanks to Brouwer’s fixed-point theorem, it’s not.
KK: So what pairs well with Brouwer’s fixed-point theorem?
HK: Well, okay, it has to be a cocktail, right, because I chose the cocktail example and because cocktails are fun. And they’re anti-Brouwer, presumably, as we discussed. So for the overlap of the cocktail description and the map description that I gave of Brouwer’s fixed-point theorem, I’m going to go with a Manhattan.
EL: Okay.
KK: Is that your favorite cocktail?
HK: It’s one of my favorites. Also Manhattan is almost convex.
KK: Almost.
HK: Almost convex.
KK: So you’re a whiskey drinker?
HK: I am a whiskey drinker.
KK: All right. I don’t drink too much brown liquor because if I drink too much of it I’ll start fights.
HK: Fortunately being sort of small as a human has prevented me from starting too many fights. I just don’t think I would win.
EL: So in my household I am married to a dynamicist, so I’m a dynamicist-in-law, but I’m more of a geometer, and we have this joke that there are certain chores that I’m better at, like loading the dishwasher because I’m good at geometry and what shapes look like. My spouse is good at dynamics, and he is indeed our mixologist. So do you feel like your dynamical systems background gives you a key insight into making cocktails? It certainly seems to work with him.
HK: Definitely for the first cocktail. Subsequent cocktails, I don’t know.
KK: Well I’m going to happy hour tonight. Maybe I’ll get a Manhattan.
HK: Maybe you should talk about Brouwer’s fixed-point theorem.
KK: With my wife? Not so much.
HK: Doesn’t go over that well?
KK: Well, she would listen and understand, but she’s an artist. Cocktails and math, I don’t know, not so much for her.
HK: I don’t know, that just makes me think of, okay, wow, I’m really going to nerd it out. Do you guys ever watch Battlestar Galactica?
EL: I haven’t. It’s on my list.
KK: When I was a kid I watched the original.
HK: The new one. All right. This is for listeners who are BSG nerds. So there’s this drawing of this vortexy universe, this painting of the vortexy universe that features in the later, crappier seasons. Now that makes me think it’s kind of an illustration of Brouwer’s fixed-point theorem. So maybe you should tell your wife to try and paint Brouwer’s fixed-point theorem for you.
KK: Okay.
HK: Marital advice from me. Don’t take it.
KK: We’ve been married for almost 26 years. I think we’re okay. We’re hanging in all right. So we always like to give our guests a chance to plug anything they’ve been working on. You’ve been in a bunch of Numberphile videos, right?
HK: Yeah, that’s right, and there will be more in the future, so if anyone hasn’t checked out Numberphile, it’s this amazing YouTube channel where maths is essentially explained to the public. Mathematicians come, and they talk about some interesting piece of mathematics in what is really meant to be an accessible way. I’ve been a guest on there a couple of times, and it’s definitely worth checking out.
EL: Yeah, they’re great. Holly’s videos are great on there. I like Numberphile in general, but I have personally used your videos about the Mandelbrot set, the dynamics of it and stuff, when I’ve written about it, and some other related dynamical systems. They’ve helped me figure out some of the finer points that as not-a-dynamicist maybe don’t come completely naturally to me.
HK: Oh, that’s awesome.
EL: I’ve included them in a few of the posts I’ve done, like my post about the Mandelbrot set.
HK: That’s amazing. That’s good because I’ve used your blog a few times when I’ve tried to figure out things that people might be interested to know about mathematics and things that are accessible to write and talk about to people. So it goes both directions.
EL: Cool.
KK: It’s a mutual lovefest here.
EL: People can also find you on Twitter. I don’t remember actually what your handle is.
HK: It’s just my name, @hollykrieger.
EL: Thanks a lot for being on the show. It was a pleasure.
HK: Thanks so much for having me. It was great to talk to you guys.
KK: Thanks, Holly.
Kevin Knudson: Welcome to My Favorite Theorem! I’m your host Kevin Knudson, professor of mathematics at the University of Florida. And this is your other host.
Evelyn Lamb: Hi. I’m Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City. So how’s it going, Kevin?
KK: It’s okay. Classes are almost over. I’ve got grades for 600 students I still need to upload. But, you know, it’s an Excel nightmare, but once we get done with that, it’s okay. Then my son comes home for Christmas on Saturday.
EL: Oh great. I don’t miss grading. I miss some things about teaching, but I don’t miss grading.
KK: No.
EL: I don’t envy this time of the semester.
KK: Certainly not for a 600-student calculus class. But you know, I had a good time. It’s still fun. Anyway, today we are pleased to welcome Vidit Nanda. Vidit, why don’t you introduce yourself and tell everyone about you?
Vidit Nanda: Hello. My name is Vidit Nanda. I’m a research fellow at the University of Oxford and the amazing new Alan Turing Institute in London. This year I’m a member at the School of Mathematics at the Institute for Advanced Study in Princeton. I’m very happy to be here. Thank you both for doing this. This is a wonderful project, and I’m very happy to be a part of it today.
KK: Yeah, we’re having a good time.
EL: Can you tell us a little more about the Alan Turing Institute? I think I’ve heard a little bit about it, but I guess I didn’t even know it was that new. I thought I had just never heard of it before.
VN: Right. So about three years ago, and maybe longer because it takes time to set these things up, the UK decided they needed a national data science center, and what they did was they collected proposals from universities, and the original five universities that got together and contributed funds and professors and students to the Turing Institute were Oxford, Cambridge, Warwick, UCL, and Edinburgh. Now we have a space on what they call the first floor of the British Library, and we would call the second floor of the British Library. Half of that floor is called the Alan Turing Institute, and it’s kind of crazy. You enter the British Library, and there’s this stack of books that kind of looks like wallpaper. It’s too beautiful, you know, but it is real. It’s behind glass. And then you turn to the right, and it’s Las Vegas, you know. There’s a startup-looking data science center with people dressed exactly the way you think they are, with the hoodies, you know. It’s sort of nuts. But there are two things I should tell everyone who’s listening about the Alan Turing Institute. The first one is that if you walk down a flight of steps, there’s a room called Treasures of the British Library. Turn left, and the first thing you see is a table with Da Vinci’s sketches right next to Michelangelo’s letters and the first printing of Shakespeare. Those are the first things you see. So if you’re ever thinking about cutting a corner in a paper you’re writing, you go down to that room, you feel bad about yourself for ten minutes, and you rush back up the stairs, inspired and ready to work hard.
KK: Yeah. This sounds very cool.
EL: Wow, that’s amazing.
VN: That’s the first table. There’s other stuff there.
KK: Yeah, I’m still waiting on my invitation to visit you, by the way.
VN: It’s coming. It would help if I’m there.
KK: Sure, once you’re back. So, Vidit, what’s your favorite theorem?
VN: Well, this will not be a surprise to the two of you since you cheated and you made me tell you this in advance. And this took some time. My favorite theorem is Banach’s fixed point theorem, also called the contraction mapping principle. And the reason it’s my favorite theorem is it’s about functions that take a space to itself, so for example, a polynomial in a single variable takes real numbers to real numbers. You can have functions in two dimensions taking values in two dimensions, and so on. And it gives you a criterion for when this function has a fixed point, which is a point that’s sent to itself by the function.
One of the reasons it’s my favorite theorem—well, there are several—but it’s the first theorem I ever discovered. For the kids in the audience, if there are any, we used to have calculators. I promise. They looked like your iPhone, but they were much stupider. And one of the most fun things you could do with them was mash the square root button like you were in a video game. This is what we had for entertainment.
KK: I used to do this too.
VN: Take a large number, and you mash the square root button, and you get 1. And it worked every time.
KK: Right.
VN: And this is Banach’s fixed-point theorem. That’s my proof of Banach’s fixed-point theorem.
KK: That’s great. What’s the actual statement, though? Let’s be less loose.
VN: Right. The actual statement requires a little bit more work than having an old, beat-up calculator. The setup is kind of simple. You have a complete metric space, and by metric space we mean a space where points have a well-defined distance subject to natural axioms for what a distance is, and complete means that if you have a sequence of points getting closer and closer to each other, they actually have a limit. They stop somewhere. Now suppose you have a function from such a complete metric space to itself that brings pairs of points strictly closer together: the distance between f(x) and f(y) should be at most some constant less than 1 times the distance between x and y. If this is true, then the function has a unique fixed point, and the amazing part about this theorem, which I cannot stress highly enough, is that the way to find this fixed point is to start anywhere you want, pick any initial point and keep hitting f, this is mashing the square root button, and very quickly you converge to the actual fixed point. And when you hit the square root button at the fixed point, nothing changes, you just stay at 1.
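A minimal sketch of the square-root-button experiment phrased as the Banach iteration; the starting value and tolerance are arbitrary. On an interval like [1/2, 10^6], the square root maps the interval into itself and is a contraction (its derivative there is at most 1/√2), so the theorem guarantees convergence to the unique fixed point, 1.

```python
import math

def banach_iterate(f, x0, tol=1e-12, max_steps=200):
    """Iterate x -> f(x) until successive values are within tol (the Banach fixed-point recipe)."""
    x = x0
    for _ in range(max_steps):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Mashing the square root button: sqrt is a contraction on [0.5, 1e6]
# (its derivative there is at most 1/sqrt(2) < 1), with unique fixed point 1.
print(banach_iterate(math.sqrt, 123456.0))   # converges to 1.0
```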
KK: And it’s a unique fixed point?
VN: It’s a unique fixed point because wherever else you start, you reach that same place. So I’m an algebraic topologist by trade, and this is very much not an algebraic topology fixed-point theorem. The algebraic topology fixed-point theorem makes no assumptions on the function, like it should be bringing points closer together. It makes assumptions on the space where the function is taking its values. It says if the space is nice, maybe convex, maybe contractible, then there is a fixed point, no uniqueness and no recipe for converging to the fixed point.
KK: In fact, we recently had a guest who chose the Brouwer fixed-point theorem.
EL: Yeah.
VN: Yes, the Brouwer fixed-point theorem is one of my favorites, it’s one of the tools I use in my work a lot, but I always have this sort of analyst envy where their fixed-point theorem comes with a recipe for finding the actual fixed point.
KK: Right.
VN: Instead of an existence result.
KK: Yeah, we just wave our hands and say, “Yeah, yeah, yeah, if you didn’t have a fixed point there’d be some map on homology that couldn’t exist and blah blah blah.”
VN: Right. And that’s sort of neat but sort of unsatisfying if what you actually care about are the fixed points.
EL: Yeah, so in some ways I kind of ended up more of an analyst because of this. I was really attracted to algebra and that kind of thing, and I felt like at some point I just couldn’t do anything. I felt like in analysis, at least I could get a bound on something, even if it was a really ugly bound, I could at least come in with my hands and play around in the dirt and eventually come up with something. This is probably showing that somehow my brain is more likely to succeed at analysis or something because I know there are people who get to algebra and they can do things, but I just felt like at some point it was this beautiful but untouchable thing, and analysis wasn’t so pretty, and I didn’t mind going and mucking it up.
KK: I had the opposite point of view. I never liked analysis. All those epsilons and deltas, and maybe it was a function of that first advanced calculus course, where you have to get at the end the thing you’re looking for is less than epsilon, not 14epsilon+3epsilon^2. It had to be less than epsilon. I was like, man, come on, this thing is small! Who cares? So I liked the squishiness of topology. I think that’s why I went there.
VN: I think with those epsilon arguments, I don’t know about you guys, but I always ended up doing it twice. You do it the first time and get some hideous function of epsilon, and then you feed back whatever you got to the beginning of the argument, dividing by whatever is necessary, and then it looks like, when you submit your solution, it looks like you were a genius the whole time, and you knew to choose this very awkward thing initially, and you change the argument.
KK: That’s mathematics, right, when you read a paper, it’s lovely. You don’t see all the ugly, horrifying ream of paper you used for the calculations to get it right, you know. I think that’s part of our problem as mathematicians from a PR point of view. We make it look so slick at the end, and people think, wait a minute, how did you do that? Like it’s magic.
VN: We’re very much writing for people next door in our buildings as opposed to people on the street. It helps sometimes, and it also bites us.
KK: This is where Evelyn’s so great, because she is writing for people on the street, and doing it very well.
EL: Well thank you. I didn’t intend this to come back around here, but I’ll take it. Anyway, getting back to our guest, so when did you first encounter this theorem, and was it something you were immediately really into, or did it take some more time?
VN: Actually, the first time I encountered this theorem in a semiformal setting, it just blazed by. I think this is where most people see it for the first time, in a differential equations course. One of the things that’s so neat about this theorem is that it’s what guarantees that when you write down y’(x) equal to some hideous expression of x and y, you can answer: why should this have a solution, how long should it have a solution for, when is a solution unique? And this requires the hideous thing on the right side to be well-behaved enough that the associated integral operator satisfies the contraction mapping property. Existence and uniqueness for ordinary differential equations is the slickest, most famous application of the Banach fixed-point theorem.
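A minimal sketch of the Picard iteration behind that existence-and-uniqueness argument: repeatedly apply the integral operator that sends y to y0 plus the integral from 0 to t of f(s, y(s)) ds, whose fixed point is the solution. The equation y’ = y with y(0) = 1 (solution e^t), the grid, and the iteration count below are all illustrative choices.

```python
import numpy as np

# Picard iteration for y'(t) = f(t, y), y(0) = y0, on a grid over [0, 1].
# Each step applies the integral operator (P y)(t) = y0 + integral_0^t f(s, y(s)) ds,
# which is a contraction on a suitable space; its fixed point is the solution.
f = lambda t, y: y          # illustrative right-hand side: y' = y
y0 = 1.0
t = np.linspace(0.0, 1.0, 201)

def picard_step(y):
    integrand = f(t, y)
    # cumulative trapezoid rule, written out by hand
    increments = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)
    return y0 + np.concatenate(([0.0], np.cumsum(increments)))

y = np.full_like(t, y0)     # start from the constant function y0
for _ in range(25):
    y = picard_step(y)

print(abs(y[-1] - np.exp(1.0)))   # close to 0: the iterates converge to e^t
```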
KK: I’d never thought about it.
VN: And the analyst nods while Kevin stares off into space, wondering why this should be the case.
KK: No, no, you had a better differential equations course than I did. In our first diffeq’s course, we wouldn’t bring this up. This is too high-powered, right?
VN: It was sort of mentioned, this was at Georgia Tech. It was mentioned that this property holds, but there was no proof, even though the proof is not difficult. It’s not so bad if you understand Cauchy sequences, which not everyone in a differential equations course does. So we were not shown the proof, but there’s a contraction mapping principle. And then Wikipedia was in its infancy, so now I’m dating myself badly, but I did look it up then and then forgot about it. And then of course I saw it in graduate school all over the place.
KK: Hey, when I was in college, the internet didn’t exist.
VN: How did you get anything done?
KK: You went to the library.
EL: Did you use a card catalog?
KK: I’m a master of the card catalog.
EL: We had one at my elementary school library.
KK: Geez. So growing up in high school, we used to go to the main public library downtown where they had bound periodicals and so if you needed to do your report about, say, the assassination of John Kennedy, for example, you had to go and pull the old Newsweeks off the shelf from 1963. I don’t know, there’s something to that. There’s something to having to actually dig instead of just having it on your phone. But I don’t want to sound like an old curmudgeon either. The internet is great. Well, although wait a minute, the net neutrality vote is happening right now.
VN: It’s great while we speak. We don’t know what’s going to happen in 20 minutes.
KK: Maybe in the middle of this conversation we’re going to get throttled. So Vidit, part of the fun here is that we ask our guest to pair their theorem with something. So what have you chosen to pair the contraction theorem with?
VN: I’m certainly not going to suggest Plato like one of the recent guests. I have something very simple in mind. The reason I have something simple in mind is there’s an inevitability to this theorem, right? You will find the fixed point. So I wanted something inevitable and irresistible in some sense, so I want to pair it with pizza.
EL: Pizza is the best food. Hands down.
VN: Right. It is the best food, hands down. I’m imagining the sort of heathens’ way of eating pizza, right, you eat the edges and move in. I’ve seen people do this, and it’s sort of very disturbing to me. The edge is how you hold the damn thing in the first place. But if you imagine a pizza being eaten from the outside, that’s how I think of the contraction mapping, converging to the middle, the most delicious part of the pizza. I refuse to tell you what fraction of the last two weeks it took me to come up with this pairing. It’s disturbingly difficult.
KK: So you argue that the middle of the pizza is the most delicious part?
EL: Oh yeah.
KK: See, my dog would argue with you. She is obsessed with the crust. If we ever get a pizza, she’s just sitting there: “Wait, can I have the crust?”
EL: But the reason she gets the crust is because humans don’t find it the most delicious.
VN: If I want to eat bread, I’ll eat bread.
KK: I make my own pizza dough, so I make really good pizza crust. It’s worth eating. It’s not this vehicle. But you’re right. Yeah, sure.
EL: We’re going to press you now. What pizza toppings are we talking here? We really need specifics. It’s 9 am where I am, so I can’t have pizza now unless I made my own.
KK: You could. You can have it any time of the day.
EL: But I don’t think there’s a store open. I guess I could get a frozen pizza at the grocery store.
VN: Kevin would suggest having a quick-rise dough set up that, if you pour your yeast in it, it’ll be done in 20 minutes. I think, I’m not big into toppings, but it’s important to have good toppings. Maybe bufala mozzarella and a bit of basil, keep it simple. There’s going to be tomatoes in it, of course, some pizza sauce. But I don’t want to overload it with olives and peppers and sausage and all that.
EL: Okay. So you’re going simple. That’s what we do. We make our own pizza a lot, and a couple years ago we decided to just for fun buy the fancy canned tomatoes from Italy, the San Marzanos.
VN: The San Marzanos, yeah.
EL: Buy the good mozzarella. And since then, that’s all we do. We used to put a bunch of toppings on it all the time, and now it’s just, we don’t even make a sauce, we just squish the tomatoes onto the pizza. Then put the cheese on it, and then the basil, and it’s so good.
KK: I like to make, I assume you’ve both been to the Cheese Board in Berkeley?
EL: No, I haven’t. I hear about it all the time.
KK: It’s on Shattuck Ave in Berkeley, and they have the bakery. It’s a co-op. The bakery is scones—delicious scones, amazing scones—and bread and coffee and all that. And right next door is a pizza place, and they make one kind of pizza for the day, and that’s what you’re going to have. You’re going to have it because it’s delicious. Even the ones where you’re like “eh,” it’s amazing. The line goes down the block, and everybody’s in a good mood, there’s a jazz trio. Anyway, I got the cookbook, and that’s how I make my crust. There’s a sourdough crust, and then our favorite one is the zucchini-corn pizza.
EL: Really.
KK: It’s zucchinis, onions, and cheese, and then corn, and a little feta on top. And then you sprinkle some cilantro and a squeeze of lime juice.
VN: God, I’m so hungry right now.
KK: This is amazing. Yeah, it’s almost lunchtime. My wife and I are going to meet for lunch after this, so can we wrap this up?
EL: Hopefully you’re going to have pizza.
KK: We’re going to a new breakfast place, actually. I’ve got huevos rancheros on my mind.
VN: Excellent.
EL: That’s good too.
KK: Well this has been great fun, Vidit. Thanks for joining us.
VN: Thanks so much again for having me and for doing this. I’m looking forward to seeing who else you’ve managed to rope in to describe their favorite theorems.
KK: There are some good ones.
EL: We’re enjoying it.
KK: We’re having a good time.
VN: Wonderful. Thank you so much, and have fun.
EL: Nice talking to you.
KK: See you. Bye.
Evelyn Lamb: Hello and welcome to My Favorite Theorem. This is a podcast about math where we invite a mathematician in each episode to tell us about their favorite theorem. I’m one of your hosts, Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City, Utah. And this is your other host.
Kevin Knudson: Hi, I’m Kevin Knudson, professor of mathematics at the University of Florida. I’m excited about this one.
EL: Yes, I’m very excited. I’m too excited to do any banter. We’re coming up on our one-year anniversary, and we are very honored today to have a special guest. She is a professor at Duke. She has gotten a MacArthur Fellowship, won many prizes. I was just reading her Wikipedia page, and there are too many to list. So we are very happy to have Ingrid Daubechies on the show. Hi, Ingrid. Can you tell us a little bit about yourself?
Ingrid Daubechies: Hi, Evelyn and Kevin. Sure. I have just come back from spending several months in Belgium, in Brussels. I had arranged to have a sabbatical there to be close to and help set up arrangements for my very elderly parents. But I also was involved in a number of fun things, like the annual contest for high school students, to encourage them to major in mathematics once they get to college. And this is the year I turn 64, and because 64 is a much more special number than 60 for mathematicians, my students had arranged to organize some festivities, which we held in a conference center in my native village.
KK: That’s fantastic.
ID: A lot of fun. We had family and friends, we opted to have a family-and-friends activity instead of a conference where we tried to get the biggest possible collection of big-name marquee speakers. I enjoyed it hugely. We had a big party in Belgium where I invited via Facebook everybody who ever crossed my timeline. There were people I went to high school with, there was a professor who taught me linear algebra.
KK: Oh, wow.
ID: So it was really a lot of fun.
KK: That’s fantastic.
EL: Yeah, and you have also been president of the International Mathematical Union. I meant to say that at the beginning and forgot. So that is also very exciting. I think while you were president, you probably don’t remember this, but I think we met at a conference, and I was trying to talk to you about something and was very anxious because my grandfather had just gone to the hospital, and I really couldn’t think about anything else. I remember how kind you were to me during that, and just, I think you were talking about your parents as well. And I was just thinking, wow, I’m talking to the president of the International Mathematical Union, and all I can think about is my grandpa, and she is being so nice to me.
ID: Well, of course. This is so important. We are people. We are connected to other people around us, and that is a big part of our life, even if we are mathematicians.
EL: But we have you on the show today to talk about theorems, so what is your favorite theorem?
ID: Well, I of course can’t say that I have one particular favorite theorem. There are so many beautiful theorems. Right now my favorite is one I have learned very recently, and I am ashamed to confess how recently, because it’s a theorem that many people learn in kindergarten. It’s a theorem called Tutte’s embedding theorem, about graphs, or meshes, in my case a triangular mesh, and the fact that you can embed it, meaning define a map to a polygon in the plane without having any of the edges cross, so really an embedding of the whole graph. Every triangle on the complicated mesh that you have, it’s a disk-type mesh, meaning it has no holes, it has a boundary, lots of triangles, but you can think of it as a complicated thing, and you can embed it under certain conditions in a convex polygon in the plane, and I really, really, really love that. I visualize it by thinking of the complicated shape as Saran wrap and applying a hair dryer to it: the hair dryer will flatten it nicely, or will try to flatten it, and I think the fact that you can always do it is great. And we’re using it for something interesting, actually we are extending it: the theorem is originally formulated for a convex polygon in the plane, you can always map to a convex polygon in the plane, and we are extending it to the case where you have a non-convex polygon, because that’s what we need, and then we have certain conditions.
KK: Sure. Well, there have to be some conditions, right, because certainly not every graph, every mesh you would draw is planar.
ID: Yeah.
KK: What are those conditions?
ID: It has to be 3-connected, and you define a set of weights on it, on the edges, that ensure planarity. You define weights on the edges that are all positive. What happens is that once you have it in the polygon, you can write each one of the vertices as a convex combination of its neighbors.
KK: Yeah.
ID: And those define your weights. You have to have a set of weights on the edges on your original graph that will make that possible.
KK: Okay.
ID: So you define weights on the original graph that help you in the embedding. What happens is that the positive weights are then used for that convexity. So you have these positive weights, and you use them to make this embedding, and so it’s a theorem that doesn’t tell you only that it is planar, but it gives you a mechanism for building that map to the plane. That’s really the power of the theorem. So you start already with something that you know is planar and you build that map.
KK: Okay.
ID: It’s really powerful. It’s used a lot by people in computer graphics. They then can reason on that Tutte embedding in the plane to build other things and apply them back to the original mesh they had in 3-space for the complicated object they had. And that’s also what we’re trying to use it for. But we like the idea of going to non-convex polygons because that, for certain of the applications that we have, will give us much less deformation.
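A minimal sketch of the mechanism Daubechies describes above: pin the boundary vertices of a small triangulated disc on a convex polygon, require each interior vertex to be a (here uniform) weighted average of its neighbors, and solve the resulting linear system for the interior positions. The tiny mesh below is an illustrative example, not data from the project.

```python
import numpy as np

# Tiny triangulated disc: 5 boundary vertices (0..4) and 2 interior vertices (5, 6).
neighbors = {
    0: [1, 4, 5],
    1: [0, 2, 5, 6],
    2: [1, 3, 6],
    3: [2, 4, 6],
    4: [0, 3, 5, 6],
    5: [0, 1, 4, 6],
    6: [1, 2, 3, 4, 5],
}
boundary = [0, 1, 2, 3, 4]
interior = [5, 6]

# Pin the boundary on a convex polygon (a regular pentagon).
angles = 2 * np.pi * np.arange(len(boundary)) / len(boundary)
pos = {v: np.array([np.cos(a), np.sin(a)]) for v, a in zip(boundary, angles)}

# Each interior vertex must be the average of its neighbors (uniform Tutte weights):
# build the linear system A x = b for the interior coordinates.
idx = {v: i for i, v in enumerate(interior)}
A = np.zeros((len(interior), len(interior)))
b = np.zeros((len(interior), 2))
for v in interior:
    i = idx[v]
    A[i, i] = len(neighbors[v])
    for u in neighbors[v]:
        if u in idx:
            A[i, idx[u]] -= 1.0
        else:
            b[i] += pos[u]

for v, p in zip(interior, np.linalg.solve(A, b)):
    pos[v] = p
print(pos)   # interior vertices land strictly inside the pentagon, no edge crossings
```

With other positive weights the same solve goes through; the non-convex-boundary version Daubechies mentions needs the extra conditions she alludes to.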
EL: So, is this related to, I know that you’ve done some work with art reconstruction, and actually in the back of the video here, I think I see some pictures of art that you have helped reconstruct. So is it related to that work?
ID: Actually, it isn’t, although if at some point we go to 3-D objects rather than the paintings we are doing now, it might become useful. But right now this collaboration is with biologists where we’re trying to, well we have been working for several years and we’re getting good results, we are quantifying similarity of morphological surfaces. So the people we work with are working on bones and teeth. They’re paleontologists. Well, they’re interested in evolutionary anthropology, but they work a lot with teeth and bones. And there’s a lot of domain knowledge they have because they’ve seen so many, and they remember things. But of course in order to do science with it, they need to quantify how similar or dissimilar things are. And they have many methods to do that. And we are trying to work with them to try to automate some of these methods in ways that they find useful and ways that they seek. We’ve gotten very good results in this over the many years that we’ve worked with them. We’re very excited about recent progress we’ve made. In doing that, these surfaces already for their studies get scanned and triangulated. So they have these 3-d triangulations in space. When you work with these organs and muscles and all these things in biology, usually you have 3-d shapes, and in many instances you have them voxelized, meaning you have the 3-d thing. But because they work with fossils, which often they cannot borrow from the place where the fossil is, they work with casts of those in very high-quality resin. And as a result of that, when they bring the cast back, they have the surface very accurately, but they don’t have the 3-d structure. So we work with the surfaces, and that’s why we work with these 3-d meshes of surfaces. And we then have to quantify how close and similar or dissimilar things are. And not just the whole thing, but pieces of it. We have to find ways in which to segment these in biologically meaningful ways. The embedding theorem comes in useful.
But it’s been very interesting to try to build mathematically a structure that will embody a lot of how biologists work. Traditionally what they do is, because they know so much about the collection of things they study, is they find landmarks, so they have this whole collection. They see all these things have this particular thing in common. It looks different and so on. But this landmark point that we mark digitally on these scanned surfaces is the same point in all of them. And the other point is the same. So they mark landmarks, maybe 20 landmarks. And then you can use that to define a mapping. But they asked us, “could we possibly do this landmark-free at some point?” And many biologists scoffed at the idea. How could you do this? At the beginning, of course, we couldn’t. We could find distances that were not so different from theirs, but the landmarks were not in the right places. But we then started realizing, look, why do they have this immense knowledge? Because they have seen so many more than just 20 that they’re now studying.
So we realized this was something where we should look at many collections, and there we have found, with a student of mine who made a breakthrough, if you start from a mapping between, so you have many surfaces, and you have a first way of mapping one to the other and then defining a similarity or not, depending on how faithful the mapping is, all these mappings are kind of wrong, not quite right. But because you have a large collection, there are so many little mistakes that are made that if you have a way of looking at it all, you can view those mistakes as the errors in a data set, and you can try to cancel them out. You can try to separate the grains from the chaff to get the essence of what is in there. A little bit like students will learn when they have a mentor who tells them, no, that point is not really what you think, and so on. So that’s what we do now. We have large collections. We have initial mappings that are not perfect. And we use the fact that we have the large collection to define, then, from that large collection, using machine learning tools, a much better mapping. The biologists have been really impressed by how much better the mappings are once we do that. The wonderful thing is that we use this framework, of course we use machine learning tools, we use all these computer graphics and dealing with surfaces to be efficient. We frame it as a fiber bundle, and we learn. If you think of it, every single one, if you look at a large collection, every one differs by little bits. We want to learn the structure of this set of teeth. Every tooth is a 2-d surface, and similar teeth can map to each other, so they’re all fibers, and we have a connection. And we learn that connection. We have a very noisy version of the connection. But because we know it’s a connection, and because it’s a connection that should be flat because things can be brought back to their common ancestor, and so going from A to B and B to C, it should not matter in what order you go because all these mappings can go to the common ancestor, and so it should kind of commute, we can really get things out. We have been able to use that in order to build correspondences that biologists are now using for their statistical analysis.
KK: So differential geometry for biology.
ID: Yes. Discrete differential geometry, which if there is an oxymoron, that’s one.
KK: Wow.
ID: So we have a team that has a biologist, it has people who are differential geometers, we have a computational geometer, and he was telling me, “you know, for this particular piece of it, it would be really useful if we had a generalization of Tutte’s theorem to non-convex polygons,” and I said, “well, what’s Tutte’s theorem?” And so I learned it last week, and that’s why it’s today my favorite theorem.
EL: Oh wow, that’s really neat.
KK: So we’ll follow up with you next year and see what your favorite theorem is then.
EL: Yeah, it sounds like a really neat collaborative environment there where everybody has their own special knowledge that they’re bringing to the table.
ID: Yes, and actually I have found that to be very, very stimulating in my whole career. I like working with other people. I like when they give you challenges. I like feeling my brain at work, working together with people who have different expertise. And, well, once you’ve seen a couple of these collaborations at work, you get a feel for how you jump-start that, how you manage to get people talking about the problems they have and kind of brainstorm until a few problems get isolated that we really can sink our teeth into and work on. And that itself is a dynamic you have to learn. I’m sure there are social scientists who know much more about this. In my limited setting, I now have some experience in starting these things up, and so my students and postdocs participate. And some of them have become good at propagating it. I’m very motivated by the fact that you can do applications of mathematics that are really nontrivial, and you can distill nontrivial problems out of what people think are mundane applications. But it takes some investing to get there. Because usually the people who have the applications—the biologists, in my case—they didn’t say, “We had this very particular fiber bundle problem.”
EL: Right.
ID: In fact, it’s my student who then realized we really had a fiber bundle, and that helped define a machine learning problem differently than it had been before. That then led to interesting results. So you need all the background, you need the sense of adventure of trying to build tools in that background that might be useful. And I’m convinced that for some of these tools that we build, when more pure mathematicians learn about them, they might distill things in their world from what we need. And this can lead to more pure mathematics ultimately.
KK: Sure, a big feedback loop.
ID: Yes, absolutely. That’s what I believe in very, very strongly. But part of my life is being open to when I hear about things, is there a meaningful mathematical way to frame this? Not just for the fun of it, but will it help?
EL: Yeah, well, as I mentioned, I was amazed by the way you’ve used math for this art reconstruction. I think I saw a talk you gave or an article you wrote about it, and it was just fascinating. Things that I never would have thought would be applicable to that sphere.
ID: Yeah, and again it’s the case that there’s a whole lot of knowledge we have that could be applicable, and in that particular case, I have found that it’s a wonderful way to get undergraduates involved, because they learn these tools of image processing and small machine learning tools while working on these wonderful images. I mean, how much cooler is it to work on the Ghent Altarpiece, or even less famous artwork, than to work on the standard images of image analysis? So that has been a lot of fun. And actually, while I was in Belgium, the first event of the week of celebration we had was an IP4AI workshop, which is Image Processing for Art Investigation. It’s really over the last 10-15 years that this community has been taking off. We try to have this series of workshops where people who are interested in image processing and the mathematics and the engineering of that talk to people who have concrete problems in art conservation or art history. We try to have these workshops in museums, and we had this one at a museum in Ghent, and it again was very, very stimulating and exhilarating.
KK: So another thing we like to do on this podcast is ask our guest to pair their favorite theorem with something. So I’m curious. What do you think pairs well with Tutte’s theorem?
ID: Well, I was already thinking of Saran wrap and the hair dryer, but…
KK: No, that’s perfect. Yeah.
ID: I think also—not for Tutte’s theorem, there I really think of Saran wrap and a hair dryer—but I also am using in some of the work in biology as well what people call diffusion, manifold learning through diffusion techniques. The idea is if you have a complicated world where you have many instances and some of them are very similar, and others are similar to them, and so on, but after you’ve moved 100 steps away, things look not similar at all anymore, and you’d like to learn the geometry of that whole collection.
KK: Right.
ID: Very often it’s given to you by zillions of parameters. I mean, like images, if you think of each pixel of the image as a variable, then you live in thousands, millions of dimensions. And you know that the whole collection of images is not something that fills that whole space. It’s a very thin, wispy set in there. You’d like to learn its geometry because if you learn its geometry, you can do much more with it. So one tool that was devised, I mean 10 years ago or so—it’s not deep learning, it’s not as recent as that—is manifold learning, in which you say, well, in every neighborhood if you look at all the things that are similar to me, then I have a little flat disc, it’s close enough to flat that I can really approximate it as flat. And then I have another one, and so on, and I have two mental images for that. I have one mental image: this whole kind of crochet thing, where each piece of it you make with crochet. You cover the whole thing with doilies in a certain sense. You can knit it together, or crochet it together, and get the more complex geometry. Another image I often have is sequins. Every little sequin is a little disc.
EL: Yeah.
ID: But it can make it much more complex. So many of my mental images and pairings, if you want, are hands-on, crafty things.
KK: Do you knit and crochet yourself?
ID: Yes, I do. I like making things. I use metaphors like that a lot when I teach calculus because it’s kind of obvious. I find I use almost no sports metaphors. Sports metaphors are big in teaching mathematics, but I use much more handicraft metaphors.
KK: So what else should we talk about?
ID: One thing, actually, I was saying, I had such a lot of fun a couple of weeks ago when there was a celebration. The town in which I was born happens to have a fantastic new administrative building in which they have brought together all different services that used to be in different buildings in the town. The building was put together by fantastic architects, and it feels very mathematical. And it has beautiful shapes.
It’s in a mining town—I’m from a coal mining town—and so they have two hyperboloid shapes that they used to bring light down to the lower floors. That reminds people of the cooling towers of the coal mine. They have all these features in it that feel very mathematical. I told the mayor, I said, “Look, I’ll have this group of mathematicians, some of whom are very interested in outreach and education. Since there will be a party on Saturday and the conference only starts on Monday, we could on the Sunday have a brainstorming thing in which we try to design a clue-finding search through the building. We design mathematical little things in the building that fit with the whole design of the building. So you should have the interior designers as part of the workshop. I have no idea what will come out, but if something comes out, then we could find a little bit of money to realize it, and that could be something that adds another feature to the building.”
He loved the idea! I thought he was going to be…but he loved the idea. He talked to the person who runs the cafeteria about cooking a special meal for us. So we had a tagine because he was from Morocco. We wanted just sandwiches, but this man made this fantastic meal. We toured the building in the morning and in the afternoon we had brainstorming with local high school teachers and mathematicians and so on. We put them in three small groups, and they came up with three completely different ideas, which all sound really interesting. And then one of them said, “Why don’t we make it an activity that either a family could do, one after the other, or a classroom could do? You’d typically have only an hour or an hour and a half, and the class would be too big, but you’d split the class into three groups, and each group does one of the activities. They all find a clue, and by putting the clues together, they find some kind of a treasure.”
KK: Oh, wow.
ID: So the ideas were great, and they link completely different things. One is more dynamical systems, one is actually embodying some group and graph theory (although we won’t call it that). And what I like, one of the goals was to find ideas that would require mathematical thinking but that were not linked to curriculum, so you’d start thinking, how would I even frame this? And so on, and trying to give stepwise progression in the problems so that they wouldn’t immediately have the full, complete difficult thing but would have to find ways of building tools that would get you there. They did excellent work. Now each team has a group leader who is working out details over email. We have committed to working out, within a year, all the details of the texts and putting the materials together so it can actually be realized. Then there was the designers’ part: can we make something like that not too expensive? They said, oh yeah, with foam and fabric. And I know they will do it.
A year from now I will see whether it all worked out.
EL: So will you come to Salt Lake next and do that in my town?
ID: Do you have a great building in which it would work?
EL: I’m trying to think.
ID: We’re linking it to a building.
EL: I’ll have to think about that.
KK: Well, we have a brand new science museum here in Gainesville. It’s called the Cade Museum. So Dr. Cade is the man who invented Gatorade, you know, the sports drink.
ID: Yes.
KK: And his family got together and built this wonderful new science museum. I haven’t been yet. It just opened a few months ago.
ID: Oh wow.
KK: I’m going to walk in there thinking about this idea.
ID: Yeah, and if you happen to be in Belgium, I can send you the location of this building, and you can have a look there.
KK: Okay. Sounds excellent. Well, this has been great, Ingrid. We really appreciate your taking the time to talk to us today.
ID: Well thank you.
KK: We’re really very honored.
ID: Well it’s great to have this podcast, the whole series.
KK: Yeah, we’re having a good time.
EL: We also want to thank our listeners for listening to us for a year. I’m just going to assume that everyone has listened religiously to every single episode. But yeah, it’s been a lot of fun to put this together for the past year, and we hope there will be many more.
ID: Yes, good luck with that.
KK: Thanks.
ID: Bye.
Evelyn Lamb: Welcome to My Favorite Theorem, a podcast about math. I’m Evelyn Lamb, one of your cohosts, and I’m a freelance math and science writer in Salt Lake City, Utah.
Kevin Knudson: Hi, I’m Kevin Knudson, a professor of mathematics at the University of Florida. How are you doing, Evelyn? Happy New Year!
EL: Thanks. Our listeners listening sometime in the summer will really appreciate the sentiment. Things are good here. I promised myself I wouldn’t talk about the weather, so instead in the obligatory weird banter section, I will say that I just finished a sewing project, only slightly late, as a holiday gift for my spouse. So that was fun. I made some napkins. Most sewing projects are non-Euclidean geometry because bodies are not Euclidean.
KK: Sure.
EL: But this one was actually Euclidean geometry, which is a little easier.
KK: Well I’m freezing. No one ever believes this about Florida, but I’ve never been so cold in my life as I have been in Florida, with my 70-year-old, poorly insulated home, when highs are only in the 40s. It’s miserable.
EL: Yeah.
KK: But the beauty of Florida, of course, is that it ends. Next week it will be 75. I’m excited about this show. This is going to be a good one.
EL: Yes, so we should at this point introduce our guest. Today we are very happy to have Ken Ribet on the show. Ken, would you like to tell us a little bit about yourself?
Ken Ribet: Okay, I can tell you about myself professionally first. I’m a professor of mathematics at the University of California Berkeley, and I’ve been on the Berkeley campus since 1978, so we’re coming up on 40 years, although I’ve spent a lot of time in France and elsewhere in Europe and around the country. I am currently president of the American Mathematical Society, which is how a lot of people know me. I’m the husband of a mathematician. My wife is Lisa Goldberg. She does statistics and economics and mathematics, and she’s currently interested in particular in the statistics of sport. We have two daughters who are in their early twenties, and they were home for the holidays.
KK: Good. My son started college this year, and this was his first time home. My wife and I were super excited for him to come home. You don’t realize how much you’re going to miss them when they’re gone.
KR: Exactly.
EL: Hi, Mom! I didn’t go home this year for the holidays. I went home for Thanksgiving, but not for Christmas or New Year.
KK: Well, she missed you.
EL: Sorry, Mom.
KK: So, Ken, you gave us a list of something like five theorems that you were maybe going to call your favorite, which, it’s true, it’s like picking a favorite child. But what did you settle on? What’s your favorite theorem?
KR: Well, maybe I should say first that talking about one’s favorite theorem really is like talking about one’s favorite child, and some years ago I was interviewed for an undergraduate project by a Berkeley student, who asked me to choose my favorite prime number. I said, well, you really can’t do that because we love all our prime numbers, just like we love all our children, but then I ended up reciting a couple of them offhand, and they made their way into the publication that she prepared. One of them is the six-digit prime number 144169, which I encountered early in my research.
KK: That’s a good one.
KR: Another is 1234567891, which was discovered in the 1980s by a senior mathematician who was being shown a factorization program. And he just typed some 10-digit number into the program to see how it would factor it, and it turned out to be prime!
KK: Wow.
KR: This was kind of completely amazing. So it was a good anecdote, and that reminded me of prime numbers. I think that what I should cite as my favorite theorem today, for the purposes of this encounter, is a theorem about prime numbers. The prime numbers are the ones that can’t be factored, numbers bigger than 1. So for example 6 is not a prime number because it can be factored as 2x3, but 2 and 3 are prime numbers because they can’t be factored any further. And one of the oldest theorems in mathematics is the theorem that there are infinitely many prime numbers. The set of primes keeps going on to infinity, and I told one of my daughters yesterday that I would discuss this as a theorem. She was very surprised that it’s not, so to speak, obvious. And she said, why wouldn’t there be infinitely many prime numbers? And you can imagine an alternative reality in which the largest prime number had, say, 50,000 digits, and beyond that, there was nothing. So it is a statement that we want to prove. One of the interesting things about this theorem is that there are myriad proofs that you can cite. The best one is due to Euclid from more than 2000 years ago.
Many people know that proof, and I could talk about it for a bit if you’d like, but there are several others, probably many others, and people say that it’s very good to have lots of proofs of this one theorem because the set of prime numbers is a set that we know a lot about, but not that much about. Primes are in some sense mysterious, and by having some alternative proofs of the fact that there are infinitely many primes, we could perhaps say we are gaining more and more insight into the set of prime numbers.
EL: Yeah, and if I understand correctly, you’ve spent a lot of your working life trying to understand the set of prime numbers better.
KR: Well, so that’s interesting. I call myself a number theorist, and number theory began with very, very simple problems, really enunciated by the ancient Greeks. Diophantus is a name that comes up frequently. And you could say that number theorists are engaged in trying to solve problems from antiquity, many of which remain as open problems.
KK: Right.
KR: Like most people in professional life, number theorists have become specialists, and all sorts of quote-unquote technical tools have been developed to try to probe number theory. If you ask a number theorist on the ground, as CNN likes to say, what she’s working on, it’ll be some problem that sounds very technical, is probably hard to explain to a general listener, and has only a remote connection to the original problems that motivated the study. For me personally, one of the wonderful events that occurred in my professional life was the proof of Fermat’s Last Theorem in the mid-1990s because the proof uses highly technical tools that were developed with the idea that they might someday shed light on classical problems, and lo and behold, some problem that was then around 350 years old was solved using the techniques that had been developed principally in the last part of the 20th century.
KK: And if I remember right — I’m not a number theorist — were you the person who proved that the Taniyama-Weil conjecture implied Fermat’s Last Theorem?
KR: That’s right. The proof consists of several components, and I proved that something implies Fermat’s Last Theorem.
KK: Right.
KR: And then Andrew Wiles partially, with the help of Richard Taylor, proved that something. That something is the statement that elliptic curves (whatever they are) have a certain property called modularity, whatever that is.
EL: It’s not fair for you to try to sneak an extra theorem into this podcast. I know Kevin baited you into it, so you’ll get off here, but we need to circle back around. You mentioned Euclid’s proof of the infinitude of primes, and that’s probably the one most people are the most familiar with of these proofs. Do you want to outline that a little bit? Actually not too long ago, I was talking to the next door neighbors’ 11-year-old kid, he was interested in prime numbers, and the mom knows we’re mathematicians, so we were talking about it, and he was asking about what the biggest prime number was, and we talked about how one might figure out whether there was a biggest prime number.
KR: Yeah, well, in fact when people talk about the proof, often they talk about it in a very circular way. They start with the statement “suppose there were only finitely many primes,” and then this and this and this and this, but in fact, Euclid’s proof is perfectly direct and constructive. What Euclid’s proof does is, you could start with no primes at all, but let’s say we start with the prime 2. We add 1 to it, and we see what we get, and we get the number 3, which happens to be prime. So we have another prime. And then what we do is take 2 and multiply it by 3. 2 and 3 are the primes that we’ve listed, and we add 1 to that product. The product is 6, and we get 7. We look at 7 and say, what is the smallest prime number dividing 7? Well, 7 is already prime, so we take it, and there’s a very simple argument that when you do this repeatedly, you get primes that you’ve never seen before. So you start with 2, then you get 3, then you get 7. If you multiply 2x3x7, you get 6x7, which is 42. You add 1, and you get 43, which again happens to be prime. If you multiply 2x3x7x43 and add 1, you get a big number that I don’t recall offhand. You look for the prime factorization of it, and you find the smallest prime, and you get 13. You add 13 to the list. You have 2, 3, 7, 43, 13, and you keep on going. The sequence you get has its own Wikipedia page. It’s the Euclid-Mullin sequence, and it’s kind of remarkable that after you repeat this process around 50 times, you get to a number that is so large that you can’t figure out how to factor it. You can do a primality test and discover that it is not prime, but it’s a number analogous to the numbers that occur in cryptography, where you know the number is not prime, but you are unable to factor it using current technology and hardware. So the sequence is an infinite sequence by construction. But it ends, as far as Wikipedia is concerned, around the 51st term, I think it is, and then the page says that subsequent terms are not known explicitly.
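(Editorial note: for readers who want to follow the construction Ribet just described, here is a small sketch, not code from the episode. Trial division is plenty for the first few terms of the Euclid-Mullin sequence.)

```python
# Euclid's construction, run literally: multiply the primes found so far,
# add 1, and record the smallest prime factor of the result.
def smallest_prime_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

def euclid_mullin(num_terms):
    primes = []
    for _ in range(num_terms):
        product = 1
        for p in primes:
            product *= p
        primes.append(smallest_prime_factor(product + 1))
    return primes

print(euclid_mullin(6))  # [2, 3, 7, 43, 13, 53]
```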
EL: Interesting! It’s kind of surprising that it explodes that quickly and it doesn’t somehow give you all of the small prime numbers quickly.
KR: It doesn’t explode in the sense of getting bigger and bigger every time. You have 43, and it drops back to 13, and if you look at the elements of the sequence on the page, which I haven’t done lately, you’ll see that the numbers go up and then down. There’s a conjecture, which was maybe made without too much evidence, that as you go through the sequence, you’ll get all prime numbers.
EL: Okay. I was about to ask that, if we knew if you would eventually get all of them, or end up with some subsequence of them.
KR: Well, the expectation, which as I say is not based on really hard evidence, is that you should be able to get everything.
KK: Sure. But is it clear that this sequence is actually infinite? How do we know we don’t get a bunch of repeats after a while?
KR: Well, because the principle of the proof is that if you have a prime that’s appeared on the list, it will not divide the product plus 1. It divides the product, but it doesn’t divide 1, so it can’t divide the new number. So when you take the product plus 1 and you factor it, whatever you get will be a quote-unquote new prime.
KK: So this is a more direct version of what I immediately thought of, the typical contradiction proof, where if you only had a finite number of primes, you take your product, add 1, and ask what divides it? Well, none of those primes divides it. Therefore, contradiction.
KR: Yes, it’s a direct proof. Completely algorithmic, recursive, and you generate an infinite set of primes.
KK: Okay. Now I buy it.
EL: I’m glad we did it the direct way. When I’ve taught things like this in the past, setting it up as a proof by contradiction is a good way to get the proof, but you can kind of polish it up and make it a little prettier by taking out the contradiction step since it’s not really required.
KR: Right.
KK: And for your 11-year-old friend, contradiction isn’t what you want to do, right? You want a direct proof.
KR: Exactly. You want that friend to start computing.
KK: Are there other direct proofs? There must be.
KR: Well, another direct proof is to consider the numbers known as Fermat numbers. I’ll tell you what the Fermat numbers are. You take the powers of 2, so the powers of 2 are 1, 2, 4, 8, 16, 32, and so on. And you consider those as exponents. So you take 2 to those powers of 2. 2^1, 2^2, 2^4, and so on. To these numbers, you add the number 1. So you start with 2^0, which is 1, 2^1 is 2, and you add 1 and get 3. Then the next power of 2 is 2. 2^2 is 4. You add 1 and you get 5. The next power of 2 is 4. 2^4 is 16. You add 1, and you get 17. The next power of 2 is 8. 2^8 is 256, and you add 1 and get 257. So you have this sequence, which is 3, 5, 17, 257, and the first elements of the sequence are prime numbers. 257 is a prime number. And it’s rather a famous gaffe of Fermat that he apparently claimed that all the numbers in the sequence were prime numbers, that you could just generate primes that way. But in fact, if you go a little further along the sequence, you get one that is not prime, and I think all subsequent numbers that have been computed have been verified to be non-prime. So you get these Fermat numbers, a whole sequence of them, an infinite sequence of them, and it turns out that a very simple argument shows you that any two different numbers in the sequence have no common factor at all. And so, for example, if you take 257 and, say, the 19th Fermat number, that pair of numbers will have no common factor. So since 257 happens to be prime, you could say 257 doesn’t divide the 19th Fermat number. But the 19th Fermat number is a big number. It’s divisible by some prime. And you can take the sequence of numbers and for each element of the sequence, take the smallest prime divisor, and then you get a sequence of primes, and that’s an infinite sequence of primes. The primes are all different because none of the numbers have a common factor.
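(Editorial note: the Fermat-number argument is also easy to check by machine for the first few terms. The sketch below, again just an illustration and not from the episode, verifies that the numbers 2^(2^n) + 1 are pairwise coprime and pulls a distinct prime out of each one.)

```python
# Pairwise-coprime Fermat numbers each contribute a new prime: take the
# smallest prime factor of each. (Trial division only works for small n.)
from math import gcd

def fermat(n):
    return 2 ** (2 ** n) + 1

def smallest_prime_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

F = [fermat(n) for n in range(6)]        # 3, 5, 17, 257, 65537, 4294967297
assert all(gcd(a, b) == 1 for i, a in enumerate(F) for b in F[i + 1:])
print([smallest_prime_factor(f) for f in F])   # [3, 5, 17, 257, 65537, 641]
```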
KK: That’s nice. I like that proof.
EL: Nice! It’s kind of like killing a mosquito with a sledgehammer. It’s a big sequence of these somewhat complicated numbers, but there’s something very fun about that. Probably not fun to try to kill mosquitoes with a sledgehammer. Don’t try that at home.
KK: You might need it in Florida. We have pretty big ones.
KR: I can tell you yet a third proof of the theorem if you think we have time.
KK: Sure!
KR: This proof I learned about, and it’s an exercise in a textbook that’s one of my all-time favorite books to read. It’s called A Classical Introduction to [Modern] Number Theory by Kenneth Ireland and Michael Rosen. When I was an undergraduate at Brown, Ireland and Rosen were two of my professors, and Ken Ireland passed away, unfortunately, about 25 years ago, but Mike Rosen is still at Brown University and is still teaching. They have as an exercise in their book a proof due to a mathematician at Kansas State, I think it was, named Eckford Cohen, and he published a paper in the American Mathematical Monthly in 1969. And the proof is very simple. I’ll tell you the gist of it. It’s a proof by contradiction. What you do is, for each number n, you take the geometric mean of the first n numbers. What that means is you take the numbers 1, 2, 3, you multiply them together, and in the case of 3, you take the cube root of that number. We could even do that for 2, you take 1 and 2 and multiply them together and take the square root, which is about 1.41. And these numbers that you get are smaller than the averages of the numbers. For example, the square root of 2 is less than 1.5, and the cube root of 6, of 1x2x3, is less than 2, which is the average of 1, 2, and 3. But nevertheless these numbers get pretty big, and you can show using high school mathematics that these numbers approach infinity, they get bigger and bigger. You can show, using an argument by contradiction, that if there were only finitely many primes, these numbers would not get bigger and bigger, they would stop and be all less than some number, depending on the primes that you could list out.
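(Editorial note: the quantity in Cohen’s argument is easy to compute and watch grow. The sketch below, an illustration rather than anything from the paper itself, prints the geometric mean of the first n numbers for a few values of n.)

```python
# The geometric mean of 1, 2, ..., n, i.e. (n!)^(1/n), computed with
# logarithms so the product does not overflow. It grows roughly like n/e.
import math

def geometric_mean_of_first(n):
    return math.exp(sum(math.log(k) for k in range(1, n + 1)) / n)

for n in [2, 3, 10, 100, 1000]:
    print(n, round(geometric_mean_of_first(n), 3))
# prints roughly: 2 1.414, 3 1.817, 10 4.529, 100 37.993, 1000 369.49
```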
EL: Huh, that’s really cool.
KK: I like that.
KR: That’s kind of an amazing proof, and you see that it has absolutely nothing to do with the two proofs I told you about before.
KK: Sure.
EL: Yeah.
KK: Well that’s what’s so nice about number theory. It’s such a rich field. You can ask these seemingly simple questions and prove them 10 different ways, or not prove them at all.
KR: That’s right. When number theory began, I think it was a real collection of miscellany. People would study equations one by one, and they’d observe facts and record them for later use, and there didn’t seem to be a lot of order to the garden. And the mathematicians who tried to introduce the conceptual techniques in the last part of the 20th century, Carl Ludwig Siegel, André Weil, Jean-Pierre Serre, and so on, these people tried to make everything be viewed from a systematic perspective. But nonetheless if you look down at the fine grain, you’ll see there are lots of special cases and lots of interesting phenomena. And there are lots of facts that you couldn’t predict just by flying at 30,000 feet and trying to make everything be orderly.
EL: So, I think now it’s pairing time. So on the show, we like to ask our mathematicians to pair their theorem with something—food, beverage, music, art, whatever your fancy is. What have you chosen to pair with the infinitude of primes?
KR: Well, this is interesting. Just as I’ve told you three proofs of this theorem, I’d like to discuss a number of possible pairings. Would that be okay?
KK: Sure. Not infinitely many, though.
KR: Not infinitely many.
EL: Yeah, one for each prime.
KR: One thing is that prime numbers are often associated with music in some way, and in fact there is a book by Marcus du Sautoy, which is called The Music of the Primes. So perhaps I could say that the subject could be paired with his book. Another thing I thought of was the question of algorithmic recursive music. You see, we had a recursive description of a sequence coming from Euclid’s method, and yesterday I did a Google search on recursive music, and I got lots of hits. Another thing that occurred to me is the word prime, because I like wine a lot and because I’ve spent a lot of time in France, it reminds me of the phrase vin primeur. So you probably know that in November there is a day when the Beaujolais nouveau is released all around the world, and people drink the wine of the year, a very fresh young wine with lots of flavor, low alcohol, and no tannin, and in France, the general category of new wines is called vin primeur. It sounds like prime wines. In fact, if you walk around in Paris in November or December and you try to buy vin primeur, you’ll see that there are many others, many in addition to the Beaujolais nouveau. We could pair this theorem with maybe a Côtes du Rhône primeur or something like that.
But finally, I wanted to settle on one thing, and a few days ago, maybe a week ago, someone told me that in 2017, actually just about a year ago, a woman named Maggie Roche passed away. She was one of three sisters who performed music in the 70s and 80s, and I’m sure beyond. The music group was called the Roches. And the Roches were a fantastic hit, R-O-C-H-E, and they are viewed as the predecessors for, for example, the Indigo Girls, and a number of groups who now perform. They would stand up, three women with guitars. They had wonderful harmonies, very simple songs, and they would weave their voices in and out. And I knew about their music when it first came out and found myself by accident in a record store in Berkeley the first year I was teaching, which was 1978-79, long ago, and the three Roches were there signing record albums. These were vinyl albums at the time, and they had big record jackets with room for signatures, and I went up to Maggie and started talking to her. I think I spoke to her for 10 or 15 minutes. It was just kind of an electrifying experience. I just felt somehow like I had bonded with someone whom I never expected to see again, and never did see again. I bought one or two of the albums and got their signatures. I no longer have the albums. I think I left them in France. But she made a big impression on me. So if I wanted to pair one piece of music with this discussion, it would be a piece by the Roches. There are lots of them on Youtube. One called the Hammond Song, is especially beautiful, and I will officially declare that I am pairing the infinitude of primes with the Hammond Song by the Roches.
EL: Okay, I’ll have to listen to that. I’m not familiar with them, so it sounds like a good thing to listen to once we hang up here.
KK: We’ll link it in the show notes, too, so everyone can see it.
EL: That sounds like a lot of fun. It’s always a cool experience to feel like you’re connecting with someone like that. I went to a King’s Singers concert one time a few years ago and got a CD signed, and it reminded me how warm and friendly people can be sometimes, even though they’re very busy and very fancy and everything.
KR: I’ve been around a long time, and people don’t appreciate the fact that until the last decade or two, people who performed publicly were quite accessible. You could just go up to people before concerts or after concerts and chat with them, and they really enjoyed chatting with the public. Now there’s so much emphasis on security that it’s very hard to actually be face to face with someone whose work you admire.
KK: Well this has been fun. I learned some new proofs today.
KR: Fun for me too.
EL: Thanks a lot for being on the show.
KR: It’s my great pleasure, and I love talking to you, and I love talking about the mathematics. Happy New Year to everyone.
[outro]
Evelyn Lamb: Hello and welcome to My Favorite Theorem. I’m one of your hosts, Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City, Utah. And this is your other host.
Kevin Knudson: Hi, I’m Kevin Knudson, professor of mathematics at the University of Florida. How are you doing, Evelyn?
EL: I’m all right. I’m excited because we’re trying a different recording setup today, and a few of our recent episodes, I’ve had a few connection problems, so I’m hoping that everything goes well, and I’ve probably jinxed myself by saying that.
KK: No, no, it’s going to be fine. Positive thinking.
EL: Yeah, I’m hoping that the blips that our listeners may have heard in recent episodes won’t be happening. How about you? Are you doing well?
KK: I’m fine. Spring break is next week, and we’ve had the air conditioning on this week. This is the absurdity of my life. It’s February, and the air conditioning is on. But it’s okay. It’s nice. My son is coming home for spring break, so we’re excited.
EL: Great. We’re very happy today to have Jana Rodriguez-Hertz on the show. So, Jana, would you like to tell us a little bit about yourself?
Jana Rodriguez-Hertz: Hi, thank you so much. I’m originally from Argentina, I have lived in Uruguay for 20 years, and now I live in China, in Shenzhen.
EL: Yeah, that’s quite a big change. When we were first talking, first emailing, I mean, you were in Uruguay then, you’re back in China now. What took you out there?
JRH: Well, we got a nice job offer, and we thought we’d like to try. We said, why not, and we went here. It’s nice. It’s a totally different culture, but I’m liking it so far.
KK: What part of China are you in, which university?
JRH: At the Southern University of Science and Technology in Shenzhen. Shenzhen is in mainland China, right across from Hong Kong.
KK: Okay. That’s very far south.
EL: I guess February weather isn’t too bad over there.
JRH: It’s still winter, but it’s not too bad.
EL: Of course, that will be very relevant to our listeners when they hear this in a few months. We’re glad to have you here. Can you tell us about your favorite theorem?
JRH: Well, you know, I live in China now, and every noon I see a dynamical process that looks like the theorem I want to talk to you about, which is the dynamical properties of Smale’s horseshoe. Here it goes. You know, at the canteen of my university, there is a cook that makes noodles.
EL: Oh, nice.
JRH: He takes the dough and stretches it and folds it without mixing, and stretches it and folds it again, until the strips are so thin that they’re ready to be noodles, and then he cuts the dough. Well, this procedure can be described as a chaotic dynamical system, which is Smale’s horseshoe.
KK: Okay.
JRH: So I want to talk to you about this. But we will do it in a mathematical model so it is more precise. So suppose that the cook has a piece of dough in a square mold, say of side 1. Then the cook stretches the dough so it becomes three times longer in the vertical sense but 1/3 of its original width in the horizontal sense. Then he folds it and puts the dough again in the square mold, making a horseshoe form. So the lower third of the square is converted into a rectangle of height 1 and width 1/3 and will be placed on the left side of the mold. The middle third of the square is going to be bent and will go outside the mold and will be cut. The upper third will be converted to another rectangle of height 1 and width 1/3 and will be put upside down in the right side of the mold. Do you get it?
KK: Yeah.
JRH: Now in the mold there will be two connected components of dough, one in the left third of the square and one in the right third of the square, and the middle third will be empty. In this way, we have obtained a map from a subset of the square into another subset of the square. And each time this map is applied, that is, each time we stretch and fold the dough, and cut the bent part, it’s called a forward iteration. So in the first forward iteration of the square, we obtain two rectangles of width 1/3 and height 1. Now in the second forward iteration of the square, we obtain four rectangles of width 1/9 and height 1. Two rectangles are contained in the left third, two rectangles in the right third. These are four noodles in total.
Counting from left to right, we will see one noodle of width 1/9, one gap of width 1/9, a second noodle of width 1/9, a gap of 1/3, and two more noodles of width 1/9 separated by a gap of width 1/9. Is that okay?
KK: Yes.
JRH: So if we iterate n times, we will obtain 2^n noodles of width (1/3)^n. And if we let the number of iterations go to infinity, that is, if we stretch and fold infinitely many times, cutting each time the bent part, we will obtain a Cantor set of vertical noodles.
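(Editorial note: the bookkeeping in this noodle picture is simple to simulate. The sketch below, not from the episode, tracks only the x-intervals of the dough that survives n forward iterations; they form the level-n stage of the middle-thirds Cantor construction, 2^n strips of width (1/3)^n.)

```python
# Track the surviving vertical strips: each strip keeps its left and right
# thirds, matching the stretch-fold-cut step described above.
def forward_strips(n):
    strips = [(0.0, 1.0)]
    for _ in range(n):
        new_strips = []
        for a, b in strips:
            w = (b - a) / 3.0
            new_strips.append((a, a + w))   # left third survives
            new_strips.append((b - w, b))   # right third survives
        strips = new_strips
    return strips

for n in range(4):
    s = forward_strips(n)
    print(n, len(s), round(s[0][1] - s[0][0], 4))  # 2**n strips of width 3**-n
```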
KK: Yes.
EL: Right. So as you were saying the ninths with these gaps, and this 1/3, I was thinking, huh, this sounds awfully familiar.
KK: Yeah, yeah.
EL: We’ll include a picture of the Cantor set in the show notes for people to look at.
JRH: When we iterate forward, in the limit we will obtain a Cantor set of noodles. We can also iterate backwards. And what is that? We want to know for each point in the square, that is, for each flour particle of the dough in the mold, where it was before the cook stretched vertically and folded the dough the first time, where it came from. Now we recall that the forward iteration was to stretch in the vertical sense and fold, so if we run it backwards, we will see that in the backward sense the cook has squeezed in the vertical sense and stretched in the horizontal sense and folded, okay?
EL: Yes.
JRH: Each time we iterate backwards, we stretch in the horizontal sense and fold in that sense. In this way, the left vertical rectangle is converted into the lower rectangle, the lower third rectangle. And the right side rectangle, the vertical rectangle, is converted into the upper third rectangle, and the bent part is cut. If we iterate backwards, in the second backward iteration we will get four horizontal rectangles of height 1/9 and the gaps, and if we let the iterations go to infinity, we will obtain a Cantor set of horizontal noodles.
When we iterate forward and consider only what’s left in the mold, we start with two horizontal rectangles and finish with two vertical rectangles. When we iterate backwards we start with two vertical rectangles and finish with two horizontal rectangles. Now we want to consider the particles that stay forever in the mold, that is, the points so that all of the forward iterates and all the backwards iterates stay in the square. This will be the product of two middle-thirds Cantor sets. It will look more like grated cheese than noodles.
KK: Right.
JRH: This set will be called the invariant set.
KK: Although they’re not pointwise fixed, they just stay inside the set.
JRH: That’s right. They stay inside the square. In fact, not only will they be not fixed, they will have a chaotic behavior. That is what I want to tell you about.
KK: Okay.
JRH: This is one of the simplest models of an invertible map that is chaotic. So what is chaotic dynamics anyways? There is no universally accepted definition of that. But one definition that is more or less accepted says a chaotic system has three properties. These properties are that periodic points are dense, it is topologically mixing, and it has sensitivity to initial conditions. And let me explain a little bit about this.
A periodic point is a particle of flour that has a trajectory that comes back exactly to the position where it started. This is a periodic point. What does it mean that they are dense? As close as you wish to any point, you can find one of these.
Topologically mixing, you can imagine, means that the dough gets completely mixed, so if you take any two small squares and iterate one of them, it will get completely mixed with the other one forever. From some iteration on, you will get dough from the first square in the second square, always. That is what topologically mixing means.
I would like to focus on the sensitivity to initial conditions because this is the essence of chaos.
EL: Yeah, that’s kind of what you think of for the idea of chaos. So yeah, can you talk a little about that?
JRH: Yeah. This means that any two particles of flour, no matter how close they are, they will get uniformly separated by the dynamics. In fact, they will be 1/3 apart for some forward or backward iterate. Let me explain this because it is not difficult. Remember that we had the lower third rectangle? Call this lower third rectangle 0, and the upper third rectangle 1. Then we will see that for some forward or backward iterate, any two different particles will be in different horizontal rectangles. One will be in 1, and the other one will be in the 0 rectangle. How is that? If two particles are at different heights, then either they are already in different rectangles, so we are done, or else they are in the same rectangle. But if they are in the same rectangle, the cook stretches the vertical distance by 3. Every time they are in the same horizontal rectangle, their vertical distance is stretched by 3, so they cannot stay forever in the same rectangle unless they are at the same height.
KK: Sure.
JRH: If they are at different heights, they will get eventually separated. On the other hand, if they are in the same vertical rectangle but at different x-coordinates, if we iterate backwards, the cook will stretch the dough in the horizontal sense, so the horizontal distance will be tripled. Each time they are in the same vertical rectangle, the horizontal distance is tripled, so they cannot be forever in the same vertical rectangle unless their horizontal distance is 0. But if they are in different positions, then either their horizontal distance is positive or the vertical distance is positive. So in some iterate, they will be 1/3 apart. Not only that, if they are in two different vertical rectangles, then in the next backwards iterate, they are in different horizontal rectangles. So we can state that any two different particles for some iterate will be in different horizontal rectangles, no matter how close they are. So that’s something I like very much because each particle is defined by its trajectory.
EL: Right, so you can tell exactly what you are by where you’ve been.
JRH: Yeah, two particles are defined by what they have done and what they will do. That allows something that is very interesting in this type of chaotic dynamics, which is symbolic dynamics. Now you know that any two points in some iterate will have distinct horizontal rectangles, so you can code any particle by its position in the horizontal rectangles. If one particle is in the beginning in the 0 rectangle, you will assign to it a sequence so that its zero position is 0, a doubly infinite sequence. If the first iterate is in the rectangle 1, then in the first position you will put a 1. In this way you can code any particle by a bi-infinite sequence of zeroes and ones. So in dynamics this is called conjugation. You can conjugate the horseshoe map with the shift on the set of bi-infinite sequences. This means that you can code the dynamics. Anything that happens in the set of bi-infinite sequences, happens in the horseshoe and vice versa. This is very interesting because you will find particles that describe any trajectory that you wish because you can write any sequence of zeroes and ones as you wish. You will have all Shakespeare coded in the horseshoe map, all of Donald Trump’s tweets will be there too.
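(Editorial note: here is a small sketch of one standard coordinate model of the horseshoe restricted to the two horizontal strips, together with the 0/1 itinerary coding just described. The particular formulas, stretch by 3 and flip on the upper strip, are a common convention, not necessarily the exact model Rodriguez-Hertz has in mind.)

```python
# One coordinate model of the horseshoe on the two horizontal strips,
# plus the symbolic coding: record which strip each iterate sits in.
def horseshoe(x, y):
    if y <= 1/3:                      # lower strip, labelled 0
        return x / 3, 3 * y
    if y >= 2/3:                      # upper strip, labelled 1 (folded and flipped)
        return 1 - x / 3, 3 - 3 * y
    raise ValueError("point lands in the middle third and gets cut away")

def itinerary(x, y, steps):
    symbols = []
    for _ in range(steps):
        symbols.append(0 if y <= 1/3 else 1)
        x, y = horseshoe(x, y)
    return symbols

print(itinerary(0.0, 0.0, 6))   # a fixed point in the lower strip: [0, 0, 0, 0, 0, 0]
print(itinerary(0.9, 0.3, 6))   # a period-two orbit: [0, 1, 0, 1, 0, 1]
```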
KK: Let’s hope not. Sad!
JRH: Everything will be there.
EL: History of the world, for better and worse.
KK: What about Borges’s Library of Babel? It’s in there too, right?
JRH: If you can code it with zeroes and ones, it’s there.
EL: Yeah, that’s really cool. So where did you first run into this theorem?
JRH: When I was a graduate student, I ran into chaos, and I first ran into a baby model of this, which is the tent map. The tent map is a map of the interval, and that was very cool. Unlike this model, it’s coded by one-sided sequences. And later on, I went to IMPA [Instituto de Matemática Pura e Aplicada] in Rio de Janeiro, and I learned that Smale, the author of this example, had produced this example while he was at IMPA in Rio.
KK: Right.
JRH: It was cool. I learned a little more about dynamics, about hyperbolic dynamics, and in fact, now I’m working in partially hyperbolic dynamics, which is very much related to this, so that is why I like it so much.
KK: Yeah, one of my colleagues spends a lot of time in Brazil, and he’s still studying the tent map. It’s remarkable, I mean, it’s such a simple model, and it’s remarkable what we still don’t know about it. And this is even more complicated, it’s a 2-d version.
EL: So part of this show is asking our guests to pair their theorem with something. I have an idea of what you might have chosen to pair with your theorem, but can you tell us what you’ve chosen?
JRH: Yeah, I like this sensitivity to initial conditions because you are defined by your trajectory. That’s pretty cool. For instance, if you consider humans as particles in a system, nowadays in Shenzhen I am the only one who was born in Argentina, lived in Uruguay, and lives in Shenzhen.
EL: Oh wow.
JRH: This is a city of 20 million people. But I am defined by my trajectory. And I’m sure any one of you are defined by your trajectory. If you look at a couple of things in your life, you will discover that you are the only person in the world who has done that. That is something I like. You’re defined, either by what you’ve done, or what you will do.
EL: Your path in life. It’s interesting that you go there because when I was talking to Ami Radunskaya, who also chose a theorem in dynamics, she also talked about how her theorem related to this idea of your path in life, so that’s a fun idea.
JRH: I like it.
KK: Of course, I was thinking about taffy-pulling the whole time you were describing the horseshoe map. You’ve seen these machines that pull taffy, I think they’re patented, and everything’s getting mixed up.
EL: Yeah.
JRH: All of this mixing is what makes us unique.
EL: So you can enjoy this theorem while pondering your life’s path and maybe over a bowl of noodles with some taffy for dessert.
KK: This has been fun. I’d never really thought too much about the horseshoe map. I knew it as this classical example, and I always heard it was so complicated that Smale decided to give up on dynamics, and I’m sure that’s false. I know that’s false. He’s a brilliant man.
JRH: Actually, he’s coming to a conference we’re organizing this year.
EL: Oh, neat.
KK: He’s still doing amazingly interesting stuff. I work in topological data analysis, and he’s been working in that area lately. He’s just a brilliant guy. The Fields Medal was not wasted on him, for sure.
EL: Well thanks a lot for taking the time to talk to us. I really enjoyed talking with you.
JRH: Thank you for inviting me.
[outro]
Evelyn Lamb: Hello and welcome to My Favorite Theorem. I’m your host Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City, Utah. And this is your other host.
Kevin Knudson: Hi, I’m Kevin Knudson, professor of mathematics at the University of Florida. How are you doing, Evelyn?
EL: I’m all right. I am hanging in there in the winter as a displaced Texan.
KK: It’s not even winter yet.
EL: Yeah, well, somehow I manage to make it to the end of the season without dying every year outside of Texas, but yeah, the first few cold days really throw me for a loop.
KK: Well my son’s in college now, and they had snow last week.
EL: Well the south got a bunch of snow. Is he in South Carolina, is that right?
KK: North Carolina, and he’s never driven in snow before, and we told him not to, but of course he did. No incidents, so it was okay.
EL: So we’re very glad to have our guest today, who I believe is another displaced Texan, Francis Su. Francis, would you like to tell us a little bit about yourself?
Francis Su: Hi, Evelyn and Kevin. Sure. I’m a professor of mathematics at Harvey Mudd College, and that’s a small science and engineering school in southern California, and Evelyn is right. I am a displaced Texan from a small town in south Texas called Kingsville.
EL: Okay. I grew up in Dallas. Is Kingsville kind of between Houston and Beaumont?
FS: It’s between Houston and the valley. Closer to Corpus Christi.
EL: Ah, the other side. Many of us displaced Texans end up all over the country and elsewhere in the world.
FS: That’s right. I’m in California now, which means I don’t have to deal with the winter weather that you guys are wrestling with.
KK: I’m in Florida. I’m okay.
EL: Yeah. And you’re currently in the Bay Area at MSRI, so you’re not on fire right now.
FS: That’s right. I’m at the Math Sciences Research Institute. There’s a semester program going on in geometric and topological combinatorics.
KK: Cool.
EL: Yeah, that must be nice. It’s not too long after your presidency of the Mathematical Association of America, so it must be nice to not have those responsibilities and be able to just focus on research at MSRI this semester.
FS: That’s right. It was a way of hopping back into doing research after a couple of years doing some fun work for the MAA.
EL: So, what is your favorite theorem? We would love to hear it.
FS: You know, I went around and around with this because as mathematicians we have lots of favorite theorems. The one I kept coming back to was the Brouwer fixed point theorem.
KK: I love this theorem.
FS: Yes, so the Brouwer fixed point theorem is an amazing theorem. It’s about a hundred years old. It shows up in all sorts of unexpected places. But what it loosely says is if you have a continuous function from a ball to itself—and I’ll say what a ball means in a minute—it must have a fixed point, a point that doesn’t move. And a ball can be anything that basically has no holes.
EL: So anything you can make out of clay without punching a hole in it, or snaking it around and attaching two ends of it together. I’m gesturing with my hands. That’s very helpful for our podcast listeners.
KK: Right.
FS: Exactly.
KK: We don’t even need convexity, right? You can have some kind of dimpled blob and it still works.
FS: That’s right. It could be a blob with a funny shape. As long as it can be deformed to something that’s a ball, the ball has no holes, then the theorem applies. And a continuous function would be, one way of thinking about a continuous function from a ball to itself is let’s deform this blob, and as long as we deform the blob so that the blob stays within itself, then some point of the blob doesn’t move. A very popular way of describing this theorem is if you take a cup of coffee, let’s say I have a cup of coffee and I take a picture of it. Then slosh the coffee around in a continuous fashion and then take another picture. There is going to be a point in the coffee that is in the same spot in both pictures. It might have moved around in between, but there’s going to be a point that’s in the same spot in both pictures. And then if I move that point out of its original position, I can’t help but move some other point into its original position.
EL: Yeah, almost like a reverse diagonalization. In diagonalization you show that there’s a problem because anything you thought you could get on your list, you show that something else, even if you stick it on the list, something else is not on the list still. Here, you’re saying even if you think, if I just had one fixed point, I could move it and then I wouldn’t have any, you’re saying you can’t do that without adding some other fixed point.
FS: That’s right. The coffee cup sloshing example is a nice one because you can see that if I take the cup of coffee and I just empty it and pour the liquid somewhere else, clearly there’s not going to be a fixed point. So you sort of see the necessity of having the ball, the coffee, mapped to itself.
KK: And if you had a donut-shaped cup of coffee, this would not be true, right? You could swirl it around longitudinally and nothing would be fixed.
FS: That’s right. If you had a donut-shaped coffee mug, you could do that. That’s right. The continuity is kind of interesting. Another way I like to think about this theorem is if you take a map of Texas and you crumple it up and drop it somewhere in Texas, there’s a point in the map that’s exactly above the point it represents in Texas. So that’s sort of a two-dimensional version of this theorem. And you see the necessity of continuity because if I tore the map in two pieces and threw east Texas into west Texas and west Texas into east Texas, it wouldn’t be true that there would be a point exactly above the point it represents. So continuity is really important in this theorem as well.
KK: Right. You know, for fun, I put the one-dimensional version of this as a bonus question on a calculus test this semester.
FS: I like that version. Are you referring to graphing this one-dimensional function?
KK: Right, so if you have a continuous map from the unit interval to itself, it has a fixed point. This case is nice because it’s just a consequence of the intermediate value theorem.
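(Editorial note: the one-dimensional case lends itself to a tiny demonstration. For a continuous f mapping [0, 1] into itself, g(x) = f(x) - x is nonnegative at 0 and nonpositive at 1, so bisection homes in on a fixed point guaranteed by the intermediate value theorem. The cosine map below is just one example of such an f.)

```python
# Bisection on g(x) = f(x) - x: g(0) >= 0 and g(1) <= 0 for any continuous
# f: [0, 1] -> [0, 1], so the sign change pins down a fixed point.
import math

def fixed_point(f, tol=1e-10):
    lo, hi = 0.0, 1.0
    if f(lo) == lo:
        return lo
    if f(hi) == hi:
        return hi
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) - mid > 0:
            lo = mid            # f(mid) > mid, so a fixed point lies to the right
        else:
            hi = mid
    return (lo + hi) / 2

print(fixed_point(math.cos))    # about 0.7390851, since cos maps [0, 1] into itself
```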
FS: Yes, that’s a great one. I love that.
KK: But in higher dimensions you need a little more fire power.
FS: Right. So yeah, this is a fun theorem because it has all sorts of maybe surprising versions. I told you one of the popular versions with coffee. It can be used, for instance, to prove the fundamental theorem of algebra, that every nonconstant polynomial has a root in the complex numbers.
EL: Oh, interesting! I don’t think I knew that.
KK: I’m trying to think of that proof.
FS: Yeah, so the idea here is that if you think about a polynomial as a function and you’re thinking of this as a function on the complex plane, basically it takes a two-dimensional region like Texas and maps it in some fashion back onto the plane. And you can show that there’s a region in this map that gets sent to itself, roughly speaking. That’s one way to think about what’s going on. And then the existence of a zero corresponds to a fixed point of a continuous function, which I haven’t named but that’s sort of the idea.
EL: Interesting. That’s nice. It’s so cool how, at least if I’m remembering correctly, all the proofs I know of the fundamental theorem of algebra are topological. It’s nice, I think, for topology to get to throw an assist to algebra. Algebra has helped topology so much.
FS: I love that too. I guess I’m attracted to topology because it says a lot of things that are interesting about the existence of certain things that have to happen. One of the things that’s going on at this program at MSRI, as the name implies, geometric and topological combinatorics, people are trying to think about how to use topology to solve problems in combinatorics, which seems strange because combinatorics feels like it just has to do with counting discrete objects.
EL: Right. Combinatorics feels very discrete, and topology feels very continuous, and how do you get that to translate across that boundary? That’s really interesting.
FS: I’ll give you another example of a surprising application. In the 1970s, actually people studied this game called Hex for a while. I guess Hex was developed in the ‘40s or ‘50s. Hex is a game that’s played on a board with hexagonal tiles, a diamond-shaped board. Two players take turns, X and O, and they’re trying to construct a chain from one side of the board to the other, to the opposite side. You can ask the question: can that game ever end in a draw configuration where nobody wins? For large boards, it’s not so obvious that the game can’t end in a draw. But in a spectacular application of the Brouwer fixed-point theorem, you can show that it can’t end in a draw.
EL: Oh, that’s so cool.
KK: That is cool. And allegedly this game was invented by John Nash in the men’s room at Princeton, right?
FS: Yes, there’s some story like that, though I think it actually dates back to somebody before.
KK: Probably. But it’s a good story, right, because Nash is so famous.
EL: So was it love at first sight with the Brouwer fixed-point theorem for you, or how did you come across it and grow to love it?
FS: I guess I encountered it first as an undergraduate in college when a professor of mine, a topology professor of mine, showed me this theorem, and he showed me a combinatorial way to prove this theorem, using something known as Sperner’s lemma. There’s another connection between topology and combinatorics, and I really appreciated the way you could use combinatorics to prove something in topology.
EL: Cool.
KK: Very cool.
KK: You know, part of our show is we ask our guest to pair their theorem with something. So what have you chosen to pair the Brouwer fixed-point theorem with?
FS: I’d like to pair it with parlor games. Think of a game like chess, or think of a game like rock-paper-scissors. It turns out that the Brouwer fixed-point theorem is also related to how you play a game optimally, a game like chess or rock-paper-scissors optimally.
KK: So how do you get the optimal strategy for chess from the Brouwer fixed-point theorem?
FS: Very good question. So the Brouwer fixed-point theorem can’t tell you what the optimal strategy is.
KK: Just that it exists, right, yeah.
FS: It tells you that there is a pair of optimal strategies that players can play to play the game optimally. What I’m referring to is something known as the Nash equilibrium theorem. Nash makes another appearance in this segment. What Nash showed is that if you have a game, well there’s this concept called the Nash equilibrium. The question Nash asked is if you’re looking at some game, can you predict how players are going to play this game? That’s one question. Can you prescribe how players should play this game? That’s another question. And a third question is can you describe why players play a game a certain way? So there’s prediction, description, and prescription about games that mathematicians and economists have gotten interested in. And what Nash proposed is that in fact something called a Nash equilibrium is the best way to describe, prescribe, and predict how people are going to play a game. And the idea of a Nash equilibrium is very simple, it’s just players playing strategies that are mutually best responses to each other. And it turns out that if you allow what are called mixed strategies, every finite game has an equilibrium, which is kind of surprising. It means that you could maybe suggest to people what the best course of action is to play. There is some pair of strategies by both players, or by all players if it’s a multiplayer game, that actually are mutual best replies. People are not going to have an incentive to change their strategies by looking at the other strategies.
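(Editorial note: a tiny illustration of mutual best responses, using the textbook example of rock-paper-scissors rather than anything specific from the episode: against the uniform mix (1/3, 1/3, 1/3), every pure strategy earns the same expected payoff, so no player gains by deviating.)

```python
# Check that the uniform mixed strategy is a Nash equilibrium of
# rock-paper-scissors: all pure replies have equal expected payoff.
import numpy as np

# Row player's payoffs; rows and columns ordered rock, paper, scissors.
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]])

opponent_mix = np.array([1/3, 1/3, 1/3])
expected_payoffs = A @ opponent_mix
print(expected_payoffs)   # [0. 0. 0.] -- no pure deviation does better
```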
KK: The Brouwer fixed point theorem is so strange because it’s one of those existence things. It just says yeah, there is a fixed point. We tend to prove it by contradiction usually, or something. There’s not really any good constructive proofs. I guess you could just pick a point and start iterating. Then by compactness what it converges to is a fixed point.
FS: There is actually, maybe this is a little surprising as well, this theorem I mention learning as an undergrad, it’s called Sperner’s lemma, it actually has a constructive proof, in the sense that there’s an efficient way of finding the combinatorial object that corresponds to a fixed point. What’s surprising is that you can actually in many places use this constructive combinatorial proof to find, or get close to, a proposed fixed point.
KK: Very cool.
FS: That’s kind of led to a whole bunch of research in the last 40 years or so in various areas, to try to come up with constructive versions of things that prior to that people had thought of as non-constructive.
EL: Oh, that’s so cool. I must admit I did not have proper appreciation for the Brouwer fixed-point theorem before, so I’m very glad we had you on. I guess I kind of saw it as this novelty theorem. You see it often as you crumple up the map, or do these little tricks. But why did I really care that I could crumple up the map? I didn’t see all of these connections to these other points. I am sorry to the Brouwer fixed-point theorem for not properly appreciating it before now.
FS: Yes. I think it definitely belongs on a top ten list of top theorems in mathematics. I wonder how many mathematicians would agree.
KK: I read this book once, and the author is escaping me and I’m kind of embarrassed because it’s on the shelf in my other office, called Five Golden Rules. Have you ever seen this book? It was maybe 10 or 15 years ago.
EL: No.
KK: One of the theorems, there are like five big theorems in mathematics, it was the Brouwer fixed-point theorem. And yeah, it’s actually of fundamental importance to know that you have fixed points for maps. They are really important things. But the application he pointed to was to football ranking schemes, right? Because that’s clearly important. College football ranking schemes in which in essence you’re looking for an eigenvector of something, and an eigenvector with eigenvalue 1 is a fixed point, and of course the details are escaping me now. This book is really well-done. Five Golden Rules.
EL: We’ll find that and put it in the show notes for sure.
FS: I haven’t heard of that. I should look that one up.
KK: It’s good stuff.
FS: I’ll just mention with this Nash theorem, the basic idea of using the Brouwer fixed-point theorem to prove it is pretty simple to describe. It’s that if you look at the set of all collections of strategies, if they’re mixed strategies allowing randomization, then in fact that space is a ball.
KK: That makes sense.
FS: And then the cool thing is if players have an incentive to deviate, to change their strategies, that suggests a direction in which each point could move. If they want to deviate, it suggests a motion of the ball to itself. And the fact that the ball has a fixed point means there’s a place where nobody is incentivized to change their strategy.
EL: Yeah.
KK: Well I’ve learned a lot. And I even knew about the Brouwer fixed-point theorem, but it’s nice to learn about all these extra applications. I should go learn more combinatorics, that’s my takeaway.
EL: Yeah, thanks so much for being on the show, Francis. If people want to find you, there are a few places online that they can find you, right? You’re on Twitter, and we’ll put a link to your Twitter in the show notes. You also have a blog, and I’m sorry I just forgot what it’s called.
FS: The Mathematical Yawp.
EL: That’s right. We’ll put that in the show notes. I know there are a lot of posts of yours that I’ve really appreciated, especially the ones about helping students thrive, doing math as a way for humans to grow as people and helping all students access that realm of learning and growth. I know those have been influential in the math community and fun to read and hear.
Kevin Knudson: Welcome to My Favorite Theorem, a podcast about mathematics and everyone’s favorite theorem. I’m your host Kevin Knudson, professor of mathematics at the University of Florida. This is your other host.
Evelyn Lamb: Hi, I’m Evelyn Lamb, a freelance math and science writer in Salt Lake City. So how are things going, Kevin?
KK: Okay. We’re hiring a lot, and so I haven’t eaten a meal at home this week, and maybe not last week either. You think that might be fun until you’re in the middle of it. It’s been great meeting all these new people, and I’m really excited about getting some new colleagues in the department. It’s a fun time to be at the University of Florida. We’re hiring something like 500 new faculty in the next two years.
EL: Wow!
KK: It’s pretty ambitious. Not in the math department.
EL: Right.
KK: I wish. We could solve the mathematician glut just like that.
EL: Yeah, that would be great.
KK: How are things in Salt Lake?
EL: Pretty good. It’s a warm winter here, which will be very relevant to our listeners when they listen in the summer. But it’s hiring season at the University of Utah, where my spouse works. He’s been doing all of that handshaking.
KK: The handshaking, the taking candidates to the dean and showing them around, it’s fun. It’s good stuff. Anyway, enough about that. I’m excited about today’s guest. Today we are pleased to welcome Emily Riehl from Johns Hopkins. Hi, Emily.
Emily Riehl: Hi.
KK: Tell everyone about yourself.
ER: Let’s see. I’ve known I wanted to be a mathematician since I knew that that was a thing that somebody could be, so that’s what I’m up to. I’m at Johns Hopkins now. Before that I was a postdoc at Harvard, where I was also an undergraduate. My Ph.D. is from Chicago. I was a student of Peter May, an algebraic topologist, but I work mostly in category theory, and particularly in category theory as it relates to homotopy theory.
KK: So how many students does Peter have? Like 5000 or something?
ER: I was his 50th, and that was seven years ago.
EL: Emily and I have kind of a weird connection. We’ve never actually met, but we both lived in Chicago and I kind of replaced Emily in a chamber music group. I played with Walter and the gang I guess shortly after you graduated. I moved there in 2011. They’re like, oh, you must know Emily Riehl because you’re both mathematicians who play viola. I was like, no, but that sounds like someone I should know, because violists are all the best people.
KK: So, Emily, you’ve told us your theorem in advance, and I’ve had time to think about it but still haven’t thought of my favorite application of it. But what’s your favorite theorem?
ER: I should confess: my favorite theorem is not the theorem I want to talk about today. Maybe I’ll talk about what I don’t want to talk about briefly if you’ll indulge me.
KK: Sure.
ER: So I’m a category theorist, and every category theorist’s favorite theorem is the Yoneda lemma. It says that a mathematical object of some kind is uniquely determined by the relationships that it has to all other objects of the same type. In fact, it’s uniquely characterized in two different ways. You can either look at maps from the object you’re trying to understand or maps to the object you’re trying to understand, and either way suffices to determine it. This is an amazing theorem. There’s a joke in category theory that all proofs are the Yoneda lemma. I mean, all proofs [reduce] to the Yoneda lemma. The reason I don’t want to talk about it today is two-fold. Number one, the discussion might sound a little more philosophical than mathematical because one thing that the Yoneda lemma does is it orients the philosophy of category theory. Secondly, there’s this wonderful experience you have as a student when you see the Yoneda lemma for the first time because the statement you’ll probably see is not the one I just described but sort of a weirder one involving natural transformations from representable functors, and you see them, and you’re like, okay, I guess that’s plausible, but why on earth would anyone care about that? And then it sort of dawns on you over however many years, in my case, why it’s such a profound and useful observation. And I don’t want to ruin that experience for anybody.
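[Editor's note: the “weirder” statement, for the curious (standard notation): for a functor F: C → Set and an object A of C, the Yoneda lemma gives a bijection, natural in both A and F,
\[
\mathrm{Nat}\big(\mathrm{Hom}_{\mathcal C}(A, -),\, F\big) \;\cong\; F(A).
\]
Taking F to be another representable functor recovers the informal statement just given: maps out of A (or, dually, into A) determine A up to isomorphism.]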
KK: You’re not worried about getting excommunicated, right?
ER: That’s why I had to confess. I was joking with some category theorists, I was just in Sydney visiting the Center of Australian Category Theory, which is the name of the group, and it’s also the center of Australian category theory. And I want to be invited back, so yes, of course, my favorite theorem is the Yoneda lemma. But what I want to talk about today instead is a theorem I really like because it’s a relatively simple idea, and it comes up all over mathematics. Once it’s a pattern you know to look for, it’s quite likely that you’ll stumble upon it fairly frequently. The proof, it’s a general proof in category theory, specializes in each context to a really nice argument in that particular context. Anyway, the theorem is called right adjoints preserve limits.
EL: All right.
KK: So I’m a topologist, so to me, we put a modifier in front of our limit, so there’s direct and inverse. And limit in this context means inverse limit, right?
ER: Right. That’s the historical terminology for what category theorists call limits.
KK: So I always think of inverse limits as essentially products, more or less, and direct limits are unions, or direct sum kinds of things. Is that right?
ER: Right.
KK: I hope that’s right. I’m embarrassed if I’m wrong.
ER: You’re alluding to something great in category theory, which is that when you prove a theorem, you get another theorem for free, the dual theorem. A category is a collection of objects and a collection of transformations between them that you depict graphically as arrows. Kind of like in projective geometry, you can dualize the axioms, you can turn around the direction of the arrows, and you still have a category. What that means is that if you have a theorem in category theory that says for all categories blah blah blah, then you can apply that in particular to the opposite category where things are turned around. In this case, there are secretly two categories involved, so we have three dual versions of the original theorem, the most useful being that left adjoints preserve colimits, which are the direct limits that you’re talking about. So whether they’re inverse limits or direct limits, there’s a version of this theorem that’s relevant to that.
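[Editor's note: in symbols, if F ⊣ G, meaning F is left adjoint to G, then for any diagram D,
\[
G\Big(\lim_j D_j\Big) \;\cong\; \lim_j G(D_j) \qquad \text{and} \qquad F\Big(\operatorname*{colim}_j D_j\Big) \;\cong\; \operatorname*{colim}_j F(D_j).
\]
In the topologist’s language: right adjoints preserve inverse limits (products, pullbacks, and the like), and left adjoints preserve direct limits (coproducts, pushouts, and the like).]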
KK: Do we want to unpack what adjoint functors are?
ER: Yes.
EL: Yeah, let’s do that. For those of us who don’t really know category theory.
ER: Like anything, it’s a language that some people have learned to speak and some people are not acquainted with yet, and that’s totally fine. Firstly, a category is a type of mathematical object, basically it’s a theory of mathematical objects. We have a category of groups, and then the transformations between groups are the group homomorphisms. We have a category of sets and the functions between them. We have a category of spaces and the continuous functions. These are the categories. A morphism between categories is something called a functor. It’s a way of converting objects of one type to objects of another type, so a group has an underlying set, for instance. A set can be regarded as a discrete space, and these are the translations.
So sometimes if you have a functor from one category to another and another functor going back in the reverse direction, those functors can satisfy a special dual relationship, and this is a pair of adjoint functors. One of them gets called a left adjoint, and one of them the right adjoint. What the duality says is that if you look at maps out of the image of the left adjoint, then those correspond bijectively and naturally (which is a technical term I’m not going to get into) to maps in the other category into the image of the right adjoint. So maps in one category out of the image of the left adjoint correspond naturally to maps in the other category into the image of the right adjoint. So let me just mention one prototypical example.
KK: Yeah.
ER: So there’s a free and forgetful construction. So I mentioned that a group has an underlying set. The reverse process takes a set and freely makes a group out of that set, so the elements of that group will be words in the letters and formal inverses modulo some relation, blah blah blah, but the special property of these free groups is if I look at the group homomorphism that’s defined on a free group, so this is a map in the category of groups out of an object in the image of the left adjoint, to define that I just have to tell you where the generators go, and I’m allowed to make those choices freely, and I just need to find a function of sets from the generating set into the underlying set of the group I’m mapping into.
KK: Right.
ER: That’s this adjoint relationship. Group homomorphisms from a free group to whatever group correspond to functions from the generators of that free group to that underlying set of the group.
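[Editor's note: the defining bijection of an adjunction, in symbols:
\[
\mathrm{Hom}_{\mathcal D}(F A,\, B) \;\cong\; \mathrm{Hom}_{\mathcal C}(A,\, G B),
\]
natural in A and B, where F: C → D is the left adjoint and G: D → C the right adjoint. For the free–forgetful example just described: group homomorphisms F(S) → H out of the free group on a set S correspond to set functions S → U(H) into the underlying set of H.]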
EL: I always feel like I’m about to drown when I try to think about category theory. It’s hard for me to read category theory, but when people talk to me about it, I always think, oh, okay, I see why people like this so much.
KK: Reading category theory is sort of like the whole picture being worth a thousand words thing. The diagrams are so lovely, and there’s so much information embedded in a diagram. Category theory used to get a bad rap, abstract nonsense or whatever, but it’s shown to be incredibly powerful, certainly as an organizing principle but also just in being able to help us push boundaries in various fields. Really if you think about it just right, if you think about things as functors, lots of things come out, almost for free. It feels like for free, but the category theorist would say, no, there’s a ton of work there. So what’s a good example of this particular theorem?
ER: Before I go there, exactly to this point, there’s a great quote by Eilenberg and Steenrod. So Eilenberg was one of the founders of category theory. He and Saunders Mac Lane wrote a paper, the “General Theory of Natural Equivalences,” in the ’40s that defined these categories and functors and also the notion of naturality that I was alluding to. They thought that was going to be both the first and last paper on the subject. Anyway, ten years later, Eilenberg and Steenrod wrote this book, Foundations of Algebraic Topology, that incorporated these diagrammatic techniques into a pre-existing mathematical area, algebraic topology. It had been around since at least the beginning of the twentieth century, I’d say. So they write, “the diagrams incorporate a large amount of information. Their use provides extensive savings in space and in mental effort. In the case of many theorems, the setting up of the correct diagram is a major part of the proof. We therefore urge that the reader stop at the end of each theorem and attempt to construct for himself (it’s a quote here) the relevant diagram before examining the one which is given in the text. Once this is done, the subsequent demonstration can be followed more readily. In fact, the reader can usually supply it himself.”
KK: Right. Like proving Mayer-Vietoris, for example. You just set up the right diagram, and in principle it drops out, right?
ER: Right, and in general in category theory, the definitions, the concepts are the hard thing. The proofs of the theorems are generally easier. And in fact, I’d like to prove my favorite theorem for you. I’m going to do it in a particular example, and actually I’m going to do it in the dual. So I’m going to prove that left adjoints preserve colimits.
EL: Okay.
ER: The statement I’m going to prove, the specific statement I’m going to prove by using the proof that left adjoints preserve colimits, is that for natural numbers a, b, and c, I’m going to prove that a(b+c)=ab+ac.
KK: Distributive law, yes!
ER: Distributive property of multiplication over addition. So how are we going to prove this? The first thing I’m going to do is categorify my natural numbers. And what is a natural number? It’s a cardinality of a finite set. In place of the natural numbers a, b, and c, I’m going to think about sets, which I’ll also call A, B, and C. The natural numbers stand for the cardinality of these sets.
EL: Cardinality being the size, basically.
ER: Absolutely. A, B, and C are now sets. If we’re trying to prove this statement about natural numbers, they’re finite sets. The theorem is actually true for arbitrary sets, so it doesn’t matter. And I’ve replaced a, b, and c by sets. Now I have this operation “times” and this operation “plus,” so I need to categorify those as well. I’m going to replace them by operations on sets. So what’s something you can do to two sets so that the cardinalities add, so that the sizes add?
KK: Disjoint union.
EL: Yeah, you could union them.
ER: So disjoint union is going to be my interpretation of the symbol plus. And we also need an interpretation of times, so what can I do for sets to multiply the cardinalities?
EL: Take the product, or pairs of elements in each set.
ER: That’s right. Absolutely. So we have the cartesian product of sets and the disjoint union of sets. The statement is now for any sets A, B, and C, I’m going to prove that if I take the disjoint union B+C, and then form the cartesian product with A, then that set is isomorphic to, has in particular the same number of elements as, the set that you’d get by first forming the products A times B and A times C and then taking the disjoint union.
KK: Okay.
ER: The disjoint union here is one of these colimits, one of these direct limits. When you stick two things next to each other — coproduct would be the categorical term — this is one of these colimits. The act of multiplying a set by a fixed set A is in fact a left adjoint, and I’ll make that a little clear as I make the argument.
EL: Okay.
ER: Okay. So let’s just try and begin. So the way I’m going to prove that A times (B+C) is (AxB) +(AxC) is actually using a Yoneda lemma-style proof because the Yoneda lemma comes up everywhere. We know that these sets are isomorphic by arguing that functions from them to another set X correspond. So if the sets have exactly the same functions to every other set, then they must be isomorphic. That’s the Yoneda lemma. Let’s now consider a function from the set A times the disjoint union of B+C to another set X. The first thing I can do with such a function is something called currying, or maybe uncurrying. (I never remember which way these go.) I have a function here of two variables. The domain is the set A times the disjoint union (B+C). So I can instead regard this as a function from the set (B+C), the disjoint union, into the set of functions from A to X.
KK: Yes.
ER: Rather than have a function from A times (B+C) to X, I have one from (B+C) to functions from A to X. There I’ve just transposed across the adjunction. That was the adjunction bit. So now I have a function from the disjoint union B+C to the set of functions from A to X. Now when I’m mapping out of a disjoint union, that just means a case analysis. To define a function like this, I have to define firstly a function from B to functions from A to X, and also one from C to functions from A to X. So now a single function is given by these two functions. And if I look at the B piece now, which is a function from B to functions from A to X, by this uncurrying thing, that’s equally just a function from A times B to X. Similarly on the C piece, my function from C to functions from A to X is just a function from A times C to X. So now I have a function from A times B to X and also one from A times C to X, and those amalgamate to form a single function from the disjoint union of A times B and A times C to X. So in summary, functions from A times the disjoint union (B+C) to X correspond in this way to functions from (A×B) disjoint union (A×C) to X, and therefore the sets A times (B+C) and (A×B) plus (A×C) are isomorphic.
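[Editor's note: the chain of natural bijections in this argument, written out:
\[
\mathrm{Set}\big(A \times (B+C),\, X\big) \;\cong\; \mathrm{Set}\big(B+C,\, X^A\big) \;\cong\; \mathrm{Set}\big(B,\, X^A\big) \times \mathrm{Set}\big(C,\, X^A\big) \;\cong\; \mathrm{Set}(A \times B,\, X) \times \mathrm{Set}(A \times C,\, X) \;\cong\; \mathrm{Set}\big((A \times B) + (A \times C),\, X\big),
\]
where X^A is the set of functions from A to X. Each step is natural in X, so the Yoneda-style argument concludes that A × (B+C) ≅ (A × B) + (A × C).]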
EL: And now I feel like I know a category theory proof.
ER: So what’s great about that proof is that it’s completely independent of the context. It’s all about the formal relationships between the mathematical objects, so if you want to interpret A, B, and C as vector spaces and plus as the direct sum, which you might as an example of a colimit, and times as the tensor product, I’ve just proven that the tensor product distributes over direct sums, say for modules over a commutative ring. That’s a much more complicated setting, but the exact same argument goes through. And of course there are lots of other examples of limits and colimits. One thing that kind of mystified me as an undergraduate is that if you have a function between sets, the inverse image preserves both unions and intersections, whereas the direct image preserves only unions and not intersections. And there’s a reason for that. The inverse image is a functor between these poset categories of subsets, and it admits both a left and a right adjoint, so it preserves all limits and all colimits, both intersections and unions, whereas the direct image, which is its left adjoint, only preserves the colimits.
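[Editor's note: the image/preimage remark, concretely: for f: X → Y and subsets U, V of Y,
\[
f^{-1}(U \cup V) = f^{-1}(U) \cup f^{-1}(V), \qquad f^{-1}(U \cap V) = f^{-1}(U) \cap f^{-1}(V),
\]
while for subsets S, T of X the direct image satisfies f(S ∪ T) = f(S) ∪ f(T) but in general only f(S ∩ T) ⊆ f(S) ∩ f(T).]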
KK: Right. So here’s the philosophical question. You didn’t want to get philosophical, but here it is anyway. So category theory in a lot of ways reminds me of the new math. We had this idea that we were going to teach set theory to kindergarteners. Would it be the right way to teach mathematics? So you mention all of these things that sort of drop out of this rather straightforward fact. So should we start there? Or should we develop this whole library? The example of tensor products distributing over direct sums, I mean, everybody’s seen a proof of that in Atiyah and Macdonald or whatever, and okay, fine, it works. But wouldn’t it be nice to just get out your sledgehammer and say, look, limits and adjoints commute. Boom!
ER: So I give little hints of category theory when I teach undergraduate point-set topology. So in Munkres, chapter 2 is constructing the product topology, constructing the quotient topology, constructing subspace topologies, and rather than treat these all as completely separate topics, I group all the limits together and group all the colimits together, and I present the features of the constructions. This is the coarsest topology so that such and such maps are continuous, this is the finest topology so that the dual maps are continuous. I don’t define limit or colimit. Too much of a digression. In teaching abstract algebra to undergraduates, I do say a little bit about categories. I guess I think it’s useful to precisely understand function composition before getting into technical arguments about group homomorphisms, and the first isomorphism theorem is essentially the same for groups and for rings and for modules, and if we’re going to see the same theorem over and over again, we should acknowledge that that’s what happens.
KK: Right.
ER: I think category theory is not hard. You can teach it on day one to undergraduates. But appreciating what it’s for takes some mathematical sophistication. I think it’s worth waiting.
EL: Yeah. You need to travel on the path a little while before bringing that in, seeing it from that point of view.
ER: The other thing to acknowledge is it’s not equally relevant to all mathematical disciplines. In algebraic geometry, you can’t even define the basic objects of study anymore without using categorical language, but that’s not true for PDEs.
KK: So another fun thing we like to do on this podcast is ask our guest to pair their theorem with something. So what have you chosen to pair this theorem with?
ER: Right. In honor of the way Evelyn and I almost met, I’ve chosen a piece that I’ve loved since I was in middle school. It’s Benjamin Britten’s Simple Symphony, the third movement, which is the Sentimental Sarabande. The reason I love this piece: so, Benjamin Britten is a British composer. I found out when I was looking this up this morning that he composed this when he was 20.
EL: Wow.
ER: The themes that he used, it’s pretty easy to understand. It isn’t dark, stormy classical music. The themes are relatively simple, and they’re things I think he wrote as a young teenager, which is insane to me. What I love about this piece is that it starts, it’s for string orchestra, so it’s a simple mix of different textures. It starts in this stormy, dramatic, unified fashion where the violins are carrying the main theme, and the cellos are echoing it in a much deeper register. And when I played this in an orchestra, I was in the viola section, I think I was 13 or so, and the violas sort of never get good parts. I think the violists in the orchestra are sort of like category theory in mathematics. If you take away the viola section, it’s not like a main theme will disappear, but all of a sudden the orchestra sounds horrible, and you’re not sure why. What’s missing? And then very occasionally, the clouds part, and the violas do get to play a more prominent role. And that’s exactly what happens in this movement. A few minutes in, it gets quiet, and then all of a sudden there’s this beautiful viola soli, which means the entire viola section gets to play this theme while the rest of the orchestra bows out. It’s this really lovely moment. The violas will all play way too loud because we’re so excited. [music clip] Then of course, 16 bars later, the violins take the theme away. The violins get everything.
EL: Yeah, I mean it’s always short-lived when we have that moment of glory.
ER: I still remember, I haven’t played this in an orchestra for 20 years now, but I still remember it like it was yesterday.
EL: Yeah, well I listened to this after you shared it with us over email, and I turned it on and then did something else, and the moment that happened, I said, oh, this is the part she was talking about!
KK: We’ll be sure to highlight that part.
EL: I must say, the comparison of category theory to violists is the single best way to get me to want to know more about category theory. I don’t know how effective it is for other people, but you hooked me for sure.
KK: We also like to give our guests a chance to plug whatever they’re doing. When did your book come out? Pretty recently, a year or two ago?
EL: You’ve got two of them, right?
ER: I do. My new book is called Category Theory in Context, and the intended audience is mathematicians in other disciplines. So you know you like mathematics. Why might category theory be relevant? Actually, in the context of my favorite theorem, the proof that right adjoints preserve limits is actually the watermark on the book.
KK: Oh, nice.
ER: I had nothing to do with that. Whoever the graphic designer is, like you said, the diagrams are very pretty. They pulled them out, and that’s the watermark. It’s something I’ve taught at the advanced undergraduate or beginning graduate level. It was a lot of fun to write. Something interesting about the writing process is I wanted a category theory book that was really rich with compelling examples of the ideas, so I emailed the category theory mailing list, I posted on a category theory blog, and I just got all these wonderful suggestions from colleagues. For instance, row reduction: the fact that an elementary row operation can be implemented by multiplication by an elementary matrix, and that you get that elementary matrix by performing the row operation on the identity matrix, that’s the Yoneda lemma.
KK: Wow, okay.
ER: A colleague friend told me about that example, so it’s really a kind of community effort in some sense.
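[Editor's note: a minimal NumPy sketch of the row-reduction remark; the matrix and the particular row operation below are our own made-up example, just to illustrate the elementary-matrix fact, not anything from the book.]
```python
import numpy as np

# A sample matrix to operate on (any 3-row matrix works here).
A = np.array([[2.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])

# Row operation: add 1.5 times row 0 to row 1.
# (1) Apply it to A directly.
A_direct = A.copy()
A_direct[1] += 1.5 * A_direct[0]

# (2) Apply the same operation to the identity matrix to get the elementary matrix E.
E = np.eye(3)
E[1] += 1.5 * E[0]

# Left multiplication by E performs that row operation on any 3-row matrix.
assert np.allclose(E @ A, A_direct)
print(E)
```
The Yoneda-flavored point is that what the operation does to the identity matrix already determines what it does to every matrix.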
KK: Very cool. And our regular listeners also found out on a previous episode that you’re also an elite athlete. Why don’t you tell us about that a little bit?
ER: So I think I already mentioned the Center of Australian Category Theory. So there’s this really famous category theory group based in Sydney, Australia, and when I was a Ph.D. student, I went for a few months to visit Dominic Verity, who’s now my main research collaborator. It was really an eventful trip. I had been a rugby player in college, so then when I was in Sydney, I thought it might be fun to try this thing called Australian rules football, which I’d heard about as another contact sport, and I just completely fell in love. It’s a beautiful game, in my opinion. So then I came back to the US and looked up Australian rules football because I wanted to keep playing, and it does exist here. It’s pretty obscure. I guess a consequence of that is I was able to play on the US women’s national team. I’ve been doing that for the past seven years, and what’s great about that is occasionally we play tournaments in Australia, so whenever that happens, I get to visit my research colleagues in Sydney, and then go down to Melbourne, which is really the center of footie, and combine these two passions.
EL: We were talking about this with John Urschel, who of course played American football until he recently retired. This is one time when I wish we had a video feed, because of his face when we were trying to explain, two mathematicians who have only sort of seen this on a TV in a bar trying to explain what Australian rules football is. He had this look of bewilderment.
KK: Yeah, I was explaining that the pitch is a big oval and there’s the big posts on the end, and he was like, wait a minute.
EL: His face was priceless there.
KK: It was good. I used to love watching it. I used to watch it in the early days of ESPN. I thought it was just a fun game to watch. Well, Emily, this has been fun. Thanks for joining us.
ER: Thanks for having me. I’ve loved listening to the past episodes, and I can’t wait to see what’s in the pipeline.
KK: Neither can we. I think we’re still figuring it out. But we’re having a good time, too. Thanks again, Emily.
EL: All right, bye.
ER: Bye.
[end stuff]
Kevin Knudson: Welcome to My Favorite Theorem. I’m your host Kevin Knudson, professor of mathematics at the University of Florida. I’m joined by your cohost.
Evelyn Lamb: Hi, I’m Evelyn Lamb. I’m a math and science writer in Salt Lake City, Utah, where it is very cold now, and so I’m very jealous of Kevin living in Florida.
KK: It’s a dreary day here today. It’s raining and it’s “cold.” Our listeners can’t see me doing the air quotes. It’s only about 60 degrees and rainy. It’s actually kind of lousy. but it’s our department holiday party today, and I have my festive candy cane tie on, and I’m good to go. And I’m super excited.
John Urschel: So I haven’t been introduced yet, but can I jump in on this weather conversation? I’m in Cambridge right now, and I must say, I think it’s probably nicer in Cambridge, Massachusetts than it is in Utah right now. It’s a nice breezy day, high 40s, low 50s, put on a little sweater and you’re good to go.
EL: Yeah, I’m jealous of both of you.
KK: Evelyn, I don’t know about you, but I’m super excited about this one. I mean, I’m always excited to do these, but it’s the rare day you get to talk to a professional athlete about math. This is really very cool. So our guest on this episode is John Urschel. John, do you want to tell everyone about yourself?
JU: Yes, I’d be happy to. I think I might actually be the only person, the only professional athlete you can ask about high-level math.
KK: That might be true. Emily Riehl, Emily Riehl counts, right?
EL: Yeah.
KK: She’s a category theorist at Johns Hopkins. She’s on the US women’s Australian rules football team.
EL: Yeah,
JU: Australian rules football? You mean rugby?
KK: Australian rules football is like rugby, but it’s a little different. See, you guys aren’t old enough. I’m old enough to remember ESPN in the early days when they didn’t have the high-end contracts, they’d show things like Australian rules football. It’s fascinating. It’s kind of like rugby, but not really at the same time. It’s very weird.
JU: What are the main differences?
EL: You punch the ball sometimes.
KK: They don’t have a scrum, but they have this thing where they bounce the ball really hard. (We should get Emily on here.) They bounce the ball up in the air, and they jump up to get it. You can run with it, and you can sort of punch the ball underhanded, and you can kick it through these three posts on either end [Editor's note: there are 4 poles on either end.]. It’s sort of this big oval-shaped field, and there are three poles at either end, and you try to kick it. If you get it through the middle pair, that’s a goal. If you get it on either of the sides, that’s called a “behind.” The referees wear a coat and tie and a little hat. I used to love watching it.
JU: Wait, you say the field is an oval shape?
KK: It’s like an oval pitch, yeah.
JU: Interesting.
KK: Yeah. You should look this up. It’s very cool. It is a bit like rugby in that there are no pads, and they’re wearing shorts and all of that.
JU: And it’s a very continuous game like rugby?
KK: Yes, very fast. It’s great.
JU: Gotcha.
KK: Anyway, that’s enough of us. You didn’t tell us about yourself.
JU: Oh yeah. My name is John Urschel. I’m a retired NFL offensive lineman. I played for the Baltimore Ravens. I’m also a mathematician. I am getting my Ph.D. in applied math at MIT.
KK: Good for you.
EL: Yeah.
KK: Do you miss the NFL? I don’t want to belabor the football thing, but do you miss playing in the NFL?
JU: No, not really. I really loved playing in the NFL, and it was a really amazing experience to be an elite, elite at whatever sport you love, but at the same time I’m very happy to be focusing on math full-time, focusing on my Ph.D. I’m in my third year right now, and being able to sort of devote more time to this passion of mine, which is ideally going to be my lifelong career.
EL: Right. Yeah, so not to be creepy, but I have followed your career and the writing you’ve done and stuff like that, and it’s been really cool to see what you’ve written about combining being an athlete with being a mathematician and how you’ve changed your focus as you’ve left playing in the NFL and moved to doing this full-time. It’s very neat.
KK: So, John, what’s your favorite theorem?
JU: Yes, so I guess this is the name of the podcast?
KK: Yeah.
JU: So I should probably give you a theorem. So my favorite theorem is a theorem by Batson, Spielman, and Srivastava.
EL: No, I don’t. Please educate us.
JU: Good! So this is perfect because I’m about to introduce you to my mathematical idol.
KK: Okay, great.
JU: Pretty much who I think is the most amazing applied mathematician of this generation, Dan Spielman at Yale. Dan Spielman got his Ph.D. at MIT. He was advised by Mike Sipser, and he was a professor at MIT and eventually moved to Yale. He’s done amazing work in a number of fields, but this paper, it’s a very elegant paper in applied math that doesn’t really have direct algorithmic applications but has some elegance. The formulation is as follows. So suppose you have some graph, vertices and edges. What I want to tell you is that there exists some other weighted graph with at most a constant times the order of the graph many edges, so the number of edges is linear in the number of vertices, whose Laplacian approximates the Laplacian of this original very dense graph, no matter how dense it is.
So I’m doing not the very best job of explaining this, but let me put it like this. You have a graph. It’s very dense. You have this elliptic operator on this graph, and there’s somehow some way to find a graph that’s not dense at all, but extremely, extremely sparse, but somehow with the exact, well not exact, but nearly the exact same properties. These operators are very, very close.
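[Editor's note: the usual formal statement (notation ours): a weighted graph H on the same vertex set is a (1 ± ε) spectral approximation of G if
\[
(1-\varepsilon)\, x^{\top} L_G\, x \;\le\; x^{\top} L_H\, x \;\le\; (1+\varepsilon)\, x^{\top} L_G\, x \qquad \text{for all } x \in \mathbb{R}^n,
\]
where L_G and L_H are the graph Laplacians. Batson, Spielman, and Srivastava show that such an H always exists with only O(n/ε²) edges, however dense G is.]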
KK: Can you remind our reader—readers, our listeners—what the Laplacian is?
JU: Yeah, so the graph Laplacian, what you can do, the way I like to introduce it, especially for people not in graph theory type things, is you can define a gradient on a graph. You take every edge, directed in some way, and you can think of the gradient as being a discrete derivative along the edge. And now, just as you build the Laplacian out of the gradient in the continuous case, you compose this discrete gradient with its adjoint, and that’s how you get your graph Laplacian.
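[Editor's note: a small NumPy sketch of this "Laplacian from a discrete gradient" description; the four-vertex graph, its weights, and the edge orientations below are our own illustration.]
```python
import numpy as np

# A small weighted graph on 4 vertices (made-up example).
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
weights = np.array([1.0, 2.0, 1.0, 0.5])
n = 4

# Signed incidence matrix B: one row per (arbitrarily oriented) edge.
# Acting on a vertex function x, B gives the discrete derivative along each edge.
B = np.zeros((len(edges), n))
for k, (i, j) in enumerate(edges):
    B[k, i], B[k, j] = -1.0, 1.0

# Graph Laplacian L = B^T W B, the graph analogue of div(grad).
L = B.T @ np.diag(weights) @ B

# Sanity check: x^T L x equals the sum over edges of w_ij * (x_i - x_j)^2.
x = np.array([3.0, 1.0, 4.0, 1.0])
quad = sum(w * (x[i] - x[j]) ** 2 for (i, j), w in zip(edges, weights))
assert np.isclose(x @ L @ x, quad)
print(L)
```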
KK: This theorem, so the problem is that dense graphs are kind of hard to work with because, well, they’re dense?
EL: So can I jump in? Dense meaning a lot of edges, I assume?
JU: Lots of edges, as many edges as you want.
KK: So a high degree on every vertex.
JU: Lots of edges, edges going everywhere.
EL: And then with the weighting, that might also mean something like, not that many total edges, but they have a high weight? Does that also make it dense, or is that a different property?
JU: No, in that case, we wouldn’t really consider it very dense.
KK: But the new graph you construct is weighted?
JU: And the old graph can be weighted as well.
KK: All right. What do the weights tell you?
JU: What do you mean?
KK: On the new graph. You generate this new graph that’s more sparse, but it’s weighted. Why do you want the weights? What do the weights get you?
JU: The benefit of the weights is that it gives you additional leeway about how you’re scaling things, because the weights actually come into the Laplacian. For weighted graphs, when you take this Laplacian, it’s roughly the difference, at each node, between the value at that node and the weighted average over its neighbors, and the weights tell you how much each edge counts for. In that way, it allows you greater leeway. If you weren’t able to weight this very sparse graph, this wouldn’t work very well at all.
KK: Right, because like you said, you think of sort of having a gradient on your graph, so this new graph should somehow have the same kind of dynamics as your original.
JU: Exactly. And the really interesting thing is that you can capture these dynamics. Not only can you capture them, but you can capture them with a linear number of edges, linear in the order of the graph.
KK: Right.
JU: So Dan Spielman is famous for many things. One of the things he’s famous for is he was one of the first people to give provable guarantees for algorithms that can solve, like, a Laplacian system of equations in near-linear time, so linear up to some log factors. From his work there have been many, many different sorts of improvements, and this one is extremely interesting to me because you only use a linear number of edges, which implies that, once you have this sparse graph, the technique should be extremely efficient. And that’s exactly what you want: because it’s a linear number of edges, you apply this via some iterative algorithm, and you can use this guy as a sort of preconditioner, and things get very nice. The issue is, I believe—and it has been a little bit since I’ve read the paper—the amount of time it takes to find this graph is cubic.
EL: Okay.
JU: So it’s not a sort of paper where it’s extremely useful algorithmically, I would say, but it is a paper that is very beautiful from a mathematical perspective.
KK: Has the algorithm been improved? Has somebody found a better than cubic way to generate this thing?
JU: Don’t quote me on that, I do not know, but I think that no one has found a good way yet. And by good I mean good enough to make it algorithmically useful. For instance, if the amount of time it takes to find this thing is quadratic, or even maybe n to the 1.5 or something like that, that’s already too slow to help any algorithm that’s aiming for near-linear time. It’s a very interesting thing, and it’s something that really spoke to me, and I really just fell in love with it, and I think what I like about it most is that it is a very sort of applied area, it is applied mathematics, theoretical computer science type things, but it is very theoretical and very elegant. Though I am an applied mathematician, I do like very clean things. I do like very nice looking things. And perhaps I can be a bad applied mathematician because I don’t always care about applications. Which kind of makes you a bad applied mathematician, but in all my papers I’m not sure I’ve ever really, really cared about the applications, in the sense that if I see a very interesting problem that someone brings to me, and it happens to have applications, like some of the things I’ve gotten to do in machine learning, great, this is like the cherry on top, but that isn’t the motivating thing. If it’s an amazing application but some ugly, ugly thing, I’m not touching it.
EL: Well, before we actually started recording, we talked a little bit about how there are different flavors of applied math. There are ones that are more on the theoretical side, and probably people who do a lot of things with theoretical computer science would tend towards that more, and then there are the people who are actually looking at a biological system and solving differential equations or something like this, where they’re really getting their hands dirty. It sounds like you’re more interested in the theoretical side of applied math.
JU: Yeah.
KK: Applied math needs good theory, though.
JU: That’s just true.
KK: You’ve got to develop good theory so that you know your algorithms work, and you want them to be efficient and all that, but if you can’t prove that they actually work, then you’re a physicist.
JU: There’s nothing I hate more than heuristics. But heuristics do have a place in this world. They’re an important thing, but there’s nothing I dislike more in this world than doing things with heuristics without being able to give any guarantees.
EL: So where did you first encounter this theorem? Was it in the research you’ve been doing, the study you’ve been doing for your Ph.D.?
JU: Yes, I did encounter this, I think it was when I was preparing for my qualifying exams. I was reading a number of different things on so-called spectral graph theory, which is this whole field of, you have a graph and some sort of elliptic operator on it, and this paper obviously falls under this category. I saw a lecture on it, and I was just fascinated. You know it’s a very nice result when you hear about it and you’re almost in disbelief.
KK: Right.
JU: I heard about it and I thought I didn’t quite hear the formulation correctly, but in fact I did.
KK: And I seem to remember reading in Sports Illustrated — that’s an odd sentence to say — that you were working on some version of the traveling salesman problem.
JU: That is true. But I would say,
KK: That’s hard.
JU: Just because I’m working on the asymmetric traveling salesman problem does not mean you should be holding your breath for me to produce something on the traveling salesman problem. This is an interesting thing because I am getting my Ph.D., and you do want, you want to try to find a research project where yes, it’s tough and it’s challenging you, but at the end of your four or five years you have something to show for it.
KK: Right. Is this version of the problem NP-hard?
JU: Yes, it is. But this version, there isn’t any sort of inapproximability result as in some of the other versions of TSP. But my advisor Michel Goemans, who—for the record, I’m convinced I have the single best advisor in the world, like he is amazing, amazing. He has a strong background in combinatorial optimization, which is the idea that you have some set of discrete objects. You need to pick your best option when the number of choices you have is often not polynomial in the size of your input. But you need to pick the best option in some reasonable amount of time that perhaps is polynomial.
EL: Yeah, so are these results that will say something like, we know we can get within 3 percent of the optimal…
JU: Exactly. These sorts of things are called approximation algorithms. If it runs in polynomial time and you can guarantee it’s within, say, a constant factor of the optimal solution, then you have a constant approximation algorithm. We’ve been reading up on some of the more recent breakthroughs on ATSP. There was a breakthrough this August, when someone proved the first constant approximation algorithm for the asymmetric traveling salesman problem, and Michel Goemans, who is also the head of the math department at MIT, had the previous best paper on this. He had a log log approximation algorithm from maybe 2008 or 2009, but don’t quote me on this. Late 2000s, so this is something we’ve been reading about and thinking about.
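[Editor's note: in symbols (notation ours), an α-approximation algorithm for a minimization problem is a polynomial-time algorithm ALG with
\[
\mathrm{cost}\big(\mathrm{ALG}(I)\big) \;\le\; \alpha \cdot \mathrm{OPT}(I) \qquad \text{for every instance } I.
\]
A constant-factor approximation means α does not grow with the input size, whereas the earlier ATSP guarantees had α growing slowly with the number of cities.]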
EL: Trying to chip away a little bit at that.
JU: Exactly. It’s interesting because this constant approximation algorithm that came out, it used an approach that, I think Michel won’t mind me saying this, an approach that Michel didn’t think was the right way to go about it, and so it’s very interesting. There are different ways to construct an approximation algorithm. At its core, you have something you’re trying to solve, and this thing is hard, but now you have to ask yourself, what makes it hard? Then you need to sort of take one of the things that makes it hard and you need to loosen that. And his approach in his previous paper was quite different from their approach, so it’s interesting.
KK: So the other thing we like to do on this show is to ask our guest to pair their theorem with something. So what have you chosen to pair your theorem with?
JU: I still haven’t fully thought about this, but you’ve put me on the spot, and so I’m going to say this: I would pair this with, I think this is a thing, Miller 64. That’s a thing, right?
KK: This is a beer?
JU: Yeah, the beer.
KK: It’s a super low-calorie beer?
JU: It’s a beer, and they advertise it on TV.
KK: I see, it’s very sparse.
JU: People weightlifting, people running, and then drinking a 64-calorie beer. It’s the beer for athletes.
EL: Okay.
JU: I think it’s a very, very good beer because it at least claims to taste like a beer, be very much like a beer, and yet be very sparse.
EL: Okay, so it’s, yeah, I guess I don’t know a good name for this kind of graphs, but it’s this graph of beers.
JU: Yes, it’s like, these things are called spectral sparsifiers.
EL: Okay, it’s the spectral sparsifier of beers.
KK: That’s it.
EL: So they’ve used the “Champagne of beers” slogan before, but I really think they should switch to the “spectral sparsifier of beers.” That’s a free idea, by the way, Miller, you can just take that.
JU: Hold on.
KK: John’s all about the endorsements, right?
JU: Let’s not start giving things away for free now.
KK: John has representation.
EL: That’s true.
JU: We will give this to you guys, but you need to sponsor the podcast. This needs to be done.
EL: Okay. I’m sure if they try to expand their market share of mathematicians, this will be the first podcast they come to.
KK: That’s right. So hey, do you want to talk some smack? Were you actually the smartest athlete in the NFL?
JU: I am not the person to ask about that.
KK: I knew you would defer.
JU: Trust me, I’ve gone through many, many hours of media training. You need something a little more high-level to catch me than that.
KK: I’m sure. You know, I wasn’t really trying to catch you. You know, Aaron Rodgers looked good on Jeopardy. I don’t know if you saw him on Celebrity Jeopardy a couple years ago.
JU: No.
KK: He won his game. My mother—sorry—was a huge Packers fan. She grew up near Green Bay, and she loved Aaron Rodgers, and I think she recorded that episode of Jeopardy and watched it all the time.
JU: I was invited to go on Family Feud once, the celebrity Family Feud.
KK: Yeah?
JU: But I don’t know why, but I wasn’t really about that life. I wasn’t really into it.
KK: You didn’t want Steve Harvey making fun of you?
JU: Also, I’m not sure I’m great at guessing what people think.
EL: Yeah.
JU: That’s not one of my talents.
EL: Finger isn’t on the pulse of America?
JU: No, my finger is not on the pulse. What do people, what’s people’s favorite, I can’t even think of a question.
EL: Yeah.
KK: Well, John, this has been great. Thanks for joining us.
JU: Thanks for having me. I can say this with certainty, this is my second favorite podcast I have ever done.
KK: Okay. We’ll take that. We won’t even put you on the spot and ask you what the favorite was. We won’t even ask.
JU: When I started the sentence, know that I was going to say favorite, and then I remembered that one other. I’ve done many podcasts, and this is one of my favorites. It’s a fascinating idea, and I think my favorite thing about the podcast is that the audience is really the people I really like.
KK: Thanks, John.
EL: Thanks for being here.
[end stuff]
Evelyn Lamb: Hello and welcome to My Favorite Theorem. I’m your cohost Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City, Utah. And this is your other cohost.
Kevin Knudson: Hi, I’m Kevin Knudson, professor of mathematics at the University of Florida. I’m looking forward to this because of the time zone issue here. This is taking place on two different days.
EL: Yes, yes, we are delighted to be joined by Nalini Joshi, who is joining us from tomorrow in Australia, which we’re getting a kick out of because we’re easily amused.
KK: That’s right.
EL: Hi, Nalini. Would you like to tell us a little bit about yourself?
Nalini Joshi: Sure. My name is Nalini Joshi. I’m a professor of applied mathematics at the University of Sydney. What else can I say except I’m broadcasting from the future? I was born in Burma and I moved to Australia as a child with my parents when they emigrated to Australia, and most of my education has been in Australia except for going to the U.S. to do a Ph.D., which I did at Princeton.
EL: Okay, so you’ve spent some time in both hemispheres. I guess in multiple times in your life.
NJ: Yeah.
EL: So when I was a little kid I had this idea that the world could never end because, you know, in the U.S., there’s always someone who’s a full day ahead, so I know that Thursday would have to happen because if it was Wednesday where I was, someone was already living in Thursday, so the world could never end.
NJ: That’s such a deep insight. That’s wonderful.
KK: That’s pretty good.
EL: Well…
KK: I was watching football when I was a kid.
NJ: I used to hang out at the back of the school library reading through all the old Scientific American magazines. If only they had columns like yours, Evelyn. Fantastic. I really, really wanted to work out what was happening in the universe, and so I thought about time travel and space travel a lot as a teenager.
EL: Oh. So did you start your career wanting to maybe go more into physics, or did you always know you wanted to be a mathematician?
NJ: No, I really wanted to become an astrophysicist, because I thought that was the way, surely, to understand space travel. I wanted to be an astronaut, actually. I went to an all-girls school for the first half of my school years, and I still remember going to see the careers counselor and telling her I wanted to be an astronaut. She looked at me and she said, you have to be more realistic, dear. There was no way that somebody like me could ever aspire to it. And nowadays it’s normal almost. People from all different countries around the world become astronauts. But at the time I had to think about something else, and I thought, okay, I’m going to become a scientist, explore things through my own mind, and that was one way I could explore the universe. So I wanted to do physics when I came to university. I studied at the University of Sydney as an undergraduate. When I got to first-year physics, I realized my other big problem, which is that I have no physical intuition. So I thought, I really needed to understand things from a really explicit, literal, logical, analytical point of view, and that’s how I came to know I must be more of a mathematician.
EL: Okay.
KK: I have the same problem. I was always going to be a math major, but I thought I might pick up a second major in physics, and then I walked into this junior-level relativity class, and I just couldn’t do it. I couldn’t wrap my head around it at all. I dropped it and took logic instead. I was much happier.
NJ: Yeah. Oh good.
EL: So we invited you on to find out what your favorite theorem is.
NJ: Yes. Well that was a very difficult thing to do. It was like choosing my favorite child, which I would never do. But I finally decided I would choose Mittag-Leffler’s theorem because that was something that really I was blown away by when I started reading more about complex analysis as a student. I mean, we all learnt the basics of complex analysis, which is beautiful in itself. But then when you went a little bit further, so I started reading, for example, the book by Lars Ahlfors, which I still have, called Complex Analysis.
KK: Still in use.
EL: That’s a great one.
NJ: Which was first I think published in 1953. I had the 1979 version. I saw that there were so many powerful things you could do with complex analysis. And the Mittag-Leffler theorem was one of the first ones that gave me that perspective. The main thing I loved about it is that you were taking what was a local, small piece of information, around, for example, poles of a function. So we’re talking about meromorphic functions here, that’s the subject of the theorem.
EL: Can we maybe set the stage a little bit? So what is a meromorphic function?
NJ: A meromorphic function is a function that’s analytic except at isolated points, which are poles. The worst singularities it has are poles.
EL: So these are places where the function explodes, but otherwise it’s very smooth and friendly.
KK: And it explodes in a controlled way, it’s like 1/z^n for some finite n, kind of thing.
NJ: Exactly. Right. An integer, positive n. When I try to explain this kind of thing to people who are not mathematicians, I say it’s like walking around in a landscape with volcanoes. Well-timed, well-controlled, well-spaced volcanoes. You’re walking in the landscape of just the Earth, say, walking around these places. There are well-defined pathways for you to move along by analytic continuation. You know ahead of time how strong the volcano’s eruption is going to be. You can observe it from a little distance away if you like because there is no danger because you can skirt all of these volcanoes.
KK: That’s a really good metaphor. I’m going to start using that. I teach complex variables in the summer. I’m going to start using that. That’s good.
NJ: So a meromorphic function, as I say, it’s a function that gives you a pathway and the elevation, the smoothness of your path in this landscape. And its poles are where the volcanoes are.
EL: So Mittag-Leffler’s theorem, then, is about controlling exactly where those poles are?
NJ: Not quite. It’s the other way around. If you give me information about locations of poles and how strong they are, the most singular part of that pole, then I can reconstruct a function that has poles exactly at those points and with exactly those strengths. That’s what the theorem tells you. And what you need is just a sequence of points and that information about the strength of the poles, and you need potentially an infinite number of these poles. There’s one other condition, that the sequence of these poles has a limit at infinity.
KK: Okay, so they don’t cluster, in other words.
NJ: Exactly. They don’t coalesce anywhere. They don’t have a limit point in the finite plane. Their limit point is at infinity.
EL: But there could be an infinite number of these poles if they’re isolated, on integer lattice points in the complex plane or something like that.
NJ: Right, for example.
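[Editor's note: one standard way to write the theorem (notation ours): given points a_1, a_2, … with |a_k| → ∞ and a prescribed principal part p_k(z) = \sum_j c_{k,j}/(z - a_k)^j at each a_k, there is a function meromorphic on the whole plane whose poles are exactly the a_k, with exactly those principal parts. It can be built as
\[
f(z) \;=\; \sum_{k} \big( p_k(z) - q_k(z) \big) \;+\; g(z),
\]
where each q_k is a polynomial correction term (a partial sum of the Taylor series of p_k about 0) chosen so the series converges, and g is an arbitrary entire function.]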
KK: That’s pretty remarkable.
NJ: If you take your standard trigonometric functions, like the sine function or the cosine function, you know it has periodically spaced zeroes. You take the reciprocal of that function, then you’ve got periodically placed poles, and it’s a meromorphic function, and you can work out which trig function it is by knowing those poles. It’s powerful in the sense that you can reconstruct the function everywhere not just at the precise points which are poles. You can work out that function anywhere in between the poles by using this theorem.
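[Editor's note: the reciprocal-of-sine example written out, a classical Mittag-Leffler expansion:
\[
\frac{\pi}{\sin \pi z} \;=\; \frac{1}{z} \;+\; \sum_{n=1}^{\infty} (-1)^n \frac{2z}{z^2 - n^2},
\]
a sum over the poles at the integers that recovers the function everywhere else in the plane.]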
KK: That’s really remarkable. That’s the surprising part, right?
NJ: Exactly.
KK: If you knew you had a finite number of poles, you could sort of imagine that you could kind of locally construct the function and glue it together, that wouldn’t be a problem. But the fact that you can do this for infinitely many is really pretty remarkable.
NJ: Right. It’s like going from local information that you might have in one little patch of time or one little patch of space and working out what happens everywhere in the universe by knowing those little local patches. It’s the local to global information I find so intriguing, so powerful. And then it struck me that this information is given in the form of a sum of those singular parts. So the function is reconstructed as a series, as an infinite sum of the singular parts of the information you’re given around each pole. That’s a very simple way of defining the function, just taking the sum of all these singular things.
KK: Right.
EL: Yeah, I love complex analysis. It’s just full of all of these things where you can take such a small amount of local information and suddenly know what has to be happening everywhere. It’s so wonderful.
NJ: Right, right. Those two elements, the local to global and the fact that you have information coming from a discrete set of points to give you continuous smooth information everywhere in between, those two elements, I realized much later, feature in a lot of the research that I do. So I was already primed to look for that kind of information in my later work.
EL: Yeah, so I was going to ask, I was wondering how this came up for you, maybe not the Mittag-Leffler theorem specifically, but using complex analysis in your work as an applied mathematician.
NJ: Right. So what I do is build toolboxes of methods. So I’m an applied mathematician in the sense that I want to make usable tools. So I study asymptotics of functions, I study how you define functions globally, functions that turn out to be useful in various mathematical physics contexts. I’m more of a theoretical applied mathematician, if you like, or I often say to people I’m actually a mathematician without an adjective.
KK: Right. Yeah.
NJ: You know that there is kind of a hierarchy of numbers in the number system. We start with the counting numbers, and we can add and subtract them. Subtraction leads you to negative integers. Multiplication and division leads you to rational numbers, and then solving polynomial equations leads you to algebraic numbers. Each time you’re building a higher being of a type of number. Beyond all of those are numbers like π and e, which are transcendental numbers, in the sense that they can’t be constructed in terms of a finite number of operations from these earlier known operations and earlier known objects.
So alongside that hierarchy of numbers there’s a hierarchy, a very, very closely related hierarchy of functions. So integers correspond to polynomials. Square roots and so on correspond to algebraic functions. And then there are transcendental functions, the exponential being one of them, exponential of x. So a lot of the territory of transcendental functions is occupied by functions which are defined by differential equations.
I started off by studying differential equations and the corresponding functions that they define. So even when you’re looking at linear differential equations, you get very complicated transcendental functions, things like the exponential. So I study functions that are even more highly transcendental, in the sense that they solve nonlinear equations, and they are like π in the sense that these functions turn out to be universal models in many different contexts, particularly in random matrix theory, where you might be, for example, trying to work out the statistics of how fundamental particles interact when you fire them around the huge loop of the CERN collider. You do that by looking at distributions of entries in infinitely large matrices where the entries are random variables. Now, under certain symmetries, symmetry groups acting on the system, you might have particles whose properties require these random matrices to be orthogonal matrices, or Hermitian matrices, or some other kind of matrices. So when you study these ensembles of matrices with these symmetry properties and you study properties like what’s their largest eigenvalue, then you get a probability distribution function which happens to be, by some miracle, one of those functions I’ve studied. There’s kind of a miraculous bridge there, and nobody really knows why it happens. Then there’s another miraculous thing, which is that these models, using random matrices, happen to be valid not just for particle physics: if you’re studying bus arrival times in Cuernavaca, or aircraft boarding times, or patience sorting with a deck of cards, all kinds of things are universally described by these models and therefore these functions. So first of all, these functions have this property: they’re locally defined by initial value problems given for the differential equation.
KK: Right.
NJ: But then they have these amazing properties which allow them to be globally defined in the complex plane. So even though we didn’t have the technology to describe these functions explicitly, not like I could say, take 1 over the sine function, that gives you a meromorphic function, whose formulae I could write down, whose picture I could draw, these functions are so transcendental that you can’t do that very easily, but I study their global properties that make them more predictable wherever you go in the complex plane. So the Mittag-Leffler theorem sort of sets up the baseline. I could just write them as the sum of their poles. And that’s just so powerful to me. There are so many facets of this. I could go on and on. There is another direction I wanted to insert into our conversation, which is that the next natural level when you go beyond things like trigonometric functions and their reciprocals is to take functions that are doubly periodic, so trigonometric functions have one period. If you take double periodicity in the complex plane, then you get elliptic functions, right? So these also have sums of their poles as an expression for them. Now take any one of these functions. They turn out to be functions that parametrize very nice curves, cubic curves, for example, in two dimensions. And so the whole picture shifts from an analytic one to an algebraic geometric one. There are two sides to the same function. You have meromorphic functions on one side, and differential equations, and on the other side you have algebraic functions and curves, and algebraic properties and geometric properties of these curves, and they give you information about the functions on the other side of that perspective. So that’s what I’ve been doing for the last ten years or so, trying to understand the converse side so I can get more information about those functions.
EL: Yeah, so using the algebraic world,
NJ: Exactly, the algebro-geometric world. This was a huge challenge at the beginning, because as I said, I was educated as an applied mathematician, and that means primarily the analytic point of view. But to try and marry that to the algebraic point of view is something that turned out to be a hurdle at the beginning, but once you get past that, it’s so freeing and so beautiful and so strikingly informative that I’m now saying to people, all applied mathematicians should be learning algebraic geometry.
KK: And I would say the converse is true. I think the algebraic geometers should probably learn some applied math, right?
NJ: True, that too. There’s so many different perspectives here. It all started for me with the Mittag-Leffler theorem.
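[ed. note: NJ describes writing a meromorphic function as the sum of its poles. A classical example of such a Mittag-Leffler expansion is π·cot(πz) = 1/z + the sum over n ≥ 1 of 2z/(z² − n²). The Python sketch below is ours, not part of the conversation; the function names, truncation point, and test value are arbitrary choices, and the sketch just compares a truncated pole sum with the closed form.]

import cmath

def cot_pole_sum(z, terms=2000):
    # Truncated Mittag-Leffler (sum-over-poles) expansion of pi*cot(pi*z):
    # pi*cot(pi*z) = 1/z + sum over n >= 1 of 2z / (z^2 - n^2)
    total = 1 / z
    for n in range(1, terms + 1):
        total += 2 * z / (z * z - n * n)
    return total

def cot_closed_form(z):
    # The same function computed directly from cosine and sine.
    return cmath.pi * cmath.cos(cmath.pi * z) / cmath.sin(cmath.pi * z)

z = 0.3 + 0.2j  # any non-integer point in the complex plane
print(cot_pole_sum(z))     # approximately equal to...
print(cot_closed_form(z))  # ...the closed-form value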
EL: So something we like to do on this show is to ask our guest to pair their theorem with something: food, beverage, music, anything like that. So what have you chosen to pair your theorem with?
NJ: That was another difficult question, and I decided that I would concentrate on the discrete to continuous aspect of this, or volcanoes to landscapes if you like. As I said, I was born in Burma, and in Burma there are these amazing dishes called le thoke. I’ll send you a Wikipedia link so you can see the spelling and description. Not all of it is accurate, by the way, from what I remember, but anyway. Le thoke is a hand-mixed salad. “Le” is hand and “thoke” is mixture. In particular, the one that’s based on rice is one of my favorites. You take a series of different ingredients, so one is rice, another might be noodles, there have to be specific types, another is tamarind. Tamarind is a sour plant-based thing, which you make into a sauce. Another is fried onions, fried garlic. Then there’s roasted chickpea flour, or garbanzo flour.
KK: This sounds amazing.
NJ: Then another one is potatoes, boiled potatoes. Another one is coriander leaves. Each person might have their favorite suite of these many, many little dishes, which are all just independent ingredients. And you take each of them into a bigger bowl. You mix it with your hands. Add as much spice as you want: chili powder, salt, lemon juice. And what you’re doing is amalgamating and combining those discrete ingredients to create something that transcends the discrete. So you’re no longer tasting the distinct tamarind, or the distinct fried onion, or potatoes. You have something that’s a fusion, if you like, but the taste is totally different. You’ve created your meromorphic function, which is that taste in your mouth, by combining those discrete things, each of which you wouldn’t eat separately.
KK: Sure. It’s not fair. It’s almost dinner time here, and I’m hungry.
NJ: I’m sorry!
EL: Are there any Burmese restaurants in Gainesville?
NJ: I don’t know. I think there’s one in San Francisco.
EL: Yes! I actually was just at a Burmese restaurant in San Francisco last month. I had this tea leaf salad that sounds like this.
NJ: Yeah, that’s a variation. Pickled tea leaves as an ingredient.
EL: Yeah, it was great.
NJ: I was also thinking about music. So there are these compositions by Philip Glass and Steve Reich which are basically percussive, independent sounds. Then when they interweave into those patterns you create these harmonies and music that transcends each of those particular percussive instruments, the strikes on the marimba and the xylophones and so on.
EL: Like Six Marimbas by Steve Reich?
NJ: Yeah.
EL: Another of our guests, whose episode hasn’t aired yet, though it will have by the time our listeners are hearing this, also chose Steve Reich to pair with her theorem.
KK: That’s right.
EL: One of the most popular musicians among mathematicians pairing their theorems with music.
NJ: Somebody should write a book about this.
KK: I’m sure. So my son is a college student. He’s studying music composition. He’s a percussionist. I need to get on him about this Steve Reich business. He must know.
EL: Yeah, he’s got to.
KK: This has been great fun, Nalini. I learned a lot about not just math, but I really knew nothing about Burmese food.
NJ: Right. I recommend it highly.
KK: Next time I’m there.
NJ: You said something about mentioning books?
EL: Yeah, yeah, if you have a website or book or anything you’d like to mention on here.
NJ: This is my book. I think it would be a bit too far away from the topic of this conversation, but it has this idea of going from continuous to discrete.
EL: It’s called Discrete Systems and Integrability.
NJ: Yes.
EL: We’ll put a link to some information about that book, and we’ll also link to your website in the show notes so people can find you. You tweet some. I think we kind of met in the first place on Twitter.
NJ: That’s right, exactly.
EL: We’ll put a link to that as well so people can follow you there.
NJ: Excellent. Thank you so much.
EL: Thank you so much for being here. I hope Friday is great. You can give us a preview while we’re still here.
KK: We’ll find out tomorrow, I guess.
NJ: Thank you for inviting me, and I’m sorry about the long delay. It’s been a very intense few years for me.
EL: Understandable. Well, we’re glad you could fit it in. Have a good day.
NJ: Thank you. Bye.
[outro]
Evelyn Lamb: Hello and welcome to My Favorite Theorem. I’m Evelyn Lamb, one of your hosts. And this is your other host.
Kevin Knudson: Hi, I’m Kevin Knudson, professor of mathematics at the University of Florida. How are you doing, Evelyn?
EL: I’m good. I actually forgot to say what I do. In case anyone doesn’t know, I’m a freelance math and science writer, and I live in Salt Lake City, Utah, where it has been very cold recently, and I’m from Texas originally, so I am not okay with this.
KK: Everyone knows who you are, Evelyn. In fact, Princeton University Press just sent me a complimentary copy of The Best Writing on Mathematics 2017, and you’re in it, so congratulations, it’s really very cool.
[clapping]
EL: Well thanks. And that clapping you heard from the peanut gallery is our guest today, Jayadev Athreya. Do you want to tell us a little bit about yourself?
Jayadev Athreya: Yeah, so I’m based in Seattle, Washington, where it is, at least for the last 15 minutes it has not been raining. I’m an associate professor of mathematics at the University of Washington, and I’m the director of the Washington Experimental Mathematics Lab. My work is in geometry, dynamical systems, connections to number theory, and I’m passionate about getting as many people involved in mathematics as a creative enterprise as is possible.
KK: Very cool.
EL: And we actually met a while ago because my spouse also works in your field. I have the nice privilege of getting to know you and not having to learn too much about dynamical systems.
JA: Evelyn and I have actually known each other since, I think Evelyn was in grad school at Rice. I think we met at some conferences, and Evelyn’s partner and I have worked on several papers together, and I’ve been a guest in their wonderful home and eaten tons of great granola among other things. On one incredibly memorable occasion, a buttermilk pie, which I won’t forget for a long time.
KK: Nice. I’ve visited your department several times. I love Seattle. You have a great department there.
JA: It’s a wonderful group of people, and one of the great things about it is of course all departments recognize research, and many departments also recognize teaching, but this department has a great tradition of public engagement with people like Jim Morrow, who was part of the annual [ed. note: JA meant inaugural; see https://sites.google.com/site/awmmath/awm-fellows] class of AWM fellows and runs this REU and this amazing event called Math Day where he gets two thousand high school kids from the Seattle area on campus. It’s just a very cool thing for a research math department to seriously recognize and appreciate these efforts. I’m very lucky to be here.
KK: Also because I’m a topologist, I have to take a moment to give, well, I don’t know what the word is, but you guys lost a colleague recently.
JA: We did.
KK: Steve Mitchell. He was a great topologist, but even more, he was just a really great guy. Sort of unfailingly kind and always really friendly and helpful to me when I was just starting out in the game. My condolences to you and your colleagues because Steve really was great, and he’s going to be missed.
JA: Thank you, Kevin. There was a really moving memorial service for Steve. For any of the readers who are interested in learning more about Steve, for the last few years of his life he wrote a really wonderful blog reflecting on mathematics and life and how the two go together, and I really recommend it. It’s very thoughtful. It’s very funny, even as he was facing a series of challenges, and I think it really reflects Steve really well.
KK: His biography that he wrote was really interesting too.
JA: Amazing. He came from a background that was very different from that of a lot of mathematicians.
EL: I’ll have to check it out.
KK: Enough of that. Let’s talk about theorems.
EL: Would you like to share your favorite theorem?
JA: Sure. So now that I’m in the northwest, and in fact I’m even wearing a flannel shirt today, I’m going to state the theorem from the perspective of a lumberjack.
EL: Okay.
JA: So when trees are planted by a paper company, they’re planted in a fairly regular grid. So imagine you have the plane, two number lines meeting at a 90 degree angle, and you have a grid, and you plant a tree at each grid point. So from a mathematician’s perspective, we’re just talking about the integer lattice, points with integer coordinates. So let’s say where I’m standing there’s a center point where maybe there’s no tree, and we call that the origin. That’s maybe the only place where we don’t plant a tree. And I stand there and I look out. Now there are a lot of trees around me. Let’s say I look around, and I can see maybe distance R in any direction, and I say, hm, I wonder how many trees there are? And of course you can do kind of a rough estimate.
Now I’m going to switch analogies and I’ll be working in flooring. I’m going to be tiling a floor. So if you think about the space between the trees as a tile and say that has area 1, you look out a distance R and say, well, the area of the region that you can see is about πR², it’s the area of the circle, and each of these tiles has size 1, so maybe you might guess that there are roughly πR² trees. That’s what’s called the Gauss circle problem or the lattice point counting problem. And the fact that that is actually increasingly accurate as your range of vision gets bigger and bigger, as R gets bigger and bigger, is a beautiful theorem with an elementary proof, which we could talk about later, but what I want to talk about is when you’re looking out, turning around in this spot, you can’t see every tree.
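[ed. note: a minimal Python sketch, not part of the conversation, of the count JA describes: it counts the lattice points (the trees) within distance R of the origin and compares the count with πR². The function name and the choices of R are ours.]

import math

def lattice_points_in_ball(R):
    # Count integer points (m, k), other than the origin, with m^2 + k^2 <= R^2.
    count = 0
    for m in range(-R, R + 1):
        for k in range(-R, R + 1):
            if (m, k) != (0, 0) and m * m + k * k <= R * R:
                count += 1
    return count

for R in (10, 100, 1000):
    print(R, lattice_points_in_ball(R), math.pi * R ** 2)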
EL: Right.
JA: For instance, there’s a tree just to the right of you. You can see that tree, but there’s a tree just to the right of that tree that you can’t see, because it’s blocked by the first tree that you see. There’s a tree at 45 degrees that would have the coordinate (1,1), and that blocks all the other trees with coordinates (2,2) or (3,3). It blocks all the other trees in that line. We call the trees that we can see, the visible trees, we call those primitive lattice points. It’s a really nice exercise to see that if you label it by how many steps to the right and how many steps forward it is, call that the integer coordinates (m,n), or maybe since we’re on the radio and can’t write, we’ll call it (m,k), so the sounds don’t get too confusing.
EL: Okay.
JA: A point (m,k) is visible if the greatest common divisor of the numbers m and k is 1. That’s an elementary exercise because, well maybe we’ll just talk a little bit about it, if you had m and k and they didn’t have greatest common divisor 1, you could divide them by their greatest common divisor and you’d get a tree that blocks (m,k) from where you’re sitting.
EL: Right.
JA: We call these lattice points, they’re called visible points, or sometimes they’re called primitive points, and a much trickier question is how many primitive points are there in the ball of radius R, or in any kind of increasingly large sequence of sets. And this was actually computed, I believe for the first time, by Euler
KK: Probably. Sure, why not?
JA: Yeah, Euler, I think Cauchy also noticed this. These are names, anything you get at the beginning of analysis or number theory, these names are going to show up.
KK: Right.
JA: And miraculously enough, we agreed that in the ball of radius R, the total number of trees was roughly the area of the ball, πR². Now if you look at the proportion of these that are primitive, it’s actually 6/π².
KK: Oh.
JA: So the total number of primitive lattice points is actually 6/π² times πR². And now, listeners of this podcast might remember some of their sequences and series from calc 1, or 2, or 3, and you might remember seeing, probably not proving, but seeing, that if you add up the following series: 1 plus 1/4 plus 1/9 plus 1/16 plus 1/25, and so on, and you can actually do this, you can write a little Python script to do this. You’ll get closer and closer to π²/6. Now it’s amazing. There is of course this principle that there aren’t enough small numbers in mathematics, which is why you have all these coincidences, but this isn’t a coincidence. That π²/6 and our 6/π² are in a very real mathematical sense the same object. So that’s my favorite mathematical theorem. So when you count all lattice points, you get π showing up in the numerator. When you count primitive ones, you get π showing up in the denominator.
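[ed. note: the “little Python script” JA mentions might look something like the following; the cutoff of 100,000 terms is an arbitrary choice of ours.]

import math

partial_sum = sum(1 / n ** 2 for n in range(1, 100001))
print(partial_sum)       # approximately 1.6449...
print(math.pi ** 2 / 6)  # approximately 1.6449...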
KK: So the primitive ones, that must be related to the fact that if you pick two random integers, the probability that they’re relatively prime is this number, 6/π².
JA: These are essentially equivalent statements, exactly. What we’re saying is, look in the ball of radius R. Take two integers sort of randomly, so that m² + n² is less than R², and the proportion of primitive ones is exactly the probability that they’re relatively prime. That’s a beautiful reformulation of this theorem.
KK: Exactly. And asymptotically, as you go off to infinity, that’s 6/π².
JA: Yeah, and what’s fun is, if a listener does like to do a little Python programming, in this case, infinity doesn’t even have to be so big. You can see 6/π² happening relatively quickly. Even at R=100, you’re not far off.
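[ed. note: in the same hypothetical Python spirit, here is one way to check JA’s claim that the proportion of primitive (visible) points is already close to 6/π² at R = 100; the function name is ours.]

import math

def primitive_proportion(R):
    # Among nonzero integer points (m, k) with m^2 + k^2 <= R^2,
    # return the fraction that are primitive, i.e. with gcd(m, k) = 1.
    total = 0
    primitive = 0
    for m in range(-R, R + 1):
        for k in range(-R, R + 1):
            if (m, k) == (0, 0) or m * m + k * k > R * R:
                continue
            total += 1
            if math.gcd(m, k) == 1:
                primitive += 1
    return primitive / total

print(primitive_proportion(100))  # already close to...
print(6 / math.pi ** 2)           # ...0.6079...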
EL: Well the squares get smaller so fast. You’re just adding up something quite small in not too long.
JA: That’s right. That’s my favorite mathematical theorem for many reasons. For one, this number, 6/π², it shows up in so many places. What I do is at the intersection of many fields of mathematics. I’m interested in how objects change. I’m interested in counting things, and I’m interested in the geometry of things. And all of these things come into play when you’re thinking about this theorem and thinking about various incarnations of this theorem.
EL: Yeah, I was a little surprised when you told us this was going to be your theorem because I was thinking it was going to be some kind of ergodic theorem for flows or something because the stuff I know about your field is more what my spouse does, which is more related to dynamical systems. I actually think of myself as a dynamicist-in-law.
JA: That’s right. The family of dynamicists actually views you as a favorite in-law, Evelyn. You publicize us very nicely. You write about things like billiards with a slit, which is something that we’ve been trying to tell the world about, but it didn’t really get through until you wrote about it.
EL: And that was a birthday gift for my spouse. He had been wanting me to write about that, and I just thought it was so technical, I don’t feel like it. Finally, it’s a really cool space, but it’s just a lot to actually go in and write about that. But yeah, I was surprised to see something I think of as more number theory related show up here. That number 6/π², or π²/6, whichever way you see it, it’s one of those things where the first time you see it, you wonder why you would ever square π. π usually comes up as an area thing, so something else is usually being squared when you see it. Strange thing.
JA: So now what I’m going to say is maybe a little bit more about why I picked it. For me, that number π²/6 is actually the volume of a moduli space of abelian differentials.
KK: Ah!
EL: Of course!
JA: Of course it is. It’s what’s called a Siegel-Veech constant, or a Siegel constant. Can I say just a couple words about why I love π²/6 so much?
EL: Of course.
JA: Let’s say that instead of planting your trees in a square grid, you have a timber company where they wanted to shoot an ad where they shot over the forest and they wanted it to look cool, and instead of doing a square grid, they decided to do a grid with parallelograms. Still the trees are planted in a regular grid, but now you have a parallelogram. So in mathematical terms, instead of taking the lattice generated by (1,0) and (0,1), you just take two vectors in the plane. As long as they’re linearly independent, you can generate a lattice. You can still talk about primitive vectors, which are the ones you can see from (0,0). There are some that are going to be blocked and some that aren’t going to be blocked. In fact, it’s a nice formulation. If you think of your vectors as (a,c) and (b,d), then what you’re essentially doing is taking the matrix (ab,cd)[ed. note: this is a square array of numbers where the numbers a and b are in the top row and c and d are in the bottom row] and applying it to the integer grid. You’re transforming your squares into parallelograms.
KK: Right.
JA: And a vector in your new lattice is primitive if it’s the image of a primitive vector from the integer lattice.
EL: Yeah, so there’s this linear relationship. You can easily take what you know about the regular integer lattice and send it over to whatever cool commercial tree lattice you have.
JA: That’s right. Whatever parallelogram tiling of the plane you want. What’s interesting is even with this change, the proportion of primitive guys is still 6/π². The limiting proportion. That’s maybe not so surprising given what I just said. But here’s something that is a little bit more surprising. Since we care about proportions of primitive guys, we really don’t care if we were to inflate our parallelograms or deflate them. If they were area 17 or area 1, this proportion wouldn’t change. So let’s just look at area 1 guys, just to nail one class down. This is the notion of an equivalence class essentially. You can look at all possible area 1 lattices. This is something mathematicians love to do. You have an object, and you realize that it comes as part of a family of objects. So we started with this square grid. We realized it sits inside this family of parallelogram grids. And then we want to package all of these grids into its own object. And this procedure is usually called building a moduli space, or sometimes a parameter space of objects. Here the moduli space is really simple. You just have your matrices, and if you want it to be area 1, the determinant of the matrix has to be 1. In mathematical terms, this is called SL(2,R), the special linear group with real coefficients. There’s a joke somewhere that Serge Lang was dedicating a book to his friend R, and so he inscribed it “SL2R,” but that’s a truly terrible joke that I’m sorry, you should definitely delete from your podcast.
KK: No, that’s staying in.
JA: Great.
EL: You’re on the record with this.
JA: Great.
That’s sort of all possible deformations, but then you realize that if you hit the integer lattice with integer matrices, you just get it back. Basically you can think of the space of all lattices as 2-by-2 matrices with real entries and determinant 1, up to 2-by-2 matrices with integer entries. What this allows you to do is give a notion of a random lattice. There’s a probability measure you can put on this space that tells you what it means to choose one of these lattices at random. Basically what this means is you pick your first vector at random, and then you pick your second vector at random as uniformly as possible from the ones that make determinant 1 with it. That’s actually accurate. That’s actually a technically accurate statement.
Now what that means is you can talk about the average behavior of a lattice. You can say, look, I have all of these lattices, I can average. And now what’s amazing is you can fix your R. R could be 1. R could be 100. R could be a million. And now you can look at the number of primitive points divided by the number of total points in the lattice. You average that, or let me put it a slightly different way: you average the number of primitive points and divide by the average number of total points.
KK: Okay.
JA: That’s 6/π².
EL: So is that…
JA: That’s not an asymptotic. That’s, if you average, if you integrate over the space of lattices, you integrate and you look at the number of primitive points, you divide by the average number of total points, it’s 6/π². That’s no matter the shape of the region you’re looking in. It doesn’t have to be a ball, it can be anything. That’s an honest-to-God, dead-on statement that’s not asymptotic.
EL: So is that basically saying that the integer lattice behaves like the average lattice?
JA: It’s saying at the very large scale, every lattice behaves like the average lattice. Basically there’s this function on the space of lattices that’s becoming closer and closer to constant. If you take the sequence of functions which is the proportion of primitive vectors, that’s becoming closer and closer to constant. At each scale when you average it, it averages out nicely. There might be some fluctuations at any given scale, and what it’s saying is if you look at larger and larger scales, these fluctuations are getting smaller and smaller. In fact, you can kind of make this precise: in probability terms, what we’ve been talking about is basically computing a mean or an expectation. You can try and compute a variance of the number of primitive points in a ball. And that’s actually something my student Sam Fairchild and I are working on right now. There are methods that people have thought about; in fact, a mathematician named Rogers wrote, in the 1950s, about 15 different papers called Mean Values on the Space of Lattices, all of which contain a phenomenal number of really interesting ideas. But he got the dimension 2 case slightly wrong. We’re in the process of fixing that right now and understanding how to compute the variance. It turns out that what we do goes back to work of Wolfgang Schmidt, and we’re kind of assembling that in a little bit more modern language and pushing it a little further.
I do want to mention one more name, which is, I mentioned it very briefly already. I said this is what is called a Siegel-Veech constant. Siegel was the one who computed many of these averages. He was a German mathematician who was famous for his work on a field called the geometry of numbers. It’s about the geometry of grids. Inspired by Siegel, a mathematician named William Veech, who was one of Evelyn’s teachers at Rice, started to think about how to generalize this problem to what are called higher-genus surfaces, how to average certain things over slightly more complicated spaces of geometric objects. I particularly wanted to mention Bill Veech because he passed away somewhat unexpectedly.
EL: A year ago or so?
JA: Yeah, a little bit less than a year ago. He was somebody who was a big inspiration to a lot of people in this field, who really had just an enormous number of brilliant ideas, and I still think we’re still kind of exploring those ideas.
EL: Yeah, and a very humble person too, at least in the interactions I had with him, and very approachable considering what enormous work he did.
JA: That’s right. He was deeply modest and an incredibly approachable person. I remember the first time I went to Rice. I was a graduate student, and he had read things I had written. This was huge deal for me, to know that, I didn’t think anybody was reading things I’d written. And not to make this, I guess we started off with remembering Steve, and we’re remembering Bill.
There’s one more person who I think is very important to remember in this context, somebody who took Siegel’s ideas about averaging things over spaces and really pushed them to an extent that’s just incredible, and the number 6/π² shows up in the introduction to one of the papers that came out of her thesis. This was Maryam Mirzakhani, whom we also lost at a very, very young age. She was a person who, like Veech, made incredibly deep contributions that I think we’re going to continue to mine for ideas, who is going to continue having a really incredible legacy, and who was also very encouraging to colleagues, contemporaries, and young people. If you’re interested in 6/π² and how it connects to not just lattices in the plane but other surfaces, her thesis resulted in three papers, one in Inventiones, one in the Annals, and one in the Journal of the American Math Society, which might be the three top journals in the field.
EL: Right.
JA: For the record, for instance, I think of myself as a pretty good research mathematician, and over 12 years I have a total of zero papers in any of those three journals.
KK: Right there with you.
JA: In the introduction to this paper, she studies simple closed curves on the punctured torus, which are very closely linked to integer lattice points. She shows how 6/π² also shows up as what’s called a Weil-Petersson volume, or rather π²/6 shows up as the Weil-Petersson volume of the moduli space. Again, a way of packaging lots of spaces together.
EL: We’ll link to that, I’m sure we can find links for that for the show notes so people can read a little more about that if they want.
JA: Yeah. I think even there are very nice survey papers that have come out recently that describe some of the links there. These are sort of the big things I wanted to hit on with this theorem. What I love about it is it’s a thread that shows up in number theory, as you pointed out. It’s a thread that shows up in geometry. It’s a thread that shows up in dynamical systems. You can use dynamics to actually do this counting problem.
EL: Okay.
JA: Yeah, so there’s a way of doing dynamics on this object where we package everything together to get the 6/π². It’s not the most efficient, not the most direct proof, but it’s a proof that generalizes in really interesting ways. For me, a theorem in mathematics is really beautiful if you can see it from many different perspectives, and this one to me starts so many stories. It starts a story where if you think of a lattice, you can think about going to higher-dimensional lattices. Or you can think of it as a surface, where you take the parallelogram or the square and glue opposite sides and get a torus, or you can start doing more holes, that’s higher genus. It’s rare that all of these different generalizations yield really fruitful and beautiful mathematics, but in this case I think it does.
KK: So hey, another part of this podcast is that we ask our guest to pair their theorem with something. So what have you chosen to pair your theorem with?
JA: So there’s a grape called, I’m just going to look it up so I make sure I get everything right about it. It’s called primitivo. So it’s an Italian grape. It’s closely related to zinfandel, which I kind of like also because I want primitive, and of course I want the integers in there, so I’ve got a Z. Primitivos are also an excellent value wine, so that makes me very happy. It’s an Italian wine. Both primitivo and zinfandel are apparently descended from a Croatian grape, and so what I like about it is it’s something connected, it connects in a lot of different ways to a lot of different things. Now I don’t know how trustworthy this site is, it’s a site called winegeeks.com. Apparently primitivo can trace its ancestry back to the ancient Phoenicians in the province of Apulia, the heel of Italy’s boot. I’m a big fan of the Phoenicians because they were these cosmopolitan seafarers who founded one of my favorite cities in the world, Marseille. Actually Marseille might be the first place I learned about this theorem, so there you go.
EL: Another connection.
JA: Yeah. And it’s apparently the wine that was served at the last supper.
KK: Okay.
EL: I’m sure that’s very reliable.
JA: I’m sure.
EL: Good information about vintages of those.
JA: I would pair it with a primitivo wine because of the connections, these visible points are also called primitive points by mathematicians, so therefore I’m going to pair it with a primitivo wine. Another possible option, if you can’t get your hands on that, is to pair it with a spontaneously fermented, or primitive beer.
EL: Oh yeah.
JA: I’m a big fan of spontaneously fermented beers. I like lambics, I like other things.
EL: Two choices. If you’re more of a wine person or more of a beer person, you’ve got your pairing picked out. I’m glad you’re so considerate to make sure we’ve got options there.
JA: Or I might drink too much, that’s the other possibility.
KK: No, not possible.
EL: Well it’s 9:30 where you are, so I’m hoping you’re not about to go out and have one of these to start your day. Maybe at the end of the day.
JA: I think I’ll go with my usual cappuccino to start my day.
KK: Well this has been great fun. I learned a lot today.
EL: Yeah. Thanks for being on. You had mentioned that you wanted to make sure our listeners know about the website for the Washington math lab, which is where you do some outreach and some student training.
JA: That’s right. The website is wxml.math.washington.edu. It’s the Washington Experimental Math Lab. WXML is also a Christian radio station in Ohio. We are not affiliated with the Christian radio station in Ohio. If anybody listens to that, please don’t sue us. So as I said at the top of the podcast, we’re very interested in trying to create as large as possible a community of people who are creating their own mathematics. To that end, we have student research projects where undergraduate students work together with faculty and graduate students in collaborative teams to do exploratory and experimental mathematics. Teams have done projects ranging from creating sounds associated to number theory sequences, to updating and maintaining OEIS and Wikipedia pages about mathematical concepts, to research modeling stock prices and rare events in protein folding; right now one of my teams is working on counting pairs and triples and quadruples of primitive integer vectors and trying to understand how those behave. So that’s one side of it. The other side is we do a lot of, like Evelyn said, public engagement. We run teachers’ circles for middle schools and elementary schools throughout the Seattle area and the northwest, and we do a lot of fabrication of 3D-printed teaching tools. Right now I’m teaching calculus 3, so we’re printing Riemann sums, 3D Riemann sums, as we do integration in two variables. The reason I’m spending so much time plugging this is that if you’re at a university and this sounds intriguing to you, we have a lab starter kit on our webpage which gives you information on how you might want to start a lab. All labs look different, but at this point, we just had our Geometry Labs United conference this summer, there are labs at Maryland, at the University of Illinois Urbana-Champaign, at the University of Illinois at Chicago, at George Mason University, at the University of Texas Rio Grande Valley, and at Kansas State. There’s one starting at Oklahoma State and one at the University of Kentucky. So the lab movement is on the march, and if you’re interested in joining that, please go to our website, check out our lab starter kit, and please feel free to contact us about good ways to get started on this track.
EL: All right. Thanks for being on the show.
JA: Thanks so much for the opportunity. I really appreciate it, and I’m a big fan of the podcast. I loved the episode with Eriko Hironaka. I thought that was just amazing.
KK: Thanks. We liked that one too.
JA: Take care, guys.
EL: Bye.
[outro]
Evelyn Lamb: Welcome to My Favorite Theorem. I'm your host Evelyn Lamb, a freelance math and science writer in Salt Lake City, Utah, and this is your cohost.
Kevin Knudson: I'm Kevin Knudson, professor of mathematics at the University of Florida. How are you doing, Evelyn?
EL: I am still on an eclipse high. On Monday, a friend and I got up, well got up in time to get going by 5 in the morning, to get up to Idaho and got to experience a total eclipse, which really lived up to the hype.
KK: You got totality?
EL: Yes, we got in the band of totality for a little over two minutes.
KK: We had 90 percent totality. It was still pretty impressive. Our astronomy department here set up their telescopes. We have a great astronomy department here. They had the filters on. There were probably 500 kids in line to see the eclipse. It was really pretty spectacular.
EL: It was pretty cool. I'm already making plans to go visit my parents on April 8, 2024 because they're in Dallas, which is in the path for that one.
KK: Very nice.
EL: So I've been trying to get some work done this week, but then I just keep going and looking at my friends' pictures of the eclipse, and NASA's pictures and everything. I'm sure I will get over that at some point.
KK: It was the first day of classes here for the eclipse. It was a bit disruptive, but in a good way.
EL: My spouse also had his first day of class, so he couldn't come with us.
KK: Too bad.
EL: But anyway, we are not here to talk about my feels about the eclipse. We are here to welcome Federico Ardila to the podcast. So Federico, would you like to say a bit about yourself?
Federico Ardila: Yeah, first of all, thanks so much for having me. As Evelyn just said, my name is Federico Ardila. I never quite know how to introduce myself. I'm a mathematician, I'm a DJ, I'm an immigrant from Colombia to the US, and I guess most relevant to the podcast, I'm a math professor at San Francisco State University. I also have an adjunct position in Colombia at the Universidad de los Andes. I'm also spending the semester at MSRI [Mathematical Sciences Research Institute] in Berkeley as a research professor, so that's what I'm up to these days.
KK: I love MSRI. I love it over there. I spent a semester there, and every day at teatime, you walk into the lounge and get the full panoramic view of the bay. You can watch the fog roll in through the gate. It's really spectacular.
FA: Yeah, you know, one tricky thing is you kind of want to stay for the sunset because it's so beautiful, but then you end up staying really late at work because of it. It's a balance, I guess.
KK: So, the point of this thing is that someone has a favorite theorem, so I actually don't know what your favorite theorem is, so I'm going to be surprised. What's your favorite theorem, Federico?
FA: Yeah, so first of all I apologize for not following your directions, but it was deliberate. You both asked me to tell you my favorite theorem ahead of time, but I'm not very good at following directions. But I also thought that since I want to talk about something that I think not a lot of people think about, maybe I shouldn't give you a heads-up so we can talk about it, and you can interrupt me with any questions that you have.
EL: Get our real-time reactions here.
FA: Exactly. The other thing is that instead of talking about a favorite theorem, I want to talk about a favorite object. There's a theorem related to it, but more than the theorem, what I really like is the object.
EL: Okay.
FA: I want to talk a little about matroid theory. How much do you two think about matroids?
KK: I don't think about them much.
EL: Not at all.
KK: I used to know what a matroid is, so remind us.
FA: Excellent. Yeah, so matroid theory was basically an abstraction of the notion of independence. So something that was developed by Hassler Whitney, George Birkhoff, and Saunders MacLane in the '30s. Back then, you could write a thesis in graph theory at Harvard. This was part of Hassler Whitney's Ph.D. thesis where he was trying to solve the four-color theorem, which basically says that if you want to color the countries in a map, and you only have four colors, you will always be able to do that in such a way that no two neighboring countries are going to have the same color. So this was one of the big open problems at the time. At the time they were trying to figure out a more mathematical grounding or structure that they could put on graphs, and so out of that the theory of matroids was born. This was in a paper of Whitney in 1935, and he had the realization that the properties that graphs have with regard to how graphs cycle around, what the cycles are, what the spanning trees are, and so on, are exactly the same properties that vectors have. So there was a very strong link between graph theory and linear algebra, and he basically tried to pursue an axiomatization of the key combinatorial essence of independence.
EL: Okay, and so by independence, is that like we would think of linear independence in a matrix? Matroid and matrix are kind of suggestively similarly named. So is that the right thing we should be thinking about for independence?
FA: Exactly, so you might think that you have a finite set of vectors in a vector space, and now you want to figure out the linear dependencies between them. And actually that information is what's called the matroid. Basically you're saying these two vectors are aligned, or these three vectors lie on the same plane. So that information is called the matroid, and Whitney basically laid out some axioms for the kind of combinatorial properties that linear independence has, and what he realized is that these are exactly the same axioms that graphs have when you think about independence. Now you need a new notion of independence. In a graph you're going to say you have a dependency in edges whenever they form a cycle. So somehow it is redundant to be able to walk from point A to point B in two different ways, so whenever there is that redundancy, we call it dependency in a graph.
Basically Whitney realized that these were the same kind of properties, and he defined a matroid to be an abstract mathematical object that was supposed to capture that notion of independence.
EL: Okay. So this is very new to me, so I'm just kind of doing free association here. So I'm familiar with the adjacency matrix of a graph. Does this contain information about the matroid, or is this a little side path that is not really the same thing?
FA: This is a really good point. To every graph you can associate an adjacency matrix [ed. note: the matrix FA describes here, with one column per edge, is usually called the signed incidence matrix]. Basically what you do is if you have an edge from vertex i to vertex j in the graph, in the matrix you put a column that has a bunch of 0's with a 1 in position i and a -1 in position j. You might think of this as the vector ei-ej where the e's are the standard basis in your vector space. And you're absolutely right, Evelyn, that when you look at the combinatorial dependencies in the graph, in terms of graph dependence, they're exactly the linear dependencies in that set of vectors, so in that sense, that set of vectors perfectly models the graph as far as matroid theory is concerned.
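[ed. note: a small Python sketch, not from the conversation, of the correspondence FA describes; the function name is ours. For a triangle graph, the three edge vectors of the form ei-ej, oriented around the cycle, sum to zero, and that linear dependency is exactly the graph cycle, so the vector picture and the graph picture give the same matroid.]

def edge_vector(i, j, num_vertices=3):
    # The column for the edge from vertex i to vertex j:
    # a 1 in position i, a -1 in position j, zeroes elsewhere.
    v = [0] * num_vertices
    v[i] = 1
    v[j] = -1
    return v

# The three edges of a triangle on vertices 0, 1, 2, oriented around the cycle.
e01 = edge_vector(0, 1)
e12 = edge_vector(1, 2)
e20 = edge_vector(2, 0)

# The cycle 0 -> 1 -> 2 -> 0 is a linear dependency: the three vectors sum to zero.
print([a + b + c for a, b, c in zip(e01, e12, e20)])  # prints [0, 0, 0]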
EL: Okay.
FA: So, yeah, that's a really good comparison. One reason that I love matroids is that it turns out that they actually apply in a lot of other different settings. There are many different notions of independence in mathematics, and it was realized over the years that they also satisfy these properties. Another notion of independence that you might be familiar with is the notion of algebraic independence. You learn this in a course in field extensions, and you learn about extension degrees and transcendence bases and things like this. That's the notion of algebraic independence, and it turns out that that notion of independence also satisfies these axioms that Whitney laid out, and so they also form a matroid. So whenever you have a field extension, you also have a matroid.
KK: So what's the data you present? Say X is a matroid. If you're trying to write this down, what gets handed to you?
FA: That's another really good question, and I think it's a bit of a frustrating question because it depends on who you ask. The reason for this is that so many people encounter matroids in their everyday objects that they think of them in very different ways. Some people, if they hand you a matroid, they're going to give you a bunch of sets. Maybe this is the most common things. If you give me a list of vectors, then I could give you the linearly independent sets out of these sets of vectors. That would be a list, say 1 and 2 are independent, 1 and 4 are independent, 1, 6, and 7 are dependent, and so on. That would be a set system. If you asked somebody else, then they might think of that as a simplicial complex, and they might hand you a simplicial complex and say that's a matroid. One thing that Birkhoff realized, and this was very fashionable in the '30s at Harvard, is to think about lattices in the sense of posets. If you had Birkhoff, he would actually hand you a lattice and say that's a matroid. I think this is something that's a bit frustrating for people that are trying to learn matroids. I think there are at least 10 different definitions of what a matroid is, and they're all equivalent to each other. Actually Rota made up the name cryptomorphism. You have the same theory, and you have two different axiom systems for the same theory, and you need to prove they're equivalent. This is something that when I first learned about matroids, I hated it. I found it really frustrating. But I think as you work in this topic, you realize that it's very useful to have the insight someone in linear algebra would have, the insight somebody in graph theory would have, the insight that somebody in algebraic geometry would have. And so to do that, you end up kind of going back and forth between these different ways of presenting a matroid.
EL: Like the clothing that the matroid is wearing at the time. Which outfit do you prefer?
FA: Absolutely.
KK: Being a good algebraic topologist, I want to say that this sort of reminds me of category theory. Can you describe these things as a functor from something to something else? It sort of sounds like you've got these sort of structures that are preserved, they're all the same, or they're cryptomorphic, right? So there must be something, you've got a category of something and another different category, and the matroid is sort of this functor that shows a realization between them, or am I just making stuff up?
FA: I should admit that I'm not a topologist, so I don't think a lot about categories, but I definitely do agree that over the last few years, one program has been to set down stronger algebraic foundations, and there's definitely a program of categorizing matroids. I'm not sure what you're saying is exactly correct.
KK: I'm sure it isn't.
FA: But that kind of philosophy is at play here.
KK: So you mentioned that there was a theorem lurking behind your love of matroids.
FA: So let me first mention one quick application, and then I'll tell you what the object is that I really like.
There's another application of this to matching problems. One example that I think academic mathematicians are very familiar with is the problem of matching job candidates and positions. It's a very difficult problem. Here you have a notion of dependencies; for example, if the same person is offered two different jobs, they can only take one of those jobs, so in that sense, those two jobs kind of depend on each other. It turns out that this setting also provides a matroid. One reason that that is important is it's a much more applied situation because, you know, there are many situations in real life where you really need to do matchings, and you need to do it quickly and inexpensively and so on. Now when the combinatorial optimization community got hold of these ideas and wanted to find a cheap matching quickly, one thing that people do in optimization a lot is, if you want to optimize something, you make a polytope out of it. And so this is the object that I really like and want to tell you about. This is called the matroid polytope.
EL: Okay.
FA: Out of all these twelve different sets of clothing that matroids like to wear, my favorite outfit is the matroid polytope. Maybe I'll tell you first in the abstract why I like this so much.
EL: First, can we say exactly what a polytope is? So, are we thinking a collection of vertices, edges, faces, and higher-dimensional things because this polytope might live in a high-dimensional space? Is that what we mean?
FA: Exactly. If your polytope is in two dimensions, it's a polygon. If it's in three dimensions, it's the usual solids that we're used to, like cubes, pyramids, and prisms, and they should have flat sides, so they should have vertices, edges, and faces like you said. And then the polytope is just the higher-dimensional generalization of that. This is something that in combinatorial optimization is very natural. They really need these higher-dimensional polytopes because if you have to match ten different jobs, you have ten different axes you have to consider, so you get a polytope in ten dimensions.
KK: Sort of the simultaneous, feasible regions for multiple linear inequalities, right?
FA: Exactly. But yeah, I think Edmonds was the first person who said, okay, I want to study matroids. I'm going to make a polytope out of them. Then one thing that they realized is there is a notion in algorithms of greedy algorithms: a greedy algorithm is when you're trying to accomplish a task quickly, and at each point in time you just do the thing that seems best at the time. If we go back to the situation of matching jobs, then the first thing you might do is ask one school, okay, what do you want? And they would choose a person, and then you'd ask the next school, what do you want, and they would choose the next best person, and so on. We know that this strategy doesn't usually work. This is the no-long-term-planning solution. You just do whatever immediately seems best to do, and what the community realized was that matroids are exactly where greedy strategies work. That's another way of thinking of matroids: they're where the greedy algorithm works. And the way they proved this was with this polytope.
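[ed. note: a sketch, under our own naming, of the greedy strategy FA describes, written for a matroid presented by an independence oracle. As FA says, matroids are exactly the structures where this works: for a matroid, the greedy scan returns a maximum-weight basis. The tiny example uses the graphic matroid of a triangle, where independent means acyclic.]

def greedy_max_weight(elements, weight, is_independent):
    # Generic matroid greedy algorithm: scan elements from heaviest to
    # lightest, keeping an element whenever independence is preserved.
    chosen = []
    for e in sorted(elements, key=weight, reverse=True):
        if is_independent(chosen + [e]):
            chosen.append(e)
    return chosen

# Example: the graphic matroid of a triangle on vertices 0, 1, 2.
edges = [(0, 1), (1, 2), (0, 2)]
weights = {(0, 1): 5, (1, 2): 3, (0, 2): 4}

def acyclic(edge_set):
    # Union-find check that edge_set contains no cycle.
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in edge_set:
        ra, rb = find(a), find(b)
        if ra == rb:
            return False
        parent[ra] = rb
    return True

print(greedy_max_weight(edges, weights.get, acyclic))  # [(0, 1), (0, 2)]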
So for optimization people, there's this polytope. It turns out that this polytope also arises in several other settings. There's a beautiful paper of Gelfand, Goresky, MacPherson, and Serganova, and they're doing algebraic geometry. They're studying toric varieties. You don't need to know too much about what this is, but the main point is that if you have a toric variety, there is a polytope associated to it. There's something called the moment map that picks up a toric variety and takes it to a polytope. In this very different setting of toric varieties, they encounter the same polytope, coming from algebraic geometry. Also there's a third way of seeing this polytope coming from commutative algebra. If you have an ideal in a polynomial ring, and again it's not too important that you know exactly what this means, but there's a recipe, given an ideal, to get a polytope out of it. Again, there's a very natural way that, given a very natural ideal, you get the same polytope, coming from commutative algebra.
This is one reason that I like this polytope a lot. It really is kind of a very interdisciplinary object. It's natural. It drops out of optimization, it drops out of algebraic geometry, it drops out of commutative algebra. It really captures the essence of these matroids that have applications in many different fields. So that's the favorite object that I wanted to tell you about.
KK: I like this instead of a theorem in some sense. I learned something today. I mean, I learn something every day. But this idea that, mathematicians know this and a lot of people outside of mathematics don't, that the same structures show up all over the place. Like you say, combinatorics is interesting this way. You count things two different ways and you get a theorem. This is a meta-version of that. You've got these different instances of this fundamental object. Whitney essentially found this fundamental idea. And we can point at it and say, oh, it's there, it's there, it's there, it's there. That's very rich, and it gives you lots to do. You never run out of problems, in some sense. And it also forces you to learn all this new stuff. Maybe you came at this from combinatorics to begin with, but you've had to learn some algebraic geometry, you've had to learn all these other things. It's really wonderful.
FA: I think you're really getting at one thing I really like about studying this subject, which is that I'm always arguing with my students, who'll say, oh, I do analysis, I don't do algebra. Or I do algebra, I don't do topology. And this is one field where you really can't get away with that. You need to appreciate that mathematics is very interconnected and that if you really want to get the full power of the objects and you really want to understand them, you kind of have to learn many different ways of thinking about the same thing, which I think is really very beautiful and very powerful.
EL: So then was the theorem that you were talking about, is this the theorem that the greedy algorithm works on polytopes, or is this something else?
FA: No, so the theorem is a little different. I'll tell you what the theorem is. Out of all the polytopes, there is one which is very fundamental, which is the cube. Now as you know, mathematicians are weird, and for us, a square is a cube. A segment is a cube. Cubes exist in every dimension. In zero dimensions it's a point, in one dimension it's a segment, in two dimensions it's a square, in three dimensions it's the 3-cube, and in any dimension there is a cube. And so the theorem that Gelfand, Goresky, MacPherson, and Serganova proved, which Edmonds probably knew at least to some extent, since he was coming from optimization, is that matroids are exactly certain sub-polytopes of the cube. In other words, you choose some vertices of the cube and you don't choose others, and then you look at what polytope that determines; that polytope is going to be a matroid if and only if the edges of that polytope are all of the form ei-ej. This goes back to what you were saying at the beginning, Evelyn, that these are exactly those vectors that have a bunch of zeroes, and then they have one 1 and one -1. So matroid polytopes have the property that every edge is one of those vectors, and what I find really striking is that the opposite is true: if you just take any sub-polytope of the cube and the edges have those directions, then you have a matroid on your hands. First of all, I think that's a really beautiful characterization.
KK: It's so clean. It's just very neat.
FA: But then the other thing is that this collection of vectors ei-ej is a very fundamental collection of vectors, so you know, this is the root system of the Lie algebra of type A. This might sound like nonsense, but the point is that this is one of about seven families of root systems that control a lot of very important things in mathematics. Lie groups, Lie algebras, regular polytopes, things like this. And so also this theorem points to how the theory of matroids is just a theory of type A, so to speak, that has analogues in many other Coxeter groups. It basically connects to the tradition of Lie groups and Lie theory, and it begins to show how this is a much deeper theory mathematically than I think anybody anticipated.
EL: Oh cool.
KK: Wow.
EL: So I understand that you have a musical pairing for us today. We all have it queued up. We're recording this with headphones, and we're all going to listen to this simultaneously. Then you'll tell us a little bit about what it is.
KK: Are we ready? I'll count us down. 3-2-1-play.
EL: There we go.
FA: We'll let this play for a little while, and I'm going to ask you what you hear when you hear this. One reason I chose this was I saw that you like percussion.
KK: I do. My son is a percussionist.
FA: One thing I want to ask you is when you hear this, what do you hear?
KK: I hear a lot.
EL: It has a really neat complex rhythm going.
FA: Do you speak Spanish?
KK: A little. Otra vez.
EL: I do not, sadly.
KK: It's called Quítalo del rincón, which, I'm sorry, I don't know what quítalo means.
FA: The song is called Quítalo del Rincón by Carlos Embales. And he was a Cuban musician. One thing is that Cubans are famously hard to understand.
KK: Sure.
FA: So I think even for Spanish speakers, this can be a bit tricky to understand. So do you have any idea what's going on, what he's singing?
EL: No idea.
FA: So this is actually a math lesson.
KK: I was going to say, he's counting. I heard some numbers in there.
FA: Yeah, yeah, yeah. It's actually a math lesson. I just think, man, why can't we get our math lessons to feel like this? This has been something that has kind of shifted a lot my understanding about pedagogy of mathematics. Just kind of imagine a math class that looks like this.
KK: Is he just trying to teach us how to count, or is there more going on back there?
FA: It's kind of an arithmetic lesson, but one thing that I really like is it's all about treating mathematics as a community lesson, and it's saying, okay, you know, if there's somebody that doesn't want to learn, we're going to put them in the middle, and they're going to learn with us.
KK: Oh. So they're not going to let anyone off the hook.
FA: Exactly. We all need to succeed together. It's not about the top students only.
KK: Very cool. We'll put a link to this on the blog post. I'm going to fade it out a little bit.
FA: Same here. Maybe I can tell you a little bit more about why I chose this song.
EL: Yeah.
FA: I should say that this was a very difficult task for me because if choosing one theorem is hard for me, choosing one song is even harder.
KK: Sure.
FA: As I mentioned, I also DJ, and whenever I go to a math conference, I always set aside one day to go to the local record stores and see what I will find. Oddly enough, I found this record in a record store in, I want to say Ann Arbor, Michigan, a very unexpected place for this kind of music. It was a very nice find that managed to explain to me how my being as a mathematician, my being as a DJ might actually influence each other. As a DJ, my job is always to provide an atmosphere where people are enjoying themselves, and it took me hearing this record to connect for me that it's also my job as a mathematician, as a math teacher, also to create atmospheres where people can learn math joyfully and everybody can have a good experience and learn something. In that sense it's a very powerful song for me. The other thing that I really like about it and why I wanted to pair it with the matroids is I think this is music that you cannot possibly understand if you don't appreciate the complexity of the history of what goes behind this music. There's definitely a very strong African influence. They're singing in Spanish, there are indigenous instruments. And I've always been fascinated by how people always try to put borders up. They always tell people not to cross borders, and they divide. But music is something that has never respected those borders. I'm fascinated by how this song has roots in Africa and then went to Cuba. Then this type of music actually went back to Congo and became a form of music called the Congolese rumba, and then that music evolved and went back to Colombia, and that music evolved and became a Colombian form of music called champeta. In my mind, it's similar to something I said earlier, that in mathematics you have to appreciate that you cannot put things into separate silos. You can't just be a combinatorialist or just be an algebraist or just a geometer. If you really want to understand the full power of mathematics, you have to travel with the mathematics. This resonates with my taste in music. I think if you really want to understand music, you have to appreciate how it travels around the world and celebrate that.
KK: This isn't just a math podcast today. It's also ethnomusicology.
FA: Something like that.
KK: Something about that, you know, rhythms are universal, right? We all feel these things. You can't help yourself. You start hearing this rhythm and you go, yeah, I get this. This is fantastic.
FA: What our listeners cannot see but I can is how everybody was dancing.
KK: Yeah, it's undeniable. Of course, Cuban music is so interesting because it's such a diverse place. So many diverse influences. People think of Cuba as being this closed off place, well that's just because from the United States you can't go there, right?
FA: Right.
KK: Everybody else goes there, and they think it's great. Of course, living in Florida there's a weird relationship with Cuba here, which is a real shame. What an interesting culture. Oh well. Maybe someday, maybe someday. It's just right there, you know? Why can't we go?
EL: Well, thanks a lot. Would you like to share any websites or social media or anything that our listeners can find you on, or any projects you're excited about?
FA: Sure, so I do have a Twitter account. I occasionally tweet about math or music or soccer. I try not to tweet too much about politics, but sometimes I can't help myself. People can find that at @FedericoArdila. That's my Twitter feed. I also have an Instagram feed with the same name. Then if people are interested in the music nerd side of what I do, my DJ collective is called La Pelanga, and we have a website lapelanga.com. We have Twitter, Instagram, all these things. We actually, one thing we do is collect a lot of old records that have traveled from Africa to the Caribbean to Colombia to various different parts. Many of these records are not available digitally, so sometimes we'll just digitalize a song and put it up there for people to hear. If people like this kind of music, it might be interesting for people to visit. And then I have my website. People can Google my name and find information there.
EL: Well thank you so much for joining us.
KK: This has been great fun, Federico.
FA: Thank you so much. This has been really fun.
KK: Take care.
[outro]
Kevin Knudson: Welcome to My Favorite Theorem. I’m your host, professor of mathematics at the University of Florida Kevin Knudson. This is my cohost.
Evelyn Lamb: Hi! I’m Evelyn Lamb, a math and science writer in Salt Lake City, Utah. Yeah, things are going well here. I went to the mall the other day, and I was leaving—I had to go to get my computer repaired, and I was in a bad mood and stuff, and I was leaving, and there was just, I walked into the parking lot, there was this beautiful view of this mountain. It’s a mall I don’t normally go to, and these mountains: Wow, it’s amazing that I live here.
KK: Is this the picture you put on Twitter?
EL: Yeah, or Facebook.
KK: Yeah, that is pretty spectacular. Well, I had a haircut today, that’s all I can say. Anyway, let’s get to it. We are very pleased in this episode to welcome Laura Taalman. Laura, do you want to introduce yourself and tell people about yourself?
Laura Taalman: Sure. Hi, thank you for having me on this podcast. I am extremely excited to be on it. Thank you.
EL: We’re glad you’re here.
LT: I’m a math professor at James Madison University, which is in Virginia. I’ve been here since 2000. We don’t have graduate students in our department, we only have undergraduate students. So when I got here, straight out of grad school, I had been studying singular algebraic geometry, and I just could not talk about that with students when we were doing undergraduate research. And I switched to knot theory. I’ve since switched to many things. I seem to switch to a new hat every year or so. My new hat is 3D printing. I’ve been doing a lot with mathematical 3D printing, but I think I’m still wearing that math jacket while I’m wearing the 3D printing hat.
EL: That’s a very exciting costume.
LT: Yes, it’s a very exciting costume, that’s true.
KK: And for a while you were the mathematician in residence at the National Museum of Mathematics, right?
LT: MoMath, that’s true. I did a semester at that, and that was the start of me living in New York City for a couple years to solve a two-body problem. I spent a couple years working in industry in 3D printing there. I just recently, last year, came back to the university. I now have the jacket and hat problem.
KK: Well, that’s better than the two-body problem.
LT: It’s better than not having a jacket or a hat.
KK: That too, right. So actually I was just visiting James Madison a couple of months ago. Laura’s department was very nice. Actually, my wife was visiting, and I was just tagging along, so I crashed their colloquium and just gave one. And everybody was really nice. I really, you know, I went to college at Virginia Tech two hours down the road. I’d never really spent any time in Harrisonburg, but it’s a lovely little town.
LT: It is.
KK: It’s very diverse. I had lunch at an Indonesian place.
EL: Oh wow.
KK: It was fantastic. I can’t get that here, you know.
LT: It’s an amazing place.
KK: It is. I thought it was really great. Anyway, so, you’re going to tell us about your favorite theorem. You told us once beforehand, but I’ve kind of forgotten. I remember, but this is pretty great. So Laura, what’s your favorite theorem?
LT: My favorite theorem comes from my knot theory phase. It’s a theorem in knot theory. I don’t know how much knot theory I should assume before saying what this theorem is, but maybe I should just set it up a little bit.
KK: Yeah, set it up a little bit.
EL: That would be great.
LT: In knot theory, you’re studying, say, you tie a shoelace and you connect the ends, and you do that again with a different piece of string, and you’re wondering if these could possibly be the same knot in disguise, like you could deform one to another. Of course, we don’t study knots in three dimensions like that because no one can draw that. This is, in fact, how I got into 3D printing: it was trying to print three-dimensional versions of knots so that I could look at their conformations.
KK: Very cool.
LT: But really mathematicians study knots as planar diagrams. You’ve got a diagram of a knot with crossings: over crossings and under crossings, a collection of arcs in the plane with crossings. A very old result in knot theory is that if two of those diagrams represent the same knot secretly (they might look very different), there is a sequence of what are known as Reidemeister moves that gets from one to the other. Reidemeister moves are super simple moves, like putting a twist in a strand or moving one strand over another strand, or moving a strand over or under a crossing, right? Super simple. It’s been proved that that’s sufficient, that’s all you need to change one diagram into any other equivalent diagram.
KK: OK.
LT: So my favorite theorem is by Joel Hass and Jeffrey Lagarias, I think is his name. Hass is from UC Davis, and Lagarias is at Michigan. And in 2001, they proved an upper bound for the number of Reidemeister moves it takes to turn a knot diagram that’s secretly unknotted into basically a circle, the unknot. So they wanted to answer this question.
We know we can, if it’s unknotted, turn it into a circle. The question is how many of these Reidemeister moves are you going to need, and even worse than that, if you start with a diagram that has, like, 10 crossings, you might actually have to increase the number of crossings along the way while simplifying the knot. It’s not necessarily true that the number of crossings will be monotonically decreasing throughout the Reidemeister move process. You might increase the number, you might have to increase the number of crossings by a lot. So this is a nontrivial question of how many Reidemeister moves. So they said, OK, look. We want to find this one constant that will give you an upper bound, for any knot that’s trivial, on the number of Reidemeister moves needed to unknot it, and they said that the bound would be of the form 2 times [ed note: Taalman misspoke here and meant to the power instead of times, as is clear from the rest of the conversation] a constant times n, where n is the number of crossings. So if it’s a 10-crossing knot, it would be like 2^10 times this constant, right?
KK: OK.
LT: I was playing around with some numbers, so for example, if you had a 6-crossing knot, right, and if the constant happened to be 10, this would be 2^60, which is over a quintillion.
KK: That’s a lot.
LT: If that constant were 10, and your knot started out with just 6 crossings, that’s a big number. But that is not the bound that they found.
KK: It’s not 10.
LT: Their theorem, my favorite theorem, is that they came up with a bound for the maximum number of Reidemeister moves that would be needed to unknot a trivial knot. The constant is 10^11, so the bound is 2^(10^11 times n), with the whole product up in the exponent. So I put this into Wolfram Alpha with n=6. So say you have a 6-crossing knot. It’s not so bad. I put in 2^10million [ed note: Taalman misspoke here and meant hundred billion; 10^7 or 10 million comes up as a bound in a different part of the paper], and then also times 6 in the exponent. I just did this this afternoon, and do you know what Wolfram Alpha said?
KK: It couldn’t do it?
LT: I’ve never seen this. It said nothing.
EL: You broke it?
LT: It didn’t spin and think about it, and it didn’t attempt to say something. It literally just pretended that I did not press the button. This is really a big number.
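[ed note: a minimal Python sketch added by the editor, not part of the episode, just to get a sense of scale: instead of asking for 2^(10^11 times 6) itself, it counts how many decimal digits that number would have. The value n = 6 is simply Taalman’s example.]
import math

n = 6                                # crossings, as in the 6-crossing example
exponent = 10**11 * n                # the Hass-Lagarias bound is 2 to this power
digits = math.floor(exponent * math.log10(2)) + 1
print(f"2^(10^11 * {n}) has about {digits:,} decimal digits")
# roughly 1.8 x 10^11 digits: just writing the bound down would take about 180 billion digits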
KK: I’m surprised. You know what it should have done? It should have given you the shrug emoji.
LT: Yeah, that would be great if it had that. That would be perfect. So the reason it’s my favorite theorem, I guess there are a lot of reasons, but the primary reason is: this is ridiculous, right? If you have a 6-crossing knot, there’s no way you’re going to need even a quintillion Reidemeister moves in reality. If I actually give you a 6-crossing knot in reality, you’re not going to need a quintillion Reidemeister moves, let alone this number that Wolfram Alpha can’t even calculate and just answers with silence. So to me, it’s just really funny. And I could talk a little more about that. But it’s an important result because it’s the first upper bound, which is great, but also, it’s just, it’s ridiculous.
KK: It’s clearly not sharp. They didn’t cook up an example.
LT: It’s clearly not sharp.
KK: They didn’t cook up an example where they had to use that many moves.
LT: Right, no, they did not. It’s kind of like what happened with the twin prime conjecture, and people online were looking at the largest gap you could guarantee, I don’t know if I’m going to say this right, the largest gap.
KK: Right, it was 70 million.
LT: And eventually primes would have to appear with that gap. That gap started out being huge, I don’t remember what it was, but it was really big, and it ended up getting better and better and better and better.
KK: Right.
LT: So this is like the first shot in that game for Reidemeister moves, is 2 to the 10 to the 11th times the number of crossings.
KK: Has anybody made that better yet?
LT: They have. So that was in 2001, this exponential upper bound with very large exponent, and in 2011, two different mathematicians, Coward and Lackenby, I think, proved a different bound that involved an exponential tower. That gives you an idea of just how big that first bound was, if this bound is an exponential tower.
EL: And it’s better?
LT: Actually, let me say that slightly differently because this is not necessarily better. Their result was actually a little bit different. Their result wasn’t taking a knot to the unknot. It was taking any knot to any other knot it was equivalent to.
KK: OK.
EL: OK.
LT: This could well be worse, actually. And to tell you the truth, I was not entirely certain how to type this number into Mathematica, into Wolfram Alpha. It could be a lot worse. Their bound is for the maximum number of Reidemeister moves that you need to take one knot to another knot that it’s ambient isotopic to in 3-space. I’ve got to get my piece of paper to look at this. Their number is what they call exp^(c^n)(n), where the n is the sum of the crossing numbers of the two knots. The c^n: c is some constant to be determined. It could be laughably large, right? And what exp means is that it’s 2^n iterated that many times. So exp^k, or exp(k)(n), would be 2^n iterated k times.
KK: Right. 2 to the 2 to the 2 to the…
LT: …2 to the n. So this number is 2 to the 2 to the 2 to the…tower, and the height of this tower is c^n, where n is the number of crossings, and then there’s an n at the top. And the number c is 10 to the one millionth power.
KK: Wow.
EL: Wow. So this is bad news.
LT: This is very bad. So the tower is 10 to the one million high. I’m sure this is worse than the other one.
KK: It’s got to be worse.
LT: They didn’t try at all to make that low. I did a small example: what if the tower was only length 2 and there was a 6 on the top, so 2^2^6. And you’re doing your brackets from the top down, so 2 to the quantity 2^6.
EL: Right.
LT: That is over a quintillion.
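[ed note: a toy Python version, added by the editor and not from the paper, of the iterated exponential exp^k(n) described above: tower(k, n) applies x -> 2^x to n, k times. Even the tiny case Taalman mentions already tops a quintillion.]
def tower(k, n):
    """Apply x -> 2**x to n, k times, so tower(2, 6) = 2**(2**6)."""
    for _ in range(k):
        n = 2**n
    return n

print(tower(2, 6))   # 18446744073709551616, i.e. 2^64, which is over a quintillion
# The actual Coward-Lackenby bound uses a tower of height c^n with c = 10^1,000,000,
# far beyond anything a computer could evaluate.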
KK: Sure.
EL: Yeah, like this is Graham’s number stuff.
LT: Yeah, Graham’s number, all that stuff with the arrows. All that stuff with the arrows.
EL: Yeah, basically you can’t even tell someone how big Graham’s number is because you don’t have the words to describe the bigness of this number.
LT: Yeah, and even with a tower of 2, I’m getting a quintillion. Their length is 10 to the one million. I already don’t understand what 10 to the one million is.
KK: No. You know this thing where you pack the known universe with protons, do you know how many there’d be?
LT: No. Not many?
KK: 10^126.
LT: Oh my God.
KK: So 10 to the one million. You’ve surely seen Powers of 10, this old Eames movie, right?
LT: Yeah, yeah.
KK: The known universe just isn’t that big, you know? It’s what, 10 to the 30th across or whatever. It’s nothing.
EL: You definitely can’t come up with an example that needs this because the heat death of the universe would occur well before we proved this example needed this many steps.
KK: Yeah.
LT: I think that these mathematicians know how funny their result is. It’s definitely, it’s not just funny. The proofs are very complicated and have to do with piecewise linear 3-manifolds and all this. I don’t understand the proofs. This is very sophisticated, so I’m not besmirching them by saying it’s funny. But I think they understand how crazy this sounds. They’ll say things like, this Coward-Lackenby paper has a line in there like, notice that this solves the problem of figuring out if two knots are Reidemeister equivalent because all you have to do is look at every sequence of Reidemeister moves of that length, look at them all, and then see if any of them turns one knot into the other. Boom, you’ve solved your problem.
KK: All you have to do.
LT: All you have to do! Problem solved.
EL: Yes.
LT: Or that, so earlier you asked if the result has been improved upon, and it has, but that wasn’t the reference I wanted to cite for that. It has been improved, just three years ago, by Lackenby, one of the authors of that other result, and their result is polynomial. They found a polynomial bound, not an exponential bound. It’s much better. They found that if n is the number of crossings, then to go from a trivial knot to the trivial circle, this is back to that problem, it’s 236 times n, all to the 11th power [ed note: the bound is (236n)^11, as the calculation below makes clear].
KK: OK.
LT: It’s not so bad.
KK: Right.
LT: Not so bad. It is actually pretty bad. But it’s something that Wolfram could calculate. So I did it for example with n equals 3. So say you have a 3-crossing trivial knot. What’s the largest number of Reidemeister moves that you would need, according to this bound, to unknot it? That would be 236 times 3, all to the 11th power. That is about 2 times 10 to the 31st power, which is on the order of 10 nonillion.
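[ed note: a quick Python check by the editor, not from the episode, of Lackenby’s polynomial bound (236n)^11 for the 3-crossing example Taalman uses.]
n = 3                          # crossings in the trivial knot diagram
bound = (236 * n) ** 11        # Lackenby's polynomial bound on Reidemeister moves
print(f"{bound:.2e}")          # about 2.2e+31, i.e. on the order of 10^31 moves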
KK: Right, OK.
LT: 10 nonillion.
EL: So this isn’t great.
LT: But it had a name! Dressed in scientific notation. Positive change.
EL: It didn’t cause Wolfram Alpha to run away in fright.
LT: No. I think this is the best one so far, this 2014 result by Lackenby. I think it’s the best one.
EL: Well that’s interesting, because you know, just for the example of 3, if you try, like, 10 Reidemeister moves, that’s gotta be it. It feels like that has to be so much lower. It’ll be interesting to see if it’s possible to shrink this down more to meet some more realistic bound.
LT: Honestly, 3 is a ridiculous example. I used it because it was the smallest, but you’re right. If you think about it, there’s really not that many three-crossing diagrams that one can draw.
KK: Right.
LT: Of the ones that are trivial, I’m sure you could find a path of Reidemeister moves. This result isn’t made for low-crossing knots, really, I think. Or at least not three. But you’re right, it’s got to be way better than this.
KK: This is where mathematicians and computer scientists are never going to see eye to eye on something. A computer scientist will look at this and say, that’s ridiculous. You have not solved the problem.
LT: I agree. It’s not good enough. They did have one result in this 2014 paper. Remember I said that you may have to increase the number of crossings? Well back in the original 2001 paper, Hass and Lagarias were like, hey, here’s a fun corollary: you only have to increase the number of crossings by 2 to the power of 10 to the 11th times n at most, because you can’t have more crossings than what it would take for the number of Reidemeister moves. So that’s their corollary. In 2014, that bound is super significantly improved. They just say it’s (7n) squared. That’s not bad at all. They’re saying it doesn’t have to get worse than that on your way to making it the unknot.
KK: You might have to go up and down and up and down and up and down, right?
LT: Right. I guess then they’re saying the most it would ever have to go up is to that.
KK: Yeah.
LT: So things are getting better.
KK: All the time getting better. So part of the fun of this podcast, aside from just learning about absurd numbers, is that we ask our guests to pair their theorem with something. So what have you chosen to pair your theorem with?
LT: That one is actually harder to answer than what is your favorite theorem.
KK: Sure.
LT: I could answer that right away. But I’ve thought about it, and I’ve decided that the best thing to pair it with is champagne.
KK: OK.
LT: Here’s why. First of all, you should really celebrate that a first upper bound has been found.
EL: Yeah.
LT: Especially in terms of when you have undergraduates who are doing research, this kind of meta question of what does it mean to have a first upper bound, a completely non-practical upper bound. The fact that that’s worthy of celebration is something I want them to know. It doesn’t have to be practical. The theory of having an upper bound is very important.
KK: Right.
LT: So champagne is to celebrate, but it’s also to get you over these numbers. I don’t know, maybe it represents how you feel when you’re thinking about the numbers, or what you need to do when you have been thinking about the numbers, is you need a stiff drink. It can be for both.
EL: And champagne is kind of funny, too. It’s got the funny little bubbles, and you’re always happy when you have it. I think it goes very well with the spirit. It’s not practical either.
KK: No.
LT: Yeah.
EL: As drinks go, it’s one of the less practical ones.
KK: And if you get cheap champagne, it will give you a headache, just like these big numbers.
LT: It’s very serious if you had a tower of exponential champagne, this would be a serious problem for you.
KK: Yeah.
EL: Yeah.
KK: Oh wow. We always like to give our guests a chance to plug anything they’re working on. You tweet a lot. I enjoy it.
LT: I do tweet a lot. If you want to find me online, I’m usually known as mathgrrl, like riot grrl but for math. If you’re interested in 3D printable mathematical designs, I have a ton of free math designs on Thingiverse under that name, and I also have a shop on Shapeways with great 3D printed mathematical jewelry and stuff.
EL: It’s all really pretty. You also have a blog, is Hacktastic still going?
LT: Hacktastic is still there. A lot of it has been taken over by these tutorials I’ve been writing about 3D printing with a million different types of software. If you go to mathgrrl.com, Hacktastic is one of the tabs on that.
EL: I like that one.
KK: All over the internet.
EL: Yeah. She will definitely bring some joy to your life on Twitter and on 3D printing worlds. Yeah, thank you so much for being on here. I’m definitely going to look up these papers and try to conceptualize these numbers a little bit.
LT: These are very big numbers. Thank you so much. It’s been really fun talking about this, and thank you for asking what my favorite theorem is.
KK: Thanks, Laura.
[outro]
Evelyn Lamb: Welcome to My Favorite Theorem. I’m Evelyn Lamb, a freelance math and science writer in Salt Lake City. And this is my cohost.
Kevin Knudson: Hi. I’m Kevin Knudson, professor of mathematics at the University of Florida. How are you doing, Evelyn?
EL: Pretty good. It’s hot here, but it gets cool enough at night that it’s survivable. It’s not too bad.
KK: It’s just hot here. It’s awful.
EL: Yeah, there’s really something about that dry heat. I lived in Houston for a while. It’s different here. So on each episode we invite someone on to tell us about their favorite theorem, and today we’re delighted to have Patrick Honner. Hey! Can you tell us a little bit about yourself?
Patrick Honner: Hi I’m happy to be here. Great to see you, Evelyn and Kevin. I’m in Brooklyn. It’s hot and muggy here. It’s never survivable in New York. I’ve got that going for me. I’m really excited to be here. I’m a high school math teacher. I teach at Brooklyn Technical High School. I studied math long ago, and I’m excited to talk about my favorite theorem today.
KK: Cool.
EL: Great.
KK: So what do you have for us?
PH: In thinking about the prompt of what my favorite theorem was, I guess I came to thinking about it from the perspective of a teacher, of course, because that’s what I’ve been doing for the last almost 20 years. So I was thinking about the kinds of theorems I like to teach, that are fun, that I think are really engaging, that are essential to the courses that I teach. A couple came to mind. I teach calculus occasionally, and I think the intermediate value theorem is probably my favorite theorem in calculus. I feel like the mean value theorem gets all the love in calculus. Everyone thinks that’s the most important, but I really like the intermediate value theorem. I really love De Moivre’s theorem as a connection between complex numbers and geometry and algebra, and a little bit of group theory in there. But what really stuck out when thinking about what my favorite theorem is was Varignon’s theorem.
KK: I had to look this up.
PH: Well I think a lot of people, they know it when you show it to them, but they don’t know the name of it. That’s also part of why I like it. The name is sort of exotic sounding. It transports them to France somehow.
EL: Nice.
PH: Varignon’s theorem is a theorem of Euclidean geometry. It’s not that deep or powerful or exciting, but there’s just something about the way you can interact with it and play with it in class, and the way you prove it and the different directions it goes that really makes it one of my favorite theorems.
KK: Now we’re intrigued.
EL: Yeah. What is this theorem?
PH: Imagine, so Varignon’s theorem is a theorem about quadrilaterals. If you imagine a quadrilateral in the plane, you’ve got the four sides. If you construct the midpoints of each of the four sides, and then connect them in a consistent orientation, so clockwise or counterclockwise, then you will get another quadrilateral. You start with the four sides, take the midpoints and connect them. Now you’ve got another quadrilateral. So if you start with a square, you can imagine those midpoints appearing, and you connect them, then that new quadrilateral would be a square. So you have a square inside of a square. This is a picture I think a lot of people can see.
If you started with a rectangle and you constructed those midpoints, if the rectangle were a non-square rectangle, so longer than it was wide, you can think about it for a moment and maybe draw it, and you’d see a rhombus. Sort of a long, skinny rhombus, depending on the nature of the rectangle. Varignon’s theorem says that regardless of whatever quadrilateral you start with, the quadrilateral you form from those midpoints will be a parallelogram. And I just think that this is so cool.
KK: It’s always a parallelogram.
EL: Yeah, that’s really surprising. By every quadrilateral, do you mean only convex ones, is this for all quadrilaterals?
PH: That’s part of the reason why it’s so much fun to play around with this theorem. It’s true for every quadrilateral, and in fact in some ways, it’s true even for things that aren’t quadrilaterals. In some ways it’s this continual intuition-breaking process with kids when you’re playing around with them. The way you can engage a class with this is you can just tell every student to draw their own quadrilateral and then perform this procedure where they construct the midpoints and connect them. Then you can tell them, ‘Look around. What do you see?’ The first thing the kids see is that everybody drew a square and everybody has a square inscribed, right?
So this is a nice opportunity to confront kids about their mathematical prejudices. Like if you ask them to draw a quadrilateral, they draw a square. If you ask them to draw a triangle, they draw an equilateral triangle. But then there will always be a couple of kids who drew something a little bit more interesting. You can get kids thinking about what all of those things have in common and start looking for a conjecture. You can kind of push and prod them to maybe do some different things. So maybe on the next iteration of this activity, we’ll get some rectangles or some arbitrary, some non-special quadrilaterals.
Even after a couple rounds of this, you’ll still see that almost all the quadrilaterals drawn are convex. Then you can start pushing the kids to see if they understand that there’s another way to draw a quadrilateral that might pose a problem for Varignon’s theorem. It’s so cool that when you get to that convex one, kids never believe that it’ll still form a parallelogram.
EL: In the non-convex one.
PH: That’s right, the concave one, the non-convex. I always get the two words mixed up. Maybe that’s why the kids are so confused. Yeah, the kids will never believe in the non-convex case that it’ll still form a parallelogram. Wow, I can’t believe that.
KK: It seems like, I looked this up, even if the thing isn’t really a quadrilateral, if you take four points in the plane and draw two triangles that meet where the lines cross, it still works, right?
PH: Yeah. There’s yet another level to go with this. Now you’ve got the kids like, wait, so for concave, this works? It’s kind of mind-blowing. Then you can start messing around with their idea of what a quadrilateral actually is. If you show them, well, what if I drew a complex quadrilateral. I don’t use that terminology right away, but just this idea of connecting the vertices in such a way that two sides appear to cross. It can’t possibly work there, can it? The kids don’t know what to think at this point. They think something weird is going on. Amazingly, even if the quadrilateral crosses itself like that, as long as it’s the non-degenerate case, the four midpoints will still make a parallelogram. It’s really remarkable.
KK: Is there a slick proof of this, or is it one of these crazy things, and you have to construct and construct and construct, and before you know it you’ve lost track of what you’re doing?
PH: No, that’s another reason why this is such a great high school activity. The proof is really accessible. In fact there are several proofs.
But before we talk about my favorite proof of my favorite theorem, there’s another case, another level you can go with Varignon’s theorem. Often I’ll leave this with students as something to think about, a homework problem or something like that. Varignon’s theorem actually works even if the four points don’t form a quadrilateral, so if the four points aren’t coplanar, say. This process of connecting the midpoints will still form a parallelogram. It’s amazing just that the four midpoints are coplanar. You wouldn’t necessarily expect that the four midpoints would be in the same plane if the four starting points aren’t in the same plane. Moreover, those four midpoints form a parallelogram. It’s such an amazing thing.
EL: What is your favorite proof, then?
PH: My favorite proof of Varignon’s theorem is something that connects to a couple of key ideas that we routinely explore in high school geometry. The first is one of the first important theorems about triangles that we prove, that’s simple but has some power. It’s that if you connect the midpoints of two sides of a triangle, that line segment is parallel to the third side. And it’s also half the length. But the parallelism is important.
The other idea, and I think this is one of the most important ideas that I try to emphasize with students across courses, is the idea of transitivity, of equality or congruence, or in this case parallelism.
The nice proof of Varignon’s theorem is that you imagine the quadrilateral with all its midpoints. And you draw one diagonal. You just think about one diagonal. Now if you cover up half of the quadrilateral, you’ve got a triangle. The line segment connecting those two midpoints is parallel to that diagonal because that’s just that triangle theorem. Now if you cover up the other half of the quadrilateral, you have a second triangle. And that segment is parallel to the diagonal. So both of those line segments are parallel to that diagonal, and therefore by transitivity, they’re parallel to each other, and now you have that the two opposite sides are parallel. And the exact same argument works for the other sides using the other diagonal.
KK: I like that. My first instinct would be to do some sort of vector analysis. You realize all the sides as vectors and then try to add them up and show that they’re parallel or something.
PH: Yeah, and in some of the courses I teach, I do some work with vectors, and this is definitely something we do. We explore that proof using vectors, or coordinate geometry. Maybe later in the year we’ll do some work with coordinate geometry. We can prove it that way too.
EL: Yeah, I think I would immediately go to coordinates. Of course, I would have assumed they were coplanar in the first place. If you tell me it’s a quadrilateral, yeah, it’s going to be there in the plane and not in 3-space.
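[ed note: a small numerical check, written by the editor in Python with NumPy and not from the episode, of the vector version of the proof. The four points below are arbitrary and deliberately not coplanar; the midpoint quadrilateral still comes out a parallelogram because M2 - M1 = (C - A)/2 = M3 - M4.]
import numpy as np

# Four arbitrary points in 3-space, chosen so they are not coplanar.
A = np.array([0.0, 0.0, 0.0])
B = np.array([4.0, 1.0, 2.0])
C = np.array([3.0, 5.0, -1.0])
D = np.array([-2.0, 2.0, 3.0])

# Midpoints of the four "sides" AB, BC, CD, DA.
M1, M2, M3, M4 = (A + B) / 2, (B + C) / 2, (C + D) / 2, (D + A) / 2

# Opposite sides of the midpoint quadrilateral are equal as vectors
# (the triangle midsegment theorem applied to each diagonal), so it is a parallelogram.
print(np.allclose(M2 - M1, M3 - M4))   # True
print(np.allclose(M3 - M2, M4 - M1))   # True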
PH: I love coordinate geometry, and I definitely have that instinct to run to coordinates when I want to prove something. One of those things you have to be careful of in the high school class is making sure they understand all the assumptions that underlie the use of coordinates, and understanding the nature of an arbitrary figure. Going back to one of the first things I said, if you ask kids to draw a quadrilateral, they’re going to draw a square, or if you ask them to draw an arbitrary quadrilateral, they’re often going to draw a square or rectangle. If you ask them to draw an arbitrary quadrilateral in the plane, they might make assumptions about where those coordinates are likely to be.
EL: Yeah.
KK: Your students are lucky to have you.
PH: That’s what I tell them!
KK: Really, to give this much thought to something like this and show all these different perspectives and how you might come at it in all these different ways, my high school geometry class, I mean I had a fine teacher, but we never saw anything with this kind of sophistication at all.
PH: It’s fun. I would like to present it as if I sat around and thought deeply about it and had this really thoughtful approach to it, but it just kind of happened. I think, again, that’s why this is one of my favorite theorems. You can just put this in front of students and have them play and just run with this. It’ll just go in so many different directions.
EL: So what have you chosen to pair with this theorem? What do you enjoy experiencing along with the glory of this theorem?
PH: This was a tricky one. I feel like when I think of Varignon’s theorem, really focusing on the name, it really transports me to France. I feel like it’s a hearty stew, like boeuf Varignon or something like that. I think you need some crusty bread and a glass of red wine with Varignon’s theorem. Not my students.
EL: Crusty bread and grape juice for them. Yeah, I just got back from living in France for six months, and actually I didn’t have any boeuf bourguignon, or Varignon, while I was there, but I did enjoy quite a few things with crusty bread and a glass of red wine. I highly recommend it.
KK: This has been great fun.
PH: Yeah, I’ve enjoyed this. You seem to enjoy talking about this more than my students, so this was great for me.
KK: It helps to be talking to a couple of mathematicians, yeah.
EL: So, we like to let guests plug websites or anything. So would you like to tell people about your blog or any things you’re involved in that you’d like to share?
PH: Yeah, sure. I blog, less frequently now than I used to, but still pretty regularly. I blog at mrhonner.com. You can generally find out about what I’m doing at my personal website, patrickhonner.com. I’m pretty active on Twitter, @mrhonner.
KK: Lots of good stuff on Patrick’s blog, especially after the Regents exams. You have a lot to say.
PH: Not everybody thinks it’s good stuff. I’m glad some people do.
KK: I don’t live in New York. It’s fine with me.
EL: Yeah, he has a series kind of taking apart some of the worst questions on the New York Regents exams for math. It can be a little frustrating.
PH: We just wrapped up Regents season here. Let’s just say there are some posts in the works about what we’re facing. You know, I enjoy it. It always sparks interesting mathematical conversations. My goal is just to raise awareness about the toll of these tests and how sometimes it seems like not enough attention is given to making sure these tests are of high quality and are valid.
KK: I don’t think it’s just a problem in New York, either.
PH: It is not just a problem in New York.
KK: Well thanks for joining us, Patrick. This was really great. I learned something today.
EL: Yeah, me too.
PH: It was my pleasure. Thanks for having me. Thanks for giving me an opportunity to think about my favorite theorem and come on and talk about it. And maybe Varignon’s theorem will appear in a couple more geometry classes next year because of it.
KK: Let’s hope.
EL: Yeah, I hope so.
KK: Take care.
PH: Thanks. Bye.
Kevin Knudson: Welcome to My Favorite Theorem. I am Kevin Knudson, professor of mathematics at the University of Florida, and I am joined by my cohost.
Evelyn Lamb: Hi. I’m Evelyn Lamb. I’m a math and science writer in Salt Lake City, Utah.
KK: How’s it going?
EL: Yeah, it’s going okay. It’s a bit smoky here from the fires in the rest of the west. A few in Utah, but I think we’re getting a lot from Montana and Oregon and Washington, too. You can’t see the mountains, which is a little sad. One of the nice things about living here.
KK: Yeah. Well, Hurricane Irma is bearing down on Florida. I haven’t been to the grocery store yet, but apparently we’re out of water in town. So I might have waited a couple days too late.
EL: Fill up those bathtubs, I guess.
KK: I guess. I don’t know. I’m dubious. You know, I lived in Mississippi when Katrina happened, and the eye came right over where we lived, and we never even lost Direct TV. I’m trying not to be cavalier, but we’ll see. Fingers crossed. It’s going to be bad news in south Florida, for sure. I really hope everybody’s OK.
EL: Yeah, definitely.
KK: Anyway.
EL: Fire, brimstone, and water recently.
KK: Anyway, we’re not here to talk about that. We’re here to talk about math. Today we’re thrilled to have Candice Price with us. Candice, want to say hi?
Candice Price: Hi everyone!
KK: Tell us a little bit about yourself.
CP: Sure. I’m currently an assistant professor of mathematics at the University of San Diego. I got my Ph.D. at the University of Iowa, and I study DNA topology, so knot theory applied to DNA, applied to biology.
EL: So that’s knot with a ‘k.’
CP: Yeah, knot.
KK: San Diego is a big switch from Iowa.
CP: Yeah, it is. In fact, I had a stopover in New York and a stopover in Texas before getting here. All over.
EL: You’ve really experienced a lot of different climates and types of people and places.
CP: Yeah. American culture, really.
KK: All right. You’ve told us. Evelyn and I know what your favorite theorem is, and I actually had to look this up, and I’m intrigued. So, Candice, what’s your favorite theorem?
CP: Sure. My favorite theorem is actually John H. Conway’s basic theorem on rational tangles. It’s a really cool theorem. What Conway states, or shows, is that there’s a one-to-one correspondence between the extended rational numbers, so rational numbers and infinity, and what are known as rational tangles. What a rational tangle basically is, is you can take a 3-ball, or a ball, an open ball, and if you put strings inside the ball and attach the strings to the boundary of the ball, so they’re loose in there but fixed, and you add these twists to the strings inside, if you take a count to how many twists you’ve added in these different directions, maybe the direction of west and the direction of south, and if you just write down how many twists you’ve done, first going west and then going south, and then going west, going south, all of those, all the different combinations you can do, you can actually calculate a rational number, and that rational number is attributed to that tangle, to that picture, that three-dimensional object.
It’s pretty cool because as you can guess, these tangles can get very complicated, but if I gave you a rational number, you could draw that tangle. And you can say that any tangle that has that same rational number, I should be able to just maneuver the strings inside the ball to look like the other tangles. So it’s actually pretty cool to say that something so complicated can just be denoted by fractions.
EL: Yeah. So how did you encounter this theorem? I encountered it from John Conway at this IAS program for women in math one year, and I don’t think that’s where we met. I don’t remember if you were there.
CP: I don’t think so.
EL: Yeah, I remember he did this demonstration. And of course he’s a very engaging, funny speaker. So yeah, how did you encounter it?
CP: It’s pretty cool, so he has this great video, the rational tangle dance. So it’s fun to show that. I started my graduate work as a master’s student at San Francisco State University, and I had learned a little bit about knot theory (with a ‘k’) as an undergrad. And so when I started my master’s I was introduced to Mariel Vazquez, who studies DNA topology. So she actually uses rational tangles in her research. That was the first time I had even heard that you could do math and biology together, which is a fascinating idea. She had introduced to me the idea of a rational tangle and showed me the theorem, and I read up on the proof, and it’s fascinating and amazing that those two are connected in that way, so that was the first time I saw it.
KK: Since I hadn’t heard of this theorem before, I looked it up, and I found this really cool classroom activity to do with elementary school kids. You take four kids, and you hand them two ropes. You allow them to do twists, where the students on one end of the ropes interchange, and there’s also a rotation operation.
CP: Yeah.
KK: And then when you’re done you get a rational number, and it leads students through these explorations of, well, what does a twist do to an integer? It adds one. The rotate is a -1/x kind of thing.
CP: Right.
KK: So I was immediately intrigued. This really would be fun. Not just for middle school kids, maybe my calculus students would like it. Maybe I could find a way to make it relevant to my undergrads. I thought, what great fun.
CP: Yeah. I think it’s even a cool way to show students that with just a basic mathematical entity, fractions or rational numbers, you can perform higher mathematics. It’s pretty cool.
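[ed note: a Python sketch by the editor, not from the episode, of the tangle arithmetic just described: a twist adds 1 to the fraction and a rotation sends x to -1/x, with infinity (the rotated 0-tangle) represented here as None. The particular sequence of moves is just an illustration.]
from fractions import Fraction

def twist(x):
    """One twist: x -> x + 1 (the infinity tangle stays at infinity)."""
    return None if x is None else x + 1

def rotate(x):
    """One rotation: x -> -1/x, which exchanges 0 and infinity."""
    if x is None:
        return Fraction(0)
    if x == 0:
        return None
    return -1 / x

# Start from the 0-tangle and apply a short sequence of moves.
x = Fraction(0)
for move in [twist, twist, twist, rotate, twist]:
    x = move(x)
print(x)   # 2/3: three twists give 3, rotating gives -1/3, one more twist gives 2/3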
KK: This sort of begs the question: are there non-rational tangles? There must be.
CP: Yes there are! It categorizes these rational tangles, but there is not yet a categorization for non-rational tangles. There are two types. One is called prime, and one is called locally knotted. So the idea of locally knotted is that one of the strands just has a knot on it. A knot is exactly what you think about where you have a knot in your shoestring. Then prime, which is great, is all of the tangles that are not rational and not locally knotted. So it’s this space where we’ve dumped the rest of the tangles.
KK: That’s sort of unfortunate.
CP: Yeah, especially the choice of words.
KK: You would think that the primes would be a subset, somehow, of the rational tangles.
CP: You would hope.
EL: So how do these rational tangles show up in DNA topology?
CP: That’s a great question. So your DNA, you can think of as long, thin strings. That’s how I think about it. And it can wrap around itself, and in fact your DNA is naturally coiled around itself. That’s where that twisting action comes, so you have these two strings, and each string, we know, is a double helix. But I don’t care about the helical twist. I just care about how the DNA wraps around itself. These two strings can wind around, just based on packing issues, or a protein can come about and add these twists to it, and naturally how it just twists around. Visually, it looks like what is happening with rational tangles. Visually, the example that Kevin was mentioning, that we have the students with the two ropes, and they’re sort of twisting the ropes around, that’s what your DNA is doing. It turns into a great model, visually and topologically, of your DNA.
KK: Very cool.
CP: I like it.
KK: Wait, where does infinity come from, which one is that? It’s the inverse of 0 somehow, so you rotate the 0 strand?
CP: Yes, perfect. Very good.
KK: So you change your point of view, like when I’m proving the mean value theorem in calculus, I just say, well, it’s Rolle’s theorem as Forrest Gump would look at it, how he tilts his head.
CP: Right. I’m teaching calculus. I might have to use that. That’s good. I mean, hopefully they’ll know who Forrest Gump is.
KK: Well, right. You’re sort of dating yourself.
CP: That’s also a fun conversation to have with them.
KK: Sure. So another fun conversation on this podcast is the pairing. We ask our guests to pair their theorem with something. What have you chosen to pair Conway’s theorem with?
CP: So I thought a lot about this. So being in California, right, what I paired this with is a Neapolitan shake from In n Out burger. And the reason for that is, you’ve sort of taken these three different flavors, equally delicious on their own, right, rational numbers, topology, and DNA, and you put them together in this really beautiful, delicious shake. So the Neapolitan shake from In n Out burger is probably my favorite dessert, so for me, it’s a good pairing with Conway’s rational tangle theorem.
KK: I’ve only eaten at In n Out once in my life, sadly, and I didn’t have that shake, but I’m trying to picture this. So they must not mix it up too hard.
CP: They don’t, not too hard. So there’s a possibility of just getting strawberry, just getting vanilla, just getting chocolate, but then you can at some point get all three flavors together, and it’s pretty amazing.
KK: So I can imagine if you mix it too much, it would just be, like, tan. It would just be this weird color.
CP: Maybe not as delicious looking as it is tasting.
KK: That’s an interesting idea.
CP: It’s pretty cool.
KK: So we also like to give our guests a chance to plug anything they’re working on. Talk about your blog, or anything going on.
CP: Sure. I am always doing a lot of things. I am hoping I can take this time to plug, in February we have a website—we is myself, Shelby Wilson, Raegan Higgins, and Erica Graham—a website called Mathematically Gifted and Black where we showcase or spotlight every day a contemporary black mathematician and their contributions to mathematics, and we’re working on that now. We’ll have an article in the AMS Notices in February coming up. It’s up now so you can see it. We launched in February 2017. It’s a great website. We’re really proud of it.
EL: Yeah. Last year it was a lot of fun to see who was going to be coming on the little calendar each time and read a little bit about their work. You guys did a really nice job with that.
CP: Thanks. We’re very proud, and I think the AMS will put a couple of posters around the website as well.
KK: Great. Well, Candice, thanks for joining us.
CP: Thank you.
KK: This has been good fun. I like learning new theorems. Thanks again.
CP: Yeah, of course. Thank you. I enjoyed it.
[outro]
Kevin Knudson: Welcome to My Favorite Theorem. I’m Kevin Knudson, professor of mathematics at the University of Florida. I’m flying solo in this episode. I’m at the Geometry in Gerrymandering workshop at Tufts University, sponsored by the Metric Geometry, what is it called, Metric Geometry and Gerrymandering Group, MGGG. It’s been a fantastic week. I’m without my cohost Evelyn Lamb in this episode because I’m on location, and I’m currently sitting in the lobby of my bed and breakfast with my very old friend, not old as in age, just going way back, friend, Jeanne Nielsen Clelland.
Jeanne Clelland: Hi Kevin. Thanks for having me.
KK: So you’re at the University of Colorado, yes?
JC: University of Colorado at Boulder, yes.
KK: Tell everyone about yourself.
JC: Well, as you said, we’re old friends, going all the way back to grad school.
KK: Indeed. Let’s not say how long.
JC: Let’s not say how long. That’s a good idea. We went to graduate school together. My area is differential geometry and applications of geometry to differential equations. I’m a professor at the University of Colorado at Boulder, and I’m also really enjoying this gerrymandering conference, and I’m really happy to be here.
KK: Let’s see if we can solve that problem. Although, as we learned today, it appears to be NP-hard.
JC: Right.
KK: That shouldn’t be surprising in some sense. Anyway, hey, let’s put math to work for democracy. Whether we can solve the problem or not, maybe we can make it better. So I know your favorite theorem, but why don’t you tell our listeners. What’s your favorite theorem?
JC: My favorite theorem is the Gauss-Bonnet theorem.
KK: That’s awesome because if anybody’s gone to our Facebook page, My Favorite Theorem, or our Twitter feed, @myfavethm, the banner picture, the theorem stated there is the Gauss-Bonnet theorem. That’s accidental. I just thought the statement looked pretty.
JC: Yeah, and when I first looked at your page, I saw that. And I thought, well, I guess my favorite theorem is already taken since it’s your banner page, so I was really excited to hear that I could talk about it.
KK: No, no, no. In fact, I was doing one last week, and the person mentioned they might do Gauss-Bonnet, and I said no, no, no. I have an expert on Gauss-Bonnet who’s going to do it for us. So why don’t you tell us what Gauss-Bonnet is?
JC: OK. So Gauss-Bonnet is about a relationship between, so it’s in differential geometry. It comes from the geometry of surfaces, and you can start with surfaces in 3-dimensional space that are easy to visualize. And there are several notions of curvature for surfaces. One of these notions is called the Gauss curvature, and roughly it measures whether a surface is bowl-shaped or saddle-shaped. So if the Gauss curvature is positive, then you think the surface looks more like a bowl, like a sphere is the prototypical example of positive Gauss curvature. If the Gauss curvature is negative, then your surface is shaped more like a saddle, and if the Gauss curvature is zero, then you think your surface, well the prototypical example is a plane, a surface that’s flat, but in fact this is a notion that is metrically invariant, which means if you take a surface and bend it without stretching it, you won’t change the Gauss curvature.
KK: OK.
JC: So for instance I could take a flat piece of paper and wrap it up into a cylinder.
KK: Yes.
JC: And since that doesn’t change how I measure distance, at least small distances on that piece of paper, a cylinder also has Gauss curvature zero.
KK: So this is a global condition?
JC: No, it’s local.
KK: Right.
JC: It’s a function on the surface, so at every point you can talk about the Gauss curvature at a point. So of course the examples I’ve given you, the sphere, the plane, those are surfaces where the Gauss curvature is constant, but on most surfaces this is a function, it varies from point to point.
KK: Right, so a donut, a torus, on the inside it would be negative, right?
JC: Right.
KK: But on the outside,
JC: That’s exactly right, and that’s a great example. We’re going to come back to the example of the torus.
KK: Good.
JC: So at the other extreme for surface, particularly for compact surfaces, you have topology, which is your area. And there’s a fundamental invariant of surfaces called the Euler characteristic. And the way you can compute this is really fun. You draw a graph, and the mathematical notion of a graph is basically you have points, which are called vertices, you have edges joining your vertices, and then you have regions enclosed by these edges, which are called faces.
KK: Yes.
JC: And if you take a surface, you can draw a graph on it any way you like. You count the number of vertices V, the number of edges E, and the number of faces F. You compute the number V-E+F, and no matter how you drew your graph, that number will be the same for any graph on a given surface.
KK: Which is remarkable enough.
JC: That is remarkable enough, right, that’s hugely remarkable. That’s a very famous theorem that makes this number a topological invariant, so for instance the Euler characteristic of a sphere is 2, the Euler characteristic of a donut is zero. If you were to take, say, a donut with multiple holes, my son really loves these things called two-tone knots, which are like donuts with two holes. A two-tone has Euler characteristic of -2, and generally the more holes you add, the more negative the Euler characteristic.
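[ed note: a tiny Python illustration added by the editor of the V - E + F count: the cube gives a graph drawn on the sphere, and a 3-by-3 grid of squares with opposite sides glued gives a graph drawn on the torus.]
# Graph on the sphere: the cube has 8 vertices, 12 edges, 6 faces.
V, E, F = 8, 12, 6
print(V - E + F)    # 2, the Euler characteristic of the sphere

# Graph on the torus: a 3 x 3 grid of squares with opposite sides identified.
V, E, F = 9, 18, 9
print(V - E + F)    # 0, the Euler characteristic of the donut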
KK: Right, so the formula is 2 minus two times the number of holes, or 2-2g.
JC: Yes, and that’s for a compact surface.
KK: Compact surfaces.
JC: And it gets more complicated for non-compact. So the Gauss-Bonnet theorem in its simplest form, and let me just state it for compact surfaces, so I’m not worried about boundary, it says if you take the Gauss curvature, which is this function, and you integrate that function over the surface, the number that you get is 2π times the Euler characteristic.
KK: This blew my mind the first time I saw it.
JC: This is an incredible relationship, a very surprising relationship between geometry and topology. So for instance, if you take your surface and you wiggle it, you bend it, you can change that Gauss curvature a lot.
KK: Sure.
JC: You can introduce all sorts of wiggles in it from point to point. What this theorem says is that however you do that, all those wiggles have to cancel out because the integral of that function does not change if you wiggle the surface. It’s this absolutely incredible fact.
KK: So for example take a sphere. So we would get 4π.
JC: 4π.
KK: A sphere has constant sectional curvature 1. I guess, can you change that? You can, right?
JC: Sure!
KK: But if you maybe stretch it into an ellipsoid, the curvature is still maybe going to be positive, it’s going to be really steep at the pointy ends but flatter in the middle. So the way I always visualized this was that yeah, you might bend and stretch, which topologists don’t care about, and this integral—and the way we think about integrals is that they’re just big sums, right?—so you increase some of the numbers and decrease some of the numbers, so they’re just canceling out.
JC: Not only that, these numbers are scale invariant. So if you take a big sphere versus a small sphere, the big sphere has more area, but the absolute value of the curvature function is smaller, and those things cancel out. So the integral remains 4π.
KK: Right, so the surface of the Earth, for example, we can’t really see the curvature.
JC: Right.
KK: But it is curved.
JC: It is curved, and the area is so big that the integral of that very small function over that very large area would still be 4π.
KK: Right. So on the donut, right, we’re getting this cancelation. On the inside it’s negative, and it’s going to be 0 in some points, and on the outside it’s positive.
JC: Right. That’s really the amazing thing about the donut. It’s this unique surface where you get zero. So you have this outer part of the donut where the Gauss curvature is positive, the inner part where it’s negative, and no matter what you do to your donut, how irregularly shaped you make it, just the fact that it’s donut shaped means that those regions of positive and negative curvature exactly cancel each other out.
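[ed note: a numerical check by the editor, in Python with NumPy and not from the episode, of the cancellation Clelland describes. It uses the standard parametrized torus with tube radius r and center-circle radius R (the radii below are arbitrary) and sums the Gauss curvature times the area element over a grid; the positive outside and the negative inside cancel to essentially zero, which is 2π times the Euler characteristic of the torus.]
import numpy as np

R, r, N = 3.0, 1.0, 400                           # center-circle radius, tube radius, grid size
u = np.linspace(0, 2*np.pi, N, endpoint=False)    # angle around the hole
v = np.linspace(0, 2*np.pi, N, endpoint=False)    # angle around the tube
U, V = np.meshgrid(u, v)

K = np.cos(V) / (r * (R + r*np.cos(V)))           # Gauss curvature: positive outside, negative inside
dA = r * (R + r*np.cos(V)) * (2*np.pi/N)**2       # area of each little grid patch

print(np.sum(K * dA))    # essentially 0, i.e. 2*pi*chi for the torus, since chi = 0
# For a sphere the same kind of sum gives 4*pi, which is 2*pi*chi with chi = 2.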
KK: Wow. Yeah, it’s a remarkable theorem. Great connection between geometry and topology. Do you want to talk about the noncompact case?
JC: This also gets interesting for surfaces with boundary. It actually starts, when I teach this in a differential geometry class, where this starts is a very classical idea called the angle excess theorem. And this goes back to Euclidean geometry. So everybody knows in flat Euclidean geometry, if you draw a triangle, what’s the sum of the angles inside the triangle?
KK: Yeah, 180 degrees.
JC: 180 degrees, π, depending on whether you want to work in degrees or radians. This is a consequence of the parallel postulate, and in the history of developing non-Euclidean geometry, what happened is people had developed alternate ideas of geometry with alternate versions of the parallel postulate. So in spherical geometry, imagine you draw a triangle on the sphere. Say you’ve got a globe. Take a triangle with points: one vertex is at the north pole, and two vertices are at the Equator. Say they’re a quarter of the way around the equator from each other, and the straight lines in this geometry are great circles.
KK: Yes.
JC: So draw a triangle between those three points with great circles. That’s a triangle with three right angles.
KK: 270 degrees.
JC: Right, 270 degrees. What the angle excess theorem says, and we use radians, so that triangle has angle sum 3π/2 instead of π, is that the difference of those two numbers is the integral of the Gauss curvature over that triangle.
KK: Oh wow, OK. OK, I believe that.
JC: As we were saying for a sphere, the total Gauss curvature integral is 4π. This triangle I’ve just described takes up an eighth of the sphere, it’s an octant. So the integral of the Gauss curvature over it is an eighth of 4π, which is π/2. So that’s why the difference of the sum of those angles and π is π/2. So that’s where this theorem starts, and ultimately the way you prove the angle excess theorem, basically it boils down to Green’s theorem, which I was very excited to hear Amie Wilkinson talk about in one of your previous episodes. It’s really just Green’s theorem to prove the angle excess theorem. So from there, the way you prove the global Gauss-Bonnet theorem is you triangulate your surface. You cut it up into geodesic triangles, you apply the angle excess theorem to each of those triangles, you add them all up, and you count very carefully based on the graph you have drawn of triangles how many vertices, how many edges, and how many faces. And when you count carefully, the Euler characteristic pops out on the other side.
KK: Right, OK.
JC: It’s this very neat combination of classical things, the angle excess theorem and combinatorics. It’s fun teaching an undergraduate course when you tell them counting is hard.
KK: It is hard.
JC: And they don’t believe you until you show them the ways it’s hard.
KK: There’s no way. I can’t count.
JC: So it’s a really fun theorem to do with students. It’s the culmination of the differential geometry class that I teach for undergraduates. I spend the whole semester saying, “Just wait until we get to Gauss-Bonnet! You’re going to think this is really cool!” And when we get there, it really does live up to the hype. They’re really excited by it.
KK: Yeah. So this leads to the question. We like to pair our theorems with something. What have you chosen to pair the Gauss-Bonnet theorem with?
JC: Well the obvious thing would be donuts.
KK: Sure.
JC: And in fact I do sometimes bring in donuts to class to celebrate the end of the class, but you know, this is such a culminating theorem, I really wanted to pair it with something celebratory, like a fireworks display or some sort of very celebratory piece of music.
KK: I can get on with that. It’s true, donuts seem awfully pedestrian.
JC: They do. Donuts are great because of the content of the theorem. They’re a little too pedestrian.
KK: So a fireworks display with what, 1812 Overture?
JC: Something like that.
KK: Really, this is the end. Bang!
JC: I think it deserves the 1812 Overture.
KK: That’s a really good one, OK. And maybe we’ll try to get that into the podcast.
JC: That would be great.
KK: A nice public domain thing if I can find it.
[1812 Overture plays]
JC: Sounds great.
KK: So we like to give our guests a chance to plug something. So you published a book recently?
JC: I did. I recently published a book. It’s called From Frenet to Cartan: The Method of Moving Frames. It’s published in the American Math Society’s graduate series, and it’s basically designed to be a second course in differential geometry, so for advanced undergraduates or beginning graduate students who have had a course in curves and surfaces. Hopefully it’s accessible at that level, and it was really fun. It largely grew out of working with students doing independent study, so I really wrote this book in a way that’s intended to be very student-friendly. It’s informal in style and written the way I would talk to a student in my office. I’m very happy with how it came out, so if this is a topic that’s interesting to any of your listeners, check it out.
KK: That’s great. I took curves and surfaces from your advisor, Robert Bryant, who’s the nicest guy you’ve ever met.
JC: Oh, he’s wonderful.
KK: Everybody loves Robert. That was the last differential geometry course I took, so maybe I should read your book.
JC: Let me give him credit, too. Where this originally came from, when I was a new Ph.D., well relatively new, three years post-Ph.D., Robert invited me to give a series of graduate lectures with him at MSRI, and this book grew out of notes I wrote for that workshop many, many years ago. And Robert, when I very naively said to him, “You know, I have all these lecture notes I should turn into a book,” Robert, having written a book, should have laughed at me, but instead he said, “Yeah, you should!” And it became a back burner project for a long time.
KK: More than a decade, probably.
JC: Yeah, but eventually, I’ve had so much fun working with students on this project.
KK: I’ve written two books, and it’s really, it’s so much work.
JC: You don’t do it for the money.
KK: You really don’t do it for the money, that’s for sure. And of course it’s great you had such a model in Robert, as a teacher and an expositor.
JC: I count myself extremely fortunate to have had him as my advisor.
KK: Well, Jeanne, this has been fun. Thanks for joining us.
JC: Thanks for having me.
[outro]
Kevin Knudson: Welcome to My Favorite Theorem. I’m your host Kevin Knudson, professor of mathematics at the University of Florida. And I’m joined by my cohost.
EL: I’m Evelyn Lamb, a freelance math and science writer in Salt Lake City, Utah.
KK: Welcome home.
EL: Yeah, thanks. I just got back from Paris a week ago, and I’m almost back on Utah time. So right now I’m waking up very early, but not 3 in the morning, more like 5 or 6.
KK: Wait until you’re my age, and then you’ll just wake up early in the morning because you’re my age.
EL: Yeah. I was talking to my grandma the other day, and I was saying I was waking up early, and she said, Oh, I woke up at 4 this morning.
KK: Yeah, that’s when I woke up. It’s not cool. I don’t think I’m as old as your grandmother.
EL: I doubt it.
KK: But I’m just here to tell you, winter is coming, let me put it that way. We’re pleased today to welcome Mohamed Omar. Mohamed, why don’t you tell everyone a little bit about yourself?
MO: Great to be on the podcast. My name is Dr. Mohamed Omar. I’m a professor at Harvey Mudd College. My area of specialty is algebra and combinatorics, and I like pure and applied flavors of that, so theoretical work and also seeing it come to light in a lot of different sciences and computer science. I especially like working with students, so they’re really involved in the work that I do. And I just generally like to be playful with math, you know, have a fun time, and things like this podcast.
KK: Cool, that’s what we aim for.
EL: And I guess combinatorics probably lends itself to a lot of fun games to play, or it always seems like it.
MO: Yeah. The thing I really like about it is that you can see it come to life in a lot of games, and a lot of hobbies can motivate the work that comes up in it. But at the same time, you can see it as a lens for learning a lot of more advanced math, such as, like, abstract algebra, sort of as a gateway to subjects like that. So I love this diversity in that respect.
KK: I always thought combinatorics was hard. I thought I knew how to count until I tried to learn combinatorics. It’s like, wait a minute, I can’t count anything.
MO: It’s difficult when you have to deal with distinguishability and indistinguishability and mixing them, and you sort of get them confused. Yeah, definitely.
KK: Yeah, what’s it like to work at Harvey Mudd? That always seemed like a really interesting place to be.
MO: Harvey Mudd is great. I think the aspects I like of it a lot are that the students are just intrinsically interested and motivated in math and science, and they’re really excited about it. And so it really feels like you’re at a place where people are having a lot of fun with a lot of the tools they learn. So when you’re teaching there, it’s a really interactive, fun experience with the students. There’s a lot of active learning that goes on because the students are so interested in these things. It’s a lot of fun.
KK: Very cool. So, Mohamed, what’s your favorite theorem?
MO: First of all, my favorite theorem is a lemma. Actually a theorem, but usually referred to as a lemma.
KK: Lemmas are where all the work is, right?
MO: Exactly. It’s funny you mention combinatorics because this is actually in combinatorics. It’s called Burnside's Lemma. Yeah, so I love Burnside's Lemma a lot, so maybe I’ll give a little idea of what it is and give an idea in light of what you mentioned, which is that combinatorics can be quite hard. So I’ll start with a problem that’s hard, a combinatorial one that’s hard. So imagine you have a cube. A cube has six faces, right? And say you ask the naive question how many ways are there to paint the faces of the cube with colors red, green, and blue.
KK: Right.
MO: You think, there are six faces, and the top face is either red, or green, or blue, and for every choice of color I use there, another face is red or green or blue, etc. So the number of colorings should be 3x3x3x3x3x3, 3^6.
EL: Right.
MO: But then, you know, you can put a little bit of a twist on this. You can say, how many ways are there to do this if you consider two colorings to be the same if you take the cube and rotate it, take one coloring, rotate the cube, and get another coloring.
EL: Right. If you had the red face on the left side, it could be on the top, and that would be the same.
MO: One naive approach that people tend to think works when they first are faced with this, is they think, OK, there are 6 faces, so maybe I can permute things 6 ways, so I divide the total number by 6.
KK: Wrong.
MO: Exactly. There are a lot of reasons. One is sort of the empirical reason. You said the answer was 3^6 if we’re not caring about symmetry. If you divide that by 6, there’s a little bit of a problem, right? [Ed note: 3^6 = 729 is not even divisible by 6.]
EL: Yeah.
MO: You can kind of see. If you have a painting where all the faces are red, no matter how you rotate that, you’re going to end up with the same coloring. But as you mentioned, if you color one face red and the rest green, for instance, then you get six different colorings when you rotate this cube around. So you’ve got to do something a little bit different. And Burnside's lemma essentially gives you a nice quick way to approach this by looking at something that’s completely different but easy to calculate. And so this is sort of why I love it a lot. It’s a really, really cool theorem that you can sort of explain at a maybe discrete math kind of level if you’re teaching at a university.
KK: So the actual statement, let’s see if I can remember this. It’s something like the number of colorings would be something like 1 over the order of the group of rotations times the sum of what is it the number of elements in each orbit, or something like that? I used to know this pretty well, and I’ve forgotten it now.
MO: Yeah, so something like that. So a way to think about it is, you have your object, and it has a bunch of symmetries. So if you took a square and you were coloring, say, the edges, this is an analogous situation to the faces of the cube. A square has 8 symmetries. There are the four rotations, but then you can also flip along axes that go through opposite edges, and then axes that go through opposite vertices.
So what Burnside's lemma says is something like this. If you want to know the number of ways to color, up to this rotational symmetry, you can look at every single one of these symmetries that you have. In the square it’s 8, in the cube it turns out to be 24. And for every single symmetry, you ask yourself how many ways are there to color with the three colors you have where the coloring does not change under that symmetry.
KK: The number of things fixed, essentially, right.
MO: Exactly. The number of things fixed by the symmetries. So like I mentioned, the cube has 24 symmetries. So let’s take an example of one. Let’s say you put a rod through the center of two opposite faces of the cube.
KK: Right.
MO: And you rotate 90 degrees along that. So you’re thinking about the top face and the bottom face and just rotating 90 degrees. Let’s just think about the colorings that would remain unchanged by that symmetry. So you’re free to choose whatever you’d like for the top and bottom face. But all the side faces will have to have the same color. Because as soon as you rotate, whatever was on one side face is now rotated 90 degrees onto the next face as well. So if you count the number of colorings fixed by that rotation about the rod through the opposite faces, you get something like, well you have three choices for those side faces. As soon as you choose the color for one, you’re forced to use the same color for the rest. And then you have freedom in your top and bottom faces. So that’s just one of the symmetries. Now if you did that for every single symmetry and took the average of them, it turns out to be the number of ways to color the faces of the cube up to rotational symmetry in general.
So it’s kind of weird. There’s sort of two things that are going on. One is why in the world would looking at the symmetries and counting the number of colorings fixed under the symmetry have anything to do with the number of colorings in total up to symmetry in general? It’s not clear what the relationship there is at first. But the really cool part is that if you take every single symmetry and count the number of colorings it fixes, that’s a systematic thing you can do without having to think too hard. It’s a nice formula; you can get at the answer quite quickly even though it seems like a complicated thing that you’re doing.
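[Ed note: Here is a minimal computational sketch of the count just described, written in Python. The face labels and the two generating rotations are our own illustrative choices, not anything specified in the episode; the script builds the 24 rotations of the cube, applies Burnside’s average by brute force, and prints 57, the number of ways to color the six faces with three colors up to rotation.]

```python
from itertools import product

# Label the six faces 0..5: 0 = up, 1 = down, 2 = front, 3 = back, 4 = left, 5 = right.
# Two generating rotations, written as permutations of the faces (perm[i] = where face i goes).
SPIN_VERTICAL = (0, 1, 5, 4, 2, 3)   # quarter turn about the up-down axis
SPIN_SIDEWAYS = (2, 3, 1, 0, 4, 5)   # quarter turn about the left-right axis

def compose(p, q):
    """Permutation that applies q first, then p."""
    return tuple(p[q[i]] for i in range(6))

# Close the generators under composition to get the full rotation group of the cube.
group = {tuple(range(6))}
frontier = {SPIN_VERTICAL, SPIN_SIDEWAYS}
while frontier:
    group |= frontier
    frontier = {compose(p, q) for p in group for q in group} - group
assert len(group) == 24  # the cube has 24 rotational symmetries

# Burnside: average, over all symmetries, the number of colorings each symmetry fixes.
colors = 3
total_fixed = 0
for perm in group:
    for coloring in product(range(colors), repeat=6):
        if all(coloring[perm[i]] == coloring[i] for i in range(6)):
            total_fixed += 1

print(total_fixed // len(group))  # prints 57
```

Counting 3 to the power of the number of cycles of each permutation would give the same total without the inner loop; the brute force just keeps the sketch transparent.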
EL: Yeah. So I guess that naive way we were talking about to approach this where you just say, well I have three choices for this one, three choices for that one, you almost kind of look at it from the opposite side. Instead of thinking about how I’m painting things, I think about how I’m turning things. And then looking at it on a case by case basis rather than looking at the individual faces, maybe.
MO: Exactly. When I first saw this, I saw this as an undergrad, and I was like, “What?!” That was my initial reaction. It was a cool way to make some of this abstract math we were learning really come to life. And I could see what was happening in the mathematics physically, and that gave me a lot of intuition for a lot of the later things we were learning related to that theorem.
EL: Was that in a combinatorics class, or discrete math class?
MO: It was actually in a standalone combinatorics class that I learned this. And now another reason I really like this lemma is that I teach it in a discrete math course that I teach at Harvey Mudd, but then I revisit it in an abstract algebra course because really, you can prove this theorem using a theorem in abstract algebra called the orbit stabilizer theorem. So orbits are all of these different, you take one coloring, spin it around in all possible ways, you get a whole bunch of different ones, and stabilizers you can think of as taking one symmetry and asking what colorings are fixed under that symmetry. So that’s in our example what those two things are. In abstract algebra, there’s this orbit stabilizer theorem that has to do with more general objects: groups, like you mentioned. And then one of the things I really like about this theorem is that it sets the stage for even more advanced math like representation theory. I feel like a lot of the introductory concepts in a representation theory course really come back to things you play with in Burnside’s Lemma. It’s really cool in its versatility like that.
KK: That’s the context I know it in. So I haven’t taught group theory in 10 years or so, but it was in that course. Now I’m remembering all of this. It’s coming back. This is good. I’m glad we’re having this conversation. I’m with you. I think this is a really remarkable theorem. But I never took a combinatorics course that was deep enough where we got this far. I only know it from the groups acting on sets point of view, which is how you prove this thing, right? And as you say, it definitely leads into representation theory because you can build representations of your groups. You just take a basis for a vector space and let it act this way, and a lot of those character formulas really drop out of this.
MO: Exactly.
KK: Very cool.
EL: So it sounds like you did not have a hard time choosing your favorite theorem. This was really, you sound very excited about this theorem.
MO: The way I tried to think about what my favorite theorem was: what theorem do I constantly revisit in multiple different courses? If I do that, I must like it, right? And then I thought, hey, Burnside's Lemma is one that I teach in multiple courses because I like all the different perspectives that you can view it from. Then I had this thought: is Burnside's Lemma really a theorem?
KK: Yeah, it is.
MO: I felt justified for the following reason, which is that I think this lemma’s actually due to Frobenius, not Burnside. I thought, since the Burnside part is not really due to Burnside, then maybe the lemma part really is a theorem.
EL: I must say, Burnside sounds like it should be a Civil War general or something.
MO: Definitely.
EL: So what have you chosen to pair with your theorem?
MO: So I thought a chessboard marble cake would be perfect.
KK: Absolutely.
EL: OK!
MO: So first of all, I had a slice of one just about a few hours ago. It was my brother’s birthday recently, and I’m visiting family. There was leftover cake, and I indulged. But then I thought yeah, one of the prototypical problems when playing around with Burnside's Lemma is how many ways are there to color the cells of a chessboard up to rotational symmetry? So when I was eating the cake, I thought, hey, this is perfect!
EL: That’s great.
KK: How big of a chessboard was it?
MO: 8x8.
KK: Wow, that’s pretty remarkable.
MO: It was a big cake. I had a big piece.
KK: So when you sliced into it, was it 8x8 that way, or 8x8 across the top?
MO: Across the top.
KK: I’m sort of imagining, so my sister in law is a pastry chef, and she makes these remarkably interesting looking things, and it’s usually more like a 3x3, the standard if you go vertical.
EL: I’ve never tried to make a chessboard cake. I like to bake a lot, but anything that involves me being fussy about how something looks is just not for me. In baking. Eating I’m happy with.
MO: I’m the same. I really enjoy cooking a lot. I enjoy the cooking and the eating, not the design.
KK: Yeah, I’m right there with you. Well this has been fun. Thanks for joining us, Mohamed.
EL: Yeah.
MO: Thank you. This has been really enjoyable.
KK: Take care.
MO: Thank you.
[outro]
Evelyn Lamb: Welcome to My Favorite Theorem. I’m your host Evelyn Lamb. I’m a freelance math and science writer based in Salt Lake City. And today I am not joined by my cohost Kevin Knudson. Today I am solo for a very special episode of My Favorite Theorem because I am at MathFest, the annual summer meeting of the Mathematical Association of America. This year it’s in Chicago, a city I love. I lived here for a couple years, and it has been very fun to be back here with the big buildings and the lake and everything. There are about 2,000 other mathematicians here if I understand correctly. It’s a very busy few days with lots of talks to attend and friends to see, and I am very grateful that Ami Radunskaya has taken the time to record this podcast with me. So will you tell me a little bit about yourself?
Ami Radunskaya: Hi Evelyn. Thanks. I’m happy to be here at MathFest and talking to you. It’s a very fun conference for me. By way of introduction, I’m the current president of the Association for Women in Mathematics, and I’m a math professor at Pomona College in Claremont, which is a small liberal arts college in Los Angeles County. My Ph.D. was in ergodic theory, something I am going to talk about a little bit. I went to Stanford for my doctorate, and before that I was an undergraduate at Berkeley. So I grew up in Berkeley, and it was very hard to leave.
EL: Yeah. You fall in love with the Bay Area if you go there.
AR: It’s a place dear to my heart, but I was actually born in Chicago.
EL: Oh really?
AR: So I used to visit my grandparents here, and it brings back memories of the Museum of Science and Industry and all those cool exhibits, so I’m loving being back here.
EL: Yeah, we lived in Hyde Park when we were here, so yeah, the Museum of Science and Industry.
AR: I think I was born there, Hyde Park.
EL: Oh? Good for you.
AR: My dad was one of the first Ph.D.s in statistics from the University of Chicago.
EL: Oh, nice.
AR: Although he later became an economist.
EL: Cool. So, what is your favorite theorem?
AR: I’m thinking today my favorite theorem is the Birkhoff ergodic theorem. I like it because it’s a very visual theorem. Can I kind of explain to you what it is?
EL: Yeah.
AR: So I’m not sure if you know what ergodic means. I actually first went into the area because I thought it was such a cool word, ergodic.
EL: Yeah, it is a cool word.
AR: I found out it comes from the Greek word hodos, for path. So I’ve always loved the mathematics that describes change and structures evolving, so before I was a mathematician I was a professional cellist for about 10 years. Music and math are sort of as one in my mind, and that’s why I think I’m particularly attracted to the kinds of mathematics and the kinds of theory that describes how things change, what’s expected, what’s unexpected, what do we see coming out of a process, a dynamical process? So before I state the theorem, I need to tell you what ergodic means.
EL: Yeah.
AR: It’s an adjective. We’re talking about a function. We say a function is ergodic if it takes points: imagine you put a value into a function, you get out a new value. You put that value back in to the function, you get a new value. Repeat that over and over and over again, and now the function is ergodic if that set of points sort of visits everywhere in the space. So we say more technically a function is ergodic if the invariant sets, the sets it leaves alone, the sets that get mapped to themselves, are either the whole space or virtually nothing. A function is ergodic, a map is ergodic, if the invariant sets either have, we say, full measure or zero measure. So if you know anything about probability, it’s probability 1 or probability zero. I think that’s an easy way to think about measure.
EL: Yeah, and I think I’ve heard people describe ergodic as the time average is equal to the space average, so things are distributing very evenly when you look at long time scales. Is that right?
AR: Well that’s exactly the ergodic theorem. So that’s a theorem!
EL: Oh no!
AR: No, so that’s cool that you’ve heard of that. What I just said was that something is ergodic if the sets that it leaves unchanged are either everything or nothing, so these points, we call them the orbits, go everywhere around the set, but that doesn’t tell you how often they visit a particular piece of your space, whereas the ergodic theorem, so there are two versions of it. My favorite one is the one, they call it the pointwise ergodic theorem, because I think it’s easier to visualize. And it’s attributed to Birkhoff. So sometimes it’s called the Birkhoff ergodic theorem. And it’s exactly what you just said. So if you have an ergodic function, and then we start with a point and we sort of average it over many, many applications of the function, or iterations of the function, so that’s the time average. We think of applying this function once every time unit. The time average is the same as the integral of that function over the space. That’s the space average. So you can either take the function and see what it looks like over the entire space. And remember, that gives you, like, sizes of sets as well. So you might have your space, your function might be really big in the middle of the space, so when you integrate it over that piece, you’ll get a big hump. And it says that if I start iterating at any point, it’ll spend a lot more time in the space where the function is big. So the time average is equal to the space average. So that is the pointwise Birkhoff ergodic theorem. And I think it’s really cool because if you think about, say, if you’ve ever seen pictures of fractal attractors or something, so often these dynamical systems, these functions we’re looking at, are ergodic on their attractor. All the points get sucked into a certain subset, and then on that subset they stay on it forever and move around, so they’re ergodic on that attractor.
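[Ed note: In symbols, with notation of our own choosing, the pointwise (Birkhoff) ergodic theorem says that for an ergodic measure-preserving map T on a probability space (X, μ) and an integrable function f, the time average along almost every orbit equals the space average.]

```latex
\[
  \lim_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1} f\!\left(T^{k} x\right)
  \;=\; \int_{X} f \, d\mu
  \qquad \text{for } \mu\text{-almost every } x \in X .
\]
```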
EL: Yeah.
AR: So if we just, say, take a computer and start with a number and plug it in our function and keep iterating, or maybe it’s a two-dimensional vector, or maybe it’s even a little shape, and you start iterating, you see a pattern appear because that point is visiting that set in exactly the right amount. Certain parts are darker, certain parts are lighter, and it’s as if, I don’t know in the old days, before digital cameras, we would actually develop photographs. Imagine you put that blank page in the developing fluid, and you sort of see it gradually appear. And it’s just like that. The ergodic theorem gives us that magical appearance of these shapes of these attractors.
EL: Yeah. That’s a fun image. I’m almost imagining a Polaroid picture, where it slowly, you know, you see that coming out.
AR: It’s the same idea. If you want to think about it another way, you’re sort of experiencing this process. You are the point, and you’re going around in your life. If your life is ergodic, and a lot of time it is, it says that you’ll keep bumping into certain things more often than others. What are those things you’ll bump into more often? Well the things that have higher measure for you, have higher meaning.
EL: Yeah. That’s a beautiful way to think about it. You kind of choose what you’re doing, but you’re guided.
AR: I call it, one measure I put on my life is the fun factor.
EL: That’s a good one.
AR: If your fun factor is higher, you’ll go there more often.
EL: Yeah. It also says something like, if you know what you value, you can choose to live your life so that you do visit those places more. That’s a good lesson. Let the ergodic theorem guide you in your life.
OK, so what have you chosen to pair with this theorem?
AR: So the theorem has a lot of motion in it. A lot of motion, a lot of visualization. I think as far as music, it’s not so hard to think of an ergodic musical idea. Music is, after all, structures evolving through space.
EL: Exactly.
AR: I think I would pair Steve Reich’s Violin Phase. Do you know that piece?
EL: Yeah, yeah.
AR: So what it is, it’s a phrase on the violin, then you hear another copy of it playing at the same time. It’s a repetitive phrase, but one of them gets slightly out of phase with the other, and more and more and more and more. And what you hear are how those two combine in different ways as they get more and more and more and more out of phase. And if you think of that visually, you might think of rotating a circle bit by bit by bit, and in fact, we know irrational rotations of the circle are ergodic. You visit everywhere, so you hear all these different combinations of those patterns. So Steve Reich Violin Phase.
He has a lot of pattern music. Some of it is less ergodic, I mean, you only hear certain things together. But I think that continuous phase thing is pretty cool.
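[Ed note: A quick numerical sketch, in Python, of the irrational circle rotation just mentioned. The rotation amount, starting point, and test arc are arbitrary illustrative choices; the fraction of time the orbit spends in the arc should come out close to the arc’s length, as the ergodic theorem predicts.]

```python
import math

# Irrational rotation of the circle: x -> x + alpha (mod 1), with alpha irrational.
alpha = math.sqrt(2) - 1        # an irrational rotation amount
x = 0.123                       # an arbitrary starting point
a, b = 0.2, 0.5                 # a test arc of the circle, of length 0.3

n_steps = 200_000
hits = 0
for _ in range(n_steps):
    x = (x + alpha) % 1.0
    if a <= x < b:
        hits += 1

time_average = hits / n_steps   # fraction of time the orbit spends in the arc
space_average = b - a           # the measure (length) of the arc
print(time_average, space_average)  # the two numbers should nearly agree
```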
EL: Yeah. And I think I’ve heard it as Piano Phase more often than Violin Phase.
AR: It’s a different piece. He wrote a bazillion of them.
EL: Yeah, but I guess the same idea. I really like your circle analogy. I almost imagine, maybe the notes are gears sticking out of the circle, and they line up sometimes. Because even when it’s not completely back in phase, sometimes the notes are playing at the same time but at a different part of the phrase. They almost lock in together for a little while, and then turn a little bit more and get out again and then lock in again at a different point in the phrase. Yeah, that’s a really neat visual. Have you performed much Steve Reich music?
AR: I’ve performed some, mostly his ensemble pieces, which are really fun because you have to focus. One of my favorites of his is called Clapping Music because you can do it with just two people. It’s the same idea as the Violin Phase, but it’s a discrete shift each time, so a shift by an eighth note. So the pattern is [clapping].
One person claps that over and over and over, and the other person claps that same rhythm but shifts it by one eighth note each time. So since that pattern is 12 beats long, you come back to it after 12 beats. So it’s discretized. You do each one twice, so it’s 24, so it’s long enough.
EL: So that’s a non-ergodic one, a periodic transformation.
AR: Exactly. So that one I do a lot when I give talks about how we can describe mathematics with its musical manifestations, but we can also describe music mathematically.
EL: Just like you, music is one of my loves too. I played viola for a long time. I’ve never performed any Steve Reich, and I’m glad you didn’t ask me to spontaneously perform Clapping Music with you. I think that would be tough to do on the spot.
AR: We can do that offline.
EL: Yeah, we’ll do that once we hang up.
AR: As far as foods, I think there are some great pairings of foods with the ergodic theorem. In fact, I think we apply the ergodic theorem often in cooking. You know, you mix stuff up. So one thing I like to do sometimes is make noodles, with a roller thing.
EL: Oh, from scratch?
AR: Yeah. You just get some flour, get some eggs, or if you’re vegan, you get some water. That’s the ingredients. You mix it up and you put it through this roller thing, so you can imagine things are getting quite mixed up. What’s really cool, I don’t know if you’ve ever eaten something they call in Italy paglia e fieno, straw and hay.
EL: No.
AR: And all it is is pasta colored green, so they put a little spinach in one of them. So you’ve got white and green noodles. So when you cook some spinach, you’ve got your dough. You put some blobs of spinach in. You start mushing it around and cranking it through, and you see the blobs make these cool streaks, and the patterns are amazing, until it’s uniformly, more or less, green.
EL: Yeah.
AR: So I’d say, paglia e fieno, we put on some Steve Reich, and there you go.
EL: That’s great. A double pairing. I like it.
AR: You can think of a lot of other things.
EL: Yeah, but in the pasta, you can really see it, almost like taffy. When you see pulling taffy. You can almost see how it’s getting transformed.
AR: It’s getting all mushed around.
EL: Thank you so much for talking to me about the Birkhoff ergodic theorem. And I hope you have a good rest of MathFest.
AR: You too, Evelyn. Thank you.
[outro]
Kevin Knudson: Welcome to MFT. I'm Kevin Knudson, your host, professor of mathematics at the University of Florida. I am without my cohost Evelyn Lamb in this episode because I'm on location at the Banff International Research Station about a mile high in the Canadian Rockies, and this place is spectacular. If you ever get a chance to come here, for math or not, you should definitely make your way up here.
I'm joined by my longtime friend Justin Curry. Justin.
Justin Curry: Hey Kevin.
KK: Can you tell us a little about yourself?
JC: I'm Justin Curry. I'm a mathematician working in the area of applied topology. I'm finishing up a postdoc at Duke University and on my way to a professorship at U Albany, and that's part of the SUNY system.
KK: Congratulations.
JC: Thank you.
KK: Landing that first tenure-track job is always
JC: No easy feat.
KK: Especially these days. I know the answer to this already because we talked about it a bit ahead of time, but tell us about your favorite theorem.
JC: So the theorem I decided to choose was the classification of regular polyhedra into the five Platonic solids.
KK: Very cool.
JC: I really like this theorem for a lot of reasons. There are some very natural things that show up in one proof of it. You use Euler's theorem, that the Euler characteristic of things that look like the sphere is 2.
There's duality between some of the shapes, and also it appears when you classify finite subgroups of SO(3). You get the symmetry groups of each of the solids.
KK: Oh right. Are those the only finite subgroups of SO(3)?
JC: Well you also have the cyclic and dihedral groups.
KK: Well sure.
JC: They embed in it, but yes. The funny thing is they collapse, too, because dual solids have the same symmetry groups.
KK: Did the ancient Greeks know this, that these were the only five? I'm sure they suspected, but did they know?
JC: That's a good question. I don't know to what extent they had a proof that the only five regular polyhedra were the Platonic solids. But they definitely knew the list, and they knew they were special.
KK: Yes, because Archimedes had his solids. The Archimedean ones, you are allowed different polygons.
JC: That's right.
KK: But there's still this sort of regularity condition. I can never remember the actual definition, but there's like 13 of them, and then there's 5 Platonics. So you mentioned the proof involving the Euler characteristic, which is the one I had in mind. Can we maybe tell our listeners how that might go, at least roughly? We're not going to do a case analysis.
JC: Yeah. I mean, the proof is actually really simple. You know for a fact that vertices minus edges plus faces has to equal 2. Then when you take polyhedra constructed out of faces, those faces have a different number of edges. Think about a triangle, it has 3 edges, a square has 4 edges, a pentagon has 5. You just ask how many edges or faces meet at a given vertex. And you end up creating these two equations. One is something like if your faces have p sides, then p times the number of faces equals 2 times the number of edges.
KK: Yeah.
JC: Then you want to look at this condition of faces meeting at a given vertex. You end up getting the equation q times the number of vertices equals 2 times the number of edges. Then you plug that into Euler's theorem, V-E+F=2, and you end up getting very rigid counting. Only a few solutions work.
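[Ed note: The counting argument in symbols, as a sketch in our own notation: each face is a p-gon and q faces meet at each vertex, with p and q at least 3.]

```latex
\[
  pF = 2E, \qquad qV = 2E, \qquad V - E + F = 2
  \;\;\Longrightarrow\;\;
  \frac{2E}{q} - E + \frac{2E}{p} = 2
  \;\;\Longrightarrow\;\;
  \frac{1}{p} + \frac{1}{q} = \frac{1}{2} + \frac{1}{E} > \frac{1}{2} .
\]
% With p, q >= 3, the only solutions are
\[
  (p, q) \in \{(3,3),\ (4,3),\ (3,4),\ (5,3),\ (3,5)\}:
  \text{ tetrahedron, cube, octahedron, dodecahedron, icosahedron.}
\]
```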
KK: And of course you can't get anything bigger than pentagons because you end up in hyperbolic space.
JC: Oh yeah, that's right.
KK: You can certainly do this, you can make a torus. I've done this with origami, you sort of do this modular thing. You can make tori with decagons and octagons and things like that. But once you get to hexagons, you introduce negative curvature. Well, flat for hexagons.
JC: That's one of the reasons I love this theorem. It quickly introduces and intersects with so many higher branches of mathematics.
KK: Right. So are there other proofs, do you know?
JC: So I don't know of any other proofs.
KK: That's the one I thought of too, so I was wondering if there was some other slick proof.
JC: So I was initially thinking of the finite subgroups of SO(3). Again, this kind of fails to distinguish the dual ones. But you do pick out these special symmetry groups. You can ask what are these symmetries of, and you can start coming up with polyhedra.
KK: Sure, sure. Maybe we should remind our readers about (readers, I read too much on the internet) our listeners about duality. Can you explain how you get the dual of a polyhedral surface?
JC: Yeah, it's really simple and beautiful. Let's start with something, imagine you have a cube in your mind. Take the center of every face and put a vertex in. If you have the cube, you have six sides. So this dual, this thing we're constructing, has six vertices. If you connect edges according to when there was an edge in the original solid, then you end up having faces corresponding to vertices in the original solid. You can quickly imagine you have this sort of jewel growing inside of a cube. That ends up being the octahedron.
KK: You join two vertices when the corresponding dual faces meet along an edge. So the cube has the octahedron as its dual. Then there's the icosahedron and the dodecahedron. The icosahedron has 20 triangular faces, and the dodecahedron has 12 pentagonal faces. When you do the vertex counts on all of that you see that those two things are dual.
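[Ed note: Duality swaps vertices and faces while keeping the number of edges, so the counts for these dual pairs line up as follows.]

```latex
\[
  \text{cube: } (V, E, F) = (8, 12, 6)
  \;\longleftrightarrow\;
  \text{octahedron: } (6, 12, 8),
\]
\[
  \text{dodecahedron: } (V, E, F) = (20, 30, 12)
  \;\longleftrightarrow\;
  \text{icosahedron: } (12, 30, 20).
\]
```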
Then there's the tetrahedron, the fifth one. You say, wait a minute, what's its dual?
JC: Yeah, and well it's self-dual.
KK: It's self-dual. Self-dual is a nice thing to think about. There are other things that are self-dual that aren't Platonic solids of course. It's this nice philosophical concept.
JC: Exactly.
KK: You sort of have two sides to your personality. We all have this weird duality. Are we self-dual?
JC: I almost like to think of them as partners. The cube determines, without even knowing about it, its soulmate the octahedron. The dodecahedron without knowing it determines its soulmate the icosahedron. And well, the tetrahedron is in love with itself.
KK: This sounds like an algorithm for match.com.
JC: Exactly.
KK: I can just see this now. They ask a question, “Choose a solid.” Maybe they leave out the tetrahedron?
JC: Yeah, who knows?
KK: You don't want to date yourself.
JC: Maybe you do?
KK: Right, yeah. On our show we like to ask our guests to pair their theorem with something.
JC: It's a little lame in that it's sort of obvious, but Platonic solids get their name from Plato's Timaeus. It's his description of how the world came to be, his account of cosmogony. In that text he describes an association of every Platonic solid with an element. The cube corresponds to the element earth. You want to think about why would that be the case? Well, the cube can tessellate three-space, and it's very stable. And Earth is supposed to be very stable, and unshakeable in a sense. I don't know if Plato actually knew about duality, but the dual solid to the cube is the octahedron, which he associated with air. So you have this earth-sky symbolic dualism as well.
Then unfortunately I think this kind of analogy starts to break down a bit. You have the icosahedron, the one made of triangle sides. This is associated to water. And if you look at it, this one sort of looks like a drop of water. You can imagine it rolling around and being fluid. But it's dual to the dodecahedron, this oddball shape. They only thought of four elements: earth, fire, wind, water. What do you do with this fifth one? Well that was for him ether.
KK: So the tetrahedron is fire?
JC: Yeah, the tetrahedron is fire.
KK: Because it's so pointy?
JC: Exactly.
KK: It's sort of rough and raw, or that They Might Be Giants Song “Triangle Man.” It's the pointiest one. Triangle wins every time.
JC: The other thing I like is that fire needs air to breathe. And if you put tetrahedra and octahedra together, they tessellate 3-space.
KK: So did they know that?
JC: I don't know. That's why this is fun to speculate about. They obviously had an understanding. It's unclear what was the depth or rigor, but they definitely knew something.
KK: Sure.
JC: We've known this for thousands of years.
KK: And these models, are they medieval, was it Ptolemy or somebody, with the nested?
JC: The way the solar system works.
KK: Nested Platonic solids. These things are endlessly fascinating. I like making all of them out of origami, out of various things. You can do them all with business cards, except the dodecahedron.
JC: OK.
KK: It's hard to make pentagons. You can take these business cards and you can make these. Cubes are easy. The other ones are all triangular faces, and you can make these triangular modules where you make two triangles out of business cards with a couple of flaps. And two of them will give you a tetrahedron. Four of them will give you an octahedron. The icosahedron is tricky because you need, what, 10 business cards. I have one on my desk. It's been there for 10 years. It's very stable once it's together, but you have to use tape along the way and then take the tape off. It's great fun. There's this great book by Thomas Hull, I forgot the name of it [Ed note: it's called Project Origami: Activities for Exploring Mathematics], a great origami book by Thomas Hull. I certainly recommend all of that.
Anything else you want to add? Anything else you want to tell us about these things? You have all these things tattooed on your body, so you must be
JC: I definitely feel pretty passionate. It's one of those things, if I have to live with this for 30 years, I'll know the Platonic solid won't change. There won't be suddenly a new one discovered.
KK: Right. It's not like someone's name, you might regret it later. But my tattoos are, this is man, woman, and son. My wife and I just had our 25th anniversary, so this is still good. I don't expect to have to get rid of that.
Anyway, well thanks, Justin. This has been great fun. Thanks for taking a few minutes out of your busy schedule. This is a really cool conference, by the way.
JC: I love it. We're bringing together some of the brightest minds in applied topology, and outside of applied topology, to see how topology can inform data science, how algebra interacts in this area, and what new foundations we need.
KK: Yeah, it's very cool. Thanks again, and good luck in your new job.
JC: Thanks, Kevin.
[outro]
Evelyn Lamb: Welcome to My Favorite Theorem, the show where we ask mathematicians what their favorite theorem is. I’m your host Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City, Utah. And this is your other host.
Kevin Knudson: Hi, I’m Kevin Knudson, professor of mathematics at the University of Florida. I had to wear a sweater yesterday.
EL: Oh my goodness! Yeah, I’ve had to wear a sweater for about a month and a half, so.
KK: Yeah, yeah, yeah.
EL: Maybe not quite that long.
KK: Well, it’ll be hot again tomorrow.
EL: Yeah. So today we’re very glad to have our guest Henry Fowler on. Henry, would you like to tell us a little bit about yourself?
Henry Fowler: I’m a Navajo Indian. I live on the Navajo reservation. I live by the Four Corners in a community, Tsaile, Arizona. It’s a small, rural area. We have a tribal college here on the Navajo Nation, and that’s what I work for, Diné College. I’m a math faculty member. I’m also the chair of the math, physics, and technology department. And my clans in Navajo: my maternal clan is Bitterwater and my paternal clan is Zuni Edge Water.
EL: Yeah, and we met at the SACNAS conference just a couple weeks ago in Salt Lake City, and you gave a really moving keynote address there. You talked a little bit about how you’re involved with the Navajo Math Circles.
HF: Yes. I’m passionate about promoting math education for my people, the Navajo people.
EL: Can you tell us a little bit about the Navajo Math Circles?
HF: The Navajo Math Circles started seven years ago with a mathematician from San Jose State University, and her name is Tatiana Shubin. She contacted me by email, and she wanted to introduce some projects that she was working on, and one of the projects was math circles, which is a collection of mathematicians that come together, and they integrate their way of mathematical thinking for grades K-12 working with students and teachers. She and I got together, and we discussed one of the projects she was doing, which was math circles. And it was going to be here on the Navajo Nation, so we called it Navajo Math Circles. Through her project and myself here living on the Navajo Nation, we started the Navajo Math Circles.
KK: How many students are involved?
HF: We started first here at Diné College with a math summer camp, where we sent out applications, and these were for students who had a desire to engage themselves in studying mathematics, and it was overwhelming. Over 50 students applied for only 30 slots that were open because our grant could only sustain 30 students. So we screened the students with the help of their regular teachers from junior high or high school, so they had recommendation letters that were also presented to us. So we selected the first 30 students. Following that we expanded our math circle to the Navajo Nation public school system, and there’s also contract schools and grant schools. Now we’re serving, I would say, over 1,000 students.
KK: Wow. That’s great. I assume these students have gone on to do pretty interesting things once they finish high school and the circle.
HF: Yes. We sort of strategized. We wanted to work with lower grades a little bit. We wanted to really promote a different way of thinking about math problems. We started off with the first summer math camp at the junior high or the middle school level, and also the students that were barely moving to high school, their freshman year or their 10th grade year. That cohort, the one that we started off with, they have a good rate of doing very well with their academic work, especially in math, at their high school and junior high school. We have four that have graduated recently from high school, and all four of them are now attending a university.
KK: That’s great.
EL: And some of our listeners may have seen there’s a documentary about Navajo math circles that has played on PBS some, and we’ll include a link to that for people to learn a little bit about that in the show notes for the episode. We invited you here to My Favorite Theorem, of course, because we like to hear about what theorems mathematicians enjoy. So what have you selected as your favorite theorem?
HF: I have quite a few of them, but something that is simple, something that has been a source of awe for mathematicians, the most famous theorem would be the Pythagorean theorem because it also relates to my cultural practices, to the Navajo.
KK: Really?
HF: The Pythagorean theorem is also how Navajo would construct their traditional home. We would call it a Navajo hogan. The Navajo would use the Pythagorean theorem charting how the sun travels in the sky, so they would open their hogan door, which is always constructed facing east. So once the sun comes out, it projects its energy, the light, into the hogan. The Navajo began to study that phenomenon, how that light travels in space in the hogan. They can predict the solstice, the equinox. They can project how the constellations are moving in the sky, so that’s just a little example.
EL: Oh, yeah. Mathematicians, we call it the Pythagorean theorem, but like many things in math, it’s not named after the first person ever to notice this relationship. The Pythagorean theorem is a² + b² = c², the relationship between the lengths of the legs of a right triangle and the hypotenuse of a right triangle, but it was known in many civilizations before, well before Pythagoras was born, a long time ago. In China, India, the Middle East, and in North America as well.
HF: Yes, Navajo, we believe in a circle of life. There’s time that we go through our process of life and go back to the end of our circle, and it’s always about giving back, that’s our main cultural teaching, to give back as much as you can, back to the people, back to nature, back to your community, as well as what you want to promote, what you’re passionate about, to give back to the people that way. Our way is always interacting with circles, that phenomenon, and how the Navajo see the relationship to space, the relationship to sunlight, how it travels, how they capture it in their hogan. Also they can relate it to defining distance, how they relate the Pythagorean theorem to distance as well as to a circle.
KK: What shape is the hogan? I’m sort of curious now. When the light comes in, what sort of shadows does it cast?
HF: The Navajo hogan is normally a nine-sided polygon, but the Navajo can also capture what a circle means with regular polygons: the more sides they have, the closer they come to a circle. They understand that event, they understand that phenomenon. The nine sides are in relationship to when a child is conceived and then delivered, it’s nine months. The Navajo call it nine full moons because they only capture what’s going on within their environment, and they’re really observant to how the sky and constellations are moving. Their monthly calendar is by full moon. And so that’s how, when the light travels in, when they open that hogan door, it’s like a semi-circle. In that space they feel like they are also secure and safe, and that hogan is also a representation that they are the child of Mother Earth, and that they are the child of Father Sky. And so that hogan is structured in relationship to a mother’s womb, when a child is being conceived and that development begins to happen. Navajos say that the hogan is a structure where in relationship there are four seasons, four directions, and then there are four developments that happen until you enter old age. There will be the time of your birth, the time when you become an adult, mid-life, and eventually old age. So using that concept, when that door is open, they harvest that sunlight when it comes in. Now we are moving to the state of winter solstice. That happens, to western thinking, around December 22. To the Navajo, that would be the 13th full moon, so when that light comes in that day, it will be a repeated event. They will know where. When the light comes into the hogan, when the door is opened, it will project on the wall of the hogan. When it projects on that wall, they mark it off. Every time, each full moon, they capture that light to see where it hits on the wall. That’s how they understand the equinox, that’s how they understand the solstice, in relationship to how the light is happening.
KK: Wow, that’s more than my house can do.
HF: Then they also use a wood stove to heat the hogan. There’s an opening at the center of the hogan, they call the chimney. They capture that sunlight, and they do every full moon. Sometimes they do it at the middle of that calendar, they can even divide that calendar into quarters. When they divide it into quarters, they chart that light as it comes in through the chimney. They find out that the sun travels through the sky in a figure eight in one whole year. They understand that phenomenon too.
KK: Ancient mathematics was all about astronomy, right? Every culture has tried to figure this out, and this is a really ingenious solution, with the chimney and the light. That’s really very cool.
HF: The practice is beginning not to be learned by our next generation because now our homes are more standardized. We’re moving away from that traditional hogan. Our students and our young people are beginning not to interact with how that light travels in the hogan space.
KK: Did you live in a hogan growing up?
HF: Yes. People around probably my age, that was how they were raised, was in a traditional hogan. And that was home for us, that construction. Use the land, use nature to construct your home with whatever is nearby. That’s how you create your home. Now everything is standardized in relation to different building codes.
EL: So what have you chosen to pair with your theorem?
HF: I guess I pair my Pythagorean theorem with my identity, who I am, as a Navajo person. I really value my identity, who I am as an indigenous person. I’m very proud of my culture, my land, where I come from, my language, as well as what I know of the ancient knowledge of my ancestors, and I always respect my Navajo elders.
KK: Very cool. Do you think that living in a hogan, growing up in a hogan, did that affect you mathematically? Do you think it sort of made you want to be a mathematician? Were you aware of it?
HF: I believe so. We did a lot of our own construction, nothing so much that would be store-bought. If you want to play with toys, you’d have to create that toy on your own. So that spatial thinking, driving our animals from different locations to different spots, and then bringing our sheep back at a certain time. You’d calculate this distance, you’d estimate distance. You’d do a lot of different relationships interacting with nature, how it releases patterns. You’d get to know the patterns, and the number sense, the relationships. I really, truly believe that my culture gave me that background to engage myself to study mathematics.
KK: Wow.
EL: Yeah, and now you’re making sure that you can pass on that knowledge and that love for mathematics to younger people from your community as well.
HF: That’s my whole passion, is to strengthen our math education for my Navajo people. Our Navajo reservation is as large as West Virginia.
EL: Oh, wow, I didn’t realize that.
HF: And there’s no leader that has stood up to say, “I’m going to promote math education.” Right now, in my people, I’m one of the leaders in promoting math education. It’s strengthening our math K-12 so we build our infrastructure, we build our economy, we build better lives for my Navajo people, and that we build our own scientists, we build our own doctors and nurses, and we want to promote our own students, to show interests or take the passion and have careers in STEM fields. We want to build our own Navajo professors, Navajo scholars, Navajo researchers. That all comes down to math education. If we strengthen the education, we can say we are a sovereign nation, a sovereign tribe, where we can begin to build our own nation using our own people to build that nation.
EL: Wow, that’s really important work, and I hope our listeners will go and learn a little bit more about the Navajo math circles and the work you do, and other teachers and everyone are doing there.
HF: It’s wonderful because we have so many social ills, social problems among my people. There’s so much poverty here. We have near 50 percent unemployment. And we want my people to have the same access to opportunity just like any other state out there. And the way, from my perspective, is to promote math education, to bring social justice and to have access to a fair education for my people. And it’s time that the Navajo people operate their own school system with their own indigenous view, create our own curriculum, create our own math curriculum, and standardize our math curriculum in line to our elders’ thinking, to our culture, to our language, and that’s just all for my Navajo people to understand their self-identity, so they truly know who they are, so they become better people, and they get that strength, so that that motivation comes. To me, that’s what my work is all about, to help my people as a way to combat the social problems that we’re having. I really believe that math kept me out of problems when I was growing up. I could have easily joined a gang group. I would not have finished my education, my western education, but math kept me out of problems, out of trouble growing up.
KK: You’re an inspiration. I feel like I’m slacking. I need to do something here.
EL: Yeah. Thank you so, so much for being on the podcast with us. I really enjoyed talking with you.
KK: Yeah, this was great, Henry. Thank you.
HF: You’re welcome.
[outro]
EL: Welcome to My Favorite Theorem. I’m your host Evelyn Lamb. I’m a freelance math and science writer based in Paris for a few more days, but after that I’ll be based in Salt Lake City, Utah. And this is your other host.
KK: Hi, I’m Kevin Knudson, professor of mathematics at the University of Florida, where it’s raining. It’s been raining for a week. After a spring of no rain, now it’s raining. But that’s OK.
EL: I probably shouldn’t tell you that it’s absolutely gorgeous, sunny and 75 degrees in Paris right now.
KK: You really shouldn’t.
EL: OK, then I won’t. OK. Each episode we invite a mathematician on to find out about their favorite theorem. Today we’re very happy to welcome Eriko Hironaka onto the show. So would you like to tell us a little bit about yourself, Eriko?
EH: Yes, thank you, first of all, for having me on this show. It’s very flattering and exciting. I worked at Florida State University for almost twenty years. I was a professor there, and I recently moved to the American Mathematical Society. I’ve been working there for two years. One year, I guess, full time, so far. I work in the book program. I’m somebody who is a mathematician but is doing it from various angles.
EL: Yeah, I was really interested in having you on the podcast because I think that’s a cool perspective to have where you’ve been in the research world for a long time, but now you’re also seeing a broader view, maybe, of math, or kind of looking at it from a different angle than before. Do you mind telling us a little bit about what you do as a book person for the AMS?
EH: Yeah, what do I do? Actually I was thrown into this job, in a way. They said, OK, you’re going to work in the book program. Your job is basically to talk to people about books, and see if anybody wants to write a book, and if they do, you keep talking with them, and when they finally submit something, you prepare, maybe the real job part is to, once there’s a submission, start it through a review process, then also what’s kind of exciting is to convince the publishing group to actually publish the book. That part requires me to think about how this book fits into mathematics and mathematical literature and then also how much it’ll cost to produce the book and what’s involved in selling the book. Who is the audience and how can it be presented in the best possible way? I think of myself as sort of the connector between the author, who is thinking about the mathematics, and the publishers, who are thinking about the practical side: the AMS is a nonprofit, but it still has to cover costs and make this a reasonable project.
EL: You see a lot of different aspects of this, then.
EH: Yeah, so I don’t know if I was more naive than most mathematicians, but I think most mathematicians don’t think beyond proving theorems and conveying and communicating their ideas to other people. Maybe they also think about what to write on their vita, and things like that. That kind of thing is very different. Right now I don’t really have a reason to keep up my vita in the same way that I used to. That was a big change for me.
KK: Right.
EH: I still do mathematics, I still do research, give talks, and things like that. I still write papers. But that’s really become something just for me. Not for me, it’s for math, I guess. But it’s not for an institute.
KK: It’s not for the dean.
EH: It’s not for the dean. Exactly.
KK: That’s really liberating, I would think, right?
EH: It’s super liberating, actually. It’s really great.
KK: Very cool. I dream about that. One of these days.
EH: I feel like I’m supporting mathematics kind of from the background. Now I think about professors as being on the battlefield. They’re directly communicating with people, with students, with individuals. And working with the deans. Making their curriculum and program and everything work.
EL: So what have you chosen as your favorite theorem?
EH: OK, Well, I thought about that question. It’s very interesting. I’ve even asked other people, just to get their reaction. It’s a very interesting question, and I’m curious to know what other people have answered on your podcast. When I think of a theorem I think about not just the statement, but more the proof. I might think of proofs I like, theorems whose proofs I like, or I might think about how this theorem helped me because I really needed something. It’s actually kind of utilitarian, but a favorite theorem should be more like what made you feel great. I have to say for that, it’s a theorem of my own.
KK: Cool, great.
EH: So I have a favorite theorem, or the theorem that made me so excited, and it was the first theorem I ever proved. The reason it’s my favorite theorem is because of a mixture not just of feeling great that I’d proved the theorem but also feeling like it was a big turning point in my life. I felt like I had proved myself in a way. That’s this theorem; I think of it as a polynomial periodicity theorem. Do you want me to say what the theorem says?
KK: Yeah, throw it out there, and we’ll unpack it.
EH: So the theorem, in most generality, says that if you have a finite CW complex, a sort of nice space (in my case I was looking at quasiprojective varieties, but any kind of reasonably nice space), you can take a sequence of regular coverings corresponding to a choice of map from the fundamental group of the space to some, say, free abelian group. The way you get the sequence of coverings is you take that map and compose it with the map from that free abelian group to the free abelian group tensored with Z mod n. So if everything is finitely generated, that gives you a surjective map from the fundamental group of your space to a finite abelian group. And now the general theory of covering spaces gives you a sequence of finite coverings of your space. And then, if your space has a natural completion, you can talk about the natural branched coverings associated to those for each n. My theorem was about what happens to the first Betti numbers of these things, the ranks of the first homology of these coverings. I showed that this sequence actually has a pattern. In fact, for every fixed base space and map from the fundamental group of the base space to a free abelian group, there is a polynomial with possibly periodically changing coefficients so that the first Betti number is that polynomial evaluated at n.
KK: Wow.
EH: So n is the degree of the covering. The Betti numbers are changing periodically, the polynomials are changing periodically, but it’s a pattern, it’s a nice pattern, and there’s a single polynomial telling you what all of these Betti numbers are doing.
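For readers who want the statement in symbols, here is a rough paraphrase of the result as Hironaka describes it above; the notation is ours, not the episode’s. Fix a nice space X and a surjection φ from π_1(X) onto a free abelian group Z^k, and let X_n be the finite covering determined by composing φ with reduction mod n, that is, with the map π_1(X) → (Z/n)^k. Then there are a period N and polynomials P_0 through P_(N-1) such that the first Betti number satisfies b_1(X_n) = P_(n mod N)(n) for every n, and the analogous statement holds for the associated branched coverings when X has a natural completion.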
EL: So what was the motivation behind this theorem?
EH: This problem of understanding the first Betti number of coverings comes from work of Zariski back in the early 1900s. His goal was to understand moduli of plane curves with various kinds of singularities. Simply put, what he did was he tried to distinguish curves by looking at topology, blending topology with algebraic geometry. This was kind of a new idea. This is not very well known about Zariski, but one of his innovations was bringing in topology to the study of algebraic geometry.
KK: That’s why it’s called the Zariski topology, right? I don’t know. One assumes.
EH: In a way. Not really!
KK: I’m not a historian. My bad.
EH: He brought geometric topology in. The Zariski topology is more of an algebraic definition. What he was interested in, when you’re talking about moduli of plane curves, is whether or not you can get from one plane curve with prescribed singularities to another. Say you have a sextic curve, a degree six curve, in C^2, the plane over the complex numbers, with exactly six simple cusps. Six points in the plane can either lie on a conic or not. General position means they don’t lie on lines and don’t lie on a conic. But if the six cusps lie on a conic, it turns out you cannot move within the space of sextics with six cusps to a sextic whose six cusps do not lie on a conic.
KK: OK
EH: They’re two distinct families. You’d have to leave that family to get from one to another. You can’t deform them in the algebraic category. To prove this, he said, well, basically, even though the idea of fundamental groups and studying fundamental groups was really new still and was just starting to be considered a tool for knot theory, for example, that came a little bit later. But he said, you can tell they’re different because their topology is different. For example, take coverings. Take your curve, and say it’s given by the equation F(x,y)=0. So F(x,y) is a polynomial. Take the polynomial z^n=F(x,y). You get a surface in three-dimensional space, and now look at the first Betti number of that. So the first Betti number, the first homology, can kind of be described algebraically in terms of other things, divisors and things like that. You can think of it as a very algebraic invariant, but you can also think of it as a topological invariant. Forget algebra, forget complex analysis, forget everything. And he showed that if you take the sextics with six cusps and you took z^n=F(x,y), you get things with first Betti number nontrivial, and by the way, periodically changing with n. In fact, when 6 divides n, it’s nontrivial. It jumps. Every time 6 divides n, it jumps. Otherwise I can’t remember, I think it’s zero. But in the case that the cusps are in general position, the first Betti numbers are always zero.
KK: OK.
EH: So that must mean that the topology is different. And if the topology is different, they can’t be algebraically equivalent. So that was the process of thinking, that topology can tell you something about algebraic geometry. And that kind of topology is what geometric topologists study now, fundamental groups, etc. But this was all a very new idea.
EL: So that’s kind of the environment that your theorem lives in, this intersection between topology and algebraic geometry.
EH: That’s right. So my theorem was actually conjectured by Sarnak. I was working on fundamental groups of complements of plane curves, especially curves with multiple components, for my thesis. And Peter Sarnak was looking at certain problems coming from number theory that had to do with arithmetic subgroups of GL(n), and at what happens when you take your fields to be finite, things like that. Somehow in his work, something coming from number theory, he wondered about this, and hearing about what I was doing with fundamental groups and Alexander polynomials, which have to do with Betti numbers of coverings, he asked, “Can you show that the Betti numbers of coverings are periodic, or polynomial periodic?”, which is that other thing. I thought, OK, I’ll do this, and since I was already working topologically, I could get the topological part by looking at the unbranched coverings, and then I had to complete it. To understand the completion, the difference between the Betti numbers of the unbranched coverings and the Betti numbers of the branched coverings, I needed to understand intersections of curves on the surface, to sort of understand intersection theory of algebraic curves. And these have very special properties, nice properties coming from the fact that we’re talking about varieties.
KK: Right.
EH: And I used that to complete the proof. It was a real blend of topology and algebraic geometry. That’s what made it really fun.
KK: That’s a lot of mathematics going in. And I love your confidence. Peter Sarnak said, “Hey, can you do this?” and you said, “Yeah, I can do this.”
EH: Right, well I was feeling pretty desperate. It was really a time of: Should I do math? Should I not do math? Do I belong here? And I thought, “OK, I’ll try this. If it works, maybe that’s a sign.”
EL: So what have you chosen to pair with this theorem?
EH: As it happens, after I proved this theorem and I showed it to Sarnak, I basically wrote a three-page outline of the proof. I showed it to him, and he looked at it carefully and said, “Yeah, this looks right.” Also, you know, you can feel it when you have it. Suddenly everything has become so clear. I was glowing with this and driving from Stanford to Berkeley, which is about an hour drive, and I usually took a nicer route through the hills to the west, so you can imagine driving with these vales and woods, and it was beautiful sunshine and everything, and I had Stravinsky’s Firebird Suite playing. It just perfectly represented what it feels like to prove a theorem. It starts really quiet, and then it gets really choppy and frenzied.
KK: And scary.
EH: Exactly, scary. The struggling bird, he’s anxious and frightened, really, really unsettling. And then there’s this real gentleness, feeling like it’s going to be OK, it’s going to be OK. But that also is a bit disturbing. There’s something about it that’s disturbing. So it keeps you listening, even though it’s very sweet and the themes are developed, it’s a very beautiful theme. Then there’s this bang and then it becomes really frenzied again, super frenzied, but excited. And then it becomes bolder and bolder. And then that melody comes in, and it starts to really come together. And it starts to feel like you’re running, like there’s a direction, and then finally it gets quiet again. There’s this serenity. And this time the serenity is real. All this stuff has built up to it, and that starts to build and the beautiful theme comes out in the end. It’s just this glorious wonder at the very end. It was like all my excitement was just exemplified in this piece of music.
EL: I love that picture of you driving through California, blasting Firebird.
EH: Yes, exactly.
EL: With this triumphant proof that you’d just done. That’s really a great picture.
KK: So my son just finished high school, and he wants to be a composer. He’s going to go to college and study composition. And I actually sort of credit that piece, Firebird suite, as one of the pieces that really motivated him to become a composer. That and Rhapsody in Blue.
EH: It really tells a story.
KK: Yes, it does. It’s really spectacular. So I think maybe a lot of our listeners don’t know that you have a rather famous father.
EH: Yes.
KK: Your father won the Fields medal for proving resolution of singularities in characteristic zero, right?
EH: Yes.
KK: What was that like?
EH: Yeah, so I had a really strange relationship with mathematics. Because I grew up with a mathematician father, I avoided math like the plague, you know. Partly because my father was a mathematician, and I thought that was kind of strange, that it didn’t fit in with the rest of the world that I knew. I grew up in the suburbs. It wasn’t a particularly intellectual background. For me, the challenge to my life was to figure out how to fit in, which I was failing at miserably. But I thought that was my challenge. Doing well in math was not the way to fit in in school. I would kind of deliberately add in mistakes to make sure that I didn’t get a good grade.
KK: Really? Wow.
EH: I would kick myself if I forgot and I would get a high grade and everybody would say, “How did she do that?” You know what I mean? I thought of math as this embarrassment, in a way, to tell the truth, strangely enough. But on the other hand, through my father and his friends and colleagues, I knew that mathematics also had this very beautiful side, and the people who did it were very happy people, it seemed. I saw that other side as well. And I think that was an advantage because I knew that math was really cool. It’s just that that wasn’t my thing. I didn’t want to do that. Also, my teachers were not very exciting. The math teachers seemed to make math as boring as possible. So I had this kind of split personality when it came to math, or split feeling about what math was.
EL: Yeah.
EH: But then when I started to do math, I started somehow accidentally to do math in college, and I actually got much more attracted to it. It was after vaguely stumbling through calculus and things like that. So I never really learned calculus, but I started skipping through calculus, and I took more advanced classes, and it just really clicked, and I got hooked. I learned calculus in graduate school, as some people do, by teaching it.
KK: Well that’s when you really learn it anyway, that’s right.
EH: Some people have this impression that lots of mathematicians had the advantage of having access to mathematics from a young age, but I think it’s not obvious how that’s an advantage. In some cases it could be that they were nurtured in mathematics. I mean, I talk to my kids about mathematics, and it’s a fun thing we do together. But I don’t think that’s necessarily the case of people with mathematical parents. In my case it certainly wasn’t the case for me. But still it was an advantage because I knew that there was this thing called mathematics, and many people don’t know that.
EL: Yeah. And, like you said, you knew that mathematicians were happy with their work, and just even knowing that there’s still math to prove. That was something, when I started to do math, I didn’t really understand that there was still more math to do, it wasn’t just learning calculus really well. But going and finding and exploring these new things.
KK: I had that same experience. I remember when I was in high school, telling people I was going to go to graduate school and be a math professor, and they said, “Well, what do you do?” I said, “I don’t know, I guess you write another calculus book.” Which we certainly do not need, right?
EH: Or we need different kinds.
KK: So, I say that, but I’m actually writing one, so you know.
EH: Oh, are you?
KK: Just in my spare time, right? I have so much of it these days.
EH: I think there is a need for calculus books, it’s just maybe different kinds.
KK: Well now that I know someone at the publishing house at the AMS…
EH: Absolutely. I’m going to follow up on this.
KK: Oh, wow. Well this has been fun.
EL: Yeah, thank you so much.
EH: Well thank you for asking me. It gave me a chance to think about different things, and it’s been fun talking to people about, “What’s your favorite theorem?”
EL: Good math conversation starter.
EH: Yeah, absolutely.
KK: Thanks for joining us, Eriko.
EH: Thank you.
KK: Thanks for listening to My Favorite Theorem, hosted by Kevin Knudson and Evelyn Lamb. The music you’re hearing is a piece called Fractalia, a percussion quartet performed by four high school students from Gainesville, Florida. They are Blake Crawford, Gus Knudson, Del Mitchell, and Bao-xian Lin. You can find more information about the mathematicians and theorems featured in this podcast, along with other delightful mathematical treats, at Kevin’s website, kpknudson.com, and Evelyn’s blog, Roots of Unity, on the Scientific American blog network. We love to hear from our listeners, so please drop us a line at [email protected]. Or you can find us on Facebook and Twitter. Kevin’s handle on Twitter is @niveknosdunk, and Evelyn’s is @evelynjlamb. The show itself also has a Twitter feed. The handle is @myfavethm. Join us next time to learn another fascinating piece of mathematics.
This transcript is provided as a courtesy and may contain errors.
Evelyn Lamb: Hello and welcome to My Favorite Theorem. I’m your host Evelyn Lamb. I’m a freelance math and science writer based in Salt Lake City, but I’m currently recording in Chicago at the Mathematical Association of America’s annual summer meeting MathFest. Because I am on location here, I am not joined by our cohost Kevin Knudson, but I’m very honored to be in the same room as today’s guest, Dusa McDuff. I’m very grateful she took the time to talk with me today because she’s pretty busy at this meeting. She’s been giving the Hedrick Lecture Series and organizing some research talk sessions. So I’m very grateful that she can be here. The introductions at these talks have been very long and full of honors and accomplishments, and I’m not going to try to go through all that, but maybe you can just tell us a little bit about yourself.
Dusa McDuff: OK. Well, I’m British, originally. I was born in London and grew up in Edinburgh, where I spent the first twenty years or so of my life. I was an undergraduate at Edinburgh and went to graduate study at Cambridge, where I was working in some very specialized area, but I happened to go to Moscow in my third year of graduate study and studied with a brilliant mathematician called Gelfand, who opened my eyes to lots of interesting mathematics, and when I came back, he advised that I become a topologist, so I tried to become a topologist. So that’s more what I’ve been doing recently, gradually moving my area of study. And now I study something called symplectic topology, or symplectic geometry, which is the study of space with a particular structure on it which comes out of physics called a symplectic structure.
EL: OK. And what is your favorite theorem?
DM: My favorite theorem at the moment has got to do with symplectic geometry, and it’s called the nonsqueezing theorem. This is a theorem that was discovered in the mid-80s by a brilliant mathematician called Gromov, who was trying to understand what this structure really means. A symplectic structure is a strange structure you can put on space that really groups coordinates in pairs. You take two coordinates (x1,y1) and another two coordinates (x2,y2), and you measure an area with respect to the first pair, an area with respect to the second pair, and add them. You get this very strange measurement in four-dimensional space, and the question is what are you actually measuring? The way to understand that is to try to see it visually. He tried to explore it visually by saying, “Well, let’s take a round ball in four-dimensional space. Let’s move it so we preserve this strange structure, and see what we end up with.” Can we end up with arbitrary curly shapes? What happens? One thing you do know is that you have to preserve volume, but apart from that, nothing else was known.
So his nonsqueezing theorem says that if you took a round ball, say the radii were 1 in every direction, it’s not possible to move it so that in two directions the radii are less than 1 and in the other directions it’s arbitrary, as big as you want. The two directions where you’re trying to squeeze are these paired directions. It’s saying you can’t move it in such a way.
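For readers who want it in symbols, the standard statement of Gromov’s nonsqueezing theorem, in our notation rather than the episode’s, is: if the round ball B^4(r) of radius r in four-dimensional space admits a symplectic embedding into the cylinder B^2(R) x R^2, where B^2(R) is a disc of radius R in one of the two paired coordinate planes, then r is at most R. A merely volume-preserving map can make the ball fit into an arbitrarily thin cylinder; a symplectic one cannot.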
I’ve always liked this theorem. For one thing, it’s very important. It characterizes the structure in a way that’s very surprising. And for another thing, it’s so concrete. It’s just about shapes in four dimensions. Now four dimensions is not so easy to understand.
EL: No, not for me, at least!
DM: Thinking in four dimensions is tricky, and I’ve spent many, many years trying to understand how you might think about moving things in four dimensions, because you can’t do that.
EL: And to back up a little bit, when you say a round ball, are you talking about a two-dimensional ball that’s embedded in four-dimensional space, or a four-dimensional ball?
DM: I’m talking about a four-dimensional ball.
EL: OK.
DM: It’s got radius 1 in all directions. You’ve got a center point and you move a distance 1 in every direction; that gives you a four-dimensional shape, and its boundary is a three-dimensional sphere, in fact.
EL: Right, OK.
DM: Then you’re trying to move that, preserving this rather strange structure, and trying to see what happens.
EL: Yeah, so this is saying that the round ball is very rigid in some way.
DM: It’s very round and rigid, and you can’t squeeze it in these two related directions.
EL: At least to preserve the symplectic structure. Of course, you can do this and preserve the volume.
DM: Exactly.
EL: This is saying that symplectic structures are
DM: Different, intrinsically different, in a very direct way.
EL: I remember one of the pictures in your talk kind of shows this symplectic idea, where you’re basically projecting some four-dimensional thing onto two different two-dimensional axes. It does seem like a very strange way to get a volume on something.
DM: It’s a strange measurement. Why you have that, why are you interested in two directions? It’s because they’re related. This structure came from physics, elementary physics. You’re looking at the movement, say, of particles, or the earth around the sun. Each particle has got a position coordinate and a velocity coordinate. It’s a pairing of position and velocity for each degree of freedom that gives this measurement.
EL: And somehow this is a very sensible thing to do, I guess.
DM: It’s a very sensible thing to do, and people have used the idea that the symplectic form is fundamental in order to calculate trajectories, say, of rockets flying off. You want to send a probe to Mars, you want to calculate what happens. You want to have accurate numerical approximations. If you make your numerical approximations preserve the underlying symplectic structure, they just do much better than if you just take other approximation methods.
EL: OK.
DM: That was another talk, that was a fascinating talk at this year’s MathFest telling us about this, showing even if you’re trying to approximate something simple like a pendulum, standard methods don’t do it very well. If you use these other methods, they do it much better.
EL: Oh wow, that’s really interesting. So when did you first learn about the nonsqueezing theorem?
DM: Well I learned about it essentially when it was discovered in the mid-1980s.
EL: OK.
DM: I happened to be thinking about some other problem, but I needed to move these balls around preserving the symplectic structure. I just realized there was this question of whether I could necessarily do this, and then Gromov showed that one really could not do it, that there’s a strict limit. So I’ve always been interested in questions, many other questions, coming from that.
EL: Another part of this podcast is that we like to ask our guests to pair their theorem with another delight in life, a food, beverage, piece of art or music, so what have you chosen to pair with the nonsqueezing theorem?
DM: Well you asked me this, and I decided I’d pair it with an avocado because I like avocados, and they have a sort of round, pretty spherical big seed in the middle. The seed is sort of inside the avocado, which surrounds it.
EL: OK. I like that. And the seed can’t be squeezed. The avocado’s seed cannot be squeezed. Is there anything else you’d like to say about the nonsqueezing theorem?
DM: Only that it’s an amazing theorem, that it really does underlie the whole of symplectic geometry. It’s led to many, many interesting questions. It seems to be a simple-minded thing, but it means that you can define what it means to preserve a symplectic structure without using derivatives, which means you can try and understand much more general kinds of motions, which are not differentiable but which preserve the symplectic structure. That’s a very little-understood area that people are trying to explore. What’s the difference between having a derivative and not having a derivative? It’s a sort of geometric thing. You actually see surprising differences. That’s amazing to me.
EL: Yeah. That’s a really interesting aspect to this that I hadn’t thought about. Something you mentioned in the talk that you gave today was that the ball can’t be squeezed but the ellipsoids can. It’s this really interesting difference, also, between the ellipsoids and the ball.
DM: Right. So you have to think that somehow an ellipsoid, which is like a ball, but one direction is stretched, it’s got certain planes, there are certain discrete things you can do. You can slice it and then fold it along that slice. It’s a discrete operation somehow. That gives these amazing results about bending these ellipsoids.
EL: That’s another fascinating aspect to it. You I’m sure don’t remember this, but we actually met nine years ago when I was at the Institute for Advanced Study’s summer program for women in math. I’m pretty sure you don’t remember because I was too shy to actually introduce myself, but I remember you gave a series of lectures there about symplectic geometry. I studied Teichmüller theory, something pretty far away from that, and so I didn’t know if I was going to be interested in those. I remember that you really got me very interested in doing that many years ago. I was really excited when I saw that you were here and I’d be able to not be quite so shy this year and actually get to talk to you.
DM: That’s the thing, overcoming shyness. I used to be very shy and didn’t talk to people at all. But now I’m too old, I’ve given it all up.
EL: Well thank you very much for being on this podcast, and I hope you have a good rest of MathFest.
DM: Thank you.
This transcript is provided as a courtesy and may contain errors.
Kevin Knudson: Welcome to My Favorite Theorem. I’m Kevin Knudson, a mathematician at the University of Florida. I’m joined by my other cohost.
Evelyn Lamb: Hi. I’m Evelyn Lamb. I’m a freelance writer currently based in Paris.
KK: Currently based in Paris. For how much longer?
EL: Three weeks. We’re down to the final countdown here. And luckily our bank just closed our account without telling us, so that’s been a fun adventure.
KK: Well, who needs money, right?
EL: Exactly.
KK: You’ve got pastries and coffee, right? So in this episode we are pleased to welcome Jordan Ellenberg, professor of mathematics at the University of Wisconsin. Jordan, want to tell everyone about yourself?
Jordan Ellenberg: Hi. Yes, this is Jordan Ellenberg. I’m talking to you from Madison, Wisconsin today, where we are enjoying the somewhat chilly, drizzly weather we call spring.
KK: Nice. I’ve been to Madison. It’s a lovely place. It’ll be spring for real in a little while, right?
JE: It’ll be lovely. It’s going to be warm this afternoon, and I’m going to be down at the Little League field watching my son play, and it’s as nice as can be.
KK: What position does he play?
JE: He’s 11, so they mix it up. They don’t have defined positions.
KK: I have an 11-year-old nephew who’s a lefty, and they want him to pitch all the time. He’s actually pretty good.
JE: It’s the same thing as asking a first-year graduate student what their field is. They should move around a little bit.
KK: That’s absolutely true.
JE: 11 is to baseball as the first year of grad school is to math, I think. Roughly.
KK: That’s about right. Well now they start them so young. We’re getting off track. Never mind. So we’re here to talk about math, not baseball, even though there’s a pretty good overlap there. So Jordan, you’re going to surprise us. We don’t actually know what your favorite theorem is. So why don’t you lay it on us. What’s your favorite theorem?
JE: It is hard to pick your favorite theorem. I think it’s like trying to pick your favorite kind of cheese, though I think in Wisconsin you’re almost required to have one. I’m going to go with Fermat’s Little Theorem.
KK: OK.
EL: This is a good theorem. Can you tell us what that is?
JE: I’m not even going to talk about the whole theorem. I’m going to talk about one special case, which I find very beautiful, which is that if you take a prime number, p, and raise 2 to that power, and then you divide by p, then the remainder is 2. In compact terms, you would say 2 to the p is congruent to 2 mod p. Shall we do a couple?
KK: Sure.
JE: For instance, 2^5 is 32. Computing the remainder when you divide by 5 is easy because you can just look at the last digit. 32 is 2 more than 30, which is a multiple of 5. This persists, and you can do it. Should we do one more? Let’s try. 2 to the 7th is 128, and 126 is a multiple of 7, so 128 is 2 mod 7.
KK: Your multiplication tables are excellent.
JE: Thank you.
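If you would like to check more cases than is comfortable to do in your head, a few lines of Python will do it. This snippet is our illustration, not something from the episode; it uses only the built-in pow function.

    # Check the special case of Fermat's Little Theorem discussed above:
    # for a prime p, 2**p leaves a remainder of 2 when divided by p
    # (for p = 2 that remainder is 0, which is still 2 mod 2).
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    for p in primes:
        remainder = pow(2, p, p)  # (2**p) mod p, computed efficiently
        print(f"2^{p} mod {p} = {remainder}")
        assert remainder == 2 % p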
KK: I guess being a number theorist, this is right up your alley. Is this why you chose it? How far back does this theorem go?
JE: Well, it goes back to Fermat, which is a long time ago. It goes back very early in number theory. It also goes back for me very early in my own life, which is why I have a special feeling for it. One thing I like about it is that there are some theorems in number theory where you’re not going to figure out how to prove this theorem by yourself, or even observe it by yourself. The way to get to the theorem, and this is true for many theorems in number theory, which is a very old, a very deep subject, is you’re going to study and you’re going to marvel at the ingenuity of whoever could have come up with it. Fermat’s Little Theorem is not like that. I think Fermat’s Little Theorem is something that you can, and many people do, and I did, discover at least that it’s true on your own, for instance by messing with Pascal’s Triangle, for example. It’s something you can kind of discover. At least for me, that was a very formative experience, to be like, I learned about Pascal’s triangle, I was probably a teenager or something. I was messing around and sort of observed this pattern and then was able to prove that 2 to the p was congruent to 2 mod p, and I thought this was great. I sort of told a teacher who knew much more than me, and he said, yeah, that’s Fermat’s Little Theorem.
I was like, “little theorem?” No, this was a lot of work! It took me a couple days to work this out. I felt a little bit diminished. But to give some context, it’s called that because of course there’s the famous Fermat’s Last Theorem, poorly named because he didn’t prove it, so it wasn’t really his theorem. Now I think nowadays we call this theorem, which you could argue is substantially more foundational and important, we call it the little theorem by contrast with the last theorem.
EL: Going back to Pascal’s triangle, I’m not really aware of the connection between Fermat’s Little Theorem and Pascal’s triangle. This is an audio medium. It might be a little hard to go through, but can you maybe explain a little bit about how those are connected?
JE: Sure, and I’m going to gesticulate wildly with my hands to make the shape.
EL: Perfect.
JE: You can imagine a triangle man dance sort of thing with my hands as I do this. So there’s all kinds of crazy stuff you can do with Pascal’s triangle, and of course one thing you can do, which is sort of fundamental to what Pascal’s triangle is, is that you can add up the rows. When you add up the rows, you get powers of two.
EL: Right.
JE: So for instance, the third row of Pascal’s triangle is 1-3-3-1, and if you add those up, you get 8, which is a power of 2, it’s 2^3. The fifth row of Pascal’s triangle is 1-5-10-10-5-1. I don’t know, actually. Every number theorist can sort of rattle off the first few rows of Pascal’s triangle. Is that true of topologists too, or is that sort of a number theory thing? I don’t even know.
KK: I’m pretty good.
JE: I don’t want to put you on the spot.
EL: No, I mean, I could if I wrote them down, but they aren’t at the tip of my brain that way.
JE: We use those binomial coefficients a lot, so they’re just like right there. Anyway, 1-5-10-10-5-1. If you add those up, you’ll get 32, which is 2^5. OK, great. Actually looking at it in terms of Pascal’s triangle, why is it the case that you get something congruent to 2 mod 5? And you notice that actually most of those summands, 1-5-10-10-5-1, I’m going to say it a few times like a mantra, most of those summands are multiples of 5, right? If you’re like, what is this number mod 5, the 5 doesn’t matter, the 10 doesn’t matter, the 10 doesn’t matter, the 5 doesn’t matter. All that matters is the 1 at the beginning and the 1 at the end. In some sense Fermat’s Little Theorem is an even littler theorem, it’s the theorem that 1+1=2. That’s the 2. You’ve got the 1 on the far left and the 1 on the far right, and when the far left and the far right come together, you either get the 2016 US Presidential election, or you get 2.
KK: And the reason they add up to powers of 2, I guess, is because you’re just counting the number of subsets, right? The number of ways of choosing k things out of n things, and that’s basically the order of the power set, right?
JE: Exactly. It’s one of those things that’s overdetermined. Pascal’s triangle is a place where so many strands of mathematics meet. For the combinatorists in the room, we can sort of say it in terms of subsets of a set. This is equivalent, but I like to think of it as this is the vertices of a cube, except by cube maybe I mean hypercube or some high-dimensional thing. Here’s the way I like to think about how this works for the case p=3, right, 1-3-3-1. I like to think of those 8 things as the 8 vertices of a cube. Is everybody imagining their cube right now? We’re going to do this in audio. OK. Now this cube that you’re imagining, you’re going to grab it by two opposite corners, and kind of hold it up and look at it. And you’ll notice that there’s one corner in one finger, there’s one corner on your opposite finger, and then the other six vertices that remain are sort of in 2 groups of 3. If you sort of move from one finger to the other and go from left to right and look at how many vertices you have, there’s your Pascal’s triangle, right? There’s your 1-3-3-1.
One very lovely way to prove Fermat’s Little Theorem is to imagine spinning that cube. You’ve got it held with the opposite corners in both fingers. What you can see is that you can sort of spin that cube 1/3 of a rotation and that’s going to group your vertices into groups of 3, except for the ones that are fixed. This is my topologist way. It’s sort of a fixed point theorem. You sort of rotate the sphere, and it’s going to have two fixed points.
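Sketched a little more formally, in our words rather than Ellenberg’s: label the 2^p vertices of the p-dimensional cube by binary strings of length p, and let the cyclic group Z/p act by rotating the coordinates. Because p is prime, every orbit has size exactly p except for the two strings fixed by the rotation, 00...0 and 11...1. So 2^p equals p times the number of size-p orbits, plus 2, which is precisely the statement that 2^p is congruent to 2 mod p.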
EL: Right. That’s a neat connection there. I had never seen Pascal’s triangle coming into Fermat’s little theorem here.
JE: And if you held up a five-dimensional cube with your five-dimensional fingers and held opposite corners of it, you would indeed see, as you went along from the corner, a group of 5, and then a group of 10, and then a group of 10, and then a group of 5, and then the last one, which you’re holding in your opposite finger.
EL: Right.
JE: And you could spin, you could spin the same way, a fifth of a rotation around. Of course the real truth, as you guys know, as we talk about, you imagine a five-dimensional cube, I think everyone just imagines a 3-dimensional cube.
KK: Right. We think of some projection, right?
JE: Exactly.
KK: Right. So you figured out a proof on your own for this base-2 case?
JE: My memory is that I don’t think I knew the slick cube-spinning proof. I think I was thinking of Pascal’s triangle. This thing I said, I didn’t prove, as we were just discussing; I mean, you can look at any individual row and see that all those interior numbers in the triangle are divisible by 5. But that’s something you can prove if you know that the elements of Pascal’s triangle are the binomial coefficients, given by the formula n!/(k!(n-k)!). It’s not so hard to prove in that case that if p is prime, then those binomial coefficients are all divisible by p, except for the first and last. So that was probably how I proved it. That would be my guess.
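Here is a small Python sketch of that observation, ours rather than Ellenberg’s, which checks a few prime rows of Pascal’s triangle using the standard library.

    from math import comb

    # For a prime p, every interior entry C(p, k) with 0 < k < p of row p
    # of Pascal's triangle is divisible by p, so the row sum 2**p is
    # congruent to 1 + 1 = 2 mod p.
    for p in [3, 5, 7, 11, 13]:
        row = [comb(p, k) for k in range(p + 1)]
        interior_ok = all(entry % p == 0 for entry in row[1:-1])
        print(p, row, "interior divisible:", interior_ok,
              "row sum mod p:", sum(row) % p)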
KK: Just by observation, I guess. Cool.
EL: We like to enjoy the great things in life together. So along with theorems, we like to ask our guests to pair something with this theorem that they think complements the theorem particularly well. It could be a wine or beer, favorite flavor of chocolate…
JE: Since you invited somebody in Wisconsin to do this show, you know that I’m going to tell you what cheese goes with this theorem.
EL: Yes, please.
KK: Yes, absolutely. Which one?
JE: The cheese I’ve chosen to pair with this, and I may pronounce it poorly, is a cheese called gjetost.
EL: Gjetost.
JE: Which is a Norwegian cheese. I don’t know if you’ve had it. It almost doesn’t look like cheese. If you saw it, you wouldn’t quite know what it was because it’s a rather dark toasty brown. You might think it was a piece of taffy or something like that.
EL: Yeah, yeah. It looks like caramel.
JE: Yes, it’s caramel colored. It’s very sweet. I chose it because a, because like Fermat’s Little Theorem, I just really like it, and I’ve liked it for a long time; b, because it usually comes in the shape of a cube, and so it sort of goes with my imagined proof. You could, if you wanted to, label the vertices of your cheese with the subsets of a 3-element set and use the gjetost to actually illustrate a proof of Fermat’s Little Theorem in the case p=3. And third, of course, the cheese is Norwegian, and so it honors Niels Henrik Abel, who was a great Norwegian mathematician, and Fermat’s Little Theorem is in some sense the very beginning of what we would now call Abelian group theory. Fermat certainly didn’t have those words. It would be hundreds of years before the general apparatus was developed, but it was one of the earliest theorems proved about Abelian groups, and so in that sense I think it goes with a nice, sweet Norwegian cheese.
EL: Wow, you really thought this pairing through. I’m impressed.
JE: For about 45 seconds before we talked.
EL: I’ve actually made this cheese, or at least some approximation of this. I think it’s made with whey, rather than milk.
JE: On purpose? What happened?
EL: Yeah, yeah. I had some whey left over from making paneer, and so I looked up a recipe for this cheese, and I had never tried the real version of it. After I made my version, then, I went to the store and got the real one. My version stood up OK to it. It didn’t taste exactly the same, but it wasn’t too bad.
JE: Wow!
KK: Experiments in cheesemaking.
JE: In twelve years, I’ve never made my own cheese. I just buy it from the local dairy farmers.
EL: Well it was kind of a pain, honestly. It stuck to everything. Yeah.
JE: Someone who lives in Paris should not be reduced to making their own cheese, by the way. I feel like that’s wrong.
EL: Yes.
KK: I’m not surprised you came up with such a good pairing, Jordan. You’ve written a novel, right, years ago, and so you’re actually a pretty creative type. You want to plug your famous popular math book? We like to let people plug stuff.
JE: Yes. My book, which came out here a few years ago, it’s called How Not to Be Wrong. It’ll be out in Paris in two weeks in French. I just got to look at the French cover, which is beautiful. In French it’s called, I’m not going to be able to pronounce it well, like “L’art de ne dire n’importe pas”, [L’art de ne pas dire n’importe quoi] which is “The art of not saying whatever nonsense,” or something like this. It’s actually hard work to translate the phrase “How not to be wrong” in French. I was told that any literal translation of it sounds appallingly bad in French.
This book is kind of a big compendium of all kinds of things I had to say with a math angle. Some of it is about pure math, and insights I think regular people can glean from things that pure mathematicians think about, and some are more on the “statistical news you can use” side. It’s a big melange of stuff.
KK: I’ve read it.
JE: I’m a bit surprised people like it and have purchased it. I guess the publishing house knew that because they wouldn’t have published it, but I didn’t know that. I’m surprised people wanted it.
KK: I own it in hardback. I’ll say it. It’s really well done. How many languages is it into now?
JE: They come out pretty slowly. I think we’ve sold 14 or 15. I think the number that are physically out is maybe []. I think I made the book hard to translate by having a lot of baseball material and references to US cultural figures and stuff like that. I got a lot of really good questions from the Hungarian translator. That one’s not out, or that one is out, but I don’t have a copy of it. It just came out.
KK: Very cool.
JE: The Brazilian edition is very, very rich in translator’s notes about what the baseball words mean. They really went the extra mile to be like, what the hell is this guy talking about?
KK: Is it out in Klingon yet?
JE: No, I think that will have to be a volunteer translator because I think the commercial market for Klingon popular math books is not there. I’m holding out for Esperanto. If you want my sentimental favorite, that’s what I would really like. I tried to learn Esperanto when I was kid. I took a correspondence course, and I have a lifelong fascination for it. But I don’t think they publish very many books in Esperanto. There was a math journal in Esperanto.
EL: Oh wow.
KK: That’s right, that’s right. I sort of remember that.
JE: That was in Poland. I think Poland is one of the places where Esperanto had the biggest popularity. I think the guy who founded it, Zamenhof, was Polish.
KK: Cool. This has been fun. Thanks, Jordan.
JE: Thank you guys.
EL: Thanks a lot for being here.
KK: Thanks a lot.
KK: Thanks for listening to My Favorite Theorem, hosted by Kevin Knudson and Evelyn Lamb. The music you’re hearing is a piece called Fractalia, a percussion quartet performed by four high school students from Gainesville, Florida. They are Blake Crawford, Gus Knudson, Dell Mitchell, and Baochau Nguyen. You can find more information about the mathematicians and theorems featured in this podcast, along with other delightful mathematical treats, at Kevin’s website, kpknudson.com, and Evelyn’s blog, Roots of Unity, on the Scientific American blog network. We love to hear from our listeners, so please drop us a line at [email protected]. Or you can find us on Facebook and Twitter. Kevin’s handle on Twitter is @niveknosdunk, and Evelyn’s is @evelynjlamb. The show itself also has a Twitter feed. The handle is @myfavethm. Join us next time to learn another fascinating piece of mathematics.
This transcript is provided as a courtesy and may contain errors.
EL: Welcome to My Favorite Theorem. I’m one of your hosts, Evelyn Lamb. I’m a freelance math and science writer currently based in Paris. And this is my cohost.
KK: Hi, I’m Kevin Knudson, professor of mathematics at the University of very, very hot Florida.
EL: Yeah. Not so bad in Paris yet.
KK: It’s going to be a 96-er tomorrow.
EL: Wow. So each episode, we invite a mathematician to come on and tell us about their favorite theorem. Today we’re delighted to welcome Emille Davie Lawrence to the show. Hi, Emille.
EDL: Hello, Evelyn.
EL: So can you tell us a little bit about yourself?
EDL: Sure! So I am a term assistant professor at the University of San Francisco. I’m in the mathematics and physics department. I’ve been here since 2011, so I guess that’s six years now. I love the city of San Francisco. I have two children, ages two and almost four.
EL: Who are adorable, if your Facebook is anything to go by.
EDL: Thank you so much. You’ll get no arguments from me. I’ve been doing math for quite a while now. I’m a topologist, and my mathematical interests have always been in topology, but they’ve evolved within topology. I started doing braid groups, and right now, I’m thinking about spatial graphs a lot. So lots of low-dimensional topology ideas.
EL: Cool. So what is your favorite theorem?
EDL: My favorite theorem is the classification theorem for compact surfaces. It basically says that no matter how weird the surface you think you have on your hands, if it’s a compact surface, it’s only one of a few things. It’s either a sphere, or the connected sum of a bunch of tori, or the connected sum of a bunch of projective planes. That’s it.
EL: Can you tell us a little bit more about what projective planes are?
EDL: Obviously a sphere, well, I don’t know how obvious, but a sphere is like the surface of a ball, and a torus looks like the surface of a donut, and a projective plane is a little bit stranger. I think anyone who would be listening may have run into a Möbius band at some point. Basically you take a strip of paper and glue the two ends of your strip together with a half-twist. This is a Möbius band. It’s a non-orientable surface with boundary. I think sometimes kids do this. They pop up in different contexts. One way to describe a projective plane is to take a Möbius band and add a disc to the Möbius band. It gives you a compact surface without boundary because you’ve identified the boundary circle of the Möbius band to the boundary of the disc.
EL: Right, OK.
EDL: Now you’ve got this non-orientable thing called a projective plane. Another way to think about a projective plane is to take a disc and glue one half of the boundary to the other half of the boundary in opposite directions. It’s a really weird little surface.
KK: One of those things we can’t visualize in three dimensions, unfortunately.
EDL: Right, right. It’s actually hard to explain. I don’t think I’ve ever tried to explain it without drawing a picture.
EL: Right. That’s where the blackboard comes in handy.
KK: Limitations of audio.
EL: Have you ever actually tried to make a projective plane with paper or cloth or anything?
EDL: Huh! I am going to disappoint you there. I have not. The Möbius bands are easy to make. All you need is a piece of paper and one little strip of tape. But I haven’t. Have you, Evelyn?
EL: I’ve seen these at the Joint Meetings, I think somebody brought this one that they had made. And I haven’t really tried. I’d imagine if you tried with paper, it would probably just be a crumpled mess.
EDL: Right, yeah.
EL: This one I think was with fabric and a bunch of zippers and stuff. It seemed pretty cool. I’m blanking now on who it was.
KK: That sounds like something sarah-marie belcastro would do.
EL: It might have been. It might have been someone else. There are lots of cool people doing cool things with that. I should get one for myself.
EDL: Yeah, yeah. I can see cloth and zippers working out a lot better than a piece of paper.
EL: So back to the theorem. Do you know what makes you love this theorem?
EDL: Yeah. I think just the fact that it is a complete classification of all compact surfaces. It’s really beautiful. Surfaces can get weird, right? And no matter what you have on your hands, you know that it’s somewhere on this list. That makes a person like me who likes order very happy. I also like teaching about it in a topology class. I’ve only taught undergraduate topology a few times, but the last time was last spring, a year ago, spring of 2016, and the students seemed to really love it. You can play these “What surface am I?” games. Part of the proof of the theorem is that you can triangulate any surface and cut it open and lay it flat. So basically any surface has a polygonal representation where you just have some polygon in the plane with edges identified in pairs. I like to have this game in my class where I just draw a polygon and identify some of the edges in pairs and say, “What surface is this?” And they kind of get into it. They know what the possibilities are for the answers. You can sort of just triangulate it and find the Euler characteristic, see if you can find a Möbius band, and you’re off to the races.
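For listeners playing the “What surface am I?” game at home, the bookkeeping can be summarized as follows (a standard formulation, not a quote from the episode): compute the Euler characteristic V - E + F from any triangulation and check orientability, for instance by looking for an embedded Möbius band. An orientable compact surface without boundary with Euler characteristic 2 - 2g is the sphere when g = 0 and the connected sum of g tori otherwise, while a non-orientable one with Euler characteristic 2 - k is the connected sum of k projective planes. Those two pieces of data determine the surface completely.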
KK: That’s great. I taught the graduate topology course here at Florida last year. I’m ashamed to admit I didn’t actually prove the classification.
EDL: You should not be ashamed to admit that. It’s something at an undergraduate level you get to at the end, depending on how you structure things. We did get to it at the end of the course, so I don’t know how rigorously I proved it for them. The combinatorial step, showing that you can always take this polygonal representation and put it into the standard polygonal form, takes a lot of work and time.
EL: There are delicacies in there that you don’t really know about until you try to teach it. I taught it also in class a couple years ago, and when I got there, I was like, “This seemed a little easier when I saw it as a student.” Now that I was trying to teach it, it seemed a little harder. Oh, there are all of these t’s I have to cross and i’s I have to dot.
KK: That’s always the way, right?
EDL: Right.
KK: I assigned as a homework assignment that my students should just compute the homology of these surfaces, and even puncture them. Genus g, r punctures, just as a homework exercise. From there you can sort of see that homology tells you that genus classifies things, at least up to homotopy invariants, but this combinatorial business is tricky.
EDL: It is.
EL: Was this a love at first sight kind of theorem, or is this a theorem that’s grown on you?
EDL: I have to say it’s grown on me. I probably saw it my first year of graduate school, and like all of topology, I didn’t love it at first when I saw it as a first-year graduate student. I did not see any topology as an undergrad. I went to a small, liberal arts college that didn’t have it. So yeah, I have matured in my appreciation for the classification theorem of surfaces. It’s definitely something I love now.
KK: You’re talking to a couple of topologists, so you don’t have to convince us very much.
EDL: Right.
KK: I had a professor as an undergrad who always said, “Topology is analysis done right.”
EDL: I like that.
KK: I know I just infuriated all the analysts who are listening. I always took that to heart. I always took that to heart because I always felt that way too. All those epsilons and deltas, who wants all that?
EDL: Who needs it?
KK: Draw me a picture.
EL: I was so surprised in the first, I guess advanced calculus class I had, a broader approach to calculus, and I learned that all these open sets and closed sets and things actually had to do with topology not necessarily with epsilons and deltas. That was really a revelation.
KK: So you’re interested in braids, too, or you were? You moved on?
EDL: I would say I’m still interested in braids, although that is not the focus of my research right now.
KK: Those are hard questions too, so much interesting combinatorics there.
EDL: That’s right. I think that’s sort of what made me like braid groups in the first place. I thought it was really neat that a group could have that geometric representation. Groups, I don’t know, when you learn about groups, I guess the symmetric group is one of the first groups that you learn about, but then it starts to wander off into abstract land. Braid groups really appealed to me, maybe just the fact that I liked learning visually.
EL: It’s not quite as in the clouds as some abstract algebra.
KK: And they’re tied up with surfaces, right, because braid groups are just the mapping class group of the punctured disc.
EDL: There you go.
KK: And Evelyn being the local Teichmüller theorist can tell us all about the mapping class groups on surfaces.
EL: Oh no! We’re getting way too far from the classification of surfaces here.
KK: This is my fault. I like to go off on tangents.
EDL: Let’s reel it back in.
EL: You mentioned that you’ve matured into true appreciation of this lovely theorem, which kind of brings me to the next part of the show. The best things in life are better together. Can you recommend a pairing for your theorem? This could be a fine wine or a flavor of ice cream or a favorite piece of music or art that you think really enhances the beauty of this classification theorem.
EDL: I hate to do this, but I’m going to have to say coffee and donuts.
KK: Of course.
EDL: I really tried to say something else, but I couldn’t make myself do it. A donut and cup of coffee go great with the classification of compact surfaces theorem.
EL: That’s fair.
KK: San Francisco coffee, right? Really good dark, walk down to Blue Bottle and stand in line for a while?
EDL: That’s right. Vietnamese coffee.
KK: There you go. That’s good.
EL: Is there a particular flavor of donut that you recommend?
EDL: Well you know, the maple bacon. Who can say no to bacon on a donut?
KK: Or on anything for that matter.
EDL: Or on anything.
KK: That’s just a genus one surface. Can we get higher-genus donuts? Have we seen these anywhere, or is it just one?
EDL: There are some twisted little pastry type things. I’m wondering if there’s some higher genus donuts out there.
EL: If nothing else there’s a little bit of Dehn twisting going on with that.
EDL: There’s definitely some twisting.
EL: I guess we could move all the way over into pretzels, but that doesn’t go quite as well with a cup of coffee.
EDL: Or if you’re in San Francisco, you can get one of these cronuts that have been all the rage lately.
EL: What is a cronut? I have not quite understood this concept.
EDL: It is a cross between a croissant and a donut. And it’s flakier than your average donut. It is quite good. And if you want one, you’re probably going to have to stand on line for about an hour. Maybe the rage has died down by now, maybe. But that’s what was happening when they were first introduced.
EL: I’m a little scared of the cronut. That sounds intense but also intriguing.
EDL: You’ve got to try everything once, Evelyn. Live on the edge.
EL: The edge of the cronut.
KK: You’re in Paris. We’re not too concerned about your ability to get pastry.
EL: I have been putting away some butter.
KK: The French have it right. They understand that butter does the heavy lifting.
EDL: It’s probably a sin to have a cronut in Paris.
EL: Probably. But if they made one, it would be the best cronut that existed.
EDL: Absolutely.
KK: Well I think this has been fun. Anything else you want to add about your favorite theorem?
EDL: It’s a theorem that everyone should dig into, even if you aren’t into topology. I think it’s one of those foundational theorems that everyone should see at least once, and look at the proof at least once, just for a well-rounded mathematical education.
KK: Maybe I should look at the proof sometime.
EL: Thanks so much for joining us, Emille. We really enjoyed having you. And this has been My Favorite Theorem.
EDL: Thank you so much.
KK: Thanks for listening to My Favorite Theorem, hosted by Kevin Knudson and Evelyn Lamb. The music you’re hearing is a piece called Fractalia, a percussion quartet performed by four high school students from Gainesville, Florida. They are Blake Crawford, Gus Knudson, Del Mitchell, and Bao-xian Lin. You can find more information about the mathematicians and theorems featured in this podcast, along with other delightful mathematical treats, at Kevin’s website, kpknudson.com, and Evelyn’s blog, Roots of Unity, on the Scientific American blog network. We love to hear from our listeners, so please drop us a line at [email protected]. Or you can find us on Facebook and Twitter. Kevin’s handle on Twitter is @niveknosdunk, and Evelyn’s is @evelynjlamb. The show itself also has a Twitter feed. The handle is @myfavethm. Join us next time to learn another fascinating piece of mathematics.
This transcript is provided as a courtesy and may contain errors.
Evelyn Lamb: Welcome to My Favorite Theorem. I’m your host Evelyn Lamb. I am a freelance math writer usually based in Salt Lake City but currently based in Paris. And this is your other host.
KK: I’m Kevin Knudson, professor of mathematics at the University of Florida.
EL: Every episode we invite a mathematician on to tell us about their favorite theorem. This week our guest is Dave Richeson. Can you tell us a little about yourself, Dave?
Dave Richeson: Sure. I’m a professor of mathematics at Dickinson College, which is in Carlisle, Pennsylvania. I’m also currently the editor of Math Horizons, which is the undergraduate magazine of the Mathematical Association of America.
EL: Great. And so how did you get from wherever you started to Carlisle, Pennsylvania?
DR: The way things usually work in academia. I applied to a bunch of schools. Actually, seriously, my wife knew someone in Carlisle, Pennsylvania. My girlfriend at the time, wife now, and she saw the list of schools that I was applying to and said, “You should get a job at Dickinson because I know someone there.” And I did.
KK: That never happens!
EL: Wow.
DR: That never happens.
KK: That never happens. Dave and I actually go back a long way. He was finishing his Ph.D. at Northwestern when I was a postdoc there.
DR: That’s right.
KK: That’s how old-timey we are. Hey, Dave, why don’t you plug your excellent book.
DR: A few years ago I wrote a book called Euler’s Gem: The Polyhedron Formula and the Birth of Topology. It’s at Princeton University Press. I could have chosen Euler’s Formula as my favorite theorem, but I decided to choose something different instead.
KK: That’s very cool. I really recommend Dave’s book. It’s great. I have it on my shelf. It’s a good read.
DR: Thank you.
EL: Yeah. So you’ve told us what your favorite theorem isn’t. So what is your favorite theorem?
DR: We have a family joke. My kids are always saying, “What’s your favorite ice cream? What’s your favorite color?” And I don’t really rank things that way. This was a really challenging assignment to come up with a theorem. I have recently been interested in π and Greek mathematics, so currently I’m fascinated by this theorem of Archimedes, so that is what I’m giving you as my favorite theorem. Favorite theorem of the moment.
The theorem says that if you take a circle, the area of that circle is the same as the area of a right triangle that has one leg equal to the radius and one leg equal to the circumference of the circle. Area equals 1/2 × c × r, and hopefully we can spend the rest of the podcast talking about why I think this is such a fascinating theorem.
KK: I really like this theorem because I think in grade school you memorize this formula, that area is πr², and if you translate what you said into modern terminology, or notation, that is what it would say. It’s always been a mystery, right? It just gets presented to you in grade school. Hey, this is the formula of a circle. Just take it.
DR: Really, we have these two circle formulas, right? The area equals πr², and the circumference is 2πr, or the way it’s often presented is that π is the circumference divided by the diameter. As you said, you could convince yourself that Archimedes’ theorem is true by using those formulas. Really it’s sort of the reverse. We have those formulas because of what Archimedes did. Pi has a long and fascinating history. It was discovered and rediscovered in many, many cultures: the Babylonians, the Egyptians, the Chinese, the Indians, and so forth. But no one, until the Greeks, really looked at it in a rigorous way and started proving theorems about π and relationships between the circumference, the diameter, and the area of the circle.
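To spell out the equivalence being described here, a short check in modern notation (not Archimedes’ own argument): substituting the circumference formula into Archimedes’ statement recovers the familiar grade-school area formula, and vice versa.

```latex
A = \tfrac{1}{2}\,c\,r
\quad\text{and}\quad
c = 2\pi r
\quad\Longrightarrow\quad
A = \tfrac{1}{2}\,(2\pi r)\,r = \pi r^{2}.
```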
EL: Right, and something you had said in one of your emails to us was about how it’s not even clear: if you ask a mathematician who proved that π was a constant, that’s a hard question.
DR: Yes, exactly. I mean, in a way, it seems easy. Pi is usually defined as the circumference divided by the diameter for any circle. And in a way, it seems kind of obvious. If you take a circle and you blow it up or shrink it down by some factor of k, let’s say, then the circumference is going to increase by a factor of k, the diameter is going to increase by a factor of k. When you do that division you would get the same number. That seems sort of obvious, and in a way it kind of is. What’s really tricky about this is that you have to have a way of talking about the length of the circumference. That is a curve, and it’s not obvious how to talk about lengths of curves. In fact, if you ask a mathematician who proved that the circumference over the diameter was the same value of π, most mathematicians don’t know the answer to that. I’d put money on it that most people would think it was in Euclid’s Elements, which is sort of the Bible of geometry. But it isn’t. There’s nothing about the circumference divided by the diameter, or anything equivalent to it, in Euclid’s Elements.
Just to put things in context here, a quick primer on Greek mathematics. Euclid wrote Elements sometime around 300 BCE. Pythagoras was before that, maybe 150 years before that. Archimedes was probably born after Euclid’s Elements was written. This is relatively late in this Greek period of mathematics.
KK: Getting back to that question of proportionality, the idea that all circles are similar and that’s why everybody thinks π is a constant, why is that obvious, though? I mean, I agree that all circles are similar. But this idea that if you scale a circle by a factor of k, its length scales by k, I agree if you take a polygon, that it’s clear, but why does that work for curves? That’s the crux of the matter in some sense, right?
DR: Yeah, that’s it. I think one mathematician I read called this “inherited knowledge.” This is something that was known for a long time, and it was rediscovered in many places. I think “obvious” is sort of, as we all know from doing math, obvious is a tricky word in math. It’s obvious meaning lots of people have thought of it, but if you actually have to make it rigorous and give a proof of this fact, it’s tricky. And so it is obvious in a sense that it seems pretty clear, but if you actually have to connect the dots, it’s tricky. In fact, Euclid could not have proved it in his Elements. He begins the Elements with his famous five postulates that sort of set the stage, and from those he proves everything in the book. And it turns out that those five postulates aren’t enough to prove this theorem. So one of Archimedes’ contributions was to recognize that we needed more than just Euclid’s postulates, and so he added two new postulates to those. From that, he was able to give a satisfactory proof that area = 1/2 × circumference × radius.
KK: So what were the new postulates?
DR: One of them was essentially that if you have two points, then the shortest distance between them is a straight line, which again seems sort of obvious, and actually Euclid did prove that for polygonal lines, but Archimedes is including curves as well. And the other one is that if you have, it would be easier to draw a picture. If you had two points and you connected them by a straight line and then connected them by two curves that he calls “concave in the same direction,” then the one that’s in between the straight line and the other curve is shorter than the second curve. The way he uses both of those theorems is to say that if you take a circle and inscribe a polygon, like a regular polygon, and you circumscribe a regular polygon, then the inscribed polygon has the shortest perimeter, then the circle, then the circumscribed polygon. That’s the key fact that he needs, and he uses those two axioms to justify that.
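In modern notation (ours, not Archimedes’), the squeeze he sets up reads as follows: for a circle of radius r and circumference C, the perimeter of the inscribed regular n-gon sits below C and the perimeter of the circumscribed regular n-gon sits above it.

```latex
\underbrace{2nr\,\sin\!\left(\tfrac{\pi}{n}\right)}_{\text{inscribed }n\text{-gon}}
\;\le\; C = 2\pi r \;\le\;
\underbrace{2nr\,\tan\!\left(\tfrac{\pi}{n}\right)}_{\text{circumscribed }n\text{-gon}},
\qquad n = 3, 4, 5, \dots
```

Both bounds tend to 2πr as n grows, which is what makes the polygon approximations discussed next actually close in on π.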
EL: OK. And so this sounds like it’s also very related to his some more famous work on actually bounding the value of π.
DR: Yeah, exactly. We have some writings of his that go by the name “Measurement of a Circle.” Unfortunately it’s incomplete, and it has clearly not come down to us very well through history. The two main results in that are the theorem I just talked about and his famous bounds on π, that π is between 223/71 and 22/7. 22/7 is a very famous approximation of π. Yes, so these are all tied together, and they’re in the same treatise that he wrote. In both cases, he uses this idea of approximating a circle by inscribed and circumscribed polygons, which turned out to be extremely fruitful. Really for 2,000 years, people were trying to get better and better approximations, and really until calculus they basically used Archimedes’ techniques and just used polygons with more and more and more and more sides to try to get better approximations of π.
KK: Yeah, it takes a lot too, right? Weren’t his bounds something like a 96-gon?
DR: Yeah, that’s right. Exactly.
KK: I once wrote a GeoGebra applet to run calculations like that. It takes a while for it to even get to 3.14. It’s a pretty slow convergence.
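To see just how slow that convergence is, here is a minimal Python sketch (the function and variable names are ours, invented for illustration) of the classical perimeter-doubling recurrence. Starting from a hexagon and doubling the number of sides four times lands at the 96-gon, in the spirit of Archimedes’ bounds 223/71 < π < 22/7, and it never uses the value of π anywhere.

```python
from math import sqrt

def polygon_bounds_on_pi(doublings=4):
    """Bracket pi between perimeters of regular polygons around a circle of diameter 1.

    a = perimeter of the circumscribed n-gon (an upper bound on pi)
    b = perimeter of the inscribed n-gon (a lower bound on pi)
    Doubling the number of sides uses the classical recurrences
        a_{2n} = 2*a_n*b_n / (a_n + b_n)   (harmonic mean)
        b_{2n} = sqrt(a_{2n} * b_n)        (geometric mean)
    """
    n, a, b = 6, 2 * sqrt(3), 3.0   # hexagon: 6*tan(30 deg) and 6*sin(30 deg)
    print(f"n = {n:3d}: {b:.6f} < pi < {a:.6f}")
    for _ in range(doublings):
        a = 2 * a * b / (a + b)
        b = sqrt(a * b)
        n *= 2
        print(f"n = {n:3d}: {b:.6f} < pi < {a:.6f}")

polygon_bounds_on_pi()
# Four doublings end at n = 96 with 3.141032 < pi < 3.142715,
# consistent with (and slightly tighter than) Archimedes' hand-computed 223/71 < pi < 22/7.
```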
DR: I should also plug another mathematician from the Greek era who is not that well known, and that is Eudoxus. He did work before Euclid, and big chunks of Euclid’s Elements are based on the work of Eudoxus. He was the one who really set this in motion. It’s become known as the method of exhaustion, but really it’s the ideas of calculus and limiting in disguise. This idea of proving these theorems about shapes with curved boundaries using polygons, better and better approximations of polygons. So Eudoxus is one of my favorite mathematicians that most people don’t really know about.
KK: That’s exactly it, right? They almost had calculus.
DR: Right.
KK: Almost. It’s really pretty amazing.
DR: Yes, exactly. The Greeks were pretty afraid of infinity.
KK: I’m sort of surprised that they let the method of exhaustion go, that they were OK with it. It is sort of getting at a limiting process, and as you say, they don’t like infinity.
DR: Yeah.
KK: You’d think they might not have accepted it as a proof technique.
DR: Really, and maybe this is talking too much for the mathematicians in the audience, but really the way they present this is a proof by contradiction. They suppose the two areas are not equal, and then they find polygons that approximate the circle closely enough to contradict that supposition. The final style of the proof would, I think, be comfortable to them. They don’t really take a limit, they don’t pass to infinity, anything like that.
EL: So something we like to do on this podcast is ask our guest to pair their theorem with something. Great things in life are often better paired: wine and cheese, beer and pizza, so what’s best with your theorem?
DR: I have to go with the obvious: pie, maybe pizza.
KK: Just pizza? OK?
EL: What flavor? What toppings?
KK: What goes on it?
DR: That’s a good question. I’m a fan of black olives on my pizza.
KK: OK. Just black olives?
DR: Maybe some pepperoni too.
KK: There you go.
EL: Deep dish? Thin crust? We want specifics.
DR: I’d say thin crust pizza, pepperoni and black olives. That sounds great.
EL: So you’d say the best way to properly appreciate this theorem of Archimedes is over a slice of pizza.
DR: I think I would enjoy going to a good pizza joint and talking to some mathematicians and telling them about who first proved that circumference over diameter is π, that it was Archimedes.
Actually, I was saying to Kevin before we started recording that I actually have a funny story about this, that I started investigating this. I wanted to know who first proved that circumference over diameter is a constant. I did some looking and did some asking around and couldn’t really get a satisfactory answer. I sheepishly at a conference went up to a pretty well-known math historian, and said, “I have this question about π I’m embarrassed to ask.” And he said, “Who first proved that circumference over diameter is a constant?” I said, “Yes!” He’s like, “I don’t know. I’d guess Archimedes, but I really don’t know.” And that’s when I realized it was an interesting question and something to look at a little more deeply.
EL: That’s a good life lesson, too. Don’t be afraid to ask that question that you are a little afraid to ask.
KK: And also that most answers to ancient Greek mathematics involve Archimedes.
DR: Yeah. Actually through this whole investigation, I’ve gained an unbelievable appreciation of Archimedes. I think Euclid and Pythagoras probably have more name recognition, but the more I read about Archimedes and things that he’s done, the more I realize that he is one of the great, top 5 mathematicians.
KK: All right, so that’s it. What’s the top 5?
DR: Gosh. Let’s see here.
KK: Unordered.
DR: I already have Archimedes. Euler, Newton, Gauss, and who would number 5 be?
KK: Somebody modern, come on.
DR: How about Poincaré, that’s not exactly modern, but more modern than the rest. While we’re talking about Archimedes, I also want to make a plug. There’s all this talk about tau vs. pi. I don’t really want to weigh in on that one, but I do think we should call π Archimedes’ number. We talk about π as the circumference constant and as the area constant. Archimedes was involved with both of those. People may not know he was also involved in attaching π to the volume of the sphere and π to the surface area of the sphere. Here I’m being a little historically inaccurate. Pi as a number didn’t exist for a long time after that. But basically he recognized the connection among all four of these things that we now associate with π: the circumference of a circle, the area of a circle, the volume of a sphere, and the surface area of a sphere. In fact, he famously asked that this be represented on his tombstone when he died. He had this lovely way to put all four of these together, and he said that if you take a sphere and then you enclose it in a cylinder, so that’s a cylinder that’s touching the sphere on the sides, think of a can of soda or something that’s touching on the top as well, that the volume of the cylinder to the sphere is in the ratio 3:2, and the surface area of the cylinder to the sphere is also the ratio 3:2. If you work out the math, all four of these versions of π appear in the calculation. We do have some evidence that this was actually carried out. Years later, the Roman Cicero found Archimedes’ tomb, and it was covered in brambles and so forth, and he talks about seeing the sphere and the cylinder on Archimedes’ tombstone, which is kind of cool.
EL: Oh wow.
DR: Yeah, he wrote about it.
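The 3:2 ratios are easy to check with the modern formulas (notation Archimedes himself did not have): a sphere of radius r fits exactly inside a cylinder of radius r and height 2r.

```latex
\frac{V_{\text{cyl}}}{V_{\text{sph}}}
= \frac{\pi r^{2}\cdot 2r}{\tfrac{4}{3}\pi r^{3}}
= \frac{2\pi r^{3}}{\tfrac{4}{3}\pi r^{3}}
= \frac{3}{2},
\qquad
\frac{S_{\text{cyl}}}{S_{\text{sph}}}
= \frac{2\pi r\cdot 2r + 2\pi r^{2}}{4\pi r^{2}}
= \frac{6\pi r^{2}}{4\pi r^{2}}
= \frac{3}{2}.
```

The circle’s circumference (2πr) and area (πr²) both appear inside those two computations, which is the sense in which all four versions of π show up on the tombstone.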
KK: Of course, how Archimedes died is another good story. It’s really too bad.
DR: Yeah, I was just reading about that this week. The Roman siege of Syracuse, and Archimedes, in addition to being a great mathematician and physicist, was a great engineer, and he built all these war devices to help keep the Romans at bay, and he ended up being killed by a Roman soldier. The story goes that he was doing math at the time, and the Roman general was apparently upset that they killed Archimedes. But that was his end.
KK: Then on Mythbusters, they actually tried the deal with the mirrors to see if they could get a sail to catch on fire.
DR: I did see that! Some of these stories have more evidence than others. Apparently the story of using the burning mirrors to catch ships on fire, that appeared much, much later, so the historical connection to Archimedes is pretty flimsy. As you said, it was debunked by Mythbusters on TV, or they weren’t able to match Archimedes, I should say.
KK: Well few of us can, right?
DR: Right. The other thing that is historically interesting about this is that one of the most famous problems in the history of math is the problem of squaring the circle. This is a famous Greek problem which says that if you have a circle and only a compass and straightedge, can you construct a square that has the same area as the circle? This was a challenging and difficult problem. Reading Archimedes’ writings, it’s pretty clear that he was working on this pretty hard. That’s part of the context, I think, of this work he did on π, was trying to tackle the problem of squaring the circle. It turns out that this was impossible, it is impossible to square the circle, but that wasn’t discovered until 1882. At the time it was still an interesting open problem, and Archimedes made various contributions that were related to this famous problem.
EL: Yeah.
KK: Very cool.
DR: I can go on and on. So today, that is my favorite theorem.
KK: We could have you on again, and it might be different?
DR: Sure. I’d love to.
KK: Well, thanks, Dave, we certainly appreciate you being here.
DR: I should say if people would like to read about this, I did write an article, “Circular Reasoning: Who first proved that c/d is a constant?” Some of the things I talked about are in that article. Mathematicians can find it in the College Math Journal, and it just recently was included in Princeton University Press’s book The Best Writing on Mathematics, 2016 edition. You can find that wherever, your local bookstore.
EL: And where else can our loyal listeners find you online, Dave?
DR: I spend a lot of time on Twitter. I’m @divbyzero. I blog occasionally at divisbyzero.com.
EL: OK.
DR: That’s where I’d recommend finding me.
KK: Cool.
EL: All right. Well, thanks for being here.
DR: Thank you for asking me. It was a pleasure talking to you.
Kevin Knudson: Welcome to My Favorite Theorem. I’m Kevin Knudson, professor of mathematics at the University of Florida, and I’m joined by my cohost.
Evelyn Lamb: I’m Evelyn Lamb. I’m a freelance writer currently based in Paris.
KK: Yeah, Paris. Paris is better than Gainesville. I mean, Gainesville’s nice and everything.
EL: Depends on how much you like alligators.
KK: I don’t like alligators that much.
EL: OK.
KK: This episode, we’re thrilled to welcome Amie Wilkinson of the University of Chicago. Amie’s a fantastic mathematician. Say hi, Amie, and tell everyone about yourself.
AW: Hi, everyone. So Kevin and I go way back. I’m a professor at the University of Chicago. Kevin and I first met when we were pretty fresh out of graduate school. We were postdocs at Northwestern, and now we’ve kind of gone our separate ways but have stayed in touch over the years.
KK: And, let’s see, my son and your daughter were born the same very hot summer in Chicago.
AW: Yeah, that’s right.
KK: That’s a long time ago.
AW: Right. And they’re both pretty hot kids.
KK: They are, yes. So, Amie, you haven’t shared what your favorite theorem is with Evelyn and me, so this will be a complete surprise for us, and we’ll try to keep up. So what’s your favorite theorem?
AW: Fundamental theorem of calculus.
KK: Yes.
EL: It’s a good theorem.
KK: I like that theorem. I just taught calc one, so this is fresh in my mind. I can work with this.
AW: Excellent. Probably fresher than it is in my mind.
EL: Can you tell us, remind our listeners, or tell our listeners what the fundamental theorem of calculus is?
AW: The fundamental theorem of calculus is a magic theorem as far as I’m concerned, that relates two different concepts: differentiation and integration.
So integration roughly is the computation of area, like the area of a square, area of the inside of a triangle, and so on. But you can make much more general computations of area like Archimedes did a long time ago, the area inside of a curve, like the area inside of a circle. There’s long been built up, going back to the Greeks, this notion of area, and even ways to compute it. That’s called integration.
Differentiation, on the other hand, it has to do with motion. In its earliest forms, to differentiate a function means to compute its slope, or speed, velocity. It’s a computation of velocity. It’s a way of measuring instantaneous motion. Both of these notions go way back, to the Greeks in the case of area, back to the 15th century and the people at Oxford for the computation of speed, and it wasn’t until the 17th century that the two were connected. First by someone named James Gregory, and not long after, sort of concurrently, by Isaac Barrow, who was the advisor of Isaac Newton. Newton was the one who really formalized the connection between the two.
EL: Right, but this wasn’t just a lightning bolt that suddenly came from Newton, but it had been building up for a while.
AW: Building up, actually in some sense I think it was a lightning bolt, in the sense that all of the progress happened within maybe a 30-year period, so in the world of mathematics, that’s sort of, you could even say that’s a fad or a trend. Someone does something, and you’re like, oh my god, let’s see what we can do with this. It’s an amazing insight that the two are connected.
The most concrete illustration of this is actually one I read on Wikipedia, which says that suppose you’re in a car, and you’re not the driver because otherwise this would be a very scary application. You can’t see outside of the car, but you can see the odometer. Sorry, you can’t see the odometer either. Someone’s put tape over it. But you can see the speedometer. And that’s telling you your velocity at every second. Every instant there’s a number. And what the fundamental theorem of calculus says is that if you add up all of those numbers over a given interval of time, it’s going to tell you how far you’ve traveled.
KK: Right.
AW: You could just take the speed that you see on the speedometer the minute you start driving the car and then multiply by the amount of time that you travel, and that’ll give you kind of an approximate idea, but you instead could break the time into two pieces and take the velocity that you see at the start and the velocity that you see at the midpoint, and take the average of those two velocities, multiplied by the amount of time, and that’ll give you a better sense. And basically it says to compute the average velocity multiplied by the time, and you’re going to get how far you’ve gone. That’s basically what the fundamental theorem of calculus means.
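A small Python sketch of that taped-over-odometer scenario (the speed profile and sample counts are made up for illustration, not from the episode): summing speedometer readings times short time steps approximates the distance traveled, and the approximation improves as the steps shrink, which is the fundamental theorem at work.

```python
def speed(t):
    """A made-up speed profile in km/h, with t in hours: v(t) = 60 + 30t."""
    return 60 + 30 * t

def exact_distance(t):
    """Antiderivative of v, so s(t) = 60t + 15t^2 km; the 'true' odometer reading."""
    return 60 * t + 15 * t ** 2

def distance_from_speedometer(n_steps, total_time=2.0):
    """Add up v(t_i) * dt over n equal time steps (a left-endpoint Riemann sum)."""
    dt = total_time / n_steps
    return sum(speed(i * dt) * dt for i in range(n_steps))

for n in (2, 10, 100, 1000):
    print(f"{n:5d} samples: {distance_from_speedometer(n):8.3f} km  (exact: {exact_distance(2.0):.1f} km)")
```

With 2 samples the estimate is 150 km; with 1000 samples it is 179.94 km, closing in on the exact 180 km.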
KK: So here’s my own hot take on the fundamental theorem: I think it’s actually named incorrectly. I think the mean value theorem is the real fundamental theorem of calculus.
AW: Ah-ha.
KK: If you think about the fundamental theorem, it’s actually a pretty quick corollary to the mean value theorem.
AW: Right.
KK: Which essentially just describes, well, the version of the fundamental theorem that calculus students remember, namely that to compute a definite integral, “all you have to do” (and our listeners can’t see me doing the air quotes) is find the antiderivative of the function, and we know how hard that problem is. That’s a pretty quick corollary of the mean value theorem, basically by the process you just described, right? You’ve got your function, and you’re trying to compute the definite integral, so what do you do? Well, you take a Riemann sum, chop it into pieces. Then the mean value theorem says that over each subinterval, there’s some point in there where the derivative equals the average rate of change over that little subinterval. And so you replace each piece with that, and that’s how you see the fundamental theorem just drop out. The Riemann sum is essentially just saying, OK, you find the antiderivative and that’s the story. So I used to sort of joke, I always joke with my students, that one of these days I’m going to write an advanced calculus book sort of like “Where’s Waldo,” but it’s going to be “Where’s the Mean Value Theorem?”
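Written out, Kevin’s argument goes like this: pick a partition a = x₀ < x₁ < ⋯ < xₙ = b, apply the mean value theorem to the antiderivative F on each subinterval to get a point cᵢ in (xᵢ₋₁, xᵢ) with F(xᵢ) − F(xᵢ₋₁) = F′(cᵢ)(xᵢ − xᵢ₋₁), and watch the sum telescope.

```latex
F(b)-F(a)
= \sum_{i=1}^{n} \bigl( F(x_i)-F(x_{i-1}) \bigr)
= \sum_{i=1}^{n} F'(c_i)\,(x_i-x_{i-1})
= \sum_{i=1}^{n} f(c_i)\,\Delta x_i .
```

The right-hand side is a Riemann sum for ∫ₐᵇ f(x) dx, while the left-hand side does not depend on the partition at all, so for integrable f, refining the partition gives F(b) − F(a) = ∫ₐᵇ f(x) dx, which is the fundamental theorem.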
AW: I like that.
KK: Whenever you teach advanced calculus for real, not just that first course, you start to see the mean value theorem everywhere.
AW: See, I think of the mean value theorem as being the flip side of the fundamental theorem of calculus. To me, what is the mean value theorem? The mean value theorem is a movie that I saw in high school calculus that was probably filmed in, like, 1960-something.
KK: Right. On a movie projector?
AW: Yeah, on a movie projector.
KK: A lot of our listeners won’t know what that is.
AW: It’s a very simple little story. A guy’s driving, again it’s a driving analogy.
KK: Sure, I use these all the time too.
AW: And he stops at a toll booth to get his ticket, and the ticket is stamped with the time that he crosses the tollbooth, and then he’s driving and driving, and he gets to the other tollbooth and hands the ticket to the toll-taker, and the toll-taker says, “You’ve been speedin’. The reason I know this is the mean value theorem.” He says it just like that, “The mean value theorem.” I wish I could find that movie. I’m sure I could. It’s so brilliant. What that’s saying is if I know the distance I’ve traveled from A to B, I could calculate what the average speed is by just taking, OK, I know how much time it took. So that second toll-taker knows (a) how much time it took, and (b) the distance, because he knows where the other tollbooth is, right? And so he computes the average speed, and what the mean value theorem says is somewhere during that trip, you had to be traveling the average speed.
KK: Right.
AW: So, it’s sort of like I can do speed from distance, so if you took too little time to travel the distance, you had to be speeding at some point, which is so beautiful. That’s sort of the flip side. If you know the distance and the amount of time, then you know the average speed. Whereas the first illustration I gave is you’re in this car, and you can’t see outside or the odometer, but you know the average speed, and that tells you the distance.
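With made-up numbers (ours, not from the episode): suppose the tollbooths are 150 miles apart and the two time stamps are exactly two hours apart.

```latex
\text{average speed} \;=\; \frac{s(2)-s(0)}{2-0} \;=\; \frac{150\ \text{mi}}{2\ \text{h}} \;=\; 75\ \text{mph}.
```

The mean value theorem then hands the toll-taker a moment c with s′(c) = 75 mph; if the posted limit is 65 mph, the driver was provably speeding at that instant, even though nobody ever watched the speedometer.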
KK: So maybe they’re the same theorem.
EL: They’re all the same.
AW: In some sense, right.
KK: I think this is why I still love teaching calculus. I’ve been doing it for, like, 25 years, but I never get tired of it. It’s endlessly fascinating.
AW: That’s wonderful. We need more calculus teachers like you.
KK: I don’t know about that, but I do still love it.
AW: Or at least with your attitude.
KK: Right. There we go. So this is actually, the fundamental theorem is just sort of a one-dimensional version. There are generalizations, yes?
AW: Yes, there are. That gets to my favorite generalization of the fundamental theorem of calculus, which is Stokes' Theorem.
KK: Yeah.
AW: So what does Stokes’ Theorem do? Well, for one thing, it explains why π appears both in the formula for the circumference of a circle and in the formula for the area of the circle, inside of the circle.
KK: That’s cool.
AW: Right? One is πr², and the other is 2πr, and roughly speaking, suppose you differentiate with respect to r. This is sort of bogus, but it’s correct.
KK: Let’s go with it.
AW: You differentiate πr², you get 2πr. The point is that Stokes’ Theorem, like the fundamental theorem of calculus, relates two quantities of a geometric object, in this case a circle. One is an integral inside the object, and the other is an integral on the boundary of the object. And what are you integrating? So Stokes’ Theorem says if you have something called a form, defined on an object, and you differentiate the form, then the integral of the derivative of the form on the inside is the integral of the original form on the boundary.
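In symbols, using differential-forms notation that goes a little beyond what is said on air: Stokes’ theorem says that for a suitable form ω on a region M with boundary ∂M, the integral of dω over M equals the integral of ω over ∂M. Taking M to be the disk of radius r and ω = ½(x dy − y dx), whose derivative dω = dx ∧ dy is the area form, ties the two circle formulas together.

```latex
\int_{M} d\omega = \int_{\partial M} \omega
\quad\Longrightarrow\quad
\pi r^{2}
= \int_{\text{disk}} dx\wedge dy
= \frac{1}{2}\oint_{\text{circle}} \bigl(x\,dy - y\,dx\bigr)
= \frac{1}{2}\,(2\pi r)\, r .
```

The boundary integral is evaluated by parametrizing the circle as (r cos t, r sin t), and the final expression is half the circumference times the radius, the same quantity from the Archimedes discussion earlier in these transcripts.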
EL: Yeah.
AW: And the best way to illustrate this is with a picture, I’m afraid. It’s a beautiful, the formula itself has this beautiful symmetry to it.
EL: Yeah. Well, our listeners will be able to see that online when we post this, so we’ll have a visual aid.
AW: OK. So Stokes' Theorem establishes the duality of differentiation on the one hand, which is like analysis-calculus, right, and taking the boundary of an object on the other hand.
KK: That’s geometry, right.
AW: And boundary we denote by something that looks like a d, but it’s sort of curly, and we call it del. And differentiation we denote by d. The point is that those two operations can be switched and you get the same thing. You switch those operations in two different places, you get the same thing. That duality leads to differential topology. I mean, it’s just… The next theorem that’s amazing is de Rham’s theorem that comes out of that.
KK: Let’s not go that far.
AW: OK.
KK: It’s remarkable. You think, in calculus 3, at the very end we teach students Stokes’ Theorem, but we sort of get there incrementally, right? We teach Green’s theorem in the plane, and then we give them the divergence theorem, right, which is still the same. They’re all the same theorem, and we never really tie it together really well, and we never go, oh, by the way, if we would unify this idea, we’d say, by the way, this is really just the fundamental theorem of calculus.
AW: Right.
KK: If you take your manifold to be a closed interval on the real line. So this makes me wonder if we need to start modernizing the calculus curriculum. On the other hand, then that gets a little New Math-y, right?
AW: No, no, I think we should totally normalize the curriculum in this way.
KK: Do you?
AW: Yeah, sure. It depends on what level we’re talking about, obviously, but I’ve always found that, OK, so, I’m going to confess the one time I taught multivariable calculus to “regular” students — granted, this was ages ago — I was so irritated by the current curriculum I couldn’t hide it.
KK: Oh, I see.
AW: But I’ve taught, lots and lots of times, multivariable calculus to somewhat more advanced students, to honors students who might become math majors, might not. And I always adopt this viewpoint, that the fundamental theorem of calculus is relating your object — your geometric object is just an interval, and its boundary is just two points, and differentiation-integration connects the difference of values of functions at two points with the integral over the interval.
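That viewpoint in one line (the signs on the boundary points are the usual orientation convention): the interval [a, b] has boundary {a, b}, counted with a minus sign at a and a plus sign at b, and the generalized Stokes’ theorem applied to a function f collapses to the ordinary fundamental theorem.

```latex
\int_{[a,b]} df \;=\; \int_{\partial[a,b]} f \;=\; f(b) - f(a),
\qquad\text{i.e.}\qquad
\int_{a}^{b} f'(x)\,dx \;=\; f(b)-f(a).
```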
KK: Then that gets to the question of, is that the right message for everyone? I could imagine this does work well with students who might want to be math majors. But in an engineering school, for example. I haven’t taught multivariable in maybe 15 years, but I’m tending to aim at engineers. But engineers, they don’t work outside of three dimensions, for the most part. Would this really be the right way to go? I don’t know.
AW: First of all, it’s good for turning students who are interested in calculus, who are interested in math, into math majors. So for me, that’s an effective tool.
KK: I absolutely believe that.
AW: Yeah, I don’t know about engineering students. They really have a distinct set of needs.
KK: Right.
AW: I mean, social scientists, for example, work regularly in very high dimensions, and I have taught this material to social scientists back at Northwestern, and that was also, I think, pretty successful.
KK: Interesting. Well, that’s a good theorem. We love the fundamental theorem around here.
EL: The best things in life are often better together. So one of the things we like to do on My Favorite Theorem is to ask our guests to pick a pairing for their theorem, a fine wine or tea, beer, ice cream, piece of music, so what would you like to pair with the Fundamental Theorem of Calculus?
AW: Something like a mango, maybe.
EL: A mango!
AW: Something where you have this organic, beautiful shape that, if you wanted to understand it analytically, you would have to use calculus. So first of all, mango is literally my favorite.
KK: I love them too. Oh, man.
AW: Ripe mango. It has to be good. Bad mango is torture.
KK: This is one of the perks of living in Florida. We have good mangoes here.
AW: What I love about the mango is it’s a natural form that is truly not spherical. It’s a fruit that has this clearly organic and very smooth shape. But to describe it, I don’t even know.
KK: It’s not a solid of revolution.
AW: I don’t know why it grows like that.
KK: Well the pit is weird, right? The pit’s sort of flat.
AW: Yeah.
KK: Why does it grow like that? That’s interesting. Because most things, like an avocado, for example, it’s sort of pear-shaped, and the pit is round.
AW: An avocado is another example of a beautiful organic shape that is not perfectly spherical. So yeah, and I love avocado as well, so maybe I could have a mango-avocado salad.
EL: Oh, yeah. Really getting quite gourmet.
KK: And this goes to the fundamental theorem, right? Because you have to chop that up into pieces, which, I mean.
AW: Right?
KK: It’s sort of the Riemann sum of your two things.
AW: And they’re very hard, both of them are very hard to get the fruit out, reasonably difficult to get the fruit out of the shell.
KK: You know the deal, right? You cut it in half first and then you dice it and scoop it out, right?
AW: You mean with the mango, right?
KK: You do with an avocado, too. Yeah.
AW: You know, I’ve never thought to do that with an avocado.
KK: Yeah, you cut the avocado, take a big knife and just cut it and then split it open, pop the pit out, and then just dice it and scoop it out.
AW: Oh. I usually just scoop and dice, but you’re right. In the mango you do the same, but then you start turning it inside out, and it looks like a hand grenade. So beautiful.
KK: You do the same thing with the avocado, and just scoop it. See?
AW: That’s a really interesting illustration, too, because when you turn inside out the mango, you can see these cubes of fruit that are spreading apart. You sort of can see how by changing the shape of the boundary, you change radically the sort of volume enclosed by the boundary. Because those things spread apart because of the reversed curvature.
EL: Yeah.
KK: Now I’m getting hungry.
AW: Yeah.
EL: Yeah, that’s the problem with these pairings, right? We record an episode, and then we all have to go out to eat.
AW: Of course a more provincial kind of thing, a more everyday object, piece of fruit, would be, as you said, pear. That’s more connected to Isaac Newton.
EL: True, yeah.
AW: Apples.
KK: The apples falling on his head, yeah. Cool. Well, this was fun, Amie. Thanks for joining us. Anything else you want to add? Any projects you want to plug? We try to give everybody a chance to do that. What are you working on these days?
AW: My area is dynamical systems, which…
KK: Is hard!
AW: It’s hard, but it’s also connected very closely. It’s not that hard.
KK: Smale said it’s hard.
AW: It’s connected very closely to the fundamental theorem. I study how things change over time.
KK: Right.
AW: So I’ve been helping out, or I don’t know if I’ve actually been helping, but I’ve been talking a lot with some physicists who build particle accelerators, and we’re trying to use tools from pure mathematics to design these accelerators more effectively.
EL: Oh wow.
AW: To keep the particles inside the accelerator, moving in a focused beam.
EL: Nice.
AW: It’s a direct application of certain areas of smooth dynamical systems.
KK: Very cool. You never know where your career is going to take you.
AW: It’s very fun.
KK: That’s part of the beauty of mathematics, you never know where it’s going to lead you.
EL: Thanks so much for joining us on My Favorite Theorem.
AW: Thank you for having me. It’s been a lot of fun.
KK: Thanks for listening to My Favorite Theorem, hosted by Kevin Knudson and Evelyn Lamb. The music you’re hearing is a piece called Fractalia, a percussion quartet performed by four high school students from Gainesville, Florida. They are Blake Crawford, Gus Knudson, Dell Mitchell, and Baochau Nguyen. You can find more information about the mathematicians and theorems featured in this podcast, along with other delightful mathematical treats, at Kevin’s website, kpknudson.com, and Evelyn’s blog, Roots of Unity, on the Scientific American blog network. We love to hear from our listeners, so please drop us a line at [email protected]. Or you can find us on Facebook and Twitter. Kevin’s handle on Twitter is @niveknosdunk, and Evelyn’s is @evelynjlamb. The show itself also has a Twitter feed. The handle is @myfavethm. Join us next time to learn another fascinating piece of mathematics.
KK: Welcome to My Favorite Theorem. I’m Kevin Knudson, and I’m joined by my cohost.
EL: I’m Evelyn Lamb.
KK: This is Episode 0, in which we’ll lay out our ground rules for what we’re going to do. The idea is every week we’ll have a guest, and that guest will tell us what his or her favorite theorem is, and they’ll tell us some fun things about themselves, and Evelyn had good ideas here. What else are we going to do?
EL: Yeah, well, with any great thing in life, pairings are important. So we’ll find the perfect wine, or ice cream, or work of 19th century German romanticism to include with the theorem. We’ll ask our guests to help us with that.
KK: Since this is episode 0, we thought we should probably set the tone and let you know what our favorite theorems are. I’m going to defer. I’m going to let Evelyn go first here. What’s your favorite theorem?
EL: OK, so we’re recording this on March 23rd, which is Emmy Noether’s birthday, her 135th, to be precise. I feel like I should say Noether’s theorem. It’s a theorem in physics that relates, that says basically conserved quantities in physics come from symmetries in nature. So time translation symmetry yields conservation of energy and things like that. But I’m not going to say that one. I’m sorry, physics, I just like math more. So I’m going to pick the uniformization theorem as my favorite theorem.
KK: I don’t think I know that theorem. Which one is this?
EL: It’s a great theorem. When I was doing math research, I was working in Teichmüller theory, which is related to hyperbolic geometry. This is a theorem about two-dimensional surfaces. The upshot of this theorem is that every two-dimensional surface can be given geometry that is either spherical, flat — so, Euclidean, like the flat plane — or hyperbolic. The uniformization theorem itself is about simply connected Riemann surfaces, the ones with no holes, but using this theorem you can show that 2-d surfaces with any number of holes have one of these kinds of 2-d geometry. This is a great theorem. I just love that part of topology where you’re classifying surfaces and everything. I think it’s nice. A little of the history is that it was conjectured by Poincaré in 1882 and Klein in 1883. I think the first proof was by Poincaré in the early 1900s. There are a lot of proofs of it that come from different approaches.
KK: Now that you tell me what the theorem is, of course I know what it is. Being a topologist, I know how to classify surfaces, I think. That is a great theorem. There’s so much going on there. You can think about Riemann surfaces as quotients of hyperbolic space, and you have all this fun geometry going. I love that theorem. In fact, I’m teaching our graduate topology course this year, and I didn’t do this. I’m sorry. I had to get through homology and cohomology. So yeah, surfaces are classified. We know surfaces. So what are you going to pair this with?
EL: So my pairing is Neapolitan ice cream. I’m going a bit literal with this. Neapolitan ice cream is the ice cream that has part of it vanilla, part of it chocolate, and part of it strawberry. So this theorem says that surfaces come in three flavors.
KK: Nice.
EL: When I was a little kid, when we had our birthday parties at home, my mom always let us pick what ice cream we wanted to have, and I always picked Neapolitan so that if my friends liked one of the flavors but not the others, they could have whichever flavor they wanted.
KK: You’re too kind.
EL: Really, I’m just such a good-hearted person.
KK: Clearly.
EL: Yeah, Neapolitan. Three flavors of surfaces, three flavors of ice cream.
KK: Nice. Although nobody ever eats the strawberry, right?
EL: Yeah, I love strawberry ice cream now, but yeah, when I was a little kid chocolate and vanilla were a little more my thing.
KK: I remember my mother would sometimes buy the Neapolitan, and I remember the strawberry would just sit there, uneaten, until it got freezer burn, and we just threw it away at that point.
EL: I guess the question is, which of the kinds of geometry is strawberry?
KK: Well, vanilla is clearly flat, right?
EL: Yeah, that’s good. I guess that means strawberry must be spherical.
KK: That seems right. It’s pretty unique, right? Spherical geometry is kind of dull, right? There’s just the sphere. There’s a lot more variation in hyperbolic geometry, right?
EL: Yeah, I guess so. I feel like there are more different kinds of chocolate-flavored ice cream, and hyperbolic, there are so many different hyperbolic surfaces.
KK: Right. Here in Gainesville, we have a really wonderful local ice cream place, and twice a year they have chocolate night, and they have 32 different varieties of chocolate.
EL: Oh my gosh.
KK: So you can go and you can get a ginormous bowl of all 36 flavors if you want, but we usually get a little sample of eight different flavors and try them out. It’s really wonderful. I think that’s the right classification.
EL: OK. So Kevin?
KK: Yes?
EL: What is your favorite theorem?
KK: Well, yeah, I thought about this for a long time, and what I came up with was that my favorite theorem is the ham sandwich theorem. I think it’s largely because it’s got a fun name, right?
EL: Yeah.
KK: And I remember hearing about this theorem as an undergrad for the first time. This was a general topology course, and you don’t prove it in that, I think. You need some algebraic topology to prove this well. I thought, wow, what a cool thing! There’s something called the ham sandwich theorem. So what is the ham sandwich theorem? It says: say you have a ham sandwich, which consists of two pieces of bread and a chunk of ham. And maybe you got a little nuts and you put one piece of bread on top of the fridge, and one on the floor, and your ham is sitting on the counter, and the theorem is that if you have a long enough knife, you can make one cut and cut all of those things in half. Mathematically what that means is that you have three blobs in space, and there is a single plane that cuts each of those blobs in half exactly. I just thought that was a pretty remarkable theorem, and I still think it’s kind of a remarkable theorem because it’s kind of hard to picture, right? Your blobs could be anywhere. They could be really far apart, as long as they have positive measure, so as long as they’re not some flat thing, they actually have some 3-d-ness to them, then you can actually find a plane that does this. What’s even more fun, I think, is that this is a consequence of the Borsuk-Ulam theorem, which in this case would say that if you have a continuous function from the 2-sphere to the plane, then two antipodal points have to go to the same place. And that’s always a fun theorem to explain to people who don’t know any mathematics, because you can say, somewhere, right now, there are two opposite points on the surface of the earth where the temperature and the humidity are the same, for example.
EL: Yeah.
KK: I love that kind of theorem, where there’s a good physical interpretation for it. And of course there are higher-dimensional analogues, but the idea of the ham sandwich theorem is great. Everybody’s had a ham sandwich, probably, or some kind of sandwich. It doesn’t have to be ham. Maybe we should be more politically correct. What’s a good sandwich?
EL: A peanut butter sandwich is a great sandwich.
KK: A peanut butter sandwich. But the peanut butter is kind of hard to get going, right? You don’t really want that anywhere except in the middle of the sandwich. You don’t want to imagine this blob of peanut butter. The ham you can kind of imagine.
EL: It’s really saying that you don’t even have to remove the peanut butter from the jar. You can leave the peanut butter in the jar.
KK: There you go.
EL: You can cut this sandwich in half.
KK: Your knife’s going to have to cut through the whole jar. It’s gotta be a pretty strong knife.
EL: Yeah. We’re already asking for an arbitrarily long knife.
KK: Yes.
EL: You don’t think our arbitrarily long knife can cut through glass? Come on.
KK: It probably can, you’re right. How silly of me. If we’re being so silly and hyperbolic, we might as well.
EL: We’re mathematicians, after all.
KK: You’re right, we are. So I thought about the pairing, too. Basically, I’ve got a croque monsieur, right?
EL: Right.
KK: You’re in France. You probably eat these all the time. So what does one have with a croque monsieur? It’s not really fancy food. So I think you’ve got to go with a beer for this, and if I’m getting to choose any beer, we have a wonderful local brewery here, First Magnitude brewery, it’s owned by a good friend of mine. They have a really nice pale ale. It’s called 72 Pale Ale. I invite everyone to look up First Magnitude Brewing on the internet there and check them out. It’s a good beer. Not too hoppy.
EL: OK.
KK: It’s hoppy enough, but it’s not one of those West Coast IPA’s that makes your mouth shrivel up.
EL: Yeah, socks you in the face with the hops.
KK: Yeah, you don’t need all of that.
EL: So actually, if you think of the two pieces of bread as one mass of bread and the ham as its own thing, then you could also bisect the bread, the ham, and the beer with one knife.
KK: That’s right, we could do that.
EL: Yeah, if you really wanted to make sure to eat your meal in two identical halves.
KK: Right. So you have vanilla donuts and balls of chocolate, no, no, the donuts, wait a minute. The hyperbolic spaces were chocolate. This is starting to break down. But the flat geometry is the plane. But there’s a flat torus too, right? So you could have a flat donut, or a flat plane. Very cool. This is fun. I think we’re going to have a good time doing this.
EL: I think so too. And I think we’re going to end each episode hungry.
KK: It sounds that way, yeah. In the weeks to come, we have a pretty good lineup of interesting people from all areas of mathematics and all parts of the world, hopefully. I’m excited about this project. So thanks, Evelyn, for coming along with me on this.
EL: Yeah. Thank you for inviting me. I’m looking forward to this.
KK: Until next time, this has been My Favorite Theorem.
KK: Thanks for listening to My Favorite Theorem, hosted by Kevin Knudson and Evelyn Lamb. The music you’re hearing is a piece called Fractalia, a percussion quartet performed by four high school students from Gainesville, Florida. They are Blake Crawford, Gus Knudson, Dell Mitchell, and Baochau Nguyen. You can find more information about the mathematicians and theorems featured in this podcast, along with other delightful mathematical treats, at Kevin’s website, kpknudson.com, and Evelyn’s blog, Roots of Unity, on the Scientific American blog network. We love to hear from our listeners, so please drop us a line at [email protected]. Or you can find us on Facebook and Twitter. Kevin’s handle on Twitter is @niveknosdunk, and Evelyn’s is @evelynjlamb. The show itself also has a Twitter feed. The handle is @myfavethm. Join us next time to learn another fascinating piece of mathematics.