Deeply researched, technical interviews with experts thinking about AI and technology.
thegradientpub.substack.com
The podcast The Gradient: Perspectives on AI is created by Daniel Bashir.
Episode 141
I spoke with Professor Philip Goff about:
* What a “post-Galilean” science of consciousness looks like
* How panpsychism helps explain consciousness and the hybrid cosmopsychist view
Enjoy!
Philip Goff is a British author, idealist philosopher, and professor at Durham University whose research focuses on philosophy of mind and consciousness, and in particular on how consciousness can be made part of the scientific worldview. He is the author of multiple books, including Consciousness and Fundamental Reality, Galileo's Error: Foundations for a New Science of Consciousness, and Why? The Purpose of the Universe.
Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:05) Goff vs. Carroll on the Knowledge Arguments and explanation
* (08:00) Preferences for theories
* (12:55) Curiosity (Grounding, Essence) and the Knowledge Argument
* (14:40) Phenomenal transparency and physicalism vs. anti-physicalism
* (29:00) How exactly does panpsychism help explain consciousness
* (30:05) The argument for hybrid cosmopsychism
* (36:35) “Bare” subjects / subjects before inheriting phenomenal properties
* (40:35) Bundle theories of the self
* (43:35) Fundamental properties and new subjects as causal powers
* (50:00) Integrated Information Theory
* (55:00) Fundamental assumptions in hybrid cosmopsychism
* (1:00:00) Outro
Links:
* Philip’s homepage and Twitter
* Papers
* Curiosity (Grounding, Essence) and the Knowledge Argument
Hi everyone!
If you’re a new subscriber or listener, welcome.
If you’re not new, you’ve probably noticed that things have slowed down a bit on our end recently. Hugh Zhang, Andrey Kurenkov, and I sat down to recap some of The Gradient’s history, where we are now, and how things will look going forward.
To summarize and give some context:
The Gradient has been around for about six years now. We began as an online magazine, and started producing our own newsletter and podcast about four years ago. We run with a team of volunteers, and the bit of money we take in through Substack goes toward subscriptions to the tools we need and lets us pay ourselves a little; that has been enough to keep this going for quite some time.
Our team has less bandwidth than we’d like right now (and I’ll admit that at least some of us are running on fumes…), so we’ll be making a few changes:
* Magazine: We’re going to be scaling down our editing work on the magazine. While we won’t be accepting pitches for unwritten drafts for now, if you have a full piece that you’d like to pitch to us, we’ll consider posting it. If you’ve reached out about writing and haven’t heard from us, we’re really sorry. We’ve tried a few different arrangements to manage the pipeline of articles we have, but it’s been difficult to make it work. We still want this to be a place to promote good work and writing from the ML community, so we intend to continue using this Substack for that purpose. If we have more editing bandwidth on our team in the future, we hope to pick that editing work back up.
* Newsletter: We’ll aim to continue the newsletter as before, but with a new “Best from the Community” section highlighting posts from the community. We’ll set up a way for you to send in articles you’d like featured, but for now you can reach us at [email protected].
* Podcast: I’ll be continuing this (at a slower pace), but will eventually transition it away from The Gradient given its expanded range of topics. If you’re interested in following along, it might be worth subscribing on another player like Apple Podcasts or Spotify, or using the RSS feed.
* Sigmoid Social: We’ll keep this alive as long as there’s financial support for it.
If you like what we do and/or want to help us out in any way, do reach out to [email protected]. We love hearing from you.
Timestamps
* (0:00) Intro
* (01:55) How The Gradient began
* (03:23) Changes and announcements
* (10:10) More Gradient history! On our involvement, favorite articles, and some plugs
Some of our favorite articles!
There are so many, so this is very much a non-exhaustive list:
* NLP’s ImageNet moment has arrived
* The State of Machine Learning Frameworks in 2019
* Why transformative artificial intelligence is really, really hard to achieve
* An Introduction to AI Story Generation
* The Artificiality of Alignment (I didn’t mention this one in the episode, but it should be here)
Places you can find us!
Hugh:
* Papers/things mentioned!
* A Careful Examination of LLM Performance on Grade School Arithmetic (GSM1k)
* Planning in Natural Language Improves LLM Search for Code Generation
Andrey:
Daniel:
* Personal site (under construction)
Episode 140
I spoke with Professor Jacob Andreas about:
* Language and the world
* World models
* How he’s developed as a scientist
Enjoy!
Jacob is an associate professor at MIT in the Department of Electrical Engineering and Computer Science as well as the Computer Science and Artificial Intelligence Laboratory. His research aims to understand the computational foundations of language learning, and to build intelligent systems that can learn from human guidance. Jacob earned his Ph.D. from UC Berkeley, his M.Phil. from Cambridge (where he studied as a Churchill scholar) and his B.S. from Columbia. He has received a Sloan fellowship, an NSF CAREER award, MIT's Junior Bose and Kolokotrones teaching awards, and paper awards at ACL, ICML and NAACL.
Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (00:40) Jacob’s relationship with grounding fundamentalism
* (05:21) Jacob’s reaction to LLMs
* (11:24) Grounding language — is there a philosophical problem?
* (15:54) Grounding and language modeling
* (24:00) Analogies between humans and LMs
* (30:46) Grounding language with points and paths in continuous spaces
* (32:00) Neo-Davidsonian formal semantics
* (36:27) Evolving assumptions about structure prediction
* (40:14) Segmentation and event structure
* (42:33) How much do word embeddings encode about syntax?
* (43:10) Jacob’s process for studying scientific questions
* (45:38) Experiments and hypotheses
* (53:01) Calibrating assumptions as a researcher
* (54:08) Flexibility in research
* (56:09) Measuring Compositionality in Representation Learning
* (56:50) Developing an independent research agenda and developing a lab culture
* (1:03:25) Language Models as Agent Models
* (1:04:30) Background
* (1:08:33) Toy experiments and interpretability research
* (1:13:30) Developing effective toy experiments
* (1:15:25) Language Models, World Models, and Human Model-Building
* (1:15:56) OthelloGPT’s bag of heuristics and multiple “world models”
* (1:21:32) What is a world model?
* (1:23:45) The Big Question — from meaning to world models
* (1:28:21) From “meaning” to precise questions about LMs
* (1:32:01) Mechanistic interpretability and reading tea leaves
* (1:35:38) Language and the world
* (1:38:07) Towards better language models
* (1:43:45) Model editing
* (1:45:50) On academia’s role in NLP research
* (1:49:13) On good science
* (1:52:36) Outro
Links:
* Jacob’s homepage and Twitter
* Language Models, World Models, and Human Model-Building
* Papers
* Semantic Parsing as Machine Translation (2013)
* Grounding language with points and paths in continuous spaces (2014)
* How much do word embeddings encode about syntax? (2014)
* Translating neuralese (2017)
* Analogs of linguistic structure in deep representations (2017)
* Learning with latent language (2018)
* Learning from Language (2018)
* Measuring Compositionality in Representation Learning (2019)
* Experience grounds language (2020)
* Language Models as Agent Models (2022)
Episode 139
I spoke with Evan Ratliff about:
* Shell Game, Evan’s new podcast, where he creates an AI voice clone of himself and sets it loose.
* The end of the Longform Podcast and his thoughts on the state of journalism.
Enjoy!
Evan is an award-winning investigative journalist, bestselling author, podcast host, and entrepreneur. He’s the author of The Mastermind: A True Story of Murder, Empire, and a New Kind of Crime Lord; the writer and host of the hit podcasts Shell Game and Persona: The French Deception; and the cofounder of The Atavist Magazine, Pop-Up Magazine, and the Longform Podcast. As a writer, he’s a two-time National Magazine Award finalist. As an editor and producer, he’s a two-time Emmy nominee and National Magazine Award winner.
Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:05) Evan’s ambitious and risky projects
* (04:45) Wearing different personas as a journalist
* (08:31) Boundaries and acceptability in using voice agents
* (11:42) Impacts on other people
* (13:12) “The kids these days” — how will new technologies impact younger people?
* (17:12) Evan’s approach to children’s technology use
* (20:05) Techno-solutionism and improvements in medicine, childcare
* (24:15) Evan’s perspective on simulations of people
* (27:05) On motivations for building tech startups
* (30:42) Evan’s outlook for Shell Game’s impact and motivations for his work
* (36:05) How Evan decided to write for a career
* (40:02) How voice agents might impact our conversations
* (43:52) Evan’s experience with Longform and podcasting
* (47:15) Perspectives on doing good interviews
* (52:11) Mimicking and inspiration, developing style
* (57:15) Writers and their motivations, the state of longform journalism
* (1:06:15) The internet and writing
* (1:09:41) On the ending of Longform
* (1:19:48) Outro
Links:
* Shell Game, Evan’s new podcast
Episode 138
I spoke with Meredith Morris about:
* The intersection of AI and HCI and why we need more cross-pollination between AI and adjacent fields
* Disability studies and AI
* Generative ghosts and technological determinism
* Developing a useful definition of AGI
I didn’t get to record an intro for this episode since I’ve been sick.
Enjoy!
Meredith is Director for Human-AI Interaction Research for Google DeepMind and an Affiliate Professor in The Paul G. Allen School of Computer Science & Engineering and in The Information School at the University of Washington, where she participates in the dub research consortium. Her work spans the areas of human-computer interaction (HCI), human-centered AI, human-AI interaction, computer-supported cooperative work (CSCW), social computing, and accessibility. She has been recognized as an ACM Fellow and ACM SIGCHI Academy member for her contributions to HCI.
Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Meredith’s influences and earlier work
* (03:00) Distinctions between AI and HCI
* (05:56) Maturity of fields and cross-disciplinary work
* (09:03) Technology and ends
* (10:37) Unique aspects of Meredith’s research direction
* (12:55) Forms of knowledge production in interdisciplinary work
* (14:08) Disability, Bias, and AI
* (18:32) LaMPost and using LMs for writing
* (20:12) Accessibility approaches for dyslexia
* (22:15) Awareness of AI and perceptions of autonomy
* (24:43) The software model of personhood
* (28:07) Notions of intelligence, normative visions and disability studies
* (32:41) Disability categories and learning systems
* (37:24) Bringing more perspectives into CS research and re-defining what counts as CS research
* (39:36) Training interdisciplinary researchers, blurring boundaries in academia and industry
* (43:25) Generative Agents and public imagination
* (45:13) The state of ML conferences, the need for more cross-pollination
* (46:42) Prestige in conferences, the move towards more cross-disciplinary work
* (48:52) Joon Park Appreciation
* (49:51) Training interdisciplinary researchers
* (53:20) Generative Ghosts and technological determinism
* (57:06) Examples of generative ghosts and clones, relationships to agentic systems
* (1:00:39) Reasons for wanting generative ghosts
* (1:02:25) Questions of consent for generative clones and ghosts
* (1:05:01) Labor involved in maintaining generative ghosts, psychological tolls
* (1:06:25) Potential religious and spiritual significance of generative systems
* (1:10:19) Anthropomorphization
* (1:12:14) User experience and cognitive biases
* (1:15:24) Levels of AGI
* (1:16:13) Defining AGI
* (1:23:20) World models and AGI
* (1:26:16) Metacognitive abilities in AGI
* (1:30:06) Towards Bidirectional Human-AI Alignment
* (1:30:55) Pluralistic value alignment
* (1:32:43) Meredith’s perspective on deploying AI systems
* (1:36:09) Meredith’s advice for younger interdisciplinary researchers
Links:
* Meredith’s homepage, Twitter, and Google Scholar
* Papers
* Mediating Group Dynamics through Tabletop Interface Design
* SearchTogether: An Interface for Collaborative Web Search
* AI and Accessibility: A Discussion of Ethical Considerations
* LaMPost: Design and Evaluation of an AI-assisted Email Writing Prototype for Adults with Dyslexia
Episode 137
I spoke with Davidad Dalrymple about:
* His perspectives on AI risk
* ARIA (the UK’s Advanced Research and Invention Agency) and its Safeguarded AI Programme
Enjoy—and let me know what you think!
Davidad is a Programme Director at ARIA. He was most recently a Research Fellow in technical AI safety at Oxford. He co-invented the top-40 cryptocurrency Filecoin, led an international neuroscience collaboration, and was a senior software engineer at Twitter and multiple startups.
Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (00:36) Calibration and optimism about breakthroughs
* (03:35) Calibration and AGI timelines, effects of AGI on humanity
* (07:10) Davidad’s thoughts on the Orthogonality Thesis
* (10:30) Understanding how our current direction relates to AGI and breakthroughs
* (13:33) What Davidad thinks is needed for AGI
* (17:00) Extracting knowledge
* (19:01) Cyber-physical systems and modeling frameworks
* (20:00) Continuities between Davidad’s earlier work and ARIA
* (22:56) Path dependence in technology, race dynamics
* (26:40) More on Davidad’s perspective on what might go wrong with AGI
* (28:57) Vulnerable world, interconnectedness of computers and control
* (34:52) Formal verification and world modeling, Open Agency Architecture
* (35:25) The Semantic Sufficiency Hypothesis
* (39:31) Challenges for modeling
* (43:44) The Deontic Sufficiency Hypothesis and mathematical formalization
* (49:25) Oversimplification and quantitative knowledge
* (53:42) Collective deliberation in expressing values for AI
* (55:56) ARIA’s Safeguarded AI Programme
* (59:40) Anthropic’s ASL levels
* (1:03:12) Guaranteed Safe AI —
* (1:03:38) AI risk and (in)accurate world models
* (1:09:59) Levels of safety specifications for world models and verifiers — steps to achieve high safety
* (1:12:00) Davidad’s portfolio research approach and funding at ARIA
* (1:15:46) Earlier concerns about ARIA — Davidad’s perspective
* (1:19:26) Where to find more information on ARIA and the Safeguarded AI Programme
* (1:20:44) Outro
Links:
* Davidad’s Twitter
* Papers
* Davidad’s Open Agency Architecture for Safe Transformative AI
* Dioptics: a Common Generalization of Open Games and Gradient-Based Learners (2019)
* Asynchronous Logic Automata (2008)
Episode 136
I spoke with Clive Thompson about:
* How he writes
* Writing about the climate and biking across the US
* Technology culture and persistent debates in AI
* Poetry
Enjoy—and let me know what you think!
Clive is a journalist who writes about science and technology. He is a contributing writer for Wired magazine, and is currently writing his next book about micromobility and cycling across the US.
Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:07) Clive’s life as a Tarantino movie
* (03:07) Boring life and interesting art, life as material for art
* (10:25) Cycling across the US — Clive’s new book on mobility and decarbonization
* (15:07) Turning inward in writing
* (27:21) Including personal experience in writing
* (31:53) Personal and less personal writing
* (36:08) Conveying uncertainty and the “voice from nowhere” in traditional journalism
* (41:10) Finding the natural end of a piece
* (1:02:10) Writing routine
* (1:05:08) Theories of change in Clive’s writing
* (1:12:33) How Clive saw things before the rest of us
* (1:27:00) Automation in software engineering
* (1:31:40) The anthropology of coders, poetry as a framework
* (1:43:50) Proust discourse
* (1:45:00) Technology culture in NYC + interaction between the tech world and other worlds
* (1:50:30) Technological developments Clive wants to see happen (free ideas)
* (2:01:11) Clive’s argument for memorizing poetry
* (2:09:24) How Clive finds poetry
* (2:18:03) Clive’s pursuit of freelance writing and making compromises
* (2:27:25) Outro
Links:
* Selected writing
* The Attack of the Incredible Grading Machine (Lingua Franca, 1999)
* The Know-It-All Machine (Lingua Franca, 2001)
* How to teach AI some common sense (Wired, 2018)
* Blogs to Riches (NY Mag, 2006)
* Clive vs. Jonathan Franzen on whether the internet is good for writing (The Chronicle of Higher Education, 2013)
* The Minecraft Generation (New York Times, 2016)
* What AI College Exam Proctors are Really Teaching Our Kids (Wired, 2020)
* Companies Don’t Need to Be Creepy to Make Money (Wired, 2021)
* Is Sucking Carbon Out of the Air the Solution to Our Climate Crisis? (Mother Jones, 2021)
* AI Shouldn’t Compete with Workers—It Should Supercharge Them (Wired, 2022)
* Back to BASIC—the Most Consequential Programming Language in the History of Computing (Wired, 2024)
Episode 136
I spoke with Judy Fan about:
* Our use of physical artifacts for sensemaking
* Why cognitive tools can be a double-edged sword
* Her approach to scientific inquiry and how that approach has developed
Enjoy—and let me know what you think!
Judy is Assistant Professor of Psychology at Stanford and director of the Cognitive Tools Lab. Her lab employs converging approaches from cognitive science, computational neuroscience, and artificial intelligence to reverse engineer the human cognitive toolkit, especially how people use physical representations of thought — such as sketches and prototypes — to learn, communicate, and solve problems.
Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.
I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :) You can also support upkeep for the full Gradient team/project through a paid subscription on Substack!
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (00:49) Throughlines and discontinuities in Judy’s research
* (06:26) “Meaning” in Judy’s research
* (08:05) Production and consumption of artifacts
* (13:03) Explanatory questions, why we develop visual artifacts, science as a social enterprise
* (15:46) Unifying principles
* (17:45) “Hard limits” to knowledge and optimism
* (21:47) Tensions in different fields’ forms of sensemaking and establishing truth claims
* (30:55) Dichotomies and carving up the space of possible hypotheses, conceptual tools
* (33:22) Cognitive tools and projectivism, simplified models vs. nature
* (40:28) Scientific training and science as process and habit
* (45:51) Developing mental clarity about hypotheses
* (51:45) Clarifying and expressing ideas
* (1:03:21) Cognitive tools as double-edged
* (1:14:21) Historical and social embeddedness of tools
* (1:18:34) How cognitive tools impact our imagination
* (1:23:30) Normative commitments and the role of cognitive science outside the academy
* (1:32:31) Outro
Links:
* Selected papers (there are lots!)
* Overviews
* Drawing as a versatile cognitive tool (2023)
* Using games to understand the mind (2024)
* Socially intelligent machines that learn from humans and help humans learn (2024)
* Research papers
* Communicating design intent using drawing and text (2024)
* Creating ad hoc graphical representations of number (2024)
* Visual resemblance and interaction history jointly constrain pictorial meaning (2023)
* Explanatory drawings prioritize functional properties at the expense of visual fidelity (2023)
* SEVA: Leveraging sketches to evaluate alignment between human and machine visual abstraction (2023)
* Learning to communicate about shared procedural abstractions (2021)
* Visual communication of object concepts at different levels of abstraction (2021)
* Relating visual production and recognition of objects in the human visual cortex (2020)
* Collabdraw: an environment for collaborative sketching with an artificial agent (2019)
* Pragmatic inference and visual abstraction enable contextual flexibility in visual communication (2019)
* Common object representations for visual production and recognition (2018)
Episode 135
I spoke with L. M. Sacasas about:
* His writing and intellectual influences
* The value of asking hard questions about technology and our relationship to it
* What happens when we decide to outsource skills and competency
* Evolving notions of what it means to be human and questions about how to live a good life
Enjoy—and let me know what you think!
Michael is Executive Director of the Christian Study Center of Gainesville, Florida and author of The Convivial Society, a newsletter about technology and society.
He does some of the best writing on technology I’ve had the pleasure to read, and I highly recommend his newsletter.
Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.
I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :) You can also support upkeep for the full Gradient team/project through a paid subscription on Substack!
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:12) On podcasts as a medium
* (06:12) Michael’s writing
* (12:38) Michael’s intellectual influences, contingency
* (18:48) Moral seriousness
* (22:00) Michael’s ambitions for his work
* (26:17) The value of asking the right questions (about technology)
* (34:18) Technology use and the “natural” pace of human life
* (46:40) Outsourcing of skills and competency, engagement with others
* (55:33) Inevitability narratives and technological determinism, the “Borg Complex”
* (1:05:10) Notions of what it is to be human, embodiment
* (1:12:37) Higher cognition vs. the body, dichotomies
* (1:22:10) The body as a starting point for philosophy, questions about the adoption of new technologies
* (1:30:01) Enthusiasm about technology and the cultural milieu
* (1:35:30) Projectivism, desire for knowledge about and control of the world
* (1:41:22) Positive visions for the future
* (1:47:11) Outro
Links:
* Michael’s Substack: The Convivial Society and his book, The Frailest Thing: Ten Years of Thinking about the Meaning of Technology
* Michael’s Twitter
* Essays
* Humanist Technology Criticism
* Waste Your Time, Your Life May Depend On It
* The Stuff of (a Well-Lived) Life
Episode 134
I spoke with Pete Wolfendale about:
* The flaws in longtermist thinking
* Selections from his new book, The Revenge of Reason
* Metaphysics
* What philosophy has to say about reason and AI
Enjoy—and let me know what you think!
Pete is an independent philosopher based in Newcastle. Dr. Wolfendale received both his undergraduate degree and his Ph.D. in Philosophy from the University of Warwick. His Ph.D. thesis offered a re-examination of the Heideggerian Seinsfrage, arguing that Heideggerian scholarship has failed to fully do justice to its philosophical significance, and supplementing the shortcomings in Heidegger’s thought about Being with an alternative formulation of the question. He is the author of Object-Oriented Philosophy: The Noumenon's New Clothes and The Revenge of Reason. His blog is Deontologistics.
Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.
I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :) You can also support upkeep for the full Gradient team/project through a paid subscription on Substack!
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:30) Pete’s experience with (para-)academia, incentive structures
* (10:00) Progress in philosophy and the analytic tradition
* (17:57) Thinking through metaphysical questions
* (26:46) Philosophy of science, uncovering categorical properties vs. dispositions
* (31:55) Structure of thought and the world, epistemological excess
* (49:31) What reason is, relation to language models, semantic fragmentation of AGI
* (1:00:55) Neural net interpretability and intervention
* (1:08:16) World models, architecture and behavior of AI systems
* (1:12:35) Language acquisition in humans and LMs
* (1:15:30) Pretraining vs. evolution
* (1:16:50) Technological determinism
* (1:18:19) Pete’s thinking on e/acc
* (1:27:45) Prometheanism vs. e/acc
* (1:29:39) The Weight of Forever — Pete’s critique of What We Owe the Future
* (1:30:15) Our rich deontological language and longtermism’s limits
* (1:43:33) Longtermism and the opacity of desire
* (1:44:41) Longtermism’s historical narrative and technological determinism, theories of power
* (1:48:10) The “posthuman” condition, language and techno-linguistic infrastructure
* (2:00:15) Type-checking and universal infrastructure
* (2:09:23) Multitudes and selfhood
* (2:21:12) Definitions of the self and (non-)circularity
* (2:32:55) Freedom and aesthetics, aesthetic exploration and selfhood
* (2:52:46) Outro
Links:
* Book: The Revenge of Reason
* Writings / References
* So, Accelerationism, what’s that all about?
Episode 133
I spoke with Peter Lee about:
* His early work on compiler generation, metacircularity, and type theory
* Paradoxical problems
* GPT-4’s impact, Microsoft’s “Sparks of AGI” paper, and responses and criticism
Enjoy—and let me know what you think!
Peter is President of Microsoft Research, where he incubates new research-powered products and lines of business in areas such as artificial intelligence, computing foundations, health, and life sciences. Before joining Microsoft in 2010, he was at DARPA, where he established a new technology office that created operational capabilities in machine learning, data science, and computational social science. Prior to that, he was a professor and the head of the computer science department at Carnegie Mellon University. Peter is a member of the National Academy of Medicine and serves on the boards of the Allen Institute for Artificial Intelligence, the Brotman Baty Institute for Precision Medicine, and the Kaiser Permanente Bernard J. Tyson School of Medicine. He served on President Obama’s Commission on Enhancing National Cybersecurity and has testified before both the US House Science and Technology Committee and the US Senate Commerce Committee. With Carey Goldberg and Dr. Isaac Kohane, he is the coauthor of the best-selling book “The AI Revolution in Medicine: GPT-4 and Beyond.” In 2024, Time magazine named Peter one of the 100 most influential people in health and life sciences.
Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.
I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :) You can also support upkeep for the full Gradient team/project through a paid subscription on Substack!
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (00:50) Basic vs. applied research
* (05:20) Theory and practice in computing
* (10:28) Traditional denotational semantics and semantics engineering in modern-day systems
* (16:47) Beauty and practicality
* (20:40) Metacircularity in the polymorphic lambda calculus: research directions
* (24:31) Understanding the nature of difficulties with metacircularity
* (26:30) Difficulties with reflection, classic paradoxes
* (31:02) Sparks of AGI
* (31:41) Reproducibility
* (38:04) Confirming and disconfirming theories, foundational work
* (42:00) Back and forth between commitments and experimentation
* (51:01) Dealing with responsibility
* (56:30) Peter’s picture of AGI
* (1:01:38) Outro
Links:
* Peter’s Twitter, LinkedIn, and Microsoft Research pages
* Papers and references
* The automatic generation of realistic compilers from high-level semantic descriptions
* Metacircularity in the polymorphic lambda calculus
* A Fresh Look at Combinator Graph Reduction
* Fundamental Research in Engineering
Episode 132
I spoke with Manuel and Lenore Blum about:
* Their early influences and mentors
* The Conscious Turing Machine and what theoretical computer science can tell us about consciousness
Enjoy—and let me know what you think!
Manuel is a pioneer in the field of theoretical computer science and the winner of the 1995 Turing Award in recognition of his contributions to the foundations of computational complexity theory and its applications to cryptography and program checking, a mathematical approach to writing programs that check their work. He worked as a professor of computer science at the University of California, Berkeley until 2001. From 2001 to 2018, he was the Bruce Nelson Professor of Computer Science at Carnegie Mellon University.
Lenore is a Distinguished Career Professor of Computer Science, Emeritus at Carnegie Mellon University and former Professor-in-Residence in EECS at UC Berkeley. She is president of the Association for Mathematical Consciousness Science and a newly elected member of the American Academy of Arts and Sciences. Lenore is internationally recognized for her work in increasing the participation of girls and women in Science, Technology, Engineering, and Math (STEM) fields. She was a founder of the Association for Women in Mathematics, and founding Co-Director (with Nancy Kreinberg) of the Math/Science Network and its Expanding Your Horizons conferences for middle- and high-school girls.
Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.
I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :) You can also support upkeep for the full Gradient team/project through a paid subscription on Substack!
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (03:09) Manuel’s interest in consciousness
* (05:55) More of the story — from memorization to derivation
* (11:15) Warren McCulloch’s mentorship
* (14:00) McCulloch’s anti-Freudianism
* (15:57) More on McCulloch’s influence
* (27:10) On McCulloch and telling stories
* (32:35) The Conscious Turing Machine (CTM)
* (33:55) A last word on McCulloch
* (35:20) Components of the CTM
* (39:55) Advantages of the CTM model
* (50:20) The problem of free will
* (52:20) On pain
* (1:01:10) Brainish / CTM’s multimodal inner language, language and thinking
* (1:13:55) The CTM’s lack of a “central executive”
* (1:18:10) Empiricism and a self, tournaments in the CTM
* (1:26:30) Mental causation
* (1:36:20) Expertise and the CTM model, role of TCS
* (1:46:30) Dreams and dream experience
* (1:50:15) Disentangling components of experience from multimodal language
* (1:56:10) CTM Robot, meaning and symbols, embodiment and consciousness
* (2:00:35) AGI, CTM and AI processors, capabilities
* (2:09:30) CTM implications, potential worries
* (2:17:15) Advice for younger (computer) scientists
* (2:22:57) Outro
Links:
* Manuel’s homepage
* Lenore’s homepage; find Lenore on Twitter (https://x.com/blumlenore) and Linkedin (https://www.linkedin.com/in/lenore-blum-1a47224)
* Articles
* “The ‘Accidental Activist’ Who Changed the Face of Mathematics” — Ben Brubaker’s Q&A with Lenore
* “How this Turing-Award-winning researcher became a legendary academic advisor” — Sheon Han’s profile of Manuel
* Papers (Manuel and Lenore)
* AI Consciousness is Inevitable: A Theoretical Computer Science Perspective
* A Theoretical Computer Science Perspective on Consciousness and Artificial General Intelligence
* References (McCulloch)
Episode 131
I spoke with Professor Kevin Dorst about:
* Subjective Bayesianism and epistemology foundations
* What happens when you’re uncertain about your evidence
* Why it’s rational for people to polarize on political matters
Enjoy—and let me know what you think!
Kevin is an Associate Professor in the Department of Linguistics and Philosophy at MIT. He works at the border between philosophy and social science, focusing on rationality.
Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.
I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :) You can also support upkeep for the full Gradient team/project through a paid subscription on Substack!
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:15) When do Bayesians need theorems?
* (05:52) Foundations of epistemology, metaethics, formal models, error theory
* (09:35) Extreme views and error theory, arguing for/against opposing positions
* (13:35) Changing focuses in philosophy — pragmatic pressures
* (19:00) Kevin’s goals through his research and work
* (25:10) Structural factors in coming to certain (political) beliefs
* (30:30) Acknowledging limited resources, heuristics, imperfect rationality
* (32:51) Hindsight Bias is Not a Bias
* (33:30) The argument
* (35:15) On eating cereal and symmetric properties of evidence
* (39:45) Colloquial notions of hindsight bias, time and evidential support
* (42:45) An example
* (48:02) Higher-order uncertainty
* (48:30) Explicitly modeling higher-order uncertainty
* (52:50) Another example (spoons)
* (54:55) Game theory, iterated knowledge, even higher order uncertainty
* (58:00) Uncertainty and philosophy of mind
* (1:01:20) Higher-order evidence about reliability and rationality
* (1:06:45) Being Rational and Being Wrong
* (1:09:00) Setup on calibration and overconfidence
* (1:12:30) The need for average rational credence — normative judgments about confidence and realism/anti-realism
* (1:15:25) Quasi-realism about average rational credence?
* (1:19:00) Classic epistemological paradoxes/problems — lottery paradox, epistemic luck
* (1:25:05) Deference in rational belief formation, uniqueness and permissivism
* (1:39:50) Rational Polarization
* (1:40:00) Setup
* (1:37:05) Epistemic nihilism, expanded confidence akrasia
* (1:40:55) Ambiguous evidence and confidence akrasia
* (1:46:25) Ambiguity in understanding and notions of rational belief
* (1:50:00) Claims about rational sensitivity — what stories we can tell given evidence
* (1:54:00) Evidence vs presentation of evidence
* (2:01:20) ChatGPT and the case for human irrationality
* (2:02:00) Is ChatGPT replicating human biases?
* (2:05:15) Simple instruction tuning and an alternate story
* (2:10:22) Kevin’s aspirations with his work
* (2:15:13) Outro
Links:
* Professor Dorst’s homepage and Twitter
* Papers
* Hedden: Hindsight bias is not a bias
* Higher-order evidence + (Almost) all evidence is higher-order evidence
* Being Rational and Being Wrong
* ChatGPT and human irrationality
Episode 130
I spoke with David Pfau about:
* Spectral learning and ML
* Learning to disentangle manifolds and (projective) representation theory
* Deep learning for computational quantum mechanics
* Picking and pursuing research problems and directions
David’s work is really (times k for some very large value of k) interesting—I’ve been inspired to descend a number of rabbit holes because of it.
(if you listen to this episode, you might become as cool as this guy)
While I’m at it — I’m still hovering around 40 ratings on Apple Podcasts. It’d mean a lot if you’d consider helping me bump that up!
Enjoy—and let me know what you think!
David is a staff research scientist at Google DeepMind. He is also a visiting professor at Imperial College London in the Department of Physics, where he supervises work on applications of deep learning to computational quantum mechanics. His research interests span artificial intelligence, machine learning and scientific computing.
Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.
I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :) You can also support upkeep for the full Gradient team/project through a paid subscription on Substack!
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (00:52) David Pfau the “critic”
* (02:05) Scientific applications of deep learning — David’s interests
* (04:57) Brain / neural network analogies
* (09:40) Modern ML systems and theories of the brain
* (14:19) Desirable properties of theories
* (18:07) Spectral Inference Networks
* (19:15) Connections to FermiNet / computational physics, a series of papers
* (33:52) Deep slow feature analysis — interpretability and findings on eigenfunctions
* (39:07) Following up on eigenfunctions (there are indeed only so many hours in a day; I have been asking the Substack people if they can ship 40-hour days, but I don’t think they’ve gotten to it yet)
* (42:17) Power iteration and intuitions
* (45:23) Projective representation theory
* (46:00) ???
* (46:54) Geomancer and learning to decompose a manifold from data
* (47:45) we consider the question of whether you will spend 90 more minutes of this podcast episode (there are not 90 more minutes left in this podcast episode, but there could have been)
* (1:08:47) Learning embeddings
* (1:11:12) The “unexpected emergent property” of Geomancer
* (1:14:43) Learned embeddings and disentangling and preservation of topology
* N.B. I still haven’t managed to do this in Colab because I keep crashing my instance when I use s3o4d :(
* (1:21:07) What’s missing from the ~ current (deep learning) paradigm ~
* (1:29:04) LLMs as swiss-army knives
* (1:32:05) RL and human learning — TD learning in the brain
* (1:37:43) Models that cover the Pareto Front (image below)
* (1:46:54) AI accelerators and doubling down on transformers
* (1:48:27) On Slow Research — chasing big questions and what makes problems attractive
* (1:53:50) Future work on Geomancer
* (1:55:35) Finding balance in pursuing interesting and lucrative work
* (2:00:40) Outro
Links:
* Papers
* Natural Quantum Monte Carlo Computation of Excited States (2023)
* Making sense of raw input (2021)
* Integrable Nonparametric Flows (2020)
* Disentangling by Subspace Diffusion (2020)
* Ab initio solution of the many-electron Schrödinger equation with deep neural networks (2020)
* Spectral Inference Networks (2018)
* Connecting GANs and Actor-Critic Methods (2016)
* Learning Structure in Time Series for Neuroscience and Beyond (2015, dissertation)
* Robust learning of low-dimensional dynamics from large neural ensembles (2013)
* Probabilistic Deterministic Infinite Automata (2010)
* Other
Episode 129
I spoke with Dan Hart and Michelle Michael about:
* Developing NSWEduChat, an AI-powered chatbot designed and delivered by the NSW Department of Education for students and teachers.
* The challenges in effectively teaching students as technology develops
* Understanding and defining the importance of the classroom
Enjoy—and let me know what you think!
Dan Hart is Head of AI, and Michelle Michael is Director of Educational Support and Rural Initiatives at the New South Wales (NSW) Department of Education.
Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.
I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :) You can also support upkeep for the full Gradient team/project through a paid subscription on Substack!
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (00:48) How NSWEduChat came to be, educational principles for AI use
* (02:37) Educational environment in New South Wales
* (04:41) How educators have adapted to new challenges for teaching and assessment
* (07:47) Considering technology advancement while teaching and assessing students
* (12:14) Educating teachers and students about how to use AI tools
* (15:03) AI in the classroom and enabling teachers
* (19:44) Product-first thinking for educational AI
* (22:15) Red teaming and testing
* (24:02) Benchmarking, chatbots as an assistant
* (26:35) The importance of the classroom
* (28:10) Media coverage and hype
* (30:35) Measurement and the benchmarking process/methodology
* (34:50) Principles for how chatbots should interact with students
* (44:29) Producing good educational outcomes at scale
* (46:41) Operating with speed and effectiveness while implementing governance
* (49:03) How the experience of building technologies evolves
* (51:45) Identifying good technologists and educators for development and use
* (55:07) Teaching standards and how AI impacts teachers
* (57:01) How technologists incorporate teaching standards and expertise in their work
* (1:00:03) NSWEduChat model details
* (1:02:55) Value alignment for NSWEduChat
* (1:05:40) Practicing caution in filtering chatbot responses
* (1:07:35) Equity and personalized instruction — how NSWEduChat can help
* (1:10:19) Helping students become “the students they could be”
* (1:13:39) Outro
Links:
* Guardian article on NSWEduChat
Episode 129
I spoke with Kristin Lauter about:
* Elliptic curve cryptography and homomorphic encryption
* Standardizing cryptographic protocols
* Machine Learning on encrypted data
* Attacking post-quantum cryptography with AI
Enjoy—and let me know what you think!
Kristin is Senior Director of FAIR Labs North America (2022–present), based in Seattle. Her current research areas are AI4Crypto and Private AI. She joined FAIR (Facebook AI Research) in 2021, after 22 years at Microsoft Research (MSR), where she was Partner Research Manager on the senior leadership team of MSR Redmond. Before joining Microsoft in 1999, she was Hildebrandt Assistant Professor of Mathematics at the University of Michigan (1996–1999). She has been an Affiliate Professor of Mathematics at the University of Washington since 2008. She received her BA (1990), MS (1991), and PhD (1996) in Mathematics from the University of Chicago. She is best known for her work on Elliptic Curve Cryptography, Supersingular Isogeny Graphs in Cryptography, Homomorphic Encryption (SEALcrypto.org), Private AI, and AI4Crypto. She served as President of the Association for Women in Mathematics from 2015 to 2017 and on the Council of the American Mathematical Society from 2014 to 2017.
Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.
I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :) You can also support upkeep for the full Gradient team/project through a paid subscription on Substack!
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:10) Llama 3 and encrypted data — where do we want to be?
* (04:20) Tradeoffs: individual privacy vs. aggregated value in e.g. social media forums
* (07:48) Kristin’s shift in views on privacy
* (09:40) Earlier work on elliptic curve cryptography — applications and theory
* (10:50) Inspirations from algebra, number theory, and algebraic geometry
* (15:40) On algebra vs. analysis and on clear thinking
* (18:38) Elliptic curve cryptography and security, algorithms and concrete running time
* (21:31) Cryptographic protocols and setting standards
* (26:36) Supersingular isogeny graphs (and higher-dimensional supersingular isogeny graphs)
* (32:26) Hard problems for cryptography and finding new problems
* (36:42) Guaranteeing security for cryptographic protocols and mathematical foundations
* (40:15) Private AI: Crypto-Nets / running neural nets on homomorphically encrypted data
* (42:10) Polynomial approximations, activation functions, and expressivity
* (44:32) Scaling up, Llama 2 inference on encrypted data
* (46:10) Transitioning between MSR and FAIR, industry research
* (52:45) An efficient algorithm for integer lattice reduction (AI4Crypto)
* (56:23) Local minima, convergence and limit guarantees, scaling
* (58:27) SALSA: Attacking Lattice Cryptography with Transformers
* (58:38) Learning With Errors (LWE) vs. standard ML assumptions
* (1:02:25) Powers of small primes and faster learning
* (1:04:35) LWE and linear regression on a torus
* (1:07:30) Secret recovery algorithms and transformer accuracy
* (1:09:10) Interpretability / encoding information about secrets
* (1:09:45) Future work / scaling up
* (1:12:08) Reflections on working as a mathematician among technologists
Links:
* Kristin’s Meta, Wikipedia, Google Scholar, and Twitter pages
* Papers and sources mentioned/referenced:
* The Advantages of Elliptic Curve Cryptography for Wireless Security (2004)
* Cryptographic Hash Functions from Expander Graphs (2007, introducing Supersingular Isogeny Graphs)
* Families of Ramanujan Graphs and Quaternion Algebras (2008 — the higher-dimensional analogues of Supersingular Isogeny Graphs)
* Cryptographic Cloud Storage (2010)
* Can homomorphic encryption be practical? (2011)
* ML Confidential: Machine Learning on Encrypted Data (2012)
* CryptoNets: Applying neural networks to encrypted data with high throughput and accuracy (2016)
* A community effort to protect genomic data sharing, collaboration and outsourcing (2017)
* The Homomorphic Encryption Standard (2022)
* Private AI: Machine Learning on Encrypted Data (2022)
* SALSA: Attacking Lattice Cryptography with Transformers (2022)
* SalsaPicante: A Machine Learning Attack on LWE with Binary Secrets
* SALSA VERDE: a machine learning attack on LWE with sparse small secrets
* Salsa Fresca: Angular Embeddings and Pre-Training for ML Attacks on Learning With Errors
* The cool and the cruel: separating hard parts of LWE secrets
* An efficient algorithm for integer lattice reduction (2023)
Episode 128
I spoke with Sergiy Nesterenko about:
* Developing an automated system for designing PCBs
* Difficulties in human and automated PCB design
* Building a startup at the intersection of different areas of expertise
By the way — I hit 40 ratings on Apple Podcasts (and am at 66 on Spotify). It’d mean a lot (really, a lot) if you’d consider leaving a rating or a review. I read everything, and it’s very heartening and helpful to hear what you think.
Enjoy, and let me know what you think!
Sergiy is founder and CEO of Quilter. He spent five years at SpaceX developing radiation-hardened avionics for the second stages of the Falcon 9 and Falcon Heavy rockets, before discovering a big problem: designing printed circuit boards for all the electronics in these rockets was tedious, manual, and error-prone. So in 2019, he founded Quilter to build the next generation of AI-powered tooling for electrical engineers.
I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :)
Reach me at [email protected] for feedback, ideas, guest suggestions.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (00:45) Quilter origins and difficulties in designing PCBs
* (04:12) PCBs and schematic implementations
* (06:40) Iteration cycles and simulations
* (08:35) Octilinear traces and first-principles design for PCBs
* (12:38) The design space of PCBs
* (15:27) Benchmarks for PCB design
* (20:05) RL and PCB design
* (22:48) PCB details, track widths
* (25:09) Board functionality and aesthetics
* (27:53) PCB designers and automation
* (30:24) Quilter as a compiler
* (33:56) Gluing social worlds and bringing together expertise
* (36:00) Process knowledge vs. first-principles thinking
* (42:05) Example boards
* (44:45) Auto-routers for PCBs
* (48:43) Difficulties for scaling to larger boards
* (50:42) Customers and skepticism
* (53:42) On experiencing negative feedback
* (56:42) Maintaining stamina while building Quilter
* (1:00:00) Endgame for Quilter and future directions
* (1:03:24) Outro
Links:
* Quilter homepage
* Other pages/features mentioned:
* Comment from Tom Fleet
Episode 127
I spoke with Christopher Thi Nguyen about:
* How we lose control of our values
* The tradeoffs of legibility, aggregation, and simplification
* Gamification and its risks
Enjoy—and let me know what you think!
C. Thi Nguyen is (as of July 2020) Associate Professor of Philosophy at the University of Utah. His research focuses on how social structures and technology can shape our rationality and our agency. He has published on trust, expertise, group agency, community art, cultural appropriation, aesthetic value, echo chambers, moral outrage porn, and games. He received his PhD from UCLA. Once, he was a food writer for the Los Angeles Times.
I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :)
Reach me at [email protected] for feedback, ideas, guest suggestions.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:10) The ubiquity of James C. Scott
* (06:03) Legibility and measurement
* (12:50) Value capture, classes and measurement
* (17:30) Political value choice in ML
* (23:30) Why value collapse happens
* (33:00) Blackburn, “Hume and Thick Connexions” — projectivism and legibility
* (36:20) Heuristics and decision-making
* (40:08) Institutional classification systems
* (46:55) Back to Hume
* (48:27) Epistemic arms races, stepping outside our conceptual architectures
* (56:40) The “what to do” question
* (1:04:00) Gamification, aesthetic engagement
* (1:14:51) Echo chambers and defining utility
* (1:22:10) Progress, AGI millenarianism
* (disclaimer: I don’t know what’s going to happen with the world, either.)
* (1:26:04) Parting visions
* (1:30:02) Outro
Links:
* Christopher’s Twitter and homepage
* Papers referenced
* Transparency is Surveillance
* Autonomy and Aesthetic Engagement
* Art as a Shelter from Science
* Hume and Thick Connexions (Simon Blackburn)
Episode 126
I spoke with Vivek Natarajan about:
* Improving access to medical knowledge with AI
* How an LLM for medicine should behave
* Aspects of training Med-PaLM and AMIE
* How to facilitate appropriate amounts of trust in users of medical AI systems
Vivek Natarajan is a Research Scientist at Google Health AI, advancing biomedical AI to help scale world-class healthcare to everyone. Vivek is particularly interested in building large language models and multimodal foundation models for biomedical applications, and leads the Google Brain moonshot behind Med-PaLM, Google's flagship medical large language model. Med-PaLM has been featured in Scientific American, The Economist, STAT News, CNBC, Forbes, and New Scientist, among others.
I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :)
Reach me at [email protected] for feedback, ideas, guest suggestions.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (00:35) The concept of an “AI doctor”
* (06:54) Accessibility to medical expertise
* (10:31) Enabling doctors to do better/different work
* (14:35) Med-PaLM
* (15:30) Instruction tuning, desirable traits in LLMs for medicine
* (23:41) Axes for evaluation of medical QA systems
* (30:03) Medical LLMs and scientific consensus
* (35:32) Demographic data and patient interventions
* (40:14) Data contamination in Med-PaLM
* (42:45) Grounded claims about capabilities
* (45:48) Building trust
* (50:54) Genetic Discovery enabled by an LLM
* (51:33) Novel hypotheses in genetic discovery
* (57:10) Levels of abstraction for hypotheses
* (1:01:10) Directions for continued progress
* (1:03:05) Conversational Diagnostic AI
* (1:03:30) Objective Structured Clinical Examination as an evaluative framework
* (1:09:08) Relative importance of different types of data
* (1:13:52) Self-play — conversational dispositions and handling patients
* (1:16:41) Chain of reasoning and information retention
* (1:20:00) Performance in different areas of medical expertise
* (1:22:35) Towards accurate differential diagnosis
* (1:31:40) Feedback mechanisms and expertise, disagreement among clinicians
* (1:35:26) Studying trust, user interfaces
* (1:38:08) Self-trust in using medical AI models
* (1:41:39) UI for medical AI systems
* (1:43:50) Model reasoning in complex scenarios
* (1:46:33) Prompting
* (1:48:41) Future outlooks
* (1:54:53) Outro
Links:
* Vivek’s Twitter and homepage
* Papers
* Towards Expert-Level Medical Question Answering with LLMs (2023)
* LLMs encode clinical knowledge (2023)
* Towards Generalist Biomedical AI (2024)
* AMIE
* Genetic Discovery enabled by an LLM (2023)
Episode 125
False universalism freaks me out. It doesn’t freak me out as a first principle because of epistemic violence; it freaks me out because it works.
I spoke with Professor Thomas Mullaney about:
* Telling stories about your work and balancing what feels meaningful with practical realities
* Destabilizing our understandings of the technologies we feel familiar with, and the work of researching the history of the Chinese typewriter
* The personal nature of research
The Chinese Typewriter and The Chinese Computer are two of the best books I’ve read in a very long time. And they’re not just good and interesting, but important to read, for the history they tell and the ideas and arguments they present—I can’t recommend them and Professor Mullaney’s other work enough.
Tom is Professor of History and Professor of East Asian Languages and Cultures, by courtesy. He is also the Kluge Chair in Technology and Society at the Library of Congress, and a Guggenheim Fellow. He is the author or lead editor of 8 books, including The Chinese Computer, The Chinese Typewriter (winner of the Fairbank prize), Your Computer is on Fire, and Coming to Terms with the Nation: Ethnic Classification in Modern China.
I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :)
Reach me at [email protected] for feedback, ideas, guest suggestions.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:00) “In Their Own Words” interview: on telling stories about your work
* (07:42) Clashing narratives and authenticity/inauthenticity in pursuing your work
* (15:48) Why Professor Mullaney pursued studying the Chinese typewriter
* (18:20) Worldmaking, transforming the physical world to fit our descriptive models
* (30:07) Internal and illegible continuities/coherence in work
* (31:45) The role of a “self”
* (43:06) The 2008 Beijing Olympics and false (alphabetical) universalism, projectivism
* (1:04:23) “Kicking the ladder” and the personal nature of research
* (1:18:07) The “Technolinguistic Chinese Exclusion Act” — the situatedness of historians in their work
* (1:33:00) Is the Chinese typewriter project finished? / on the resolution of problems
* (1:43:35) Outro
Links:
* Professor Mullaney’s homepage and Twitter
* In Their Own Words: Thomas Mullaney
* Books
* The Chinese Computer: A Global History of the Information Age
* The Chinese Typewriter: A History
* Coming to Terms with the Nation: Ethnic Classification in Modern China
Episode 124
You may think you’re doing a priori reasoning, but actually you’re just over-generalizing from your current experience of technology.
I spoke with Professor Seth Lazar about:
* Why managing near-term and long-term risks isn’t always zero-sum
* How to think through axioms and systems in political philosophy
* Coordination problems, economic incentives, and other difficulties in developing publicly beneficial AI
Seth is Professor of Philosophy at the Australian National University, an Australian Research Council (ARC) Future Fellow, and a Distinguished Research Fellow of the University of Oxford Institute for Ethics in AI. He has worked on the ethics of war, self-defense, and risk, and now leads the Machine Intelligence and Normative Theory (MINT) Lab, where he directs research projects on the moral and political philosophy of AI.
Reach me at [email protected] for feedback, ideas, guest suggestions.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (00:54) Ad read — MLOps conference
* (01:32) The allocation of attention — attention, moral skill, and algorithmic recommendation
* (03:53) Attention allocation as an independent good (or bad)
* (08:22) Axioms in political philosophy
* (11:55) Explaining judgments, multiplying entities, parsimony, intuitive disgust
* (15:05) AI safety / catastrophic risk concerns
* (22:10) Superintelligence arguments, reasoning about technology
* (28:42) Attacking current and future harms from AI systems — does one draw resources from the other?
* (35:55) GPT-2, model weights, related debates
* (39:11) Power and economics—coordination problems, company incentives
* (50:42) Morality tales, relationship between safety and capabilities
* (55:44) Feasibility horizons, prediction uncertainty, and doing moral philosophy
* (1:02:28) What is a feasibility horizon?
* (1:08:36) Safety guarantees, speed of improvements, the “Pause AI” letter
* (1:14:25) Sociotechnical lenses, narrowly technical solutions
* (1:19:47) Experiments for responsibly integrating AI systems into society
* (1:26:53) Helpful/honest/harmless and antagonistic AI systems
* (1:33:35) Managing incentives conducive to developing technology in the public interest
* (1:40:27) Interdisciplinary academic work, disciplinary purity, power in academia
* (1:46:54) How we can help legitimize and support interdisciplinary work
* (1:50:07) Outro
Links:
* Resources
* Attention, moral skill, and algorithmic recommendation
Episode 123
I spoke with Suhail Doshi about:
* Why benchmarks aren’t prepared for tomorrow’s AI models
* How he thinks about artists in a world with advanced AI tools
* Building a unified computer vision model that can generate, edit, and understand pixels.
Suhail is a software engineer and entrepreneur known for founding Mixpanel, Mighty Computing, and Playground AI (they’re hiring!).
Reach me at [email protected] for feedback, ideas, guest suggestions.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (00:54) Ad read — MLOps conference
* (01:30) Suhail is *not* in pivot hell but he *is* all-in on 50% AI-generated music
* (03:45) AI and music, similarities to Playground
* (07:50) Skill vs. creative capacity in art
* (12:43) What we look for in music and art
* (15:30) Enabling creative expression
* (18:22) Building a unified computer vision model, underinvestment in computer vision
* (23:14) Enhancing the aesthetic quality of images: color and contrast, benchmarks vs user desires
* (29:05) “Benchmarks are not prepared for how powerful these models will become”
* (31:56) Personalized models and personalized benchmarks
* (36:39) Engaging users and benchmark development
* (39:27) What a foundation model for graphics requires
* (45:33) Text-to-image is insufficient
* (46:38) DALL-E 2 and Imagen comparisons, FID
* (49:40) Compositionality
* (50:37) Why Playground focuses on images vs. 3d, video, etc.
* (54:11) Open source and Playground’s strategy
* (57:18) When to stop open-sourcing?
* (1:03:38) Suhail’s thoughts on AGI discourse
* (1:07:56) Outro
Links:
* Suhail on Twitter
Episode 122
I spoke with Azeem Azhar about:
* The speed of progress in AI
* Historical context for some of the terminology we use and how we think about technology
* What we might want our future to look like
Azeem is an entrepreneur, investor, and adviser. He is the creator of Exponential View, a global platform for in-depth technology analysis, and the host of the Bloomberg Original series Exponentially.
Reach me at [email protected] for feedback, ideas, guest suggestions.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (00:32) Ad read — MLOps conference
* (01:05) Problematizing the term “exponential”
* (07:35) Moore’s Law as social contract, speed of technological growth and impedances
* (14:45) Academic incentives, interdisciplinary work, rational agents and historical context
* (21:24) Monolithic scaling
* (26:38) Investment in scaling
* (31:22) On Sam Altman
* (36:25) Uses of “AGI,” “intelligence”
* (41:32) Historical context for terminology
* (48:58) AI and teaching
* (53:51) On the technology-human divide
* (1:06:26) New technologies and the futures we want
* (1:10:50) Inevitability narratives
* (1:17:01) Rationality and objectivity
* (1:21:13) Cultural affordances and intellectual history
* (1:26:15) Centralized and decentralized AI systems
* (1:32:54) Instruction tuning and helpful/honest/harmless
* (1:39:18) Azeem’s future outlook
* (1:46:15) Outro
Links:
Episode 122
I spoke with Professor David Thorstad about:
* The practical difficulties of doing interdisciplinary work
* Why theories of human rationality should account for boundedness, heuristics, and other cognitive limitations
* Why EA epistemics suck (ok, it’s a little more nuanced than that)
Professor Thorstad is an Assistant Professor of Philosophy at Vanderbilt University, a Senior Research Affiliate at the Global Priorities Institute at Oxford, and a Research Affiliate at the MINT Lab at Australian National University. One strand of his research asks how cognitively limited agents should decide what to do and believe. A second strand asks how altruists should use limited funds to do good effectively.
Reach me at [email protected] for feedback, ideas, guest suggestions.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:15) David’s interest in rationality
* (02:45) David’s crisis of confidence, models abstracted from psychology
* (05:00) Blending formal models with studies of the mind
* (06:25) Interaction between academic communities
* (08:24) Recognition of and incentives for interdisciplinary work
* (09:40) Movement towards interdisciplinary work
* (12:10) The Standard Picture of rationality
* (14:11) Why the Standard Picture was attractive
* (16:30) Violations of and rebellion against the Standard Picture
* (19:32) Mistakes made by critics of the Standard Picture
* (22:35) Other competing programs vs Standard Picture
* (26:27) Characterizing Bounded Rationality
* (27:00) A worry: faculties criticizing themselves
* (29:28) Self-improving critique and longtermism
* (30:25) Central claims in bounded rationality and controversies
* (32:33) Heuristics and formal theorizing
* (35:02) Violations of Standard Picture, vindicatory epistemology
* (37:03) The Reason Responsive Consequentialist View (RRCV)
* (38:30) Objective and subjective pictures
* (41:35) Reason responsiveness
* (43:37) There are no epistemic norms for inquiry
* (44:00) Norms vs reasons
* (45:15) Arguments against epistemic nihilism for belief
* (47:30) Norms and self-delusion
* (49:55) Difficulty of holding beliefs for pragmatic reasons
* (50:50) The Gibbardian picture, inquiry as an action
* (52:15) Thinking how to act and thinking how to live — the power of inquiry
* (53:55) Overthinking and conducting inquiry
* (56:30) Is thinking how to inquire an all-things-considered matter?
* (58:00) Arguments for the RRCV
* (1:00:40) Deciding on minimal criteria for the view, stereotyping
* (1:02:15) Eliminating stereotypes from the theory
* (1:04:20) Theory construction in epistemology and moral intuition
* (1:08:20) Refusing theories for moral reasons and disciplinary boundaries
* (1:10:30) The argument from minimal criteria, evaluating against competing views
* (1:13:45) Comparing to other theories
* (1:15:00) The explanatory argument
* (1:17:53) Parfit and Railton, norms of friendship vs utility
* (1:20:00) Should you call out your friend for being a womanizer?
* (1:22:00) Vindicatory Epistemology
* (1:23:05) Panglossianism and meliorative epistemology
* (1:24:42) Heuristics and recognition-driven investigation
* (1:26:33) Rational inquiry leading to irrational beliefs — metacognitive processing
* (1:29:08) Stakes of inquiry and costs of metacognitive processing
* (1:30:00) When agents are incoherent, focuses on inquiry
* (1:32:05) Indirect normative assessment and its consequences
* (1:37:47) Against the Singularity Hypothesis
* (1:39:00) Superintelligence and the ontological argument
* (1:41:50) Hardware growth and general intelligence growth, AGI definitions
* (1:43:55) Difficulties in arguing for hyperbolic growth
* (1:46:07) Chalmers and the proportionality argument
* (1:47:53) Arguments for/against diminishing growth, research productivity, Moore’s Law
* (1:50:08) On progress studies
* (1:52:40) Improving research productivity and technology growth
* (1:54:00) Mistakes in the moral mathematics of existential risk, longtermist epistemics
* (1:55:30) Cumulative and per-unit risk
* (1:57:37) Back and forth with longtermists, time of perils
* (1:59:05) Background risk — risks we can and can’t intervene on, total existential risk
* (2:00:56) The case for longtermism is inflated
* (2:01:40) Epistemic humility and longtermism
* (2:03:15) Knowledge production — reliable sources, blog posts vs peer review
* (2:04:50) Compounding potential errors in knowledge
* (2:06:38) Group deliberation dynamics, academic consensus
* (2:08:30) The scope of longtermism
* (2:08:30) Money in effective altruism and processes of inquiry
* (2:10:15) Swamping longtermist options
* (2:12:00) Washing out arguments and justified belief
* (2:13:50) The difficulty of long-term forecasting and interventions
* (2:15:50) Theory of change in the bounded rationality program
* (2:18:45) Outro
Links:
* David’s homepage and Twitter and blog
* Papers mentioned/read
* Bounded rationality and inquiry
* Why bounded rationality (in epistemology)?
* Against the newer evidentialists
* The accuracy-coherence tradeoff in cognition
* There are no epistemic norms of inquiry
* Global priorities and effective altruism
* Against the singularity hypothesis (+ blog posts)
* Three mistakes in the moral mathematics of existential risk (+ blog posts)
Episode 121
I spoke with Professor Ryan Tibshirani about:
* Differences between the ML and statistics communities in scholarship, terminology, and other areas.
* Trend filtering
* Why you can’t just use garbage prediction functions when doing conformal prediction
Ryan is a Professor in the Department of Statistics at UC Berkeley. He is also a Principal Investigator in the Delphi group. From 2011-2022, he was a faculty member in Statistics and Machine Learning at Carnegie Mellon University. From 2007-2011, he did his Ph.D. in Statistics at Stanford University.
Reach me at [email protected] for feedback, ideas, guest suggestions.
The Gradient Podcast on: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:10) Ryan’s background and path into statistics
* (07:00) Cultivating taste as a researcher
* (11:00) Conversations within the statistics community
* (18:30) Use of terms, disagreements over stability and definitions
* (23:05) Nonparametric Regression
* (23:55) Background on trend filtering
* (33:48) Analysis and synthesis frameworks in problem formulation
* (39:45) Neural networks as a specific take on synthesis
* (40:55) Divided differences, falling factorials, and discrete splines
* (41:55) Motivations and background
* (48:07) Divided differences vs. derivatives, approximation and efficiency
* (51:40) Conformal prediction
* (52:40) Motivations
* (1:10:20) Probabilistic guarantees in conformal prediction, choice of predictors
* (1:14:25) Assumptions: i.i.d. and exchangeability — conformal prediction beyond exchangeability
* (1:25:00) Next directions
* (1:28:12) Epidemic forecasting — COVID-19 impact and trends survey
* (1:29:10) Survey methodology
* (1:38:20) Data defect correlation and its limitations for characterizing datasets
* (1:46:14) Outro
Links:
* Ryan’s homepage
* Works read/mentioned
* Nonparametric Regression
* Adaptive Piecewise Polynomial Estimation via Trend Filtering (2014)
* Distribution-free Inference
* Distribution-Free Predictive Inference for Regression (2017)
* Conformal Prediction Under Covariate Shift (2019)
* Conformal Prediction Beyond Exchangeability (2023)
* Delphi and COVID-19 research
* Flexible Modeling of Epidemics
* Real-Time Estimation of COVID-19 Infections
* The US COVID-19 Trends and Impact Survey and Big data, big problems: Responding to “Are we there yet?”
In episode 120 of The Gradient Podcast, Daniel Bashir speaks to Sasha Luccioni.
Sasha is the AI and Climate Lead at HuggingFace, where she spearheads research, consulting, and capacity-building to elevate the sustainability of AI systems. A founding member of Climate Change AI (CCAI) and a board member of Women in Machine Learning (WiML), Sasha is passionate about catalyzing impactful change, organizing events and serving as a mentor to under-represented minorities within the AI community.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach Daniel at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (00:43) Sasha’s background
* (01:52) How Sasha became interested in sociotechnical work
* (03:08) Larger models and theory of change for AI/climate work
* (07:18) Quantifying emissions for ML systems
* (09:40) Aggregate inference vs training costs
* (10:22) Hardware and data center locations
* (15:10) More efficient hardware vs. bigger models — Jevons paradox
* (17:55) Uninformative experiments, takeaways for individual scientists, knowledge sharing, failure reports
* (27:10) Power Hungry Processing: systematic comparisons of ongoing inference costs
* (28:22) General vs. task-specific models
* (31:20) Architectures and efficiency
* (33:45) Sequence-to-sequence architectures vs. decoder-only
* (36:35) Hardware efficiency/utilization
* (37:52) Estimating the carbon footprint of BLOOM and lifecycle assessment
* (40:50) Stable Bias
* (46:45) Understanding model biases and representations
* (52:07) Future work
* (53:45) Metaethical perspectives on benchmarking for AI ethics
* (54:30) “Moral benchmarks”
* (56:50) Reflecting on “ethicality” of systems
* (59:00) Transparency and ethics
* (1:00:05) Advice for picking research directions
* (1:02:58) Outro
Links:
* Sasha’s homepage and Twitter
* Papers read/discussed
* Climate Change / Carbon Emissions of AI Models
* Quantifying the Carbon Emissions of Machine Learning
* Power Hungry Processing: Watts Driving the Cost of AI Deployment?
* Tackling Climate Change with Machine Learning
* Responsible AI
* Stable Bias: Analyzing Societal Representations in Diffusion Models
* Metaethical Perspectives on ‘Benchmarking’ AI Ethics
* Mind your Language (Model): Fact-Checking LLMs and their Role in NLP Research and Practice
In episode 119 of The Gradient Podcast, Daniel Bashir speaks to Professor Michael Sipser.
Professor Sipser is the Donner Professor of Mathematics and a member of the Computer Science and Artificial Intelligence Laboratory at MIT.
He received his PhD from UC Berkeley in 1980 and joined the MIT faculty that same year. He was Chairman of Applied Mathematics from 1998 to 2000 and served as Head of the Mathematics Department 2004-2014. He served as interim Dean of Science 2013-2014 and then as Dean of Science 2014-2020.
He was a research staff member at IBM Research in 1980, spent the 1985-86 academic year on the faculty of the EECS department at Berkeley and at MSRI, and was a Lady Davis Fellow at Hebrew University in 1988. His research areas are in algorithms and complexity theory, specifically efficient error correcting codes, interactive proof systems, randomness, quantum computation, and establishing the inherent computational difficulty of problems. He is the author of the widely used textbook, Introduction to the Theory of Computation (Third Edition, Cengage, 2012).
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach Daniel at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:40) Professor Sipser’s background
* (04:35) On interesting questions
* (09:00) Different kinds of research problems
* (13:00) What makes certain problems difficult
* (18:48) Nature of the P vs NP problem
* (24:42) Identifying interesting problems
* (28:50) Lower bounds on the size of sweeping automata
* (29:50) Why sweeping automata + headway to P vs. NP
* (36:40) Insights from sweeping automata, infinite analogues to finite automata problems
* (40:45) Parity circuits
* (43:20) Probabilistic restriction method
* (47:20) Relativization and the polynomial time hierarchy
* (55:10) P vs. NP
* (57:23) The non-connection between GO’s polynomial space hardness and AlphaGo
* (1:00:40) On handicapping Turing Machines vs. oracle strategies
* (1:04:25) The Natural Proofs Barrier and approaches to P vs. NP
* (1:11:05) Debates on methods for P vs. NP
* (1:15:04) On the possibility of solving P vs. NP
* (1:18:20) On academia and its role
* (1:27:51) Outro
Links:
* Professor Sipser’s homepage
* Papers discussed/read
* Halting space-bounded computations (1978)
* Lower bounds on the size of sweeping automata (1979)
* GO is Polynomial-Space Hard (1980)
* A complexity theoretic approach to randomness (1983)
* Parity, circuits, and the polynomial-time hierarchy (1984)
* A follow-up to Furst-Saxe-Sipser
* The Complexity of Finite Functions (1991)
In episode 118 of The Gradient Podcast, Daniel Bashir speaks to Andrew Lee.
Andrew is co-founder and CEO of Shortwave, a company dedicated to building a better product experience for email, particularly by leveraging AI. He previously co-founded and was CTO at Firebase.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach Daniel at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:43) Andrew’s previous work, Firebase
* (04:48) Benefits of lacking experience in building Firebase
* (08:55) On “abstract reasoning” vs empirical capabilities
* (10:30) Shortwave’s AI system as a black box
* (11:55) Motivations for Shortwave
* (17:10) Why is Google not innovating on email?
* (21:53) Shortwave’s overarching product vision and pivots
* (27:40) Shortwave AI features
* (33:20) AI features for email and security concerns
* (35:45) Shortwave’s AI Email Assistant + architecture
* (43:40) Issues with chaining LLM calls together
* (45:25) Understanding implicit context in utterances, modularization without loss of context
* (48:56) Performance for AI assistant, batching and pipelining
* (55:10) Prompt length
* (57:00) On shipping fast
* (1:00:15) AI improvements that Andrew is following
* (1:03:10) Outro
Links:
* Everything we shipped for AI Launch Week
* A deep dive into the world’s smartest email AI
Episode 117
“You get more of what you engage with. Everyone who complains about coverage should understand that every click, every quote tweet, every argument is registered by these publications as engagement. If what you want is really meaty, dispassionate, balanced, and fair explainers, you need to click on that, you need to read the whole thing, you need to share it, talk about it, comment on it. We get the media that we deserve.”
I spoke with Joss Fong.
Joss is a producer focused on science and technology, and was a founding member of the Vox video team. Her work has been recognized by the AAAS Kavli Science Journalism Awards, the Online Journalism Awards, and the News & Documentary Emmys. She holds a master's degree in science, health, and environmental reporting from NYU.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:32) Joss’s path into videomaking, J-school
* (07:45) Consumption and creation in explainer journalism
* (10:45) Finding clarity in information
* (13:15) Communication of ML research
* (15:55) Video journalism and science communication as separate and overlapping disciplines
* (19:41) Evolution of videos and videomaking
* (26:33) Explaining AI and communicating mental models
* (30:47) Meeting viewers in the middle, competing for attention
* (34:07) Explanatory techniques in Glad You Asked
* (37:10) Storytelling and communicating scientific information
* (40:57) “Is Beauty Culture Hurting Us?” and participating in video narratives
* (46:37) AI beauty filters
* (52:59) Obvious bias in generative AI
* (59:31) Definitions and ideas of progress, humanities and technology
* (1:05:08) “Iterative development” and outsourcing quality control to the public
* (1:07:10) Disagreement about (tech) journalism’s purpose
* (1:08:51) Incentives in newsrooms and journalistic organizations
* (1:12:04) AI for video generation and implications, limits of creativity
* (1:17:20) Skill and creativity
* (1:22:35) Joss’s new YouTube channel!
* (1:23:29) Outro
Links:
* Joss’s website and playlist of selected work
* AI-focused videos
* AI Art, Explained (2022)
* AI can do your homework. Now what? (2023)
* Computers just got a lot better at writing (2020)
* Facebook showed this ad to 95% women. Is that a problem? (2020)
* What facial recognition steals from us (2019)
* The big debate about the future of work (2017)
* AI and Creativity short film for Runway’s AIFF (2023)
* Others
* Is Beauty Culture Hurting Us? from Glad You Asked (2020)
* Joss’s Scientific American videos :)
In episode 116 of The Gradient Podcast, Daniel Bashir speaks to Kate Park.
Kate is the Director of Product at Scale AI. Prior to joining Scale, Kate worked on Tesla Autopilot as the AI team’s first and lead product manager building the industry’s first data engine. She has also published research on spoken natural language processing and a travel memoir.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:11) Kate’s background
* (03:22) Tesla and cameras vs. Lidar, importance of data
* (05:12) “Data is key”
* (07:35) Data vs. architectural improvements
* (09:36) Effort for data scaling
* (10:55) Transfer of capabilities in self-driving
* (13:44) Data flywheels and edge cases, deployment
* (15:48) Transition to Scale
* (18:52) Perspectives on shifting to transformers and data
* (21:00) Data engines for NLP vs. for vision
* (25:32) Model evaluation for LLMs in data engines
* (27:15) InstructGPT and data for RLHF
* (29:15) Benchmark tasks for assessing potential labelers
* (32:07) Biggest challenges for data engines
* (33:40) Expert AI trainers
* (36:22) Future work in data engines
* (38:25) Need for human labeling when bootstrapping new domains or tasks
* (41:05) Outro
Links:
In episode 115 of The Gradient Podcast, Daniel Bashir speaks to Ben Wellington.
Ben is the Deputy Head of Feature Forecasting at Two Sigma, a financial sciences company. Ben has been at Two Sigma for more than 15 years, and currently leads efforts focused on natural language processing and feature forecasting. He is also the author of the data science blog I Quant NY, which has influenced local government policy, including changes in NYC street infrastructure and the design of NYC subway vending machines. Ben is a Visiting Assistant Professor in the Urban and Community Planning program at the Pratt Institute in Brooklyn, where he teaches statistics using urban open data. He holds a Ph.D. in Computer Science from New York University.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:30) Ben’s background
* (04:30) Why Ben was interested in NLP
* (05:48) Ben’s work on translational equivalence, dominant techniques
* (10:14) Scaling, large datasets at Two Sigma
* (12:50) Applying ML techniques to quantitative finance, features in financial ML systems
* (17:27) Baselines and time-dependence in constructing features, human knowledge
* (19:23) Black box models in finance
* (24:00) Two Sigma’s presence in the AI research community
* (26:55) Short- and long-term research initiatives at Two Sigma
* (30:42) How ML fits into Two Sigma’s investment strategy
* (34:05) Alpha and competition in investing
* (36:13) Temporality in data
* (40:38) Challenges for finance/AI and beating the market
* (44:36) Reproducibility
* (49:47) I Quant NY and storytelling with data
* (56:43) Descriptive statistics and stories
* (1:01:05) Benefits of simple methods
* (1:07:11) Outro
Links:
* Ben’s work on translational equivalence and scalable discriminative learning
* Storytelling with data and I Quant NY
“There is this move from generality in a relative sense of ‘we are not as specialized as insects’ to generality in the sense of omnipotent, omniscient, godlike capabilities. And I think there's something very dangerous that happens there, which is you start thinking of the word ‘general’ in completely unhinged ways.”
In episode 114 of The Gradient Podcast, Daniel Bashir speaks to Venkatesh Rao.
Venkatesh is a writer and consultant. He has been writing the widely read Ribbonfarm blog since 2007, and more recently, the popular Ribbonfarm Studio Substack newsletter. He is the author of Tempo, a book on timing and decision-making, and is currently working on his second book, on the foundations of temporality. He has been an independent consultant since 2011, supporting senior executives in the technology industry. His work in recent years has focused on the AI, semiconductor, sustainability, and protocol technology sectors. He holds a PhD in control theory (2003) from the University of Michigan. He is currently based in the Seattle area, and enjoys dabbling in robotics in his spare time. You can learn more about his work at venkateshrao.com.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:38) Origins of Ribbonfarm and Venkat’s academic background
* (04:23) Voice and recurring themes in Venkat’s work
* (11:45) Patch models and multi-agent systems: integrating philosophy of language, balancing realism with tractability
* (21:00) More on abstractions vs. tractability in Venkat’s work
* (29:07) Scaling of industrial value systems, characterizing AI as a discipline
* (39:25) Emergent science, intelligence and abstractions, presuppositions in science, generality and universality, cameras and engines
* (55:05) Psychometric terms
* (1:09:07) Inductive biases (yes I mentioned the No Free Lunch Theorem and then just talked about the definition of inductive bias and not the actual theorem 🤡)
* (1:18:13) LLM training and efficiency, comparing LLMs to humans
* (1:23:35) Experiential age, analogies for knowledge transfer
* (1:30:50) More clarification on the analogy
* (1:37:20) Massed Muddler Intelligence and protocols
* (1:38:40) Introducing protocols and the Summer of Protocols
* (1:49:15) Evolution of protocols, hardness
* (1:54:20) LLMs, protocols, time, future visions, and progress
* (2:01:33) Protocols, drifting from value systems, friction, compiling explicit knowledge
* (2:14:23) Directions for ML people in protocols research
* (2:18:05) Outro
Links:
* Venkat’s Twitter and homepage
* Summer of Protocols and 2024 Call for Applications (apply!)
* Essays discussed
* Patch models and their applications to multivehicle command and control
* From Mediocre Computing
* Magic, Mundanity, and Deep Protocolization
* On protocols
* The Unreasonable Sufficiency of Protocols
* Protocols Don’t Build Pyramids
* Protocols in (Emergency) Time
* Atoms, Institutions, Blockchains
In episode 113 of The Gradient Podcast, Daniel Bashir speaks to Professor Sasha Rush.
Professor Rush is an Associate Professor at Cornell University and a Researcher at HuggingFace. His research aims to develop natural language processing systems that are safe, fast, and controllable. His group is interested primarily in tasks that involve text generation, and they study data-driven probabilistic methods that combine deep-learning based models with probabilistic controls. He is also interested in open-source NLP and deep learning, and develops projects to make deep learning systems safer, clearer, and easier to use.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:47) Professor Rush’s background
* (03:23) Professor Rush’s reflections on prior work—importance of learning and inference
* (04:58) How much engineering matters in deep learning, the Rush vs. Frankle Bet
* (07:12) On encouraging and incubating good research
* (10:50) Features of good research environments
* (12:36) 5% bets in Professor Rush’s research: State-Space Models (SSMs) as an alternative to Transformers
* (15:58) SSMs vs. Transformers
* (18:53) Probabilistic Context-Free Grammars—are (P)CFGs worth paying attention to?
* (20:53) Sequence-level knowledge distillation: approximating sequence-level distributions
* (25:08) Pruning and knowledge distillation — orthogonality of efficiency techniques
* (26:33) Broader thoughts on efficiency
* (28:31) Works on prompting
* (28:58) Prompting and In-Context Learning
* (30:05) Thoughts on mechanistic interpretability
* (31:25) Multitask prompted training enables zero-shot task generalization
* (33:48) How many data points is a prompt worth?
* (35:13) Directions for controllability in LLMs
* (39:11) Controllability and safety
* (41:23) Open-source work, deep learning libraries
* (42:08) A story about Professor Rush’s post-doc at FAIR
* (43:51) The impact of PyTorch
* (46:08) More thoughts on deep learning libraries
* (48:48) Levels of abstraction, PyTorch as an interface to motivate research
* (50:23) Empiricism and research commitments
* (53:32) Outro
Links:
* Research
* Early work / PhD
* Dual Decomposition and LP Relaxations
* Vine Pruning for Efficient Multi-Pass Dependency Parsing
* Improved Parsing and POS Tagging Using Inter-Sentence Dependency Constraints
* Research — interpretable and controllable natural language generation
* Compound Probabilistic Context-Free Grammars for Grammar Induction
* Multitask prompted training enables zero-shot task generalization
* Research — deep generative models
* A Neural Attention Model for Abstractive Sentence Summarization
* Learning Neural Templates for Text Generation
* How many data points is a prompt worth?
* Research — efficient algorithms and hardware for speech, translation, dialogue
* Sequence-Level Knowledge Distillation
* Open-source work
In episode 112 of The Gradient Podcast, Daniel Bashir speaks to Cameron Jones and Sean Trott.
Cameron is a PhD candidate in the Cognitive Science Department at the University of California, San Diego. His research compares how humans and large language models process language about world knowledge, situation models, and theory of mind.
Sean is an Assistant Teaching Professor in the Cognitive Science Department at the University of California, San Diego. His research interests include probing large language models, ambiguity in languages, how ambiguous words are represented, and pragmatic inference. He previously completed his PhD at UCSD.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:55) Cameron’s background
* (06:00) Sean’s background
* (08:15) Unexpected capabilities of language models and the need for embodiment to understand meaning
* (11:05) Interpreting results of Turing tests, separating what humans and LLMs do when behaving as though they “understand”
* (14:27) Internal mechanisms, interpretability, how we test theories
* (16:40) Languages are efficient, but for whom?
* (17:30) Initial motivations: lexical ambiguity
* (19:20) The balance of meanings across wordforms
* (22:35) Tension between speaker- and comprehender-oriented pressures in lexical ambiguity
* (25:05) Context and potential vs. realized ambiguity
* (27:15) LLM-ology
* (28:30) Studying LLMs as models of human cognition and as interesting objects of study in their own right
* (30:03) Example of explaining away effects
* (33:54) The internalist account of belief sensitivity—behavior and internal representations
* (37:43) LLMs and the False Belief Task
* (42:05) Hypothetical on observed behavior and inferences about internal representations
* (48:05) Distributional Semantics Still Can’t Account for Affordances
* (50:25) Tests of embodied theories and limitations of distributional cues
* (53:54) Multimodal models and object affordances
* (58:30) Language and grounding, other buzzwords
* (59:45) How could we know if LLMs understand language?
* (1:04:50) Reference: as a thing words do vs. ontological notion
* (1:11:38) The Role of Physical Inference in Pronoun Resolution
* (1:16:40) World models and world knowledge
* (1:19:45) EPITOME
* (1:20:20) The different tasks
* (1:26:43) Confounders / “attending” in LM performance on tasks
* (1:30:30) Another hypothetical, on theory of mind
* (1:32:26) How much information can language provide in service of mentalizing?
* (1:35:14) Convergent validity and coherence/validity of theory of mind
* (1:39:30) Interpretive questions about behavior w/r/t theory of mind
* (1:43:35) Does GPT-4 Pass the Turing Test?
* (1:44:00) History of the Turing Test
* (1:47:05) Interrogator strategies and the strength of the Turing Test
* (1:52:15) “Internal life” and personality
* (1:53:30) How should this research impact how we assess / think about LLM abilities?
* (1:58:56) Outro
Links:
* Cameron’s homepage and Twitter
* Research — Language and NLP
* Languages are efficient, but for whom?
* Research — LLM-ology
* Do LLMs know what humans know?
* Distributional Semantics Still Can’t Account for Affordances
* In Cautious Defense of LLM-ology
* Should Psycholinguists use LLMs as “model organisms”?
* (Re)construing Meaning in NLP
* Research — language and grounding, theory of mind, reference [insert other buzzwords here]
* Do LLMs have a “theory of mind”?
* How could we know if LLMs understand language?
* Does GPT-4 Pass the Turing Test?
* The extended mind and why it matters for cognitive science research
* EPITOME
* The Role of Physical Inference in Pronoun Resolution
In episode 111 of The Gradient Podcast, Daniel Bashir speaks to Nicholas Thompson.
Nicholas is the CEO of The Atlantic. Previously, he served as editor-in-chief of Wired and editor of Newyorker.com. Nick also cofounded Atavist, which sold to Automattic in 2018. Publications under Nick’s leadership have won numerous National Magazine Awards and Pulitzer Prizes, and one WIRED story he edited was the basis for the movie Argo. Nick is also the co-founder of Speakeasy AI, a software platform designed to foster constructive online conversations about the world’s most pressing problems.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:12) Nick’s path into journalism
* (03:25) The Washington Monthly — a turning point
* (05:09) Perspectives from different positions in the journalism industry
* (08:16) What is great journalism?
* (09:42) Example from The Atlantic
* (11:00) Other examples/pieces of good journalism
* (12:20) Pieces on aging
* (12:56) Mortality and life-force associated with running — Nick’s piece in WIRED
* (15:30) On urgency
* (18:20) The job of an editor
* (22:23) AI in journalism — benefits and limitations
* (26:45) How AI can help writers, experimentation
* (28:40) Examples of AI in journalism and issues: CNET, Sports Illustrated, Nick’s thoughts on how AI should be used in journalism
* (32:20) Speakeasy AI and creating healthy conversation spaces
* (34:00) Details about Speakeasy
* (35:12) Business pivots and business model trouble
* (35:37) Remaining gaps in fixing conversational spaces
* (38:27) Lessons learned
* (40:00) Nick’s optimism about Speakeasy-like projects
* (43:14) Social simulacra, a “Troll Westworld,” algorithmic adjustments in social media
* (46:11) Lessons and wisdom from journalism about engagement, more on engagement in social media
* (50:27) Successful and unsuccessful futures for AI in journalism
* (54:17) Previous warnings about synthetic media, Nick’s perspective on risks from synthetic media in journalism
* (57:00) Stop trying to build AGI
* (59:13) Outro
Links:
* Nicholas’s Twitter and website
* Writing
* “To Run My Best Marathon at Age 44, I Had to Outrun My Past” in WIRED
* “The year AI actually changes the media business” in NiemanLab’s Predictions for Journalism 2023
In episode 110 of The Gradient Podcast, Daniel Bashir speaks to Professor Subbarao Kambhampati.
Professor Kambhampati is a professor of computer science at Arizona State University. He studies fundamental problems in planning and decision making, motivated by the challenges of human-aware AI systems. He is a fellow of the Association for the Advancement of Artificial Intelligence, the American Association for the Advancement of Science, and the Association for Computing Machinery, and was an NSF Young Investigator. He was the president of the Association for the Advancement of Artificial Intelligence, a trustee of the International Joint Conference on Artificial Intelligence, and a founding board member of Partnership on AI.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:11) Professor Kambhampati’s background
* (06:07) Explanation in AI
* (18:08) What people want from explanations—vocabulary and symbolic explanations
* (21:23) The realization of new concepts in explanation—analogy and grounding
* (30:36) Thinking and language
* (31:48) Conscious and subconscious mental activity
* (36:58) Tacit and explicit knowledge
* (42:09) The development of planning as a research area
* (46:12) RL and planning
* (47:47) What makes a planning problem hard?
* (51:23) Scalability in planning
* (54:48) LLMs do not perform reasoning
* (56:51) How to show LLMs aren’t reasoning
* (59:38) External verifiers and backprompting LLMs
* (1:07:51) LLMs as cognitive orthotics, language and representations
* (1:16:45) Finding out what kinds of representations an AI system uses
* (1:31:08) “Compiling” system 2 knowledge into system 1 knowledge in LLMs
* (1:39:53) The Generative AI Paradox, reasoning and retrieval
* (1:43:48) AI as an ersatz natural science
* (1:44:03) Why AI is straying away from its engineering roots, and what constitutes engineering
* (1:58:33) Outro
Links:
* Professor Kambhampati’s Twitter and homepage
* Research and Writing — Planning and Human-Aware AI Systems
* A Validation-structure-based theory of plan modification and reuse (1990)
* Challenges of Human-Aware AI Systems (2020)
* Polanyi vs. Planning (2021)
* LLMs and Planning
* Can LLMs Really Reason and Plan? (2023)
* On the Planning Abilities of LLMs (2023)
* Other
* Changing the nature of AI research
In episode 109 of The Gradient Podcast, Daniel Bashir speaks to Russ Maschmeyer.
Russ is the Product Lead for AI and Spatial Commerce at Shopify. At Shopify, he leads a team that looks at how AI can better empower entrepreneurs, with a particular interest in how image generation can help make the lives of business owners and merchants more productive. He previously led design for multiple services at Facebook and co-founded Primer, an AR-enabled interior design marketplace.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:50) Russ’s background and a hacked Kinect sensor
* (06:00) Instruments and emotion, embodiment and accessibility
* (08:45) Natural language as input and generative AI in creating emotive experiences
* (10:55) Work on search queries and recommendations at Facebook, designing for search
* (16:35) AI in the retail and entrepreneurial landscape
* (19:15) Shopify and AI for business owners
* (22:10) Vision and directions for AI in commerce
* (25:01) Personalized experiences for shopping
* (28:45) Challenges for creating personalized experiences
* (31:49) Intro to spatial commerce
* (34:48) AR/VR devices and spatial commerce
* (37:30) MR and AI for immersive product search
* (41:35) Implementation details
* (48:05) WonkaVision and difficulties for immersive web experiences
* (52:10) Future projects and directions for spatial commerce
* (55:10) Outro
Links:
* With a Wave of the Hand, Improvising on Kinect in The New York Times
* Shopify Spatial Commerce Projects
* MR and AI for immersive product search
* A more immersive web with a simple optical illusion
* What if your room had a reset button?
In episode 108 of The Gradient Podcast, Daniel Bashir speaks to Professor Benjamin Breen.
Professor Breen is an associate professor of history at UC Santa Cruz specializing in the history of science, medicine, globalization, and the impacts of technological change. He is the author of multiple books including The Age of Intoxication: Origins of the Global Drug Trade and the more recent Tripping on Utopia: Margaret Mead, the Cold War, and the Troubled Birth of Psychedelic Science, which you can pre-order now.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:05) Professor Breen’s background
* (04:47) End of history narratives / millenarian thinking in AI/technology
* (09:53) Transformative technological change and societal change
* (16:45) AI and psychedelics
* (17:23) Techno-utopianism
* (26:08) Technologies as metaphors for humanity
* (32:34) McLuhanist thinking / brain as a computational machine, Prof. Breen’s skepticism
* (37:13) Issues with overblown narratives about technology
* (42:46) Narratives about transformation and their impacts on progress
* (45:23) The historical importance of today’s AI landscape
* (50:05) International aspects of the history of technology
* (53:13) Doomerism vs optimism, why doomerism is appealing
* (57:58) Automation, meta-skills, jobs — advice for early career
* (1:01:08) LLMs and (history) education
* (1:07:10) Outro
Links:
* Professor Breen’s Twitter and homepage
* Books
* Tripping on Utopia: Margaret Mead, the Cold War, and the Troubled Birth of Psychedelic Science
* The Age of Intoxication: Origins of the Global Drug Trade
* Writings
* Simulating History with ChatGPT
In episode 107 of The Gradient Podcast, Daniel Bashir speaks to Professor Ted Gibson.
Ted is a Professor of Cognitive Science at MIT. He leads the TedLab, which investigates why languages look the way they do; the relationship between culture and cognition, including language; and how people learn, represent, and process language.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:13) Prof Gibson’s background
* (05:33) The computational linguistics community and NLP, engineering focus
* (10:48) Models of brains
* (12:03) Prof Gibson’s focus on behavioral work
* (12:53) How dependency distances impact language processing
* (14:03) Dependency distances and the origin of the problem
* (18:53) Dependency locality theory
* (21:38) The structures languages tend to use
* (24:58) Sentence parsing: structural integrations and memory costs
* (36:53) Reading strategies vs. ordinary language processing
* (40:23) Legalese
* (46:18) Cross-dependencies
* (50:11) Number as a cognitive technology
* (54:48) Experiments
* (1:03:53) Why counting is useful for Western societies
* (1:05:53) The Whorf hypothesis
* (1:13:05) Language as Communication
* (1:13:28) The noisy channel perspective on language processing
* (1:27:08) Fedorenko lab experiments—language for thought vs. communication and Chomsky’s claims
* (1:43:53) Thinking without language, inner voices, language processing vs. language as an aid for other mental processing
* (1:53:01) Dependency grammars and a critique of Chomsky’s grammar proposals, LLMs
* (2:08:48) LLM behavior and internal representations
* (2:12:53) Outro
Links:
* Re-imagining our theories of language
* Research — linguistic complexity and dependency locality theory
* Linguistic complexity: locality of syntactic dependencies (1998)
* The Dependency Locality Theory: A Distance-Based Theory of Linguistic Complexity (2000)
* Consequences of the Serial Nature of Linguistic Input for Sentential Complexity (2005)
* Large-scale evidence of dependency length minimization in 37 languages (2015)
* Dependency locality as an explanatory principle for word order (2020)
* A resource-rational model of human processing of recursive linguistic structure (2022)
* Research — language processing / communication and cross-linguistic universals
* Number as a cognitive technology: Evidence from Pirahã language and cognition (2008)
* The communicative function of ambiguity in language (2012)
* Color naming across languages reflects color use (2017)
* How Efficiency Shapes Human Language (2019)
In episode 106 of The Gradient Podcast, Daniel Bashir speaks to Professor Harvey Lederman.
Professor Lederman is a professor of philosophy at UT Austin. He has broad interests in contemporary philosophy and in the history of philosophy: his areas of specialty include philosophical logic, the Ming dynasty philosopher Wang Yangming, epistemology, and philosophy of language. He has recently been working on incomplete preferences, on trying in the philosophy of language, and on Wang Yangming’s moral metaphysics.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:15) Harvey’s background
* (05:30) Higher-order metaphysics and propositional attitudes
* (06:25) Motivations
* (12:25) Setup: syntactic types and ontological categories
* (25:11) What makes higher-order languages meaningful and not vague?
* (25:57) Higher-order languages corresponding to the world
* (30:52) Extreme vagueness
* (35:32) Desirable features of languages and important questions in philosophy
* (36:42) Higher-order identity
* (40:32) Intuitions about mental content, language, context-sensitivity
* (50:42) Perspectivism
* (51:32) Co-referring names, identity statements
* (55:42) The paper’s approach, “know” as context-sensitive
* (57:24) Propositional attitude psychology and mentalese generalizations
* (59:57) The “good standing” of theorizing about propositional attitudes
* (1:02:22) Mentalese
* (1:03:32) “Does knowledge imply belief?” — when a question does not have good standing
* (1:06:17) Sense, Reference, and Substitution
* (1:07:07) Fregeans and the principle of Substitution
* (1:12:12) Follow-up work to this paper
* (1:13:39) Do Language Models Produce Reference Like Libraries or Like Librarians?
* (1:15:02) Bibliotechnism
* (1:19:08) Inscriptions and reference, what it takes for something to refer
* (1:22:37) Derivative and basic reference
* (1:24:47) Intuition: n-gram models and reference
* (1:28:22) Meaningfulness in sentences produced by n-gram models
* (1:30:40) Bibliotechnism and LLMs, disanalogies to n-grams
* (1:33:17) On other recent work (vector grounding, do LMs refer?, etc.)
* (1:40:12) Causal connections and reference, how bibliotechnism makes good on the meanings of sentences
* (1:45:46) RLHF, sensitivity to truth and meaningfulness
* (1:48:47) Intelligibility
* (1:50:52) When LLMs produce novel reference
* (1:53:37) Novel reference vs. find-replace
* (1:56:00) Directionality example
* (1:58:22) Human intentions and derivative reference
* (2:00:47) Between bibliotechnism and agency
* (2:05:32) Where do invented names / novel reference come from?
* (2:07:17) Further questions
* (2:10:04) Outro
Links:
* Harvey’s homepage and Twitter
* Papers discussed
* Higher-order metaphysics and propositional attitudes
* Sense, Reference, and Substitution
In episode 105 of The Gradient Podcast, Daniel Bashir speaks to Eric Jang.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:25) Updates since Eric’s last interview
* (06:07) The problem space of humanoid robots
* (08:42) Motivations for the book “AI is Good for You”
* (12:20) Definitions of AGI
* (14:35) ~ AGI timelines ~
* (16:33) Do we have the ingredients for AGI?
* (18:58) Rediscovering old ideas in AI and robotics
* (22:13) Ingredients for AGI
* (22:13) Artificial Life
* (25:02) Selection at different levels of information—intelligence at different scales
* (32:34) AGI as a collective intelligence
* (34:53) Human in the loop learning
* (37:38) From getting correct answers to doing things correctly
* (40:20) Levels of abstraction for modeling decision-making — the neurobiological stack
* (44:22) Implementing loneliness and other details for AGI
* (47:31) Experience in AI systems
* (48:46) Asking for Generalization
* (49:25) Linguistic relativity
* (52:17) Language vs. complex thought and Fedorenko experiments
* (54:23) Efficiency in neural design
* (57:20) Generality in the human brain and evolutionary hypotheses
* (59:46) Embodiment and real-world robotics
* (1:00:10) Moravec’s Paradox and the importance of embodiment
* (1:05:33) How embodiment fits into the picture—in verification vs. in learning
* (1:10:45) Nonverbal information for training intelligent systems
* (1:11:55) AGI and humanity
* (1:12:20) The positive future with AGI
* (1:14:55) The negative future — technology as a lever
* (1:16:22) AI in the military
* (1:20:30) How AI might contribute to art
* (1:25:41) Eric’s own work and a positive future for AI
* (1:29:27) Outro
Links:
In episode 104 of The Gradient Podcast, Daniel Bashir speaks to Nathan Benaich.
Nathan is Founder and General Partner at Air Street Capital, a VC firm focused on investing in AI-first technology and life sciences companies. Nathan runs a number of communities focused on AI, including the Research and Applied AI Summit, and leads Spinout.fyi to improve the creation of university spinouts. Nathan co-authors the State of AI Report.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:00) Updates in Nathan World — Air Street’s second fund, spinouts,
* (07:30) Events: Research and Applied AI Summit, State of AI Report launches
* (09:50) The State of AI: main messages, the increasing role of subject matter experts
* Research
* (14:13) Open and closed-source
* (17:55) Benchmarking and evaluation, small/large models and industry verticals
* (21:10) “Vibes” in LLM evaluation
* (24:00) Codegen models, personalized AI, curriculum learning
* (26:20) The exhaustion of human-generated data, lukewarm content, synthetic data
* (29:50) Opportunities for AI applications in the natural sciences
* (35:15) Reinforcement Learning from Human Feedback and alternatives
* (38:30) Industry
* (39:00) ChatGPT and productivity
* (42:37) General app wars, ChatGPT competitors
* (45:50) Compute—demand, supply, competition
* (50:55) Export controls and geopolitics
* (54:45) Startup funding and compute spend
* (59:15) Politics
* (59:40) Calls for regulation, regulatory divergence
* (1:04:40) AI safety
* (1:07:30) Nathan’s perspective on regulatory approaches
* (1:12:30) The UK’s early access to frontier models, standards setting, regulation difficulties
* (1:17:20) Jailbreaking, constitutional AI, robustness
* (1:20:50) Predictions!
* (1:25:00) Generative AI misuse in elections and politics (and, this prediction coming true in Bangladesh)
* (1:26:50) Progress on AI governance
* (1:30:30) European dynamism
* (1:35:08) Outro
Links:
* Nathan’s homepage and Twitter
* Bringing Dynamism to European Defense
* A prediction coming true: How AI is disrupting Bangladesh’s election
* Air Street Capital is hiring a full-time Community Lead!
In episode 103 of The Gradient Podcast, Daniel Bashir speaks to Dr. Kathleen Fisher.
As the director of DARPA’s Information Innovation Office (I2O), Dr. Kathleen Fisher oversees a portfolio that includes most of the agency’s AI-related research and development efforts, including the recent AI Forward initiative. AI Forward explores new directions for AI research that will result in trustworthy systems for national security missions. This summer, roughly 200 participants from the commercial sector, academia, and the U.S. government attended workshops that generated ideas to inform DARPA’s next phase of AI exploratory projects. Dr. Fisher previously served as a program manager in I2O from 2011 to 2014. As a program manager, she conceptualized, created, and executed programs in high-assurance computing and machine learning, including Probabilistic Programming for Advancing Machine Learning (PPAML), which made building ML applications easier. She was also a co-author of a recent paper about the threats posed by large language models.
Since 2018, DARPA has dedicated over $2 billion in R&D funding to AI research. The agency has been generating groundbreaking research and development for 65 years, leading to game-changing military capabilities and icons of modern society: it initiated the research field that produced self-driving cars and developed the technology that led to Apple’s Siri.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:30) Kathleen’s background
* (05:05) Intersections between programming languages and AI
* (07:15) Neuro-symbolic AI, trade-offs between flexibility and guarantees
* (09:45) History of DARPA and the Information Innovation Office (I2O)
* (13:55) DARPA’s perspective on research
* (17:10) Galvanizing a research community
* (20:06) DARPA’s recent investments in AI and AI Forward
* (26:35) Dual-use nature of generative AI, identifying and mitigating security risks, Kathleen’s perspective on short-term and long-term risk (note: the “Gradient podcast” Kathleen mentions is from Last Week in AI)
* (30:10) Concerns about deployment and interaction
* (32:20) Outcomes from AI Forward workshops and themes
* (36:10) Incentives in building and using AI technologies, friction
* (38:40) Interactions between DARPA and other government agencies
* (40:09) Future research directions
* (44:04) Ways to stay up to date on DARPA’s work
* (45:40) Outro
Links:
* Probabilistic Programming for Advancing Machine Learning (PPAML) (Archived)
* Assured Neuro Symbolic Learning and Reasoning (ANSR)
* Identifying and Mitigating the Security Risks of Generative AI Paper
* Semantic Forensics (SemaFor)
In episode 102 of The Gradient Podcast, Daniel Bashir speaks to Peter Tse.
Professor Tse is a Professor of Cognitive Neuroscience and chair of the department of Psychological and Brain Sciences at Dartmouth College. His research focuses on using brain and behavioral data to constrain models of the neural bases of attention and consciousness, unconscious processing that precedes and constructs consciousness, mental causation, and human capacities for imagination and creativity. He is especially interested in the processing that goes into the construction of conscious experience between retinal activation at time 0 and seeing an event about a third of a second later.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:45) Prof. Tse’s background
* (03:25) Early experiences in physics/math and philosophy of physics
* (06:10) Choosing to study neuroscience
* (07:15) Prof Tse’s commitments about determinism
* (10:00) Quantum theory and determinism
* (13:45) Biases/preferences in choosing theories
* (20:41) Falsifiability and scientific questions, transition from physics to neuroscience
* (30:50) How neuroscience is unusual among the sciences
* (33:20) Neuroscience and subjectivity
* (34:30) Reductionism
* (37:30) Gestalt psychology
* (41:30) Introspection in neuroscience
* (45:30) The preconscious buffer and construction of conscious experience, color constancy
* (53:00) Perceptual and cognitive inference
* (55:00) AI systems and intrinsic meaning
* (57:15) Information vs. meaning
* (1:01:45) Consciousness and representation of bodily states
* (1:05:10) Our second-order free will
* (1:07:20) Jaegwon Kim’s exclusion argument
* (1:11:45) Why Kim thought his own argument was wrong
* (1:15:00) Resistance and counterarguments to Kim
* (1:19:45) Criterial causation
* (1:23:00) How neurons evaluate inputs criterially
* (1:24:00) Concept neurons in the hippocampus
* (1:31:57) Criterial causation and physicalism, mental causation
* (1:40:10) Daniel makes another attempt to push back 🤡
* (1:45:47) More on AI
* (1:47:05) Prof Tse’s perspective on modern AI systems, differences with human cognition
* (2:17:25) Consciousness, attention, spirituality
* (2:20:10) Prof Tse’s hopes for AI
* (2:23:30) Outro
Links:
* Professor Tse’s homepage
* Papers
* Vision/Perception
* Perceptual learning based on the learning of diagnostic features
* Complete mergeability and amodal completion
* Attention
* How Attention Can Alter Appearances
* How Top-down Attention Alters Bottom-up preconscious operations
* Consciousness
* Network structure and dynamics of the mental workspace
* On free will
* NDPR review of “Neural Basis of Free Will”
* Ontological Indeterminism undermines Kim’s Exclusion Argument
In episode 101 of The Gradient Podcast, Daniel Bashir speaks to Vera Liao.
Vera is a Principal Researcher at Microsoft Research (MSR) Montréal where she is part of the FATE (Fairness, Accountability, Transparency, and Ethics) group. She is trained in human-computer interaction research and works on human-AI interaction, currently focusing on explainable AI and responsible AI. She aims to bridge emerging AI technologies and human-centered design practices, and use both qualitative and quantitative methods to generate recommendations for technology design. Before joining MSR, Vera worked at IBM TJ Watson Research Center, and her work contributed to IBM products such as AI Explainability 360, Uncertainty Quantification 360, and Watson Assistant.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:41) Vera’s background
* (07:15) The sociotechnical gap
* (09:00) UX design and toolkits for AI explainability
* (10:50) HCI, explainability, etc. as “separate concerns” from core AI research
* (15:07) Interfaces for explanation and model capabilities
* (16:55) Vera’s earlier studies of online social communities
* (22:10) Technologies and user behavior
* (23:45) Explainability vs. interpretability, transparency
* (26:25) Questioning the AI: Informing Design Practices for Explainable AI User Experiences
* (42:00) Expanding Explainability: Towards Social Transparency in AI Systems
* (50:00) Connecting Algorithmic Research and Usage Contexts
* (59:40) Pitfalls in existing explainability methods
* (1:05:35) Ideal and real users, seamful systems and slow algorithms
* (1:11:08) AI Transparency in the Age of LLMs: A Human-Centered Research Roadmap
* (1:11:35) Vera’s earlier experiences with chatbots
* (1:13:00) Need to understand pitfalls and use-cases for LLMs
* (1:13:45) Perspectives informing this paper
* (1:20:30) Transparency informing goals for LLM use
* (1:22:45) Empiricism and explainability
* (1:27:20) LLM faithfulness
* (1:32:15) Future challenges for HCI and AI
* (1:36:28) Outro
Links:
* Research
* Earlier work
* Understanding Experts’ and Novices’ Expertise Judgment of Twitter Users
* Expert Voices in Echo Chambers
* HCI / collaboration
* Exploring AI Values and Ethics through Participatory Design Fictions
* Ways of Knowing for AI: (Chat)bots as Interfaces for ML
* Human-AI Collaboration: Towards Socially-Guided Machine Learning
* Questioning the AI: Informing Design Practices for Explainable AI User Experiences
* Rethinking Model Evaluation as Narrowing the Socio-Technical Gap
* Human-Centered XAI: From Algorithms to User Experiences
* AI Transparency in the Age of LLMs: A Human-Centered Research Roadmap
* Fairness and explainability
* Questioning the AI: Informing Design Practices for Explainable AI User Experiences
* Expanding Explainability: Towards Social Transparency in AI Systems
* Connecting Algorithmic Research and Usage Contexts
In episode 100 of The Gradient Podcast, Daniel Bashir speaks to Professor Thomas Dietterich.
Professor Dietterich is Distinguished Professor Emeritus in the School of Electrical Engineering and Computer Science at Oregon State University. He is a pioneer in the field of machine learning, and has authored more than 225 refereed publications and two books. His current research topics include robust artificial intelligence, robust human-AI systems, and applications in sustainability. He is a former President of the Association for the Advancement of Artificial Intelligence, and the founding President of the International Machine Learning Society. Other major roles include Executive Editor of the journal Machine Learning, co-founder of the Journal for Machine Learning Research, and program chair of AAAI 1990 and NIPS 2000. He currently serves as one of the moderators for the cs.LG category on arXiv.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Episode 100 Note
* (02:03) Intro
* (04:23) Prof. Dietterich’s background
* (14:20) Kuhn and theory development in AI, how Prof Dietterich thinks about the philosophy of science and AI
* (20:10) Scales of understanding and sentience, grounding, observable evidence
* (23:58) Limits of statistical learning without causal reasoning, systematic understanding
* (25:48) A challenge for the ML community: testing for systematicity
* (26:13) Forming causal understandings of the world
* (28:18) Learning at the Knowledge Level
* (29:18) Background and definitions
* (32:18) Knowledge and goals, a note on LLMs
* (33:03) What it means to learn
* (41:05) LLMs as learning results of inference without learning first principles
* (43:25) System I/II thinking in humans and LLMs
* (47:23) “Routine Science”
* (47:38) Solving multiclass learning problems via error-correcting output codes
* (52:53) Error-correcting codes and redundancy
* (54:48) Why error-correcting codes work, contra intuition
* (59:18) Bias in ML
* (1:06:23) MAXQ for hierarchical RL
* (1:15:48) Computational sustainability
* (1:19:53) Project TAHMO’s moonshot
* (1:23:28) Anomaly detection for weather stations
* (1:25:33) Robustness
* (1:27:23) Motivating The Familiarity Hypothesis
* (1:27:23) Anomaly detection and self-models of competence
* (1:29:25) Measuring the health of freshwater streams
* (1:31:55) An open set problem in species detection
* (1:33:40) Issues in anomaly detection for deep learning
* (1:37:45) The Familiarity Hypothesis
* (1:40:15) Mathematical intuitions and the Familiarity Hypothesis
* (1:44:12) What’s Wrong with LLMs and What We Should Be Building Instead
* (1:46:20) Flaws in LLMs
* (1:47:25) The systems Prof Dietterich wants to develop
* (1:49:25) Hallucination/confabulation and LLMs vs knowledge bases
* (1:54:00) World knowledge and linguistic knowledge
* (1:55:07) End-to-end learning and knowledge bases
* (1:57:42) Components of an intelligent system and separability
* (1:59:06) Thinking through external memory
* (2:01:10) Outro
Links:
* Research — Fundamentals (Philosophy of AI)
* Learning at the Knowledge Level
* What Does it Mean for a Machine to Understand?
* Research – “Routine science”
* Ensemble methods in ML and error-correcting output codes
* Solving multiclass learning problems via error-correcting output codes
* An experimental comparison of bagging, boosting, and randomization
* ML Bias, Statistical Bias, and Statistical Variance of Decision Tree Algorithms
* The definitive treatment of these questions, by Gareth James
* Discovering/Exploiting structure in MDPs:
* Exogenous State MDPs (paper with George Trimponias, slides)
* Research — Ecosystem Informatics and Computational Sustainability
* Challenges for ML in Computational Sustainability
* Research — Robustness
* Steps towards robust AI (AAAI President’s Address)
* Benchmarking NN Robustness to Common Corruptions and Perturbations with Dan Hendrycks
* The familiarity hypothesis: Explaining the behavior of deep open set methods
* Recent commentary
* What's Wrong with Large Language Models and What We Should Be Building Instead
In episode 99 of The Gradient Podcast, Daniel Bashir speaks to Professor Martin Wattenberg.
Professor Wattenberg is a professor at Harvard and part-time member of Google Research’s People + AI Research (PAIR) initiative, which he co-founded. His work, with long-time collaborator Fernanda Viégas, focuses on making AI technology broadly accessible and reflective of human values. At Google, Professor Wattenberg, his team, and Professor Viégas have created end-user visualizations for products such as Search, YouTube, and Google Analytics. Note: Professor Wattenberg is recruiting PhD students through Harvard SEAS—info here.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (03:30) Prof. Wattenberg’s background
* (04:40) Financial journalism at SmartMoney
* (05:35) Contact with the academic visualization world, IBM
* (07:30) Transition into visualizing ML
* (08:25) Skepticism of neural networks in the 1980s
* (09:45) Work at IBM
* (10:00) Multiple scales in information graphics, organization of information
* (13:55) How much information should a graphic display to whom?
* (17:00) Progressive disclosure of complexity in interface design
* (18:45) Visualization as a rhetorical process
* (20:45) Conversation Thumbnails for Large-Scale Discussions
* (21:35) Evolution of conversation interfaces—Slack, etc.
* (24:20) Path dependence — mutual influences between user behaviors and technology, takeaways for ML interface design
* (26:30) Baby Names and Social Data Analysis — patterns of interest in baby names
* (29:50) History Flow
* (30:05) Why investigate editing dynamics on Wikipedia?
* (32:06) Implications of editing patterns for design and governance
* (33:25) The value of visualizations in this work, issues with Wikipedia editing
* (34:45) Community moderation, bureaucracy
* (36:20) Consensus and guidelines
* (37:10) “Neutral” point of view as an organizing principle
* (38:30) Takeaways
* PAIR
* (39:15) Tools for model understanding and “understanding” ML systems
* (41:10) Intro to PAIR (at Google)
* (42:00) Unpacking the word “understanding” and use cases
* (43:00) Historical comparisons for AI development
* (44:55) The birth of TensorFlow.js
* (47:52) Democratization of ML
* (48:45) Visualizing translation — uncovering and telling a story behind the findings
* (52:10) Shared representations in LLMs and their facility at translation-like tasks
* (53:50) TCAV
* (55:30) Explainability and trust
* (59:10) Writing code with LMs and metaphors for their use
* More recent research
* (1:01:05) The System Model and the User Model: Exploring AI Dashboard Design
* (1:10:05) OthelloGPT and world models, causality
* (1:14:10) Dashboards and interaction design—interfaces and core capabilities
* (1:18:07) Reactions to existing LLM interfaces
* (1:21:30) Visualizing and Measuring the Geometry of BERT
* (1:26:55) Note/Correction: The “Atlas of Meaning” Prof. Wattenberg mentions is called Context Atlas
* (1:28:20) Language model tasks and internal representations/geometry
* (1:29:30) LLMs as “next word predictors” — explaining systems to people
* (1:31:15) The Shape of Song
* (1:31:55) What does music look like?
* (1:35:00) Levels of abstraction, emergent complexity in music and language models
* (1:37:00) What Prof. Wattenberg hopes to see in ML and interaction design
* (1:41:18) Outro
Links:
* Professor Wattenberg’s homepage and Twitter
* Harvard SEAS application info — Professor Wattenberg is recruiting students!
* Research
* Earlier work
* Stacked Graphs—Geometry & Aesthetics
* A Multi-Scale Model of Perceptual Organization in Information Graphics
* Conversation Thumbnails for Large-Scale Discussions
* Baby Names and Social Data Analysis
* History Flow (paper)
* At Harvard and Google / PAIR
* Tools for Model Understanding: Facets, SmoothGrad, Attacking discrimination with smarter ML
* TCAV
* Other ML papers:
* The System Model and the User Model: Exploring AI Dashboard Design (recent speculative essay)
* Emergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task
* Visualizing and Measuring the Geometry of BERT
* Artwork
In episode 98 of The Gradient Podcast, Daniel Bashir speaks to Laurence Liew.
Laurence is the Director for AI Innovation at AI Singapore. He is driving the adoption of AI by the Singapore ecosystem through the 100 Experiments, AI Apprenticeship Programmes and the Generational AI Talent Development initiative. He is the current Co-Chair of the Innovations and Commercialisation working group and Co-Chair of the "Broad Adoption of AI by SME" committee.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:25) Laurence’s background
* (07:00) AI Singapore and Singapore’s AI Strategy
* (08:27) Awareness and adoption of AI in Singapore
* (19:45) AI Apprenticeship Program stories
* (27:35) Developing generational AI talent within Singapore, literacy
* (32:25) Singapore’s place within the global AI ecosystem
* (38:30) How the generative AI boom has affected Singapore
* (43:50) Laurence’s vision for the future of Singapore’s tech ecosystem
* (49:41) Outro
In episode 97 of The Gradient Podcast, Daniel Bashir speaks to Professor Michael Levin and Adam Goldstein.
Professor Levin is a Distinguished Professor and Vannevar Bush Chair in the Biology Department at Tufts University. He also directs the Allen Discovery Center at Tufts. His group, the Levin Lab, focuses on understanding the biophysical mechanisms that implement decision-making during complex pattern regulation, and harnessing endogenous bioelectric dynamics toward rational control of growth and form.
Adam Goldstein was a visiting scientist at the Levin Lab, where he worked on cancer research, and is the co-founder and Chairman of Astonishing Labs. Previously Adam founded Hipmunk, wrote tech books for O'Reilly, and was a Visiting Partner at Y Combinator.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:37) Intros
* (03:20) Prof. Levin intro
* (04:26) Adam intro
* (06:25) A perspective on intelligence
* (08:40) Diverse intelligence — in unconventional embodiments and unfamiliar spaces, substrate independence
* (12:23) Failure of the life-machine distinction, text-based systems, grounding, and embodiment
* (16:12) What it is to be a Self, fluidity and persistence
* (22:45) The combination problem in cognitive function, levels and representation
* (27:10) Goals for AI / cognitive science, Prof Levin’s perspective on building intelligent systems
* (31:25) Adam’s and Prof. Levin’s recent research—regenerative medicine and cancer
* (36:25) Examples of regeneration, Adam on the right approach to the regeneration problem as generation
* (45:25) Protein engineering vs. Adam and Prof. Levin’s program, implicit assumptions underlying biology
* (48:15) Regeneration example in liver disease
* (50:50) Perspectives on AI and its goals
Links:
* Levin Lab homepage
* Forms of life, forms of mind
* Adam’s homepage
* Research
* On Having No Head: Cognition throughout Biological Systems
* Technological Approach to Mind Everywhere
* Future Medicine: from molecular pathways to the collective intelligence of the body
In episode 96 of The Gradient Podcast, Daniel Bashir speaks to Jonathan Frankle.
Jonathan is the Chief Scientist at MosaicML (as of this episode’s release). He completed his PhD at MIT, where he investigated, through his lottery ticket hypothesis, the properties of sparse neural networks that allow them to train effectively. He also spends a portion of his time working on technology policy, and currently works with the OECD to implement the AI principles he helped develop in 2019.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:35) Jonathan’s background and work
* (04:25) Origins of the Lottery Ticket Hypothesis
* (06:00) Jonathan’s empiricism and approach to science
* (08:25) More Karl Popper discourse + hot takes
* (09:45) Walkthrough of the Lottery Ticket Hypothesis
* (12:00) Issues with the Lottery Ticket Hypothesis as a statement
* (12:30) Jonathan’s advice for PhD students, on asking good questions
* (15:55) Strengths and Promise of the Lottery Ticket Hypothesis
* (18:55) More Lottery Ticket Hypothesis Papers
* (19:10) Comparing Rewinding and Fine-tuning
* (23:00) Care in making experimental choices
* (25:05) Linear Mode Connectivity and the Lottery Ticket Hypothesis
* (27:50) On what is being measured and how
* (28:50) “The outcome of optimization is determined to a linearly connected region”
* (31:15) On good metrics
* (32:54) On the Predictability of Pruning Across Scales — scaling laws for pruning
* (34:40) The paper’s takeaway
* (38:45) Pruning Neural Networks at Initialization — on a scientific disagreement
* (45:00) On making takedown papers useful
* (46:15) On what can be known early in training
* (49:15) Jonathan’s perspective on important research questions today
* (54:40) MosaicML
* (55:19) How Mosaic got started
* (56:17) Mosaic highlights
* (57:33) Customer stories
* (1:00:30) Jonathan’s work and perspectives on AI policy
* (1:05:45) The key question: what we want
* (1:07:35) Outro
Links:
* Jonathan’s homepage and Twitter
* Papers
* The Lottery Ticket Hypothesis and follow-up work
* Comparing Rewinding and Fine-tuning in Neural Network Pruning
* Linear Mode Connectivity and the LTH
* On the Predictability of Pruning Across Scales
* Pruning Neural Networks at Initialization: Why Are We Missing The Mark?
In episode 95 of The Gradient Podcast, Daniel Bashir speaks to Nao Tokui.
Nao Tokui is an artist/DJ and researcher based in Tokyo. While pursuing his Ph.D. at The University of Tokyo, he produced his first music album and singles using AI, including a 12-inch record with Nujabes, a legendary Japanese hip-hop producer. After completing his Ph.D. research, he founded Qosmo, AI Creativity and Music Lab, in 2009. Since then, he has been actively working at the intersection of AI technology and art. Nao and his team's works have been exhibited at renowned venues such as the New York MoMA and the Barbican Centre in London. Their performances have also been showcased at various music festivals, including MUTEK and Sonar. Additionally, he is leading the development of AI-based music instruments at his newly founded company, Neutone. In 2021, Nao received the Okawa Publishing Award for his Japanese book on art, creativity, and AI. The book is scheduled to be released in English as "Surfing human creativity with AI — A user's guide" in November 2023.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:15) Nao’s background and how he got into AI and music
* (05:10) Nao’s experiences as a DJ, collaboration with Nujabes
* (07:10) HCI and music
* (10:35) Leveraging the difference between AI systems and humans
* (12:40) Total control vs total chaos
* (13:45) Qosmo and the Neutone Project, misusable AI tools
* (17:25) On music and “creating something new”
* (21:00) Declarative and top-down vs. bottom-up creation, individual taste
* (23:50) How generative AI enables humans
* (26:25) On misusing technology and art
* (32:00) Dawn Patrol EP
* (36:00) A two-discriminator GAN for creating music in new genres
* (37:45) The AI DJ Project
* (38:20) The interactive vision of the project
* (42:10) How AI chooses music, breaking from constraints
* (43:15) Interpretability and how an AI system DJs differently
* (45:15) How the project altered Nao’s perspective on DJing, the role of humans
* (51:40) Nao’s book Creating with AI
* (55:15) Human-AI interaction as joint improvisation
* (58:10) Nao’s advice and takeaways for thinking about AI creatively
* (1:01:32) Outro
Links:
* Other links:
* Real-time AI-generative DJ performance
* Qosmo
* Nao’s book: Surfing human creativity with AI — A user's guide
* Paper on Creative-GAN for deviating from existing music genres
In episode 94 of The Gradient Podcast, Daniel Bashir speaks to Divyansh Kaushik.
Divyansh is the Associate Director for Emerging Technologies and National Security at the Federation of American Scientists where his focus areas include, amongst other things, AI policy, STEM immigration, and US-China strategic competition. He holds a PhD from Carnegie Mellon University, where he focused on designing reliable AI systems that align with human values. In addition to his advocacy work on Capitol Hill, he also played a key role in establishing the Congressional Graduate Research and Development Caucus. He is a frequent contributor to leading publications, including Vox, National Defense Magazine, The Dispatch, Daily Caller, and Forbes.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:20) Divyansh intro/background
* (06:00) Zachary Lipton Appreciation Session (+ advice from Prof Lipton)
* (08:00) How Divyansh got involved in policy
* (11:30) What does policy work look like? Divyansh’s early experiences
* (15:42) AI policy issues, divides, party lines
* (19:15) Bringing AI talent into the US
* (26:45) US/China saber rattling, impact of Xi Jinping’s presidency
* (33:49) China’s AI regulations, CCP motivations, China’s disadvantages in AI and benefits of the US policy process
* (42:42) Trading off AI governance and stifling innovation
* (51:17) AI governance comments from Jeremy Howard / Connor Leahy / Andrew Maynard, regulating use vs basic technology, limits on scaling
* (1:01:30) Articulating and communicating the issues for AI governance
* (1:03:10) Existential risk concerns in AI governance, theories of change
* (1:10:15) How can AI researchers/practitioners better communicate with policymakers?
* (1:16:57) Outro
Links:
* Divyansh’s Twitter and FAS page
* Divyansh’s policy work:
* How Congress can shape AI governance without stifling innovation
* Six Policy Ideas for the National AI Strategy
* Other work mentioned/discussed:
* Jeremy Howard’s AI Safety and the Age of Dislightenment
* Proposals from Connor Leahy
* Andrew Maynard’s Regulating Frontier AI: To Open Source or Not?
In episode 93 of The Gradient Podcast, Daniel Bashir speaks to Professor Tal Linzen.
Professor Linzen is an Associate Professor of Linguistics and Data Science at New York University and a Research Scientist at Google. He directs the Computation and Psycholinguistics Lab, where he and his collaborators use behavioral experiments and computational methods to study how people learn and understand language. They also develop methods for evaluating, understanding, and improving computational systems for language processing.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:25) Prof. Linzen’s background
* (05:37) Back and forth between psycholinguistics and deep learning research, LM evaluation
* (08:40) How can deep learning successes/failures help us understand human language use, methodological concerns, comparing human representations to LM representations
* (14:22) Behavioral capacities and degrees of freedom in representations
* (16:40) How LMs are becoming less and less like humans
* (19:25) Assessing LSTMs’ ability to learn syntax-sensitive dependencies
* (22:48) Similarities between structure-sensitive dependencies, sophistication of syntactic representations
* (25:30) RNNs implicitly implement tensor-product representations—vector representations of symbolic structures
* (29:45) Representations required to solve certain tasks, difficulty of natural language
* (33:25) Accelerating progress towards human-like linguistic generalization
* (34:30) The pre-training agnostic identically distributed evaluation paradigm
* (39:50) Ways to mitigate differences in evaluation
* (44:20) Surprisal does not explain syntactic disambiguation difficulty
* (45:00) How to measure processing difficulty, predictability and processing difficulty
* (49:20) What other factors influence processing difficulty?
* (53:10) How to plant trees in language models
* (55:45) Architectural influences on generalizing knowledge of linguistic structure
* (58:20) “Cognitively relevant regimes” and speed of generalization
* (1:00:45) Acquisition of syntax and sampling simpler vs. more complex sentences
* (1:04:03) Curriculum learning for progressively more complicated syntax
* (1:05:35) Hypothesizing tree-structured representations
* (1:08:00) Reflecting on a prediction from the past
* (1:10:15) Goals and “the correct direction” in AI research
* (1:14:04) Outro
Links:
* Prof. Linzen’s Twitter and homepage
* Papers
* Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies
* RNNs Implicitly Implement Tensor-Product Representations
* How Can We Accelerate Progress Towards Human-like Linguistic Generalization?
In episode 92 of The Gradient Podcast, Daniel Bashir speaks to Kevin K. Yang.
Kevin is a senior researcher at Microsoft Research (MSR) who works on problems at the intersection of machine learning and biology, with an emphasis on protein engineering. He completed his PhD at Caltech with Frances Arnold on applying machine learning to protein engineering. Before joining MSR, he was a machine learning scientist at Generate Biomedicines, where he used machine learning to optimize proteins.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:40) Kevin’s background
* (06:00) Protein engineering early in Kevin’s career
* (12:10) From research to real-world proteins: the process
* (17:40) Generative models + pretraining for proteins
* (22:47) Folding diffusion for protein structure generation
* (30:45) Protein evolutionary dynamics and generative models of protein sequences
* (40:03) Analogies and disanalogies between protein modeling and language models
* (41:45) In representation learning
* (45:50) Convolutions vs. transformers and inductive biases
* (49:25) Pretraining tasks for protein structure
* (51:45) More on representation learning for protein structure
* (54:06) Kevin’s thoughts on interpretability in deep learning for protein engineering
* (56:50) Multimodality in protein engineering and future directions
* (59:14) Outro
Links:
* Kevin’s Twitter and homepage
* Research
* Generative models + pre-training for proteins and chemistry
* Broad intro to techniques in the space
* Protein structure generation via folding diffusion
* Protein sequence design with deep generative models (review)
* Protein generation with evolutionary diffusion: sequence is all you need
* ML for protein engineering
* ML-guided directed evolution for protein engineering (review)
* Learned protein embeddings for ML
* Adaptive machine learning for protein engineering (review)
* Multimodal deep learning for protein engineering
In episode 91 of The Gradient Podcast, Daniel Bashir speaks to Arjun Ramani and Zhengdong Wang.
Arjun is the global business and economics correspondent at The Economist.
Zhengdong is a research engineer at Google DeepMind.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (03:53) Arjun intro
* (06:04) Zhengdong intro
* (09:50) How Arjun and Zhengdong met in the woods
* (11:52) Overarching narratives about technological progress and AI
* (14:20) Setting up the claim: Arjun on what “transformative” means
* (15:52) What enables transformative economic growth?
* (21:19) From GPT-3 to ChatGPT; is there something special about AI?
* (24:15) Zhengdong on “real AI” and divisiveness
* (27:00) Arjun on the independence of bottlenecks to progress/growth
* (29:05) Zhengdong on bottleneck independence
* (32:45) More examples on bottlenecks and surplus wealth
* (37:06) Technical arguments—what are the hardest problems in AI?
* (38:00) Robotics
* (40:41) Challenges of deployment in high-stakes settings and data sources / synthetic data, self-driving
* (45:13) When synthetic data works
* (49:06) Harder tasks, process knowledge
* (51:45) Performance art as a critical bottleneck
* (53:45) Obligatory Taylor Swift Discourse
* (54:45) AI Taylor Swift???
* (54:50) The social arguments
* (55:20) Speed of technology diffusion — “diffusion lags” and dynamics of trust with AI
* (1:00:55) ChatGPT adoption, where major productivity gains come from
* (1:03:50) Timescales of transformation
* (1:10:22) Unpredictability in human affairs
* (1:14:07) The economic arguments
* (1:14:35) Key themes — diffusion lags, different sectors
* (1:21:15) More on bottlenecks, AI trust, premiums on human workers
* (1:22:30) Automated systems and human interaction
* (1:25:45) Campaign text reachouts
* (1:30:00) Counterarguments
* (1:30:18) Solving intelligence and solving science/innovation
* (1:34:07) Strengths and weaknesses of the broad applicability of Arjun and Zhengdong’s argument
* (1:35:34) The “proves too much” worry — how could any innovation have ever happened?
* (1:37:25) Examples of bringing down barriers to innovation/transformation
* (1:43:45) What to do with all of this information?
* (1:48:45) Outro
Links:
* Zhengdong’s homepage and Twitter
* Arjun’s homepage and Twitter
* Why transformative artificial intelligence is really, really hard to achieve
* Other resources and links mentioned:
* Allan-Feuer and Sanders: Transformative AGI by 2043 is <1% likely
* Hardmaru on AI as applied philosophy
* Davis Blalock on synthetic data
* Matt Clancy on automating invention and bottlenecks
* Michael Webb on 80,000 Hours Podcast
* Bob Gordon: The Rise and Fall of American Growth
* OpenAI economic impact paper
* David Autor: new work paper
* Pew Research Center poll, public concern on AI
* Human premium Economist piece
* Callum Williams — London tube and AI/jobs
* Culture Series book 1, Iain Banks
In episode 90 of The Gradient Podcast, Daniel Bashir speaks to Miles Grimshaw.
Miles is General Partner at Benchmark. He was previously a General Partner at Thrive Capital, where he helped the firm raise its fourth and fifth funds, and sourced deals in Lattice, Mapbox, Benchling, and Airtable, among others.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:48) Miles’ background (note: Miles is now the second newest GP at Benchmark)
* (06:07) Miles’ investment philosophy and previous investments
* (12:25) Investing in the “decade of deep learning” and how Miles became interested in AI
* (18:53) Miles’ / Benchmark’s investment in Langchain
* (24:29) On AI advances and adoption
* (39:25) Hardware shortages, radically changing UX for LLMs
* (48:12) Opportunities for AI applications in new domains
* (50:15) Miles’ advice for potential founders in AI
* (1:00:00) Outro
Links:
* Miles’ Twitter
In episode 89 of The Gradient Podcast, Daniel Bashir speaks to Shreya Shankar.
Shreya is a computer scientist pursuing her PhD in databases at UC Berkeley. Her research interest is in building end-to-end systems for people to develop production-grade machine learning applications. She was previously the first ML engineer at Viaduct, did research at Google Brain, and worked as a software engineer at Facebook. She graduated from Stanford with a B.S. and M.S. in computer science with concentrations in systems and artificial intelligence. At Stanford, she helped run SHE++, an organization that helps empower underrepresented minorities in technology.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:22) Shreya’s background and journey into ML / MLOps
* (04:51) ML advances in 2013-2016
* (05:45) Shift in Stanford undergrad class ecosystems, accessibility of deep learning research
* (09:10) Why Shreya left her job as an ML engineer
* (13:30) How Shreya became interested in databases, data quality in ML
* (14:50) Daniel complains about things
* (16:00) What makes ML engineering uniquely difficult
* (16:50) Being a “historian of the craft” of ML engineering
* (22:25) Levels of abstraction, what ML engineers do/don’t have to think about
* (24:16) Observability for Production ML Pipelines
* (28:30) Metrics for real-time ML systems
* (31:20) Proposed solutions
* (34:00) Moving Fast with Broken Data
* (34:25) Existing data validation measures and where they fall short
* (36:31) Partition summarization for data validation
* (38:30) Small data and quantitative statistics for data cleaning
* (40:25) Streaming ML Evaluation
* (40:45) What makes a metric actionable
* (42:15) Differences in streaming ML vs. batch ML
* (45:45) Delayed and incomplete labels
* (49:23) Operationalizing Machine Learning
* (49:55) The difficult life of an ML engineer
* (53:00) Best practices, tools, pain points
* (55:56) Pitfalls in current MLOps tools
* (1:00:30) LLMOps / FMOps
* (1:07:10) Thoughts on ML Engineering, MLE through the lens of data engineering
* (1:10:42) Building products, user expectations for AI products
* (1:15:50) Outro
Links:
* Papers
* Towards Observability for Production Machine Learning Pipelines
* Rethinking Streaming ML Evaluation
* Operationalizing Machine Learning
* Moving Fast With Broken Data
* Blog posts
* The Modern ML Monitoring Mess
* Thoughts on ML Engineering After a Year of my PhD
In episode 88 of The Gradient Podcast, Daniel Bashir speaks to Professor Stevan Harnad.
Stevan Harnad is professor of psychology and cognitive science at Université du Québec à Montréal, adjunct professor of cognitive science at McGill University, and professor emeritus of cognitive science at the University of Southampton. His research is on category learning, categorical perception, symbol grounding, the evolution of language, and animal and human sentience (otherwise known as “consciousness”). He is also an advocate for open access and an activist for animal rights.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (05:20) Professor Harnad’s background: interests in cognitive psychobiology, editing Behavioral and Brain Sciences
* (07:40) John Searle submits the Chinese Room article
* (09:20) Early reactions to Searle and Prof. Harnad’s role
* (13:38) The core of Searle’s argument and the generator of the Symbol Grounding Problem, “strong AI”
* (19:00) Ways to ground symbols
* (20:26) The acquisition of categories
* (25:00) Pantomiming, non-linguistic category formation
* (27:45) Mathematics, abstraction, and grounding
* (36:20) Symbol manipulation and interpretation language
* (40:40) On the Whorf Hypothesis
* (48:39) Defining “grounding” and introducing the “T3” Turing Test
* (53:22) Turing’s concerns, AI and reverse-engineering cognition
* (59:25) Other Minds, T4 and zombies
* (1:05:48) Degrees of freedom in solutions to the Turing Test, the easy and hard problems of cognition
* (1:14:33) Over-interpretation of AI systems’ behavior, sentience concerns, T3 and evidence for sentience
* (1:24:35) Prof. Harnad’s commentary on claims in The Vector Grounding Problem
* (1:28:05) RLHF and grounding, LLMs’ (ungrounded) capabilities, syntactic structure and propositions
* (1:35:30) Multimodal AI systems (image-text and robotic) and grounding, compositionality
* (1:42:50) Chomsky’s Universal Grammar, LLMs and T2
* (1:50:55) T3 and cognitive simulation
* (1:57:34) Outro
Links:
* Professor Harnad’s webpage and skywritings
* Papers:
* Category Induction and Representation
* From Sensorimotor Categories to Grounded Symbols
* Minds, machines and Searle 2
* The Latent Structure of Dictionaries
In episode 87 of The Gradient Podcast, Daniel Bashir speaks to Professor Terry Winograd.
Professor Winograd is Professor Emeritus of Computer Science at Stanford University. His research focuses on human-computer interaction design and the design of technologies for development. He founded the Stanford Human-Computer Interaction Group, where he directed the teaching programs and HCI research. He is also a founding faculty member of the Stanford d.school and a founding member and past president of Computer Professionals for Social Responsibility.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (03:00) Professor Winograd’s background
* (05:10) At the MIT AI Lab
* (05:45) The atmosphere in the MIT AI Lab, Minsky/Chomsky debates
* (06:20) Blue-sky research, government funding for academic research
* (10:10) Isolation and collaboration between research groups
* (11:45) Phases in the development of ideas and how cross-disciplinary work fits in
* (12:26) SHRDLU and the MIT AI Lab’s intellectual roots
* (17:20) Early responses to SHRDLU: Minsky, Dreyfus, others
* (20:55) How Prof. Winograd’s thinking about AI’s abilities and limitations evolved
* (22:25) How this relates to current AI systems and discussions of intelligence
* (23:47) Repetitive debates in AI, semantics and grounding
* (27:00) The concept of investment, care, trust in human communication vs machine communication
* (28:53) Projecting human-ness onto AI systems and non-human things and what this means for society
* (31:30) Time after leaving MIT in 1973, time at Xerox PARC, how Winograd’s thinking evolved during this time
* (38:28) What Does It Mean to Understand Language? Speech acts, commitments, and the grounding of language
* (42:40) Reification of representations in science and ML
* (46:15) LLMs, their training processes, and their behavior
* (49:40) How do we coexist with systems that we don’t understand?
* (51:20) Progress narratives in AI and human agency
* (53:30) Transitioning to intelligence augmentation, founding the Stanford HCI group and d.school, advising Larry Page and Sergey Brin
* (1:01:25) Chatbots and how we consume information
* (1:06:52) Evolutions in journalism, progress in trust for modern AI systems
* (1:09:18) Shifts in the social contract, from institutions to personalities
* (1:12:05) AI and HCI in recent years
* (1:17:05) Philosophy of design and the d.school
* (1:21:20) Designing AI systems for people
* (1:25:10) Prof. Winograd’s perspective on watermarking for detecting GPT outputs
* (1:25:55) The politics of being a technologist
* (1:30:10) Echoes of the past in AI regulation and competition, and learning from history
* (1:32:34) Outro
Links:
* Professor Winograd’s Homepage
* Papers/topics discussed:
* SHRDLU
* Beyond Programming Languages
* What Does It Mean to Understand Language?
* The PageRank Citation Ranking
* Stanford Digital Libraries project
* Talk: My Politics as a Technologist
In episode 86 of The Gradient Podcast, Daniel Bashir speaks to Professor Gil Strang.
Professor Strang is one of the world’s foremost mathematics educators and a mathematician with contributions to finite element theory, the calculus of variations, wavelet analysis, and linear algebra. He has spent six decades teaching mathematics at MIT, where he was the MathWorks Professor of Mathematics. He was among the first MIT faculty members to publish a course on MIT’s OpenCourseware and has since championed both linear algebra education and open courseware.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:00) Professor Strang’s background and journey into teaching linear algebra
* (04:55) Undergrad interests
* (07:10) Writing textbooks
* (10:20) Prof. Strang’s interests in deep learning
* (11:00) How Professor Strang thought about teaching early on
* (16:20) MIT OpenCourseWare and education accessibility
* (19:50) Prof Strang’s applied/example-based approach to teaching linear algebra and closing the theory-practice gap
* (22:00) Examples!
* (27:20) Orthogonality
* (29:15) Singular values
* (34:40) Professor Strang’s favorite topics in linear algebra
* (37:55) Pedagogical approaches to deep learning, mathematical ingredients of deep learning’s complexity
* (42:04) Generalization and double descent in deep learning, powers and limitations
* (46:20) Did deep learning have to evolve as it did?
* (48:30) Teaching deep learning to younger students
* (50:50) How Prof. Strang’s approach to teaching linear algebra has evolved over time
* (53:00) The Four Fundamental Subspaces
* (56:15) Reflections on a career in teaching
* (59:49) Outro
In episode 85 of The Gradient Podcast, Andrey Kurenkov speaks to Anant Agarwal.
Anant Agarwal is the chief platform officer of 2U, and founder of edX. Anant taught the first edX course on circuits and electronics from MIT, which drew 155,000 students from 162 countries. He has served as the director of CSAIL, MIT's Computer Science and Artificial Intelligence Laboratory, and is a professor of electrical engineering and computer science at MIT. He is a successful serial entrepreneur, having co-founded several companies including Tilera Corporation, which created the Tile multicore processor, and Virtual Machine Works.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:30) History with research
* (05:56) Founding edX
* (13:05) AI at edX
* (18:40) Reaction to AI as a teacher
* (25:00) Student interest in AI
* (32:20) AI’s impact on academia
* (35:00) Future of AI in education
* (38:25) AI writing essays
* (43:38) Experiences playing with ChatGPT
In episode 84 of The Gradient Podcast, Daniel Bashir speaks to Professor Raphaël Millière.
Professor Millière is a Lecturer (Assistant Professor) in the Philosophy of Artificial Intelligence at Macquarie University in Sydney, Australia. Previously, he was the 2020 Robert A. Burt Presidential Scholar in Society and Neuroscience in Columbia University’s Center for Science and Society, and completed his DPhil in philosophy at the University of Oxford, where he focused on self-consciousness.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:20) Prof. Millière’s background
* (08:07) AI + philosophy questions and the human side / empiricism
* (18:38) Putting aside metaphysical issues
* (20:28) Prof. Millière’s work on self-consciousness, does consciousness constitutively involve self-consciousness?
* (32:05) Relationship to recent pronouncements about AI sentience
* (41:54) Chatbots’ self-presentation as having a “self”
* (51:05) Intro to grounding and related concepts
* (1:00:06) The different types of grounding
* (1:08:48) Lexical representations and things in the world, distributional hypothesis, concepts in LLMs
* (1:21:40) Representational content and overcoming the vector grounding problem
* (1:32:01) Causal-informational relations and teleology
* (1:43:45) Levels of grounding, extralinguistic aspects of meaning
* (1:52:12) Future problems and ongoing projects
* (2:04:05) Outro
Links:
* Professor Millière’s homepage and Twitter
* Research
* Are There Degrees of Self-Consciousness?
* The Varieties of Selflessness
* The Vector Grounding Problem
In episode 83 of The Gradient Podcast, Daniel Bashir speaks to Peli Grietzer.
Peli is a scholar whose work borrows mathematical ideas from machine learning theory to think through “ambient” and ineffable phenomena like moods, vibes, cultural logics, and structures of feeling. He is working on a book titled Big Mood: A Transcendental-Computational Essay in Art and contributes to the experimental literature collective Gauss PDF. Peli has a PhD in mathematically informed literary theory from Harvard Comparative Literature in collaboration with the HUJI Einstein Institute of Mathematics.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:17) Peli’s background
* (10:40) Daniel takes 2 entire minutes to ask how Peli thinks about ~ Art ~
* (26:10) Idealism and art as revealing the nature of reality, extralinguistic experiences of truth through literature
* (52:05) The autoencoder as a way to understand Romantic theories of art
* (1:14:55) More on how Peli thinks about autoencoders
* (1:18:05) Connections to ambient meaning, stimmung/mood
* (1:37:18) Examples of poetry/literature as mathematical experience, aesthetic unity and totalizing worldviews
* (1:51:15) Moods clashing within a single work
* (2:10:14) Modernist writers
* (2:32:46) Outro
Links:
* Peli’s Twitter
* Why poetry is a variety of mathematical experience
* Peli’s thesis
In episode 82 of The Gradient Podcast, Daniel Bashir speaks to Ryan Drapeau.
Ryan is a Staff Software Engineer at Stripe and technical lead for Stripe’s Payment Fraud organization, which uses machine learning to help prevent billions of dollars of credit card and payments fraud for businesses every year.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:15) Ryan’s background
* (05:28) Differences between adversarial problems (fraud, content moderation, etc.)
* (08:50) How fraud manifests for businesses
* (11:07) Types of fraud
* (15:49) Fraud as an industry
* (19:05) Information asymmetries between fraudsters and defenders
* (22:40) Fraud as an ML problem and Stripe Radar
* (25:45) Evolution of Stripe Radar
* (31:38) Architectural evolution
* (41:38) Why ResNets for Stripe Radar?
* (44:15) Future architectures for Stripe Radar and the explainability/performance tradeoff
* (48:58) War stories
* (52:55) Federated learning opportunities for Stripe Radar
* (55:50) Vectors for improvement in Stripe’s fraud detection systems
* (59:22) More ways of thinking about the fraud problem, multiclass models
* (1:03:30) Lessons Ryan has picked up from working on fraud
* (1:05:44) Outro
Links:
* How We Built It: Stripe Radar
In episode 81 of The Gradient Podcast, Daniel Bashir speaks to Shiv Rao.
Shiv Rao, MD, is the co-founder and CEO of Abridge, a healthcare conversation company that uses cutting-edge NLP and generative AI to bring context and understanding to every medical conversation. Shiv previously served as an Executive Vice President at UPMC Enterprises, managing the provider-facing portfolio of technology investments and R&D. He is a practicing cardiologist in UPMC's Heart and Vascular Institute.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:34) Shiv’s medicine/technology/VC background
* (05:45) Difficulties for tech in healthcare and how this informs Shiv’s approach
* (10:52) “Productivity with a smile” and how AI can make medicine feel more human
* (12:35) Shiv’s experiences in medicine and how Abridge’s product helps doctors
* (16:10) How the role of a clinical team could evolve
* (19:30) Abridge’s partnerships and real-life use cases
* (23:00) Shiv’s perspectives on concerns about bias/trust/privacy
* (25:25) Clinical decision support vs “automating doctors”
* (29:07) Transparency and Abridge’s user experience
* (35:20) Algorithmic solutionism vs human-focused approaches to technology development
* (38:50) Ways AI might impact healthcare
* (41:10) Generative AI applications
* (45:00) Generative AI opportunities beyond documentation
* (49:25) Innovation and reducing friction, UX
* (50:56) Why people make wild predictions about AI
* (54:25) What it means to “automate away” a doctor, how we’re misusing the medical workforce
* (56:10) Shiv’s advice for people interested in AI + healthcare
* (1:00:04) Outro
In episode 80 of The Gradient Podcast, Daniel Bashir speaks to Professor Hugo Larochelle.
Professor Larochelle leads the Montreal Google DeepMind team and is adjunct professor at Université de Montréal and a Canada CIFAR Chair. His research focuses on the study and development of deep learning algorithms.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:38) Prof. Larochelle’s background, working in Bengio’s lab
* (04:53) Prof. Larochelle’s work and connectionism
* (08:20) 2004-2009, work with Bengio
* (08:40) Nonlocal Estimation of Manifold Structure, manifolds and deep learning
* (13:58) Manifold learning in vision and language
* (16:00) Relationship to Denoising Autoencoders and greedy layer-wise pretraining
* (21:00) From input copying to learning about local distribution structure
* (22:30) Zero-Data Learning of New Tasks
* (22:45) The phrase “extend machine learning towards AI” and terminology
* (26:55) Prescient hints of prompt engineering
* (29:10) Daniel goes on a totally unnecessary tangent
* (30:00) Methods for training deep networks (strategies and robust interdependent codes)
* (33:45) Motivations for layer-wise pretraining
* (35:15) Robust Interdependent Codes and interactions between neurons in a single network layer
* (39:00) 2009-2011, postdoc in Geoff Hinton’s lab
* (40:00) Reflections on the AlexNet moment
* (41:45) Frustration with methods for evaluating unsupervised methods, NADE
* (44:45) How researchers thought about representation learning, toying with objectives instead of architectures
* (47:40) The Restricted Boltzmann Forest
* (50:45) Imposing structure for tractable learning of distributions
* (53:11) 2011-2016 at U Sherbrooke (and Twitter)
* (53:45) How Prof. Larochelle approached research problems
* (56:00) How Domain Adversarial Networks came about
* (57:12) Can we still learn from Restricted Boltzmann Machines?
* (1:02:20) The ~ Infinite ~ Restricted Boltzmann Machine
* (1:06:55) The need for researchers doing different sorts of work
* (1:08:58) 2017-present, at MILA (and Google)
* (1:09:30) Modulating Early Visual Processing by Language, neuroscientific inspiration
* (1:13:22) Representation learning and generalization, what is a good representation (Meta-Dataset, Universal representation transformer layer, universal template, Head2Toe)
* (1:15:10) Meta-Dataset motivation
* (1:18:00) Shifting focus to the problem—good practices for “recycling deep learning”
* (1:19:15) Head2Toe intuitions
* (1:21:40) What are “universal representations” and manifold perspective on datasets, what is the right pretraining dataset
* (1:26:02) Prof. Larochelle’s takeaways from Fortuitous Forgetting in Connectionist Networks (led by Hattie Zhou)
* (1:32:15) Obligatory commentary on The Present Moment and current directions in ML
* (1:36:18) The creation and motivations of the TMLR journal
* (1:41:48) Prof. Larochelle’s takeaways about doing good science, building research groups, and nurturing a research environment
* (1:44:05) Prof. Larochelle’s advice for aspiring researchers today
* (1:47:41) Outro
Links:
* Professor Larochelle’s homepage and Twitter
* Transactions on Machine Learning Research
* Papers
* 2004-2009
* Nonlocal Estimation of Manifold Structure
* Classification using Discriminative Restricted Boltzmann Machines
* Zero-data learning of new tasks
* Exploring Strategies for Training Deep Neural Networks
* Deep Learning using Robust Interdependent Codes
* 2009-2011
* Stacked Denoising Autoencoders
* Tractable multivariate binary density estimation and the restricted Boltzmann forest
* The Neural Autoregressive Distribution Estimator
* Learning Attentional Policies for Tracking and Recognition in Video with Deep Networks
* 2011-2016
* Practical Bayesian Optimization of Machine Learning Algorithms
* Learning Algorithms for the Classification Restricted Boltzmann Machine
* A neural autoregressive topic model
* Domain-Adversarial Training of Neural Networks
* NADE
* An Infinite Restricted Boltzmann Machine
* 2017-present
* Modulating early visual processing by language
* A Universal Representation Transformer Layer for Few-Shot Image Classification
* Learning a universal template for few-shot dataset generalization
* Impact of aliasing on generalization in deep convolutional networks
* Head2Toe: Utilizing Intermediate Representations for Better Transfer Learning
* Fortuitous Forgetting in Connectionist Networks
In episode 79 of The Gradient Podcast, Daniel Bashir speaks to Jeremie Harris.
Jeremie is co-founder of Gladstone AI, author of the book Quantum Physics Made Me Do It, and co-host of the Last Week in AI Podcast. Jeremie previously hosted the Towards Data Science podcast and worked on a number of other startups after leaving a PhD in physics.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:37) Jeremie’s physics background and transition to ML
* (05:19) The physicist-to-AI person pipeline, how Jeremie’s background impacts his approach to AI
* (08:20) A tangent on inflationism/deflationism about natural laws (I promise this applies to AI)
* (11:45) How ML implies a particular viewpoint on the above question
* (13:20) Jeremie’s first (recommendation systems) company, how startup founders can make mistakes even when they’ve read Paul Graham essays
* (17:30) Classic startup wisdom, different sorts of startups
* (19:35) OpenAI’s approach in shipping features for DALL-E 2 and generation vs. discrimination as an approach to product
* (24:55) Capabilities and risk
* (26:43) Commentary on fundamental limitations of alignment in LLMs
* (30:45) Intrinsic difficulties in alignment problems
* (41:15) Daniel tries to steel man / defend anti-longtermist arguments (nicely :) )
* (46:23) Anthropic’s paper on asking models to be less biased
* (47:20) Why Jeremie is excited about Anthropic’s Constitutional AI scheme
* (51:05) Jeremie’s thoughts on recent Eliezer discourse
* (56:50) Cheese / task vectors and steerability/controllability in LLMs
* (59:50) Difficulty of one-shot solutions in alignment work, better strategies
* (1:02:00) Lack of theoretical understanding of deep learning systems / alignment
* (1:04:50) Jeremie’s work and perspectives on AI policy
* (1:10:00) Incrementality in convincing policymakers
* (1:14:00) How recent developments impact policy efforts
* (1:16:20) Benefits and drawbacks of open source
* (1:19:30) Arguments in favor of (limited) open source
* (1:20:35) Quantum Physics (not Mechanics) Made Me Do It
* (1:24:10) Some theories of consciousness and corresponding physics
* (1:29:49) Outro
Links:
* Jeremie’s Twitter
* Quantum Physics Made Me Do It
In episode 78 of The Gradient Podcast, Daniel Bashir speaks to Antoine Blondeau.
Antoine is a serial AI entrepreneur and Co-Founder and Managing Partner of Alpha Intelligence Capital. He was chief executive at Dejima when the firm worked on CALO, one of the biggest AI projects in US history and precursor to Apple’s Siri. Later, he co-founded Sentient Technologies, which boasted the title of world’s highest funded AI company in 2016. In 2018, he founded Alpha Intelligence Capital to support future AI unicorns, and has raised more than $300 million, which has been deployed into 31 companies.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:30) Antoine’s background
* (04:00) Dejima and the CALO cognitive assistant (the precursor to Siri)
* (07:35) More detail on CALO
* (10:10) Sentient Technologies and entrepreneurship during the AlexNet moment
* (14:35) Early predictions on scale
* (17:15) Role of evolutionary computation and neuroevolution
* (20:00) Antoine’s motivations for becoming an investor
* (22:30) Alpha Intelligence Capital’s investment focus
* (27:40) Safety and trust issues in fully automated systems
* (37:00) Models of culture, discernment in the use of AI systems
* (39:30) Antoine’s experience as an investor in today’s AI environment
* (44:43) How modern LLMs impact standard advice regarding the appropriateness of cutting-edge technologies in business
* (49:00) Data (and other) moats
* (52:07) Application/research areas Antoine is watching
* (55:00) Antoine’s advice for people watching AI’s current developments
* (58:47) Outro
Links:
* Alpha Intelligence Capital Homepage
In episode 77 of The Gradient Podcast, Daniel Bashir speaks to Joon Park.
Joon is a third-year PhD student at Stanford, advised by Professors Michael Bernstein and Percy Liang. He designs, builds, and evaluates interactive systems that support new forms of human-computer interaction by leveraging state-of-the-art advances in natural language processing such as large language models. His research introduced the concept of, and the techniques for building generative agents—computational software agents that simulate believable human behavior. Joon’s work has been supported by the Microsoft Research PhD Fellowship, the Stanford School of Engineering Fellowship, and the Siebel Scholarship.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:43) Joon’s path from studio art to social computing / AI
* (05:00) Joon’s perspectives on Human-Computer Interaction (HCI) and its recent evolution
* (06:45) How foundation models enter the picture
* (10:28) On slow algorithms and technology: A Slow Algorithm Improves Users’ Assessments of the Algorithm’s Accuracy
* (12:10) Motivations
* (17:55) The jellybean-counting task, hypotheses
* (22:00) Applications and takeaways
* (28:05) Deliberate engagement in social media / computing systems, incentives
* (32:55) Daniel rants about The Social Dilemma + anti-social media rhetoric, Joon on the role of academics, framings of addiction
* (39:05) Measuring the Prevalence of Anti-Social Behavior in Online Communities
* (48:30) Statistics on anti-social behavior and anecdotal information, limitations in the paper’s measurements
* (51:45) Participatory and value-sensitive design
* (52:50) “Interaction” in On the Opportunities and Risks of Foundation Models
* (53:45) Broader insights on foundation models and emergent behavior
* (56:50) Joon’s section on interaction
* (1:01:05) Daniel’s bad segue to Social Simulacra: Creating Populated Prototypes for Social Computing Systems
* (1:02:50) Context for Social Simulacra and Generative Agents, why Social Simulacra was tackled first
* (1:24:05) The value of norms
* (1:26:20) Collaborations between designers and developers of social simulacra
* (1:30:00) Generative Agents: Interactive Simulacra of Human Behavior
* (1:30:30) Context / intro
* (1:45:10) On (too much) coherence in generative agents and believability
* (1:52:02) Instruction tuning’s impact on generative agents, model alignment w/ believability goals, desirability of agent conflict / toxic LLMs
* (1:56:55) Release strategies and toxicity in LLMs
* (2:03:05) On designing interfaces and responsible use
* (2:09:05) Capability advances and the capability-safety research gap
* (2:14:12) Worries about LLM integration, human-centered framework for technology release / LLM incorporation
* (2:18:00) Joon’s philosophy as an HCI researcher
* (2:20:39) Outro
Links:
* Research
* A Slow Algorithm Improves Users’ Assessments of the Algorithm’s Accuracy
* Measuring the Prevalence of Anti-Social Behavior in Online Communities
* On the Opportunities and Risks of Foundation Models
* Social Simulacra: Creating Populated Prototypes for Social Computing Systems
* Generative Agents: Interactive Simulacra of Human Behavior
In episode 76 of The Gradient Podcast, Andrey Kurenkov speaks to Dr. Christoffer Holmgård.
Dr. Holmgård is a co-founder and the CEO of Modl.ai, which is building an AI engine for game development. Before starting the company, Christoffer was director of the indie game studio Die Gute Fabrik (German for "The Good Factory"), and has also done extensive research as an assistant professor in AI and Machine Learning for Games at Northeastern University.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:30) History with video games
* (06:30) History with AI
* (09:40) Modeling stress responses in virtual environments
* (13:30) Play style personas from empirical data
* (17:15) Automating video game testing
* (21:00) Video game development
* (28:15) modl.ai
* (33:45) Automated playtesting with procedural personas through MCTS with evolved heuristics
* (35:40) Thoughts on recent AI progress
* (40:50) RL for game testing
* (44:40) AI in Minecraft
* (47:50) Impact of AI on video game development
* (01:00:00) Ethics of Gen AI
* (01:06:20) Hobbies / Interests
* (01:08:30) Outro
In episode 75 of The Gradient Podcast, Daniel Bashir speaks to Riley Goodside.
Riley is a Staff Prompt Engineer at Scale AI. He began posting GPT-3 prompt examples and screenshot demonstrations in 2022, and previously worked as a data scientist at OkCupid, Grindr, and CopyAI.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:37) Riley’s journey to becoming the first Staff Prompt Engineer
* (02:00) Data science background in the online dating industry
* (02:15) Sabbatical + catching up on LLM progress
* (04:00) AI Dungeon and first taste of GPT-3
* (05:10) Developing on Codex, ideas about integrating Codex with Jupyter Notebooks, start of posting on Twitter
* (08:30) “LLM ethnography”
* (09:12) The history of prompt engineering: in-context learning, Reinforcement Learning from Human Feedback (RLHF)
* (10:20) Models used to be harder to talk to
* (10:45) The three eras
* (10:45) 1 - Pre-trained LM era—simple next-word predictors
* (12:54) 2 - Instruction tuning
* (16:13) 3 - RLHF and overcoming instruction tuning’s limitations
* (19:24) Prompting as subtractive sculpting, prompting and AI safety
* (21:17) Riley on RLHF and safety
* (24:55) Riley’s most interesting experiments and observations
* (25:50) Mode collapse in RLHF models
* (29:24) Prompting models with very long instructions
* (33:13) Explorations with regular expressions, chain-of-thought prompting styles
* (36:32) Theories of in-context learning and prompting, why certain prompts work well
* (42:20) Riley’s advice for writing better prompts
* (49:02) Debates over prompt engineering as a career, relevance of prompt engineers
* (58:55) Outro
Links:
* Riley’s Twitter and LinkedIn
* Talk: LLM Prompt Engineering and RLHF: History and Techniques
In episode 74 of The Gradient Podcast, Daniel Bashir speaks to Professor Talia Ringer.
Professor Ringer is an Assistant Professor with the Programming Languages, Formal Methods, and Software Engineering group at the University of Illinois at Urbana-Champaign. Their research leverages proof engineering to allow programmers to more easily build formally verified software systems.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Daniel’s long annoying intro
* (02:15) Origin Story
* (04:30) Why / when formal verification is important
* (06:40) Concerns about ChatGPT/AutoGPT et al failures, systems for accountability
* (08:20) Difficulties in making formal verification accessible
* (11:45) Tactics and interactive theorem provers, interface issues
* (13:25) How Prof Ringer’s research first crossed paths with ML
* (16:00) Concrete problems in proof automation
* (16:15) How ML can help people verify software systems
* (20:05) Using LLMs for understanding / reasoning about code
* (23:05) Going from tests / formal properties to code
* (31:30) Is deep learning the right paradigm for dealing with relations for theorem proving?
* (36:50) Architectural innovations, neuro-symbolic systems
* (40:00) Hazy definitions in ML
* (41:50) Baldur: Proof Generation & Repair with LLMs
* (45:55) In-context learning’s effectiveness for LLM-based theorem proving
* (47:12) LLMs without fine-tuning for proofs
* (48:45) Something ~ surprising ~ about Baldur results (maybe clickbait or maybe not)
* (49:32) Asking models to construct proofs with restrictions, translating proofs to formal proofs
* (52:07) Methods of proofs and relative difficulties
* (57:45) Verifying / providing formal guarantees on ML systems
* (1:01:15) Verifying input-output behavior and basic considerations, nature of guarantees
* (1:05:20) Certified/verified systems vs certifying/verifying systems—getting LLMs to spit out proofs along with code
* (1:07:15) Interpretability and how much model internals matter, RLHF, mechanistic interpretability
* (1:13:50) Levels of verification for deploying ML systems, HCI problems
* (1:17:30) People (Talia) actually use Bard
* (1:20:00) Dual-use and “correct behavior”
* (1:24:30) Good uses of jailbreaking
* (1:26:30) Talia’s views on evil AI / AI safety concerns
* (1:32:00) Issues with talking about “intelligence,” assumptions about what “general intelligence” means
* (1:34:20) Difficulty in having grounded conversations about capabilities, transparency
* (1:39:20) Great quotation to steal for your next thinkpiece + intelligence as socially defined
* (1:42:45) Exciting research directions
* (1:44:48) Outro
Links:
* Talia’s Twitter and homepage
* Research
* Concrete Problems in Proof Automation
* Baldur: Whole-Proof Generation and Repair with LLMs
In episode 73 of The Gradient Podcast, Daniel Bashir speaks to Brigham Hyde.
Brigham is Co-Founder and CEO of Atropos Health. Prior to Atropos, he served as President of Data and Analytics at Eversana, a life sciences commercialization service provider. At Symphony AI, he led the investment in Concert AI in the oncology real-world data space. Brigham has also held research faculty positions at Tufts University and the MIT Media Lab.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:55) Brigham’s background
* (06:00) Current challenges in healthcare
* (12:33) Interpretability and delivering positive patient outcomes
* (17:10) How Atropos surfaces relevant data for patient interventions, on personalized observational research studies
* (22:10) Quality and quantity of data for patient interventions
* (27:25) Challenges and opportunities for generative AI in healthcare
* (35:17) Database augmentation for generative models
* (36:25) Future work for Atropos
* (39:15) Future directions for AI + healthcare
* (40:56) Outro
Links:
* Brigham’s Twitter and LinkedIn
In episode 72 of The Gradient Podcast, Daniel Bashir speaks to Professor Scott Aaronson.
Scott is the Schlumberger Centennial Chair of Computer Science at the University of Texas at Austin and director of its Quantum Information Center. His research interests focus on the capabilities and limits of quantum computers and computational complexity theory more broadly. He has recently been on leave to work at OpenAI, where he is researching theoretical foundations of AI safety.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:45) Scott’s background
* (02:50) Starting grad school in AI, transitioning to quantum computing and the AI / quantum computing intersection
* (05:30) Where quantum computers can give us exponential speedups, simulation overhead, Grover’s algorithm
* (10:50) Overselling of quantum computing applied to AI, Scott’s analysis on quantum machine learning
* (18:45) ML problems that involve quantum mechanics and Scott’s work
* (21:50) Scott’s recent work at OpenAI
* (22:30) Why Scott was skeptical of AI alignment work early on
* (26:30) Unexpected improvements in modern AI and Scott’s belief update
* (32:30) Preliminary Analysis of DALL-E 2 (Marcus & Davis)
* (34:15) Watermarking GPT outputs
* (41:00) Motivations for watermarking and language model detection
* (45:00) Ways around watermarking
* (46:40) Other aspects of Scott’s experience with OpenAI, theoretical problems
* (49:10) Thoughts on definitions for humanistic concepts in AI
* (58:45) Scott’s “reform AI alignment stance” and Eliezer Yudkowsky’s recent comments (+ Daniel pronounces Eliezer wrong), orthogonality thesis, cases for stopping scaling
* (1:08:45) Outro
Links:
* Scott’s blog
* AI-related work
* Quantum Machine Learning Algorithms: Read the Fine Print
* A very preliminary analysis of DALL-E 2 w/ Marcus and Davis
* New AI classifier for indicating AI-written text and Watermarking GPT Outputs
* Writing
In episode 71 of The Gradient Podcast, Daniel Bashir speaks to Ted Underwood.
Ted is a professor in the School of Information Sciences with an appointment in the Department of English at the University of Illinois at Urbana-Champaign. Trained in English literary history, he turned his research focus to applying machine learning to large digital collections. His work explores literary patterns that become visible across long timelines when we consider many works at once—often, his work involves correcting and enriching digital collections to make them more amenable to interesting literary research.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:42) Ted’s background / origin story
* (04:35) Context in interpreting statistics, “you need a model,” the need for data about human responses to literature and how that manifested in Ted’s work
* (07:25) The recognition that we can model literary prestige/genre because of ML
* (08:30) Distant reading and the import of statistics over large digital libraries
* (12:00) Literary prestige
* (12:45) How predictable is fiction? Scales of predictability in texts
* (13:55) Degrees of autocorrelation in biography and fiction and the structure of narrative, how LMs might offer more sophisticated analysis
* (15:15) Braided suspense / suspense at different scales of a story
* (17:05) The Literary Uses of High-Dimensional Space: how “big data” came to impact the humanities, skepticism from humanists and responses, what you can do with word count
* (20:50) Why we could use more time to digest statistical ML—how acceleration in AI advances might impact pedagogy
* (22:30) The value in explicit models
* (23:30) Poetic “revolutions” and literary prestige
* (25:53) Distant vs. close reading in poetry—follow-up work for “The Longue Durée”
* (28:20) Sophistication of NLP and approaching the human experience
* (29:20) What about poetry renders it prestigious?
* (32:20) Individualism/liberalism and evolution of poetic taste
* (33:20) Why there is resistance to quantitative approaches to literature
* (34:00) Fiction in other languages
* (37:33) The Life Cycles of Genres
* (38:00) The concept of “genre”
* (41:00) Inflationary/deflationary views on natural kinds and genre
* (44:20) Genre as a social and not a linguistic phenomenon
* (46:10) Will causal models impact the humanities?
* (48:30) (Ir)reducibility of cultural influences on authors
* (50:00) Machine Learning and Human Perspective
* (50:20) Fluent and perspectival categories—Miriam Posner on “the radical, unrealized potential of digital humanities.”
* (52:52) How ML’s vices can become virtues for humanists
* (56:05) Can We Map Culture? and The Historical Significance of Textual Distances
* (56:50) Are cultures and other social phenomena related to one another in a way we can “map”?
* (59:00) Is cultural distance Euclidean?
* (59:45) The KL Divergence’s use for humanists
* (1:03:32) We don’t already understand the broad outlines of literary history
* (1:06:55) Science Fiction Hasn’t Prepared us to Imagine Machine Learning
* (1:08:45) The latent space of language and what intelligence could mean
* (1:09:30) LLMs as models of culture
* (1:10:00) What it is to be a human in “the age of AI” and Ezra Klein’s framing
* (1:12:45) Mapping the Latent Spaces of Culture
* (1:13:10) Ted on Stochastic Parrots
* (1:15:55) The risk of AI enabling hermetically sealed cultures
* (1:17:55) “Postcards from an unmapped latent space,” more on AI systems’ limitations as virtues
* (1:20:40) Obligatory GPT-4 section
* (1:21:00) Using GPT-4 to estimate passage of time in fiction
* (1:23:39) Is deep learning more interpretable than statistical NLP?
* (1:25:17) The “self-reports” of language models: should we trust them?
* (1:26:50) University dependence on tech giants, open-source models
* (1:31:55) Reclaiming Ground for the Humanities
* (1:32:25) What scientists, alone, can contribute to the humanities
* (1:34:45) On the future of the humanities
* (1:35:55) How computing can enable humanists as humanists
* (1:37:05) Human self-understanding as a collaborative project
* (1:39:30) Is anything ineffable? On what AI systems can “grasp”
* (1:43:12) Outro
Links:
* Research
* The literary uses of high-dimensional space
* The Longue Durée of literary prestige
* The Historical Significance of Textual Distances
* Machine Learning and Human Perspective
* Cohort Succession Explains Most Change in Literary Culture
* Other Writing
* Reclaiming Ground for the Humanities
* We don’t already understand the broad outlines of literary history
* Science fiction hasn’t prepared us to imagine machine learning.
* Mapping the latent spaces of culture
* Using GPT-4 to measure the passage of time in fiction
In episode 70 of The Gradient Podcast, Daniel Bashir speaks to Irene Solaiman.
Irene is an expert in AI safety and policy and the Policy Director at Hugging Face, where she conducts social impact research and develops public policy. In her former role at OpenAI, she initiated and led bias and social impact research in addition to leading public policy. She built AI policy at Zillow Group and advised policymakers on responsible autonomous decision-making and privacy as a fellow at Harvard’s Berkman Klein Center.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:00) Intro to Irene and her work
* (03:45) What tech people need to learn about policy, and vice versa
* (06:35) Societal impact—words and reality, Irene’s experience
* (08:30) OpenAI work on GPT-2 and release strategies (yes, this was recorded on Pi Day)
* (11:00) Open-source proponents and release
* (14:00) What does a multidisciplinary approach to working on AI look like?
* (16:30) Thinking about end users and enabling contributors with different sets of expertise
* (18:00) “Preparing for AGI” and current approaches to release
* (21:00) Who constitutes a researcher? What constitutes safety and who gets resourced? Limitations in red-teaming potentially dangerous systems.
* (22:35) PALMS and Values-Targeted Datasets
* (25:52) PALMS and RLHF
* (27:00) Homogenization in foundation models, cultural contexts
* (29:45) Anthropic’s moral self-correction paper and Irene’s concerns about marketing “de-biasing” and oversimplification
* (31:50) Data work, human systemic problems → AI bias
* (33:55) Why do language models get more toxic as they get larger? (if you have ideas, let us know!)
* (35:45) The gradient of generative AI release, Irene’s experience with the open-source world, tradeoffs along the release gradient
* (38:40) More on Irene’s orientation towards release
* (39:40) Pragmatics of keeping models closed, dealing with open-source by force
* (42:22) Norm setting for release and use, normalization of documentation on social impacts
* (46:30) Race dynamics :(
* (49:45) Resource allocation and advances in ethics/policy, conversations on integrity and disinformation
* (53:10) Organizational goals, balancing technical research with policy work
* (58:10) Thoughts on governments’ AI policies, impact of structural assumptions
* (1:04:00) Approaches to AI-generated sexual content, need for more voices represented in conversations about AI
* (1:08:25) Irene’s suggestions for AI practitioners / technologists
* (1:11:24) Outro
Links:
* Irene’s homepage and Twitter
* Papers
* Release Strategies and the Social Impacts of Language Models
* Hugh Zhang’s open letter in The Gradient from 2019
* Process for Adapting Large Models to Society (PALMS) with Values-Targeted Datasets
* The Gradient of Generative AI Release: Methods and Considerations
In episode 69 of The Gradient Podcast, Daniel Bashir speaks to Drago Anguelov.
Drago is currently a Distinguished Scientist and Head of Research at Waymo, which he joined in 2018. Earlier, he spent eight years at Google working on 3D vision and pose estimation for Street View, and later led a research team that developed computer vision systems for annotating Google Photos. He has been involved in developing popular neural network methods such as the Inception architecture and the SSD detector. Before joining Waymo, he also led the 3D perception team at Zoox.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:04) Drago’s background in AI and self-driving, work with Daphne Koller + Sebastian Thrun, computer vision / pose estimation
* (14:20) One- and two-stage object detectors
* (15:15) Early experiences and thoughts on self-driving and its prospects
* (21:00) An introduction to the “self-driving stack”: mapping & localization, perception, behavior modeling & planning, simulation
* (29:25) On Stuart Russell’s comments about early Waymo’s “old-fashioned” approach
* (37:34) Scaling 3D Detection: challenges and architectural innovations
* (43:20) Behavior modeling: making decisions and modeling interactions in multi-agent environments
* (52:42) Distributional RL (+ imitation learning) in self-driving?
* (54:10) The Waymo Open Dataset
* (1:01:48) Looking forward in self-driving
* (1:04:36) Outro
Links:
* Drago’s LinkedIn and Twitter
* Research
* SSD: Single-Shot Multibox Detector
* SCAPE: Shape completion and animation of people
* Behavior Models for Autonomous Driving
* Symphony: Learning Realistic and Diverse Agents for Autonomous Driving Simulation
* Scaling 3D Detection to the Long Tail
In episode 68 of The Gradient Podcast, Daniel Bashir speaks to Professor Joanna Bryson.
Professor Bryson is Professor of Ethics and Technology at the Hertie School, where her research focuses on the impact of technology on human cooperation and AI/ICT governance. Professor Bryson has advised companies, governments, transnational agencies, and NGOs, particularly in AI policy. She is one of the few people doing this sort of work who has both a PhD and work experience in AI and advanced degrees in the social sciences. Though she started her academic career in the liberal arts, she publishes regularly in the natural sciences.
Have suggestions for future podcast guests (or other feedback)? Let us know here!
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:35) Intro to Professor Bryson’s work
* (06:37) Shifts in backgrounds expected of AI PhDs/researchers
* (09:40) Master’s degree in Edinburgh, Behavior-Based AI
* (11:00) PhD, differences between MIT’s engineering focus and Edinburgh, systems engineering + AI
* (16:15) Comments on ways you can make contributions in AI
* (18:45) When definitions of “intelligence” are important
* (24:23) Non- and proto-linguistic aspects of intelligence, arguments about text as a description of human experience
* (31:45) Cognitive leaps in interacting with language models
* (37:00) Feelings of affiliation for robots, phenomenological experience in humans and (not) in AI systems
* (42:00) Language models and technological systems as cultural artifacts, expressing agency through machines
* (44:15) Capabilities development and moral patient status in AI systems
* (51:20) Prof. Bryson’s perspectives on recent AI regulation
* (1:00:55) Responsibility and recourse, Uber self-driving crash
* (1:07:30) “Preparing for AGI,” “Living with AGI,” how to respond to recent AI developments
* (1:12:18) Outro
Links:
* Professor Bryson’s homepage and Twitter
* Papers
* Systems AI
* Behavior Oriented Design, action selection, key differences in methodology/views between systems AI researchers and e.g. connectionists
* Agent architecture as object oriented design (1998)
* Cognition
* Age-Related Inhibition and Learning Effects: Evidence from Transitive Performance (2013)
* Primate Errors in Transitive ‘Inference’: A Two-Tier Learning Model (2007)
* Skill Acquisition Through Program-Level Imitation in a Real-Time Domain
* Social learning in a non-social reptile (Geochelone carbonaria) (2010)
* Understanding and Addressing Cultural Variation in Costly Antisocial Punishment (2014)
* Polarization Under Rising Inequality and Economic Decline (2020)
* Semantics derived automatically from language corpora contain human-like biases (2017)
* Ethics/Policy
* Robots should be slaves (2010)
* Standardizing Ethical Design for Artificial Intelligence and Autonomous Systems (2017)
* Of, For, and By the People: The Legal Lacuna of Synthetic Persons (2017)
* Patiency is not a virtue: the design of intelligent systems and systems of ethics (2018)
* Other writing
* Reflections on the EU’s AI Act
* One Day, AI Will Seem as Human as Anyone. What Then?
In episode 67 of The Gradient Podcast, Daniel Bashir speaks to Daniel Situnayake.
Daniel is head of Machine Learning at Edge Impulse. He is co-author of the O’Reilly books "AI at the Edge" and "TinyML". Previously, he worked on the TensorFlow Lite team at Google AI and co-founded Tiny Farms, an insect farming company. Daniel has also lectured in AIDC technologies at Birmingham City University.
Have suggestions for future podcast guests (or other feedback)? Let us know here!
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (1:40) Daniel S’s Origin Story: computer networking, RFID/barcoding, earlier jobs, Tiny Farms, TensorFlow Lite, writing on TinyML, and Edge Impulse
* (15:30) Edge AI and questions of embodiment/intelligence in AI
* (21:00) The role of hardware, other constraints in edge AI
* (25:00) Definitions of intelligence
* (29:45) What is edge AI?
* (37:30) The spectrum of edge devices
* (43:45) Innovations in edge AI (architecture, frameworks/toolchains, quantization)
* (53:45) Model compression tradeoffs in edge
* (1:00:30) Federated learning and challenges
* (1:09:00) Intro to Edge Impulse
* (1:20:30) Feature engineering for edge systems, fairness considerations
* (1:25:50) Edge AI and axes in AI (large/small, ethereal/embodied)
* (1:37:00) Daniel and Daniel go off the rails on panpsychism
* (1:54:20) Daniel’s advice for aspiring AI practitioners
* (1:57:20) Outro
Links:
In episode 66 of The Gradient Podcast, Daniel Bashir speaks to Soumith Chintala.
Soumith is a Research Engineer at Meta AI Research in NYC. He is the co-creator and lead of PyTorch, and maintains a number of other open-source ML projects including Torch-7 and EBLearn. Soumith has previously worked on robotics, object and human detection, generative modeling, AI for video games, and ML systems research.
Have suggestions for future podcast guests (or other feedback)? Let us know here!
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:30) Soumith’s intro to AI and journey to PyTorch
* (05:00) State of computer vision early in Soumith’s career
* (09:15) Institutional inertia and sunk costs in academia, identifying fads
* (12:45) How Soumith started working on GANs, frustrations
* (17:45) State of ML frameworks early in the deep learning era, differentiators
* (23:50) Frameworks and leveling the playing field, exceptions
* (25:00) Contributing to Torch and evolution into PyTorch
* (29:15) Soumith’s product vision for ML frameworks
* (32:30) From product vision to concrete features in PyTorch
* (39:15) Progressive disclosure of complexity (Chollet) in PyTorch
* (41:35) Building an open source community
* (43:25) The different players in today’s ML framework ecosystem
* (49:35) ML frameworks pioneered by Yann LeCun and Léon Bottou, their influences on Pytorch
* (54:37) PyTorch 2.0 and looking to the future
* (58:00) Soumith’s adventures in household robotics
* (1:03:25) Advice for aspiring ML practitioners
* (1:07:10) Be cool like Soumith and subscribe :)
* (1:07:33) Outro
Links:
* Soumith’s Twitter and homepage
* Papers
* Convolutional Neural Networks Applied to House Numbers Digit Classification
* GANs: LAPGAN, DCGAN, Wasserstein GAN
* Automatic differentiation in PyTorch
* PyTorch: An Imperative Style, High-Performance Deep Learning Library
In episode 65 of The Gradient Podcast, Daniel Bashir speaks to Sewon Min.
Sewon is a fifth-year PhD student in the NLP group at the University of Washington, advised by Hannaneh Hajishirzi and Luke Zettlemoyer. She is a part-time visiting researcher at Meta AI and a recipient of the JP Morgan PhD Fellowship. She has previously spent time at Google Research and Salesforce Research.
Have suggestions for future podcast guests (or other feedback)? Let us know here!
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (03:00) Origin Story
* (04:20) Evolution of Sewon’s interests, question-answering and practical NLP
* (07:00) Methodology concerns about benchmarks
* (07:30) Multi-hop reading comprehension
* (09:30) Do multi-hop QA benchmarks actually measure multi-hop reasoning?
* (12:00) How models can “cheat” multi-hop benchmarks
* (13:15) Explicit compositionality
* (16:05) Commonsense reasoning and background information
* (17:30) On constructing good benchmarks
* (18:40) AmbigQA and ambiguity
* (22:20) Types of ambiguity
* (24:20) Practical possibilities for models that can handle ambiguity
* (25:45) FaVIQ and fact-checking benchmarks
* (28:45) External knowledge
* (29:45) Fact verification and “complete understanding of evidence”
* (31:30) Do models do what we expect/intuit in reading comprehension?
* (34:40) Applications for fact-checking systems
* (36:40) Intro to in-context learning (ICL)
* (38:55) Example of an ICL demonstration
* (40:45) Rethinking the Role of Demonstrations and what matters for successful ICL
* (43:00) Evidence for a Bayesian inference perspective on ICL
* (45:00) ICL + gradient descent and what it means to “learn”
* (47:00) MetaICL and efficient ICL
* (49:30) Distance between tasks and MetaICL task transfer
* (53:00) Compositional tasks for language models, compositional generalization
* (55:00) The number and diversity of meta-training tasks
* (58:30) MetaICL and Bayesian inference
* (1:00:30) Z-ICL: Zero-Shot In-Context Learning with Pseudo-Demonstrations
* (1:02:00) The copying effect
* (1:03:30) Copying effect for non-identical examples
* (1:06:00) More thoughts on ICL
* (1:08:00) Understanding Chain-of-Thought Prompting
* (1:11:30) Bayes strikes again
* (1:12:30) Intro to Sewon’s text retrieval research
* (1:15:30) Dense Passage Retrieval (DPR)
* (1:18:40) Similarity in QA and retrieval
* (1:20:00) Improvements for DPR
* (1:21:50) Nonparametric Masked Language Modeling (NPM)
* (1:24:30) Difficulties in training NPM and solutions
* (1:26:45) Follow-on work
* (1:29:00) Important fundamental limitations of language models
* (1:31:30) Sewon’s experience doing a PhD
* (1:34:00) Research challenges suited for academics
* (1:35:00) Joys and difficulties of the PhD
* (1:36:30) Sewon’s advice for aspiring PhDs
* (1:38:30) Incentives in academia, production of knowledge
* (1:41:50) Outro
Links:
* Sewon’s homepage and Twitter
* Papers
* Solving and re-thinking benchmarks
* Multi-hop Reading Comprehension through Question Decomposition and Rescoring / Compositional Questions Do Not Necessitate Multi-hop Reasoning
* AmbigQA: Answering Ambiguous Open-domain Questions
* FaVIQ: FAct Verification from Information-seeking Questions
* Language Modeling
* Rethinking the Role of Demonstrations
* MetaICL: Learning to Learn In Context
* Towards Understanding CoT Prompting
* Z-ICL: Zero-Shot In-Context Learning with Pseudo-Demonstrations
* Text representation/retrieval
* Nonparametric Masked Language Modeling
In episode 64 of The Gradient Podcast, Daniel Bashir speaks to Richard Socher.
Richard is founder and CEO of you.com, a new search engine that lets you personalize your search workflow and eschews tracking and invasive ads. Richard was previously Chief Scientist at Salesforce where he led work on fundamental and applied research, product incubation, CRM search, customer service automation and a cross-product AI platform. He was an adjunct professor at Stanford’s CS department as well as founder and CEO/CTO of MetaMind, which was acquired by Salesforce in 2016. He received his PhD from Stanford’s CS Department in 2014.
Have suggestions for future podcast guests (or other feedback)? Let us know here!
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:20) Richard Socher origin story + time at MetaMind, Salesforce (AI Economist, CTRL, ProGen)
* (22:00) Why Richard advocated for deep learning in NLP
* (27:00) Richard’s perspective on language
* (32:20) Is physical grounding and language necessary for intelligence?
* (40:10) Frankfurtian b******t and language model utterances as truth
* (47:00) Lessons from Salesforce Research
* (53:00) Balancing fundamental research with product focus
* (57:30) The AI Economist + how should policymakers account for limitations?
* (1:04:50) you.com, the chatbot wars, and taking on search giants
* (1:13:50) Re-imagining the vision for and components of a search engine
* (1:18:00) The future of generative models in search and the internet
* (1:28:30) Richard’s advice for early-career technologists
* (1:37:00) Outro
Links:
* Richard’s Twitter
* Papers mentioned
* Semi-Supervised Recursive Autoencoders for Predicting Sentiment Distributions
* Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank
* Grounded Compositional Semantics for Finding and Describing Images with Sentences
* ProGen
* CTRL
In episode 63 of The Gradient Podcast, Daniel Bashir speaks to Joe Edelman.
Joe developed the meaning-based organizational metrics at Couchsurfing.com, then co-founded the Center for Humane Technology with Tristan Harris, and coined the term “Time Well Spent” for a family of metrics adopted by teams at Facebook, Google, and Apple. Since then, he's worked on the philosophical underpinnings for new business metrics, design methods, and political movements. The central idea is to make people's sources of meaning explicit, so that how meaningful or meaningless things are can be rigorously accounted for. His previous career was in HCI and programming language design.
Have suggestions for future podcast guests (or other feedback)? Let us know here!
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro (yes Daniel is trying a new intro format)
* (01:30) Joe’s origin story
* (07:15) Revealed preferences and personal meaning, recommender systems
* (12:30) Is using revealed preferences necessary?
* (17:00) What are values and how do you detect them?
* (24:00) Figuring out what’s meaningful to us
* (28:45) The decline of spaces and togetherness
* (35:00) Individualism and economic/political theory, tensions between collectivism/individualism
* (41:00) What it looks like to build spaces, Habitat
* (47:15) Cognitive effects of social platforms
* (51:45) Atomized communication, re-imagining chat apps
* (55:50) Systems for social groups and medium independence
* (1:02:45) Spaces being built today
* (1:05:15) Joe is building research groups! Get in touch :)
* (1:05:40) Outro
Links:
* Joe's 80-minute lecture on techniques for rebuilding society on meaning (YouTube, transcript)
* The discord for Rebuilding Meaning—join if you'd like to help build ML models or metrics using the methods discussed
* Writing/papers mentioned:
* Tech products (that don’t cause depression and war)
* Values, Preferences, Meaningful Choice
* Social Programming Considered as a Habitat for Groups
* Is Anything Worth Maximizing
* Joe’s homepage, Twitter, and YouTube page
In episode 62 of The Gradient Podcast, Daniel Bashir speaks to Ed Grefenstette.
Ed is Head of Machine Learning at Cohere and an Honorary Professor at University College London. He previously held research scientist positions at Facebook AI Research and DeepMind, following a stint as co-founder and CTO of Dark Blue Labs. Before his time in industry, Ed worked at Oxford’s Department of Computer Science as a lecturer and Fulford Junior Research Fellow at Somerville College. Ed also received his MSc and DPhil from Oxford’s Computer Science Department.
Have suggestions for future podcast guests (or other feedback)? Let us know here!
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:18) The Ed Grefenstette Origin Story
* (08:15) Distributional semantics and Ed’s PhD research
* (14:30) Extending the distributional hypothesis, later Wittgenstein
* (18:00) Recovering parse trees in LMs, can LLMs understand communication and not just bare language?
* (23:15) LMs capture something about pragmatics, proxies for grounding and pragmatics
* (25:00) Human-in-the-loop training and RLHF—what is the essential differentiator?
* (28:15) A convolutional neural network for modeling sentences, relationship to attention
* (34:20) Difficulty of constructing supervised learning datasets, benchmark-driven development
* (40:00) Learning to Transduce with Unbounded Memory, Neural Turing Machines
* (47:40) If RNNs are like finite state machines, where are transformers?
* (51:40) Cohere and why Ed joined
* (56:30) Commercial applications of LLMs and Cohere’s product
* (59:00) Ed’s reply to stochastic parrots and thoughts on consciousness
* (1:03:30) Lessons learned about doing effective science
* (1:05:00) Where does scaling end?
* (1:07:00) Why Cohere is an exciting place to do science
* (1:08:00) Ed’s advice for aspiring ML {researchers, engineers, etc} and the role of communities in science
* (1:11:45) Cohere for AI plug!
* (1:13:30) Outro
Links:
* (some of) Ed’s Papers
* Experimental support for a categorical compositional distributional model of meaning
* Multi-step regression learning
* Towards a formal distributional semantics
* A CNN for modeling sentences
* Teaching machines to read and comprehend
* Reasoning about entailment with neural attention
* Learning to Transduce with Unbounded Memory
* Teaching Artificial Agents to Understand Language by Modelling Reward
* Other things mentioned
* Large language models are not zero-shot communicators (Laura Ruis + others and Ed)
* Looped Transformers as Programmable Computers and our Update 43 covering this paper
* Cohere and Cohere for AI (+ earlier episode w/ Sara Hooker on C4AI)
* David Chalmers interview on AI + consciousness
In episode 61 of The Gradient Podcast, Daniel Bashir speaks to Ken Liu.
Ken is an author of speculative fiction. A winner of the Nebula, Hugo, and World Fantasy awards, he is the author of silkpunk epic fantasy series Dandelion Dynasty and short story collections The Paper Menagerie and Other Stories and The Hidden Girl and Other Stories. Prior to writing full-time, Ken worked as a software engineer, corporate lawyer, and litigation consultant.
Have suggestions for future podcast guests (or other feedback)? Let us know here!
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:00) How Ken Liu became Ken Liu: A Saga
* (03:10) Time in the tech industry, interest in symbolic machines
* (04:40) Determining what stories to write
* (07:00) Art as failed communication
* (07:55) Law as creating abstract machines, importance of successful communication, stories in law
* (13:45) Misconceptions about science fiction
* (18:30) How we’ve been misinformed about literature and stories in school, stories as expressing multivalent truths
* (29:00) Dickens on narration
* (31:20) Stories as imposing structure on the world
* (35:25) Silkpunk as aesthetic and writing approach
* (39:30) If modernity is a translated experience, what is it translated from? Alternative sources for the American pageant
* (47:30) The value of silkpunk for technologists and building the future
* (52:40) The engineer as poet
* (59:00) Technology language as constructing societies, what it is to be a technologist
* (1:04:00) The technology of language
* (1:06:10) The Google Wordcraft Workshop and co-writing with LaMDA
* (1:14:10) Possibilities and limitations of LMs in creative writing
* (1:18:45) Ken’s short fiction
* (1:19:30) Short fiction as a medium
* (1:24:45) “The Perfect Match” (from The Paper Menagerie and other stories)
* (1:34:00) Possibilities for better recommender systems
* (1:39:35) “Real Artists” (from The Hidden Girl and other stories)
* (1:47:00) The scaling hypothesis and creativity
* (1:50:25) “The Gods have not died in vain” & Moore’s Proof epigraph (The Hidden Girl)
* (1:53:10) More of The Singularity Trilogy (The Hidden Girl)
* (1:58:00) The role of science fiction today and how technologists should engage with stories
* (2:01:53) Outro
Links:
* The Dandelion Dynasty Series: Speaking Bones is out in paperback
* Books/Stories/Projects Mentioned
* “Evaluative Soliloquies” in Google Wordcraft
* The Paper Menagerie and Other Stories
* The Hidden Girl and Other Stories
In episode 60 of The Gradient Podcast, Daniel Bashir speaks to Hattie Zhou.
Hattie is a PhD student at the Université de Montréal and Mila. Her research focuses on understanding how and why neural networks work, based on the belief that the performance of modern neural networks exceeds our understanding and that building more capable and trustworthy models requires bridging this gap. Prior to Mila, she spent time as a data scientist at Uber and did research with Uber AI Labs.
Have suggestions for future podcast guests (or other feedback)? Let us know here!
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:55) Hattie’s Origin Story, Uber AI Labs, empirical theory and other sorts of research
* (10:00) Intro to the Lottery Ticket Hypothesis & Deconstructing Lottery Tickets
* (14:30) Lottery tickets as lucky initialization
* (17:00) Types of masking and the “masking is training” claim
* (24:00) Type-0 masks and weight evolution over long training trajectories
* (27:00) Can you identify good masks or training trajectories a priori?
* (29:00) The role of signs in neural net initialization
* (35:27) The Supermask
* (41:00) Masks to probe pretrained models and model steerability
* (47:40) Fortuitous Forgetting in Connectionist Networks
* (54:00) Relationships to other work (double descent, grokking, etc.)
* (1:01:00) The iterative training process in fortuitous forgetting, scale and value of exploring alternatives
* (1:03:35) In-Context Learning and Teaching Algorithmic Reasoning
* (1:09:00) Learning + algorithmic reasoning, prompting strategy
* (1:13:50) What’s happening with in-context learning?
* (1:14:00) Induction heads
* (1:17:00) ICL and gradient descent
* (1:22:00) Algorithmic prompting vs discovery
* (1:24:45) Future directions for algorithmic prompting
* (1:26:30) Interesting work from NeurIPS 2022
* (1:28:20) Hattie’s perspective on scientific questions people pay attention to, underrated problems
* (1:34:30) Hattie’s perspective on ML publishing culture
* (1:42:12) Outro
Links:
* Hattie’s homepage and Twitter
* Papers
* Deconstructing Lottery Tickets: Zeros, signs, and the Supermask
* Fortuitous Forgetting in Connectionist Networks
* Teaching Algorithmic Reasoning via In-context Learning
In episode 59 of The Gradient Podcast, Daniel Bashir speaks to Professor Kyunghyun Cho.
Professor Cho is an associate professor of computer science and data science at New York University and CIFAR Fellow of Learning in Machines & Brains. He is also a senior director of frontier research at the Prescient Design team within Genentech Research & Early Development. He was a research scientist at Facebook AI Research from 2017-2020 and a postdoctoral fellow at University of Montreal under the supervision of Prof. Yoshua Bengio after receiving his MSc and PhD degrees from Aalto University. He received the Samsung Ho-Am Prize in Engineering in 2021.
Have suggestions for future podcast guests (or other feedback)? Let us know here!
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:15) How Professor Cho got into AI, going to Finland for a PhD
* (06:30) Accidental and non-accidental parts of Prof Cho’s journey, the role of timing in career trajectories
* (09:30) Prof Cho’s M.Sc. thesis on Restricted Boltzmann Machines
* (17:00) The state of autodiff at the time
* (20:00) Finding non-mainstream problems and examining limitations of mainstream approaches, anti-dogmatism, Yoshua Bengio appreciation
* (24:30) Detaching identity from work, scientific training
* (26:30) The rest of Prof Cho’s PhD, the first ICLR conference, working in Yoshua Bengio’s lab
* (34:00) Prof Cho’s isolation during his PhD and its impact on his work—transcending insecurity and working on unsexy problems
* (41:30) The importance of identifying important problems and developing an independent research program, ceiling on the number of important research problems
* (46:00) Working on Neural Machine Translation, Jointly Learning to Align and Translate
* (1:01:45) What RNNs and earlier NN architectures can still teach us, why transformers were successful
* (1:08:00) Science progresses gradually
* (1:09:00) Learning distributed representations of sentences, extending the distributional hypothesis
* (1:21:00) Difficulty and limitations in evaluation—directions of dynamic benchmarks, trainable evaluation metrics
* (1:29:30) Mixout and AdapterFusion: fine-tuning and intervening on pre-trained models, pre-training as initialization, destructive interference
* (1:39:00) Analyzing neural networks as reading tea leaves
* (1:44:45) Importance of healthy skepticism for scientists
* (1:45:30) Language-guided policies and grounding, vision-language navigation
* (1:55:30) Prof Cho’s reflections on 2022
* (2:00:00) Obligatory ChatGPT content
* (2:04:50) Finding balance
* (2:07:15) Outro
Links:
* Professor Cho’s homepage and Twitter
* Papers
* NMT and attention
* Learning Phrase Representations
* Neural machine translation by jointly learning to align and translate
* More recent work
* Learning Distributed Representations of Sentences from Unlabelled Data
* Mixout: Effective Regularization to Finetune Large-scale Pretrained Language Models
* Generative Language-Grounded Policy in Vision-and-Language Navigation with Bayes’ Rule
* AdapterFusion: Non-Destructive Task Composition for Transfer Learning
In episode 58 of The Gradient Podcast, Daniel Bashir speaks to Professor Steve Miller.
Steve is a Professor Emeritus of Information Systems at Singapore Management University. Steve served as Founding Dean of the SMU School of Information Systems (SIS), and established and developed the technology core of SIS research and project capabilities in Cybersecurity, Data Management & Analytics, Intelligent Systems & Decision Analytics, and Software & Cyber-Physical Systems, as well as the management-science-oriented capability in Information Systems & Management. Steve works closely with a number of Singapore government ministries and agencies via steering committees, advisory boards, and advisory appointments.
Have suggestions for future podcast guests (or other feedback)? Let us know here!
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:40) Steve’s evolution of interests in AI, time in academia and industry
* (05:15) How different is this “industrial revolution”?
* (10:00) What new technologies enable, the human role in technology’s impact on jobs
* (11:35) Automation and augmentation and the realities of integrating new technologies in the workplace
* (21:50) Difficulties of applying AI systems in real-world contexts
* (32:45) Re-calibrating human work with intelligent machines
* (39:00) Steve’s thinking on the nature of human/machine intelligence, implications for human/machine hybrid work
* (47:00) Tradeoffs in using ML systems for automation/augmentation
* (52:40) Organizational adoption of AI and speed
* (1:01:55) Technology adoption is more than just a technology problem
* (1:04:50) Progress narratives, “safe to speed”
* (1:10:27) Outro
Links:
* Steve’s SMU Faculty Profile and Google Scholar
* Working with AI by Steve Miller and Tom Davenport
In episode 57 of The Gradient Podcast, Andrey Kurenkov speaks to Blair Attard-Frost.
Note: this interview was recorded 8 months ago, and some aspects of Canada’s AI strategy have changed since then. It is still a good overview of AI governance and other topics, however.
Blair is a PhD Candidate at the University of Toronto’s Faculty of Information who researches the governance and management of artificial intelligence. More specifically, they are interested in the social construction of intelligence, unintelligence, and artificial intelligence, the relationship between organizational values and AI use, and the political economy, governance, and ethics of AI value chains. They integrate perspectives from service sciences, cognitive sciences, public policy, information management, and queer studies for their research.
Have suggestions for future podcast guests (or other feedback)? Let us know here!
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter or Mastodon
Outline:
* Intro
* Getting into AI research
* What is AI governance
* Canada’s AI strategy
* Other interests
Links:
* Once a promising leader, Canada’s artificial-intelligence strategy is now a fragmented laggard
* The Ethics of AI Business Practices: A Review of 47 Guidelines
In episode 56 of The Gradient Podcast, Daniel Bashir speaks to Linus Lee.
Linus is an independent researcher interested in the future of knowledge representation and creative work aided by machine understanding of language. He builds interfaces and knowledge tools that expand the domain of thoughts we can think and qualia we can feel. Linus has been writing online since 2014 (his blog boasts half a million words) and has built well over 100 side projects. He has also spent time as a software engineer at Replit, Hack Club, and Spensa, and was most recently a Researcher in Residence at Betaworks in New York.
Have suggestions for future podcast guests (or other feedback)? Let us know here!
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:00) Linus’s background and interests, vision-language models
* (07:45) Embodiment and limits for text-image
* (11:35) Ways of experiencing the world
* (16:55) Origins of the handle “thesephist”, languages
* (25:00) Math notation, reading papers
* (29:20) Operations on ideas
* (32:45) Overview of Linus’s research and current work
* (41:30) The Oak and Ink languages, programming languages
* (49:30) Personal search engines: Monocle and Reverie, what you can learn from personal data
* (55:55) Web browsers as mediums for thought
* (1:01:30) This AI Does Not Exist
* (1:03:05) Knowledge representation and notational intelligence
* Notation vs language
* (1:07:00) What notation can/should be
* (1:16:00) Inventing better notations and expanding human intelligence
* (1:23:30) Better interfaces between humans and LMs to provide precise control, the inefficiency of prompt engineering
* (1:33:00) Inexpressible experiences
* (1:35:42) Linus’s current work using latent space models
* (1:40:00) Ideas as things you can hold
* (1:44:55) Neural nets and cognitive computing
* (1:49:30) Relation to Hardware Lottery and AI accelerators
* (1:53:00) Taylor Swift Appreciation Session, mastery and virtuosity
* (1:59:30) Mastery/virtuosity and interfaces / learning curves
* (2:03:30) Linus’s stories, the work of fiction
* (2:09:00) Linus’s thoughts on writing
* (2:14:20) A piece of writing should be focused
* (2:16:15) On proving yourself
* (2:28:00) Outro
Links:
In episode 55 of The Gradient Podcast, Daniel Bashir speaks to Professor Suresh Venkatasubramanian.
Professor Venkatasubramanian is a Professor of Computer Science and Data Science at Brown University, where his research focuses on algorithmic fairness and the impact of automated decision-making systems in society. He recently served as Assistant Director for Science and Justice in the White House Office of Science and Technology Policy, where he co-authored the Blueprint for an AI Bill of Rights.
Have suggestions for future podcast guests (or other feedback)? Let us know here!
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:25) Suresh’s journey into AI and policymaking
* (08:00) The complex graph of designing and deploying “fair” AI systems
* (09:50) The Algorithmic Lens
* (14:55) “Getting people into a room” isn’t enough
* (16:30) Failures of incorporation
* (21:10) Trans-disciplinary vs interdisciplinary, the limiting nature of “my lane” / “your lane” thinking, going beyond existing scientific and philosophical ideas
* (24:50) The trolley problem is annoying, its usefulness and limitations
* (25:30) Breaking the frame of a discussion, self-driving doesn’t fit into the parameters of the trolley problem
* (28:00) Acknowledging frames and their limitations
* (29:30) Social science’s inclination to critique, flaws and benefits of solutionism
* (30:30) Computer security as a model for thinking about algorithmic protections, the risk of failure in policy
* (33:20) Suresh’s work on recourse
* (38:00) Kantian autonomy and the value of recourse, non-Western takes and issues with individual benefit/harm as the most morally salient question
* (41:00) Community as a valuable entity and its implications for algorithmic governance, surveillance systems
* (43:50) How Suresh got involved in policymaking / the OSTP
* (46:50) Gathering insights for the AI Bill of Rights Blueprint
* (51:00) One thing the Bill did miss… Struggles with balancing specificity and vagueness in the Bill
* (54:20) Should “automated system” be defined in legislation? Suresh’s approach and issues with the EU AI Act
* (57:45) The danger of definitions, overlap with chess world controversies
* (59:10) Constructive vagueness in law, partially theorized agreements
* (1:02:15) Digital privacy and privacy fundamentalism, focus on breach of individual autonomy as the only harm vector
* (1:07:40) GDPR traps, the “legacy problem” with large companies and post-hoc regulation
* (1:09:30) Considerations for legislating explainability
* (1:12:10) Criticisms of the Blueprint and Suresh’s responses
* (1:25:55) The global picture, AI legislation outside the US, legislation as experiment
* (1:32:00) Tensions in entering policy as an academic and technologist
* (1:35:00) Technologists need to learn additional skills to impact policy
* (1:38:15) Suresh’s advice for technologists interested in public policy
* (1:41:20) Outro
Links:
* Suresh is on Mastodon @[email protected] (and also Twitter)
* Blueprint for an AI Bill of Rights
* Papers
* Fairness and abstraction in sociotechnical systems
* A comparative study of fairness-enhancing interventions in machine learning
* The Philosophical Basis of Algorithmic Recourse
* Runaway Feedback Loops in Predictive Policing
In episode 54 of The Gradient Podcast, Andrey Kurenkov speaks with Pete Florence.
Note: this was recorded 2 months ago. Andrey should be getting back to putting out some episodes next year.
Pete Florence is a Research Scientist on the Robotics at Google team within Brain Team at Google Research. His research focuses on topics in robotics, computer vision, and natural language, including 3D learning, self-supervised learning, and policy learning in robotics. Before Google, he finished his PhD in Computer Science at MIT with Russ Tedrake.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00:00) Intro
* (00:01:16) Start in AI
* (00:04:15) PhD Work with Quadcopters
* (00:08:40) Dense Visual Representations
* (00:22:00) NeRFs for Robotics
* (00:39:00) Language Models for Robotics
* (00:57:00) Talking to Robots in Real Time
* (01:07:00) Limitations
* (01:14:00) Outro
Papers discussed:
* Aggressive quadrotor flight through cluttered environments using mixed integer programming
* High-speed autonomous obstacle avoidance with pushbroom stereo
* Dense Object Nets: Learning Dense Visual Object Descriptors By and For Robotic Manipulation. (Best Paper Award, CoRL 2018)
* Self-Supervised Correspondence in Visuomotor Policy Learning (Best Paper Award, RA-L 2020)
* iNeRF: Inverting Neural Radiance Fields for Pose Estimation.
* NeRF-Supervision: Learning Dense Object Descriptors from Neural Radiance Fields.
* Reinforcement Learning with Neural Radiance Fields
* Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language.
* Inner Monologue: Embodied Reasoning through Planning with Language Models
* Code as Policies: Language Model Programs for Embodied Control
Have suggestions for future podcast guests (or other feedback)? Let us know here!
In episode 53 of The Gradient Podcast, Daniel Bashir speaks to Professor Melanie Mitchell.
Professor Mitchell is the Davis Professor at the Santa Fe Institute. Her research focuses on conceptual abstraction, analogy-making, and visual recognition in AI systems. She is the author or editor of six books and her work spans the fields of AI, cognitive science, and complex systems. Her latest book is Artificial Intelligence: A Guide for Thinking Humans.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:20) Melanie’s intro to AI
* (04:35) Melanie’s intellectual influences, AI debates over time
* (10:50) We don’t have the right metrics for empirical study in AI
* (15:00) Why AI is Harder than we Think: the four fallacies
* (20:50) Difficulties in understanding what’s difficult for machines vs humans
* (23:30) Roles for humanlike and non-humanlike intelligence
* (27:25) Whether “intelligence” is a useful word
* (31:55) Melanie’s thoughts on modern deep learning advances, brittleness
* (35:35) Abstraction, Analogies, and their role in AI
* (38:40) Concepts as analogical and what that means for cognition
* (41:25) Where does analogy bottom out
* (44:50) Cognitive science approaches to concepts
* (45:20) Understanding how to form and use concepts is one of the key problems in AI
* (46:10) Approaching abstraction and analogy, Melanie’s work / the Copycat architecture
* (49:50) Probabilistic program induction as a promising approach to intelligence
* (52:25) Melanie’s advice for aspiring AI researchers
* (54:40) Outro
Links:
* Melanie’s homepage and Twitter
* Papers
* Difficulties in AI, hype cycles
* Why AI is Harder Than We Think
* The Debate Over Understanding in AI’s Large Language Models
* What Does It Mean for AI to Understand?
* Abstraction, analogies, and reasoning
* Abstraction and Analogy-Making in Artificial Intelligence
* Evaluating understanding on conceptual abstraction benchmarks
Have suggestions for future podcast guests (or other feedback)? Let us know here!
In episode 52 of The Gradient Podcast, Daniel Bashir speaks to Professor Marc Bellemare.
Professor Bellemare leads the reinforcement learning efforts at Google Brain Montréal and is a core industry member at Mila, where he also holds the Canada CIFAR AI Chair. His PhD work, completed at the University of Alberta, proposed the use of Atari 2600 video games to benchmark progress in reinforcement learning (RL). He was a research scientist at DeepMind from 2013-2017, and his Arcade Learning Environment was very influential in DeepMind’s early RL research and remains one of the most widely-used RL benchmarks today. More recently he collaborated with Loon to deploy deep reinforcement learning to navigate stratospheric balloons. His book on distributional reinforcement learning, published by MIT Press, will be available in Spring 2023.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (03:10) Marc’s intro to AI and RL
* (07:00) Cross-pollination of deep learning research and RL at McGill and UdeM
* (09:50) PhD work at U Alberta, continual learning, origins of the Arcade Learning Environment (ALE)
* (14:40) Challenges in the ALE, how the ALE drove RL research
* (23:10) Marc’s thoughts on the Avalon benchmark and what makes a good RL benchmark
* (28:00) Opinions on “Reward is Enough” and whether RL gets us to AGI
* (32:10) How Marc thinks about priors in learning, “reincarnating RL”
* (36:00) Distributional Reinforcement Learning and the problem of distribution estimation
* (43:00) GFlowNets and distributional RL
* (45:05) Contraction in RL and distributional RL, theory-practice gaps
* (52:45) Representation learning for RL
* (55:50) Structure of the value function space
* (1:00:00) Connections to open-endedness / evolutionary algorithms / curiosity
* (1:03:30) RL for stratospheric balloon navigation with Loon
* (1:07:30) New ideas for applying RL in the real world
* (1:10:15) Marc’s advice for young researchers
* (1:12:37) Outro
Links:
* Professor Bellemare’s Homepage
* Distributional Reinforcement Learning book
* Papers
* The Arcade Learning Environment: An Evaluation Platform for General Agents
* A Distributional Perspective on Reinforcement Learning
* Distributional Reinforcement Learning with Quantile Regression
* Distributional Reinforcement Learning with Linear Function Approximation
* Autonomous navigation of stratospheric balloons using reinforcement learning
* A Geometric Perspective on Optimal Representations for Reinforcement Learning
* The Value Function Polytope in Reinforcement Learning
In episode 51 of The Gradient Podcast, Daniel Bashir speaks to François Chollet.
François is a Senior Staff Software Engineer at Google and creator of the Keras deep learning library, which has enabled many people (including me) to get their hands dirty with the world of deep learning. François is also the author of the book “Deep Learning with Python.” He is interested in understanding the nature of abstraction, developing algorithms capable of autonomous abstraction, and democratizing the development and deployment of AI technology, among other topics.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro + Daniel has far too much fun pronouncing “François Chollet”
* (02:00) How François got into AI
* (08:00) Keras and user experience, library as product, progressive disclosure of complexity
* (18:20) François’ comments on the state of ML frameworks and what different frameworks are useful for
* (23:00) On the Measure of Intelligence: historical perspectives
* (28:00) Intelligence vs cognition, overlaps
* (32:30) How core is Core Knowledge?
* (39:15) Cognition priors, metalearning priors
* (43:10) Defining intelligence
* (49:30) François’ comments on modern deep learning systems
* (55:50) Program synthesis as a path to intelligence
* (1:02:30) Difficulties of program synthesis
* (1:09:25) François’ concerns about current AI
* (1:14:30) The need for regulation
* (1:16:40) Thoughts on longtermism
* (1:23:30) Where we can expect exponential progress in AI
* (1:26:35) François’ advice on becoming a good engineer
* (1:29:03) Outro
Links:
* On the Measure of Intelligence
* Keras
Happy episode 50! This week’s episode is being released on Monday to avoid Thanksgiving.
Have suggestions for future podcast guests (or other feedback)? Let us know here!
In episode 50 of The Gradient Podcast, Daniel Bashir speaks to Professor Yoshua Bengio.
Professor Bengio is a Full Professor at the Université de Montréal as well as Founder and Scientific Director of the MILA-Quebec AI Institute and the IVADO institute. Best known for his work in pioneering deep learning, Bengio was one of three awardees of the 2018 A.M. Turing Award along with Geoffrey Hinton and Yann LeCun. He is also the awardee of the prestigious Killam prize and, as of this year, the computer scientist with the highest h-index in the world.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:20) Journey into Deep Learning, PDP and Hinton
* (06:45) “Inspired by biology”
* (08:30) “Gradient Based Learning Applied to Document Recognition” and working with Yann LeCun
* (10:00) What Bengio learned from LeCun (and Larry Jackel) about being a research advisor
* (13:00) “Learning Long-Term Dependencies with Gradient Descent is Difficult,” why people don’t understand this paper well enough
* (18:15) Bengio’s work on word embeddings and the curse of dimensionality, “A Neural Probabilistic Language Model”
* (23:00) Adding more structure / inductive biases to LMs
* (24:00) The rise of deep learning and Bengio’s experience, “you have to be careful with inductive biases”
* (31:30) Bengio’s “Bayesian posture” in response to recent developments
* (40:00) Higher level cognition, Global Workspace Theory
* (45:00) Causality, actions as mediating distribution change
* (49:30) GFlowNets and RL
* (53:30) GFlowNets and actions that are not well-defined, combining with System II and modular, abstract ideas
* (56:50) GFlowNets and evolutionary methods
* (1:00:45) Bengio on Cartesian dualism
* (1:09:30) “When you are famous, it is hard to work on hard problems” (Richard Hamming) and Bengio’s response
* (1:11:10) Family background, art and its role in Bengio’s life
* (1:14:20) Outro
Links:
* Papers
* Gradient-based learning applied to document recognition
* Learning Long-Term Dependencies with Gradient Descent is Difficult
* Flow Network based Generative Models for Non-Iterative Diverse Candidate Generation
In episode 49 of The Gradient Podcast, Daniel Bashir speaks to Kanjun Qiu and Josh Albrecht.
Kanjun and Josh are CEO and CTO of Generally Intelligent, an AI startup aiming to develop general-purpose agents with human-like intelligence that can be safely deployed in the real world. Kanjun and Josh have played these roles together in the past as CEO and CTO of AI recruiting startup Sourceress. Kanjun is also involved with building the SF Neighborhood, and together with Josh invests in early-stage founders at Outset Capital.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:00) Kanjun’s and Josh’s intros to AI
* (06:45) How Kanjun and Josh met and started working together
* (08:40) Sourceress and AI in hiring, looking for unusual candidates
* (11:30) Generally Intelligent: origins and motivations
* (14:55) How Kanjun and Josh think about understanding the fundamentals of intelligence
* (17:20) AGI companies and long-term goals
* (19:20) How Kanjun and Josh think about intelligence + Generally Intelligent’s approach-agnosticism
* (22:30) Skill-acquisition efficiency
* (25:18) The Avalon Environment/Benchmark
* (27:40) Tasks with shared substrate
* (29:00) Blending of different approaches, baseline tuning
* (31:15) Approach to safety
* (33:33) Issues with interpretability + ML academic practices, ablations
* (36:30) Lessons about working with people, company culture
* (40:00) Human focus and diversity in companies, tech environment
* (44:10) Advice for potential (AI) founders
* (47:05) Outro
Links:
* Avalon: A Benchmark for RL Generalization
* Have suggestions for future podcast guests (or other feedback)? Let us know here!
* Want to write with us? Send a pitch using this form :)
In episode 48 of The Gradient Podcast, Daniel Bashir speaks to Nathan Benaich.
Nathan is Founder and General Partner at Air Street Capital, a venture capital (VC) firm focused on investing in AI-first technology and life sciences companies. Nathan runs a number of communities focused on AI including the Research and Applied AI Summit and leads Spinout.fyi to improve the creation of university spinouts. Together with investor Ian Hogarth, Nathan co-authors the State of AI Report.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:40) Nathan’s interests in AI, life sciences, investing
* (04:10) Biotech and tech-bio companies
* (08:00) Why Nathan went into VC
* (10:15) Air Street Capital’s focus, investing in AI at an early stage
* (14:30) Why Nathan believes in specialism over generalism in AI, balancing consumer-focused ML with serious technical work
* (17:30) The European startup ecosystem
* (19:30) Spinouts and inventions born in academia
* (23:35) Spinout.fyi and issues with the European model
* (27:50) In the UK, only 4% of private AI companies are spinouts
* (30:00) Solutions
* (32:55) Origins of the State of AI Report
* (35:00) Looking back on Nathan’s 2021 predictions: Anthropic and JAX
* (39:00) AI semiconductors and the difficult reality
* (42:45) Nathan’s perspectives on AI safety/alignment
* (46:00) Long-termism and debates, safety research as an input into improving capabilities
* (49:50) Decentralization and the commercialization of open-source AI (Stability AI, Eleuther AI, etc.)
* (53:00) Second-order applications of diffusion models—chemistry, small molecule design, genome editors
* (59:00) Semiconductor restrictions and geopolitics
* (1:03:45) This year’s State of AI predictions
* (1:04:30) Trouble in semiconductor startup land
* (1:08:40) Predictions for AGI startups
* (1:14:20) How regulation of AGI startups might look
* (1:16:40) Nathan’s advice for founders, investors, and researchers
* (1:19:00) Outro
Links:
* Rewriting the European spinout playbook
* Other sources mentioned
* Bridging the Gap: the case for an Incompletely Theorized Agreement on AI policy
* Choking Off China’s Access to the Future of AI
* China's New AI Governance Initiatives Shouldn't Be Ignored
* Have suggestions for future podcast guests (or other feedback)? Let us know here!
* Want to write with us? Send a pitch using this form :)
In episode 47 of The Gradient Podcast, Daniel Bashir speaks to Matt Sheehan.
Matt is a fellow at the Carnegie Endowment for International Peace, where he researches global technology with a focus on China. His writing and research explore China’s AI ecosystem, the future of China’s technology policy, and technology’s role in China’s political economy. Matt has also written for Foreign Affairs and The Huffington Post, among other venues.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:28) Matt’s path to analyzing China’s AI governance
* (06:50) Matt’s experience understanding daily life in China and developing a bottom-up perspective
* (09:40) The development of government constraints in technology/AI in the US and China
* (12:40) Matt’s take on China’s priorities and motivations
* (17:00) How recent history influences China’s technology ambitions
* (17:30) Matt gives an overview of the Century of Humiliation
* (22:07) Adversarial perceptions, Xi Jinping’s brashness and its effect on discourse about International Relations, how this intersects with AI
* (24:40) Self-reliance and semiconductors. Was the recent chip ban the right move?
* (36:15) Matt’s question: could foundation models be trained on trailing edge chips if necessary? Limitations
* (38:30) Silicon Valley and China, The Transpacific Experiment and stories
* (46:17) 躺平 (“lying flat”) and how trends among youth in China interact with tech development, parallel trends in the US, work culture
* (51:05) China’s recent AI governance initiatives
* (56:25) Squaring China’s AI ethics stance with its use of AI
* (59:53) The US can learn from both Chinese and European regulators
* (1:02:03) How technologists should think about geopolitics and national tensions
* (1:05:43) Outro
Links:
* China’s influences/ambitions
* Beijing’s Industrial Internet Ambitions
* Beijing’s Tech Ambitions: What Exactly Does It Want?
* US-China exchange and US responses
* Who benefits from American AI research in China?
* Two New Tech Bills Could Transform US Innovation
* Fear of Chinese Competition Won’t Preserve US Tech Leadership
* China’s tech standards, government initiatives and regulation in AI
* How US businesses view China’s growing influence in tech standards
* Three takeaways from China’s new standards strategy
* China’s new AI governance initiatives shouldn’t be ignored
* Semiconductors
* Biden’s Unprecedented Semiconductor Bet (a new piece from Matt!)
* Choking Off China’s Access to the Future of AI
* Have suggestions for future podcast guests (or other feedback)? Let us know here!
* Want to write with us? Send a pitch using this form :)
In episode 46 of The Gradient Podcast, Daniel Bashir speaks to Luis Voloch.
Luis is a co-founder of Immunai, a leading AI-led drug discovery company based out of NYC and Tel Aviv, with over 140 employees and a valuation of over one billion dollars. Before Immunai, Luis was Head of Data Science and Machine Learning at ITC and worked on a variety of ML efforts at Palantir. He did his studies and research in math and CS at MIT.
He has also led AI, genomics, and software efforts at a number of other companies.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:25) Luis’s math background and getting into AI
* (06:35) Luis’s PhD experience, proving theoretical guarantees for recommendation systems
* (09:45) Why Luis left his PhD
* (15:45) Why Luis is excited about intersection of ML and biology
* (18:28) Challenges of applying AI to biology
* (22:55) Immunai
* (27:03) Challenges in building a biotech (or “tech-bio”) company
* (30:30) Research at Immunai, Neural Design for Genetic Perturbation Experiments
* (34:43) Interpretability in ML + biology
* (36:00) What Luis plans to do next
* (37:55) Luis’s advice for grad students / ML people interested in biology
* (40:00) Luis’s perspective on the future of AI + biology
* (43:10) Outro
Links:
* Luis on LinkedIn, Crunchbase
* Luis’s article on The convergence of deep neural networks and immunotherapy
* Papers
* Neural Design for Genetic Perturbation Experiments
* Have suggestions for future podcast guests (or other feedback)? Let us know here!
* Want to write with us? Send a pitch using this form :)
In episode 45 of The Gradient Podcast, Daniel Bashir speaks to Zachary Lipton.
Zachary is an Assistant Professor of Machine Learning and Operations Research at Carnegie Mellon University, where he directs the Approximately Correct Machine Intelligence Lab. He holds a joint appointment between CMU’s ML Department and Tepper School of Business, and holds courtesy appointments at the Heinz School of Public Policy and the Software and Societal Systems Department. His research spans core ML methods and theory, applications in healthcare and natural language processing, and critical concerns about algorithms and their impacts.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (2:30) From jazz music to AI
* (4:40) “fix it in post”: we had some technical issues :)
* (4:50) spicy takes, music and tech
* (7:30) Zack’s plan to get into grad school
* (9:45) selection bias in who gets faculty positions
* (12:20) The slow development of Zack’s wide range of research interests, Zack’s strengths coming into ML research
* (22:00) How Zack got attention early in his PhD
* (27:00) Should PhD students meander?
* (30:30) Faults in the QA model literature
* (35:00) Troubling Trends, antecedents in other fields
* (39:40) Pretraining LMs on nonsense words, new paper!
* (47:25) what “BERT learns linguistic structure” misses
* (56:00) making causal claims in ML
* (1:05:40) domain-adversarial networks don’t solve distribution shift, underspecified problems
* (1:09:10) the benefits of floating between communities
* (1:14:30) advice on finding inspiration and learning
* (1:16:00) “fairness” and ML solutionism
* (1:21:10) epistemic questions, how we make determinations of fairness
* (1:29:00) Zack’s drives and motivations
Links:
* Papers
* DL Foundations, Distribution Shift, Generalization
* Does Pretraining for Summarization Require Knowledge Transfer?
* How Much Reading Does Reading Comprehension Require?
* Learning Robust Global Representations by Penalizing Local Predictive Power
* Detecting and Correcting for Label Shift with Black Box Predictors
* RATT
* Explanation/Interpretability/Fairness
* The Mythos of Model Interpretability
* Does mitigating ML’s impact disparity require treatment disparity?
* Algorithmic Fairness from a Non-ideal Perspective
* Broader perspectives/critiques
* Troubling Trends in Machine Learning Scholarship
* When Curation Becomes Creation
Have suggestions for future podcast guests (or other feedback)? Let us know here!
In episode 44 of The Gradient Podcast, Daniel Bashir speaks to Professor Stuart Russell.
Stuart Russell is a Professor of Computer Science and the Smith-Zadeh Professor in Engineering at UC Berkeley, as well as an Honorary Fellow at Wadham College, Oxford. Professor Russell is the co-author with Peter Norvig of Artificial Intelligence: A Modern Approach, probably the most popular AI textbook in history. He is the founder and head of Berkeley’s Center for Human-Compatible Artificial Intelligence and recently authored the book Human Compatible: Artificial Intelligence and the Problem of Control. He has also served as co-chair on the World Economic Forum’s Council on AI and Robotics.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:45) Stuart’s introduction to AI
* (05:50) The two most important questions
* (07:25) Historical perspectives during Stuart’s PhD, agents and learning
* (14:30) Rationality and Intelligence, Bounded Optimality
* (20:30) Stuart’s work on Metareasoning
* (29:45) How does Metareasoning fit with Bounded Optimality?
* (37:39) “Civilization advances by reducing complex operations to be trivial”
* (39:20) Reactions to the rise of Deep Learning, connectionist/symbolic debates, probabilistic modeling
* (51:00) The Deep Learning and traditional AI communities will adopt each other’s ideas
* (51:55) Why Stuart finds the self-driving car arena interesting, Waymo’s old-fashioned AI approach
* (57:30) Effective generalization without the full expressive power of first-order logic—deep learning is a “weird way to go about it”
* (1:03:00) A very short shrift of Human Compatible and its ideas
* (1:10:42) Outro
Links:
* Human Compatible page with reviews and interviews
* Papers mentioned
* Rationality and Intelligence
Have suggestions for future podcast guests (or other feedback)? Let us know here!
In episode 43 of The Gradient Podcast, Daniel Bashir speaks to Varun Ganapathi.
Varun is co-founder and CTO at AKASA, a company developing AI systems for healthcare operations. Varun’s previous entrepreneurial experience includes co-founding Numovis, a company focused on motion tracking and computer vision for user interaction that was acquired by Google, and Terminal.com, a browser-based IDE acquired by Udacity. Varun received his PhD from Stanford in 2014.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (1:50) Varun’s intro to AI
* (3:25) Working with Andrew Ng
* (7:37) Varun’s road to a PhD
* (13:20) Numovis, Google acquisition
* (15:00) Vacillating between research and entrepreneurship, Terminal.com
* (17:10) Roots of Varun’s interest in AI + healthcare
* (22:30) Research at AKASA, Deep Claim
* (25:45) Causality in claim denial, expert knowledge
* (25:52) we need to trademark the word “gradient”
* (28:20) AKASA’s Unified Automation, expert-in-the-loop
* (34:15) Varun’s near-term and long-term visions for AKASA
* (39:50) Towards “deploying a new version of healthcare”
* (42:25) Varun’s perspective on the role of AI in healthcare, the need for humans in the loop
* (47:02) Varun’s advice for aspiring AI researchers and practitioners
* (51:00) Outro
Links:
Have suggestions for future podcast guests (or other feedback)? Let us know here!
In episode 42 of The Gradient Podcast, Daniel Bashir speaks to Joel Lehman.
Joel is a machine learning scientist interested in AI safety, reinforcement learning, and creative open-ended search algorithms. Joel has spent time at Uber AI Labs and OpenAI and is the co-author of the book Why Greatness Cannot be Planned: The Myth of the Objective.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:40) From game development to AI
* (03:20) Why evolutionary algorithms
* (10:00) Abandoning Objectives: Evolution Through the Search for Novelty Alone
* (24:10) Measuring a desired behavior post-hoc vs optimizing for that behavior
* (27:30) NeuroEvolution of Augmenting Topologies (NEAT), Evolving a Diversity of Virtual Creatures
* (35:00) Humans are an inefficient solution to evolution’s objectives
* (47:30) Is embodiment required for understanding? Today’s LLMs as practical thought experiments in disembodied understanding
* (51:15) Evolution through Large Models (ELM)
* (1:01:07) ELM: Quality Diversity Algorithms, MAP-Elites, bootstrapping training data
* (1:05:25) Dimensions of Diversity in MAP-Elites, what is “interesting”?
* (1:12:30) ELM: Fine-tuning the language model
* (1:18:00) Results of invention in ELM, complexity in creatures
* (1:20:20) Future work building on ELM, key challenges in open-endedness
* (1:24:30) How Joel’s research affects his approach to life and work
* (1:28:30) Balancing novelty and exploitation in work
* (1:34:10) Intense competition in AI, Joel’s advice for people considering ML research
* (1:38:45) Daniel isn’t the worst interviewer ever
* (1:38:50) Outro
Links:
* Evolution through Large Models: The Tweet
* Papers:
* Abandoning Objectives: Evolution through the search for novelty alone
* Evolving a diversity of virtual creatures through novelty search and local competition
* Designing neural networks through neuroevolution
* Evolution through Large Models
* Resources for (aspiring) ML researchers!
Have suggestions for future podcast guests (or other feedback)? Let us know here!
In episode 42 of The Gradient Podcast, Daniel Bashir speaks to Andrew Feldman.
Andrew is the co-founder and CEO of Cerebras Systems, an AI accelerator company that has built the largest processor in the industry. Before Cerebras, Andrew co-founded and served as CEO of SeaMicro, which was acquired by AMD in 2012. He has also served in executive positions at Force10 Networks and RiverStone Networks.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:05) Andrew’s trajectory, from business school to Cerebras
* (10:00) The large model problem and Cerebras’ approach
* (19:50) Cerebras’s GPT-J announcement
* (22:20) Andrew explains weight streaming to Daniel
* (32:30) Andrew’s thoughts on the MLPerf benchmark
* (38:20) The venture landscape for AI accelerator companies
* (42:50) The hardware lottery, hardware support for sparsity
* (45:40) The CHIPS Act, NVIDIA China ban and the accelerator industry
* (48:00) Politics and Chips, US and China
* (52:20) Andrew’s perspective on tackling difficult problems
* (56:42) Outro
Links:
* Sources mentioned
* “Political Chips” by Ben Thompson (because Daniel’s a fanboy)
* Daniel’s conversation with Sara Hooker
Have suggestions for future podcast guests (or other feedback)? Let us know here!
In episode 41 of The Gradient Podcast, Daniel Bashir speaks to Christopher Manning.
Chris is the Director of the Stanford AI Lab and an Associate Director of the Stanford Human-Centered Artificial Intelligence Institute. He is an ACM Fellow, an AAAI Fellow, and past President of ACL. His work currently focuses on applying deep learning to natural language processing; it has included tree recursive neural networks, GloVe, neural machine translation, and computational linguistic approaches to parsing, among other topics.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:40) Chris’s path to AI through computational linguistics
* (06:10) Human language acquisition vs. ML systems
* (09:20) Grounding language in the physical world, multimodality and DALL-E 2 vs. Imagen
* (26:15) Chris’s Linguistics PhD, splitting time between Stanford and Xerox PARC, corpus-based empirical NLP
* (34:45) Rationalist and Empiricist schools in linguistics, Chris’s work in 1990s
* (45:30) GloVe and Attention-based Neural Machine Translation, global and local context in language
* (50:30) Different Neural Architectures for Language, Chris’s work in the 2010s
* (58:00) Large-scale Pretraining, learning to predict the next word helps you learn about the world
* (1:00:00) mBERT’s Internal Representations vs. Universal Dependencies Taxonomy
* (1:01:30) The Need for Inductive Priors for Language Systems
* (1:05:55) Courage in Chris’s Research Career
* (1:10:50) Outro (yes Daniel does have a new outro with ~ music ~)
Links:
* Papers (1990s-2000s)
* Distributional Phrase Structure Induction
* Fast exact inference with a factored model for Natural Language Parsing
* Accurate Unlexicalized Parsing
* Corpus-based induction of syntactic structure
* Foundations of Statistical Natural Language Processing
* Papers (2010s):
* Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank
* GloVe
* Effective Approaches to Attention-based Neural Machine Translation
* Stanford’s Graph-based Neural dependency parser
* Papers (2020s)
* Electra: Pre-training text encoders as discriminators rather than generators
* Finding Universal Grammatical Relations in Multilingual BERT
* Emergent linguistic structure in artificial neural networks trained by self-supervision
In episode 41 of The Gradient Podcast, Andrey Kurenkov speaks to Professor Jeff Clune.
Jeff is an Associate Professor of Computer Science at the University of British Columbia and a Faculty Member of the Vector Institute. Previously, he was a Research Team Leader at OpenAI and before that a Senior Research Manager and founding member of Uber AI Labs, and prior to that he was an Associate Professor in Computer Science at the University of Wyoming.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
(00:00) Intro
(01:05) Path into AI
(08:05) Studying biology with simulations
(10:30) Overview of genetic algorithms
(14:00) Evolving gaits with genetic algorithms
(20:00) Quality-Diversity Algorithms
(27:00) Evolving Soft Robots
(32:15) Genetic algorithms for studying Evolution
(39:30) Modularity for Catastrophic Forgetting
(45:15) Curiosity for Learning Diverse Skills
(51:15) Evolving Environments
(58:3) The Surprising Creativity of Digital Evolution
(1:04:28) Hobbies Outside of Research
(1:07:25) Outro
In episode 40 of The Gradient Podcast, Andrey Kurenkov speaks to Catherine Olsson and Nelson Elhage.
Catherine and Nelson are both members of technical staff at Anthropic, which is an AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems. Catherine and Nelson’s focus is on interpretability, and we will discuss several of their recent works in this interview.
Follow The Gradient on Twitter
Outline:
(00:00) Intro
(01:10) Catherine’s Path into AI
(03:25) Nelson’s Path into AI
(05:23) Overview of Anthropic
(08:21) Mechanistic Interpretability
(15:15) Transformer Circuits
(21:30) Toy Transformer
(27:25) Induction Heads
(31:00) In-Context Learning
(35:10) Evidence for Induction Heads Enabling In-Context Learning
(39:30) What’s Next
(43:10) Replicating Results
(46:00) Outro
Links:
Anthropic
Zoom In: An Introduction to Circuits
Mechanistic Interpretability, Variables, and the Importance of Interpretable Bases
A Mathematical Framework for Transformer Circuits
In-context Learning and Induction Heads
PySvelte
In episode 38 of The Gradient Podcast, Daniel Bashir speaks to Been Kim.
Been is a staff research scientist at Google Brain focused on interpretability: helping humans communicate with complex machine learning models not only by building tools but also by studying how humans interact with these systems. She has served in organizing roles for a number of conferences, including ICLR, NeurIPS, ICML, and AISTATS. She gave keynotes at ICLR 2022, ECML 2020, and the G20 meeting in Argentina in 2018. Her work on TCAV received the UNESCO Netexplo award and was featured at Google I/O 2019 and in Brian Christian’s book The Alignment Problem.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
(00:00) Intro
(02:20) Path to AI/interpretability
(06:10) The Progression of Been’s thinking / PhD thesis
(11:30) Towards a Rigorous Science of Interpretable Machine Learning
(24:52) Interpretability and Software Testing
(27:00) Been’s ICLR Keynote and Human-Machine “Language”
(37:30) TCAV
(43:30) Mood Board Search and CAV Camera
(48:00) TCAV’s Limitations and Follow-up Work
(56:00) Acquisition of Chess Knowledge in AlphaZero
(1:07:00) Daniel spends a very long time asking “what does it mean to you to be a researcher?”
(1:09:00) The everyday drudgery, more lessons from Been
(1:11:32) Outro
Links:
In episode 37 of The Gradient Podcast, Andrey Kurenkov speaks to Laura Weidinger
Laura is a senior research scientist at DeepMind focusing on AI ethics. She is also a PhD candidate at the University of Cambridge, studying philosophy of science and, specifically, approaches to measuring the ethics of AI systems. Previously, Laura worked in technology policy at the UK and EU levels as a policy executive at techUK. She then pivoted to cognitive science research, studying human learning at the Max Planck Institute for Human Development in Berlin, and was a Guest Lecturer at the Ada National College for Digital Skills. She received her Master’s degree from the School of Mind and Brain at the Humboldt University of Berlin, focusing on neuroscience, philosophy, and cognitive science.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
(00:00) Intro
(01:20) Path to AI
(04:25) Research in Cognitive Science
(06:40) Interest in AI Ethics
(14:30) Ethics Considerations for Researchers
(17:38) Ethical and social risks of harm from language models
(25:30) Taxonomy of Risks posed by Language Models
(27:33) Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models
(33:25) Main Insight for Measuring Harm
(35:40) The EU AI Act
(39:10) Alignment of language agents
(46:10) GPT-4Chan
(53:40) Interests outside of AI
(55:30) Outro
Links:
Ethical and social risks of harm from language models
Taxonomy of Risks posed by Language Models
Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models
In episode 36 of The Gradient Podcast, Daniel Bashir speaks to Sebastian Raschka.
Sebastian is an Assistant Professor of Statistics at the University of Wisconsin-Madison and Lead AI Educator at Lightning AI. He has written two bestselling books: Python Machine Learning and Machine Learning with PyTorch and Scikit-Learn.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Sections:
(00:00) Intro
(01:10) Sebastian’s intro to AI
(05:15) Sebastian’s process for learning new things
(12:15) Learning style varies with purpose
(16:10) Ordinal Regression
(31:00) Solving rank inconsistency with conditional probability
(35:00) Semi-Adversarial Networks
(44:15) Why Sebastian got into education
(52:45) Lightning AI
(1:00:00) Sebastian’s advice for educators
(1:03:30) Be cool like Sebastian and follow the Gradient
(1:03:40) Outro
Episode Links:
In episode 35 of The Gradient Podcast, guest host Sharon Zhou speaks to Jack Shanahan.
John (Jack) Shanahan is a retired United States Air Force Lieutenant General who completed a 36-year military career. He was the inaugural Director of the Joint Artificial Intelligence Center (JAIC) in the U.S. Department of Defense (DoD). He was also the Director of the Algorithmic Warfare Cross-Functional Team (Project Maven). Currently, he is a Special Government Employee supporting the National Security Commission on Artificial Intelligence; serves on the Board of Advisors for the Common Mission Project; is an advisor to The Changing Character of War Centre (Oxford University); is a member of the CACI Strategic Advisory Group; and serves as an Advisor to the Military Cyber Professionals Association.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
(00:00) Intro
(01:20) Introduction to Jack and Sharon
(07:30) Project Maven
(09:45) Relationship of Tech Sector and DoD
(16:40) Need for AI in DoD
(20:10) Bridging the tech-DoD divide
(30:00) Conclusion
Episode Links:
* John N.T. Shanahan Wikipedia
* AI To Revolutionize U.S. Intelligence Community With General Shanahan
* Email: [email protected]
In episode 34 of The Gradient Podcast, Daniel Bashir speaks to Sara Hooker.
Sara (@sarahookr) leads Cohere for AI and is a former Research Scientist at Google. Sara founded a Bay Area non-profit called Delta Analytics, which works with non-profits and communities to build technical capacity. She is also one of the co-founders of the Trustworthy ML Initiative, an active participant of the ML Collective research group, and a host of the underrated ML podcast.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Sections:
(00:00) Intro
(02:20) Podcasting gripe-fest
(06:00) Sara’s journey: from economics to AI
(09:15) Economics vs. AI research
(12:45) The Hardware Lottery
(19:15) Towards better hardware benchmarks
(26:00) Getting away from the hardware lottery
(32:30) The myth of compact, interpretable, robust, performant DNNs
(35:15) Top-line metrics vs. disaggregated metrics
(39:20) Solving memorization in the data pipeline, noisy examples
(45:35) Cohere For AI
Episode Links:
In episode 33 of The Gradient Podcast, Andrey Kurenkov speaks to Lukas Biewald.
Lukas Biewald is a co-founder of Weights and Biases, a company that creates developer tools for machine learning. Prior to that he was a co-founder and CEO of Figure Eight Inc. (formerly CrowdFlower) — an Internet company that collects training data for machine learning, which was sold for 300 million dollars.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:18) Start in AI
* (06:17) CrowdFlower / Crowdsourcing
* (21:06) Discovering Deep Learning
* (25:10) Learning Deep Learning
* (32:50) Weights and Biases
* (37:30) State of Tooling for ML
* (41:20) Exciting AI Trends
* (44:42) Interests outside of AI
* (45:40) Outro
Links:
* Starting a Second Machine Learning Tools Company, Ten Years Later
Opportunity at Weights & Biases:
In episode 32 of The Gradient Podcast, Andrey Kurenkov speaks to Chip Huyen.
Chip Huyen is a co-founder of Claypot AI, a platform for real-time machine learning. Previously, she was with Snorkel AI and NVIDIA. She teaches CS 329S: Machine Learning Systems Design at Stanford. She has also written four bestselling Vietnamese books, and more recently her new O’Reilly book Designing Machine Learning Systems has just come out!
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
She also maintains a Discord server with a focus on Machine Learning Systems.
Outline:
* (00:00) Intro
* (01:30) 3-year trip through Asia, Africa, and South America
* (04:00) Getting into AI at Stanford
* (11:30) Confession of a so-called AI expert
* (16:40) Academia vs Industry
* (17:40) Focus on ML Systems
* (20:00) ML in Academia vs Industry
* (28:15) Maturity of AI in Industry
* (31:45) ML Tools
* (37:20) Real Time ML
* (43:00) ML Systems Class and Book
Links:
* MLOps Discord server
* Confession of a so-called AI expert
* What I learned from looking at 200 machine learning tools
* CS 329S: Machine Learning Systems Design
* Designing Machine Learning Systems
In episode 31 of The Gradient Podcast, Daniel Bashir speaks to Preetum Nakkiran.
Preetum is a Research Scientist at Apple, a Visiting Researcher at UCSD, and part of the NSF/Simons Collaboration on the Theoretical Foundations of Deep Learning. He completed his PhD at Harvard, where he co-founded the ML Foundations Group. Preetum’s research focuses on building conceptual tools for understanding learning systems.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Sections:
(00:00) Intro
(01:25) Getting into AI through Theoretical Computer Science (TCS)
(09:08) Lack of Motivation in TCS and Learning What Research Is
(12:12) Foundational vs Problem-Solving Research, Antipatterns in TCS
(16:30) Theory and Empirics in Deep Learning
(18:30) What is an Empirical Theory of Deep Learning
(28:21) Deep Double Descent
(40:00) Inductive Biases in SGD, epoch-wise double descent
(45:25) Inductive Biases Stick Around
(47:12) Deep Bootstrap
(59:40) Distributional Generalization - Paper Rejections
(1:02:30) Classical Generalization and Distributional Generalization
(1:16:46) Future Work: Studying Structure in Data
(1:20:51) The Tweets^TM
(1:37:00) Outro
Episode Links:
In episode 30 of The Gradient Podcast, Daniel Bashir speaks to Max Woolf.
Max Woolf (@minimaxir) is currently a Data Scientist at BuzzFeed in San Francisco. Some work he’s done for BuzzFeed includes using StyleGAN to create AI-generated fake boyfriends and AI-generated art quizzes. In his free time, Max creates open source Python and R software on his GitHub. More recently, Max has been developing tooling for AI content generation, such as aitextgen for easy AI text generation.
Max’s projects are funded by his Patreon. If you have found anything on his website helpful, please help contribute!
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Sections:
(00:00) Intro
(01:20) Max’s Intro to Data Science and AI
(07:00) Software Engineering in Data Science, Max’s Perspectives
(09:00) Max’s Work at BuzzFeed
(23:10) Scaling, Inference, Large Models
(27:00) AI Content Generation
(30:45) Discourse About GPT-3
(34:30) AI Inventors
(38:35) Fun Projects and One-Offs: AI-generated Pokémon
(43:35) GPT-3-generated Discussion Topics
(46:30) Advice for Data Scientists
(48:10) BuzzFeed is Hiring :)
(48:20) Outro
Episode Links:
In episode 29 of The Gradient Podcast, we chat with Rosanne Liu. Rosanne is a research scientist at Google Brain, and co-founder and executive director of ML Collective, a nonprofit organization for open collaboration and accessible mentorship. Before that, she was a founding member of Uber AI. Outside of research, she supports underrepresented communities and has organized symposiums, workshops, and the weekly reading group “Deep Learning: Classics and Trends” since 2018. She is currently thinking deeply about how to democratize AI research even further and improve the diversity and fairness of the field, while working on multiple fronts of machine learning research, including understanding training dynamics and rethinking model capacity and scaling.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (01:30) How did you go into AI / research
* (6:45) AI research: the unreasonably narrow path and how not to be miserable
* (16:30) ML Collective Overview
* (21:45) Deep Learning: Classics and Trends Reading Group
* (26:25) More details about ML Collective
* (39:35) ICLR 2022 Diversity, Equity & Inclusion
* (48:00) Narrowness vs Variety in research
* (57:20) Favorite Papers
* (58:50) Measuring the Intrinsic Dimension of Objective Landscapes
* (01:01:40) Natural Adversarial Objects
* (01:03:00) Interests outside of AI - Writing
* (01:08:05) Interests outside of AI - Narrating Travels with Charley
* (01:13:22) Outro
In episode 28 of The Gradient Podcast, Daniel Bashir speaks to Ben Green, postdoctoral scholar in the Michigan Society of Fellows and Assistant Professor at the Gerald R. Ford School of Public Policy. Ben’s work focuses on the social and political impacts of government algorithms.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Sections:
(00:00) Intro
(02:00) Getting Started
(06:15) Soul Searching
(11:55) Decentering Algorithms
(19:50) The Future of the City
(27:25) Ethical Lip Service
(32:30) Ethics Research and Industry Incentives
(36:30) Broadening our Vision of Tech Ethics
(47:35) What Types of Research are Valued?
(52:40) Outro
Episode Links:
* Special Issue of the Journal of Social Computing
In episode 27 of The Gradient Podcast, Andrey Kurenkov speaks to Max Braun, who leads the AI and robotics software engineering team at Everyday Robots, a moonshot to create robots that can learn to help people in their everyday lives. Previously, he worked on building frontier technology products as an entrepreneur and later at Google and X. Max enjoys exploring the intersection of art, technology, and philosophy as a writer and designer.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:00) Start in AI
* (5:45) Humanoid Research in Osaka
* (8:45) Joining Google X
* (12:15) Visual Search and Google Glass
* (15:58) Academia Industry Connection
* (18:45) Overview of Robotics Vision
* (26:00) Machine Learning for Robotics
* (32:00) Robot Platform
* (38:00) Development Process and History
* (43:35) QT-Opt
* (49:05) Imitation Learning
* (55:00) Simulation Platform
* (59:45) Sim2Real
* (1:07:00) SayCan
* (1:14:30) Current Objectives
* (1:17:00) Other Projects
* (1:21:40) Outro
Episode Links:
* Simulating Artificial Muscles for Controlling a Robotic Arm with Fluctuation
* Introducing the Everyday Robot Project
* Scalable Deep Reinforcement Learning for Robotic Manipulation (QT-Opt)
* Alphabet is putting its prototype robots to work cleaning up around Google’s offices
* Everyday robots are (slowly) leaving the lab
* Can Robots Follow Instructions for New Tasks?
* Efficiently Initializing Reinforcement Learning With Prior Policies
* Shortening the Sim to Real Gap
* Action-Image: Teaching Grasping in Sim
* SayCan
* I Made an AI Read Wittgenstein, Then Told It to Play Philosopher
In episode 26 of The Gradient Podcast, Daniel Bashir speaks to Yejin Choi, professor of Computer Science at the University of Washington, and senior research manager at the Allen Institute for Artificial Intelligence.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Sections:
(00:00) Intro
(01:42) Getting Started in the Winter
(09:17) Has NLP lost its way?
(12:57) The Mosaic Project, Commonsense Intelligence
(18:20) A Priori Intuitions and Common Sense in Machines
(21:35) Abductive Reasoning
(24:49) Benchmarking Common Sense
(33:00) DeLorean and COMET - Algorithms for Commonsense Reasoning
(43:30) Positive and Negative uses of Commonsense Models
(49:40) Moral Reasoning
(57:00) Descriptive Morality, Meta-Ethical Concerns
(1:04:30) Potential Misuse
(1:12:15) Future Work
(1:16:23) Outro
Episode Links:
* The Curious Case of Commonsense Intelligence in Daedalus
* Common Sense Comes Close to Computers in Quanta
* Can Computers Learn Common Sense? in The New Yorker
In episode 25 of The Gradient Podcast, Daniel Bashir speaks to David Chalmers, Professor of Philosophy and Neural Science at New York University and co-director of NYU’s Center for Mind, Brain, and Consciousness.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Sections:
(00:00) Intro
(00:42) “Today’s neural networks may be slightly conscious”
(03:55) Openness to Machine Consciousness
(09:37) Integrated Information Theory
(18:41) Epistemic Gaps, Verbal Reports
(25:52) Vision Models and Consciousness
(33:37) Reasoning about Consciousness
(38:20) Illusionism
(41:30) Best Approaches to the Hard Problem
(44:21) Panpsychism
(46:35) Outro
Episode Links:
* Facing Up to the Problem of Consciousness (1995)
* Reality+: Virtual Worlds and the Problems of Philosophy
* Amanda Askell on AI Consciousness
In episode 24 of The Gradient Podcast, Daniel Bashir talks to Greg Yang, senior researcher at Microsoft Research. Greg Yang’s Tensor Programs framework recently received attention for its role in the µTransfer paradigm for tuning the hyperparameters of large neural networks.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Sections:
(00:00) Intro
(01:50) Start in AI / Research
(05:55) Fear of Math in ML
(08:00) Presentation of Research
(17:35) Path to MSR
(21:20) Origin of Tensor Programs
(26:05) Refining TP’s Presentation
(39:55) The Sea of Garbage (Initializations) and the Oasis
(47:44) Scaling Up Further
(55:53) On Theory and Practice in Deep Learning
(01:05:28) Outro
Episode Links:
* Visual Intro to Gaussian Processes (Distill)
In the 23rd interview of The Gradient Podcast, we talk to Nick Walton, CEO and co-founder of Latitude, a company whose goal is to make AI a tool of freedom and creativity for everyone and which is currently developing AI Dungeon and Voyage.
Subscribe to The Gradient Podcast:
* Spotify
* RSS
Outline:
(00:00) Intro
(01:38) How did you go into AI / research
(3:50) Origin of AI Dungeon
(8:15) What is a Dungeon Master
(12:15) Brief history of AI Dungeon
(17:30) AI in videogames, past and future
(23:35) Early days of AI Dungeon
(29:45) AI Dungeon as a Creative Tool
(33:50) Technical Aspects of AI Dungeon
(39:15) Voyage
(48:27) Visuals in AI Dungeon
(50:45) How to Control AI in Games
(55:38) Future of AI in Games
(57:50) Funny stories
(59:45) Interests / Hobbies
(01:01:45) Outro
In episode 22 of The Gradient Podcast, we talk to Connor Leahy, an AI researcher focused on AI alignment and a co-founder of EleutherAI.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Connor is an AI researcher working on understanding large ML models and aligning them to human values, and a co-founder of EleutherAI, a decentralized grassroots collective of volunteer researchers, engineers, and developers focused on AI alignment, scaling, and open source AI research. The organization’s flagship project is the GPT-Neo family of models, designed to match OpenAI’s GPT-3.
Sections:
(00:00:00) Intro
(00:01:20) Start in AI
(00:08:00) Being excited about GPT-2
(00:18:00) Discovering AI safety and alignment
(00:21:10) Replicating GPT-2
(00:27:30) Deciding whether to release GPT-2 weights
(00:36:15) Life after GPT-2
(00:40:05) GPT-3 and Start of EleutherAI
(00:44:40) Early days of EleutherAI
(00:47:30) Creating the Pile, GPT-Neo, Hacker Culture
(00:55:10) Growth of EleutherAI, Cultivating Community
(01:02:22) Why release a large language model
(01:08:50) AI Risk and Alignment
(01:21:30) Worrying (or not) about Superhuman AI
(01:25:20) AI alignment and releasing powerful models
(01:32:08) AI risk and research norms
(01:37:10) Work on GPT-3 replication, GPT-NeoX
(01:38:48) Joining EleutherAI
(01:43:28) Personal interests / hobbies
(01:47:20) Outro
Links to things discussed:
* Replicating GPT2–1.5B, GPT2, Counting Consciousness and the Curious Hacker
* The Pile
* GPT-Neo
* GPT-J
* Why Release a Large Language Model?
* What A Long, Strange Trip It's Been: EleutherAI One Year Retrospective
* GPT-NeoX
In interview 21 of The Gradient Podcast, we talk to Percy Liang, an Associate Professor of Computer Science at Stanford University and the director of the Center for Research on Foundation Models.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Percy Liang’s research spans many topics in machine learning and natural language processing, including robustness, interpretability, semantics, and reasoning. He is also a strong proponent of reproducibility through the creation of CodaLab Worksheets. His awards include the Presidential Early Career Award for Scientists and Engineers (2019), IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), a Microsoft Research Faculty Fellowship (2014), and multiple paper awards at ACL, EMNLP, ICML, and COLT.
Sections:
(00:00) Intro
(01:21) Start in AI
(06:52) Interest in Language
(10:17) Start of PhD
(12:22) Semantic Parsing
(17:49) Focus on ML robustness
(22:30) Foundation Models, model robustness
(28:55) Foundation Model bias
(34:48) Foundation Model research by academia
(37:13) Current research interests
(39:40) Surprising robustness results
(44:24) Reproducibility and CodaLab
(50:17) Outro
Papers / Topics discussed:
* On the Opportunities and Risks of Foundation Models
* Reflections on Foundation Models
* Removing spurious features can hurt accuracy and affect groups disproportionately.
* Selective classification can magnify disparities across groups
* Just train twice: improving group robustness without training group information
* LILA: language-informed latent actions
* CodaLab
In episode 20 of The Gradient Podcast, we talk to Eric Jang, a research scientist on the Robotics team at Google.
Eric is a research scientist on the Robotics team at Google. His research focuses on answering whether big data and small algorithms can yield unprecedented capabilities in robotics, just as they did in computer vision, translation, and speech before it. Specifically, he focuses on robotic manipulation and self-supervised robotic learning.
Sections:
(00:00) Intro
(00:50) Start in AI / Research
(03:58) Joining Google Robotics
(10:08) End to End Learning of Semantic Grasping
(19:11) Off Policy RL for Robotic Grasping
(29:33) Grasp2Vec
(40:50) Watch, Try, Learn Meta-Learning from Demonstrations and Rewards
(50:12) BC-Z: Zero-Shot Task Generalization with Robotic Imitation Learning
(59:41) Just Ask for Generalization
(01:09:02) Data for Robotics
(01:22:10) To Understand Language is to Understand Generalization
(01:32:38) Outro
Papers discussed:
* Grasp2Vec: Learning Object Representations from Self-Supervised Grasping
* End-to-End Learning of Semantic Grasping
* Watch, Try, Learn Meta-Learning from Demonstrations and Rewards
* BC-Z: Zero-Shot Task Generalization with Robotic Imitation Learning
* To Understand Language is to Understand Generalization
* Robots Must Be Ephemeralized
In episode 19 of The Gradient Podcast, we talk to Rishi Bommasani, a Ph.D student at Stanford focused on Foundation Models.
Rishi is a second-year Ph.D. student in the CS Department at Stanford, where he is advised by Percy Liang and Dan Jurafsky. His research focuses on understanding AI systems and their social impact, as well as using NLP to further scientific inquiry. Over the past year, he helped build and organize the Stanford Center for Research on Foundation Models (CRFM).
Sections:
(00:00:00) Intro
(00:01:05) How did you get into AI?
(00:09:55) Towards Understanding Position Embeddings
(00:14:23) Long-Distance Dependencies don’t have to be Long
(00:18:55) Interpreting Pretrained Contextualized Representations via Reductions to Static Embeddings
(00:30:25) Masters Thesis
(00:34:05) Start of PhD and work on foundation models
(00:42:14) Why were people interested in foundation models
(00:46:45) Formation of CRFM
(00:51:25) Writing report on foundation models
(00:56:33) Challenges in writing report
(01:05:45) Response to reception
(01:15:35) Goals of CRFM
(01:25:43) Current research focus
(01:30:35) Interests outside of research
(01:33:10) Outro
Papers discussed:
* Towards Understanding Position Embeddings
* Interpreting Pretrained Contextualized Representations via Reductions to Static Embeddings
* Generalized Optimal Linear Orders
* On the Opportunities and Risks of Foundation Models
* Reflections on Foundation Models
In episode 18 of The Gradient Podcast, we talked to Upol Ehsan, an Explainable AI (XAI) researcher who combines his background in Philosophy and Human-Computer Interaction to address problems in XAI beyond just opening the "black-box" of AI. You can find his Gradient article charting this vision here.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Papers Discussed:
* Rationalization: A Neural Machine Translation Approach to Generating Natural Language Explanations
* Automated Rationale Generation: A Technique for Explainable AI and its Effects on Human Perceptions
* Human-centered Explainable AI: Towards a Reflective Sociotechnical Approach
* Expanding Explainability: Towards Social Transparency in AI systems
* The Who in Explainable AI: How AI Background Shapes Perceptions of AI Explanations
* Explainability Pitfalls: Beyond Dark Patterns in Explainable AI
Exciting update!
In addition to listening to the audio recording, you can now experience the interview over at The Gradient’s main site, with live captions and the ability to jump to certain sections.
You can also experience it as follows: Interactive Transcript | Transcript PDF | Interview on YouTube
About Upol: Upol Ehsan cares about people first, technology second. He is a doctoral candidate in the School of Interactive Computing at Georgia Tech and an affiliate at the Data & Society Research Institute. Combining his expertise in AI and background in Philosophy, his work in Explainable AI (XAI) aims to foster a future where anyone, regardless of their background, can use AI-powered technology with dignity.
Actively publishing in top peer-reviewed venues like CHI, his work has received multiple awards and been covered in major media outlets. Bridging industry and academia, he serves on multiple program committees in HCI and AI conferences (e.g., DIS, IUI, NeurIPS) and actively connects these communities (e.g., the widely attended HCXAI workshop at CHI). By promoting equity and ethics in AI, he wants to ensure stakeholders who aren’t at the table do not end up on the menu. Outside research, he is an advisor for Aalor Asha, an educational institute he started for underprivileged children subjected to child labor. Follow him on Twitter: @upolehsan
In episode 17 of The Gradient Podcast, we talk to Miles Brundage, Head of Policy Research at OpenAI and a researcher passionate about the responsible governance of artificial intelligence.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Links:
* Will Technology Make Work Better for Everyone?
* Taking Superintelligence Seriously
* The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation
* Release Strategies and the Social Impact of Language Models
* All the News that’s Fit to Fabricate: AI-Generated Text as a Tool of Media Misinformation
* Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims
Timeline:
(00:00) Intro
(01:05) How did you get started in AI
(07:05) Writing about AI on Slate
(09:20) Start of PhD
(13:00) AI and the End of Scarcity
(18:12) Malicious Uses of AI
(28:00) GPT-2 and Publication Norms
(33:30) AI-Generated Text for Misinformation
(37:05) State of AI Misinformation
(41:30) Trustworthy AI
(48:50) OpenAI Policy Research Team
(53:15) Outro
Miles is a researcher and research manager, and is passionate about the responsible governance of artificial intelligence. In 2018, he joined OpenAI, where he began as a Research Scientist and recently became Head of Policy Research. Before that, he was a Research Fellow at the University of Oxford's Future of Humanity Institute, where he is still a Research Affiliate. He also serves as a member of Axon's AI and Policing Technology Ethics Board. He completed a PhD in Human and Social Dimensions of Science and Technology at Arizona State University in 2019.
Podcast Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music"
Hosted by Andrey Kurenkov (@andrey_kurenkov), a PhD student with the Stanford Vision and Learning Lab working on learning techniques for robotic manipulation and search.
In episode 16 of The Gradient Podcast, we talk to Jeffrey Ding, a postdoctoral fellow at Stanford's Center for International Security and Cooperation.
(01:35) Getting into AI research
(04:20) Interest in studying China
(06:50) Deciphering China’s AI Dream
(23:25) Beyond the AI Arms Race
(36:45) China's Current Capabilities in AI
(46:45) AI as a General Purpose and Strategic Technology
(57:38) ChinaAI Newsletter
(01:04:20) Teaching AI to Policy People
(01:06:30) Current Focus
(01:09:10) Interests Outside of Work + Outro
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Jeffrey Ding (@jjding99) is a postdoctoral fellow at Stanford's Center for International Security and Cooperation, sponsored by Stanford's Institute for Human-Centered Artificial Intelligence, as well as a research affiliate with the Centre for the Governance of AI at the University of Oxford. His current research is centered on how technological change affects the rise and fall of great powers, with an eye toward the implications of advances in AI for a possible U.S.-China power transition. He also puts out the excellent ChinaAI newsletter, which has (sometimes) weekly translations of Chinese-language musings on AI and related topics.
Podcast Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music"
Hosted by Andrey Kurenkov (@andrey_kurenkov), a PhD student with the Stanford Vision and Learning Lab working on learning techniques for robotic manipulation and search.
In episode 15 of The Gradient Podcast, we talk to Stanford PhD candidate Alex Tamkin.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Alex Tamkin is a fourth-year PhD student in Computer Science at Stanford, advised by Noah Goodman and part of the Stanford NLP Group. His research focuses on understanding, building, and controlling pretrained models, especially in domain-general or multimodal settings.
We discuss:
* Viewmaker Networks: Learning Views for Unsupervised Representation Learning
* DABS: A Domain-Agnostic Benchmark for Self-Supervised Learning
* On the Opportunities and Risks of Foundation Models
* Understanding the Capabilities, Limitations, and Societal Impact of Large Language Models
* Mentoring, teaching and fostering a healthy and inclusive research culture
* Scientific communication and breaking down walls between fields
Podcast Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music"
In episode 14 of The Gradient Podcast, we interview Stanford PhD candidate Peter Henderson.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Peter is a joint JD-PhD student at Stanford University advised by Dan Jurafsky. He is also an Open Philanthropy AI Fellow and a Graduate Student Fellow at the Regulation, Evaluation, and Governance Lab. His research focuses on creating robust decision-making systems, with three main goals: (1) use AI to make governments more efficient and fair; (2) ensure that AI isn’t deployed in ways that can harm people; (3) create new ML methods for applications that are beneficial to society.
Links:
* Reproducibility and Reusability in Deep Reinforcement Learning.
* Benchmark Environments for Multitask Learning in Continuous Domains
* Reproducibility of Benchmarked Deep Reinforcement Learning Tasks for Continuous Control
* Deep Reinforcement Learning that Matters
* Reproducibility and Replicability in Deep Reinforcement Learning (and Other Deep Learning Methods)
* Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning
* When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset
* How US law will evaluate artificial intelligence for Covid-19
Podcast Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music"
In episode 13 of The Gradient Podcast, we interview Stanford Professor Chelsea Finn.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Chelsea is an Assistant Professor at Stanford University. Her lab, IRIS, studies intelligence through robotic interaction at scale, and is affiliated with SAIL and the Statistical ML Group. She also spends time at Google as part of the Google Brain team. Her research deals with the capability of robots and other agents to develop broadly intelligent behavior through learning and interaction.
Links:
* Learning to Learn with Gradients
* Visual Model-Based Reinforcement Learning as a Path towards Generalist Robots
* RoboNet: A Dataset for Large-Scale Multi-Robot Learning
* Greedy Hierarchical Variational Autoencoders for Large-Scale Video Prediction
* Example-Driven Model-Based Reinforcement Learning for Solving Long-Horizon Visuomotor Tasks
Podcast Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music".
In episode 12 of The Gradient Podcast, we interview Devi Parikh, a professor at Georgia Tech whose research focuses on computer vision, natural language processing, embodied AI, human-AI collaboration, and AI for creativity.
Devi Parikh is an Associate Professor in the School of Interactive Computing at Georgia Tech, and a Research Scientist at Facebook AI Research (FAIR). Her research interests are in computer vision, natural language processing, embodied AI, human-AI collaboration, and AI for creativity. In the past, she has also been an Assistant Professor at Virginia Tech and a Research Assistant Professor at Toyota Technological Institute at Chicago (TTIC). She received her M.S. and Ph.D. degrees from the Electrical and Computer Engineering department at Carnegie Mellon University in 2007 and 2009 respectively.
Links:
* Feel The Music: Automatically Generating A Dance For An Input Song
* Exploring Crowd Co-creation Scenarios for Sketches
* Neuro-Symbolic Generative Art: A Preliminary Study
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Podcast Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music".
In episode 11 of The Gradient Podcast, we interview Sergey Levine, a professor at Berkeley whose research focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms for robotics.
Sergey Levine received a BS and MS in Computer Science from Stanford University in 2009, and a Ph.D. in Computer Science from Stanford University in 2014. He joined the faculty of the Department of Electrical Engineering and Computer Sciences at UC Berkeley in fall 2016. His work focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms, and includes developing algorithms for end-to-end training of deep neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, deep reinforcement learning algorithms, and more.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Podcast Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music".
In episode 10 of The Gradient Podcast, we interview data scientist, researcher, developer, educator, and entrepreneur Jeremy Howard.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Jeremy Howard is a data scientist, researcher, developer, educator, and entrepreneur. Jeremy is a founding researcher at fast.ai, a research institute dedicated to making deep learning more accessible. He is also a Distinguished Research Scientist at the University of San Francisco, the chair of WAMRI, and Chief Scientist at platform.ai. Previously, Jeremy was the founding CEO of Enlitic, which was the first company to apply deep learning to medicine, was the President and Chief Scientist of the data science platform Kaggle, and was the founding CEO of two successful Australian startups.
Podcast Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music".
In episode 9 of The Gradient Podcast, we interview Evan Hubinger, an AI safety researcher and Research Fellow at the Machine Intelligence Research Institute (MIRI).
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Evan is an AI safety veteran who’s done research at leading AI labs like OpenAI, and whose experience also includes stints at Google, Ripple, and Yelp. He currently works at the Machine Intelligence Research Institute (MIRI) as a Research Fellow, and joined me to talk about his views on AI safety, the alignment problem, and whether humanity is likely to survive the advent of superintelligent AI.
Podcast Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music".
In episode 8 of The Gradient Podcast, we interview Yannic Kilcher, an AI researcher and educator.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Yannic graduated with his PhD from ETH Zurich’s data analytics lab and is now the Chief Technology Officer of DeepJudge, a company building a next-generation AI-powered, context-sensitive legal document processing platform. He famously produces videos on his very popular YouTube channel, which cover machine learning research papers, programming, issues of the AI community, and the broader impact of AI in society.
Check out his YouTube channel here and follow him on Twitter here
Podcast Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music".
In episode 7 of The Gradient Podcast, we interview founder and owner of Silero Alexander Veysov. You can find a transcript of our conversation here, and the repositories for Open Speech To Text and Silero Models here.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Alexander Veysov is the founder / owner of Silero, a small company building Speech / NLP enabled products, and the author of Open STT. Silero has recently shipped its own Russian STT engine. Previously he worked at a then Moscow-based VC firm and at Ponominalu.ru, a ticketing startup acquired by MTS (a major Russian telco). He received his BA and MA in Economics from the Moscow State Institute of International Relations (MGIMO). You can follow his channel on Telegram (@snakers41).
Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music".
In episode 6 of The Gradient Podcast, we interview Deep Learning pioneer Yann LeCun.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on Twitter
Yann LeCun is the VP & Chief AI Scientist at Facebook and Silver Professor at NYU, and he was also the founding Director of Facebook AI Research and of the NYU Center for Data Science. He famously pioneered the use of Convolutional Neural Nets for image processing in the 80s and 90s, and is generally regarded as one of the people whose work was pivotal to the Deep Learning revolution in AI. In fact, he is the recipient of the 2018 ACM Turing Award (with Geoffrey Hinton and Yoshua Bengio) for "conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing".
Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music".
In episode 5 of The Gradient Podcast, we interview NLP researcher Anna Rogers.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Anna Rogers is a post-doctoral associate at the University of Copenhagen, working with research groups in the Center for Social Data Science and the Machine Learning section. Her main research area is Natural Language Processing, with a focus on interpretability and evaluation of deep learning models. She is also known for her work on improving peer review in NLP and as an organizer of the Workshop on Insights from Negative Results in NLP. Check out her article What Can We Do To Improve Peer Review in NLP and her tutorial Reviewing NLP Research.
In episode 4 of The Gradient Podcast, we interview artist, engineer, and entrepreneur Joel Simon.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Joel Simon is a multidisciplinary artist, toolmaker, and researcher. He studied computer science and art at Carnegie Mellon University, worked on bioinformatics at Rockefeller University, and most recently is the founder and director of Morphogen, a generative design company developing Artbreeder, a massively collaborative creative tool and network. His interests lie at the intersection of computer science, biology, and design, as well as furniture design, collaborative creativity, sculpture, and game design.
Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music".
Subscribe to The Gradient Podcast: iTunes | RSS | Spotify
In episode 3 of The Gradient Podcast, we interview researcher and entrepreneur Abubakar Abid. Follow him on Twitter and check out the websites of his company Gradio and his side project the Fatima Fellowship.
Abubakar is an entrepreneur and researcher focused on AI and its applications to medicine. He is currently running the company Gradio, which is developing a product to generate an easy-to-use UI for any ML model, function, or API. He is also running the Fatima Al-Fihri Predoctoral Fellowship, which is a 9-month program for computer science students from around the world who are planning on applying to PhD programs in the United States.
Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music".
In episode 2 of The Gradient Podcast, we interview AI artist Helena Sarin. Check out her work and follow her on Twitter at @NeuralBricolage.
Helena Sarin is a visual artist and software engineer, and is among the most prominent artists using AI in their work. She discovered GANs (Generative Adversarial Networks) several years ago and has since made generative models her primary medium. She is a frequent speaker at ML/AI conferences, over the past year delivering invited talks at MIT, the Library of Congress, and Capital One, and her artwork has been exhibited at AI Art exhibitions in Zurich, Dubai, Oxford, Shanghai, and Miami. Lastly, Helena was among the earliest authors to contribute a piece to The Gradient with 2018's "Playing a game of GANstruction", in which she described the process she follows to make her art.
Image credit: Happy Nation - The Waterpark By Helena Sarin
Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music".
Hello world! After more than 3 years of publishing overviews and perspectives from the AI community on thegradient.pub, The Gradient now has a podcast. In this first episode our lead editors take a look back on how it all started, as well as a look ahead at where things are heading. Keep an eye out for our next episode, coming soon!
Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music".