Deeply researched interviews
www.dwarkeshpatel.com
Gwern is a pseudonymous researcher and writer. He was one of the first people to see LLM scaling coming. If you've read his blog, you know he's one of the most interesting polymathic thinkers alive.
In order to protect Gwern's anonymity, I proposed interviewing him in person and having my friend Chris Painter voice over his words afterward. This amused him enough that he agreed.
After the episode, I convinced Gwern to create a donation page where people can help sustain his work. Please go here to contribute.
Read the full transcript here.
Sponsors:
* Jane Street is looking to hire their next generation of leaders. Their deep learning team is looking for ML researchers, FPGA programmers, and CUDA programmers. Summer internships are open - if you want to stand out, take a crack at their new Kaggle competition. To learn more, go here: https://jane-st.co/dwarkesh
* Turing provides complete post-training services for leading AI labs like OpenAI, Anthropic, Meta, and Gemini. They specialize in model evaluation, SFT, RLHF, and DPO to enhance models’ reasoning, coding, and multimodal capabilities. Learn more at turing.com/dwarkesh.
* This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue.
If you’re interested in advertising on the podcast, check out this page.
Timestamps
00:00:00 - Anonymity
00:01:09 - Automating Steve Jobs
00:04:38 - Isaac Newton's theory of progress
00:06:36 - Grand theory of intelligence
00:10:39 - Seeing scaling early
00:21:04 - AGI Timelines
00:22:54 - What to do in the remaining 3 years until AGI
00:26:29 - Influencing the shoggoth with writing
00:30:50 - Human vs artificial intelligence
00:33:52 - Rabbit holes
00:38:48 - Hearing impairment
00:43:00 - Wikipedia editing
00:47:43 - Gwern.net
00:50:20 - Counterfactual careers
00:54:30 - Borges & literature
01:01:32 - Gwern's intelligence and process
01:11:03 - A day in the life of Gwern
01:19:16 - Gwern's finances
01:25:05 - The diversity of AI minds
01:27:24 - GLP drugs and obesity
01:31:08 - Drug experimentation
01:33:40 - Parasocial relationships
01:35:23 - Open rabbit holes
A bonanza on the semiconductor industry and hardware scaling to AGI by the end of the decade.
Dylan Patel runs SemiAnalysis, the leading publication and research firm on AI hardware. Jon Y runs Asianometry, the world’s best YouTube channel on semiconductors and business history.
* What Xi would do if he became scaling-pilled
* $1T+ in datacenter buildout by end of decade
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
Sponsors:
* Jane Street is looking to hire their next generation of leaders. Their deep learning team is looking for FPGA programmers, CUDA programmers, and ML researchers. To learn more about their full-time roles, internships, tech podcast, and upcoming Kaggle competition, go here.
* This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue.
If you’re interested in advertising on the podcast, check out this page.
Timestamps
00:08:25 – How semiconductors get better
00:11:16 – China can centralize compute
00:18:50 – Export controls & sanctions
00:32:51 – Huawei's intense culture
00:38:51 – Why the semiconductor industry is so stratified
00:40:58 – N2 should not exist
00:45:53 – Taiwan invasion hypothetical
00:49:21 – Mind-boggling complexity of semiconductors
00:59:13 – Chip architecture design
01:04:36 – Do different architectures lead to different AI models? China vs. US
01:10:12 – Being head of compute at an AI lab
01:16:24 – Scaling costs and power demand
01:37:05 – Are we financing an AI bubble?
01:50:20 – Starting Asianometry and SemiAnalysis
02:06:10 – Opportunities in the semiconductor stack
Unless you understand the history of oil, you cannot understand the rise of America, WW1, WW2, secular stagnation, the Middle East, Ukraine, how Xi and Putin think, and basically anything else that's happened since 1860.
It was a great honor to interview Daniel Yergin, the Pulitzer Prize-winning author of The Prize - the best history of oil ever written (which makes it the best history of the 20th century ever written).
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
Sponsors:
This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue.
This episode is brought to you by Suno, pioneers in AI-generated music. Suno's technology allows artists to experiment with melodic forms and structures in unprecedented ways. From chart-toppers to avant-garde compositions, Suno is redefining musical creativity. If you're an ML researcher passionate about shaping the future of music, email your resume to [email protected].
If you’re interested in advertising on the podcast, check out this page.
Timestamps
(00:00:00) – Beginning of the oil industry
(00:13:37) – World War I & II
(00:25:06) – The Middle East
(00:47:04) – Yergin’s conversations with Putin & Modi
(01:04:36) – Writing through stories
(01:10:26) – The renewable energy transition
I had no idea how wild human history was before chatting with David Reich, a geneticist who studies ancient DNA.
Again and again, human history has been a story of one group figuring ‘something’ out and then basically wiping everyone else out.
From the tribe of 1,000-10,000 modern humans who killed off all the other human species 70,000 years ago, to the Yamnaya horse nomads 5,000 years ago who killed off 90+% of (then) Europeans and also destroyed the Indus Valley Civilization.
So much of what we thought we knew about human history is turning out to be wrong, from the ‘Out of Africa’ theory to the evolution of language, and this is all thanks to the research from David Reich’s lab.
Buy David Reich’s fascinating book, Who We Are and How We Got Here.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.
Follow me on Twitter for updates on future episodes.
Sponsor
This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue.
If you’re interested in advertising on the podcast, check out this page.
Timestamps
(00:00:00) – Archaic and modern humans gene flow
(00:20:24) – How early modern humans dominated the world
(00:39:59) – How bubonic plague rewrote history
(00:50:03) – Was agriculture terrible for humans?
(00:59:28) – Yamnaya expansion and how populations collide
(01:15:39) – “Lost civilizations” and our Neanderthal ancestry
(01:31:32) – The DNA Challenge
(01:41:38) – David’s career: the genetic vocation
Chatted with Joe Carlsmith about whether we can trust power/techno-capital, how to not end up like Stalin in our urge to control the future, gentleness towards the artificial Other, and much more.
Check out Joe's sequence on Otherness and Control in the Age of AGI here.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
Sponsors:
- Bland.ai is an AI agent that automates phone calls in any language, 24/7. Their technology uses "conversational pathways" for accurate, versatile communication across sales, operations, and customer support. You can try Bland yourself by calling 415-549-9654. Enterprises can get exclusive access to their advanced model at bland.ai/dwarkesh.
- Stripe is financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue.
If you’re interested in advertising on the podcast, check out this page.
Timestamps:
(00:00:00) - Understanding the Basic Alignment Story
(00:44:04) - Monkeys Inventing Humans
(00:46:43) - Nietzsche, C.S. Lewis, and AI
(01:22:51) - How should we treat AIs
(01:52:33) - Balancing Being a Humanist and a Scholar
(02:05:02) - Explore/exploit tradeoffs and AI
I talked with Patrick McKenzie (known online as patio11) about how a small team he ran over a Discord server got vaccines into Americans' arms: a story of broken incentives, outrageous incompetence, and how a few individuals with high agency saved thousands of lives.
Enjoy!
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.
Follow me on Twitter for updates on future episodes.
Sponsor
This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue.
Timestamps
(00:00:00) – Why hackers on Discord had to save thousands of lives
(00:17:26) – How politics crippled vaccine distribution
(00:38:19) – Fundraising for VaccinateCA
(00:51:09) – Why tech needs to understand how government works
(00:58:58) – What is crypto good for?
(01:13:07) – How the US government leverages big tech to violate rights
(01:24:36) – Can the US have nice things like Japan?
(01:26:41) – Financial plumbing & money laundering: a how-not-to guide
(01:37:42) – Maximizing your value: why some people negotiate better
(01:42:14) – Are young people too busy playing Factorio to found startups?
(01:57:30) – The need for a post-mortem
I chatted with Tony Blair about:
- What he learned from Lee Kuan Yew
- Intelligence agencies' track record on Iraq & Ukraine
- What he tells the dozens of world leaders who come to him for advice
- How much of a PM’s time is actually spent governing
- What will AI’s July 1914 moment look like from inside the Cabinet?
Enjoy!
Watch the video on YouTube. Read the full transcript here.
Follow me on Twitter for updates on future episodes.
Sponsors
- Prelude Security is the world’s leading cyber threat management automation platform. Prelude Detect quickly transforms threat intelligence into validated protections so organizations can know with certainty that their defenses will protect them against the latest threats. Prelude is backed by Sequoia Capital, Insight Partners, The MITRE Corporation, CrowdStrike, and other leading investors. Learn more here.
- This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue.
If you’re interested in advertising on the podcast, check out this page.
Timestamps
(00:00:00) – A prime minister’s constraints
(00:04:12) – CEOs vs. politicians
(00:10:31) – COVID, AI, & how government deals with crisis
(00:21:24) – Learning from Lee Kuan Yew
(00:27:37) – Foreign policy & intelligence
(00:31:12) – How much leadership actually matters
(00:35:34) – Private vs. public tech
(00:39:14) – Advising global leaders
(00:46:45) – The unipolar moment in the 90s
Here is my conversation with Francois Chollet and Mike Knoop on the $1 million ARC-AGI Prize they're launching today.
I did a bunch of Socratic grilling throughout, but Francois’s arguments about why LLMs won’t lead to AGI are very interesting and worth thinking through.
It was really fun discussing/debating the cruxes. Enjoy!
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.
Timestamps
(00:00:00) – The ARC benchmark
(00:11:10) – Why LLMs struggle with ARC
(00:19:00) – Skill vs intelligence
(00:27:55) – Do we need “AGI” to automate most jobs?
(00:48:28) – Future of AI progress: deep learning + program synthesis
(01:00:40) – How Mike Knoop got nerd-sniped by ARC
(01:08:37) – Million $ ARC Prize
(01:10:33) – Resisting benchmark saturation
(01:18:08) – ARC scores on frontier vs open source models
(01:26:19) – Possible solutions to ARC Prize
Chatted with my friend Leopold Aschenbrenner about the trillion-dollar nationalized cluster, CCP espionage at AI labs, how unhobblings and scaling can lead to AGI by 2027, the dangers of outsourcing clusters to the Middle East, leaving OpenAI, and situational awareness.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.
Follow me on Twitter for updates on future episodes. Follow Leopold on Twitter.
Timestamps
(00:00:00) – The trillion-dollar cluster and unhobbling
(00:20:31) – AI 2028: The return of history
(00:40:26) – Espionage & American AI superiority
(01:08:20) – Geopolitical implications of AI
(01:31:23) – State-led vs. private-led AI
(02:12:23) – Becoming Valedictorian of Columbia at 19
(02:30:35) – What happened at OpenAI
(02:45:11) – Accelerating AI research progress
(03:25:58) – Alignment
(03:41:26) – On Germany, and understanding foreign perspectives
(03:57:04) – Dwarkesh’s immigration story and path to the podcast
(04:07:58) – Launching an AGI hedge fund
(04:19:14) – Lessons from WWII
(04:29:08) – Coda: Frederick the Great
Chatted with John Schulman (who cofounded OpenAI and led the creation of ChatGPT) about how post-training tames the shoggoth, and the nature of the progress to come...
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
Timestamps
(00:00:00) - Pre-training, post-training, and future capabilities
(00:16:57) - Plan for AGI 2025
(00:29:19) - Teaching models to reason
(00:40:50) - The Road to ChatGPT
(00:52:13) - What makes for a good RL researcher?
(01:00:58) - Keeping humans in the loop
(01:15:15) - State of research, plateaus, and moats
Sponsors
If you’re interested in advertising on the podcast, fill out this form.
* Your DNA shapes everything about you. Want to know how? Take 10% off our Premium DNA kit with code DWARKESH at mynucleus.com.
* CommandBar is an AI user assistant that any software product can embed to non-annoyingly assist, support, and unleash their users. Used by forward-thinking CX, product, growth, and marketing teams. Learn more at commandbar.com.
Mark Zuckerberg on:
- Llama 3
- open sourcing towards AGI
- custom silicon, synthetic data, & energy constraints on scaling
- Caesar Augustus, intelligence explosion, bioweapons, $10b models, & much more
Enjoy!
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Human-edited transcript with helpful links here.
Timestamps
(00:00:00) - Llama 3
(00:08:32) - Coding on path to AGI
(00:25:24) - Energy bottlenecks
(00:33:20) - Is AI the most important technology ever?
(00:37:21) - Dangers of open source
(00:53:57) - Caesar Augustus and metaverse
(01:04:53) - Open sourcing the $10b model & custom silicon
(01:15:19) - Zuck as CEO of Google+
Sponsors
If you’re interested in advertising on the podcast, fill out this form.
* This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue. Learn more at stripe.com.
* V7 Go is a tool to automate multimodal tasks using GenAI, reliably and at scale. Use code DWARKESH20 for 20% off on the pro plan. Learn more here.
* CommandBar is an AI user assistant that any software product can embed to non-annoyingly assist, support, and unleash their users. Used by forward-thinking CX, product, growth, and marketing teams. Learn more at commandbar.com.
Had so much fun chatting with my good friends Trenton Bricken and Sholto Douglas on the podcast.
No way to summarize it, except:
This is the best context dump out there on how LLMs are trained, what capabilities they're likely to soon have, and what exactly is going on inside them.
You would be shocked how much of what I know about this field I've learned just from talking with them.
To the extent that you've enjoyed my other AI interviews, now you know why.
So excited to put this out. Enjoy! I certainly did :)
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.
There's a transcript with links to all the papers the boys were throwing down, which may help you follow along.
Follow Trenton and Sholto on Twitter.
Timestamps
(00:00:00) - Long contexts
(00:16:12) - Intelligence is just associations
(00:32:35) - Intelligence explosion & great researchers
(01:06:52) - Superposition & secret communication
(01:22:34) - Agents & true reasoning
(01:34:40) - How Sholto & Trenton got into AI research
(02:07:16) - Are feature spaces the wrong way to think about intelligence?
(02:21:12) - Will interp actually work on superhuman models
(02:45:05) - Sholto’s technical challenge for the audience
(03:03:57) - Rapid fire
Here is my episode with Demis Hassabis, CEO of Google DeepMind.
We discuss:
* Why scaling is an art form
* Adding search, planning, & AlphaZero type training atop LLMs
* Making sure rogue nations can't steal weights
* The right way to align superhuman AIs and do an intelligence explosion
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.
Timestamps
(0:00:00) - Nature of intelligence
(0:05:56) - RL atop LLMs
(0:16:31) - Scaling and alignment
(0:24:13) - Timelines and intelligence explosion
(0:28:42) - Gemini training
(0:35:30) - Governance of superhuman AIs
(0:40:42) - Safety, open source, and security of weights
(0:47:00) - Multimodal and further progress
(0:54:18) - Inside Google DeepMind
We discuss:
* what it takes to process $1 trillion/year
* how to build multi-decade APIs, companies, and relationships
* what's next for Stripe (increasing the GDP of the internet is quite an open-ended prompt, and the Collison brothers are just getting started).
Plus the amazing stuff they're doing at Arc Institute, the financial infrastructure for AI agents, playing devil's advocate against progress studies, and much more.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
Timestamps
(00:00:00) - Advice for 20-30 year olds
(00:12:12) - Progress studies
(00:22:21) - Arc Institute
(00:34:27) - AI & Fast Grants
(00:43:46) - Stripe history
(00:55:44) - Stripe Climate
(01:01:39) - Beauty & APIs
(01:11:51) - Financial innards
(01:28:16) - Stripe culture & future
(01:41:56) - Virtues of big businesses
(01:51:41) - John
It was a great pleasure speaking with Tyler Cowen for the 3rd time.
We discussed GOAT: Who is the Greatest Economist of all Time and Why Does it Matter?, especially in the context of how the insights of Hayek, Keynes, Smith, and other great economists help us make sense of AI, growth, animal spirits, prediction markets, alignment, central planning, and much more.
The topics covered in this episode are too many to summarize. Hope you enjoy!
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
Timestamps
(0:00:00) - John Maynard Keynes
(00:17:16) - Controversy
(00:25:02) - Friedrich von Hayek
(00:47:41) - John Stuart Mill
(00:52:41) - Adam Smith
(00:58:31) - Coase, Schelling, & George
(01:08:07) - Anarchy
(01:13:16) - Cheap WMDs
(01:23:18) - Technocracy & political philosophy
(01:34:16) - AI & Scaling
This is a narration of my blog post, Lessons from The Years of Lyndon Johnson by Robert Caro.
You can read the full post here: https://www.dwarkeshpatel.com/p/lyndon-johnson
Listen on Apple Podcasts, Spotify, or any other podcast platform. Follow me on Twitter for updates on future posts and episodes.
This is a narration of my blog post, Will scaling work?
You can read the full post here: https://www.dwarkeshpatel.com/p/will-scaling-work
Listen on Apple Podcasts, Spotify, or any other podcast platform. Follow me on Twitter for updates on future posts and episodes.
A true honor to speak with Jung Chang.
She is the author of Wild Swans: Three Daughters of China (sold 15+ million copies worldwide) and Mao: The Unknown Story.
We discuss:
- what it was like growing up during the Cultural Revolution as the daughter of a denounced official
- why the CCP continues to worship the biggest mass murderer in human history
- how exactly Communist totalitarianism was able to subjugate a billion people
- why Chinese leaders like Xi and Deng, who suffered during the Cultural Revolution, don't condemn Mao
- how Mao starved and killed 40 million people during the Great Leap Forward in order to exchange food for Soviet weapons
Wild Swans is the most moving book I've ever read. It was a real privilege to speak with its author.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
Timestamps
(00:00:00) - Growing up during Cultural Revolution
(00:15:58) - Could officials have overthrown Mao?
(00:34:09) - Great Leap Forward
(00:48:12) - Modern support of Mao
(01:03:24) - Life as peasant
(01:21:30) - Psychology of communist society
Andrew Roberts is the world's best biographer and one of the leading historians of our time.
We discussed
* Churchill the applied historian,
* Napoleon the startup founder,
* why Nazi ideology cost Hitler WW2,
* drones, reconnaissance, and other aspects of the future of war,
* Iraq, Afghanistan, Korea, Ukraine, & Taiwan.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
Timestamps
(00:00:00) - Post WW2 conflicts
(00:10:57) - Ukraine
(00:16:33) - How Truman prevented nuclear war
(00:22:49) - Taiwan
(00:27:15) - Churchill
(00:35:11) - Gaza & future wars
(00:39:05) - Could Hitler have won WW2?
(00:48:00) - Surprise attacks
(00:59:33) - Napoleon and startup founders
(01:14:06) - Roberts’ insane productivity
Here is my interview with Dominic Cummings on why Western governments are so dangerously broken, and how to fix them before an even more catastrophic crisis.
Dominic was Chief Advisor to the Prime Minister during COVID, and before that, director of Vote Leave (which masterminded the 2016 Brexit referendum).
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
Timestamps
(00:00:00) - One day in COVID…
(00:08:26) - Why is government broken?
(00:29:10) - Civil service
(00:38:27) - Opportunity wasted?
(00:49:35) - Rishi Sunak and Number 10 vs 11
(00:55:13) - Cyber, nuclear, bio risks
(01:02:04) - Intelligence & defense agencies
(01:23:32) - Bismarck & Lee Kuan Yew
(01:37:46) - How to fix the government?
(01:56:43) - Taiwan
(02:00:10) - Russia
(02:07:12) - Bismarck’s career as an example of AI (mis)alignment
(02:17:37) - Odyssean education
Paul Christiano is the world’s leading AI safety researcher. My full episode with him is out!
We discuss:
- Does he regret inventing RLHF, and is alignment necessarily dual-use?
- Why he has relatively modest timelines (40% by 2040, 15% by 2030)
- What we want the post-AGI world to look like (do we want to keep gods enslaved forever?)
- Why he’s leading the push to get labs to develop responsible scaling policies, and what it would take to prevent an AI coup or bioweapon
- His current research into a new proof system, and how this could solve alignment by explaining models' behavior
- and much more.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
Open Philanthropy
Open Philanthropy is currently hiring for twenty-two different roles to reduce catastrophic risks from fast-moving advances in AI and biotechnology, including grantmaking, research, and operations.
For more information and to apply, please see the application: https://www.openphilanthropy.org/research/new-roles-on-our-gcr-team/
The deadline to apply is November 9th; make sure to check out those roles before they close.
Timestamps
(00:00:00) - What do we want post-AGI world to look like?
(00:24:25) - Timelines
(00:45:28) - Evolution vs gradient descent
(00:54:53) - Misalignment and takeover
(01:17:23) - Is alignment dual-use?
(01:31:38) - Responsible scaling policies
(01:58:25) - Paul’s alignment research
(02:35:01) - Will this revolutionize theoretical CS and math?
(02:46:11) - How Paul invented RLHF
(02:55:10) - Disagreements with Carl Shulman
(03:01:53) - Long TSMC but not NVIDIA
I had a lot of fun chatting with Shane Legg - Founder and Chief AGI Scientist, Google DeepMind!
We discuss:
* Why he expects AGI around 2028
* How to align superhuman models
* What new architectures are needed for AGI
* Whether DeepMind has sped up capabilities or safety more
* Why multimodality will be the next big landmark
* and much more
Watch the full episode on YouTube, Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.
Timestamps
(0:00:00) - Measuring AGI
(0:11:41) - Do we need new architectures?
(0:16:26) - Is search needed for creativity?
(0:19:19) - Superhuman alignment
(0:29:58) - Impact of DeepMind on safety vs capabilities
(0:34:03) - Timelines
(0:41:24) - Multimodality
I had a lot of fun chatting with Grant Sanderson (who runs the excellent 3Blue1Brown YouTube channel) about:
- Whether advanced math requires AGI
- What careers should mathematically talented students pursue
- Why Grant plans on doing a stint as a high school teacher
- Tips for self-teaching
- Does Gödel’s incompleteness theorem actually matter
- Why are good explanations so hard to find?
- And much more
Watch on YouTube. Listen on Spotify, Apple Podcasts, or any other podcast platform. Full transcript here.
Timestamps
(0:00:00) - Does winning math competitions require AGI?
(0:08:24) - Where to allocate mathematical talent?
(0:17:34) - Grant’s miracle year
(0:26:44) - Prehistoric humans and math
(0:33:33) - Why is a lot of math so new?
(0:44:44) - Future of education
(0:56:28) - Math helped me realize I wasn’t that smart
(0:59:25) - Does Gödel’s incompleteness theorem matter?
(1:05:12) - How Grant makes videos
(1:10:13) - Grant’s math exposition competition
(1:20:44) - Self-teaching
I learned so much from Sarah Paine, Professor of History and Strategy at the Naval War College.
We discuss:
- how continental vs maritime powers think and how this explains Xi & Putin's decisions
- how a war with China over Taiwan would shake out and whether it could go nuclear
- why the British Empire fell apart, why China went communist, how Hitler and Japan could have coordinated to win WW2, and whether Japanese occupation was good for Korea, Taiwan and Manchuria
- plus other lessons from WW2, Cold War, and Sino-Japanese War
- how to study history properly, and why leaders keep making the same mistakes
If you want to learn more, check out her books - they’re some of the best military history I’ve ever read.
Watch on YouTube, listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript.
Timestamps
(0:00:00) - Grand strategy
(0:11:59) - Death ground
(0:23:19) - WW1
(0:39:23) - Writing history
(0:50:25) - Japan in WW2
(0:59:58) - Ukraine
(1:10:50) - Japan/Germany vs Iraq/Afghanistan occupation
(1:21:25) - Chinese invasion of Taiwan
(1:51:26) - Communists & Axis
(2:08:34) - Continental vs maritime powers
Here is my conversation with Dario Amodei, CEO of Anthropic.
Dario is hilarious and has fascinating takes on what these models are doing, why they scale so well, and what it will take to align them.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
Timestamps
(00:00:00) - Introduction
(00:01:00) - Scaling
(00:15:46) - Language
(00:22:58) - Economic Usefulness
(00:38:05) - Bioterrorism
(00:43:35) - Cybersecurity
(00:47:19) - Alignment & mechanistic interpretability
(00:57:43) - Does alignment research require scale?
(01:05:30) - Misuse vs misalignment
(01:09:06) - What if AI goes well?
(01:11:05) - China
(01:15:11) - How to think about alignment
(01:31:31) - Is modern security good enough?
(01:36:09) - Inefficiencies in training
(01:45:53) - Anthropic’s Long Term Benefit Trust
(01:51:18) - Is Claude conscious?
(01:56:14) - Keeping a low profile
A few weeks ago, I sat beside Andy Matuschak to record how he reads a textbook.
Even though my own job is to learn things, I was shocked by how much more intense, painstaking, and effective his learning process was.
So I asked if we could record a conversation about how he learns and a bunch of other topics:
* How he identifies and interrogates his confusion (much harder than it seems, and requires an extremely effortful and slow pace)
* Why memorization is essential to understanding and decision-making
* How come some people (like Tyler Cowen) can integrate so much information without an explicit note-taking or spaced repetition system
* How LLMs and video games will change education
* How independent researchers and writers can make money
* The balance of freedom and discipline in education
* Why we produce fewer von Neumann-like prodigies nowadays
* How multi-trillion-dollar companies like Apple (where he was previously responsible for bedrock iOS features) manage to coordinate millions of different considerations (from the cost of different components to the needs of users, etc.) into new products designed by tens of thousands of people.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
To see Andy’s process in action, check out the video where we record him studying a quantum physics textbook, talking aloud about his thought process, and using his memory system prototype to internalize the material.
You can check out his website and personal notes, and follow him on Twitter.
Cometeer
Visit cometeer.com/lunar for $20 off your first order on the best coffee of your life!
If you want to sponsor an episode, contact me at [email protected].
Timestamps
(00:00:52) - Skillful reading
(00:02:30) - Do people care about understanding?
(00:06:52) - Structuring effective self-teaching
(00:16:37) - Memory and forgetting
(00:33:10) - Andy’s memory practice
(00:40:07) - Intellectual stamina
(00:44:27) - New media for learning (video, games, streaming)
(00:58:51) - Schools are designed for the median student
(01:05:12) - Is learning inherently miserable?
(01:11:57) - How Andy would structure his kids’ education
(01:30:00) - The usefulness of hypertext
(01:41:22) - How computer tools enable iteration
(01:50:44) - Monetizing public work
(02:08:36) - Spaced repetition
(02:10:16) - Andy’s personal website and notes
(02:12:44) - Working at Apple
(02:19:25) - Spaced repetition 2
The second half of my 7-hour conversation with Carl Shulman is out!
My favorite part! And the one that had the biggest impact on my worldview.
Here, Carl lays out how an AI takeover might happen:
* AI can threaten mutually assured destruction from bioweapons,
* use cyber attacks to take over physical infrastructure,
* build mechanical armies,
* spread seed AIs we can never exterminate,
* offer tech and other advantages to collaborating countries, etc.
Plus we talk about a whole bunch of weird and interesting topics which Carl has thought about:
* what is the far future best case scenario for humanity
* what it would look like to have AI make thousands of years of intellectual progress in a month
* how do we detect deception in superhuman models
* does space warfare favor defense or offense
* is a Malthusian state inevitable in the long run
* why markets haven't priced in explosive economic growth
* & much more
Carl also explains how he developed such a rigorous, thoughtful, and interdisciplinary model of the biggest problems in the world.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
Catch part 1 here
Timestamps
(0:00:00) - Intro
(0:00:47) - AI takeover via cyber or bio
(0:32:27) - Can we coordinate against AI?
(0:53:49) - Human vs AI colonizers
(1:04:55) - Probability of AI takeover
(1:21:56) - Can we detect deception?
(1:47:25) - Using AI to solve coordination problems
(1:56:01) - Partial alignment
(2:11:41) - AI far future
(2:23:04) - Markets & other evidence
(2:33:26) - Day in the life of Carl Shulman
(2:47:05) - Space warfare, Malthusian long run, & other rapid fire
In terms of the depth and range of topics, this episode is the best I’ve done.
No part of my worldview is the same after talking with Carl Shulman. He's the most interesting intellectual you've never heard of.
We ended up talking for 8 hours, so I'm splitting this episode into 2 parts.
This part is about Carl’s model of an intelligence explosion, which integrates everything from:
* how fast algorithmic progress & hardware improvements in AI are happening,
* what primate evolution suggests about the scaling hypothesis,
* how soon before AIs could do large parts of AI research themselves, and whether there would be faster and faster doublings of AI researchers,
* how quickly robots produced from existing factories could take over the economy.
We also discuss the odds of a takeover based on whether the AI is aligned before the intelligence explosion happens, and Carl explains why he’s more optimistic than Eliezer.
The next part, which I’ll release next week, is about all the specific mechanisms of an AI takeover, plus a whole bunch of other galaxy brain stuff.
Maybe 3 people in the world have thought as rigorously as Carl about so many interesting topics. This was a huge pleasure.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
Timestamps
(00:00:00) - Intro
(00:01:32) - Intelligence Explosion
(00:18:03) - Can AIs do AI research?
(00:39:00) - Primate evolution
(01:03:30) - Forecasting AI progress
(01:34:20) - After human-level AGI
(02:08:39) - AI takeover scenarios
It was a tremendous honor & pleasure to interview Richard Rhodes, Pulitzer Prize-winning author of The Making of the Atomic Bomb.
We discuss
- similarities between AI progress & the Manhattan Project (developing a powerful, unprecedented, & potentially apocalyptic technology within an uncertain arms-race situation)
- visiting starving former Soviet scientists during fall of Soviet Union
- whether Oppenheimer was a spy, & consulting on the Nolan movie
- living through WW2 as a child
- odds of nuclear war in Ukraine, Taiwan, Pakistan, & North Korea
- how the US pulled off such a massive secret wartime scientific & industrial project
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
Timestamps
(0:00:00) - Oppenheimer movie
(0:06:22) - Was the bomb inevitable?
(0:29:10) - Firebombing vs nuclear vs hydrogen bombs
(0:49:44) - Stalin & the Soviet program
(1:08:24) - Deterrence, disarmament, North Korea, Taiwan
(1:33:12) - Oppenheimer as lab director
(1:53:40) - AI progress vs Manhattan Project
(1:59:50) - Living through WW2
(2:16:45) - Secrecy
(2:26:34) - Wisdom & war
For 4 hours, I tried to come up with reasons why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong.
We also discuss his call to halt AI, why LLMs make alignment harder, what it would take to save humanity, his millions of words of sci-fi, and much more.
If you want to get to the crux of the conversation, fast-forward to 2:35:00 through 3:43:54. Here we go through and debate the main reasons I still think doom is unlikely.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
Timestamps
(0:00:00) - TIME article
(0:09:06) - Are humans aligned?
(0:37:35) - Large language models
(1:07:15) - Can AIs help with alignment?
(1:30:17) - Society’s response to AI
(1:44:42) - Predictions (or lack thereof)
(1:56:55) - Being Eliezer
(2:13:06) - Orthogonality
(2:35:00) - Could alignment be easier than we think?
(3:02:15) - What will AIs want?
(3:43:54) - Writing fiction & whether rationality helps you win
I went over to the OpenAI offices in San Francisco to ask the Chief Scientist and cofounder of OpenAI, Ilya Sutskever, about:
* time to AGI
* leaks and spies
* what's after generative models
* post AGI futures
* working with Microsoft and competing with Google
* difficulty of aligning superhuman AI
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
Timestamps
(00:00) - Time to AGI
(05:57) - What’s after generative models?
(10:57) - Data, models, and research
(15:27) - Alignment
(20:53) - Post AGI Future
(26:56) - New ideas are overrated
(36:22) - Is progress inevitable?
(41:27) - Future Breakthroughs
It is said that the two greatest problems of history are: how to account for the rise of Rome, and how to account for her fall. If so, then the volcanic ashes spewed by Mount Vesuvius in 79 AD - which entombed the cities of Pompeii and Herculaneum in southern Italy - hold history’s greatest prize. For beneath those ashes lies the only salvageable library from the classical world.
Nat Friedman was the CEO of GitHub from 2018 to 2021. Before that, he started and sold two companies - Ximian and Xamarin. He is also the founder of AI Grant and California YIMBY.
And most recently, he has created and funded the Vesuvius Challenge - a million-dollar prize for reading an unopened Herculaneum scroll for the very first time. If we can decipher these scrolls, we may be able to recover lost gospels, forgotten epics, and even missing works of Aristotle.
We also discuss the future of open source and AI, running GitHub and building Copilot, and why the EMH is a lie.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
Timestamps
(0:00:00) - Vesuvius Challenge
(0:30:00) - Finding points of leverage
(0:37:39) - Open Source in AI
(0:40:32) - GitHub Acquisition
(0:50:18) - Copilot origin Story
(1:11:47) - Nat.org
(1:32:56) - Questions from Twitter
I flew out to Chicago to interview Brett Harrison, the former President of FTX US and founder of Architect.
In his first longform interview since the fall of FTX, he speaks in great detail about his entire tenure there and about SBF’s dysfunctional leadership. He talks about how the inner circle of Gary Wang, Nishad Singh, and SBF mismanaged the company, controlled the codebase, got distracted by the media, and even threatened him over his letter of resignation.
In what was my favorite part of the interview, we also discuss his insights about the financial system from his decades of experience in the world's largest HFT firms.
And we talk about Brett's new startup, Architect, as well as the general state of crypto post-FTX.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
Timestamps
(0:00:00) - Passive investing & HFT hacks
(0:08:30) - Is Finance Zero-Sum?
(0:18:38) - Interstellar Markets & Periodic Auctions
(0:23:10) - Hiring & Programming at Jane Street
(0:32:09) - Quant Culture
(0:42:10) - FTX - Meeting Sam, Joining FTX US
(0:58:20) - FTX - Accomplishments, Beginnings of Trouble
(1:08:11) - FTX - SBF's Dysfunctional Leadership
(1:26:53) - FTX - Alameda
(1:33:50) - FTX - Leaving FTX, SBF's Threats
(1:45:45) - FTX - Collapse
(1:53:10) - FTX - Lessons
(2:04:34) - FTX - Regulators, & FTX Mafia
(2:15:42) - Architect.xyz
(2:30:10) - Institutional Interest & Uses of Crypto
My podcast with the brilliant Marc Andreessen is out!
We discuss:
* how AI will revolutionize software
* whether NFTs are useless, & whether he should be funding flying cars instead
* a16z's biggest vulnerabilities
* the future of fusion, education, Twitter, venture, managerialism, & big tech
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
Timestamps
(0:00:17) - Chewing glass
(0:04:21) - AI
(0:06:42) - Regrets
(0:08:51) - Managerial capitalism
(0:18:43) - 100 year fund
(0:22:15) - Basic research
(0:27:07) - $100b fund?
(0:30:32) - Crypto debate
(0:43:29) - Future of VC
(0:50:20) - Founders
(0:56:42) - a16z vulnerabilities
(1:01:28) - Monetizing Twitter
(1:07:09) - Future of big tech
(1:14:07) - Is VC Overstaffed?
Garett Jones is an economist at George Mason University and the author of The Cultural Transplant, Hive Mind, and 10% Less Democracy.
This episode was fun and interesting throughout!
He explains:
* Why national IQ matters
* How migrants bring their values to their new countries
* Why we should have less democracy
* How the Chinese are an unstoppable global force for free markets
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.
Timestamps
(00:00:00) - Intro
(00:01:08) - Migrants Change Countries with Culture or Votes?
(00:09:15) - Impact of Immigrants on Markets & Corruption
(00:12:02) - 50% Open Borders?
(00:16:54) - Chinese are Unstoppable Capitalists
(00:21:39) - Innovation & Immigrants
(00:24:53) - Open Borders for Migrants Equivalent to Americans?
(00:28:54) - Let's Ignore Side Effects?
(00:30:25) - Are Poor Countries Stuck?
(00:32:26) - How Can Effective Altruists Increase National IQ
(00:39:13) - Clone a million John von Neumann?
(00:44:39) - Genetic Selection for IQ
(00:47:02) - Democracy, Fed, FDA, & Presidential Power
(00:49:42) - EU is a force for good?
(00:55:12) - Why is America More Libertarian Than Median Voter?
(00:56:19) - Is Ethnic Conflict a Short Run Problem?
(00:59:38) - Bond Holder Democracy
(01:04:57) - Mormonism
(01:08:52) - Garett Jones's Immigration System
(01:10:12) - Interviewing SBF
One of my best episodes ever. Lars Doucet is the author of Land is a Big Deal, a book about Georgism which has been praised by Vitalik Buterin, Scott Alexander, and Noah Smith. Sam Altman is the lead investor in his new startup, ValueBase.
Talking with Lars completely changed how I think about who creates value in the world and who leeches off it.
We go deep into the weeds on Georgism:
* Why do even the wealthiest places in the world have poverty and homelessness, and why do rents increase as fast as wages?
* Why are landowners able to extract the profits that rightly belong to labor and capital?
* How would taxing the value of land alleviate speculation and NIMBYism, and reduce the need for income and sales taxes?
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.
Follow Lars on Twitter. Follow me on Twitter.
Timestamps
(00:00:00) - Intro
(00:01:11) - Georgism
(00:03:16) - Metaverse Housing Crises
(00:07:10) - Tax Leisure?
(00:13:53) - Speculation & Frontiers
(00:24:33) - Social Value of Search
(00:33:13) - Will Georgism Destroy The Economy?
(00:38:51) - The Economics of San Francisco
(00:43:31) - Transfer from Landowners to Google?
(00:46:47) - Asian Tigers and Land Reform
(00:51:19) - Libertarian Georgism
(00:55:42) - Crypto
(00:57:16) - Transitioning to Georgism
(01:02:56) - Lars's Startup & Land Assessment
(01:15:12) - Big Tech
(01:20:50) - Space
(01:23:05) - Copyright
(01:25:02) - Politics of Georgism
(01:33:10) - Someone Is Always Collecting Rents
Holden Karnofsky is the co-CEO of Open Philanthropy and co-founder of GiveWell. He is also the author of one of the most interesting blogs on the internet, Cold Takes.
We discuss:
* Are we living in the most important century?
* Does he regret OpenPhil’s $30 million grant to OpenAI in 2016?
* How does he think about AI, progress, digital people, & ethics?
Highly recommend!
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.
Timestamps
(0:00:00) - Intro
(0:00:58) - The Most Important Century
(0:06:44) - The Weirdness of Our Time
(0:21:20) - The Industrial Revolution
(0:35:40) - AI Success Scenario
(0:52:36) - Competition, Innovation, & AGI Bottlenecks
(1:00:14) - Lock-in & Weak Points
(1:06:04) - Predicting the Future
(1:20:40) - Choosing Which Problem To Solve
(1:26:56) - $30M OpenAI Investment
(1:30:22) - Future Proof Ethics
(1:37:28) - Integrity vs Utilitarianism
(1:40:46) - Bayesian Mindset & Governance
(1:46:56) - Career Advice
This was one of my favorite episodes ever.
Bethany McLean was the first reporter to question Enron’s earnings, and she has written some of the best finance books out there.
We discuss:
* The astounding similarities between Enron & FTX,
* How visionaries are just frauds who succeed (and which category describes Elon Musk),
* What caused 2008, and whether we are headed for a new crisis,
* Why there’s too many venture capitalists and not enough short sellers,
* And why history keeps repeating itself.
McLean is a contributing editor at Vanity Fair (see her articles here) and the author of The Smartest Guys in the Room, All the Devils Are Here, Saudi America, and Shaky Ground.
Watch on YouTube. Listen on Spotify, Apple Podcasts, or your favorite podcast platform.
Follow McLean on Twitter. Follow me on Twitter for updates on future episodes.
Timestamps
(0:04:37) - Is Fraud Over?
(0:11:22) - Shortage of Shortsellers
(0:19:03) - Elon Musk - Fraud or Visionary?
(0:23:00) - Intelligence, Fake Deals, & Culture
(0:33:40) - Rewarding Leaders for Long Term Thinking
(0:37:00) - FTX Mafia?
(0:40:17) - Is Finance Too Big?
(0:44:09) - 2008 Collapse, Fannie & Freddie
(0:49:25) - The Big Picture
(1:00:12) - Frackers Vindicated?
(1:03:40) - Rating Agencies
(1:07:05) - Lawyers Getting Rich Off Fraud
(1:15:09) - Are Some People Fundamentally Deceptive?
(1:19:25) - Advice for Big Picture Thinkers
Nadia Asparouhova is currently researching what the new tech elite will look like at nadia.xyz. She is also the author of Working in Public: The Making and Maintenance of Open Source Software.
We talk about how:
* American philanthropy has changed from Rockefeller to Effective Altruism
* SBF represented the Davos elite rather than the Silicon Valley elite,
* Open source software reveals the limitations of democratic participation,
* & much more.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.
Timestamps
(0:00:00) - Intro
(0:00:26) - SBF was Davos elite
(0:09:38) - Gender sociology of philanthropy
(0:16:30) - Was Shakespeare an open source project?
(0:22:00) - Need for charismatic leaders
(0:33:55) - Political reform
(0:40:30) - Why didn’t previous wealth booms lead to new philanthropic movements?
(0:53:35) - Creating a 10,000 year endowment
(0:57:27) - Why do institutions become left wing?
(1:02:27) - Impact of billionaire intellectual funding
(1:04:12) - Value of intellectuals
(1:08:53) - Climate, AI, & Doomerism
(1:18:04) - Religious philanthropy
Perhaps the most interesting episode so far.
Byrne Hobart writes at thediff.co, analyzing inflections in finance and tech.
He explains:
* What happened at FTX
* How drugs have induced past financial bubbles
* How to be long AI while hedging Taiwan invasion
* Whether Musk’s Twitter takeover will succeed
* Where to find the next Napoleon and LBJ
* & ultimately how society can deal with those who seek domination and recognition
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.
Follow me on Twitter for updates on future episodes.
Timestamps:
(0:00:50) - What the hell happened at FTX?
(0:07:03) - How SBF Faked Being a Genius
(0:12:23) - Drugs Explain Financial Bubbles
(0:17:12) - On Founder Physiognomy
(0:21:02) - Indexing Parental Involvement in Raising Talented Kids
(0:30:35) - Where are all the Caro-level Biographers?
(0:39:03) - Where are today's Great Founders?
(0:48:29) - Micro Writing -> Macro Understanding
(0:51:48) - Elon's Twitter Takeover
(1:00:50) - Does Big Tech & West Have Great People?
(1:11:34) - Philosophical Fanatics and Effective Altruism
(1:17:17) - What Great Founders Have In Common
(1:19:56) - Thinkers vs. Analyzers
(1:25:40) - Taiwan Invasion bets & AI Timelines
Edward Glaeser is the chair of the economics department at Harvard and the author of the best books and papers about cities (including Survival of the City and Triumph of the City).
He explains why:
* Cities are resilient to terrorism, remote work, & pandemics,
* Silicon Valley may collapse but the Sunbelt will prosper,
* Opioids show UBI is not a solution to AI
* & much more!
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.
Follow me on Twitter for updates on future episodes.
Timestamps
(0:00:00) - Mars, Terrorism, & Capitals
(0:06:32) - Decline, Population Collapse, & Young Men
(0:14:44) - Urban Education
(0:18:35) - Georgism, Robert Moses, & Too Much Democracy?
(0:25:29) - Opioids, Automation, & UBI
(0:29:57) - Remote Work, Taxation, & Metaverse
(0:42:29) - Past & Future of Silicon Valley
(0:48:56) - Housing Reform
(0:52:32) - Europe’s Stagnation, Mumbai’s Safety, & Climate Change
I had a fascinating discussion about Robert Moses and The Power Broker with Professor Kenneth T. Jackson.
He's the pre-eminent historian on NYC and author of Robert Moses and The Modern City: The Transformation of New York.
He answers:
* Why are we so much worse at building things today?
* Would NYC be like Detroit without the master builder?
* Does it take a tyrant to stop NIMBY?
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.
Follow me on Twitter for updates on future episodes.
Timestamps
(0:00:00) Preview + Intro
(0:11:13) How Moses Gained Power
(0:18:22) Moses Saved NYC?
(0:27:31) Moses the Startup Founder?
(0:32:34) The Case Against Moses Highways
(0:50:30) NIMBYism
(1:02:44) Is Progress Cyclical
(1:11:13) Friendship with Caro
(1:19:50) Moses the Longtermist?
It was a pleasure to welcome Brian Potter on the podcast! Brian is the author of the excellent Construction Physics blog, where he discusses why the construction industry has been slow to industrialize and innovate.
He explains why:
* Construction isn’t getting cheaper and faster,
* “Ugly” modern buildings are simply the result of better architecture,
* China is so great at building things,
* Saudi Arabia’s Line is a waste of resources,
* Environmental review makes new construction expensive and delayed,
* and much much more!
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.
Follow me on Twitter for updates on future episodes.
You may also enjoy my interviews with Tyler Cowen (about talent, collapse, & pessimism of sex), Charles Mann (about the Americas before Columbus & scientific wizardry), and Austin Vernon (about Energy Superabundance, Starship Missiles, & Finding Alpha).
Timestamps
(0:00) - Why Saudi Arabia’s Line is Insane, Unrealistic, and Never going to Exist
(06:54) - Designer Clothes & eBay Arbitrage Adventures
(10:10) - Unique Woes of The Construction Industry
(19:28) - The Problems of Prefabrication
(26:27) - If Building Regulations didn’t exist…
(32:20) - China’s Real Estate Bubble, Unbound Technocrats, & Japan
(44:45) - Automation and Revolutionary Future Technologies
(1:00:51) - 3D Printer Pessimism & The Rising Cost of Labour
(1:08:02) - AI’s Impact on Construction Productivity
(1:17:53) - Brian Dreams of Building a Mile High Skyscraper
(1:23:43) - Deep Dive into Environmentalism and NEPA
(1:42:04) - Software is Stealing Talent from Physical Engineering
(1:47:13) - Gaps in the Blog Marketplace of Ideas
(1:50:56) - Why is Modern Architecture So Ugly?
(2:19:58) - Advice for Aspiring Architects and Young Construction Physicists
It was a fantastic pleasure to welcome Bryan Caplan back for a third time on the podcast! His most recent book is Don't Be a Feminist: Essays on Genuine Justice.
He explains why he thinks:
- Feminists are mostly wrong,
- We shouldn’t overtax our centi-billionaires,
- Decolonization should have emphasized human rights over democracy,
- Eastern Europe shows that we could accept millions of refugees.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.
Follow me on Twitter for updates on future episodes.
More really cool guests coming up; subscribe to find out about future episodes!
You may also enjoy my interviews with Tyler Cowen (about talent, collapse, & pessimism of sex), Charles Mann (about the Americas before Columbus & scientific wizardry), and Steve Hsu (about intelligence and embryo selection).
Timestamps
(00:12) - Don’t Be a Feminist
(16:53) - Western Feminism Ignores Infanticide
(19:59) - Why The Universe Hates Women
(32:02) - Women's Tears Have Too Much Power
(45:40) - Bryan Performs Standup Comedy!
(51:02) - Affirmative Action is Philanthropic Propaganda
(54:13) - Peer-effects as the Only Real Education
(58:24) - The Idiocy of Student Loan Forgiveness
(1:07:57) - Why Society is Becoming Mentally Ill
(1:10:50) - Open Borders & the Ultra-long Term
(1:14:37) - Why Cowen’s Talent Scouting Strategy is Ludicrous
(1:22:06) - Surprising Immigration Victories
(1:36:06) - The Most Successful Revolutions
(1:54:20) - Anarcho-Capitalism is the Ultimate Government
(1:55:40) - Billionaires Deserve their Wealth
It was my great pleasure to speak once again to Tyler Cowen. His most recent book is Talent: How to Identify Energizers, Creatives, and Winners Around the World.
We discuss:
- how sex is more pessimistic than he is,
- why he expects society to collapse permanently,
- why humility, stimulants, & intelligence are overrated,
- how he identifies talent, deceit, & ambition,
- & much much much more!
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.
Follow me on Twitter for updates on future episodes.
You may also enjoy my interviews with Bryan Caplan (about mental illness, discrimination, and poverty), David Deutsch (about AI and the problems with America’s constitution), and Steve Hsu (about intelligence and embryo selection).
Timestamps
(0:00) - Did Caplan Change On Education?
(1:17) - Travel vs. History
(3:10) - Do Institutions Become Left Wing Over Time?
(6:02) - What Does Talent Correlate With?
(13:00) - Humility, Mental Illness, Caffeine, and Suits
(19:20) - How does Education affect Talent?
(24:34) - Scouting Talent
(33:39) - Money, Deceit, and Emergent Ventures
(37:16) - Building Writing Stamina
(39:41) - When Does Intelligence Start to Matter?
(43:51) - Spotting Talent (Counter)signals
(53:30) - Will Reading Cowen’s Book Help You Win Emergent Ventures?
(1:02:15) - Existential risks and the Longterm
(1:10:41) - Cultivating Young Talent
(1:16:58) - The Lifespans of Public Intellectuals
(1:24:36) - Is Stagnation Inevitable?
(1:30:30) - What are Podcasts for?
Charles C. Mann is the author of three of my favorite history books: 1491, 1493, and The Wizard and the Prophet.
We discuss:
* why Native American civilizations collapsed and why they failed to make more technological progress
* why he disagrees with Will MacAskill about longtermism
* why there aren’t any successful slave revolts
* how geoengineering can help us solve climate change
* why Bitcoin is like the Chinese Silver Trade
* and much much more!
Timestamps
(0:00:00) - Epidemically Alternate Realities
(0:00:25) - Weak Points in Empires
(0:03:28) - Slave Revolts
(0:08:43) - Slavery Ban
(0:12:46) - Contingency & The Pyramids
(0:18:13) - Teotihuacan
(0:20:02) - New Book Thesis
(0:25:20) - Gender Ratios and Silicon Valley
(0:31:15) - Technological Stupidity in the New World
(0:41:24) - Religious Demoralization
(0:43:24) - Critiques of Civilization Collapse Theories
(0:48:29) - Virginia Company + Hubris
(0:52:48) - China’s Silver Trade
(1:02:27) - Wizards vs. Prophets
(1:07:19) - In Defense of Regulatory Delays
(1:11:50) - Geoengineering
(1:16:15) - Finding New Wizards
(1:18:10) - Agroforestry is Underrated
(1:27:00) - Longtermism & Free Markets
Austin Vernon is an engineer working on a new method for carbon capture, and he has one of the most interesting blogs on the internet, where he writes about engineering, software, economics, and investing.
We discuss how energy superabundance will change the world, how Starship can be turned into a kinetic weapon, why nuclear is overrated, blockchains, batteries, flying cars, finding alpha, & much more!
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.
Follow Austin on Twitter. Follow me on Twitter for updates on future episodes.
Timestamps
(0:00:00) - Intro
(0:01:53) - Starship as a Weapon
(0:19:24) - Software Productivity
(0:41:40) - Car Manufacturing
(0:57:39) - Carbon Capture
(1:16:53) - Energy Superabundance
(1:25:09) - Storage for Cheap Energy
(1:31:25) - Travel in Future
(1:33:27) - Future Cities
(1:39:58) - Flying Cars
(1:43:26) - Carbon Shortage
(1:48:03) - Nuclear
(2:12:44) - Solar
(2:14:44) - Alpha & Efficient Markets
(2:22:51) - Conclusion
Steve Hsu is a Professor of Theoretical Physics at Michigan State University and cofounder of the company Genomic Prediction.
We go deep into the weeds on how embryo selection can make babies healthier and smarter.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.
Read the full transcript here.
Follow Steve on Twitter. Follow me on Twitter for updates on future episodes.
Timestamps
(0:00:14) - Feynman’s advice on picking up women
(0:11:46) - Embryo selection
(0:24:19) - Why hasn't natural selection already optimized humans?
(0:34:13) - Aging
(0:43:18) - First Mover Advantage
(0:53:38) - Genomics in dating
(0:59:20) - Ancestral populations
(1:07:07) - Is this eugenics?
(1:15:08) - Tradeoffs to intelligence
(1:24:25) - Consumer preferences
(1:29:34) - Gwern
(1:33:55) - Will parents matter?
(1:44:45) - Wordcels and shape rotators
(1:56:45) - Bezos and brilliant physicists
(2:09:35) - Elite education
Will MacAskill is one of the founders of the Effective Altruist movement and the author of the upcoming book, What We Owe The Future.
We talk about improving the future, risk of extinction & collapse, technological & moral change, problems of academia, who changes history, and much more.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.
Episode website + Transcript here.
Follow Will on Twitter. Follow me on Twitter for updates on future episodes.
Subscribe to find out about future episodes!
Timestamps
(00:23) - Effective Altruism and Western values
(07:47) - The contingency of technology
(12:02) - Who changes history?
(18:00) - Longtermist institutional reform
(25:56) - Are companies longtermist?
(28:57) - Living in an era of plasticity
(34:52) - How good can the future be?
(39:18) - Contra Tyler Cowen on what’s most important
(45:36) - AI and the centralization of power
(51:34) - The problems with academia
Please share if you enjoyed this episode! Helps out a ton!
Transcript
Dwarkesh Patel 0:06
Okay, today I have the pleasure of interviewing William MacAskill. Will is one of the founders of the Effective Altruism movement, and most recently, the author of the upcoming book, What We Owe The Future. Will, thanks for coming on the podcast.
Will MacAskill 0:20
Thanks so much for having me on.
Effective Altruism and Western values
Dwarkesh Patel 0:23
My first question is: What is the high-level explanation for the success of the Effective Altruism movement? Is it itself an example of the contingencies you talk about in the book?
Will MacAskill 0:32
Yeah, I think it is contingent. Maybe not on the order of, “this would never have happened,” but at least on the order of decades. Evidence that Effective Altruism is somewhat contingent is that similar ideas have been promoted many times throughout history and never caught on.
We can go back to ancient China, where the Mohists defended an impartial view of morality and took very strategic actions to help all people, in particular providing defensive assistance to cities under siege. Then, there were the early utilitarians. Effective Altruism is broader than utilitarianism, but has some similarities. Even Peter Singer in the 70s had been promoting the idea that we should be giving most of our income to help the very poor — and didn’t get a lot of traction until the early 2010s, after GiveWell and Giving What We Can launched.
What explains the rise of it? I think it was a good idea waiting to happen. At some point, the internet helped to gather together a lot of like-minded people which wasn’t possible otherwise. There were some particularly lucky events like Alex meeting Holden and me meeting Toby that helped catalyze it at the particular time it did.
Dwarkesh Patel 1:49
If it's true, as you say, in the book, that moral values are very contingent, then shouldn't that make us suspect that modern Western values aren't that good? They're mediocre, or worse, because ex ante, you would expect to end up with a median of all the values we could have had at this point. Obviously, we'd be biased in favor of whatever values we were brought up in.
Will MacAskill 2:09
Absolutely. Take history seriously and appreciate the contingency of values: if the Nazis had won World War II, we would all be thinking, “wow, I'm so glad that moral progress happened the way it did, and we don't have Jewish people around anymore. What huge moral progress we had then!” That's a terrifying thought. I think it should make us take seriously the fact that we're very far away from the moral truth.
One of the lessons I draw in the book is that we should not think we're at the end of moral progress. We should not think, “Oh, we should lock in the Western values we have.” Instead, we should spend a lot of time trying to figure out what's actually morally right, so that the future is guided by the right values, rather than whichever happened to win out.
Dwarkesh Patel 2:56
So that makes a lot of sense. But I'm asking a slightly separate question. It's not only that there are possible values better than ours. We have the sense that we've made moral progress, that things are better than they were before, or better than in most possible other worlds in 2100 or 2200. Should we not expect that to be the case? Should our priors be that these are ‘meh’ values?
Will MacAskill 3:19
Our priors should be that our values are about as good as could be expected on average. Then you can ask, “Are the values of today going particularly well?” There are some arguments for saying no. Perhaps if the Industrial Revolution had happened in India rather than in Western Europe, we wouldn't have wide-scale factory farming—which I think is a moral atrocity. Having said that, my view is that we're doing better than average.
If civilization were just a redraw, then things would look worse in terms of our moral beliefs and attitudes. The abolition of slavery, the feminist movement, liberalism itself, democracy—these are all things that we could have lost and are huge gains.
Dwarkesh Patel 4:14
If that's true, does that make the prospect of a long reflection dangerous? If moral progress is a random walk and we've drawn a lucky lottery ticket, then you're possibly reversing that. Maybe you're risking regression to the mean if you have another 1,000 years of reflection.
Will MacAskill 4:30
Moral progress isn't a random walk in general. There are many forces that act on culture and on what people believe. One of them is, “What’s right, morally speaking? What do the best arguments support?” I think it's a weak force, unfortunately.
The idea of the long reflection is getting society into a state where, before we take any drastic actions that might lock in a particular set of values, we allow this force of reason, empathy, debate, and good-hearted moral inquiry to guide which values we end up with.
Are we unwise?
Dwarkesh Patel 5:05
In the book, you make this interesting analogy where humans at this point in history are like teenagers. But another common impression that people have of teenagers is that they disregard wisdom and tradition and the opinions of adults too early and too often. And so, do you think it makes sense to extend the analogy this way, and suggest that we should be Burkean Longtermists and reject these inside-view esoteric threats?
Will MacAskill 5:32
My view goes the opposite way from the Burkean view. We are cultural creatures by nature, and we're very inclined to agree with what other people think even if we don't understand the underlying mechanisms. That works well in a low-change environment. The environment we evolved in didn't change very much. We were hunter-gatherers for hundreds of thousands of years.
Now, we're in this period of enormous change, where the economy is doubling every 20 years and new technologies arrive every single year. That's unprecedented. It means that we should be trying to figure things out from first principles.
Dwarkesh Patel 6:34
But at current margins, do you think that's still the case? If a lot of EA and longtermist thought is first principles, do you think that more history would be better than the marginal first-principles thinker?
Will MacAskill 6:47
Two things. If it's about an understanding of history, then I'd love EA to have a better historical understanding. If you want to do good in the world, the most important subjects are philosophy and economics. But we've got those in abundance, compared to there being very little historical knowledge in the EA community.
Should there be even more first-principles thinking? First-principles thinking paid off pretty well in the course of the Coronavirus pandemic. From January 2020, my Facebook wall was completely saturated with people freaking out, or taking it very seriously in a way that the existing institutions weren't. The existing institutions weren't properly updating to a new environment and new evidence.
The contingency of technology
Dwarkesh Patel 7:47
In your book, you point out several examples of societies that went through hardship. Hiroshima after the bombings, Europe after the Black Death—they seem to have rebounded relatively quickly. Does this make you think that the role of contingency in history, especially economic history, is not that large? Does it imply a Solow model of growth, where even if bad things happen, you can rebound and it doesn't really matter?
Will MacAskill 8:17
In economic terms, that's the big difference between economic or technological progress and moral progress. In the long run, economic or technological progress is very non-contingent. The Egyptians had an early version of the steam engine; semaphore was only developed very late, yet it could have been invented thousands of years earlier.
But in the long run, the instrumental benefits of tech progress, and the incentives towards tech progress and economic growth are so strong, that we get there in a wide array of circumstances. Imagine there're thousands of different societies, and none are growing except for one. In the long run, that one becomes the whole economy.
Dwarkesh Patel 9:10
It seems that particular example you gave of the Egyptians having some ancient form of a steam engine points towards there being more contingency? Perhaps because the steam engine comes up in many societies, but it only gets turned into an industrial revolution in one?
Will MacAskill 9:22
In that particular case, there's a big debate about whether the quality of metalwork at the time made it possible to build a proper steam engine. I mentioned those to share some amazing examples of contingency prior to the Industrial Revolution.
Even so, it's still contingency on the order of centuries to thousands of years. In the post-Industrial Revolution world, there's much less contingency. It's much harder to see technologies that wouldn't have happened within decades if they hadn't been developed when they were.
Dwarkesh Patel 9:57
The model here is, “These general-purpose changes in the state of technology are contingent, and it'd be very important to try to engineer one of those. But other than that, it's going to get done by some guy creating a start-up anyways?”
Will MacAskill 10:11
Even in the case of the steam engine that seemed contingent, it gets developed in the long run. If the Industrial Revolution hadn't happened in Britain in the 18th century, would it have happened at some point? Would similar technologies that were vital to the Industrial Revolution have been developed? Yes, there are very strong incentives for doing so.
If some culture other than 18th-century England had gotten into making textiles in an automated way, then that economy would have taken over the world. There's a structural reason why economic growth is much less contingent than moral progress.
Dwarkesh Patel 11:06
When people think of somebody like Norman Borlaug and the Green Revolution, it's like, “If you could do something like that, you'd be the greatest person of the 20th century.” Obviously, he's still a very good man, but would that not be our view? Do you think the Green Revolution would have happened anyway?
Will MacAskill 11:22
Yes. Norman Borlaug is sometimes credited with saving a billion lives. He was huge. He was a good force for the world. Had Norman Borlaug not existed, I don’t think a billion people would have died. Rather, similar developments would have happened shortly afterwards.
Perhaps he saved tens of millions of lives—and that's a lot of lives for a person to save. But, it's not as many as simply saying, “Oh, this tech was used by a billion people who would have otherwise been at risk of starvation.” In fact, not long afterwards, there were similar kinds of agricultural development.
Who changes history?
Dwarkesh Patel 12:02
What kind of profession or career choice tends to lead to the highest counterfactual impact? Is it moral philosophers?
Will MacAskill 12:12
Not quite moral philosophers, although there are some examples. Sticking with science and technology: if you look at Einstein, the theory of special relativity would have been developed shortly afterwards anyway. However, the theory of general relativity was plausibly decades in advance. Sometimes, you get surprising leaps. But we're still only talking about decades rather than millennia. Moral philosophers could make a long-term difference. Marx and Engels made an enormous, long-run difference. Religious leaders like Mohammed, Jesus, and Confucius made enormous and contingent long-run differences. Moral activists as well.
Dwarkesh Patel 13:04
If you think that the changeover in the landscape of ideas is very quick today, would you still think that somebody like Marx will be considered very influential in the long future? Communism lasted less than a century, right?
Will MacAskill 13:20
As things turned out, Marx will not be influential over the long-term future. But that could have gone another way. It's not such a wildly different history in which, rather than liberalism emerging dominant in the 20th century, communism did. The better technology gets, the better able a ruling regime is to cement its ideology and persist for a long time. You can get a set of knock-on effects where communism wins the war of ideas in the 20th century.
Let’s say a world government is based around those ideas. Then, via anti-aging technology, genetic-enhancement technology, cloning, or artificial intelligence, it's able to build a society that persists forever in accordance with that ideology.
Dwarkesh Patel 14:20
The death of dictators is especially interesting when you're thinking about contingency because there are huge changes in the regime. It makes me think the actual individual there was very important and who they happened to be was contingent and persistent in some interesting ways.
Will MacAskill 14:37
If you've got a dictatorship, then you've got a single person ruling the society. That means it's heavily contingent on the views, values, beliefs, and personality of that person.
Scientific talent
Dwarkesh Patel 14:48
Going back to stagnation: in the book, you're very concerned about fertility. It seems your model of how scientific and technological progress happens is the number of people times average researcher productivity. If researcher productivity is declining and the number of people isn't growing that fast, then that's concerning.
Will MacAskill 15:07
Yes, number of people times fraction of the population devoted to R&D.
Dwarkesh Patel 15:11
Thanks for the clarification. It seems that there have been a lot of intense concentrations of talent and progress in history: Venice, Athens, or even something like FTX, right? There are 20 developers making it a multibillion-dollar company—do these examples suggest that the organization and congregation of researchers matter more than the total number?
Will MacAskill 15:36
The model works reasonably well. Throughout history, you start from a very low technological baseline compared to today. Most people aren't even trying to innovate. One argument for why Baghdad lost its Scientific Golden Age is because the political landscape changed such that what was incentivized was theological investigation rather than scientific investigation in the 10th/11th century AD.
Similarly, one argument for why Britain had a scientific and industrial revolution rather than Germany is that all of the intellectual talent in Germany was focused on making amazing music. That doesn't compound in the way that making textiles does. If you look at Sparta versus Athens, what was the difference? They had different cultures, and intellectual inquiry was more rewarded in Athens.
Because they're starting from a lower base, people trying to do something that looks like what we now think of as intellectual inquiry have an enormous impact.
Dwarkesh Patel 16:58
If you take an example like Bell Labs, the low-hanging fruit is gone by the late 20th century. You have this one small organization that has six Nobel Prizes. Is this a coincidence?
Will MacAskill 17:14
I wouldn't say that at all. The model we’re working with is the size of the population times the fraction of the population doing R&D. It's the simplest model you can have. Bell Labs is punching above its weight. You can create amazing things by taking the most productive people and putting them in an environment where they're ten times more productive than they would otherwise be.
However, when you're looking at the grand sweep of history, those effects are comparatively small compared to the broader culture of a society or the sheer size of a population.
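To put rough numbers on this, here is a minimal sketch of the simple model being discussed; the function and every figure below are invented purely for illustration, not taken from the conversation:

```python
# Minimal sketch of the model discussed above:
# research effort = population x fraction of the population doing R&D,
# with an optional environment multiplier for outliers like Bell Labs.
# All numbers are invented for illustration.

def research_effort(population: float, rnd_fraction: float, multiplier: float = 1.0) -> float:
    """People times the fraction doing R&D, times an environment multiplier."""
    return population * rnd_fraction * multiplier

# A Bell Labs-style outlier: a few thousand researchers, ten times more productive.
outlier = research_effort(3_000, 1.0, multiplier=10)   # 30,000

# The surrounding society: a huge population, a small fraction doing R&D.
society = research_effort(200_000_000, 0.001)          # 200,000

print(outlier, society)  # the outlier punches above its weight,
                         # but the population-scale term still dominates
```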
Longtermist institutional reform
Dwarkesh Patel 18:00
I want to talk about your paper on longtermist institutional reform. One of the things you advocate in this paper is that we should have one of the houses be dedicated towards longtermist priorities. Can you name some specific performance metrics you would use to judge or incentivize the group of people who make up this body?
Will MacAskill 18:23
The thing I'll caveat about longtermist institutions is that I’m pessimistic about them. If you're trying to represent or even give consideration to future people, you have to face the fact that they're not around and they can't lobby for themselves. However, you could have an assembly of people who have some legal regulatory power. How would you constitute that? My best guess is that you have a random selection from the population. How would you ensure that incentives are aligned?
In 30 years’ time, their performance will be assessed by a panel of people who look back and assess the policies’ effectiveness. Perhaps the people who are part of this assembly have their pensions paid on the basis of that assessment. Secondly, the people in 30 years’ time have both their policies and their assessment of the previous assembly assessed by another assembly 30 years after that, and so on. Can you get that to work? Maybe in theory—I’m skeptical in practice, but I would love some country to try it and see what happens.
There is some evidence that you can get people to take the interests of future generations more seriously by just telling them their role. There was one study that got people to put on ceremonial robes, and act as trustees of the future. And they did make different policy recommendations than when they were just acting on the basis of their own beliefs and self-interest.
Dwarkesh Patel 20:30
If you are on that board that is judging these people, are there metrics like GDP growth that would be good heuristics for assessing past policy decisions?
Will MacAskill 20:48
There are some things you could use: GDP growth, homelessness, technological progress. I would absolutely want there to be an expert assessment of the risk of catastrophe. We don't have this yet, but imagine a panel of superforecasters predicting the chance of a war between great powers occurring in the next ten years, aggregated into a war index.
That would be a lot more important than the stock market index. The risk of catastrophe would be helpful to feed in because you wouldn't want something that only incentivizes economic growth at the expense of tail risks.
Dwarkesh Patel 21:42
Would that be your objection to a scheme like Robin Hanson’s about maximizing the expected future GDP using prediction markets and making decisions that way?
Will MacAskill 21:50
Maximizing future GDP is an idea I associate with Tyler Cowen. With Robin Hanson’s idea of voting on values but betting on beliefs, people vote on what collection of goods they want (GDP and unemployment might be good metrics), and beyond that, it's pure prediction markets. It's something I'd love to see tried. It’s speculative political philosophy about how a society could be extraordinarily different in structure, and that kind of thinking is incredibly neglected.
Do I think it'll work in practice? Probably not. Most of these ideas wouldn't work. Prediction markets can be gamed or are simply not liquid enough. There hasn’t been a lot of success in prediction markets compared to forecasting. Perhaps you can solve these things: you could have laws about what things can be voted on or predicted in the market, and government subsidies to ensure there's enough liquidity. Overall, it's plausibly promising, and I'd love to see it tried out at a city level or something.
Dwarkesh Patel 23:13
Let’s take a scenario where the government starts taking the impact on the long term seriously and institutes some reforms to integrate that perspective. As an example, you can look at the environmental movement. There are environmental review boards that try to assess the environmental impact of new projects and reject proposals based on certain metrics.
The impact here, at least in some cases, has been that groups with no strong, plausible interest in the environment are able to game these mechanisms in order to prevent projects that would actually help the environment. With longtermism, it takes a long time to assess the actual impact of something, yet policymakers would be tasked with evaluating long-term impacts. Are you worried that such a system would be easy for malicious actors to game? And what do you think went wrong with the way that environmentalism was codified into law?
Will MacAskill 24:09
It's potentially a devastating worry. You create something to represent future people, but they're not allowed to lobby for themselves, so it can just be co-opted. My understanding of environmental impact statements has been similar. Similarly, it's not like the environment can represent itself—it can't say what its interests are. What is the right answer there? Maybe there are speculative proposals about having a representative body that assesses these things and gets judged by people in 30 years’ time. That's the best we've got at the moment, but we need a lot more thought to see if any of these proposals would be robust for the long term rather than narrowly focused.
Requiring liability insurance for dangerous bio labs is not about trying to represent the interests of future generations, but it's very good for the long term. At the moment, if longtermists are trying to change the government, let's focus on a narrow set of institutional changes that are very good for the long term even if they're not in the game of representing the future. That's not to say I'm opposed to all such things, but there are major problems with implementation for any of them.
Dwarkesh Patel 25:35
If we don't know how we would do it correctly, did you have an idea of how environmentalism could have been codified better? Why was that not a success in some cases?
Will MacAskill 25:46
Honestly, I don't have a good understanding of that. I don't know if it's intrinsic to the matter or if you could’ve had some system that wouldn't have been co-opted in the long-term.
Are companies longtermist?
Dwarkesh Patel 25:56
Theoretically, the incentives of our most long-term U.S. institutions are to maximize future cash flow. Explicitly and theoretically, they should have an incentive to do the most good they can for their own company—which implies that the company can’t be around if there’s an existential risk…
Will MacAskill 26:18
I don't think so. Different institutions have different rates of decay associated with them. A corporation in the top 200 biggest companies has a half-life of only ten years. It’s surprisingly short-lived. Whereas, if you look at universities, Oxford and Cambridge are 800 years old. The University of Bologna is even older. These are very long-lived institutions.
For example, Corpus Christi College at Oxford was making a decision about having a new tradition that would occur only every 400 years. It makes that kind of decision because it is such a long-lived institution. Similarly, religions can be even longer-lived again. That type of natural half-life really affects the decisions a company would make versus a university versus a religious institution.
Dwarkesh Patel 27:16
Does that suggest that there's something fragile and dangerous about trying to make your institution last for a long time—if companies try to do that and are not able to?
Will MacAskill 27:24
Companies are composed of people. Is it in the interest of a company to last for a long time? Is it in the interests of the people who constitute the company (like the CEO, the board, and the shareholders) for that company to last a long time? No, they don't particularly care. Some of them do, but most don't. Whereas other institutions go both ways. This is the issue of lock-in that I talk about at length in What We Owe The Future: you get moments of plasticity during the formation of a new institution.
Whether that’s the Christian church or the Constitution of the United States, you lock in a certain set of norms. That can be really good. Looking back, the U.S. Constitution seems miraculous as the first democratic constitution. As I understand it, it was created over a period of four months and seems to have stood the test of time. Alternatively, locked-in norms could be extremely dangerous. There were horrible things proposed for the U.S. Constitution, like the constitutional amendment that would have enshrined a legal right to slavery. If that had locked in, it would have been horrible. It's hard to answer in the abstract because it depends on the thing that's persisting for a long time.
Living in an era of plasticity
Dwarkesh Patel 28:57
You say in the book that you expect our current era to be a moment of plasticity. Why do you think that is?
Will MacAskill 29:04
We're at a specific type of ‘moment of plasticity’ for two reasons. One is that the world is completely unified in a way that's historically unusual. You can communicate with anyone instantaneously, and there's a great diversity of moral views. We can have arguments; people coming on your podcast can debate what's morally correct. It's plausible to me that any one of many different sets of moral views could ultimately become the most popular.
Secondly, we're at this period where things can really change. But it's a moment of plasticity because it could plausibly come to an end, and the moral change that we're used to could end in the coming decades. If there were a single global culture or world government that preferred ideological conformity, combined with technology, it becomes unclear why that would ever end over the long term. The key technology here is artificial intelligence. At the point in time (which may be sooner than we think) where the rulers of the world are digital rather than biological, that [ideological conformity] could persist.
Once you've got that and a global hegemony of a single ideology, there's not much reason for that set of values to change over time. You've got immortal leaders and no competition. What are the other kind of sources of value-change over time? I think they can be accounted for too.
Dwarkesh Patel 30:46
Isn't the fact that we are in a time of interconnectedness that won't last if we settle space a reason for thinking that lock-in is not especially likely? If your overlords are millions of light years away, how well can they control you?
Will MacAskill 31:01
The question is whether the control will happen before the point of space settlement. If we take to space one day, and there are many different settlements in different solar systems pursuing different visions of the good, then you're going to maintain diversity for a very long time (given the physics of the matter).
Once a solar system has been settled, it's very hard for other civilizations to come along and conquer you—at least if we're at a period of technological maturity where there aren't groundbreaking technologies left to be discovered. But I'm worried that the control will happen earlier. I'm worried the control might happen this century, within our lifetimes. I don't think it’s very likely, but it's seriously on the table - 10% or something.
Dwarkesh Patel 31:53
Hm, right. Going back to the long term of the longtermism movement: there are many instructive examples of foundations that were set up about a century ago, like the Rockefeller Foundation and the Carnegie Foundation. But they don't seem to be especially creative or impactful today. What do you think went wrong? Why was there, if not value drift, some decay of competence and leadership and insight?
Will MacAskill 32:18
I don't have strong views about those particular examples, but I have two natural thoughts. For organizations that want to persist and keep having an influence for a long time, they’ve historically specified their goals in far too narrow terms. One fun example is Benjamin Franklin. He invested a thousand pounds for each of the cities of Philadelphia and Boston, to pay out fractions of the invested amount after 100 years and then 200 years. But he specified it to help blacksmith apprentices. You might think this doesn't make much sense when you’re in the year 2000. He could have invested more generally: for the prosperity of people in Philadelphia and Boston. It would plausibly have had more impact.
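As a rough illustration of the compounding behind a Franklin-style scheme, here is the arithmetic in a short sketch; the 5% annual return is an assumed figure for the example, not Franklin's actual terms:

```python
# Compound growth of a Franklin-style bequest.
# The 5% annual return is an assumed figure, not Franklin's actual terms.
principal = 1_000  # pounds

for years in (100, 200):
    value = principal * 1.05 ** years
    print(f"after {years} years: ~{value:,.0f} pounds")
    # roughly 131,500 pounds after 100 years,
    # roughly 17.3 million pounds after 200 years
```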
The second is a ‘regression to the mean’ argument. You have some new foundation and it's doing an extraordinary amount of good as the Rockefeller Foundation did. Over time, if it's exceptional in some dimension, it's probably going to get closer to average on that dimension. This is because you’re changing the people involved. If you've picked exceptionally competent and farsighted people, the next generation are statistically going to be less so.
Dwarkesh Patel 33:40
Going back to that dead hand problem: if you specify your mission too narrowly and it doesn't make sense in the future, is there a trade-off? If you're too broad, do you make space for future actors—malicious or uncreative—to take the movement in directions you would not approve of? With regards to doing good for Philadelphia, what if it turns into something that Ben Franklin would not have thought was good for Philadelphia?
Will MacAskill 34:11
It depends on what your values and views are. If Benjamin Franklin only cared about blacksmith's apprentices, then he was correct to specify it. But my own values tend to be quite a bit broader than that. Secondly, I expect people in the future to be smarter and more capable. It’s certainly the trend over time. In which case, if we’re sharing similar broad goals and they're implementing them in a different way, then I should defer to them.
How good can the future be?
Dwarkesh Patel 34:52
Let's talk about how good we should expect the future to be. Have you come across Robin Hanson’s argument that we’ll end up being subsistence-level ems because there'll be a lot of competition and minimizing compute per digital person will create a barely-worth-living experience for every entity?
Will MacAskill 35:11
Yeah, I'm familiar with the argument. But we should distinguish the idea that ems are at subsistence level from the idea that they would have bad lives. Subsistence means that you get a balance of income per capita and population growth such that being any poorer would cause deaths to outweigh additional births.
That doesn't tell you about their well-being. You could be very poor as an emulated being but be in bliss all the time. That's perfectly consistent with the Malthusian theory. It might seem far away from the best possible future, but it could still be very good. At subsistence, those ems could still have lives that are thousands of times better than ours.
Dwarkesh Patel 36:02
Speaking of being poor and happy, there was a very interesting section in the chapter where you mentioned the study you had commissioned: you were trying to find out if people in the developing world find life worth living. It turns out that 19% of Indians would not want to relive their life at every moment, but 31% of Americans said that they would not want to relive their life at every moment. So why are Indians seemingly much happier at less than a tenth of the GDP per capita?
Will MacAskill 36:29
I think the numbers are lower than that, from memory at least. From memory, it’s something more like 9% of Indians wouldn't want to live their lives again if they had the option, and 13% of Americans said they wouldn’t. You are right on the happiness metric, though. The Indians we surveyed were more optimistic about their lives, happier with their lives, than people in the US were. Honestly, I don't want to generalize too far from that because we were comparing comparatively poor Americans with comparatively well-off Indians. Perhaps it's just a sample effect.
There are also weird interactions with Hinduism and the belief in reincarnation that could mess up the generalizability of this. On one hand, I don't want to draw any strong conclusions from that. But it is pretty striking as a piece of information, given that, on average, people in richer countries report considerably higher well-being than people in poorer countries.
Dwarkesh Patel 37:41
I guess you do generalize in the sense that you use it as evidence that most lives today are worth living, right?
Will MacAskill 37:50
Exactly. So, I put together various bits of evidence, where approximately 10% of people in the United States and 10% of people in India seem to think that their lives are net negative. They think they contain more suffering than happiness and wouldn't want to be reborn and live the same life if they could.
There's another survey study that looks at people in the United States and other wealthy countries, and asks them how much of their conscious life they'd want to skip if they could. Skipping here means that, in the blink of an eye, you would reach the end of whatever activity you're engaged in. For example, perhaps I hate this podcast so much that I would rather be unconscious than be talking to you. In which case, I'd have the option of skipping, and it would be over after 30 minutes.
If you look at that, and then also ask people about the trade-offs they would be willing to make as a measure of how intensely they're enjoying a certain experience, you reach the conclusion that a little over 10% of people regarded their life on the day they were surveyed as worse than if they'd been unconscious the entire day.
Contra Tyler Cowen on what’s most important
Dwarkesh Patel 39:18
Jumping topics here a little bit, on the 80,000 Hours Podcast, you said that you expect scientists who are explicitly trying to maximize their impact might have an adverse impact because they might be ignoring the foundational research that wouldn't be obvious in this way of thinking, but might be more important.
Do you think this could be a general problem with longtermism? If you're trying to find the things that are most important for the long term, might you be missing things that aren't obvious from this way of thinking?
Will MacAskill 39:48
Yeah, I think that's a risk. Among the ways that people could argue against my general set of views: I argue that we should be doing fairly specific and targeted things like trying to make AI safe, govern the rise of AI well, reduce worst-case pandemics that could kill us all, prevent a Third World War, ensure that good values are promoted, and avoid value lock-in. But some people argue (and people like Tyler Cowen and Patrick Collison do) that it's very hard to predict the future impact of your actions.
It's a mug's game to even try. Instead, you should look at the things that have done loads of good consistently in the past, and try to do the same things. In particular, they might argue that means technological progress or boosting economic growth. I dispute that. It's not something I can give a completely knock-down argument against, because we won't find out who's right for a long time, maybe a thousand years. But one piece of evidence is the success of forecasters in general. To be fair, this was also true of Tyler Cowen, but people in Effective Altruism were realizing early on that the coronavirus pandemic was going to be a big deal. They were worrying about pandemics far in advance. There are some things that are actually quite predictable.
For example, Moore's Law has held up for over half a century. The idea that AI systems are going to get much larger and that leading models are going to get more powerful is on trend. Similarly, the idea that we will soon be able to develop viruses of unprecedented destructive power doesn’t feel too controversial. Even though it’s hard to predict loads of things and there are going to be tons of surprises, there are some things, especially when it comes to long-standing technological trends, where we can make reasonable predictions — at least about the range of possibilities that are on the table.
Dwarkesh Patel 42:19
It sounds like you're saying that the things we know are important now. But if something from a thousand years ago didn't turn out, looking back, to be very important, it wouldn't be salient to us now?
Will MacAskill 42:31
What I was saying is: between me versus Patrick Collison and Tyler Cowen, who is correct? We will only get that information in a thousand years' time because we're talking about impactful strategies for the long term. We might get suggestive evidence earlier. If I and others engaged in longtermism are making specific, measurable forecasts about what is going to happen with AI or advances in biotechnology, and are then able to take action such that we are clearly reducing certain risks, that's pretty good evidence in favor of our strategy.
Whereas if they're doing all sorts of stuff without making firm predictions about what's going to happen, but things pop out of that which are good for the long term (say we measure this in ten years' time), that would be good evidence for their view.
Dwarkesh Patel 43:38
What you were saying earlier about the contingency of technology implies that, even on their worldview of doing whatever has had the most impact in the past, if what's actually had the most impact in the past is changing values, then that might be the most important thing, rather than economic growth or trying to change the rate of economic growth?
Will MacAskill 43:57
I really do take seriously the argument from how people have acted in the past, especially people trying to make a long-lasting impact: which of the things they did made sense and which didn't. Towards the end of the 19th century, John Stuart Mill and the other early utilitarians had this longtermist wave where they started taking the interests of future generations very seriously. Their main concern was Britain running out of coal, and therefore future generations being impoverished. It's pretty striking because they had a very bad understanding of how the economy works. They hadn't predicted that we would be able to transition away from coal with continued innovation.
Secondly, they had enormously wrong views about how much coal and fossil fuel there was in the world. So, that particular action didn't make any sense given what we know now. In fact, that particular action of trying to keep coal in the ground (given that Britain at the time was using much smaller amounts of coal, so small that the climate change effect is negligible at that level) probably would have been harmful.
But we could look at other things that John Stuart Mill could have done, such as promoting better values. He campaigned for women's suffrage. He was the first British MP, and in fact possibly the first politician in the world, to promote women's suffrage. That seems to be pretty good. That seems to have stood the test of time. That's one historical data point, but potentially we can learn a more general lesson there.
AI and the centralization of power
Dwarkesh Patel 45:36
Do you think the growing ability of global policymakers to come to a consensus is, on net, a good or a bad thing? On the positive side, maybe it helps stop some dangerous tech from taking off; but on the negative side, it might prevent things like human challenge trials, or cause some lock-in in the future. On net, what do you think about that trend?
Will MacAskill 45:54
On the question of global integration, you're absolutely right, it's double-sided. On one hand, it can help us reduce global catastrophic risks. The fact that the world was able to come together and ban chlorofluorocarbons was one of the great events of the last 50 years, allowing the hole in the ozone layer to repair itself. But on the other hand, if it means we all converge to one monoculture and lose out on diversity, that's potentially bad. We could lose out on most possible value that way.
The solution is keeping the good bits without the bad bits. For example, under a liberal constitution, a country can be bound in certain ways by its constitution and by certain laws, yet still enable a flourishing diversity of moral thought and ways of life. Similarly, in the world, you can have very strong regulation and treaties that only deal with certain global public goods, like mitigation of climate change and prevention of the development of the next generation of weapons of mass destruction, without having some strong-arm global government that implements a particular vision of the world. Which way are we going at the moment? It seems to me we've been going in a pretty good and not too worrying direction. But that could change.
Dwarkesh Patel 47:34
Yeah, it seems the historical trend is that when you have a federated political body, the central powers tend to gain more power over time, even if they're constitutionally constrained. You can look at the U.S., or you can look at the European Union. That seems to be the trend.
Will MacAskill 47:52
Depending on the culture that's embodied there, it's potentially a worry. It might not be if the culture itself is liberal and promoting of moral diversity and moral change and moral progress. But, that needn't be the case.
Dwarkesh Patel 48:06
Your theory of moral change implies that after a small group starts advocating for a specific idea, it may take a century or more before that idea gains common purchase. To the extent that you think this is a very important century (I know you have disagreements about that with others), does that mean that there isn't enough time for longtermism to gain by changing moral values?
Will MacAskill 48:32
There are lots of people I know and respect fairly well who think that Artificial General Intelligence will likely lead to singularity-level, extremely rapid technological progress within the next 10-20 years. If so, you’re right. Value changes are something that pay off slowly over time.
I talk about moral change taking centuries historically, but it can be much faster today. The growth of the Effective Altruism movement is something I know well. If that's growing at something like 30% per year, compound returns mean it doesn't take long to get very large. That's not change that happens on the order of centuries.
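For a sense of scale, here is the compounding arithmetic in a quick sketch, using the 30% annual figure mentioned above:

```python
# 30% annual growth compounds to roughly 13.8x in a decade.
size = 1.0
for _ in range(10):
    size *= 1.30
print(f"~{size:.1f}x after 10 years")  # prints: ~13.8x after 10 years
```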
If you look at other moral movements, like the gay rights movement, you see very fast moral change by historical standards. If you're thinking that we've got ten years till the end of history, then don't broadly try to promote better values. But we should put a very significant probability mass on the idea that we will not hit some historical end this century. In those worlds, promoting better values could pay off very well.
Dwarkesh Patel 49:59
Have you heard of Slime Mold Time Mold Potato Diet?
Will MacAskill 50:03
I have indeed heard of Slime Mold Time Mold Potato Diet, and I was tempted as a gimmick to try it. As I'm sure you know, potato is close to a superfood, and you could survive indefinitely on butter mashed potatoes if you occasionally supplement with something like lentils and oats.
Dwarkesh Patel 50:25
Hm, interesting. Question about your career: why are you still a professor? Does it still allow you to do the things that you would otherwise have been doing, like converting more SBFs and making moral philosophy arguments for EA? Curious about that.
Will MacAskill 50:41
It's fairly open to me what I should do, but I do spend significant amounts of time co-founding organizations or being on the board of those organizations I've helped to set up. More recently, working closely with the Future Fund, SBF’s new foundation, and helping them do as much good as possible. That being said, if there's a single best guess for what I want to do longer term, and certainly something that plays to my strengths better, it's developing ideas, trying to get the big picture roughly right, and then communicating them in a way that's understandable and gets more people to get off their seats and start to do a lot of good for the long-term. I’ve had a lot of impact that way. From that perspective, having an Oxford professorship is pretty helpful.
The problems with academia
Dwarkesh Patel 51:34
You mentioned in the book and elsewhere that there's a scarcity of people thinking about big-picture questions—How contingent is history? Are people generally happy?—Are these questions too hard for other people? Or do they not care enough? What's going on? Why are there so few people talking about this?
Will MacAskill 51:54
I just think there are many issues that are enormously important but are just not incentivized anywhere in the world. Companies don't incentivize work on them because they’re too big picture. Some of these questions are, “Is the future good, rather than bad? If there was a global civilizational collapse, would we recover? How likely is a long stagnation?” There’s almost no work done on any of these topics. They're too grand in scale for companies to be interested.
Academia has developed a culture where you don't tackle such problems. Partly, that's because they fall through the cracks of different disciplines. Partly, it's because they seem too grand or too speculative. Academia is much more in the mode of making incremental gains in our understanding. It wasn't always that way.
If you look back before the institutionalization of academic research, you weren't a real philosopher unless you had some grand unifying theory of ethics, political philosophy, metaphysics, logic, and epistemology. Probably the natural sciences too and economics. I'm not saying that all of academic inquiry should be like that. But should there be some people whose role is to really think about the big picture? Yes.
Dwarkesh Patel 53:20
Will I be able to send my kids to MacAskill University? What's the status on that project?
Will MacAskill 53:25
I'm pretty interested in the idea of creating a new university. There is a project that I've been in discussion about with another person who's fairly excited about making it happen. Will it go ahead? Time will tell. I think you can do both research and education far better than currently exists. It's extremely hard to break in or create something that's very prestigious, because the leading universities are hundreds of years old. But maybe it's possible. I think it could generate enormous amounts of value if we were able to pull it off.
Dwarkesh Patel 54:10
Excellent, alright. So the book is What We Owe The Future. I understand pre-orders help a lot, right? It was such an interesting read. How often does somebody write a book about the questions they consider to be the most important even if they're not the most important questions? Big picture thinking, but also looking at very specific questions and issues that come up. Super interesting read.
Will MacAskill 54:34
Great. Well, thank you so much!
Dwarkesh Patel 54:38
Anywhere else they can find you? Or any other information they might need to know?
Will MacAskill 54:39
Yeah, sure. What We Owe The Future is out on August 16 in the US and September 1 in the United Kingdom. If you want to follow me on Twitter, I'm @willmacaskill. If you want to try and use your time or money to do good, Giving What We Can is an organization that encourages people to take a pledge to give a significant fraction of their income (10% or more) to the charities that do the most good. It has a list of recommended charities. 80,000 Hours—if you want to use your career to do good—is the place to go for advice on what careers have the biggest impact. They provide one-on-one coaching too.
If you're feeling inspired and want to do good in the world, if you care about future people and want to help make their lives go better, then, as well as reading What We Owe The Future, Giving What We Can and 80,000 Hours are the places to go to get involved.
Dwarkesh Patel 55:33
Awesome, thanks so much for coming on the podcast! It was a lot of fun.
Will MacAskill 54:39
Thanks so much, I loved it.
Joseph Carlsmith is a senior research analyst at Open Philanthropy and a doctoral student in philosophy at the University of Oxford.
We discuss utopia, artificial intelligence, computational power of the brain, infinite ethics, learning from the fact that you exist, perils of futurism, and blogging.
Watch on YouTube. Listen on Spotify, Apple Podcasts, etc.
Episode website + Transcript here.
Follow Joseph on Twitter. Follow me on Twitter.
Subscribe to find out about future episodes!
Timestamps
(0:00:06) - Introduction
(0:02:53) - How to Define a Better Future?
(0:09:19) - Utopia
(0:25:12) - Robin Hanson’s EMs
(0:27:35) - Human Computational Capacity
(0:34:15) - FLOPS to Emulate Human Cognition?
(0:40:15) - Infinite Ethics
(1:00:51) - SIA vs SSA
(1:17:53) - Futurism & Unreality
(1:23:36) - Blogging & Productivity
(1:28:43) - Book Recommendations
(1:30:04) - Conclusion
Please share if you enjoyed this episode! Helps out a ton!
Fin Moorhouse is a Research Scholar and assistant to Toby Ord at Oxford University's Future of Humanity Institute. He co-hosts the Hear This Idea podcast, which showcases new thinking in philosophy, the social sciences, and effective altruism.
We discuss for-profit entrepreneurship for altruism, space governance, morality in the multiverse, podcasting, the long reflection, and the Effective Ideas & EA criticism blog prize.
Watch on YouTube. Listen on Spotify, Apple Podcasts, etc.
Episode website + Transcript here. Follow Fin on Twitter. Follow me on Twitter.
Subscribe to find out about future episodes!
Timestamps
(0:00:10) - Introduction
(0:02:45) - EA Prizes & Criticism
(0:09:47) - Longtermism
(0:12:52) - Improving Mental Models
(0:20:50) - EA & Profit vs Nonprofit Entrepreneurship
(0:30:46) - Backtesting EA
(0:35:54) - EA Billionaires
(0:38:32) - EA Decisions & Many Worlds Interpretation
(0:50:46) - EA Talent Search
(0:52:38) - EA & Encouraging Youth
(0:59:17) - Long Reflection
(1:03:56) - Long Term Coordination
(1:21:06) - On Podcasting
(1:23:40) - Audiobooks Imitating Conversation
(1:27:04) - Underappreciated Podcasting Skills
(1:38:08) - Space Governance
(1:42:09) - Space Safety & 1st Principles
(1:46:44) - Von Neumann Probes
(1:50:12) - Space Race & First Strike
(1:51:45) - Space Colonization & AI
(1:56:36) - Building a Startup
(1:59:08) - What is EA Underrating?
(2:10:07) - EA Career Steps
(2:15:16) - Closing Remarks
Please share if you enjoyed this episode! Helps out a ton!
Alexander Mikaberidze is Professor of History at Louisiana State University and the author of The Napoleonic Wars: A Global History.
He explains the global ramifications of the Napoleonic Wars - from India to Egypt to America. He also talks about how Napoleon was the last of the enlightened despots, whether he would have made a good startup founder, how the Napoleonic Wars accelerated the industrial revolution, the roots of the war in Ukraine, and much more!
Watch on YouTube, or listen on Spotify, Apple Podcasts, or any other podcast platform.
Episode website + Transcript here. Follow Professor Mikaberidze on Twitter. Follow me on Twitter for updates on future episodes.
Subscribe to find out about future episodes!
Timestamps:
(0:00:00) - Alexander Mikaberidze, Professor of History and author of “The Napoleonic Wars”
(0:01:19) - The allure of Napoleon
(0:13:48) - The advantages of multiple colonies
(0:27:33) - The Continental System and the industrial revolution
(0:34:49) - Napoleon’s legacy
(0:50:38) - The impact of Napoleonic Wars
(1:01:23) - Napoleon as a startup founder
(1:14:02) - The advantages of war and how it shaped international governance and, to some extent, political structures
Please share if you enjoyed this episode! Helps out a ton!
I flew to the Bahamas to interview Sam Bankman-Fried, the CEO of FTX! He talks about FTX’s plan to infiltrate traditional finance, giving $100m this year to AI + pandemic risk, scaling slowly + hiring A-players, and much more.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.
Episode website + Transcript here.
Follow me on Twitter for updates on future episodes
Subscribe to find out about future episodes!
Timestamps
(00:18) - How inefficient is the world?
(01:11) - Choosing a career
(04:15) - The difficulty of being a founder
(06:21) - Is effective altruism too narrow-minded?
(09:57) - Political giving
(12:55) - FTX Future Fund
(16:41) - Adverse selection in philanthropy
(18:06) - Correlation between different causes
(22:15) - Great founders do difficult things
(25:51) - Pitcher fatigue and the importance of focus
(28:30) - How SBF identifies talent
(31:09) - Why scaling too fast kills companies
(33:51) - The future of crypto
(35:46) - Risk, efficiency, and human discretion in derivatives
(41:00) - Jane Street vs FTX
(41:56) - Conflict of interest between broker and exchange
(42:59) - Bahamas and Charter Cities
(43:47) - SBF’s RAM-skewed mind
Unfortunately, audio quality abruptly drops from 17:50-19:15
Transcript
Dwarkesh Patel 0:09
Today on The Lunar Science Society Podcast, I have the pleasure of interviewing Sam Bankman-Fried, CEO of FTX. Thanks for coming on The Lunar Society.
Sam Bankman-Fried 0:17
Thanks for having me.
How inefficient is the world?
Dwarkesh Patel 0:18
Alright, first question. Does the consecutive success of FTX and Alameda suggest to you that the world has all kinds of low-hanging opportunities? Or was that a property of the inefficiencies of crypto markets at one particular point in history?
Sam Bankman-Fried 0:31
I think it's more of the former, there are just a lot of inefficiencies.
Dwarkesh Patel 0:35
So then another part of the question is: if you had to restart earning to give again, but couldn't do it in crypto, what are the odds you become a billionaire?
Sam Bankman-Fried 0:42
I think they're pretty decent. A lot of it depends on what I ended up choosing and how aggressive I end up deciding to be. There were a lot of safe and secure career paths before me that definitely would not have ended there. But if I dedicated myself to starting up some businesses, there would have been a pretty decent chance of it.
Choosing a career
Dwarkesh Patel 1:11
So that leads to the next question—which is that you've cited Will MacAskill's lunch with you while you were at MIT as being very important in deciding your career. He suggested you earn-to-give by going to a quant firm like Jane Street. In retrospect, given the success you've had as a founder, was that maybe bad advice? And maybe you should’ve been advised to start a startup or nonprofit?
Sam Bankman-Fried 1:31
I don't think it was literally the best possible advice because this was in 2012. Starting a crypto exchange then would have been…. I think it was definitely helpful advice. Relative to not having gotten advice at all, it helped quite a bit.
Dwarkesh Patel 1:50
Right. But then there's a broader question: are people like you who could become founders advised to take lower-variance, lower-risk careers that, in expected value, are less valuable?
Sam Bankman-Fried 2:02
Yeah, I think that's probably true. I think people are advised too strongly to go down safe career paths. But I think it's worth noting that there's a big difference between what makes sense altruistically and personally for this. To the extent you're just thinking of personal criteria, that's going to argue heavily in favor of a safer career path because you have much more quickly declining marginal utility of money than the world does. So, this kind of path is specifically for altruistically-minded people.
The other thing is that when you think about advising people, I think people will often try and reference career advice that others got. “What were some of these outward-facing factors of success that you can see?” But often the answer has something to do with them and their family, friends, or something much more personal. When we talk with people about their careers, personal considerations and the advice of people close to them weigh very heavily on the decisions they end up making.
Dwarkesh Patel 3:17
I didn't realize that the personal considerations were as important in your case as the advice you got.
Sam Bankman-Fried 3:24
Oh, I don’t think in my case. But, it is true with many people that I talked to.
Dwarkesh Patel 3:29
Speaking of declining marginal utility of consumption, I'm wondering if you think the implication of this is that over the long term, all the richest people in the world will be utilitarian philanthropists because they don't have diminishing returns of consumption. They're risk-neutral.
Sam Bankman-Fried 3:40
I wouldn't say all will, but I think there probably is something in that direction. People who are looking at how they can help the world are going to end up being disproportionately represented amongst the most and maybe least successful.
The difficulty of being a founder
Dwarkesh Patel 3:54
Alright, let’s talk about Effective Altruism. So in your interview with Tyler Cowen, you were asked, “What constrains the number of altruistically minded projects?” And you answered, “Probably someone who can start something.”
Now, is this a property of the world in general? Or is this a property of EAs? And if it's about EAs, then is there something about the movement that drives away people who could take leadership roles?
Sam Bankman-Fried 4:15
Oh, I think it's just the world in general. Even if you ignore altruistic projects and just look at profit-minded ones, we have lots of ideas for businesses that we think would probably do well, if they were run well, that we'd be excited to fund. And the missing ingredient quite frequently for them is the right person or team to take the lead on it. In general, starting something is brutal. It's brutal being a founder, and it requires a somewhat specific but extensive list of skills. Those things end up making such people high in demand.
Dwarkesh Patel 4:56
What would it take to get more of those kinds of people to go into EA?
Sam Bankman-Fried 4:59
Part of it is probably just talking with them about, “Have you thought about what you can do for the world? Have you thought about how you can have an impact on the world? Have you thought about how you can maximize your impact on the world?” Many people would be excited about thinking critically and ambitiously about how they can help the world. So I think honestly, just engagement is one piece of this. And then even within people who are altruistically minded and thinking about what it would take for them to be founders, there are still things that you can do.
Some of this is about empowering people, and some of this is about normalizing the fact that when you start something, it might fail—and that's okay. Most startups, and especially very early-stage startups, should not be trying to maximize the chances of having at least a little bit of success. But that means founders have to be okay with the personal fallout of failing, and we have to build a community that is okay with that. I don't think we have that right now; I think very few communities do.
Is effective altruism too narrow-minded?
Dwarkesh Patel 6:21
Now, there are many good objections to utilitarianism, as you know. You said yourself that we don't have a good account of infinite ethics—should we attribute substantial weight to the probability that utilitarianism is wrong? And how do you hedge for this moral uncertainty in your giving?
Sam Bankman-Fried 6:35
So I don't think it has a super large impact on my giving. Partially because you'd need a concrete proposal for what else you would do that would be different actions-wise—and I don't know that I've been compelled by many of those. I do think that there are a lot of things we don't understand right now. One thing that you pointed to is infinite ethics. Another thing is that (I'm not sure this is moral uncertainty; this might be physical uncertainty) there are a lot of chains of reasoning people will go down that are somewhat contingent on our current understanding of the universe—which might not be right. And the expected-value calculations that depend on it might not be right either.
Say what you will about the size of the universe and what that implies, but some of the same people make arguments based on how big the universe is and also think the simulation hypothesis has decent probability. Very few people chain through, “What would that imply?” I don't think it's clear what any of this implies. If I had to say, “How have these considerations changed my thoughts on what to do?”
The honest answer is that they have changed it a little bit. And the direction that they pointed me in is things with moderately more robust impact. And what I mean by that is, I'm sure one way that you can calculate the expected value of an action is, “Here's what's going to happen. Here are the two outcomes, and here are the probabilities of them.” Another thing you can do is say - it's a little bit more hand-wavy - but, “How much better is this going to make the world? How much does it matter if the world is better in generic diffuse ways?” Typically, EA has been pretty skeptical of that second line of reasoning—and I think correctly. When you see that deployed, it's nonsense. Usually, when people are pretty hard to nail down on the specific reasoning of why they think that something might be good, it’s because they haven't thought that hard about it or don't want to think that hard about it. The much better analyzed and vetted pathways are the ones we should be paying attention to.
That being said, I do think that sometimes EA gets too narrow-minded and specific about plotting out courses of impact. And this is one of the reasons why: people end up fixating on one particular understanding of the universe, of ethics, of how things are going to progress. But all of these things have some amount of uncertainty in them. And when you jostle them, some theories of impact behave somewhat robustly and some of them completely fall apart. I've become a bit more sympathetic to ones that stay a little robust under different assumptions about what the world ends up looking like.
Political giving
Dwarkesh Patel 9:57
In the May 2022 Oregon Congressional Election, you gave 12 million dollars to Carrick Flynn, whose campaign was ultimately unsuccessful. How have you updated your beliefs about the efficacy of political giving in the aftermath?
Sam Bankman-Fried 10:12
It was the first time that I gave on that scale in a race. And I did it because he was, of all the candidates in the cycle, the most outspoken on the need for more pandemic preparedness and prevention. He lost—such is life. In the end, there are some updates on the efficacy of various things. But, I never thought that the odds were extremely high that he was going to win. It was always going to be an uncertain close race. There's a limit to how much you can update from a one-time occurrence. If you thought the odds were 50-50, and it turns out to be close in one direction or another, there's a maximum of a factor-of-two update that you have on that. There were a bunch of sort of micro-updates on specific factors of the race, but on a high level, it didn’t change my perspective on policy that much.
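One way to make the factor-of-two bound concrete (a minimal Bayesian sketch, reading "50-50" as a calibrated prior on the observed outcome O): for any hypothesis H about the race,

$$P(H \mid O) = \frac{P(O \mid H)\,P(H)}{P(O)} \le \frac{P(H)}{P(O)} = 2\,P(H),$$

since $P(O \mid H) \le 1$ and $P(O) = \tfrac{1}{2}$. Observing an outcome you assigned 50% probability can therefore raise your credence in any hypothesis by at most a factor of two.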
Dwarkesh Patel 11:23
But does it make you think there are diminishing or possibly negative marginal returns from one donor giving to a candidate? Because of the negative PR?
Sam Bankman-Fried 11:30
At some point, I think that's probably true.
Dwarkesh Patel 11:33
Continuing on the theme of politics, when is it more effective to give the marginal million dollars to a political campaign or institution to make some change at the government level (like putting in early detection)? Or when is it more effective to fund it yourself?
Sam Bankman-Fried 11:47
It's a good question. It's not necessarily mutually exclusive. One thing worth looking at is the scale of the things that need to happen. How important are things like international cooperation for it? When you look at pandemic prevention, we're talking tens of billions of dollars of scale necessary to start putting this infrastructure in place. So it's a pretty big scale thing—which is hard to fund to that level individually. It's also something where we're going to need cooperation between different countries on, for example, what their surveillance for new pathogens looks like. And vaccine distribution: if some countries have great distribution of vaccines and others don't, that's not good. It's neither fair nor equitable for the countries that get hit hardest. But also, in a global pandemic, it's going to spread. You need global coverage. That's another reason that government has to be involved, at least to some extent, in the efforts.
FTX Future Fund
Dwarkesh Patel 12:55
Let's talk about Future Fund. As you know, there are already many existing Effective Altruist organizations that do donations. What is the reason you thought there was more value in creating a new one? What's your edge?
Sam Bankman-Fried 13:06
There's value in having multiple organizations. Every organization has its blind spots, and you can help cover those if you have a few. If OpenPhil didn't exist, maybe we would have created an organization that looks more like OpenPhil. They are covering a lot of what we're looking at—we're looking at overlapping, but not identical, things. I think having that diversity can be valuable. But pointing to the ways in which we intentionally designed ourselves to be a little bit different from existing donors:
One thing that I've been really happy about is the re-granting program. We have a number of people who are experts in various areas to whom we've basically donated pots that they can re-grant. What are the reasons that we think this is valuable? One thing is giving more stakeholders a chance to voice their opinions, because we can't possibly be listening to everyone in the world directly and integrating all those opinions to come up with a perfect set of answers. Distributing it and letting them act semi-autonomously can help with that. The other thing is that it helps with a large number of smaller grants. Think about what an organization giving away $100 million in a year is facing: if we divided that up into $25,000 grants, how many grants would that mean? 4,000 grants to analyze, right? If we want to give real thought to each one of those, we can't do that.
But on the flip side, sometimes the smaller grants are the most impactful per dollar. There are a lot of cases where someone really impressive has an exciting idea for a new foundation or a new organization that could do a lot of good for the world and needs $25,000 to get started: to rent a small office, to cover salaries for two employees for the first six months. Those are the kind of cases where a pretty small grant can make a huge change in the development of what might ultimately become a really impactful organization. But they're the kind of things that are really hard for our team to evaluate all of, just given the number of them—and the re-grantor program gives us a way to do that. Instead, we have 10, 50, or 100 re-grantors who go out and find a lot of those opportunities close to them; they can then identify those and direct those grants—and it gives us a much wider reach. It also biases it less towards people who we happen to know, which is good.
We don't want to just overfund everyone we know and underfund everyone that we don't. That's one initiative that I've been pretty excited about that we're going to keep doing. Another thing is that we've put a lot of emphasis on making the (application) process smooth and clean. There are pros and cons to this. But it drops the activation energy necessary for someone to decide to apply for a grant and fill out all of the forms. We've really tried to bring more people into the fold.
Adverse selection in philanthropy
Dwarkesh Patel 16:41
If you make it easy for people to fill out your application and generally fund things that other organizations wouldn't, how do you deal with the possibility of adverse selection in your philanthropic deal flow?
Sam Bankman-Fried 16:52
It's a really good question. The worry is that Bob down the street might see a great bookcase that he wants and wonder if he can get funding for it, as it's going to house a lot of knowledge. Knowledge is good, right? Obviously, we would detect that pretty quickly. The basic answer is that we still vet all of these. We do have oversight of them. We do deep dives into all of the large grants, and into randomly sampled subsets of the small ones—which allows us to get a good statistical sense of whether we are facing significant adverse selection in them. So far, we haven't seen obvious signs of it, but we're going to keep doing these analyses and see if anything worrying comes out of those. That's a way to be able to have more trusted analyses for more scaled-up numbers of grants.
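A minimal sketch of that sampling approach, in Python (the grant data, sample size, and review function are all invented for illustration; this is just the statistics of auditing a random subset rather than every grant):

```python
import random

def audit_small_grants(grants, review, sample_size=200, seed=0):
    """Estimate the rate of problematic grants by deep-diving a
    random sample instead of reviewing every small grant."""
    rng = random.Random(seed)
    sample = rng.sample(grants, min(sample_size, len(grants)))
    flagged = sum(1 for g in sample if review(g))
    p_hat = flagged / len(sample)
    # Rough 95% interval for a proportion (normal approximation).
    half_width = 1.96 * (p_hat * (1 - p_hat) / len(sample)) ** 0.5
    return p_hat, half_width

# Toy data: 4,000 small grants, 2% of them secretly "bookcase-style".
rng = random.Random(1)
grants = [{"bad": rng.random() < 0.02} for _ in range(4000)]
rate, err = audit_small_grants(grants, review=lambda g: g["bad"])
print(f"estimated problem rate: {rate:.1%} +/- {err:.1%}")
```

With a couple hundred deep dives you can bound the adverse-selection rate across thousands of grants, which is the point of sampling instead of reviewing everything.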
Correlation between different causes
Dwarkesh Patel 18:06
A long time ago, you wrote a blog post about how EA causes are multiplicative, instead of additive. Do you still find that's the case with most of the causes you care about? Or are there cases where some of the causes you care about are negatively multiplicative? An example might be economic growth and the speed at which AI takes off.
Sam Bankman-Fried 18:24
Yeah, I think it's getting more complicated. Specifically around AI, you have a lot of really complex factors that can point in the same direction or in opposite directions. Especially if what you think matters is something like the relative progress of AI safety research versus AI capabilities research, a lot of things are going to have the same impact on both of those, and thus a confusing impact on safety as a whole.
I do think it's more complicated now. It's not cleanly things just multiplying with each other. There are lots of cases where you see multiplicative behavior, but there are cases where you don't have that. The conclusion of this is: if you have multiplicative cases, you want to be funding each piece of it. But if you don't, then you want to try to identify the most impactful pieces and move those along. Our behavior should be different in those two scenarios.
Dwarkesh Patel 19:23
If you think of your philanthropy from a portfolio perspective, is correlation good or bad?
Sam Bankman-Fried 19:29
Expected value is expected value, right? Let's pretend that there is one person in Bangladesh and another one in Mexico. We have two interventions, both 50-50 on saving each of their lives. Suppose there’s some new drug that we could release to combat a neglected disease. This question is asking, “are they correlated?” “Are these two drugs correlated in their efficacy?” And my basic argument is, “it doesn't matter, right?” If you think about it from each of their perspectives, the person in Mexico isn't saying, “I only want to be saved in the cases where the person in Bangladesh is or isn't saved.” That’s not relevant. They want to live.
The person in Bangladesh similarly wishes to live. You want to help both of them as much as you can. It's not super relevant whether there’s alignment or anti-alignment between the cases where you get lucky and the ones where you don't.
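The underlying point is linearity of expectation: writing $X$ and $Y$ for indicator variables of the two lives saved (each 1 with probability $\tfrac{1}{2}$),

$$\mathbb{E}[X + Y] = \mathbb{E}[X] + \mathbb{E}[Y] = \tfrac{1}{2} + \tfrac{1}{2} = 1,$$

no matter how $X$ and $Y$ are correlated. Dependence changes the variance of the number of lives saved, not its expected value.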
Dwarkesh Patel 20:46
What’s the most likely reason that Future Fund fails to live up to your expectations?
Sam Bankman-Fried 20:51
We get a little lame. We give to a lot of decent things, but all the cooler or more innovative things that we do don't seem to work very well. We end up giving to the same things that everyone else is giving to. We don't turn out to be effective at starting new things; we don't turn out to be effective at thinking of new causes or executing on them. Hopefully, we'll avoid that. But it's always a risk.
Dwarkesh Patel 21:21
Should I think of your charitable giving, as a yearly contribution of a billion dollars? Or should I think of it as a $30 billion hedge against the possibility that there's going to be some existential risk that requires a large pool of liquid wealth?
Sam Bankman-Fried 21:36
It's a really good question; I'm not sure. We've given away about $100 million so far this year. We're doing that because we think there are really important things to fund, and to start scaling up those systems. We notice opportunities as they come, and we have systems ready in place to give to them. But it's something we're really actively discussing internally—how concentrated versus diffuse we want that giving to be, and whether to store up for one very large opportunity versus a mixture of many.
Great founders do difficult things
Dwarkesh Patel 22:15
When you look at a proposal and think this project could be promising, but this is not the right person to lead it, what is the trait that's most often missing?
Sam Bankman-Fried 22:22
Super interesting. I am going to ignore the obvious answer, which is that the guy is not very good, and look at cases where it's someone pretty impressive, but not the right fit for this. There are a few things. One of them is: how much are they going to want to deal with really messy s**t? This is a huge thing! When I was working at Jane Street, I had a great time there. One thing I didn't realize was valuable until I saw the alternative—if I decided that it was a good trade to buy one share of Apple stock on NASDAQ, there was a button to do that.
If you as a random citizen want to buy one share of Apple stock directly on an exchange, it'll cost you tens of millions of dollars a year to get set up. You have to get a physical colo(cation) in Secaucus, New Jersey, have market data agreements with these companies, think about the SIP and about the NBBO and whether you're even allowed to list on NASDAQ, and then build the technological infrastructure to do it. But all of that comes after you get a bank account.
Getting a bank account that's going to work in finance is really hard. I spent hundreds, if not thousands of hours of my life, trying to open bank accounts. One of the things at early Alameda that was really crucial to our ability to make money was having someone very senior spend hours per day in a physical bank branch, manually instructing wire transfers. If we didn't do that, we wouldn't have been able to do the trade.
When you start a company, there are enormous amounts of s**t that looks like that. Things that are dumb or annoying or broken or unfair, or not how the world should work. But that's how the world does work. The only way to be successful is to fight through that. If you're going to be like, "I'm the CEO, I don't do that stuff," then no one's going to do that at your company. It's not going to get done. You won't have a bank account and you won't be able to operate. One of the biggest traits that is incredibly important for a founder and for an early team at a company (but not important for everything in life) is willingness to do a ton of grunt work if it's important for the company right then.
Viewing it not as "low prestige" or "too easy" for you, but as, "This is the important thing. This is a valuable thing to do. So it's what I'm going to do." That's one of the core traits. The other thing is asking whether they're excited about this idea. Will they actually put their heart and soul into it? Or are they going to be not really into it and half-ass it? Those are two things that I really look for.
Pitcher fatigue and the importance of focus
Dwarkesh Patel 25:51
How have you used your insights about pitcher fatigue to allocate talent in your companies?
Sam Bankman-Fried 25:58
Haha. When it comes to pitchers in baseball, there's a lot of evidence that they get worse over the course of the game, partially because it's hard on the arm. But it's worth noting that the evidence seems to support the claim that it depends on the pitcher. In general, you're better off breaking up your outings. It's not just a function of how many innings they pitch that season, but also how many they've pitched very recently. If you could choose between someone throwing six innings every six days, or throwing three innings every three days, you should use the latter. That's going to get better pitching on average, and just as many innings out of them—and baseball has since moved very far in that direction. The average number of pitches thrown by starting pitchers has gone down a lot over the last 5-10 years.
How do I use that in my company? There's a metaphor here, except it's with computer work instead of physical arm work. You don't have the same effect where your arm is getting sore, your muscles snap, and you need surgery if you pitch too hard for too long. That doesn't directly translate—but there's an equivalent of this with people getting tired and exhausted. On the other hand, context is a huge, huge piece of being effective. Having all the context in your mind of what's going on, what you're working on, and what the company is doing makes it easier to operate effectively. For instance, if you could have either two half-time employees or one full-time employee, you're way better off with one full-time employee, because they're going to have more context than either of the part-time employees would have—and thus be able to work way more efficiently.
In general, concentrated work is pretty valuable. If you keep breaking up your work, you're never going to do as great of work as if you truly dove into something.
How SBF identifies talent
Dwarkesh Patel 28:30
You've talked about how you weigh experience relatively little when you're deciding who to hire. But in a recent Twitter thread, you mentioned that being able to provide mentorship to all the people who you hire is one of the bottlenecks to you being able to scale. Is there a trade-off here where if you don't hire people for experience, you have to give them more mentorship and thus can't scale as fast?
Sam Bankman-Fried 28:51
It's a good question. To a surprising extent, we found that the experience of the people that we hire has not had much correlation with how much mentorship they need. Much more important is how they think, how good they are at understanding new and different situations, and how hard they try to integrate how FTX works into their understanding of coding. We have by and large found that other things are much better predictors than experience of how much oversight and mentorship people are going to need.
Dwarkesh Patel 29:35
How do you assess that short of hiring them for a month and then seeing how they did?
Sam Bankman-Fried 29:39
It's tough, I don't think we're perfect at it. But things that we look at are, “Do they understand quickly what the goal of a product is? How does that inform how they build it?” When you're looking at developers, I think we want people who can understand what FTX is, how it works, and thus what the right way to architect things would be for that rather than treating it as an abstract engineering problem divorced from the ultimate product.
You can ask people, "Hey, here's a high-level customer experience or customer goal. How would you architect a system to create that?" That's one thing that we look for. Another is an eagerness to learn and adapt. It's not trivial to assess that, but you can do some amount of it by giving people novel scenarios and seeing how much they break versus how much they bend. That can be super valuable. We specifically search for developers who are willing to deal with messy scenarios rather than wanting a pristine world to work in. Our company is customer-facing and has to interface with third-party tooling. All those things mean that we have to deal with things that are messy, because that's the way the world is.
Why scaling too fast kills companies
Dwarkesh Patel 31:09
Before you launched FTX, you gave detailed instructions to the existing exchanges about how to improve their systems, how to remove clawbacks, and so on. Looking back, they left billions of dollars of value on the table. Why didn't they just fix what you told them to fix?
Sam Bankman-Fried 31:22
My sense is that it's part of a larger phenomenon. One piece of this is that they didn't have a lot of market structure experts. They did not have the talent in-house to think really deeply about risk engines. Also, there are cultural barriers between myself and some of them, which meant that they were less inclined than they otherwise would have been to take it very seriously. Ignoring those factors, there's something much bigger at play there. Many of these exchanges had hired a lot of people and gotten very large. You might think that made them more capable of doing things, with more horsepower. But in practice, most of the time that we see a company grow really fast, really quickly, and get really big in terms of people, it becomes an absolute mess.
Internally, there are huge diffusion-of-responsibility issues. No one's really taking charge. You can't figure out who's supposed to do what. In the end, nothing gets done. You actually start hitting negative marginal utility of employees pretty quickly: the more people you have, the less total you get done. That had happened to a number of them by the time I sent them these proposals. Where did they go internally? Who knows. The Vice President of Exchange Risk Operations (but not the real one—the fake one operating under some department with an unclear goal and mission) had no idea what to do with it. Eventually, she passes it off to a random friend of hers who was the developer for the mobile app and asks, "You're a computer person, is this right?" They likely said, "I don't know, I'm not a risk person," and that's how it died. I'm not saying that's literally what happened, but it sounds kinda like what probably happened. It's not like they had people who took responsibility and thought, "Wow, this is scary. I should make sure that the best person in the company gets this," and passed it to the person who thinks about their risk modeling. I don't think that's what happened.
The future of crypto
Dwarkesh Patel 33:51
There are two ways of thinking about the impact of crypto on financial innovation. One is the crypto-maximalist view that crypto subsumes tradfi. The other is that you're basically stress-testing, in a volatile and fairly unregulated market, some ideas that you're actually going to bring to tradfi, but this is not going to lead to some sort of decentralized utopia. Which of these models is more correct? Or is there a third model that you think is the correct one?
Sam Bankman-Fried 34:18
Who knows exactly what's going to happen? It's going to be path-dependent. If I had to guess, I would say that a lot of properties of what is happening in crypto today will make their way into tradfi to some extent. I think blockchain settlement has a lot of value and can clean up a lot of areas of traditional market structure. Composable applications are super valuable and are going to get more important over time. In some areas of this, it's not clear what's going to happen. When you think about how decentralized ecosystems and regulation intersect, it's a little TBD exactly where that ends up.
I don't want to state with extreme confidence exactly what will or won't happen. Stablecoins becoming an important settlement mechanism is pretty likely. Blockchains in general becoming a settlement mechanism, collateral clearing mechanism, and more assets getting tokenized seem likely. There being programs written on blockchains that people can add to that can compose with each other seems pretty likely to me. A lot of other areas of it could go either way.
Risk, efficiency, and human discretion in derivatives
Dwarkesh Patel 35:46
Let's talk about your proposal to the CFTC to replace Futures Commission Merchants with algorithmic real-time risk management. There's a worry that without human discretion, you'll have algorithms that cause liquidation cascades when they're not necessary. Is there some role for human discretion in these kinds of situations?
Sam Bankman-Fried 36:06
There is! The way that traditional futures market structure works is that you have a clearinghouse with a decent amount of manual discretion in it, connected to FCMs, some of which use human discretion and some of which use automated risk management algorithms with their clients. The smaller the client, the more automated it is. We are inverting that: at the center, you have an automated clearinghouse. Then, you connect it to FCMs, which can use discretionary systems when managing their clients.
The key difference here is that one way or another, the initial margin has to end up at the clearinghouse. The amount is programmatic, and the clearinghouse acts in a clear, predictable way. The goal of this is to prevent contagion between different intermediaries. Whatever credit decisions one intermediary makes with respect to their customers doesn't pose risk to other intermediaries, because someone has to post the collateral to the clearinghouse in the end—whether it's the FCM, their customer, or someone else. It gives clear rules of the road, keeps systemic risk from spreading throughout the system, and contains risk to the parties that choose to take that risk on - to the FCMs that choose to make credit decisions there.
There is a potential role for manual judgment. Manual judgment can be valuable and add a lot of economic value. But it can also be very risky when done poorly. In the current system, each FCM is exposed to all of the manual bespoke decisions that each other FCM is making. That's a really scary place to be in, we've seen it blow up. We saw it blow up with LME nickel contracts and with a few very large traders who had positions at a number of different banks that ended up blowing out. So, this provides a level of clarity, oversight, and transparency to this system, so people know what risk they are, or are not taking on.
Dwarkesh Patel 38:29
Are you replacing that risk with another risk? If there's one exchange that has the most liquidity in futures, and there's one exchange where you're posting all your collateral (across all your positions), then isn't the risk that the single algorithm the exchange is using will determine when and if liquidation cascades happen?
Sam Bankman-Fried 38:47
It’s already the case that if you put all of your collateral with a prime broker, whatever that prime broker decides (whether it's an algorithm or a human or something in between) is what happens with all of your collateral. If you're not comfortable with that, you could choose to spread it out between different venues. You could choose to use one venue for some products and another venue for other products. If you don't want to cross-collateralized cross-margin your positions, you get capital efficiency for cross-margining them—for putting them in the same place. But, the downside of that is the risk of one can affect the other. There's a balance there, and I don't think it's a binary thing.
Dwarkesh Patel 39:28
Given the benefits of cross-margining and the fact that less capital has to be locked up as collateral, is the long-run equilibrium that the single exchange will win? And if that's the case, then, in the long run, there won't be that much competition in derivatives?
Sam Bankman-Fried 39:40
I don't think we're going to have a single exchange winning. Among other things, there are going to be different decisions made by different exchanges—which will be better or worse for particular situations. One thing that people have brought up is, "How about physical commodities?" Like corn or soy? What would our risk model say about that? It's not super helpful for those commodities right now, because it doesn't know how to understand a warehouse. So you might want to use a different exchange, which had a more bespoke risk model that tried to understand, the way a human would, what physical positions someone had on. That would totally make sense. That can cause a split between different exchanges.
In addition, we've been talking about the clearinghouse here, but many exchanges can connect to the same clearinghouse. We're already, as a clearinghouse, connected to a number of different DCMs and excited for that to grow. In general, there are going to be a lot of people who have different preferences over different details of the system and choose different products based on that. That's how it should work. People should be allowed to choose the option that makes the most sense for them.
Jane Street vs FTX
Dwarkesh Patel 41:00
What are the biggest differences in culture between Jane Street and FTX?
Sam Bankman-Fried 41:05
FTX has much more of a culture of morphing and taking on a lot of random new s**t. I don't want to say Jane Street is an ossified place or anything; it's somewhat nimble. But it is more of a culture of, "We're going to be very good at this particular thing on a timescale of a decade." There are some cases where that's true of FTX, because some things are clearly part of our core business for a decade. But there are other things that we knew nothing about a year ago and now have to get good at. There's been more adaptation. It's also a much more public-facing and customer-facing business than Jane Street is—which means that things like PR are much more central to what we're doing.
Conflict of interest between broker and exchange
Dwarkesh Patel 41:56
Now in crypto, you're combining the exchange and the broker—they seem to have different incentives. The exchange wants to increase volume, and the broker wants to better manage risk, maybe with less leverage. Do you feel that in the long run, these two can stay in the same entity given the potential conflict of interest?
Sam Bankman-Fried 42:13
I think so. There's some extent to which they differ, but more often they actually want the same thing—and harmonizing them can be really valuable. One goal is to provide a great customer experience. When you have two different entities with two completely different businesses, and you have to go from one to the other, you're going to end up getting the least common denominator of the two as a customer. Everything is going to be supported as poorly as whichever of the two entities supports what you're doing most poorly - and that makes it harder. Whereas synchronizing them gives us more ability to provide a great experience.
Bahamas and Charter Cities
Dwarkesh Patel 42:59
How has living in the Bahamas impacted your opinion about the possibility of successful charter cities?
Sam Bankman-Fried 43:06
It's a good question. It's the first time I've done this, and it's updated me positively. We've built out a lot of things here that have been impactful. It's made me feel like it is more doable than I previously would have thought. But it's a lot of work. It's a large-scale project if you want to build out a full city—and we haven't built out a full city yet. We built out some specific pieces of infrastructure that we needed, and we've gotten a ton of support from the country. They've been very welcoming, and there are a lot of great things here. This is way less of a project than taking a giant, empty plot of land and creating a city on it. That's way harder.
SBF’s RAM-skewed mind
Dwarkesh Patel 43:47
How has having a RAM-skewed mind influenced the culture of FTX and its growth?
Sam Bankman-Fried 43:52
On the upside, we've been pretty good at adapting and understanding what the important things are at any time, and training ourselves quickly to be good at those even if they look very different from what we were doing. That's allowed us to focus a lot on the product, regulation, licensing, customer experience, branding, and a bunch of other things. Hopefully, it means that we're able to take whatever situations come up and provide reasonable feedback about them and reasonable thoughts on what to do, rather than thinking more rigidly in terms of how previous situations went. On the flip side, I need to have a lot of people around me who will try to remember long-term important things that might get lost day-to-day. As we focus on things that pop up, it's important for me to take time periodically to step back, clear my mind, and remember the big picture. What are the most important things for us to be focusing on?
Please share if you enjoyed this episode! Helps out a ton!
Agustin Lebron began his career as a trader and researcher at Jane Street Capital, one of the largest market-making firms in the world. He currently runs the consulting firm Essilen Research, where he is dedicated to helping clients integrate modern decision-making approaches in their business.
We discuss how AI will change finance, why adverse selection makes trading and hiring so difficult, & what the future of crypto holds.
Watch on YouTube, or listen on Spotify, Apple Podcasts, or any other podcast platform.
Episode website here.
Buy The Laws of Trading.
Follow Agustin on Twitter. Follow me on Twitter for updates on future episodes.
Subscribe to find out about future episodes!
Timestamps:
(00:00) - Introduction
(04:18) - What happens in adverse selection?
(09:22) - Why is having domain expertise in trading not important?
(15:09) - How do you deal when you're on the other side of the adverse selection?
(21:16) - Why you should invest in training your people?
(25:37) - Is finance too big at 9% of GDP?
(31:06) - Trading is very labor intensive
(36:16) - Overlap of rationality community and trading
(48:00) - The age of startup founders
(50:43) - The role of market makers in crypto
(57:31) - Three books that you recommend
(58:47) - Life is long, not short
(1:03:01) - Short history of Lunar Society
Please share if you enjoyed this episode! Helps out a ton!
Ananyo Bhattacharya is the author of The Man from the Future: The Visionary Life of John von Neumann. He is a science writer who has worked at the Economist and Nature. Before journalism, he was a medical researcher at the Burnham Institute in San Diego, California. He holds a degree in physics from the University of Oxford and a PhD in protein crystallography from Imperial College London.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.
Episode website here.
Follow Ananyo on Twitter. Follow me on Twitter for updates on future episodes.
Timestamps:
(0:00:30) - John Von Neumann - The Man From The Future
(0:02:29) - The Forgotten Father of Game Theory
(0:16:04) - The last representative of the great mathematicians
(0:19:45) - Did John Von Neumann have a Miracle year?
(0:26:31) - The fundamental theorem of John von Neumann’s game theory
(0:29:34) - The strong supporter of "Preventive War”
(0:50:51) - We can't all be superhuman
Stephen Grugett is a cofounder of Manifold Markets, where anyone can create a prediction market. We discuss how prediction markets can change how countries and companies make important decisions.
Manifold Markets: https://manifold.markets/
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.
Episode website here.
Follow me on Twitter for updates on future episodes.
Timestamps:
(0:00:00) - Introduction
(0:02:29) - Predicting the future
(0:05:16) - Getting Accurate Information
(0:06:20) - Potentials
(0:09:29) - Not using internal prediction markets
(0:11:04) - Doing the painful thing
(0:13:31) - Decision Making Process
(0:14:52) - Grugett’s opinion about insider trading
(0:16:23) - The Role of prediction market
(0:18:17) - Dealing with the Speculators
(0:20:33) - Criticism of Prediction Markets
(0:22:24) - The world when people cared about prediction markets
(0:26:10) - Grugett’s Profile Background/Experience
(0:28:49) - User Result Market
(0:30:17) - The most important mechanism
(0:32:59) - The 1000 manifold dollars
(0:40:30) - Efficient financial markets
(0:46:28) - Manifold Markets Job/Career Openings
(0:48:02) - Objectives of Manifold Markets
Today I talk to Pradyu Prasad (blogger and podcaster) about the book "Hirohito and the Making of Modern Japan" by Herbert P. Bix. We also discuss militarization, industrial capacity, current events, and blogging.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.
Podcast website here.
Follow Pradyu on Twitter. Follow me on Twitter for updates on future episodes.
Follow Pradyu's Blog: https://brettongoods.substack.com/
Timestamps:
(0:00:00) - Intro
(0:01:59) - Hirohito and Introduction to the Book
(0:05:39) - Meiji Restoration and Japan's Rapid Industrialization
(0:11:11) - Industrialization and Traditional Military Norms
(0:14:50) - Alternate Causes for Japanese Atrocities
(0:17:03) - Richard Hanania's Public Choice Theory in Imperial Japan
(0:21:34) - Hirohito's Relationship with the Military
(0:24:33) - Rant on Japanese Strategy
(0:33:10) - Modern Parallel to Russia/Ukraine
(0:38:22) - Economics of War and Western War Capacity
(0:48:14) - Elements of Effective Occupation
(0:55:53) - Ideological Fervor in WW2 Japan
(0:59:25) - Cynicism on Elites
(1:00:29) - The Legend of Godlike Hirohito
(1:06:47) - Postwar Japanese Economy
(1:13:23) - Blogging and Podcasting
(1:20:31) - Spooky
(1:38:00) - Outro
Please share if you enjoyed this episode! Helps out a ton!
Razib Khan is a writer, geneticist, and blogger with an interest in history, genetics, culture, and evolutionary psychology.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.
Podcast website here. Follow Razib on Twitter. Follow me on Twitter for updates on future episodes.
Thanks for reading The Lunar Society! Subscribe to find out about future episodes!
Time Stamps
(0:00:05) Razib's Background
(0:01:34) Dysgenics of Intelligence
(0:04:23) Endogamy and Genetic traits in India
(0:08:58) Similar Examples of Endogamy
(0:14:28) Why So Many Brahmin CEOs
(0:19:55) Razib the Globe Trotter, Geography Expert
(0:25:04) Male/Female Genetic Variance
(0:30:04) Agricultural Man and Our Tiny Brains
(0:34:40) The Church of Science
(0:42:33) Professorship, a family business
(0:44:23) Long History
(0:52:42) Future of Human-Computer Interfacing
(0:56:30) Near Future of Gene Editing
(0:59:19) Meta Questions and Closing
Please share if you enjoyed this episode! Helps out a ton!
Jimmy Soni is the author of The Founders: The Story of Paypal and the Entrepreneurs Who Shaped Silicon Valley.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.
Episode website here.
Follow Jimmy on Twitter. Follow me on Twitter for updates on future episodes!
Timestamps:
(0:00:00) - Bell Labs vs PayPal
(0:05:12) - Scenius in Ancient Rome and America's Founding
(0:07:02) - Girard at PayPal
(0:15:17) - Thiel almost shorts the Dot com bubble
(0:19:49) - Does Zero to One contradict PayPal's story?
(0:27:57) - Hilarious Russian hacker story
(0:29:06) - Why is Thiel so good at spotting talent?
(0:34:50) - Did PayPal make talent or discover it?
(0:40:40) - Japanese mafia invests in PayPal?!
(0:44:42) - Upcoming TV show on PayPal
(0:48:11) - Musk in ancient Rome
(0:52:12) - Why didn't Musk keep pursuing finance?
(0:56:32) - Why didn't the mafia get back together?
(1:00:06) - Jimmy's writing process
I interview the economist Bryan Caplan about his new book, Labor Econ Versus the World, and many other related topics.
Bryan Caplan is a Professor of Economics at George Mason University and a New York Times Bestselling author. His most famous works include: The Myth of the Rational Voter, Selfish Reasons to Have More Kids, The Case Against Education, and Open Borders: The Science and Ethics of Immigration.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.
Podcast website here.
Follow Bryan on Twitter. Follow me on Twitter for updates on future episodes.
Timestamps:
(0:00:00) - Intro
(0:00:33) - How many workers are useless, and why is labor force participation so low?
(0:03:47) - Is getting out of poverty harder than we think?
(0:10:43) - Are elites to blame for poverty?
(0:14:56) - Is human nature to blame for poverty?
(0:19:11) - Remote work and foreign wages
(0:24:43) - The future of the education system?
(0:29:31) - Do employers care about the difficulty of a curriculum?
(0:33:13) - Why do companies and colleges discriminate against Asians?
(0:42:01) - Applying Hanania's unitary actor model to mental health
(0:50:38) - Why are multinationals so effective?
(0:53:37) - Open borders and cultural norms
(0:58:13) - Is Tyler Cowen right about automation?
Richard Hanania is the President of the Center for the Study of Partisanship and Ideology and the author of Public Choice Theory and the Illusion of Grand Strategy: How Generals, Weapons Manufacturers, and Foreign Governments Shape American Foreign Policy.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.
Episode website here.
Follow Richard on Twitter. Follow me on Twitter for updates on future episodes.
Read Richard's Substack: https://richardhanania.substack.com/
Timestamps:
(0:00:00) - Intro
(0:04:35) - Did war prevent sclerosis?
(0:06:05) - China vs America's grand strategy
(0:10:00) - Does the president have more power over foreign policy?
(0:11:30) - How to deter bad actors?
(0:15:39) - Do some countries have a coherent foreign policy?
(0:16:55) - Why does self-interest matter in foreign but not domestic policy?
(0:21:05) - Should we limit money in politics?
(0:23:47) - Should we credit expertise for nuclear détente and global prosperity?
(0:28:45) - Have international alliances made us safer?
(0:31:57) - Why does academic bureaucracy work in some fields?
(0:36:26) - Did academia suck even before diversity?
(0:39:34) - How do we get expertise in social sciences?
(0:42:19) - Why are things more liberal?
(0:43:55) - Why is big tech so liberal?
(0:47:53) - Authoritarian populism vs libertarianism
(0:51:40) - Can authoritarian governments increase fertility?
(0:54:54) - Will increasing fertility be dysgenic?
(0:56:43) - Will not having kids become cool?
(0:59:22) - Advice for libertarians?
David Deutsch is the founder of the field of quantum computing and the author of The Beginning of Infinity and The Fabric of Reality.
Read me contra David on AI.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.
Read the full transcript with helpful links here.
Follow David on Twitter. Follow me on Twitter for updates on future podcasts.
Timestamps
(0:00:00) - Will AIs be smarter than humans?
(0:06:34) - Are intelligence differences immutable / heritable?
(0:20:13) - IQ correlation of twins separated at birth
(0:27:12) - Do animals have bounded creativity?
(0:33:32) - How powerful can narrow AIs be?
(0:36:59) - Could you implant thoughts in VR?
(0:38:49) - Can you simulate the whole universe?
(0:41:23) - Are some interesting problems insoluble?
(0:44:59) - Does America fail Popper's Criterion?
(0:50:01) - Does finite matter mean there's no beginning of infinity?
(0:53:16) - The Great Stagnation
(0:55:34) - Changes in epistemic status in Popperianism
(0:59:29) - Open ended science vs gain of function
(1:02:54) - Contra Tyler Cowen on civilizational lifespan
(1:07:20) - Fun criterion
(1:14:16) - Does AGI through evolution require suffering?
(1:18:01) - Would David enter the Experience Machine?
(1:20:09) - (Against) Advice for young people
Byrne Hobart writes The Diff, a newsletter about inflections in finance and technology with 24,000+ subscribers.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.
Episode website here.
The Diff newsletter: https://diff.substack.com/
Follow me on Twitter for updates on future episodes!
Thanks for reading The Lunar Society! Subscribe for free to receive new posts and support my work.
Timestamps:
(0:00:00) - Byrne's one big idea: stagnation
(0:05:50) - Has regulation caused stagnation?
(0:14:00) - FDA retribution
(0:15:15) - Embryo selection
(0:17:32) - Patient longtermism
(0:21:02) - Are there secret societies?
(0:26:53) - College, optionality, and conformity
(0:34:40) - Differentiated credentials underrated?
(0:39:15) - Will conscientiousness increase in value?
(0:44:26) - Why aren't rationalists more into finance?
(0:48:04) - Rationalists are bad at changing the world
(0:52:20) - Why read more?
(0:57:10) - Does knowledge have increasing returns?
(1:01:30) - How to escape the middle career trap?
(1:04:48) - Advice for young people
(1:08:40) - How to learn about a subject?
David Friedman is a famous anarcho-capitalist economist and legal scholar.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.
Episode website + transcript here.
David Friedman's website: http://www.daviddfriedman.com/
Follow me on Twitter for updates on future episodes.
Timestamps:
(0:00:00) - Dating market
(0:12:15) - The future of reputation
(0:27:30) - How Friedman predicted bitcoin
(0:35:35) - Prediction markets
(0:40:00) - Can regulation stop progress globally?
(0:45:50) - Lack of diversity in modern legal systems
(0:54:20) - Friedman's theory of property rights
(1:01:50) - Charles Murray's scheme to fight regulations
(1:06:25) - Property rights of the poor
(1:09:07) - Automation
(1:16:00) - Economics of medieval reenactment
(1:19:00) - Advice for futurist young people
Sarah Fitz-Claridge is a writer, coach, and speaker with a fallibilist worldview. She started the journal that became Taking Children Seriously in the early 1990s after being surprised by the heated audience reactions she was getting when talking about children. She has spoken all over the world about her educational philosophy, and you can find transcripts of some of her talks on her website.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.
Episode website here.
Sarah's Website: https://www.fitz-claridge.com/
Follow Sarah on Twitter. Follow me on Twitter for updates.
Michael Huemer is a professor of philosophy at the University of Colorado. He is the author of more than sixty academic articles in epistemology, ethics, metaethics, metaphysics, and political philosophy, as well as eight amazing books.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.
Podcast website here.
Buy Knowledge, Reality, and Value and The Problem of Political Authority.
Read Michael’s awesome blog and follow me on Twitter for new episodes.
Timestamps:
(0:00:00) - Intro
(0:01:07) - The Problem of Political Authority
(0:03:25) - Common sense ethics
(0:09:39) - Stockholm syndrome and the charisma of power
(0:18:14) - Moral progress
(0:26:55) - Growth of libertarian ideas
(0:33:37) - Does anarchy increase violence?
(0:44:37) - Transitioning to anarchy
(0:47:20) - Is Huemer attacking our society?!
(0:51:40) - Huemer's writing process
(0:53:18) - Is it okay to work for the government
(0:56:39) - Burkean argument against anarchy
(1:02:07) - The case for tyranny
(1:11:58) - Underrated/overrated
(1:25:55) - Huemer production function
(1:30:41) - Favorite books
(1:33:04) - Advice for young people
Robert Martin (aka Uncle Bob) is a programming pioneer and the bestselling author of Clean Code. We discuss the prospect of automating programming, spotting and developing coding talent, occupational licensing, quotas, and the elusive sense of style.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.
Listen to his fascinating talk on the future of programming: https://youtu.be/ecIWPzGEbFc
Read his blog about programming: http://blog.cleancoder.com/
Buy his books on Amazon: https://www.amazon.com/kindle-dbs/ent...
Thanks for reading The Lunar Society! Subscribe to find out about future episodes!
Timestamps
(0:00) - Automating programming
(8:40) - Educating programmers (expertise, talent, university)
(21:45) - Spotting talent
(26:10) - Teaching kids
(29:31) - Prose and music sense in coding
(32:22) - Occupational licensing for programmers
(35:49) - Why is tech political
(39:28) - Quotas
(42:29) - Advice to 20 yr old
Scott Aaronson is a Professor of Computer Science at The University of Texas at Austin, and director of its Quantum Information Center.
He's the author of one of the most interesting blogs on the internet: https://www.scottaaronson.com/blog/ and the book “Quantum Computing since Democritus”.
He was also my professor for a class on quantum computing.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.
Episode website here.
Follow me on Twitter to get updates on future episodes and guests.
Timestamps
(0:00) - Intro
(0:33) - Journey through high school and college
(12:37) - Early work
(19:15) - Why quantum computing took so long
(33:30) - Contributions from outside academia
(38:18) - Busy beaver function
(53:50) - New quantum algorithms
(1:03:30) - Clusters
(1:06:23) - Complexity and economics
(1:13:26) - Creativity
(1:24:07) - Advice to young people
Scott is the author of Ultralearning and famous for the MIT Challenge, where he taught himself MIT's 4 year Computer Science curriculum in 1 year.
I had a blast chatting with Scott Young about aggressive self-directed learning. Scott has some of the best advice out there about learning hard things. It has helped yours truly prepare to interview experts and dig into interesting subjects.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.
Podcast website here.
Check out Scott’s website. Follow me on Twitter for updates on future episodes.
Buy Scott’s book on Ultralearning: https://amzn.to/3TuPEbf
Timestamps
(00:00) - Intro
(01:00) - Einstein
(13:20) - Age
(18:00) - Transfer
(24:40) - Compounding
(34:00) - Depth vs context
(40:50) - MIT challenge
(1:00:50) - Focus
(1:10:00) - Role models
(1:20:30) - Progress studies
(1:24:25) - Early work and ambition
(1:28:18) - Advice for 20 yr old
(1:35:00) - Raising a genius baby?
I ask Charles Murray about Human Accomplishment, By The People, and The Curmudgeon's Guide to Getting Ahead.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.
Read the full transcript here.
Follow Charles on Twitter. Follow me on Twitter for updates on future episodes.
Timestamps
(00:00) - Intro
(01:00) - Writing Human Accomplishment
(06:30) - The Lotka curve, age, and miracle years
(10:38) - Habits of the greats (hard work)
(15:22) - Focus and explore in your 20s
(19:57) - Living in Thailand
(23:02) - Peace, wealth, and golden ages
(26:02) - East, west, and religion
(30:38) - Christianity and the Enlightenment
(34:44) - Institutional sclerosis
(37:43) - Antonine Rome, decadence, and declining accomplishment
(42:13) - Crisis in social science
(45:40) - Can secular humanism win?
(55:00) - Future of Christianity
(1:03:30) - Liberty and accomplishment
(1:06:08) - By the People
(1:11:17) - American exceptionalism
(1:14:49) - Pessimism about reform
(1:18:43) - Can libertarianism be resuscitated?
(1:25:18) - Trump's deregulation and judicial nominations
(1:28:11) - Beating the federal government
(1:32:05) - Why don't big companies have a litigation fund?
(1:34:05) - Getting around the Halo effect
(1:36:07) - What happened to the Madison fund?
(1:37:00) - Future of liberty
(1:41:00) - Public sector unions
(1:43:43) - Andrew Yang and UBI
(1:44:36) - Groundhog Day
(1:47:05) - Getting noticed as a young person
(1:50:48) - Passage from Human Accomplishment
Alex Tabarrok is a professor of economics at George Mason University and, with Tyler Cowen, a co-founder of the online education platform http://MRU.org.
I ask Alex Tabarrok about the Grand Innovation Prize, the Baumol effect, and Dominant Assurance Contracts.
Watch on YouTube, or listen on Spotify, Apple Podcasts, or any other podcast platform.
Episode website here.
Follow Alex on Twitter. Follow me on Twitter for updates on future episodes.
Alex Tabarrok's and Tyler Cowen's excellent blog: https://marginalrevolution.com/
Timestamps:
(00:00) - Intro
(00:34) - Grand Innovation Prize
(08:45) - Prizes vs grants
(14:10) - Baumol effect
(27:50) - On Bryan Caplan's case against education
(31:35) - Scaling education online
(48:50) - Declining research productivity
(52:15) - Dominant Assurance Contracts
(58:40) - Future of governance
(1:04:05) - On Robin Hanson's Futarchy
(1:06:02) - Beating Adam Smith
(1:08:35) - Our Warfare-Welfare State
(1:19:30) - The Great Stagnation vs The Innovation Renaissance
(1:21:40) - Advice to 20 year olds
Caleb Watney is the director of innovation policy at the Progressive Policy Institute.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.
Episode website here.
Follow Caleb on Twitter. Follow me on Twitter for updates on future episodes.
Caleb's new blog: https://www.agglomerations.tech/
Timestamps
(00:00) - Intro
(00:20) - America's innovation engine is slowing
(01:02) - Remote work / agglomeration effects
(08:45) - Chinese vs American innovation
(16:23) - Reforming institutions
(19:00) - Tom Cotton's critique of high-skilled immigration
(22:26) - Eric Weinstein's critique of high-skilled immigration
(26:02) - Reforming the H-1B
(30:30) - Immigration during recession
(32:55) - Big tech / AI
(38:20) - EU regulation
(40:07) - Biden vs Trump
(42:30) - Federal R&D
(47:20) - Climate megaprojects
(49:35) - Falling fertility rates
(52:20) - Advice to 20 year olds
Robin Hanson is a professor of economics at George Mason University. He is the author of The Elephant in the Brain and The Age of Em.
Robin's Twitter: https://twitter.com/robinhanson
Robin's blog: https://www.overcomingbias.com/
Robin's website: http://mason.gmu.edu/~rhanson/home.html
My blog: https://dwarkeshpatel.com/
My Twitter: https://twitter.com/dwarkesh_sp
Timestamps
(00:05) - The long view
(15:07) - Subconscious vs conscious intelligence
(20:28) - Meditators
(26:50) - Signaling, norms, and motives
(36:50) - Conversation
(42:54) - 2020 election nominees
(49:25) - Nerds in startups and social science
(54:50) - Academia and Robin
(58:20) - Dominance explains paternalism
(1:09:32) - Remote work
(1:21:26) - Advice for 20 yr old
(1:28:05) - Idea futures
(1:32:13) - Reforming institutions
Jason Crawford writes at The Roots of Progress about the history of technology and industry, and the philosophy of progress.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.
Podcast website here.
Follow Jason on Twitter. Follow me on Twitter for updates on future episodes.
Jason's website: https://jasoncrawford.org/
The Roots of Progress: https://rootsofprogress.org/
Matjaž Leonardis has co-written a paper with David Deutsch about the Popper-Miller Theorem. In this episode, we talk about that as well as the dangers of the scientific identity, the nature of scientific progress, and advice for young people who want to be polymaths.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.
Podcast website here.
Follow Matjaž's excellent Twitter. Follow me on Twitter for updates on future episodes!
Tyler Cowen is the Holbert L. Harris Professor of Economics at George Mason University and director of the Mercatus Center.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.
Transcript + Podcast website here.
Follow Tyler Cowen on Twitter. Follow me on Twitter for updates on future episodes.
Timestamps
(0:00) - The Great Reset
(2:58) - Growth and the cyclical view of history
(4:00) - Time horizons, growth, and sustainability
(5:30) - Space travel
(8:11) - WMDs and end of humanity
(10:57) - Common sense morality
(12:20) - China and authoritarianism
(13:45) - Are big businesses complacent?
(17:15) - Online education vs university
(20:45) - Aesthetic decline in West Virginia
(23:20) - Advice for young people
(25:18) - Mentors
(27:15) - Identifying talent
(29:50) - Can adults change?
(31:45) - Capacity to change: men vs women
(33:10) - Are effeminate societies better?
(35:15) - Conservatives and progress
(36:50) - Biggest mistake in history
(39:05) - Nuke in my lifetime
(40:35) - Age and learning
(42:45) - Pessimistic future
(43:50) - Optimistic future
(46:28) - Closing
Bryan Caplan is a Professor of Economics at George Mason University and a New York Times Bestselling author. His most famous works include: The Myth of the Rational Voter, Selfish Reasons to Have More Kids, The Case Against Education, and Open Borders: The Science and Ethics of Immigration.
I talk to Bryan about open borders, the idea trap, UBI, appeasement, China, the education system, and his next two books on poverty and housing regulation.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.
Follow Bryan on Twitter. Follow me on Twitter for updates on future episodes.