109 episodes • Length: 50 min • Monthly
The Dynamist, a podcast by the Foundation for American Innovation, brings together the most important thinkers and doers to discuss the future of technology, governance, and innovation. The Dynamist is hosted by Evan Swarztrauber, former Policy Advisor at the Federal Communications Commission. Subscribe now!
AI has emerged as a critical geopolitical battleground where Washington and Beijing are racing not just for economic advantage, but military dominance. Despite these high stakes, there's surprisingly little consensus on how—or whether—to respond to frontier AI development.
The polarized landscape features techno-optimists battling AI safety advocates, with the former dismissing the latter as "doomers" who exaggerate existential risks. Meanwhile, AI business leaders face criticism for potentially overstating their companies' capabilities to attract investors and secure favorable regulations that protect their market positions.
Democrats and civil rights advocates warn that a debate framed solely around catastrophic risks versus economic prosperity distracts from immediate harms like misinformation, algorithmic discrimination, and synthetic media abuse. U.S. regulatory efforts have struggled, with California's SB 1047 failing last year and Trump repealing Biden's AI Executive Order on inauguration day. Even the future of the U.S. government's AI Safety Institute remains uncertain under the new administration.
With a new administration in Washington, important questions linger: How should government approach AI's national security implications? Can corporate profit motives align with safer outcomes? And if the U.S. and China are locked in an AI arms race, is de-escalation possible, or are we heading toward a digital version of Mutually Assured Destruction?
Joining me to explore these questions are Dan Hendrycks, AI researcher and Director of the Center for AI Safety and co-author of "Superintelligence Strategy," a framework for navigating advanced AI from a national security and geopolitical perspective, and FAI's own Sam Hammond, Senior Economist and AI policy expert.
It’s an understatement that U.S.-China relations have been tense in recent years. Policymakers and industry leaders have elevated concerns around China’s trade practices, including currency manipulation, intellectual property theft, and allegations that China is directing or enabling fentanyl to flood into the U.S.
Trade and public health are increasingly linked, as COVID revealed the vulnerability of medical supply chains when U.S. overreliance on China led to delays and shortages of masks and personal protective equipment. Another issue that’s getting more attention from lawmakers and parents is the prevalence of Chinese-made, counterfeit electronic cigarettes or “vapes” throughout the U.S. Politicians from Senator Ashley Moody (R-FL) to President Trump himself have raised the alarm.
At the same time, American manufacturers have bemoaned the slow and stringent regulatory process they face at the FDA, which they say has enabled China to flood the market with cheap, sketchy alternatives. With a new FDA administrator set to take the helm, key questions remain: How did we end up in this situation, and what are the lessons not just for public health, but for other areas where the U.S. is looking to tighten up its trade policy? Is it possible for the U.S. to maintain the ideal of a relatively free market without adversaries exploiting that freedom?
Evan is joined by Joel Thayer, President of the Digital Progress Institute. You can read his op-ed on illicit vapes, the Bloomberg report we discuss in the episode, as well as Aiden Buzzetti’s op-ed in CommonPlace.
Since President Trump returned to office, tariffs have once again dominated economic policy discussions. Recent headlines have highlighted escalating trade tensions with China, renewed disputes with Canada and Mexico, and the ongoing controversy surrounding Trump’s proposal to repeal the CHIPS Act—a $52 billion semiconductor initiative that enjoys wide support in Congress as essential for U.S. technological independence.
But while tariffs capture public attention, beneath these headlines is a much broader debate over America's industrial strategy—how the nation can rebuild its manufacturing base, ensure economic prosperity, and strengthen national security in an increasingly competitive global environment. Critics argue that the shortcomings of recent attempts at industrial policy, such as the CHIPS Act, prove why government can’t outperform free markets.
Our guests today have a different view.
Marc Fasteau and Ian Fletcher of the Coalition for a Prosperous America authored a new book, Industrial Policy for the United States: Winning the Competition for Good Jobs and High-Value Industries. They argue that a bold, comprehensive industrial strategy is not only achievable but essential—calling for targeted tariffs, strategic currency management, and coordinated investments to rejuvenate American industry and secure the nation's future. But will their approach overcome the challenges of bureaucracy, political division, and international backlash? And can industrial policy truly deliver on its promise of economic renewal?
Everyone wants government to work better, and part of that is updating outdated systems and embracing modern technology. The problem? Our federal government faces a critical tech talent crisis. Only 6.3% of federal software engineers are under the age of 30, a smaller share than among federal workers overall, meaning federal tech talent skews older than the government's lawyers, economists, and other professionals. Not to mention, Silicon Valley pays 2-3x more than the federal government, which makes it hard to attract computer science majors into public service. The shortage threatens America's ability to navigate an era of technological disruption across AI, quantum computing, defense tech, and semiconductors.
While recent initiatives like Elon Musk's temporary team of young engineers and the $500 billion Stargate program highlight the urgency, they don't solve the fundamental problem: creating a sustainable pipeline of technical talent willing to take a pay cut for public service. This talent gap could hamper innovation even as AI attracts roughly 60% of venture funding. How can the private sector and federal government work together to bridge it?
Evan is joined by Arun Gupta, who pivoted from 18 years as a Partner at Columbia Capital investing in cybersecurity and AI startups to leading NobleReach Foundation, which works to bring some of the best assets of the private sector into public service. They explore how to bridge the gap between Silicon Valley and government service to ensure America can effectively regulate, adopt, and leverage emerging technologies for the national interest.
Fusion energy, a potential fuel source that could last a thousand years, is transitioning from science fiction to business reality. Helion Energy recently signed the first fusion power purchase agreement with Microsoft, promising 50 megawatts by 2028. But the story isn't just about the physics breakthroughs that make fusion possible. The U.S. and China are tussling for global leadership in fusion, as is the case in so many fields. And with China outspending the U.S. on fusion research by about $1.5 billion annually, concerns are growing that it could mount a serious challenge to America's lead. After all, while the U.S. pioneered advances in clean energy technologies like solar panels and EVs, America ultimately lost manufacturing leadership to China.
With fusion, the stakes could be much higher, given that fusion has the potential to be the world's "last energy source," with significant economic and national security implications. Evan is joined by Sachin Desai, General Counsel at Helion Energy and former Nuclear Regulatory Commission staffer, and Thomas Hochman, Director of Infrastructure Policy at FAI. They discuss the technical, regulatory, and geopolitical dimensions of what could be this decade's most consequential technology race.
Mark Zuckerberg sent shockwaves around the world when Meta announced the end of its fact-checking program in the U.S. on its platforms Facebook, Instagram, and Threads. Critics lamented the potential for more mis/disinformation online while proponents (especially conservatives) rejoiced, as they saw the decision as a rollback of political censorship and viewpoint discrimination. Beneath the hot takes lie bigger questions around who should control what we see online. Should critical decisions around content moderation that affect billions of users be left to the whims of Big Tech CEOs? If not, is government intervention any better—and could it even clear First Amendment hurdles? What if there is a third option between CEO decrees and government intrusion?
Enter middleware: third-party software that sits between users and platforms, potentially offering a "third way" beyond what otherwise appears to be a binary choice between CEO decrees and government intrusion. Middleware could enable users to select different forms of curation on social media from third parties—anyone from your local church to news outlets to political organizations. Could this technology put power back in the hands of users while addressing concerns about bias, misinformation, harassment, hate speech, and polarization?
Joining us are Luke Hogg, Director of Technology Policy at FAI, and Renée DiResta, Georgetown University professor and author of "Invisible Rulers: The People Who Turned Lies Into Reality." They break down their new paper, “Shaping the Future of Social Media with Middleware,” and explore whether this emerging technology could reshape our social media landscape for the better.
During the pandemic, Congress spent an unprecedented $190 billion to help reopen schools and address learning loss. But new test scores show the investment isn't paying off - fourth and eighth grade reading levels have hit record lows, worse even than during COVID's peak. As the Trump administration signals dramatic changes to federal education policy, from eliminating the Department of Education to expanding school choice, questions about federal involvement in education are moving from abstract policy debates to urgent national security concerns.
In part two of our series on education R&D, we explore these developments with Sarah Schapiro and Melissa Moritz of the Alliance for Learning Innovation, a coalition working to build better research infrastructure in education. Drawing on their extensive experience - from PBS Education to the Department of Education's STEM initiatives - they examine how shifting federal policy could reshape educational innovation and America's global competitiveness. Can a state-centered approach maintain our edge in the talent race? What role should the private sector play? And how can evidence-based practices help reverse these troubling trends in student achievement?
Joining them are FAI's Dan Lips and Robert Bellafiore, who bring fresh analysis on reforming federal education R&D to drive better outcomes for American students. This wide-ranging discussion tackles the intersection of education, national security, workforce development and technological innovation at a pivotal moment for American education policy.
During the pandemic from 2020 to 2021, Congress dropped $190 billion to help reopen schools, provide tutoring, and assist with remote learning. The results? Fourth graders' math scores have plummeted 18 points from 2019-2023, and eighth graders' scores have dropped 27 points - the worst decline since testing began in 1995. Adult literacy is deteriorating too, with the share of Americans in the lowest literacy tier jumping from 19% to 28% in just six years. Are we watching the collapse of academic achievement in real time?
In this episode, education policy veteran Chester Finn joins us to examine this crisis and potential solutions. Drawing on his experience as a Reagan administration official and decades of education reform work, Finn discusses why accountability measures have broken down, whether school choice can right the ship, and if the federal government's education R&D enterprise is fixable. Joining the conversation are FAI's Dan Lips and Robert Bellafiore, who recently authored new work on leveraging education R&D to help address America's learning challenges.
This is part one of a two-part series examining the state of American education and exploring paths forward as a new administration takes office with ambitious - and controversial - plans for reform.
The newly established Department of Government Efficiency (DOGE) has put state capacity back in the spotlight, reigniting debates over whether the federal government is fundamentally broken or just mismanaged. With Elon Musk at the helm, DOGE has already taken drastic actions, from shutting down USAID to slashing bureaucratic redundancies. Supporters argue this is the disruption Washington needs; critics warn it’s a reckless power grab that could erode public accountability. But regardless of where you stand, one thing is clear: the ability of the U.S. government to execute policy is now under scrutiny like never before.
That’s exactly the question at the heart of this week’s episode. From the Navy’s struggles to build ships to the Department of Education’s FAFSA disaster, our conversation lays out why the government seems incapable of delivering even on its own priorities. It’s not just about money or political will—it’s about outdated hiring rules, a culture of proceduralism over action, and a bureaucracy designed to say "no" instead of "go." These failures aren’t accidental; they’re baked into how the system currently operates. Jennifer Pahlka, former U.S. Deputy Chief Technology Officer under President Obama and Senior Fellow at the Niskanen Center, and Andrew Greenway, co-founder of Public Digital, join the conversation.
The solution? A fundamental shift in how government works—not just at the leadership level, but deep within agencies themselves. Pahlka advocates for cutting procedural bloat, giving civil servants the authority to make real decisions, and modernizing digital infrastructure to allow for rapid adaptation. Reform, she argues, isn’t about breaking government down; it’s about making it function like a system designed for the 21st century. Whether DOGE is a step in that direction or a warning sign of what happens when frustration meets executive power remains to be seen.
At Trump's second inauguration, one of the biggest stories, if not the biggest, was the front-row presence of Big Tech CEOs like Google’s Sundar Pichai and Meta’s Mark Zuckerberg—placed even ahead of Cabinet members. The plum seating signaled a striking shift in Silicon Valley's relationship with Washington, and just 24 hours later, the administration announced Stargate, a $500 billion partnership with OpenAI, Oracle, and other tech giants to build AI infrastructure across America.
But beneath the spectacle of billionaire CEOs at state functions lies a deeper question about the "Little Tech" movement—startups and smaller companies pushing for open standards, fair competition rules, and the right to innovate without being crushed by either regulatory costs or Big Tech copycats. As China pours resources into AI and semiconductors, American tech policy faces competing pressures: Trump promises business-friendly deregulation while potentially expanding export controls and antitrust enforcement against the very tech giants courting his favor.
To explore this complex new paradigm, Evan and FAI Senior Fellow Jon Askonas are joined by Garry Tan, CEO of Y Combinator, the startup accelerator whose alumni include Airbnb and DoorDash. As both a successful founder and venture capitalist, Tan discusses what policies could help startups thrive without tipping into overregulation, and whether Silicon Valley's traditionally progressive culture can adapt to Trump's tech alliances. You can read more about YC’s engagement with Washington, DC here.
Chinese AI startup DeepSeek’s release of AI reasoning model R1 sent NVIDIA and other tech stocks tumbling yesterday as investors questioned whether U.S. companies were spending too much on AI development. That’s because DeepSeek claims it made this model for only $6 million, a fraction of the hundreds of millions that OpenAI spent making o1, its nearest competitor. Any news coming out of China should be viewed with appropriate skepticism, but R1 nonetheless challenges the conventional American wisdom about AI development—that massive computing power and unprecedented investment will maintain U.S. AI supremacy.
The timing couldn't be more relevant. Just last week, President Trump unveiled Stargate, a $500 billion public-private partnership with OpenAI, Oracle, SoftBank, and Emirati investment firm MGX aimed at building AI infrastructure across America. Meanwhile, U.S. efforts to preserve its technological advantage through export controls face mounting challenges and skepticism. If Chinese companies can innovate despite restrictions on advanced AI chips, should the U.S. rethink its approach?
To make sense of these developments and their implications for U.S. technological leadership, Evan is joined by Tim Fist, Senior Technology Fellow at the Institute for Progress, a think tank focused on accelerating scientific, technological, and industrial progress, and FAI Senior Economist Sam Hammond.
As revelations about Meta's use of pirated books for AI training send shockwaves through the tech industry, the battle over copyright and AI reaches a critical juncture. In this final episode of The Dynamist's series on AI and copyright, Evan is joined by FAI's Senior Fellow Tim Hwang and Tech Policy Manager Joshua Levine to discuss how these legal battles could determine whether world-leading AI development happens in Silicon Valley or Shenzhen.
The conversation examines the implications of Meta's recently exposed use of Library Genesis - a shadow library of pirated books - to train its LLaMA models, highlighting the desperate measures even tech giants will take to source training data. This scandal crystallizes a core tension: U.S. companies face mounting copyright challenges while Chinese competitors can freely use these same materials without fear of legal repercussions. The discussion delves into potential policy solutions, from expanding fair use doctrine to creating new statutory licensing frameworks, that could help American AI development remain competitive while respecting creator rights.
Drawing on historical parallels from past technological disruptions like Napster and Google Books, the guests explore how market-based solutions and policy innovation could help resolve these conflicts. As courts weigh major decisions in cases involving OpenAI, Anthropic, and others in 2024, the episode frames copyright not just as a domestic policy issue, but as a key factor in national technological competitiveness. What's at stake isn't just compensation for creators, but whether IP disputes could cede AI leadership to nations with fewer or no constraints on training data.
In the third installment of The Dynamist's series exploring AI and copyright, FAI Senior Fellow Tim Hwang leads a forward-looking discussion about how market dynamics, technological solutions, and geopolitics could reshape today's contentious battles over AI training data. Joined by Jason Zhao, co-founder of Story AI, and Jamil Jaffer, Executive Director of the National Security Institute at George Mason University, the conversation moves beyond current lawsuits to examine practical paths forward.
The discussion challenges assumptions about who really stands to gain or lose in the AI copyright debate. Rather than a simple creator-versus-tech narrative, Zhao highlights how some creators and talents have embraced AI while others have shown resistance and skepticism. Through Story's blockchain-based marketplace, he envisions a world where creators can directly monetize their IP for AI training without going through traditional gatekeepers. Jaffer brings a crucial national security perspective, emphasizing how over-regulation of AI training data could threaten American technological leadership - particularly as the EU prepares to implement strict new AI rules that could effectively set global standards.
Looking ahead to 2025, both guests express optimism about market-based and technological solutions winning out over heavy-handed regulation. They draw parallels to how innovations like Spotify and YouTube's Content ID ultimately resolved earlier digital disruptions. However, they warn that the US must carefully balance innovation and IP protection to maintain its AI edge, especially as competitors like China take a more permissive approach to training data. The episode frames copyright not just as a domestic policy issue, but as a key factor in national competitiveness and security in the AI era.
From the SAG-AFTRA picket lines to the New York Times lawsuit against OpenAI, the battle over AI's role in creative industries is heating up. In this second episode of The Dynamist's series on AI and copyright, we dive into the messy reality of how artists, creators, and tech companies are navigating this rapidly evolving landscape.
Our guests bring unique perspectives to this complex debate: Mike Masnick, CEO of Techdirt, who's been chronicling the intersection of tech and copyright for decades; Alex Winter, the filmmaker and actor known for Bill & Ted's Excellent Adventure, who offers boots-on-the-ground insight from his involvement in recent Hollywood labor negotiations; and Tim Hwang, Senior Fellow at FAI, who explores how current legal battles could shape AI's future.
The conversation covers everything from "shakedown" licensing deals between AI companies and media outlets to existential questions about artistic value in an AI age. While the guests acknowledge valid concerns about worker protection and fair compensation, they challenge the notion that restricting AI development through copyright law is either practical or beneficial. Drawing parallels to past technological disruptions like Napster, they explore how industries might adapt rather than resist change while still protecting creators' interests.
Copyright law and artificial intelligence are on a collision course, with major implications for the future of AI development, research, and innovation. In this first episode of The Dynamist's four-part series exploring AI and copyright, we're joined by Professor Pamela Samuelson of Berkeley Law, a pioneering scholar in intellectual property law and a leading voice on copyright in the digital age. FAI Senior Fellow Tim Hwang guest hosts.
The conversation covers the wave of recent lawsuits against AI companies, including The New York Times suit against OpenAI and litigation facing Anthropic, NVIDIA, Microsoft, and others. These cases center on two key issues: the legality of using copyrighted materials as training data and the potential for AI models to reproduce copyrighted content. Professor Samuelson breaks down the complex legal landscape, explaining how different types of media (books, music, software) might fare differently under copyright law due to industry structure and existing precedent.
Drawing on historical parallels from photocopying to the Betamax case, Professor Samuelson provides crucial context for understanding today's AI copyright battles. She discusses how courts have historically balanced innovation with copyright protection, and what that might mean for AI's future. With several major decisions expected in the coming months, including potential summary judgments, these cases could reshape the AI landscape - particularly for startups and research institutions that lack the resources of major tech companies.
As we approach the three-year mark of the war in Ukraine, and conflict continues to rage in the Middle East, technology has played a key role in these arenas—from cyber attacks and drones to propaganda efforts over social media. In Ukraine, SpaceX’s Starlink has blurred the lines between commercial and military communications, with the satellite broadband service supporting the Ukrainian army while becoming a target for signal jamming by Russia. What can we learn from these conflicts in Europe and the Middle East? What role will cyber and disinformation operations play in future wars? What has Ukraine taught us about the U.S. defense industrial base and defense technology? As China increases its aggression toward Taiwan and elsewhere in the Indo-Pacific, how will technology play a role in either deterring a conflict or deciding its outcome?
Evan is joined by Kevin B. Kennedy, a recently retired United States Air Force lieutenant general who last served as commander of the Sixteenth Air Force. He previously served as Director for Operations at U.S. Cyber Command.
2024 has been a whirlwind year for tech policy, filled with landmark moments that could shape the industry for years to come. From the high-profile antitrust lawsuits aimed at Big Tech to intense discussions around data privacy and online safety for kids, the spotlight on how technology impacts our daily lives has never been brighter. Across the Atlantic, Europe continued its aggressive regulatory push, rolling out new frameworks with global implications. Meanwhile, back in the U.S., all eyes are on what changes might come to tech regulation after the election.
With all this upheaval, one thing remains constant: people love posting their Spotify Wrapped playlists at the end of the year. It’s a fun way to reflect on the hits (and maybe a few misses) of the past twelve months, so we thought, why not take a similar approach to tech policy?
In this episode of The Dynamist, Evan is joined by Luke Hogg, FAI’s Director of Policy and Outreach, and Josh Levine, FAI’s Tech Policy Manager, for a lively conversation breaking down the year’s biggest stories. Together, they revisit the key moments that defined 2024, from courtroom dramas to legislative battles, and share their thoughts on what’s next for 2025. Will AI regulations dominate the agenda? Could new leaders at U.S. agencies take tech in a bold new direction? Tune in to hear their reflections, predictions, and maybe even a few hot takes as they wrap up 2024 in tech policy.
There is growing concern among parents and policymakers over the Internet’s harms to children—from online pornography to social media. Despite that, Congress hasn’t passed any legislation on children’s online safety in decades. And while psychologists continue to debate whether and to what extent certain Internet content harms children, several states have stepped into the fray, passing legislation aimed at protecting kids in the digital age. One such state is Texas, where Governor Greg Abbott signed HB 1181 in June of 2023.
The bill requires adult or online pornography websites to verify the age of users to prevent users under the age of 18 from accessing those sites. A group representing online porn sites sued, and the bill was enjoined by a district court, then partially upheld by the Fifth Circuit, and will now be heard by the Supreme Court in Free Speech Coalition v. Paxton, with oral arguments scheduled for January 15.
The ruling in this case could have major implications for efforts to regulate the online world at both the state and federal level—not just porn but other online content, including social media. On today’s show, Evan moderates a debate on the following resolution: Texas's Age Verification (AV) Law is Constitutional and AV laws are an effective means of protecting children from online harms.
Arguing for the resolution is Adam Candeub, senior fellow at the Center for Renewing America, professor of law at Michigan State University, and former acting assistant secretary of commerce for communications and information under President Trump. Arguing against the resolution is Robert Corn-Revere, chief counsel at the Foundation for Individual Rights and Expression (FIRE). Before that, he was a partner at the law firm Davis Wright Tremaine for 20 years and served in government as chief counsel to former Federal Communications Commission Chairman James Quello. You can read FIRE’s brief in the case here.
Is Medicare a valley of death for medical innovation? While the U.S. is seen as a global leader in medical device innovation, the $800+ billion program that covers healthcare costs for senior citizens has been slow to reimburse certain medical devices, even when those devices are approved by the Food and Drug Administration. On average, it takes Medicare 4.5 years to cover a new FDA-approved medical device. This length of time has been dubbed the “Valley of Death,” referring to the human cost of delay.
While members of Congress and advocates in the med tech industry are pushing Medicare to streamline its process, the Centers for Medicare & Medicaid Services (CMS) has sounded a note of caution, warning that moving too quickly would fail to account for the unique needs and considerations of the Medicare population, Americans over 65 years old.
Is this simply bureaucratic foot-dragging, or are there legitimate safety and health risks in Medicare giving its blessing to new technologies and treatments? Is there a policy balance to be struck, where government health officials give seniors the unique consideration they need without denying them access to potentially life-saving treatments and devices?
Evan is joined by Katie Meyer, Vice President of Public Affairs at Novocure, a global oncology company working to extend survival in some of the most aggressive forms of cancer. Before that, she served in various roles in Congress, including as Deputy Health Policy Director at the Senate Finance Committee.
The Federal Trade Commission (FTC) is a once-sleepy, three-letter agency in Washington that serves as the nation’s general-purpose consumer protection regulator—dealing with everything from deceptive advertising to fraud. In recent years, however, the FTC has become something of a household name thanks to current chair Lina Khan and high-profile cases against tech giants Microsoft, Meta, and Amazon. While some populists on the right and left have praised the agency for taking on big business, others, particularly in the business community, have railed against it for an anti-business stance and for blocking legitimate mergers and acquisitions.
Conservatives and Republicans have generally been skeptical of antitrust enforcement and government regulation, but in recent years they have been rethinking how to apply their philosophy in an era when trillion-dollar tech behemoths could be threats to online free speech. And as concerns around other tech issues like data privacy and children’s online safety continue to persist, the FTC sits at the center of it all as the nation’s de facto tech regulator. Is there a balance to be struck between Khan’s aggressive enforcement and the lax treatment preferred by the business world? And how should the agency tackle challenges like artificial intelligence?
Who better to help answer these questions than one of the agency’s five commissioners? Evan is joined by Andrew Ferguson, one of two Republican commissioners at the FTC. Prior to that, he was solicitor general of Virginia and chief counsel to Senate Republican Leader Mitch McConnell.
President-elect Trump recently announced that entrepreneurs Elon Musk and Vivek Ramaswamy will lead the Department of Government Efficiency. Musk had floated the idea toward the end of the presidential campaign, championing a commission focused on cutting government spending and regulation. In a statement posted to Truth Social, the president-elect said DOGE would “pave the way for my administration to dismantle government bureaucracy, slash excess regulations, cut wasteful expenditures, and restructure federal agencies.” For his part, Musk said “this will send shockwaves through the system, and anyone involved in government waste, which is a lot of people.”
Government waste has long been a focus for Republicans in Washington. The phrase “waste, fraud, and abuse” often generates a chuckle in DC circles, given how much the federal bureaucracy, government spending, and the national debt have grown despite decades of professed fiscal hawkishness. While critics of Trump and Musk are rolling their eyes at what they perceive as a toothless commission, proponents welcome the focus on government efficiency from the president-elect and the world’s richest man, and are optimistic that Musk and Ramaswamy’s expertise in the business world would bring much-needed outside perspectives on how to optimize the federal government.
The Foundation for American Innovation has operated a project on government efficiency and tech modernization since 2019. FAI fellows just published a new paper on the topic of “An Efficiency Agenda for the Executive Branch.” To discuss DOGE, the challenges of streamlining bureaucracy, how AI might play a role in the efforts, and what Congress can do to help make DOGE a success, Evan is joined by Sam Hammond, Senior Economist at FAI and Dan Lips, Head of Policy at FAI. For a quick take on FAI's recommendations, check out Dan's oped in The Hill linked here.
Donald Trump won the 2024 presidential election, Republicans won control of the Senate, and the GOP is slated to maintain control of the House. If you turn on cable news, you will see many pundits playing Monday morning quarterback in the wake of this Republican trifecta, arguing about the merits of how people voted, speculating on cabinet secretaries, and pointing fingers over who deserves blame, or credit, for the results.
But this is The Dynamist, not CNN. In today’s show, we focus on what the results mean for tech policy and tech politics. There are ongoing antitrust cases against Meta, Google, Apple, and Amazon, and investigations into Microsoft, OpenAI, and Nvidia. How might the new president impact those cases? Congress is considering legislation to protect children from the harms of social media. Will we see action in the lame duck session or will the issue get kicked to January when the new Congress settles in? What about AI? Trump has vowed to repeal Biden’s Executive Order on artificial intelligence. What, if anything, might replace it? And for those in Silicon Valley who supported Trump, from Elon Musk to Peter Thiel, how might they wield influence in the new administration?
Evan is joined by Nathan Leamer, CEO of Fixed Gear Strategies and Executive Director of Digital First Project, and Ellen Satterwhite, Senior Director at Invariant, a government relations and strategic communications firm in DC. Both Nathan and Ellen previously served in government at the Federal Communications Commission—Nathan under President Trump and Ellen under President Obama.
When people hear 'quantum physics,' they often think of sci-fi movies using terms like 'quantum realm' to explain away the impossible. But today we're talking about quantum computing, which has moved beyond science fiction into reality. Companies like IBM and Google are racing to build machines that could transform medicine, energy storage, and our understanding of the universe.
But there's a catch: these same computers could potentially break most of the security protecting our digital lives, from WhatsApp messages to bank transfers to military secrets. To address this threat, the National Institute of Standards and Technology recently released quantum-safe cryptography standards, while new government mandates are pushing federal agencies to upgrade their security before quantum computers become cryptographically relevant—in other words, capable of breaking today's encryption.
To help us understand both the promise and peril of quantum computing, we're joined by Travis Scholten, Technical Lead in the Public Sector at IBM and former quantum computing researcher at the company. He’s also a former policy hacker at FAI, author of the Quantum Stack newsletter and co-author of a white paper on the benefits and risks of quantum computers.
When voters head to the polls next week, tech policy won't be top of mind—polling shows immigration, the economy, abortion, and democracy are the primary concerns. Yet Silicon Valley's billionaire class is playing an outsized role in this election, throwing millions at candidates and super PACs while offering competing visions for America's technological future.
The tech industry is in a much different place in 2024 than in past elections. Big Tech firms, who once enjoyed minimal government oversight, now face a gauntlet of regulatory challenges—from data privacy laws to antitrust lawsuits. While some tech leaders are hedging their bets between candidates, others are going all in for Harris or Trump—candidates who offer different, if not fully developed, approaches to regulation and innovation.
Trump's vision emphasizes a return to American technological greatness with minimal government interference, attracting support from figures like Elon Musk and Marc Andreessen despite Silicon Valley's traditionally Democratic lean. Harris presents a more managed approach, a generally pro-innovation stance tempered by a desire for government to help shape AI and other tech outcomes. Democratic donors like Mark Cuban and Reid Hoffman are backing Harris while hoping she'll soften Biden's tough antitrust stance. Meanwhile, crypto billionaires are flexing their political muscle, working to unseat skeptics in Congress after years of scrutiny under Biden's financial regulators.
What are these competing visions for technology, and how would each candidate approach tech policy if elected? Will 2024 reshape the relationship between Silicon Valley and Washington? Evan is joined by Derek Robertson, a veteran tech policy writer who authors the Digital Future Daily newsletter for Politico.
*Correction: The audio clip of Trump was incorrectly attributed to his appearance on the Joe Rogan Experience. The audio is from Trump’s appearance on the Hugh Hewitt Show.
Over the past few years, Elon Musk’s political evolution has been arguably as rapid and disruptive as one of his tech ventures. He has transformed from a political moderate into a vocal proponent of Donald Trump and the MAGA movement, and his outspokenness on issues like illegal immigration makes him an outlier among tech entrepreneurs and CEOs.
Musk's increasing political involvement has added a layer of scrutiny to his businesses, particularly as SpaceX aims to secure more contracts and regulatory permissions. Labor tensions also loom, with Tesla facing unionization efforts and accusations of unfair labor practices, adding a wrinkle to an election in which both presidential candidates are vying for the labor vote amid several high-profile strikes this year.
Through all this, Musk’s companies—SpaceX, Tesla, and X—are pressing forward, but the stakes have arguably never been higher with regulatory bodies and the court of public opinion keeping a close watch. Many conservatives have embraced Musk as a Randian hero of sorts, a champion of free speech and innovation. Others sound a note of caution, warning that his emphasis on “efficiency” could undermine certain conservative values, and question whether his record on labor and China are worth celebrating. So, should conservatives embrace, or resist, Musk-ification?
Evan is joined by Chris Griswold, Policy Director at American Compass, a New Right think tank based in DC. Check out his recent piece, “Conservatives Must Resist Musk-ification.” Previously, he served as an advisor to U.S. Senator Marco Rubio, where he focused on innovation, small business, and entrepreneurship.
Have tech companies become more powerful than governments? As the size and reach of firms like Google and Apple have increased, there is growing concern that these multi-trillion dollar companies are too powerful and have started replacing important government functions.
The products and services of these tech giants are ubiquitous and pillars of modern life. Governments and businesses increasingly rely on cloud services like Microsoft Azure and Amazon Web Services to function. Elon Musk's Starlink has provided internet access in the flood zones of North Carolina and the battlefields of Ukraine. Firms like Palantir are integrating cutting-edge AI into national defense systems.
In response to these rapid changes, and the concerns they raise, regulators in Europe and the U.S. have proposed various measures—from antitrust actions to new legislation like the EU's AI Act. Critics warn that overzealous regulation could stifle the very innovation that has driven economic growth and technological advancement, potentially ceding Western tech leadership to China. Others, like our guest, argue that these actions to rein in tech don’t go nearly far enough, and that governments must do more to take back the power that, she argues, tech companies have seized from nation-states.
Evan and Luke are joined by Marietje Schaake, a former MEP and current fellow at Stanford’s Cyber Policy Center. She is the author of The Tech Coup: How to Save Democracy from Silicon Valley. You can read her op-ed in Foreign Affairs summarizing the book.
On September 29th, Governor Newsom vetoed SB 1047, a controversial bill aimed at heading off catastrophic risks of large AI models. We previously covered the bill on The Dynamist in episode 64. In a statement, Newsom cited the bill’s “stringent standards to even the most basic functions” and said he does “not believe this is the best approach to protecting the public from real threats posed by the technology.” Senator Scott Wiener, the bill’s author, responded, “This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policymakers[.]”
The bill had passed the California senate back in August by a vote of 30-9, having been the subject of fierce debate between AI companies big and small on one side and researchers and advocates who fear a catastrophic AI event on the other. Proponents want to get ahead of AI cyberattacks, AI weapons development, and doomsday scenarios by requiring developers to implement safety protocols and holding them liable if they fail to do so. Opponents argue that the bill will stifle innovation in California, calling it an “assault on open source” and a “harm to the budding AI ecosystem.”
Aside from the merits of the legislation, it is arguably the first major political fight over AI in the U.S. where competing interests fought all the way to the governor’s desk, attempting to sway the pen of Governor Newsom. The story featured a cast of characters from California Democrats like Nancy Pelosi to billionaires like Elon Musk to major companies like Google and OpenAI. What does this battle say about who holds sway in emerging AI politics? What are the factions and alignments? And what does this all mean for next year in California and beyond?
Evan is joined by Sam Hammond, Senior Economist at FAI and author of the Substack Second Best, and Dean Ball, a research fellow at the Mercatus Center, author of the Substack Hyperdimensional, and a non-resident fellow at FAI.
Since the advent of platforms like Uber, Instacart, and DoorDash, the so-called gig economy has been intertwined with technology. While the apps no doubt created loads of opportunity for people seeking flexible work on their own schedules, they have been lambasted by critics who say they don’t provide drivers and grocery shoppers with a minimum wage and health benefits.
This tech-labor debate has largely played out in state legislatures and in the courts. Voters have weighed in as well, with gig companies DoorDash and Lyft spending some $200 million to win the Prop 22 ballot initiative in California that exempted their workers from new labor laws. Should Uber be forced to provide benefits to its drivers? Or should government stay out and let these markets continue to operate?
As labor leaders and progressive lawmakers continue to battle with the companies, and governments, companies, and unions struggle to apply old principles to an increasingly digital economy, some argue for a third way, including our guest today. Wingham Rowan is the founder and managing director of Modern Markets for All, a non-profit that develops infrastructure for people working outside of traditional 9-5 jobs. Prior to that, he was a TV host and producer at the BBC. Read more about his work at PeoplesCapitalism.org.
When the average person thinks of nuclear energy, there’s a good chance they’re thinking in terms influenced by pop culture—Homer Simpson’s union job at the Springfield plant, or the HBO miniseries Chernobyl, which dramatized the world’s biggest meltdown.
For all its promise in the mid-20th century, U.S. nuclear energy largely stalled in the 1970s and 80s. While public anxiety over its safety played a role, experts have pointed to the hefty cost of building plants and poor regulatory/policy decisions as having more impact. But in recent years, as demand for low-carbon energy surges and companies like OpenAI, Microsoft, and Google are burning through energy to train artificial intelligence, there is a renewed interest in making nuclear work in this century.
But concerns over cost and safety remain, and even among proponents of nuclear energy, there is a robust debate about exactly how to approach future builds, whether to rely on conventional methods or hold off until new research potentially yields a smaller, more cost-effective method of unlocking atomic energy. What is the state of nuclear power in the U.S. and around the world today? What policies could shape its future? And how might AI, other market dynamics, geopolitics, and national security concerns impact the debate and its outcomes?
Evan is joined by Emmet Penney, the creator of Nuclear Barbarians, a newsletter and podcast about industrial history and energy politics, and a contributing editor at COMPACT magazine. Thomas Hochman, Policy Manager at FAI, also joins. You can read Emmet’s recent piece on why nuclear energy is a winning issue for the populist GOP here. You can read Thomas’s piece for The New Atlantis on “nuclear renaissance” here, and his writeup of the ADVANCE Act here.
The recent riots in the United Kingdom raise new questions about online free speech and misinformation. Following the murder of three children in Southport, England, false rumors spread across social media about the killer’s identity and religion, igniting simmering resentment over the British government’s handling of immigration in recent years. X, formerly Twitter, has come under fire for allowing the rumors to spread, and the company’s owner Elon Musk has publicly sparred with British politicians and European Union regulators over the issue.
The incident is the latest in an ongoing debate abroad and in the U.S. about free speech and the real-world impact of online misinformation. In the U.S., politicians have griped for years about the content policies of major platforms like YouTube and Facebook—generally with conservatives complaining the companies are too censorious and liberals bemoaning that they don’t take down enough misinformation and hate speech.
Where should the line be? Is it possible for platforms to respect free expression while removing “harmful content” and misinformation? Who gets to decide what is true and false, and what role, if any, should the government play? Evan is joined by Renée DiResta, who studies and writes about adversarial abuse online. Previously, she was a research manager at the Stanford Internet Observatory, where she researched and investigated online political speech and foreign influence campaigns. She is the author of Invisible Rulers: The People Who Turn Lies into Reality. Read her recent op-ed in the New York Times here.
Minnesota Governor Tim Walz has made headlines for being picked as Vice President Kamala Harris’s running mate. One underreported aspect of his record is signing Minnesota’s first “right to repair” law last year. The bill took effect last month.
The concept sounds simple enough: if you buy something like a phone or a car, you should have the right to fix it. But as our world becomes more digitized, doing it yourself, or having your devices repaired by third-party mechanics or cell phone shops, can be complicated. Everything from opening a car door to adjusting your refrigerator can now involve complex computer code, giving manufacturers more control over whether, and how, devices can be repaired.
Frustrations over this dynamic sparked the “right to repair” movement, which advocates for legislation to require manufacturers to provide parts, tools, and guides to consumers and third parties. While powerful companies like John Deere and Apple have cited cybersecurity and safety concerns about farmers and iPhone users tinkering with their devices, right-to-repair advocates say irreparability undermines consumer rights, leads to higher prices and worse quality, and harms small businesses that provide third-party repair services.
As more states continue to adopt and debate these laws, which industries will be impacted? And will the federal government consider imposing the policy nationwide? Evan and Luke are joined by Kyle Wiens, perhaps the most vocal proponent of the right to repair in the U.S. Wiens is the co-founder and CEO of iFixit, which sells repair parts and tools and provides free how-to guides online. Read Kyle’s writing on repair rights and copyright in Wired and his article in The Atlantic on how his grandfather helped influence his thinking. See Luke’s piece in Reason on how the debate impacts agriculture.
OpenAI unleashed a controversy when the famed maker of ChatGPT debuted its new voice assistant Sky. The problem? For many, her voice sounded eerily similar to that of Scarlett Johansson, who had ironically starred in the dystopian movie Her as the voice of a virtual assistant with whom a man, played by Joaquin Phoenix, develops a romantic relationship. While OpenAI claimed that Sky’s voice belonged to a different actress, the company pulled it down shortly after the launch given the furor from Johansson and the creative community. But a flame had already been lit in the halls of Congress, as the controversy has inspired multiple pieces of legislation dealing with serious questions raised by generative AI.
Should AI companies be allowed to train their models without compensating artists? What exactly is “fair use” when it comes to AI training and copyright? What are the moral and ethical implications of training AI products with human-created works when those products could compete with, or replace, those same humans? What are the potential consequences of regulation in this area, especially as the U.S. government wants to beat out China in the race for global AI supremacy?
Evan is joined by Josh Levine, Tech Policy Manager at FAI, and Luke Hogg, Director of Policy and Outreach at FAI. Read Josh’s piece on the COPIED Act here, and Luke’s piece on the NO AI FRAUD Act here.
Trump’s pick of J.D. Vance as his running mate is seen by many as the culmination of a years-long realignment of Republican and conservative politics—away from trickle-down economics toward a more populist, worker-oriented direction. While the pick ushered in a flood of reactions and think pieces, it’s unclear at this stage what Vance’s impact would truly be in a Trump second term. Will Vance be able to overcome some of Trump’s more establishment-friendly positions on taxes and regulation? Will he advocate that Trump continue some of Biden’s policies on tech policy, particularly the administration’s actions against companies like Google, Amazon, and Apple? How might Vance influence policies on high-tech manufacturing, defense technology, and artificial intelligence?
Evan is joined by Oren Cass, Chief Economist and Founder of American Compass and the author of The Once and Future Worker: A Vision for the Renewal of Work in America. Read his recent op-ed in the New York Times on populism and his recent piece in Financial Times on Vance. Subscribe to his Substack, “Understanding America.”
Evan is also joined by Marshall Kosloff, co-host of The Realignment podcast, sponsored by FAI, that has been chronicling the shifting politics of the U.S. for several years, as well as by Jon Askonas, professor of politics at Catholic University and senior fellow at FAI.
On July 1, the Supreme Court issued a 9-0 ruling in Moody v. NetChoice, a case on Florida and Texas’s social media laws aimed at preventing companies like Facebook and YouTube from discriminating against users based on their political beliefs. The court essentially kicked the cases back down to the lower courts, the Fifth and Eleventh Circuits, because they hadn’t fully explored the First Amendment implications of the laws, including how they might affect direct messages or services like Venmo and Uber. While both sides declared victory, the laws remain enjoined until the lower courts complete their review on remand, and a majority of justices suggested in their opinions that regulating the news feeds and content algorithms of social media companies would likely violate the firms’ First Amendment rights. Other justices, like Samuel Alito, argued the ruling is narrow and leaves the door open for states to try to regulate content moderation.
So how will the lower courts proceed? Will any parts of the Florida and Texas laws stand? What will it mean for the future of social media regulation? And could the ruling have spillover effects into other areas of tech regulation, such as efforts to restrict social media for children or impose privacy regulations? Evan and Luke are joined by Daphne Keller, Platform Regulation Director at Stanford’s Cyber Policy Center. Previously, she was Associate General Counsel at Google where she led work on web search and other products. You can read her Wall Street Journal op-ed on the case here and her Lawfare piece here.
It’s time for American industry’s Lazarus moment. At least, that’s what a growing coalition of contrarian builders, investors, technologists, and policymakers have asserted over the past several years.
American might was built on our industrial base. As scholars like Arthur Herman detail in Freedom’s Forge, the United States won World War II with industrial acumen and might. We built the broadest middle class in the history of the world, put men on the moon, and midwifed the jet age, the Internet, semiconductors, green energy, revolutionary medical treatments, and more in less than a century.
But the optimism that powered this growth is fading, and our public policy ecosystem has systematically deprioritized American industry in favor of quick returns and cheap goods from our strategic competitors. Is there a way to restore our domestic industry? What does movement-building in this space look like?
We're joined by Austin Bishop, a partner at Tamarack Global, co-founder of Atomic Industries, and co-organizer of REINDUSTRIALIZE, and Jon Askonas, Senior Fellow with FAI and Professor of Politics at the Catholic University of America. You can follow Austin on X here and Jon here. Read more about REINDUSTRIALIZE and the New American Industrial Alliance here and check out some of Jon's research on technological stagnation for American Affairs here.
For this special edition episode, FAI Senior Fellow Jon Askonas flew down to Palm Bay, FL to mix and mingle with the brightest minds in aerospace, manufacturing, and defense at the Space Coast Hard Tech Hackathon, organized by stealth founder Spencer Macdonald (also an FAI advisor).
Jon sits down with a friend of the show and Hyperstition founder Andrew Côté for a wide-ranging conversation on the space tech revolution, the “vibe shift” toward open dialogue, AI’s role in shaping reality, and the challenges Silicon Valley faces in fostering new innovation. They critique regulatory moats that hamper entrepreneurship and safetyism’s risk to progress, and explore the concept of “neural capitalism,” where AI enhances decentralized decision-making.
You can follow Jon at @jonaskonas and Andrew at @andercot. Andrew recently hosted Deep Tech Week in San Francisco, and he's gearing up to host the next one in New York City.
Silicon Valley was once idolized for creating innovations that seemed like modern miracles. But the reputations of tech entrepreneurs have been trending downward of late, as Big Tech companies are blamed for any number of societal ills, from violating users’ privacy and eroding teenagers’ mental health, to spreading misinformation and undermining democracy.
As the media and lawmakers focus on modern gripes with Big Tech, the origin stories of companies like Meta and Google feel like ancient history or almost forgotten. Our guest today argues that these stories, filled with youthful ambitions and moral tradeoffs—even “original sins”—help explain how the companies came to be, amass profits, and wield power. And the lessons learned could provide a path for more responsible innovations, especially as the gold rush for artificial intelligence heats up.
Evan is joined by Rob Lalka, Professor at Tulane University’s Freeman School of Business and Executive Director of the Albert Lepage Center for Entrepreneurship and Innovation. He is the author of a new book, The Venture Alchemists: How Big Tech Turned Profits Into Power. Previously he served in the U.S. State Department.
A scrappy startup out-innovates an industry giant, which must adapt or die. This is how many assume the tech economy is supposed to work: big, established companies are at risk of getting disrupted as they get set in their ways; their internal bureaucracies grow too large, and they lose their nimbleness and take fewer risks. The pressure from upstarts forces larger firms to innovate—otherwise, they lose market share and may even fold.
But is that how it works in practice? An increasing share of policymakers believe Big Tech giants don’t face meaningful competition because their would-be competitors get bought, copied, or co-opted by essentially the same five companies: Google, Amazon, Apple, Meta, and Microsoft. While antitrust regulators have been focusing a lot on what they believe are “killer acquisitions,” such as then-Facebook buying Instagram, there seems to be less focus on what some experts call “co-opting disruption,” where large firms seek to influence startups and steer them away from potentially disruptive innovations. So what does that look like in practice? And is this a fair characterization of how the tech market works?
Evan is joined by Adam Rogers, senior tech correspondent at Business Insider. Prior to that, he was a longtime editor and writer at Wired Magazine. You can read his article on co-opting disruption, “Big Tech’s Inside Job,” here. He is also the author of Full Spectrum: How the Science of Color Made Us Modern.
Tornado Cash is a decentralized cryptocurrency mixing service built on Ethereum. Its open-source protocol allows users to obscure the trail of their cryptocurrency transactions by pooling funds together, making it difficult to trace the origin and destination of any given transfer.
In August 2022, the U.S. Treasury Department took the unprecedented step of sanctioning Tornado Cash, effectively criminalizing its use by American citizens and businesses. Authorities accused the service of facilitating money laundering, including processing hundreds of millions in stolen funds linked to North Korean hackers. In the wake of the sanctions, Tornado Cash's website was taken down, its GitHub repository removed, and one of its developers arrested in Amsterdam.
The crackdown has sent shockwaves through the crypto and privacy advocacy communities. Proponents argue that Tornado Cash is a neutral tool, akin to VPNs or Tor, with many legitimate uses beyond illicit finance. They warn that banning a piece of code sets a dangerous precedent and undermines fundamental rights to privacy and free speech. On the other hand, regulators contend that mixers like Tornado Cash have become a haven for cybercriminals and rogue state actors, necessitating more aggressive enforcement.
As the legal and political battle unfolds, Coin Center, a leading crypto policy think tank, has taken up the mantle of defending Tornado Cash and its users. Director of Research Peter Van Valkenburgh, who also serves as a board member for Zcash, joins The Dynamist to walk through this crackdown and its implications for decentralized finance and open-source software. Luke Hogg, director of policy and outreach, guest hosts this episode. You can read more from Peter on this issue here.
Social media undermines democracy. Small businesses are more innovative than big ones. Corporate profits are at all-time highs. America’s secret weapon is laissez-faire capitalism. These are widely held beliefs, but are they true? Our guest today argues that these statements aren’t just wrong, but that they’re holding America back—discouraging talented people from entering the technology field and making companies too cautious and wary of regulators. Is America losing its faith in innovation? If so, what can companies and governments do to turn the tide? Has America’s “free-market” really been as free as we think, and what can policymakers learn from Alexander Hamilton when it comes to industrial policy?
Evan is joined by Robert Atkinson, Founder and President of the Information Technology and Innovation Foundation, an independent, nonpartisan research and educational institute, often recognized as the world’s leading think tank on science and tech policy. He is also the co-author of Technology Fears and Scapegoats: 40 Myths about Privacy, Jobs, AI, and Today’s Innovation Economy. Read his article on Hamiltonian industrial policy here.
Is America ready for the next pandemic? The answer is a resounding “no,” according to a recent Washington Post editorial. When the U.S. was caught flat-footed and unprepared to deal with COVID-19, many lawmakers vowed to address pandemic preparedness. Yet, according to many experts, these efforts are inadequate, and interest among lawmakers in preparedness has waned as focus has shifted to wars around the world and other geopolitical conflicts.
With bio threats emerging at an accelerating pace, and as biotechnology becomes more available, how can companies and governments address these threats?
Evan is joined by Swati Sureka, Strategic Communications Lead at Ginkgo Bioworks, a U.S. biotech firm that partners with the CDC and other organizations to monitor for emerging pathogens worldwide. She’s the co-author of a new paper laying out a roadmap for a global pathogen surveillance infrastructure, “A New Paradigm for Threat Agnostic Biodetection: Biological Intelligence.”
When it comes to AI regulation, states are moving faster than the federal government. While California is the hub of American AI innovation (Google, OpenAI, Anthropic, and Meta are all headquartered in the Valley), the state is also poised to enact some of the strictest state regulations on frontier AI development.
Introduced on February 8, the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act (SB 1047) is a sweeping bill that would include a new regulatory division and requirements that companies demonstrate their tech won’t be used for harmful purposes, such as building a bioweapon or aiding terrorism.
SB 1047 has generated intense debate within the AI community and beyond. Proponents argue that robust oversight and safety requirements are essential to mitigate the catastrophic risks posed by advanced AI systems. Opponents contend that the scope is overbroad and that the compliance burdens and legal risks will advantage incumbent players over smaller and open-source developers.
Evan is joined by Brian Chau, Executive Director of Alliance for the Future and Dean Ball, a research fellow at the Mercatus Center and author of the Substack Hyperdimensional. You can read Alliance for the Future’s call to action on SB 1047 here. And you can read Dean’s analysis of the bill here. For a counter argument, check out a piece by AI writer Zvi Mowshowitz here.
Is America in a new Cold War with China? If so, who is winning? One of the defining features of the 21st century has been the intensifying competition between the United States and the Chinese Communist Party. As the two superpowers jockey for global influence, China threatens to dislodge America’s longstanding role atop the international order.
At the heart of this struggle lies the Belt and Road Initiative, or BRI, a massive infrastructure and investment project that has become the centerpiece of China's foreign policy. The BRI is often portrayed as an economic venture—China seeking to create new markets for its goods and stimulate economic growth in less-developed regions like Africa and Latin America. But the BRI has increasingly come under scrutiny as a geopolitical gambit designed to expand China's power and undermine American leadership.
As policymakers in Washington grapple with how to respond to China's growing assertiveness, our guest today offers a provocative thesis: America is losing this new cold war, and it needs a bold strategy to turn the tide.
Evan is joined by Michael Sobolik, a senior fellow at the American Foreign Policy Council. His new book, “Countering China's Great Game: A Strategy for American Dominance,” calls for the U.S. to take a new approach that would exploit the BRI's weaknesses, such as its reliance on unsustainable debt and its tendency to breed corruption and local resentment, while simultaneously strengthening U.S. alliances and providing alternative models of development assistance. Michael also hosts the "Great Power Podcast," AFPC's show about global competition and U.S.-China relations. You can read a critique of Michael’s book here.
Should we accelerate into the AI future or proceed with caution? Do we even have a choice?
From deep-tech disruptors to policymaking under time pressure, a battle over the fate of human civilization is now being waged on multiple fronts: Closed vs. Open, Hardware vs. Software, Safety vs. Ethics: in sum, Order vs. Chaos.
Foundation for American Innovation and 8VC hosted a live recording of a conversation with Andrew Côté (Hyperstition founder, a16z scout) and Guillaume Verdon (Extropic founder, effective accelerationism creator), moderated by FAI Senior Economist Samuel Hammond. Andrew, Gil, and Sam discussed their visions for the future, the tradeoffs between centralized and decentralized AI, and the incentives facing founders, technologists, and government regulators.
The number of internet-connected devices in the world has skyrocketed. According to one estimate, there are currently 17 billion connected devices in the world. This doesn’t just include well-known electronics like laptops and smartphones. These devices span every sector of the global economy—from agriculture and manufacturing to healthcare and science. The average American household now has 17 connected devices. While this growth has been a boon for jobs and the tech sector, it has also dramatically increased cybersecurity risk.
In theory, every one of these devices could be a vector for a cyberattack, and there are serious questions about whether manufacturers are building adequate security into the design of their products. What can the private sector and government do to improve cybersecurity and mitigate threats? Evan is joined by Nathan Simington, Commissioner at the Federal Communications Commission (FCC). You can read his recent statement on the agency’s newly established Cyber Trust Mark here.
This week, President Biden signed into law a bill that would require TikTok to divest itself from Chinese parent company ByteDance or else face a ban in the United States. The legislation was part of a package of bills that included foreign aid to Israel, Taiwan, and Ukraine. Over the past few years, TikTok has exploded in popularity. Today over 170 million Americans are monthly users of the platform, and seven million businesses rely on it for part or all of their income. With that growth in users has come growing concern about its parent company ByteDance, and its capacity as a vector for surveillance and propaganda by the Chinese government.
Proponents of a divestiture/ban argue that this is a narrowly targeted measure to address a clear national security threat, consistent with other restrictions on foreign ownership in areas like broadcast media. Critics, meanwhile, raise First Amendment concerns and argue that the bill creates a slippery slope that could lead to the targeting of platforms like X or Truth Social.
To consider these questions, the Foundation for American Innovation and Young Voices hosted a debate on the bill, with the following resolution: Given national security concerns, ByteDance should be forced to divest from TikTok or face a ban. Evan was joined by the following speakers:
Pro-divestment:
Anti-divestment:
In the U.S., there is supposed to be some division between domestic and foreign police activities. The CIA handles overseas activities, while the FBI and local police agencies handle domestic law enforcement. But because the Internet is inherently borderless, Americans’ emails, texts, and phone calls are inevitably captured in overseas intelligence activities, which is legal under Section 702 of the Foreign Intelligence Surveillance Act (FISA).
With FISA set to expire on April 19 absent Congressional reauthorization, the debate over whether and how to reform government surveillance has intensified. The Biden Administration, the FBI, and others in the world of national security say Section 702 is a critical tool to combat terrorism and other threats, and that privacy reforms might put Americans in danger by slowing down intelligence activities. Critics warn that the provision violates Americans' civil liberties—that Section 702 enables warrantless surveillance and is an end run around the Fourth Amendment, which protects Americans from unreasonable searches and seizures.
How will Congress act? How does 702 really work? And what are the politics and alliances on both sides of this debate? Are there any reforms that might pass as part of a compromise? Evan and Luke are joined by David DiMolfetta, a reporter at NextGov/FCW, where he covers how the US government is adapting to the world of cybersecurity. He was a researcher at the Washington Post and a reporter at S&P Global. You can read his latest piece on Section 702 here.
In the digital world, there is an enduring tension between privacy and security. What is our right to privacy from the government or the companies whose services we use? What rights does our government have to surveil us in the name of national security?
Most of us have a general understanding of the basic tradeoff in the Internet era—you give up some data in exchange for free or freemium services like Gmail or social media apps like Instagram. But the data marketplace goes well beyond the Big Tech players we’re most familiar with, and the depth and breadth of these processes, and the players involved, are often much harder to pin down.
What role do data brokers play and what sorts of data do they have access to? Is our data simply for sale to the highest bidder? Can even the chips in car tires be used to spy on people?
Joining us to discuss all of this is Byron Tau, whose investigative work has shone a light on the connections between tech companies and government surveillance. His latest book, "Means of Control: How the Hidden Alliance of Tech and Government Is Creating a New American Surveillance State," uncovers the extensive ways our data are used to watch and influence the American public. Byron is also a reporter at NOTUS, a new publication covering politics and policy from the Albritton Journalism Institute, and an adjunct lecturer at Georgetown University.
Is the Internet broken? The original promise of this great invention is that it would offer a platform for free information exchange, empowering individual users worldwide. It would spread democracy and knowledge. It would surface the best and brightest from around the world. It would empower individuals over elites.
Many, including our guest, argue that is not the Internet we have today. It seems everyone has gripes about Big Tech—from concerns around misinformation and censorship to the impact of social media on youth mental health. Underlying these policy issues is the issue of who controls our data and the flow of information. Have the economic benefits of the Internet been distributed fairly? Will AI lead to more competition or cement the dominance of incumbent firms? Is there a way to have the conveniences of today’s Internet without the downsides?
Evan is joined by Frank H. McCourt, Jr., Executive Chairman of McCourt Global and author of Our Biggest Fight: Reclaiming Liberty, Humanity, and Dignity in the Digital Age. He is also the founder of Project Liberty, a far-reaching effort to build an internet where individuals have more control over their data, a voice in how digital platforms operate, and greater access to the economic benefits of innovation.
On March 13, the U.S. House of Representatives voted 352 to 65 on the Protecting Americans from Foreign Adversary Controlled Applications Act. This bill is aimed at forcing ByteDance, a Chinese tech company, to divest its subsidiary TikTok or face a ban of the popular social media app in the U.S. In practical terms, if a suitable divestiture doesn’t happen, the bill would require Apple and Google to remove it from their app stores—and web hosting companies, advertisers, and others wouldn’t be able to do business with TikTok. This would make it far more difficult, if not impossible, for any American to access it.
TikTok has been through a political whirlwind since the Trump Administration first began an effort to ban TikTok or force a divestiture in 2020. Despite growing concern among lawmakers and bills introduced in recent years, TikTok seemed to have dodged a bullet, especially when President Biden joined the app last month. However, TikTok’s fortunes took a drastic turn when the recent bipartisan bill emerged and gained widespread support.
The saga raises a lot of questions. To what extent does Chinese influence, and money, affect our politics? How have the positions of Biden, Trump, and other political figures evolved on the issue? And what will this all mean for the bill’s prospects in the notoriously slower and more deliberative U.S. Senate? Evan is joined by friends of the pod Adam Kovacevich, founder and CEO of the Chamber of Progress, and Nathan Leamer, CEO of Fixed Gear Strategies.
Are you an Android user? Have you been ridiculed for the dreaded green text bubble, or been accused of “messing up the group chat?”
In December, the tech company Beeper tried to bridge the Android-iPhone divide. They launched Beeper Mini, an app that gave Android users access to iMessage functionality. The app immediately took off, gaining over a hundred thousand downloads in the first few days and reaching the top-20 app chart on the Google Play Store. But just a couple of days later, Apple shut the app down, citing security concerns and the potential for privacy risks.
This brouhaha caught the attention of US lawmakers, including Senator Elizabeth Warren, who criticized Apple's actions as potentially anti-competitive. A bipartisan group in Congress has requested the Department of Justice investigate Apple's conduct towards Beeper, suggesting that it may violate antitrust laws.
The Beeper situation reflects a broader debate around app store competition, security and privacy, and the competitive dynamics between major tech platforms and third-party developers. Are app stores monopolistic? How might Beeper’s story influence the future of antitrust when it comes to Big Tech?
Evan is joined by Eric Migicovsky. Eric is the co-founder of Beeper and a central player in the ongoing discussions around Apple and its app store. Prior to that, he founded the smartwatch company Pebble, and was a partner at Y Combinator, a startup accelerator. Read Beeper’s blog post about their situation with Apple here.
Many conservatives lament a decades-long stagnation of innovation. As Peter Thiel once quipped, “We wanted flying cars, instead we got 140 characters.” The rise of AI and other transformative technologies may augur an end to this stagnation, according to thinkers like Marc Andreessen, who joined The Dynamist recently to discuss techno-optimism. Others, of course, are more pessimistic. Will we end the Great Stagnation? Will we build the sci-fi future of our dreams? And where does the hurly-burly of politics fit into this conversation?
Our guest today, James Pethokoukis, recently wrote The Conservative Futurist: How to Create the Sci-Fi World We Were Promised. Jim is a senior fellow and the DeWitt Wallace Chair at the American Enterprise Institute, where he analyzes US economic policy, writes and edits the AEIdeas blog, and hosts AEI’s Political Economy podcast. He is also a CNBC contributor and writes the Faster, Please! Substack.
We’re also joined by FAI Senior Fellow Jon Askonas and Research Manager Robert Bellafiore. Robert recently reviewed James’ book for The New Atlantis. Jon has written extensively about the politics of innovation, including for Compact and American Affairs.
In our inaugural live recording of The Dynamist, FAI hosted a debate on two upcoming Supreme Court cases, Moody v. NetChoice and NetChoice v. Paxton. These cases could have major implications for online free speech and whether states can regulate the practices of Big Tech platforms.
Over the past ten years, the debate over how companies and governments deal with online speech has only intensified. Whether you call it content moderation or censorship, people have very strong opinions about how companies like Meta, Twitter, YouTube, and TikTok moderate their platforms. Florida and Texas both passed laws in recent years aimed at cracking down on what they see as politically biased behavior by these companies. Florida Senate Bill 7072, among other provisions, imposes fines on companies who "deplatform" political candidates and news outlets. Texas House Bill 20 prohibits social media companies with 50 million or more active monthly users from discriminating against users based on their viewpoint.
On February 20, 2024, Evan moderated a debate at FAI’s office in Washington, DC. Arguing for NetChoice was Carl Szabo, Vice President & General Counsel of NetChoice, a trade association representing tech companies and the plaintiff in both cases. Arguing for the states was Adam Candeub, former Trump Administration official and Professor of Law at Michigan State University.
One of the ways the Chinese government looks to exert influence is by changing the behavior of businesses and individuals who operate in China. Remember the firestorm that occurred when Houston Rockets general manager Daryl Morey sent a tweet in support of the Hong Kong protests? NBA games were taken off the air in China, and a series of profuse apologies on the part of the NBA and its partners followed.
As tensions rise between the U.S. and China, so do the tensions for businesses trying to operate in China. The nation of 1.4 billion people represents the biggest market in the world and an enormous source of potential revenue. But those who do business in China must play by China’s rules, so what are the tradeoffs? How far is too far? What role, if any, should the U.S. government play in regulating American businesses’ relationship with and dealings in China?
Evan is joined by Chris Fenton, a movie producer and author of Feeding The Dragon: Inside the Trillion Dollar Dilemma Facing Hollywood, the NBA, & American Business. Today, Chris advises companies, brands, and Congress on how to navigate the America-China relationship and co-hosts US Congressional Member delegations in China.
Last week, the Senate Judiciary Committee brought the CEOs of major tech companies like Meta and TikTok to answer questions about the impact of social media on children—from concerns about bullying and mental health to sexual exploitation. Lawmakers around the country and the world have been increasingly focused on this and other issues under the broader umbrella of digital privacy. Europe has led the Western world in enacting regulations that privacy advocates herald while critics warn they stifle innovation.
We’re 30 years into widespread adoption of the commercial Internet, yet Congress has failed to pass any sort of comprehensive legislation around digital privacy. There’s broad agreement that America needs a national privacy law, so why don’t we have one? In the meantime, a growing number of U.S. states have filled the void with bills like the California Consumer Privacy Act and the Illinois Biometric Information Privacy Act. How have these laws impacted the tech landscape? How do they impact global internet practices, and shape principles around online free speech and innovation?
Evan and FAI Director of Outreach Luke Hogg are joined by Jennifer Huddleston, technology policy research fellow at the Cato Institute. Her work covers a range of topics, including antitrust, online content moderation, and data privacy. For more, see her recent piece on online safety legislation.
NB: A previous version of this episode was missing content, which has been restored.
Our government agencies are hopelessly out of date. Public documents are stored in backroom file cabinets, instead of being digitized and posted online. As FAI Senior Economist Samuel Hammond has noted, “We validate people’s identity with a nine-digit numbering system created in 1936. The IRS Master File runs on assembly from the 1960s.”
The deliberations of the government and its agencies are often inaccessible to the general public. And without this information, nearly everything becomes harder. How do you hold government institutions accountable when their activity and data are buried under layers of bureaucracy? How do we improve the collection, organization, and distribution of government information, as well as public information in general? And how will the arrival of new technologies like artificial intelligence help (or hurt) with that goal?
Evan is joined by Jamie Joyce, Director at Internet Archive, a nonprofit digital library, and Founder of the Society Library, a nonpartisan, nonprofit institution that builds tools and develops products to improve the information ecosystem. She’s also a board member at WikiTongues, an internet archive dedicated to the preservation of world languages.
The New York Times has sued OpenAI and Microsoft, alleging the tech companies violated the newspaper’s copyrights by training ChatGPT on millions of Times articles. The decision in this case could have enormous implications for journalism and AI tools like large language models, and the lawsuit could go to the Supreme Court. While OpenAI says such training is “fair use,” the Times says the companies “seek to free-ride” on its journalism. How will the case be decided, and how will the outcome affect the next decade-plus of journalism and AI development? Fundamentally, should companies like OpenAI be allowed to train on copyrighted material without compensating creators?
Joining us to discuss all of this today are Matthew Sag and Zach Graves. Sag is a Professor of Law in Artificial Intelligence, Machine Learning and Data Science at Emory University Law School. He is an expert in copyright law and intellectual property, and a leading authority on the fair use doctrine in copyright law and its implications for AI. Graves is the Executive Director at the Foundation for American Innovation. He was recently invited to participate in the Senate’s AI Insight Forum.
In examining international competition between the U.S. and rivals like China, we tend to think of two types of power—military and economic. How large and advanced is our military compared to others? Are we overly reliant on other countries for resources like oil and microchips? But there’s a third, less commonly considered type of power that is crucial to America’s role in the world order. We might call it our reputation or our cultural dominance. The Chinese government calls it “discourse power.”
In China’s view, America has come to dominate the international system in part by controlling the narrative around governance, norms, and values. For China to gain control in the international order, then, it’s not enough for their economy or military to grow to either match or surpass ours. They have to secure discourse power—one that favors Chinese Communist Party values and their approach to security and human rights. In particular, they see the digital realm as an opportunity to tilt the balance in China’s favor. So what does this look like in practice?
Evan is joined by Kenton Thibaut, a senior resident China fellow at the Atlantic Council’s Digital Forensic Research Lab, where she leads China-related research and engagements. See her paper on discourse power and her new article in Foreign Affairs on whether China can swing Taiwan’s election.
With the benefit of hindsight, there’s a lot that people wish they could have done differently after a pandemic, wildfire, or other disaster. That’s why governments, militaries, public health entities, and first responders spend significant time and resources “wargaming” potential scenarios and how best to respond. But while technologies like flight simulators have long played a role in disaster preparedness, AI could dramatically change how wargaming is done and help overcome human “failures of imagination.” How does AI threat-casting compare to human creativity? How could AI change the way governments respond to major stress tests? How might the response to COVID-19 have differed if generative AI were more available to policymakers? Joining us to discuss these questions and more is Phil Siegel, founder of the Center for Advanced Preparedness and Threat Response Simulation (CAPTRS), an AI nonprofit that uses simulation gaming to advance societal disaster preparedness.
A San Francisco jury recently ruled that Google's Android app store is a monopoly, siding with Epic Games in a lawsuit initiated in 2020. The verdict focuses on Google's practices, such as mandating that customers and developers use its billing system and taking a 30% commission on app subscriptions. Google intends to appeal, citing cybersecurity and other concerns. This ruling raises questions about Apple's App Store, with Epic's similar case against Apple possibly going to the Supreme Court. The outcome could significantly impact antitrust laws and government efforts to regulate Big Tech.
Evan is joined by Adam Kovacevich, founder and CEO of the Chamber of Progress, an American trade group that represents technology companies on issues such as antitrust law and content moderation. He previously helmed government relations for Lime Bike, and prior to that was senior director of U.S. public policy for Google. Follow Adam on X: @adamkovac
The worlds of tech and policy are increasingly integrated, for good or ill. Tech professionals are recognizing government service as a vital way to contribute to the national interest, at the same time that politicos and policy experts have realized that they need the tech industry’s experience and insight. Ten years after the Foundation for American Innovation was formed to serve as a bridge between Silicon Valley and DC, the fusion of technology and public policy is greater than ever. But can technologists, founders, and investors really accomplish more in a sclerotic political environment than they can in industry?
Jennifer Pahlka, founder of Code for America, served as the Deputy Chief Technology Officer of the United States under President Obama, and as a member of the Defense Innovation Board under Presidents Obama and Trump. This year, she published Recoding America: Why Government Is Failing in the Digital Age and How We Can Do Better. Michael Kratsios, former Principal and Chief of Staff at Thiel Capital, served as Chief Technology Officer of the United States and Under Secretary of Defense at the Pentagon during the Trump administration. He is Managing Director at Scale AI, where he leads corporate strategy and helps accelerate AI applications across industries. Media Fellow Marshall Kosloff hosts The Realignment podcast with FAI.
This event was hosted at the Internet Archive in San Francisco on December 4, 2023. We thank Project 47 for their support of From Tech to Government and Back Again.
A recent New York Times editorial painted a damning portrait of learning loss from COVID-19 school closures, arguing it “may prove to be the most damaging disruption in the history of American education,” setting “student progress in math and reading back by two decades.” The Institute for Education Sciences is a federal agency within the Department of Education with a modest budget and a daunting mandate—figure out what works and what doesn’t, including how to reverse and mitigate learning loss.
IES Director Mark Schneider has led the agency since 2017. In that time, the agency has funded an array of million-dollar programs and faced the unprecedented disruptions of COVID-19. Evan and Director Schneider discuss the challenges facing students and educators, what we can learn from COVID-19 and the government’s response, and how artificial intelligence could help individualize education.
Over the last few years, a small but influential group of right-of-center Twitter/X users have begun outlining a vision for what they half-jokingly refer to as Bison Nationalism. In a lot of ways, it’s hard to fully understand all of the relevant context unless you spend too much time online. Is the idea of repopulating the American prairie with buffalo just a meme? A longing for tradition? Or is it a real policy goal? Why might certain communities find this issue compelling, and how does this fit into a broader vision of conservative environmentalism? Joining us to discuss this today is Micah Meadowcroft, Research Director at the Center for Renewing America, a policy group based in DC, and former White House liaison for the Environmental Protection Agency.
Read his piece on the future of environmental conservatism here, and his writing on Bison Nationalism here.
The human fascination with creating life dates back centuries. From the ancient myth of Pygmalion, who carved the statue that came to life, to the Jewish legend of the golem, and now to our modern-day marvels in AI, humans remain captivated by questions surrounding consciousness, creation, and the Divine. In a prior episode, we discussed AI’s practical impact on the day-to-day practice of religion. Today, we explore AI’s interaction with religion at a more fundamental level. What are the central philosophical, anthropological, and spiritual issues brought about by the AI phenomenon? What does AI reveal about our own human nature? Can it understand or possess spirituality? And how might a religion like Christianity, with its own set of answers to fundamental human questions, contribute to this conversation?
Evan is joined by Taylor Barkley, Director of Technology and Innovation at the Center for Growth and Opportunity, where his primary research concerns the intersection of culture, technology, and innovation. He also has a recent op-ed for Fox News entitled, “Christians shouldn’t fear AI, they should partner with it.”
Marc Andreessen’s Techno-Optimist Manifesto set the tech world ablaze just a few short weeks ago – and now, he responds to his critics. A bold statement of principles arguing for the liberatory potential of technology, his manifesto generated criticism from both the left and right—including FAI’s own Sam Hammond.
In this special edition of The Dynamist, FAI Senior Fellow Jon Askonas and Marc Andreessen hash out the foundations of the Techno-Optimist politics of tomorrow. Marc is a cofounder and general partner at the venture capital firm Andreessen Horowitz. He has achieved two rare feats in the tech industry: pioneering a software category used by more than a billion people, and establishing multiple billion-dollar companies. You can read the Techno-Optimist Manifesto here, along with responses discussed on the episode from Ezra Klein and TechCrunch.
When it comes to science and math education, America’s report card has been in decline. According to the National Science Foundation, U.S. students have lagged their peers for over ten years, ranking dead last in math among our closest economic competitors. With the U.S. seeking to lead the world in artificial intelligence, how will the country’s math and science literacy impact jobs and economic growth? The federal government has invested billions of dollars in improving STEM education in K-12 schools. What works? What doesn’t? And how can research and development play a role in achieving America’s education goals?
Evan is joined by Melissa Moritz, Senior Fellow for the Social Innovation Team at the Federation of American Scientists, and Dan Lips, Head of Policy and Senior Fellow at FAI. You can check out their co-authored paper on this subject here.
Remarkable improvements in artificial intelligence are forcing us to reassess our government, our economy, and ourselves. Boosters see an opportunity to empower individual creators and circumvent sclerotic industry gatekeepers. Many creators are already using AI to hone their craft, test new concepts, and reach new audiences. But skeptics see another possibility: that AI will stifle creativity by strengthening the most powerful corporations. Artists’ work is being used without license to teach AI models. AI platforms have produced works inspired by human creators without attribution. And as the recent writers strike shows, many fear that media companies will use AI to replace human creators altogether.
How can we channel AI so that it strengthens individual agency? What are potential artistic and public interest applications of AI, and what policies and incentives do we need to make those applications succeed?
In this bonus episode, Laurent Crenshaw (Patreon, FAI board of directors), Sy Damle (Latham & Watkins, former general counsel for the U.S. Copyright Office), Ashkhen Kazaryan (Stand Together), and Patrick Blumenthal (New Frontier Ventures) discuss AI's implications for creators, art, and innovation live from Washington, D.C.
Recently, the Biden Administration announced further restrictions on the types of semiconductors that American companies sell to China. The move is aimed at preventing American AI from benefitting Chinese military applications. While heralded by many as a necessary move to protect U.S. national security, how will the move affect Sino-American relations, and how will China respond? Could China simply “smuggle” the chips to avoid U.S. restrictions, or will the move spur China to race to develop more chips domestically? Could China simply access the computing power it needs through “the cloud?”
Evan is joined by Onni Aarne and Erich Grunewald of the Institute for AI Policy and Strategy, which works to reduce risks related to the development & deployment of frontier AI systems. You can read Erich’s report on chip smuggling here.
It's an old trope that nothing gets done in Washington. The city is filled with some of the brightest minds in the country looking to tackle massive challenges, from immigration reform to confronting the threats posed by China. But despite all the discourse, monied interests, lobbyists, and think tanks, so many major issues facing the country see little in the way of action. That raises the question: when America does have major policy success, how did it happen? How, exactly, did energetic civil servants address core issues like AIDS in Africa or developing the COVID-19 vaccine?
Evan is joined by Santi Ruiz, Senior Editor at the Institute for Progress and co-creator of Statecraft, a new newsletter & podcast focused on policy entrepreneurship, state capacity, and governance.
Recently, FCC Chairwoman Jessica Rosenworcel announced her intent to bring back net neutrality regulation. It’s hard to believe it’s been six years since the brouhaha over broadband regulation reached a fever pitch. When the Trump FCC repealed the Obama-era rules, the apocalypse was predicted. CNN said it would be “the end of the Internet as we know it.” Senator Ron Wyden of Oregon warned of “digital serfdom.” Underlying the heated public debate has always been a more arcane legal question of how to regulate Internet access—whether through a light touch or a heavier one. And with the Supreme Court taking a closer look at “major questions” to see if federal agencies are acting outside the bounds of the laws passed by Congress, it remains to be seen whether the FCC’s revival of net neutrality will withstand legal scrutiny.
Evan is joined by Tom Johnson, former general counsel for the FCC under Chairman Ajit Pai during the Trump administration where he successfully defended the agency’s net neutrality repeal before a federal district court. He is now a partner at the law firm Wiley Rein and co-chair of their appellate practice. He recently discussed his perspective on this issue in an article for Ars Technica.
The White House and the state of Missouri are in a court battle over whether the Biden Administration crossed the line in trying to influence social media companies’ content moderation decisions—from Hunter Biden’s laptop to vaccine skeptics to the origins of COVID-19. The “Twitter Files,” documents released to select journalists by Elon Musk, as well as information unearthed by Missouri’s lawsuit, appear to show that the FBI, CIA, and other agencies either coerced, or heavily encouraged, social media companies to take certain actions.
Many on the right say the Biden Administration violated the First Amendment by essentially co-opting social media companies into censoring speech that the government couldn’t censor itself. But many researchers and activists working on disinformation and misinformation worry that the outcome of this case could squelch legitimate government efforts to communicate with social media companies and combat foreign efforts to influence elections and American political discourse.
So did the Biden Administration cross the line? Did Big Tech companies become “state actors?” Evan is joined by Ben Sperry, Senior Scholar of Innovation Policy at the International Center for Law and Economics and author of a new white paper on regulating misinformation on social media platforms.
Are the citizens of the EU at risk of becoming second-class digital citizens? It’s well known at this point that Europe doesn’t have its own version of Silicon Valley. Many believe that this is in large part due to its digital regulatory approach—the General Data Protection Regulation (GDPR), the Digital Markets Act (DMA), and the AI Act, among others. While Congress hasn’t passed a federal privacy law in the US, states like California have enacted rules similar to the EU model—at least on paper. Are the consequences of such regulation overstated? Is it possible to have consumer protection without sacrificing innovation?
Evan discusses with Brian Chau, former mathematician and machine learning engineer and current research fellow at Alliance for the Future. He’s also the author of the widely-read AI Pluralism newsletter. In a recent piece for Pirate Wires, he argues that Europe’s digital regulations are turning EU residents into “second-class digital citizens.”
Are Chinese drones a security threat? Not the kind that drop bombs, but the ones you might see at the beach or a major sporting event—used to take aerial photos and videos. These drones aren’t just for hobbyists. Government agencies in the U.S. use them for everything from policing to fighting wildfires. And they've been buying them for years, predominantly from a Chinese manufacturer named DJI. Since the early 2010s, DJI drones have allowed even a poorly coordinated amateur to shoot video and create high-quality maps, and the company today has a 70 percent global market share.
So what’s the problem? The company has close ties to China's People’s Liberation Army and has the ability to disable its products from afar. Could America’s reliance on DJI be an economic or cybersecurity risk? Is this just another anti-China “red scare,” an outgrowth of the growing tensions and saber-rattling between the world’s two greatest powers? Evan is joined by Lars Erik Schönander, a policy technologist at the Foundation for American Innovation and author of a new paper for FAI, Securing the Skies: Chinese Drones and U.S. Cybersecurity Risks.
*Correction: Evan misstated the publication of an article discussed on the episode. It was published in Foreign Policy, not Foreign Affairs.
The European Union has designated six Big Tech companies as "gatekeepers" to the Internet—Alphabet, Amazon, Apple, Meta, Microsoft, and ByteDance (TikTok's parent company). Experts & pundits are calling this designation under the EU’s Digital Markets Act the most significant action against Big Tech ever taken. As the U.S. Congress continues to avoid significant legislative action, Europe has stepped into the void. Will this be another example of the so-called Brussels effect, where European policy becomes de facto regulation for the entire Western World? How will the companies respond, and what impact will it have on consumers? Joining Evan is FAI Director of Outreach Luke Hogg, whose tech policy research focuses on decentralization and innovation. Read his recent piece on the "Brussels effect" for Pirate Wires here.
It’s been seven years since Pokemon Go introduced augmented reality to the masses and caused a global craze. Since then, consumers have used a slew of applications that alter their reality—from more mundane uses like TikTok filters adding cat ears to someone’s head to more immersive experiences like Meta’s Oculus headset video games. Beyond shopping and gaming, augmented, virtual, and mixed reality software could become an invaluable tool for education. While research shows promise, classrooms have been slow to adopt immersive tech, just as they were slow to adopt PCs in the 80s and 90s.
Could a research and development strategy that includes government investment help integrate this tech into the classroom? Evan is joined by Juan Londoño, policy analyst at the Information Technology and Innovation Foundation (ITIF), where he focuses on augmented and virtual reality. You can read his paper on immersive learning here.
Tension between China and the U.S. is arguably at the highest it has been since President Nixon began normalizing relations decades ago. Yet, despite China’s treatment of ethnic minorities, its crackdown on Hong Kong, and threats against Taiwan, America remains economically entangled with the People’s Republic. How did the U.S. become so dependent on its chief geopolitical rival? What role did American businesses like Boeing and diplomats like Henry Kissinger play in the building of the modern relationship between the two nations? How has Beijing used the economic relationship to advance the Communist Party’s goals? How likely is war between the U.S. and China, and how would that impact trade and foreign investment?
Evan is joined by Isaac Stone Fish, founder and CEO of Strategy Risks. He is also an adjunct professor at NYU's Center for Global Affairs and a visiting fellow at the Atlantic Council. He is the author of America Second: How America’s Elites are Making China Stronger.
As the saying goes, “if the service is free, you are the product.” In the social media age, many companies don't compete for our money, but for our time. While many traditional entertainment companies increasingly rely on monthly subscription fees, social media products like TikTok and Instagram are “free,” powered by consumer data used to sell advertising. What platforms compete with each other for our attention? Does watching TV make you less likely to use social media? Or are you just scrolling the small screen while watching the big screen? As policymakers consider the nature of competition and issues involving “Big Tech,” such as data privacy, how should they factor in how much attention consumers pay to different platforms?
Joining us to discuss all of this is Scott Wallsten, President of the Technology Policy Institute and a PhD economist with broad expertise. His prior roles include stints at the FCC and White House Council of Economic Advisers. Read TPI’s paper on the attention economy here.
What if your rabbi used ChatGPT to write a sermon? What if you asked a faith-based chatbot to help you with Bible study? The proliferation of AI tech is changing every sector, including religion and theology. The mechanized sanctum is no longer theoretical, as the rise of AI in religious spaces poses both unprecedented opportunities and serious ethical challenges. It raises questions around the nature of sentience, personhood, and what constitutes a creator. Can a super-intelligent AI have a soul? And there are also more immediate questions: will certain faiths use AI more effectively to spread their gospel and grow their ranks? Does AI have a religious bent? Should there even be a place for this tech in religious practice at all?
Evan is joined by friend of the podcast Nathan Leamer, CEO of Fixed Gear Strategies, a boutique tech policy consulting firm, and former policy advisor to FCC Chairman Ajit Pai.
Google is facing legal challenges that could strike at the heart of the company’s advertising business, which accounts for 80 percent of its global sales. The U.S. Department of Justice sued Google for allegedly monopolizing digital advertising technology (ad tech). Across the pond, the European Commission recently told the Big Tech giant its preliminary view that the company distorted competition in ad tech—favoring its own services to the detriment of competitors. The outcomes of these cases could force Google to divest significant portions of its business and potentially transform the tech industry.
Is Google really guilty of the agencies’ claims? And how could proposed legislation in Congress impact the company going forward? Joining Evan is Mark Meador, partner at Kressin Meador, a boutique antitrust law firm. He was formerly Deputy Chief Counsel for Antitrust and Competition Policy for Senator Mike Lee. Prior to that, he was an attorney at both the DoJ and the FTC.
Will artificial intelligence spell the end of humanity? The concept has been implanted in American culture through dystopian phenomena like Terminator and The Matrix, but how real is this possibility? Since the public release of OpenAI’s ChatGPT in late 2022, AI doomerism has played a key role in shaping the discourse around this rapidly advancing technology. “Artificial intelligence could lead to extinction,” blares the BBC. “The race to win the AI competition could doom us all,” warns The Japan Times. Some commentators have even said that we may need to bomb data centers to stop or slow AI development.
Is so-called AI “doomerism” simply an outgrowth of AI-related science fiction? Or is there a concerted PR effort to frame the conversation? How does doomerism impact the debate over how/whether to regulate AI, and what positive applications of AI aren’t receiving enough attention? Evan is joined by Perry Metzger, CEO of a stealth AI startup and founder of Alliance for the Future. You can read his work on his Substack, Diminished Capacity. Evan is also joined by Jon Askonas, a professor of politics at Catholic University and Senior Fellow at the Foundation for American Innovation. He has written broadly on tech and culture for outlets like Foreign Policy and American Affairs, and his work has been discussed at length in the New York Times.
Can tech companies send data about European Union citizens across the Atlantic? According to a new framework, the answer is yes. Recently, the EU formally adopted a new agreement with the U.S. on data privacy that gives companies the green light to send data back and forth. For years, EU privacy advocates have raised alarms that U.S. intel agencies like the NSA are spying on EU citizens, particularly by tapping the data droves of Big Tech companies like Google and Meta. This is the third attempt at a data-sharing framework; a European court struck down its predecessors in the wake of the Edward Snowden revelations about U.S. spying practices. Will the third time be the charm?
Evan is joined by Caitlin Fennessy, Vice President and Chief Knowledge Officer at the International Association of Privacy Professionals. Prior to joining the IAPP, Caitlin was the Privacy Shield Director at the U.S. International Trade Administration, where she spent ten years working on international privacy and cross-border data flow policy issues. You can read her work on these issues here.
How much does U.S. regulation really cost Americans and the economy? A new report from FAI found that, in 2022 alone, agencies issued more than 3,000 rules, including 265 “significant” ones with an estimated cost of over $117 billion. Some estimates say the totality of federal regulations costs the economy nearly $2 trillion. These rules span everything from healthcare to the environment, but what is the actual effect on our daily lives?
Some critics of the ever-growing bureaucracy (or “Deep State” as President Trump calls it) say Congress has let federal agencies run amok—writing unclear laws that then have to be interpreted and implemented by unelected bureaucrats. Has Congress given too much power to the Executive Branch? Is there a way that Congress can flex its muscles over federal agencies?
Evan is joined by Satya Thallam, Senior Fellow at FAI and former White House and Senate policy advisor, and Dan Lips, Head of Policy at FAI and former national security policy advisor in Congress. Read Satya's recent report on reining in the administrative state.
The feds, via the SEC, are cracking down on Binance, the largest cryptocurrency exchange in the world, essentially calling it an illegal operation. Prior to his appointment as Biden’s SEC chair, Gary Gensler taught a class on Bitcoin at MIT, which made some crypto enthusiasts think he might be friendly to the industry. But he’s been anything but a friend to crypto. Supporters say he’s taking long-overdue action to rein in an industry rife with fraud, scams, and get-rich-quick schemes. Critics worry the SEC’s increasingly aggressive approach will send crypto and blockchain-based innovations overseas and cause the U.S. to cede leadership to other nations.
As the debate rages over how to regulate various crypto coins (are they commodities or securities?), is there a way for the SEC to go after bad actors without casting too wide a net? Evan and Luke are joined by Dr. Thomas L. Hogan, a former Chief Economist for the US Senate Banking Committee and now a specialist in crypto and monetary policy with the American Institute for Economic Research. You can check out his work here.
The question has become cliche: Why doesn’t Europe have “Big Tech” companies? Critics of the European Union’s approach to tech regulation say it’s just that—they’ve regulated too much. But proponents of a stronger hand say America’s relative “light-touch” has left consumers unprotected from abuse of their personal and sensitive data. As the EU continues to lead the democratic world in regulating tech, will their standards become the global standard, or will tech firms start splintering their products and user experiences for different markets? Is the impact of European regulation overplayed? Can differences in the continent’s tech sector be better explained by a more conservative investment culture than the risk-taking of Silicon Valley? Evan discusses all that and more with Yael Ossowski, deputy director of Consumer Choice Center, a global consumer advocacy group. Check out his radio show & podcast Consumer Choice Radio here.
With the 2024 election shaping up to be a digital bloodbath, social media platforms like Facebook will continue to be an electoral lightning rod in the United States and around the world. Social media executives are under intense scrutiny as disagreements flare over misinformation, foreign interference, bias, free speech, and voter targeting. AI-generated ads are already making their way to voters’ screens at a time when the rules are still being defined. With artificial intelligence poised to play a major role in the U.S. presidential election, how will governments and companies respond? Joining Evan to discuss is Katie Harbath, founder and CEO of Anchor Change where she advises clients on tech policy issues. Previously she worked at Facebook where she built and led teams responsible for managing elections and working with governments and elected officials to use Facebook and Instagram to connect and engage with constituents. You can subscribe to Katie’s newsletter here and read her work for Bipartisan Policy Center here.
In April, an anonymous TikToker released a song, “Heart on my Sleeve,” that was listened to by millions of people before being taken down by various streaming platforms. The problem? The song wasn’t by the famous artists Drake and The Weeknd. It was generated by artificial intelligence that mimicked their voices. This song and other examples of AI-generated media have sparked a debate among artists, lawmakers, and others about whether and how generative AI should be allowed to learn from copyrighted works. As the U.S. Copyright Office, courts, and Congress look to tackle the issue, is there a way to balance the interests of human creators, AI developers, and consumers? Evan is joined by Daniel Takash, regulatory policy fellow at Niskanen Center, a nonprofit public policy organization based in Washington, DC. You can read his work on copyright and other topics here.
Recently, Apple CEO Tim Cook traveled to Beijing where he praised China for the country’s “rapid innovation” and celebrated the longstanding and “symbiotic relationship” that his company has had with the People’s Republic. As the U.S. Congress increasingly examines the business dealings of American companies in China, including through the Select Committee on the Chinese Communist Party, what can lawmakers learn from Apple’s investments in China—from manufacturing to supply chains? And as tensions continue to rise between the U.S. and China, and Taiwan faces a potential invasion, should Apple be rethinking this relationship? Should the U.S. government intervene? Evan is joined by Geoffrey Cain, Senior Fellow for Critical Emerging Technologies at Foundation for American Innovation and author of The Perfect Police State: An Undercover Odyssey into China's Terrifying Surveillance Dystopia of the Future.
References:
Tim Cook’s comments on Apple in China at the 2017 Fortune Global Forum.
Congress seems to be in a mad rush to regulate artificial intelligence, determined not to repeat what many legislators see as the mistake of letting social media run amok. But while AI-related headlines focus on doomsday scenarios like civilizational destruction and job loss, less attention is paid to the potential for AI to transform how our government operates. It would be an understatement to say our government could use some modernization, but can a Congress so bent on regulating AI also embrace the technology for its own purposes? Joining Evan is Luke Hogg, Director of Outreach at Foundation for American Innovation. You can read his piece in Tech Policy Press, “Artificial Intelligence Could Democratize Government.” And check out other work from FAI scholars on this topic, including this piece by Zach Graves.
Elon Musk has called himself a “free speech absolutist,” but a recent decision to censor certain content on Twitter ahead of an election casts doubt on the validity of that moniker. Musk argues that it’s better to comply with the Turkish government’s requests than see the platform shut off in Turkey entirely. Skeptics say Musk should’ve denied the requests, and, if President Erdogan shut down Twitter, it would prove he is an authoritarian, which could help inform voters as they head to the polls. What can we learn from this dustup and Twitter’s handling of government requests more broadly? Evan is joined by Nathan Leamer, Executive Director of Digital First Project, a tech policy organization. You can read his chapter in “The Digital Public Square” here.
Almost everyone agrees that an Internet connection is essential for full participation in modern American life. That’s why our government is spending huge sums to build networks in rural areas and help low-income Americans pay their bills or connect for free. As the burden increases on taxpayers, is it time to rethink how we subsidize broadband? Should Big Tech companies like Google, Amazon, Meta, and Microsoft help foot the bill for the infrastructure needed to use their services? Or should Americans pay additional fees on their Internet bill to help other Americans get online? What other business models might help pay for infrastructure going forward?
Evan is joined by Roslyn Layton, Senior Vice President of Strand Consult and visiting researcher at Aalborg University. She is also a nonresident senior fellow at Foundation for American Innovation. You can read her report on broadband cost recovery and her other work at StrandConsult.dk. You can check out the Sandvine report on Internet traffic referenced on the episode here.
While it didn’t get the attention of the Edward Snowden leaks, a recent dump of classified information on a video game chat server has been described as one of the worst Western intelligence failures in modern memory. Analysts say the leak could complicate Ukraine’s spring offensive against Russia and expose U.S. assets in the Kremlin, among other potential ramifications. What makes this leak unique is that it doesn’t appear to be driven by ideology or a foreign adversary, but rather the suspect’s desire to impress his online gamer buddies.
Is “clout chasing” a growing threat to national security? How can these leaks be prevented and what policies should the U.S. government change or implement in response? Evan is joined by Jon Askonas, Assistant Professor of Politics at Catholic University and a non-resident senior fellow at the Foundation for American Innovation. Read his piece, co-authored with Stanford Internet Observatory's Renee DiResta, in Foreign Policy on the threat gamers pose to national intelligence and check out his ongoing series in The New Atlantis on the collapse of consensus reality.
Politicians gripe constantly about Twitter, Instagram, TikTok, and their ilk. Two years ago, then-CEO of Twitter Jack Dorsey pitched Congress that a lot of their complaints could be solved by his project called “Bluesky,” which aims to decentralize social media. The app is now available on iPhone and Android, and hundreds of thousands of users are trying it out. Can we learn any initial lessons from Bluesky? Are decentralized protocols the silver bullet to the endless debates over content moderation and online censorship? Is it really possible for social media to be “owned” by its users? Evan is joined by Paul Bohm, a distributed systems engineer and founder and CEO of Teleport.XYZ. You can read Paul’s blog post on Bluesky here.
Artificial intelligence is all the rage these days. The large language model ChatGPT reached over 100 million users in record time, and AI is growing more accessible and relevant for everyday consumers. While many are cheering the AI revolution and heralding a brighter future, others are sounding the alarm. Elon Musk has warned AI could spell “civilizational destruction” without proper safety protocols. Is AI moving too fast, or is this the pace of innovation our economy needs? What should policymakers do, if anything, to tackle the challenges posed by AI? Evan is joined by Sam Hammond, Senior Economist at Lincoln Network.
As the headaches for TikTok pile up in Washington, the embattled social media platform and its supporters are arguing that a ban on the app would violate the U.S. Constitution, particularly the First Amendment. TikTok’s critics counter that the national security problems posed by the company's Chinese ownership far outweigh free speech concerns. Which side holds the upper hand, and what can we learn from past court cases involving a pornographic bookstore and a North Carolina law regarding sex offenders on social media? Evan is joined by Joel Thayer, president of Digital Progress Institute.
References
Joel’s piece for FedSoc, “Banning TikTok Outright Would Be Constitutional”
Dan Lyon’s piece for American Enterprise Institute, “Would a TikTok Ban Be Constitutional?”
Statement from the American Civil Liberties Union opposing a TikTok ban
In a prior episode, Gabriela Rodriguez of American Compass argued that the Jones Act, a law aimed at supporting U.S. shipbuilding, should be reformed—not repealed. On The Dynamist’s first ever “rebuttal episode,” Evan is joined by Colin Grabow, a research fellow at the Cato Institute’s Herbert A. Stiefel Center for Trade Policy Studies. They discuss why Grabow supports a full repeal of the Jones Act, his response to Rodriguez’s proposed reforms, and what a post-Jones Act world might look like.
Cato blog, “More Industrial Policy Won’t Solve the Jones Act’s Many Problems”
Op-ed in The Atlantic, “The Obscure Maritime Law That Ruins Your Commute”
Last week, TikTok CEO Shou Chew appeared before the House Energy and Commerce Committee for a marathon hearing focused on national security and other concerns with the popular social media app. His goal was to assuage lawmakers’ concerns, but, if anything, the app’s future in the United States looks bleaker than ever. But how likely is an outright ban or divestiture from TikTok’s Beijing-based parent company ByteDance? Would these measures truly solve the national security risks? And what are the political and legal implications? Evan is joined by FCC Commissioner Brendan Carr to discuss.
For many, their first thought about blockchain or cryptocurrency has to do with crime, scams, or the infamous meltdown of FTX. But the implications of blockchain technology go far beyond the breathless headlines. Consider data privacy. Governments around the world are increasingly trying to protect the privacy of Internet users, particularly when it comes to so-called “free” services like YouTube and Instagram which are supported by targeted advertising. While governments have struggled to get a grip on user privacy with these services, our guest today says that decentralized tech like cryptocurrency can radically alter how data privacy must be tackled. Luke Hogg is Director of Outreach at Lincoln Network, focusing on the intersection of emerging technologies and public policy. Read his paper on Web 3 and data privacy, co-authored with Antonio García Martínez.
Most people don’t think about global shipping and supply chains until a crisis spotlights these issues—from the hurricanes in Puerto Rico to the COVID-19 pandemic. But while the debate over cargo transport doesn’t often reach the kitchen table, it’s been going on for years in policy circles in Washington, with powerful interests involved on all sides of the debate. It traces back to the 1920 Jones Act, passed in the wake of World War I after German submarines had decimated American commercial ships. While the law was intended to bolster U.S. shipbuilding, has the law failed to achieve its goal? Critics argue it makes shipping more complicated and expensive, raising prices for consumers. Proponents respond that it's essential for national security and preserving domestic shipbuilding capacity. Should the law be repealed, left alone, or reformed? Gabriela Rodriguez, Policy Advisor at American Compass, joins the show to discuss. Follow Gabriela on Twitter here.
References:
Gabriela’s piece, “The Ghosts of Navies Past: Rebooting the Jones Act for the 21st century”
For years, businesses have been “moving to the cloud.” Instead of relying on servers and hardware located at offices, companies are increasingly using third parties like Microsoft and Oracle for their workplace needs—from analyzing sales data to communicating with coworkers. Congress and regulators are increasingly focused on tech policy issues like digital privacy and the size of Big Tech companies. But one area that gets much less attention is our topic today: cloud software licensing. Has software licensing become too restrictive and anti-competitive? If so, how does that impact consumers and businesses? How should policymakers respond? Evan is joined by Ryan Triplette, Executive Director of the Coalition for Fair Software Licensing.
References:
Report in FedScoop, “Major government tech contractors use monopolistic vendor-lock to drive revenue, study says”
Statement from the Coalition for Fair Software Licensing on a new complaint against Microsoft in Europe
The Russian invasion of Ukraine and the ensuing war put energy policy in the global spotlight. The dependence of European nations like Germany on Russian oil and gas was a significant factor in Putin’s aggression and continues to finance the Kremlin’s war effort. In the U.S., Republicans and Democrats continue to spar over our energy future. Many Democrats want a “Green New Deal,” while Republicans accuse the Biden administration of curtailing domestic oil and gas production. My guest, Alec Stapp, argues that an agenda of energy abundance can solve seemingly intractable fights. He is the co-founder and co-CEO of the Institute for Progress, a non-partisan research and advocacy organization.
Read Alec’s recent piece in The Atlantic, “Climate Relief Can’t Wait for Utopia”
There's turmoil at the Federal Trade Commission—the agency charged with protecting consumers and one of two agencies that deal with antitrust issues, such as promoting competition and preventing monopolies. Last week, Republican FTC Commissioner Christine Wilson announced her resignation in a Wall Street Journal op-ed, citing FTC Chairwoman Lina Khan’s alleged disregard for the rule of law and due process. What does this FTC drama mean for the agency’s efforts to rein in Big Tech? Are there broader implications for antitrust policy going forward? Evan is joined by Matt Stoller, Director of Research at the American Economic Liberties Project. He is also the author of Goliath: The Hundred Year War Between Monopoly Power and Democracy. You can read his work on his Substack, “BIG” at MattStoller.Substack.com.
Cyber attacks are on the rise, but this will come as no surprise to most Americans. It seems the news is always full of stories about a major data breach or ransomware attack. It's not just your imagination—studies show attacks have risen sharply in the past couple of years. In the wake of a Chinese spy balloon flying over sensitive U.S. military sites, is the issue of cybersecurity ripe for the public attention it deserves? Evan is joined by Shane Tews, non-resident senior fellow at the American Enterprise Institute and host of the brilliantly-named podcast “Explain to Shane.” They discuss the state of the nation’s cyber hygiene and what companies and governments can be doing differently to secure our data.
Verizon report found ransomware attacks rose 13% in 2022, more than the prior five years combined
Check Point report that global cyber attacks increased 28% in the third quarter of 2022 year over year
Foreign Affairs oped by CISA Director Jen Easterly and Assistant Director Eric Goldstein calling for companies to build better cybersecurity into their products
Government Accountability Office report on “federal actions urgently needed to protect the nation’s critical infrastructure”
The debate over whether and how to regulate social media has been boiling for years. The Supreme Court may have the final say, but will a ruling address mounting complaints about how these platforms work, from misinformation to censorship? Evan is joined by Richard Reisman, founder of Teleshuttle Corporation, an innovation studio based in New York City. He argues that fixing social media requires a fundamental rethink that moves us past the firehoses and filter bubbles that most Americans experience online. Can social media be more like bars, churches, and clubs where people filter their experiences in the physical world? And what’s the difference between freedom of expression and freedom of impression?
“Delegation, Or, The Twenty Nine Words That The Internet Forgot,” by Richard Reisman and Chris Riley in Tech Policy Press
“Clubhouse, a Tiny Audio Chat App, Breaks Through,” by Erin Griffith and Taylor Lorenz in New York Times
“Free Speech Is Not the Same As Free Reach,” by Renee DiResta in WIRED
Smartly Intertwingled, Richard Reisman's blog
"Into the Plativerse through Fiddleware," by Richard Reisman
“Environmental, social, and governance,” better known as ESG, has been a major topic of discussion in the business world. Proponents of ESG praise companies for efforts to reduce carbon emissions and make their workplaces more inclusive. Critics charge that ESG is merely “woke capital,” a way for corporations to leverage their power and wealth to advance left-wing policy priorities at the expense of fossil fuels and traditional values. Julius Krein, editor of American Affairs, says it’s a lot more complicated than a simple “left versus right” divide. He argues that Republicans need a better alternative to ESG than “shareholder primacy,” a free-market fundamentalism at odds with rising American populism. Can Republicans find an effective alternative to ESG?
Read Krein’s piece in COMPACT, “Why the Right Can’t Beat ESG”
Watch Senator Tom Cotton’s exchange with Kroger’s CEO
Is the Internet a force for freedom, or a tool for dictators to oppress their people? The answer largely depends on where you live in the world. For decades, U.S. policymakers have, for the most part, embraced the Internet as a tool to promote democracy. But China, Russia, Iran, and other nations have done the opposite: used the Internet to suppress, surveil, and manipulate people both within and beyond their borders. What is the U.S. doing to promote Internet freedom? Since 2012, the Open Technology Fund has supported projects designed to counter Internet censorship. But is the Fund up to the challenges we face today? And what backlash might the U.S. face by engaging in these activities? Joining Evan to discuss is Dan Lips, Head of Policy at Lincoln Network and former FBI analyst and Homeland Security staffer in Congress. See Dan's white paper on OTF here.
References
Freedom House report that shows global Internet freedom has declined for 12 consecutive years.
Note: This episode was recorded prior to the completion of Elon Musk's Twitter takeover.
There are few debates in tech policy as heated as the debate over what content or “digital speech” is allowed on the Internet. Proponents of more “content moderation” say it’s really just about taking down posts that create real-world harm. Critics say the term is little more than a euphemism for censorship. With Congress deadlocked on whether and how to regulate social media, state capitols and the courts have begun to fill the void.
What do these bills and cases mean for the future of social media and online speech? Will the Supreme Court have the final say? Is there a role for agencies like the Federal Communications Commission? And what impact will Elon Musk’s takeover of Twitter have? Evan discussed all that and more with Brendan Carr, the senior Republican commissioner at the Federal Communications Commission (FCC).
If you’re the parent of a teenager, you might lament the hours they spend scrolling through videos on TikTok. But other than being a time suck, it may seem harmless, right? Not according to a growing chorus of policymakers who say, given TikTok’s relationship with the Chinese government, the app needs to be banned, or seriously curtailed, to protect America. So how could cute dances, animal videos, and influencers be a threat to national security? Evan is joined by Geoffrey Cain, Senior Fellow for Critical Emerging Technologies at Lincoln Network and author of The Perfect Police State: An Undercover Odyssey into China's Terrifying Surveillance Dystopia of the Future. They discuss the changing relationship between China and the U.S., the evolving policy debate over TikTok and its parent company ByteDance, and the geopolitical implications of potential U.S. government action against the popular app.
What is dynamism? The dictionary will tell you, “the quality of being characterized by vigorous activity and progress.” But aside from being an SAT word, “dynamism” is an ethos that pervades the technology sector in the U.S., particularly in Silicon Valley. In recent years, has America lost its dynamist edge? Sure, we get a new iPhone every year, but where are the major, disruptive leaps we associate with tech-driven innovation? Evan is joined by Zach Graves, Executive Director at Lincoln Network. They discuss the state of tech and tech policy in the U.S., and how the rise of China challenges traditional views of free markets and industrial policy. Can bridging the gap between engineers in tech hubs and policymakers in Washington and state capitols help make America more dynamist?