Washington’s initial thinking about AI regulation has evolved from a knee-jerk fear response to a more nuanced appreciation of its capabilities and potential risks. Today on Faster, Please! — The Podcast, I talk with technology policy expert Neil Chilson about national competition, defense, and federal vs. state regulation in this brave new world of artificial intelligence.
Chilson is the head of AI policy at the Abundance Institute. He is a lawyer, computer scientist, and former chief technologist at the Federal Trade Commission. He is also the author of “Getting Out of Control: Emergent Leadership in a Complex World.”
In This Episode
* The AI risk-benefit assessment (1:18)
* AI under the new Trump Administration (6:31)
* An AGI Manhattan Project (12:18)
* State-level overregulation (15:17)
* Potential impact on immigration (21:15)
* AI companies as national champions (23:00)
Below is a lightly edited transcript of our conversation.
The AI risk-benefit assessment (1:18)
Pethokoukis: We're going to talk a bit about AI regulation, the future of regulation, so let me start with this: Last summer, the Biden administration put out a big executive order on AI. I assume the Trump administration will repeal that and do their own thing. Any idea what that thing will be?
We have a lead on the tech, we have the best companies in the world. I think a Trump administration is really going to amp up that rhetoric, and I would expect the executive order to reflect the need to keep the US in the lead on AI technology.
Chilson: Repealing the Biden executive order is actually part of the GOP platform, which does not say a lot about AI, but it does say that it's definitely going to get rid of the Biden executive order. I think that's the first order of business. As for the repeal and replace process . . . the previous Trump administration actually had a couple of executive orders on AI, and they were very big-picture. They were not nearly as pro-regulatory as the Biden executive order, and they saw a lot of the potential.
I'd expect a shift back towards a vision of AI as a force for good, and I'd expect a shift towards the international dynamics here, that we need to keep ahead of China in AI. We have a lead on the tech, we have the best companies in the world. I think a Trump administration is really going to amp up that rhetoric, and I would expect the executive order to reflect the need to keep the US in the lead on AI technology.
That emphasis differs from the Biden emphasis in what way?
The Biden emphasis, when you read the executive order, it has some nice language up top about how this is a great new technology, it's very powerful, but overwhelmingly the Biden executive order is directed at the risk of AI and, in particular, not existential risk, more the traditional risks that academics who have talked about the internet have had for a long time: these risks of bias, or risks to privacy, or risks to safety, or deepfakes. And to be honest, there are risks to all of these technologies, but the Biden executive order really pounded that home; the emphasis was very much on what are the problems that this tech could cause and what do we as the federal government need to do to get in here and make sure it's safe for everybody?
I would expect that would be a big change. I don't see, especially on the bias front, I don't see a Trump administration emphasizing that as a primary thing that the federal government needs to fix about AI. In fact, with people like Elon Musk having the ear of the president, I would expect maybe to go in the opposite direction, that these ideas around bias are inflated, that these risks aren't really real, and, to the extent that they are, that it's no business of the federal government to step in and tell companies how to bias or de-bias their products.
One thing that sort of confuses me on the Elon Musk angle is that it seemed that he was — at least used to be — very concerned about these somewhat science-fictional existential risks from AI. I guess my concern is that we'll get that version of Musk again talking to the White House, and maybe he says, “I'm not worried about bias, but I'm still worried about it killing us all.” Is there any concern that that theme, which I think seems to have faded a little bit from the public conversation (maybe I'm wrong), will reemerge?
I agree with you that I think that theme has faded. The early Senate hearings were very much in that vein, they were about the existential risk, and some of that was the people who were up there talking. This is something that's been on the minds of some of the leaders at the cutting edge of the tech space, and it's part of the reason why they got into it. There's always been a tension there. There is some sort of dynamic here where they're like, “This stuff is super dangerous and super powerful, so I need to be the one creating it and controlling it.” I think Musk still kind of falls in that bucket, so I share a little bit of that concern, but I think you're right that Congress has said, “Oh, those things seem really farfetched. That's not how we're going to focus our time.” I would expect that to continue even with a Musk-influenced administration.
I actually don't think that there is necessarily a big tension between that and a pushback against the sort of red-tape regulatory approach to AI that was kind of the more traditional pessimistic, precautionary approach to technology, generally. I think Musk is a guy who hates red tape. I think he's seen it in his own businesses, how it's slowed down launches of all sorts. You can hate red tape and be worried about existential risk; they're not necessarily in tension. But it'll be interesting to see how those play out, how Musk influences the policy of the Trump administration on AI.
AI under the new Trump Administration (6:31)
One issue that seems to come up over and over again is the differing opinions among technologists and venture capitalists about open source. How does that play out heading into a Trump administration? When I listen to the Andreessen Horowitz podcast, those guys seem very concerned.
They're going to get this software. They're going to develop it themselves. We can't out-China China. We should lean into what we're really good at, and that is a dynamic software-development environment, of which open source is a key component.
So there's a lot of disagreement about how open source plays out. Open source, it should be pointed out first, is a core technology across everything that people who develop software use. Most websites run on open source software. Most development tools have a huge open source component, and one of the best ways to develop and test technology is by sharing it with people and having people build on it.
I do think it is a really important technology in the AI space. We've seen that already: people are building smaller models, doing new things in open source that would cost a lot of money to do in the first instance in a closed-source setting.
The concern that people raise, especially in the national security space or around national competition, is that this sort of exposes our best research to other countries. I think there's a couple of responses to that.
The first one is that closed source is no guarantee that those people don't have that technology as well. In fact, most of these models fit on a thumb drive. Most of these AI labs are not run like nuclear facilities, and it's much easier to smuggle a thumb drive out than it is to smuggle a gram of plutonium or something like that. They're going to get this software. They're going to develop it themselves. We can't out-China China. We should lean into what we're really good at, and that is a dynamic software-development environment, of which open source is a key component.
It also offers, in many ways, an alternative to centralized sources of artificial intelligence models, which can offer a bunch of user interface-based benefits. They're just easier to use. It's much easier to log into OpenAI and use their ChatGPT than it is to download and build your own model, but it is really nice as a competitive gap filler to have thousands and thousands of other models that might do something specific, or have a specific orientation, which you can train on your own. And those exist because of the open source ecosystem. So I think it solves a lot of problems, probably a lot more than it creates.
So what would you expect — let's focus on the federal level — for this congress, for the Trump administration, to do other than broadly affirm that we love AI, we hope it continues? Will there be any sort of regulatory rule, any sort of guidance, that would in any way constrain or direct this technology? Maybe it's in the area of the frontier models, I don't know.
I think we're likely to see a lot of action at the use level: What are the various uses of various applications and how does AI change that? So in transportation and healthcare . . . this is a general purpose technology, and so it's going to be deployed in lots of spaces, and a lot of these spaces already have a lot of regulatory frameworks in place, and so I think we'll see lots of agencies looking to see, “Hey, this new technology, does it really change anything about how we regulate medical devices? If it does, how do we need to accommodate that? What are the unique risks? What are the unique opportunities that maybe the current framework doesn't really allow for?”
I think we'll see a lot of that. I think, once you get up to the abstract model level, it's much harder to figure out what problem both are we trying to solve at the model level and do we have the capability to solve at the model level. If we're worried about people developing bio weapons with this technology, is making sure the model doesn't allow that, is that useful? Is it even possible? Or should we focus those attentions maybe down on, people can't secure the components that they need to execute a biohazard? Would that be a more productive place? I don't see a lot of action, honestly, at the model level.
Maybe there'll be some reporting requirements or training requirements. The executive order had those, although they used something called the Defense Production Act — I think probably unconstitutionally, the way they used it. But that's going to go away. If that gets filled in by Congress, if there's some sort of reporting regime — maybe that's possible, but Congress doesn't seem to be able to get those types of really high-level tech regulations across the line. They haven't done it with privacy legislation for a long time, and everybody seems to think that would be a good idea.
I think we'll continue to see efforts at the agency level. One thing Congress might do is spend some money in this space, so maybe there will be some new investment, or maybe the national laboratories will get some money to do additional AI research. That has its own challenges, but most of them are financial challenges; they're not so much about whether or not it's going to impede the industry. So that's kind of how I think it'll likely play out at the federal level.
An AGI Manhattan Project (12:18)
A report just came out (yesterday, as we're recording this) from the outside advisory group that advises the government on US-China relations, and they're calling for a Manhattan Project to get to artificial general intelligence, I assume before China or anybody else.
Is that a good idea? Do you think we'll do that? What do you make of that recommendation, which caused quite a stir when it came out?
For the most part, artificial general intelligence, I don't understand what the appeal of that is, frankly . . . Why not train something that could do something specific really well?
Yeah, it's a really interesting report. If you read through the body of the report, it's pretty standard international competitiveness analysis that says, “What are the supply chains for chips? How does it look? How do we compare on talent development? How do we compare on industry backing and investment?” Things like that. And we compare very well, overall, the report says.
But then, all of a sudden at the top level, the first recommendation talks about artificial general intelligence. This is the kind of AI that doesn't exist yet, but it's the kind that could basically do everything a human could do, at the intellectual level that a human could do it. It's interesting because that recommendation doesn't seem to be founded on anything that's actually in the report. There's no other discussion in the report about artificial general intelligence, or how important it is strategically, or anything like that, and yet they want to spend Manhattan Project-level amounts of money (in today's dollars, I think that'd be something like $30 billion) to create this artificial general intelligence. I don't know what to make of that, and, more than that, I think it's very unlikely to move the policy discussion. Maybe it moves the Overton window, so people are talking like, “Yeah, we need a Manhattan Project,” but I don't think that it's likely to do anything.
For the most part, artificial general intelligence, I don't understand what the appeal of that is, frankly. It has a sort of theoretical appeal, that we could have a computer that could do all the things that a person could do, but in the modern economy, it's actually better to have things that are really good at doing a specific set of things rather than having a generalist that you can deploy lots of different places, especially if you're talking about software. Why not train something that could do something specific really well? I think that would slot into our economy better. I think it's much more likely to be the most productive use of the intense computation time and money that it takes to train these types of models. So it seems like a strange thing to emphasize in our federal spending, even if we're talking about the national security implications. It would seem like it'd be much better to train a model that's specifically built for some type of drone warfare or something rather than trying to make it good at everything and then say, “Oh, now we're going to use you to fly drones.” That doesn't seem to make a ton of sense.
State-level overregulation (15:17)
We talked about the federal level. Certainly — and not that the states seem to need a nudge, but if they see Washington doing less, I'm sure there'll be plenty of state governments saying, “Well then we need to do more. We need to fill up the gap with our state regulation.” That already seems to be happening. Will that continue to happen, and can the Trump administration stop that?
I think it will continue to happen; the question is what kind of gap is left by the Trump administration. I would say what the Biden administration left was a vision gap. They didn't really have an overarching vision for how the US was going to engage with this technology at the federal level, unlike the Clinton administration, which set out a pretty clear vision for how the federal government planned to engage on the early version of the internet. What it said was, for some really good reasons, we're going to let the commercial sector lead on development here.
I think sending a signal like that could have a sort of bully-pulpit effect, especially in redder states. You'll still see states like California and New York listening to Europe on how to do stuff in this space.
Still? Are we still listening to . . . Who are the people out there who think, “They've got it figured out”? I understand that maybe that's your initial impulse when you have a new technology and you're like, “I don't know what to do, so who is doing something on it?” But we've had a little bit of time and I just don't get anybody who would default to be like, “Man, we're just going to look at a couple of EU white papers and off to the races here in our state.”
I think we're starting to see . . . the shopping of bills that look a lot like the way privacy has worked across the states, and in some cases are being pushed by the same organizations that represent compliance companies saying, “Hey, yeah, we need to do all this algorithmic bias auditing, or safety auditing, and states should require it.”
I think a lot of this is a hangover of the social media fights. AI, if you poll it just at that level, if you're like, “Hey, do you think AI is going to be good or bad for your job or for the economy?” Americans are somewhat skeptical. It's because they think of AI in a cultural context that includes Terminator and automation. They don't think about the thousands of applications on their phones that use artificial intelligence.
So I think there's a political moment here around this. The Europeans jumped in and said, “Hey, we're the first to regulate in this space comprehensively.” I think they're dialing that back since some of their member states are like, “Hey, this is killing our own homegrown AI industry.” But for some reason, you're right, California and New York seem to be embracing that, and I think they probably will continue to. At the state level, there are just weird incentives to do something, and then you don't really pay a lot of consequences down the road.
Having said that, there was a controversial bill that was very aggressively pushed, SB 1047, in California over the summer, and it got killed. It got canned by Gavin Newsom in the end. And I think that's a sort of unique artifact of California's “go along to get along” legislative process, where even people who don't support bills vote for them, kind of knowing that the governor will bring down the veto when it doesn't make political sense.
All of this is to say, California's going to California. What concerns me is that we're starting to see the shopping of bills that look a lot like the way privacy has worked across the states, and in some cases are being pushed by the same organizations that represent compliance companies saying, “Hey, yeah, we need to do all this algorithmic bias auditing, or safety auditing, and states should require it.”
There's a Texas draft bill being floated right now, and you wouldn't think that Texas would be on the frontier of banning differential effects and bias from AI. It doesn't really sound particularly red-state-y, but these things are getting shopped around, and if it moves in Texas, it'll move other places too. I worry about that level of red tape coming at the state level, and that's just going to be ground warfare on the legislative front at the state level.
So federal preemption, what is that and how would that work? And is that possible?
Federal preemption, where federal law overrides state regulation, is really hard in this space because the technology is so general. Congress could, of course, write something that was very broad and preempted certain types of regulation of models, and maybe that's a good idea; I've seen some draft language around that.
On the other hand, I do believe in federalism, and these aren't quite the same sort of network-based technologies that only make sense in a national sweep. So maybe there's an argument that we should let states suffer the consequences of their own regulatory approaches. That hurts my heart a little bit to think about, because there are a lot of talented people in those states who are going to find out that the lawyers are their main constraint. Those types of transaction costs will slow us down. If it looks like we're falling behind in the US because we can't get out of our own way regulatorily, I think there will be more impulse to fix things.
There are some other creative solutions, such as interstate compacts, to try to get people to level up across multiple states on how they're going to treat AI and allow innovation to flourish, and so I think we'll see more of those experiments. But it is really hard at the federal level to preempt, just because there are so many state-based interests who are going to push back against that sort of thing.
Potential impact on immigration (21:15)
As far as AI influencing what we do elsewhere — one thing you wrote about recently in a really great essay, which I've already drawn upon in some of these questions, is immigration and AI talent coming to the United States. Given what I think is now a widely accepted understanding, that this is an important technology and we certainly want to be the leader in it, does that change how we think about immigration, at least very high-skilled immigration?
We should be wanting the most talented people to come here and stay here.
I think it should. Frankly, we should have changed our minds about some of this stuff a long time ago. We should be wanting the most talented people to come here and stay here. The most talented people in the world already come here for school often. When I was in computer science grad school, it was full of people who really desperately wanted to stay in the US and build companies and build products, and some of them struggled really hard to figure out a way to do it legally.
I think that making it easier for those people to stay is essential to keeping not just our lead in the world, I don't want to say it that way — I mean that's important, I think national competitiveness is sort of underrated, I think that is valuable — but those people are the most productive in the US system, where they can get access to venture capital that's unlike any other part of the planet. They can get access to networks of talent that are unavailable in other parts of the planet. Keeping them here is good for the US, but I think it's good overall for technological development, and we should really, really, really focus on how to make that easier and more attractive.
AI companies as national champions (23:00)
This isn't necessarily a specific AI issue, but again, as you said earlier, it seems like a lot of the debate, initially, is really a holdover from the social media debates about moderation, and bias, and all that, and a lot of those people and frameworks just got glommed onto AI.

Another aspect is the antitrust concern: now we're worried about these big companies owning these platforms, and they're biased.

Do the politics around Big Tech change if we begin to see our big companies, which have been leading in AI and doing a lot of R&D, as the vanguard companies that will keep us ahead of China?
. . . in contrast to the Biden sort of “big-is-bad” rhetoric that they sort of leaned into entirely, I think a Trump administration is going to bring more nuance to that in some ways. And I do think that there will be more of a look towards our innovative companies as being the vanguard of what we do in the US.
I think it already has, honestly. You saw early on, the Senate hearings around AI were totally inflected with the language of social media and that network-effects type of ecosystem. AI does not work like that. In fact, the feedback loops are so much faster from these models: we saw things like Google Gemini producing ahistorical renderings of the founding fathers, and that got so much shouting on X and lots of other places that Google very quickly adjusted, tweaked its path. I think we're seeing the toning down of that rhetoric and the recognition that these companies are creating a lot of powerful, useful products, and that they are sort of national champions.
Trump, on the campaign trail, when asked about breaking up Google amid ongoing antitrust litigation, was like, “Hold on guys, breaking up these companies might not be in our best interest. There might be other ways we can solve these types of problems.” In contrast to the Biden sort of “big-is-bad” rhetoric that they sort of leaned into entirely, I think a Trump administration is going to bring more nuance to that in some ways. And I do think that there will be more of a look towards our innovative companies as being the vanguard of what we do in the US.
Now, having said that, obviously I think there's tons of AI development that is not inside of these largest companies, in the open source space and especially in the application layer, building on top of some of these foundation models, and so I think that ecosystem is also extremely important. Things that sort of preference the big companies over the small ones, I would have a lot of concerns about. There have been regulatory regimes proposed that, even while opposed by some of the bigger companies, would certainly be possible for them to comply with in a way that small companies would struggle with, and open-source developers just don't have any sort of nexus with which to comply, since there is no actual business model propping that type of approach up. So I'd want to keep it pretty neutral between the big companies, the small companies, and open source, while having the cultural recognition that big companies are extremely valuable to the US innovation ecosystem.
If you had some time with, I don’t know, somebody, the president, the vice president, the Secretary of Commerce, someone in an elevator going from the first to the 10th floor, and you had to quickly say, “Here's what you need to be keeping in mind about AI over the next two to four years,” what would you say?
I think the number one thing I would say is that, at the state level, we're wrapping a lot of red tape around innovative companies and individuals, and that we need to find a way to clear that thicket or stop it from growing any further. That's the number one challenge that I see facing this.
Secondary to that, I would say the US government needs to figure out how to take advantage of these tools. The federal government is slow to adopt new technologies, but this technology has a lot of applications to the types of government work that hundreds of thousands of federal employees do every day, and so finding ways to use AI to streamline that work and do the job better is, I think, really valuable. I think it would be worth some investment at the federal level to think about how to do that well.
On sale everywhere The Conservative Futurist: How To Create the Sci-Fi World We Were Promised
Micro Reads
▶ Economics
* Productivity During and Since the Pandemic - San Francisco Fed
* The Effect of COVID-19 Immigration Restrictions on Post-Pandemic Labor Market Tightness - St. Louis Fed
* Trump Plans Tariffs on Canada, China and Mexico That Could Cripple Trade - NYT
▶ Business
* Nvidia’s new AI audio model can synthesize sounds that have never existed - Ars
* Europe’s Mistral expands in Silicon Valley in hunt for AI staff - FT
▶ Policy/Politics
* Musk Wants $2 Trillion of Spending Cuts. Here’s Why That’s Hard. - WSJ
* AI Governance: From Fears and Fearmongering to Risks and Rewards - AEI
* Newsom says California to offer EV subsidies if Trump kills federal tax credit - WaPo
▶ AI/Digital
* A new golden age of discovery - AI Policy Perspectives
* How Do You Get to Artificial General Intelligence? Think Lighter - Wired
* Is Creativity Dead? - NYT Opinion
* The way we measure progress in AI is terrible - MIT
* AI's scientific path to trust - Axios
* AI Dash Cams Give Wake-Up Calls to Drowsy Drivers - Spectrum
▶ Biotech/Health
* Combining AI and Crispr Will Be Transformational - Wired
* Neuralink Plans to Test Whether Its Brain Implant Can Control a Robotic Arm - Wired
* Scientists are learning why ultra-processed foods are bad for you - Economist
▶ Clean Energy/Climate
* Taxing Farm Animals’ Farts and Burps? Denmark Gives It a Try. - NYT
* These batteries could harness the wind and sun to replace coal and gas - WaPo
▶ Robotics/AVs
* On the Wings of War - NYT
▶ Up Wing/Down Wing
* ‘Genesis’ Review: Rise of the New Machines - WSJ
* The Myth of the Loneliness Epidemic - Asterisk
▶ Substacks/Newsletters
* The Middle Income Trap - Conversable Economist
* America's Productivity Boom - Apricitas Economics
* The Rise of Anthropic powered by AWS - AI Supremacy
* Data to start your week - Exponential View
* Trump's economic team is on a collision course with reality - Slow Boring
* Five Unmanned SpaceX Starships to Mars in 2026 with Thousands of Teslabots - next BIG future