Monday through Friday, Marketplace demystifies the digital economy in less than 10 minutes. We look past the hype and ask tough questions about an industry that’s constantly changing.
It's the last Friday in April and it's time for Marketplace Tech Bytes Week in Review.
This week, we'll talk about how the Federal Trade Commission is suing Uber over its subscription service.
Plus, how the VC world is navigating the uncertainty created by the trade war.
But first, a nonprofit pivot is facing some challenges. OpenAI, the maker of ChatGPT, was founded about a decade ago as a nonprofit research lab. It's now looking to restructure as a for-profit — specifically, a public benefit corporation.
But that transformation is facing resistance.
About 10 former OpenAI employees, along with several Nobel laureates and other experts, have written an open letter asking regulators in California and Delaware to block the change.
They argue that nonprofit control is crucial to OpenAI's mission, which is to “ensure that artificial general intelligence benefits all of humanity.”
Marketplace’s Stephanie Hughes spoke with Jewel Burks Solomon, managing partner at Collab Capital, about how unusual it is to see this kind of conversion.
An Open Letter - Not For Private Gain
Ex-OpenAI workers ask California and Delaware AGs to block for-profit conversion of ChatGPT maker - from the Associated Press
OpenAI’s Latest Funding Round Comes With a $20 Billion Catch - from the Wall Street Journal
FTC Takes Action Against Uber for Deceptive Billing and Cancellation Practices - from the Federal Trade Commission
FTC sues Uber over difficulty of canceling subscriptions, “false” claims - from Ars Technica
White House Considers Slashing China Tariffs to De-Escalate Trade War - from the Wall Street Journal
VC manufacturing deals were already declining before tariffs entered the picture - from Pitchbook
TikTok is going to be testing a new crowd-sourced fact-checking system called Footnotes. It seems similar to the Community Notes systems already in use on other social media, such as X and Facebook.
TikTok is also keeping its current fact-checking systems in place. The way these community systems generally work is, say someone makes a post stating "whales are the biggest fish out there." Another user could add a note saying "actually, whales are mammals, and here's a source with more information."
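To make that mechanic concrete, here's a toy Python sketch of the “bridging” idea behind Community Notes-style ranking: a note is surfaced only when raters from otherwise-disagreeing groups both find it helpful. The notes, clusters and threshold here are invented for illustration; this is not TikTok's or X's actual ranking code.

```python
# Toy sketch of "bridging-based" note ranking: a note is shown only
# when raters who usually disagree BOTH find it helpful.
# All data below is invented for illustration.
from collections import defaultdict

# Each rating: (rater's viewpoint cluster, found_helpful)
ratings = {
    "note-whales-are-mammals": [
        ("cluster_a", True), ("cluster_a", True),
        ("cluster_b", True), ("cluster_b", True),
    ],
    "note-partisan-dunk": [
        ("cluster_a", True), ("cluster_a", True),
        ("cluster_b", False), ("cluster_b", False),
    ],
}

def should_show(note_ratings, threshold=0.6):
    """Show a note only if every viewpoint cluster rates it helpful."""
    by_cluster = defaultdict(list)
    for cluster, helpful in note_ratings:
        by_cluster[cluster].append(helpful)
    return all(
        sum(votes) / len(votes) >= threshold
        for votes in by_cluster.values()
    )

for note, rs in ratings.items():
    print(note, "->", "show" if should_show(rs) else "hold")
# note-whales-are-mammals -> show   (helpful across both clusters)
# note-partisan-dunk -> hold        (only one cluster finds it helpful)
```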
Marketplace’s Stephanie Hughes spoke with Vanderbilt psychology professor Lisa Fazio about why this model of "citizen fact-checking" is catching on.
The use of algorithmic software in setting residential rents has come under scrutiny in recent years. In 2024, the Joe Biden administration sued real estate software company RealPage, alleging that its algorithm, which aggregates and analyzes private data on the housing market, enables landlords to collude in pricing and stifles competition. There's no word yet on what the second Donald Trump administration's Justice Department will do with this case. But in the meantime, some cities are banning the use of these algorithms completely. Marketplace’s Meghan McCarty Carino spoke with Robbie Sequeira, who has been reporting on the issue for Stateline.
We've sometimes wished we could have our own Wendy Rhoades, the performance coach at the hedge fund on the TV show “Billions.” Most workplaces, however, aren't bringing in billions and can't afford a Wendy. But an upskilling platform called Multiverse uses artificial intelligence to provide personalized, on-the-job guidance. Its AI coach, Atlas, helps workers expand their abilities and keep themselves relevant in an economy that makes skills obsolete faster than ever before, says Ujjwal Singh, chief product and technology officer at Multiverse.
Developers of mobile apps have "room for improvement" in making their platforms fully accessible for disabled users, according to a new report from the software company ArcTouch and the digital research platform Fable.
The report looked at 50 popular apps and assessed them for features that improve accessibility, like screen reader support, text size adjustability, voice controls and multiple screen orientations. The apps were tested by disabled users, who reported a poor or failing experience almost three-quarters of the time.
Marketplace’s Meghan McCarty Carino spoke with Ben Ogilvie, head of accessibility at ArcTouch, to learn more about why so many apps are behind.
Nvidia gets caught up in the trade war, the titans of Twitter/X debate intellectual property law — and the Federal Trade Commission's antitrust case against Meta kicks off in court.
We're digging into all of it on today's Tech Bytes: Week in Review. Marketplace’s Meghan McCarty Carino speaks with Anita Ramaswamy, columnist at The Information, about what we learned in week one of Meta's monopoly trial.
Flying cars have been a staple of science-fiction visions of the future for ages. Perhaps most famously in “Back to the Future Part II.” The film may have overshot the mark a bit with Doc and Marty McFly navigating full-on air highways in 2015. But Utah is pushing for the technology to take off by 2034, when the state hosts the Olympic and Paralympic Winter Games.
We're not exactly talking about flying DeLoreans or vehicles you'd recognize as a car, but rather small, lightweight aircraft for traveling shorter distances. Reporter Caroline Ballard got a first look at the air taxis.
China is responsible for most of the world’s processing of rare earth metals and minerals, but its new export restrictions have raised the stakes for U.S. efforts to build its own supply chain and processing industry. Barbara Arnold, a professor of mining engineering at Penn State, says there are options, but they require time, development and investment.
Surveillance technology like automated license plate readers has become commonplace in policing. The readers have made it easier to locate stolen vehicles and track suspects, but they've also raised concerns about civil liberties. Cardinal News Executive Editor Jeff Schwaner took a 300-mile drive across his home state of Virginia to see how often his car would be recorded. Marketplace’s Meghan McCarty Carino spoke with Schwaner about his experience and issues related to privacy and who has access to the data.
One area where artificial intelligence has been swiftly adopted is software coding. Google even boasted last year that more than a quarter of its code was generated by AI. But the technology is also generating challenges to the traditional technical job interview, where candidates are given programming problems as a way to assess their skills. And lately it’s become apparent that a lot of applicants are using AI to give themselves a boost, according to recent reporting from Business Insider's Amanda Hoover. Marketplace’s Meghan McCarty Carino spoke with Hoover about the controversy over applicants using AI while interviewing for jobs that often use AI.
The tariff rollercoaster has created a lot of uncertainty in the tech industry. We're digging into how it's playing out for makers of consumer tech, e-commerce platforms and AI. Marketplace’s Meghan McCarty Carino speaks with Paresh Dave, senior writer at Wired, about all these topics for this week’s Tech Bytes.
Etsy, the online marketplace known for selling one-of-a-kind handmade items, is hoping that artificial intelligence can boost sales of those crafty creations. The site has been selling less stuff and recently announced a plan to double down on high-quality and unique merchandise over cheap and mass-produced. Now, it's launching AI-curated product collections, based on trends like island luxe or maximalism. They build on the work of human trendspotters, using AI to scan the site and tag thousands of matching products. Nick Daniel, chief product officer at Etsy, explains what the company calls algotorial curation to Marketplace’s Meghan McCarty Carino.
After President Donald Trump launched his “Liberation Day” tariff agenda, the tech-heavy Nasdaq Composite stock index suffered its biggest plunge since March 2020. The so-called Magnificent 7 — Nvidia, Apple, Meta, Amazon, Google, Microsoft and Tesla — lost a combined $1.8 trillion of market value in two days. The tariff-induced downturn in business conditions is likely to be temporary, according to Daniel Newman, CEO and chief analyst at the Futurum Group, a tech research and advisory firm. Newman told Marketplace’s Meghan McCarty Carino that tech consumers might feel more of the pain, but not much can stop corporate AI adoption and the data center buildout.
Microsoft celebrates its 50th anniversary this month. The company started as a small software startup co-founded by Bill Gates and Paul Allen in an Albuquerque, New Mexico, garage. It went on to revolutionize personal computing, business productivity and now — it hopes — artificial intelligence with its big investment in OpenAI, the maker of ChatGPT. Microsoft has set about integrating the technology across its products, and it recently unveiled a slew of upgrades to its Copilot AI assistant. They include Memory, which retains personal details like the foods you like or your kids' birthdays and can use that information to make your dinner reservations or pick out a gift. The Vision upgrade enables the AI to analyze photos and video and provide tips on, say, redecorating your kitchen. Marketplace's Meghan McCarty Carino spoke with Yusuf Mehdi, Microsoft's consumer chief marketing officer, to learn more about the new features.
Rising demand for electricity, largely to power the artificial intelligence boom, has stirred a resurgence in nuclear energy. Older plants like Three Mile Island in Pennsylvania are being brought out of retirement, but there’s also investment in smaller-scale reactors with different designs. The fresh interest in nuclear generation has also renewed discussion about how to build these facilities ethically, in other words, with an approach that’s sensitive to the needs of the community and the world at large. Marketplace’s Meghan McCarty Carino spoke with Aditi Verma, assistant professor of nuclear engineering at the University of Michigan, who co-created an undergrad course about ethically designing modern nuclear facilities. Verma discussed her effort to train young engineers to transform the industry.
OpenAI — the maker of ChatGPT — keeps raising more money, this time in a $40 billion round led by SoftBank. We’ll get into the strings attached in Marketplace “Tech Bytes — Week in Review.” Plus, what’s going on with Tesla’s sales slump? And how much is its polarizing CEO, Elon Musk, to blame? But first, the clock is ticking on a TikTok sale. The extended deadline, which may or may not be a real deadline according to President Donald Trump, is coming Saturday. As of this episode’s recording, the hugely popular short-form video app was supposed to find a U.S. buyer or be banned, and plenty of suitors have thrown their hats into the ring. Marketplace’s Meghan McCarty Carino spoke with Maria Curi, tech policy reporter at Axios, about all these topics and more.
There’s been mounting concern in recent years about the harms of social media use for kids. The sites can be addictive, ripe for cyberbullying and contribute to increased rates of body dysmorphia, anxiety and depression.
The growing evidence has led at least a dozen states to pass laws attempting to restrict access to online platforms for kids. The Kids Off Social Media Act, a bipartisan bill in the Senate, would bar minors under 13 from social media.
But despite the risks, there can be benefits to finding communities online, especially for LGBTQ+ teens and young adults. A recent report jointly released by the Born This Way Foundation and the nonprofit Hopelab found that young people in these demographics felt significantly safer expressing their identities online compared to in-person spaces.
Registration for the H-1B visa lottery closed last week. The tech industry has long been the biggest beneficiary of this program for specialized workers. But uncertainty has been spreading due to the Trump administration’s restrictive stance on immigration policy. Even legal immigrants have felt the crackdown. It’s led some companies to advise their H-1B holders not to leave the country for fear that they could be barred from returning. Marketplace’s Meghan McCarty Carino spoke with Gerrit De Vynck, who wrote about risks to the visa program for The Washington Post.
Yes, Napster is still alive and kicking. The peer-to-peer file-sharing company that became synonymous with music piracy in the early 2000s was bought by a company called Infinite Reality Labs last week for about $207 million. It’s the latest in a string of attempts to revive the brand. After it was shut down by the courts in 2001 and declared bankruptcy, Napster returned as a music subscription service, a marketplace for non-fungible tokens and now a virtual reality-metaverse destination. Marketplace’s Meghan McCarty Carino spoke with Harry McCracken, global technology editor at Fast Company, who has been following Napster from the beginning. He says the brand still has some power.
Chinese President Xi Jinping is pushing for the country to be a global leader in artificial intelligence by 2030 as Beijing competes with Washington to gain an edge in advanced technology. The release of AI chatbot DeepSeek, which stunned industry experts in January, gave a boost to China’s hopes of catching up to the U.S. despite restrictions on the advanced chips used to power AI.
AI company Anthropic recently added web search to its chatbot Claude. It joins other artificial intelligence tools like Perplexity and ChatGPT in delivering one clear answer to a web search query instead of pages and pages of links. Plus, 23andMe declared bankruptcy. So what’s gonna happen to all that genetic data? But first — the Signal group chat heard round the world. A Trump administration official appears to have inadvertently invited a journalist into a conversation about sensitive national security issues on the secure messaging app Signal. The app does offer end-to-end encryption, the gold standard for security in consumer-level messaging apps, but that doesn’t make it foolproof for the most sensitive of data. Marketplace’s Meghan McCarty Carino spoke with Joanna Stern, senior personal technology columnist at The Wall Street Journal, to break down all these topics for this week’s Marketplace “Tech Bytes: Week in Review.”
On today’s episode of “Marketplace Tech,” Meghan McCarty Carino speaks with Daniel Cohan, professor of civil and environmental engineering at Rice University, about virtual power plants. These aren’t physical generating stations. They’re more of a network, usually managed by a local utility, that aggregates electricity from different sources like businesses or homes. Essentially, these customers give energy back to the grid or help the utility balance supply and demand. As electricity demand grows, thanks to power-hungry AI data centers, electric cars and extreme weather, some providers are turning to virtual power plants to reduce strain on the grid.
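To picture how that aggregation might work, here's a toy dispatch loop in Python: a utility facing a peak-demand shortfall asks enrolled homes to export power from their batteries until the gap is covered. The homes, capacities and greedy dispatch are invented for illustration; real virtual power plant programs involve forecasting, pricing and telemetry.

```python
# Toy sketch of virtual power plant dispatch: pool many small
# resources and draw on them to cover a grid shortfall.
# All numbers are invented for illustration.

homes = [
    {"id": "home-1", "battery_kw": 5.0, "available": True},
    {"id": "home-2", "battery_kw": 3.5, "available": True},
    {"id": "home-3", "battery_kw": 7.0, "available": False},  # opted out
]

def dispatch(shortfall_kw, resources):
    """Ask available homes to export power until the shortfall is met."""
    plan, remaining = [], shortfall_kw
    for r in resources:
        if remaining <= 0:
            break  # shortfall covered; stop asking homes
        if not r["available"]:
            continue  # skip homes that opted out of this event
        export = min(r["battery_kw"], remaining)
        plan.append((r["id"], export))
        remaining -= export
    return plan, max(remaining, 0.0)

plan, unmet = dispatch(shortfall_kw=8.0, resources=homes)
print(plan)   # [('home-1', 5.0), ('home-2', 3.0)]
print(unmet)  # 0.0 -- the 8 kW gap was fully covered
```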
Last Friday, the Securities and Exchange Commission held its first-ever crypto roundtable, a discussion with industry leaders and skeptics to answer a grand question: How should the SEC regulate crypto? Should officials regulate crypto tokens like bonds and stocks? The agency under President Donald Trump is taking what many see as a friendlier approach to cryptocurrency and has already dropped a number of lawsuits against various crypto exchanges initiated during the Biden administration. Brady Dale, Axios reporter and author of the Axios Crypto newsletter, returns to the show to discuss why the question of regulating crypto like a security is a very complicated one to answer.
There’s a lot of hope that artificially intelligent chatbots could help provide sorely needed mental health support. Early research suggests humanlike responses from large language models could help fill in gaps in services. But there are risks. A recent study found that prompting ChatGPT with traumatic stories — the type a patient might tell a therapist — can induce an anxious response, which could be counterproductive. Ziv Ben-Zion, a clinical neuroscience researcher at Yale University and the University of Haifa, co-authored the study. Marketplace’s Meghan McCarty Carino asked him why AI appears to reflect or even experience the emotions that it’s exposed to.
The electric vehicle industry in the Southeast is growing rapidly, with increased sales, charging stations and manufacturing. Buoyed by notable victories in the last couple of years, the United Auto Workers union is revving up efforts to organize the EV and battery sector in the South. One target is a sprawling campus in rural Kentucky that, once completed, will be one of the largest EV battery plants in the world. A supermajority of workers at BlueOval SK has asked the National Labor Relations Board for a vote on joining the United Auto Workers. The nearly $6 billion electric vehicle battery campus in Glendale, Kentucky, is part of a joint venture between Ford and South Korea’s SK On.
The stock market has been a tad volatile lately. But this month, the digital physical therapy company Hinge Health filed for an initial public offering. Plus, a new tool out of Stanford University evaluates how various AI models perform in real-world health care. It grades them on tasks from patient education to clinical note generation. But first, Nvidia just hosted its annual GTC confab, where it announced a whole lot of collaborations and, of course, some new and improved chips. Main takeaway: The company has its fingers in a bunch of AI pies. Marketplace’s Meghan McCarty Carino discusses all of this with Christina Farr, managing director at Manatt Health.
Stanford University has long been a feeder for the neighboring tech industry, with graduates often heading to the brand names of Silicon Valley. But the times, they are a-changin’, according to writer Jasmine Sun. She reported recently for the San Francisco Standard that building tech for the military has become cool on campus. One student, Divya, said her “most effective and moral friends are now working for Palantir.” Marketplace’s Meghan McCarty Carino spoke with Sun about how this shift compares to when she attended Stanford in the late 2010s.
Federal officials are warning consumers against a type of cyberattack that’s been on the rise. It’s called Medusa, a ransomware program that uses tactics like phishing to infect a target’s system and encrypt their data, which hackers then threaten to publicly release unless a ransom is paid. Medusa is just one example of how hackers are evolving their strategies at a time when federal cybersecurity resources are being cut by the Donald Trump administration. Marketplace’s Meghan McCarty Carino spoke with Lesley Carhart, director of incident response for North America at cybersecurity firm Dragos, to learn more about the use of embarrassment as a weapon and the impact of funding cuts on digital safety.
You could say once your company becomes a verb, you’ve arrived. And “Venmo me” is a pretty common phrase these days. Mobile payment apps like Venmo, along with Zelle and Cash App, are becoming pretty widespread, especially among young people. According to the Federal Reserve Bank of Atlanta, consumers under the age of 25 were twice as likely to have used some kind of mobile payment app compared to older Americans. But as with any form of money, there is etiquette about how to use them. Marketplace’s Stephanie Hughes spoke with Yanely Espinal, host of Marketplace’s “Financially Inclined,” a video podcast that provides money lessons for teens, about the do’s and don’ts of these payment apps.
Back when the pandemic first hit, many students received tablets or laptops from their schools. Schools also wanted to know what students were doing on those devices, so demand for AI-powered software to monitor students’ digital activities also grew. That surveillance software is the subject of a new investigation from the Associated Press and The Seattle Times, which Claire Bryan co-authored. Marketplace’s Stephanie Hughes asked her what sort of things this surveillance software might flag.
We are taking a look at how the tech industry is pushing back against federal cuts to artificial intelligence and science. Plus, Waymo is expanding its self-driving services in Silicon Valley. But first, Chinese e-commerce giant Alibaba this week released an AI model called R1-Omni, which the company says can read human emotions. Alibaba shared a demo on the coding platform GitHub that accurately described a character as being angry and experiencing fear. Marketplace’s Stephanie Hughes is joined by Jewel Burks Solomon, managing partner at venture firm Collab Capital, to break down these stories.
This week, we’ve been exploring the lasting impacts of the COVID-19 pandemic. In March 2020, we spoke about what might happen with futurist Amy Webb, the CEO of the Future Today Strategy Group. She predicted, among other things, that we would give up more personal data around our health and location. Then on the show in 2021, she said more definitively that privacy was dead. This week, Marketplace’s Stephanie Hughes spoke with Webb again. They discussed the current state of digital privacy, the lessons not learned from the pandemic and, as Webb sees it, the victory of politics over planning.
In the spring of 2020, 77% of American public schools moved to online distance learning when the pandemic hit, according to data from the U.S. Department of Education. Prior to the pandemic, you could say that schools were trickling into the digital age. Then, when COVID changed everything, they were basically tossed into it. Some educators adapted quickly, like Bebi Davis, who was working as a vice principal in Honolulu at the time. She’s now principal of Princess Victoria Kaiulani Elementary. Going totally virtual, she said, meant introducing an onslaught of technology — videoconferencing, classroom management software and messaging systems. Marketplace’s Stephanie Hughes asked Davis about the school system’s experience adopting so much tech all at once.
Five years ago today, after the World Health Organization declared the COVID-19 outbreak a pandemic, there was a widespread shift to remote work for many workers who were considered nonessential. And people had to get used to seeing their colleagues mainly on a screen. In recent years, some companies have required employees to return to the office full time. But remote work remains a major part of many people’s lives, far more than in 2019. Marketplace’s Stephanie Hughes spoke with Anita Blanchard, a professor of psychological and organizational science at the University of North Carolina at Charlotte, about what’s lost when workers don’t interact in the same physical space.
March 11 marks five years since the World Health Organization officially declared the COVID-19 outbreak a pandemic. Tracking the virus has been key to understanding where outbreaks are occurring, and one tracking tool that had been mostly on the shelf prior to the pandemic is wastewater surveillance. That’s pretty much what it sounds like — testing what we flush down the toilet, which eventually lands in what’s known as a sewer shed. Marketplace’s Stephanie Hughes spoke with molecular virologist Marc Johnson at the University of Missouri about the advantages of wastewater surveillance.
In this week’s Marketplace “Tech Bytes: Week in Review,” TSMC announced it’s investing an additional $100 billion to make chips in the U.S. Plus, a co-founder of the social media platform Reddit joins a bid to buy TikTok. But first, let’s talk about the stock market. A number of tech companies watched their stocks sink this week, when new tariffs on China, Mexico and Canada were put in place. That volatility continued when President Donald Trump backtracked on the policy, at least temporarily. Marketplace’s Stephanie Hughes spoke with Natasha Mascarenhas, reporter at The Information, to unpack these stories and more.
Today, we’re wrapping up our series “The Infinite Scroll,” where we look at kids’ lives on social media and the risks and rules they face. One approach some states take to creating rules to mitigate risk is known as an age-appropriate design code, a law that puts the onus on tech companies to design products that keep kids safer when they’re on the internet. California passed its Age-Appropriate Design Code Act in 2022, as did Maryland last year. Both have been challenged by lawsuits from the tech industry. State Delegate Jared Solomon, a sponsor and lead author of the Maryland law, explained to Marketplace’s Stephanie Hughes that the oversight effort attempts to prevent manipulation by algorithms. He hopes the industry will begin to “think differently about how they design their products.”
On our new series “The Infinite Scroll,” we’re looking at the rules and risks of kids using social media. Artificial intelligence is showing up on these platforms in the form of chatbots, digital characters you can text or talk with. Today we explore what can happen to youngsters who interact with them. Marketplace’s Stephanie Hughes discussed the subject with Meetali Jain, founder and director at the Tech Justice Law Project. Her organization is involved in a lawsuit against Character.AI, an app that enables users to create and communicate with these bots.
This week, we are looking at how kids use social media and the risks and rules around it. It’s part of our new series “The Infinite Scroll.” Monday, we talked about how habitually checking social media can change adolescents’ brains, making them more sensitive to feedback from their peers. Today, we’re going to look at what it’s like to be a parent monitoring their kids’ social media. One thing’s clear: It can be a lot of work.
Social media takes up a huge chunk of kids’ lives. A 2024 study from Pew found that about half of U.S. teenagers are online “almost constantly.” It’s a big source of stress for parents too, and policing their kids’ actions on these platforms can take up a lot of time and energy. Also, there’s AI, and it’s showing up on social media as bots that are always available to talk. We’re going to get to all of that this week in our new series about what it’s like to be a kid on social media and the risks and rules that come with it. We call it “The Infinite Scroll.” We’re kicking things off with Eva Telzer, a professor of neuroscience and psychology at the University of North Carolina at Chapel Hill. Telzer told Marketplace’s Stephanie Hughes about the intensity of youngsters’ connection to their phones and its effects on how the kids are wired, which may last into adulthood.
In this week’s “Tech Bytes: Week in Review,” chip powerhouse Nvidia saw its revenue soar last quarter, showing that the AI boom is still booming. Plus, it was a bumpy week for bitcoin after the crypto exchange Bybit lost almost $1.5 billion of digital assets in a hack. But first, Apple announced it’s spending $500 billion to expand manufacturing and create jobs in the U.S. Marketplace’s Stephanie Hughes spoke with Anita Ramaswamy, columnist at The Information, about what the investment could do for American tech manufacturing and more.
Patreon, a company that enables fans to directly support internet creators financially, has produced a report looking at how creators and their fans are feeling these days. One finding: Fans say they’re seeing more short-form work on social media, even though they prefer long-form content. And more than half of creators surveyed say it’s harder to reach their followers now than five years ago. This is part of what the report calls the “TikTokification of the internet.” Brielle Villablanca, vice president of communications and creator advocacy at Patreon, discusses the trade-offs for creators in the current TikTok-driven environment with Marketplace’s Stephanie Hughes.
For years, coding has been thought of as a useful skill for children to learn. It’s integrated into computer science classes and a number of organizations are dedicated to helping kids code. But now, AI tools can write code themselves. Marketplace’s Stephanie Hughes spoke with Monica McGill of the Institute for Advancing Computing Education about what the expanding capabilities of artificial intelligence mean for coding as a necessary — or not so necessary — skill.
Last year, Australia passed a measure that would ban children under 16 from using social media. That’ll be a big shift: About 80% of Australian kids between the ages of 8 and 12 used social media in 2024, according to a report from Australia’s online safety regulator. The government is now working on the details of how to implement what many are calling one of the strictest age restriction policies in the world. The BBC’s Naomi Rainey reports on the difficulties of enforcing the ban and the impact it could have on kids in the future.
Satellite internet has been around for decades. But in just the past six years, the number of satellites orbiting the planet has grown dramatically. Many belong to Starlink, a unit of SpaceX whose satellites are in low Earth orbit. And it’s expected to get even busier up there with Amazon’s Project Kuiper launching thousands of new satellites. Joe Supan of CNET recently wrote about this. He told Marketplace’s Stephanie Hughes about the race to claim a piece of space and the risk of high-tech debris clogging the zone.
Another lawsuit hits the Department of Government Efficiency from privacy rights advocates concerned about Americans’ personal data. And another wearable — the Ai Pin — bites the dust. But first, layoffs by the federal government are continuing, including, reportedly, at the National Institute of Standards and Technology, or NIST, which is part of the Commerce Department. This is a federal laboratory that’s been around since 1901 whose mission is to promote U.S. innovation and competition. And part of its work is to help create standards for new technology, like artificial intelligence. Marketplace’s Stephanie Hughes is joined by Maria Curi, tech policy reporter at Axios, to break down these stories. Curi recently reported that NIST is expected to fire about 500 workers. But what does NIST do, exactly?
The Washington Post reported earlier this month that representatives of DOGE — the Department of Government Efficiency — gained access to sensitive data at the Department of Education and fed it into AI software.
This has raised red flags over whether it violates federal privacy law. We reached out to DOGE for comment, but didn’t hear back.
But there are ways to use AI to improve efficiency without raising privacy concerns. Marketplace’s Stephanie Hughes spoke with Kevin Frazier, contributing editor at the publication Lawfare, about how the government has used AI in the past and how it could use it more responsibly in the future.
The following is an edited transcript of their conversation.
Kevin Frazier: The federal government’s use of AI really spans decades, if we’re going to be honest, because how you define AI is a whole other hour-long conversation, if not a two-hour-long one. But here we can even just look back to the end of 2024, when we had an inventory done of the federal government’s use cases of AI, and what we saw is that across 37 agencies, there were more than 1,700 different uses of AI, ranging from the Army Corps of Engineers using AI to predict flooding to, of course, the Department of Defense using AI to bolster its cybersecurity defenses.
Stephanie Hughes: Tell me a little bit more about, you know, what the goal is with incorporating AI into the federal government, like, what’s the hope?
Frazier: Yeah, so there are tons of hopes. I think the biggest advantages to relying on AI systems are a couple of things. So number one, AI is really adept at spotting patterns that would otherwise elude human staffers, and so AI deployed in the federal government setting can really assist with efficiency when it comes to identifying waste and trying to forecast new trends, whether those are market trends or weather trends. So a lot of this just goes to trying to do really large, difficult tasks in a more streamlined and reliable fashion. One thing I want to point out is that AI operates the same way in any given context. We can see what its function is. We can know it’s going to run in a certain way. Now, I’m not trying to say that AI is perfect, far from it. We know that it can be susceptible to bias and other issues, but it does have that capacity to operate in a more predictable fashion and serve different tasks that humans just aren’t really well suited for.
Hughes: Going big picture, the use of AI in many aspects of life, including government, seems inevitable. What’s the best way to maximize the benefits of AI while still maintaining public trust?
Frazier: First is AI literacy. We really haven’t seen a concentrated effort across the country to educate Americans about the risks and benefits and technical background of AI, and we need a lot more folks in the federal government who have a deep knowledge of AI and a deep experience with AI to help make sure that these systems are running in a responsible fashion that aligns with federal law. Number two is transparency. It’s really important that, from a trust perspective, Americans know when AI is going to be used to achieve certain ends. And I think a lot of Americans want to know that they’re either interacting with an AI system, or they’re helping inform an AI system or not. A third step I really want to see is experimentation, because in many ways, the use of AI is going to improve government services. For example, just to highlight one thing, the Social Security Administration has been using AI to proactively identify individuals who may be eligible for benefits. That’s an awesome use case, right? Finding Americans who should be receiving more benefits but aren’t, that’s really exciting. So I want that to keep happening. So let’s use AI on this project. Let’s see how it goes. Let’s report the results. Let’s show the American public how it’s working, what risks we identified, and how we’re responding to those, and then keep scaling it up.
Hughes: You spent some time in the tech world. You had a stint at Google. You were at Cloudflare for a minute. You founded a tech nonprofit. You’ve also spent a lot of time in the legal world. We now have all these people with tech mindsets coming into a world of politics and laws. Is there a happy medium between the “move fast and break things” approach of Silicon Valley and the way the federal government has traditionally worked, which is move cautiously and try not to break stuff?
Frazier: I really think there is, and I think the sweet spot comes with a general approach that is already on display in Utah. So Utah recently created the Utah Office of AI Policy. They’re creating what’s known as a regulatory sandbox, where the government and the private entity in question work together to come up with a bespoke, flexible regulatory scheme. So you can imagine, for example, a new health care company moves to Utah. They want to use AI to identify health risks for the residents of Utah. Of course, you may have some folks who say, “Oh my gosh, AI dealing with my health information? That’s scary. AI may be leading to bad predictions or hallucinating about whether or not I have some serious disease.” Under a regulatory sandbox, though, we err on the side of experimentation and seeing if those use cases align with our expectations around benefits, or whether instead we’re seeing some unknown risks that we think maybe aren’t worth continuing with that project. So adoption of a more experimental approach that isn’t forcing folks to surrender their whole lives to AI, but also isn’t the kind of scared approach that we’ve seen usually typify governments when they see emerging technology, I think that’s a really happy medium.
DOGE was temporarily blocked from accessing student loan data after a student group in California sued to stop the disclosure. That order was lifted the following Monday after a federal judge found there wasn’t sufficient proof that irreparable harm had been done.
404: Page Not Found. That error message has become a more common sight on government websites. Many — reportedly thousands — of federal government webpages were recently taken down, ranging from Census Bureau research on depression among LGBT adults to Food and Drug Administration guidance for making clinical trials more diverse.
These erasures come after President Donald Trump signed executive orders cracking down on diversity, equity and inclusion initiatives and what he calls gender ideology.
Marketplace’s Stephanie Hughes spoke with Jack Cushman, director of the Harvard Library Innovation Lab and a contributor to the End of Term archive project, which works to preserve government sites before a new administration takes over. They discussed his recent work archiving those sites and data sets and what’s lost when these digital artifacts are not properly archived.
The following is an edited transcript of their conversation.
Jack Cushman: I think one way to put it is that a great deal has been archived, and especially [by] the End of Term archive. Another way to put it is that a great deal has not been captured. We’re talking about a government of 2 million people who generate data in the course of doing their work, and whatever their work is, they’re going to report out to us as their ultimate employers: Here’s what I saw, here’s what I recorded, here’s how you can get access to a copy of it. So we’re talking about an infrastructure that is much larger than there’s any ability for the external community to make a copy of. So if we, in a serious way, shut off access to all of the data that the government has created for us, that we paid for, then we’re going to end up not having a copy of most of it at the end of the day. This isn’t something that we can fix from the outside, but it is something where we can get the copies of the things that are most important or most often used and help our patrons that way.
Stephanie Hughes: A federal judge recently ordered that at least some of these government websites be restored. Do you have a sense of how easy or how hard it is to do that?
Cushman: Sure. So I think it could be very easy to restore a website, or it could be very hard, depending how it was taken down in the first place. If you imagine that a website is a car driving along the road, and then you make the car stop, it could be, well, you just pulled over and you left the engine idling, and you just need to, like, hit the gas again. It could be that you turn off the keys. It could be that you threw the keys out the window. It could be that you removed the engine. And it’s very hard to tell in a given instance which one it is. I do think that if you shut down big running software projects and you let the people go who know how they work, and you let the running copies go and you’re trying to restore from backups, you can easily end up in a situation where it is expensive or prohibitive to get something running again.
Hughes: This taking down of web pages, it’s happening not just with the government. It’s happening with corporate websites. It’s happening with prestigious universities. What’s the cost of losing this information?
Cushman: So there’s a focus on government data right now, but the same issue applies to any other kind of information that you care about. So if there are videos that you remember seeing and caring about on YouTube, there’s probably only one account that controls those, and either a person or a Google policy could make that video vanish. If you have stories that you’re proud of publishing, especially if you’re publishing behind a paywall, there’s probably only one copy that most people access, and a change in policy or an accident or mistake could change the content of that forever, in a way that you would have trouble recovering from. The reason this happens is that when we’re working with digital materials, the incentive is to invest everything in the one best copy. So you get the one video-sharing website that everyone likes to use, and you invest more in that instead of having more copies. And when we get down to one copy of everything, we’re vulnerable to policy change, but we’re also vulnerable to accident and we’re vulnerable to cyberattacks, or we’re just vulnerable to losing our cultural memory. I think the impact of having a vulnerable cultural memory is that anything that we care about is harder to plan, and as we get on thinner and thinner ice, as we have fewer and fewer copies of each piece of data, there’s more and more vulnerability to just forgetting where we came from or important information that we had, and therefore making mistakes about what we do next.
Hughes: You said, economically, it makes sense to not have multiple copies of digital information. Can you tell me a little bit more about that? Like, it just actually costs money to have multiple copies of things on the internet?
Cushman: Sure. Imagine that you were a nonprofit funder, and you were trying to decide whether to give me money to make a new website that shares access to important public information. I could tell you, “I’m going to make a second copy of public information that everyone can already get from the government website directly.” Or I could tell you, “I’m going to make a copy of information that no one else has, that only we have at this library. That’s going to be the first time you can get it online.” Well, your funders are going to be a lot more likely to fund the story that is, I’m going to put information online for the first time. It’s also true at a business. If you’re asking for investment capital, if you say, “For the first time, people will be able to share their story,” that’s a lot more fundable than “I’m going to make a copy of YouTube,” even though YouTube already exists. No one wants to hear, “I’m going to make this for the fourth time.” They’re going to ask, “What’s better about yours than the one that’s currently the most popular?” And most of the time, the answer is, well, the one that’s most popular has the most money coming in, it has the most users, it has the most features. The current popular one is the best one, and we’re all going to go use that one instead.
So on the internet, you end up with this concentration where everyone is using the most popular version of the thing, and that’s fine as long as it works, and then if there’s any threat to the most popular thing, you don’t have that resiliency that you used to have. The other thing I like to compare this to is the issue of just-in-time supply chains, a phrase a lot of us learned for the first time in the [COVID] pandemic, when all of a sudden you couldn’t find toilet paper, and it turned out that our supply chains had a lot less redundancy or resiliency built into them. I think all of our data infrastructure online has that same just-in-time quality, where we’ve paid only the minimum amount we have to in order to keep it working today, and not the amount needed to survive any kind of shock or challenge.
Hughes: I mean, I’ve had the experience as a reporter where I’ve saved a link to data that I was referencing in a story and then I’d gone back to it and it wasn’t there, and I felt like part of my brain was missing. What advice do you have for people who stumble across some data or some information on the web that they want to reference and hold on to, how can they do that?
Cushman: Well, the No. 1 advice for any kind of data preservation is “lots of copies keep stuff safe.” And what that means for you is don’t trust remote things that you don’t control to be part of your memory. If you notice, oh, this is part of my memory, then say, I better figure out how to have a copy of this. And there are great tools to do that with. One of my favorites is something called ArchiveWeb.page by the Webrecorder project, and that’ll give you either a browser extension in your browser or an application you can download where you can visit parts of the web and click Record, and everything that you see will be recorded into a file, which you can then save and keep wherever you want. I think the next step is to start thinking about how you can group together to invest in things like your local library or the local resources that you use, how to not rely on that one point of failure for everything that you do.
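For the simplest kind of copy, a short script can do in a pinch what those tools do more thoroughly. Here's a minimal Python sketch, using only the standard library, that downloads a page's raw HTML and saves it under a timestamped filename. It's an illustration of the "keep your own copy" advice, not a substitute for ArchiveWeb.page, which also captures images, scripts and styling; the filename scheme is just an invented convention.

```python
# Minimal "keep your own copy" sketch: fetch a page you rely on
# and save its raw HTML locally with a UTC timestamp in the name.
from datetime import datetime, timezone
from urllib.parse import urlparse
from urllib.request import urlopen

def save_copy(url: str) -> str:
    """Download url and write the HTML to a timestamped local file."""
    html = urlopen(url, timeout=30).read()  # raw bytes of the page
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    name = f"{urlparse(url).netloc}-{stamp}.html"
    with open(name, "wb") as f:
        f.write(html)
    return name

print(save_copy("https://example.com"))
# -> example.com-20250425T120000Z.html (timestamp varies per run)
```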
On a slightly lighter note, while working on this story, I wondered where the “404” error message came from, and I found a 2017 Wired story on its origins. It says that in the early days of the web, writing long messages took up valuable time and memory for coders, so a numerical designation was given to certain errors. 404 was assigned to “not found.”
I guess it’s more polite than the internet blowing you a giant raspberry, which is kinda what that 404 feels like when you come across it, followed by a sinking feeling in the pit of your stomach because the information you thought you could rely on isn’t all that reliable.
Venture capitalists have been welcomed into the Donald Trump administration, and their presence is growing. People who’ve been in the business of backing startups have been tapped to run the Office of Personnel Management and the Commodity Futures Trading Commission. Another, David Sacks, is the White House artificial intelligence and cryptocurrency czar.
Even the vice president, JD Vance, spent time making venture deals before he moved into politics.
Sarah Kunst, founder and managing director at Cleo Capital, says that in venture capital, you have to be good at saying no and comfortable taking risks knowing they likely won’t pan out. Marketplace’s Stephanie Hughes asked Kunst what it means to bring these qualities to the federal government.
The following is an edited transcript of their conversation.
Sarah Kunst: I think that that’s probably going to be a bit of a culture clash. The government and the people in America shouldn’t sort of be treated to a culture where it’s fine if 90%-plus fail as long as one or two does incredibly well. And regardless of political leanings, I do think that you ignore your base at your peril, and I don’t think that the average American wants to feel like it’s fine that they’re worse off, as long as one or two Americans are doing better than ever.
Stephanie Hughes: I want to ask just sort of about technology generally. You know, in Silicon Valley, there is this mindset of, why do with a person what you could do with technology? I don’t think they mean that meanly. It’s just kind of their jobs, to a certain extent. But if we have more venture capitalists working in the federal government, what do you think that will mean for the workforce and the amount of technology that we see?
Kunst: I mean, the idea that you always want people over technology is demonstrably not true. And we know that every time we get in a car and choose to drive somewhere versus walk. And so it’s not that all technology is bad and you always want people powering things. That being said, there needs to be a thought of, what are the checks and balances here? What are we giving up when we’re bringing in automation or when we’re bringing in technology? And then, more importantly, and I think this is the hard part for tech, is the technology ready to be responsible for the lives or the salaries or the well-being of hundreds of millions of our fellow Americans? That piece of it, I think, is the hardest for tech.
Way back in the day, far before Elon Musk, Twitter used to have this thing called a fail whale, which was when too many people would be on. [For example], it was the Super Bowl, it was whatever, and you would see this fail whale, which meant that the system was overloaded and it couldn’t work. And that is fine. It was sort of a meme. It was a startup. They were early days, you didn’t pay for it. Things happen, it’s OK if it breaks regularly. It is less OK if that same sort of build-fast, scale-fast, hope-it-all-works-out mentality, that works pretty well when you’re early-stage Twitter, starts to get applied to things like veterans benefits, where you can’t just give somebody a fail whale and say, “Sorry, you can’t get dialysis today and we don’t know what tomorrow is going to look like either because we’re doing something experimental, and that tech broke.”
And so I think that the balance here is going to be this sense of, how do you take being able to just sort of try things and if they don’t work, pivot, and iterate that into institutions where that doesn’t really serve the greater good, at least in the short term? And you’re often talking about real people’s money and lives. And so you can’t be quite as cavalier as you can about, “Oh no, the micro-messenger service that I use to tweet about my sandwich didn’t work for a couple hours today.”
Hughes: We’ve definitely seen the Silicon Valley ethos of “move fast and break things” in action in the last few weeks in Washington with the dismantling of whole agencies. How much more of that approach do you think we’ll see?
Kunst: I don’t think that they’re done. Elon Musk and his child in the Oval Office recently made it clear that he feels that he has the mandate and that there is a lot of dismantling to do. And I think different groups within the government right now have different feelings about how much needs to be dismantled, what needs to be dismantled and why. But there doesn’t seem to be much pushback inside the government on the executive level to the idea that more change is going to come. I think the question mark is does the Senate agree? Does Congress agree? Do the American people agree? And does the Supreme Court agree? But I think it’s clear to me, at least, that certainly the executive branch and a lot of the tech people who work in it now feel that they’re just getting started.
Hughes: You’ve been working in tech for a long time. You know a lot of people taking these jobs personally. What do you hope people who work in venture take the time to learn as they step into these new jobs?
Kunst: It’ll be interesting to see them learn how hard running a government is. You can’t sell a country, and you can’t have an IPO. There is no exit, you don’t wind it down, ideally. So you have to figure out this sort of perpetual motion machine of meeting the moment, meeting the technology. Just the sheer complexity of that, compared to in the tech world, where we tend to say, pick one thing and do it well. The government is not one thing. It can’t be one thing. It has to be everything. And so how do you even begin to do that? And so I think that a lot of their challenge is just going to be the sheer firehose that they’re drinking from that is the American experiment.
This story was produced by our colleagues at the BBC.
Lottie Hayton lost both her parents within two months of each other. As a young journalist, she wanted to write about it, and in particular to investigate a new tech genre known as “grief tech,” or “ghostbots.”
An industry is emerging that uses artificial intelligence to build chatbots of people who’ve died with the aim of offering solace to those who’ve lost loved ones.
Hayton made a chatbot version of her dad and a visual talking avatar of her mom.
“My drive to try it was, I guess, in order to provide other people who might be using it with information,” said Hayton.
“The bot sort of blinks and moves slightly like she did, that was quite alarming. The face moves in a juddery way but it very much looked like her and that threw me off,” she added. “It had an air of her, but it was definitely robotic. I was inherently aware that this was a piece of technology.”
Within the last five years, the idea of digital resurrection has gone from science fiction to reality. Anyone can sign up to create a digital chat version of a loved one for as little as $10. Lifetime subscriptions, however, can cost several hundred dollars.
Justin Harrison set up his ghostbot business, You Only Virtual, after his own mother became ill.
“For anybody tech-savvy, it’s a pretty easy process, extracting text messages and emails and online messages,” Harrison said. “Our technology is nowhere near as good as it’s going to be in six months, in a year. Generally speaking it’s going to be mind-blowing in three years.”
This industry is still very new and still pretty niche. Businesses operating in this field only have a few thousand users and there’s not a huge amount of investor interest. But some experts believe that ghostbots could go mainstream.
Carl Öhman is a Swedish researcher and author who has spent the past 10 years studying the ethics of the digital afterlife.
“Five years ago I would have said that most people would still find it kind of creepy. But then ChatGPT hit,” said Öhman. “It’s not implausible that over the next decade or so, interacting with chatbots impersonating real humans becomes just as common as having a video call, and that’s going to open up a new market for those chatbots.”
Experts are now calling for studies to find out if these tools can really help us with our grief, and if they do, figure out how companies might offer that ethically, consensually, and safely.
On this week’s Marketplace “Tech Bytes: Week in Review,” we’ll talk about Apple launching a new health research study and BuzzFeed starting a new social media platform. But first, the U.S. is pushing back against global AI regulation. This week there was a kind of who’s who of AI and government at the Artificial Intelligence Action Summit in Paris. French President Emmanuel Macron reportedly said there should be rules for this technology and that AI cannot be the Wild West. But the country that’s home to the original Wild West wants to forge ahead. U.S. Vice President JD Vance delivered a speech underlining the Donald Trump administration’s intent to develop AI without worrying about the risks. Marketplace’s Stephanie Hughes spoke with Jewel Burks Solomon, managing partner at the venture firm Collab Capital, about these topics for this week’s “Tech Bytes.”
There’s a concept in business called the first-mover advantage. Basically, it means that if you’re the first company with a successful product in a new market, you have the opportunity to dominate the market and fend off rivals.
But that advantage can be short-lived. Take Netscape, which produced Navigator, the first popular commercial web browser. Then Microsoft entered the field with Internet Explorer, and it wasn’t long before Navigator crashed.
In the world of AI chatbots, two of the first movers are OpenAI and Anthropic. But recently the Chinese company DeepSeek made a splash with an AI chatbot that it reportedly developed for a fraction of what its competitors have spent.
Marketplace’s Stephanie Hughes spoke with historian Margaret O’Mara, author of the book “The Code: Silicon Valley and the Remaking of America,” about whether America’s artificial intelligence industry should be worried about newcomers like DeepSeek.
The following is an edited transcript of their conversation.
Margaret O’Mara: There’s this dimension of foreign competition [that] also brings to mind the late ’70s and early ’80s in the U.S. semiconductor industry that was suddenly freshly challenged by advanced chips coming out of Japan, which was kind of a surprise. Japanese chipmakers were taking some technologies developed in the United States to develop complex chipmaking, and assisted by subsidies from the Japanese government, came to market and were able to rapidly undercut American chipmakers on price and really had Silicon Valley on the ropes for a few years in the early 1980s with this very fierce competition. So the DeepSeek saga brings to mind this earlier geopolitical moment, and I think there are some interesting similarities.
Stephanie Hughes: It’s way too early to make any pronouncements on where companies like OpenAI will land after this, but it does raise the specter of companies like Netscape and [social media pioneer] Friendster, who were sort of first, but certainly didn’t last forever. What factors could determine if American AI companies, you know, go the way of Friendster or if they can enjoy their first-mover advantage?
O’Mara: What I’m watching is, you know, how costly is it going to be to continue to develop these advanced models? And by cost, I don’t just mean cost to the consumer. I mean cost [of] energy, how efficient the hardware is. The capital expenditures of the largest tech platforms are mind-boggling, and that kind of massive investment of capital and material is, you know, not sustainable. It’s not sustainable from an environmental and energy-consumption standpoint, and also not sustainable from a capital-expenditure standpoint. So I think that’s the real challenge, that it’s become so expensive to be a player in this. What DeepSeek is pointing toward is that there is possibly another way. But this is different than before, than the age of Netscape or Friendster or other first movers, because the first movers in the AI space have, by necessity, had to access the capital of the large platforms. The large tech companies are the only ones that have the money and the resources and the data centers and all that data infrastructure to do these things, and that is something that is different than before.
Hughes: So actually, in preparing for this interview, I read this article from Harvard Business Review about first-mover advantage. It was older, from 2005, and one thing that made me really laugh as I was reading it is that it mentioned Apple as a company that had been a first mover and then, you know, had declined. And this was published two years before the first iPhone came out. And so I was wondering, is it possible to be, you know, a first mover and then lose that advantage and then start winning again? Like, is there a trajectory to it?
O’Mara: It is possible, usually by doing something different, or the world has changed around you or something. Apple’s a great example of a company that was a superstar and then hit some roadblocks, so much so that by 1985, Steve Jobs is being fired by his board, as you know. “Get out of here! We need to do things differently.” And it’s only with the advent of the iPod and iTunes and then the iPhone that it really becomes the colossus that it is. Every company has its ups and downs, but there’s this really interesting series of stories: the company that becomes the verb at the beginning of a market doesn’t necessarily get to stay on top, and in some cases, they kind of fade altogether.
Hughes: What lessons can AI companies like OpenAI and Anthropic take from first movers, both the ultimate winners and the ultimate losers?
O’Mara: Well, I guess that maybe the top line is never take success for granted. As former Intel CEO, the legendary Andy Grove, once said, only the paranoid survive. So in a way, it’s reinforcing the hypercompetitive nature of the tech business, which is you got to keep on elbowing everyone out of the way. But also that, you know, the market you define in the beginning may change, and I think there’s a scale question. You know, like what happens when you’re really scaling up from being a startup that’s offering something completely new, helping define a market, and being able to take that to the next level, when the market actually becomes a mass market? What we also see is, you know, who’s going to deliver a product in the least expensive way, the smoothest way, the most frictionless way? Those are often the companies that, that come to dominate, but with the corollary [that] every firm eventually becomes a dinosaur, sometimes [it takes] longer than others, and it’s very difficult to stay, like, the innovator always. There’s a kind of a tension between, you know, being able to scale up and becoming a big market-dominant company and also continuing to be the one that’s developing the next, next big thing.
One thing about first movers: You can get attached to them or, at least, be nostalgic for what it was like to use them. I was talking with “Marketplace Tech” producer Daniel Shin about early movers we’ve used and loved, and he mentioned the search engine Ask Jeeves, which really brought me back.
Meanwhile, our senior producer, Daisy Palacios, remembers Facebook forerunner Myspace and obsessing over what song to put on her profile, though she doesn’t miss having to list her top eight friends, in order.
And Marketplace’s Meghan McCarty Carino, who had the idea for this segment, says given the alternatives now, she misses the original Friendster.
As for me, I never was on Friendster, but I do have a memory of being in college and my housemate Rob sending me an instant message over AOL Instant Messenger telling me to get on Friendster. You see, he had a classmate on there with more friends than he had, and he needed to catch up. I believe his message said, “She’s winning at life.”
Is there a first mover you wish was still around? Let us know at [email protected].
Data centers are filled with servers, basically a bunch of beefed-up computers stacked on top of each other in buildings that can be as big as warehouses. So they need a lot of electricity. And there are more of those projects in the works. For example, Meta has said it’s planning to build out at least one data center that’s going to be so big it could cover a good chunk of Manhattan. Wall Street Journal tech reporter Meghan Bobrowsky explained to Hughes what kinds of companies are benefitting from this data center construction boom.
About 1 in 4 U.S. jobs requires an occupational license, according to the National Conference of State Legislatures. Licensing requirements differ by state and can apply to everyone from barbers to lawyers. The general idea, of course, is to keep unqualified workers out.
But technology, and specifically artificial intelligence, is making inroads. Rebecca Haw Allensworth, a law professor at Vanderbilt University, is also author of the new book “The Licensing Racket: How We Decide Who Is Allowed to Work, and Why It Goes Wrong.” She told Marketplace’s Stephanie Hughes that in some instances, AI is letting consumers bypass licensed workers altogether. The following is an edited transcript of their conversation.
Rebecca Haw Allensworth: This has been true for a while, but it’s really become intense now that AI is what it is. You can go on ChatGPT and say something like, write me a contract for funeral services, and ChatGPT will spit out a contract. That has forever been the practice of law, and that’s been limited, until this point, to people who have a license in law.
Stephanie Hughes: But now AI is kind of getting in on that practice of law, even though there have been lawsuits to try to stop it.
Allensworth: Yeah, it’s hard to know who to sue. So if you were not a lawyer, if you were just an unlicensed person, and you were writing contracts for people, the [bar association] might come after you and say, you’re engaged in the unlawful, unlicensed practice of law, and you have to stop. A cease-and-desist letter. With technology these days, it’s a little bit hard to figure out who was writing that contract. Should I get in trouble for asking the prompt? Should [ChatGPT developer] OpenAI as a company have some liability for this? And if that were the case, how on Earth would we figure out what actually is happening at the company level? So, yeah, it’s breaking down some of the barriers of licensure, and I think it’s really unclear how the professions are going to respond.
Hughes: What’s the worry here? Like, the worry is that the bots are not licensed by any particular professional board and maybe don’t know what they’re doing — is that the concern?
Allensworth: Well, I think there’s two concerns. The first concern is that somebody getting legal advice or a contract written by a bot is going to get bad advice. But the other concern, I think, is on behalf of the profession itself. If that contract is a straightforward, easy thing that can kind of be pulled together by the bot, then that’s a really inexpensive way for me to get a little bit of legal help. And lawyers don’t like that because that will, at the end of the day, undercut their bottom line.
Hughes: It seems that sometimes these professional licensure boards are doing their best to keep technology out. In the book, you write about a company called Edge AI that dealt with this. Can you tell me a little bit about that?
Allensworth: So this was a case that I saw in front of the alarm system installers board, which, believe it or not, was until recently a licensed profession in my state.
Hughes: So in Tennessee, if you wanted to have an alarm installed, you had to have a licensed person come by and do it?
Allensworth: That’s right. And in another case, I saw that same board go after somebody who had installed their neighbor’s Ring cam from Sam’s Club for unlicensed installation of an alarm system. But this case was a little bit more threatening to the board and to the profession because this guy had figured out how to make the AI recognize faces. So he was selling this to, like, schools and hospitals as a way of keeping out known sex offenders or other people who shouldn’t be on the premises. And he wasn’t really actually installing an alarm system, exactly, but the board thought it was kind of close enough and said, well, hold on, you’re like a startup programmer, but did you also know that you are an alarm system installer, and as such, you don’t have a license? And he says this effectively shut down his business because it was going to take him months to try to get a license, something he couldn’t do. He just lost those months, and competition caught up with him, and he wasn’t able to start his business.
Hughes: I want to ask about telehealth. This really took off during the pandemic, and it seems like it sort of pushed the boundaries of how different states license medical professionals. Can you tell me about this?
Allensworth: The licensing boards had been resisting telemedicine in various ways for a long time. COVID kind of made that a little harder for them to do. They couldn’t make the same arguments that this just isn’t going to happen, this isn’t safe, because there was such a demand for it. And so you did see, because of this big shock to the system, an expansion of what licensing boards were willing to let doctors do remotely. At the same time, it’s still true that you have to be in the same state as your doctor. The doctor seeing you has to have a license in that state. So there’s another way in which, if we didn’t have such kind of state-by-state and onerous licensing laws, telehealth could be used on a national basis, really expanding access to care.
Hughes: Is there a world where licensing boards could modernize, like could accept technology faster and kind of just move things along?
Allensworth: Well, like so many things I saw at the licensing boards, this is one of the downsides of licensure that boards just don’t understand: the way in which it makes the profession slow to react to technology. And I think part of why boards don’t understand the real effect of their regulation is that they’re super-expert in their profession, but they’re not expert in the consequences of the regulation. They’re not expert in regulation. These boards are basically self-regulatory. They’re made up of members of the profession making these decisions. And I think if we had people making these decisions who thought about the importance of technological change and innovation, they could modernize in the way that you’re talking about. But constituted the way they currently are — mostly members of the profession — I don’t think you’re going to get there.
Hughes: So you’re a licensed lawyer. How are you thinking about the balance between technology and licensure and how it affects you as a practicing lawyer?
Allensworth: Well, I mean, I think about this a lot in my role as a teacher of practicing lawyers because pretty soon it’s going to be malpractice to not be able to use these tools. The whole way in which we train lawyers is pretty clunky and slow to change, and I just think that we need to not be afraid of it. I think we need to understand it, and I think we should also be excited about it because I think this idea that you can go online and ask ChatGPT to write you a contract or ask it a legal question and get a decent answer, it’s not the end of the world. It can really expand what we call access to justice. Access to care is the same idea, but in medicine. There’s a lot of people who can’t afford legal services, and I think we should look at AI as an opportunity to expand the provision of those services. We just have to find a responsible way to do it.
Hughes: What do you see as the future, in terms of how licensing boards deal with technology or think about modernization?
Allensworth: Well, I think that some things force their hand. So I think COVID forced the hand of medical boards to really reckon with telemedicine. And I think probably generative AI is going to force law boards to really reckon with what counts as the practice of law. And I think that what you’ll see is resistance, but you will also see some change forced upon them. And I think that in the case of law, the future is that a lot of things that were considered to be the practice of law, and therefore required a license and a live person, are now going to be seen as not that and can be provided by AI. And I think that’s a good thing.
Geography has been part of President Trump’s agenda. His first day on the job, he signed an executive order changing the name of the Gulf of Mexico to the Gulf of America, and Denali, the highest peak in North America, will now go back to being called Mount McKinley.
Private companies that make maps — analog or digital — don’t have to follow suit, but at least one is.
Google said in a post on X that it has long had a practice of applying name changes from official government sources. So, once the official federal naming database is changed, it’ll update Google Maps for people in the U.S.
Marketplace Tech reached out to Google, Apple and Microsoft for a statement clarifying their approach to renaming places on their digital maps. Apple and Microsoft did not provide one. Google referred us to its X post.
Marketplace’s Stephanie Hughes spoke with Sterling Quinn, professor of geography at Central Washington University, about whether tech companies generally have standard operating procedures around name changes.
The following is an edited transcript of their conversation.
Sterling Quinn: 10 to 20 years ago, when online maps were newer, we saw these companies putting out long, detailed statements and policies about how they were going to handle disputed boundaries and place names, almost like they were kind of aiming for a single correct depiction of the world. And over time, as they received feedback from people who don’t agree sometimes over the boundaries and place names, I think they realized it was more complicated, but also they wanted to be able to maintain their business operation in the smoothest way. So their statements changed over time, to talk about how their changes would support the company’s mission or local market expectations, and that was actually closer to the truth of what they started doing. So over time, their statements about how they handled disputes have kind of gone away, and we just see little glimpses in them now and then, like the posts that Google put on X last week explaining how it was going to handle this change.
Stephanie Hughes: And I mean, is the goal basically to avoid controversy? That Google doesn’t want to be in the business of naming places?
Quinn: I think we have to kind of step back and ask ourselves the question, why are most of the maps that we use these days made by big tech companies? These maps were built as part of a business that supports search and advertising. If you can attach search results or advertising to a location, that’s very powerful, and it also takes a lot of technical resources to build a multi-scale, fast map of the world that you can serve out to billions of people, and only a few companies have the ability to do all of that. And so we wind up in this situation where these companies are the ones kind of showing us the geography that we begin to understand. I remember the days of paper maps, but my students, who are undergraduates here, many of them have grown up with just the digital map and the view of the world that that brings. But that is a view that can sometimes be filtered based on market expectations and the goals that these companies have in building these map platforms.
Hughes: What alternatives do people have? You know, are there any lesser known companies or organizations that take different approaches for digital mapping?
Quinn: A map that I’ve studied quite a bit, and that I think is interesting, is OpenStreetMap. That map is made in a crowdsourced way, and it’s more like a database of information from the world that’s used to make maps. There are a lot of corporations that contribute data into OpenStreetMap, but it does have more of a community ethos to it, and it’s backed by a nonprofit foundation. So that’s an interesting case for studying map conflict, because the contributors to OpenStreetMap themselves may have strong disagreements about where borders and names should go; it’s just that those sometimes play out in more transparent ways. There are discussion posts and online forums where people talk about their decisions on OpenStreetMap. With what corporations like Google and Microsoft are doing, we often have very little information about how they make their decisions, unless they decide to make a post like Google did on X, explaining this Gulf of America change.
Hughes: And that’s interesting, you thought that what Google gave us on X was transparency?
Quinn: I’m not sure I’d go that far, but they are at least revealing something, whereas many times we have no idea what is behind this. There was a team in 2016 with some scientists at Northeastern and other universities that actually built a tool to read or scrape the maps from Google from all of its localizations, so they could try to identify areas where Google was customizing boundaries. I mean, that kind of thing is how we know a little bit of what they do. It was almost an attempt to reverse engineer or to peer behind the curtain of what is going on. But companies don’t actively talk about these map customizations that they do by region.
Hughes: You’re a college professor. How do you talk with your students about these issues?
Quinn: I just like to encourage students and others, when they view maps, to think about the motives and objectives of those who created the map, and rather than viewing a map as kind of an objective, scientific truth or just the only way to see the world, realize that there are multiple ways to depict and show the world, and map makers have to make decisions all the time about what things they include in the map, how much prominence they give each thing, the labels and language that they use when describing things. And as students learn how to make their own maps and read maps, they think more deeply about those topics.
In addition to thinking about the motives of mapmakers, Sterling Quinn told me it’s important for people to look at lots of different maps and consider the variety of ways they depict the world.
One that Quinn recommends is a wall map called the “Essential Geography of the United States of America.” In addition to roads and cities, it also has cultural landmarks like the Iditarod Trail and the Kennedy Space Center. Quinn says that many things become visible upon closer examination and you can tell it’s not made by a machine.
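A side note on Quinn’s description of OpenStreetMap as “more like a database of information from the world”: that database is open to anyone, name tags included, through the project’s public Overpass API. The short Python sketch below (an illustration only, assuming the third-party requests library and the main public Overpass endpoint at overpass-api.de, which rate-limits heavy use) asks how contributors have tagged peaks named “Denali”:

import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"

# Overpass QL: return every node tagged as a natural peak whose name is "Denali".
QUERY = """
[out:json][timeout:25];
node["natural"="peak"]["name"="Denali"];
out body;
"""

response = requests.post(OVERPASS_URL, data={"data": QUERY})
response.raise_for_status()

for element in response.json().get("elements", []):
    tags = element.get("tags", {})
    # Print the primary name plus any language-specific alternate names contributors added.
    print(tags.get("name"), {k: v for k, v in tags.items() if k.startswith("name:")})

Because edits to those tags are public, disputes over a name play out in the open, which is the transparency Quinn contrasts with the commercial platforms.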
On this week’s Marketplace “Tech Bytes: Week in Review,” we’ll explore OpenAI’s inroads in higher education. Plus, how passengers can get on a waitlist to hail a driverless car in Austin, Texas. But first, a look at how Google is changing its approach to artificial intelligence. In 2018, the company published its “AI principles,” guidelines for how it believed AI should be built and used. Google originally included language that said it would not design or deploy AI to be used in weapons or surveillance. That language has now gone away. Google didn’t respond to our request for comment, but it did say in a blog post this week that companies and governments should work together to create AI that, among other things, supports national security. Marketplace’s Stephanie Hughes spoke with Natasha Mascarenhas, reporter at The Information, about these topics for this week’s “Tech Bytes.”
Congress considered 158 bills that mention artificial intelligence over the past two years, according to a count by the Brennan Center for Justice. But zero comprehensive AI laws have been passed.
There has been movement by states, however. In Tennessee, for example, the ELVIS Act, which protects voices and likenesses from unauthorized use by AI, became law in March. In Colorado, a law that takes effect in 2026 requires developers of high-risk AI systems to protect consumers from algorithm-based discrimination.
But some who fund AI technology say a federal law is needed. That includes Matt Perault, head of AI policy at the venture capital firm Andreessen Horowitz. The following is an edited transcript of his conversation with Marketplace’s Stephanie Hughes.
Matt Perault: We think it’s really important at the current moment to have a national competitiveness strategy for AI policy. We’re at a Sputnik moment right now with the release of DeepSeek, and we’re not going to be able to do that if we have a state-by-state patchwork where companies have to comply with different laws in different states. Our focus at Andreessen Horowitz is on Little Tech companies, so small companies and startups that are at the cutting edge of innovation and we think are going to help define the AI products of the future. For those companies, it’s particularly challenging for them to try to figure out, how do I comply with the law in California that might conflict with the law in Texas that might be different from the law in Florida?
Stephanie Hughes: Say more about that. How would navigating a patchwork of state laws even work when it comes to AI?
Perault: That’s the question — it doesn’t function particularly well. Like, if you think about other tech products that you use, like a messaging service, for instance. If you’re located in Baltimore and I’m located in Durham, North Carolina, and we’re sending communications back and forth, we want that to function seamlessly regardless of where we live. We don’t want to have a different experience in Maryland than the one that we have in North Carolina. And it’s the same thing for AI products. We are used to having tech products that function across borders. That’s actually one of the value propositions of many technology products, that it increases and improves communication and allows us to communicate more cheaply and easily. And having a patchwork where companies need to figure out, what does it mean when I build an AI product in one state versus in another, would be really challenging.
Hughes: AI laws can go in a lot of different directions. What do you think should be the substance of a federal statute at this point?
Perault: There are a lot of different forms that it could take, but the focus for us is on ensuring that the regulation of AI technology focuses on harmful misuses, not on regulating AI development itself. If you focus on regulating harmful uses, you can do that through enforcement of consumer protection law, through enforcement of civil rights law, through enforcement of antitrust law. When you instead put the focus on regulating AI model development, really what you’re doing is creating a tax on math. You’re making it harder to do the innovation that actually would enable American companies to compete.
Hughes: So at Andreessen Horowitz, you guys think a lot about startups. You fund them, and you benefit if they win. But why is it important for these startups to have clear regulation at a national level to really play in the space?
Perault: Startups have historically been the driver of innovation in our country. And our concern is that if you focus on regulating AI models, that will literally make the process of startups engaging in the task of innovation more difficult. It functions essentially as a tax on them.
Hughes: How much unity is there within the AI community — big, established companies; tiny, little startups — about what regulation should look like, to your knowledge?
Perault: Well, a lot of the companies that we work with are incredibly small. They might not even have a general counsel. And so for lots of companies, they might not have a very specific vision of exactly all the different things that they want AI policy to look like. But they know that in order to compete, it’s going to be very challenging for them, and they don’t want to have to build really complicated processes to navigate complicated compliance burdens to try to identify what is happening in one state versus another. They want to ensure that they can build to a framework that allows them to innovate. They wouldn’t ask for an exemption from existing law. There’s no AI company that I’ve talked to that says, we want to be able to violate consumer protection law or we want to be able to violate criminal law. Obviously, there’s an understanding that you need to comply with the law. But the challenge of AI policy that focuses on model development is it actually goes into that garage and it makes it harder for the technical people to build the technologies that we think will allow American companies to compete with companies abroad.
Hughes: To talk about that for a second, everybody is paying a lot of attention right now to the abilities of the Chinese startup DeepSeek. How does regulation in the U.S., or a lack of it, affect how [the U.S.] can compete on the international stage, how its companies can compete internationally?
Perault: So I think this is an important moment to reorient American AI policy. If you focus on model development, you literally slow the process of developing models. And so now we’ve seen a model that can compete aggressively with American products. If you want to ensure that American products can keep pace and that the products that we use in the future are American products, as opposed to Chinese products, for instance, then if you are simply regulating model development, you’re putting American companies at a disadvantage. So from our point of view, that approach to policy is not the optimal framework for bolstering American competitiveness. I think the question then is, well, are you saying that we should take no action? And that’s not the position that our firm takes. Our firm is advocating for policy that really can protect consumers. But you would do that by addressing potential harmful uses of AI technology by consumers, not by aiming for the bank shot of hoping that if you slow model development, then you’ll also potentially have the effect of reducing consumer harm, because the thing that gets caught up in that is really beneficial innovation.
Hughes: It’s 2025, and we have a new Congress and a new administration. Marc Andreessen, a founding partner of your firm, is said to be advising President [Donald] Trump. How do you think these new policymakers could affect the likelihood of a federal AI law coming to fruition?
Perault: Well, we’ll see. But I think that there is a moment now, not just with the change in administrations, but also with the news of DeepSeek, to think about what the AI policy is that we want. Do we want to tax AI model development, or do we want to focus on actually enforcing existing law and looking at, to the extent that we need new law, focusing that on consumer protection itself?
Among President Donald Trump’s many executive orders is one calling for a “next-generation missile defense shield.” The White House calls this the Iron Dome for America.
The order says it should defend against all sorts of missile attacks and include “space-based interceptors” that could potentially act as both sensors and weapons.
It reminded retired Air Force Maj. Gen. Robert Latiff of a Ronald Reagan-era program he worked on: the Strategic Defense Initiative, or SDI, known popularly, and especially to its critics, as “Star Wars.”
Marketplace’s Stephanie Hughes spoke with Latiff about whether the U.S. has the technology, money and time to make this grand project work. The following is an edited transcript of their conversation.
Robert Latiff: Yes and no. Yes, we do have sensor technologies. We have the launch technologies. Whether or not we have the technologies to lash it all together and make it work, I think, is an open question.
Stephanie Hughes: And President Trump has compared this to the defense shield used in Israel. Israel is about the size of New Jersey. The United States is about the size of the United States — many New Jerseys. You know, we’re talking about something way bigger and different kinds of protections too, right?
Latiff: Yeah. So I think the use of the term “Iron Dome” for this one was really misleading, because the Iron Dome in Israel can protect an area of about 150 square miles against targets that maybe are 35 or 40 miles away, short-range ballistic missiles that aren’t going very fast and don’t necessarily have any countermeasures. There’s even some controversy over how good it is. Israel swears by it; others say maybe it’s not as good as they claim.
Hughes: So the secretary of defense has been instructed to come up with a plan that includes a budget. How much would something like this cost?
Latiff: I look at what was proposed in the EO, and I have actually, sort of myself, referred to it as “Star Wars 2.0.” This is very, very much like the old Reagan-era Strategic Defense Initiative. The estimates for SDI were all over the map, but even some conservative organizations admitted that it was probably going to be more expensive than SDI thought. I would say, probably in the area of $750 billion to a trillion. And I’ve based that on looking at previous estimates for some of the pieces of SDI.
Hughes: And how much time would it take to develop something like this — develop and then, I guess, deploy?
Latiff: Advocates claim, you know, five, six, seven years. I think it’s probably longer than that, just given the history of acquisition projects. But I think 10 years would probably be a reasonable number to posit.
Hughes: What do you mean when you say “acquisition”?
Latiff: Well, our history in acquiring weapon systems of any kind — airplanes, ships, you name it — is that they always come with some very rosy projections and almost always fail in those projections — over budget, over schedule is pretty common. I can understand that because people who want to start the program come up with best-case scenarios, and best-case scenarios just never happen. And in this case, I think we would probably have the same thing.
Hughes: You were in the Air Force in 1983, when then-President Ronald Reagan proposed the Strategic Defense Initiative.
Latiff: I worked on it.
Hughes: Tell me about it. What was that like?
Latiff: It was exciting for someone that age. I was 33 at the time, and we had more money than we knew what to do with, really. So, yeah, it was an exciting time.
Hughes: It sort of fizzled. How did that experience affect how you look at something like this when it’s proposed?
Latiff: Well, you have to ask yourself, why did it fizzle? And I think it fizzled because the enemy went away. In 1989 and 1990, the Berlin Wall was falling, [the] USSR was disappearing, and so the whole rationale for SDI really kind of disappeared. And I think that was also complicated by the fact that they were just beginning to realize that it was going to cost a lot more and take a lot longer. Both of those things came into play.
Hughes: Even if SDI didn’t come to fruition as imagined, how did it affect the way the U.S. approaches defense?
Latiff: Well, [the Strategic Defense Initiative Organization] didn’t continue. Missile defense and [the] Missile Defense Agency still exist today. I don’t know the exact numbers, but we’ve been spending $10 to $15 billion a year on missile defenses.
Hughes: To bring it back to this initiative now, how likely do you think it is that this Iron Dome project, as they’re calling it, comes to fruition?
Latiff: I would only be guessing. I don’t think that in four years’ time, the length of time of President Trump’s administration, they will be able to get very far with it. If the next president supports it and supports continued large investment of money, I think we could achieve maybe some of it. I hesitate to think that we will ever achieve what the desired end goal is, of a shield above the United States. Probably just couldn’t happen.
Hughes: Yeah. Even if it doesn’t come to fruition as written, how would it affect future defense technology, even just working on it, like you worked on SDI?
Latiff: If they succeed in launching some, perhaps not all, of the satellites, it will provide us with an enormous amount of data, even if they don’t succeed in launching the interceptors. I mean, I think that could end up being a political hot potato, but maybe there’s the will in the government to do that. But even if they don’t launch the interceptor portion of it, I think it will be much to our advantage in tracking future threats and being able to negate future threats in other ways. So even if it didn’t succeed totally, I think what it would leave behind would be valuable.
A note on the potential price tag — Maj. Gen. Latiff’s educated guess was that this proposed defense shield could cost up to $1 trillion. For fiscal year 2024, the budget for the whole Department of Defense came in at about $841 billion. So, this one system could cost what we spend on the entire military in one year.
One of the more hopeful scenarios for how artificial intelligence could affect jobs is that it would take over more of the boring grunt work and free up humans for loftier pursuits.
Mondelez, the company behind many of America’s favorite snacks, like Oreo cookies, Sour Patch Kids candy and Ritz crackers, is trying to do just that — using AI to speed up innovation for food scientists and give their taste buds a break.
Marketplace’s Meghan McCarty Carino spoke with Wall Street Journal reporter Isabelle Bousquette about how AI is changing the snack game. The following is an edited transcript of their conversation.
Isabelle Bousquette: When they are, you know, creating these new recipes, it used to be a process that was, you know, fairly trial-and-error based, and the process is still fairly similar now with this AI tool, except what the AI is doing is it’s giving the scientists the ability to actually go in and tick the box for the flavor characteristics that they’re looking for. They can tick a box if they want it to, you know, taste more buttery or more oily or eggy, or have more chocolate chips, or be rounder, less round, or, you know, it’s really endless, like the amount of characteristics they can tick there, and they don’t all apply to, like, every product they create. And they’re still kind of taste tested. There’s a lot of human in the loop, but it’s basically making that whole process, like, four to five times faster.
Meghan McCarty Carino: So how has it kind of changed the workflow of food scientists working in this?
Bousquette: It’s still developing, so it won’t necessarily be used in every product that comes to market. I think they have a rule that if they’re changing fewer than three or four ingredients, it’s maybe not worth [it] to go through the tool. You know, there’s several people involved in the process, and they’ll sort of work with brand stewards to make sure that they’re not, you know, pushing the remit beyond the core of, you know, say, what an Oreo should be. But yeah, essentially it’s, you know, just making the process a lot faster and, you know, just getting them those recipes, that they used to have to put in the manual work to do, in a more automated way.
McCarty Carino: Yeah, tell me more about some of the limits of this tool and maybe some of the challenges that have been encountered.
Bousquette: Yeah, yeah. I mean, one of the things is like they have to give the tool a lot of their own data, of which they have a lot because they’re a massive company. You know, [in] earlier iterations of the tool where maybe they didn’t give it enough data, it would suggest things that were, you know, way out there, like a cookie that was, like, almost entirely made of baking soda because it just didn’t really understand what a cookie was. I mean, they’ve spent several years basically training and priming this tool to, you know, really understand, like, the attributes of all their different products and what they should all be. And again, they’re still double-checking everything for sure.
McCarty Carino: And in this case, it sounds like, you know, maybe freeing some workers up from the kind of onerous responsibility of so much taste testing that not all of them relish.
Bousquette: Yeah, yeah, absolutely. I mean, I think the fact that this is going to accelerate time to market and basically reduce the number of, you know, iterative tastings that have to happen for any given product was a relief to some of the staff there that basically sometimes have to do tastings, you know, up to four to five days a week, which I thought sounded fun, but I guess they were like, when you work in a company like Mondelez, the amount of tastings is not fun. It’s hard to give, you know, tasting feedback on products where the differences are sort of minute, and when you’re doing it so much, I think sometimes even your feedback, you just kind of lose a sense of what you’re even thinking. So, yeah, I think fewer tastings per product [is] definitely a relief to some.
McCarty Carino: How much is Mondelez sort of acknowledging or promoting the use of AI in its products? Obviously, they had you come report on it.
Bousquette: Yeah, no, it’s funny. I mean, and again, like, they’ve been working on this tool for five years, and it’s been used in, you know, 70 different products at this point. But as a consumer, you wouldn’t know that Mondelez is using it. It’s helped them respond to consumer demands faster. And the last couple years have been kind of challenging for the consumer packaged goods snacking sector, with consumer wallets tightening. They’re, you know, scaling back on treats, and so, you know, maybe it’s a situation where they would rather go for, you know, a packet of minis or a packet of thins, rather than sort of, you know, the traditional big package. And Mondelez is in a position where they can sort of respond to that quickly. So I think that’s something that’s helped them, even if the consumers don’t actually know, like, oh, this was made with AI, how cool.
Artificial general intelligence, or AGI, has long been the holy grail of innovation — a synthetic intelligence with all the capabilities of a human mind or more. Recent advances in AI have many predicting we could be closer to achieving it than we’re ready for.
It’s a reality that preoccupied the late diplomat Henry Kissinger before he died last year at 100 years old. He collaborated with Eric Schmidt, formerly at Google, and Craig Mundie, formerly at Microsoft, on the new book “Genesis: Artificial Intelligence, Hope and the Human Spirit.” Mundie joined Marketplace’s Meghan McCarty Carino to discuss what a future with superintelligence might look like. The following is an edited transcript of their conversation:
Craig Mundie: As it relates to the government issues, it affects many things. As we’ve seen in the applications to misinformation, people can use it to try to affect how democracies work. On the other hand, authoritarian governments [may] say, well, this is the greatest thing ever in terms of me being able to keep track of my people, or watch them or control them in some way. The impact on warfare, I think, will be profound. While in the current environment, we still tend to focus mostly on kinetic warfare, I think in the age of AI, we’re going to see that more and more this will become focused on cyber warfare, the ability to disable a society in its entirety will come more likely from cyber means than from kinetic means. And the emergence of superintelligence becomes empowering for people on both sides of that, whether you’re on the offense or the defense. And so in the book we talk a little bit about the fact that you really have to stop and think, how does this evolve? And to some extent, what should governments be doing about it?
We said there is a window of time that we think is not all that long, where governments have to come to a realization, much as happened, for example, with nuclear weapons, where we built them all and then realized this could be a problem, not because we might still shoot at the Soviets or vice versa, but because they came to realize that if you just let these things proliferate willy-nilly, then it was really hard to keep track of what other people might do with them. And then that began a 70-year regime of non-proliferation and controls on those. And I think we’re ultimately going to have to come to the same realization here, that there’s going to have to be some agreement by the governments, at least, that have the greatest investment and progress in this area, that they’re going to have to come together and realize they’re better off thinking about how to control this for the benefit of humanity collectively than strictly for the benefit of any one of the countries or governments. And so there’s definitely going to be a tension there. And the book tries to make the case that as difficult as it is, we’re going to have to encourage governments fairly quickly to move beyond this local optimization and into something that’s more global in nature.
Meghan McCarty Carino: One of the questions you engage is whether governments should cede decision making to artificial general intelligence, provided that it is providing insights that no human could provide. What are the tensions in that question?
Mundie: Well, I think this is sort of one of the central features of the book, to make people understand that these machines will be polymathic to a degree that no human or group of humans will ever attain. And therefore that’s both the good news and, to some extent, the bad news. The good news, it’ll allow us to solve problems and advance our species and our society in ways that we can’t even imagine. The bad news is we won’t understand it all. So the ultimate issue in my mind is, how do we get trust in this system? We have to trust it to the same degree that we ultimately come to trust other humans.
McCarty Carino: This question of how humans might build the technology so that it is in alignment with human interests. It doesn’t seem to have obvious solutions. I mean, the AI industry, as currently constructed, is very decentralized, fairly unregulated, unlike the example of nuclear technology, which was very top down from governments. A lot of this is happening at private companies. Some models are open source. Even among humans, there’s going to be a lot of disagreement about what proper alignment looks like. So how can we possibly ever hope that this will work out to the positive?
Mundie: Well, partly the answer is the capabilities of the AI itself. As it turned out, about six years ago I started working with Sam Altman and the people at OpenAI. And my focus there was really these longer-term policy questions. And, of course, one of their founding concepts was that they were going to build an AI that was going to be good for humanity. And I would frequently ask the question, well, it’s great to see all the short-term efforts to try to make this thing safe, but what about that alignment thing? What does it even mean, and how are we going to get there? And so over time, some of the people who were at OpenAI would spend time talking to me about it.
And one of the things that we quite rapidly concluded, a couple of us, was that we couldn’t see a way to, if you will, govern these AIs for this long term goal of alignment and symbiosis, unless you used an AI to do it, and then that leads you to a whole other set of questions. But the whole industry and all the government activities have moved down the path of very short term ways — you hear the terms guidelines and guardrails and other things. These are very, I’ll say, short term in nature. And partly because of my interactions with Kissinger over the years, I thought what you really needed to think about was an architecture of control and governance, but in a much broader sense of the word architecture. It needed a technological strategy, but you also needed a legal strategy, a policy strategy, a diplomacy strategy, a non-proliferation strategy. All these things that we have in ad hoc ways, at times, done in other areas, like nuclear weapons and nuclear power, somebody needs to be thinking at that level about it.
And so partly out of concern and partly out of frustration, I ended up spending about four years of my own personal time thinking about that broad architecture. And the things that are in the book at a high level are derivative of a lot of the work that I did in that area. But it all built on this idea that the only way to solve the problem, the dilemma you describe, is to have an AI and its polymathic ability to adjudicate these things, to become part of the solution itself. So AI is not just the problem, it’s part of the solution. And so some of us have gone on to build at least prototypes of how that could actually be done technically.
And then that leads to the challenge of, how do we get the companies and the countries and their governments to come around to realizing that there is a path forward, but it takes an effort that’s a lot more than, well, let’s examine the models and decide whether we think they’re good or bad, or let’s try to have guidelines that are written by humans. That’s just not going to be sufficient. So in part, the book was a vehicle to try to get people to realize how big these problems are in the long term for humanity, but also to say that there’s a lot of benefit to get, and that should be a motivation to come together and attack this problem of what is the collective action that should be taken by the businesses, the academy, the governments, and to some extent, diplomatic efforts in order to bring this together.
McCarty Carino: I want to ask you more about your sense of positivity about this, because toward the end of the book you write that a world with no artificial general intelligence would be preferable to a world with even one AGI system that is not aligned with humans. I think a lot of people might look at that equation and say, okay, well, let’s not risk it, shut this down. But that is not your conclusion. Why not?
Mundie: Unless you could shut it down and guarantee that it was 100% shut down by every actor on the planet, then it’s a lost cause. And so the book is an entreaty to the companies, the academy and the governments of the world to recognize that only by a coordinated effort can we have a trust system that will allow those who want to comport with it comfort and interoperability. But that then also creates the basis of discriminating between those who want to play in a happy way together and those who don’t. And once you know that, you can bring, essentially, the powers of the economy and government to bear on the question of how you want to deal with non-proliferation, and at least try to slow down the emergence of uncontrolled activities.
Everyone was obsessed with the new white whale of the AI world this week. We’ll get into it on today’s “Marketplace Tech Bytes: Week in Review.” Plus, Trump floats tariffs on semiconductors from overseas. And a bipartisan Senate bill to ban kids from social media is getting another look. But first, back to that DeepSeek drama. The Chinese AI company took the world and the markets by storm with claims that its class-leading large language model was built at a fraction of the cost of Silicon Valley rivals. DeepSeek claims it spent only $6 million on compute power, less than one-sixteenth of what leading U.S. companies have spent. Marketplace’s Meghan McCarty Carino spoke with Paresh Dave, senior writer at Wired, about all these topics for this week’s Tech Bytes.
President Donald Trump’s return to the White House has been seen by many as a boost for cryptocurrency. During the campaign, he made several crypto-friendly pledges and recently made a splash when he launched his own “meme coin” shortly before the inauguration.
The Trump token reached nearly a $15 billion valuation, though it has since fallen quite a bit. But it continues to provoke questions, like, is creating this investment vehicle a conflict of interest for a high-ranking official? And what the heck is a meme coin anyway?
Marketplace’s Meghan McCarty Carino spoke with Axios reporter Brady Dale, author of the Axios Crypto newsletter, to get some answers.
The following is an edited transcript of their conversation.
Brady Dale: They’re a speculative instrument that I sort of view as a game. You know, there are these cryptocurrencies that capture an idea. So the most famous one is dogecoin — that funny dog that everyone sees all over the internet — and then, you know, these days there’s this blockchain called Solana that’s very fast and easy to use and cheap to use. And this app was spun up called pump.fun that made it very easy to just create new meme coins, like super, superfast. And so thousands get created every day. And it’s just a game where people, you know, watch the new ones come out and ask themselves, which of these ideas is the funniest, which of these ideas is the catchiest? And they buy some, you know, making a bet that they can pick it correctly. And then, you know, these coins typically don’t last very long. Sometimes they do, but typically they don’t. And so you’re trying to, like, guess which ones will grab everyone’s attention. If it does start to go up, you try to sell at the right time because you sort of assume it’s not going to last forever. So it’s this big, massive gambling game that people are playing together, because right now in crypto there’s nothing else really interesting to do other than buying bitcoin, because bitcoin has had a good year or so, but that’s not that exciting. And so people are looking for, you know, people in this space like fun stuff to play around with. And so that’s been meme coins for the last many months.
Meghan McCarty Carino: All right, and we’re discussing this now, of course, because of President Trump’s meme coin, which he launched the weekend before the inauguration. How did that go?
Dale: Well, it really exploded. At first, a handful of people made a ton of money off of it. A bunch of other people made a little, tiny amount of money off of it. And, you know, some others lost money. It’s way down from its high price. But one thing I will say is this meme coin would fall into the category of what we sometimes call celebrity meme coins. And there’s been a lot of celebrity crypto products. Those have the potential to have more staying power because, like the celebrity — in this case, the most powerful man in the world — can, if they choose, do things around those coins to, like, keep driving interest to them. He could start saying that, like, there’s a special section at my rallies for, you know, Trumpcoin holders. Who knows if he’ll do that? He may never talk about it again. It may be a totally dead thing, but if he did, that kind of thing would continue to drive interest in it.
McCarty Carino: Yeah. Tell me more about these celebrity coins because there have been a number of them and a number of kind of scandals around them in the last year or so.
Dale: Yeah, generally speaking, when celebrities get involved in cryptocurrency, it doesn’t go well, you know? It’s been mostly something that kind of C-list or has-been celebrities have done, you know, honestly. And so it’s tough for them to drive ongoing interest, since people don’t have ongoing interest in them. But obviously, you know, no one is able to drive ongoing interest like President Trump. So this coin could be different. I’m not banking on it, but it could be different.
McCarty Carino: I mean, you did report that a lot of the people buying the Trump meme coin were largely new to crypto.
Dale: Yeah, because I think, you know, the app that they used at launch, called Moonshot, said it onboarded 400,000 people, and, you know, probably most of them were people who had never touched a blockchain before. I think a lot of people saw it, assumed it was Trump and so the price would probably go up quickly, and thought maybe they could make some money. But I think some people thought it gave legitimacy to the sector. I mean, here is the [then] about-to-be president, now the president, launching a cryptocurrency, so maybe that actually meant something. Now, the weird thing is, it’s hard to see how much this actually came from Trump himself. You know, he was asked about it by some reporters, and he said he didn’t know much about it, which, you know, who knows if that’s true or not? But what is true is, I think when it launched, a lot of us were expecting him to promote it in some way during his inauguration time, and he didn’t say anything about it. He still hasn’t, you know. So it may have been something that [Trump’s son] Barron did that he wasn’t paying much attention to; it’s unclear.
McCarty Carino: So how has the crypto industry been responding to these official presidential meme coins?
Dale: Yeah, I would say the people who’ve been around for a long time, the more sober crypto people, you know, those folks do exist, haven’t been that excited about it, because they feel like the president launching a cryptocurrency undermines his kind of, you know, authority and his credibility as someone to move pro-crypto regulation. So they haven’t been that psyched about it. But the newer folks, like, for example, I talked to one of the co-founders of pump.fun, you know, the guys who got the whole meme coin thing going in the last year. They’re extremely excited about this, you know? Like, obviously, it’s been great for them and their sector. It’s been great for the blockchain they build on, Solana. But I think by and large, the energy from Trumpcoin has kind of died off. We haven’t seen them putting more into it. So I think folks are largely disappointed and sort of not feeling great about this happening. But also, the president is so powerful right now that it probably isn’t really going to mean much, one way or the other, for the industry, really, in the grand scheme of things.
McCarty Carino: And what can investors do with these coins?
Dale: Bet that they’ll go up or down? I mean, that’s all. That’s the only function they have so far. You know, the Trump Organization could introduce more functions for them, but right now, it’s purely speculative. Now this might be of interest to your listeners. This is hardly the first Trump meme coin. There’s been a ton of Trump meme coins. It’s just that this is the first official Trump meme coin. And the reason that’s important is there were lots of Trump meme coins trading during the election. There were also Biden coins, there were Kamala coins, there were, you know, RFK coins, like whatever, you name it. But these were just things that people made, some as more serious efforts than others. But what is interesting about those coins is they did go up and down with, like, the polls and the news and the election. So if Trump looked like he was doing better than Biden, the Trumpcoins would go up in value. When it first became clear that [then-Vice President Kamala Harris] was going to be the Democratic nominee, her meme coin really shot up in price, like a ton, you know. Went from, like, next to nothing to, you know, something. You could have made a lot of money if you’d had some of those. So they really do follow the news. That is how folks trade them. So, I mean, it could be that people will buy into these things and sort of, you know, make bets around moves that Trump makes and what that might do to the price of his coin. And probably, if what we saw in the election holds true, this coin will follow big news moments in his presidency.
On the crypto regulation front, Trump has already created a group focused on, among other things, the creation of a national cryptocurrency reserve, which could come from crypto seized by the federal government.
Meanwhile, Trump Media — the parent company of Truth Social, also majority owned by the president — recently announced it was planning to offer its own financial services, specifically investments like exchange-traded funds and, yes, cryptocurrencies.
Back to that question of what you can do with a Trump meme coin: Several e-commerce sites have announced they will accept it as a form of payment. So you can now buy Trump watches, Trump sneakers and Trump fragrances with Trumpcoin.
Amid all the executive orders signed by President Donald Trump during his first week in office came a promise to “restore freedom of speech” and end federal censorship. Keen observers may note that freedom of speech is protected by the Constitution.
But the order seems to have something more specific in mind. It calls out what it characterizes as the Biden administration’s pressure campaign on social media companies to “moderate, deplatform, or otherwise suppress speech under the guise of combatting misinformation.”
Will Oremus, tech news analysis writer at The Washington Post, told Marketplace’s Meghan McCarty Carino that the order is a signal of the president’s continued focus on content moderation online. The following is an edited transcript of their conversation:
Will Oremus: What [Trump is] doing, though, is equating censorship with online content moderation and rules around what people can say online. And in particular, he uses the executive order to assert that the Biden administration abused its power when it talked with tech companies, or potentially pressured tech companies, to take down certain types of misinformation, conspiracy theories, mostly stuff around COVID-19 and the vaccines, and some around January 6 and the 2020 election.
Meghan McCarty Carino: And this is kind of a practice that has come to be called jawboning.
Oremus: That’s right. So jawboning is an older term, but it refers to when someone in government uses their position of authority to try to influence the decisions of private companies. It’s not a legal term, there’s no law against jawboning, but when you’re jawboning private companies around their decisions about speech, that could be a form of censorship by proxy, depending on how it’s done. However, when the government is telling social media companies, or if the government were telling social media companies, what their speech policies should be, then that could be deemed government censorship. And so that was at issue in a case that came to the Supreme Court last year called Missouri vs. Biden. And so Trump is going back to that and saying, that was, in fact, censorship and we’re going to make sure that doesn’t happen anymore — even though the Supreme Court actually did not say that it was censorship.
McCarty Carino: This Trump order calls for investigating the actions of the federal government during the Biden administration, to look for free speech violations. I mean, does this mean that they’re going to be relitigating that case?
Oremus: I think it does. Now, when you’re a government official, you have a right to free speech as well. Joe Biden can say, “I think it’s bad that there is a lot of COVID vaccine misinformation on Facebook or on Twitter,” that’s within his right. The question is, did he pressure the companies to change their actions? Did he threaten them? Did it rise to the level of coercion? The Supreme Court hasn’t found that so far. But [that case is] now back in a lower court in Louisiana. And here’s the funny thing, because Biden is no longer the president, Trump is now the defendant in that case. So the case is now, in fact, called Missouri vs. Trump, and so the Trump administration will probably take a very different approach. It may not even try to defend the Biden administration’s actions in that case. So we’ll have to see how that case gets resolved at the lower level.
McCarty Carino: Many free speech advocates have been very troubled by the actions of the Biden administration — and also, I would say, by Republicans who have engaged in this kind of activity in the past. Does this Trump order please folks like that, who are concerned about government influence on social media companies?
Oremus: Yeah, it’s a good question. So even First Amendment advocates who were troubled by the Biden administration’s contacts with social media companies are not necessarily buying Trump as the defender of free speech and the true enemy of jawboning. Trump himself has engaged in plenty of jawboning. In his first term, he did it when he got upset by his own tweets being fact-checked on Twitter (this was pre-Elon Musk), and within days he had come out with an executive order saying “we should look at Section 230.” A lot of First Amendment advocates say that seemed like it might have crossed the line into illegal censorship by Trump. So this executive order, one scholar told me, feels like an attempt to rewrite history: to say that when Biden did it, it was illegal censorship, and we’re going to investigate that and not tolerate it. But it doesn’t mention at all Trump’s own attempts to influence tech companies, which are almost certain to continue and, in some ways, are already continuing at the beginning of his second term.
McCarty Carino: Let’s talk a bit more about the potential effects of this order on content moderation more broadly. What might this mean for the online information ecosystem for users on social media?
Oremus: The upshot of all this for social media users is that they’re going to find, across most of the major platforms, fewer restrictions on what they can say. They might also find themselves subjected to more slurs or harassment or discrimination, and find that they don’t have as much recourse in successfully reporting those posts as they used to. In general, we will see Meta’s platforms, and maybe other platforms, moving more in the direction of X under Elon Musk, because Trump has made it so clear that he does not want to see social media companies moderating content. And we’ve seen from his first term that getting on Trump’s bad side just creates a succession of headaches for tech companies.
McCarty Carino: And it seems like even before Trump took office, we saw a number of organizations, social media platforms, moving in that direction already.
Oremus: So there’s been this yearslong, Republican-led push to equate content moderation with censorship. It has already had an impact on the landscape across social media, and also in nonprofits and academia. We’ve seen universities disband groups that were researching misinformation because those groups were getting subpoenaed by Republicans in Congress, and they were getting labeled as censors, and they were drawing criticism and harassment and all that sort of thing. And some universities just decided, well, we’re better off without this. And so misinformation research has declined as a field. More recently, we’ve seen Mark Zuckerberg and Meta pull back on content moderation. Zuckerberg has gone back to saying that Facebook and Instagram and Threads should err on the side of free speech, and this is very much in keeping with this Trump-led and Republican-led push to stop moderating what people say online.
McCarty Carino: We’ve already seen recent efforts targeting NewsGuard, which rates online publications with nonpartisan transparency “nutrition labels” for consumers, ratings also geared at advertisers who might want more details about the sites that host their advertising. What’s been going on there?
Oremus: Yeah, so we wrote about NewsGuard last month as an interesting window into some of the effects of this campaign, not only against content moderation but also against fact-checking. You know, a lot of conservatives these days equate fact-checking with a liberal project to suppress conservative views. This company NewsGuard was actually founded in an explicitly nonpartisan way, and it set out objective criteria by which it rates the credibility of news sites. But it has been cast as part of this “censorship cartel.” And so Brendan Carr, an FCC commissioner who is now the chair of the FCC under Trump, wrote a letter seeming to threaten tech companies that work with NewsGuard, that use NewsGuard’s ratings in their algorithms or to help with their search results. One example: if you have Microsoft Edge as your browser, you can actually turn on NewsGuard’s ratings and see them every time a publisher comes up in your search results. Carr said, hey, working with a group like NewsGuard really makes you guys look like censors, and just keep in mind we can always take another look at Section 230 and your liability shield if we find that you’re not moderating in good faith, and working with NewsGuard would be an example of not moderating in good faith. So NewsGuard is now under threat almost entirely from the right. I mean, when it started out, it got criticism from both sides, but now it has become sort of demonized on the right, and it’s fighting for its own reputation as a nonpartisan source for credibility ratings.
There’s no shortage of bullish voices on artificial intelligence among the titans of tech. But even many of the leading evangelists, like prevailing pop culture narratives, tend to strike a note of impending doom when envisioning the future of the technology. Reid Hoffman wants us to consider the alternative. He co-founded LinkedIn and was a founding investor in, and former board member of, OpenAI before branching into other ventures, like Inflection AI. His new book, “Superagency: What Could Possibly Go Right with Our AI Future?” explores those alternatives. Marketplace’s Meghan McCarty Carino spoke with Hoffman about what he means by the idea of “superagency.”
Last week’s annual gathering of the rich and powerful at the World Economic Forum in Davos, Switzerland, was a bit overshadowed by the inauguration of Donald Trump in the U.S. The president made a virtual appearance at the conference, delivering a speech that hit on several of his recurring themes: tariffs, inflation and artificial intelligence. AI has been a big topic at the summit for several years. But the way it was treated this year felt different, according to Reed Albergotti, tech editor at news website Semafor. Marketplace’s Meghan McCarty Carino caught up with Albergotti just as he was wrapping up his reporting at Davos.
There’s been quite a firehose of news this week, but we’re going to distill some of it into a nice, tall glass for you on today’s Marketplace “Tech Bytes: Week in Review.” We’ll dig into why some crypto insiders are upset with President Donald Trump over his preinaugural meme coins. Plus, the latest in the TikTok ban rollback and how Congress might respond. But first, amid the flurry of executive orders the president signed during his first week in office, he announced the Stargate project, a private, multiparty venture to build domestic artificial intelligence data centers. In attendance at the White House were OpenAI CEO Sam Altman, Oracle co-founder Larry Ellison and SoftBank CEO Masayoshi Son. The investment could be as much as $500 billion. Marketplace’s Meghan McCarty Carino spoke to Anita Ramaswamy, columnist at The Information, for her take on these stories.
Getting fast, comprehensive and accurate information is crucial during emergencies like the devastating wildfires still raging in the Los Angeles area. And over the last two terrifying weeks, one app has become the place to find it: Watch Duty. Operated by a nonprofit, the app was launched in 2021 to track wildfires in Northern California and now provides coverage for more than 20 states. Marketplace’s Meghan McCarty Carino spoke with David Merritt, Watch Duty’s chief technology officer, about how it all came together.
The explosion of artificial intelligence tools like chatbots has rocked the education world in the last couple of years. It’s spurred efforts to prohibit, detect or otherwise build guardrails around these powerful new tools. Some educators, though, are embracing them, and Colby College is doing it on an institutional level. Four years ago, before most of the public had ever heard of large language models, this private liberal arts college in Maine established a cross-disciplinary institute for AI to help educators and students integrate the technology into their curricula in an ethical way. We had the college president on back then to discuss it, and today we wanted to check back in — this time with Michael Donihue, interim director of the Davis Institute for AI at Colby College.
There’s been a lot of doom and gloom in the tech sector in recent years — the feeling that so many of the advances in internet connectivity, social media and now artificial intelligence might have caused more harm than good, increasing the need for at least caution in the industry and even, possibly, government intervention. But lately a backlash to the backlash has been brewing among techno-optimists. Their movement is called effective accelerationism, a play on the effective altruism community, and its supporters argue that unrestricted technological progress is a force for positive change. It’s received more attention since Donald Trump won the 2024 election. Marketplace’s Meghan McCarty Carino spoke with Nadia Asparouhova, a writer and researcher who’s been following the rise of the effective accelerationist subculture, often shortened to e/acc.
It’s Inauguration Day, and a veritable who’s who of tech are in attendance for the swearing in of Donald Trump as the 47th president of the United States. The massive presence of tech leaders, overtly supporting or just making nice with Trump, represents a stunning reversal from his first term. Today, we’re looking back at what happened in between. President Joe Biden was often seen as taking an adversarial approach to the tech industry.
On this week’s Marketplace “Tech Bytes,” we’ll dive into President Joe Biden’s executive order on artificial intelligence plus a request Meta CEO Mark Zuckerberg made to President-elect Donald Trump. But first, tech news site The Information reported that TikTok plans to completely shut down its app in the U.S. on Sunday and will instead direct users to a website where they can read about the platform’s ban. According to that reporting, TikTok will allow American users to download their data — and, if the ban is overturned down the road, those users will be granted access to it immediately. Marketplace’s Kimberly Adams is joined by Maria Curi, tech policy reporter at Axios, to break down these stories.
California relies on a variety of tools to stop and mitigate wildfires, some as low-tech as dumping giant buckets of seawater on the flames. But on the higher-tech side is a new, AI-powered monitoring system called ALERTCalifornia, which was developed at the University of California, San Diego. It’s designed to speedily detect and report wildfires using a network of over 1,000 cameras and sensors. The developers say the network detected over 1,200 blazes across the state during the 2023 fire season, sometimes with impressive quickness. But the system wasn’t quick enough to prevent the current disaster in Los Angeles. Marketplace’s Kimberly Adams spoke with Cyrus Farivar, a senior writer at Forbes, who explored how the fury of the Palisades fire overwhelmed that human-made system.
As fires burn in Los Angeles, many people are going online to find ways to support people who have been temporarily or permanently displaced by the disaster. But like we’ve seen in the aftermath of recent hurricanes and floods, bad actors are spreading misinformation and financial scams. Marketplace’s Kimberly Adams spoke with Steve Grobman, chief technology officer at the cybersecurity firm McAfee, to learn more.
Scam calls about fake warranty renewals, non-existent credit card bills and more are still a global problem. But some companies and telecommunication providers are turning to AI chatbots to intercept the calls before they ever reach a real person. Marketplace’s Meghan McCarty Carino recently spoke with Dali Kaafar, founder and CEO of Apate AI, an Australia-based company creating these chatbots, about how his company is designing these bots to scam the scammers.
Since large language model chatbots hit the scene a few years ago, there’s been a lot of speculation about which jobs they might disrupt most. A lot of bets were on customer service. And recent data show they are becoming more common in the space. A Salesforce survey found a 42% increase in the share of shoppers who turned to AI-powered chatbots for customer service during the 2024 holiday shopping season compared to the previous year. But as AI becomes more powerful and more human-like, will AI voice agents become the norm, even for those more complicated customer cases now handled by human agents? The BBC’s Elizabeth Hotson looked into what a future of synthetic customer service might look like.
CES wraps up in Las Vegas this week. That’s the annual convention where some of the most cutting-edge consumer tech is unveiled. And while we still don’t have a prototype for Rosey, the housecleaning robot from “The Jetsons,” we’ll get into some of the big robot reveals for today’s Marketplace “Tech Bytes: Week in Review.” Plus, YouTubers are taking PayPal to court. A class-action suit alleges that the payments company is messing with their commissions on affiliate links. But first, Meta made big changes to its content moderation policy this week. Facebook’s parent company said it’s cutting ties with third-party fact checkers and switching to a community notes system like the one X uses. Marketplace’s Meghan McCarty Carino spoke with Joanna Stern, senior personal technology columnist at The Wall Street Journal, about her takeaways from the announcement.
This week, Meta CEO Mark Zuckerberg announced some big changes to the company’s content moderation strategy. The parent company of Facebook, Instagram, Threads and WhatsApp will no longer contract with third-party fact-checkers from the media and nonprofits, as it has since 2016. Instead, Meta will follow the lead of X under Elon Musk and rely on crowd-sourced Community Notes to provide additional context on posts. Marketplace’s Meghan McCarty Carino spoke with David Gilbert, a reporter at Wired who covers online disinformation and extremism, to learn more about Meta’s latest pivot.
U.S. ports could be facing another strike as the deadline looms next Wednesday to settle a union contract for 45,000 dockworkers on the East and Gulf coasts. A major sticking point has been automation. Proponents argue that technology can make ports cleaner and more efficient; critics point to lost jobs, high costs and mixed productivity results. While the cost-benefit analysis of port automation is complicated, there are places where the model appears to be succeeding, like Rotterdam in the Netherlands.
By now you probably know the term “large language model.” They’re the systems that underlie artificial intelligence chatbots like ChatGPT. They’re called “large” because typically the more data you feed into them — like all the text on the internet — the better those models perform. But in recent months, there’s been chatter about the prospect that ever bigger models might not deliver transformative performance gains. Enter small language models. MIT Technology Review recently listed the systems as a breakthrough technology to watch in 2025. Marketplace’s Meghan McCarty Carino spoke to MIT Tech Review Executive Editor Niall Firth about why SLMs made the list.
A battle is brewing over the restructuring of OpenAI, the creator of pioneering artificial intelligence chatbot ChatGPT. It was founded as a nonprofit in 2015 with the goal of developing AI to benefit humanity, not investors. But advanced AI requires massive processing power, which gets expensive, feeding into the company’s decision to take on major investors. Recently, OpenAI unveiled a plan to transition into a for-profit public benefit corporation. That plan has drawn objections from the likes of Elon Musk, Meta and Robert Weissman, co-president of consumer advocacy group Public Citizen, which urged California authorities to ensure that as OpenAI reorganizes, it will repay much of the benefits it received as a nonprofit.
OpenAI closed the year with a bang, announcing a new, powerful AI model called o3. It could mark a significant step toward artificial general intelligence — an advanced form of AI that can learn or understand anything a human can. Plus, we’re mulling another tech prediction for 2025 — will AI assistants actually make our lives easier this year? But first, President-elect Donald Trump asked the Supreme Court to put the TikTok ban on hold so he might negotiate a deal to save the app in the United States. Marketplace’s Meghan McCarty Carino spoke with Paresh Dave, senior writer at Wired, about all these topics for this week’s Tech Bytes.
Artificial intelligence, and promises about the tech, are everywhere these days. But excitement about genuine advances can easily veer into hype, according to Arvind Narayanan, a computer science professor at Princeton who, along with PhD candidate Sayash Kapoor, wrote the book “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.”
He says even the term AI doesn’t always mean what you think. The following is an edited transcript of his conversation with Marketplace’s Meghan McCarty Carino:
Arvind Narayanan: AI is an umbrella term. It refers to a collection of loosely related technologies. So many different products are called AI. In some ways, AI has certainly made remarkable progress. But in other cases, what is being sold as AI is, first of all, 100-year-old statistics, simple formulas that are being rebranded as AI; but more importantly, it’s being used in situations where we should not expect AI or any other technology to work, like trying to predict a person’s future. So in the criminal justice system, AI is used to predict who might commit another crime if they are released before their trial, and if they’re deemed too risky because the algorithm predicts they might commit another crime, then they are detained. And you know, that could be months or years until their trial. And that’s just one example. It’s used in the health system. It’s used in automated hiring. So in so many cases, these kinds of predictive technologies can pick up some basic statistical patterns in the data, that’s true, but they are not at the level of predictivity where we think it’s morally justified to be making these incredibly consequential decisions about people.
Meghan McCarty Carino: Let’s talk about predictive AI, or automated decision making, where you kind of have your most potent criticisms. These are systems that use machine learning to try to predict, say, who is most worthy of a bank loan or who would be the best fit for a job or who is likely to commit crimes. What makes these applications of AI so problematic?
Narayanan: So let me make one small distinction here. Yes, these are all areas where predictive AI is being used. But, look, I mean, banks have to determine who is more risky and who is less risky. And if they make no distinctions between applicants, they will probably go out of business. So in some cases, yeah, we have to apply some sort of prediction. But in other cases, it’s really not clear why we’re treating this as a prediction problem. In hiring, the way we used to do it is we insist on certain minimum qualifications and then we interview the person, and we come to a nuanced human understanding of how well the things they bring to this job will contribute to what we want them to do. And that’s really hard to turn into a pure prediction problem in the way that machine learning can handle it, because how well the person will perform is not merely a function of whatever they’ve put in their resume, but really a bunch of factors that relate to both the candidate and the environment into which they’re going to be placed. When we look at the research, a big part of the reason why candidates often underperform might have to do with their manager.
And so when we ignore all that and we try to turn this into a pure prediction problem, first of all, there doesn’t seem to be that much predictability in the data. And secondly, the experience that we put these job seekers through — being interviewed by a robot, essentially — I think it’s kind of a violation of basic dignity, the ability for that job seeker to explain to a person why they bring something valuable to this job, why they’re deserving of this job. And I think when we forget that, we lose something really essential. And this is, I think, a kind of algorithmic injustice that goes beyond bias and discrimination. So even if it has the same error rate for all demographic groups, our point in the book is that it’s not fair to anybody.
McCarty Carino: Are there certain use cases or conditions where you think skepticism is especially warranted when we talk about generative AI?
Narayanan: Definitely. So while this is a powerful technology, we should always be thinking about whether the things we’re applying it to are even technology problems, problems where technology can possibly be the solution. One good example was a story out of Cheyenne, the capital of Wyoming, where someone was campaigning for a bot to be the mayor. The bot just seems to be ChatGPT behind the scenes, but he calls it VIC (virtual integrated citizen), which sounds more sophisticated. And he says he wants the bot to be making all of the decisions that a mayor would make, not just mundane stuff like how much to spend on infrastructure, but also much more controversial things like decisions about book bans. And he talks about [how] the bot can be more accurate than a person, and its IQ is 155 and so on. I don’t think IQ is a valid measure for how well a bot works, but let’s set that aside.
The bigger point is that accuracy is not even the relevant question here. I mean, what does it mean to have a bot for a mayor? The reason we have political processes is that politics is the venue we’ve chosen for resolving our deepest societal differences. So these disagreements, while they might get heated sometimes, being able to work them out is the whole point of politics. And to try to automate that is to miss the point. And to make it very concrete for our listeners: whatever your views are on book bans, imagine that this ChatGPT mayor, through whatever “objective method,” spit out a decision that’s not the one you agree with. Will you accept that decision simply because it came from this supposedly unbiased bot? You know, presumably not. And I think that example shows why this is not the kind of thing we should even be trying to automate.
McCarty Carino: Something we have heard a lot about over the last couple of years, particularly from experts within the AI community, is the potential for an AI apocalypse. You know, the idea that artificial general intelligence is just around the corner. This is AGI as capable as a human in pretty much every way. Is this AI snake oil, in your view?
Narayanan: Yeah, so thinking about large-scale risks from AI is definitely important. And there’s an AI safety research community. We ourselves do some AI safety research. I’m glad that that’s happening, but that’s different from the view that this threat is imminent and urgent and that we need to take extreme action to ward it off. For instance, the idea that policymakers should get together and, through presumably unprecedented international cooperation, put the brakes on AI. Those are the kinds of policy proposals that result if we see this as an imminent and urgent threat. And for that, we’re asking, where’s the evidence? This is not how we usually do policy.
McCarty Carino: Yeah, and when it comes to approaches to regulation, you kind of talk in the book about the need to be specific about harms. So many of these technologies that are around now have very general purposes. They can be used for many benign purposes. They could potentially be used for malicious purposes. How do you regulate a technology like that?
Narayanan: So there is some regulation that’s definitely needed at the level of the technology. But I think a lot of it has to be at the level of the use of the technology. I mean, generative AI is as general purpose as computers are. You know, we can’t make computers safe in the sense that they can’t be used by bad actors, unless we sell devices that are so locked down that only government-approved apps can be installed on them. So that’s the trade-off: if we want to solve the entire problem at the level of the technology, that’s the kind of almost authoritarian regulation you need. And if we’re not going to do that, then, unfortunately, it’s going to be harder, but a lot of it has to be at the level of the use of the technology. And we shouldn’t forget that while it’s true that, for instance, bioterrorists might take advantage of AI, it’s not that AI is enabling them to do something they couldn’t otherwise have done. Again, AI is useful to every knowledge worker. It helps all of us do certain things, I don’t know, 10% faster.
And in that same way, it’s going to help bad actors as well. But that’s not an AI problem. The fact that pathogens can be created in the lab, and that they could potentially even cause pandemics, is a threat we have been living with for a very long time, and we have not adequately acted against it. So maybe this moment of worry about AI is going to give added urgency to policymakers to put in place policies for better pandemic prevention and response. If that happens, that’s a good thing. But if we treat it as an AI problem and try to put AI back in the box, first of all, it’s not going to work. And secondly, we will have done nothing about the fact that we are living with threats that can be carried out even without the help of AI.
This episode originally aired on August 19th, 2024.
Six years ago, Apple introduced a new feature on iPhones and iPads: The Screen Time Report.
The notification pops up every Sunday and it informs iPhone users with a handy graph — just as they’re trying to relax before a stressful week — that they have once again failed to reduce their phone time over the past week.
The feature promised to empower users to manage their device time and balance the things that are really important. But is it actually doing that?
Caroline Mimbs Nyce, a staff writer at The Atlantic, recently wrote about why she thinks Screen Time is the worst feature Apple has ever made. She told Marketplace’s Meghan McCarty Carino that it sometimes feels like Screen Time is doing more guilt-tripping than empowering these days.
The following is an edited transcript of their conversation.
Caroline Mimbs Nyce: Time is just sort of a weird way to measure our relationship with our devices. We have these sort of magical smartphones that come with a lot of utility, but they also come with a lot of distraction. So, figuring out how to regulate that is not just a question of time, but actually one of really being thoughtful about what are the positive and the negative use cases of a phone? How does it make me feel? And that actually matters a lot when we’re talking about health outcomes.
Meghan McCarty Carino: Yeah, the addition of the Screen Time feature seems to be kind of an admission by Apple that they know that there can be some negative effects of being on your phone too much, and they wanted to allow users to “take control” of the time they spend on their devices. Does this actually do that?
Mimbs Nyce: A spokesperson for Apple did not respond to my question about whether or not they have any evidence that Screen Time helps people reduce the time spent on their phone. So, it’s sort of unclear. I don’t know if there are any good studies that really track time, but what we do know from the broader health literature is that all of this stuff is really context dependent. So, you could have someone using their phone for Google Maps and someone who’s using it to doom scroll and having anxiety or depression because of that. But even on an app level, it’s really hard to tell. So, you could have someone that uses Instagram and they have a group chat with their friends and they’re sending each other memes and having a blast. Or you could have someone who uses Instagram to scroll through photos that really affect their self-esteem, and they’re comparing themselves with what they see. So, it’s just really complicated, and a lot of the science suggests that the context is what really matters.
McCarty Carino: The one thing it does seem like this feature has been successful at is making people feel guilty about how much time they’re spending on their phones. Is that something that’s widespread?
Mimbs Nyce: I definitely relate to that. I absolutely feel guilty. I write in my piece about how I had this beautiful day recently, and then found out I used my phone for six hours during it, and it sort of reflexively made me go, “Oh my gosh, what did I do?” And then I had to remind myself that I had paddled out and surfed and kayaked and done all these beautiful offline things, and it’s actually okay to use your phone in some instances. The one caveat I will give to this is that experts did tell me that sleep does matter, obviously. So, if your phone is impeding on your sleep time, whether it’s the most productive, positive thing or a super distracting thing, it’s time to turn it off and put it away.
McCarty Carino: We have been, I think, alluding to phone use by adults, but there is increasing concern about the effects of phone use, and especially social media use for younger people. Are we right to be concerned about screen time in that demographic?
Mimbs Nyce: So the research on teens and tech is also really murky, and it’s very highly contested at the moment. One meta-analysis, so a study of studies that looked at teens and technology use, found that there was a negative but small correlation between phone use and negative mental health effects, but that correlation was too small to actually guide policy. So, we’re in sort of a tough spot where, whether we’re talking about teens or adults, we don’t totally know the health effects of these devices that follow us around all day. But even with teens, it does seem to be context dependent. That being said, this is a very hot moment in the discourse right now for teens and smartphone use. And I think what I wanted to make sure is that the adult conversation wasn’t getting muddled with the teen conversation.
McCarty Carino: You tried out some other apps and programs that claim to help users reduce their screen time. What did you find with some of these other apps?
Mimbs Nyce: I did. It was really fun. A lot of the other apps are a lot more customizable, which I really appreciated. And experts told me that that’s something that would be really useful if you are concerned about certain negative health effects of using your phone. So, they had the ability to sort by app. One service I was using called Opal actually categorizes apps by productive or distracting, and it doesn’t count the productive apps towards the screen time that it calculates. So, I found it really helpful. Experts suggested that that’s the way to go here, that something that’s flexible, that allows you to set goals for what you want to do and isn’t just a notification coming at you on Sunday morning saying implicitly “you used your phone too much last week, stop doing that.” And is instead something that allows you to really set those goals and then try to meet them.
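The Opal mechanic Mimbs Nyce describes is simple to picture in code. Here is a toy version (the app names, categories and minutes are made up, and Opal’s actual logic isn’t public):

```python
# Toy version of "don't count productive apps toward screen time."
# Categories and usage numbers are invented for illustration.

usage_minutes = {"Maps": 35, "Instagram": 90, "Email": 25, "TikTok": 60}
productive = {"Maps", "Email"}  # user-chosen categories, as the interview describes

# Only "distracting" apps count toward the reported total.
distracting_total = sum(mins for app, mins in usage_minutes.items() if app not in productive)
print(f"Screen time that counts: {distracting_total} minutes")  # 150 minutes
```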
McCarty Carino: So, after thinking about this so much and reporting this piece, did you keep the screen time notifications on?
Mimbs Nyce: The notifications are on right now. I will say at one point when I was reporting, and this didn’t go in the piece, but I turned the feature off and I found that I had a much more peaceful relationship with my device. So, for now, it’s on, it might go off again, but, you know, I think my main concern with screen time was feeling tortured by it. And I think when I was feeling tortured, it wasn’t productive for me. And now I feel like I’m in a much better place, where I am a lot more mindful about this notification that doesn’t mean anything. You know, I can say “this is a perfectly fine use of my phone, and I’m going to disregard this notification,” or, I can say “hey, this actually is making me feel kind of cruddy,” and maybe I’m going to put my phone in a box or choose not to use that app as much.
2024 was all about the artificial intelligence boom. That was true for Wall Street and Silicon Valley, but also the case on a wider, more practical level, with AI becoming increasingly visible in our schools, offices and social media feeds.
AI advances are sure to remain a massive part of the tech economy, but in the coming year, we could see more sci-fi-like tech becoming reality, according to futurist Amy Webb, CEO of the Future Today Institute.
Marketplace’s Meghan McCarty Carino spoke with Webb about some of the emerging trends she’s watching for in 2025, including a potential evolution in AI tech that she calls living intelligence.
The following is an edited transcript of their conversation.
Amy Webb: This year, everybody has been focused exclusively on artificial intelligence as the next big thing, and the issue is that artificial intelligence is already here, but worse, it’s actually connected to two other critical areas of technology that haven’t gotten as much attention. Those are advanced sensors and biotechnology. Together, artificial intelligence as the foundation with sensors and biotechnology are kind of creating yet another new field of technology, one that I’m calling living intelligence. And basically, living intelligence creates systems that can sense and learn and adapt and evolve, which means that as technology progresses in one of those fields, it sort of creates this flywheel effect. Living intelligence is going to drive an exponential cycle of innovation, of growth. It will disrupt businesses. It will change our reality, and it won’t happen overnight, but we’re at the beginning of a long-term cycle where living intelligence will play a pretty significant role.
Meghan McCarty Carino: Paint a little bit of a picture for me of what this might look like. How is bioengineering and sensors and artificial intelligence all put together in this?
Webb: So there’s a nationwide shortage of nitrile gloves. These are the gloves that you encounter at a restaurant or at a doctor’s office. The problem with nitrile gloves is that they require a handful of ingredients that are hard to source, specifically in the United States. There’s just no way for us currently to keep up with the demand. Now, what if there was a different way to produce the rubberlike substance that goes into these nitrile gloves? In fact, there is, and that’s because of living intelligence. In addition to all of the work that Google has done on artificial intelligence, it has also been working in the field of what’s called generative biology, to generate molecules or genomes, right? New materials. Which means that in the near future, artificial intelligence combined with biotechnology could lead to an abundant supply of nitrile gloves or, even better, the invention of a new substance that’s very similar. It’s plausible that we will have any number of new materials that will help us with the current shortages and problems that we have with things like petroleum and plastic. The point of living intelligence is that we’re looking at the different convergences between AI, advanced sensors and biology.
McCarty Carino: All right. Another area that you write about is crypto and kind of the continuing thaw of the crypto winter. We have an incoming administration, an incoming Congress seen as more friendly towards crypto. We have already seen the markets respond to that. But what does it mean for crypto, and you know, kind of the blockchain as technology?
Webb: So the incoming president, President [Donald] Trump, has picked Paul Atkins, a self-proclaimed crypto fanboy, as the next chair of the [Securities and Exchange Commission]. Why is this significant? Well, it’s significant because up until now, crypto has kind of been a frontier technology, and it just hasn’t been part of the agenda for the financial agencies and institutions in the United States. But that’s going to change next year, because we have an incoming administration that’s not just enthusiastic about cryptocurrencies; the president himself has a crypto venture. World Liberty Financial is a very early crypto venture that has the backing of President Trump. So this is incredibly important going forward for a couple of reasons. First, if you’re a holder of cryptocurrency, you’re probably looking at a good year. But if we zoom out a little bit, a lot of other countries around the world are trying to unhook from the dollar, and they are looking at cryptocurrency and blockchain as a potential way to do that. Some countries, including Brazil, have already launched digital currencies. So, you know, what does that mean in practice? Clarity on regulation. It very likely means that we will have guardrails and specifics, basically regulatory decisions and policies, that pave the way for cryptocurrencies to play a significant role in the U.S. economy.
McCarty Carino: Do you think it moves the U.S. any closer to adoption of a digital currency of its own?
Webb: Part of the challenge has to do with traceability. You know, once you are using systems like blockchain, what’s nice is that they give you the ability to verify, so you can trace and authenticate transactions. But you can also obscure the individual people the transactions originated from, since you need an account, not necessarily a name, which has made cryptocurrency an easy way to launder money. Honestly, we’re going to have to wait and see what happens. There’s quite a bit of opposition to digital currencies, but there is a new administration that appears to be very focused on making cryptocurrencies much more mainstream, and I should note that a lot of the MAGA base is very enthusiastic about cryptocurrency. So if there isn’t a market pull, there certainly is a constituent-driven one.
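The pseudonymity Webb describes is easy to sketch. In the toy example below (a generic illustration, not any specific blockchain’s address scheme; the key and helper names are made up), an “account” is just a hash of a public key, so the ledger is fully traceable while the owner’s name never appears:

```python
# Minimal sketch of blockchain-style pseudonymity: transactions are tied
# to an address derived from a key, not to a legal name. Generic illustration,
# not any real chain's address format.

import hashlib
import secrets

def make_address(public_key: bytes) -> str:
    """Derive a pseudonymous address by hashing a public key."""
    return hashlib.sha256(public_key).hexdigest()[:40]

alice_pubkey = secrets.token_bytes(33)  # stand-in for a real elliptic-curve public key
alice_address = make_address(alice_pubkey)

# Anyone can trace and authenticate this transaction on the public ledger,
# but nothing in the address itself reveals who Alice is.
ledger = [{"from": alice_address, "to": make_address(secrets.token_bytes(33)), "amount": 5}]
print(alice_address, ledger)
```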
McCarty Carino: All right, let’s talk about quantum computing. You note that quantum computing may have its moment in 2025. Remind us where we are with quantum computing and what the potential might be in the next year.
Webb: A couple of things happened at the end of 2024, one of which has to do with a quantum chip from Google called Willow. Willow can reduce errors below a particular threshold when processing information. Why is this important? Well, it’s important because qubits, which are the basis for a quantum computer, instead of being a one or a zero, can be both a one and a zero at the same time, and all you really need to know is that this allows quantum computers to solve certain types of problems way faster than a regular computer. The problem is that these qubits are super, super fragile, and they respond to any type of noise. Willow, this new chip, reduces all of that noise, so Willow can store quantum information. It can’t perform calculations yet. But this is a huge breakthrough. I know it sounds super boring, but it’s a big deal. There is now a clear pathway to make quantum computers a reality in the much nearer future.
McCarty Carino: So what are the barriers that remain between here and there, between achieving kind of functional quantum computing?
Webb: Well, so error reduction is big, and there’s no way to overstate how important that is. Now those special chips have to be put in machines that can be accessed, that can be used commercially, that can scale, and then you have to build applications that work for them. It’s kind of like having the first transistor: now we’ve got something to work with. The next steps are, you know, we have to build the systems and tools on top of this thing.
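For reference, the standard textbook formulation of what Webb is describing (this notation is not from the interview) writes a qubit’s state as a superposition of the two basis states:

\[
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,
\]

where a measurement yields 0 with probability $|\alpha|^2$ and 1 with probability $|\beta|^2$. The “noise” Webb mentions corresponds to decoherence, which corrupts those amplitudes mid-computation; error correction of the kind demonstrated on Willow spreads one logical qubit across many physical qubits so the encoded state survives longer.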
We asked Amy Webb about what could happen after Elon Musk bought Twitter back in 2022. She thought it could accelerate a trend she was already seeing — internet discourse splintering from social media as the digital town square into smaller, more niche, communities, or what she called the Splinternet.
And with Twitter — now X — coded more politically right, and left-leaning users fleeing to Bluesky, plus the growth of other platforms like Threads and Mastodon, she said that prediction is shaping up to be true. The next important development, she said, could be systems like Flipboard’s app Surf, which bills itself as a browser to view content from various social media apps in one place.
This story was produced by our colleagues at the BBC.
Have you ever found yourself angry or outraged at a piece of content on social media? A disgusting recipe or shocking opinion? It could be intentional.
Social media influencer Winta Zesu freely admits that she provokes for profit.
“Every single video of mine that has gained, like, millions and millions of views is because of hate comments,” she said.
The 24-year-old estimates she made $150,000 last year by exploiting an online trend known as rage-baiting.
“Literally, just if people get mad, the video is gonna go viral. I can make money on TikTok. Instagram is paying, like, YouTube pays you. So I was like, OK, I’m just gonna post everything on every platform.”
She’s part of a growing group of online creators making rage-bait content, where the goal is simple: record videos, produce memes and write posts that make other users viscerally angry, then bask in the thousands, or even millions, of shares and likes.
“The more content they create, the more engagement they get, the more that they get paid,” said Andréa Jones, a marketing strategist based in Toronto, Canada, “even if those views are negative or inciting rage and anger in people.”
Experts say rage-bait’s popularity is due to the design of the algorithms that determine what users see.
“If we see a cat, we’re like, ‘Oh, that’s cute.’ We scroll on,” Jones said. “But if we see someone doing something obscene, we may type in the comments, ‘This is terrible,’ and that sort of comment is seen as a higher-quality engagement by the algorithm.”
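As a toy illustration of the dynamic Jones describes (the weights below are hypothetical; no platform publishes its real scoring), a feed that treats comments as higher-quality engagement than passive views will rank rage-bait above pleasant content:

```python
# Toy engagement-weighted scoring. The weights are invented for illustration;
# real recommendation systems are far more complex and not public.

ENGAGEMENT_WEIGHTS = {
    "view": 0.1,     # a passive scroll-past
    "like": 1.0,
    "share": 3.0,
    "comment": 5.0,  # comments, even angry ones, are treated as high-quality engagement
}

def engagement_score(events: dict[str, int]) -> float:
    """Sum weighted interaction counts for a post."""
    return sum(ENGAGEMENT_WEIGHTS.get(kind, 0.0) * count for kind, count in events.items())

cute_cat = {"view": 10_000, "like": 800, "comment": 20}
rage_bait = {"view": 10_000, "like": 50, "comment": 2_000}

# The rage-bait post wins despite far fewer likes, because its angry
# comment section dominates the weighted score.
print(engagement_score(cute_cat))   # 1000 + 800 + 100 = 1900.0
print(engagement_score(rage_bait))  # 1000 + 50 + 10000 = 11050.0
```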
But for Ariel Hasell, assistant professor of communication and media at the University of Michigan, the negatives are clear.
“One of the things that we see happen is that people are sort of overwhelmed by negativity in these environments,” she said. “The concern is that long term, we won’t be able to get anybody’s attention and get them to pay attention to the things that we hope that they should be paying attention to.”
We contacted the major social media platforms to see what they had to say about rage-bait on their sites. At the time of publication, we had no responses, but we do know it is on their radar.
In October, a Meta executive took to Threads to report “an increase in engagement-bait” on the platform, adding, “we’re working to get it under control.” But if rage-bait continues to pay, it’s likely to continue to appear in our social feeds.
It’s fair to say China dominates in electric vehicle sales. The country is the world’s biggest consumer of electric cars and has dozens of automakers competing in the space. Last year, Chinese companies sold about 9.5 million EVs and plug-in hybrid cars.
But the industry faces mounting trade pressures. The Biden administration imposed a 100% tariff on Chinese EVs, which President-elect Donald Trump is expected to continue. Meanwhile, the European Union recently raised tariffs to as much as 45%, citing concerns that Chinese government subsidies give the country’s companies an unfair advantage.
Subsidies certainly help, but there are other factors giving Chinese EVs an edge. Marketplace’s Meghan McCarty Carino spoke with Marketplace’s China correspondent Jennifer Pak about how those factors could keep Chinese EV makers competitive, even in a more restrictive global market.
The following is an edited transcript of their conversation.
Jennifer Pak: There are multiple factors, and different people will emphasize different ones. One is that China has a complete supply chain. It has cheap labor. There’s fierce competition among all of the Chinese companies here. There’s big demand, a big market and subsidies. The consultancy Automobility says that batteries account for over half of the cost of an EV. So if you can imagine that China controls everything from the processing of the raw materials all the way down to the assembly, it means you can negotiate good prices, especially if you order in massive quantities. If you have more volume, then the price per unit comes down. And then there’s also cheap labor. For example, in the U.S. last year, it was $28 per hour for an autoworker, whereas when we went to central China, to one of the BYD factories in Changsha, we spoke to assembly workers who said they earned about $1,000 a month, which is pretty good for factory work in China. But because of the amount of overtime they have to put in, it works out to at best $3.60 an hour. So that’s quite a difference from $28 an hour.
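A quick back-of-the-envelope check makes the overtime point concrete (the hours are our inference from Pak’s figures, not numbers from the interview):

\[
\frac{\$1{,}000 \text{ per month}}{\$3.60 \text{ per hour}} \approx 278 \text{ hours per month} \approx 64 \text{ hours per week}.
\]

A standard 40-hour week works out to roughly 173 hours a month, so the quoted wage implies about 60% more hours than a standard schedule.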
McCarty Carino: And when we say that Chinese EVs are cheaper, we should specify, I mean, there are several models that are less than $20,000, right?
Pak: Yes, for sure, but a lot of the companies want a higher profit margin, so in fact, they want to sell the higher-end models. And what we’ve been told by experts is that even compared with their peers, a Chinese EV is still cheaper, because the makers put more, and better, features into the car.
McCarty Carino: As it becomes kind of increasingly difficult to sell to Western markets, do you think these competitive advantages hold up down the road?
Pak: Yes, for the simple reason of the battery: that supply chain being locked in China right now is really essential to keeping prices low. Labor cost is less of a factor, because wages have been rising and factories are becoming more automated. But the other thing we were talking about is value for money, right? Chinese EV companies are not really looking to export their cheapest models. Certainly, some countries would want that, but what the companies want is higher profit margins. And China has an advantage in that it doesn’t just manufacture EVs; it manufactures quite a lot of the things that go into EVs. For example, I got into one of the cheaper models, the $18,000 BYD Qin, and it’s super basic, so basic that in the back there are no air vents and no entertainment system. But the replacement model comes automatically with an app that lets you remotely start the air conditioning, and it has some voice control functions. It’s these sort of “braggable” features in an EV that would prompt consumers to buy. And I think as we go further down the road, as more companies start producing EVs, that might be one of the deciding factors, and China is doing it pretty well. So we’re not talking about just cheap; “better value for money” might be a better term.
McCarty Carino: Most Chinese EVs are being sold in China. It is by far the biggest consumer of EVs globally. But what’s the experience like driving an EV in China, how is the infrastructure for charging?
Pak: So right now, especially in all of the megacities, most, if not all, of the taxi fleets have turned into electric vehicles. Ride-share platforms have also basically forced drivers to change to EVs; they’ll put a time limit on gas-powered cars, like how many years you can keep that car before you have to switch over. So in the whole consumer experience, it’s very common to ride in electric vehicles, and oftentimes you wouldn’t even really know, because on the interior, especially in the cheaper models that you get in the taxis, it just looks like a normal car, pretty bare-bones. And you don’t notice the charging facilities much, because they are everywhere. As of the end of 2023, there were 8.6 million chargers, and as of the end of October this year, we’re getting close to 12 million, so they’re building out the infrastructure rapidly.
We can’t talk about Chinese EVs without mentioning the biggest name in the game: BYD, which stands for Build Your Dreams. The company is frequently spoken of in the same breath as Tesla. And earlier this year BYD actually reported higher quarterly revenues than Tesla for the first time.
Tesla still led in net profits and in sales of pure electric vehicles. BYD also sells plug-in hybrid cars, so the two companies’ total unit sales are not a completely apples-to-apples comparison.
But speaking of Tesla, Jennifer reminded us that the company was also a beneficiary of Chinese subsidies thanks to its Shanghai Gigafactory. And Tesla’s easier access to China’s “complete supply chain,” as Jennifer put it, is also an important reason why the company is still at the top.
As we close out the year and look ahead at 2025, we wanted to mark an anniversary of sorts: 20 years ago, the online review site Yelp was launched — the name reportedly a mashup of “help” and “Yellow Pages.”
It started as an email service to send personal recommendations to your friends, then in 2005 morphed into the standalone review site we now know. Yelp wasn’t the first in the online review game, but it has been among the most popular and enduring.
Andrea Rubin has been at the company almost from the beginning. She joined in 2006 as the first community manager in Chicago. She’s now the senior vice president for community nationally.
“I absolutely loved my city and loved local businesses, and I just loved being able to share my thoughts on these local businesses through the Yelp platform,” Rubin said. “I was like, ‘Well, if I love this, I know there’s many other people who are going to love it as well.’”
Yelp has now accrued almost 300 million reviews worldwide. And the site has helped usher in our current star-saturated era, said David Godes, a marketing and economics professor at Johns Hopkins University, noting “almost all of us, almost always check reviews when we’re making a purchase.”
More than 90% of consumers do so before visiting a new business, according to a recent survey from Capital One.
But as online reviews have taken on greater weight, Godes said there’s been a greater incentive to game them.
“I guess you could think of it sort of as white hat and black hat,” Godes said.
The white hat, ethical version would be encouraging consumers who seem satisfied to write reviews or giving them some sort of incentive to do so. The black hat, more sinister version?
“Online networks of reviewers who get paid to write fake reviews,” Godes said. “This has been documented. Using AI to generate fake reviews, for example — lots of that going on.”
Yelp uses a combination of algorithms and user reports to flag suspicious review activity so it won’t get recommended to users or factored into a business’ star rating. Content determined by moderators to be deceptive is removed, and a business page might be labeled with an alert.
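Here is a minimal sketch of how a triage pipeline like the one described could work. Every threshold, field and stage below is a hypothetical illustration; Yelp’s actual system is proprietary:

```python
# Hypothetical review-triage pipeline with the three outcomes the story mentions.
# Thresholds and signals are invented, not Yelp's real logic.

from dataclasses import dataclass

@dataclass
class Review:
    text: str
    stars: int
    classifier_suspicion: float  # 0.0-1.0 score from an assumed automated model
    user_reports: int            # how many users flagged the review

def triage(review: Review, removed_by_moderator: bool = False) -> str:
    if removed_by_moderator:
        return "removed"          # moderators judged the content deceptive
    if review.classifier_suspicion > 0.8 or review.user_reports >= 3:
        return "not_recommended"  # hidden by default; excluded from the star rating
    return "recommended"          # shown and counted toward the rating

def star_rating(reviews: list[Review]) -> float:
    """Average stars over recommended reviews only."""
    counted = [r.stars for r in reviews if triage(r) == "recommended"]
    return sum(counted) / len(counted) if counted else 0.0

r1 = Review("Great tacos, friendly staff.", 5, 0.10, 0)
r2 = Review("Best! Best! Best! Five stars!", 5, 0.95, 0)
print(triage(r1), triage(r2), star_rating([r1, r2]))  # recommended not_recommended 5.0
```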
On average, about 10% of online reviews are fake, according to research from Dina Mayzlin, a marketing professor at the University of Southern California.
“But I just want to point out that’s true for all social media,” Mayzlin said.
The internet is full of trolls, conspiracies and misinformation, she noted. But people mostly find ways to filter through the noise.
“I think the calculation all of us make is that there’s enough, you know, authentic, useful information out there that you still want to listen to it,” she said.
Andrea Rubin, the senior vice president of community at Yelp, said the platform has remained relevant despite increasing competition from sites like Google and Facebook, thanks in part to constant innovation.
“When the iPhone launched, we were one of the first apps on the iPhone, and a really successful app too that millions of people use now,” Rubin said. Recently, Yelp launched an AI assistant that can provide business recommendations.
But Rubin said its biggest strength is still the community of dedicated Yelpers it cultivates.
“They’re just extremely passionate about where they live and want to share it with others,” she said.
Rubin herself still writes reviews. She’s tallied thousands of them over the years.
This episode originally aired Sept. 23, 2024.
You might say online gambling has been on a winning streak since a Supreme Court decision in 2018 cleared the way for states to allow sports betting.
It’s now legal in 30 states, and its influence is hard to miss: Online sportsbook companies like DraftKings and FanDuel are on billboards, in commercials, even on college campuses, many of which have made deals with sports betting companies.
Three out of four college students gambled last year, according to the National Council on Problem Gambling, and online betting sites are increasingly targeting young people.
Yanely Espinal, host of Marketplace’s “Financially Inclined” podcast, recently covered this topic on her show, and she explained to Marketplace’s Meghan McCarty Carino how online betting companies are reeling in younger users.
The following is an edited transcript of their conversation.
Yanely Espinal: So that’s where I think it’s most interesting and a little bit scary, because the physical spaces where people are going to see sports, especially young people with their friends, are where the ads are popping up constantly. They’re around you, they’re always on TV and in commercials. And so, I think it’s getting more and more prevalent, really more than ever before.
Meghan McCarty Carino: The last time you were on the show, we talked about “dark patterns,” these kinds of user designs that are intended to manipulate people into giving up something that they might not have otherwise, like their time or their money. That feels relevant here. How are these gambling apps integrating dark patterns?
Espinal: Yeah, the tools that they’re using are almost identical. Like, you see when you scroll and something pops up right away, it’s like a treat, like a reward. And so, there’s a lot of pop-ups. There’s a lot of rewards coming up, like spin this wheel to see what you get today. Or a notification will come up on your phone when you’re not even in the app; you click that notification, and even though you weren’t even thinking about it, you’re back in the app again. And then there’s all these rewards and things, like if you sign up, you get a $40 credit, or they’ll send it to you once a month to start using it. That’s the kind of stuff that is designed to get you hooked. And so, I think the danger of a lot of that stuff is that if you’re a 45-year-old or 55-year-old person, you’ve had decades of experience resisting things, you know, popping up in front of you on computers and phones, or as you walk around your community and see stores pushing sales. You have some more experience, and you’ve got a different kind of brain to be able to fight that urge and resist. But when you’re a teenager, the brain is a little more easily manipulated. That’s why I think this is especially dangerous when you are 17 or 18 years old.
McCarty Carino: A lot of these sports gambling platforms are pretty aggressive in terms of courting new users. They’re running promotions where they give you your first bet for free. What’s going on with that?
Espinal: Yeah, so actually, that is one of the things I was curious about, because I have a brother who gets into gambling a little, and I notice a lot of times he gets enticed by these new, “free” bets where they say you don’t have to pay, we’re going to give you the credit to bet. And that is usually a way that he gets tricked. And so, I asked the guest that I had on “Financially Inclined” about this. His name is Danny Funt and he’s been reporting on this for a while at the Washington Post. He talked specifically about sports betting, and he said that this is something that is especially important to be cautious about, because it’s really easy to fall prey to these promotions. This is what he said:
Danny Funt: Just think about the fact that they’re giving you that money on the front end because they know they’ll make it back and then some on the back end. It’s kind of like putting out bait on a fishing line and hoping someone’s going to bite into it. Treat that with caution.
Espinal: As he said, it’s like bait. That was such a cool visual for students to understand. You’re just swimming along, you think everything’s all good, and oh look at this little free reward, and then suddenly they’ve got you and now you’re hooked, and you literally cannot stop that itch. That’s kind of where I think the first early signs of the real problem start to begin.
McCarty Carino: You found that these kinds of gambling-like manipulations are actually showing up a lot in mobile games for kids now, right?
Espinal: Yeah. So, Danny mentioned that if you look at some of the kids’ games, you actually see a lot of the same features that you would see in a casino, which is wild, because why are you showing that to a little child, like an eight-year-old playing games on a phone? To me, it’s insane. But he said there are similarities with things like ticking clocks. You know, you have 15 seconds to get to the next level, to claim the prize, or to open the treasure chest, and that ticking clock is like that time pressure. And these are the same types of things that we’re seeing in apps aimed at grown adults. So, I think the technology is really tricky.
One of the ones that was a problem for me, because I’ve actually seen it with my nieces and nephews, is the spinning wheel. They’ll get on these games and it’s like a spinning wheel to see what you should play today, or what color the princess should wear today, and I’m like, what is that spinning wheel? It looks like a roulette machine or a slot machine. And then that just gets them into this mindset of “I can’t wait to see, I don’t know what it’s going to be, and I can’t wait to find out.” And that is actually what makes, I think, a lot of people get addicted to gambling. It’s the thrill of not knowing, the waiting and finding out. And so just starting to expose kids to these manipulative design features that keep kids playing longer, or maybe even begging mom and dad to buy things, that is so problematic.
McCarty Carino: How much are you seeing this kind of stuff affecting young people and teens?
Espinal: I’ve only seen little signs of it myself when I talk to students that have concerned me, but when I started reading about it, and I started seeing how many younger and younger people are reporting that they need help with this, that’s what started to scare me. And in my own family, I’ve seen my younger brother spend a lot of years struggling with this, and until he had a kid and life got real for him, he didn’t realize how serious it was and that he needed to stop. If you’re using diaper money to go and place a bet, bro, you have a problem. He and I had to have a real heart-to-heart. And I think that’s happening with so many young guys. And for me, the fact that it’s starting to not be seen as a problem, because it’s legal, and these guys can say, “I’m not doing anything bad, I’m not committing crimes out here,” you know, that, to me, is the very dangerous shift.
McCarty Carino: What did the experts you talked to say about how to recognize when someone has an actual gambling addiction?
Espinal: This was actually, for me, a big insight. I’m a personal finance nerd; like, I talk about financial education all the time in my life, on social media, at my work. And so for me, I always thought you kind of look at your budget and you’re like, all right, I got $20 extra this week, I’m going to go ahead and place a $20 bet. Like, it’s just a treat for myself with my extra money. But it’s not that. It’s not a budgeting thing. It’s not a mathematical thing. Danny Funt told me it actually has very little to do with money, and a lot of people actually do spend outside of their budget when they get addicted. This is a mental health problem. People may need to actually go to therapy. This is not the type of thing where you just need to sit down and get somebody to help you with your finances, look at your budget and clean it up because you’re spending too much on these apps. You actually have to go to therapy because you have an addiction, and you have to work through what’s happening in your brain and how your brain has changed and try to reclaim it and get it back to how it was before. So that, to me, was really insightful. This is more of a mental health issue than anything else.
McCarty Carino: So if someone thinks they might have a problem, or sees someone else in their life who they think might have a problem with this stuff, what kinds of resources are available? Where should they go?
Espinal: Anybody can reach out to 1-800-GAMBLER, which is a free hotline where anybody can call and just say, “hey, I’m having a problem” or “I have a friend that I noticed is having a problem, I just want to talk to you, and I want to see what I can do and get help.” They also have some state-based hotlines as well. So, you could also just go online and do a web search for the name of your state and “gambling hotline” and see what hotlines exist for your state. And then, if not, of course, just go with the national one. There’s also the National Council on Problem Gambling, and they have a great site where you can go on and look at state-based resources, including hotlines you can call. So, I think the biggest thing there is to notice that you have to talk to somebody. And the hardest thing, I think, is admitting that you have a problem. And that’s what’s good, I think, about teenagers. They’re so honest. They’re on their phones all the time, using all kinds of tech, but they’re honest about when they’re doing it too much, when they’ve been doom-scrolling, when it’s affecting them, when they’re staying up late and not sleeping well. They know they can be honest about that. And so, I think calling these hotlines is one step. And then, of course, if you need to talk to a therapist, that’s okay. There’s nothing wrong with it, because it is a mental health problem and it’s not something to be ashamed of.
This episode originally aired Sept. 11, 2024.
Over the last couple of years, the tech industry has slashed hundreds of thousands of jobs, many of them in recruiting and other departments that work to improve diversity. Companies like Meta and Google, which earlier set ambitious hiring and investment goals, have pulled resources from those efforts.
As a result, many nonprofit groups set up to train and recruit underrepresented workers are struggling to stay afloat. One prominent person in the field is Lisa Mae Brunson, founder of the nonprofit Wonder Women Tech. Marketplace’s Meghan McCarty Carino asked her how things have changed. The following is an edited transcript of their conversation.
Lisa Mae Brunson: We saw the writing on the wall where it seemed like companies were performative, despite the fact that, statistically, companies will perform better when they have diverse teams. Their bottom line will actually increase, they will make more profits. But that wasn’t really what they were looking at. I mean, I think they were looking at the fact that they were going to have to change culture. And you started to see the political climate change too. And I think when that shifted, the focus on increasing representation also shifted.
Meghan McCarty Carino: And we’ve certainly seen a shift in terms of financial investment. Of course, over the last couple years, a lot of tech companies have been shedding jobs. Many of those jobs have been concentrated in [human resources], recruiting, diversity, equity and inclusion programs. Have you felt the effect of that in the work you do?
Brunson: Oh, 1,000%. I mean, we had a pretty significant surge for a hot moment when George Floyd was murdered, and you saw all of these companies get together and focus on Black Lives Matter. But then once that momentum shifted and that political climate shifted, it was almost like night and day, just a complete reversal of all of the progress made over, over time. And then on a federal level, you see affirmative action being impacted, and it’s really harrowing to kind of bear witness to all of the possibilities that this work can do and create impact to seeing where people are just intentionally devaluing it.
McCarty Carino: Right. You mentioned the end of affirmative action in college admissions, which was overturned by the Supreme Court last summer. How do you think that has affected kind of the corporate approach to these efforts and the legal approach, I guess?
Brunson: Well, access to education, the assumption that everybody has access to the same opportunities, is just simply not true. And so when we have these programs in place that allow for people from different backgrounds — and this isn’t just about race, this is economic, these are people that wouldn’t ordinarily have the same opportunities as their counterparts that do have opportunities and have the support systems and the financial systems and educational systems and mentors in place to get them to where they need to be inside of these universities. By being at the university, they then would be able to level up their education and contribute meaningfully to tech or any other industry.
So, the idea that, well, we need to strike down affirmative action because it’s unfairly targeting a certain demographic and not giving access to everyone else — when everybody else is already there — you know, this is a small group of people making a huge impact for these demographics that ordinarily wouldn’t have access. So we’re going to see tech and innovation significantly impacted if we don’t see students being trained and having the opportunity to learn new skills and get into tech and to thrive in tech. So I think it’s devastating, to be honest. And then, the idea that they want to replace it with merit, that you should be able to get into these schools based on merit, it’s just simply not true. You’re assuming that we’re all on a level playing field, and that by our merit and value alone, we’re going to have the same access to tools and resources and education, and history has repeatedly shown that that just simply isn’t true, especially in a systemically marginalized institution.
McCarty Carino: Do you have a sense of how the pullback of these efforts at institutional levels is affecting demographics in tech, in [science, technology, engineering and math]? I mean, many of these companies, as you noted, were very happy to put out reports touting all of the gains they were making when they were investing in it. Is that still the case?
Brunson: Well, this year alone, we saw some major organizations close shop. Notably, two of the largest women-in-tech organizations, Girls in Tech [and] Women Who Code, both organizations that I modeled a lot of my work after. I mean, I looked up to them. And so these are organizations that collectively impact about half a million women in tech globally and nationally. And so when you see that these organizations are shut down, what do you think happens? These are the organizations that women in tech look to to support their professional development choices in an otherwise male-dominated world. It’s true, like, these things exist.
It’s hard to be a woman in tech. Every single day, I have to remind myself to put on my cowgirl boots and be confident and just keep going, because it is really hard to make gains when the system isn’t set up for you to be successful. So we’re already seeing the impact of organizations that cannot sustain and weather the storm. And maybe we’re not going to see it today, but if this continues, we’re definitely going to see even fewer women and people of color in tech, and that will be a very, very sad day for innovation and science at large.
McCarty Carino: What gets left behind when these efforts are not made to increase diversity in an industry where innovation is of paramount importance?
Brunson: Well, if you don’t have innovative, diverse minds at the table to solve for X and be able to look at a product or service or a tech holistically, you’re gonna find that this tech will intentionally and unintentionally harm people. Case in point: So if we are telling AI, this is what the world looks like, these are the stereotypes, these are the archetypes, this is how the world exists, AI will mirror that back to us. So who gets left behind are millions and millions and millions of people.
Raise your hand if you kind of forgot where the word podcast comes from. The now-catchall term for digital audio shows goes back to the Apple iPod. And it’s been almost two decades now since Apple helped bring podcasts mainstream by adding them to iTunes.
“We’re going to list thousands of podcasts and you’ll be able to click on them, download them for free, and subscribe to them right in iTunes,” said then-Apple CEO Steve Jobs at the 2005 Worldwide Developers Conference.
So, what was the business of podcasting like at the beginning, and where might it go from here? Marketplace’s Meghan McCarty Carino asked Nicholas Quah, podcast critic for Vulture and New York Magazine. The following is an edited transcript of their conversation:
Nicholas Quah: My understanding is that there wasn’t really a business model. A lot of the early podcasts were just people making stuff and posting it around. Think of the analogy of the blog, which rose with Google AdSense; those spam ads that you see on the internet were the early form of monetizing blogs and websites. There wasn’t quite anything like that for podcasts at the time. That being said, this was the mid-2000s, and the late-2000s era also gave rise to “Dear Brothers” podcast networks, which tried to sell ads directly into the show. But, of course, the big hurdle is that nobody quite knew what a podcast was. It was a little hard to prove what the audience size was. For advertisers, it’s difficult to get a sense that, if I paid you X amount of money to get X amount of impressions, can I trust your impressions? All of that was very difficult early on.
Meghan McCarty Carino: I want to talk about some of the technical changes that kind of have changed the podcasting game — you know, the ability to stream high quality audio and the ability to have really good analytics of when people are downloading [and] listening. How did that kind of data start to affect podcasting?
Quah: Well, we’re in the middle of still feeling that out. So if you ask anybody who works in the business, or anybody who covers this space, a constant refrain is that metrics in podcasting are still kind of all over the place. And I want to sort of lay the context here in the sense that this was part of the point, originally, of podcasting: the whole notion that it was sort of freestanding also meant that it was, yes, harder to sort of understand and study in terms of how many people actually downloaded and listened, and to what extent they listened to each individual episode. But that created a system in which nobody could govern the destiny or the trajectory of the medium. To answer your question, [let’s] quickly go through what happened between 2014 and the pandemic. There was this huge influx of investors and money and new players. Lots of people were triggered into the idea of, there’s something here. People are talking about podcasting. We’re going to make a ton of it.
McCarty Carino: Might call that the bubble era.
Quah: Yes, it was definitely a bubbly atmosphere. Very frothy atmosphere. This influx of money also drew increased attention from technology companies, and advertisers had this need. They were like, if we’re going to pay X amount of money to have our ads in your show, we want to know how many people actually heard the ad, who is listening to the ad — like, is the ad being efficient? So this is the tension that drove a lot of the podcasting story during that era. A lot of people in the podcasting space looked around and saw what broader digital media companies experienced through the wave, when Facebook was a major player, when Google and social media companies kind of dictated the rise and fall of different media companies. And with podcasting, it just was a couple years delayed through that process. So the major play to think about, in terms of introducing analytics at a very high level, is Spotify. They jumped into the space in 2019. If the majority of all podcast listeners are listening on Spotify, then they can provide a picture of the metrics with more granularity, and hopefully develop a very tight relationship with advertisers within the medium. So that’s a big force, and it played out from 2019 through the pandemic. Today it’s a little different, so we can talk a little bit about that, because at some point over the past couple years, YouTube came into play.
McCarty Carino: Yeah, so YouTube is now the primary place that people access podcasts. How has that changed the landscape?
Quah: So it’s changing the landscape in the sense of introducing what I would say is a very complicated identity crisis in podcasting. So as you alluded to earlier, podcasting went through this really big, bubbly period. There was a lot of money coming in, a lot of venture capital, a lot of speculative money, a lot of shows that cost way more than they were able to make back. And at the beginning of the pandemic, the bubble inflated further, because a lot of media companies and film and television companies couldn’t produce their work. And so a lot of celebrities turned to podcasting, which drew more attention to it. But there were several major economic changes over the course of the pandemic, and that caused the bubble to burst. Interest rates went up, and tech companies and investment firms were now within an incentive structure where they needed to show revenue and profit as opposed to just growth. This was true for the tech scene writ large, and it was true for the podcast scene specifically. So this caused a massive pop, or deflation — lots of podcast shows got canceled, a lot of podcast companies laid off staff. And what seemed to find success … often talky, low-overhead shows, interviews, chat, hosted by somebody who’s already famous … there was a grasping for anything that could provide a sense of stability and growth, any sense of financial excitement, [then] enters YouTube.
McCarty Carino: Right, so video podcasting on YouTube has kind of become the standard now. But why is that? Why are podcasters leaning into YouTube now, even though it’s been around a long time?
Quah: My understanding is that it was a response from a lot of podcast companies going, where is the next frontier of growth? We should put our podcasts on the internet, because a lot of it is just chat. A lot of it is people talking in front of a mic with a video camera rolling in front of them, so, like, slap it up on YouTube and reach a wider audience that’s already baked in through the algorithm that YouTube has. As a result, a lot of new audiences who are being introduced to podcasting for the first time understand it as a video-first medium. They kind of equate the word podcasting with YouTube, with video, which is not necessarily what its identity was over the past 20 years, and that’s why we’re currently in the middle of a space where it’s a little tricky to talk about podcasts in terms of what it actually is, because it’s more of a concept at this point in time.
McCarty Carino: What new innovations are you watching as we move past 20 years of podcasting into hopefully the future?
Quah: Innovation is a really interesting word to use here, because we’re in the middle of a very complicated identity crisis for podcasts, like, is it video, audio? So to the extent that I’m looking at innovation moving forward, I’m looking more in terms of a conceptual innovation of, can we reframe or be more granular and specific with what exactly we’re talking about with this medium? Maybe it means that we just don’t use the word podcast anymore — it’s all pulling back together into this giant blob of content, and podcasting is not part of that blob.
The House Task Force on Artificial Intelligence released a lengthy report this week that doesn’t recommend any specific policies or bills. We’ll also look ahead at what the new year could bring the robotaxi business. But first, the TikTok ban is heading to the Supreme Court. A federal appeals court last week upheld the law that would ban the short-form video app if its Chinese owners don’t sell it by Jan. 19. TikTok asked the court to weigh in, and this week SCOTUS agreed. Lily Jamali, tech correspondent at the BBC, joins Marketplace’s Meghan McCarty Carino to discuss the news.
This story was produced by our colleagues at the BBC.
Voice cloning is becoming easier, faster and more convincing. Artificial intelligence makes it possible to change the age of an actor’s voice, translate words into any language, and replace a voice lost through illness. But it’s also increasingly being used by criminals to impersonate a loved one, extort money or compromise bank accounts. It’s changing how we communicate with each other and how we trust each other.
Respeecher is a Ukrainian AI company at the forefront of cloning speech that’s indistinguishable from a human voice. They’ve worked on Disney’s Obi-Wan Kenobi, cloning James Earl Jones’ voice for Darth Vader. And along with the Massachusetts Institute of Technology, the company won an Emmy Award for turning an actor’s voice into President Nixon in the short film, “In Event of Moon Disaster.”
Alex Serdiuk, Respeecher’s founder and CEO, said the process has gotten much quicker.
“We had to get several hours of Nixon’s voice from the Nixon library. Now we can do the same work with just several minutes of it.”
So how close to real time conversations are we? Serdiuk said we’re already there.
“We have a piece of technology that allows change in real time, meaning that there’s only a very small delay and that would make it quite smooth for a phone conversation.”
And Serdiuk said, there are other fields where the tech can change lives.
“We’ve applied our real time models to things like technology for health care. For example, laryngectomy patients who went through throat cancer and had their voice box removed.”
But Serdiuk added a note of caution, saying the technology needs to be controllable so that bad actors don’t ruin the market.
“Some irresponsible companies let people create anyone’s voice within seconds and with such good quality that it would trick people.”
A case in point is Jennifer DeStefano. Last year she told the U.S. Senate Judiciary Committee about a frightening anonymous phone call she thought was a call for help from her daughter. Panicking, she told a friend, who called 911, where dispatchers warned that there was a scam going around involving voice cloning.
It’s a problem jurisdictions around the world are trying to grapple with, according to Professor Brent Mittelstadt from the Oxford Internet Institute. He specializes in AI ethics, law and policy.
“There are a number of proposals in the U.S. to basically fight robo-calls or scam calls or fraud calls that are using AI-generated voice profiles,” Mittelstadt said. “America is further ahead.”
And Europe is also taking action.
“The AI Act is a new regulatory framework that has recently been passed in the European Union,” he said. “There is political will, whereas in the UK there is hesitancy at this stage in AI’s development.”
The UK does have a national reporting center for fraud and cybercrime, Action Fraud. The body says that in the first three months of this year, it had reports of voice cloning to compromise bank accounts, impersonate someone and request payment for a fictional emergency. There have also been cases of blackmail and impersonating a celebrity to encourage investing in a fraudulent scheme.
Noting these risks, Professor Mittelstadt isn’t impressed with the UK’s approach.
“To not regulate such transformative technology seems like a huge missed opportunity. And I think that’s why we see the EU and the U.S. acting now to actually regulate it,” he said.
About 170 million U.S. users could be TikTokless as soon as Jan. 19. Early this month, a federal appeals court upheld a law that could ban the very popular short-form video app unless its Chinese owners agree to sell it. They have a willing buyer, though, in billionaire Frank McCourt, who has assembled a consortium of investors ready to put down more than $20 billion. He’s the founder of the internet reform initiative Project Liberty. You may also know him as a real estate developer who once owned the Los Angeles Dodgers.
The artificial intelligence boom and its hunger for electricity has brought a surge of interest in nuclear power. Microsoft, for instance, made a deal to restart the Three Mile Island plant in Pennsylvania, while Google and Amazon have invested in companies developing small, modular reactors.
The Joe Biden administration’s Department of Energy aims to triple nuclear energy capacity by 2050, but the sector will need a lot more workers to make that happen.
By some estimates, there’s a gap of more than 200,000 jobs to fill over the next decade.
Marketplace’s Meghan McCarty Carino spoke with Craig Piercy, CEO of the American Nuclear Society, to learn more about the hunt for talent and why many younger workers are fired up about joining the industry.
The following is an edited transcript of their conversation.
Craig Piercy: What I like to say is there are the lab coats and there are the hard hats, and so it starts from nuclear engineers and very highly educated and trained scientific and technical professionals, which make up, I would say, a small but important percentage of the overall workforce, but I’d say the bulk is in the skilled trades — pipefitters, welders, metalworkers, fabricators. And then you have all the indirect layers of employment beyond that, so supervisors, security guards, technicians, other people that are involved in keeping the nonnuclear parts of a nuclear plant running. So it certainly runs the gamut.
Meghan McCarty Carino: Tell me more about how we got to this point where we have this kind of gap to close. I mean, clearly the demand for nuclear is expanding. What about the kind of the supply of those in the workforce? What has that been like over the, you know, the near past?
Piercy: Yeah, if you look at the demographic distribution of the nuclear workforce, it looks like a double-humped camel. You have this prime generation that came into the industry in the 1960s and ’70s, in nuclear’s heyday. And then, of course, after [the 1979 nuclear accident at] Three Mile Island, you had a significant drop-off in the number of people coming into the industry. Then we have this younger generation who are late millennials, early Gen Zs. In our history as a professional society, in modern history, we have more people under the age of 40 than over the age of 60. So it’s this second generation that’s coming in that really is going to have the task of growing the workforce, bringing advanced nuclear plants online in the next 10 years, rebuilding our domestic supply chain that’s atrophied for 30 years. It’s a new generation coming to the fore.
McCarty Carino: What is attracting this younger generation? What can attract more of them?
Piercy: Yeah, so I think we see a much higher level of social consciousness among this younger generation. They’re not just coming in because nuclear pays well. They’re coming in because they want to do something good for the world. They have concerns over climate change, and nuclear really is, in many ways, a very necessary component to any successful plan to reduce carbon emissions. So a lot of them are coming in, to put it bluntly, to save the world.
McCarty Carino: I’m curious if there’s a tension there in terms of the image of nuclear, clearly a clean energy solution, but also an industry that has been dogged by kind of bad publicity, you know, nuclear disasters and environmental concerns. Is that still a factor?
Piercy: Less and less every day. And I think that what’s happening now is people are beginning to realize how safe nuclear actually is, how many megawatts of clean energy it generates. I think that if you look at public opinion polls today, and especially among the young, nuclear is much more favorable than it was just a decade or so ago. So in many ways, we’re leaving that part of our history behind us, and it’s due, in many regards, to the good work of the nuclear workforce that has been there, that have been running these plants efficiently and safely and really making it about the safest form of energy generation that we have in the world today. So that’s changed.
McCarty Carino: Where do you see demand going, you know, in the next decade or maybe two decades?
Piercy: We need to design, license and build a whole set of new advanced reactors. That will occur in the next eight to 10 years. We need to be churning out those reactors on a regular basis through a revitalized supply chain, so that by 2050 we can achieve that goal. It’s doable, but we have to start now, and that workforce has to grow significantly in the next 10 years in order to accommodate it.
McCarty Carino: Do you see the right signals in kind of the labor force in the pipeline?
Piercy: We do. We do see an increase in the workforce, according to DOE statistics. There is a need for more, and I think one of the overarching challenges is the general lack of available workers, especially in the skilled trades — again, pipefitters, electricians, welders. There is competition across industries for those kinds of employees, and in nuclear we call it the “war on talent,” and the industry and the workforce are in some ways limited by that overall lack of availability.
Last month we spoke with Professor Anna Erickson at the Georgia Institute of Technology all about the current state of the nuclear industry and the challenges to growing it.
In addition to the workforce, Erickson also noted there are supply chain constraints. Nuclear reactors generally use enriched uranium as fuel, and most of that comes from other countries, including a not insignificant share from Russia, which has been subject to trade restrictions since the U.S. imposed sanctions after the invasion of Ukraine.
Well, last week, Reuters reported the Department of Energy announced contracts with six U.S. companies to fund the production of domestic uranium fuel for nuclear power.
As we approach President-elect Donald Trump’s inauguration next month, questions are coming up about how his second administration will deal with tech.
A lot has changed in the industry and its relationship to the former president since his first go-round.
Marketplace’s Meghan McCarty Carino spoke with Reed Albergotti, tech editor at the news site Semafor, to help us decode some of the signals. They started with artificial intelligence and the man Trump has named as his AI czar, venture capitalist David Sacks.
The following is an edited transcript of their conversation.
Reed Albergotti: I actually don’t think we really know at this point what the AI czar’s responsibilities are. I thought it was really interesting that the announcement of David Sacks focused on freedom of speech and sort of bias in AI chatbots. That’s been an issue that I think people have brought up on the right. I don’t know if that’s the focus. I mean, I would be surprised, if the Trump administration is really going after this, you know, Manhattan Project for AI and they want to do these huge infrastructure projects, if the AI czar isn’t going to have some say in that. He’s also, I mean, very close with Elon Musk. And Musk, as people may know, is building this massive AI data center in Memphis [Tennessee] that is possibly the biggest supercomputer in the world. He has to be thinking about this stuff in sort of that big geopolitical way.
Meghan McCarty Carino: When it comes to AI nationalism, trade is also something that comes up a lot, especially for hardware like semiconductors. How might the Trump administration use policy around chips as literally bargaining chips in foreign policy?
Albergotti: There are all these countries now, the United Arab Emirates and Saudi Arabia, to name a couple, that really want to bolster their homegrown tech industries by getting into this AI race and building massive data centers. I think they see their competitive advantage here as being able to provide cheap energy; like, the UAE has been building nuclear power plants like nobody’s business. They can, you know, cut red tape. But the Commerce Department has been very hesitant to allow Nvidia and other chipmakers to send their most advanced chips over to these countries, mainly because they’re worried that China has too much access and too much influence there. So these countries have become — and I think you’ll see more of them around the world — sort of these proxy battlefronts in the tech race with China. And it’ll be fascinating to see how the Trump administration plays that. And we just know that, you know, Trump likes to make a deal, and I don’t think he’s going to want to send these things over there without getting something in return. So I’m just really curious to see how he uses those “bargaining chips.”
McCarty Carino: One domain that you write about to keep an eye on is biotech. Can you explain what we might anticipate there?
Albergotti: Right, I mean, biotech is also just another fascinating one because you have maybe [Robert F. Kennedy] Jr. coming in to run the [Food and Drug Administration]. There’s a huge explosion right now in biotech, where there’s so much money going into these attempts to use artificial intelligence to come up with new drugs, and that is going to require some reforms at the FDA. Of course, RFK Jr. has talked about reforms at the FDA, but maybe not the kind that the biotech industry wants to see. I just was actually talking with a biotech investor the other day who said all anybody wants to talk about in these meetings is whether RFK Jr. will basically stop approvals of drugs. So there’s a massive amount of worry. I think it’s totally unpredictable what’s going to happen there, but I think in line with this idea of the Manhattan Project for AI, I would expect Trump to want to push that technology forward. It’s a huge competitive advantage that the U.S. has over China and other countries, and you probably want to see us continue in that area, rather than let it go somewhere else.
McCarty Carino: All right, let’s turn to antitrust. There’s been a lot of antitrust activity in the tech sector during the Biden administration, and crucially, a number of these cases actually started during the first Trump administration. We now know Trump has nominated a sitting member of the Federal Trade Commission, Andrew Ferguson, to lead the FTC. So it seems current Chair Lina Khan will be out. There had been some speculation as to whether she might actually remain during the Trump administration. But where does this appointment lead us into the second Trump administration when it comes to antitrust?
Albergotti: I mean, the way I’m looking at it is, you know, there were questions about whether Trump would maybe, sort of, you know, let some of the litigation die on the vine. As you said, he started this, but that was in a different era. And I think the impetus for those cases really had more to do with the way he felt the tech industry had been going after him, and I don’t think that’s the issue anymore. You see, you know, tech executives really bending over backwards to try to, you know, ingratiate themselves with Trump or work with Trump. But now with these picks, you know, for FTC and antitrust at the [Department of Justice], I think you have to assume that those cases will just continue. I don’t know if it’s necessarily going to pick up, if there will be new cases, or if things will accelerate in any way. If I’m the heads or CEOs of these big tech companies, I wouldn’t count on any of those cases [going away]. I’d just count on having to continue to fight, just as they did under the Biden administration.
McCarty Carino: So another area of policy that’s not sort of explicitly tech-related, but is very important to the tech industry, is immigration, and specifically skilled immigration, H-1B visas. What are you watching on that front?
Albergotti: Everyone in Silicon Valley really supports more skilled immigration. You hear this. It just seems like it comes up in every tech conference, where somebody says, you know, I really think every Ph.D. student should get a green card. I think that’s a very popular point of view. The U.S. tech industry is really bolstered by it. I mean, so many of the leaders in the industry have come from, you know, places like India, China, other countries. They’re just doing all this breakthrough work here. And I think there’s a real feeling that if the U.S. wants to continue to lead in technology, it’s going to have to really be open to, you know, people coming in from all over the world, or continue to be open to that. I just can’t see the Trump administration 2.0 making some of the mistakes that they made in 1.0, where they really shut a lot of that down. If your goal is to race against China in terms of technology and AI and biotech and all these areas, you really want to increase that rather than decrease it.
McCarty Carino: Broadly speaking, how would you sort of characterize the difference in tone and approach that we might expect to technology policy in the Trump administration compared to Biden, and to what extent might it be similar?
Albergotti: Well, obviously on the antitrust front, it’s similar. I think on every other front you’re seeing a much more gung-ho, “let’s build” kind of mentality from the Trump administration. And that has, you know, people in certain circles of Silicon Valley — you know, [like] Marc Andreessen, the venture capitalist who famously came out in support of Trump — I think really excited. Because I think the Biden administration has had a little bit more of a cool relationship with tech, around antitrust, but also, I think, around AI. They’ve had this AI executive order, which, you know, I don’t think was an anti-AI or anti-technology order, but there was a lot of worry there about, you know, what is this technology going to mean for society? And I think from Trump, it’s going to be much more like, well, you know, we need to win. This is a race and we need to win. And I think that’s something that resonates with a lot of people in Silicon Valley right now.
There’s been a lot of discussion about health insurance over the last week. And one practice could be seeing more oversight: the use of artificial intelligence in coverage decisions. Plus, the FDA issues final guidance for makers of AI-enabled medical devices so they can now update their software after approval. And it was a good year for health tech startups — after a not-so-good year in 2023 — especially for those with the letters “AI” attached to their business. Our regular contributor Christina Farr, managing director with Manatt Health, joins Marketplace’s Meghan McCarty Carino to discuss the news.
Here’s another possible use for artificial intelligence: helping low-income consumers qualify for loans. These consumers may not have the paperwork and documentation banks require in loan applications. Could AI and fintech help?
Emily Williams, an assistant professor at Harvard Business School, wrote a paper about this and discussed it at a recent Federal Reserve Bank of Boston conference. Her research focused on whether AI and fintech can help level the playing field for consumers who are left out of the traditional banking system: the unbanked. During a break in the action, Williams told me new technologies could make a difference.
“We think about fintech sort of bringing down the cost of financial services, increasing competition, expanding the pie,” she explained.
Williams says fintech apps that help you track your money and budget can also keep you safe from things like overdraft fees. Banks can use technology to broaden the pool of people they loan money to, using AI to decide whether to make a loan to someone who doesn’t have the typical data they look for – like your payment history on a mortgage or credit card. AI can help banks sift through a different trail of data — mounds of information on things like whether you paid your bills and rent on time.
“AI can be used in that, and is used in that, to sort of try to understand more deeply characteristics about people by looking at their patterns, the patterns in their data, essentially,” Williams said.
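To make that idea concrete, here is a toy sketch of the kind of model a lender might train on alternative payment data. The features, weights and sample numbers below are invented for illustration; they are not drawn from Williams’ research or any real underwriting system.

```python
# Toy sketch of underwriting on "alternative" payment data, in the
# spirit of what Williams describes. All features, sample numbers and
# labels below are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [on-time rent payments (last 24 mo.),
#            on-time utility bills (last 24 mo.),
#            overdrafts (last 12 mo.)]
X = np.array([
    [24, 24, 0],
    [20, 22, 1],
    [10, 12, 6],
    [ 4,  6, 9],
])
y = np.array([1, 1, 0, 0])  # 1 = repaid a past loan, 0 = defaulted

model = LogisticRegression().fit(X, y)

# Protected attributes like gender or race must never be features, and
# lenders also need to audit for proxies that leak them, a risk
# Williams flags below.
applicant = np.array([[22, 23, 1]])
print(model.predict_proba(applicant)[0, 1])  # estimated repayment probability
```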
But Williams says AI and fintech aren’t the end-all-be-all for people who are shut out of traditional banking. AI can be discriminatory. And lenders need to watch out for AI bias. For example…
“We might be able to understand a person’s gender from just looking at their bank account transactions,” Williams told me.
Williams says AI might make a lending decision based on gender. She says that’s illegal, but AI doesn’t know that. Also, you have to be tech savvy to use fintech. If you don’t have a smartphone and good internet access? Forget it. And Christine Parlour, a financial economist at Berkeley, says fintech probably wouldn’t help undocumented immigrants who are trying to open a bank account.
“So, if the reason why undocumented [immigrants] aren’t in banks is because they don’t have documentation, then fintechs are not going to help,” she said.
With all of these caveats and what-ifs, Boston Fed President Susan Collins was cautious when I asked her about fintech and AI helping bring more people into the banking world.
“I think there are reasons for skepticism and concern and there are also reasons for optimism,” she said.
Cautious optimism. Collins says economists have a lot more work to do before they can deliver a verdict on fintech, AI and the consumers who are still shut out of the traditional banking system.
As consumers, we leave trails of personal data all over the internet. And collecting and selling it is big business. Sensitive information, like our Social Security numbers, incomes and credit scores, is often sold by so-called data brokers to the highest bidder. Sometimes that’s a bank, sometimes it’s a scammer.
This month, the federal Consumer Financial Protection Bureau proposed a rule that would crack down on the practice. It would bar companies from selling sensitive data or hold them to the same legal standards that apply to credit reporting agencies. Rohit Chopra, director of the CFPB, explained the proposal in more detail to Marketplace’s Meghan McCarty Carino.
The following is an edited transcript of their conversation.
Rohit Chopra: About 60, 70 years ago, we actually had the same problem of private detectives and other investigators researching all of us and creating files for sale about us. And people were really nervous about what could happen if this data was misused or if it had inaccurate information. That’s why when it comes to your credit report you have the right to look at it, you have the right to dispute incorrect information, and the companies creating them must make sure that the information in them is accurate.
Meghan McCarty Carino: So data brokers would be restricted in buying and selling the most sensitive data, or they would basically be considered like a credit bureau?
Chopra: Yes, so what we’re trying to make sure is that information like your income and your Social Security number is really only shared for legitimate purposes, like accessing a loan offer. It can’t be used simply to sell our data to scammers who might be targeting older adults and others in financial distress, and it certainly shouldn’t be sold to state actors who are looking to collect information about U.S. citizens for nefarious purposes.
McCarty Carino: So what are the implications of a data broker falling under the standards of a credit bureau?
Chopra: Well, they’ll just have to come clean and not sell data that is not allowed under the law. They’ll have to make sure that if they’re maintaining information about your debt payments or other financial information, that they’re ensuring that that’s accurate, that they’re allowing you to dispute improper information, and they’re getting some meaningful consent for sharing that information as well, rather than burying permissions in fine print. The key here is that we don’t want these data brokers evading long-standing law on the books by pretending they’re something different.
McCarty Carino: This proposed rule has met some resistance from some law enforcement groups. They’ve argued this could inhibit their ability to access certain information to solve criminal cases or in counterterrorism efforts. What’s your response to those concerns?
Chopra: Well, our proposed rule is very clear in that it would preserve existing pathways created by federal law to access this information for legitimate law enforcement, counterterrorism and counterintelligence. But we actually are hearing a lot from law enforcement about the need for this rule. In some circumstances, we’ve seen how judges and police officers are targeted by those who may be defendants in criminal proceedings, finding their information through data brokers and going to their homes. States like West Virginia and New Jersey have passed their own laws to make sure that data brokers are not being used as a tool to dox and harass law enforcement officials. We also know that state actors overseas are eager to find out who is working in intelligence agencies. We want to make sure that sensitive data about personnel protecting our national security are not being sold to the highest bidder overseas. We’ve also seen how members of the military, how their sensitive information is being trafficked by data brokers in ways that could cause our country some real harm. So we’ve really been focused on making sure that our work is advancing our national security and protecting people against crime.
McCarty Carino: You know, there are a number of state-level data privacy laws, including in California. How might this intersect with those?
Chopra: A lot of those laws are pretty interesting. Some of them exclude companies that are covered under the Fair Credit Reporting Act or other financial privacy laws. So one of the things that we’ve noticed is that some companies actually want to say that they are credit reporting companies or financial companies to be exempted from those state laws because they perceive that some of the financial privacy laws are weaker. And what we’re trying to do is also stop some of that arbitrage by making sure that there are some meaningful safeguards under federal law as well.
McCarty Carino: From the consumer side, what would this look like? Would consumers appreciate any change?
Chopra: Well, what’s really important is that consumers right now don’t even know all of the data and sensitive information being collected about them. We’re essentially living in a world of digital surveillance where companies can match our contact lists with our social media activity, with our location and our income, and even sometimes knowledge about our job and job performance. That data really is something that people don’t expect is being sold and auctioned off. I think even if people don’t know about this underworld of data brokers, they know that their data shouldn’t be weaponized against them. So I think that they may not even need to see much change in order to benefit from it. Now, of course, if they find out that data brokers are maintaining incorrect information about them, they’ll have some important legal rights under the law to challenge it.
McCarty Carino: This rule was proposed about a month and a half before the administration is going to change. The CFPB is taking comments until March. What does that change in administration mean for the potential future of this rule?
Chopra: Well, one of the things that has been really gratifying about the issue of data brokers and advancing data privacy, it really knows no political stripes. It’s really about people and consumers and our national security up against well-funded companies. And we’ve seen in state after state, regardless of what party controls it, real, meaningful efforts to rein in some of these corporate surveillance practices and to better protect consumer privacy. So it is one of the rare areas of agreement that we need much more action, and the CFPB’s data broker rule will be one key part of that.
The push to integrate artificial intelligence — like large language models — in the workplace is hitting almost every industry these days. And that includes policing.
Reporter James O’Donnell with MIT Technology Review got an inside look at the ways in which many departments are experimenting with the new technology when he visited the annual International Association of Chiefs of Police conference back in October.
The event, which bills itself as the largest gathering for police leaders in the U.S., is not generally very open to the media. But O’Donnell was able to attend for a day to see how artificial intelligence was being discussed. He said police are using or thinking about AI in a wide range of applications.
Marketplace’s Meghan McCarty Carino spoke with O’Donnell to learn more about those applications, starting with AI-powered virtual reality training. The following is an edited transcript of their conversation:
James O’Donnell: Rather than having instructors and actors come in and guide police departments through different scenarios, which could range from de-escalating a situation out on the street all the way to active shooter scenarios, the pitch is that VR can do that more realistically and in a more engaging way for police officers. And the one that I tried was a little bit lacking. You know, the company said they had some connectivity issues on the expo floor with their Wi-Fi and internet and everything. And the point of the VR training was to go through a de-escalation process with this person, to talk them down, to figure out what the issue was. And to be honest, I found it a little bit unconvincing. There was some lag involved, and I can see some of the benefits of it. But, you know, I think it’s still — like many pieces of VR and AR technology — I think it’s still sort of in the beginning [phases].
Meghan McCarty Carino: Another use you write about is applying AI to analyze the many, and ever-increasing, streams of data collected by police departments. Tell us about how this might work.
O’Donnell: Yeah, so this is a trend that is going on in not just police departments, but government more broadly, [like] the Department of Defense. And the idea is basically that every piece of hardware or every sensor that you can have out in the world is producing lots of data. So for police departments, that could be cameras, license plate readers, gunshot detectors; all of these sensors that police departments are increasingly deploying collect lots and lots of data. But up until now, it’s been really hard to sort through all of that data to find insights or make much use of it. And there’s a lot of companies who are pitching AI as sort of the solution to that. And so there are cases you can think of where that would be really, really useful and important, right? I give them that.
There are some cases where, if you imagine, you know, you have a kidnapping, or you have a really time-sensitive situation where someone’s life is in danger, and you’re trying to piece together, you know, security cameras from around the city with other sensors and get a real-time picture of what’s happening, you could imagine that the AI is really useful for finding the connections between those different data sources. On the other hand, you have a lot of privacy experts and people from organizations like the ACLU who consider this to be basically one more step toward a surveillance state, one that over-polices certain areas and maybe under-polices others.
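As a concrete, if very simplified, illustration of the data-fusion idea O’Donnell describes, here is a hypothetical sketch that clusters events from different sensor feeds when they fall close together in time. The sensor names and sample events are invented; a real system would also join on location, resolve identities and far more.

```python
# Hypothetical sketch of cross-sensor correlation: cluster events from
# different feeds that happen close together in time. Sensor names and
# sample events are invented for illustration.
from datetime import datetime, timedelta

events = [
    ("gunshot_detector", datetime(2024, 10, 1, 21, 14, 5), "sector 4"),
    ("license_plate_reader", datetime(2024, 10, 1, 21, 15, 2), "ABC-1234"),
    ("camera", datetime(2024, 10, 1, 23, 40, 0), "loading dock"),
]

def correlate(events, window=timedelta(minutes=5)):
    """Group time-sorted events, then keep only groups that mix more
    than one sensor type, since those suggest a cross-feed connection."""
    events = sorted(events, key=lambda e: e[1])
    groups, current = [], [events[0]]
    for event in events[1:]:
        if event[1] - current[-1][1] <= window:
            current.append(event)
        else:
            groups.append(current)
            current = [event]
    groups.append(current)
    return [g for g in groups if len({e[0] for e in g}) > 1]

for group in correlate(events):
    print(group)  # the gunshot and the plate read cluster together
```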
McCarty Carino: You also write about the application of AI in more banal ways, just doing boring, grunt work, filling out paperwork. I mean, what does that look like in policing?
O’Donnell: For police officers, one of the big complaints that departments have is that their officers spend far too much time writing up police reports. So one company I write about in the story is called Axon. They basically take body camera footage, they transcribe the audio from that body camera footage, and then they use that information in the audio to create a first draft of a police report. And in theory, it would save a lot of time. You know, officers do a lot of this report writing at the end of the day. Maybe their memory is foggy, maybe they’re fatigued. So there is an argument that using AI to save some of that time would be beneficial.
On the other hand, AI is fallible. It makes mistakes. And, you know, leaning on AI in writing these police reports would introduce AI into a document that actually plays a huge role in the criminal justice system. And to be fair, the companies building these police report generators have taken a lot of steps to make sure that police officers actually have to read and edit the report. I’ve spoken to some people who say it’s a problem if you take the body camera footage and you use it to generate a report, because it’s basically like showing a police officer footage of what happened before they’re supposed to write this report. So it introduces this kind of variability and, in some ways, reduces what was supposed to be two distinct sources of information to one source of information.
McCarty Carino: In general, you write that the way U.S. police are adopting AI is “inherently chaotic.” Tell me more about what you mean by that.
O’Donnell: For the Department of Defense or the CIA or other national security agencies, they are going to be bound by one agency-wide policy for how they’re going to deploy AI, what they can use it for, what they can’t use it for. Police departments are just not organized that way. There are, you know, some federal rules that they, of course, have to abide by, but there’s far from a universal federal policy on how police departments can and cannot use AI. Instead, how those departments use AI is more likely going to be determined department by department. So I think what’s lacking, from a privacy perspective and an ethical perspective, is one overarching federal rule for how these departments can and cannot use artificial intelligence.
McCarty Carino: And given the change in administration that we are about to see, the probability of an overarching federal rule seems lower?
O’Donnell: Yeah, and I think [in] Donald Trump’s first term in office, as well as in his campaign for president, he has spoken pretty openly about rolling back some of the police reform regulations that have come about since 2020, including measures put forth by the Biden administration, some of which have not been enshrined in law. He’s talked about giving more protection to police officers. He’s talked about increasing the use of practices like stop and frisk. And I think it’s likely that police departments that want to use AI in a really bold way in the next few years will be even more emboldened by his administration.
One of the industries that is adopting artificial intelligence tools the fastest is the legal field. A recent report from the legal tech company Clio showed almost 80% of legal professionals are using AI in some way in their practice, up from about 30% last year. Joshua Lenon, a lawyer in residence at Clio, told Marketplace’s Meghan McCarty Carino the profession is particularly ripe for tech disruption.
The CEO of Intel resigned this week, likely with a push from the company’s board. We’ll take a look at the landscape for U.S. chip manufacturing on today’s “Tech Bytes: Week in Review.” Plus, Amazon is trying to make good on its net-zero carbon emissions pledge with a pilot to capture carbon at one of its data centers. But first, OpenAI announced this week it’s partnering with a military technology startup called Anduril. It’s just the latest AI company to get into the defense business. Marketplace’s Meghan McCarty Carino spoke with Jewel Burks Solomon, managing partner at Collab Capital, about all these stories and more.
We talk a lot about how the internet is filling up with AI content. And, of course, that includes the sort of content guaranteed to generate clicks and dollars: the adult variety. Platforms like Instagram have seen an explosion in sexy AI-generated influencers, and the people running those accounts sometimes steal content from real creators and mash it up with AI. The practice is called “AI pimping,” Jason Koebler, co-founder of 404 Media, told Marketplace’s Meghan McCarty Carino.
The Quayside development on Toronto’s waterfront was supposed to be the shining example of a tech-optimized smart city, an urban environment reinvented “from the internet up,” as it was described by Sidewalk Labs. That was a sister company to Google, which won a government bid in 2017 to modernize the 12 acres of former dockland. There would be robotaxis, heated sidewalks, adaptive traffic lights and lots of data collection. But in 2020, Sidewalk Toronto suddenly shut down before a single ribbon had been cut, turning a shining example into a cautionary tale. It’s all chronicled in a new book from Globe and Mail reporter Josh O’Kane called “Sideways: The City Google Couldn’t Buy.” Marketplace’s Meghan McCarty Carino spoke with O’Kane about what went on behind the scenes of the Sidewalk Toronto project.
Ever since ChatGPT hit the scene a couple years ago, there’s been a nagging sense of dread for many: what will this mean for jobs? Well, new research from Imperial College London finds a shift already underway. Between July 2021 and July 2023, the report found freelance job postings for writing and coding decreased by about 20%. There was also a slowdown in freelance jobs for visual art. And it’s happening more quickly than past technological disruptions, Ozge Demirci, one of the coauthors of that report and a business professor at Imperial College London, told Marketplace’s Meghan McCarty Carino.
The internet has been overrun by AI content. The weirdly glowing and inadvertently surreal airbrushed images, the generic and oddly formal sentences peppered with factual errors and distracting phrases like “as of my last knowledge update.” So much of social media content these days has the unmistakable stench of “AI slop,” hastily spit out by image generators or chatbots to get a few likes. And while the phenomenon might seem harmless or sometimes even charming, the AI slop takeover of the internet is crowding out real information and human perspectives. Marketplace’s Meghan McCarty Carino spoke with Rebecca Jennings, a senior correspondent at Vox, about how AI slop is transforming social media.
OpenAI’s chatbot ChatGPT turns two years old tomorrow. So how has it changed the tech industry and what’s next for the company? We’ll get into it in today’s “Marketplace Tech Bytes: Week in Review.” Plus, we look into rumblings that improvements in AI have slowed, raising questions about whether we’ve hit a wall when it comes to training more advanced AI systems. But first, the Commerce Department finalized nearly $7.9 billion in subsidies for Intel. It’s the largest award yet under the CHIPS and Science Act and a potentially game-changing sum for the company right now. Marketplace’s Meghan McCarty Carino is joined by Natasha Mascarenhas, reporter at The Information, to break down these stories.
A survey by nonprofit organization Common Sense Media shows 42% of children in the U.S. have a phone by the age of 10. And numbers like this are causing concern for educators, including a group of headteachers in Greystones, a town in Ireland. That group was so worried by the increased levels of anxiety among children using smartphones and social media that last year they asked parents to sign a voluntary pledge to delay buying cellphones for their children until at least the age of 11. The BBC’s Leanna Byrne checks in to see what effect it had.
A 58-year-old Mike Tyson may have come up short in his ballyhooed comeback match against YouTuber-turned-boxer Jake Paul. But Netflix emerged as a big winner, boasting 108 million viewers for the Nov. 15 spectacle, the most streamed sporting event in history. Unfortunately for viewers, Netflix’s livestream of the fight suffered buffering and lag problems. It wasn’t a great start for the platform, which will be livestreaming some much-anticipated NFL games on Christmas Day. But the streaming service has been leaning into more and more live content. Marketplace’s Meghan McCarty Carino spoke with Lucas Shaw, who writes the Screentime newsletter at Bloomberg, about the event and what it portends for Netflix’s future live endeavors.
It’s an interesting time for many in the U.S. Some people feel great about President-elect Donald Trump’s return to the White House, while others don’t. This week, people from both sides are sitting down together for Thanksgiving dinner. And while it’s one thing to ignore a family member’s social media posts or online rants, that can be a bit more challenging face-to-face, sometimes leading to awkward conversations about beliefs, truth and misinformation. Marketplace’s Kimberly Adams spoke to Whitney Phillips, assistant professor of digital platforms and ethics at the University of Oregon, about how to navigate awkward conversations this holiday season.
The push for electric vehicle adoption got a bit more uncertain with the election of Donald Trump. While reports of “EV death” have been greatly exaggerated, sales growth has slowed, and carmakers have pulled back on aggressive targets. Now, it seems Marketplace’s Meghan McCarty Carino may be part of that trend. She recently spoke with Jack Stewart, a former Marketplace reporter and the man who convinced her to buy an EV, about her decision to trade in her EV for a gas-powered car.
The social media app Bluesky is flying high this week as users disenchanted with Elon Musk’s X flee that platform post-election. That’s just one of the topics for today’s “Marketplace Tech Bytes: Week in Review.” We’ll also get into Big Tech’s big-money lobbying effort to slow down a federal bill aimed at protecting kids online. But first, the latest in the potential Google breakup. This week, the Department of Justice proposed forcing the company to sell its Chrome browser. It’s one possible resolution to an antitrust case in which a federal judge has already ruled Google’s search business a monopoly. Marketplace’s Meghan McCarty Carino is joined by Maria Curi, tech policy reporter at Axios, to break down these stories.
Some of the biggest health insurers in the country are turning to an algorithm to help determine if a medical claim will be approved. That’s according to a recent investigation led by ProPublica into EviCore, a contractor used to outsource prior approval requests for much of the insurance industry. The investigation found that EviCore tweaks an algorithm to increase the likelihood those claims will be denied, which means lower costs for insurers but more patients losing access to potentially lifesaving care. Marketplace’s Meghan McCarty Carino spoke to ProPublica’s T. Christian Miller, who co-reported this story.
Remember the old mantra from the early days of social media, “pics or it didn’t happen”? For more than a century, photographic evidence was about as close to a physical representation of the real world as we’ve had. But, thanks to new AI-powered photo editing tools – like the one now available on Google’s newest Pixel phones – anyone can create convincing pics of things that didn’t happen. Marketplace’s Meghan McCarty Carino spoke to Sarah Jeong, a features editor at The Verge, who recently wrote about these cutting-edge tools. Jeong says no one’s ready for the impact of this technology.
President-elect Donald Trump has tapped Elon Musk to co-lead a new Department of Government Efficiency. And the CEO of Tesla and SpaceX, who is also the owner of X, does have a record of wringing efficiencies out of his businesses. But the move raises many questions, like should someone whose companies benefit from federal dollars have a hand in making budget decisions? SpaceX alone has secured about $15.4 billion in federal contracts over the last decade, helping it become the dominant player in the industry. So, how has SpaceX rocketed ahead of the competition, and can anyone catch up? Ashlee Vance, the author of “When the Heavens Went on Sale” and a writer for Bloomberg, pointed to reusable rockets, an innovation that was on spectacular display when SpaceX tested its Starship system last month.
Advancements in artificial intelligence have made it possible for the technology to mimic humans in ever-more convincing ways. But even far less sophisticated tools than today’s chatbots have been shown in research to trick our brains, in a sense, into projecting human thought processes and emotions onto these systems. It’s a cognitive failure that can leave people open to deception and manipulation, which makes the increasingly human-like technologies proliferating in our daily lives particularly dangerous, Rick Claypool, research director at the nonprofit Public Citizen, a consumer advocacy organization, told Marketplace’s Meghan McCarty Carino.
It’s been almost eight months since Reddit went public, and since then, the platform known as the front page of the internet has been going gangbusters. We’ll get into why on this week’s “Marketplace Tech Bytes: Week in Review.” Plus, crypto surges to new highs in the wake of the election. But first up, Silicon Valley is going to Washington. This week, President-elect Donald Trump tapped his favorite tech CEO, Elon Musk, as the co-lead of a new Department of Government Efficiency along with Vivek Ramaswamy, the former biotech entrepreneur and GOP presidential candidate.
Marketplace’s Meghan McCarty Carino spoke to Anita Ramaswamy, financial analysis columnist at The Information, for her take on these stories.
Apple is reportedly facing a fine from the European Union, and it could be a hefty one. It’s the first Big Tech company to be slapped with a financial penalty under the EU’s Digital Markets Act, which went into effect last year. The law, aimed at spurring competition in digital markets, requires Big Tech companies designated as “gatekeepers” to change policies that lock consumers into their products. Like, say, the walled garden of the Apple App Store. EU regulators ruled that Apple violated the DMA by failing to fully support app developers “steering” consumers to alternative marketplaces. It’s a story Matt Binder, a senior tech reporter for Mashable, has been following.
Gary Marcus is worried about AI. The professor emeritus at NYU doesn’t count himself a luddite or techno-pessimist. But Marcus has become one of the loudest voices of caution when it comes to AI. He’s chronicled some of the funniest and most disturbing errors made by current tools like ChatGPT, calling out the many costs – both human and environmental – of an industry that continues to accrete money and power. In his new book “Taming Silicon Valley: How We Can Ensure That AI Works for Us,” Marcus lays out his vision for a responsible path forward. Marketplace’s Meghan McCarty Carino spoke to Marcus about that path and how it may be further out of reach, though not impossible, given the results of this year’s presidential election.
Do the free speech protections guaranteed by the First Amendment apply to online discourse? What if that online discourse spreads misinformation? Marketplace’s Kimberly Adams speaks with Nadine Farid Johnson, policy director at the Knight First Amendment Institute at Columbia University, about how we should understand the right to free speech in the internet era.
This fall, California became the latest state to adopt a law banning cellphone use in schools. The Golden State joins more than a dozen that have imposed restrictions as alarm grows about the potentially harmful effects of smartphone use on students’ learning and mental health. Support for these policies spans the political spectrum. But one important constituency sometimes has a hard time adjusting: parents. Kathryn Jezer-Morton, a columnist for The Cut, wrote about the challenges of disconnecting.
The president-elect is also a former president who’s been a fixture in national politics for the last decade. But predicting what Donald Trump might have in mind for the tech industry in his second term based on that history, well, that’s a tough call. Trump has, at times, had strong words for some tech titans, cozied up to others, and pushed for — and then against — a TikTok ban. His first administration initiated several antitrust cases against tech companies, but Trump recently expressed skepticism about the potential breakup of Google after a federal judge ruled that its search business was a monopoly. Marketplace’s Meghan McCarty Carino spoke with Paresh Dave, a senior writer at Wired, about the future of tech antitrust policy and more in the second Trump term.
It has been almost two years since ChatGPT burst onto the scene and made teachers’ lives a whole lot harder. A report from Common Sense Media this fall showed that 70% of teenage students used artificial intelligence for school or fun. But a majority of those students’ parents and teachers were unaware. Leila Wheless, a seventh- and eighth-grade English teacher in North Carolina, asked her students how they use the technology.
The thing about the artificial intelligence boom is that the tech needs a lot of electricity. One estimate from Goldman Sachs suggests that largely because of AI, data centers will use 160% more electricity by 2030. It’s got Big Tech fired up about an option that’s never really been the cool kid of the clean energy class: nuclear power. Microsoft made a deal to restart the Three Mile Island plant, while Google and Amazon are investing in new types of reactors. It’s stirring something of a “nuclear revival” for the U.S. after decades of stagnation. Marketplace’s Meghan McCarty Carino spoke with Anna Erickson, professor of nuclear and radiological engineering at the Georgia Institute of Technology, about the push to revive the nuclear energy sector.
It’s Election Day and even though the campaign may be over, the battle over misinformation is not. Marketplace’s Kimberly Adams spoke with Derek Tisler, counsel at the Brennan Center for Justice, about some of the misleading online narratives voters should expect to see and how to deal with them. This conversation is part of “Marketplace Tech’s” limited series “Decoding Democracy.” Watch the full episode on our YouTube channel.
In case you forgot, we’ve got Election Day tomorrow. But it was also a big year for elections in the rest of the world. About half of the global population is voting in national elections in 2024, and in many countries people have encountered internet shutdowns, blocked websites or manipulated content online, according to a recent report from the nonprofit Freedom House. Allie Funk leads Freedom House’s technology and democracy initiative, and she told Marketplace’s Meghan McCarty Carino this is the 14th consecutive year the report has documented a decline in human rights online.
An AI transcription tool used in health care has been found to frequently hallucinate things no one ever said, including making up medications. That’s just one of the topics for today’s Marketplace Tech Bytes: Week in Review. Plus, we’ll get into what we learned from this week’s Big Tech earnings, including Google saying that it’s using AI to generate about 25% of its code.
But first, it’s been a busy week for Apple. The company launched some of its new Apple Intelligence features and released its new lineup of Mac computers along with some souped-up chips.
Marketplace’s Meghan McCarty Carino spoke with Joanna Stern, senior personal technology columnist at The Wall Street Journal, to get her take on this week’s tech news.
For almost a century, people have been going to the movies to get freaked out by fictional depictions of artificial intelligence. Back in 1968, there was HAL 9000 in “2001: A Space Odyssey.” The 1980s gave us Skynet in “The Terminator.” And these days, movies about rogue bots are more popular than ever. Films like 2022’s “M3GAN” and this summer’s “AfrAId” seem to be channeling our worst fears about the intelligent technology increasingly embedded in our daily lives. Marketplace’s Meghan McCarty Carino spoke to Shira Ovide, a tech reporter and author of The Washington Post’s “Tech Friend” newsletter, about why AI is such a compelling horror villain.
We are in the midst of the first major U.S. election of the generative AI era. The people who want to win your vote have easy access to tools that can create images, video or audio of real people doing or saying things they never did — and slap on weird appendages or other make-believe effects along with targeted slogans. But the potential to deceive has led about two dozen states to enact some form of regulation requiring political ads that use artificial intelligence to include a label. So how do voters respond when they know a campaign has used AI? That’s what Scott Brennen and his team at New York University’s Center on Technology Policy set out to answer in a recent study.
We know from various studies that young people are, unsurprisingly, using generative AI tools like chatbots and image generators, sometimes for homework, sometimes for fun and sometimes for malicious purposes. A recent survey from the Center for Democracy and Technology found that artificial intelligence is being used among high school students to create nonconsensual, illicit imagery — in other words, sexually explicit deepfakes. Marketplace’s Meghan McCarty Carino spoke with Elizabeth Laird, director of equity in civic technology at CDT, to learn more.
With tech now able to clone voices in minutes, many people in creative industries are worried about what this could mean for their livelihoods. The BBC’s Ben Derico looks at what this AI revolution has meant for voice actors who claim to have had their likeness copied by an AI voice-generating company.
The next big thing in Silicon Valley might just be an old-fashioned concept: humanoid robots that can mimic our physical abilities. Developments in AI are triggering renewed interest in the robotics industry. And Anthropic’s latest Claude model can control a computer on its own, which could have implications for the future of work. But first, is the “best bromance in tech” starting to sour? That’s how OpenAI CEO Sam Altman once described his company’s partnership with Microsoft, but recently the alliance has shown signs of tension. Marketplace’s Meghan McCarty Carino spoke with Natasha Mascarenhas for her take on all this for our weekly segment “Marketplace Tech Bytes: Week in Review.”
There’s a movement to make it possible to repair our gadgets ourselves instead of having to send them back to the company that makes them or, you know, just get a new one. The “right to repair” movement in consumer electronics has made real gains in recent years. Several states, like California, New York and Oregon, have passed legislation requiring it. And it looks like Apple’s newest iPhone — the 16 — has made strides in that department. Marketplace’s Meghan McCarty Carino spoke with Kyle Wiens, CEO of the online repair guide iFixit, about the iPhone 16’s improved repairability.
A content creator who goes by the username Mrs. Frazzled recently noticed something strange happening on her Instagram account. With more than 370,000 followers, her videos sometimes score millions of views. Except, it seems, when she talks about the election. Mrs. Frazzled sensed she was being shadowbanned by Instagram, so Geoffrey Fowler, a tech columnist at The Washington Post, investigated. Marketplace’s Meghan McCarty Carino spoke to Fowler about what he found.
Trying to vote when you’re disabled can present a series of obstacles, but technology can help, even if integrating technology into our election system has its risks. Back in 2020, several states changed their voting rules to allow more mail-in, early and remote voting options, which increased turnout among disabled voters. Marketplace’s Kimberly Adams recently spoke with Michelle Bishop, voter access and engagement manager at the National Disability Rights Network, about finding the right balance of tech integration in our elections to empower more disabled voters in the U.S.
Artificial intelligence, according to its boosters, could help us unlock solutions to some of the world’s toughest problems, like climate change. But in the meantime, it’s become a key tool for fossil fuel companies like Exxon Mobil and Chevron to maximize the extraction of emissions-producing oil and gas. Marketplace’s Meghan McCarty Carino spoke to freelance reporter Karen Hao, who recently wrote in The Atlantic about how Microsoft has actively courted the fossil fuel industry.
Web crawlers scan and catalog sites all over the internet and, in the AI era, use that data to train chatbots. We’ll talk about why The New York Times is trying to put a stop to crawlers from the AI company Perplexity. We’ll also discuss the record share of venture capital dollars flowing into the AI sector and the difficulty of attracting investment for startups without those two magic letters. Plus, the ups and downs of SpaceX, owned by Elon Musk. Marketplace’s Meghan McCarty Carino spoke with Jewel Burks Solomon, managing partner at Collab Capital, for her take on all this for our weekly segment “Marketplace Tech Bytes: Week in Review.”
Vice President Kamala Harris sat for her first interview on Fox News Wednesday as the Democratic presidential candidate continued her media blitz ahead of the November election. And while it’s generating plenty of headlines, these kinds of big interviews just don’t hold the power they used to, according to Nick Quah, a podcast and culture critic at New York Magazine who’s been following the candidates’ interviews on the alternative media circuit. Marketplace’s Meghan McCarty Carino spoke with Quah about how Kamala Harris’ appearances on internet-native shows like the podcast “Call Her Daddy” and Donald Trump’s appearances on various “bro-centric” shows like Logan Paul’s YouTube channel represent a notable media shift compared with previous elections.
After years of hype, Tesla finally debuted a robotaxi called the Cybercab last week. CEO Elon Musk has been making and breaking promises about Tesla’s autonomous vehicle for years. So, did the debut of the Cybercab finally deliver? Andrew Hawkins, transportation editor for The Verge, tells Marketplace’s Meghan McCarty Carino what the Cybercab unveiling means for Musk and for Tesla.
Online misinformation about Hurricanes Helene and Milton, and about the relief response from the Federal Emergency Management Agency, has surged in recent weeks, including false narratives of aid being withheld from victims for their political beliefs and aid being stolen by undocumented immigrants. Marketplace’s Kimberly Adams spoke with Ethan Porter, professor of media, public affairs and political science at George Washington University, about why there’s been so much misinformation about these natural disasters and FEMA’s relief response.
TikTok has a lot going on legally these days. Last week, it saw a fresh round of lawsuits alleging the short-form video app harms children. And then there’s the federal law that could ban the app if ByteDance, its China-based owner, doesn’t divest by January. TikTok has sued to block that law. Oral arguments in TikTok Inc. v. Merrick Garland were heard in the U.S. Court of Appeals for the District of Columbia Circuit in September. The company is joined by eight TikTok creators as plaintiffs in the case, and one of them is Talia Cadet. She has nearly 140,000 followers on TikTok, where she produces lifestyle videos focused on her love of books and travel. She talked with Marketplace’s Meghan McCarty Carino about the case.
TikTok is facing yet another legal challenge. This week, attorneys general from 13 states plus Washington, D.C., sued the short-form video app, alleging that it harms children. We’ll be digging into the latest lawsuits on today’s Marketplace “Tech Bytes: Week in Review,” our roundup of the week’s top tech headlines. Like the so-called Godfather of AI who is sharing the Nobel Prize in physics. Plus, the U.S. government is weighing what to do about Google after its search business was ruled a monopoly earlier this year. Marketplace’s Meghan McCarty Carino is joined by Maria Curi, tech policy reporter at Axios, to break down these stories.
The new kid on the block of social media, Meta’s Threads, hit 200 million active users in August. When it launched in the summer of 2023 as a rival to the platform formerly known as Twitter, Meta said the app would eventually be integrated into the so-called fediverse. This “federated universe” is the most prominent example of a decentralized social network in which users can join any affiliated platform and interact with content from all the others. Recently, Meta took some steps to integrate Threads into this ecosystem, and Will Oremus, tech news analysis writer for The Washington Post, has been following the developments.
A lot of personal data – stuff like your home address, phone number, marital status and more – is out there on the internet. Anyone can buy it from sites like Whitepages, PeopleFinders or Intelius, which aggregate data from public records and social media. You can contact each of these “people search” sites and request they take down your information, but it’s a bit of a game of whack-a-mole. Naturally, a whole industry of data-removal services has sprung up. For a price, they promise to do the dirty work for you. But do they deliver? Marketplace’s Meghan McCarty Carino spoke to Yael Grauer, a researcher at Consumer Reports, who recently looked into the efficacy of the data-removal industry.
Until about a decade ago, independent cybersecurity researchers in the U.S. weren’t allowed to examine voting machines for potential vulnerabilities. But that ban was essentially lifted in 2015. Two years later, DEF CON — one of the largest hacker conventions — decided to invite hackers, cybersecurity researchers and election officials to find those flaws during its annual Voting Village event. Marketplace’s Kimberly Adams spoke with Catherine Terranova, executive director of Voting Village, about how they balance the well-intentioned work of finding vulnerabilities before bad actors do and the problem of misinformation around the security of voting machines.
It’s been more than 15 years since the digital currency bitcoin was launched, going from a fringe phenomenon in the dark corners of the internet to an asset traded on Wall Street. But the identity of bitcoin’s creator, known by the pseudonym Satoshi Nakamoto, has remained a mystery wrapped in a cryptographic enigma. Now, investigative filmmaker Cullen Hoback may have cracked the case. His last HBO series “Into the Storm” uncovered the origins of the QAnon conspiracy theory. In his new documentary, “Money Electric: The Bitcoin Mystery,” Hoback sets out to answer the elusive question: Who is Satoshi Nakamoto? To prevent any spoilers, we’ll keep his conclusions secret.
Investors are once again pouring money into biotechnology startups. But this time, it feels different from the heyday of 2021. We’ll be digging into the latest data for today’s Marketplace “Tech Bytes: Week in Review,” our roundup of the week’s top headlines, including some you might have missed.
We’ll also talk about a private equity deal with the country’s biggest digital pharmacy platform. But first, OpenAI closes a historic funding round. The maker of ChatGPT raised another $6.6 billion — valuing the company at $157 billion, double its worth earlier this year.
Our regular contributor Christina Farr, managing director with Manatt Health, joins Marketplace’s Meghan McCarty Carino to discuss the news.
All those fancy artificial intelligence systems need a lot of data centers to run, and those data centers need a lot of energy. One estimate from the Electric Power Research Institute suggests that data center electricity consumption in the U.S. will more than double by 2030, making up about 9% of the country’s electricity use. But the AI sector is coming up against the big energy-hungry tech innovation of yesteryear: crypto mining. Marketplace’s Meghan McCarty Carino spoke with Reuters reporter Laila Kearney about the scramble to power up in both industries.
Today we’re talking about voting tech and the push in some areas to move away from machines and go back to hand counting ballots. A legal battle is brewing in Georgia over a new rule requiring ballots be hand counted on election night to ensure the tally matches electronic records. Arizona has added a similar requirement. The issue has become particularly mired in misinformation in recent years, with some election deniers questioning the security of the tech used in our elections. While some may believe hand counts are more accurate, the number of jurisdictions across the country relying on them on election night has been steadily dropping. Marketplace’s Meghan McCarty Carino spoke with Pam Smith, president and CEO of the nonpartisan organization Verified Voting, about why the practice of counting ballots by hand is waning.
This week, we’re talking about how teenagers are using artificial intelligence tools like chatbots and image generators, often without the knowledge of their parents and teachers, according to a recent report from the nonprofit Common Sense Media. Monday we heard about that research from Jim Steyer, founder and CEO of the group. And now we want to home in on a specific piece of what he said: “If you look back at the advent of social media, about 20 years ago, we pretty much blew the regulatory side of that, but also the educating teachers and parents part of that. And we left kids on their own.” So we called up Nathan Sanders, an affiliate of the Berkman Klein Center for Internet and Society at Harvard, who has written about the overlapping risks of AI and social media.
As soon as ChatGPT burst onto the scene in late 2022, it became clear that artificial intelligence was going to send massive shockwaves through education. And, as with any new technology, young people were likely to adopt it more quickly. Well, now we have some data about that phenomenon. A new report from the nonprofit Common Sense Media shows seven in 10 teenagers ages 13 to 18 are using generative AI in some way. And Jim Steyer, founder and CEO of Common Sense Media, told Marketplace’s Meghan McCarty Carino it’s not all about cheating.