95 episodes • Length: 20 min • Monthly
Welcome to MIT Technology Review Narrated, the home for the very best of our journalism in audio. Each week we will share one of our most ambitious stories, from print and online, narrated for us by real voice actors. Expect big themes, thought-provoking topics, and sharp analysis, all backed by our trusted reporting.
The podcast MIT Technology Review Narrated is created by MIT Technology Review. The podcast and artwork on this page are embedded using the public podcast feed (RSS).
The growing business of surf pools wants to bring the ocean experience inland, making surfing more accessible to communities far from the coasts.
These pools can use—and lose—millions upon millions of gallons of water every year. With many planned for areas facing water scarcity, who bears the cost of building the perfect wave?
This story was written by senior features and investigations reporter Eileen Guo and narrated by Noa.
Open-world video games are inhabited by vast crowds of computer-controlled characters. These animated people—called NPCs, for “nonplayer characters”—populate the bars, city streets, or spaceports of games. They make virtual worlds feel lived in and full. Often—but not always—you can talk to them.
After a while, however, the repetitive chitchat (or threats) of a passing stranger forces you to bump up against the truth: This is just a game.
It may not always be like that. Just as it’s upending other industries, generative AI is opening the door to entirely new kinds of in-game interactions that are open-ended, creative, and unexpected. Future AI-powered NPCs that don’t rely on a script could make games—and other worlds—deeply immersive.
This story was written by executive editor Niall Firth and narrated by Noa (newsoveraudio.com).
At any given time, the US organ transplant waiting list is about 100,000 people long. Martine Rothblatt sees a day when an unlimited supply of transplantable organs—and 3D-printed ones—will be readily available, saving countless lives.
This story was written by senior biomedicine editor Antonio Regalado and narrated by Noa (newsoveraudio.com).
Design thinking suggests that we are all creatives, and we can solve any problem if we empathize hard enough. The methodology was supposed to democratize design, but it may have done the opposite. Where did it go wrong?
This story was written by Rebecca Ackermann and narrated by Noa (newsoveraudio.com).
Tokelau is a group of three isolated atolls strung out across the Pacific Ocean between New Zealand (of which it’s an official territory) and Hawaii. Its population hovers around 1,400 people. Reaching it requires a boat ride from Samoa that can take over 24 hours. To say that Tokelau is remote is an understatement: it was the last place on Earth to be connected to the telephone… in 1997.
Despite its size, Tokelau has become an internet giant. Until recently, its .tk domain had more users than any other country’s: a staggering 25 million. Yet only one website with a .tk domain is actually from Tokelau. Nearly all the others are used by spammers, phishers, and cybercriminals.
This is the story of how Tokelau unwittingly became the global capital of cybercrime—and its fight to fix its reputation.
This story was written by Jacob Judah and narrated by Noa (newsoveraudio.com).
An AI startup created a hyperrealistic deepfake of MIT Technology Review’s senior AI reporter that was so believable, even she thought it was really her at first. This technology is impressive, to be sure. But it raises big questions about a world where we increasingly can’t tell what’s real and what’s fake.
This story was written by senior AI reporter Melissa Heikkilä and narrated by Noa (newsoveraudio.com).
Though “user” seems to describe a relationship that is deeply transactional, many of the technological relationships in which a person would be considered a user are actually quite personal. That being the case, is the term “user” still relevant?
This story was written by Taylor Majewski and narrated by Noa.
We've known of Europa’s existence for more than four centuries, but for most of that time, Jupiter’s fourth-largest moon was just a pinprick of light in our telescopes—a bright and curious companion to the solar system’s resident giant. Over the last few decades, however, as astronomers have scrutinized it through telescopes and six spacecraft have flown nearby, a new picture has come into focus. Europa is nothing like our moon.
Observations suggest that its heart is a ball of metal and rock, surrounded by a vast saltwater ocean that contains more than twice as much water as is found on Earth.
In the depths of its ocean, or perhaps crowded in subsurface lakes or below icy surface vents, Jupiter’s big, bright moon could host life.
MIT Technology Review articles are narrated by Noa (News Over Audio), an app offering you professionally-read articles from the world’s best publications. To stay ‘truly’ informed on Science & Technology, Business & Investing, Current Affairs & Politics, and much more, download the Noa app or visit newsoveraudio.com.
Despite all their runaway success, nobody knows exactly how—or why—large language models work. And that’s a problem. Figuring it out is one of the biggest scientific puzzles of our time and a crucial step towards controlling more powerful future models.
This story was written by senior AI editor Will Douglas Heaven and narrated by Noa (News Over Audio), an app offering professionally-read articles from the world’s best publications.
Moore’s Law holds that the number of transistors on an integrated circuit doubles every two years or so. In essence, it means that chipmakers are always trying to shrink the transistors on a microchip in order to pack more of them in. That cadence has been increasingly hard to maintain now that transistor dimensions are measured in just a few nanometers. In recent years ASML’s machines have kept Moore’s Law from sputtering out. Today, they are the only ones in the world capable of producing circuitry at the density needed to keep chipmakers roughly on track.
Martin Van den Brink is the outgoing co-president and CTO of ASML. He joined the Dutch company in 1984 when it was founded and has played a major role in guiding it to its current dominant position. He explains to MIT Technology Review how the company overtook its competition and how it can stay ahead.
MIT Technology Review articles are narrated by Noa (News Over Audio), an app offering you professionally-read articles from the world’s best publications. To stay ‘truly’ informed on Science & Technology, Business & Investing, Current Affairs & Politics, and much more, download the Noa app or visit newsoveraudio.com.
AI consciousness isn’t just a devilishly tricky intellectual puzzle; it’s a morally weighty problem with potentially dire consequences. Fail to identify a conscious AI, and you might unintentionally subjugate, or even torture, a being whose interests ought to matter. Mistake an unconscious AI for a conscious one, and you risk compromising human safety and happiness for the sake of an unthinking, unfeeling hunk of silicon and code.
Philosophers, cognitive scientists, and engineers are grappling with what it would take for AI to achieve consciousness—and whether it's even possible.
This story was written by Grace Huckins and narrated by Noa.
Three years ago this week we launched this podcast on a mission to show the world how AI touches our everyday lives. It's been our great honor and privilege to make it through three seasons, a global pandemic, an unbelievable nineteen (19!!) award nominations, and a whole lot of tests and demos.
Goodbyes are very hard to say, so instead we'll leave you with some of the show's highlights and an invitation to follow us as we continue our journey with a new show called SHIFT. Sign up for updates at shiftshow.ai and subscribe wherever you get your podcasts.
Credits:
This series was created by Jennifer Strong and Emma Cillekens with the support of Gideon Lichfield and Michael Reilly. Its producers have been Emma Cillekens and Anthony Green. The editors have included Gideon Lichfield, Michael Reilly and Mat Honan with support from Karen Hao and Tate Ryan-Mosley. You can thank Garret Lang and Jacob Gorski for the original music and excellent sound design. The weekly art was from Stephanie Arnett with album art from Eric Mongeon.
Thanks for listening.
Hidden away in our voices are signals that may hold clues to how we’re doing, what we’re feeling and even what’s going on with our physical health. Now, AI systems tasked with analyzing these signals are moving into healthcare.
We meet:
Lina Lakoczky-Torres, student at Menlo College
Angela Schmiede, Vice President of Menlo College.
Grace Chang, CEO of Kintsugi
David Liu, CEO of Sonde Health
Liam Kaufman, former CEO of Winterlight Labs.
Margaret Mitchell, Chief Ethics Scientist of Hugging Face
Bjoern Schuller, professor of artificial intelligence at Imperial College London
Credits:
This episode was reported by Hilke Schellmann, produced by Jennifer Strong, Emma Cillekens and Anthony Green, edited by Mat Honan and mixed by Garret Lang with original music by Garret Lang and Jacob Gorski. Artwork by Stephanie Arnett. Special thanks to the Knight Science folks at MIT for their support with this reporting.
AI is used in farming in some ways you might not expect, like for tracking the health of crops—from space. We travel from test farms to labs in the second installment of our series on agriculture, AI, and satellites.
We Meet:
Joseph Liefer, senior product manager of autonomy at John Deere
Julian Sanchez, director of emerging technology at John Deere
Shely Aranov, CEO of InnerPlant
Rod Kumimoto, CSO of InnerPlant
Credits:
This episode was reported and produced by Jennifer Strong, Emma Cillekens and Anthony Green. It was edited by Mat Honan, and mixed by Garret Lang, with original music by Garret Lang and Jacob Gorski. Artwork by Stephanie Arnett.
In this special episode we bring you a live taping between the "Godfather of AI" Geoffrey Hinton and MIT Technology Review's Senior Editor for AI Will Douglas Heaven. This conversation was recorded at EmTech Digital, our signature AI event, in the MIT Media Lab.
Credits:
This episode was recorded in front of a live audience in Cambridge, Massachusetts with special thanks to Will Douglas Heaven, Amy Lammers and Brian Bryson. It was produced by Jennifer Strong and Emma Cillekens, directed by Erin Underwood, and edited by Mat Honan.
This episode, we get an insider's look at the ongoing chip war from the person who wrote the book on it, Chris Miller, professor at Tufts University and the author of Chip War. Join us for a live conversation from the MIT Media Lab at Tech Review’s Future Compute conference.
Credits:
This episode was recorded and produced by Jennifer Strong with help from Emma Cillekens and Anthony Green. We’re edited by Mat Honan and mixed by Garret Lang, with original music from Garret Lang and Jacob Gorski. Artwork from Stephanie Arnett.
I Was There When is an oral history project that’s part of the In Machines We Trust podcast. It features stories of how breakthroughs and watershed moments in artificial intelligence and computing happened, as told by the people who witnessed them.
In this episode we meet Cognitive Scientist Gary Marcus.
CREDITS:
This project was produced by Jennifer Strong, Emma Cillekens, and Anthony Green. It was edited by Mat Honan and mixed by Garret Lang with original music by Jacob Gorski. The art is from Eric Mongeon and Stephanie Arnett. It was recorded at the TED Conference in Vancouver, Canada.
LINKS:
https://blog.ted.com/the-astounding-new-era-of-ai-notes-on-session-2-of-ted2023/
https://www.technologyreview.com/topic/artificial-intelligence/
https://podcasts.apple.com/us/podcast/humans-vs-machines-with-gary-marcus/id1532110146
The term ‘smart city’ paints a picture of a tech-enabled oasis—powered by sensors of all kinds. But we’re starting to recognize what all these tools might mean for privacy. In this episode, we meet a researcher studying how this is being applied in Iran and visit one of the nation’s top smart cities, to learn how its efforts there have evolved over time.
We Meet:
University of Oxford and Article19 Human Rights Researcher Mahsa Alimardani
City of Las Vegas Chief Innovation Officer Michael Sherwood
City of Hope Director of Campus Support Operations Mark Reed
Sounds:
How will artificial intelligence change the cities we live in? - BBC Ideas via YouTube
https://www.youtube.com/watch?v=UXxyCBimRyM
‘Smart’ cities promise economic and environmental benefits to the developing world - CBC News via YouTube
https://www.youtube.com/watch?v=u08A7yiTmu4
Singapore is building a city in China - CNBC via YouTube
https://www.youtube.com/watch?v=iP11XeIV1ZA
Global Smart Cities - The China Current via YouTube
https://www.youtube.com/watch?v=-qmiqHWD6Uc
Footage appears to show Iranian riot police confronting students at university in Tehran - The Guardian via YouTube
https://www.youtube.com/watch?v=BgQshPJohmg
China: facial recognition and state control - The Economist via YouTube
https://www.youtube.com/watch?v=lH2gMNrUuEY
Facial recognition: Concerns over China's widespread surveillance
https://www.youtube.com/watch?v=CT6KEy_QXvM
Credits:
This episode was reported and produced by Jennifer Strong and Anthony Green with help from Emma Cillekens. It was edited by Mat Honan, and mixed by Garret Lang, with original music by Garret Lang and Jacob Gorski. Artwork by Stephanie Arnett.
The best definitions of AI are vague, largely lack consensus, and represent a huge challenge for lawmakers and legal scholars looking to regulate it. But back-to-back breakthroughs and the rapid adoption of generative AI tools are making it feel a lot more real to everybody else. We examine whether that alone might be enough to push conversations about ethics further into focus.
We Meet:
MIT Technology Review Senior AI Reporter Melissa Heikkilä
Mozilla President Mark Surman
IBM Chief Privacy Officer Christina Montgomery
United Nations AI Advisor Neil Sahota
Sounds:
Advances in artificial intelligence raise new ethics concerns - PBS NewsHour via YouTube https://youtu.be/l5nTlHeqYOQ
He loves artificial intelligence. Hear why he is issuing a warning about ChatGPT - CNN via YouTube https://youtu.be/THJysHMi81c
Credits:
This episode was reported and produced by Jennifer Strong and Anthony Green with help from Emma Cillekens and Melissa Heikkilä. It was edited by Mat Honan, and mixed by Garret Lang, with original music by Garret Lang and Jacob Gorski. Artwork by Stephanie Arnett.
This episode, we meet people building next-generation tools for creativity who are thinking about how these AI models should be trained and deployed in order to be both useful and fair to artists.
We hear from:
Artist Holly Herndon
Adobe CTO Digital Media Ely Greenfield
Soundful CEO Diaa El All
Links:
https://www.ted.com/talks/holly_herndon_what_if_you_could_sing_in_your_favorite_musician_s_voice
https://www.technologyreview.com/2023/02/03/1067786/ai-models-spit-out-photos-of-real-people-and-copyrighted-images/
https://www.technologyreview.com/2022/12/16/1065247/artists-can-now-opt-out-of-the-next-version-of-stable-diffusion/
Credits: This episode was produced by Anthony Green with help from Emma Cillekens. It was edited by Jennifer Strong and Mat Honan, mixed by Garret Lang, with original music from Jacob Gorski.
We're so excited this episode has been selected as a New York Festivals finalist! Please enjoy this encore edition and we'll see you back next week!
Digital twins capture the physical look and expressions of real humans. Increasingly, these replicas are showing up in the entertainment industry and beyond, giving rise to some interesting opportunities as well as thorny questions.
We speak to:
Greg Cross, CEO and co-founder of Soul Machines
Credits: This episode was produced by Anthony Green with help from Emma Cillekens. It was edited by Jennifer Strong and Mat Honan, mixed by Garret Lang, with original music from Jacob Gorski.
I Was There When is an oral history project that’s part of the In Machines We Trust podcast. It features stories of how breakthroughs and watershed moments in artificial intelligence and computing happened, as told by the people who witnessed them.
In this episode we meet Marc Raibert, the founder and chairman of Boston Dynamics.
CREDITS: This project was produced by Jennifer Strong, Anthony Green and Emma Cillekens. It was edited by Mat Honan and mixed by Garret Lang, with original music by Jacob Gorski. Artwork by Eric Mongeon.
VIDEOS:
Spot
https://www.youtube.com/watch?v=7atZfX85nd4&t=17s
https://www.youtube.com/watch?v=6VUQHrWhoqg
Atlas
https://www.youtube.com/watch?v=-e1_QhJ1EhQ&t=5s
Big Dog
https://www.youtube.com/watch?v=xqMVg5ixhd0
One Legged Robot (Hopping robot)
Computers are ranking the way people look—and the results are influencing the things we do, the posts we see, and the way we think.
Ideas about what constitutes “beauty” are complex, subjective, and by no means limited to physical appearances. Elusive though it is, everyone wants more of it. That means big business and, increasingly, people harnessing algorithms to create their ideal selves in the digital and, sometimes, physical worlds. In this episode, we explore the popularity of beauty filters and sit down with someone who’s convinced his software will show you just how to nip and tuck your way to a better life.
Reporting links:
https://www.technologyreview.com/2023/03/13/1069649/hyper-realistic-beauty-filters-bold-glamour/
https://www.technologyreview.com/2022/08/19/1057133/fight-for-instagram-face/
We meet:
Shafee Hassan, Qoves Studio founder
Lauren Rhue, Assistant Professor of Information Systems at the Robert H. Smith School of Business
Credits:
This episode was reported by Tate Ryan-Mosley, and produced by Jennifer Strong, Emma Cillekens, Karen Hao and Anthony Green. We’re edited by Michael Reilly and Bobbie Johnson.
How we train fighter pilots—both real and artificial—is undergoing a series of rapid changes. In order for these systems to be useful we need to trust them, but figuring out just how, when, and why remains a massive challenge. In this second of a two-part series, we look at how AI is being used to teach human pilots to perform some of the most dangerous and difficult maneuvers in aerial combat, and we experience synthetic dogfighting firsthand.
We Meet:
Tom "T-Mac" Mackie, Director of Red6
Chris Cotting, Director of Research, US Air Force Test Pilot School
Bill Gray, Chief Test Pilot, US Air Force Test Pilot School
Daniel Robinson, Founder & CEO, Red6
Credits:
This episode was reported and produced by Jennifer Strong, Anthony Green and Emma Cillekens. It was edited by Mat Honan, and mixed by Garret Lang, with original music by Garret Lang and Jacob Gorski. Art by Stephanie Arnett.
A boy wrote about his suicide attempt. He didn’t realize his school's software was watching.
While schools commonly use AI to sift through students' digital lives and flag keywords that may be considered concerning, critics ask at what cost to privacy.
We Meet:
Jeff Patterson, CEO of Gaggle
Mark Keierleber, investigative reporter at The 74
Teeth Logsdon-Wallace, student
Elizabeth Laird, director of Equity in Civic Technology at Center for Democracy & Technology
Sounds From:
"Your Heart is a Muscle the Size of Your Fist" from the band Ramshackle Glory's 2011 album Live the Dream.
"Spying or protecting students? CBS46 Investigates school surveillance software" from CBS46 in Atlanta, GA on February 14, 2022.
"Student Surveillance Software: Schools know what your child is doing online. Do you?" from WSPA7 News in Greenville, SC on May 5, 2021.
"Spying or protecting students? CBS46 Investigates school surveillance software" from News 5 in Cleveland, OH on February 5, 2020.
Credits:
This episode was produced by Anthony Green and Emma Cillekens with reporting from Mark Keierleber. It was edited by Jennifer Strong and Michael Reilly, and mixed by Garret Lang with original music from Jacob Gorski. Art by Stephanie Arnett.
https://www.theguardian.com/education/2021/oct/12/school-surveillance-dragnet-suicide-attempt-healing
https://www.the74million.org/contributor/mark-keierleber/
You can support our journalism by going to http://www.techreview.com/subscribe.
Late last year the US Department of Defense successfully ran a dozen flight tests in which AI agents piloted an experimental fighter jet. We explore the program that got it there and what this milestone means.
We Meet:
Chase Kohler, Edwards Air Force Base
Sue Halpern, The New Yorker
Paul Scharre, Center for a New American Security
Additional sources and sound:
DARPA's AlphaDogfight Trials: https://www.youtube.com/watch?v=NzdhIA2S35w
The Rise of A.I. Fighter Pilots: Artificial intelligence is being taught to fly warplanes: https://www.newyorker.com/magazine/2022/01/24/the-rise-of-ai-fighter-pilots
https://www.edwards.af.mil/News/Article/3297083/dod-artificial-intelligence-agents-successfully-pilot-fighter-jet/
Credits:
This episode was reported and produced by Jennifer Strong and Anthony Green with help from Emma Cillekens. It was edited by Jennifer Strong and Mat Honan, and mixed by Garret Lang with original music from Garret Lang and Jacob Gorski. Artwork by Stephanie Arnett.
A look at how artificial intelligence is starting to be used to support the elderly.
We Meet:
Dor Skuler, Intuition Robotics
Greg Olsen, New York State Office for the Aging
Marie Defrancesco
Credits:
This episode was reported and produced by Jennifer Strong and Anthony Green with help from Emma Cillekens. We’re edited by Mat Honan and mixed by Garret Lang, with original music from Garret Lang, and Jacob Gorski. Art by Stephanie Arnett.
We asked ChatGPT to summarize this episode and this is what it wrote:
"In the episode, the host discussed the increasing use of AI language models like ChatGPT in newsrooms. The host explained that ChatGPT, a large language model developed by OpenAI, is being used to automate tasks such as data analysis and writing, freeing up time for journalists to focus on more in-depth reporting. The host interviewed experts in the field who highlighted the benefits of using AI technology in newsrooms, including increased efficiency and consistency, as well as the potential to improve the accuracy and speed of reporting. However, the experts also discussed the challenges that come with using AI in journalism, such as issues around bias and accountability, and the need for human oversight to ensure ethical and accurate reporting. The episode concluded by exploring the future of AI in journalism, and how it will continue to shape the way news is produced and consumed."
The episode was written by people.
Links:
https://www.technologyreview.com/2023/01/31/1067436/could-chatgpt-do-my-job/
We meet:
Mat Honan, MIT Technology Review
Jonah Peretti, Buzzfeed
Sayash Kapoor, Princeton University
Francesco Marconi, Applied XL
Credits:
This episode was produced by Anthony Green and Emma Cillekens, and edited by Jennifer Strong and Mat Honan. It was mixed by Garret Lang with original music from Garret Lang and Jacob Gorski. Artwork by Stephanie Arnett.
We're joined on stage by two startup founders working to bring automation to smaller-scale farms. A live conversation from Lisbon, Portugal, taped at one of the world's largest tech conferences, Web Summit.
We meet:
Praveen Penmetsa, CEO of Monarch Tractor
Barry Lunn, CEO of Provizio AI
Credits:
This episode was recorded and produced by Jennifer Strong with help from Emma Cillekens and Anthony Green. We’re edited by Mat Honan and mixed by Garret Lang, with original music from Garret Lang and Jacob Gorski. Artwork from Stephanie Arnett.
A Roomba recorded a woman on the toilet. How did screenshots end up on Facebook?
This episode, we go behind the scenes of an investigation that uncovered how sensitive photos taken by an AI-powered vacuum were leaked and landed on the internet.
Reporting:
A Roomba recorded a woman on the toilet. How did screenshots end up on Facebook?
Roomba testers feel misled after intimate images ended up on Facebook
We Meet:
Eileen Guo, MIT Technology Review
Albert Fox Cahn, Surveillance Technology Oversight Project
Credits:
This episode was reported by Eileen Guo and produced by Emma Cillekens and Anthony Green. It was hosted by Jennifer Strong and edited by Amanda Silverman and Mat Honan. This show is mixed by Garret Lang with original music from Garret Lang and Jacob Gorski. Artwork by Stephanie Arnett.
Our reporting about farming, AI, and satellites turned into three episodes of this podcast, which you can find linked in the show notes. As part of that reporting, we also toured a satellite factory in downtown San Francisco run by Planet Labs. This week we bring you along for one of our audio postcards to hear how these satellites are built and tested.
We meet:
Jacob Stern, director of test engineering at Planet Labs
Credits:
This episode was produced by Jennifer Strong with help from Anthony Green and Emma Cillekens. It was edited by Mat Honan and mixed by Garret Lang, with original music from Garret Lang and Jacob Gorski. Art direction by Stephanie Arnett.
A panel of luminaries joins us live on stage at MIT Technology Review’s flagship conference, EmTech MIT, to discuss the path forward for AI research.
We Meet:
Will Douglas Heaven, Senior Editor of AI at MIT Technology Review
Ashley Llorens, Vice President & Managing Director at Microsoft Research
Raia Hadsell, Senior Director of Research and Robotics at DeepMind
Yann LeCun, NYU Professor, VP & Chief AI Scientist at Meta
Credits:
This episode was recorded in front of a live audience at the MIT Media Lab in Cambridge, Massachusetts with special thanks to Will Douglas Heaven, Amy Lammers and Brian Bryson. It was produced by Jennifer Strong, Emma Cillekens and Anthony Green, directed by Erin Underwood, edited by Mat Honan and mixed by Garret Lang.
From chess to Jeopardy to e-sports, AI is increasingly beating humans at their own games. But that was never the ultimate goal. In this episode we dig into the symbiotic relationship between games and AI. We meet the big players in the space, and we take a trip to an arcade.
We Meet:
Julian Togelius
Will Douglas-Heaven
David Silver
David Fahri
We Talked To:
Julian Togelius
Will Douglas-Heaven
Karen Hao
David Silver
David Fahri
Natasha Regan
Sounds From:
Jeopardy 2011-02: The IBM Challenge
Garry Kasparov VS Deep Blue 1997 6th game (Kasparov Resigns)
https://www.youtube.com/watch?v=EsMk1Nbcs-s
Attack Like AlphaZero: The Power of the King
https://www.youtube.com/watch?v=c0JK5Fa3AqI
Miracle Perfect Anti Mage 16/0 - Dota 2 Pro Gameplay
https://www.youtube.com/watch?v=59KnNcU9iKc
DOTA 2 - ALL GAME-WINNING Moments in The International History (TI1-TI9)
https://www.youtube.com/watch?v=RJcNbuASl-Y
Credits:
This episode was reported by Jennifer Strong and Will Douglas Heaven and produced by Anthony Green, Emma Cillekens and Karen Hao. We’re edited by Niall Firth, Michael Reilly and Mat Honan. Our mix engineer is Garret Lang. Sound design and music by Jacob Gorski.
AI is used in agriculture to precisely target weeds and optimize irrigation practices. It’s also being used in ways you might not expect, like for tracking the health of cow pastures—from space. We travel from test farms to orchards in the first of a two-part series on agriculture, AI, and satellites.
We Meet:
Greg Brickner, Veterinarian and grazing specialist at Organic Valley
Geoff Klein, irrigation manager of Bullseye Farms
John Bourne, SVP Ceres Imaging
Deanna Kovar, VP of Production and Precision Ag Production Systems at John Deere
Jahmy Hindman, CTO at John Deere
Credits:
This episode was reported and produced by Jennifer Strong, Emma Cillekens and Anthony Green. It was edited by Mat Honan, and mixed by Garret Lang, with original music by Garret Lang and Jacob Gorski. Artwork by Stephanie Arnett.
We’re in the middle of another major disruption in retail—one that’s been accelerated by the pandemic, and looks to take the convenience of e-commerce and apply it to physical environments. In this episode, we examine how AI is at the center of this transition.
We meet:
Prakhar Mehrotra, VP, Machine Learning, Walmart Global Tech
Jordan Fisher, Chief Executive Officer, Standard AI
Terrence Griffin, Quality Control Specialist, Standard AI
Suresh Kumar, Global Chief Technology Officer and CDO, Walmart
This episode was produced by Anthony Green and Emma Cillekens. It was edited by Jennifer Strong and Mat Honan and mixed by Garret Lang, with original music by Garret Lang and Jacob Gorski. Artwork by Stephanie Arnett.
Face mapping and other tracking systems are changing the sports experience in the stands and on the court. In part three of this latest series on facial recognition, Jennifer Strong and the team at MIT Technology Review jump on the court to unpack just how much things are changing. This episode was originally published December 8, 2020.
We meet:
Donnie Scott, senior vice president of public security, IDEMIA
Michael D'Auria, vice president of business development, Second Spectrum
Jason Gay, sports columnist, The Wall Street Journal
Rachel Goodger, director of business development, Fancam
Rich Wang, director of analytics and fan engagement, Minnesota Vikings
Credits:
This episode was reported and produced by Jennifer Strong, Anthony Green, Tate Ryan-Mosley, Emma Cillekens and Karen Hao. We’re edited by Michael Reilly and Gideon Lichfield.
Algorithms now determine how much things cost. It’s called dynamic pricing, and it adjusts according to current market conditions in order to increase profits. The rise of e-commerce has turned pricing algorithms into an everyday occurrence—whether you’re shopping on Amazon, booking a flight or hotel, or ordering an Uber.
We Meet:
Credits:
This episode was reported by Anthony Green and produced by Jennifer Strong and Emma Cillekens. We’re edited by Mat Honan and our mix engineer is Garret Lang, with sound design and music by Jacob Gorski.
In the past, hiring decisions were made by people. Today, some key decisions that lead to whether someone gets a job or not are made by algorithms. The use of AI-based job interviews has increased since the pandemic. As demand increases, so too do questions about whether these algorithms make fair and unbiased hiring decisions, or whether they find the most qualified applicant. In this second episode of a four-part series on AI in hiring, we meet some of the big players making this technology including the CEOs of HireVue and myInterview—and we test some of these tools ourselves.
We Meet:
We Talked To:
Sounds From:
Credits:
This miniseries on hiring was reported by Hilke Schellmann and produced by Jennifer Strong, Emma Cillekens, Karen Hao and Anthony Green with special thanks to James Wall. We’re edited by Michael Reilly. Art direction by Stephanie Arnett.
Shortages of everything from seeds to fertilizer might accelerate the adoption of technologies that can help supplies go further in war-torn Ukraine.
We meet:
Roman Tarasevich, Farmer, Ukraine
Morten Schmidt, Chief Executive Officer, OneSoil
Inbal Reshef, Program Director, NASA Harvest
Oleksii Misiura, Head of Research and Development, IMC
Credits: This episode was reported and produced by Jennifer Strong, Emma Cillekens and Anthony Green. It was edited by Mat Honan and contains original music from Garret Lang and Jacob Gorski. Our mix engineer is Garret Lang. We had field production help in Ukraine from Orysia Khimiak. Special thanks this week to Max Furman, Ty Walrod, Antonio Regalado and Megan Zaroda Mullenioux. Our artwork is by Stephanie Arnett.
The International Space Station hosts scores of experiments that can’t be done on Earth. But it’s also showing its age—with repairs and safety concerns becoming increasingly common as it draws nearer to its end of life. In this episode, we bring you a conversation with astronaut Michael López-Alegría about the path forward for research in low Earth orbit, from MIT Technology Review’s flagship conference, EmTech MIT.
CREDITS:
This episode was created by Jennifer Strong, Anthony Green and Emma Cillekens. It was edited by Mat Honan, directed by Erin Underwood and mixed by Garret Lang. Episode art by Stephanie Arnett and special thanks this week to Amy Lammers and Brian Bryson from our events team.
SOUNDS:
What the next space station might look like, CNBC
https://www.youtube.com/watch?v=yRcNxPCC9_A
International space station removed from orbit 2031, NBC
https://www.youtube.com/watch?v=My_mUGfc418
Space Station to retire in 2031, NASA says, Fox 35 Orlando
A look at how AI and other tech are being used to help predict, detect, and pinpoint the location of wildfires, in the second of a two-part series.
We Meet:
Tricia Small, Television Producer, Small Fox Films
George Whitesides, Space Executive
Brittany Zajic, Disaster Response, Planet Labs
Dave Winnacker, Fire Chief, Moraga-Orinda Fire District
Arvind Satyam, Chief Commercial Officer, Pano
Credits:
This episode was reported and produced by Jennifer Strong, Anthony Green and Emma Cillekens. It was edited by Mat Honan and contains original music from Garret Lang and Jacob Gorski. Our mix engineer is Garret Lang and our artwork is by Stephanie Arnett.
A look at how AI and other tech are being used to help predict, detect, and pinpoint the location of wildfires, in the first of a two-part series.
We meet:
Dustin Tetrault, Deputy Fire Chief, Big Sky Fire Department
Sankar Narayanan, Chief Practice Officer, Fractal Analytics
Credits:
This episode was reported and produced by Jennifer Strong, Anthony Green and Emma Cillekens. It was edited by Mat Honan and contains original music from Garret Lang and Jacob Gorski. Our mix engineer is Garret Lang and our artwork is by Stephanie Arnett.
I Was There When is an oral history project that’s part of the In Machines We Trust podcast. It features stories of how breakthroughs and watershed moments in artificial intelligence and computing happened, as told by the people who witnessed them.
In this episode we meet Alex Serdiuk, founder and CEO of Respeecher.
CREDITS: This project was produced by Jennifer Strong, Anthony Green and Emma Cillekens. It was edited by Mat Honan and mixed by Garret Lang with original music by Jacob Gorski. The art is from Eric Mongeon and Stephanie Arnett.
Synthetic voice technologies are increasingly passing as human. But today’s voice assistants are still a far cry from the hyper-intelligent thinking machines we’ve been musing about for decades. In this episode, we explore how machines learn to communicate—and what it means for the humans on the other end of the conversation.
In this encore edition we revisit an episode from last year.
Links to our reporting:
https://www.technologyreview.com/2022/10/18/1061320/digital-clones-of-dead-people/
https://www.technologyreview.com/topic/artificial-intelligence/voice-assistants/
We meet:
Susan C. Bennett, voice of Siri
Cade Metz, The New York Times
Charlotte Jee, MIT Technology Review
Credits:
This episode was produced by Jennifer Strong, Emma Cillekens, Anthony Green, Karen Hao and Charlotte Jee. This episode was edited by Michael Reilly and Niall Firth.
I Was There When is an oral history project that’s part of the In Machines We Trust podcast. It features stories of how breakthroughs and watershed moments in artificial intelligence and computing happened, as told by the people who witnessed them.
In this episode we meet one of the world's greatest chess players, Garry Kasparov.
CREDITS: This project was produced by Jennifer Strong, Anthony Green and Emma Cillekens. It was edited by Mat Honan and mixed by Garret Lang with original music by Jacob Gorski. The art is from Eric Mongeon and Stephanie Arnett.
Digital twins capture the physical appearance and expressions of real people. Increasingly, these replicas are showing up in the entertainment industry and beyond, giving rise to interesting opportunities as well as thorny questions.
We speak to:
Greg Cross, CEO and co-founder of Soul Machines
Credits: This episode was produced by Anthony Green with help from Emma Cillekens. It was edited by Jennifer Strong and Mat Honan, mixed by Garret Lang, with original music from Jacob Gorski.
Who wants to take a walk around a California vineyard to explore how it’s deploying sensors and other forms of AI? Join us for a field trip as we do something a little bit different this week.
We meet:
Dirk Heuvel, vice president of vineyard operations, McManis Family Vineyards
Credits:
This episode was produced by Jennifer Strong with help from Anthony Green and Emma Cillekens. It was edited by Mat Honan and mixed by Garret Lang, with original music from Jacob Gorski. Art direction by Stephanie Arnett.
Retailers face an evolving landscape of fraud tactics each day. It’s why companies are increasingly turning to AI to try to catch threat patterns never seen before and block attacks before they happen. While this approach lends itself to efficiency, it’s also one that relies on increasingly complex data profiles of consumers. In this episode, we peer into the world of retail fraud detection.
We Meet:
David Cost, VP of ecommerce and marketing at Rainbow Apparel
Will Douglas Heaven, senior editor for AI at MIT Technology Review
Rajesh Ramanand, co-founder & CEO at Signifyd
Credits:
This episode was reported by Jennifer Strong and produced by Anthony Green and Emma Cillekens. It was edited by Mat Honan and contains original music from Garret Lang and Jacob Gorski. Our mix engineer is Garret Lang and our artwork is by Stephanie Arnett.
I Was There When is an oral history project that’s part of the In Machines We Trust podcast. It features stories of how breakthroughs and watershed moments in artificial intelligence and computing happened, as told by the people who witnessed them.
In this episode we meet Dave Johnson, the chief data and artificial intelligence officer at Moderna.
CREDITS: This project was produced by Jennifer Strong, Anthony Green and Emma Cillekens. It was edited by Michael Reilly and mixed by Garret Lang with original music by Jacob Gorski. The art is from Eric Mongeon and Stephanie Arnett.
A conversation about equity and what it takes to make effective AI policy taped before a live audience at MIT Technology Review’s annual AI conference, EmTech Digital.
We Meet:
Nicol Turner Lee, director of the Center for Technology Innovation at the Brookings Institution
Anthony Green, producer of the In Machines We Trust podcast
Credits:
This episode was created by Jennifer Strong, Anthony Green, Erin Underwood and Emma Cillekens. It was edited by Michael Reilly, directed by Laird Nolan and mixed by Garret Lang. Episode art by Stephanie Arnett. Cover art by Eric Mongeon. Special thanks this week to Amy Lammers and Brian Bryson.
Amid a growing epidemic of gun violence, can AI be part of the solution? In this episode we look at some of the weapons detection technologies schools are using in an effort to keep students safe.
We Meet:
Gary Hough, superintendent of Fayette County schools
Mark Keierleber, investigative reporter at The 74
Mike Ellenbogen, Founder, chief innovation officer at Evolv Technologies
Donald Maye, head of operations at IPVM
Sounds From:
Spielberg, S. (2002). Minority Report. Twentieth Century Fox.
Avigilon Athena Security integration for Gun Detection, via YouTube
Credits:
This episode was produced by Anthony Green and Emma Cillekens with reporting from Mark Keierleber. It was edited by Jennifer Strong, Rachel Courtland and Mat Honan, mixed by Garret Lang, with original music from Jacob Gorski and art from Stephanie Arnett.
I Was There When is an oral history project that’s part of the In Machines We Trust podcast. It features stories of how breakthroughs and watershed moments in artificial intelligence and computing happened, as told by the people who witnessed them.
In this episode we meet Gustav Söderström, who helped create algorithms aiming to understand our taste in music.
CREDITS: This project was produced by Jennifer Strong, Anthony Green and Emma Cillekens. It was edited by Michael Reilly and mixed by Garret Lang, with original music by Jacob Gorski. Artwork by Eric Mongeon.
A boy wrote about his suicide attempt. He didn’t realize his school's software was watching.
While schools commonly use AI to sift through students' digital lives and flag keywords that may be considered concerning, critics ask at what cost to privacy.
We Meet:
Jeff Patterson, CEO of Gaggle
Mark Keierleber, investigative reporter at The 74
Teeth Logsdon-Wallace, student
Elizabeth Laird, director of Equity in Civic Technology at Center for Democracy & Technology
Sounds From:
"Your Heart is a Muscle the Size of Your Fist" from the band Ramshackle Glory's 2011 album Live the Dream.
"Spying or protecting students? CBS46 Investigates school surveillance software" from CBS46 in Atlanta, GA on February 14, 2022.
"Student Surveillance Software: Schools know what your child is doing online. Do you?" from WSPA7 News in Greenville, SC on May 5, 2021.
"Spying or protecting students? CBS46 Investigates school surveillance software" from News 5 in Cleveland, OH on February 5, 2020.
Credits:
This episode was produced by Anthony Green and Emma Cillekens with reporting from Mark Keierleber. It was edited by Jennifer Strong and Michael Reilly, and mixed by Garret Lang with original music from Jacob Gorski. Art by Stephanie Arnett.
https://www.theguardian.com/education/2021/oct/12/school-surveillance-dragnet-suicide-attempt-healing
https://www.the74million.org/contributor/mark-keierleber/
You can support our journalism by going to http://www.techreview.com/subscribe.
The team that brings you In Machines We Trust has much to be grateful for—a brand new season of this show, a big awards nomination for The Extortion Economy, a show about ransomware that we made with ProPublica, and our new investigative series, Curious Coincidence.
We celebrate how far we've come with a look back at where it all started!
--
What happens when an algorithm gets it wrong? In the first of a four-part series on face recognition, Jennifer Strong and the team at MIT Technology Review explore the arrest of a man who was falsely accused of a crime based on a facial recognition match. The episode also starts to unpack the complexities of this technology and introduce some thorny questions about its use.
We meet:
Robert and Melissa Williams
Peter Fussey, University of Essex
Hamid Khan, Stop LAPD Spying Coalition
Credits: This episode was reported and produced by Jennifer Strong, Tate Ryan-Mosley and Emma Cillekens. We had help from Karen Hao and Benji Rosen. We’re edited by Michael Reilly and Gideon Lichfield. Our technical director is Jacob Gorski. Special thanks to Kyle Thomas Hemingway and Eric Mongeon.
This is an unsolved detective story. Hosted by investigative reporter Antonio Regalado, Curious Coincidence dives into the mysterious origins of Covid-19 by examining the genome of the virus, the labs doing sensitive research on dangerous pathogens, and questions of whether a lab accident may have touched off a global pandemic.
A five-part investigation from MIT Technology Review.
This week we're sharing another tech show we made that we think you're going to love. It's called The Extortion Economy and it's a five-part series about the ransomware epidemic produced with ProPublica.
See you soon with a whole new season of In Machines We Trust!!
--
A new-age iteration of the age-old extortion problem. A ransomware vigilante, a piracy (as in actual boats) expert, a school administrator, and a kidnapping victim share their experiences. This is part one.
We Meet:
Fabian Wosar, CTO, Emsisoft
Doug Russell, Director of Technology, Haverhill Public Schools
Lisa Forte, Co-founder, Red Goat Cyber Security
Credits:
This series is hosted by Meg Marco and produced by Emma Cillekens, Tate Ryan-Mosley and Anthony Green. It’s inspired by reporting from Renee Dudley and Daniel Golden of ProPublica. We're edited by Bobbie Johnson, Michael Reilly, Mat Honan and Robin Fields. Our mix engineer is Erick Gomez with help from Rebekah Wineman. Our theme music is by Jacob Gorski. Art is from Lisa Larson-Walker and Eric Mongeon. Emma Cillekens is our voice coach. The executive producers of The Extortion Economy podcast are Meg Marco and Jennifer Strong.
Sounds From:
Video: Colonial Pipeline CEO Joseph Blount testifies at the Senate Homeland Security Committee, Source: CNBC Television, https://www.youtube.com/watch?v=DcYePKjI_mc
Video: Roving Report Italy, Source: AP, http://www.aparchive.com/metadata/youtube/8b08bfc68a0b203d238aa8e0c4316e61
Video: CBS Evening News 1989-12-14, Source: CBS, https://www.youtube.com/watch?v=wHsbZEX5pQw
Computers are ranking the way people look—and the results are influencing the things we do, the posts we see, and the way we think.
Ideas about what constitutes “beauty” are complex, subjective, and by no means limited to physical appearances. Elusive though it is, everyone wants more of it. That means big business and, increasingly, people harnessing algorithms to create their ideal selves in the digital and, sometimes, physical worlds. In this episode, we explore the popularity of beauty filters, and sit down with someone who’s convinced his software will show you just how to nip and tuck your way to a better life.
We meet:
Shafee Hassan, Qoves Studio founder
Lauren Rhue, Assistant Professor of Information Systems at the Robert H. Smith School of Business
Credits:
This episode was reported by Tate Ryan-Mosley, and produced by Jennifer Strong, Emma Cillekens, Karen Hao and Anthony Green. We’re edited by Michael Reilly and Bobbie Johnson.
Researchers have spent years trying to crack the mystery of how we express our feelings. Pioneers in the field of emotion detection will tell you the problem is far from solved. But that hasn’t stopped a growing number of companies from claiming their algorithms have cracked the puzzle. In part one of a two-part series on emotion AI, Jennifer Strong and the team at MIT Technology Review explore what emotion AI is, where it is, and what it means.
We meet:
Credits: This episode was reported and produced by Jennifer Strong and Karen Hao, with Tate Ryan-Mosley and Emma Cillekens. We had help from Benji Rosen. We’re edited by Michael Reilly and our theme music is by Jacob Gorski.
Cameras in stores aren’t anything new—but these days there are AI brains behind the electric eyes. In some stores, sophisticated systems are tracking customers in almost every imaginable way, from recognizing their faces to gauging their age, their mood, and virtually gussying them up with makeup. The systems rarely ask for people’s permission, and for the most part they don’t have to. In our season 1 finale, we look at the explosion of AI and face recognition technologies in retail spaces, and what it means for the future of shopping.
We meet:
RetailNext CTO Arun Nair
L'Oreal's Technology Incubator Global VP Guive Balooch
Modiface CEO Parham Aarabi
Biometrics pioneer and Chairman of ID4Africa Joseph Atick
Credits:
This episode was reported and produced by Jennifer Strong, Anthony Green, Tate Ryan-Mosley, Emma Cillekens and Karen Hao. We’re edited by Michael Reilly. Our theme music is by Jacob Gorski.
Voice technology is one of the biggest trends in the healthcare space. We look at how it might help care providers and patients, from assisting a woman who is losing her speech to documenting healthcare records for doctors. But how do you teach AI to communicate more like a human, and will it lead to more efficient machines?
We Meet:
Sounds:
Credits:
This episode was reported and produced by Anthony Green with help from Jennifer Strong and Emma Cillekens. It was edited by Michael Reilly. Our mix engineer is Garret Lang and our theme music is by Jacob Gorski.
Defining what is, or isn’t, artificial intelligence can be tricky. So much so that even the experts get it wrong sometimes. That’s why MIT Technology Review’s Senior AI Editor Karen Hao created a flowchart to explain it all. In this bonus content, our host and her team reimagined Hao’s reporting, gamifying it into a radio play.
If you would like to see the original reporting, visit:
Credits: This episode was reported by Karen Hao. It was adapted for audio and produced by Jennifer Strong and Emma Cillekens. The voices you hear are Emma Cillekens, as well as Eric Mongeon and Kyle Thomas Hemingway. (If you like our show art they made it!) We’re edited by Michael Reilly and Niall Firth.
In this episode, we meet Sophie Zhang—a former data scientist at Facebook. Before she was fired, she had become consumed by the task of finding and taking down fake accounts that were being used to sway elections globally.
I Was There When is a new oral history project from the In Machines We Trust podcast. It features stories of how breakthroughs and watershed moments in artificial intelligence and computing happened, as told by the people who witnessed them.
Credits:
This episode was produced by Jennifer Strong, Anthony Green and Emma Cillekens, and edited by Niall Firth and Mat Honan. It’s mixed by Garret Lang, with theme music by Jacob Gorski.
Algorithms now determine how much things cost. It’s called dynamic pricing, and it adjusts prices according to current market conditions in order to increase profits. The rise of ecommerce has made pricing algorithms an everyday occurrence—whether you’re shopping on Amazon, booking a flight or hotel, or ordering an Uber.
We Meet:
Credits:
This episode was reported by Anthony Green and produced by Jennifer Strong and Emma Cillekens. We’re edited by Mat Honan and our mix engineer is Garret Lang, with sound design and music by Jacob Gorski.
I Was There When is an oral history project that's part of the In Machines We Trust podcast. It features stories of how breakthroughs in artificial intelligence and computing happened, as told by the people who witnessed them.
In this first installment we meet Joseph Atick, who helped create the first commercially viable facial recognition system.
Do you have a story to tell for this series? Do you want to nominate someone who does? We want to hear from you! Please reach out to us at [email protected].
CREDITS: This episode was produced by Jennifer Strong, Anthony Green and Emma Cillekens with help from Lindsay Muscato. It’s edited by Michael Reilly and Mat Honan, and mixed by Garret Lang, with sound design and music by Jacob Gorski.
From chess to Jeopardy to e-sports, AI is increasingly beating humans at their own games. But that was never the ultimate goal. In this episode we dig into the symbiotic relationship between games and AI. We meet the big players in the space, and we take a trip to an arcade.
We Meet:
Julian Togelius
Will Douglas Heaven
David Silver
David Farhi
We Talked To:
Julian Togelius
Will Douglas Heaven
Karen Hao
David Silver
David Farhi
Natasha Regan
Sounds From:
Jeopardy 2011-02: The IBM Challenge
Garry Kasparov VS Deep Blue 1997 6th game (Kasparov Resigns)
https://www.youtube.com/watch?v=EsMk1Nbcs-s
Attack Like AlphaZero: The Power of the King
https://www.youtube.com/watch?v=c0JK5Fa3AqI
Miracle Perfect Anti Mage 16/0 - Dota 2 Pro Gameplay
https://www.youtube.com/watch?v=59KnNcU9iKc
DOTA 2 - ALL GAME-WINNING Moments in The International History (TI1-TI9)
https://www.youtube.com/watch?v=RJcNbuASl-Y
Credits:
This episode was reported by Jennifer Strong and Will Douglas Heaven and produced by Anthony Green, Emma Cillekens and Karen Hao. We’re edited by Niall Firth, Michael Reilly and Mat Honan. Our mix engineer is Garret Lang. Sound design and music by Jacob Gorski.
When it comes to hiring, it’s increasingly becoming an AI’s world; we’re just working in it. In this, the final episode of Season 2 and the conclusion of our series on AI and hiring, we take a look at how AI-based systems are increasingly playing gatekeeper in the hiring process—screening out applicants by the millions, based on little more than what they see in your resume. But we aren’t powerless against the machines. In fact, an increasing number of people and services are designed to help you play by—and in some cases bend—their rules to give you an edge.
We Meet:
Jamaal Eggleston, Work Readiness Instructor, The HOPE Program
Ian Siegel, CEO, ZipRecruiter
Sami Mäkeläinen, Head of Strategic Foresight, Telstra
Salil Pande, CEO, VMock
Gracy Sarkissian, Interim Executive Director, Wasserman Center for Career Development, New York University
We Talked To:
Jamaal Eggleston, Work Readiness Instructor, The HOPE Program
Students and Teachers from The HOPE Program in Brooklyn, NY
Jonathan Kestenbaum, Co-founder & Managing Director of Talent Tech Labs
Josh Bersin, Global Industry Analyst
Brian Kropp, Vice President Research, Gartner
Ian Siegel, CEO, ZipRecruiter
Sami Mäkeläinen, Head of Strategic Foresight, Telstra
Salil Pande, CEO, VMock
Kiran Pande, Co-Founder, VMock
Gracy Sarkissian, Interim Executive Director, Wasserman Center for Career Development, New York University
Sounds From:
Credits:
This miniseries on hiring was reported by Hilke Schellmann and produced by Jennifer Strong, Emma Cillekens, Anthony Green and Karen Hao. We’re edited by Michael Reilly.
Increasingly, job seekers need to pass a series of ‘tests’ in the form of artificial intelligence games—just to be seen by a hiring manager. In this third episode of a four-part miniseries on AI and hiring, we speak to someone who helped create these tests, ask who might get left behind in the process, and explore why there isn’t more policy in place. We also try out some of these tools ourselves.
We Meet:
Matthew Neale, Vice President of Assessment Products, Criteria Corp.
Frida Polli, CEO, Pymetrics
Henry Claypool, Consultant and former Obama Administration Member, Commission on Long-Term Care
Safe Hammad, CTO, Arctic Shores
Alexandra Reeve Givens, President and CEO, Center for Democracy and Technology
Nathaniel Glasser, Employment Lawyer, Epstein Becker Green
Keith Sonderling, Commissioner, Equal Employment Opportunity Commission (EEOC)
We Talked To:
Aaron Rieke, Managing Director, Upturn
Adam Forman, Employment Lawyer, Epstein Becker Green
Brian Kropp, Vice President Research, Gartner
Josh Bersin, Research Analyst
Jonathan Kestenbaum, Co-Founder and Managing Director, Talent Tech Labs
Frank Pasquale, Professor, Brooklyn Law School
Patricia (Patti) Sanchez, Employment Manager, MacDonald Training Center
Matthew Neale, Vice President of Assessment Products, Criteria Corp.
Frida Polli, CEO, Pymetrics
Henry Claypool, Consultant and former Obama Administration Member, Commission on Long-Term Care
Safe Hammad, CTO, Arctic Shores
Alexandra Reeve Givens, President and CEO, Center for Democracy and Technology
Nathaniel Glasser, Employment Lawyer, Epstein Becker Green
Keith Sonderling, Commissioner, Equal Employment Opportunity Commission (EEOC)
Sounds From:
Science 4-Hire, podcast
Matthew Kirkwold’s cover of XTC’s Complicated Game, https://www.youtube.com/watch?v=tumM_6YYeXs
Credits:
This miniseries on hiring was reported by Hilke Schellmann and produced by Jennifer Strong, Emma Cillekens, Anthony Green and Karen Hao. We’re edited by Michael Reilly.
In the past, hiring decisions were made by people. Today, some of the key decisions that determine whether someone gets a job are made by algorithms. The use of AI-based job interviews has increased since the pandemic. As demand increases, so too do questions about whether these algorithms make fair and unbiased hiring decisions, or find the most qualified applicant. In this second episode of a four-part series on AI in hiring, we meet some of the big players making this technology, including the CEOs of HireVue and myInterview—and we test some of these tools ourselves.
We Meet:
We Talked To:
Sounds From:
Credits:
This miniseries on hiring was reported by Hilke Schellmann and produced by Jennifer Strong, Emma Cillekens, Karen Hao and Anthony Green with special thanks to James Wall. We’re edited by Michael Reilly. Art direction by Stephanie Arnett.
If you’ve applied for a job lately, it’s all but guaranteed that your application was reviewed by software—in most cases, before a human ever laid eyes on it. In this episode, the first in a four-part investigation into automated hiring practices, we speak with the CEOs of ZipRecruiter and CareerBuilder, and one of the architects of LinkedIn’s algorithmic job-matching system, to explore how AI is increasingly playing matchmaker between job searchers and employers. But while software helps speed up the process of sifting through the job market, algorithms have a history of biasing the opportunities they present to people by gender, race...and in at least one case, whether you played lacrosse in high school.
We Meet:
We Talked To:
Sounds From:
Credits:
This episode was reported by Hilke Schellmann, and produced by Jennifer Strong, Emma Cillekens and Anthony Green with special thanks to Karen Hao. We’re edited by Michael Reilly.
Additional reporting from us:
https://www.technologyreview.com/2021/06/23/1026825/linkedin-ai-bias-ziprecruiter-monster-artificial-intelligence/
https://www.technologyreview.com/2021/02/11/1017955/auditors-testing-ai-hiring-algorithms-bias-big-questions-remain/
https://www.technologyreview.com/2021/04/09/1022217/facebook-ad-algorithm-sex-discrimination/
https://www.technologyreview.com/2019/11/07/75194/hirevue-ai-automated-hiring-discrimination-ftc-epic-bias/
https://www.technologyreview.com/2020/02/14/844765/ai-emotion-recognition-affective-computing-hirevue-regulation-ethics/
Despite their popularity with kids, tablets and other connected devices are built on top of systems that weren’t designed for kids to easily understand or navigate. Adapting algorithms to interact with a child isn’t without its complications—as no one child is exactly like another. Most recognition algorithms look for patterns and consistency to successfully identify objects, but kids are notoriously inconsistent. In this episode, we examine the relationship AI has with kids.
We Meet:
Judith Danovitch, associate professor of psychological and brain sciences at the University of Louisville
Lisa Anthony, associate professor of computer science at the University of Florida
Tanya Basu, MIT Technology Review
Credits:
This episode was reported and produced by Tanya Basu, Anthony Green, Jennifer Strong, and Emma Cillekens. We’re edited by Michael Reilly.
Clearview AI has built one of the most comprehensive databases of people’s faces in the world. Your picture is probably in there (our host Jennifer Strong’s was). In the second of a four-part series on facial recognition, we meet the CEO of the controversial company who tells us our future is filled with face recognition—regardless of whether it's regulated or not.
We meet:
Hoan Ton-That, Clearview AI
Alexa Daniels-Shpall, Police Executive Research Forum
Credits:
This episode was reported and produced by Jennifer Strong, Tate Ryan-Mosley and Emma Cillekens, with special thanks to Karen Hao and Benji Rosen. We’re edited by Michael Reilly and Gideon Lichfield. Our technical director is Jacob Gorski.
Credit scores have been used for decades to assess consumer creditworthiness, but their scope is far greater now that they are powered by algorithms: not only do they consider vastly more data, in both volume and type, but they increasingly affect whether you can buy a car, rent an apartment, or get a full-time job.
We meet:
Chi Chi Wu, staff attorney at National Consumer Law Center
Michele Gilman, professor of law at University of Baltimore
Mike de Vere, CEO of Zest AI
Credits:
This episode was produced by Jennifer Strong, Karen Hao, Emma Cillekens and Anthony Green. We’re edited by Michael Reilly.
Synthetic voice technologies are increasingly passing as human. But today’s voice assistants are still a far cry from the hyper-intelligent thinking machines we’ve been musing about for decades. In this episode, we explore how machines learn to communicate—and what it means for the humans on the other end of the conversation.
We meet:
Susan C. Bennett, voice of Siri
Cade Metz, The New York Times
Charlotte Jee, MIT Technology Review
Credits:
This episode was produced by Jennifer Strong, Emma Cillekens, Anthony Green, Karen Hao and Charlotte Jee. We’re edited by Michael Reilly and Niall Firth.
Tech giants are moving into our wallets—bringing AI and big questions with them.
Our entire financial system is built on trust. We can exchange otherwise worthless paper bills for fresh groceries, or swipe a piece of plastic for new clothes. But this trust—typically in a central government-backed bank—is changing. As our financial lives are rapidly digitized, the resulting data turns into fodder for AI. Companies like Apple, Facebook and Google see it as an opportunity to disrupt the entire experience of how people think about and engage with their money. But will we as consumers really get more control over our finances? In this first of a series on automation and our wallets, we explore a digital revolution in how we pay for things.
We meet:
Umar Farooq, CEO of Onyx by J.P. Morgan Chase
Josh Woodward, Director of product management for Google Pay
Ed McLaughlin, President of operations and technology for MasterCard
Craig Vosburg, Chief product officer for MasterCard
Credits:
This episode was produced by Anthony Green, with help from Jennifer Strong, Karen Hao, Will Douglas Heaven and Emma Cillekens. We’re edited by Michael Reilly. Special thanks to our events team for recording part of this episode at our AI conference, EmTech Digital.
Computers are ranking the way people look—and the results are influencing the things we do, the posts we see, and the way we think.
Ideas about what constitutes “beauty” are complex, subjective, and by no means limited to physical appearances. Elusive though it is, everyone wants more of it. That means big business and, increasingly, people harnessing algorithms to create their ideal selves in the digital and, sometimes, physical worlds. In this episode, we explore the popularity of beauty filters, and sit down with someone who’s convinced his software will show you just how to nip and tuck your way to a better life.
We meet:
Shafee Hassan, Qoves Studio founder
Lauren Rhue, Assistant Professor of Information Systems at the Robert H. Smith School of Business
Credits:
This episode was reported by Tate Ryan-Mosley, and produced by Jennifer Strong, Emma Cillekens, Karen Hao and Anthony Green. We’re edited by Michael Reilly and Bobbie Johnson.
Host Jennifer Strong and MIT Technology Review’s editors explore what it means to entrust AI with our most sensitive decisions.
Cameras in stores aren’t anything new—but these days there are AI brains behind the electric eyes. In some stores, sophisticated systems are tracking customers in almost every imaginable way, from recognizing their faces to gauging their age, their mood, and virtually gussying them up with makeup. The systems rarely ask for people’s permission, and for the most part they don’t have to. In our season 1 finale, we look at the explosion of AI and face recognition technologies in retail spaces, and what it means for the future of shopping.
We meet:
RetailNext CTO Arun Nair
L'Oreal's Technology Incubator Global VP Guive Balooch
Modiface CEO Parham Aarabi
Biometrics pioneer and Chairman of ID4Africa Joseph Atick
Credits:
This episode was reported and produced by Jennifer Strong, Anthony Green, Tate Ryan-Mosley, Emma Cillekens and Karen Hao. We’re edited by Michael Reilly and Gideon Lichfield.
Two weeks after her forced exit, the AI ethics researcher reflects on her time at Google, how to increase corporate accountability, and the state of the AI field.
We meet:
Dr. Timnit Gebru
Find more reporting:
https://www.technologyreview.com/2020/12/16/1014634/google-ai-ethics-lead-timnit-gebru-tells-story/
https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/
Google's email to employees:
https://twitter.com/JeffDean/status/1334953632719011840
Gebru's email to the listserv Google Brain Women and Allies:
https://www.platformer.news/p/the-withering-email-that-got-an-ethical
The petition from Google Walkout:
https://googlewalkout.medium.com/standing-with-dr-timnit-gebru-isupporttimnit-believeblackwomen-6dadc300d382
Credits:
This episode was reported by Karen Hao, edited by Jennifer Strong, Niall Firth, Gideon Lichfield and Michael Reilly, and produced with help from Anthony Green, Emma Cillekens and Benji Rosen.
Face mapping and other tracking systems are changing the sports experience in the stands and on the court. In part three of this latest series on facial recognition, Jennifer Strong and the team at MIT Technology Review jump on the court to unpack just how much things are changing.
We meet:
Donnie Scott, senior vice president of public security, IDEMIA
Michael D'Auria, vice president of business development, Second Spectrum
Jason Gay, sports columnist, The Wall Street Journal
Rachel Goodger, director of business development, Fancam
Rich Wang, director of analytics and fan engagement, Minnesota Vikings
Credits:
This episode was reported and produced by Jennifer Strong, Anthony Green, Tate Ryan-Mosley, Emma Cillekens and Karen Hao. We’re edited by Michael Reilly and Gideon Lichfield.
Facial recognition technology is being deployed in housing projects, homeless shelters, schools, even across entire cities—usually without much fanfare or discussion. To some, this represents a critical technology for helping vulnerable communities gain access to social services. For others, it’s a flagrant invasion of privacy and human dignity. In this episode, we speak to the advocates, technologists, and dissidents dealing with the messy consequences that come when a technology that can identify you almost anywhere (even if you’re wearing a mask) is deployed without any clear playbook for regulating or managing it.
We meet:
Eric Williams, senior staff attorney at Detroit Justice Center
Fabian Rogers, community advocate at Surveillance Technology Oversight Project
Helen Knight, founder of Tech for Social Good
Ray Bolling, president and co-founder of Eyemetric Identity Systems
Mary Sunden, executive director of the Christ Church Community Development Corporation
Credits:
This episode was reported and produced by Jennifer Strong, Tate Ryan-Mosley, Emma Cillekens, and Karen Hao. We’re edited by Michael Reilly and Gideon Lichfield.
Moves have been made to restrict the use of facial recognition across the globe. In part one of this series on face ID, Jennifer Strong and the team at MIT Technology Review explore the unexpected ways the technology is being used, including how it is being turned on police.
We meet:
Christopher Howell, data scientist and protester.
Credits:
This episode was reported and produced by Jennifer Strong, Tate Ryan-Mosley, Emma Cillekens, and Karen Hao. We’re edited by Michael Reilly and Gideon Lichfield.
The use of facial recognition by police has come under a lot of scrutiny. In part three of our four-part series on face ID, host Jennifer Strong takes you to Sin City, which actually has one of America’s most buttoned-up policies on when cops can capture your likeness. She also finds out why celebrities like Woody Harrelson are playing a starring role in conversations about this technology. This episode was originally published August 12, 2020.
We meet:
Albert Fox Cahn, Surveillance Technology Oversight Project
Phil Mayor, ACLU Michigan
Captain Dori Koren, Las Vegas Police
Assistant Chief Armando Aguilar, Miami Police
Credits:
This episode was reported and produced by Jennifer Strong, Tate Ryan-Mosley and Emma Cillekens. We had help from Benji Rosen and Karen Hao. We’re edited by Michael Reilly and Gideon Lichfield.
In the second of two exclusive interviews, Technology Review’s editor-in-chief, Gideon Lichfield, sat down with Parag Agrawal, Twitter’s chief technology officer, to discuss the rise of misinformation on the social media platform. Agrawal discusses some of the measures the company has taken to fight back, while admitting Twitter is trying to thread a needle: mitigating harm caused by false content without becoming an arbiter of truth. This conversation is from the EmTech MIT virtual conference and has been edited for clarity.
For more coverage on this topic, check out this week's episode of Deep Tech: https://cms.megaphone.fm/channel/deep-tech?selected=MIT6065037377 and our coverage at https://www.technologyreview.com/topic/tech-policy/
Credits: This episode from EmTech MIT was produced by Jennifer Strong and Emma Cillekens, with special thanks to Brian Bryson and Benji Rosen. We’re edited by Michael Reilly and Gideon Lichfield.
Misinformation and social media have become inseparable from one another; as platforms like Twitter and Facebook have grown to globe-spanning size, so too has the threat posed by the spread of false content. In the midst of a volatile election season in the US and a raging global pandemic, the power of information to alter opinions and save lives (or endanger them) is on full display. In the first of two exclusive interviews with two of the tech world’s most powerful people, Technology Review’s Editor-in-Chief Gideon Lichfield sits down with Facebook CTO Mike Schroepfer to talk about the challenges of combating false and harmful content on an online platform used by billions around the world. This conversation is from the EmTech MIT virtual conference and has been edited for length and clarity.
For more coverage on this topic, check out this week's episode of Deep Tech: https://cms.megaphone.fm/channel/deep-tech?selected=MIT6065037377 and our coverage at https://www.technologyreview.com/topic/tech-policy/
Credits: This episode from EmTech was produced by Jennifer Strong and Emma Cillekens, with special thanks to Brian Bryson and Benji Rosen. We’re edited by Michael Reilly and Gideon Lichfield.
Defining what is, or isn’t, artificial intelligence can be tricky. So much so that even the experts get it wrong sometimes. That’s why MIT Technology Review’s senior AI reporter Karen Hao created a flowchart to explain it all. In this bonus content, our host Jennifer Strong and her team reimagine Hao’s reporting, gamifying it into an audio postcard of sorts.
If you would like to see the original reporting visit:
Credits: This episode was reported by Karen Hao. It was adapted for audio and produced by Jennifer Strong and Emma Cillekens. The voices you heard were Emma Cillekens, as well as Eric Mongeon and Kyle Thomas Hemingway from our art team. We’re edited by Michael Reilly and Niall Firth.
AI can read your emotional response to advertising and your facial expressions in a job interview. But if it can already do all this, what happens next? In part two of a series on emotion AI, Jennifer Strong and the team at MIT Technology Review explore the implications of how it’s used and where it’s heading in the future.
We meet:
Shruti Sharma, VSCO
Gabi Zijderveld, Affectiva
Tim VanGoethem, Harman
Rohit Prasad, Amazon
Meredith Whittaker, NYU's AI Now Institute
Credits: This episode was reported and produced by Jennifer Strong, Karen Hao, Tate Ryan-Mosley, and Emma Cillekens. We had help from Benji Rosen. We’re edited by Michael Reilly and Gideon Lichfield.
Researchers have spent years trying to crack the mystery of how we express our feelings. Pioneers in the field of emotion detection will tell you the problem is far from solved. But that hasn’t stopped a growing number of companies from claiming their algorithms have cracked the puzzle. In part one of a two-part series on emotion AI, Jennifer Strong and the team at MIT Technology Review explore what emotion AI is, where it is, and what it means.
We meet:
Credits: This episode was reported and produced by Jennifer Strong and Karen Hao, with Tate Ryan-Mosley and Emma Cillekens. We had help from Benji Rosen. We’re edited by Michael Reilly and Gideon Lichfield.
Automated driving is advancing all the time, but there’s still a critical missing ingredient: trust. Host Jennifer Strong meets engineers building a new language of communication between automated vehicles and their human occupants, a crucial missing piece in the push toward a driverless future.
We meet:
Dr. Richard Corey and Dr. Nicholas Giudice, founders of the VEMI Lab at the University of Maine
Ryan Powell, UX Design & Research at Waymo.
Rashed Haq, VP of Robotics at Cruise
Credits: This episode was reported and produced by Jennifer Strong, Tanya Basu, Emma Cillekens and Tate Ryan-Mosley. We had help from Karen Hao and Benji Rosen. We’re edited by Michael Reilly and Gideon Lichfield.
What weird bugs did you pick up last time you rode a subway train? A global network of scientists mapping the DNA of urban microbes and using AI to look for patterns pivots to tracking covid-19. Join host Jennifer Strong as she rides along on a subway-swabbing mission and talks to scientists racing to find an existing drug that might treat the disease.
We meet:
Weill Cornell Medicine's Christopher Mason and David Danko
BenevolentAI CEO Baroness Joanna Shields
Credits: This episode was reported and produced by Jennifer Strong, Tate Ryan-Mosley, Emma Cillekens and Karen Hao with help from Benji Rosen. We’re edited by Michael Reilly and Gideon Lichfield. Our technical director is Jacob Gorski.
What happens when an algorithm gets it wrong? In the first of a four-part series on face recognition, Jennifer Strong and the team at MIT Technology Review explore the arrest of a man who was falsely accused of a crime using facial recognition. The episode also starts to unpack the complexities of this technology and introduce some thorny questions about its use.
We meet:
Robert and Melissa Williams
Peter Fussey, University of Essex
Hamid Khan, Stop LAPD Spying Coalition
Credits: This episode was reported and produced by Jennifer Strong, Tate Ryan-Mosley and Emma Cillekens. We had help from Karen Hao and Benji Rosen. We’re edited by Michael Reilly and Gideon Lichfield. Our technical director is Jacob Gorski. Special thanks to Kyle Thomas Hemingway and Eric Mongeon.
Clearview AI has built one of the most comprehensive databases of people’s faces in the world. Your picture is probably in there (our host Jennifer Strong’s was). In the second of a four-part series on facial recognition, we meet the CEO of the controversial company who tells us our future is filled with face recognition—regardless of whether it's regulated or not.
We meet:
Hoan Ton-That, Clearview AI
Alexa Daniels-Shpall, Police Executive Research Forum
Credits:
This episode was reported and produced by Jennifer Strong, with Tate Ryan-Mosley and Emma Cillekens, and with special thanks to Karen Hao and Benji Rosen. We’re edited by Michael Reilly and Gideon Lichfield. Our technical director is Jacob Gorski.
The use of facial recognition by police has come under a lot of scrutiny. In part three of our series, host Jennifer Strong takes you to Sin City, which actually has one of America’s most buttoned-up policies on when cops can capture your likeness. She also finds out why celebrities like Woody Harrelson are playing a starring role in conversations about this technology.
We meet:
Albert Fox Cahn, Surveillance Technology Oversight Project
Phil Mayor, ACLU Michigan
Captain Dori Koren, Las Vegas Police
Assistant Chief Armando Aguilar, Miami Police
Credits:
This episode was reported and produced by Jennifer Strong, Tate Ryan-Mosley and Emma Cillekens. We had help from Benji Rosen and Karen Hao. We’re edited by Michael Reilly and Gideon Lichfield. Our technical director is Jacob Gorski.
Police have a history of using face recognition to arrest protesters—something not lost on activists since the death of George Floyd. In the last of a four-part series on facial recognition, host Jennifer Strong explores the way forward for the technology and examines what policy might look like.
We meet:
Artem Kuharenko, NTechLab
Deborah Raji, AI Now Institute
Toussaint Morrison, Musician, actor, and Black Lives Matter organizer
Jameson Spivack, Center on Privacy & Technology
Credits:
This episode was reported and produced by Jennifer Strong, Tate Ryan-Mosley, Emma Cillekens, and Karen Hao. We had help from Benji Rosen. We’re edited by Michael Reilly and Gideon Lichfield. Our technical director is Jacob Gorski.
Welcome to a podcast about the automation of everything. Host Jennifer Strong and MIT Technology Review’s editors explore what it means to entrust AI with our most sensitive decisions.