32 episodes • Length: 40 min • Monthly
Technology is changing fast. And it’s changing our world even faster. Host Alix Dunn interviews visionaries, researchers, and technologists working in the public interest to help you keep up. Step outside the hype and explore the possibilities, problems, and politics of technology. We publish weekly.
The podcast Computer Says Maybe is created by Alix Dunn. The podcast and its artwork are embedded on this page using the public podcast feed (RSS).
Welcome back! Let us know what you think of the show and what you want to see more of in 2025 by writing in here, or rambling into a microphone here.
In this episode Alix is joined by Tawana Petty, who shares her experiences coming up as a political community activist in Detroit. Tawana studied the history of radical Black movements under Grace Lee Boggs, and has carried these lessons into her work today.
Listen to learn about how places like Detroit are used as testing grounds for new ‘innovations’ — especially within marginalised neighbourhoods. Tawana explains in detail how surveillance and safety are often mistakenly conflated, and how we have to work to unlearn this conflation.
Further reading:
Tawana Petty is a mother, social justice organizer, poet, author, and facilitator. She is the founding Executive Director of Petty Propolis, Inc., an artist incubator which teaches poetry, policy literacy and advocacy, and interrogates negative pervasive narratives, in pursuit of racial and environmental justice. Petty is a 2023-2025 Just Tech Fellow with the Social Science Research Council, a 2024 Rockwood National LIO Alum, and she currently serves on the CS (computer science) for Detroit Steering Committee. In 2021, Petty was named one of 100 Brilliant Women in AI Ethics. In 2023, she was honored with the AI Policy Leader in Civil Society Award by the Center for AI and Digital Policy, the Ava Jo Silent Shero Award by the Michigan Roundtable for Diversity and Inclusion, and with a Racial Justice Leadership Award by the Detroit People's Platform. In 2024, Petty was listed on Business Insider’s AI Power List for Policy and Ethics.
We’re wrapped for the year, and will be back on the 10th of Jan. In the meantime, listen to Alix, Prathm, and Georgia discuss their biggest learnings from the pod this year from some of their favourite episodes.
**We want to hear from YOU about the podcast — what do you want to hear more of in 2025? Share your ideas with us here: https://tally.so/r/3E860B**
Or if you’d rather ramble into a microphone (just like we do…) use this link instead!
We pull out clips from the following episodes:
Further reading:
Google has finally been judged to be a monopoly by a federal court — while this was strikingly obvious already, what does this judgement mean? Is this too little too late?
This week Alix and Prathm were joined by Michelle Meagher, an antitrust lawyer who shared a brief history of how antitrust started as a tool for governments to stop the consolidation of corporate power, and over time has morphed to focus on issues of competition and consumer protection — which has allowed monopolies to thrive.
Michelle discusses the details and her thinking on the ongoing cases against Google, and more generally on how monopolies are basically like a big octopus arm-wrestling itself.
Further reading:
Sign up to the Computer Says Maybe newsletter to get invites to our events and receive other juicy resources straight to your inbox
Michelle is a competition lawyer and co-founder of the Balanced Economy Project, Europe’s first anti-monopoly organisation. She is author of Competition is Killing Us: How Big Business is Harming Our Society and Planet - and What to Do About It (Penguin, 2020), a Financial Times Best Economics Book of the Year. She is a Senior Policy Fellow at the University College London Centre for Law, Economics and Society. She is a Senior Fellow working on Monopoly and Corporate Governance at the Centre for Research on Multinational Corporations (SOMO).
What happens if you ask a generative AI image model to show you what Picasso’s work would have looked like if he lived in Japan in the 16th century? Would it produce something totally new, or just mash together stereotypical aesthetics from Picasso’s work, and 16th century Japan?
This week, Alix interviewed Eryk Salvaggio, who shares his ideas about how we are moving away from ‘the age of information’ and into an age of noise: we’ve progressed so far into a paradigm of easy and frictionless information sharing that information has transformed into an overwhelming wall of noise.
So if everything is just noise, what do we filter out and keep in — and what systems do we use to do that?
Further reading:
Eryk Salvaggio has been making tech-critical art since the dawn of the Internet. Now he’s a blend of artist, tech policy researcher, and writer focused on a critical approach to AI. He is the Emerging Technologies Research Advisor at the Siegel Family Endowment, an instructor in Responsible AI at Elisava Barcelona School of Design, a researcher at the metaLab (at) Harvard University’s AI Pedagogy Project, one of the top contributors to Tech Policy Press, and an artist whose work has been shown at festivals including SXSW, DEFCON, and Unsound.
In part two of our episode on open source AI, we delve deeper into how we can use openness and participation for sustainable AI governance. Everyone agrees that the proliferation of harmful content is a huge risk — but what we cannot seem to agree on is how to eliminate it.
Alix is joined again by Mark Surman, and this time they both take a closer look at the work Audrey Tang did as Taiwan’s first digital minister, where she successfully built and implemented a participatory framework that allowed the people of Taiwan to directly inform AI policy.
We also hear more from Mérouane Debbah, who built the first LLM trained in Arabic and highlights the importance of developing AI systems that don’t follow rigid Western benchmarks.
Mark Surman has spent three decades building a better internet, from the advent of the web to the rise of artificial intelligence. As President of Mozilla, a global nonprofit backed technology company that does everything from making Firefox to advocating for a more open, equitable internet, Mark’s current focus is ensuring the various Mozilla organizations work in concert to make trustworthy AI a reality. Mark led the creation of Mozilla.ai (a commercial AI R+D lab) and Mozilla Ventures (an impact venture fund with a strong focus on AI). Before joining Mozilla, Mark spent 15 years leading organizations and projects that promoted the use of the internet and open source as tools for social and economic development.
More about our guests:
Audrey Tang, Cyber Ambassador of Taiwan, served as Taiwan’s 1st digital minister (2016-2024) and the world’s 1st nonbinary cabinet minister. Tang played a crucial role in shaping g0v (gov-zero), one of the most prominent civic tech movements worldwide. In 2014, Tang helped broadcast the demands of Sunflower Movement activists, and worked to resolve conflicts during a three-week occupation of Taiwan’s legislature. Tang became a reverse mentor to the minister in charge of digital participation, before assuming the role in 2016 after the government changed hands. Tang helped develop participatory democracy platforms such as vTaiwan and Join, bringing civic innovation into the public sector through initiatives like the Presidential Hackathon and Ideathon.
Sayash Kapoor is a Laurance S. Rockefeller Graduate Prize Fellow in the University Center for Human Values and a computer science Ph.D. candidate at Princeton University's Center for Information Technology Policy. He is a coauthor of AI Snake Oil, a book that provides a critical analysis of artificial intelligence, separating the hype from the true advances. His research examines the societal impacts of AI, with a focus on reproducibility, transparency, and accountability in AI systems. He was included in TIME Magazine’s inaugural list of the 100 most influential people in AI.
Mérouane Debbah is a researcher, educator and technology entrepreneur. He has founded several public and industrial research centers, start-ups and held executive positions in ICT companies. He is professor at Khalifa University in Abu Dhabi, and founding director of the Khalifa University 6G Research Center. He has been working at the interface of AI and telecommunication and pioneered in 2021 the development of NOOR, the first Arabic LLM.
Further reading & resources
In the context of AI, what do we mean when we say ‘open source’? An AI model is not something you can straightforwardly open up like a piece of software; there are huge technical and social considerations to be made.
Is it risky to open-source highly capable foundation models? What guardrails do we need to think about when it comes to the proliferation of harmful content? And, can you really call it ‘open’ if the barrier for accessing compute is so high? Is model alignment really the only thing we have to protect us?
In this two-parter, Alix is joined by Mozilla president Mark Surman to discuss the benefits and drawbacks of open and closed models. Our guests are Alondra Nelson, Mérouane Debbah, Audrey Tang, and Sayash Kapoor.
Listen to learn about the early years of the free software movement, the ecosystem lock-in of the closed-source environment, and what kinds of things are possible with a more open approach to AI.
Mark Surman has spent three decades building a better internet, from the advent of the web to the rise of artificial intelligence. As President of Mozilla, a global nonprofit backed technology company that does everything from making Firefox to advocating for a more open, equitable internet, Mark’s current focus is ensuring the various Mozilla organizations work in concert to make trustworthy AI a reality. Mark led the creation of Mozilla.ai (a commercial AI R+D lab) and Mozilla Ventures (an impact venture fund with a strong focus on AI). Before joining Mozilla, Mark spent 15 years leading organizations and projects that promoted the use of the internet and open source as tools for social and economic development.
More about our guests:
Audrey Tang, Cyber Ambassador of Taiwan, served as Taiwan’s 1st digital minister (2016-2024) and the world’s 1st nonbinary cabinet minister. Tang played a crucial role in shaping g0v (gov-zero), one of the most prominent civic tech movements worldwide. In 2014, Tang helped broadcast the demands of Sunflower Movement activists, and worked to resolve conflicts during a three-week occupation of Taiwan’s legislature. Tang became a reverse mentor to the minister in charge of digital participation, before assuming the role in 2016 after the government changed hands. Tang helped develop participatory democracy platforms such as vTaiwan and Join, bringing civic innovation into the public sector through initiatives like the Presidential Hackathon and Ideathon.
Alondra Nelson is a scholar of the intersections of science, technology, policy, and society, and the Harold F. Linder Professor at the Institute for Advanced Study, an independent research center in Princeton, New Jersey. Dr. Nelson was formerly deputy assistant to President Joe Biden and acting director of the White House Office of Science and Technology Policy (OSTP). In this role, she spearheaded the development of the Blueprint for an AI Bill of Rights, and was the first African American and first woman of color to lead US science and technology policy.
Sayash Kapoor is a Laurance S. Rockefeller Graduate Prize Fellow in the University Center for Human Values and a computer science Ph.D. candidate at Princeton University's Center for Information Technology Policy. He is a coauthor of AI Snake Oil, a book that provides a critical analysis of artificial intelligence, separating the hype from the true advances. His research examines the societal impacts of AI, with a focus on reproducibility, transparency, and accountability in AI systems. He was included in TIME Magazine’s inaugural list of the 100 most influential people in AI.
Mérouane Debbah is a researcher, educator and technology entrepreneur. He has founded several public and industrial research centers, start-ups and held executive positions in ICT companies. He is professor at Khalifa University in Abu Dhabi, and founding director of the Khalifa University 6G Research Center. He has been working at the interface of AI and telecommunication and pioneered in 2021 the development of NOOR, the first Arabic LLM.
Further reading & resources
This week Alix was joined by Kevin De Liban, who just launched Techtonic Justice, an organisation designed to support and fight for those harmed by AI systems.
In this episode Kevin describes his experiences litigating on behalf of people in Arkansas who found their in-home care hours cut aggressively by an algorithm administered by the state. This is a story about taking care away from individuals in the name of ‘efficiency’, and the particular levers for justice that Kevin and his team managed to take advantage of to eventually ban the use of this algorithm in Arkansas.
CW: This episode contains descriptions of people being denied care and left in undignified situations at around 08:17–08:40 and 27:12–28:07.
Further reading & resources:
Kevin De Liban is the founder of Techtonic Justice, and the Director of Advocacy at Legal Aid of Arkansas, nurturing multi-dimensional efforts to improve the lives of low-income Arkansans in matters of health, workers' rights, safety net benefits, housing, consumer rights, and domestic violence. With Legal Aid, he has led a successful litigation campaign in federal and state courts challenging Arkansas's use of an algorithm to cut vital Medicaid home-care benefits to individuals who have disabilities or are elderly.
This week we’re wallowing in post-election catharsis: Alix and Prathm process the result together, and discuss the implications this administration has for technology politics.
How much of a role will people like Elon Musk and Peter Thiel play during Trump’s presidency? What kind of tactics should the left adopt going forward to stop this from happening again? And what does this mean for the technology politics community?
This episode was recorded on Wednesday the 6th of November; we don’t have all the answers but we know we want to move forward and have never been more motivated to make change happen.
For this pre-election special, Prathm spoke with law professor Spencer Overton about how this election has — and hasn’t — been impacted by AI systems. Misinformation and deepfakes appear to be top of the agenda for a lot of politicians and commentators, but there’s a lot more to think about…
Spencer discusses the USA’s transition into a multiracial democracy, and describes the ongoing cultural anxiety that comes with that — and how that filters down into the politicisation of AI tools, both as fuel for moral panics, as well as being used to suppress voters of colour.
Further reading:
Spencer Overton is the Patricia Roberts Harris Research Professor at GW Law School. As the Director of the Multiracial Democracy Project at the GW Equity Institute, he focuses on producing and supporting research that grapples with challenges to a well-functioning multiracial democracy. He is currently working on research projects related to the regulation of AI to facilitate a well-functioning multiracial democracy and the implications of alternative voting systems for multiracial democracy.
For our final episode in this series on the environment, Alix interviewed Karen Hao on how tough it is to report on environmental impacts of AI.
The conversation focuses on two of Karen’s recent stories, linked below. One of the biggest barriers to consistent reporting on AI’s climate injustices is the sheer opaqueness of information about what companies are trying to do when building infrastructure and what they think the actual costs — primarily of energy and water use — will be. Tech companies that Karen has written about enter communities via shell companies and promise relatively big deals for small municipalities if they allow the development of new data centres — and community members often don’t know what they’re signing up for until it’s too late.
Listen to learn about how difficult it is to report on this industry, and the tactics and methods Karen has to use to tell her stories.
Further reading:
Karen Hao is an American journalist who writes for publications like The Atlantic. She was previously a foreign correspondent based in Hong Kong for The Wall Street Journal and a senior artificial intelligence editor at the MIT Technology Review. She is best known for her coverage on AI research, technology ethics and the social impact of AI.
In our third episode about AI & the environment, Alix interviewed Sherif Elsayed-Ali, who’s been working on using AI to reduce the carbon emissions of concrete. Yes, that’s right — concrete.
This may seem like a niche focus for a green initiative, but it isn’t: concrete is the second most used substance in the world because it’s integral to modern infrastructure, and there’s no other material like it. It’s also one of the biggest carbon emitters in the world.
In this episode Sherif explains how AI and machine learning can make the process of concrete production more precise and efficient so that it burns much less fuel. Listen to learn about the big picture of global carbon emissions, and how AI can actually be used to reduce carbon output, rather than just monitor it — or add to it!
Sherif Elsayed-Ali trained as a civil engineer, then studied international human rights law and public policy and administration. He worked with the UN and in the non-profit sector on humanitarian and human rights research and policy, before embarking on a career in tech and climate.
Sherif founded Amnesty Tech, a group at the forefront of technology and human rights. He then joined Element AI (today Service Now Research), starting and leading its AI for Climate work. In 2020, he co-founded and became CEO of Carbon Re, an industrial AI company spun out of Cambridge University and UCL, developing novel solutions for decarbonising cement. He then co-founded Nexus Climate, a company providing climate tech advisory services and supporting the startup ecosystem.
This week we are continuing our AI & Environment series with an episode about a key piece of AI infrastructure: data centres. With us this week are Boxi Wu and Jenna Ruddock to explain how data centres are a gruesomely sharp double-edged sword.
They contribute to huge amounts of environmental degradation via local water and energy consumption, and impact the health of surrounding communities with incessant noise pollution. Data centres are also used as a political springboard for global leaders, where the expansion of AI infrastructure is seen as being synonymous with progress and economic growth.
Boxi and Jenna talk us through the various community concerns that come with data centre development, and the kind of pushback we’re seeing in the UK and the US right now.
Boxi Wu is a DPhil researcher at the Oxford Internet Institute and a Research Policy Consultant with the OECD’s AI Policy Observatory. Their research focuses on the politics of AI infrastructure within the context of increasing global inequality and the current climate crisis. Prior to returning to academia, Boxi worked in AI ethics, technology consulting and policy research. Most recently, they worked in AI Ethics & Safety at Google DeepMind where they specialised in the ethics of LLMs and led the responsible release of frontier AI models including the initially released Gemini models.
Jenna Ruddock is a researcher and advocate working at the intersections of law, technology, media, and environmental justice. Currently, she is policy counsel at Free Press, where she focuses on digital civil rights, surveillance, privacy, and media infrastructures. She has been a visiting fellow at the University of Amsterdam's critical infrastructure lab (criticalinfralab.net), a postdoctoral fellow with the Technology & Social Change project at the Harvard Kennedy School's Shorenstein Center, and a senior researcher with the Tech, Law & Security Program at American University Washington College of Law. Jenna is also a documentary photographer and producer with a background in community media and factual streaming.
Further reading
This week we’re kicking off a series about AI & the environment. We’re starting with Holly Alpine, who recently left Microsoft after spending over a decade starting and growing an internal sustainability programme.
Holly’s goal was pretty simple: she wanted Microsoft to honour the sustainability commitments that they had set for themselves. The internal support she had fostered for sustainability initiatives did not match up with Microsoft’s actions — they continued to work with fossil fuel companies even though doing so was at odds with their plans to achieve net zero.
Listen to learn about what it’s like approaching this kind of huge systemic challenge with good faith, and trying to make change happen from the inside.
Holly Alpine is a dedicated leader in sustainability and environmental advocacy, having spent over a decade at Microsoft pioneering and leading multiple global initiatives. As the founder and head of Microsoft's Community Environmental Sustainability program, Holly directed substantial investments into community-based, nature-driven solutions, impacting over 45 global communities in Microsoft’s global datacenter footprint, with measurable improvements to ecosystem health, social equity, and human well-being.
Currently, Holly continues her environmental leadership as a Board member of both American Forests and Zero Waste Washington, while staying active in outdoor sports as a plant-based athlete who enjoys rock climbing, mountain biking, ski mountaineering, and running mountain ultramarathons.
Further Reading:
In 2017 Google’s urban planning arm Sidewalk Labs came into Toronto and said “we’re going to turn this into a smart city”.
Our guest Bianca Wylie was one of the people who stood up and said “okay but… who asked for this?”
This is a story about how a large tech firm came into a community with big promises, and then left with its tail between its legs. In the episode Alix and Bianca discuss the complexities of government procurement of tech, and how attractive corporate solutions look when you’re so riddled with austerity.
Bianca Wylie is a writer with a dual background in technology and public engagement. She is a partner at Digital Public and a co-founder of Tech Reset Canada. She worked for several years in the tech sector in operations, infrastructure, corporate training, and product management. Then, as a professional facilitator, she spent several years co-designing, delivering and supporting public consultation processes for various governments and government agencies. She founded the Open Data Institute Toronto in 2014 and co-founded Civic Tech Toronto in 2015.
Further Reading:
A Counterpublic Analysis of Sidewalk Toronto
In Toronto, Google’s Attempt to Privatize Government Fails—For Now
What if we could have a public library for compute? But is… more compute really what we want right now?
This week Alix interviewed Teri Olle from the Economic Security Project, a co-sponsor of the California AI safety bill (SB 1047). The bill has been making the rounds in the news because it would force AI companies to do safety checks on their models before releasing them to the public — which is seen as uh, ‘controversial’, to those in the innovation space.
But Teri had a hand in a lesser-known part of the bill: the construction of CalCompute, a state-owned public cloud cluster for resource-intensive AI development. This would mean public access to the compute power needed to train state-of-the-art AI models — finally giving researchers and plucky start-ups access to something otherwise locked inside a corporate walled garden.
Teri Olle is the California Campaign Director for Economic Security Project Action. Beginning her career as an attorney, Teri soon moved into policy and issue advocacy, working on state and local efforts to ban toxic chemicals and pesticides, decrease food insecurity and hunger, and increase gender representation in politics. She is a founding member of a political action committee dedicated to inserting parent voices into local politics and served as the president of the board of Emerge California. She lives in San Francisco with her husband and two daughters.
Applications for our second cohort of Media Mastery for New AI Protagonists are now open! Join this 5-week program to level up your media impact alongside a dynamic community of emerging experts in AI politics and power—at no cost to you. In this episode, we chat with Daniel Stone, a participant from our first cohort, about his work. Apply by Sunday, September 29th!
The adoption of new technologies is driven by stories. A story is a shortcut to understanding something complex. Narratives can lock us into a set of options that are…terrible. The kicker is that narratives are hard to detect and even harder to influence.
But how reliable are our narrators? And how can we use story as strategy?
The good news is that experts are working to unravel the narratives around AI. All so that folks with public interest in mind can change the game.
This week Alix sat down with three researchers looking at three AI narrative questions. She spoke to Hanna Barakat about how the New York Times reports on AI; John Tanner, who scraped and analysed huge amounts of YouTube videos to find narrative patterns; and Daniel Stone, who studied and deconstructed metaphors that power collective understanding about AI.
In this ep we ask:
Hanna Barakat is a research analyst for Computer Says Maybe, working at the intersection of emerging technologies and complex systems design. She graduated from Brown University in 2022 with honors in International Development Studies and a focus in Digital Media Studies.
Jonathan Tanner founded Rootcause after more than fifteen years working in senior communications roles for high-profile politicians, CEOs, philanthropists and public thinkers across the world. In this time he has worked across more than a dozen countries running diverse teams whilst writing keynote speeches, securing front page headlines, delivering world-first social media moments and helping to secure meaningful changes to public policy.
Daniel Stone is currently undertaking research with Cambridge University’s Centre for Future Intelligence and is the Executive Director of Diffusion.Au. He is a Policy Fellow with the Chifley Research Centre and a Policy Associate at the Centre for Responsible Technology Australia.
There are oceans of research papers digging into the various harms of online platforms. Researchers are asking urgent questions, such as how hate speech and misinformation affect our information environment and our democracy.
But how does this research find its way to the media, policymakers, advocacy groups, or even tech companies themselves?
To help us answer this, Alix is joined this week by Issie Lapowsky, who recently authored Bridging The Divide: Translating Research on Digital Media into Policy and Practice — a report about how research reaches these four groups, and what they do with it. This episode also features John Sands from Knight Foundation, who commissioned this report.
Further reading:
Issie Lapowsky is a journalist covering the intersection between tech, politics and national affairs. She has been published in WIRED, Protocol, The New York Times, and Fast Company.
John Sands is Senior Director of Media and Democracy at Knight Foundation. Since joining Knight Foundation in 2019, he has led more than $100 million in grant making to support independent scholarship and policy research on information and technology in the context of our democracy.
Last week, Telegram CEO Pavel Durov landed in France and was immediately detained. The details of his arrest are still emerging; he is being charged with complicity in illegal activities happening on the platform, including the spread of CSAM.
Durov’s lawyer has referred to these charges as “absurd” — because the head of a social media company cannot be held responsible for criminal activity on the platform. That might be true in the US, but does it hold up in France?
This week Alix is joined by Mallory Knodel to talk us through what happened:
Mallory Knodel is The Center for Democracy & Technology’s Chief Technology Officer. She is also a co-chair of the Human Rights and Protocol Considerations research group of the Internet Research Task Force and a chairing advisor on cybersecurity and AI to the Freedom Online Coalition.
That’s the END of Exhibit X folks; if you’ve been following along, congratulations on choosing to become smarter. If not that’s okay, consider this episode a delicious teaser for the series.
In this episode Alix and Prathm engage their large wet brains and pull out the meatiest insights and learnings from the last five episodes. This series has been a delightful intellectual expedition into big tech litigation, knowledge creation, and online speech — if you’re a nerd for any of those things, it would be irresponsible for you to ignore this.
Thank you for listening; we hope to do more deep explorations like this in the future!
What makes an expert witness? How does a socio-technical researcher become one? Now that we’re at the end of this miniseries, we might finally be ready to answer these questions…
In the fifth instalment of Exhibit X, civic tech acrobat Elizabeth Eagen shares her pithy insights on how researchers of emerging technologies are starting to interface with litigators and regulators.
The questions we explore this week:
Elizabeth Eagen is Deputy Director of the Citizens and Technology Lab at Cornell University, which works with communities to study the effects of technology on society and test ideas for changing digital spaces to better serve the public interest. She was a 2022-23 Practitioner Fellow at the Digital Civil Society Lab at Stanford University, and serves as a board member at a number of nonprofit technology organizations.
Imagine: something horrible has happened and the only evidence you have is a video posted online. Can you submit it into evidence in court? Well, it’s complicated.
In part 4 of our Exhibit X series, Alix sat down with Dr. Alexa Koenig to discuss her work with the International Criminal Court. Dr. Koenig and many colleagues are supporting the court to grapple with online evidence and tackling challenges that courts face when they adapt to our digital world.
We answer questions like:
Alexa Koenig, PhD, JD, is Co-Faculty Director of the Human Rights Center, Director of HRC’s Investigations Program, and an adjunct professor at UC Berkeley School of Law, where she teaches classes that focus on the intersection of emerging technologies and human rights. She also co-teaches a class on open source investigative reporting at Berkeley Journalism. Alexa co-founded the Human Rights Center Investigations Lab, which trains students and professionals to use social media and other digital open source content to strengthen human rights research, reporting, and accountability.
Often it feels as though the cases and lawsuits brought against big tech firms are continuously piling up, but there never seems to be any resulting justice or resolution. There are many good reasons for this, two of which are Section 230 and the First Amendment.
Big Tech companies will routinely invoke Section 230 and the First Amendment to get cases against them thrown out before they can go to trial. In part 3 of Exhibit X, Meetali Jain explains how litigators have been playing 4D chess to get the courts to hold these companies accountable.
In this episode we ask…
Meetali Jain is a human rights lawyer, who founded the Tech Justice Law Project in 2023. The Project works with a collective of legal experts, policy advocates, digital rights organizations, and technologists to ensure that legal and policy frameworks are fit for the digital age, and that online spaces are safer and more accountable.
This episode was hosted by Alix Dunn and Prathm Juneja
In part 2 of Exhibit X, Alix interviewed Frances Haugen, who in 2021 blew the whistle on Meta: the company was sitting on the knowledge that its products were harmful to kids, and yet — shocker — it continued to make design decisions that would keep kids engaged.
Mark Zuckerberg worked hard on his image (it’s a hydrofoil, not a surfboard!), while Instagram was being used for human trafficking — the lack of care and accountability here absolutely melts the mind.
What conversations did Frances’s whistleblowing start?
Was whistleblowing an effective mechanism for accountability in this case?
Do we have to add age verification to social media sites or break end-to-end encryption to keep children safe online?
*Frances Haugen is a data scientist & engineer. In 2021 she disclosed 22,000 internal documents to The Wall Street Journal and the Securities and Exchange Commission which demonstrated Meta’s knowledge of its products’ harms.*
Your hosts this week are Alix Dunn and Prathm Juneja
Here is something you’re probably tired of hearing: Big Tech is responsible for a bottomless brunch of societal harms, and it is not being held accountable. Right now it feels as though we hear constantly about laws, regulation, and courts, but none of it seems effective at holding Big Tech to account.
In our latest podcast series Exhibit X, we’re looking at how the tides might finally be turning. Legal accountability could be around the corner, but only if a few things happen first.
To start, we look back to 1964, when Big Tobacco was winning the ‘try your best to profit from harm’ race. Research showed cigarettes were addictive and caused cancer — and yet the industry evaded accountability for decades.
In this episode we ask questions like:
Prathm Juneja was Alix’s co-host for this episode. He is a PhD Candidate in Social Data Science at the Oxford Internet Institute, working at the intersection of academia, industry, and government on technology, innovation, and policy.
Further reading
In the Exhibit X series Alix and Prathm sink their fingernails into the tangled universe of litigation and Big Tech; how have the courts held Big Tech firms accountable for their various harms over the years? Is whistleblowing an effective mechanism for informing new regulations? What about a social media platform’s first amendment rights? So much to cover, so many episodes coming your way!
In part four of our FAccT deep dive, Alix joins Marta Ziosi and Dasha Pruss to discuss their paper “Evidence of What, for Whom? The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool”.
In their paper they discuss how an erosion of public trust can lead to ‘any idea will do’ decisions, which often lean on technology such as predictive policing systems. One such tool is ShotSpotter, a piece of audio surveillance tech designed to detect gunfire — a contentious system which has been sold both as a tool for police to surveil civilians, and as a tool for civilians to keep tabs on police. Can it really be both?
Marta Ziosi is a Postdoctoral Researcher at the Oxford Martin AI Governance Initiative, where her research focuses on standards for frontier AI. She has worked for institutions such as DG CNECT at the European Commission, the Berkman Klein Centre for Internet & Society at Harvard University, The Montreal International Center of Expertise in Artificial Intelligence (CEIMIA) and The Future Society. Previously, Marta was a Ph.D. student and researcher on Algorithmic Bias and AI Policy at the Oxford Internet Institute. She is also the founder of AI for People, a non-profit organisation whose mission is to put technology at the service of people. Marta holds a BSc in Mathematics and Philosophy from University College Maastricht. She also holds an MSc in Philosophy and Public Policy and an executive degree in Chinese Language and Culture for Business from the London School of Economics.
Dasha Pruss is a 2023-2024 fellow at the Berkman Klein Center for Internet & Society and an Embedded EthiCS postdoctoral fellow at Harvard University. In fall 2024 she will be an assistant professor of philosophy and computer science at George Mason University. She received her PhD in History & Philosophy of Science from the University of Pittsburgh in May 2023, and holds a BSc in Computer Science from the University of Utah. She has also co-organized with Against Carceral Tech, an activist group working to ban facial recognition and predictive policing in the city of Pittsburgh.
This episode is hosted by Alix Dunn. Our guests are Marta Ziosi and Dasha Pruss.
Further Reading
In this episode, we speak with Lara Groves and Jacob Metcalf at the seventh annual FAccT conference in Rio de Janeiro.
In part three of our FAccT deep dive, Alix joins Lara Groves and Jacob Metcalf to discuss their paper “Auditing Work: Exploring the New York City algorithmic bias audit regime”.
Lara Groves is a Senior Researcher at the Ada Lovelace Institute. Her most recent project explored the role of third-party auditing regimes in AI governance. Lara has previously led research on the role of public participation in commercial AI labs, and on algorithmic impact assessments. Her research interests include practical and participatory approaches to algorithmic accountability and innovative policy solutions to challenges of governance.
Before joining Ada, Lara worked as a tech and internet policy consultant, and has experience in research, public affairs and campaigns for think-tanks, political parties and advocacy groups. Lara has an MSc in Democracy from UCL.
Jacob Metcalf, PhD, is a researcher at Data & Society, where he leads the AI on the Ground Initiative, and works on an NSF-funded multisite project, Pervasive Data Ethics for Computational Research (PERVADE). For this project, he studies how data ethics practices are emerging in environments that have not previously grappled with research ethics, such as industry, IRBs, and civil society organizations. His recent work has focused on the new organizational roles that have developed around AI ethics in tech companies.
Jake’s consulting firm, Ethical Resolve, provides a range of ethics services, helping clients to make well-informed, consistent, actionable, and timely business decisions that reflect their values. He also serves as the Ethics Subgroup Chair for the IEEE P7000 Standard.
This episode is hosted by Alix Dunn. Our guests are Lara Groves and Jacob Metcalf.
Further Reading
In this episode, we speak with Nari Johnson and Sanika Moharana at this year’s FAccT conference in Rio de Janeiro.
In part two of our FAccT deep dive, Alix joins Nari Johnson and Sanika Moharana to discuss their paper “The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment”.
Nari Johnson is a third-year PhD student in Carnegie Mellon University's Machine Learning Department, where she is advised by Hoda Heidari. She graduated from Harvard in 2021 with a BA and MS in Computer Science, where she previously worked with Finale Doshi-Velez.
Sanika Moharana is a second-year PhD student in Human Computer Interaction at Carnegie Mellon University. As an advocate for human-centered design and research, Sanika practices iterative ideation and prototyping for multimodal interactions and interfaces across intelligent systems, connected smart devices, IoT, AI experiences, and emerging technologies.
Further Reading
In part 1 of our FAccT conference deep dive, Alix Dunn sits down with co-host Andrew Strait from the Ada Lovelace Institute to talk about the history of FAccT and some of the papers being presented at this year’s event.
The Fairness, Accountability and Transparency Conference, or FAccT, is an interdisciplinary conference dedicated to bringing together a diverse community of scholars to explore how socio-technical systems can be built in a way that is compatible with a fair society. The seventh annual FAccT conference was held in Rio de Janeiro, Brazil, from Monday, June 3rd through Thursday, June 6th, 2024, with over five hundred people in attendance.
This episode is hosted by Alix Dunn. Our co-host is Andrew Strait.
Further Reading:
In this episode, we speak with Dr. Kate Sim, one of the core organisers of the Google Worker Sit-In Against Project Nimbus.
Dr. Kate Sim was recently fired from Google, alongside almost 50 other employees, after helping organize a sit-in protesting Project Nimbus, a joint contract between Google and Amazon to provide technology to the Israeli government and military. In this episode, Alix and Dr. Sim discuss technology-enabled violence, Dr. Sim's work in trust and safety, and Google's cancelled Project Maven. They also talk about Dr. Sim's journey into protesting Project Nimbus, the many other voices fighting against the contract, and how Big Tech often obfuscates its responsibility in perpetuating violence. In the end, we arrive at a common lesson: solidarity is our main hope for change.
This episode is hosted by Alix Dunn and our guest is Dr. Kate Sim.
Further Reading
In this episode, we talk about the kinds of jobs that are being created as AI systems grow, how those jobs are evolving, what the labour conditions of those jobs are like, and who is benefitting from these systems.
This episode is hosted by Alix Dunn and guests include James (Mojez) Oyange, Yoel Roth, Catherine Bracy and Cori Crider.
If you have feedback about the episode or a pet subject that you might want to join forces to develop into an episode, please reach out. You can email [email protected] or share an audio note here: speakpipe.com/saysmaybe.
Further Reading
News Articles
Other Links
In this episode, we walk through how misinformation and disinformation have been used in past elections to impact outcomes, where we think AI might make a material difference in how elections play out this year, and where we think responsibility lies for the situation we’re in.
This episode is hosted by Alix Dunn and Prathm Juneja, and guests include Sam Gregory, Josh Lawson, and Claire Wardle.
If you have feedback about the episode or a pet subject that you might want to join forces to develop into an episode, please reach out. You can email [email protected] or share an audio note here: speakpipe.com/saysmaybe
Further Reading
Academic Articles
News Articles
Other Links