Join Dr Eleanor Drage and Dr Kerry McInerney as they ask the experts: what is good technology? Is ‘good’ technology even possible? And how can feminism help us work towards it? Each week, they invite scholars, industry practitioners, activists, and more to provide their unique perspective on what feminism can bring to the tech industry and the way that we think about technology. With each conversation, The Good Robot asks how feminism can provide new perspectives on technology’s biggest problems.
The podcast The Good Robot is created by Dr Kerry McInerney and Dr Eleanor Drage. The podcast and the artwork on this page are embedded on this page using the public podcast feed (RSS).
In this episode we talk to two activists, Hat and Nell, from the organisation Stop Oxevision, who are fighting against the rollout of surveillance technologies used on mental health wards in the United Kingdom (UK). We explore how surveillance on mental health wards affects patients who never know exactly when they're being watched, and how surveillance technologies in mental health wards are implemented within a much wider context of unequal power relationships. We also reflect on resistance, solidarity, and friendship, as well as the power of activism to share information and combat oppressive technologies. Please note that this episode does contain distressing content, including references to self-harm.
In this episode, we talk to Sebastián Lehuedé, a Lecturer in Ethics, AI, and Society at King's College London. We talk about data activism in Chile, how water-intensive lithium extraction affects people living in the Atacama desert, the importance of reflexive research ethics, and an accidental Sunday afternoon shot of tequila.
In this episode, we talk to Jill Walker Rettberg, Professor of Digital Culture at the University of Bergen in Norway. In this wide-ranging conversation, we talk about machine vision's origins in polished volcanic glass, whether or not we'll actually have self-driving cars, and that famous Photoshopped Mother's Day photo released by Kate Middleton in March 2024.
In this episode, we talk to Yasmine Boudiaf, a researcher, artist and creative technologist who uses technology in beautiful and interesting ways to challenge and redefine what we think of as 'good'. We discuss her wide-ranging art projects, from using AI to create a library of Mediterranean hand gestures through to her project Ways of Machine Seeing, which explored how machine vision systems are being taught to 'see'. Throughout the episode, we explore how Yasmine creatively uses technology to challenge the colonial gaze and the predominance of Western European ideas and concepts in ethics.
Note: this episode was recorded in Summer 2023
In this episode we talk to Elizabeth Wilson, a professor of gender, sexuality and women's studies at Emory University, a leading scholar on the intersections between feminism and biology, and the author of Gut Feminism. We talk about everything from what feminism can learn from biology to TERFs (trans exclusionary radical feminists), penises, Freud and technology.
Note: this episode was recorded in Spring 2023.
In this episode, we speak to Janneke Parrish, one of the co-founders of Apple Together, a solidarity union at Apple. Apple fired Parrish on 14 October 2021. Since then, she has written an incredible book, continues to advise Apple Together, and is now studying law. We talk about how Apple's culture of silence underlies its aim to surprise and delight the customer, how companies should listen to their workers, and how to be diplomatic and dignified in the face of an institution that is trying to crush you at work.
In this episode, we chat about coming back from summer break, and discuss a research paper recently published by Kerry and the AI ethicist and researcher Os Keyes called "The Infopolitics of Feeling: How race and disability are configured in Emotion Recognition Technology". We discuss why AI tools that promise to be able to read our emotions from our faces are scientifically and politically suspect. We then explore the ableist foundations of what used to be the most famous Emotion AI firm in the world: Affectiva. Kerry also explains how the Stop Asian Hate and Black Lives Matter protests of 2020 inspired this research project, and why she thinks that emotion recognition technologies have no place in our societies.
In this episode, we talk to Amba Kak and Sarah Myers West, co-directors of the AI Now Institute, a leading policy think tank. In the episode, which is the second installment of our EU AI Act series, Amba and Sarah explore why different tech policy narratives matter, the difference between the US and the EU regulatory landscape, why the idea that AI is simply outstripping regulation is an outdated maxim, and finally, their policy wish list for 2024.
In this episode, we talk to Daniel Leufer and Caterina Rodelli from Access Now, a global advocacy organization that focuses on the impact of the digital on human rights. As leaders in this field, they've been working hard to ensure that the European Union's AI Act doesn't undermine human rights or fundamental democratic values. They share with us how the EU AI Act was put together, the Act's particular downfalls, and where the opportunities are for us as citizens or as digital rights activists to get involved and make sure that it's upheld by companies across the world.
Note: this episode was recorded back in February 2024.
We often think that maths is neutral or can't be harmful, because after all, what could numbers do to hurt us? In this episode, we talk to Dr. Maurice Chiodo, a mathematician at the University of Cambridge, who is now based at the Centre for the Study of Existential Risk. He tells us why maths can actually raise big ethical issues. Take the atomic bomb, or the maths used by Cambridge Analytica to influence the Brexit referendum and the US elections. Together, we explore why it's crucial that we understand the role that maths plays in unethical AI.
Follow our IG shenanigans: https://www.instagram.com/thegoodrobotpodcast/?locale=hi_IN
Tweet us: https://twitter.com/thegoodrobot1?lang=en
Watch our TikTok adventures: https://www.tiktok.com/@thegoodrobotpodcast
Listen here: https://open.spotify.com/show/5jbYieHj1QrykdQUeCVpOR or https://podcasts.apple.com/gb/podcast/the-good-robot/id1570237963
We have the best newsletter full of AI updates and reading recs! https://tech.us12.list-manage.com/subscribe?
This is a special live episode because Kerry is talking to Professor Helen Hester at the tech transformed conference in London. Helen is a leading thinker on feminism, technology, and the future of work, and she explores the history of domestic technologies, that is, technology used around the house. It's really important that we understand that technologies like the washing machine were actually not as liberatory for women as we'd like to think. In fact, they may have actually prevented women from rising up against domestic labor. Helen also talks about how medical care is increasingly being outsourced to home spaces, and why smart home technology is making our lives more convenient, but not necessarily less laborious.
In this episode, we talk to Heather Zheng, who makes technologies that stop everyday surveillance, from bracelets that stop devices from listening in on you, to more secure biometric technologies that protect us by identifying us by, for example, our dance moves. Most famously, Zheng is one of the computer scientists behind Nightshade, which helps artists protect their work by 'poisoning' AI training data sets.
In this episode we talk to Caroline Sinders, a human rights researcher, artist, and the founder of Convocation Design & Research. We begin by talking about Gamergate, when women were harassed for being gamers. We also talk about what it's like doing high-risk research on abusive misogynists online, and about experiences of doxing. Just to give you a heads up: we do talk about online harassment in today's episode. If you're facing online harassment and you need immediate help, Caroline's organization offers pro bono support, so just email [email protected] and they'll get back to you.
We’re expected to look amazing online, but also natural. We’re fighting against the gender pay gap, but also spend thousands on cosmetics. In this episode, Ellen Atlanta talks us through the paradoxes of feminism and beauty in the digital sphere.
In this episode, we talk to Dr. Isabella Rosner, a curator at the Royal School of Needlework and a research consultant at Witney Antiques. Isabella tells us about the evolution of embroidery as a technology, and the complex relationship between needlework and feminism. We use this history to shed light on technology and feminism today.
In this episode, we talk to Darren Byler, author of Terror Capitalism and In the Camps: Life in China's High-Tech Penal Colony. We discuss his in-depth research on Uyghur Muslims in China and the role played by technology in their persecution. If you're listening to this on Spotify or wherever you get your podcasts, you can now also watch us on YouTube at The Good Robot Podcast.
In this episode we talk to Thuy Linh Tu, Professor of Social and Cultural Analysis at NYU. We talk about how good technology disperses power while bad technology concentrates it, and the racial history of dermatology, including the connections between the Vietnam War, medical experimentation on incarcerated men in the U.S., and retinol creams. Please note that this episode contains references to medical experimentation and racial violence.
In this very special Good Robot hot take we talk about our new book, The Good Robot: Why Technology Needs Feminism. It's a beautiful new illustrated book in which top scholars, activists, artists, writers, and technologists come together to respond to the prompt: good technology is... Kerry and Eleanor chat about getting its illustrations as tattoos, and you can vote for which one you think we should get tattooed. Then we have some more serious conversations about why good technology is always complicit, whether that be a blood glucose monitor, the Dyson Airwrap, a Tangle Teezer, a water purifier, or Kerry's option: knitting needles.
The book has just launched online and in stores, so you can find it at your local bookshop. We know that it's stocked in Waterstones, Heffers, Blackwell's, Pages of Hackney... and of course this wouldn't be an episode on the complicities of good technology without saying that you can also find it on Amazon.
In this episode we chat to Shannon Vallor, the Baillie Gifford Professor in the Ethics of Data and Artificial Intelligence at the University of Edinburgh and Director of the Centre for Technomoral Futures. We talk about feminist care ethics; technologies, vices and virtues; why Aristotle believed that the people who make technology should be excluded from citizenship; and why we still don't have the kinds of robots that we imagined we'd have in the early 2000s. We also discuss Shannon's new book, The AI Mirror, which is now available for pre-order.
In this episode, we talk to Emily M. Bender and Alex Hanna, AI ethics legends and now co-hosts of Mystery AI Hype Theater 3000, a new podcast where they dispel the hype storm around AI. Emily is a professor of linguistics at the University of Washington and co-author of that stochastic parrots paper you may have heard of, because two very important people on the Google AI ethics team, Timnit Gebru and Meg Mitchell, allegedly got fired over it. Alex Hanna is the director of research at the Distributed AI Research Institute, known by its acronym DAIR, which is now run by Timnit. In this episode, they argue that we should stop using the term AI altogether, and that the world might be better without text-to-image systems like DALL·E and Midjourney. They tell us how the AI hype agents are getting high on their own supply, and give some advice for young people going into tech careers.
This week we chat to Melissa Heikkilä, a senior tech reporter for MIT Tech review, about ChatGPT, image generation, porn, and the stories we tell about AI. We hope you enjoy the show.
In this episode, we talk to Rebecca Woods, a Senior Lecturer in Language and Cognition at Newcastle University. We have an amazing chat about language learning in AI, and she tells us how language is crucial to how ChatGPT functions. She's also an expert in how children learn languages, and she compares this to teaching AI how to process languages.
Happy holidays from your favourite jingle belles at the Good Robot podcast! In this episode we celebrate both the holidays and Eleanor's new book, The Planetary Humanism of European Women's Science Fiction: An Experience of the Impossible, which is a history of women's utopian science fiction from 1666 to 2016. We talk about the ways that women have imagined better places and times and worse ones throughout history, as well as what utopia means politically and why we need it, lesbian bacteria, Hitchcock's The Birds, and weird deep sea fish.
In this episode we talk to British physicist Jess Wade about the 1,923 Wikipedia pages (and counting) she’s created and edited in her aim to put more women and more people of colour onto the online encyclopaedia.
In this episode we welcome Eleanor back from Slovenia, where she was speaking at a conference on digital sovereignty. But what is digital sovereignty, and what does it mean for you and your data?
In this episode, we talk to Azerbaijani journalist Arzu Geybulla, a specialist on digital authoritarianism and its implications for human rights and press freedom in Azerbaijan. She now lives in self-imposed exile in Istanbul. Aside from writing for big publications like Al Jazeera, Eurasianet, and Foreign Policy's Democracy Lab, she also founded Azerbaijan Internet Watch and is writing a political memoir about a lost generation of civil society artists in Azerbaijan. We chat to Arzu about Azerbaijan's use of technology to go after diasporic community members and people who've been exiled from the country, how women are more often targeted than men, subliminal propaganda, misinformation and censorship in the recent Turkish elections, and the importance of tracking and mapping internet censorship and surveillance in authoritarian states.
In this episode, we speak to K Allado-McDowell, a writer, speaker, and musician. They've written three books and an opera libretto, and they established the Artists + Machine Intelligence program at Google AI. We talk about good technology as healing, the relationship between psychedelics and technology, utopianism and the counter-cultural movements in the Bay Area, and the economics of Silicon Valley.
In this episode, we talk to Giada Pistilli, Principal Ethicist at Hugging Face, the company that Meg Mitchell joined following her departure from Google. Giada is also completing her PhD in philosophy and ethics of applied conversational AI at Sorbonne University. We talk about value pluralism and AI, which means building AI according to the values of different groups of people. We also explore what it means for an AI company to actually take AI ethics seriously, as well as the state of feminism in France right now.
In this episode, we talk to Dr. Matt Mahmoudi, a researcher and advisor on artificial intelligence and human rights at Amnesty International, and an affiliated lecturer at the Department of Sociology at the University of Cambridge. We discuss how AI is being used to surveil Palestinians in Hebron and East Jerusalem, both in their bedrooms and in their streets, which Dutch and Chinese companies are supporting this surveillance, and how Israeli security forces have been pivotal to the training of US police. We also think about creative resistance projects like plastering stickers on cameras to notify passers-by that they're being watched.
In this episode, we hear all about Kerry’s trip to Japan (spoiler alert: she loved it) and explore her work on anti-Asian racism and AI. Kerry explains what the very long word ‘techno-Orientalism’ means and how fears and fantasies of East Asia or the so-called ‘Orient’ shape Western approaches to technology and AI. We chat about how US sci-fi genres like cyberpunk use imagery from East and South East Asia to connote scary, dystopian futures where the ‘human’ is indistinguishable from the ‘machine’, and how this mimics old stereotypes about East Asian people as ‘mechanical’ or ‘machinic’.
In this episode, we talk to Dr. Hayleigh Bosher, Associate Dean and Reader in intellectual property law at Brunel University and host of the podcast Whose Song is it Anyway?, a podcast on the intersections of IP [intellectual property] and the music industry. Hayleigh gives us some great insight into tomorrow's legal disputes over AI and music copyright. She tells us why AI can never create an original song, what it takes to sue a generative AI company for creating music in the style of someone, and why generative AI risks missing the point about what creativity is.
In this episode we talk to Meredith Broussard, data journalism professor at the Arthur L. Carter Journalism Institute at New York University. She's also the author of Artificial Unintelligence, which made waves following its release in 2018 by claiming that AI was nothing more than really fancy math. We talk about why we need to bring a little bit more friction back into technology, and about her latest book More Than a Glitch, which argues that AI that's not designed to be accessible is bad for everyone, in the same way that raised curbs between the pavement and the street make urban outings difficult for lots of people, not just wheelchair users.
In this episode we chat to Grace Dillon, Professor in the Indigenous Nations Studies Department at Portland State University. Grace, an Anishinaabe cultural critic and a phenomenal storyteller in her own right, gives an overview of science fiction books by Indigenous writers doing very cool things. We talk about apocalypse and healing, ceremonial science, and the genre of native slipstream.
In this episode, we talk to Mar Hicks, an Associate Professor of Data Science at the University of Virginia and author of Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing. Hicks talks to us about the lessons that the tech industry can learn from histories of computing, for example: how sexism is an integral feature of technological systems and not just a bug that can be extracted from them; how techno-utopianism can stop us from building better technologies; when looking to the past is useful and when it's not; the dangers of the 'move fast and break things' approach, where you build technology just to see what happens; and whether regulatory sandboxes are sufficient to make sure that tech isn't deployed unsafely on an unsuspecting public.
Welcome to this week’s Hot Take, where your hosts Kerry and Eleanor give their candid opinion on the latest in tech news. This week they discuss the rebranding of Twitter as X and how people like Elon Musk have an outsized impact on the daily technologies that we use, on the kinds of technologies that get made and created, and on the kinds of needs that get prioritized when it comes to user preferences and desires. From X to the Barbie movie, they explore why diversity matters in the tech industry, as well as why trying to understand what ‘diversity’ is and what it means in context is much trickier than it sounds.
We talk to Peter Hershock, director of the Asian Studies Development Program and coordinator of the Humane AI Initiative at the East-West Center in Honolulu. We talk to Peter about the kinds of misconceptions and red herrings that shape public interpretations of machine consciousness, and what we can gain from approaching the question of machine consciousness from a Buddhist perspective. Our journey takes us from Buddhist teaching about relational dynamics, which tells us that nothing exists independently from someone or something else, to how to make the best tofu larb.
In this week’s Good Robot Hot Takes, Kerry and Eleanor talk about a group of scientists in Zurich that tried to measure a correlation between brain activity and sexuality using AI. This smacks not only of previous attempts to use AI to try and ‘read’ people’s sexuality, but also of dangerous 19th and 20th century race science. We talk about how the language of science is weaponised against queer people, why there are no real scientific foundations to using AI to detect sexuality, and why science needs to think about sexuality not as fixed or static but wild and infinite.
In this episode we chat to Karen Levy, Associate Professor of Information Science at Cornell University and author of Data Driven: Truckers, Technology, and the New Workplace Surveillance. Karen is an expert in the changing face of long distance driving - she spent ten years doing research with truck drivers. So she’s been looking at how surveillance and automation are changing what it means to be a trucker in the USA. We talk about how truckers are responding to new AI technologies monitoring their behaviour, and what the future holds for the trucking industry. We recorded this a while ago so it’s an audio-only episode.
Welcome to our third episode of the Good Robot Hot Takes. Every two weeks Kerry and Eleanor will be giving their hot take on some of the biggest issues in tech. If you’re a graduate or a jobseeker, this is the episode for you because this week we talk about AI that’s being used for recruitment. That’s right, AI is being used to assess your performance in an interview. In fact companies are claiming that their tools can read your personality by looking at your face, and that this can strip away a candidate’s race and gender. We hope you enjoy the show.
In this episode we chat with Ofri Cnaani, an artist and associate lecturer at Goldsmiths, University of London. Artists are doing amazing things in tech spaces, not just working with tech but also using art to explore how our world is infused with data. Ofri discusses some of her projects with us, including her investigation of the fire that destroyed the National Museum of Brazil in 2018, which prompted a massive crowdsourced appeal for photos of museum exhibits taken by visitors, and her Statistical Bodies project, which humorously looks at what kinds of data about bodies aren't yet useful, like jealousy and social fatigue, or what is impossible to capture about the body.
Welcome to our second episode of the Good Robot Hot Takes, where every week Kerry and Eleanor give you their spicy opinions about top issues in tech. This week we talk about science fiction films, why we love Aliens and Sigourney Weaver, how female AI scientists and professionals are represented on screen, how this contributes to the unequal gender dynamics of the AI industry, why Iron Man's Tony Stark sucks, and why he and Ex Machina's Nathan Bateman aren’t just bad apples but an epidemic of conceited AI scientists on screen.
From using computers to process the work of Thomas Aquinas to using facial recognition to compare portraits of Shakespeare, computational techniques have long been applied to humanities research. These projects are now called the digital humanities, and today we’re interviewing two major figures in this discipline. We talk to Dr Sharon Webb, Senior Lecturer in Digital Humanities in the History Department at the University of Sussex and a Director of the Sussex Humanities Lab, and Caroline Bassett, Professor of Digital Humanities in the Faculty of English and Director of Cambridge Digital Humanities at the University of Cambridge. They tell us about full stack feminism, hidden histories of women's involvement in computing, and what it means to bring feminism into the study of technology.
Welcome to our new format: The Good Robot Hot Takes! In these fun, lively, conversational episodes, we (Eleanor and Kerry) discuss some of the biggest issues in tech, from ChatGPT, and the sexy fembot problem in Hollywood film, to why predictive policing is a scam and why gender recognition is garbage.
This week we're talking about the Future of Life Institute's open letter calling for an AI 'pause' in the wake of ChatGPT. We explore framing large language models as 'foundational' and therefore inevitable, the dangers of AI 'race' rhetoric, why AI's long term harms are given way more attention than its more immediate ones, and how race and gender shape what 'counts' as existential risk.
EDIT - This episode has been re-uploaded to make a correction. Bostrom is associated with the Future of Life Institute, but he is not the Founder or a Founding member, as we originally stated.
In this episode we chat to Laura Forlano, Associate Professor of Design at the Institute of Design at Illinois Institute of Technology. This is a special episode because Laura reads us some of her work on life as a Type 1 diabetic, or in her words, a disabled cyborg calibrated to an insulin pump. Laura’s writing gives us a different kind of insight into good technology, tech that in her case literally keeps her alive, but can also let you down in alarming ways.
This special bonus episode was recorded at the AI Anarchies conference in Berlin. We held a workshop exploring with participants what good technology means for them, and why thinking in terms of ‘good technology’ actually limits us. Two amazing participants offered to be interviewed by us, Christina Lu, who at the time was a software engineer at DeepMind and is now a researcher on the Antikythera program and Grace Turtle, a designer, artist, and researcher that uses experimentation and play, like Table Top Games, LARPing, and simulation design to encourage us to transition to more just and sustainable futures.
In this episode we chat to Louise Hickman, an activist and scholar based at the Minderoo Centre for Technology and Democracy at the University of Cambridge. Louise talks to us about stenography, the process of transcribing speech into shorthand. You may be familiar with this from having seen court reporters write a transcription of a tribunal or case, but many stenographers also do crucial access work to create live captions of someone speaking. Stenographers create their own online dictionaries and then access words really quickly using keyboard shortcuts. We explore the political decision making process of captioning and why this matters.
In this episode we discuss the new generation of Chinese science fiction with two of the genre’s most brilliant translators, editors, writers and researchers. They’ve just published The Way Spring Arrives and Other Stories, an anthology of science fiction written by Chinese women and non-binary writers that aims to overwrite stereotypes about who Chinese SF writers are and what they write about. Regina is an SF writer who works for the CoFutures project at the University of Oslo, and Emily is a writer and translator doing a PhD at Yale in East Asian Languages and Literature.
In this episode we talk to Bridget Boakye, the artificial intelligence (AI) policy leader at the Tony Blair Institute for Global Change. Bridget is an expert in how AI is impacting Africa and the major challenges in implementing AI use across the continent. She tells us about what good technology means in the contexts in which she works and the benefits and drawbacks of Google and other Big Tech companies operating in Africa.
In this episode we talk to Pedro Oliveira, a researcher and sound artist based at the Akademie der Künste in Berlin. Pedro does amazing work investigating border control technologies that listen to asylum seekers and claim to be able to discern where they came from, from the way they speak. In this episode we discuss why these kinds of technologies rely on the assumption that there is an authentic way that a migrant from a particular place should sound. Our quest to unravel vocal 'authenticity' takes us through frequency, timbre, and 1960s synthesisers from East Berlin.
We all know about Microsoft Excel and Outlook, but did you know about the kinds of tech Microsoft develops and sells in the Global South? These include escape management systems for jails, police cars with inbuilt sensors, and software that supports facial recognition systems. To tell us more about this, we talk to Dr Michael Kwet, a visiting fellow of the Information Society Project at Yale Law School and a postdoctoral researcher at the Centre for Social Change at the University of Johannesburg. His extensive investigation of how Global South economies are increasingly dependent on Big Tech companies like Microsoft shows that they get bad deals when they hand over valuable raw materials and labour: they end up seeing little of the vast wealth that Big Tech amasses through this unfair exchange. His work focuses on South Africa, where economies, schools and prisons rely on Microsoft software and services.
In this episode we speak to Abeba Birhane, senior research fellow at Mozilla, about how cognition extends beyond the brain, why we need to turn questions like ‘why aren't there enough Black women in computing’ on their head and actually transform computing cultures, and why human behaviour is a complex adaptive system that can’t always be modelled computationally.
In this episode we talk to Arjun Subramonian, a Computer Science PhD student at UCLA conducting machine learning research and a member of the grassroots organisation Queer in AI. In this episode we discuss why they joined Queer in AI, how Queer in AI is helping build artificial intelligence directed towards better, more inclusive, and queer futures, why ‘bias’ cannot be seen as a purely technical problem, and why Queer in AI rejected Google sponsorship.
In this episode we chat to Su Lin Blodgett, a researcher at Microsoft Research in Montreal, on whether you can use AI to measure discrimination, why AI can never be de-biased, and how AI shows us that categories like gender and race are not as clear cut as we think they are.
Ever worried that AI will wipe out humanity? Ever dreamed of merging with AI? Well these are the primary concerns of transhumanism and existential risk, which you may not have heard of, but whose key followers include Elon Musk and Nick Bostrom, author of Superintelligence. But Joshua Schuster and Derek Woods have pointed out that there are serious problems with transhumanism’s dreams and fears, including its privileging of human intelligence above all other species, its assumption that genocides are less important than mass extinction events, and its inability to be historical when speculating about the future. They argue that if we really want to make the world and its technologies less risky, we should instead encourage cooperation, and participation in social and ecological issues.
Science fiction writer Chen Qiufan (Stanley Chan), author of Waste Tide, discusses the feedback loop between science fiction and innovation, what happened when he went to live with shamans in China, how science fiction can also be a psychedelic, and why it’s significant that linear time arrived from the West and took over ideas of circular or recurring time between Chinese dynasties.
In this episode, the historian of science Lorraine Daston explains why science has long been allergic to emotion, which is seen to be the enemy of truth. Instead, objective reason is science’s virtue. She explores moments where it’s very difficult for scientists not to get personally involved, like when you’re working on your pet hypothesis or theory, which might lead you to select data that confirms your hypothesis, or when you’re confronted with some anomalies in your dataset that threaten a beautiful and otherwise perfect theory. But Lorraine also reminds us that the desire for objectivity can itself be an emotion, as it was when Victorian scientists expressed their heroic masculine self-restraint. She also explains why we should only be using AI for the parts of our world that are actually predictable, and how debugging algorithms is no longer just the engineers’ job: that task is being outsourced to us, the consumers, who are forced to flag downstream effects when things go wrong.
How should governments collect personal data? In this episode, we talk to Dr Kevin Guyan about the census, and the best ways of asking people to identify themselves. We discuss why surveys that you fill in by hand offer less restrictive options for self-identification than online forms, and how queer communities are not just identified but produced through the counting of a census. As Kevin reminds us, who does the counting affects who is counted. We also discuss why looking at histories of identifying as heterosexual and cisgender is also beneficial to queer communities.
In this episode we speak to two brilliant professors here at Cambridge, Mónica Moreno Figueroa and Ella McPherson, about a data project they launched at the University of Cambridge to track everyday racism in the university. We discuss using technology for social good without being obsessed with the technology itself, and the importance of tracking how racism dehumanises people, confuses us about each other, and causes physical suffering, which students of colour have to deal with on top of the ordinary stress of their uni degree.
In this episode, we talk to Louise Amoore, professor of political geography at Durham and expert in how machine learning algorithms are transforming the ethics and politics of contemporary society. Louise tells us how politics and society have shaped computer science practices. This means that when AI clusters data and creates features and attributes, and when its results are interpreted, it reflects a particular view of the world. In the same way, social views about what is normal and abnormal in the world are being expressed through computer science practices like deep learning. She emphasises that computer science can solve ethical problems with help from the humanities, which means that if you work with literature, languages, linguistics, geography, politics and sociology, you can help create AIs that model the world differently.
In this episode we talk to Sarah Franklin, a leading figure in feminist science studies and the sociology of reproduction. In this tour de force of IVF ethics and feminism through the ages, Sarah discusses ethical issues in reproductive technologies, how they compare to AI ethics, how feminism through the ages can help us, Shulamith Firestone’s techno-feminist revolution, and the violence of anti-trans movements across the world.
In this episode we chat to Michelle N. Huang, Assistant Professor of English and Asian American literature at Northwestern University. Chatting with Michelle is bittersweet, as we think collectively about anti-Asian racism and how it intersects with histories and representations of technological development in the context of intensified violence against Asian American and Asian diaspora communities during the COVID-19 pandemic. We discuss why the humanities really matter when thinking about technology and the sciences; Michelle’s amazing film essay Inhuman Figures, which examines and subverts racist tropes and stereotypes about Asian Americans; why the central idea of looking at what's been discarded and devalued, and finding different values and ways of doing things, defines the power of feminist science studies; and what it means to think about race on a molecular level.
In this episode we talk to Sareeta Amrute, Affiliate Associate Professor at the University of Washington who studies race, labour, and class in global tech economies. Sareeta discusses what happened when Rihanna and Greta Thunberg got involved in the Indian farmers protests; how race has wound up in algorithms as an indicator of what products you might want to buy; how companies get out of being responsible for white supremacist material sold across their platforms; why all people who make technology have an ethic, though they might not know it; and what the effects are of power in tech companies lying primarily with product teams.
In this episode Sophie Lewis, author of Full Surrogacy Now and self-defined wayward Marxist, talks about defining good technology for the whole of the biosphere, why the purity of the human species has always been contaminated by our animal and technological origins, why nature is much, much stranger than we think, what that means for the lambs that are now being grown in artificial wombs, and why technologies like birth control and IVF can never liberate women within the power dynamics of our capitalist present.
In this episode we chat to Karen Hao, a prominent tech journalist who focuses on the intersections of AI, data, politics and society. Right now she’s based in Hong Kong as a reporter for the Wall Street Journal on China, tech and society; before this, she conducted a number of high profile investigations for the MIT Technology Review. In our interview we chat about her series on AI colonialism and how tech companies reproduce older colonial patterns of violence and extraction; why both insiders and outside specialists in AI ethics struggle to make AI more ethical when they’re competing with Big Tech’s bottom line; why companies engaging with user attitudes isn’t enough, since we can’t really ever ‘opt out’ of certain products and systems; and her hopes for changing up the stories we tell about the Chinese tech industry.
In the race to produce the biggest language model yet, Google has now overtaken OpenAI’s GPT-3 and Microsoft’s T-NLG with a 1.6 trillion parameter model. In 2021, Meg Mitchell was fired from Google, where she was co-founder of their Ethical AI branch, in the aftermath of a paper she co-wrote about why language models can be harmful if they’re too big. In this episode Meg sets the record straight. She explains what large language models are, what they do, and why they’re so important to Google. She tells us why it's a problem that these models don’t understand the significance or meaning of the data that they are trained on, which means that Wikipedia data can influence what these models take to be historical fact. She also tells us about how some white men are gatekeeping knowledge about large language models, as well as the culture, politics, power and misogyny at Google that led to her firing.
In this episode, we speak to Soraj Hongladarom, a professor of philosophy and Director of the Center for Science, Technology, and Society at Chulalongkorn University in Bangkok. Soraj explains what makes Buddhism a unique and yet appropriate intervention in AI ethics, why we need to aim for enlightenment with machines, and whether there is common ground for different religions to work together in making AI more inclusive.
In this episode we chat to Os Keyes, an Ada Lovelace fellow and adjunct professor at Seattle University, and a PhD student at the University of Washington in the department of Human Centered Design & Engineering. We discuss everything from avoiding universalism and silver bullets in AI ethics to how feminism underlies Os’s work on autism and AI and automatic gender recognition technologies.
In this episode, we talk to Dr Alex Hanna, Director of Research at the Distributed AI Research Institute, which was founded and is directed by her ex-boss at Google, Dr Timnit Gebru. Previously a sociologist working on ethical AI at Google and now a superstar in her own right, she tells us why Google’s attempt to be neutral is nonsense, how the word ‘good’ in ‘good tech’ allows people to dodge getting political when orienting technology towards justice, and why technology may not actually take on the biases of its individual creators but probably will take on the biases of its organisation.
In this episode we chat to Virginia Dignum, Professor of Responsible Artificial Intelligence at the University of Umeå where she leads the Social and Ethical Artificial Intelligence research group. We draw on Dignum’s experience as an engineer and legislator to discuss how any given technology might not be good or bad, but is never valueless; how the public can participate in conversations around AI; how to combat evasions of responsibility among creators and deployers of technology, when they say ‘sorry, the system says so’; and why throwing data at a problem might not make it better.
In this episode, we talk to Blaise Agüera y Arcas, a Fellow and Vice President at Google research and an authority in computer vision, machine intelligence, and computational photography. In this wide ranging episode, we explore why it is important that the AI industry reconsider what intelligence means and who possesses it, how humans and technology have co-evolved with and through one another, the limits of using evolution as a way of thinking about AI, and why we shouldn’t just be optimising AI for survival. We also chat about Blaise’s research on gender and sexuality, from his huge crowdsourced surveys on how people self-identify through to debunking the idea that you can discern someone’s sexuality from their face using facial recognition technology.
In this episode, we talk to Dr. Kate Chandler, Assistant Professor at Georgetown and a specialist on drone warfare. We recorded this interview the day that Russia invaded Ukraine, which reminded us of just how urgent a task it is to rethink the relationship between tech innovation and warfare. As Kate explains, drones are more than just tools, they’re also intimately tied to political, economic and social systems. In this episode we discuss the historical development of drones - a history which is both commercial and military - and then explore a better future for these kinds of technologies, one where AI innovation money comes from nonviolent sources, and AI can be used for the prevention of violence.
In this episode, we chat to Meryl Alper, Associate Professor of Communication Studies at Northeastern University. We discuss histories of technological invention by disabled communities, the backlash against poor algorithmically transcribed captions or ‘craptions’, what it actually means for a place or a technology to be accessible to disabled communities with additional socio-economic constraints, and the kinds of augmentative and alternative communication (AAC) devices, like the one used by Stephen Hawking, that are being built by non-speaking people to represent different kinds of voices.
In this episode, we chat with Professor Wendy Chun, who is Simon Fraser University's Canada 150 Research Chair in New Media. As both an expert in Systems Design Engineering and English Literature, her extraordinary analysis of contemporary digital media bridges the humanities and STEM sciences to think through some of the most pressing technical and conceptual issues in technology today. Wendy discusses her most recent book, Discriminating Data, where she explains what is actually happening in AI systems that people claim can predict the future, why Facebook friendship has forced the idea that friendship is bidirectional, and how technology is being built on the principle of homophily, the idea that similarity breeds connection.
In this episode, we chat to Dr Leonie Tanczer, a Lecturer in International Security and Emerging Technologies at UCL and Principal Investigator on the Gender and IoT project. Leonie discusses why online safety and security are not the same when it comes to protection online; how to identify bad actors while protecting people’s privacy; how we can use ‘threat modelling’ to account for and envision harmful unintended uses of technologies, and how to tackle bad behaviour online that is not yet illegal.
In this episode we chat to Professor Jason Edward Lewis, the University Research Chair in Computational Media and the Indigenous Future Imaginary at Concordia University in Montreal. Jason is Cherokee, Hawaiian and Samoan and an expert in indigenous design in AI. He’s the founder of Obx Labs for Experimental Media and the co-director of a number of research groups such as Aboriginal Territories in Cyberspace, Skins Workshops on Aboriginal Storytelling and Video Game Design, and the Initiative for Indigenous Futures. In this episode we discuss how indigenous communities think about what it means for humans and AI to co-exist, why we need to rethink what it means to be an intelligent machine, and why mainstream Western modes of building technology might actually land us with Skynet.
In this episode, we chat to Neema Iyer, a technologist, artist and founder of Pollicy, a civic technology organisation based in Kampala, Uganda. We discuss feminism and building AI for the world's fastest growing population, what feminism means in African contexts, and the challenges of working with different governments and regional bodies like the African Union.
In this episode, we talk to Frances Negrón-Muntaner, an award-winning filmmaker, writer, and scholar and Professor of English and Comparative Literature at Columbia University, New York City. We discuss her Valor y Cambio or Value and Change project that brought a disused ATM to the streets of Puerto Rico filled with special banknotes. On the banknotes were the faces of Black educators, abolitionists and visionaries of a Caribbean Confederacy - people who are meaningful and inspirational to Puerto Ricans today. The machine asked the person retrieving bills what they valued, and in doing so, sparked what Frances calls decolonial joy. Together, we explore the unintended repurposing of technologies for decolonial and anti-capitalist purposes.
In this episode, we chat to Maya Indira Ganesh, the course lead for the University of Cambridge Master of Studies programme in AI Ethics and Society. She transitioned to academia after working as a feminist researcher with international NGOs and cultural organisations on gender justice, technology, and freedom of expression. We discuss the human labour that is obscured when we say a machine is autonomous, the YouTube phenomenon of ‘Unboxing’ Apple products, and why AV ethics isn’t just about trolley problems.
In this episode, we chat to Jess Smith, a PhD student at the University of Colorado in Information Science and co-host of the Radical AI podcast who specialises in the intersections of artificial intelligence, machine learning and ethics. We discuss the tensions between Silicon Valley’s ‘move fast and break things’ mantra and the slower pace of ethics work. She also tells us how we can be mindful users of technology and how we can develop computer science programs that foster a new generation of ethically-minded technologists.
In this episode we chat to Ranjit Singh, a postdoctoral scholar for the AI On The Ground team at the New York-based Data & Society Research Institute. We discuss India’s Biometric Identification System, the problems with verifying a population of a billion people, and the difficulties in having to check whether beneficiaries of state pensions are still alive. We also talk about the problems with classification systems, and how we can better understand the risks posed by biometrics through looking at the world from the perspective of a biometric system, in high and low resolution.
In this episode, we talk to Dylan Doyle-Burke, a PhD student at the University of Colorado Boulder and host of the Radical AI podcast, previously a PhD student at the University of Denver in religious studies and a minister at the Unitarian Universalist church, an inclusive and liberal US-based congregation. We discuss the challenges and advantages of thinking through ethical issues in AI using Christian spiritual traditions, particularly approaches from liberation theology.
In this episode, we chat to Anne Anlin Cheng, Professor in the Department of English at Princeton University and author of Ornamentalism. We discuss how Asiatic femininity has historically been associated with ornamental extravagance and objecthood, and why we see so many of these stereotypes in visions of the future, like Ghost in the Shell, Blade Runner and Ex Machina, which are populated with geishas and concubines.
[We would like to apologise for the sound quality on this episode, we had technical difficulties]
In this episode, Eleanor chats to Rosi Braidotti, one of the leading philosophers of our time and a Distinguished Professor at Utrecht University. Her pioneering theory of posthumanism is a way of thinking that she believes is key to understanding the posthuman condition within which we all exist. We are releasing this conversation in two parts. In this first part, she explains how to embrace the crises and possibilities of advanced capitalism, what it means for NASA to choose Leonardo da Vinci’s Vitruvian Man as one of its logos, and why colonising outer space risks repeating the worst features of terrestrial capitalism. Look out for the bonus episode for the second half of this interview, which will be released very soon.
This bonus episode is the second half of our conversation with Rosi Braidotti. In this part, Braidotti discusses the culture wars, genealogies of Black feminisms, the relationship between gender and capitalism, the rise of neoliberal feminism and the effect that has had on solidarities between generations of feminists, and of course, the feminist posthuman project. She takes us from Virginia Woolf to Alice Walker, Paul Preciado to Shulamith Firestone. She explains why Firestone predicted some of the reproductive possibilities we now have on offer, but failed to see that capitalism, not revolution, would be the source of these reproductive freedoms. She explains why corporations like IBM, which have been thinking about gender as a spectrum, inherit these ideas from John Money and the gender reassignment clinics back in the 60s, and why most good predictions about capitalism can be attributed to Gilles Deleuze and Félix Guattari. We hope you enjoy the show.
In this episode we chat to Jennifer Lee, the technology and liberty project manager at the American Civil Liberties Union or ACLU of Washington, a non-profit organisation that fights for racial and gender equality and has been one of the leading voices in opposing facial recognition technology. Jen explains why we need to underscore the power dynamics in any decision to build, design, and use a technology, and why Microsoft’s new $22 billion contract to provide the military with technology affects how the tech industry defines good technology. Whether it’s the NYPD using automated license plate readers to track Muslim communities, or the 400,000 Michigan residents having their unemployment checks wrongfully withheld due to false fraud determinations, Jen tells us what can be done about the wrongful use of powerful technologies. We hope you enjoy the show.
In this episode we chat to Cynthia Bennett, one of the leading voices in AI and Accessibility and Disability Studies. She’s currently a researcher at Apple and a postdoctoral scholar at Carnegie Mellon University’s Human-Computer Interaction Institute. We discuss combatting the model of disability as deficit, how feminism and disability approaches can help democratise whose knowledge about AI is taken into consideration when we build technology, and why the people who make technology need to be representative of the people who use it. We also discuss the things that go wrong with AI that helps disabled users navigate their environment, particularly what can go wrong when using images labelled by humans.
In this episode, we chat to Kanta Dihal, Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence who leads the Decolonising AI project. We discuss the stories that are being told about AI, why these stories need to be decolonised, what that means, and how we should go about it. We discuss the need to examine a plurality of local stories about AI and Kanta recommends us her favourite science fiction narratives from around the globe that are challenging the supremacy of science fiction from the Anglophone West.
In this episode, we chat to Neda Atanasoski, Professor and Chair of Feminist Studies at UC Santa Cruz, about the relationship between technology, racial capitalism, and histories of gendered and racialised labour.
In this episode, we chat to David Adelani, a computer scientist, PhD candidate at Saarland University in Germany, and active member of Masakhane. Masakhane is a grassroots organisation whose mission is to strengthen and support natural language processing research in African languages. There are over 2000 African languages, so David and the Masakhane team have their work cut out for them. We also discuss how to build technology with few resources and the challenges and joys of participatory research.
In this episode, we chat to Jack Halberstam, Professor of Gender Studies and English at Columbia University, about the relationship between resistance and invention, and why social media is even worse than we already know. He asks: Why does the state need to know your gender? Why are bodies subjected to technological recognition and how can we evade it? How are homophobia and transphobia operating under the banner of “security”, in, for example, AI used in airports? What’s the glitch in Ex Machina? How has the family shifted during lockdown? and much more.
Content Warning: This episode contains references to invasive and transphobic practices at the airport.
In this episode, we chat to Sneha Revanur, who is founder and president of Encode Justice, a global, youth-led coalition working to safeguard civil rights and democracy. Sneha has a fellowship at Civics Unplugged, is a Justice Initiative Committee Member at Harvard Law School, is a Civil Rights Policy Fellow at The Greater Good Initiative, a school program leader at Opportunity X, and a National Issue Advocacy Committee Criminal Justice Lead for the High School Democrats of America group. All this, and she’s still completing high school. We discuss intergenerational communication, what feminism means to young activists, why Gen Z are particularly empathetic and concerned with issues of equality, and why young activists are in an especially good position to deal with ethical problems around technology.
In this episode, we chat to Catherine D’Ignazio, Assistant Professor of Urban Science and Planning in the Department of Urban Studies and Planning at MIT and Director of the Data + Feminism Lab, about data feminism, what that means, and why feminism matters in data science. We talk about applying programming skills to social justice work, the tension between corporate and social good, and how technology can be oriented towards the feminist project of shifting power. D’Ignazio explains what would be needed to reshape the model of accountability in AI and why ‘better’ technology might not be less harmful. She argues that data work can be most effective at producing better outcomes when grounded in feminist scholarship and practice. We hope you enjoy the show.
Content Warning: This episode contains a brief discussion of femicide.
In this episode, we chat with N. Katherine Hayles, Distinguished Professor of English at UCLA and James B. Duke Professor of Literature Emerita at Duke University, about feminism, embodiment, cognition, and human-AI relationships. We explore the role of feminism in science and technology, what productive conversations between engineers and humanities scholars look like, literary depictions of non-human embodiment and cognition, and the distribution of cognition across human-AI systems.
In this episode, we chat with Anita Williams, online counter-abuse policy and platform protection specialist, about the new challenges arising in the area of online abuse and how abusers exploit platforms and systems. We explore the multiple and intersectional harms that can arise from new technologies, the ethical problems around data collection, the protections required for content moderators, and the need to build women’s experiences into new technologies.
Content Warning: This episode contains discussions of online sexual abuse, grooming, and child sexual abuse.
In this episode, we chat with The Venerable Tenzin Priyadarshi, President and CEO of the Dalai Lama Centre for Ethics and Transformative Values at MIT, about how Buddhism can inform AI ethics. We discuss the problem with metrics, how to make meaningful contributions to ethics and avoid virtue signalling, why self-driving cars coming out of Asia and Euro-America prioritise the safety of different road users, and whether we should be trying to make machines intelligent, wise, or empathetic.
In this episode, we chat with Priya Goswami, anti-FGC activist, award-winning filmmaker, and CEO and founder of the AI-driven app Mumkin, about feminist data practices and app design. We discuss designing and building apps that do good, why an app’s “users” are actually “participants”, and why you cannot compromise on the participants’ privacy and safety. Priya explains what it means to design an app as an activist, why feminism should be normalised, and the problem with running activist campaigns on social media.
Content Warning: This episode contains discussion of female genital cutting, or FGC, and gender-based violence.
Welcome to The Good Robot! In this teaser trailer, Eleanor and Kerry briefly chat about who they are, what they do, and why they’re starting The Good Robot podcast. Plus, they give you a sneak peek into the super exciting topics they’ll be exploring in upcoming episodes!