Artificial intelligence is everywhere, and it is growing more accessible and pervasive. Conversations about AI often focus on technical accomplishments rather than societal impacts, but leading scholar Kate Crawford has long drawn attention to the potential harms AI poses for society: exploitation, discrimination, and more. She argues that minimizing these risks depends on civil society, not on technology itself.
People's ability to govern AI is often overlooked because many approach new technologies with what Crawford calls "enchanted determinism," seeing them as both magical and more accurate and insightful than humans. In 2017, Crawford cofounded the AI Now Institute to explore productive policy approaches to the social consequences of AI. Across her work in industry, academia, and elsewhere, she has started essential conversations about regulation and policy. Issues editor Monya Baker recently spoke with Crawford about how to ensure that AI designers incorporate societal protections into product development and deployment.
Resources
Learn more about Kate Crawford’s work by visiting her website and the AI Now Institute.
Read her latest book, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence.
Visit the Anatomy of an AI System artwork at the Museum of Modern Art, or see and learn about it virtually here.
Working with machine learning datasets? Check out Crawford's critical field guide for guidance on how best to work with these data.