Artificial intelligence is enabling us to build predictive algorithms that translate languages and spot diseases as well as or better than humans. But these systems are also being used to make decisions about hiring and criminal sentencing. Do computers trained on vast datasets of human experience learn human biases, like sexism and racism? Is it possible to create an algorithm that is fair for everyone? And should you have the right to know when these algorithms are being used and how they work?
For links to materials referenced in the episode, suggestions for further learning, and guest bios, visit bravenewplanet.org.