Earlier this month, IBM said it was getting out of the facial recognition business. Then Amazon and Microsoft announced restrictions on police use of their facial recognition technology. There's growing evidence that these algorithmic systems are riddled with gender and racial bias. Today on the show, Short Wave speaks with AI policy researcher Mutale Nkonde about algorithmic bias: how facial recognition software can discriminate and reflect the biases of society.