Why Tech Companies Are Limiting Police Use of Facial Recognition

In this Wednesday, Feb. 13, 2019, photo, Massachusetts Institute of Technology facial recognition researcher Joy Buolamwini stands for a portrait behind a mask at the school, in Cambridge, Mass. Buolamwini's research has uncovered racial and gender bias in facial analysis tools sold by companies such as Amazon that have a hard time recognizing certain faces, especially darker-skinned women. Buolamwini holds a white mask she had to use so that software could detect her face. (AP Photo/Steven Senne)

IBM said it was getting out of the facial recognition business last year. Then Amazon and Microsoft announced prohibitions on law enforcement use of their facial recognition technology. Nationwide protests last summer opened the door for a conversation about how police should use these systems, amid growing evidence of gender and racial bias baked into the algorithms.

On today’s encore episode, Short Wave host Maddie Sofia and reporter Emily Kwong speak with AI policy analyst Mutale Nkonde about algorithmic bias — how facial recognition software can discriminate and reflect the biases of society.

Nkonde is the CEO of AI For the People, a fellow at the Berkman Klein Center for Internet & Society at Harvard University, and a fellow at the Digital Civil Society Lab at Stanford University.
