
IBM said it was getting out of the facial recognition business last year. Then, Amazon and Microsoft announced prohibitions on law enforcement using their facial recognition tech. Nationwide protests last summer opened the door for a conversation around how these systems should be used by police, amid growing evidence of gender and racial bias baked into the algorithms.

On today’s encore episode, Short Wave host Maddie Sofia and reporter Emily Kwong speak with AI policy analyst Mutale Nkonde about algorithmic bias — how facial recognition software can discriminate and reflect the biases of society.

Nkonde is the CEO of AI For the People, a fellow at the Berkman Klein Center for Internet & Society at Harvard University, and a fellow at the Digital Civil Society Lab at Stanford University.

Why Tech Companies Are Limiting Police Use of Facial Recognition