General Discussion
A.I. Has a Race Problem
Facial recognition software still gets confused by darker skin tones.
By Lizette Chapman
and Joshua Brustein
June 26, 2018, 4:00 AM CDT Corrected June 26, 2018, 10:20 AM CDT
https://www.bloomberg.com/news/articles/2018-06-26/ai-has-a-race-problem
A couple of years ago, as Brian Brackeen was preparing to pitch his facial recognition software to a potential customer as a convenient, secure alternative to passwords, the software stopped working. Panicked, he tried adjusting the room's lighting, then the Wi-Fi connection, before he realized the problem was his face. Brackeen is black, but like most facial recognition developers, he'd trained his algorithms with a set of mostly white faces. He got a white, blond colleague to pose for the demo, and they closed the deal. It was a Pyrrhic victory, he says: "It was like having your own child not recognize you."
At Kairos AR Inc., his 40-person facial recognition company in Miami, Brackeen says he's improved the software by adding more black and brown faces to his image sets, but the results are still imperfect. The same problem bedevils companies including Microsoft, IBM, and Amazon, as well as their growing range of customers for similar services. Facial recognition is being used to help India's government find missing children, and British news outlets spot celebrities at royal weddings. More controversially, it's being used in a growing number of contexts by law enforcement agencies, which are often less than forthcoming about what they're using it for and whether they're doing enough about potential pitfalls. Brackeen believes the problem of racial bias is serious enough that law enforcement shouldn't use facial recognition at all.
Microsoft, IBM, and China's Face++ misidentified darker-skinned women as often as 35 percent of the time and darker-skinned men 12 percent of the time, according to a report published by MIT researchers earlier this year. The gender gap stems from training sets that contain fewer women's faces. Such software can see only what it's been taught to see.
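Disparities like the ones the MIT researchers report are uncovered by breaking a system's accuracy out per demographic subgroup rather than averaging over everyone. This is not the MIT team's methodology; it is a minimal sketch, with made-up audit data, of how per-group error rates of the magnitude cited above would be tabulated:

```python
# Illustrative only: tally misidentification rates per demographic subgroup
# from (group, was_correct) prediction records. All data here is invented.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, was_correct) pairs -> {group: error rate}."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit echoing the disparities described in the article:
records = (
    [("lighter-skinned men", True)] * 99 + [("lighter-skinned men", False)] * 1 +
    [("darker-skinned women", True)] * 65 + [("darker-skinned women", False)] * 35
)
print(error_rates_by_group(records))
# → {'lighter-skinned men': 0.01, 'darker-skinned women': 0.35}
```

An average over the pooled records would show a deceptively low overall error rate; only the per-group breakdown exposes the 35 percent figure.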
An inaccurate system will implicate people for crimes they didn't commit
<snip>
Engineers are improving how they train algorithms as more agencies purchase the software, but they may not be able to head off growing calls for regulation. The authors of the Georgetown report call for state and federal laws governing how police departments use facial recognition and urge police to test regularly for algorithmic bias. In April a group of civil rights organizations said it was "categorically unethical" to deploy real-time facial recognition analysis of footage captured by police body cameras.
Some, including the EFF's Lynch, argue that their concerns will only increase as the technology improves. An accurate image, merged with personal information about an individual such as location, family ties, and voting records, can be pulled together by authorities using products such as those from Palantir Technologies Inc. to create a digital dossier on people without their consent or knowledge. "Even if we have a 100 percent accurate system, I don't want that system," Lynch says. "That means we can no longer walk around and interact with people without the government knowing who we are, where we are, and who we're talking to."
(Corrects Microsoft error rate in fourth paragraph)
BOTTOM LINE - Microsoft says it's cut its facial recognition error rate to zero percent for everyone except darker-skinned women, but as with its rivals, those numbers are likely to rise in the real world.