
Wed Jun 27, 2018, 08:35 AM

A.I. Has a Race Problem

Facial recognition software still gets confused by darker skin tones.
By Lizette Chapman
and Joshua Brustein
June 26, 2018, 4:00 AM CDT Corrected June 26, 2018, 10:20 AM CDT


A couple of years ago, as Brian Brackeen was preparing to pitch his facial recognition software to a potential customer as a convenient, secure alternative to passwords, the software stopped working. Panicked, he tried adjusting the room’s lighting, then the Wi-Fi connection, before he realized the problem was his face. Brackeen is black, but like most facial recognition developers, he’d trained his algorithms with a set of mostly white faces. He got a white, blond colleague to pose for the demo, and they closed the deal. It was a Pyrrhic victory, he says: “It was like having your own child not recognize you.”

At Kairos AR Inc., his 40-person facial recognition company in Miami, Brackeen says he’s improved the software by adding more black and brown faces to his image sets, but the results are still imperfect. The same problem bedevils companies including Microsoft, IBM, and Amazon and their growing range of customers for similar services. Facial recognition is being used to help India’s government find missing children and to help British news outlets spot celebrities at royal weddings. More controversially, it’s being used in a growing number of contexts by law enforcement agencies, which are often less than forthcoming about what they’re using it for and whether they’re doing enough about potential pitfalls. Brackeen believes the problem of racial bias is serious enough that law enforcement shouldn’t use facial recognition at all.

Microsoft, IBM, and China’s Face++ misidentified darker-skinned women as often as 35 percent of the time and darker-skinned men 12 percent of the time, according to a report published by MIT researchers earlier this year. The gender gap owes partly to training sets that contain fewer women’s faces. Such software can see only what it’s been taught to see.
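The mechanism the article describes, skewed training data producing skewed error rates, can be illustrated with a toy model. The sketch below is purely hypothetical and is not any vendor's actual system: a one-dimensional "classifier" learns a decision threshold from training data dominated by group A, then is evaluated on both groups. Group B, underrepresented in training and with a slightly shifted feature distribution, ends up with the higher error rate.

```python
import random

random.seed(0)

def sample(group, label, n):
    # Hypothetical 1-D "feature" standing in for a face-embedding score.
    # Group B's distribution is shifted, representing faces the model
    # rarely saw during training.
    shift = 0.0 if group == "A" else 1.0
    return [(random.gauss(label * 2.0 + shift, 1.0), label) for _ in range(n)]

# Imbalanced training set: 900 group-A faces, only 100 group-B faces.
train = (sample("A", 0, 450) + sample("A", 1, 450) +
         sample("B", 0, 50) + sample("B", 1, 50))

# "Training": place the threshold midway between the two class means.
# The means are dominated by group A, so the threshold fits A best.
m0 = sum(x for x, y in train if y == 0) / sum(1 for _, y in train if y == 0)
m1 = sum(x for x, y in train if y == 1) / sum(1 for _, y in train if y == 1)
threshold = (m0 + m1) / 2

def error_rate(data):
    wrong = sum((x > threshold) != y for x, y in data)
    return wrong / len(data)

test_a = sample("A", 0, 500) + sample("A", 1, 500)
test_b = sample("B", 0, 500) + sample("B", 1, 500)
print(f"group A error: {error_rate(test_a):.1%}")
print(f"group B error: {error_rate(test_b):.1%}")
```

Running this, the underrepresented group's error rate comes out noticeably higher even though the "algorithm" itself contains no explicit bias, which is the article's point: the bias lives in the training data.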

“An inaccurate system will implicate people for crimes they didn’t commit”


Engineers are improving how they train algorithms as more agencies purchase the software, but they may not be able to head off growing calls for regulation. The authors of a Georgetown Law report call for state and federal laws governing how police departments use facial recognition and urge police to test regularly for algorithmic bias. In April a group of civil rights organizations said it was “categorically unethical” to deploy real-time facial recognition analysis of footage captured by police body cameras.

Some, including Jennifer Lynch of the Electronic Frontier Foundation, argue that their concerns will only increase as the technology improves. Authorities can merge an accurate facial match with personal information about an individual, such as location, family ties, and voting records, using products such as those from Palantir Technologies Inc. to create a digital dossier on people without their consent or knowledge. “Even if we have a 100 percent accurate system, I don’t want that system,” Lynch says. “That means we can no longer walk around and interact with people without the government knowing who we are, where we are, and who we’re talking to.”
(Corrects Microsoft error rate in fourth paragraph)

BOTTOM LINE - Microsoft says it’s cut its facial recognition error rate to zero percent for everyone except darker-skinned women, but, as with its rivals, those numbers are likely to rise in the real world.

2 replies, 807 views


Response to marble falls (Original post)

Wed Jun 27, 2018, 08:40 AM

1. As something built by people, AI will have human blind spots for quite some time.


Response to WhiskeyGrinder (Reply #1)

Wed Jun 27, 2018, 08:43 AM

2. For me it illustrates that as knowledge is passed on, so are biases.
