Google exec issues stark warning on ‘biased’ AI facial recognition tech

Facial recognition tech still carries “inherent biases” and requires more testing to combat issues with diversity, according to a top Google executive.
Diane Greene, the head of cloud computing at Google, has warned that top tech companies must be cautious in their use of the technology.
The warning comes after Amazon’s Rekognition software incorrectly matched 28 members of the US Congress with mugshots of criminal suspects. Those misidentified were disproportionately people of colour. Amazon says the test simply used the wrong confidence settings. However, that same tech is reportedly being used by police forces in the United States.
Google, of course, landed itself in hot water back in 2015 when it was forced to apologise for its Image Search tech identifying a black couple as gorillas.
“We need to be really careful about how we use this kind of technology,” she told the BBC. “We’re thinking really deeply. The humanistic side of AI – it doesn’t have the diversity it needs and the data itself will have some inherent biases, so everybody’s working to understand that.”
“I think everybody wants to do the right thing. I’m sure Amazon wants to do the right thing too. But it’s a new technology, it’s a very powerful technology.”
The American Civil Liberties Union says tech companies and law enforcement should be restricted in the use of facial recognition tech until there’s sufficient debate on its merits.
Speaking of facial recognition more widely, the ACLU said: “Congress should enact a federal moratorium on law enforcement use of this technology until there can be a full debate on what – if any – uses should be permitted.”
Do you think it’s proper for the police to use facial recognition technology to identify possible offenders? Drop us a line @TrustedReviews on Twitter.