Facial recognition is terrifying – and it’s here to stay

Facial recognition is a cutting-edge technology that’s being rolled out across pretty much every platform you can think of.

In the consumer space, everything from laptops to mobile phones and even smart home systems uses the tech to identify users and act accordingly.

On the one hand, these features are excellent time-savers. Windows Hello, though a little finicky, is a convenient bit of tech that makes it quick and easy to unlock your laptop without having to remember and input complex passwords.

In government and enterprise spaces it’s also making waves, with police forces using it to spot and arrest criminals and businesses looking for ways to identify returning customers. There’s also research into using the tech for making payments and checking into flights at airports – which would make both processes significantly less hassle – and in healthcare, where your face could be scanned for early signs of illness.

But beneath the shiny veneer of convenience there’s a dark side that’s outright terrifying, with real potential for misuse by prying governments and ne’er-do-well identity thieves. What’s more, thanks to the friendly face companies are putting on the tech, most consumers are blind to the dangers and unwittingly putting themselves at risk.

This was most recently showcased by a case relating to IBM and its alleged misuse of the tech during a “Diversity in Faces” trial it demoed in January. The trial saw the company use a custom AI to analyse and describe a collection of 99.2 million photos taken from Flickr.

The idea was that the AI could be trained to accurately identify a more diverse range of faces, avoiding the false positives and unintended biases that can develop within recognition systems – a known problem that has led some commentators to question whether the tech is appropriate for use by law enforcement.
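
To make that bias problem concrete, here’s a minimal, hypothetical sketch (our illustration, not IBM’s actual method) of how a face matcher can be audited: measure the false positive rate separately for each demographic group and compare.

```python
# Hypothetical auditing sketch (not IBM's method): measure how often a face
# matcher wrongly flags two different people as the same person, broken down
# by demographic group.
from collections import defaultdict

def false_positive_rates(trials):
    """trials: iterable of (group, predicted_match, actually_same_person).

    Returns the false positive rate per group: the share of genuinely
    non-matching pairs that the system nevertheless flagged as matches.
    """
    flagged = defaultdict(int)    # wrong matches per group
    negatives = defaultdict(int)  # genuinely non-matching pairs per group
    for group, predicted, actual in trials:
        if not actual:            # only non-matching pairs can yield false positives
            negatives[group] += 1
            if predicted:
                flagged[group] += 1
    return {g: flagged[g] / n for g, n in negatives.items()}

# A biased system shows diverging rates, e.g. {'group_a': 0.01, 'group_b': 0.08}:
# people in group_b are falsely matched eight times as often.
```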

The issue, according to NBC News, is that IBM didn’t request permission to use the images from Flickr, the photographers, or the people pictured. Putting aside how disturbing this is in the first place, what makes it truly scary is that, according to a statement from IBM to Trusted Reviews, it acted entirely within the letter of the law.

“We take the privacy of individuals very seriously and have taken great care to comply with privacy principles, including limiting the Diversity in Faces dataset to publicly available image annotations and limiting the access of the dataset to verified researchers,” read the statement.

“Individuals can opt-out of this dataset. IBM has been committed to building responsible, fair and trusted technologies for more than a century and believes it is critical to strive for fairness and accuracy in facial recognition.”

This showcases a key problem with consumers’ attitude to AI: we’re still willing to play fast and loose with the personal data and information we post online.

As noted by Kaspersky Lab’s principal security researcher David Emm:

“This technology also presents significant privacy risks. Every individual has a right to know who is using our data and how they are doing so. Most people don’t know the risks they face online – or the value of their personal details.”

He continued:

“Research we conducted last year with 7,000 consumers across Europe showed that nearly two-thirds (64%) of people did not know all the places where their personal data was stored. In addition, half (50%) of people do not know how much their data is worth.”

This is particularly concerning when you consider that facial recognition technology is already being actively trialled by law enforcement in the UK.

South Wales Police has been trialling facial recognition tech from Japan’s NEC Group in Cardiff. The trial is running now, using the city’s CCTV network and cameras mounted on police cars and vans to monitor and identify wanted criminals and suspects.

In the Cardiff trial the tech works slightly differently to the type demoed by IBM. It should only reference data taken from the force’s captured footage archive and known-offenders list. This alleviates any concerns about police using data taken from social media, but it also brings us full circle, back to the original reason IBM wanted to run the Diversity in Faces trial in the first place.

As Kaspersky’s Emm noted, without a suitably diverse dataset – like the one IBM was attempting to build – such a network cannot be trusted.

“It’s important for law enforcement – and other implementers of the technology – to remember that facial recognition technology is not perfect,” he said.

“We’ve seen problems that still exist. For example, the recent case with Amazon’s facial recognition technology demonstrated that there is a lot of work to be done before this can be considered ‘reliable technology’ and for its results to be trusted in environments where there is no room for error. The fact is that this system is nowhere near perfect.”

Amazon’s Rekognition technology was blasted in 2018 after it incorrectly matched 28 members of the US Congress against a database of police mugshots – and those it fingered were disproportionately people of colour.

Amazon chalked the blunder up to human error and the wrong settings being applied, but if this kind of technology is to be utilised by law enforcement agencies around the world, better training of users, as well as better application of the technology, will be required.
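
For context, Amazon’s public response argued the test had used Rekognition’s default 80% similarity threshold rather than the far stricter 99% it recommends for law enforcement. Here’s a minimal sketch of where that setting lives, using the real boto3 Rekognition client (image loading and error handling omitted):

```python
# Minimal sketch: the SimilarityThreshold parameter decides what Rekognition
# reports back as a face match. Image loading and error handling are omitted.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

def face_matches(probe: bytes, candidate: bytes, threshold: float = 99.0):
    """Return similarity scores for matches at or above `threshold`."""
    response = rekognition.compare_faces(
        SourceImage={"Bytes": probe},
        TargetImage={"Bytes": candidate},
        SimilarityThreshold=threshold,  # 80 is the default; 99 is far stricter
    )
    return [match["Similarity"] for match in response["FaceMatches"]]
```

Run at a threshold of 80, the same pair of images can produce a “match” that simply disappears at 99 – which is why the choice of setting, not just the underlying model, determines how trustworthy the results are.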

Similarly, London’s Metropolitan Police used facial recognition tech to monitor the Notting Hill Carnival in 2017, matching faces against a database of 20 million facial images – but the technology had a 98% false positive rate, incorrectly flagging 35 people at the Carnival.
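
To put that 98% figure in context (a rough reading on our part, assuming it describes the share of flagged matches that turned out to be wrong): if the 35 incorrect flags represented 98% of everything the system raised, it produced roughly 35 / 0.98 ≈ 36 alerts across the whole event – meaning only about one genuine match.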

South Wales Police hadn’t responded to Trusted Reviews’ request for clarification on how many arrests have been made using the tech at the time of publication, so fully gauging the effectiveness of the trial is tricky.

But if Emm’s comments are accurate, this puts most consumers between a rock and a hard place. On the one hand, we can maintain our privacy but forgo the benefits of facial recognition at anything beyond a local level – missing out on things like improved security services, streamlined check-ins at airports and, potentially in the future, a new way to pay for goods and services.

Or we can accept that governments will follow IBM’s strategy, and risk opening ourselves up to Big Brother surveillance. Neither option sounds brilliant, though we personally think that, until legal safeguards are in place and consumers are more aware of how their data is being used, option one is the only sensible way to go.

Do you agree? Let us know on Twitter @TrustedReviews
