Government agencies and the technology companies working with them are optimistic about the potential for AI and facial recognition to make law enforcement safer and more efficient (see FaceFirst and NEC’s cheery, but mildly dystopian, demo videos). But inaccurate software, documented racial bias in algorithms, and the long-term potential for a mass surveillance apparatus all mean the technology should be approached with caution.

How does facial recognition work?

Humans have fingerprints, but they also have “faceprints.” There are dozens of individual data points that can be measured on a human face, from the distance between your eyes to your “skinprint.” Facial recognition software analyzes these features in images of faces and returns its best guess about who a face belongs to; with high-quality pictures and a well-built system, it can be extremely accurate. The increasing availability and efficiency of artificial intelligence have made collecting and analyzing all this data much easier and faster.
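To make that concrete, here is a minimal sketch of the comparison step, assuming the common approach of encoding each face as a fixed-length numeric vector (a “faceprint”). Everything here is illustrative: the 128-dimensional vectors are fabricated rather than produced by a real face-encoder model, and the 0.6 threshold is a made-up value, not any vendor’s actual setting.

```python
import numpy as np

# Illustrative only: a real system would produce these vectors with a
# trained face-encoder model. Here we fabricate two 128-dimensional
# "faceprints" to show how the comparison step works.
rng = np.random.default_rng(seed=0)
known_faceprint = rng.normal(size=128)
probe_faceprint = known_faceprint + rng.normal(scale=0.03, size=128)

def faceprint_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between two faceprints; smaller means more similar."""
    return float(np.linalg.norm(a - b))

# Systems typically declare a match when the distance falls below a tuned cutoff.
THRESHOLD = 0.6  # hypothetical value, not a real vendor's setting

distance = faceprint_distance(known_faceprint, probe_faceprint)
print(f"distance = {distance:.3f}, match = {distance < THRESHOLD}")
```

Where that cutoff sits is a real design choice: set it too loose and the system flags strangers as matches, too strict and it misses the people it’s looking for.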

How is law enforcement using it?

Police and government agencies are using facial recognition in two basic ways: checking faces against a database to identify people (and flag anyone who is wanted), and using ID photos to actively search for specific individuals. The US, the UK, and other countries are experimenting with these technologies on a large scale, but public pushback has slowed them down. Countries like China, on the other hand, don’t need public approval to start experimenting with real-time AI surveillance.
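To make the first of those two modes concrete, here is a toy sketch of a one-to-many watchlist lookup, building on the vector comparison above. The watchlist names and faceprints are entirely fabricated, and the threshold is the same hypothetical value as before.

```python
import numpy as np

# Hypothetical watchlist mapping names to stored faceprints.
rng = np.random.default_rng(seed=1)
watchlist = {name: rng.normal(size=128)
             for name in ("Person A", "Person B", "Person C")}

# A probe faceprint, fabricated here to resemble Person B's entry.
probe = watchlist["Person B"] + rng.normal(scale=0.03, size=128)

THRESHOLD = 0.6  # same hypothetical cutoff as before

# One-to-many search: compare the probe against every stored faceprint
# and flag the closest entry only if it falls under the cutoff.
name, distance = min(
    ((n, float(np.linalg.norm(probe - v))) for n, v in watchlist.items()),
    key=lambda pair: pair[1],
)
if distance < THRESHOLD:
    print(f"Flagged: {name} (distance {distance:.3f})")
else:
    print("No match in watchlist")
```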

What’s the problem?

Many privacy advocates worry that even a well-intentioned system, built with safeguards and limitations, could easily be misused if governments allow it to grow into any form of mass surveillance. Large-scale electronic monitoring and tracking are already fairly standard in most countries, and adding facial recognition would make those systems that much more effective at finding and suppressing individuals or groups a government considers problematic: activists, journalists, political opponents, and so on. Facial recognition may even be able to infer your politics or sexuality, which could put people at real risk in many places. Another concern is algorithmic bias, a well-documented issue: some of the most widely used facial recognition systems have been found to misidentify images of people with very dark or very pale skin at higher-than-average rates, and to be less accurate for women.
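As a rough illustration of how researchers quantify this kind of disparity, the sketch below computes a false match rate per demographic group, one of the metrics used in accuracy audits. The evaluation records here are entirely fabricated and exist only to show the calculation.

```python
from collections import defaultdict

# Fabricated evaluation records:
# (demographic group, faces truly same person?, system declared a match?)
records = [
    ("group_a", False, False), ("group_a", False, True), ("group_a", True, True),
    ("group_b", False, False), ("group_b", False, False), ("group_b", True, True),
]

# False match rate per group: how often the system declares a match
# when the two faces actually belong to different people.
non_matches = defaultdict(lambda: [0, 0])  # group -> [false matches, non-match trials]
for group, is_same_person, predicted_match in records:
    if not is_same_person:
        non_matches[group][1] += 1
        if predicted_match:
            non_matches[group][0] += 1

for group, (fm, total) in non_matches.items():
    print(f"{group}: false match rate = {fm / total:.0%} ({fm}/{total})")
```

A gap between groups in a metric like this is exactly what the audits of commercial systems have reported.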

Putting on a brave new face

As with most twenty-first-century technologies, the question isn’t whether facial recognition will be developed (it certainly will) but how it will be used. China’s systems may not be able to track every citizen in real time just yet, but they may be only a few breakthroughs away from making that a reality, and the rest of the world can certainly catch up. There’s no question that facial recognition will become part of law enforcement operations everywhere, but keeping the process open, transparent, and legal matters for everyone involved.

Image Credit: Visage Technologies via Wikimedia