A new report details what privacy experts are calling a dangerous misapplication of facial recognition, in which photos of celebrities and digitally doctored images are used to comb for criminals.
According to a detailed investigation by Georgetown Law’s Center on Privacy and Technology, one New York Police Department detective attempted to identify a suspect by scanning the face of actor Woody Harrelson.
After footage from a security camera failed to produce results in a facial recognition scan, the detective ran a test using Google images of the person he concluded was the suspect’s celebrity doppelganger: Woody Harrelson.
The system turned up a match, the report says, and the man it identified was eventually arrested on charges of petit larceny.
Scanning celebrity likenesses isn’t the only way the department is pushing the limits of facial recognition technology.
The report also details how in some cases, images of suspects are digitally altered to increase the likelihood of a successful match.
For instance, in cases where a suspect’s mouth is open — a pose that is more difficult for algorithms to assess — the image is doctored by superimposing a closed mouth over the original.
The report says that those facial features are sourced from stock images of other faces.
The danger is that subtle variations in facial characteristics may affect the outcome of the software’s matching process and increase the likelihood of falsely identifying someone, says the report.
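To make the mechanics concrete, here is a minimal, hypothetical sketch of that kind of pixel compositing. The report does not describe the NYPD’s actual tooling; the function name, array sizes, and “mouth” region below are invented, and tiny NumPy arrays stand in for real photographs. The point it illustrates is the one above: after compositing, part of the submitted image no longer depicts the suspect’s own features.

```python
import numpy as np

def composite_mouth(probe: np.ndarray, donor: np.ndarray,
                    region: tuple) -> np.ndarray:
    """Paste the donor image's region over the same region of the probe.

    A toy illustration of the compositing the report describes: pixels
    from a stock face replace part of the probe photo before it is
    submitted to a matching system.
    """
    edited = probe.copy()          # leave the original probe untouched
    edited[region] = donor[region] # overwrite the chosen region
    return edited

# Stand-in 8x8 grayscale "images": probe is all 10s, donor all 200s.
probe = np.full((8, 8), 10, dtype=np.uint8)
donor = np.full((8, 8), 200, dtype=np.uint8)
mouth = (slice(5, 7), slice(2, 6))  # hypothetical mouth bounding box

edited = composite_mouth(probe, donor, mouth)
```

In this sketch the edited image is identical to the probe everywhere except the pasted region, which now carries the donor’s pixels — precisely the kind of subtle substitution the report warns can sway a match.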
There is currently no overarching set of guidelines for how and when officers can use facial recognition software in investigations, the report says. As a result, similar applications have cropped up across the country.
A recent report from The Washington Post detailed how some officers in the Washington County police department are using artistic renderings — sketches — of suspects to try to find matches in its database of mugshots.
Specifically, the department uses software called Rekognition, which is developed and sold by the e-commerce giant Amazon to law enforcement agencies and other organizations across the country.
Georgetown’s investigation says that at least half a dozen police departments across the country ‘permit, if not encourage, the use of face recognition searches on forensic sketches.’
Many of those sketches are not only hand-drawn but also based on sometimes shaky accounts from eyewitnesses or victims, which can be given hours or even days after a crime occurs.
In a response to The Verge, an NYPD representative said its use of facial recognition software to investigate suspects is only supplementary.
‘No one has ever been arrested on the basis of a facial recognition match alone,’ a police spokesperson told The Verge.
‘As with any lead, further investigation is always needed to develop probable cause to arrest. The NYPD has been deliberate and responsible in its use of facial recognition technology.’
As the sophistication and availability of facial recognition software grows, the technology has found its way into several new frontiers.
Among them are a growing number of U.S. airports, where face scans used to track passengers may soon become ubiquitous, as well as surveillance systems that teach cameras to scan faces and identify suspects.
As the use of facial recognition grows, however, so too does opposition to it.
Recently, San Francisco became the first city in the US to ban the use of facial recognition software by law enforcement, citing its potential to misidentify people for crimes, especially people of color, whose faces the technology has proven less able to scan accurately.