Privacy

Facial Recognition Is Plagued by Problems

Robert Julian-Borchak Williams was arrested in the driveway of his home, taken to the local police station, and interrogated for a crime he did not commit. Mr. Williams had been misidentified by a facial recognition algorithm.

Supporters of facial recognition tout the technology as a powerful tool that helps law enforcement solve crimes. However, Mr. Williams’ story highlights one of the problems with the widespread use of facial recognition software. In addition to the harm done to Mr. Williams, law enforcement squandered valuable time and resources tracking down the wrong person. There are other concerns as well, including perpetual citizen surveillance and unintended consequences that have yet to emerge.

Like all technology, facial recognition has an error rate. False positives can have drastic consequences, including leading law enforcement to arrest and prosecute people innocent of any crime.

In the world of algorithms, errors in input lead to errors in output. According to a Brookings Institution report, because the datasets companies used for testing consisted overwhelmingly of lighter-skinned individuals, facial recognition returns more false positives and false negatives when attempting to identify African Americans. This means that facial recognition used at scale would be substantially likely to have a disparate impact on African Americans, giving individuals in communities already skeptical of police more reason to doubt the intentions of law enforcement.
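To see why even small error rates matter at scale, consider a rough back-of-the-envelope sketch. All of the numbers below (search volume, gallery size, and the per-comparison false-positive rates) are hypothetical, chosen only to illustrate how errors compound across a large database and how unequal error rates translate into disparate impact; they are not drawn from any vendor’s or study’s published figures:

```python
# Back-of-the-envelope estimate of false positives at scale.
# All rates and sizes here are HYPOTHETICAL, for illustration only.

def expected_false_matches(searches: int, gallery_size: int,
                           false_positive_rate: float) -> float:
    """Expected number of innocent people flagged, assuming each search
    compares the probe image against every face in the gallery."""
    return searches * gallery_size * false_positive_rate

SEARCHES = 1_000        # probe images an agency might run in a year
GALLERY = 1_000_000     # faces in the database being searched

# Hypothetical per-comparison false-positive rates, with a higher rate
# for darker-skinned faces to reflect the disparity the Brookings
# report describes (the 10x gap is illustrative, not measured).
RATE_LIGHTER = 1e-6
RATE_DARKER = 1e-5

for label, rate in [("lighter-skinned", RATE_LIGHTER),
                    ("darker-skinned", RATE_DARKER)]:
    print(f"{label}: ~{expected_false_matches(SEARCHES, GALLERY, rate):.0f} "
          "false matches per year")
```

Under these assumptions, even a system that is wrong only once per million comparisons produces on the order of a thousand false matches per year, and a rate ten times higher for one group means ten times as many wrongful leads pointed at that group.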

Furthermore, to access facial recognition software, government agencies contract with private companies such as Clearview AI. These companies aggregate data from a wide range of sources, including hard drives, cell phones, smartwatches, and other wearable devices.

YouTube, Facebook, and other social media sites are scanned for images. Data can also be purchased from data brokers or obtained through bulk data requests, which law enforcement sends to companies like Google in search of criminal suspects.

Given the public/private partnerships that make government use of facial recognition possible, consumers remain unaware of which seemingly benign consumer products could funnel their data into government hands. For example, Amazon’s Ring has partnered with hundreds of local law enforcement agencies, partnerships that make it significantly easier for police to access vast amounts of digital data without a warrant.

Events in Ukraine have exposed another dark side of facial recognition technology. Ukraine has used Clearview AI’s technology to identify deceased Russian soldiers, and the results of these scans, including gruesome images, have then been sent directly to the dead soldiers’ family members.

The Washington Post discussed the ethical concerns of this unprecedented tactic with Stephanie Hare, a surveillance researcher in London. Hare called the practice “classic psychological warfare” and warned that it could set a troubling precedent in the midst of horrific warfare.

Additionally, although the tactic may be designed to cut through Russian propaganda by showing Russians the cost of the war directly, sending graphic images could also cause family members caught in the throes of grief to lash out at Ukraine rather than at Putin.

The thorny ethical issues associated with facial recognition could fill thousands of pages, and many legitimate concerns have yet to materialize. A year ago, the use of facial recognition software to notify relatives of the casualties of war had not yet been contemplated.

Privacy rights continue to dwindle. Given the breakneck speed of technological development, swift action must be taken to ensure privacy does not become a relic of the past.