The average human can recognize approximately 5,000 faces; a well-trained computer algorithm can potentially identify millions of people. Given enough reference photos, computers could eventually identify every single person on earth in a fraction of a second. The potential uses for facial-recognition technology range from minor conveniences, like organizing photos of friends, to major improvements in security and law enforcement. These uses have important data security and privacy implications. The technology is here, but the laws have been slow to catch up.
In the past few months, concerns have emerged as to whether facial-recognition technology, in its current state, suffers from racial bias and can lead to unfortunate cases of mistaken identity. However, little consideration has been given to other problematic aspects of widespread use of the technology, particularly the associated privacy and cybersecurity risks.
Background and Applications
Governments and private companies are both working on facial-recognition technology, with the private sector so far developing the superior technology. Facebook reportedly began developing its “DeepFace” technology in 2007, and by 2014 it was outperforming the algorithm developed and used by the United States Federal Bureau of Investigation. Amazon successfully marketed its “Rekognition” software to law enforcement, among others, who used it to identify suspects in police body camera footage and to identify people who steal packages from porches. But some private-sector employees who contributed to developing facial-recognition technology, as well as local politicians, are concerned about its government applications. This year, the city of San Francisco banned the use of facial-recognition technology in policing.
There are plenty of private-sector applications for facial-recognition technology. Apple uses its “Face ID” technology instead of a passcode or fingerprint reader on newer-model iPhones. Some retail banks use facial recognition to prevent fraud, both by blocking the opening of fraudulent accounts by people presenting fake identification cards and by declining to cash checks presented by people pretending to be someone they are not. Multiple automakers have floated the idea of using similar technology to unlock a vehicle as the owner approaches, or to permit a car to start only when an approved driver is in the driver’s seat. Applications are not limited to security features: Facebook, for example, uses facial-recognition technology to enable users to quickly and easily tag photos of their friends. This is just the beginning, and new and unexpected applications will almost certainly develop.
Despite the great potential of facial-recognition technology, there are clear concerns about its reliability, and, at least in the United States, the current legal framework is only just beginning to grapple with these issues. Facial-recognition technology is not perfect but, with some exceptions discussed below, can be accurate in more than 97% of cases, a rate similar to that at which humans identify faces. However, unlike humans, current facial-recognition software exhibits higher error rates when identifying women and people of color. Error rates also increase as the database of reference photographs grows, because given a large enough sample, many humans start to look alike.
It is currently unclear what legal recourse an individual falsely identified using facial-recognition would have—particularly in the context of private sector applications. While the risks of misidentification in law enforcement cases are obvious, the consequences could be equally detrimental in the civil context: when someone cannot cash their paycheck the day their rent is due, or when a storm is coming and the car will not start to enable them to drive to safety, for example. Some of this risk could potentially be addressed through existing products liability and tort laws, though some level of equipment malfunction might be considered too remote to be deemed compensable, and establishing causation tied to facial-recognition software would likely be difficult. Moreover, it remains to be seen how much of that liability would be borne by implementers (e.g., a security company or a bank) as opposed to the technology developers (like Facebook and Amazon). To the extent products liability and tort doctrines cannot offer adequate protection, regulatory oversight might fill the gaps. Both of our examples above involve industries that are already heavily regulated: banking and motor vehicle safety are each overseen at both the state and federal level. To date, however, regulatory interest in the effects of facial-recognition software has been limited.
Data privacy is another major concern, specifically regarding the collection, use and sharing of data collected by facial-recognition systems. One aspect of facial-recognition technology that has not been sufficiently discussed to date is that its use results in the collection of significant personal data beyond just that relating to the human face (or facial signature) itself. One could imagine, for example, companies seeking to build more effective marketing campaigns not just around the data collected through the websites and social media platforms a consumer visits, but also based on where individuals go in real life. Fashion retailers could, for example, track who is visiting their (and, assuming data sharing, their competitors’) stores, at what times and with which other customers. This would be similar to the collection of data using online cookies (which, unlike facial recognition, is regulated in many jurisdictions, including through notice and, in some cases, consent requirements), but with potentially far more serious privacy implications, because it would permit tracking of real-life habits and behavior.
One challenge from a data privacy perspective is the relative difficulty of determining whether a data subject gave prior consent to data processing. Consent verification is, in a way, plagued by a catch-22: to determine whether a data subject has consented to the processing of his or her data, the system needs to identify the individual concerned; but to identify the individual, the system needs to carry out significant processing of personal data, collecting biometric facial data and comparing it to data in a pre-existing database.
And what of the millions of people whose faces have already been included in facial-recognition databases without their knowledge and consent? Can they request the removal of their facial data from those databases, for example by exercising the right to be forgotten, a right that the EU’s General Data Protection Regulation and the new California Consumer Privacy Act provide for? One study found that approximately half of all Americans are already in a facial-recognition database. It is unlikely that all these individuals would benefit from the right to delete their facial data from these databases (and exercising such a right would require the requesting individuals to first agree to have their face scanned so that their facial signature could be compared with the existing database).
Emerging privacy laws are important for establishing a framework for how to approach personal data—including in emerging technologies like facial-recognition. The GDPR is an important milestone: among other things, it (i) requires that data controllers have a legitimate legal basis for processing personal data, (ii) mandates certain disclosures, and (iii) grants data subjects rights to request that their personal data be deleted or corrected. Under the GDPR, legal grounds for collecting and processing data include that the processing is necessary to protect the vital interests of the data subject or of another natural person, or that the processing is in the performance of a task carried out in the public interest or in the exercise of official authority vested in the data controller. However, privacy laws that could meaningfully be applied to facial recognition remain rare in the United States, and effective enforcement is even more elusive. A handful of facial-recognition software companies have been subject to suit under the Illinois Biometric Information Privacy Act (which requires companies to obtain consent from the data subject prior to creating a biometric template such as that used in facial-recognition). Initial results were mixed, with some courts rejecting facially valid claims because the plaintiff could not show “any financial, physical or emotional injury apart from feeling offended by the unauthorized collection” of biometric data. This highlights one difficulty of crafting an effective remedy where the harm the privacy law seeks to address is both intangible and unquantifiable.
The cybersecurity concerns center on two key risks: (1) that vast amounts of sensitive data tied to a single individual could be stolen (unauthorized access to and misuse of data collected through facial-recognition systems), and (2) that facial-recognition-based systems could be hacked and manipulated (altering how the system works, e.g., to grant building access to an unauthorized person).
As for the first risk, as discussed above, facial-recognition technology can provide the means to collect information about an individual’s real-life habits and behaviors, including in situations where that person did not expect to be identified. The use of facial recognition therefore has the potential to significantly increase the volume and sensitivity of data collected and stored about individuals. Additionally, data on a single individual could be aggregated across multiple facial-recognition sources. In other words, public- and private-sector entities will, collectively, hold vast amounts of information about individuals, including everything from where they were and whom they were with to what they were wearing and which items they stopped to look at. A breach of a database storing data collected through facial-recognition technology could thus lead to identity theft, stalking, extortion and other crimes far more sophisticated than those seen today.
Second, manipulation of a facial-recognition-based system can be devastating. This may include hacking into networks and systems that rely on facial-recognition technology and replacing the facial signature that must be matched with the hacker’s own, which would allow criminals to break into smartphones, homes, cars, bank accounts and highly secure facilities. Yet despite the obvious risks and sensitivity of this data, it is not clear to data subjects and observers how much of it is stored, or by whom.
* * *
Comedian Groucho Marx once quipped: “I never forget a face, but in your case I’ll be glad to make an exception”—in the case of faces collected by facial-recognition systems, that may literally be true. Facial-recognition technology holds tremendous potential for enhancing safety and security, and making life easier. But these benefits come with significant privacy and cybersecurity risks. And while certain types of information, such as personal health information and non-public financial information, are subject to specific data security rules in the United States, facial-recognition data is not. Eventually, government and private sectors will develop responses to manage these risks, but in the meantime legislators, courts, and legal observers should be cognizant of their role in addressing these emerging legal issues in connection with facial-recognition systems.
 Daniel Ilan is a partner in Cleary Gottlieb Steen & Hamilton’s LLP’s New York office. His practice focuses on intellectual property, cybersecurity and privacy. Alexandra K. Theobald is an associate in the firm’s New York office. Her practice focuses on commercial litigation and arbitration, as well as intellectual property and cybersecurity. They may be reached, respectively, at firstname.lastname@example.org and email@example.com.
 R. Jenkins, A. J. Dowsett and A. M. Burton, How Many Faces Do People Know?, Proceedings of the Royal Society B: Biological Sciences, Vol. 285, No. 1888 (Oct. 10, 2018), available at https://doi.org/10.1098/rspb.2018.1319.
 Niraj Chokshi, Facial Recognition’s Many Controversies, From Stadium Surveillance to Racist Software, New York Times (May 15, 2019), available at https://www.nytimes.com/2019/05/15/business/facial-recognition-software-controversy.html.
 Russell Brandom, Why Facebook is beating the FBI at facial recognition, The Verge (July 7, 2014), available at https://www.theverge.com/2014/7/7/5878069/why-facebook-is-beating-the-fbi-at-facial-recognition.
 Christopher Mims, Amazon’s Face-Scanning Surveillance Software Contrasts With Its Privacy Stance, Wall Street Journal (June 21, 2018), available at https://www.wsj.com/articles/the-privacy-paradox-face-recognition-is-techs-next-moral-dilemma-1529596801.
 Kari Paul, Amazon’s strategy to foil porch pirates with facial recognition hints at a ‘dangerous future,’ critics say, MarketWatch (Dec. 18, 2018), available at https://www.marketwatch.com/story/amazons-strategy-for-catching-porch-pirates-with-facial-recognition-could-also-curb-civil-liberties-critics-say-2018-12-14.
 Anne Branigin, Citing Fears of Worsening Racial Bias, San Francisco Votes to Ban Police Use of Facial Recognition Technology, The Root (May 16, 2019), available at https://www.theroot.com/citing-fears-of-worsening-racial-bias-san-francisco-vo-1834815492.
 Mitch Lipka, When financial fraud meets facial recognition, the jig may be up, Reuters (Nov. 18, 2011), available at https://www.reuters.com/article/us-usa-fraud-facialrecognition/when-financial-fraud-meets-facial-recognition-the-jig-may-be-up-idUSTRE7AH23B20111118.
 Tom Simonite, Facebook Creates Software That Matches Faces Almost As Well As You Do, MIT Technology Rev. (Mar. 17, 2014), available at https://www.technologyreview.com/s/525586/facebook-creates-software-that-matches-faces-almost-as-well-as-you-do/.
 Steve Lohr, Facial Recognition Is Accurate, if You’re a White Guy, New York Times (Feb. 9, 2018) (finding error rate of 1% for white males, 12% for darker-skinned males, and 35% for darker-skinned females), available at https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html; Jacob Snow, Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots, ACLU (July 26, 2018) (finding people of color were twice as likely to be falsely identified), available at https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28.
 Biometric facial data is data, such as precise measurements of the distance between eyes or from forehead to chin, that can be used to authenticate a person—arguably a far more intimate version of a fingerprint.
 Clare Garvie, Alvaro Bedoya and Jonathan Frankle, The Perpetual Line-up: Unregulated Police Face Recognition in America, Georgetown Center of Privacy and Technology (Oct. 18, 2016), available at https://www.perpetuallineup.org/.
 Compare Rivera et al. v. Google LLC, No. 1:2016-cv-02714, opinion at 5 (N.D. Ill. 2018) (requiring plaintiff to plead injury in fact) with Rosenbach v. Six Flags Entertainment Corp., 2019 IL 123186 (Ill. Jan. 25, 2019) (reversing the intermediate appellate court to find that a plaintiff need not plead or prove “that they sustained some actual injury or damage beyond infringement of the rights” afforded them under the Biometric Information Privacy Act). The latter decision, by the Illinois Supreme Court, controls.