Facial Recognition at Dispensaries Violates Privacy

Written by Lance Griffin

Recall your favorite sci-fi dystopia. Sleek technology scans the heroine’s face. It instantly identifies her and calls out her name in an ominous robotic voice. An alarm sounds. Then a robo-SWAT team busts through the windows.

This technology is no longer science fiction.

In fact, a number of cannabis dispensaries have integrated facial recognition surveillance systems, which analyze faces in real time for rapid identification. But the ethics are murky and the regulations sparse.
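How does real-time identification work in practice? Vendors rarely publish their code, but the following is a minimal sketch of the general pattern, assuming a cloud face-search API such as AWS Rekognition; the collection name, image file, and similarity threshold below are illustrative placeholders, not details from any actual dispensary system.

    import boto3

    # Hypothetical sketch: match a frame from a door camera against a
    # previously enrolled "watchlist" collection of faces. The collection
    # ID, file name, and threshold are placeholders.
    rekognition = boto3.client("rekognition", region_name="us-east-1")

    with open("door_camera_frame.jpg", "rb") as f:
        frame_bytes = f.read()

    response = rekognition.search_faces_by_image(
        CollectionId="dispensary-watchlist",  # faces enrolled earlier via index_faces
        Image={"Bytes": frame_bytes},
        FaceMatchThreshold=90,                # minimum similarity (%) to count as a match
        MaxFaces=1,
    )

    for match in response["FaceMatches"]:
        print(f"Matched enrolled face {match['Face']['FaceId']} "
              f"(similarity {match['Similarity']:.1f}%)")

A handful of lines, in other words, turns a camera feed into a standing identity check, and every frame submitted can be logged and retained.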

The obvious uses for dispensaries are to stop masked criminals at the door, to identify and tag problematic patrons, and to confirm the age of consumers. But the technology also collects and stores data. Dispensary owners may voluntarily hand this data over to police departments. Law enforcement has been quietly building virtual lineups with facial recognition for years. U.S. Immigration and Customs Enforcement (ICE) was recently revealed to have mined millions of driver’s license photos with facial recognition.

Even if our faces don’t end up in the hands of law enforcement, data theft from cannabis dispensaries is more than a possibility: it has already happened. How cybercriminals might exploit such sensitive biometric data could prove nightmarish.

Then there is the systemic sociological problem: facial recognition may discriminate against minorities, who are perhaps the greatest victims of cannabis prohibition. As one company points out, “We enable you to deliver content specifically designed for individual viewing based on age, race, gender, location and daypart.”

Researchers have developed machine learning tools that claim to predict criminality from facial features, trained on photographs of incarcerated individuals. [1] Facial recognition may therefore simply expand the human bias that already characterizes incarceration. This is, in a nutshell, high-tech physiognomy: judging who a person is by their face. It is pseudoscience at best and a tool for the propagation of racism and sexism at worst.

A 2018 study at the Massachusetts Institute of Technology (MIT) reported that commercial facial analysis software from Microsoft, IBM, and Face++ classified the faces of lighter-skinned men with over 99% accuracy but erred up to 35% of the time for darker-skinned women. [2] A 2019 report from the National Institute of Standards and Technology (an agency of the U.S. Department of Commerce) examined algorithms from a wide range of corporations and universities; across a sample of over 8 million people, researchers found elevated false match rates for Black and Asian women as well as American Indians. [3]

Products like Amazon Rekognition even claim to infer emotions, such as fear, in the faces of those surveilled. Of course, fear of being surveilled may itself become the red flag.
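For the curious, this capability is a documented API call, not speculation. Below is a minimal sketch using Rekognition’s detect_faces operation; the file name is a placeholder, and the returned labels (including FEAR) are confidence scores for apparent expression, which Amazon’s own documentation cautions is not a determination of a person’s internal emotional state.

    import boto3

    # Illustrative sketch: ask Rekognition which emotions it infers
    # from a single photo. The file name is a placeholder.
    rekognition = boto3.client("rekognition", region_name="us-east-1")

    with open("patron.jpg", "rb") as f:
        image_bytes = f.read()

    response = rekognition.detect_faces(
        Image={"Bytes": image_bytes},
        Attributes=["ALL"],  # request emotions, age range, etc., beyond bounding boxes
    )

    for face in response["FaceDetails"]:
        # Scores reflect apparent expression, not actual internal state.
        for emotion in face["Emotions"]:
            print(f"{emotion['Type']}: {emotion['Confidence']:.1f}%")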

Yet there is little regulation, and only a few states have taken action. California, for example, passed the Body Camera Accountability Act (AB 1215), which bars police officers from using facial recognition in their body cameras. Facebook recently lost a lawsuit in Illinois after user photos were processed with facial recognition technology without consent, thanks to a unique Illinois law, the Biometric Information Privacy Act, which restricts how organizations may collect and use biometric data (including facial scans).

Critics may perceive these concerns as overblown, but China has already installed facial recognition surveillance in public areas. The technology allows for public shaming of lawbreakers (including jaywalkers) and mass collection of citizen data. Government officials are developing social credit scores, a facial-recognition-backed reputation points system that punishes “bad” citizens (e.g., public shaming and denial of financial loans) and rewards “good” citizens (e.g., free coffee and lower interest rates on loans). It is, in essence, a real-life episode of “Black Mirror.”

Some advocates have taken matters into their own hands. The website HR Improved, for example, is “designed to demonstrate the privacy invasions and deterioration of societal norms we can expect if facial recognition functionality goes unrestricted.” Users of the site (presumably HR recruiters) submit photos of potential hires to determine whether the subject has ever been photographed in a “Make America Great Again” hat. The stated objective is to mitigate the risk of “hiring a toxic employee.” You can’t make this up…

At the end of the day, does anyone want to live under constant surveillance? If walking into a dispensary means having a face scan sent to the police, has prohibition really ended?

References

  1. Wu X, Zhang X. “Automated Inference on Criminality Using Face Images.” arXiv preprint arXiv:1611.04135, 2016.
  2. Buolamwini J, Gebru T. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research, vol. 81, 2018, pp. 1-15.
  3. Grother P, Ngan M, Hanaoka K. “Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects.” NIST Interagency Report 8280, National Institute of Standards and Technology, 2019.
