How we approach face recognition and law enforcement as ethically as we can
At Machine Box, we’ve always been about empowering you to build amazing things with simple machine learning tools. One important aspect of this approach is ownership. When you use Machine Box, you get complete control of all of your data, and nowhere is this more important than in surveillance and face recognition.
Back in May of 2018, this came to the forefront of our minds.
[T]he American Civil Liberties Union led a group of more than two dozen civil rights organizations that asked Amazon to stop selling its image recognition system, called Rekognition, to law enforcement.
At the time, I remember trying to wrap my head around the implications of this backlash for face recognition in general. On the one hand, I am a strong advocate for personal privacy, not just from government organizations but from large companies attempting to track my every thought. On the other hand, I've been the victim of a crime that might have been prevented by the mere presence of surveillance cameras; at the very least, the resulting video could have helped law enforcement identify the perpetrator later. So, like everyone else, I have to grapple with the age-old question of security vs. liberty.
I bring this up because the company that acquired Machine Box recently announced two new applications that can use Facebox to recognize known offenders in law enforcement video.
A natural extension to the aiWARE application suite, agencies are now empowered to not only intelligently search to find pertinent evidence but identify suspects and redact sensitive materials within that evidence prior to distribution.
I've also been a strong advocate for the use of video in law enforcement, as it can be a superior arbiter of truth. But capturing all of that video won't help much if law enforcement agencies have to sift through it manually. Furthermore, compliance with evidentiary laws and other requirements becomes even more difficult without AI-powered capabilities to make sense of the video that has been captured. There's simply too much video.
We want video, but we also have to draw a line between helping law enforcement keep us safe and enabling Big Brother.
This is why I prefer that these applications use Facebox. Facebox is a Docker container that can run in either a private or public cloud environment, where the ingress and egress of data are 100% controllable by the application. In either deployment option, there's no open-ended cloud endpoint into which face data simply disappears. Instead, you as the developer are in complete control of your data, from training data and state files to any subsequent data you present to Facebox.
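To make that concrete, here is a rough sketch of what self-hosted deployment looks like. The image name, port, `MB_KEY` variable, and the `/facebox/teach` and `/facebox/state` endpoints follow the Machine Box documentation of the time, but treat the specifics as illustrative rather than authoritative:

```shell
# Run Facebox entirely inside your own infrastructure.
# MB_KEY is your Machine Box key; nothing leaves this host
# unless you explicitly send it somewhere.
docker run -d --name facebox \
  -p 8080:8080 \
  -e MB_KEY="$MB_KEY" \
  machinebox/facebox

# Teach it a face from a local file; the image and the resulting
# biometric data stay on your network.
curl -s -X POST http://localhost:8080/facebox/teach \
  -F 'file=@suspect.jpg' \
  -F 'id=person-of-interest-001' \
  -F 'name=person-of-interest-001'

# Download the state file so the trained model itself remains in
# your custody, e.g. inside an evidence store you control.
curl -s -o facebox.state http://localhost:8080/facebox/state
```

The point is architectural: because the container is the whole service, the chain of custody for face data is whatever your own network and storage policies say it is.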
You'll note that the press release mentions that the only faces used to train the face recognition are those of existing offenders or persons of interest, provided by the specific law enforcement agency using the app. Because Facebox runs in such a controlled manner, the training data never leaves the chain of custody of the law enforcement organization. It is treated like a fingerprint found at the scene of a crime.
This is congruent with our mission of letting organizations keep their data private and within their control.
Additionally, Facebox allows the face data, and all the biometric properties therein, to be deleted or forgotten. Other face recognition services might not allow this, either because their business goals depend on retaining that data indefinitely, or because they simply lack the technical capability. For example, Google Vision's API terms of service state:
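In Facebox terms, forgetting is a first-class operation. A sketch, again with endpoint names taken from the Machine Box documentation of the time and the taught ID from the earlier example assumed:

```shell
# Remove a previously taught face by the ID it was taught under;
# its biometric embedding is dropped from the running instance.
curl -s -X DELETE \
  http://localhost:8080/facebox/teach/person-of-interest-001

# For a full wipe, discard the container and any saved state file.
docker rm -f facebox
rm -f facebox.state
```

Because deletion happens on infrastructure the agency operates, it is verifiable: there is no vendor-side copy to wonder about.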
b. Submission of Content
Some of our APIs allow the submission of content. Google does not acquire any ownership of any intellectual property rights in the content that you submit to our APIs through your API Client, except as expressly provided in the Terms. For the sole purpose of enabling Google to provide, secure, and improve the APIs (and the related service(s)) and only in accordance with the applicable Google privacy policies, you give Google a perpetual, irrevocable, worldwide, sublicensable, royalty-free, and non-exclusive license to Use content submitted, posted, or displayed to or from the APIs through your API Client. “Use” means use, host, store, modify, communicate, and publish. Before you submit content to our APIs through your API Client, you will ensure that you have the necessary rights (including the necessary rights from your end users) to grant us the license.
We joined Veritone because it offers us the ability to expand not just the reach of Machine Box, but the democratization of the power of these tools. Part of that is addressing these use cases head-on, and doing our best to uphold our values.
How we approach face recognition and law enforcement as ethically as we can was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.