There has been a lot in the zeitgeist recently about face recognition, and I think some of the better uses of face recognition are getting overlooked. One of my favorite customer use cases of ours is using it to speed up payments and other activities on physical kiosks. Here’s why customers are coming to us to solve this, and how you can try it yourselves.
The problem it solves.
There are a few things that I believe are driving the surprising amount of inbound requests we’ve been getting this last year for Facebox (our easy-to-deploy face recognition tech).
- Security. Your face is a biometric, which means that it is unique to you, like a fingerprint. I like the idea of using it to help authenticate access to certain things like my bank account through an ATM. I wouldn’t want it to be the ONLY way to authenticate, but as part of a 2 or 3 factor authentication system, it works well. It also lets you consider things like shutting a screen down if more than one face is detected.
- Ease-of-use. It is hard to accidentally leave your face at home, unlike your wallet, keys, or phone. If I can pay for stuff at a store with my face plus a pin code, or some other second check, I’d find that very convenient.
- Speed. Face recognition is in use at San Francisco International Airport, Heathrow Airport, and others for passport control. Since they implemented it, I have not seen the massive lines that used to be a staple of this part of international travel. In fact, last time I went through SFO’s passport control, there was no wait whatsoever. Facebox and most other face recognition systems are fast nowadays, and if they’re implemented as I describe below, all the data lives locally or nearby, so recognition isn’t slowed down by network latency.
One way to implement.
In this section, I’ll highlight how you can get this going quickly on a physical machine using Facebox. The process should be broadly similar for other face recognition systems, provided they can run locally on kiosk-class hardware.
You can train Facebox with a single example of each face. You might put a card scanner into your kiosk so that users can put in their driver’s license and have the face on that card be used to teach Facebox.
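As a sketch, the teach step could look like the following. This assumes Facebox exposes an HTTP teach endpoint that accepts a base64-encoded image; the exact path, field names, and the kiosk URL here are assumptions for illustration, so check the Facebox docs for the real contract:

```python
import base64

def build_teach_request(base_url, person_id, name, image_bytes):
    """Build the URL and JSON payload to teach Facebox one face.

    The "/facebox/teach" path and the "id"/"name"/"base64" fields are
    assumptions about the API shape, not confirmed details.
    """
    url = base_url.rstrip("/") + "/facebox/teach"
    payload = {
        "id": person_id,  # a stable ID, e.g. derived from the scanned license
        "name": name,     # human-readable label for the person
        "base64": base64.b64encode(image_bytes).decode("ascii"),
    }
    return url, payload

# Teaching from the card scanner's captured image would then be a single POST,
# e.g. requests.post(url, json=payload).
url, payload = build_teach_request(
    "http://localhost:8080", "license-12345", "Jane Doe", b"<jpeg bytes from card scanner>"
)
```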
Once Facebox has been trained with a face, you can use the kiosk’s camera to start recognizing faces. As an added precaution, you can use Facebox’s built-in face detection to check whether more than one person is present in the camera’s field of view.
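A kiosk-side decision based on a check result might look like this. The response shape here (a "faces" list with "matched" and "name" fields) is my assumption about what a Facebox-style check endpoint returns, so verify it against your instance:

```python
def screen_decision(check_response):
    """Return (matched_names, lock_screen) for one camera frame.

    Assumed response shape: {"faces": [{"matched": bool, "name": str}, ...]}.
    lock_screen implements the precaution of shutting the screen down
    when more than one face is detected.
    """
    faces = check_response.get("faces", [])
    matched = [f["name"] for f in faces if f.get("matched")]
    lock_screen = len(faces) > 1
    return matched, lock_screen

resp = {"faces": [{"matched": True, "name": "Jane Doe"}]}
print(screen_decision(resp))  # (['Jane Doe'], False)
```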
Facebox stores its learning in state files locally. You can periodically sync that state file with other kiosks nearby via a slow 3G network or some other communications technology.
Even if you’ve trained 1 million people, that state file remains tiny. No need to store images or JPEGs of people’s faces!
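The periodic sync between kiosks could be sketched as below. The "/facebox/state" path is an assumption about where Facebox exposes its state file for export and import; treat this as pseudocode for the flow rather than the exact API:

```python
def sync_urls(source_kiosk, peer_kiosks):
    """Return the state-download URL for one kiosk and upload URLs for its peers.

    The "/facebox/state" path is assumed, not confirmed. In practice you
    would requests.get() the download URL on a timer and requests.post()
    the resulting bytes to each peer over your 3G (or other) link.
    """
    download = source_kiosk.rstrip("/") + "/facebox/state"
    uploads = [peer.rstrip("/") + "/facebox/state" for peer in peer_kiosks]
    return download, uploads

download, uploads = sync_urls(
    "http://kiosk-1.local:8080",
    ["http://kiosk-2.local:8080", "http://kiosk-3.local:8080/"],
)
```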
If you’re having trouble matching a person, you can have them take another photo and teach it to Facebox live, without re-training or any kind of maintenance window. People can also choose to delete their faces or data from Facebox.
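One hypothetical shape for that live recovery loop is below. The camera capture and the Facebox teach call are injected as callables, so this stays a testable sketch of the flow rather than a claim about hardware or the Facebox client API:

```python
def handle_failed_match(capture_photo, teach, attempts=2):
    """Recover from a failed match by re-teaching with fresh captures.

    capture_photo() returns new image bytes from the kiosk camera;
    teach(photo) sends them to Facebox and returns True on success.
    Both are hypothetical callables you would wire up yourself.
    No restart or maintenance window is needed between attempts.
    """
    for _ in range(attempts):
        photo = capture_photo()
        if teach(photo):
            return True
    return False
```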
This implementation can theoretically scale to massive sizes, but we don’t recommend training a single state file with hundreds of thousands of people unless you return a ranked list of similar faces instead of betting on a single perfect match. It’s like a Google search: you don’t just jump to the first result, you scan a page of ranked, relevant results and usually pick something from the top four or five.
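That ranked-results approach can be sketched in a few lines. I'm assuming each candidate match carries a confidence score (the "confidence" field name is my assumption about the result shape):

```python
def top_candidates(faces, n=5):
    """Rank candidate matches by confidence and keep the top n names,
    mimicking a search-results page instead of a single yes/no answer."""
    ranked = sorted(faces, key=lambda f: f.get("confidence", 0.0), reverse=True)
    return [f["name"] for f in ranked[:n]]

faces = [
    {"name": "A", "confidence": 0.61},
    {"name": "B", "confidence": 0.92},
    {"name": "C", "confidence": 0.74},
]
print(top_candidates(faces, 2))  # ['B', 'C']
```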
What I like about this implementation is that you, the kiosk owner, don’t have to send private face data to any big tech companies. You can keep the data hyper-local, which keeps latency low and leaves the data fully under your control.
You can read more about deploying face recognition to solve various use cases by checking out these other posts:
- Build face recognition directly into a browser
- Introducing Facebox Faceprint
- Configure Multiple Instances of Facebox