Forget passwords – you could soon be unlocking your phone by wrinkling your nose or sticking out your tongue. Google is working on technology that could let your smartphone recognize your facial expressions.
The latest Google Android devices already have a built-in Face Unlock feature that uses facial recognition to unlock the handset, but this patent would take the technology a step further by adding additional ‘liveness’ checks.
When Google launched Face Unlock in 2011, as part of Android Ice Cream Sandwich, it was criticized by security experts because it could be bypassed by holding static photos up to the phone or tablet’s camera.
HOW DO GOOGLE’S LATEST FACIAL RECOGNITION PLANS WORK?
- To get access to a device, the user would have to pull a specific, predetermined facial expression.
- The expression would then be scanned and compared to a previously captured photo to confirm the user’s identity.
- Facial expressions listed include blinking, frowning, smiling, sticking out a tongue, wrinkling a nose and raising an eyebrow.
- The patent explains there would be a small margin of error, but the user’s expression would have to match the original photo as closely as possible.
- The technology would then check for ‘liveness’ – a signal that shows the user is alive and moving and not a static image.
- It would scan for changes in pixels and light to monitor and recognize changes in the location of the facial features.
For example, if a blink is used to gain access the technology would record the light from the eye and then monitor if this light changes, suggesting the eye has been closed.
This ‘live’ movement is then given a score based on how similar it is to the original image. If the score reaches the minimum security threshold, the phone is unlocked; if it doesn’t, the user is denied access and has to try again.
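The two-stage flow described above – match the expression against the stored photo, then score the ‘live’ movement against a threshold – can be sketched in a few lines of Python. Everything here is illustrative: the function names, the similarity metric, the thresholds and the grayscale frames (2-D lists of pixel intensities) are assumptions for demonstration, not details taken from the patent.

```python
# Toy sketch of the patent's unlock flow, under the assumptions above.

def mean_abs_difference(frame_a, frame_b):
    """Average per-pixel absolute difference between two same-size frames."""
    diffs = [
        abs(a - b)
        for row_a, row_b in zip(frame_a, frame_b)
        for a, b in zip(row_a, row_b)
    ]
    return sum(diffs) / len(diffs)

def expression_matches(reference, captured, tolerance=12.0):
    """Step 1: the captured expression must closely match the stored photo,
    within a small margin of error."""
    return mean_abs_difference(reference, captured) <= tolerance

def liveness_score(frame_before, frame_after, region):
    """Step 2: measure the light change in one facial region (e.g. an eye).

    A real blink changes the intensity recorded in the eye region;
    a static photo held up to the camera does not.
    """
    top, left, bottom, right = region

    def brightness(frame):
        pixels = [p for row in frame[top:bottom] for p in row[left:right]]
        return sum(pixels) / len(pixels)

    return abs(brightness(frame_after) - brightness(frame_before))

def unlock(reference, captured, frame_before, frame_after, region,
           liveness_threshold=30.0):
    """Unlock only if the expression matches AND the movement looks live."""
    return (expression_matches(reference, captured)
            and liveness_score(frame_before, frame_after, region)
                >= liveness_threshold)
```

A static photo would fail at the second step: with no frame-to-frame light change, the liveness score stays near zero and never reaches the threshold, so the spoof is rejected even if the expression itself matches.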
The patent said: ‘The anti-spoofing techniques herein may use facial gestures such as blinks, winks, and other gestures that may be performed within the confines of a human face.
‘[The device] may detect facial gestures associated with various facial features.
Examples include one or both eyes (e.g., for blinks, winks, etc.), the mouth area (e.g., for gestures including smiling, frowning, displaying a user’s teeth, extending a user’s tongue, etc.), the nose (e.g., for a nose wrinkle gesture, etc.), the forehead (e.g., for a forehead wrinkle gesture, etc.), one or both eyebrows (for eyebrow raise and compression gestures, etc.), and various others.’