Biometric authentication is one of the hottest trends in the digital ID space. Users demand both ironclad security and seamless login experiences, and hardware makers are incorporating biometrics to meet these demands. Apple’s latest iPhones employ biometric unlock features, for example, with the iPhone 8 offering fingerprint recognition and the X and 11 models relying on facial recognition.
Fraudsters, hackers and other cybercriminals are unfortunately working to circumvent these defenses and access valuable user data, but biometrics providers are devising countermeasures to anticipate the latest threats. This Deep Dive examines the seemingly never-ending arms race between the providers enacting new security protocols and the bad actors inventing techniques to thwart them.
Threats To Digital ID Security
Cybercriminals employ numerous means to beat biometric authentication measures. One method is spoofing, the industry term for faking biometric identifiers to impersonate legitimate users and gain access. Technologies such as artificial intelligence (AI), machine learning, 3D printing and advanced sensors are double-edged swords: Biometrics developers can use these tools to improve their security methods, but fraudsters are just as likely to wield them to crack defenses.
Most spoofing methods fake physical characteristics such as facial features, fingerprints or vein patterns, tricking sensors into recognizing users who are not actually present. Fake fingerprints are a well-known example dating back decades, with thieves and spies breaking into secure facilities using putty, wood glue and other common household items to cast false prints.
Similar schemes take much more sophisticated forms today. A team of German security researchers demonstrated a fake hand designed to outsmart palm-vein scanners at last year’s Chaos Communication Congress conference. They used a high-definition camera with its infrared filter removed to photograph the vein patterns beneath individuals’ skin, then used those images to embed fake veins in wax hand models. According to researcher Jan Krissler, the entire process took about a month to develop.
Facial recognition biometrics can also be spoofed. University of North Carolina researchers used rendering software to create 3D models of human heads based on publicly accessible Facebook pictures, then employed a virtual reality system to present the models to five facial recognition programs: BioID, KeyLemon, Mobius, True Key and 1U. Four of the five were successfully fooled, with success rates ranging from 55 percent to 85 percent.
Facial Recognition Goes Live
Biometrics providers are relying on myriad tools to fight biometrics spoofing. Chief among these is liveness detection, which refers to security features that verify a biometric sample comes from a live, physically present person rather than a photo, recording or replica. Such features take many forms, but most fall into one of two categories: active or passive. Active detection requires users to perform specific actions, such as blinking or smiling, making it harder for criminals to spoof facial features. Passive detection relies on internal algorithms to detect threats without prompting the user, as illustrated in the sketch below.
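To make the active/passive distinction concrete, the following is a minimal Python sketch of both approaches. The challenge list, the detect_action callback and the frame_scores input are illustrative assumptions rather than any vendor’s real API; a production system would wire them to actual computer vision models.

```python
# Minimal sketch of active vs. passive liveness detection.
# The challenge list and the detector hooks are hypothetical placeholders,
# not any vendor's actual interface.
import random
import time
from typing import Callable, Sequence

CHALLENGES = ["blink", "smile", "turn_head_left"]  # typical active prompts

def active_liveness_check(detect_action: Callable[[str, float], bool],
                          timeout_s: float = 5.0) -> bool:
    """Issue a random challenge and require the user to perform it in time.

    detect_action(challenge, deadline) stands in for a computer vision
    routine that watches camera frames for the requested gesture.
    """
    challenge = random.choice(CHALLENGES)  # unpredictable, so replays fail
    print(f"Please {challenge.replace('_', ' ')} now")
    deadline = time.monotonic() + timeout_s
    return detect_action(challenge, deadline)

def passive_liveness_check(frame_scores: Sequence[float],
                           threshold: float = 0.9) -> bool:
    """Score camera frames with an anti-spoofing model; no user action needed.

    frame_scores stands in for per-frame outputs of a texture/depth model
    rating how likely each frame shows a live face (1.0 = live).
    """
    return sum(frame_scores) / len(frame_scores) >= threshold
```

The key design trade-off: active checks draw their strength from unpredictability, since an attacker cannot pre-record the right gesture, while passive checks give up that guarantee in exchange for a frictionless user experience.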
Some providers are leveraging a hybrid approach, using involuntary facial reactions that certify liveness without requiring users to respond to prompts. IriTech, for example, prevents spoofing by monitoring whether users’ pupils react to varying amounts of light; a simplified version of such a check appears below.
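The following sketch illustrates a pupillary light reflex check in the spirit of that approach. The diameter inputs and the 15 percent constriction threshold are assumptions for illustration only; IriTech’s actual method is proprietary.

```python
# Illustrative pupillary light reflex check. A live pupil constricts when
# the screen or ambient light brightens; a printed iris or static replica
# does not. Thresholds here are assumed values, not IriTech's.
from statistics import mean
from typing import Sequence

def pupil_reacts_to_light(diameters_dim_mm: Sequence[float],
                          diameters_bright_mm: Sequence[float],
                          min_constriction: float = 0.15) -> bool:
    """Return True if the pupil constricted enough after brightness rose."""
    dim = mean(diameters_dim_mm)        # average diameter under dim light
    bright = mean(diameters_bright_mm)  # average diameter under bright light
    if dim <= 0:
        return False
    constriction = (dim - bright) / dim
    return constriction >= min_constriction

# A live eye typically constricts noticeably; a spoof shows a flat response.
print(pupil_reacts_to_light([4.1, 4.0, 4.2], [3.2, 3.1, 3.3]))  # True
print(pupil_reacts_to_light([4.1, 4.0, 4.2], [4.0, 4.1, 4.0]))  # False
```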
Liveness certification protocols are becoming commonplace as customers grow increasingly comfortable taking selfies or filming themselves on cue. Liveness detection also has a near-perfect record against fraud, with the National Institute of Standards and Technology reporting an accuracy rate of 99.7 percent.
Employing A Human Touch To Stop Spoofs
Many digital ID providers are tapping into automated defense systems like liveness detection, but others are turning to a more human touch. Some are even employing hackers to test their systems and identify vulnerabilities.
One such authentication provider is FaceTec, which offers a $30,000 bounty to any hacker who can spoof ZoOm, its authentication system that verifies users with 3D facial biometrics. The company claims there have been no successful attempts thus far, theorizing that an ultra-realistic 3D sculpture or mask would be necessary to fool the system.
A team of white-hat hackers accessed 27.8 million consumers’ records earlier this year after cracking into BioStar 2, a centralized biometric access control system used by U.K. police and several major banks. The records included employees’ personal information, unencrypted customer usernames and passwords, and even a trove of fingerprint and facial recognition data. Fraudsters would eventually have spotted the vulnerability and caused incalculable damage had the white-hat hackers not beaten them to it.
AI systems may one day surpass human analysts in the fight against biometric authentication fraud. Until then, humans, from security researchers to biometrics providers to white-hat hackers, will be needed to cover the machines’ blind spots.