By Vishwam Sankaran
Researchers from New York University have created a set of master fingerprint keys that can be used to spoof biometric identification systems.
While the fingerprint-matching system used by the researchers was configured to falsely match a random fingerprint only one time in 1,000, the master prints they generated falsely matched one time in five.
Their paper, published on the preprint server arXiv, shows that fingerprints can be artificially generated using machine learning and used to trick systems secured by fingerprint authentication.
This is alarming because a growing number of devices, and large-scale databases like India's Aadhaar, use digital fingerprinting to uniquely identify users – and could potentially be targeted with such 'master key' fingerprints by identity thieves.
A report published last year by Counterpoint Research indicated that more than 50 percent of smartphones shipped in 2017 had fingerprint sensors in them, and predicted that the figure would rise to 71 percent by the end of this year.
Graph showing rise in percentage of phones shipped with fingerprint sensors: Counterpoint Research
The problem is that these sensors obtain only partial images of users' fingerprints – at the points where they make contact with the scanner. The paper noted that since partial prints are not as distinctive as complete prints, the chances of one partial print being matched with another are high.
The artificially generated prints, dubbed DeepMasterPrints by the researchers, capitalize on this vulnerability to imitate one in five fingerprints in a database whose matcher was supposed to have an error rate of only one in a thousand.
Credit: Philip Bontrager
Another vulnerability exploited by the researchers was that some natural fingerprint features, such as loops and whorls, occur far more often than others. With this understanding, the team generated prints that contain several of these common features, and found that such artificial prints matched other prints far more often than chance would allow.
Using these frequently repeated features, the neural networks also generated fake prints that look convincingly like real fingerprints.
The DeepMasterPrints can be used to spoof a system requiring fingerprint authentication without any information about the target user's actual fingerprints. As the paper noted about the application of the fake prints:
Therefore, they can be used to launch a dictionary attack against a specific subject that can compromise the security of a fingerprint-based recognition system.
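The numbers reported above give a sense of why such a dictionary attack works. A minimal sketch, using the article's figures (one-in-1,000 false-match rate for a random print versus one-in-five for a master print) and assuming each presented print is an independent attempt – a simplification real matchers may not satisfy; the function name is illustrative, not from the paper:

```python
def attack_success_probability(per_print_fmr: float, attempts: int) -> float:
    """P(at least one false match) over `attempts` presentations,
    assuming each attempt matches independently with rate per_print_fmr."""
    return 1.0 - (1.0 - per_print_fmr) ** attempts

# Five tries with random prints at the system's nominal 1-in-1,000 rate:
random_tries = attack_success_probability(0.001, attempts=5)

# Five tries with master prints that falsely match 1 in 5 subjects:
master_tries = attack_success_probability(0.2, attempts=5)

print(f"5 random prints: {random_tries:.3f}")  # ~0.005
print(f"5 master prints: {master_tries:.3f}")  # ~0.672
```

Under these assumptions, a small dictionary of master prints succeeds against roughly two-thirds of subjects, versus a fraction of a percent for random prints – which is what makes the attack practical on devices that allow several unlock attempts.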
Mikko Hypponen, a cybersecurity expert and columnist, took to Twitter to articulate the significance of this vulnerability in commonly used biometric systems.
Another thing to watch is the security of public databases that rely solely on biometric scanners. Your friendly neighborhood burglar is unlikely to craft master prints to get into your phone, but large-scale databases such as those governments use to identify citizens could potentially be spoofed by more ambitious criminals – Aadhaar, we are looking at you.
This article was written by Vishwam Sankaran, an Editorial Fellow with The Next Web.