Detecting Fingerprint Images Outside Training Distribution for Spoof Detection
THESIS STUDENT:
Daniel Holmkvist
From: Alvesta, Sweden
Studies: Engineering Physics at Lund University
How did you become aware of Precise Biometrics, and what attracted you to do your project with us?
I became aware of Precise Biometrics through a thesis event hosted by AI Lund. When exploring my options, I thought the development work done by Precise sounded interesting. The YOUNiQ camera at the entrance to the office also impressed me.
What are the main research questions you aim to answer through your thesis?
Neural networks* often have issues dealing with data unlike anything they have seen before: they may not only be incorrect, but also highly confident in the incorrect answer. The focus of my thesis is on how to deal with this issue and detect images that are out-of-distribution for the existing Precise models. This means that if the network is fed an image it does not recognize, say, an image of a dog, the detector should quickly be able to recognize that this image is out-of-distribution.
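To illustrate the idea (this is a generic baseline, not necessarily the method used in the thesis), a common starting point for OOD detection is to score each input by the maximum softmax probability of the classifier: confidently classified inputs score high, while unfamiliar inputs tend to produce flatter, low-confidence outputs. A minimal sketch, with an assumed example threshold of 0.7:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    # Maximum softmax probability: low values suggest the input
    # may be out-of-distribution for the trained model.
    return softmax(logits).max(axis=-1)

def is_ood(logits, threshold=0.7):
    # Flag inputs whose top-class confidence falls below the
    # threshold (0.7 is an illustrative choice, tuned in practice).
    return msp_score(logits) < threshold

# Toy logits: first row is confidently classified
# (in-distribution-like), second row is nearly uniform (OOD-like).
logits = np.array([[6.0, 0.5, 0.2],
                   [1.0, 1.1, 0.9]])
print(is_ood(logits))  # prints [False  True]
```

In practice the threshold would be calibrated on held-out in-distribution data, and stronger detectors build on the same idea with temperature scaling or feature-space distances.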
How do you believe your thesis contributes to the existing knowledge in AI and biometric technology?
The thesis contributes to existing knowledge by applying techniques developed on more general datasets to real-world fingerprint data. Furthermore, because I am building on pre-trained Precise models, additional techniques can be combined with current state-of-the-art methods to improve the results.
In what ways do you believe your research could be expanded or improved upon in the future?
More research is always useful! 🙂 OOD detection* is an entire field with a lot of existing research that I have certainly not exhausted. Furthermore, there may be more effective ways to implement the code with the existing Precise code base, and further ways to improve the accuracy and speed.
What are the practical implications of your research findings, and how do you envision them being applied in real-world settings?
The main finding is that it is possible to build a reasonably accurate and reasonably fast classifier to distinguish, for example, fingerprints from cloth or flowers, so the research could be applied to do just that!
GLOSSARY*
OOD (Out-of-Distribution) – Data that differs significantly from the data a machine learning model was trained on, potentially containing unexpected features or patterns.
Neural networks – A type of artificial intelligence model inspired by the structure and function of the human brain, used to learn patterns and relationships in data and to make predictions or decisions based on that knowledge.