Alice is at the forefront of preventing print and replay attacks.
The National Institute of Standards and Technology (NIST), widely considered the world’s most prestigious biometric evaluation body, has published the results of its first presentation attack detection evaluation. Alice achieved the best results in this evaluation, in which 82 passive facial presentation attack detection (PAD) algorithms participated.
This NIST assessment places Alice among the industry leaders in face liveness detection (also known as presentation attack detection): the detection of attacks in the critical scenarios that identity verification solutions face every day, such as printed photos and replay attacks.
Detecting these frequent forms of attack is crucial to protecting digital enrolment and authentication processes against fraud. Just as importantly, this must be achieved without imposing undue friction on legitimate users by minimising false rejections: an algorithm that rejects more than 5% of genuine users cannot be considered viable for production environments.
Esteban Vazquez, CTO and co-founder of Alice, expressed his gratitude for NIST’s thorough PAD evaluation and his enthusiasm for the favourable results. He said: “NIST is world-renowned for its expertise in evaluating commercial biometric algorithms, and we greatly appreciate their diligent evaluation of PAD. We are delighted with the results, as they validate our belief that biometric security must be seamless. These results provide compelling evidence that the current generation of liveness detection solutions can deliver both robust security and user convenience.”
We have recently published a distilled report that helps navigate the complexity of the test results, so that readers can use them when choosing an identity verification solution for real-world use cases.
NIST Internal Report 8491 contains a comprehensive description of the evaluation, covering 82 algorithms from 45 developers. The evaluation focused on two distinct use cases: impersonation and evasion. Alice submitted algorithms exclusively to the impersonation track and was tested against various types of presentation attacks, such as silicone masks, printed photos and replay attacks. For each attack type, metrics covering both convenience and security were reported to assess performance accurately.
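As a rough illustration of the “convenience and security” metrics mentioned above: PAD evaluations conventionally report APCER (the share of attack presentations wrongly accepted, a security measure) and BPCER (the share of genuine presentations wrongly rejected, a convenience measure), per ISO/IEC 30107-3. This is a minimal sketch with hypothetical decision data, not NIST’s actual scoring code:

```python
def apcer(attack_accepted):
    """Attack Presentation Classification Error Rate:
    fraction of attack presentations classified as bona fide."""
    return sum(attack_accepted) / len(attack_accepted)

def bpcer(bona_fide_rejected):
    """Bona Fide Presentation Classification Error Rate:
    fraction of genuine presentations classified as attacks."""
    return sum(bona_fide_rejected) / len(bona_fide_rejected)

# Hypothetical outcomes: 2 of 100 attacks slip through,
# 3 of 100 genuine users are wrongly rejected.
attack_accepted = [True] * 2 + [False] * 98
bona_fide_rejected = [True] * 3 + [False] * 97

print(f"APCER: {apcer(attack_accepted):.2%}")       # security error rate
print(f"BPCER: {bpcer(bona_fide_rejected):.2%}")    # convenience error rate
```

Under this framing, the 5% production-viability threshold mentioned earlier corresponds to requiring a BPCER below 0.05.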