Face verification is the process of determining whether images of two faces belong to the same person. This is particularly useful in onboarding scenarios, where we want to check whether the holder of the document is actually present during onboarding.
Another typical use case is login, where the user is verified against a previously stored face.
Face verification is provided by the mobile app libraries:
- Android Face Image Verifier and Android Template Verifier
- iOS Face Image Verifier and iOS Face Template Verifier
Better precision of face verification can be achieved by calling the Core server.
Innovatrics' face biometric algorithm ranks among the top in the NIST FRVT.
In order to verify two faces, the following steps must be performed:
- Face detection - find position of the face in the image
- Template extraction - compute representation of the face used for matching
- Matching - compare two face templates and output similarity score
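The three steps above can be sketched end to end as follows. The stand-in implementations (a template as an L2-normalised vector, matching as cosine similarity) are illustrative assumptions only; the real detection, extraction, and matching are performed by the DOT SDK or Core Server.

```python
import math

def detect_face(image):
    # Step 1: locate the face in the image (stand-in: pass-through).
    return image

def extract_template(face):
    # Step 2: compute a matchable representation (L2-normalised vector).
    norm = math.sqrt(sum(x * x for x in face)) or 1.0
    return [x / norm for x in face]

def match(template_a, template_b):
    # Step 3: similarity score of two templates (higher = more similar).
    return sum(a * b for a, b in zip(template_a, template_b))

def verify(image_a, image_b, threshold):
    template_a = extract_template(detect_face(image_a))
    template_b = extract_template(detect_face(image_b))
    return match(template_a, template_b) >= threshold
```

With these stand-ins, identical inputs produce the maximum similarity of 1.0, while dissimilar inputs score lower; the real matcher returns scores on the product's own scale.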
The first step in face verification is face detection. This step is important because the picture might contain no face, or multiple faces. Once a face is detected, it can be used in the verification process. Several face detection modes are available: fast mode provides lower latency, while accurate mode yields more precise detection. Mobile devices support fast mode only.
Once the face has been detected, a face template can be generated. Templates can be cached at the application level to speed up verification: once the reference image is uploaded to the server, its template can be generated and cached. When a user then logs in, face detection and template extraction are performed only on the probe image from the user, and the reference template is pulled from the cache. Fast and accurate modes are available for extraction as well; as with detection, mobile devices support fast mode only. Only templates generated by the same mode and product version can be matched. During major product upgrades, templates must be regenerated, as noted in the respective product changelog.
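A login flow built on this caching pattern might look like the sketch below: the reference template is extracted once at enrollment and reused on every login, so detection and extraction run only on the probe image. All function names are illustrative stand-ins, not the actual DOT API.

```python
import math

def detect_face(image):
    # Stand-in detector: pass the data through.
    return image

def extract_template(face):
    # Stand-in extractor: L2-normalise so matching is cosine similarity.
    norm = math.sqrt(sum(x * x for x in face)) or 1.0
    return [x / norm for x in face]

def match(template_a, template_b):
    return sum(a * b for a, b in zip(template_a, template_b))

template_cache = {}  # user_id -> cached reference template

def enroll(user_id, reference_image):
    # Detection + extraction happen once, at enrollment time.
    template_cache[user_id] = extract_template(detect_face(reference_image))

def login(user_id, probe_image, threshold):
    reference = template_cache[user_id]  # cached, no re-extraction needed
    probe = extract_template(detect_face(probe_image))
    return match(probe, reference) >= threshold
```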
Matching is a very fast operation: it calculates the similarity of two templates, producing a verification score. The higher the score, the more similar the faces.
The final decision whether the two faces belong to the same person should be based on the similarity score and a threshold: a score above the threshold is interpreted as accepted, a score below it as rejected.
The following characteristics have been measured on our ICAO face quality testing dataset using fast extraction mode:
| FAR level | FAR [%] | FRR [%] | Score threshold |
|-----------|---------|---------|-----------------|
If we require a FAR level of 1:5000, we have to set the verification score threshold to 39.8. Given a representative set of 10,000 matching face pairs, statistically 638 of them will in this case be incorrectly marked as not matching, even though they are. Given 10,000 non-matching pairs, statistically 2 will be wrongly marked as matching.
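Checking the arithmetic of the worked example: at the 1:5000 FAR operating point (score threshold 39.8), the implied error rates are FRR = 6.38 % and FAR = 0.02 %.

```python
matching_pairs = 10_000
non_matching_pairs = 10_000
frr = 0.0638  # false rejection rate: genuine pairs wrongly rejected
far = 0.0002  # false acceptance rate: 2 in 10,000 = 1:5000

false_rejects = round(matching_pairs * frr)       # 638 genuine pairs rejected
false_accepts = round(non_matching_pairs * far)   # 2 impostor pairs accepted
print(false_rejects, false_accepts)
```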
Setting the correct threshold depends on the security/convenience balance that is required for the specific use case.
During the initial configuration of the system, two thresholds can be set. If the score is below the bottom threshold, the result is automatically rejected; if it is above the top threshold, it is automatically accepted. Scores between the two thresholds are sent for review to a back-office operator for the final decision.
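The two-threshold policy can be sketched as a three-way decision. The threshold values used in the example are placeholders; real values must be tuned to the deployment's security/convenience balance.

```python
ACCEPT, REJECT, REVIEW = "accept", "reject", "review"

def decide(score, bottom_threshold, top_threshold):
    if score < bottom_threshold:
        return REJECT   # clearly a different person: automatic reject
    if score > top_threshold:
        return ACCEPT   # clearly the same person: automatic accept
    return REVIEW       # borderline: back-office operator makes the call

# Example with placeholder thresholds 30.0 (bottom) and 50.0 (top):
# a score of 40.0 falls between them and goes to manual review.
```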
To add verification to your workflow, please consider the following:
- Image quality - if image quality is low, accuracy of the verification decreases
- Age difference between the images - if the time difference between the capture of the two images is several years, the person's appearance might have changed significantly.
Image vs template usage
When performing verification using images, face detection is always called internally and a template is generated. When using templates, face detection is skipped.
If you do not need the result of the face detection for other purposes you can simply invoke verification with images. This is particularly useful when verification is performed only once during the flow. An example would be a simple selfie vs identity document face comparison.
If you need more data about the face, such as age estimation or passive liveness, the recommended approach is as follows:
- Invoke face detection with all needed attributes and also template extraction enabled
- Cache this template
- Use it for verification
This approach can be used when we want to evaluate the passive liveness and also perform face verification. Calling verify with at least one template reduces processing time.
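The steps above can be sketched as a single flow: one detection call returns both the extra attributes (here, a passive liveness score) and the template, which is then reused for verification. All names, the response shape, and the liveness cutoff are assumptions for illustration, not the actual DOT API.

```python
def detect(image, extract_template=True):
    # Stand-in: pretend detection yields attributes plus a template.
    return {"passive_liveness_score": 0.91,
            "template": image if extract_template else None}

def verify(template, probe_image, threshold=40.0):
    # Stand-in matcher: identical inputs score 100.0, otherwise 0.0.
    score = 100.0 if template == probe_image else 0.0
    return score >= threshold

detection = detect(b"selfie-bytes")          # one call: attributes + template
liveness_ok = detection["passive_liveness_score"] > 0.85
same_person = verify(detection["template"], b"selfie-bytes")  # reuses template
```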
Templates can also be cached at the application level for use cases like login, where the same reference face is needed repeatedly. Please note that templates are incompatible across major product upgrades and must be regenerated by invoking face detection on the source images again.
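One way to handle the regeneration requirement is to key the cache by the algorithm's major version, so a template generated by an older major release is regenerated from the source image instead of being matched against an incompatible one. The version constant and extractor below are assumptions for the sketch, not the actual DOT API.

```python
CURRENT_MAJOR_VERSION = 2    # hypothetical major version of the algorithm
template_cache = {}          # user_id -> (major_version, template)

def extract_template(source_image):
    # Stand-in for the real extractor.
    return ("template-of", source_image)

def get_reference_template(user_id, source_image):
    cached = template_cache.get(user_id)
    if cached is not None and cached[0] == CURRENT_MAJOR_VERSION:
        return cached[1]  # cache hit with a compatible template
    # Miss or version mismatch: re-run extraction on the source image.
    template = extract_template(source_image)
    template_cache[user_id] = (CURRENT_MAJOR_VERSION, template)
    return template
```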
To perform verification on mobile, please check our mobile SDKs. For use on the server, please check the DOT Core Server verify operation.