Face Matching
Face matching is the process of determining whether the images of two faces belong to the same person. This is particularly useful in onboarding scenarios when we want to check if the holder of the document is present during onboarding.
Another typical use case is login, when the user is verified with a previously stored face.
Face matching is provided by the mobile app libraries as well as by the Digital Identity Service (DIS). The DIS provides the face matching functions in two ways: either as dedicated face API functions, or as part of the onboarding API.
| Onboarding API | Face API |
|---|---|
| Face matching is performed when both the customer’s selfie and document image are uploaded and the inspect function is called. This matches the document portrait with the selfie. | Face matching is performed when a probe face photo is provided and a reference photo is passed to the similarity function. |
| DOT Digital Identity Service - Customer Onboarding | DOT Digital Identity Service - Face Biometrics |
| Swagger: customers/{id}/inspect API call | Swagger: faces/{probe_face_id}/similarity API call |
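For illustration, the following Python sketch calls the Face API similarity endpoint mentioned above. The base URL, authentication header, and payload field names (`image.data`, `referenceFace`, `id`, `score`) are assumptions made for this example; consult the linked Swagger documentation for the exact request and response schema.

```python
import base64
import requests

DIS_URL = "https://dis.example.com/api/v1"       # hypothetical deployment URL
HEADERS = {"Authorization": "Bearer <api-key>"}  # authentication depends on your deployment

def image_b64(path: str) -> str:
    """Read an image file and return its Base64-encoded content."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

# Create a probe face from the selfie image.
probe = requests.post(
    f"{DIS_URL}/faces",
    json={"image": {"data": image_b64("selfie.jpg")}},
    headers=HEADERS,
).json()

# Compare the probe face with a reference image (e.g. the document portrait).
similarity = requests.post(
    f"{DIS_URL}/faces/{probe['id']}/similarity",
    json={"referenceFace": {"image": {"data": image_b64("document_portrait.jpg")}}},
    headers=HEADERS,
).json()

print("Similarity score:", similarity["score"])
```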
The Innovatrics face biometric algorithm ranks among the top performers in the NIST FRVT.
Matching steps
In order to verify two faces, the following steps must be performed:
- Face detection - find the position of the face in the image
- Template extraction - compute representation of the face used for matching
- Matching - compare two face templates and output similarity score
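The pipeline can be pictured with the following minimal Python sketch; the `detect_face`, `extract_template`, and `match_templates` callables stand in for whatever SDK or API functions your integration uses and are purely illustrative.

```python
from typing import Any, Callable

def verify_faces(
    reference_image: bytes,
    probe_image: bytes,
    detect_face: Callable[[bytes], Any],                # step 1: face detection
    extract_template: Callable[[Any], bytes],           # step 2: template extraction
    match_templates: Callable[[bytes, bytes], float],   # step 3: matching -> similarity score
    threshold: float,
) -> bool:
    """Run the three verification steps and apply a decision threshold."""
    reference_template = extract_template(detect_face(reference_image))
    probe_template = extract_template(detect_face(probe_image))
    score = match_templates(reference_template, probe_template)
    return score >= threshold
```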
Face detection
The first step when performing face matching is face detection. This is an important step, because there might be no face, or multiple faces, present in the picture. Once a face is detected, it can be used in the matching process. Various face detection modes are available: fast mode provides lower latency, while accurate mode provides more precise detection. Mobile devices only support fast mode; server components are configured to accurate mode by default.
Template extraction
Once the face has been detected, the face template can be generated. These templates can be cached on the application level to speed up matching. Once the reference image is uploaded to the server, the template can be generated and cached. When a user logs in, face detection and template extraction are only performed on the probe image from the user, and the reference template is pulled from the cache. For extraction, fast and accurate modes are available as well. Similarly to detection, only fast mode is supported on mobile devices. Only templates generated by the same mode and product version can be matched. During major product upgrades, templates must be regenerated as mentioned in the respective product changelog.
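As a sketch of the login use case described above, the snippet below caches the reference template at enrollment time so that only the probe image needs detection and extraction at login. The `detect_and_extract` and `match_templates` callables are illustrative placeholders for the concrete SDK or API calls.

```python
from typing import Callable, Dict

# In-memory cache of reference templates keyed by user id (use persistent storage in practice).
reference_templates: Dict[str, bytes] = {}

def enroll(user_id: str, reference_image: bytes,
           detect_and_extract: Callable[[bytes], bytes]) -> None:
    """Detect the face in the reference image, extract its template and cache it."""
    reference_templates[user_id] = detect_and_extract(reference_image)

def login(user_id: str, probe_image: bytes,
          detect_and_extract: Callable[[bytes], bytes],
          match_templates: Callable[[bytes, bytes], float],
          threshold: float) -> bool:
    """Only the probe image is processed at login; the reference template comes from the cache."""
    probe_template = detect_and_extract(probe_image)
    return match_templates(reference_templates[user_id], probe_template) >= threshold
```

Remember that cached templates become invalid after a major product upgrade and must be regenerated from the source images.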
Matching
Matching is a very fast operation, and it calculates the similarity of two templates, providing a matching score. The higher the score, the more similar the faces.
Matching threshold
The final decision whether the two faces belong to the same person should be determined by the similarity score and a threshold. If the score is above the threshold, the result can be interpreted as accepted; if the score is below the threshold, it is rejected.
The following characteristics have been measured on our ICAO face quality testing dataset using DOT Digital Identity Service (accurate extraction mode):
| FAR level | FAR [%] | FRR [%] | Score threshold |
|---|---|---|---|
| 1:500 | 0.200 | 0.020 | 0.252 |
| 1:1000 | 0.100 | 0.022 | 0.271 |
| 1:5000 | 0.020 | 0.040 | 0.322 |
| 1:10000 | 0.010 | 0.058 | 0.345 |
| 1:50000 | 0.002 | 0.171 | 0.413 |
| EER | 0.034 | 0.034 | 0.304 |
The following characteristics have been measured on our ICAO face quality testing dataset using the DOT Face mobile library for Android and iOS (fast extraction mode):
| FAR level | FAR [%] | FRR [%] | Score threshold |
|---|---|---|---|
| 1:500 | 0.200 | 0.867 | 0.276 |
| 1:1000 | 0.100 | 1.205 | 0.297 |
| 1:5000 | 0.020 | 2.483 | 0.358 |
| 1:10000 | 0.010 | 3.216 | 0.378 |
| 1:50000 | 0.002 | 5.250 | 0.428 |
| EER | 0.573 | 0.573 | 0.243 |
Example
If we require a FAR level of 1:5000 with the accurate extraction mode, we have to set the threshold for the score returned by the matching function to 0.322. If we have a representative set of 10,000 matching face pairs, statistically 4 will in this case be incorrectly marked as not matching (FRR of 0.040%), even though they are. If we have 10,000 non-matching pairs, statistically 2 will be wrongly marked as matching (FAR of 0.020%).
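The arithmetic behind this example can be reproduced directly from the accurate-mode table above:

```python
# Values taken from the accurate extraction mode table at the 1:5000 FAR level.
far = 0.020 / 100   # false acceptance rate
frr = 0.040 / 100   # false rejection rate
pairs = 10_000

print(round(pairs * frr))  # matching pairs incorrectly rejected     -> 4
print(round(pairs * far))  # non-matching pairs incorrectly accepted -> 2
```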
Setting the correct threshold depends on the security/convenience balance that is required for the specific use case.
During the initial configuration of the system, two thresholds can be set. If the score is below the bottom threshold, the result is automatically set to reject. If the score is above the top threshold it is automatically accepted. If the score is between the two thresholds, images go for review to a back office operator for a final decision.
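A minimal sketch of such a two-threshold policy might look as follows; the threshold values are placeholders only and should be chosen per deployment based on the measured FAR/FRR characteristics.

```python
def route_match(score: float,
                bottom_threshold: float = 0.30,   # placeholder value
                top_threshold: float = 0.45) -> str:  # placeholder value
    """Map a similarity score to an automatic decision or manual review."""
    if score < bottom_threshold:
        return "reject"          # automatically rejected
    if score > top_threshold:
        return "accept"          # automatically accepted
    return "manual-review"       # sent to a back office operator for a final decision
```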
NOTE
To add matching to your workflow, please consider the following:
- Image quality - if the image quality is low, the accuracy of the matching decreases
- Age difference between the images - if the time difference between the capture of the two images is several years, the person’s appearance might have changed significantly.
Image vs template usage
When performing matching using images, face detection is always called internally and a template is generated. When using templates, face detection is skipped.
Using images
If you do not need the result of the face detection for other purposes, you can simply invoke matching with images. This is particularly useful when matching is performed only once during the flow. An example would be a simple selfie vs identity document face comparison.
Using templates
If you need more data about the face, such as age estimation or passive liveness, the recommended approach is as follows:
- Invoke face detection with all the needed attributes and also with template extraction enabled
- Cache this template
- Use it for matching
This approach can be used when we want to evaluate passive liveness and also perform face matching. Calling verify with at least one template reduces processing time.
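The recommended flow can be sketched as follows; `detect_face_with_attributes`, the returned field names, and `match_templates` are hypothetical stand-ins for the concrete detection, passive liveness, and matching calls in your integration.

```python
from typing import Any, Callable, Dict

def onboard_and_match(
    selfie: bytes,
    document_portrait_template: bytes,
    detect_face_with_attributes: Callable[[bytes], Dict[str, Any]],  # detection + liveness + template
    match_templates: Callable[[bytes, bytes], float],
    liveness_threshold: float,
    match_threshold: float,
) -> bool:
    """Run detection once, reuse its template for matching, and evaluate passive liveness."""
    detection = detect_face_with_attributes(selfie)   # single call returns attributes and template
    selfie_template = detection["template"]           # cache this template if it will be reused

    is_live = detection["passive_liveness_score"] >= liveness_threshold
    is_match = match_templates(selfie_template, document_portrait_template) >= match_threshold
    return is_live and is_match
```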
Templates can also be cached on the application level for use cases like login, where the same reference face is needed. Please note that templates are incompatible across major product upgrades, and must be regenerated by invoking the face detection on the source images again.