Passive Liveness Detection - Presentation Attack Detection

Passive liveness detection is a presentation attack detection (PAD) method that determines whether the face captured in a photo is that of a real, live person, without requiring the user to perform any additional challenge (unlike active liveness detection).

This presentation attack detection method is recommended for applications where user experience and a seamless app flow are paramount.

DOT passive liveness detection achieved iBeta Level 2 PAD accreditation in accordance with ISO/IEC 30107-3. See the confirmation letter and the test specification for more details.

Passive Liveness Service

Besides being an integral feature of the Digital Identity Service, passive liveness detection is also provided by our dedicated hosted service, the Passive Liveness Service, which runs in the AWS cloud.

Technical documentation

Details on using liveness detection in the Digital Identity Service can be found in the technical documentation under the Liveness functionality.

The documentation of the Passive Liveness Service can be found here.

Photo Capture Method for Passive Liveness Detection

It is important that the user cannot upload a photo themselves during the onboarding process. It must be strictly enforced that the selfie is taken at the time of enrollment. We strongly recommend using the Innovatrics face auto capture component, whether the mobile or the web one. Allowing the user to upload a selfie opens the door to synthetic faces or image manipulation that cannot be detected. Enforcing the use of the camera allows the liveness check to detect screen attacks, while injection attacks are detected by Video Injection Prevention.

Passive Liveness Image Requirements

The first step of passive liveness detection is detecting the face in the image. The image must fulfill quality requirements so that the face can be detected and liveness determined. The requirements for the image are listed below (a pre-check sketch follows the list):

  • image size at least 600x600 pixels
  • distance between the eyes at least 120 pixels
  • shorter side of the image should be at least 4 times the distance between the eyes in pixels
  • face should be near the center of the image
  • not too strong backlight or sidelight
  • no overexposed or underexposed images
  • ICAO attributes should comply with the table at the bottom of this page
  • JPEG capture quality should be at least 80%
  • image should not be cropped or manipulated between the capture and the processing step
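
The geometric requirements above can be pre-checked before the image is sent for liveness evaluation. The following is a minimal sketch in Python; it assumes the eye coordinates are already available from a face detector and covers only the resolution and eye-distance rules, not lighting, exposure or ICAO compliance.

```python
from math import dist

def meets_geometry_requirements(width: int, height: int,
                                left_eye: tuple[float, float],
                                right_eye: tuple[float, float]) -> bool:
    """Check the purely geometric image requirements listed above."""
    eye_distance = dist(left_eye, right_eye)
    return (
        width >= 600 and height >= 600              # image at least 600x600 px
        and eye_distance >= 120                     # inter-eye distance at least 120 px
        and min(width, height) >= 4 * eye_distance  # shorter side >= 4x the eye distance
    )

# Example: a 1080x1440 selfie with the eyes 240 px apart passes the check.
print(meets_geometry_requirements(1080, 1440, (420.0, 600.0), (660.0, 600.0)))  # True
```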

Presentation Attack Detection Types

Our algorithm has been trained to detect real faces and also various kinds of presentation attacks. These include:

  • Screen replay - faces presented to camera on a screen
  • Printed faces on a paper presented to camera
  • 2D masks - faces printed on a cardboard with cutouts and worn by a person’s face
  • 3D masks - silicone masks, dolls and mannequins

(Synthetic face images and photo manipulation can be detected with Video Injection Detection)

We recognize that new attacks might emerge, so we regularly retrain the models to incorporate new attack vectors. It is also important that our customers keep the installed components up to date as we release new versions.

Evaluating Passive Liveness Scores - Setting a Correct Threshold

The result of the passive liveness detection algorithm is a score in the range 0 to 1. It should not be mistaken for a probability percentage; the behaviour of the score is nonlinear.

Whether a face photo is a bona-fide presentation (genuine face) or an attack presentation is decided by comparing the passive liveness score with a threshold. If the score is above the threshold, the photo is classified as genuine and accepted. If the score is below the threshold, it is classified as a fraud and rejected.
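
As a minimal illustration of this decision, the sketch below classifies a single score against a threshold. The threshold value 0.895 is taken from the example further down; it is not a general recommendation (see the threshold tables below).

```python
def classify(score: float, threshold: float = 0.895) -> str:
    """Accept scores at or above the threshold as bona-fide, reject the rest."""
    return "bona-fide (accept)" if score >= threshold else "attack (reject)"

print(classify(0.93))  # bona-fide (accept)
print(classify(0.41))  # attack (reject)
```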

Passive Liveness Scores, Error Rates and Accuracy

  • APCER: Attack presentation images classified as bona-fide presentations are false accepts. The rate of such errors on a given dataset at a given threshold is the Attack Presentation Classification Error Rate (APCER, formerly FAR).
  • BPCER: Bona-fide presentation images classified as attacks are false rejects. The rate of such errors on a given dataset at a given threshold is the Bona-fide Presentation Classification Error Rate (BPCER, formerly FRR).

Example:

Imagine measurements made on a dataset of 10,000 bona-fide presentation photos (real faces) and 1,000 attack presentation photos. A threshold of 0.895, which is at the working point of 1% APCER, results in 3.7% BPCER. That means 10 attack presentation photos are marked as bona-fide (false accepts) and 370 bona-fide photos are marked as attacks (false rejects).
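
The arithmetic of this example follows directly from the definitions above; the sketch below reproduces it using the dataset sizes and error rates from the example.

```python
bona_fide_total = 10_000   # real-face photos in the example dataset
attack_total = 1_000       # attack-presentation photos in the example dataset

apcer = 0.01   # 1% of attacks accepted as bona-fide at threshold 0.895
bpcer = 0.037  # 3.7% of bona-fide photos rejected as attacks

false_accepts = attack_total * apcer      # attacks classified as bona-fide
false_rejects = bona_fide_total * bpcer   # bona-fide photos classified as attacks

print(f"false accepts: {false_accepts:.0f}")  # 10
print(f"false rejects: {false_rejects:.0f}")  # 370
```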

Datasets and Extrapolating Measurements to Real World Usage

The measured results and accuracy of the liveness detection depend on the dataset used to calculate them. If the dataset is large and representative of the required use case, the measurement results can be extrapolated to real-world conditions.

However, real-world conditions, and thus the accuracy of the algorithm, differ from project to project. The behaviour and accuracy of the liveness algorithm may be affected by the demographics of the user population, the prevailing daylight conditions, the quality of the phones used by the population, and the configuration of the auto capture components in the client app.

The Innovatrics 2023 validation dataset consists of 125,000 photos, with bona-fide and attack presentation images represented equally. Unlike the previously used dataset, this one contains sophisticated presentation attack attempts in order to better quantify the security performance.

Recommendations for Passive Liveness Deployment in Projects

For the pilot phase of a project, it is recommended to set the threshold according to our measurements below. If helpdesk personnel are available and the implementation allows it, two thresholds can be set: one for automatic rejection and one for automatic acceptance. Photos scoring between these two thresholds are evaluated manually by the personnel (see the sketch below). Once thousands of photos have been processed, an accuracy measurement should be made on the collected dataset and the thresholds adjusted.
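
A sketch of the two-threshold flow is shown below. The threshold values are placeholders for illustration only; the actual values should come from the tables below and from measurements on your own data.

```python
def route(score: float, reject_below: float = 0.80, accept_above: float = 0.90) -> str:
    """Route a liveness score to automatic rejection, manual review, or automatic acceptance."""
    if score < reject_below:
        return "automatic rejection"
    if score > accept_above:
        return "automatic acceptance"
    return "manual review by helpdesk personnel"

for score in (0.65, 0.85, 0.95):
    print(score, "->", route(score))
```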

For onboarding and login use cases, passive liveness detection must always be combined with face matching. This ensures not only that the presented face is real, but also that it belongs to the expected person.

Thresholds for Passive Liveness

The tables below provide measured thresholds for defined performance levels on the updated Innovatrics 2024-06 dataset, which was made more challenging than the previous version.

Digital Identity Service 1.42.1 and above (updated 2024-06 dataset, UNIVERSAL model)

Use case type                           | Threshold | Performance
Convenience (minimum rejected attempts) | 0.800     | 2.1% APCER @ 1% BPCER
Balanced (equal error rate)             | 0.816     | 1.5% both APCER & BPCER
Security (minimum accepted frauds)      | 0.834     | 2.2% BPCER @ 1% APCER

Digital Identity Service 1.37 till 1.42.0 (updated 2024-06 dataset, UNIVERSAL model)

Use case type                           | Threshold | Performance
Convenience (minimum rejected attempts) | 0.803     | 2.8% APCER @ 1% BPCER
Balanced (equal error rate)             | 0.829     | 1.8% both APCER & BPCER
Security (minimum accepted frauds)      | 0.858     | 3.7% BPCER @ 1% APCER

Digital Identity Service 1.29 till 1.36.1 (updated 2024-06 dataset, UNIVERSAL model)

The table below shows the thresholds for the new UNIVERSAL liveness model, which has been the default since version 1.29. It is possible to revert to the older STANDARD model, which is the same as in v1.28; for configuration, see the table for v1.28.

Use case type                           | Threshold | Performance
Convenience (minimum rejected attempts) | 0.805     | 4.6% APCER @ 1% BPCER
Balanced (equal error rate)             | 0.85      | 2.6% both APCER & BPCER
Security (minimum accepted frauds)      | 0.895     | 8.2% BPCER @ 1% APCER

Speed Comparison of the Models

The new UNIVERSAL model is slower than the STANDARD model as a tradeoff for its higher accuracy. On a reference AWS c6a.large machine, the STANDARD model takes 180 ms, while the UNIVERSAL model takes 500 ms.

Digital Identity Service 1.22 till 1.28 (updated 2024-06 dataset, STANDARD model)

Use case type                           | Threshold | Performance
Convenience (minimum rejected attempts) | 0.83      | 8.8% APCER @ 1% BPCER
Balanced (equal error rate)             | 0.87      | 3.6% both APCER & BPCER
Security (minimum accepted frauds)      | 0.905     | 10.2% BPCER @ 1% APCER

Digital Identity Service 1.10 till 1.19 (updated 2024-06 dataset, STANDARD model)

Use case type                           | Threshold | Performance
Convenience (minimum rejected attempts) | 0.82      | 22% APCER @ 1% BPCER
Balanced (equal error rate)             | 0.885     | 7.6% both APCER & BPCER
Security (minimum accepted frauds)      | 0.93      | 26% BPCER @ 1% APCER

Passive liveness on Mobile Devices

For native mobile use cases where liveness detection must be performed fully offline, without server connectivity, the Mobile SDK libraries provide a lightweight version of passive liveness detection. Server-side liveness detection offers higher security and a better user experience and should be used whenever possible. The passive liveness score can be obtained from the quality attributes returned by the Face Capture UI component.

Evaluating the ICAO Image Quality in Mobile Libraries

The PassiveLivenessQualityProvider used in the mobile auto capture components can ensure that all these conditions are met during capture. These are the optimal values for liveness detection required by the quality provider (a range-check sketch follows the table):

Attribute               | Min   | Max
BRIGHTNESS              | 0.11  | 0.75
CONTRAST                | 0.25  | 0.8
SHARPNESS               | 0.3   | 1
UNIQUE_INTENSITY_LEVELS | 0.525 | 1
PITCH_ANGLE             | -15   | 15
YAW_ANGLE               | -20   | 20
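
The ranges in the table can also be verified in application code as a sanity check, for example when the capture component is configured manually. The sketch below is illustrative only; the attribute values are assumed to come from the auto capture component, and the dictionary simply mirrors the table above.

```python
# Allowed ranges for liveness-relevant quality attributes (mirrors the table above).
QUALITY_RANGES = {
    "BRIGHTNESS": (0.11, 0.75),
    "CONTRAST": (0.25, 0.8),
    "SHARPNESS": (0.3, 1.0),
    "UNIQUE_INTENSITY_LEVELS": (0.525, 1.0),
    "PITCH_ANGLE": (-15.0, 15.0),
    "YAW_ANGLE": (-20.0, 20.0),
}

def within_quality_ranges(attributes: dict[str, float]) -> bool:
    """Return True when every measured attribute falls inside its allowed range."""
    return all(lo <= attributes[name] <= hi
               for name, (lo, hi) in QUALITY_RANGES.items())

sample = {"BRIGHTNESS": 0.5, "CONTRAST": 0.6, "SHARPNESS": 0.7,
          "UNIQUE_INTENSITY_LEVELS": 0.9, "PITCH_ANGLE": 3.0, "YAW_ANGLE": -5.0}
print(within_quality_ranges(sample))  # True
```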

Passive liveness threshold - on client

The passive liveness models on the client (in the DOT Face mobile libraries) are lightweight and therefore provide different FAR/FRR statistics. On-client passive liveness detection is generally not recommended; it may be used for features like face login, but not for authorization.

Thresholds for mobile libraries DOT Face (new 2023 dataset)

Use case type                           | Threshold | Performance
Convenience (minimum rejected attempts) | 0.8       | 10% APCER @ 1% BPCER
Balanced (equal error rate)             | 0.88      | 3.2% both APCER & BPCER
Security (minimum accepted frauds)      | 0.93      | 11% BPCER @ 1% APCER