Performance measurements

The performance of the Digital Identity Service (DIS) has been measured on the AWS platform to assist with infrastructure planning, focusing on an exhaustive identity verification scenario. All testing images were generated by the DOT Mobile and Web components.

Evaluation

Identity verification process (a sketch of the corresponding API calls follows this list):

  • Upload selfie
  • Check passive liveness on selfie
  • Upload and OCR both sides of a Slovak national ID card
  • Get customer, inspect customer, and inspect customer document requests
  • Get the document front and back pages
  • Delete customer
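For illustration, the scenario can be expressed as a sequence of REST calls against DIS. The endpoint paths, payloads, and base URL in the sketch below are assumptions chosen for readability, not a reproduction of the actual DIS API; refer to the DIS API reference for the authoritative contract.

```python
# A minimal sketch of the measured scenario as a sequence of REST calls.
# Endpoint paths, payloads and the base URL are illustrative assumptions only;
# consult the DIS API reference for the real contract.
import requests

DIS_URL = "http://dis.example.com/api/v1"  # hypothetical base URL


def run_verification(selfie: bytes, id_front: bytes, id_back: bytes) -> None:
    s = requests.Session()

    # 1. Create customer
    cid = s.post(f"{DIS_URL}/customers").json()["id"]

    # 2. Upload selfie and evaluate passive liveness on it
    s.put(f"{DIS_URL}/customers/{cid}/selfie", data=selfie)
    s.put(f"{DIS_URL}/customers/{cid}/liveness")
    s.post(f"{DIS_URL}/customers/{cid}/liveness/selfies")
    s.post(f"{DIS_URL}/customers/{cid}/liveness/evaluation")

    # 3. Upload and OCR both sides of the ID card
    s.put(f"{DIS_URL}/customers/{cid}/document")
    s.put(f"{DIS_URL}/customers/{cid}/document/pages", data=id_front)
    s.put(f"{DIS_URL}/customers/{cid}/document/pages", data=id_back)

    # 4. Inspection and retrieval requests
    s.post(f"{DIS_URL}/customers/{cid}/document/inspect")
    s.post(f"{DIS_URL}/customers/{cid}/inspect")
    s.get(f"{DIS_URL}/customers/{cid}")
    s.get(f"{DIS_URL}/customers/{cid}/document/pages/front")
    s.get(f"{DIS_URL}/customers/{cid}/document/pages/back")

    # 5. Clean up
    s.delete(f"{DIS_URL}/customers/{cid}")
```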

A total of 300 full identity verification processes were evaluated. With 3 concurrent threads, the throughput reached 0.46 verifications per second.
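This throughput is consistent with the scenario latency: three client threads, each running one scenario at a time, give roughly threads divided by scenario duration. A quick sanity check using the average scenario time from the table below:

```python
# Rough sanity check of the reported throughput: with 3 client threads each
# running one scenario at a time, throughput ~= threads / scenario duration.
threads = 3
scenario_avg_s = 6.423  # average scenario duration from the table below
throughput = threads / scenario_avg_s
print(f"{throughput:.2f} verifications/s")  # ~0.47, close to the measured 0.46
```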

| Operation | Median [ms] | Average [ms] | 95th percentile [ms] |
| --- | --- | --- | --- |
| Create customer | 15.00 | 18.23 | 24.45 |
| Provide customer selfie | 130.50 | 145.10 | 227.80 |
| Create liveness | 11.00 | 12.26 | 25.25 |
| Passive liveness selfie with link | 24.00 | 26.97 | 50.00 |
| Evaluate passive liveness | 439.50 | 510.47 | 670.45 |
| Create document | 11.00 | 12.02 | 18.00 |
| Create document front page | 2230.00 | 2244.91 | 2758.00 |
| Create document back page | 2349.50 | 2375.50 | 2912.25 |
| Inspect document | 121.50 | 135.43 | 218.90 |
| Inspect customer | 705.50 | 735.96 | 970.80 |
| Get customer | 46.50 | 50.95 | 91.35 |
| Get document front page | 61.50 | 68.80 | 112.45 |
| Get document back page | 62.00 | 70.24 | 110.90 |
| Delete customer | 14.00 | 16.20 | 34.15 |
| Identity verification scenario | 6334.00 | 6423.03 | 7434.00 |
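Note that the full-scenario average is, up to rounding, the sum of the per-operation averages, which is consistent with the operations being executed sequentially within each verification. A quick check of that arithmetic:

```python
# Per-operation averages (ms) from the table above; their sum matches the
# full-scenario average (6423.03 ms) up to rounding, consistent with the
# operations running sequentially within one verification.
operation_avgs_ms = [
    18.23,    # Create customer
    145.10,   # Provide customer selfie
    12.26,    # Create liveness
    26.97,    # Passive liveness selfie with link
    510.47,   # Evaluate passive liveness
    12.02,    # Create document
    2244.91,  # Create document front page
    2375.50,  # Create document back page
    135.43,   # Inspect document
    735.96,   # Inspect customer
    50.95,    # Get customer
    68.80,    # Get document front page
    70.24,    # Get document back page
    16.20,    # Delete customer
]
print(round(sum(operation_avgs_ms), 2))  # 6423.04
```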

During the evaluation, the CPU utilization of DIS peaked at approximately 90%, while memory usage remained stable.

Configuration

Digital Identity Service

  • Version: 1.40.0
  • Deployment: DIS runs as a Docker container on an AWS machine with resources equivalent to an AWS c6a.xlarge instance (4 vCPUs, 8 GiB of memory).

The server uses the default application configuration with SSE and AVX optimizations enabled. The Docker image is built using the Dockerfile provided in the distribution package.

Redis

  • Version: 7.1.0
  • Deployment: AWS ElastiCache cluster with a single cache.m6g.large node.

Testing Tool - JMeter

  • Version: 5.5
  • Deployment: JMeter runs as a Docker container on an AWS machine with resources equivalent to an AWS c6a.xlarge instance.

Testing Setup

The setup involved a single instance of DIS connected to a Redis cluster running on a separate machine. The testing client was deployed as a single instance generating requests across multiple threads. All services were deployed within the same AWS region to minimize network latency.
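The load pattern of the testing client can be approximated by a simple concurrent driver. The sketch below is only an approximation of the JMeter test plan; it reuses the hypothetical run_verification helper from the scenario sketch above.

```python
# Approximation of the testing client: 3 worker threads running full
# verification scenarios back to back until 300 scenarios have completed.
# run_verification() is the hypothetical helper sketched earlier.
import time
from concurrent.futures import ThreadPoolExecutor

TOTAL_SCENARIOS = 300
THREADS = 3


def measure_throughput(selfie: bytes, id_front: bytes, id_back: bytes) -> float:
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=THREADS) as pool:
        futures = [pool.submit(run_verification, selfie, id_front, id_back)
                   for _ in range(TOTAL_SCENARIOS)]
        for f in futures:
            f.result()                  # propagate any request errors
    elapsed = time.monotonic() - start
    return TOTAL_SCENARIOS / elapsed    # verifications per second
```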

Scaling the infrastructure to the estimated number of transaction requests

Example Use Case

The distribution of user requests generating server transactions was measured across multiple installations of a fintech use case in European countries. It reflects the behavior of a specific population for a specific use case and cannot be generalized to all use cases. Integrators of DIS are strongly encouraged to perform their own measurements for their use case.

A hypothetical daily volume of 1000 transactions following this behavior can be split into 10-minute slots across an average working day. The distribution is shown in the following chart:

Daily distribution

It can be seen that during daytime hours there would be, on average, fewer than 15 requests per 10 minutes, meaning any machine would be idle most of the time if only 1000 transactions are processed daily.

The peak load is around 40 requests per 10 minutes. These may, of course, arrive in a short burst. Whether a throughput of 0.5 or 1 req/s is needed to handle such a burst depends on the desired latency of the transaction response.
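A rough sizing sketch for this example follows; the 12-hour working window and the burst-drain targets are illustrative assumptions rather than measured values.

```python
# Rough sizing sketch for the example above. The 12-hour working window and
# the burst-drain targets are illustrative assumptions, not measured values.
DAILY_TRANSACTIONS = 1000
WORKING_WINDOW_H = 12                           # assumed span of "day hours"
SLOT_MINUTES = 10

slots = WORKING_WINDOW_H * 60 // SLOT_MINUTES   # 72 ten-minute slots
print(f"average per slot: {DAILY_TRANSACTIONS / slots:.1f}")  # ~13.9, i.e. < 15

# Throughput needed to drain a peak 40-request burst within a target window:
PEAK_SLOT_REQUESTS = 40
for target_s in (40, 80):                       # illustrative latency targets
    print(f"drain {PEAK_SLOT_REQUESTS} requests in {target_s}s: "
          f"{PEAK_SLOT_REQUESTS / target_s:.1f} req/s")       # 1.0 and 0.5 req/s
```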