Performance measurements

The performance of Digital Identity Service (DIS) has been measured on the AWS platform to assist with infrastructure planning, focusing on the full identity verification scenario. All testing images were generated by the DOT Mobile and Web components.

Evaluation

Identity verification process:

  • Upload selfie
  • Check passive liveness on selfie
  • Upload and OCR both sides of a Slovak national ID card
  • Get customer, inspect customer, and inspect document requests
  • Get document front & back page
  • Delete customer
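
The scenario can be scripted directly against the DIS REST API. The sketch below is a minimal illustration in Python using the requests library; the base URL, endpoint paths, and payload shapes are simplified assumptions derived from the operation names above, so the exact resources and request bodies should be taken from the API reference of the deployed DIS version.

```python
import base64
import requests

DIS_URL = "http://localhost:8080/api/v1"   # base URL is an assumption for this sketch

def b64(path):
    # Read an image file and return it base64-encoded for the request payload.
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

s = requests.Session()

# Create customer
customer_id = s.post(f"{DIS_URL}/customers").json()["id"]
customer = f"{DIS_URL}/customers/{customer_id}"

# Upload selfie, create a liveness record, and evaluate passive liveness on the selfie
s.put(f"{customer}/selfie", json={"image": {"data": b64("selfie.jpg")}})
s.put(f"{customer}/liveness")
s.put(f"{customer}/liveness/selfies", json={"selfieOrigin": {"link": f"{customer}/selfie"}})
liveness = s.post(f"{customer}/liveness/evaluation", json={"type": "PASSIVE_LIVENESS"}).json()

# Upload and OCR both sides of the ID card
s.put(f"{customer}/document", json={"advice": {"classification": {"countries": ["SVK"]}}})
s.put(f"{customer}/document/pages", json={"image": {"data": b64("id_front.jpg")}})
s.put(f"{customer}/document/pages", json={"image": {"data": b64("id_back.jpg")}})

# Get customer, inspect customer, and inspect document
profile = s.get(customer).json()
customer_inspection = s.post(f"{customer}/inspect").json()
document_inspection = s.post(f"{customer}/document/inspect").json()

# Get document front and back page
front = s.get(f"{customer}/document/pages/front").json()
back = s.get(f"{customer}/document/pages/back").json()

# Delete customer
s.delete(customer)
```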

A total of 750 full identity verification processes were evaluated. With 3 concurrent threads, throughput reached 0.62 verifications per second.

Operation                                Median [ms]    Average [ms]   95% Line [ms]
Create customer                                 9.00           12.11           16.00
Provide customer selfie                       116.50          134.82          194.95
Create liveness                                 7.00            7.59           12.00
Passive liveness selfie with link              14.00           15.01           21.00
Evaluate passive liveness                     404.00          413.89          456.00
Create document                                 7.00            7.77           12.00
Create document front page                   1374.50         1357.25         1650.00
Create document back page                    1457.00         1493.06         1928.85
Inspect document                              388.00          398.06          569.80
Inspect customer                              608.50          622.54          739.00
Get customer                                   30.00           33.36           51.95
Get document front page                        57.00           59.36           78.00
Get document back page                         57.00           60.03           79.00
Delete customer                                10.00           10.80           15.00
Identity verification scenario               4633.50         4625.66         5483.85
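
The measured throughput is consistent with the scenario latency: with 3 threads, each running one verification at a time, an average scenario duration of about 4.63 s gives a theoretical ceiling of roughly 0.65 verifications per second, and the measured 0.62 per second reflects the remaining client-side and network overhead. A back-of-the-envelope check using only figures from the table above:

```python
threads = 3
scenario_avg_s = 4.62566        # average identity verification scenario time, in seconds
ceiling = threads / scenario_avg_s
print(f"theoretical throughput ceiling: {ceiling:.2f} verifications/s")  # ~0.65 vs. 0.62 measured
```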

During the evaluation, CPU utilization of DIS peaked at approximately 85%, while memory usage remained stable.

Configuration

Digital Identity Service

  • Version: 1.44.0
  • Deployment: DIS runs as a Docker container on an AWS machine with resources equivalent to a c6a.xlarge instance.

The server uses the default application configuration with SSE and AVX optimizations enabled. The Docker image is built using the Dockerfile provided in the distribution package.
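
For orientation, an equivalent single-node deployment can be sketched with the Docker SDK for Python. The image tag and published port below are placeholders (the image should be the one built from the Dockerfile in the distribution package); the CPU and memory limits mirror a c6a.xlarge instance (4 vCPUs, 8 GiB RAM).

```python
import docker  # Docker SDK for Python

client = docker.from_env()

# Run DIS with resources matching an AWS c6a.xlarge instance (4 vCPUs, 8 GiB RAM).
# "dis:1.44.0" and port 8080 are placeholders for this sketch.
client.containers.run(
    "dis:1.44.0",
    detach=True,
    nano_cpus=4_000_000_000,   # 4 CPUs, expressed in units of 1e-9 CPUs
    mem_limit="8g",
    ports={"8080/tcp": 8080},
)
```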

Redis

  • Version: 7.1.0
  • Deployment: AWS ElastiCache cluster with one cache.m6g.large node.

Testing Tool - JMeter

  • Version: 5.5
  • Deployment: JMeter runs as a Docker container on an AWS machine with resources equivalent to a c6a.xlarge instance.

Testing Setup

The setup consisted of a single DIS instance connected to a Redis cluster running on a separate machine. The testing client was deployed as a single instance generating requests across multiple threads. All services were deployed within the same AWS region to minimize network latency.

Scaling the infrastructure to the estimated number of transaction requests

Example Use Case

The distribution of user requests generating server transactions has been measured across multiple installations for the fintech use case in European countries. It reflects the behavior of a particular population for a particular use case and cannot be generalized to all use cases. Integrators of DIS are strongly encouraged to make their own measurements for their use case.

A hypothetical daily load of 1000 transactions following this behavior can be split into 10-minute slots across an average working day. The distribution is shown in the following chart:

Daily distribution

During daytime hours there would be, on average, fewer than 15 requests per 10 minutes, meaning the machine would be idling most of the time if only 1000 transactions are processed daily.

The peak load is around 40 requests per 10 minutes. These may, of course, arrive in a short burst. Whether a throughput of 0.5 or 1 request per second is needed to handle such a burst depends on the desired latency of the transaction response.
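
Putting the figures together: the observed peak of about 40 requests per 10 minutes corresponds to roughly 0.07 requests per second on average, well below the measured 0.62 verifications per second of a single instance; if the entire peak arrived as one burst, one instance would drain it in about a minute, with the last request waiting correspondingly long. The rough estimate below uses only numbers from this document; the single-burst worst case is a simplifying assumption.

```python
# Rough sizing estimate; inputs come from the measurements in this document,
# the single-burst worst case is a simplifying assumption.
peak_requests_per_10min = 40
throughput_per_instance = 0.62                     # verifications per second (3 threads, c6a.xlarge)

average_peak_rate = peak_requests_per_10min / 600  # ~0.07 requests per second
load_fraction = average_peak_rate / throughput_per_instance

# Worst case: the whole 10-minute peak arrives as a single burst.
burst_drain_time_s = peak_requests_per_10min / throughput_per_instance

print(f"average peak rate: {average_peak_rate:.3f} req/s")
print(f"fraction of one instance used at the peak: {load_fraction:.2f}")
print(f"time to drain a full burst on one instance: {burst_drain_time_s:.0f} s")
```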