Performance measurements
The performance of the Digital Identity Service (DIS) has been measured on the AWS platform to assist with infrastructure planning, focusing on the exhaustive identity verification scenario. All testing images were generated by the DOT Mobile and Web components.
Evaluation
Identity verification process:
- Upload selfie
- Check passive liveness on selfie
- Upload and OCR both sides of a Slovak national ID card
- Issue the Get customer, Inspect customer, and Inspect document requests
- Get document front & back page
- Delete customer
A total of 750 full identity verification processes were evaluated. With 3 concurrent threads, the throughput reached 0.62 verifications per second.
Operation | Median [ms] | Average [ms] | 95th Percentile [ms] |
---|---|---|---|
Create customer | 9.00 | 12.11 | 16.00 |
Provide customer selfie | 116.50 | 134.82 | 194.95 |
Create liveness | 7.00 | 7.59 | 12.00 |
Passive liveness selfie with link | 14.00 | 15.01 | 21.00 |
Evaluate passive liveness | 404.00 | 413.89 | 456.00 |
Create document | 7.00 | 7.77 | 12.00 |
Create document front page | 1374.50 | 1357.25 | 1650.00 |
Create document back page | 1457.00 | 1493.06 | 1928.85 |
Inspect document | 388.00 | 398.06 | 569.80 |
Inspect customer | 608.50 | 622.54 | 739.00 |
Get customer | 30.00 | 33.36 | 51.95 |
Get document front page | 57.00 | 59.36 | 78.00 |
Get document back page | 57.00 | 60.03 | 79.00 |
Delete customer | 10.00 | 10.80 | 15.00 |
Identity verification scenario | 4633.50 | 4625.66 | 5483.85 |
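As a sanity check, the per-operation medians from the table above can be summed and compared with the measured end-to-end scenario median, and the scenario time can be converted into an ideal throughput for 3 threads. This is a minimal sketch using only values copied from the table; the residual gaps are attributable to client-side and network overhead:

```python
# Median latencies [ms] copied from the table above (individual steps only).
step_medians_ms = {
    "Create customer": 9.00,
    "Provide customer selfie": 116.50,
    "Create liveness": 7.00,
    "Passive liveness selfie with link": 14.00,
    "Evaluate passive liveness": 404.00,
    "Create document": 7.00,
    "Create document front page": 1374.50,
    "Create document back page": 1457.00,
    "Inspect document": 388.00,
    "Inspect customer": 608.50,
    "Get customer": 30.00,
    "Get document front page": 57.00,
    "Get document back page": 57.00,
    "Delete customer": 10.00,
}

total_steps_ms = sum(step_medians_ms.values())
scenario_median_ms = 4633.50  # measured end-to-end scenario median

# With N concurrent threads, each running roughly one scenario at a time,
# the ideal throughput is N / scenario_time.
threads = 3
ideal_throughput = threads / (scenario_median_ms / 1000)

print(f"sum of step medians: {total_steps_ms:.2f} ms")
print(f"scenario median:     {scenario_median_ms:.2f} ms")
print(f"ideal throughput:    {ideal_throughput:.2f} verifications/s")
```

The step medians sum to about 4.54 s versus the 4.63 s scenario median, and the ideal 0.65 verifications/s is close to the measured 0.62, so the per-operation figures are consistent with the end-to-end result.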
During the evaluation, the CPU utilization of DIS peaked at approximately 85%, while memory usage remained stable.
Configuration
Digital Identity Service
- Version: 1.44.0
- Deployment: DIS is running as a Docker container deployed on an AWS machine with resources equivalent to an AWS c6a.xlarge instance.
The server uses the default application configuration with SSE and AVX optimizations enabled. The Docker image is built using the Dockerfile provided in the distribution package.
Redis
- Version: 7.1.0
- Deployment: AWS ElastiCache cluster with one cache.m6g.large node.
Testing Tool - JMeter
- Version: 5.5
- Deployment: JMeter is running as a Docker container deployed on an AWS machine with resources equivalent to an AWS c6a.xlarge instance.
Testing Setup
The setup involved deploying a single instance of DIS connected to a Redis cluster running on a separate machine. The testing client was deployed as a single instance generating requests across multiple threads. All services were deployed within the same AWS region to mitigate network latency.
Scaling the infrastructure to the estimated number of transaction requests
Example Use Case
The distribution of user requests generating server transactions has been measured across multiple installations in a fintech use case in European countries. This reflects the behavior of a particular population for a particular use case and cannot be generalized to all use cases. Integrators of DIS are strongly encouraged to perform their own measurements for their use case.
A hypothetical daily load of 1000 transactions following this behavior can be split into 10-minute slots across an average working day. The distribution can be seen in the following chart:
It can be seen that during daytime hours there would be, on average, fewer than 15 requests per 10 minutes, meaning any machine would be idle most of the time if only 1000 transactions are processed daily.
The peak load is around 40 requests per 10 minutes. These may, of course, arrive in a short burst. Whether a throughput of 0.5 or 1 request per second is needed to handle such a burst depends on the desired transaction response latency.
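A rough way to reason about the peak is to compute how long a single DIS instance would take to drain a worst-case burst at the measured sustained throughput. This sketch uses only the figures from this document (0.62 verifications/s with 3 threads, a peak of about 40 requests per 10-minute slot); the acceptable drain time is a service-level choice, not a measured value:

```python
def burst_drain_seconds(burst_size: int, throughput_per_sec: float) -> float:
    """Time for one instance to work through a burst of requests."""
    return burst_size / throughput_per_sec

peak_burst = 40             # peak observed: ~40 requests per 10-minute slot
measured_throughput = 0.62  # verifications/s with 3 threads (measured above)

drain = burst_drain_seconds(peak_burst, measured_throughput)
print(f"worst-case drain time: {drain:.0f} s")

# If the requests are spread across the whole 10-minute slot instead,
# the required sustained throughput is far below the measured capacity:
required = peak_burst / 600  # requests per second over the slot
print(f"required sustained throughput: {required:.3f} req/s")
```

Even if all 40 peak requests arrived at once, one instance at 0.62 verifications/s would clear the burst in roughly a minute, so the choice between 0.5 and 1 req/s of provisioned throughput mainly affects how long the last requests in the burst wait.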