DOT Digital Identity Service

v1.34.0

Overview

Digital Identity Service enables two main features:

  • Customer onboarding

  • Face biometrics

Customer onboarding is the basic use case of DOT. The customer provides a selfie and photos of an identity card, and must pass a liveness check. The provided data can be checked for inconsistencies, and based on the result of these checks, the client decides whether the customer will be onboarded.

The biometric processing of face images allows the client to support specific use cases that require face biometrics.

API Reference

The Digital Identity Service API reference is published here.

Distribution package contents

The distribution package can be found in our older CRM portal or in the new Customer portal. Your sales representative will provide credentials for the CRM login. The package contains these files:

  • config – The configuration folder

    • application.yml – The application configuration file, see Application configuration

    • logback-spring.xml – The logging configuration file

  • doc – The documentation folder

    • Innovatrics_DOT_Digital_Identity_Service_1.34.0_Technical_Documentation.html – Technical documentation

    • Innovatrics_DOT_Digital_Identity_Service_1.34.0_Technical_Documentation.pdf – Technical documentation

    • swagger.json – Swagger API file

    • EULA.txt – The license agreement

  • docker – The Docker folder

    • Dockerfile – The text document that contains all the commands to assemble a Docker image, see Docker

    • root-user.Dockerfile – The alternative Dockerfile to assemble a Docker image with Digital Identity Service running as the root user

    • entrypoint.sh – The entry point script

  • libs – The libraries folder

    • libsam.so – The Innovatrics OCR library

    • libiface.so – The Innovatrics IFace library

    • libinnoonnxruntime.so – The Innovatrics runtime library

    • solvers – The Innovatrics IFace library solvers

  • dot-digital-identity-service.jar – The executable JAR file, see How to run

  • Innovatrics_DOT_Digital_Identity_Service_1.34.0_postman_collection.json – Postman collection

Installation

System requirements

The following requirements are minimal (e.g., some disk space is required for the app itself, logging, and configuration). Please refer to the performance measurements page for detailed results on varying configurations.
  • Rocky Linux 9.x (64-bit)

  • A CPU supporting the AVX2 instruction set

  • Unless agreed otherwise, the machine hosting the Digital Identity Service needs to be able to access the URL innovatrics.count.ly.

Minimal system requirements

  • CPU: 2 vCPU

  • RAM: 7 GB

  • DISK: 4 GB

Minimal Redis requirements

Version: 7.x.x

We recommend two nodes with the following configuration:

  • CPU: 2 vCPU

  • RAM: 3 GB

Minimal Memcached requirements

We recommend two nodes with the following configuration:

  • CPU: 2 vCPU

  • RAM: 3 GB

Steps

  1. Install the following packages:

    • OpenJDK 17 Runtime Environment (Headless JRE) (openjdk-17-jre-headless)

    • userspace USB programming library (libusb-0.1)

    • GCC OpenMP (GOMP) support library (libgomp1)

    • Locales

    apt-get update
    apt-get install -y openjdk-17-jre-headless libusb-0.1 libgomp1 locales
  2. Set the locale

    sed -i '/en_US.UTF-8/s/^# //g' /etc/locale.gen && locale-gen
    export LANG=en_US.UTF-8; export LANGUAGE=en_US:en; export LC_ALL=en_US.UTF-8
  3. Extract the Digital Identity Service distribution package to any folder.

  4. Link the application libraries:

    ldconfig /local/path/to/current/dir/libs
    Replace the path /local/path/to/current/dir in the command with your current path. Keep /libs as a suffix in the path.

Activate the DOT license

For Digital Identity Service version 1.20.0 and above

Starting from Digital Identity Service version 1.20.0, a new method for retrieving licenses is available. To obtain a license, please contact your sales representative or email sales@innovatrics.com to gain access to the customer portal where the license can be obtained. Once you have received the license, deploy it as described below.

For the Digital Identity Service version 1.19.0 and below

When using a license generated via the customer portal in versions 1.19.0 and earlier of the Digital Identity Service, the application will start up but will consistently return HTTP 401 Unauthorized. Please contact your sales representative or sales@innovatrics.com to obtain a license for your specific version. Once you have received the license, deploy it as described below.

Copy your license file iengine.lic for Innovatrics IFace SDK 5.12.0 into {DOT_DIGITAL_IDENTITY_SERVICE_DIR}/license/

How to run

As Digital Identity Service is a stand-alone Spring Boot application with an embedded servlet container, there is no need for deployment on a pre-installed web server.

Digital Identity Service needs a running Redis or Memcached instance, which must be configured via the externalized configuration first.

Digital Identity Service can be run from the application folder:

java -Dspring.config.additional-location=file:config/application.yml -Dlogging.config=file:config/logback-spring.xml -DLOGS_DIR=logs -Djna.library.path=libs/ -jar dot-digital-identity-service.jar

An embedded Tomcat web server will start and the application will listen on port 8080 (or another configured port).

Docker

To build a Docker image, use the Dockerfile and the entrypoint.sh script. An example Dockerfile and entrypoint.sh script can also be found in the Appendix.

The Docker image should be built as follows:

cd docker
cp ../dot-digital-identity-service.jar .
cp ../libs/libsam.so.* .
cp ../libs/libiface.so.* .
cp ../libs/libinnoonnxruntime.so.* .
cp -r ../libs/solvers/ ./solvers
docker build \
    --build-arg="JAR_FILE=dot-digital-identity-service.jar" \
    --build-arg="SAM_OCR_LIB=libsam.so.*" \
    --build-arg="IFACE_LIB=libiface.so.*" \
    --build-arg="INNOONNXRUNTIME_LIB=libinnoonnxruntime.so.*" \
    --build-arg="ADDITIONAL_LIBS=" \
    -t dot-digital-identity-service \
    .

In the ADDITIONAL_LIBS build argument, you can set space-separated names of additional Linux packages that should be included in the Docker image. For instance, to include the curl and wget packages, set ADDITIONAL_LIBS like this:

    --build-arg="ADDITIONAL_LIBS=curl wget" \

Digital Identity Service needs a running Redis or Memcached instance, which must be configured via the externalized configuration first.

Run the container according to the instructions below:

docker run -v /local/path/to/license/dir/:/srv/dot-digital-identity-service/license -v /local/path/to/config/dir/:/srv/dot-digital-identity-service/config -v /local/path/to/logs/dir/:/srv/dot-digital-identity-service/logs -p 8080:8080 dot-digital-identity-service
Replace the path /local/path/to/license/dir/ in the command with your local path to the license directory.
Replace the path /local/path/to/config/dir/ in the command with your local path to the config directory (from the distribution package).
Important Replace the path /local/path/to/logs/dir/ in the command with your local path to the logs directory (you need to create the directory mounted to a persistent drive). The volume mount into the docker is mandatory, otherwise the application will not start successfully.
Important The Digital Identity Service running inside the container built from Dockerfile runs under the dot-dis user, not the root user. This may cause issues with files and directories mounted from outside the Docker container (e.g., the logs directory). To overcome this, ensure that the UID (User ID) of the host user who owns the file or directory matches the UID of the dot-dis user, which is 1000. Alternatively, you can build the Docker container using root-user.Dockerfile, which runs Digital Identity Service under the root user and does not have this limitation.

Rocky Linux as a base image in Digital Identity Service version 1.32.0 and above

From version 1.32.0, the Digital Identity Service uses the Rocky Linux 9 distribution as a base image instead of Ubuntu 22.04. As a result, the Docker image now contains only mandatory Linux packages; packages that are commonly preinstalled, such as the package manager, are no longer included. This change was implemented mainly to minimize the number of external binaries, resulting in fewer required security patches.

Logging

Digital Identity Service logs to the console and also writes a log file (dot-digital-identity-service.log). The log file is located in the directory defined by the LOGS_DIR system property. Log files rotate when they reach 5 MB in size; by default, the maximum history is 7 days or a total log size of 1 GB.

As this is a Spring Boot application, debug logging can be turned on by setting the logging.level.root property to DEBUG in the application.yml file.
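For example, debug logging can be enabled with the following fragment in application.yml (a sketch using the standard Spring Boot logging property mentioned above):

```yaml
# application.yml – turn on debug logging for the whole application
logging:
  level:
    root: DEBUG
```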

API Transaction Counter Log

Separate log files following the filename pattern dot-digital-identity-service-transaction-counter.log.%d{yyyy-MM-dd}.%i.gz are located in the directory defined by the LOGS_DIR system property. The %d{yyyy-MM-dd} template represents the date and %i represents the index of the log window within the day, starting at 0. These log files contain information about the counts of API calls (transactions). The same rolling policy is applied as for the application log, except that the maximum history of these log files is 455 days.

For proper transaction billing, please be sure to send all transaction logs every time.

Docker: Persisting log files in local filesystem

When Digital Identity Service runs as a Docker container, log files can remain accessible even after the container no longer exists. This is achieved by using Docker volumes. To find out how to run a container, see Docker.

Monitoring

Information such as build or license details can be accessed at /api/v1/info. Information about available endpoints can be viewed at /swagger-ui/index.html.

The health endpoint, accessible under /api/v1/health, provides information about the application health and the Innovatrics Tracking Service status. This feature can be used by an external tool such as Spring Boot Admin.

The application also supports exposing metrics in the standardized Prometheus format. These are accessible at /api/v1/prometheus. The endpoint can be exposed in your configuration:

management:
  endpoints:
    web:
      exposure:
        include: health, info, prometheus

For more information, see Spring Boot documentation, sections Endpoints and Metrics. Spring Boot Actuator Documentation also provides info about other monitoring endpoints that can be enabled.

Tracing

Micrometer tracing with the OpenTelemetry API is used to collect traces. Data is exported via gRPC in the OTLP format to the configured collector (e.g., Jaeger) defined by the management.tracing.endpoint property (default: http://localhost:4317).

By default, OpenTelemetry tracing uses W3C format for context propagation. To enable tracing propagation using the B3 format, the management.tracing.propagation.type property can be set to b3.

Tracing is disabled by default. It can be enabled by the following property:

management:
  tracing:
    enabled: true
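Combining the tracing properties described above into one fragment might look like this (a sketch; the endpoint shown is the documented default and the B3 propagation type is optional):

```yaml
management:
  tracing:
    enabled: true
    endpoint: "http://localhost:4317"  # OTLP gRPC collector, e.g. Jaeger
    propagation:
      type: b3  # optional; defaults to W3C context propagation
```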

Architecture

Digital Identity Service is a semi-stateful service. It temporarily retains intermediate results and images in an external cache. This enables the exposed API to flexibly use only the methods needed for a specific use case, without repeating expensive operations. Another advantage is that the user can provide data when available, without the need to cache on the user’s side.

The Digital Identity Service can be horizontally scaled. Multiple instances of the service can share the same cache or a cache cluster.

Architecture diagram
Figure 1. Horizontal scaling of Digital Identity Service with a cache cluster

The services of Digital Identity Service are better suited for shorter-lived processes. The cache can nevertheless be configured to support various use cases and processes.

Cache

The Digital Identity Service currently supports Redis and Memcached as cache options. For development and test purposes, embedded EhCache is also available. However, please note that this option is not suitable for production or an environment with multiple Digital Identity Service instances. The table below describes configuration options for switching between these options:

Table 1. Cache type configuration properties

Property

Description

innovatrics.dot.dis.persistence

  • type

Type of cache implementation to use.

Possible values: redis, memcached or ehcache
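Selecting the cache implementation in application.yml might look like this (a sketch, assuming the standard Spring Boot YAML nesting of the dotted property name; redis is just one of the allowed values):

```yaml
innovatrics:
  dot:
    dis:
      persistence:
        type: redis  # redis, memcached, or ehcache
```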

Various tools exist to monitor the performance of your Redis or Memcached server, and we recommend using one to ensure the cache is performing as expected.

Common cache record expiration configuration

Every cache option supports setting the expiration time for both customer and face records. The expiration time can be configured independently for all of these resources. The configuration is described in the table below:

Table 2. Cache record expiration configuration properties

Property

Description

innovatrics.dot.dis.persistence.cache

  • customer-expiration

The time in seconds to persist all data created and used by Onboarding API.

Example value: 1800

  • face-expiration

The time in seconds to persist face records created and used by Face API.
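In application.yml, the expiration properties above might be set as follows (a sketch; 1800 is the documented example value, the face-expiration value is purely illustrative):

```yaml
innovatrics:
  dot:
    dis:
      persistence:
        cache:
          customer-expiration: 1800  # seconds; data created by the Onboarding API
          face-expiration: 600       # seconds; face records of the Face API (illustrative)
```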

EhCache

Cache option intended for development and test purposes only. Each instance of Digital Identity Service runs its own embedded EhCache, which is not shared between other instances. This means that running multiple Digital Identity Service instances in cluster mode may lead to unexpected behavior.

Configuration

The maximum amount of memory that can be allocated by the embedded cache may be configured via configuration properties. Both Java heap and off-heap memory are supported. In general, heap memory is faster in terms of I/O operations, but comes with a performance cost due to Java garbage collection. In the scope of Digital Identity Service, this performance difference should be negligible.

If no off-heap-size property is set, the cache will solely rely on Java heap memory.

If the configured memory limit is exceeded, EhCache will evict records even before the configured TTL.

Table 3. EhCache configuration properties

Property

Description

innovatrics.dot.dis.persistence.ehcache.resource-pool

  • heap-size

Maximum number of records which can be allocated in Java heap memory.

Example value: 200

  • off-heap-size

Maximum size in MB which can be allocated in Java off-heap memory.

Example value: 800
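A sketch of an EhCache configuration in application.yml, combining the cache type selection with the resource pool properties above (values are the documented examples):

```yaml
innovatrics:
  dot:
    dis:
      persistence:
        type: ehcache  # development and test purposes only
        ehcache:
          resource-pool:
            heap-size: 200      # max number of records on the Java heap
            off-heap-size: 800  # max size in MB of off-heap memory; omit to use heap only
```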

System requirements

Keep in mind that the embedded cache shares resources with Digital Identity Service. To ensure smooth operation, it is crucial to allocate resources sufficient for both the application itself and its embedded cache.

Redis

The Digital Identity Service also supports Redis as a cache option in various setups which depend on the configuration of your environment. The Lettuce client is used for communication with Redis.

An eager initialization has been configured, so the client will attempt to connect to the Redis server on startup. If the connection fails, the application will fail to start.

We require the Redis server to be of version 7.x.x. Using older versions or a higher major version may result in unexpected behavior.

The following Redis environment setups are supported:

  • Standalone

  • Master/Replica

  • Cluster

The individual setups and their configurations are described in the following sections. The application will also fail to start if one of the setups has been configured incorrectly or is incomplete.

SSL/TLS is optional and can be configured via application properties.

For Redis authentication, use the username and password configured via application properties. If left empty, no authentication will be used.

The timeout for all Redis operations has been configured to 10 seconds. This can be overridden via application properties.

The table below describes configuration options common to all Redis setups:

Table 4. Redis common configuration properties

Property

Description

innovatrics.dot.dis.persistence.redis

  • key-prefix

A String prefix for grouping the key/values. This is useful when multiple applications share the same Redis instance.

This property is optional.

A : is appended automatically to the prefix if it is not empty.

Example value: innovatrics:dis

  • setup

Setup of your Redis environment.

Possible values: STANDALONE, MASTER_REPLICA, CLUSTER

  • use-ssl

Indicates whether to use SSL/TLS for communication with Redis.

This property is optional.

Possible values: true or false (default)

  • credentials.username

The username for authentication to your Redis environment.

This property is optional.

Example value: user

  • credentials.password

The password for authentication to your Redis environment.

This property is optional.

Example value: pass

  • timeout

The timeout for all Redis operations in milliseconds.

This property is optional.

Example value: 10000 (default)
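The common Redis properties above might be combined in application.yml as follows (a sketch using the documented example values; credentials and the optional properties can be omitted):

```yaml
innovatrics:
  dot:
    dis:
      persistence:
        type: redis
        redis:
          key-prefix: "innovatrics:dis"  # optional; ':' is appended automatically
          setup: STANDALONE              # STANDALONE, MASTER_REPLICA, or CLUSTER
          use-ssl: false                 # optional
          credentials:                   # optional; leave empty for no authentication
            username: user
            password: pass
          timeout: 10000                 # optional; milliseconds
```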

Configuration

Standalone

The standalone mode is the simplest mode of operation. It is suitable for development and testing environments.

The following configuration properties are available:

Table 5. Redis Standalone cache configuration properties

Property

Description

innovatrics.dot.dis.persistence.redis

  • hostname

The hostname of the Redis server.

Example: localhost

  • port

The port of the Redis server.

Example: 6379
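Putting the standalone properties together, a minimal sketch of the configuration:

```yaml
innovatrics:
  dot:
    dis:
      persistence:
        type: redis
        redis:
          setup: STANDALONE
          hostname: localhost
          port: 6379
```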

Master/Replica

The master/replica mode is suitable for production environments. The client is configured in a way where the reads are set to be preferred on the replicas.

The following configuration properties are available:

Table 6. Redis Master/Replica cache configuration properties

Property

Description

innovatrics.dot.dis.persistence.redis

  • hostname

The hostname of the Redis server.

Example: localhost

  • port

The port of the Redis server.

Example: 6379

  • master-replica.info-command-used

Indicates whether your environment uses the INFO command to retrieve the master/replica information. This determines which client configuration is used.

Possible values: true (default) or false
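A master/replica configuration sketch combining the properties above (values taken from the documented examples and defaults):

```yaml
innovatrics:
  dot:
    dis:
      persistence:
        type: redis
        redis:
          setup: MASTER_REPLICA
          hostname: localhost  # entry point used to discover the master/replica topology
          port: 6379
          master-replica:
            info-command-used: true  # default; set to false if INFO is not available
```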

Cluster

The cluster mode is suitable for high-performance production environments with the need for automatic failover.

The application will automatically discover the cluster topology and will use it for communication.

In the case of a primary node failure, the application will automatically failover to a new primary node and will continue to operate normally. The application will attempt to reconnect to the cluster in case of a failure.

The topology refresh interval has been configured to 60 seconds. This can be overridden via application properties. If the topology refresh interval is not set, the topology will not be refreshed.

The following configuration properties are available:

Table 7. Redis Cluster cache configuration properties

Property

Description

innovatrics.dot.dis.persistence.redis.cluster

  • nodes

The hostname of the Redis cluster.

Individual nodes are delimited by commas; however, we recommend providing the hostname of the cluster entry point (e.g., the AWS ElastiCache cluster configuration endpoint) rather than individual nodes.

Example: clustercfg.your-redis-instance:6379 (recommended) or node1.your-redis-instance:6379,node2.your-redis-instance:6379

  • topology-refresh-interval

Topology refresh interval in milliseconds. If unset, the topology will not be refreshed.

This property is optional.

Example: 60000
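A cluster configuration sketch using the recommended entry-point form of the nodes property (hostname is the documented example):

```yaml
innovatrics:
  dot:
    dis:
      persistence:
        type: redis
        redis:
          setup: CLUSTER
          cluster:
            nodes: clustercfg.your-redis-instance:6379  # cluster entry point (recommended)
            topology-refresh-interval: 60000            # optional; milliseconds
```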

Memcached (deprecated)

Configuration

The cache is configurable via the externalized configuration.

It can be configured either with AWS ElastiCache, or with a list of hosted Memcached servers.

Efficient memory usage

For optimal performance, the expiration of records must be configured according to the nature of the implemented process:

  • A short expiration time results in lower memory usage and higher throughput for short requests.

  • A long expiration time enables longer processing of cached records, but increases memory requirements.

Memory consumption for longer processes can be lowered by cleaning records once no longer needed. The API provides deletion methods for each resource.

The expiration of records can be configured independently for the onboarding API and for face operations.

Table 8. Memcached cache configuration properties

Property

Description

innovatrics.dot.dis.persistence.memcached

  • aws-elastic-cache-config-endpoint

The host and port of the AWS ElastiCache configuration endpoint.

Format: host:port

  • servers

The list of host and port pairs of the Memcached instances. Only used if the AWS ElastiCache configuration endpoint is not configured.

Format: host1:port1 host2:port2

  • read-timeout

The memcached read timeout in milliseconds.

Example value: 2000

  • write-timeout

The memcached write timeout in milliseconds.

Example value: 2000

  • operation-timeout

The memcached operation timeout in milliseconds.

Example value: 5000
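A Memcached configuration sketch combining the cache type selection with the properties above (server addresses are placeholders in the documented host:port format; timeout values are the documented examples):

```yaml
innovatrics:
  dot:
    dis:
      persistence:
        type: memcached
        memcached:
          servers: host1:port1 host2:port2  # ignored if the ElastiCache endpoint is set
          read-timeout: 2000       # milliseconds
          write-timeout: 2000      # milliseconds
          operation-timeout: 5000  # milliseconds
```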

Authentication and authorization

The Digital Identity Service API is secured with API Key authentication; hence, an HTTP Authorization header needs to be sent with every request.

The header must contain a Bearer token, which is a UTF-8 Base64 encoded string that consists of two parts, delimited by a colon:

Table 9. Token description

Token part

Description

  • API Key

A unique identifier that is received with your license

  • API Secret

A unique string that is received with your license

The server will return an HTTP 401 Unauthorized response for every request that either does not contain the Authorization header, or whose header contents are invalid (e.g., malformed Base64 or an invalid API Key or Secret).

Some endpoints are not secured by design (such as /metrics, /health or /info) and do not require any authentication.

Authorization header creation

For the Digital Identity Service version 1.20.0 and above

Credentials for the Digital Identity Service can be retrieved from the customer portal. The Api Key & Secret consists of 3 values, as shown in the figure below:

Api Key & Secret
Figure 2. Api Key & Secret pop-up window

Each request must contain the Authorization header which consists of the Bearer keyword and the Bearer Token value, e.g.:

Bearer aW5rXzcwYTJjOTg4Omluc19XRjBhVzl1WDNScGJJQ0l3TURJeklERXhPV1ZCVDBpZlE9PQ==

For the Digital Identity Service version 1.19.0 and below

In Digital Identity Service versions 1.19.0 and below, the process for creating an API token differs: both the key and the secret must be taken from the license. Below is an example snippet illustrating the structure of the API key and secret within the license file:

{
  "contract": {
    "dot": {
      "authentication": {
        "apiKeyAndSecrets": [
          {
            "key": "some-api-key",
            "secret": "mb7DZQ6JwesRHkWPbjKVDgGHXxrAHFd6"
          }
        ]
      }
    },
    ...
  },
  ...
}

You will need to encode the key and secret parts into a valid UTF-8 Base64 string (those two parts, delimited by a colon), e.g.:

some-api-key:mb7DZQ6JwesRHkWPbjKVDgGHXxrAHFd6

The encoding can be performed via the bash command below. Note the -n flag: without it, echo would append a trailing newline, which would be included in the encoded value:

echo -n 'some-api-key:mb7DZQ6JwesRHkWPbjKVDgGHXxrAHFd6' | base64 -w 0

Once the aforementioned token has been encoded into Base64, each request must contain the Authorization header which consists of the Bearer keyword and encoded key and secret:

Bearer c29tZS1hcGkta2V5Om1iN0RaUTZKd2VzUkhrV1BiaktWRGdHSFh4ckFIRmQ2
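The encoding steps above can also be scripted end to end, using the example key and secret from the license snippet (illustrative values; printf is used so that no trailing newline ends up in the encoded token):

```shell
# Build the Authorization header value from an API key and secret.
API_KEY='some-api-key'
API_SECRET='mb7DZQ6JwesRHkWPbjKVDgGHXxrAHFd6'

# printf avoids the trailing newline that a plain echo would add,
# which would otherwise be encoded into the Base64 value.
TOKEN=$(printf '%s:%s' "$API_KEY" "$API_SECRET" | base64 -w 0)

echo "Authorization: Bearer $TOKEN"
# → Authorization: Bearer c29tZS1hcGkta2V5Om1iN0RaUTZKd2VzUkhrV1BiaktWRGdHSFh4ckFIRmQ2
```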

Data isolation

The resources created with one API key are accessible only with that particular API key. This is to prevent any unauthorized access by isolating the created resources in the cache.

Image Data Downloader

The Digital Identity Service API supports two ways to provide an image in its requests:

  • base64 encoded data

  • url to the remote image

Images provided are downloaded by the Image Data Downloader.

The Image Data Downloader is enabled by default and can be disabled via the configuration to prevent downloading images from remote URLs. The data downloader can also be configured to allow or block only specific URLs to be downloaded from. See the Server-side request forgery (SSRF) protection section for more details.

The connection timeout and the read timeout for the Image Data Downloader are configurable via properties.

Table 10. Image Data Downloader configuration properties

Property

Description

innovatrics.dot.dis.data-downloader

  • enabled

Indicates whether the Image Data Downloader is enabled. If false, submitting data via URLs is not allowed.

Default value: true

  • connection-timeout

The connection timeout for image data downloader in milliseconds.

Default value: 2000

  • read-timeout

The read timeout for image data downloader in milliseconds.

Default value: 30000
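The Image Data Downloader properties above might be set in application.yml as follows (a sketch using the documented default values):

```yaml
innovatrics:
  dot:
    dis:
      data-downloader:
        enabled: true             # set to false to disallow submitting data via URLs
        connection-timeout: 2000  # milliseconds
        read-timeout: 30000       # milliseconds
```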

Server-side request forgery (SSRF) protection - Optional

If needed, the Image Data Downloader can be protected against SSRF attacks.

The URLs can be configured either as absolute URLs or as regular expressions, e.g. patterns enclosed in .* (as in .*example.org.*) that match any surrounding characters.

If the whitelist property is configured, only the URLs matching the configured URLs will be allowed (any other will be blocked).

If the blacklist property is configured, only the URLs matching the configured URLs will be blocked (any other will be allowed).

Configuring both properties at the same time is not allowed; configuring them with the wildcards * or .* alone is also not allowed. The following configuration properties are available:

Table 11. Image Data Downloader SSRF configuration properties

Property

Description

innovatrics.dot.dis.data-downloader.ssrf-protection

  • whitelist

The list of allowed hosts for the Image Data Downloader.

This property is optional.

Example:

whitelist:
  - 'https://example.com'
  - '.*example.org.*'
  • blacklist

The list of disallowed hosts for the Image Data Downloader.

This property is optional.

Example:

blacklist:
  - 'https://example.com'
  - '.*example.org.*'

Logging Transactions via the Innovatrics Tracking Service

For billing purposes, every running instance of the Digital Identity Service must report all transactions performed.

The Digital Identity Service is configured to periodically publish metadata about executed transactions to the Innovatrics tracking service.

No sensitive details are stored, only information about transaction count, outcome of operations, and the quality of inputs. Collected statistics may subsequently be used to improve system performance in your environment.

All data published to the Innovatrics tracking service is also logged to the dot-digital-identity-service-countly-event.log file. If it is not possible to configure the deployment to communicate with the Innovatrics tracking service, transactions can be reported by sending this file or by uploading it to the Customer Portal.

Use the /api/v1/health endpoint to verify successful connectivity of the Digital Identity Service with the Innovatrics Tracking Service. Upon success, the expected JSON response should include components.countly.status set to UP:

{
    "status": "UP",
    "components": {
        "countly": {
            "status": "UP"
        }
    }
}

For additional details on how Digital Identity Service verifies transactions, please refer to the Transaction Tracking and Charging section.

The reporting URL is configured as innovatrics.count.ly. This cannot be changed but can be used for forwarding via your proxy server/egress instance.

Proxy server configuration

If your deployment is behind a proxy server, the proxy must be configured to allow communication with the Innovatrics tracking service. This can be done by setting the following properties in the application.yml file:

Table 12. Server proxy configuration properties

Property

Description

innovatrics.dot.dis.proxy

  • host

The hostname of your proxy server.

Example: squid.example.com

  • port

Available port of your proxy server.

Example: 8088

If either of the aforementioned properties is not set, the proxy server will not be used.
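In application.yml, the proxy properties might look like this (a sketch; the hostname and port are the illustrative example values from the table):

```yaml
innovatrics:
  dot:
    dis:
      proxy:
        host: squid.example.com
        port: 8088
```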

Multiple options for image uploads

The Digital Identity Service API supports multiple ways to provide an image during the onboarding process:

  1. a direct upload of the image data as a base64 encoded string

  2. providing a URL to the remote image

  3. an octet-stream upload of the image data produced by the Innovatrics Web components or Mobile SDKs

    • this option provides more security than the base64 encoded string upload, enabling detection of any tampering with the image data or a potential spoof

The requests must contain only one of the above options as the image source. Combining multiple options in one request is not allowed.

Examples of these options are included in the Postman collection.

Customer Onboarding

The Customer Onboarding API enables a fully digital process to remotely verify identity to enroll a new customer.

During the onboarding, a person registers with a company or government entity. They provide their identity document, and one or more selfies to prove their identity.

With a digital onboarding process powered by Digital Identity Service, a company can easily and securely convert a person into a trusted customer.

Standard Onboarding Flow

The recommended customer onboarding process looks like this:

To use any part of the Customer Onboarding API, the create customer operation must be called first. The customer will be persisted for a configurable amount of time (see the configuration section). Once created, additional actions can be performed while the record is persisted.

The data-gathering steps (2-4) can be performed in any order. Extracted data can be deleted or replaced by repeating the same action with different inputs.

The results of the get customer request (5) or inspection steps (6-7) depend on data previously gathered.

Once the onboarding has been completed, the customer can be deleted to reduce required memory. Deleting a customer will remove any related data, such as selfies and document pages. Otherwise, the data will expire after a configured amount of time.

Actions for onboarding a customer have to be performed sequentially; parallel processing of the same customer is not allowed. If there are concurrent requests on any resource belonging to the same customer, only one such request will succeed and the rest will fail with an error (409 Conflict). For example, the front and back pages of a document cannot be uploaded in parallel.

Create Customer

To create a customer, a POST /customers request must be made.

The response will contain a link to the newly created customer resource, as well as the ID of the customer.

Add Selfie

To provide a selfie for a customer, a PUT /selfie request must be made on the customer resource.

If a liveness selfie or liveness image data were already uploaded via Create Liveness Record request, the reference to the liveness selfie can be specified in the payload instead.

A successful response will contain the position of the detected face in the input image, the confidence, and a link to the newly-created customer selfie resource. The response may also contain a list of warnings. An unsuccessful response will contain an error code.

The face position is represented by the face rectangle.

The detection confidence contains a score from the interval <0.0,1.0>. Values near 1.0 indicate high confidence that a human face was detected.

Each customer can have at most one selfie. An existing selfie can be replaced by adding a new one.

Once the face has been detected, you can:

Face Detection Configuration

Face detection on a customer’s selfie is configurable. The speed, accuracy, and other aspects can be adjusted according to needs and available resources. Find more details about image requirements, face detection speed-accuracy modes, and face size ratio in the Face API section of this document.

Liveness Check

Liveness check allows verification of interaction with a live, physically present person. It can distinguish live faces from photos, videos, 2D/3D masks, and other attacks.

The Digital Identity Service provides several approaches to verifying liveness:

  • Passive Liveness Check

  • Eye-gaze Liveness Check

  • Smile Liveness Check

  • MagnifEye Liveness Check

The liveness check generally comprises the following three steps:

  1. Create a liveness check.

  2. Add selfies or a liveness record to the liveness check.

  3. Evaluate the liveness.

Create Liveness Check

To create a liveness check, a PUT /liveness request must be made on the customer resource.

The response will contain a link to the newly-created customer’s liveness resource.

Add Selfie to Liveness Check

In this approach, the liveness check is built from one or more images (selfies) in standard JPG or PNG format.

To add a selfie, a POST /liveness/selfies request must be made on the customer’s liveness resource.

If a selfie has already been added as the customer’s selfie, a reference to it can be specified in the payload instead of uploading the image again.

For each selfie added to the liveness check, the assertion must be specified. The provided assertion will determine if and how the selfie will be used for the selected liveness method evaluation in the next step.

The successful response will be empty.

If the quality of the selfie does not fully meet the requirements for evaluation, the response will contain a warning. Such a selfie can still be used to evaluate liveness, but the result is not guaranteed to be reliable. To discard it, delete the liveness resource and start again by creating a new one.

If the selfie was not accepted, the response will contain an error code.

Multiple selfies can be added to one liveness check.

The Digital Identity Service will try to detect a face on every selfie provided. The configuration of face detection on selfies is explained in this chapter.

Providing liveness selfies using this option is only supported for the Passive Liveness Check, Eye-gaze Liveness Check and Smile Liveness Check.
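The selfie-plus-assertion payload can be sketched as follows. The field names are assumptions modelled on this section; the assertion values are the ones named in the passive, eye-gaze, and smile chapters:

```python
import base64

# Assertion values named in this chapter: NONE drives the passive check,
# the EYE_GAZE_* values the eye-gaze check, SMILE/NEUTRAL the smile check.
VALID_ASSERTIONS = {
    "NONE",
    "EYE_GAZE_TOP_LEFT", "EYE_GAZE_TOP_RIGHT",
    "EYE_GAZE_BOTTOM_LEFT", "EYE_GAZE_BOTTOM_RIGHT",
    "SMILE", "NEUTRAL",
}

def build_liveness_selfie_payload(image_bytes, assertion):
    """Build the body for POST /customers/{id}/liveness/selfies."""
    if assertion not in VALID_ASSERTIONS:
        raise ValueError(f"unknown assertion: {assertion}")
    return {
        "image": {"data": base64.b64encode(image_bytes).decode("ascii")},
        "assertion": assertion,
    }
```

Rejecting unknown assertions client-side avoids a round trip that would end in an error response.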

Create liveness record

In this approach, the liveness check is built from a binary file that can be produced only by the Innovatrics web component or mobile SDKs.

To create a liveness record, a POST /liveness/records request must be made on the customer’s liveness resource.

A successful response will contain the position of the detected face on the liveness selfie, represented by the face rectangle. The response also contains the detection confidence, a score from the interval <0.0,1.0>, where values near 1.0 indicate high confidence that a human face was detected.

A successful response also contains a link to the newly-created liveness record selfie resource.

To discard a liveness record, simply create a new one; the old record will be replaced automatically.

An unsuccessful response will contain an error code, and the liveness record will not be created.

Once the liveness record has been successfully created, you can:

  • Access the liveness selfie via a GET request on the provided liveness record selfie link

  • Use the liveness selfie as a customer selfie via the Add Selfie request referencing the liveness record selfie link from the response

Evaluate Liveness

To evaluate liveness, a POST /liveness/evaluation request must be made on the customer’s liveness resource.

The type of liveness check to be evaluated must be specified.

A successful response will contain a score from the interval <0.0,1.0>. Values towards 1.0 indicate higher confidence that the associated selfies contained a live person. The score has to be compared to a threshold to determine the liveness. See the documentation page dedicated to the active and passive liveness for recommended thresholds.

An unsuccessful response will contain an error code.

The evaluation can be repeated for different types of liveness on the same liveness resource. Only selfies with a relevant assertion will be used for a given type of liveness.
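The evaluate-and-threshold step can be sketched as follows. The threshold value is deployment-specific and must be taken from the active and passive liveness documentation, not from this sketch:

```python
def build_evaluation_payload(liveness_type):
    """Body for POST /customers/{id}/liveness/evaluation."""
    return {"type": liveness_type}

def is_live(score, threshold):
    """Decide liveness by comparing the returned score to a threshold.

    The threshold is a caller-supplied assumption; see the liveness
    documentation for recommended values per liveness type.
    """
    return score >= threshold
```

Because evaluation can be repeated per liveness type, the same helper can be called once per type on the same liveness resource.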

Passive Liveness Check

The passive liveness check is a process of determining whether the presented face is a real person without requiring the user to perform any additional actions.

It is recommended to perform this check on the customer’s selfie. The existing customer’s selfie can be added to the liveness check by providing a reference to it.

To add a selfie for a passive liveness evaluation, the assertion must be set to NONE. Only selfies with this assertion will be evaluated for passive liveness.

To evaluate passive liveness, the type of liveness needs to be specified as PASSIVE_LIVENESS.

Passive liveness can be evaluated once at least one selfie with the correct assertion has been added. If there are multiple selfies with the corresponding assertion, the returned score will be the average of all of them.

There are two modes of passive liveness evaluation (UNIVERSAL and STANDARD), which can be configured via a property. The default is UNIVERSAL.

Eye-gaze Liveness Check

Eye-gaze liveness check is the process of determining whether the presented faces belong to a real person, by requiring the user to follow an object displayed on the screen with their eyes.

This check is recommended for applications where security is paramount, and is recommended as an additional step after performing the passive liveness check.

Follow these steps to implement the eye-gaze liveness check:

  1. generate object movement instructions randomly on your application server

  2. send these instructions to the client and ask the customer to follow the movement of the object with his/her eyes

  3. capture photos of the customer while he/she follows the object, and add them to the liveness with a corresponding assertion

Selfies for eye-gaze liveness need to have one of the following assertion values: EYE_GAZE_TOP_LEFT, EYE_GAZE_TOP_RIGHT, EYE_GAZE_BOTTOM_LEFT, EYE_GAZE_BOTTOM_RIGHT

Each of these assertions corresponds to the position of the object at the moment the photo was taken.

Selfies with assertions need to be provided sequentially, in the order captured. Parallel processing is not allowed.

Eye-gaze liveness can be evaluated only once the required number of selfies with relevant assertions has been added.

The minimum number of selfies for eye-gaze liveness is configurable via a property.
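Step 1 above — generating the object movement instructions on the application server — can be sketched like this. The count of 6 positions used in the usage note is an arbitrary illustration; the real minimum is configurable on the service:

```python
import random

EYE_GAZE_ASSERTIONS = [
    "EYE_GAZE_TOP_LEFT", "EYE_GAZE_TOP_RIGHT",
    "EYE_GAZE_BOTTOM_LEFT", "EYE_GAZE_BOTTOM_RIGHT",
]

def generate_object_movement(num_positions, rng=random):
    """Randomly pick the corner positions the on-screen object visits.

    Each captured selfie is later uploaded sequentially, with the
    assertion matching the position shown at capture time.
    """
    return [rng.choice(EYE_GAZE_ASSERTIONS) for _ in range(num_positions)]
```

For example, `generate_object_movement(6)` yields six positions; the client captures one photo per position and uploads them in order, each with the corresponding assertion.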

Smile Liveness Check

Smile liveness check is the process of determining whether the presented faces belong to a real person by requiring the user to change his/her expression.

Follow these steps to implement the smile liveness check:

  1. ask the customer to maintain a neutral expression and then smile

  2. capture photos of the customer with both expressions, and add them to the liveness with a corresponding assertion

Selfies with assertions need to be provided sequentially. Parallel processing is not allowed.

Smile liveness can be evaluated only once selfies with both SMILE and NEUTRAL assertions have been added. As part of the evaluation process, passive liveness is calculated on both photos.

You can fine-tune the passive liveness threshold for smile liveness with a property.

MagnifEye Liveness Check

MagnifEye liveness check is the process of determining whether the presented faces belong to a real person by navigating the user to capture a detailed image of the eye. It is a semi-passive method inspired by our extensive know-how in the domain of facial and iris recognition. The core of the technology is built upon Innovatrics Passive Liveness Detection, while also taking into account the uniqueness of the human eye.

Follow these steps to implement the MagnifEye liveness check:

  1. follow the instructions explained in the DOT web/mobile components

  2. upload the binary file created by the DOT components via Create Liveness Record request

To evaluate magnifeye liveness, the type of liveness needs to be specified as MAGNIFEYE_LIVENESS. Liveness can be evaluated once the liveness record has been successfully created.
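The server-side half of the flow reduces to two calls: upload the component-produced record, then evaluate. A sketch, with paths relative to the customer's liveness resource and body shapes assumed:

```python
def build_magnifeye_steps(record_bytes):
    """Sketch the two server-side calls of a MagnifEye liveness check.

    record_bytes is the opaque binary file produced by the DOT web or
    mobile components; the service accepts nothing else for this check.
    """
    return [
        {"method": "POST", "path": "/liveness/records", "body": record_bytes},
        {"method": "POST", "path": "/liveness/evaluation",
         "body": {"type": "MAGNIFEYE_LIVENESS"}},
    ]
```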

Customer Document Operations

The Onboarding API provides services to recognize and process customer’s photo identity documents. (Only identity documents containing a photo of the holder are usable for remote identity verification.)

The process starts with creating an identity document. At this point, information about the document type and/or edition can be provided. The parts of the document to be processed can also be specified.

The second step is to upload pictures of the document pages. The system will try to detect and classify the document on the picture.

Once at least one page has been successfully recognized, it is possible to:

Supported Identity Documents

The Digital Identity Service can support identity documents of the following types:

  • Passports

  • Identity cards

  • Driving licenses

  • Foreigner permanent residence cards

  • and other cards of similar format that include the holder’s photo

Support for document recognition may be in two levels:

Level 1 support

Level 1 support includes all documents compliant with ICAO machine-readable travel document specification.

The Digital Identity Service can process the document portrait and parse data from the machine-readable zone of documents with this level of support.

Level 2 support

For Level 2 support, the Digital Identity Service needs to be trained to support each individual document type and its edition.

Once the document is supported, the Digital Identity Service can process any data available on it.

The list of documents with Level 2 support can be found via the get metadata endpoint.

In the case that an ID document type required does not have Level 2 support, contact Innovatrics to request support for such document type in a future version of the Digital Identity Service.

Get Metadata for documents with Level 2 support

To get the full list of documents with Level 2 support, make a GET /metadata request.

The response contains a list of documents supported by the current version of the Digital Identity Service and the metadata for each document.

The metadata for an individual document contains a list of its pages. For each page, there is a list of text fields that the Digital Identity Service was trained to OCR.

For each text field, there is information if the field’s value is being returned as found on the document or if it is being normalized and returned in a standard format.

If present on the document, there is also the original label for each text field.

If a document page has the classificationAdviceRequired attribute set to true, the create document request must contain precise classification advice, comprising the exact document type, edition, and country of the given document.

Document Classification

The amount of data that the Digital Identity Service can extract from an identity document depends on how precisely it can classify this document.

There are 3 levels of classification:

  • Full classification

  • Partial classification

  • Document not recognized (unknown document)

The Digital Identity Service tries to classify the document up to the level that allows the processing of all requested document sources:

  1. It will try to fully classify the document if the processing of visual zone or barcodes was requested.

  2. Otherwise, it will only try to recognize the travel document type of the document.

  3. If the document was not at least partially classified, it will be processed as an unknown document.

The classification of a document can be affected by classification advice that can be optionally provided in the create document request payload.

It can be also affected by optional advice on the type of page in the add document page request payload.

If the document page has the classificationAdviceRequired attribute set to true, classification advice is required and must be provided in the create document request payload. If the classification advice is missing or invalid, the document will be classified as UNKNOWN.
Full classification

A full classification means the Digital Identity Service knows the type of the document, its issuing country, the exact edition, and the type of travel document if the document is compliant with travel document specifications.

Only documents that have Level 2 support can be fully classified.

Any document source on a fully classified document can be processed. That means the Digital Identity Service can:

  • OCR textual data from the visual zone

  • parse data from the machine-readable zone

  • decode data from barcodes

  • extract biometric information from the document portrait

  • check input for tampering by inspecting the color profile of the image

  • identify image fields: signature, fingerprint, ghost portrait and document portrait

Partial classification

A partial classification means the Digital Identity Service knows the type of the travel document.

With a partially classified document, only the machine-readable zone and the document portrait sources can be processed. That means the Digital Identity Service can:

  • parse data from the machine-readable zone

  • extract biometric information from the document portrait

Partial classification is possible for any document with Level 1 support.

A document can be partially classified only after a page containing a machine-readable zone is provided. That means:

  • A TD1 document can be partially classified after a back page is provided. If the front page was provided first, it stays unrecognized until the back page is added.

  • TD2 and TD3 documents can be partially classified after a front page is provided.

Document not recognized

If the Digital Identity Service was able to recognize neither the document’s exact edition nor its travel document type, the document will be processed as an unknown document.

With an unknown document, the Digital Identity Service can only process the document portrait source. If a portrait is present on the provided page, the Digital Identity Service can:

  • extract biometric information from the document portrait

The system only keeps the last provided page for an unknown document. If there are multiple images provided and the document is still unknown, all previous pages are replaced with the last one.

Classification of an additional page

Once the document is at least partially classified, any page added later has to match the existing classification.

  • That means if the document is fully classified, then it will only accept pages from the same document edition.

  • If the document is partially classified, then it will accept pages from documents with the same travel document type.

  • If the document is not recognized, it will accept pages of any type.

The level of classification of a document can be increased with an additional page. For example, the exact edition of a document that is only partially classified as a travel document of TD1 type can subsequently be specified by recognizing it from an additional page. The recognized edition has to be compliant with the already recognized type of travel document. The classification level will move from partial classification to full classification.

If the document was classified incorrectly, the whole document needs to be deleted and the process started again. Classification can be improved by providing classification advice and/or by providing images of better quality.

Create Document

To create an identity document for a customer, make a PUT /document request on the customer resource.

Improve the performance of document processing by providing classification advice and/or by specifying the data sources on the document to be processed.

The response will contain a link to the newly created customer document resource.

There can be at most one document for a customer. The existing document can be replaced by creating a new document for the customer.
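A payload builder for the create document request is sketched below. The advice and source field names are assumptions modelled on the Classification Advice and Document Sources sections and must be verified against swagger.json:

```python
def build_create_document_payload(countries=None, types=None, sources=None):
    """Build the body for PUT /customers/{id}/document.

    Omitting sources (or passing an empty list) makes the service try to
    process all document sources.
    """
    payload = {}
    classification = {}
    if countries:
        classification["countries"] = list(countries)
    if types:
        classification["types"] = list(types)
    if classification:
        payload["advice"] = {"classification": classification}
    if sources:
        payload["sources"] = list(sources)
    return payload
```

Restricting countries, types, and sources up front narrows the classification candidates and reduces processing work per page.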

Classification Advice

If it’s known upfront what type of document will be uploaded, performance of the classification can be improved by providing classification advice.

Classification advice can influence how the document will be recognized. Potential candidates can be restricted by specifying allowed countries, document types, editions, and/or travel document types.

If no advice is provided, the system will perform the classification considering all supported document types.

Document Sources

The performance of document processing can be improved by specifying what parts of the document need to be processed.

Provide a list of document sources that need to be processed. If the list of sources is not provided in the request, or if it is empty, then the Digital Identity Service will try to process all of them.

Table 13. Supported document sources

Document Source

Description

Requirements

visual zone

  • read data from text fields

  • crop image fields: signature, ghost-portrait, fingerprint

document page needs to be fully recognized

machine-readable zone

  • parse data from machine-readable zone

the type of machine-readable travel document needs to be recognized

document portrait

  • extract biometric data from document portrait

document portrait needs to be present on provided page

barcode

  • extract data encoded in barcodes

document page needs to be fully recognized

Add Document Page

To add a page to the identity document for a customer, make a PUT /pages request on the customer’s document resource. There are alternative ways of uploading the image of a document page described in Multiple options for image uploads.

Improve the performance of the page’s processing by specifying whether it is a front or a back page in the classification advice.

The optional classification advice in the add document page request can specify only the type of page. To provide advice on the type of document, use the classification advice in the create document request.

A successful response will contain info about the classified document type and the recognized type of page. It will also contain the position of the detected document in the input image, the confidence, and a link to the newly created document page resource.

The response may contain a list of warnings.

An unsuccessful response will contain an error code.

When a page for a document is provided, the Digital Identity Service will try to recognize the type of page and the type of document. This process is called classification and is described in chapter Document Classification.
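The add document page payload, with the optional page-type advice, can be sketched as follows; the field names are assumptions to be checked against swagger.json:

```python
import base64

def build_add_page_payload(image_bytes, page_type=None):
    """Build the body for PUT on the document's /pages resource.

    page_type ("front" or "back") fills the optional classification
    advice; leave it out when the page type is unknown.
    """
    payload = {"image": {"data": base64.b64encode(image_bytes).decode("ascii")}}
    if page_type is not None:
        payload["advice"] = {"classification": {"pageTypes": [page_type]}}
    return payload
```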

Image requirements

Ideally, the photo of identity document should be created with Innovatrics’ auto-capture components, whether in mobile libraries or browser-based. These components ensure the quality requirements mentioned below:

  • The supported image formats are JPEG and PNG, or binary data created by Innovatrics web components or mobile SDKs

  • The document image must be large enough — when the document card is normalized, the text height must be at least 32 px (or the document card width in the image must be approximately 1000 px)

  • The document card edges must be clearly visible and be placed at least 10 px inside the image area

  • The image must be sharp enough for the human eye to recognize the text

  • The image should not contain objects or background with visible edges. (example below) This can confuse the process of detecting card on image

EdgesDemo
Figure 3. Examples of invalid and valid images

Check Document Page Quality

To check the quality of a provided image for a document page, a GET /quality request has to be made on the document page resource.

The response contains details about the brightness, sharpness, and the presence of hotspots on the original image.

The response also contains a list of found issues and a list of warnings.

Table 14. Quality check examples
Quality check result | Input image

OK

Ok

WARNING: DOCUMENT_CLOSE_TO_IMAGE_BORDER

The distance of at least one detected corner point from the nearest image border is less than 2% of the image width or height.

Document close to image borders

ISSUE: BRIGHTNESS_LOW

The brightness score is below 0.25

Low brightness

ISSUE: BRIGHTNESS_HIGH

The brightness score is over 0.9

High brightness

ISSUE: SHARPNESS_LOW

The sharpness score is below 0.85

Low sharpness

ISSUE: HOTSPOTS_SCORE_HIGH

The hotspots score is over 0.008

Hotspots

ISSUE: DOCUMENT_OUT_OF_IMAGE

At least one corner point of the document was detected outside the image area.

Document out of image

ISSUE: DOCUMENT_SMALL

The width of the detected document should be over 450px and the height over (450px / aspect ratio). If any detected edge of the document does not meet these requirements, the document is considered to be too small.

Small document

Get Document Page Image

To get the normalized image for a document page you have to make a GET / request on the document page resource.

An image can be requested to be returned in a specific size by providing optional query parameters width and/or height.

The response contains a base64 encoded image with a document page in the JPG format. The compression quality of returned images can be configured via a property.

Normalized page
Figure 4. Normalized document page
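Building the resized-image URL and decoding the base64 response can be sketched as follows; the `data` field name in the response body is an assumption to be verified against the API reference:

```python
import base64
from urllib.parse import urlencode

def page_image_url(page_link, width=None, height=None):
    """Append the optional width/height query parameters to a page link."""
    params = {k: v for k, v in (("width", width), ("height", height))
              if v is not None}
    return f"{page_link}?{urlencode(params)}" if params else page_link

def decode_page_image(response_body):
    """Decode the base64-encoded JPG carried in the response body."""
    return base64.b64decode(response_body["data"])
```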

Get Document Image Fields

The Customer Onboarding API provides methods to get image fields from recognized document pages.

Supported image fields are:

  • document portrait

  • ghost portrait

  • signature

  • fingerprint

If the customer’s document does not contain a requested image field, if the page with the field was not provided, or if the page was not recognized, the service will return a not found error with a 404 error code.

The successful response contains a base64 encoded image with a document page in the JPG format. The compression quality of returned images can be configured via the property.

Any image field can be requested to be returned in a specific size by providing optional query parameters width and/or height.

Get Document Portrait

To get an image with a document portrait, a GET /portrait request has to be made on the document resource.

The successful response contains a base64 encoded image with a document portrait in the JPG format.

Document portrait
Figure 5. Document portrait

The document portrait is available if it was present on a previously uploaded document page, even if the page was not recognized.

Get Document Ghost Portrait

To get an image with a ghost portrait, a GET /ghost-portrait request has to be made on the document resource.

The successful response contains a base64 encoded cropped ghost portrait image in the JPG format.

The ghost portrait is only available if the visual zone source processing was requested when the document resource was created. The ghost portrait has to be present on a previously uploaded and recognized document page as well.

Get Document Signature

To get an image with a signature, a GET /signature request has to be made on the document resource.

The successful response contains a base64 encoded cropped signature image in the JPG format.

Document signature
Figure 6. Document signature

The signature image is only available if the visual zone source processing was requested when the document resource was created. The signature has to be present on a previously uploaded and recognized document page as well.

Get Document Fingerprint

To get an image with a fingerprint, a GET /fingerprint request has to be made on the document resource.

The successful response contains a base64 encoded cropped fingerprint image in the JPG format.

The fingerprint image is only available if the visual zone source processing was requested when the document resource was created. The fingerprint has to be present on a previously uploaded and recognized document page as well.

Inspect Document

The Digital Identity Service API provides an endpoint to check the consistency of the submitted document’s data. This can be useful for authenticity and data-manipulation checks.

To perform this inspection, a POST request has to be made on the /document/inspect endpoint.

Based on the requested sources and provided data, the response contains the results of the following checks:

  • Expiration check

  • MRZ validity check

  • Portrait inspection

  • Visual zone inspection

  • Color profile change detection

  • Screenshot detection

For a detailed overview of the response itself, please refer to the OpenAPI specification of this endpoint.

For a more technical response regarding various comparisons, the POST /document/inspect/disclose endpoint can be called.
Expiration check

If the document contains an expiration date, the system will check if the document is expired or not.

MRZ Validity check

If the MRZ was processed, the system checks if it conforms to the ICAO specification.

Portrait inspection

If the document portrait was processed, the estimated age and gender are compared with corresponding values from other sources. The consistency of estimated age is evaluated only if the date of issue of the document is present.

Visual Zone inspection

The result of the inspection includes a list of fields that are inconsistent, a list of fields with low OCR confidence, and the median of all OCR confidences for the available fields.

The threshold for low OCR is configurable via a property.

Color profile change detection

The color profile change detection checks whether the colors of the detected document correspond to the trained model for the classified document edition.

This check is available only for documents with Level 2 support. The document also needs to be fully classified.

Table 15. Color profile change detection examples
Color profile change detected | Input image

true

Color profile change detected

false

Genuine image

Screenshot detection

The screenshot detection checks if the provided image is a genuine photo of a document or if it was taken from a screen of another device.

Table 16. Display attack detection examples
Looks like a screenshot | Input image

true

A display attack was detected.

Display attack detected

false

No display attack was detected.

Genuine image

Inspect Customer

To check the consistency of gathered biometric data about the customer, a POST /inspect request has to be made on the customer resource.

The response contains any biometric data extracted from the customer’s selfie and a comparison of this data with other available customer data.

The response provides a comprehensive overview of customer biometric data consistency in one single place, thus providing an opportunity to detect inconsistencies and to assess credibility of provided customer selfies and selected data from the document.

Digital Identity Service evaluates consistency of customer’s selfie with liveness selfies, document portrait, and/or text data read from identity document.

For a more technical response (including the calculated matching score) regarding various comparisons, the POST /inspect/disclose endpoint can be called.

Based on the provided sources, the response provides the following information:

  • Biometric face aspects estimations

  • Face similarities

  • Age differences

  • Gender consistencies

  • Face mask detection

  • Video injection detection

Biometric face aspects estimations

Age and gender are estimated from the customer’s selfie.

Face similarities

The customer’s selfie face is checked for similarity with the other faces provided: the identity document portrait and the faces on liveness selfies. Faces are considered similar when the respective face similarity threshold is reached. The thresholds for comparison with liveness selfies and for comparison with the document portrait are each configurable via a property.

Age differences

Comparison of age estimated from customer selfie with age from document portrait and age calculated from document’s text fields.

The age comparison with document portrait is performed only if the date of issue of the document is present.

Gender consistencies

Indication whether the gender evaluated from the customer’s selfie face matches the gender from other sources.

Face mask detection

Reports whether a mask was detected on the customer’s selfie.

The person in the selfie is considered to be wearing a face mask when the face mask detection score reaches a threshold, which is configurable via a property.

Video injection detection

Reports whether injection detection was evaluated and, if so, whether a video injection was detected.

Get Customer Data

To get all gathered data about a customer, a GET / request has to be made on the customer resource.

The response contains data extracted from the provided selfie and/or from provided document pages.

Attributes in the response contain values per source from which the value was extracted. Values may be extracted from: visual zone, mrz, selfie, document portrait, barcodes.

In case the provided document contains multiple instances of the same text field type within the visual zone, duplicated values of this field type will be returned as visualZoneDuplicates.

Barcode items contain the base64 encoded text, which represents the data extracted from the barcode, and the barcode format (e.g.: qr_code). If a barcode is present on the document but is not returned, the barcode is either not supported, could have failed checksum validation or could not be parsed properly (e.g.: blurry photo of the document was submitted).

Only fully trained documents (Level 2) are supported for barcode extraction.
In the case of multiple barcodes of the same format on the same page, knowledge of the document is required to identify the data that each barcode represents.
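Decoding returned barcode items can be sketched as follows; the item shape (a `format` plus base64 `content`) is an assumption modelled on this section:

```python
import base64

def decode_barcodes(barcode_items):
    """Decode the base64 text carried by each returned barcode item.

    Returns (format, decoded_text) pairs, e.g. for a qr_code item.
    """
    return [
        (item["format"], base64.b64decode(item["content"]).decode("utf-8"))
        for item in barcode_items
    ]
```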

If a document was provided, the response contains links to the resources of every present document page and image. Use them to get images of normalized pages and/or image fields.

Face biometrics

The Face API provides a service to detect a face in an image. Once detected, the face will be persisted for a configurable amount of time (see the config section). Use the face resource to execute additional actions while the record is persisted.

It also offers a method to evaluate the quality of a detected face and the face image. Use this method to verify if the detected face matches standards for a specific use case, and if a result of an additional action can be trusted.

There are additional actions that can be performed with a detected face while it is being held in cache:

Detect Face

To detect a face, make a POST /faces request.

If a customer selfie was already uploaded via the Add Selfie request from the Customer Onboarding API, a link to this selfie can be provided instead. In this case, specifying custom detection properties in the request is not permitted.

The successful response will contain the position of the detected face in the input image, the confidence, and a link to the newly created face resource. The response may also contain a list of warnings. The unsuccessful response will contain the error code.

The face position is represented by the face rectangle.

The detection confidence contains a score from the interval <0.0,1.0>. Values near 1.0 indicate high confidence that a human face was detected. Some additional actions may be performed only on faces detected with a confidence above a certain threshold. Specific limits can be found in the quality section.

Image requirements

  • The supported image formats are JPEG and PNG

Detection mode

Optionally, the detection mode can be specified: FREE or STRICT. STRICT mode is used by default.

  • STRICT mode will return an error if multiple faces are detected.

  • FREE mode will return the largest face and a warning if multiple faces are detected. The performance of the detection in the FREE mode is affected by the max detection count property.

    • If the max detection count is lower than the actual number of faces on the image, then it is not guaranteed that the biggest face will be detected and returned.

    • If the max detection count is much higher than the number of faces on the single image inputs, then the detection runs slower than needed.

Face size ratio

The face size ratio is the ratio between the face size and the shorter side of the image.

The service detects only faces with a face size ratio within a certain range that is configurable via min and max properties.

Optionally, override the configuration by specifying a custom face size ratio in the request.

Min face size ratio restrictions

The size of detectable faces depends on the input image and the configured face detection speed accuracy mode.

The allowed min face size ratio is restricted based on the size of the input image. If the requested min face size ratio is too small for the input image, the detection request will fail with the error FACE_SIZE_MEMORY_LIMIT.

Table 17. Min valid face size calculation per detection mode
Face detection mode — min valid face size ratio calculation:

  • fast: 12 / ShorterSide

  • balanced, accurate: max( 3 / ShorterSide, (10 / FD_MAX_IMAGE_SIZE) × (LongerSide / ShorterSide) )

ShorterSide and LongerSide values are in pixels. Big input images will be shrunk before processing so the longer side is at most 3000px. If the image needs to be shrunk, use the resized lengths of longer and shorter sides to calculate the min valid face size ratio.

FD_MAX_IMAGE_SIZE is configurable via the max-image-size parameter.
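The calculation above can be sketched in Python (an illustration of the Table 17 formulas; the default of 1200 for FD_MAX_IMAGE_SIZE comes from the max-image-size configuration property):

```python
def min_valid_face_size_ratio(image_width, image_height, mode, fd_max_image_size=1200):
    """Sketch of the Table 17 formulas for the min valid face size ratio."""
    longer = max(image_width, image_height)
    shorter = min(image_width, image_height)
    # Big input images are shrunk so the longer side is at most 3000 px
    if longer > 3000:
        scale = 3000 / longer
        longer *= scale
        shorter *= scale
    if mode == "fast":
        return 12 / shorter
    # balanced / accurate
    return max(3 / shorter, 10 / fd_max_image_size * longer / shorter)
```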

Face detection speed accuracy mode

The face detection speed accuracy mode represents a trade-off between face and facial features detection speed and accuracy. By default the face detection runs in the accurate mode. To increase the speed of the detection, change the detection mode via the face detection speed accuracy mode property.

Supported modes are:

  • fast - partially occluded faces or faces with sunglasses may be missed, and faces printed on ID cards may not be detected. However, the face detection is much faster than in the other modes. It is compatible with DOT Mobile Kits.

  • balanced - the performance of the face detection is somewhere in between accurate and fast modes.

  • accurate - partially occluded, blurry, or profile faces and faces with sunglasses are detected. On CPU, this is the slowest of the three modes.

Check Face Quality

To find out if a face matches ICAO specifications or if it is suitable for additional processing, check its quality.

To get info about quality, make a GET /quality request on the detected face.

The response contains information about face expression, head pose and quality attributes of the image. Each attribute contains an actual score and a flag indicating if the score is reliable.

For each use case, check if preconditions for each relevant attribute are met and if their scores are from the requested range.

ICAO conditions

Table 18. ICAO conditions

  • face detection confidence: >= 0.53

  • yaw angle: <-10.0; 10.0>

  • pitch angle: <-10.0; 10.0>

  • roll angle: <-10.0; 10.0>

  • sharpness: <0.5; 1.0>

  • brightness: <0.25; 0.75>

  • contrast: <0.25; 0.75>

  • unique intensity levels: <0.5; 1.0>

  • shadow: <0.49; 1.0>

  • nose shadow: <0.496; 1.0>

  • specularity: <0.495; 1.0>

  • heavy frame: <0.0; 0.515>

  • mouth: <0.5; 1.0>

  • background uniformity: <0.3; 1.0>

  • right eye: <0.5; 1.0>

  • left eye: <0.5; 1.0>

  • red right eye: <0.0; 0.5>

  • red left eye: <0.0; 0.5>

  • eye gaze: <0.48; 1.0>
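For example, a client could evaluate the quality response against these ranges as follows (a sketch covering a subset of Table 18; the attribute keys are illustrative and do not correspond to the actual response field names):

```python
# Subset of the Table 18 ranges; the remaining attributes follow the same pattern.
ICAO_RANGES = {
    "yaw_angle": (-10.0, 10.0),
    "pitch_angle": (-10.0, 10.0),
    "roll_angle": (-10.0, 10.0),
    "sharpness": (0.5, 1.0),
    "brightness": (0.25, 0.75),
    "contrast": (0.25, 0.75),
    "shadow": (0.49, 1.0),
    "background_uniformity": (0.3, 1.0),
}

def icao_failures(scores, detection_confidence):
    """Return the attributes whose scores fall outside the ICAO ranges."""
    failed = [] if detection_confidence >= 0.53 else ["face_detection_confidence"]
    for attribute, (low, high) in ICAO_RANGES.items():
        if attribute in scores and not low <= scores[attribute] <= high:
            failed.append(attribute)
    return failed
```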

Liveness preconditions

Table 19. Liveness preconditions

  • face detection confidence: >= 0.55

  • face size: >= 60px

  • yaw angle: <-20.0; 20.0>

  • pitch angle: <-20.0; 20.0>

  • brightness: <0.11; 0.75>

  • contrast: <0.25; 0.8>

  • unique intensity levels: <0.525; 1.0>

Mask detection preconditions

Table 20. Mask detection preconditions

  • face detection confidence: >= 0.53

  • eye distance: <12; 100 000>

Glasses detection preconditions

Table 21. Glasses detection preconditions

  • face detection confidence: >= 0.53

  • yaw angle: <-40.0; 40.0>

  • pitch angle: <-40.0; 40.0>

Age & Gender preconditions

Table 22. Age & Gender preconditions

  • face detection confidence: >= 0.06

  • face size: >= 30px

  • yaw angle: <-20.0; 20.0>

  • pitch angle: <-15.0; 15.0>

Face Features

Face API provides operations to get features of the detected face.

Age and Gender

To get age and gender, a GET /aspects request should be made on the detected face.

The response contains an estimated age and a gender score from the interval <0.0, 1.0>. Values near 0.0 indicate 'male', values near 1.0 indicate 'female'.

The speed and accuracy of this estimation can be configured via the face attribute speed and accuracy mode property.
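For example, a client could map the gender score to a label (the 0.5 cut-off is an illustrative assumption, not a documented threshold):

```python
def gender_label(gender_score, threshold=0.5):
    # Values near 0.0 indicate 'male', values near 1.0 indicate 'female'.
    # The 0.5 cut-off is illustrative only.
    return "female" if gender_score >= threshold else "male"
```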

Mask

To check for the presence of a face mask, make a GET /face-mask request on the detected face.

The response contains a score from the interval <0.0,1.0>. Values near 1.0 indicate the presence of a mask.

The mask detection result is reliable only if the original image matches mask detection preconditions.

The mask detection only works when the face detection speed accuracy mode is set to balanced or accurate.

Glasses

To check for the presence of glasses, make a GET /glasses request on the detected face.

The response contains a score from the interval <0.0,1.0>. Values near 1.0 indicate the presence of glasses. The response contains additional scores for the presence of tinted glasses and glasses with a heavy frame.

The glasses detection result is reliable only if the original image matches glasses detection preconditions.

Face Crop

To get a face crop image, make a GET /crop request on the detected face.

The response contains a base64 encoded image with the cropped face in JPG format. The compression quality of returned images can be configured via the jpg-compression-quality property.

The service allows customizing the width and height of the returned image via the corresponding optional parameters.


Configure Crop Method

Two cropping methods defined in the ISO/IEC 19794-5 standard are supported: FULL FRONTAL and TOKEN FRONTAL. By default, FULL FRONTAL is used. The method can be changed via the crop method property.

Crop without Background

To get a face crop with removed background, make a GET /crop/remove-background request on the detected face.

The response contains a base64 encoded image in the PNG format.

The service allows customizing the width and height of the returned image via the corresponding optional parameters.

Configure Segmentation

The segmentation can be fine-tuned by the segmentation threshold parameter, which is in the range <-10000, 10000>. It is quantile-normalized, and 0 represents the equal error rate (EER). A higher segmentation threshold means that the result image will contain more foreground; a lower threshold means it will contain more background.

Configure Returned Image Type

The type of returned image is determined based on the Segmentation Image Type property. Supported types:

  • mask - The segmentation mask only (single-channel)

  • masked - The three-channel image with applied segmentation. The masked areas are filled with the color defined by the background color property.

  • masked_alpha - The four-channel image with applied segmentation. The masked areas are marked as transparent in the alpha channel.

Table 23. Segmentation Image Type Examples
Columns (example images): original | cropped | mask | masked | masked_alpha

Settings used for the examples:

segmentation matting type: global
segmentation threshold: 0
segmentation matting possible threshold: 0
segmentation matting sure threshold: 500
background color: FFFFFF

Configure Background Color

When the image type is set to masked, the background color can be defined. Valid values are hexadecimal code strings in RRGGBB format, e.g. FFFFFF.

Table 24. Background Color Examples
Columns (example images): background color white (FFFFFF) | background color red (FF0000)

Settings used for the examples:

segmentation type: masked
segmentation matting type: global
segmentation threshold: 0
segmentation matting possible threshold: 0
segmentation matting sure threshold: 500
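A client may want to validate the color string before sending it, for example:

```python
import re

def is_valid_background_color(value):
    # Hexadecimal RRGGBB string, e.g. "FFFFFF" (white) or "FF0000" (red)
    return re.fullmatch(r"[0-9A-Fa-f]{6}", value) is not None
```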

Configure Segmentation Matting

To soften mask edges after segmentation, enable the global matting via the Segmentation Matting Type property.

Fine tune softening mask edges by two thresholds: the Segmentation Matting Possible Threshold and the Segmentation Matting Sure Threshold.

  • <-10000, POSSIBLE_THRESHOLD> - Background for sure; the matting doesn’t influence this range.

  • <POSSIBLE_THRESHOLD, SURE_THRESHOLD> - The unsure range; the matting decides what is foreground and what is background.

  • <SURE_THRESHOLD, 10000> - Foreground for sure; the matting doesn’t influence this range.
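These ranges can be modelled as follows (the default thresholds of 0 and 500 match the values used in the examples in this section):

```python
def classify_segmentation_score(score, possible_threshold=0, sure_threshold=500):
    """Model of the matting ranges; scores are in <-10000, 10000>."""
    if score < possible_threshold:
        return "background"   # the matting does not influence this range
    if score < sure_threshold:
        return "unsure"       # the matting decides foreground vs background
    return "foreground"       # the matting does not influence this range
```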

Table 25. Segmentation Matting Type Examples
Columns (example images): segmentation matting type global | segmentation matting type off

Settings used for the examples:

segmentation type: masked
segmentation threshold: 0
segmentation matting possible threshold: 0
segmentation matting sure threshold: 500
background color: FFFFFF

Crop Coordinates

To get coordinates of a face crop within the original input image, make a GET /crop/coordinates request on the detected face.

The response contains coordinates of 4 corner points and the information whether the crop is fully present within the input image.

Figure 7. Illustration of crop coordinates

Create Face Template

To create a face template, make a GET /template request on the detected face.

The returned template can be used as a reference in the face verification.

Template extraction mode

The extraction mode represents a trade-off between face template creation speed and face template quality. By default, the face template extraction runs in the accurate mode. To increase the speed of extraction, change the extraction mode via the extraction speed accuracy mode property.

Supported modes are:

  • fast - produces face templates suitable for verification of fairly good accuracy. The face template creation is very fast. It is compatible with DOT Mobile Kits.

  • balanced - produces face templates suitable for verification/identification of high accuracy. The performance of the face template creation is somewhere in between accurate and fast modes.

  • accurate - produces face templates suitable for verification/identification of very high accuracy. However, the performance of the face template creation is not as good as when the balanced or fast mode is used.

Match Face Against Reference

To verify the similarity between two faces or to match a face to a template, make a POST /similarity request on the detected face.

Provide either a template or a face to be used as a reference; providing both is not permitted.

Face as a reference

Any face can be used as a reference as long as it is persisted in the cache; a face cannot be used after it has expired. The face identifier can be found in the face resource URL.

Reference template

To match a face with a face template, provide the base64 encoded template. The template has to be compatible with the server’s configuration and supported by the server’s version of IFace.

If the provided template is not supported by the server’s version of the IFace, the error response with a code UNSUPPORTED_VERSION_TEMPLATE will be returned.

If the provided template was created with a different extraction speed accuracy mode, the error response with a code INCOMPATIBLE_TEMPLATE will be returned.

If the provided template is corrupted, the error response with a code CORRUPTED_TEMPLATE will be returned.

Request session management

The Digital Identity Service supports the creation of resources (customer or face) as part of a session. This adds a higher level of resource protection and ensures that the incoming data were created at the time of the onboarding.

The recommended flow consists of the following steps:

  1. Create a session

  2. Create a resource (customer or face) linked to the session

  3. Perform actions on the resource

  4. Delete the session

Create session

To create a session, a POST /sessions request must be made, specifying the session’s active period in the timeout attribute.

The response will contain a Base64 encoded session token.

Create resource

To create a resource linked to the session, the session token from the response must be provided in the x-inn-session-token header.

Any subsequent request accessing this resource must contain the same session token in the header that was provided during the creation.

Delete session

To delete a session, a DELETE /sessions request must be made, containing the session token in the x-inn-session-token header.

This will delete the requested session as well as any resources linked to this session.
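On the client side, this flow reduces to passing the same token with every request that touches a session-linked resource; only the x-inn-session-token header name is taken from this documentation:

```python
def session_headers(session_token):
    # Every request accessing a session-linked resource must carry the same
    # token that was provided when the resource was created.
    return {"x-inn-session-token": session_token}
```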

Application configuration

Digital Identity Service is configurable via a YAML file in the config folder:

config/application.yml
Table 26. Digital Identity Service configuration properties

Property

Description

innovatrics.dot.dis

  • jpg-compression-quality

The compression quality of images returned in JPG format.

Default value: 0.9

innovatrics.dot.dis.proxy

  • host

The hostname of your proxy server.

Example: squid.example.com

  • port

Available port of your proxy server.

Example: 8088

innovatrics.dot.dis.iface

  • license.filepath

Innovatrics IFace license file path.

Default value: ../license/iengine.lic

  • solvers.filepath

Innovatrics IFace solvers file path.

Default value: ../solvers

  • crop-method

Face cropping method according to ISO/IEC 19794-5 standard.

Supported methods: FULL_FRONTAL and TOKEN_FRONTAL

  • face-attribute.age-gender-speed-accuracy-mode

The age and gender estimation speed accuracy mode.

Supported modes: fast, balanced and accurate.

  • face-template.extraction-speed-accuracy-mode

The face template extraction speed accuracy mode.

Supported modes: fast, balanced and accurate.

innovatrics.dot.dis.iface.face-detection

  • face-size-ratio.min

The minimal face size ratio of faces detected to the shorter side of an image. The application will recognize a face in the image only if the face size ratio is greater than or equal to this value.

Default value: 0.05

  • face-size-ratio.max

The maximal face size ratio of faces detected to the shorter side of an image. The application will recognize a face in the image only if the face size ratio is less than or equal to this value.

Default value: 0.35

  • confidence-threshold

The face detection confidence threshold. Faces with a confidence score lower than this value will be ignored.

The supported interval is [0, 10000].

  • speed-accuracy-mode

The face detection speed accuracy mode.

Supported modes: fast, balanced and accurate.

  • max-detection-count

The max number of faces to detect in a single image when the mode is FREE.

Default value: 2

Min value: 2

See more in Detection mode.

  • max-image-size

The parameter defines the maximal size of an image entering the internal solver in the balanced and accurate detection modes.

It affects the memory requirements. When set to a higher number, the face detection consumes more memory.

The value of this param affects the limit for minFaceSize. If minFaceSize needs to be set to a smaller value to detect smaller faces, then this parameter must be set to a higher value.

Default value: 1200

innovatrics.dot.dis.iface.background-removal

  • segmentation-image-type

The parameter determines the type of the returned image. Valid values are mask, masked, masked_alpha.

  • segmentation-matting-type

The parameter defines the type of matting used after the head-shoulder segmentation. Valid values are off, global.

  • segmentation-threshold

The parameter defines the threshold for the segmentation mask. It should be in the range <-10000, 10000>.

  • segmentation-matting-possible-threshold

The parameter defines the threshold for a possible foreground when generating the trimap for matting. It should be in the range <-10000, 10000>.

  • segmentation-matting-sure-threshold

The parameter defines the threshold for a sure foreground when generating the trimap for matting. It should be in the range <-10000, 10000>.

  • background-color

The parameter defines the color used to fill parts of the cropped image that fall outside the original source image boundaries. Valid values are hexadecimal code strings in RRGGBB format.

innovatrics.dot.dis.customer.eye-gaze-liveness

  • min-valid-selfies-count

The minimal selfie count required for eye gaze liveness.

Acceptable values: 4, 5, 6, 7

innovatrics.dot.dis.customer.passive-liveness

  • mode

The evaluation mode used by the passive liveness check.

Possible values: UNIVERSAL (default), STANDARD

innovatrics.dot.dis.customer.smile-liveness

  • passive-liveness-threshold

The passive liveness threshold used by the smile liveness check.

The supported interval is [0, 1].

Default value: 0.85

innovatrics.dot.dis.customer.selfie

  • face-size-ratio.min

The minimal face size ratio of faces detected in customer’s selfies. The application will recognize a face in the image only if the face size ratio is greater than or equal to this value.

Default value: 0.1

  • face-size-ratio.max

The maximal face size ratio of faces detected in customer’s selfies. The application will recognize a face in the image only if the face size ratio is less than or equal to this value.

Default value: 0.4

innovatrics.dot.dis.customer.document.portrait

  • cropped-portrait-face-size-ratio.min

The minimal face size ratio of faces detected in customer’s document portrait of classified documents.

The application will recognize a face in the image only if the face size ratio is greater than or equal to this value.

Default value: 0.14

  • cropped-portrait-face-size-ratio.max

The maximal face size ratio of faces detected in customer’s document portrait of classified documents. The application will recognize a face in the image only if the face size ratio is less than or equal to this value.

Default value: 0.4

  • non-cropped-portrait-face-size-ratio.min

The minimal face size ratio of faces detected in pages of unknown documents. The application will recognize a face in the image only if the face size ratio is greater than or equal to this value.

Default value: 0.05

  • non-cropped-portrait-face-size-ratio.max

The maximal face size ratio of faces detected in pages of unknown documents. The application will recognize a face in the image only if the face size ratio is less than or equal to this value.

Default value: 0.25

innovatrics.dot.dis.customer.document.inspection

  • ocr-text-field-threshold

The OCR text field confidence threshold.

Text fields with an OCR confidence lower than this value will be listed in the inspection of the visual zone.

Default value: 0.92

  • color-profile-change-detection-threshold

The color profile change detection threshold. Document pages with a score lower than this value will be evaluated as tampered.

The supported interval is [0.0, 1.0].

Default value: 0.4

  • screenshot-detection-threshold

The screenshot detection threshold. Document pages with a score lower than this value will be evaluated as display spoofs.

Default value: 0.351

  • tampered-text-detection-threshold

The tampered text detection threshold. Document pages with a score lower than this value will be evaluated as tampered.

Default value: 0.5

innovatrics.dot.dis.customer.inspection

  • face-mask-threshold

The face mask detection threshold.

If the mask detection score on a customer selfie is above this value, then a mask is considered to be present.

The supported interval is [0.0, 1.0].

Default value: 0.4975

  • selfie-similarity-with-document-portrait-threshold

Customer selfie similarity with document portrait threshold.

Customer selfie and face detected on document portrait are evaluated as similar if their similarity confidence is above this value.

The supported interval is [0.0, 1.0].

Default value: 0.35

  • selfie-similarity-with-liveness-selfies-threshold

Customer selfie similarity with liveness selfies threshold.

Customer selfie and selfies added to liveness are evaluated as similar if their similarity confidence is above this value. If there are multiple liveness selfies, then the average confidence is used for the evaluation.

The supported interval is [0.0, 1.0].

Default value: 0.50
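Putting several of the properties above together, a minimal config/application.yml fragment might look like the following (values shown are the documented defaults; verify the exact nesting against the application.yml shipped in your distribution package):

```yaml
innovatrics:
  dot:
    dis:
      jpg-compression-quality: 0.9
      persistence:
        type: ehcache
      iface:
        license:
          filepath: ../license/iengine.lic
        solvers:
          filepath: ../solvers
        face-detection:
          speed-accuracy-mode: accurate
          max-detection-count: 2
          face-size-ratio:
            min: 0.05
            max: 0.35
      customer:
        passive-liveness:
          mode: UNIVERSAL
        smile-liveness:
          passive-liveness-threshold: 0.85
```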

Appendix

Changelog

1.34.0 - 2024-03-14

Changed
  • You can now use the ADDITIONAL_LIBS build argument to add additional libraries to the Docker image. Check the Docker part of the documentation for more information.

  • Fixed an issue that resulted in HTTP 500 responses when the Innovatrics tracking service was not available in high concurrency scenarios

Added
  • the GET /prometheus endpoint now includes the dis_license_expiration_remaining_days metric providing the number of days remaining until the expiration of the DOT license.

1.33.0 - 2024-02-15

Changed
  • The default value of the innovatrics.dot.dis.customer.smile-liveness.passive-liveness-threshold property has changed to 0.85. Please update your configuration accordingly.

Added
  • The GET /metadata endpoint may now return a new attribute classificationAdviceRequired which indicates whether a precise classification advice is required for the given document.

1.32.1 - 2024-02-06

Changed
  • Internal improvements

1.32.0 - 2024-02-01

Changed
  • The provided Dockerfiles now use Rocky Linux 9 as a base image instead of Ubuntu 22.04. You can find more info about this change in the documentation in the Docker section

Added
  • The PUT /customers/{customerId}/document/pages endpoint now returns cornerOutOfImage attribute which indicates whether document corners are detected outside the image.

  • A new docker compose file with redis cache for testing purposes.

  • Deployment recommendations are now available here

1.31.0 - 2024-01-11

Changed
  • Internal improvements

1.30.0 - 2023-12-21

Changed
  • Updated IFace to 5.12.0

Added
  • New and improved models for estimating the age and gender of a person have been added, while the older model is still available. The models can be configured by a new property innovatrics.dot.dis.iface.face-attribute.age-gender-speed-accuracy-mode with the following options:

    • fast (default) - the fastest model, but with the lowest accuracy (this is the model that has been used for all previous versions)

    • balanced - a balanced model with a good compromise between speed and accuracy

    • accurate - the most accurate model, but with the lowest speed

1.29.0 - 2023-12-07

Changed
  • The server will now output loaded document models only to the debug log on startup

Added
  • A new Passive Liveness Universal Model, this can be toggled on via a new mandatory property innovatrics.dot.dis.customer.passive-liveness.mode, please update your configuration accordingly

    • options: UNIVERSAL (default), STANDARD (legacy)

    • for thresholds and more details please refer to our technical documentation

  • A key prefix for grouping the key/values can now be configured for Redis cache via the innovatrics.dot.dis.persistence.redis.key-prefix property

  • Face submission in POST /faces via faceOrigin attribute now also supports link to the customer document portrait

  • The POST /customers/{customerId}/inspect/disclose endpoint now also returns score for selfie similarity with liveness selfies

  • Server-side request forgery protection can now be configured for endpoints that support image.url in the JSON body via the innovatrics.dot.dis.data-downloader.ssrf-protection property. Please refer to our technical documentation for more information

    • The data downloader is now completely disabled in the default configuration and can be enabled by setting the innovatrics.dot.dis.data-downloader.enabled property to true

1.28.1 - 2023-11-28

Added
  • Updated IFace to 5.10.0

1.28.0 - 2023-11-16

Changed
  • Internal improvements

Added
  • Added support for Ehcache as a cache option, please refer to our technical documentation for detailed information

    • this cache is not distributed and is only available for a single instance of the application and therefore should not be used in a clustered environment

    • The cache can be enabled by setting the innovatrics.dot.dis.persistence.type property to ehcache

1.27.0 - 2023-10-27

Changed
  • Interval of the innovatrics.dot.dis.customer.smile-liveness.passive-liveness-threshold property changed from [0..100] to [0..1]. Please update your configuration accordingly.

    • For example, the default value is now 0.89 instead of 89

  • Rootless Dockerfile is now the default and the recommended way for building containers. The root Dockerfile (root-user.Dockerfile) is also provided for convenience

  • Internal improvements

Added
  • A new system was introduced for QR codes processing, which should increase the chance of successfully reading QR codes from various Level 2 documents

1.26.0 - 2023-10-03

Changed
  • Updated IFace to 5.8.0

  • Internal improvements

Added
  • A proxy server can now be configured for the Innovatrics tracking service via the innovatrics.dot.dis.proxy properties

    • innovatrics.dot.dis.proxy.host - a proxy host

    • innovatrics.dot.dis.proxy.port - a proxy port

1.25.0 - 2023-09-14

Added
  • New option for image submission using the binary data produced by the Innovatrics Web components or Mobile SDKs

  • The API endpoint POST /customers/{id}/liveness/records now includes support for Passive and Smile Liveness

  • The /customers/{id}/inspect endpoint returns video injection evaluation, indicating whether it was evaluated and detected

  • Performance improvements

1.24.0 - 2023-08-17

Changed
  • For security reasons, the Liveness Record endpoint (POST /customers/{{customerId}}/liveness/records) for MagnifEye liveness is now incompatible with the following DOT components:

    • DOT Android SDK 6.2.1 (or lower)

    • DOT iOS SDK 6.2.0 (or lower)

    • DOT Web Components 4.1.5 (or lower)

  • Internal improvements

1.23.1 - 2023-08-01

Changed
  • Internal improvements

1.23.0 - 2023-07-27

Changed
  • The Screenshot Detection feature is using an improved model

    • the score for the Inspection Disclose endpoint (POST /customers/{customerId}/document/inspect/disclose) may now return different values in range from 0 to 1 because of a different normalization method

    • the screenshot-detection-threshold used for the Inspection endpoint (POST /customers/{customerId}/document/inspect) now has the default value set at 0.351 (please update your configuration accordingly)

  • The application will no longer fail to start when the hostname and port is not configured while using Redis in the CLUSTER mode (it is still mandatory for both STANDALONE and MASTER_SLAVE modes)

Added
  • New mandatory configuration property innovatrics.dot.dis.customer.smile-liveness.passive-liveness-threshold

1.22.1 - 2023-07-20

Changed
  • Internal improvements

1.22.0 - 2023-07-13

Changed
  • Updated IFace to 5.6.0 with an improved:

    • glasses and tinted glasses detection

    • model for Passive Liveness (threshold change is needed)

  • Fixed a problem with inconsistent cache expiration of the customer’s resources

  • Updated default values for log retention in logback-spring.xml file. We recommend to update your logs retention accordingly.

  • Performance improvements in machines with high number of CPU cores.

  • The application will now shut down on startup if a CPU without AVX2 instructions is detected

  • The underlying OS in our Dockerfile has been upgraded to Ubuntu 22.04 LTS. This brings the previous Ubuntu 18.04 LTS out of support as it has passed its EOL period.

  • Internal improvements

Added
  • New option for the face submission in POST /faces via faceOrigin attribute which will use the link to the customer selfie

1.21.0 - 2023-06-09

Changed
  • Added support for Redis as a cache option, please refer to our technical documentation for more information

  • Tracing and metrics URIs no longer contain curly brackets

  • Internal improvements

Added
  • New mandatory configuration property innovatrics.dot.dis.persistence.type

1.20.0 - 2023-05-19

Changed
  • Updated IFace to 5.4.0

  • Internal improvements

1.19.0 - 2023-04-27

Changed
  • Updated IFace to 5.2.0 with an improved model for Passive Liveness (threshold change is needed)

  • Tracing has been changed to Micrometer OpenTelemetry tracing, which now sends data via gRPC to the configured collector (e.g.: Jaeger)

  • Internal improvements

Added
  • A new Customer Document Inspection "Disclose" endpoint (POST /customers/{customerId}/document/inspect/disclose) which returns more technical data for document inspection

1.18.0 - 2023-04-05

Changed
  • Internal improvements

Added
  • Support for comparison of truncated documentNumber in the Machine Readable Zone via POST /customers/{id}/document/inspect

  • Duplicated text field values on L2 supported documents are now returned via GET /customers/{id} as a visualZoneDuplicates array for the given text field

1.17.0 - 2023-03-27

Changed
  • Fixed OpenAPI3 Swagger JSON file that can be used to generate code

    • operationId of DELETE /customers/{id}/document - delete_1 has been renamed to deleteDocument

    • operationId of DELETE /customers/{id} - delete has been renamed to deleteCustomer

    • TD1Mrz and TD2Mrz responses will now correctly export with optionalDataFirstLine (for both) and optionalDataSecondLine (for TD1) as required = false

    • ErrorResponse will now correctly be exported as a single-object schema instead of an array

...
"application/json": {
    "schema": {
        "$ref": "#/components/schemas/ErrorResponse"
    }
}
...
  • Updated IFace to 5.1.1

  • New option for the selfie submission in PUT /customers/{id}/selfie via selfieOrigin attribute which will use the selfie from the MagnifEye liveness

  • The /customers/{id}/inspect functionality will now take the submitted MagnifEye liveness into account, if available

  • Internal improvements

Added
  • New feature: MagnifEye Liveness

    • new API endpoint POST /customers/{id}/liveness/records with accepted content-type application/octet-stream

    • new API endpoint GET /customers/{id}/liveness/records/{recordId}/selfie which returns the selfie from the MagnifEye liveness

    • new liveness type MAGNIFEYE_LIVENESS for liveness evaluation

1.16.0 - 2023-03-02

Changed
  • Improved MRZ cross-checking via POST /customers/{id}/document/inspect

  • Internal improvements

1.15.0 - 2023-02-10

Changed
  • Updated IFace to 5.0.3 (no changes in the Liveness Detection)

  • A list of open-source libraries is now available in the distribution package and can be found at doc/dependency-license/index.html

1.14.0 - 2023-01-25

Changed
  • The trailing slash is no longer resolved (e.g.: /api/v1/customers will resolve but /api/v1/customers/ will return 404 Not Found)

  • The Screenshot Detection feature is using an improved model

    • the screenshot-detection-threshold default value is now at -0.5

Added
  • A new "Create Customer" endpoint (POST /customers/{customUUID}) which allows creation of customer with custom UUIDv4

1.13.0 - 2022-12-16

Changed
  • Added 401 and 403 responses to Swagger YAML

  • Innovatrics tracking service has the URL hardcoded and may be enforced by the license

  • api/v1/health endpoint now also reports the health of the tracking service

  • Internal improvements

1.12.0 - 2022-11-25

Changed
  • Internal improvements

1.11.0 - 2022-11-18

Added
  • A new Customer Inspection "Disclose" endpoint (POST /customers/{customerId}/inspect/disclose) which returns more technical data for customer inspection

1.10.1 - 2022-11-03

Changed
  • Support for AVX instructions turned on to further increase speed

1.10.0 - 2022-10-27

Changed
  • Onboarding operations (face, document) have improved throughput and latency by parallelizing internal algorithms

  • Other internal improvements

1.9.1 - 2022-10-12

Changed
  • Fixes with OAuth2 implementation

1.9.0 - 2022-10-06

Changed
  • Update IFace to 4.20.0 with an improved passive liveness feature

  • Changed passive liveness thresholds

  • Internal improvements

1.8.0 - 2022-09-08

Changed
  • Data from parsed barcodes from level 2 documents are now returned via GET /customer/{customerUUID}

  • Data isolation framework now correctly returns HTTP 403 Forbidden when appropriate

  • If the server is started with a legacy license, the This license type is not supported. Please contact your sales representative to get a proper license error is now thrown

  • Internal improvements

1.7.0 - 2022-08-17

Changed
  • Failed authentication now returns an HTTP 401 Unauthorized response

  • Increased transaction log retention to 455 days

  • UUID validation messages are now consistent across the API

  • Internal improvements

1.6.0 - 2022-08-03

Added
  • Introduced support for multi-tenancy (data isolation)

  • Internal improvements

1.5.0 - 2022-07-15

Changed
  • Updated IFace to 4.18.0

  • Changed score distribution for passive liveness

1.4.0 - 2022-06-23

Added
  • Added licensing support for Docker environments

Changed
  • Updated IFace to 4.16.0

1.3.0 - 2022-06-15

  • Internal improvements

  • Increased minimum required Java version to 17

  • Introduced caching of liveness evaluation

1.2.1 - 2022-05-27

  • Changed the threshold of the "document looks like a screenshot" detection from 0.83 to 0.80

1.2.0 - 2022-05-27

  • Fixes; changed the threshold of the "document looks like a screenshot" detection from 0.85 to 0.83

1.1.1 - 2022-05-19

  • Internal improvements

1.1.0 - 2022-05-12

  • Introduced Smile Liveness evaluation

  • Authentication mechanism changes

  • Other internal improvements

1.0.0 - 2022-04-19

  • Initial release

Digital Identity Service Dockerfile example

ARG UID=1000
ARG GID=1000
ARG ROCKY_ROOTFS=/mnt/rootfs

FROM rockylinux:9 AS rocky-micro-build

ARG UID
ARG GID
ARG ROCKY_ROOTFS
ARG ADDITIONAL_LIBS

RUN mkdir -p ${ROCKY_ROOTFS}

RUN printf \
'[Adoptium] \n\
name=Adoptium \n\
baseurl=https://packages.adoptium.net/artifactory/rpm/rhel/$releasever/$basearch \n\
enabled=1 \n\
gpgcheck=1 \n\
gpgkey=https://packages.adoptium.net/artifactory/api/gpg/key/public' >> /etc/yum.repos.d/adoptium.repo

RUN set -ex && \
    yum install epel-release --setopt install_weak_deps=false --nodocs -y

RUN yum install --installroot ${ROCKY_ROOTFS} \
    libusbx \
    libusb \
    coreutils-single \
    glibc-langpack-en \
    temurin-17-jre \
    jemalloc \
    libgomp \
    ${ADDITIONAL_LIBS} \
    --setopt install_weak_deps=false --nodocs --releasever 9 -y && \
    yum --installroot ${ROCKY_ROOTFS} clean all

RUN rm -rf ${ROCKY_ROOTFS}/var/cache/* ${ROCKY_ROOTFS}/var/log/dnf* ${ROCKY_ROOTFS}/var/log/yum.* ${ROCKY_ROOTFS}/usr/share/zoneinfo

# Add a user to run the application
RUN set -eux && \
    groupadd --gid=${GID} dot-dis && \
    adduser --gid=${GID} --uid=${UID} dot-dis && \
    passwd -l dot-dis

# copy additional files to /mnt/rootfs
RUN set -eux && \
    cp -r /etc/yum.repos.d/ ${ROCKY_ROOTFS}/etc/yum.repos.d/ && \
    cp /etc/group ${ROCKY_ROOTFS}/etc/group && \
    cp /etc/passwd ${ROCKY_ROOTFS}/etc/passwd && \
    cp /etc/shadow ${ROCKY_ROOTFS}/etc/shadow

FROM scratch

ARG UID
ARG GID
ARG ROCKY_ROOTFS

COPY --from=rocky-micro-build /mnt/rootfs/ /

WORKDIR /srv/dot-digital-identity-service

# Set the locale
ENV LANG=en_US.UTF-8
ENV LANGUAGE=en_US:en
ENV LC_ALL=en_US.UTF-8

# Add entrypoint script
COPY entrypoint.sh /usr/local/bin/
RUN chmod u+x /usr/local/bin/entrypoint.sh && \
    chown ${UID}:${GID} /usr/local/bin/entrypoint.sh

# Add libs
ARG INNOONNXRUNTIME_LIB
COPY ${INNOONNXRUNTIME_LIB} ./libs/

ARG IFACE_LIB
COPY ${IFACE_LIB} ./libs/

COPY solvers/* ./libs/solvers/

ARG SAM_OCR_LIB
COPY ${SAM_OCR_LIB} ./libs/

# Config libs
RUN ldconfig "$(realpath libs)"

# Add application
ARG JAR_FILE
COPY ${JAR_FILE} ./app.jar

# Set timestamps and run user permissions
RUN touch ./*.jar && \
    chown --recursive ${UID}:${GID} .

# change user to the created one (dot-dis)
USER ${UID}

# configure environment
ENV CONFIG_DIR=/srv/dot-digital-identity-service/config
ENV LOGS_DIR=/srv/dot-digital-identity-service/logs
ARG JAVA_OPTS
ENV JAVA_OPTS="${JAVA_OPTS}"

# Configure jemalloc
ENV LD_PRELOAD="/usr/lib64/libjemalloc.so.2"

EXPOSE 8080

CMD ["entrypoint.sh"]
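
The Dockerfile above is fully parameterized by build arguments (JAR_FILE, IFACE_LIB, SAM_OCR_LIB, INNOONNXRUNTIME_LIB, and optionally ADDITIONAL_LIBS). A build invocation might look like the following sketch; the file paths match the distribution package layout described earlier, but the image tag and context layout are illustrative assumptions:

```shell
# Build from a prepared context directory. Note that the COPY instructions
# also expect entrypoint.sh and a solvers/ directory at the context root,
# so assemble those into the context before building.
docker build \
  --file docker/Dockerfile \
  --build-arg JAR_FILE=dot-digital-identity-service.jar \
  --build-arg IFACE_LIB=libs/libiface.so \
  --build-arg SAM_OCR_LIB=libs/libsam.so \
  --build-arg INNOONNXRUNTIME_LIB=libs/libinnoonnxruntime.so \
  --tag dot-digital-identity-service:1.34.0 \
  .
```

The UID/GID build arguments default to 1000 and only need to be overridden when the service user must match a specific host user.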

entrypoint.sh script example

#!/bin/sh
set -eux

# exec replaces the shell with the JVM, so the Java process runs as PID 1
# and receives container stop signals directly
exec java $JAVA_OPTS \
  -Dspring.config.additional-location=file:$CONFIG_DIR/application.yml \
  -Dlogging.config=file:$CONFIG_DIR/logback-spring.xml \
  -jar app.jar
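
The entrypoint reads application.yml and logback-spring.xml from CONFIG_DIR, so the config folder from the distribution package must be mounted at that path. A minimal run sketch, assuming an image tagged dot-digital-identity-service:1.34.0 and the current directory holding the package's config folder (the container name and JVM options are illustrative):

```shell
# Start the service, mounting the configuration read-only and publishing
# the port exposed by the Dockerfile. JAVA_OPTS is passed through to the JVM.
docker run --detach \
  --name dot-dis \
  --publish 8080:8080 \
  --volume "$(pwd)/config:/srv/dot-digital-identity-service/config:ro" \
  --env JAVA_OPTS="-Xmx4g" \
  dot-digital-identity-service:1.34.0
```

Once the container is up, the api/v1/health endpoint mentioned in the changelog can be used to verify that the service (and, since 1.13.0, the tracking service) is healthy.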