DOT Digital Identity Service

v1.42.0

Overview

Digital Identity Service enables two main features:

  • Customer onboarding

  • Face biometrics

Customer onboarding is the basic use case of DOT. The customer provides a selfie and photos of an identity card, and a liveness check must pass. The provided data can be checked for inconsistencies, and based on the results, the client decides whether the customer will be onboarded.

The biometric processing of face images allows the client to support use cases that require face biometrics.

API Reference

The Digital Identity Service API reference is published here.

Distribution package contents

The distribution package can be found in our older CRM portal or in the new Customer portal. Your sales representative will provide credentials for the CRM login. The package contains these files:

  • config – The configuration folder

    • application.yml – The application configuration file, see Application configuration

    • logback-spring.xml – The logging configuration file

  • doc – The documentation folder

    • Innovatrics_DOT_Digital_Identity_Service_1.42.0_Technical_Documentation.html – Technical documentation

    • Innovatrics_DOT_Digital_Identity_Service_1.42.0_Technical_Documentation.pdf – Technical documentation

    • swagger.json – Swagger API file

    • EULA.txt – The license agreement

  • docker – The Docker folder

    • Dockerfile – The text document that contains all the commands to assemble a Docker image, see Docker

    • root-user.Dockerfile – The alternative Dockerfile to assemble a Docker image with Digital Identity Service running as a root user

    • entrypoint.sh – The entry point script

  • libs – The libraries folder

    • libsam.so – The Innovatrics OCR library

    • libiface.so – The Innovatrics IFace library

    • libinnoonnxruntime.so – The Innovatrics runtime library

    • solvers – The Innovatrics IFace library solvers

  • dot-digital-identity-service.jar – The executable JAR file, see How to run

  • Innovatrics_DOT_Digital_Identity_Service_1.42.0_postman_collection.json – Postman collection

Installation

System requirements

The following requirements are minimal (some disk space is needed for the application itself, logging, and configuration). For detailed results on varying configurations, please refer to the performance measurements page.
  • Rocky Linux 9.x (64-bit)

  • A CPU supporting the AVX2 instruction set

  • Unless agreed otherwise, the machine hosting the Digital Identity Service needs to be able to access the URL innovatrics.count.ly.

Minimal system requirements

  • CPU: 2 vCPU

  • RAM: 7 GB

  • DISK: 4 GB

Minimal Redis requirements

Version: 7.x.x

We recommend two nodes with the following configuration:

  • CPU: 2 vCPU

  • RAM: 3 GB

Minimal Memcached requirements

We recommend two nodes with the following configuration:

  • CPU: 2 vCPU

  • RAM: 3 GB

Steps

  1. Install the following packages:

    • Eclipse Temurin 21 Runtime Environment (Headless JRE) (temurin-21-jre)

    • userspace USB programming library (libusb; libusbx)

    • GCC OpenMP (GOMP) support library (libgomp)

    • Locales (glibc-langpack-en)

    • JEmalloc (jemalloc) - recommended for production environments

    yum install -y temurin-21-jre libusb libusbx libgomp glibc-langpack-en jemalloc
  2. Set the locale:

    sed -i '/en_US.UTF-8/s/^# //g' /etc/locale.gen && locale-gen
    export LANG=en_US.UTF-8; export LANGUAGE=en_US:en; export LC_ALL=en_US.UTF-8
  3. Extract the Digital Identity Service distribution package to any folder.

  4. Link the application libraries:

    ldconfig /local/path/to/current/dir/libs
    Replace the path /local/path/to/current/dir in the command with your current path. Keep /libs as a suffix in the path.

Activate the DOT license

For Digital Identity Service version 1.20.0 and above

Starting from Digital Identity Service version 1.20.0, a new method for retrieving licenses is available. To obtain a license, please contact your sales representative or email sales@innovatrics.com to gain access to the customer portal where the license can be obtained. Once you have received the license, deploy it as described below in the Deploying the obtained license section.

For the Digital Identity Service version 1.19.0 and below

When using a license generated via the customer portal with versions 1.19.0 and earlier of the Digital Identity Service, the application will start up but consistently return HTTP 401 Unauthorized. Please contact your sales representative or sales@innovatrics.com to obtain a license for your specific version. Once you receive the license, deploy it as described below in the Deploying the obtained license section.

Deploying the obtained license

Copy your license file iengine.lic for Innovatrics IFace SDK 6.2.2 into {DOT_DIGITAL_IDENTITY_SERVICE_DIR}/license/

How to run

As Digital Identity Service is a stand-alone Spring Boot application with an embedded servlet container, there is no need for deployment on a pre-installed web server.

Digital Identity Service needs a running Redis or Memcached instance, which must be configured via the externalized configuration first.

Digital Identity Service can be run from the application folder:

java -Dspring.config.additional-location=file:config/application.yml -Dlogging.config=file:config/logback-spring.xml -DLOGS_DIR=logs -Djna.library.path=libs/ -jar dot-digital-identity-service.jar

The embedded Tomcat web server will start and the application will listen on port 8080 (or another configured port).
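
To verify that the application has started, you can, for example, call the health endpoint described in the Monitoring section (assuming the default port on a local setup):

curl http://localhost:8080/api/v1/health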

Docker

To build a Docker image, use the Dockerfile and the entrypoint.sh script. A Dockerfile example and an entrypoint.sh script example can also be found in the Appendix.

Multi-arch Docker image

Since version 1.40.0, the Dockerfile(s) for Digital Identity Service have been modified to support building multi-architecture Docker images. The same Dockerfile can be used to build images for both linux/amd64 and linux/arm64 (also known as AArch64) platforms.

The linux/arm64 binaries included in the distribution package (packaged with the -arm64 suffix) are not meant for production use at this moment (i.e. they are unsupported) and are provided for testing or development purposes only; they are not optimized for performance.

For production use, please use the x86 (linux/amd64 - packaged with the -amd64 suffix) binaries.

The ARM64 Docker container is also supported natively on macOS computers with Apple Silicon chips (without using QEMU or Rosetta 2 emulation).

Building the Docker image

Due to the nature of multi-arch Docker images, the directory structure may differ (linux/amd64 or linux/arm64).

The Docker image should be built as follows:

docker build \
    --build-arg="JAR_FILE=dot-digital-identity-service.jar" \
    --build-arg="SAM_OCR_LIB=libsam.so" \
    --build-arg="IFACE_LIB=libiface.so" \
    --build-arg="INNOONNXRUNTIME_LIB=libinnoonnxruntime.so" \
    --build-arg="ADDITIONAL_LIBS=" \
    -t dot-digital-identity-service \
    .

The docker build command takes the --platform flag into account and builds the image for the specified platform. The --platform flag is optional and can be omitted if you want to build the image for the platform you are currently using.
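
For example, to explicitly build an image for linux/amd64 even on a host with a different architecture (a sketch assuming your Docker installation supports multi-platform builds, e.g. via BuildKit):

docker build \
    --platform linux/amd64 \
    --build-arg="JAR_FILE=dot-digital-identity-service.jar" \
    --build-arg="SAM_OCR_LIB=libsam.so" \
    --build-arg="IFACE_LIB=libiface.so" \
    --build-arg="INNOONNXRUNTIME_LIB=libinnoonnxruntime.so" \
    --build-arg="ADDITIONAL_LIBS=" \
    -t dot-digital-identity-service \
    .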

In the ADDITIONAL_LIBS build argument, you can set space-separated names of additional Linux packages that should be included in the Docker image. For instance, to include the curl and wget packages, set ADDITIONAL_LIBS like this:

    --build-arg="ADDITIONAL_LIBS=curl wget" \

Digital Identity Service needs a running Redis or Memcached instance, which must be configured via the externalized configuration first.

Run the container according to the instructions below:

docker run \
    -v /local/path/to/license/dir/:/srv/dot-digital-identity-service/license \
    -v /local/path/to/config/dir/:/srv/dot-digital-identity-service/config \
    -v /local/path/to/logs/dir/:/srv/dot-digital-identity-service/logs \
    -p 8080:8080 \
    dot-digital-identity-service
Replace the path /local/path/to/license/dir/ in the command with your local path to the license directory.
Replace the path /local/path/to/config/dir/ in the command with your local path to the config directory (from the distribution package).
Important Replace the path /local/path/to/logs/dir/ in the command with your local path to the logs directory (you need to create the directory, mounted to a persistent drive). The volume mount into the Docker container is mandatory, otherwise the application will not start successfully.
Important The Digital Identity Service running inside the container built from Dockerfile runs under the dot-dis user, not as the root user. This may cause issues with files and directories mounted from outside the Docker container (e.g. the logs directory). To overcome this issue, ensure that the UID (User ID) of the user on the host machine who owns the file or directory matches the UID of the dot-dis user, which is 1000. Alternatively, you can build the Docker container using the root-user.Dockerfile, which runs Digital Identity Service under the root user and does not have this limitation, but is less secure.
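
For example, one way to align the ownership of the mounted logs directory with the dot-dis user’s UID on the host is (a sketch; adjust the path to your environment):

sudo chown -R 1000 /local/path/to/logs/dir/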

Rocky Linux as a base in the Digital Identity Service version 1.32.0 and above

From version 1.32.0, the Digital Identity Service uses a Rocky Linux 9 distribution as the base image instead of Ubuntu 22.04. As a result of this change, the Docker image now contains only the mandatory Linux packages. Packages that are commonly preinstalled, such as the package manager, are no longer included. This change was implemented mainly to minimise the number of external binaries, resulting in fewer security patches being needed.

Docker Compose

The project distribution bundle also contains a Docker Compose file that can be used to run the application in a root Docker container along with a Redis instance.

The Docker Compose file is located in the root of the distribution package and is intended for development and testing purposes only.

Before launching the Docker Compose file, you must ensure that the license and logs directories are present in the distribution package directory (the config directory is already present). These can be created as follows:

mkdir -p license
mkdir -p logs

Your license file (iengine.lic) must be placed in the license directory.

The Docker Compose file can then be run as follows:

docker-compose up -d

The Docker Compose file also exposes the application on port 8080.

The Docker Compose file inherits the system architecture from the host machine (linux/amd64 or linux/arm64). If you want to run the application on a different architecture from your own, you must modify the docker-compose.yml and append the platform key to the digital-identity-service section.

Logging

Digital Identity Service logs to the console and also writes a log file (dot-digital-identity-service.log). The log file is located in the directory defined by the LOGS_DIR system property. Log files rotate when they reach 5 MB in size; the maximum history is set by default to 7 days or a total log size of 1 GB.

As this is a Spring Boot application, debug logging can be turned on by setting the logging.level.root property to DEBUG in the application.yml file.
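
For example, in config/application.yml:

logging:
  level:
    root: DEBUG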

API Transaction Counter Log

Separate log files following the filename pattern dot-digital-identity-service-transaction-counter.log.%d{yyyy-MM-dd}.%i.gz are located in the directory defined by the LOGS_DIR system property. The %d{yyyy-MM-dd} template represents the date and %i represents the index of the log window within the day, starting at 0. These log files contain information about the counts of API calls (transactions). The same rolling policy is applied as for the application log, except that the maximum history of these log files is 455 days.

For proper transaction billing, please be sure to send all transaction logs every time.

Docker: Persisting log files in local filesystem

When Digital Identity Service is run as a Docker container, log files may be accessed even after the container no longer exists. This can be achieved by using Docker volumes. To find out how to run a container, see Docker.

Monitoring

Information such as build or license info can be accessed at /api/v1/info. Information about available endpoints can be viewed under /swagger-ui/index.html.

The health endpoint accessible under /api/v1/health provides information about the application health and the Innovatrics Tracking Service status. This feature can be used by an external tool such as Spring Boot Admin, etc.

The application also supports exposing metrics in the standardized Prometheus format. These are accessible under /api/v1/prometheus. This endpoint can be exposed in your configuration:

management:
  endpoints:
    web:
      exposure:
        include: health, info, prometheus

For more information, see Spring Boot documentation, sections Endpoints and Metrics. Spring Boot Actuator Documentation also provides info about other monitoring endpoints that can be enabled.

Monitor the metrics via Prometheus

To scrape Digital Identity Service metrics, the Prometheus configuration must be set as follows:

prometheus.yml
scrape_configs:
  - job_name: 'digital_identity_service'
    scrape_interval: 2h # Set this up to your preferred scraping interval
    metrics_path: '/api/v1/prometheus'
    scheme: https # Define the protocol scheme used for requests
    static_configs:
      - targets: ['dis.hostname:port'] # Configure the hostname and port of your application

If you do not wish to use static configuration for the target application, consider dynamically discovering the target via supported service-discovery mechanisms. For more information on how to set up service discovery configuration refer to Prometheus documentation.

If you wish to configure Prometheus alerts, you must define a prometheus.rules.yml file and reference it in the Prometheus configuration:

prometheus.yml
rule_files:
  - prometheus.rules.yml # Reference to Prometheus rules file if you want to use alerts

Prometheus can be configured to regularly send alert state information to Alertmanager, which handles dispatching of notifications to Slack, email, or other specified destinations.

Monitor license expiration

The Digital Identity Service exposes a dis_license_expiration_remaining_days metric that tracks the remaining days until the expiration of the DOT license. You can monitor the expiration via Prometheus and set up alerting for timely notifications.

To access the current value of the metric, use Prometheus' expression browser (http://localhost:9090/) under the Graph tab with the query dis_license_expiration_remaining_days.

Prometheus browser graph
Figure 1. DOT license expiration metric in Prometheus browser

To configure alerting for DOT license expiration, define the Prometheus rule configuration as follows:

prometheus.rules.yml
groups:
  - name: digital-identity-service
    rules:
      - alert: DISLicenseIsAboutToExpire
        expr: dis_license_expiration_remaining_days < 30
        annotations:
          summary: "DOT License for Digital Identity Service will expire in {{ $value }} days."

The configuration instructs Prometheus to trigger an alert when there are fewer than 30 days remaining before the license expiration.

Access the alert under the Alerts tab in the Prometheus browser. Currently, the alert is inactive since there are more than 30 days remaining until expiration. However, it will transition to the "Firing" state when the license is about to expire.

Prometheus browser alerts
Figure 2. DOT license expiration alert in Prometheus browser

Tracing

Micrometer Tracing with the OpenTelemetry API is used to collect traces. Data is exported via gRPC in the OTLP format to the configured collector (e.g. Jaeger), defined by the management.tracing.endpoint property (default: http://localhost:4317).

By default, OpenTelemetry tracing uses W3C format for context propagation. To enable tracing propagation using the B3 format, the management.tracing.propagation.type property can be set to b3.
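
For example, both properties mentioned above can be set in application.yml as follows (the collector hostname is illustrative):

management:
  tracing:
    endpoint: http://jaeger-collector:4317
    propagation:
      type: b3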

Tracing is disabled by default. It can be enabled by the following property:

management:
  tracing:
    enabled: true

Collect traces via Jaeger

For quick local testing, you can utilize the Jaeger All-in-One image, which incorporates Jaeger UI as well. Otherwise, please consult the Jaeger documentation for guidance on configuring your preferred setup.
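
For example, a local Jaeger All-in-One instance can be started as follows (a sketch; the image tag and OTLP settings may differ between Jaeger versions, so consult the Jaeger documentation):

docker run --rm \
    -e COLLECTOR_OTLP_ENABLED=true \
    -p 4317:4317 \
    -p 16686:16686 \
    jaegertracing/all-in-one:latest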

If you are running the Jaeger collector locally, it accepts the OTLP format via gRPC at http://localhost:4317. This endpoint corresponds to the default value of the management.tracing.endpoint property in the Digital Identity Service configuration.

To view exported traces, access the Jaeger UI at http://localhost:16686, then select the dot-digital-identity-service from the list of services. The UI displays all traces and spans. If you want to filter out traces generated by HTTP endpoints, look for those starting with "http".

Jaeger UI traces
Figure 3. Traces in Jaeger UI

By clicking on a trace, you can access all associated nested spans, allowing you to monitor the duration of each relevant call.

Jaeger UI spans
Figure 4. Trace overview in Jaeger UI

Architecture

Digital Identity Service is a semi-stateful service. It temporarily retains intermediate results and images in an external cache. This enables the exposed API to flexibly use only the methods needed for a specific use case, without repeating expensive operations. Another advantage is that the user can provide data when available, without the need to cache on the user’s side.

The Digital Identity Service can be horizontally scaled. Multiple instances of the service can share the same cache or a cache cluster.

Architecture diagram
Figure 5. Horizontal scaling of Digital Identity Service with a cache cluster

The services of Digital Identity Service are better suited to short-lived processes. The cache can nevertheless be configured to support various use cases and processes.

Cache

The Digital Identity Service currently supports Redis and Memcached as cache options. For development and test purposes, embedded EhCache is also available. However, please note that this option is not suitable for production or an environment with multiple Digital Identity Service instances. The table below describes configuration options for switching between these options:

Table 1. Cache type configuration properties

Property

Description

innovatrics.dot.dis.persistence

  • type

Type of cache implementation to use.

Possible values: redis, memcached or ehcache
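
For example, to switch the cache implementation to Redis, set the following in application.yml:

innovatrics:
  dot:
    dis:
      persistence:
        type: redis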

Various tools exist to monitor the performance of your Redis or Memcached server, and we recommend using one to ensure the cache is performing as expected.

Common cache record expiration configuration

Every cache option supports setting the expiration time for both customer and face records. The expiration time can be configured independently for each of these resources. The configuration is described in the table below:

Table 2. Cache record expiration configuration properties

Property

Description

innovatrics.dot.dis.persistence.cache

  • customer-expiration

The time in seconds to persist all data created and used by Onboarding API.

Example value: 1800

  • face-expiration

The time in seconds to persist face records created and used by Face API.
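
For example, to expire customer data after 30 minutes and face records after 10 minutes (illustrative values):

innovatrics:
  dot:
    dis:
      persistence:
        cache:
          customer-expiration: 1800
          face-expiration: 600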

EhCache

A cache option intended for development and test purposes only. Each instance of Digital Identity Service runs its own embedded EhCache, which is not shared with other instances. This means that running multiple Digital Identity Service instances in cluster mode may lead to unexpected behavior.

Configuration

The maximum amount of memory that can be allocated by the embedded cache may be configured via configuration properties. Both Java heap and off-heap memory are supported. In general, heap memory is faster in terms of I/O operations, but comes with a performance cost due to Java garbage collection. In the scope of Digital Identity Service, this performance difference should be negligible.

If no off-heap-size property is set, the cache will rely solely on Java heap memory.

If the configured memory is exceeded, EhCache will remove records even before the configured TTL expires.

Table 3. EhCache configuration properties

Property

Description

innovatrics.dot.dis.persistence.ehcache.resource-pool

  • heap-size

Maximum number of records which can be allocated in Java heap memory.

Example value: 200

  • off-heap-size

Maximum size in MB which can be allocated in Java off-heap memory.

Example value: 800
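
For example, an embedded EhCache limited to 200 heap records and 800 MB of off-heap memory could be configured as follows:

innovatrics:
  dot:
    dis:
      persistence:
        type: ehcache
        ehcache:
          resource-pool:
            heap-size: 200
            off-heap-size: 800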

System requirements

Keep in mind that the embedded cache shares resources with Digital Identity Service, so to ensure smooth operation, it is crucial to allocate resources that cater to both the application itself and its embedded cache.

Redis

The Digital Identity Service also supports Redis as a cache option in various setups which depend on the configuration of your environment. The Lettuce client is used for communication with Redis.

An eager initialization has been configured, so the client will attempt to connect to the Redis server on startup. If the connection fails, the application will fail to start.

We require the Redis server to be of version 7.x.x. Using older versions or a higher major version may result in unexpected behavior.

The following Redis environment setups are supported:

  • Standalone

  • Master/Replica

  • Cluster

The individual setups and their configurations are described in the following sections. The application will also fail to start if one of the setups has been configured incorrectly or is incomplete.

The option to use SSL/TLS is also available; it is optional and can be configured via application properties.

For Redis authentication, use the username and password configured via application properties. If left empty, no authentication will be used.

The timeout for all Redis operations has been configured to 10 seconds. This can be overridden via application properties.

The table below describes configuration options common to all Redis setups:

Table 4. Redis common configuration properties

Property

Description

innovatrics.dot.dis.persistence.redis

  • key-prefix

A String prefix for grouping the key/values. This is useful when multiple applications share the same Redis instance.

This property is optional.

A : is appended automatically to the prefix if it is not empty.

Example value: innovatrics:dis

  • setup

Setup of your Redis environment.

Possible values: STANDALONE, MASTER_REPLICA, CLUSTER

  • use-ssl

Indicates whether to use SSL/TLS for communication with Redis.

This property is optional.

Possible values: true or false (default)

  • credentials.username

The username for authentication to your Redis environment.

This property is optional.

Example value: user

  • credentials.password

The password for authentication to your Redis environment.

This property is optional.

Example value: pass

  • timeout

The timeout for all Redis operations in milliseconds.

This property is optional.

Example value: 10000 (default)

Configuration

Standalone

The standalone mode is the simplest mode of operation. It is suitable for development and testing environments.

The following configuration properties are available:

Table 5. Redis Standalone cache configuration properties

Property

Description

innovatrics.dot.dis.persistence.redis

  • hostname

The hostname of the Redis server.

Example: localhost

  • port

The port of the Redis server.

Example: 6379
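
A minimal standalone configuration in application.yml, combining the common and standalone properties above, might look like this (the hostname and credentials are illustrative):

innovatrics:
  dot:
    dis:
      persistence:
        type: redis
        redis:
          setup: STANDALONE
          hostname: localhost
          port: 6379
          credentials:
            username: user
            password: pass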

Master/Replica

The master/replica mode is suitable for production environments. The client is configured in a way where the reads are set to be preferred on the replicas.

The following configuration properties are available:

Table 6. Redis Master/Replica cache configuration properties

Property

Description

innovatrics.dot.dis.persistence.redis

  • hostname

The hostname of the Redis server.

Example: localhost

  • port

The port of the Redis server.

Example: 6379

  • master-replica.info-command-used

Indicates whether your environment uses the INFO command to retrieve the master/replica information. This determines which configuration is used.

Possible values: true (default) or false

Cluster

The cluster mode is suitable for high-performance production environments with the need for automatic failover.

The application will automatically discover the cluster topology and will use it for communication.

In the case of a primary node failure, the application will automatically failover to a new primary node and will continue to operate normally. The application will attempt to reconnect to the cluster in case of a failure.

The topology refresh interval has been configured to 60 seconds. This can be overridden via application properties. If the topology refresh interval is not set, the topology will not be refreshed.

The following configuration properties are available:

Table 7. Redis Cluster cache configuration properties

Property

Description

innovatrics.dot.dis.persistence.redis.cluster

  • nodes

The hostname of the Redis cluster.

Individual nodes are delimited by a comma; however, we recommend providing the hostname of the cluster entry point (e.g. the AWS ElastiCache cluster configuration endpoint) rather than individual nodes.

Example: clustercfg.your-redis-instance:6379 (recommended) or node1.your-redis-instance:6379,node2.your-redis-instance:6379

  • topology-refresh-interval

Topology refresh interval in milliseconds. If unset, the topology will not be refreshed.

This property is optional.

Example: 60000
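
A cluster configuration sketch based on the properties above (the endpoint is illustrative):

innovatrics:
  dot:
    dis:
      persistence:
        type: redis
        redis:
          setup: CLUSTER
          use-ssl: true
          cluster:
            nodes: clustercfg.your-redis-instance:6379
            topology-refresh-interval: 60000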

Memcached (deprecated)

Configuration

The cache is configurable via the externalized configuration.

It can be configured either with AWS ElastiCache, or with a list of hosted Memcached servers.

Efficient memory usage

For optimal performance, the expiration of records must be configured according to the nature of the implemented process:

  • A short expiration time results in lower memory usage and higher throughput for short requests.

  • A long expiration time enables longer processing of cached records, but increases memory requirements.

Memory consumption for longer processes can be lowered by deleting records once they are no longer needed. The API provides deletion methods for each resource.

The expiration of records can be configured independently for the onboarding API and for face operations.

Table 8. Memcached cache configuration properties

Property

Description

innovatrics.dot.dis.persistence.memcached

  • aws-elastic-cache-config-endpoint

The host and port of the AWS ElastiCache configuration endpoint.

Format: host:port

  • servers

The list of host and port pairs of the Memcached instances. Only used if the AWS ElastiCache configuration endpoint is not configured.

Format: host1:port1 host2:port2

  • read-timeout

The memcached read timeout in milliseconds.

Example value: 2000

  • write-timeout

The memcached write timeout in milliseconds.

Example value: 2000

  • operation-timeout

The memcached operation timeout in milliseconds.

Example value: 5000
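
A configuration sketch for a list of hosted Memcached servers based on the properties above (the hostnames and ports are illustrative):

innovatrics:
  dot:
    dis:
      persistence:
        type: memcached
        memcached:
          servers: host1:11211 host2:11211
          read-timeout: 2000
          write-timeout: 2000
          operation-timeout: 5000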

Authentication and authorization

The Digital Identity Service API is secured with API Key authentication; hence, an HTTP Authorization header needs to be sent with every request.

The header must contain a Bearer token, which is a UTF-8 Base64 encoded string that consists of two parts, delimited by a colon:

Table 9. Token description

Token part

Description

  • API Key

A unique identifier that is received with your license

  • API Secret

A unique string that is received with your license

The server will return a HTTP 401 Unauthorized response for every request that either does not contain the Authorization header, or if the header contents are invalid (e.g.: malformed Base64 or invalid API Key or Secret).

Some endpoints are not secured by design (such as /metrics, /health or /info) and do not require any authentication.

Authorization header creation

For the Digital Identity Service version 1.20.0 and above

Credentials for the Digital Identity Service can be retrieved from the customer portal. The Api Key & Secret contains 3 values, as shown in the figure below:

Api Key & Secret
Figure 6. Api Key & Secret pop-up window

Each request must contain the Authorization header which consists of the Bearer keyword and the Bearer Token value, e.g.:

Bearer aW5rXzcwYTJjOTg4Omluc19XRjBhVzl1WDNScGJJQ0l3TURJeklERXhPV1ZCVDBpZlE9PQ==
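
For example, the header can be passed with curl as follows (the hostname and the token value are illustrative; the endpoint path assumes the /api/v1 prefix used elsewhere in this document):

curl -H "Authorization: Bearer aW5rXzcwYTJjOTg4Omluc19XRjBhVzl1WDNScGJJQ0l3TURJeklERXhPV1ZCVDBpZlE9PQ==" \
    https://dis.example.com/api/v1/metadata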

For the Digital Identity Service version 1.19.0 and below

In the Digital Identity Service version 1.19.0 and below, the process for creating an API token differs. It requires taking both the key and secret from the license. Below is an example snippet illustrating the structure of the API key and secret within the license file:

{
  "contract": {
    "dot": {
      "authentication": {
        "apiKeyAndSecrets": [
          {
            "key": "some-api-key",
            "secret": "mb7DZQ6JwesRHkWPbjKVDgGHXxrAHFd6"
          }
        ]
      }
    },
    ...
  },
  ...
}

You will need to encode the key and secret parts into a valid UTF-8 Base64 string (those two parts, delimited by a colon), e.g.:

some-api-key:mb7DZQ6JwesRHkWPbjKVDgGHXxrAHFd6

The encoding can be performed by the user via the bash command below:

echo -n 'some-api-key:mb7DZQ6JwesRHkWPbjKVDgGHXxrAHFd6' | base64 -w 0

Once the aforementioned token has been encoded into Base64, each request must contain the Authorization header which consists of the Bearer keyword and encoded key and secret:

Bearer c29tZS1hcGkta2V5Om1iN0RaUTZKd2VzUkhrV1BiaktWRGdHSFh4ckFIRmQ2

Data isolation

The resources created with one API key are accessible only with that particular API key. This is to prevent any unauthorized access by isolating the created resources in the cache.

Image Data Downloader

The Digital Identity Service API supports two ways to provide an image in its requests:

  • base64 encoded data

  • url to the remote image

Images provided via a URL are downloaded by the Image Data Downloader.

The Image Data Downloader is enabled by default and can be disabled via the configuration to prevent downloading images from remote URLs. The data downloader can also be configured to allow or block only specific URLs to be downloaded from. See the Server-side request forgery (SSRF) protection section for more details.

The connection timeout and the read timeout for the Image Data Downloader are configurable via properties.

Table 10. Image Data Downloader configuration properties

Property

Description

innovatrics.dot.dis.data-downloader

  • enabled

Indicates whether the Image Data Downloader is enabled. If false, submitting data via URLs is not allowed.

Default value: true

  • connection-timeout

The connection timeout for image data downloader in milliseconds.

Default value: 2000

  • read-timeout

The read timeout for image data downloader in milliseconds.

Default value: 30000
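
For example, the properties above can be set in application.yml as follows (the values are illustrative):

innovatrics:
  dot:
    dis:
      data-downloader:
        enabled: true
        connection-timeout: 2000
        read-timeout: 10000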

Server-side request forgery (SSRF) protection - Optional

If needed, the Image Data Downloader can be protected against SSRF attacks.

The URLs can be configured either as absolute URLs or as regular expressions. The regular expressions can be enclosed in .* and can contain any number of characters.

If the whitelist property is configured, only the URLs matching the configured URLs will be allowed (any other will be blocked).

If the blacklist property is configured, only the URLs matching the configured URLs will be blocked (any other will be allowed).

Configuring both properties at the same time is not allowed, and configuring them with the wildcards * or .* alone is also not allowed. The following configuration properties are available:

Table 11. Image Data Downloader SSRF configuration properties

Property

Description

innovatrics.dot.dis.data-downloader.ssrf-protection

  • whitelist

The list of allowed hosts for the Image Data Downloader.

This property is optional.

Example:

whitelist:
  - 'https://example.com'
  - '.*example.org.*'
  • blacklist

The list of disallowed hosts for the Image Data Downloader.

This property is optional.

Example:

blacklist:
  - 'https://example.com'
  - '.*example.org.*'

Logging Transactions via the Innovatrics Tracking Service

For billing purposes, all transactions performed by any running instance of the Digital Identity Service must be reported.

The Digital Identity Service is configured to periodically publish metadata about executed transactions to the Innovatrics tracking service.

No sensitive details are stored; only information about transaction counts, the outcome of operations, and the quality of inputs. The collected statistics may subsequently be used to improve system performance in your environment.

All data published to the Innovatrics tracking service is also logged to the dot-digital-identity-service-countly-event.log file. If it is not possible to configure the deployment to communicate with the Innovatrics tracking service, transactions can be reported by sending this file or by uploading it to the Customer Portal.

Use the api/v1/health endpoint to verify the successful connectivity of the Digital Identity Service with the Innovatrics Tracking Service. Upon success, the expected JSON response should include components.countly.status set to UP:

{
    "status": "UP",
    "components": {
        "countly": {
            "status": "UP"
        }
    }
}

For additional details on how Digital Identity Service verifies transactions, please refer to the Transaction Tracking and Charging documentation.

The reporting URL is configured as innovatrics.count.ly. This cannot be changed but can be used for forwarding via your proxy server/egress instance.

Proxy server configuration

If your deployment is behind a proxy server, the proxy server needs to be configured to allow communication with the Innovatrics tracking service. This can be done by setting the following properties in the application.yml file:

Table 12. Server proxy configuration properties

Property

Description

innovatrics.dot.dis.proxy

  • host

The hostname of your proxy server.

Example: squid.example.com

  • port

Available port of your proxy server.

Example: 8088

If either of the aforementioned properties is not set, the proxy server will not be used.
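
For example (the hostname and port are illustrative):

innovatrics:
  dot:
    dis:
      proxy:
        host: squid.example.com
        port: 8088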

Multiple options for image uploads

The Digital Identity Service API supports multiple ways to provide an image during the onboarding process:

  1. a direct upload of the image data as a base64 encoded string

  2. providing a URL to the remote image

  3. an octet-stream upload of the image data produced by the Innovatrics Web components or Mobile SDKs

    • this option provides more security than the base64 encoded string upload, enabling detection of any tampering of the image data or potential spoof

The requests must contain only one of the above options as the image source. Combining multiple options in one request is not allowed.

Examples of these options are included in the Postman collection.

Customer Onboarding

The Customer Onboarding API enables a fully digital process to remotely verify a person's identity and enroll them as a new customer.

During the onboarding, a person registers with a company or government entity. They provide their identity document and one or more selfies to prove their identity.

With a digital onboarding process powered by Digital Identity Service, a company can easily and securely convert a person into a trusted customer.

Standard Onboarding Flow

The recommended customer onboarding process looks like this:

To use any part of the Customer Onboarding API, the create customer operation must be called first. The customer will be persisted for a configurable amount of time (see the config section). Once created, additional actions can be performed while the record is persisted.

The data-gathering steps (2-4) can be performed in any order. Extracted data can be deleted or replaced by repeating the same action with different inputs.

The results of the get customer request (5) or inspection steps (6-7) depend on data previously gathered.

Once the onboarding has been completed, the customer can be deleted to reduce required memory. Deleting a customer will remove any related data, such as selfies and document pages. Otherwise, the data will expire after a configured amount of time.

Actions for onboarding a customer have to be performed sequentially; parallel processing of the same customer is not allowed. If there are concurrent requests on any resource belonging to the same customer resource, only one such request will succeed and the rest will end with an error (409 Conflict). For example, the front and back page of the document cannot be uploaded in parallel.

Create Customer

To create a customer, a POST /customers request must be made.

The response will contain a link to the newly created customer resource, as well as the ID of the customer.
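
A request sketch using curl (the hostname, the token value, and the /api/v1 path prefix are illustrative; see the API reference for the authoritative definition):

curl -X POST \
    -H "Authorization: Bearer <your-base64-encoded-api-key-and-secret>" \
    https://dis.example.com/api/v1/customers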

Add Selfie

To provide a selfie for a customer, a PUT /selfie request must be made on the customer resource.

If a liveness selfie or liveness image data were already uploaded via Create Liveness Record request, the reference to the liveness selfie can be specified in the payload instead.

A successful response will contain the position of the detected face in the input image, the confidence, and a link to the newly-created customer selfie resource. The response may also contain a list of warnings. An unsuccessful response will contain an error code.

The face position is represented by the face rectangle.

The detection confidence contains a score from the interval <0.0,1.0>. Values near 1.0 indicate high confidence that a human face was detected.

Each customer can have at most one selfie. An existing selfie can be replaced by adding a new selfie.
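
A request sketch using curl with a base64-encoded image (the payload field names are illustrative and may differ from the actual API schema; consult the API reference or the Postman collection):

curl -X PUT \
    -H "Authorization: Bearer <token>" \
    -H "Content-Type: application/json" \
    -d '{"image": {"data": "<base64-encoded-selfie>"}}' \
    https://dis.example.com/api/v1/customers/<customer-id>/selfie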

Once the face has been detected, you can:

Face Detection Configuration

Face detection on a customer’s selfie is configurable. The speed, accuracy, and other aspects can be adjusted according to needs and available resources. Find more details about image requirements, face detection speed-accuracy modes, and face size ratio in the Face API section of this document.

Liveness Check

Liveness check allows verification of interaction with a live, physically present person. It can distinguish live faces from photos, videos, 2D/3D masks, and other attacks.

The Digital Identity Service provides various approaches to verify liveness:

The liveness check generally comprises the following 3 steps:

Create Liveness Check

To create a liveness check, a PUT /liveness request must be made on the customer resource.

The response will contain a link to the newly-created customer’s liveness resource.

Add Selfie to Liveness Check

This alternative of creating the liveness check is done by using one or multiple images (selfies) in standard JPG or PNG format.

To add a selfie, a POST /liveness/selfies request must be made on the customer’s liveness resource.

If a selfie that was already added as a customer’s selfie is required for use, its reference can be specified in the payload instead of uploading it again.

For each selfie added to the liveness check, the assertion must be specified. The provided assertion will determine if and how the selfie will be used for the selected liveness method evaluation in the next step.

The successful response will be empty.

If the quality of the selfie does not fully match the requirements for evaluation, the response will contain a warning. If this happens, this selfie can still be used to evaluate the liveness, but the result is not guaranteed to be reliable. If you do not wish to proceed with this selfie, delete the liveness resource and start again by creating a new one.

If the selfie was not accepted, the response will contain an error code.

Multiple selfies can be added to one liveness check.

The Digital Identity Service will try to detect a face on every selfie provided. The configuration of face detection on selfies is explained in this chapter.

Providing liveness selfies using this option is only supported for the Passive Liveness Check, Eye-gaze Liveness Check and Smile Liveness Check.

Create liveness record

This alternative of creating the liveness check is done by using the binary file produced by the Innovatrics web component or mobile SDKs only.

To create a liveness record, a POST /liveness/records request must be made on the customer’s liveness resource.

A successful response will contain the position of the detected face on the liveness selfie, represented by the face rectangle. The response also contains the detection confidence, a score from the interval <0.0,1.0>, where values near 1.0 indicate high confidence that a human face was detected.

A successful response also contains a link to the newly-created liveness record selfie resource.

If you do not wish to proceed with this liveness record, create a new one and the old one will be replaced automatically.

An unsuccessful response will contain an error code and the liveness record will not be created.

Once the liveness record has been successfully created, you can:

  • Access the liveness selfie via a GET request on the provided liveness record selfie link

  • Use the liveness selfie as a customer selfie via the Add Selfie request referencing the liveness record selfie link from the response

Evaluate Liveness

To evaluate liveness, a POST /liveness/evaluation request must be made on the customer’s liveness resource.

The type of liveness check to be evaluated must be specified.

A successful response will contain a score from the interval <0.0,1.0>. Values towards 1.0 indicate higher confidence that the associated selfies contained a live person. The score has to be compared to a threshold to determine the liveness. See the documentation page dedicated to the active and passive liveness for recommended thresholds.

An unsuccessful response will contain an error code.

The evaluation can be repeated for different types of liveness on the same liveness resource. Only selfies with a relevant assertion will be used for a given type of liveness.

Passive Liveness Check

The passive liveness check is a process of determining whether the presented face is a real person without requiring the user to perform any additional actions.

It is recommended to perform this check on the customer’s selfie. A user can add the existing customer’s selfie to the liveness check by providing a reference to it.

To add a selfie for a passive liveness evaluation, the assertion must be set to NONE. Only selfies with this assertion will be evaluated for passive liveness.

To evaluate passive liveness, the type of liveness needs to be specified as PASSIVE_LIVENESS.

Passive liveness can be evaluated once at least one selfie with the correct assertion has been added. If there are multiple selfies with the corresponding assertion, the returned score will be the average of all of them.

There are two modes of passive liveness evaluation (UNIVERSAL and STANDARD), which can be configured via a property. The default is UNIVERSAL.
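
A request sketch for the evaluation step using curl (the hostname, path prefix, and payload field name are illustrative; consult the API reference for the exact schema):

curl -X POST \
    -H "Authorization: Bearer <token>" \
    -H "Content-Type: application/json" \
    -d '{"type": "PASSIVE_LIVENESS"}' \
    https://dis.example.com/api/v1/customers/<customer-id>/liveness/evaluation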

Eye-gaze Liveness Check

Eye-gaze liveness check is the process of determining whether the presented faces belong to a real person, by requiring the user to follow an object displayed on the screen with their eyes.

This check is recommended for applications where security is paramount, and is recommended as an additional step after performing the passive liveness check.

Follow these steps to implement the eye-gaze liveness check:

  1. generate object movement instructions randomly on your application server

  2. send these instructions to the client and ask the customer to follow the movement of the object with his/her eyes

  3. capture photos of the customer while he/she follows the object, and add them to the liveness with a corresponding assertion

Selfies for eye-gaze liveness need to have one of the following assertion values: EYE_GAZE_TOP_LEFT, EYE_GAZE_TOP_RIGHT, EYE_GAZE_BOTTOM_LEFT, EYE_GAZE_BOTTOM_RIGHT

Each of these assertions corresponds to the position of the object at the moment the photo was taken.

Selfies with assertions need to be provided sequentially, in the order captured. Parallel processing is not allowed.

Eye-gaze liveness can be evaluated only once the required number of selfies with relevant assertions has been added.

The minimum number of selfies for eye-gaze liveness is configurable via a property.

Eye-Gaze Liveness Check with Challenge

The Eye-gaze liveness check with challenge is available from Digital Identity Service version 1.39.0 and DOT iOS/Android SDKs version 8.0.0. This feature is an extension of the Eye-gaze Liveness Check, where the object movement instructions are generated on the server, with additional validation upon data retrieval.

Follow these steps to implement the eye-gaze liveness with challenge:

  1. initiate the process by getting the object movement instructions using endpoint PUT /liveness/records/challenge, which will return a list of Eye-gaze liveness corners that need to be captured

  2. follow the instructions explained in the DOT mobile components

  3. upload the binary file created by the DOT components via POST /liveness/records request

Keep in mind that a subsequent call to the PUT /liveness/records/challenge endpoint for the same customer will result in the same list of Eye-gaze liveness assertions that need to be captured.

If you have already created a list of Eye-gaze liveness assertions, but something went wrong and you want to generate a new list for the same customer, you first need to call the PUT /liveness endpoint, which effectively starts a new liveness check for that customer, and only then call the PUT /liveness/records/challenge endpoint.

This feature introduces some restrictions to the Eye-gaze liveness. You cannot combine the Eye-gaze liveness with challenge and the standard Eye-gaze liveness.

Once you call the PUT /liveness/records/challenge endpoint, it is expected that the images required for the Eye-gaze liveness will be uploaded via the POST /liveness/records endpoint.

Calling POST /liveness/selfies with the assertion set to EYE_GAZE_TOP_LEFT, EYE_GAZE_TOP_RIGHT, EYE_GAZE_BOTTOM_LEFT or EYE_GAZE_BOTTOM_RIGHT will result in an error.

Similarly, once you call the POST /liveness/selfies endpoint with the assertion set to EYE_GAZE_TOP_LEFT, EYE_GAZE_TOP_RIGHT, EYE_GAZE_BOTTOM_LEFT or EYE_GAZE_BOTTOM_RIGHT, the Digital Identity Service will assume the standard Eye-gaze Liveness Check is being performed. Any subsequent call to the PUT /liveness/records/challenge endpoint will result in an error.

Smile Liveness Check

Smile liveness check is the process of determining whether the presented faces belong to a real person by requiring the user to change his/her expression.

Follow these steps to implement the smile liveness check:

  1. ask the customer to maintain a neutral expression and then smile

  2. capture photos of the customer with both expressions, and add them to the liveness with a corresponding assertion

Selfies with assertions need to be provided sequentially. Parallel processing is not allowed.

Smile liveness can be evaluated only once selfies with both SMILE and NEUTRAL assertions have been added. As a part of the evaluation process, passive liveness is calculated on both photos.

You can fine-tune the passive liveness threshold for smile liveness with a property.

If the passive liveness threshold property is not provided, the smile liveness score is returned as a continuous value ranging from 0.0 to 1.0. Otherwise, if the threshold property is provided, the score is returned as a binary value, either 0.0 or 1.0.

MagnifEye Liveness Check

MagnifEye liveness check is the process of determining whether the presented faces belong to a real person by navigating the user to capture a detailed image of the eye. It is a semi-passive method inspired by our extensive know-how in the domain of facial and iris recognition. The core of the technology is built upon Innovatrics Passive Liveness Detection, while also taking into account the uniqueness of the human eye.

Follow these steps to implement the magnifeye liveness check:

  1. follow the instructions explained in the DOT web/mobile components

  2. upload the binary file created by the DOT components via Create Liveness Record request

To evaluate magnifeye liveness, the type of liveness needs to be specified as MAGNIFEYE_LIVENESS. Liveness can be evaluated once the liveness record has been successfully created.

Customer Document Operations

The Onboarding API provides services to recognize and process customer’s photo identity documents. (Only identity documents containing a photo of the holder are usable for remote identity verification.)

The process starts with creating an identity document. At this point, information can be provided about the document type and/or edition. The parts of the document to be processed can also be specified.

The second step is to upload pictures of the document pages. The system will try to detect and classify the document in the picture.

Once at least one page has been successfully recognized, it is possible to:

Supported Identity Documents

The Digital Identity Service can support identity documents of the following types:

  • Passports

  • Identity cards

  • Driving licenses

  • Foreigner permanent residence cards

  • and other cards of similar format that include the holder’s photo

Support for document recognition comes at two levels:

Level 1 support

Level 1 support includes all documents compliant with ICAO machine-readable travel document specification.

The Digital Identity Service can process the document portrait and parse data from the machine-readable zone of documents with this level of support.

Level 2 support

For Level 2 support, the Digital Identity Service needs to be trained to support each individual document type and its edition.

Once the document is supported, the Digital Identity Service can process any data available on it.

The list of documents with Level 2 support can be found via the get metadata endpoint.

If a required ID document type does not have Level 2 support, contact Innovatrics to request support for that document type in a future version of the Digital Identity Service.

Get Metadata for documents with Level 2 support

To get the full list of documents with Level 2 support, make a GET /metadata request.

The response contains a list of documents supported by the current version of the Digital Identity Service and the metadata for each document.

The metadata for an individual document contains a list of its pages. For each page, there is a list of text fields that the Digital Identity Service was trained to OCR.

For each text field, there is information on whether the field’s value is returned as found on the document or whether it is normalized and returned in a standard format.

If present on the document, there is also the original label for each text field.

If a document page has the classificationAdviceRequired attribute set to true, a create document request must contain a precise classification advice, comprising the exact document type, edition, and country of the given document.

Document Classification

The amount of data that the Digital Identity Service can extract from an identity document depends on how precisely it can classify this document.

There are 3 levels of classification:

The Digital Identity Service tries to classify the document up to the level that allows the processing of all requested document sources:

  1. It will try to fully classify the document if the processing of visual zone or barcodes was requested.

  2. Otherwise, it will only try to recognize the travel document type of the document.

  3. If the document was not at least partially classified, it will be processed as an unknown document.

The classification of a document can be affected by classification advice that can be optionally provided in the create document request payload.

It can be also affected by optional advice on the type of page in the add document page request payload.

If the document page has the classificationAdviceRequired attribute set to true, classification advice is required and must be provided in the create document request payload. If the classification advice is missing or invalid, the document will be classified as UNKNOWN.

Full classification

A full classification means the Digital Identity Service knows the type of the document, its issuing country, the exact edition, and the type of travel document if the document is compliant with travel document specifications.

Only documents that have Level 2 support can be fully classified.

Any document source on a fully classified document can be processed. That means the Digital Identity Service can:

  • OCR textual data from the visual zone

  • parse data from the machine-readable zone

  • decode data from barcodes

  • extract biometric information from the document portrait

  • check input for tampering by inspecting the color profile of the image

  • identify image fields: signature, fingerprint, ghost portrait and document portrait

Partial classification

A partial classification means the Digital Identity Service knows the type of the travel document.

With a partially classified document, only the machine-readable zone and the document portrait sources can be processed. That means the Digital Identity Service can:

  • parse data from the machine-readable zone

  • extract biometric information from the document portrait

Partial classification is possible for any document with Level 1 support.

A document can be partially classified only after a page containing a machine-readable zone is provided. That means:

  • A TD1 document can be partially classified after a back page is provided. If the front page was provided first, it will stay unrecognized until the back page is added.

  • TD2 and TD3 documents can be partially classified after a front page is provided.

Document not recognized

If the Digital Identity Service was unable to recognize either the document’s exact edition or its travel document type, the document will be processed as an unknown document.

With an unknown document, the Digital Identity Service can only process the document portrait source. If a portrait is present on the provided page, the Digital Identity Service can:

  • extract biometric information from the document portrait

The system only keeps the last provided page for an unknown document. If there are multiple images provided and the document is still unknown, all previous pages are replaced with the last one.

Classification of an additional page

Once the document is at least partially classified, any page added later has to match the existing classification.

  • That means if the document is fully classified, then it will only accept pages from the same document edition.

  • If the document is partially classified, then it will accept pages from documents with the same travel document type.

  • If the document is not recognized, it will accept pages of any type.

The level of classification of a document can be increased with an additional page. For example, the exact edition of a document that is only partially classified as a travel document of TD1 type can subsequently be specified by recognizing it from an additional page. The recognized edition has to be compliant with the already recognized type of travel document. The classification level will move from partial classification to full classification.

If the document was classified incorrectly, the whole document needs to be deleted and the process started again. Classification can be improved by providing classification advice and/or by providing images of better quality.

Create Document

To create an identity document for a customer, make a PUT /document request on the customer resource.

Improve the performance of the document processing by providing classification advice and/or specifying the data sources on the document to be processed.

The response will contain a link to the newly created customer document resource.

There can be at most one document for a customer. The existing document can be replaced by creating a new document for the customer.

Classification Advice

If it’s known upfront what type of document will be uploaded, performance of the classification can be improved by providing classification advice.

Classification advice can influence how the document will be recognized. Potential candidates can be restricted by specifying allowed countries, document types, editions, and/or travel document types.

If no advice is provided, the system will perform the classification considering all supported document types.

Document Sources

The performance of document processing can be improved by specifying what parts of the document need to be processed.

Provide a list of document sources that need to be processed. If the list of sources is not provided in the request, or if it is empty, then the Digital Identity Service will try to process all of them.

Table 13. Supported document sources

Document Source

Description

Requirements

visual zone

  • read data from text fields

  • crop image fields: signature, ghost-portrait, fingerprint

document page needs to be fully recognized

machine-readable zone

  • parse data from machine-readable zone

the type of machine-readable travel document needs to be recognized

document portrait

  • extract biometric data from document portrait

document portrait needs to be present on provided page

barcode

  • extract data encoded in barcodes

document page needs to be fully recognized
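
Putting the classification advice and document sources together, a create document request might look like this (the payload field names and enumeration values are illustrative and may differ from the actual API schema; consult the API reference or the Postman collection):

curl -X PUT \
    -H "Authorization: Bearer <token>" \
    -H "Content-Type: application/json" \
    -d '{"advice": {"classification": {"countries": ["SVK"], "types": ["identity-card"]}}, "sources": ["VIZ", "MRZ", "DOCUMENT_PORTRAIT"]}' \
    https://dis.example.com/api/v1/customers/<customer-id>/document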

Add Document Page

To add a page to the identity document for a customer, make a PUT /pages request on the customer’s document resource. There are alternative ways of uploading the image of a document page described in Multiple options for image uploads.

Improve the performance of the page’s processing by specifying whether it is a front or a back page in the classification advice.

The optional classification advice in the add document page request can specify only the type of page. To provide advice on the type of document, use the classification advice in the create document request.

A successful response will contain info about the classified document type and the recognized type of page. It will also contain the position of the detected document in the input image, the confidence, and a link to the newly created document page resource.

The response may contain a list of warnings.

An unsuccessful response will contain an error code.

When a page for a document is provided, the Digital Identity Service will try to recognize the type of page and the type of document. This process is called classification and is described in chapter Document Classification.

Image requirements

Ideally, the photo of the identity document should be captured with Innovatrics’ auto-capture components, whether mobile or browser-based. These components ensure the quality requirements mentioned below:

  • The supported image formats are JPEG and PNG, or binary data created by Innovatrics web components or mobile SDKs

  • The document image must be large enough — when the document card is normalized, the text height must be at least 32 px (or the document card width in the image must be approximately 1000 px)

  • The document card edges must be clearly visible and be placed at least 10 px inside the image area

  • The image must be sharp enough for the human eye to recognize the text

  • The image should not contain objects or a background with visible edges (example below), as this can confuse the process of detecting the card in the image.