DOT Digital Identity Service

v1.2.1

Overview

Digital Identity Service enables two main features:

  • Customer onboarding

  • Face biometrics

Customer onboarding is the basic use case of DOT. The customer provides a selfie and photos of their identity card, and should pass liveness detection. The provided data can be checked for inconsistencies, and based on the check result, the client decides whether the customer will be onboarded.

The biometric processing of face images allows clients to support their own use cases that require face biometrics.

API Reference

The Digital Identity Service API reference is published here

Distribution package contents

You can find the distribution package in our CRM portal. Your sales representative will provide you with the credentials for the CRM login.

The distribution package contains these files:
  • config – The configuration folder

    • application.yml – The application configuration file, see Application configuration

    • logback-spring.xml – The logging configuration file

  • doc – The documentation folder

    • Innovatrics_DOT_Digital_Identity_Service_1.2.1_Technical_Documentation.html – Technical documentation

    • Innovatrics_DOT_Digital_Identity_Service_1.2.1_Technical_Documentation.pdf – Technical documentation

    • swagger.json – Swagger API file

    • EULA.txt - The license agreement

  • docker – The Docker folder

    • Dockerfile – The text document that contains all the commands to assemble a Docker image, see Docker

    • entrypoint.sh – The entry point script

    • withCache

      • Dockerfile – The text document that contains all the commands to assemble a Docker image with both server and cache running in one container

      • entrypoint.sh – The entry point script

      • install_memcached.sh - The script to install memcached

  • libs – The libraries folder

    • libsam.so – The Innovatrics OCR library

    • libiface.so – The Innovatrics IFace library

    • solvers – The Innovatrics IFace library solvers

  • dot-digital-identity-service.jar – The executable JAR file, see How to run

  • Innovatrics_DOT_Digital_Identity_Service_1.2.1_postman_collection.json – Postman collection

Installation

System requirements

  • Ubuntu 18.04 (64-bit)

Steps

  1. Install the following packages:

    • OpenJDK Runtime Environment (JRE) (openjdk-11-jre)

    • userspace USB programming library (libusb-0.1)

    • GCC OpenMP (GOMP) support library (libgomp1)

    • Locales

    apt-get update
    apt-get install -y openjdk-11-jre libusb-0.1 libgomp1 locales
  2. Set the locale

    sed -i '/en_US.UTF-8/s/^# //g' /etc/locale.gen && locale-gen
    export LANG=en_US.UTF-8; export LANGUAGE=en_US:en; export LC_ALL=en_US.UTF-8
  3. Extract the Digital Identity Service distribution package to any folder.

  4. Link the application libraries:

    ldconfig /local/path/to/current/dir/libs
    Replace the path /local/path/to/current/dir in the command with your current path. Keep /libs as a suffix in the path.

Activate the DOT license

The activation of the DOT license depends on the type of your deployment.

If you perform serverless or Docker deployments, please contact your sales representative or sales@innovatrics.com to receive a license. Once you receive the license, please deploy it as described in step 5 below.

If you perform a bare metal installation, or use a fixed VM or AWS instance, perform the following steps:

  1. Run DOT Digital Identity Service to generate the Hardware ID necessary for the license.

    java -Dspring.config.additional-location=file:config/application.yml -Dlogging.config=file:config/logback-spring.xml -DLOGS_DIR=logs -Djna.library.path=libs/ -jar dot-digital-identity-service.jar

    Copy the Hardware ID, which you can find in the output. See the example below:

    Unable to init IFace. Hardware ID: xxxxxxxxxxxx
  2. Visit our CRM portal and go to Products > Digital Onboarding Toolkit > Licenses.

  3. Then, select Generate License and paste the Hardware ID.

  4. Confirm again with Generate License and download the license.

  5. Copy your license file iengine.lic for Innovatrics IFace SDK 4.15.0 into {DOT_DIGITAL_IDENTITY_SERVICE_DIR}/license/

How to run

As Digital Identity Service is a stand-alone Spring Boot application with an embedded servlet container, there is no need for deployment on a pre-installed web server.

Digital Identity Service needs a running memcached instance or cluster. You have to configure memcached via the externalized configuration first.
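
If you do not have a memcached instance available yet, a simple option is to start one with Docker. This is only a quick-start sketch: the official memcached image and its default port 11211 are standard, but the container name and image tag below are just examples.

docker run -d --name dis-memcached -p 11211:11211 memcached:1.6

Then point the memcached servers property of the externalized configuration to this instance (for example, localhost:11211); see Memcached configuration.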

You can run the Digital Identity Service from the application folder:

java -Dspring.config.additional-location=file:config/application.yml -Dlogging.config=file:config/logback-spring.xml -DLOGS_DIR=logs -Djna.library.path=libs/ -jar dot-digital-identity-service.jar

The embedded Tomcat web server will start and the application will listen on port 8080 (or another configured port).
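
To quickly verify that the service is up, you can call the health endpoint described in the Monitoring section (assuming the default port 8080):

curl http://localhost:8080/api/v1/health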

Docker

For building a Docker image, you can use the Dockerfile and the entrypoint.sh script. A Dockerfile example and an entrypoint.sh script example can also be found in the Appendix.

Build the Docker image as follows:

cd docker
cp ../dot-digital-identity-service.jar .
cp ../libs/libsam.so.* .
cp ../libs/libiface.so.* .
cp -r ../libs/solvers/ ./solvers
docker build --build-arg JAR_FILE=dot-digital-identity-service.jar --build-arg SAM_OCR_LIB=libsam.so.* --build-arg IFACE_LIB=libiface.so.* -t dot-digital-identity-service .

Digital Identity Service needs a running memcached instance or cluster. You have to configure memcached via the externalized configuration first.

Run the container according to the instructions below:

docker run -v /local/path/to/license/dir/:/srv/dot-digital-identity-service/license -v /local/path/to/config/dir/:/srv/dot-digital-identity-service/config -v /local/path/to/logs/dir/:/srv/dot-digital-identity-service/logs -p 8080:8080 dot-digital-identity-service
Replace the path /local/path/to/license/dir/ in the command with your local path to the license directory.
Replace the path /local/path/to/config/dir/ in the command with your local path to the config directory (from the distribution package).
Important: Replace the path /local/path/to/logs/dir/ in the command with your local path to the logs directory (you need to create this directory, mounted to a persistent drive). This volume mount into the Docker container is mandatory; otherwise the application does not start successfully.

Logging

Digital Identity Service logs to the console and also writes a log file (dot-digital-identity-service.log). The log file is located in the directory defined by the LOGS_DIR system property. Log files rotate when they reach a size of 5 MB, and the maximum history is 5 files by default.

API Transaction Counter Log

The separate log file dot-digital-identity-service-transaction-counter.log is located in the directory defined by the LOGS_DIR system property. This log file contains information about the counts of API calls (transactions). The same rolling policy is applied as for the application log, except that the maximum history of this log file is 180 files.

Docker: Persisting log files in local filesystem

When you run Digital Identity Service as a Docker container, you can retain access to log files even after the container no longer exists. This is achieved by using Docker volumes. To find out how to run a container, see Docker.

Monitoring

Information such as build or license info can be accessed at /api/v1/info. Information about available endpoints can be viewed at /swagger-ui.html.

The health endpoint accessible at /api/v1/health provides information about the health of the application. This feature can be used by an external tool such as Spring Boot Admin.

The application also supports exposing metrics in the standardized Prometheus format. These are accessible at /api/v1/prometheus. You can expose this endpoint in your configuration:

management:
  endpoints:
    web:
      exposure:
        include: health, info, prometheus

For more information, see the Spring Boot documentation, section Production-ready monitoring. The Spring Boot Actuator documentation also provides information about other monitoring endpoints that can be enabled.

Tracing

The OpenTracing API with the Jaeger implementation is used for tracing. The Digital Identity Service tracing implementation supports SpanContext extraction from HTTP requests using the HTTP Headers format. For more information, see the OpenTracing Specification. Tracing is disabled by default. To enable Jaeger tracing:

Set these application properties:

opentracing:
  jaeger:
    enabled: true
    udp-sender:
      host: jaegerhost
      port: portNumber

For more information about Jaeger configuration, see Jaeger Client Lib Docs.

Architecture

Digital Identity Service is a semi-stateful service. It temporarily persists intermediate results and images in an external cache. Thanks to this, the exposed API allows you to flexibly use just the methods needed for a specific use case, without repeating expensive operations. Another advantage is that you can provide data as it becomes available, without having to cache it on your side.

The Digital Identity Service can be horizontally scaled. Multiple instances of the service can share the same cache or a cache cluster.

Figure 1. Horizontal scaling of Digital Identity Service with a memcached cluster

The services of Digital Identity Service are better suited for processes that do not take a long time. However, the cache can be configured to support various use cases and processes.

Cache

The Digital Identity Service currently supports Memcached as a cache implementation.

There are various tools for monitoring the performance of your Memcached server, and we recommend using one.

Memcached configuration

The cache is configurable via the externalized configuration.

It can be configured either with AWS ElastiCache or with a list of hosted memcached servers.

Efficient memory usage

For optimal performance, you have to configure the expiration of records according to the nature of the implemented process:

  • A short expiration time results in lower memory usage and a higher throughput of short requests.

  • A long expiration time allows longer processing of cached records, but causes higher memory requirements.

The memory consumption for longer processes can be lowered by cleaning records once they are not needed. The API provides delete methods for each resource.

The expiration of records can be configured independently for the onboarding API and for the face operations.

Table 1. Cache configuration properties

innovatrics.dot.dis.cache

  • customer-expiration – The time in seconds to persist all data created and used by the Onboarding API. Example value: 1800

  • face-expiration – The time in seconds to persist face records created and used by the Face API. Example value: 600

innovatrics.dot.dis.persistence.memcached

  • aws-elastic-cache-config-endpoint – The host and port of the AWS ElastiCache configuration endpoint. Format: host:port

  • servers – The list of host and port pairs of the memcached instances. Only used if the AWS ElastiCache configuration endpoint is not configured. Format: host1:port1 host2:port2

  • read-timeout – The memcached read timeout in milliseconds. Example value: 2000

  • write-timeout – The memcached write timeout in milliseconds. Example value: 2000

  • operation-timeout – The memcached operation timeout in milliseconds. Example value: 5000
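
As an example, the properties above map to the following structure in config/application.yml (the values shown are the example values from the table; the memcached server address is only an illustration):

innovatrics:
  dot:
    dis:
      cache:
        customer-expiration: 1800
        face-expiration: 600
      persistence:
        memcached:
          servers: localhost:11211
          read-timeout: 2000
          write-timeout: 2000
          operation-timeout: 5000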

Authentication

The Digital Identity Service API is secured with API Key authentication; therefore, you need to send an HTTP Authorization header with every request.

The header must contain a Bearer token, which is a UTF-8 Base64 encoded string that consists of two parts, delimited by a colon:

Table 2. Token description

  • API Key – A unique identifier that you will receive along with your license

  • API Secret – A unique string that you will receive along with your license

The server will return an HTTP 403 Forbidden response for every request that either does not contain the Authorization header or whose header contents are invalid (e.g. malformed Base64, or an invalid API Key or Secret).

Some endpoints are not secured by design (such as /metrics, /health or /info) and do not require any authentication.

Authorization header creation

The following is an example snippet of the structure of API key and secret in the license file:

{
  "contract": {
    "dot": {
      "authentication": {
        "apiKeyAndSecrets": [
          {
            "key": "some-api-key",
            "secret": "mb7DZQ6JwesRHkWPbjKVDgGHXxrAHFd6"
          }
        ]
      }
    },
    ...
  },
  ...
}

You will need to encode the key and secret parts into a valid UTF-8 Base64 string (those two parts, delimited by a colon), e.g.:

some-api-key:mb7DZQ6JwesRHkWPbjKVDgGHXxrAHFd6

You can do the encoding itself via the bash command below:

echo -n 'some-api-key:mb7DZQ6JwesRHkWPbjKVDgGHXxrAHFd6' | base64 -w 0

Once you encode the aforementioned token into Base64, each request must contain the Authorization header which consists of the Bearer keyword and the encoded key and secret:

Bearer c29tZS1hcGkta2V5Om1iN0RaUTZKd2VzUkhrV1BiaktWRGdHSFh4ckFIRmQ2
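
For example, with curl you can store the encoded token in a shell variable and attach the header to every call (a sketch; replace the endpoint path with the secured endpoint you are calling):

TOKEN=$(echo -n 'some-api-key:mb7DZQ6JwesRHkWPbjKVDgGHXxrAHFd6' | base64 -w 0)
curl -H "Authorization: Bearer ${TOKEN}" http://localhost:8080/<secured-endpoint-path>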

Image Data Downloader

The Digital Identity Service API supports two ways to provide an image in its requests:

  • base64 encoded data

  • URL to the remote image

Images provided as a URL are downloaded by the Image Data Downloader.

The connection timeout and the read timeout for the Image Data Downloader are configurable via properties.

Table 3. Image Data Downloader configuration properties

innovatrics.dot.dis.data-downloader

  • connection-timeout – The connection timeout for the Image Data Downloader in milliseconds. Default value: 2000

  • read-timeout – The read timeout for the Image Data Downloader in milliseconds. Default value: 30000

Logging Transactions via Countly

For billing purposes, you have to report all transactions performed by any running instance of the Digital Identity Service.

The Digital Identity Service can be configured to periodically publish metadata about executed transactions to Countly. You have to configure the URL to Countly to set up automatic reporting.

No sensitive details are stored, only information about transaction counts, the outcome of operations, and the quality of inputs. The collected statistics may later be used to improve the performance of the system in your environment.

All data published to Countly is also logged to a file named dot-digital-identity-service-countly-event.log. If you are unable to integrate with an external Countly server to set up automatic reporting, you can report transactions by sending the contents of this file.

Table 4. Countly configuration properties

innovatrics.dot.dis.countly-connector

  • server-url – The URL of the Countly server. If the property is not configured, transactions are not automatically reported.

  • update-interval-seconds – The update interval in seconds for reporting transactions. Default value: 60

Customer Onboarding

The Customer Onboarding API enables a fully digital process to remotely verify a person's identity and enroll them as a new customer.

During the onboarding, a person registers with a company or government entity. They provide their identity document and one or more selfies to prove their identity.

With a digital onboarding process powered by Digital Identity Service, a company can easily and securely convert a person to a trusted customer.

Standard Onboarding Flow

The recommended customer onboarding process looks like this:

To use any part of the Customer Onboarding API you need to create a customer first. The customer will be persisted for a configurable amount of time (see the config section). Once created, you can perform additional actions while the record is persisted.

The data-gathering steps (2-4) can be performed in any order. The extracted data can be deleted or replaced by repeating the same action with different inputs.

The results of the get customer request (5) or inspection steps (6-7) depend on data previously gathered.

Once the onboarding is over, you can delete the customer to reduce the memory needed. Deleting a customer will remove any related data, such as selfies or document pages. Otherwise, the data will expire after a configured amount of time.

Actions for onboarding a customer have to be performed sequentially; parallel processing on the same customer is not allowed. If there are concurrent requests on any resource belonging to the same customer resource, only one of them will succeed and the rest will end with an error (409 Conflict). For example, it is not possible to upload the front and the back page of the document in parallel.

Create Customer

To create a customer, you have to make a POST /customers request.

The response will contain a link to the newly created customer resource as well as the ID of the customer.
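
A minimal request sketch with curl, assuming the onboarding endpoints live under the same /api/v1 prefix as the monitoring endpoints and that $TOKEN holds the encoded API Key and Secret (see Authentication); check the API reference for the exact path:

curl -X POST \
  -H "Authorization: Bearer ${TOKEN}" \
  http://localhost:8080/api/v1/customers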

Add Selfie

To provide a selfie for a customer you have to make a PUT /selfie request on the customer resource.

You have to provide a base64 encoded image or a URL to the input image. It is not allowed to provide both.
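
A request sketch with curl is shown below. The JSON field names (image.data for base64 data, image.url for a remote image) are illustrative only; consult the API reference or swagger.json for the exact request schema:

curl -X PUT \
  -H "Authorization: Bearer ${TOKEN}" \
  -H 'Content-Type: application/json' \
  -d '{"image": {"data": "<base64-encoded-image>"}}' \
  http://localhost:8080/api/v1/customers/<customer-id>/selfie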

A successful response will contain the position of the detected face in the input image, the confidence, and a link to the newly created customer selfie resource. The response may contain a list of warnings as well. An unsuccessful response will contain the error code.

The position of the face is represented by the face rectangle.

The detection confidence contains a score from the interval <0.0,1.0>. Values near 1.0 indicate a high confidence that a human face was detected.

There can be at most one selfie for a customer. You can replace the existing one by adding a new selfie.

Once the face is detected, you can use the selfie in subsequent onboarding steps, for example as an input for liveness detection.

Face Detection Configuration

The face detection on the customer's selfie is configurable. You can adjust the speed, accuracy, and other aspects according to your needs and available resources. You can find more details about image requirements, face detection speed-accuracy modes, and face size ratio in the Face API section of this document.

Liveness Detection

Liveness detection allows you to verify that you are interacting with a live, physically present person. It can distinguish live faces from photos, videos, 2D/3D masks, and other attacks.

The Digital Identity Service offers different approaches to verify the liveness: passive liveness detection, eye-gaze liveness detection, and smile liveness detection.

In general, the liveness detection consists of the following 3 steps:

  1. Create a liveness detection

  2. Add selfies to the liveness detection

  3. Evaluate the liveness

Create Liveness Detection

To create a liveness detection, you have to make a PUT /liveness request on the customer resource.

The response will contain a link to the newly created customer’s liveness resource.

Add Selfie to Liveness Detection

To add a selfie, you have to make a POST /liveness/selfies request on the customer’s liveness resource.

If you want to use a selfie that was already added as the customer’s selfie, you have to specify a reference to it in the payload.

The other option is to provide a new selfie for the liveness detection. In this case, you have to provide a base64 encoded image or a URL to the input image. It is not allowed to provide both.

For each selfie added to the liveness detection, you have to specify the assertion. The provided assertion will determine if and how the selfie will be used for the selected liveness method evaluation in the next step.

The successful response will be empty.

If the quality of the selfie does not fully match the requirements for evaluation, the response will contain a warning. If this happens, you can still use this selfie to evaluate the liveness; however, the result is not guaranteed to be reliable. If you do not want to proceed with this selfie, you have to delete the liveness resource and start again by creating a new one.

If the selfie was not accepted, the response will contain an error code.

You can add multiple selfies to one liveness detection.

The Digital Identity Service will try to detect a face on every selfie provided. The configuration of face detection on selfies is explained in this chapter.

Evaluate Liveness

To evaluate the liveness, you have to make a POST /liveness/evaluation request on the customer’s liveness resource.

You have to specify the type of liveness detection to be evaluated.

A successful response will contain a score from the interval <0.0,1.0>. Values near 1.0 indicate high confidence that the associated selfies contained a live person.

An unsuccessful response will contain an error code.

You can repeat the evaluation for different types of liveness on the same liveness resource. Only selfies with a relevant assertion will be used for a given type of liveness.

Passive Liveness Detection

The passive liveness detection is a process of determining whether the presented face is a real person without requiring the user to perform any additional actions.

It is recommended to perform this check on the customer’s selfie. You can add the existing customer’s selfie to the liveness detection by providing a reference to it.

To add a selfie for a passive liveness evaluation, you have to set the assertion to NONE. Only selfies with this assertion will be evaluated for passive liveness.

To evaluate the passive liveness you have to specify the type of liveness as PASSIVE_LIVENESS.

The passive liveness can be evaluated once at least one selfie with the correct assertion has been added. If there are multiple selfies with the corresponding assertion, the returned score will be the average of all of them.
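
Putting the three liveness steps together for passive liveness, a sketch with curl (the /api/v1 path prefix and the payload field names, such as selfieOrigin, are illustrative only; see swagger.json for the exact schema):

# 1. Create the liveness detection
curl -X PUT -H "Authorization: Bearer ${TOKEN}" \
  http://localhost:8080/api/v1/customers/<customer-id>/liveness

# 2. Add the already uploaded customer selfie with the NONE assertion
curl -X POST -H "Authorization: Bearer ${TOKEN}" -H 'Content-Type: application/json' \
  -d '{"assertion": "NONE", "selfieOrigin": {"link": "/api/v1/customers/<customer-id>/selfie"}}' \
  http://localhost:8080/api/v1/customers/<customer-id>/liveness/selfies

# 3. Evaluate the passive liveness
curl -X POST -H "Authorization: Bearer ${TOKEN}" -H 'Content-Type: application/json' \
  -d '{"type": "PASSIVE_LIVENESS"}' \
  http://localhost:8080/api/v1/customers/<customer-id>/liveness/evaluation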

Eye-gaze Liveness Detection

The eye-gaze liveness detection is the process of determining whether the presented faces belong to a real person, by requiring the user to follow an object displayed on the screen with their eyes.

This check is recommended for applications where security comes first, and is recommended as an additional step after performing the passive liveness detection.

To implement the eye-gaze liveness detection you have to:

  1. generate object movement instructions randomly on your application server

  2. send these instructions to the client and ask the customer to follow the movement of the object with their eyes

  3. capture photos of the customer while they follow the object and add them to the liveness with a corresponding assertion

Selfies for eye-gaze liveness need to have one of the following assertion values: EYE_GAZE_TOP_LEFT, EYE_GAZE_TOP_RIGHT, EYE_GAZE_BOTTOM_LEFT, EYE_GAZE_BOTTOM_RIGHT

Each of these assertions corresponds to the position of the object at the moment the photo was taken.

Selfies with assertions need to be provided sequentially, in the order in which they were captured. Parallel processing is not allowed.

The eye-gaze liveness can be evaluated only once the required number of selfies with relevant assertions has been added.

The minimum number of selfies for eye-gaze liveness is configurable via a property.

Smile Liveness Detection

The smile liveness detection is the process of determining whether the presented faces belong to a real person, by requiring the user to change their expression.

To implement the smile liveness detection you have to:

  1. ask the customer to maintain a neutral expression and then to change it to smiling

  2. capture photos of the customer with both expressions and add them to the liveness with a corresponding assertion

Selfies with assertions need to be provided sequentially. Parallel processing is not allowed.

The smile liveness can be evaluated only once selfies with both SMILE and NEUTRAL assertions were added.

Customer Document Operations

The Onboarding API provides services to recognize and process a customer’s photo identity documents. (Only identity documents containing a photo of the holder are usable for remote identity verification.)

The process starts with creating an identity document. At this point, you can provide information about the document’s type and/or edition. You can also specify what parts of the document you want to process.

The second step is to upload pictures of the document’s pages. The system will try to detect and classify the document in the picture.

Once at least one page is successfully recognized, you can process the document sources available for its classification level (see Document Classification and Document Sources below).

Supported Identity Documents

The Digital Identity Service can support identity documents of the following types:

  • Passports

  • Identity cards

  • Driving licenses

  • Foreigner permanent residence cards

  • and other cards of similar format containing a photo of the holder

The support for document recognition comes in two levels:

Level 1 support

The Level 1 support includes all documents that are compliant with the ICAO machine-readable travel document specification.

The Digital Identity Service can process the document portrait and parse data from the machine-readable zone of documents with this level of support.

Level 2 support

For Level 2 support, the Digital Identity Service needs to be trained to support each individual document type and its edition.

Once the document is supported, the Digital Identity Service can process any data available on it.

You can find the list of documents with Level 2 support via the get metadata endpoint.

If a required ID document type does not have Level 2 support, you can contact Innovatrics to request support for this document type in a future version of the Digital Identity Service.

Get Metadata for documents with the Level 2 support

To get the full list of documents with the Level 2 support you have to make a GET /metadata request.
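
A request sketch with curl (the /api/v1 prefix is an assumption, as above):

curl -H "Authorization: Bearer ${TOKEN}" http://localhost:8080/api/v1/metadata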

The response contains a list of documents supported by the current version of the Digital Identity Service and the metadata for each document.

The metadata for an individual document contains a list of its pages. For each page, there is a list of text fields that the Digital Identity Service was trained to OCR.

For each text field, there is information on whether the field’s value is returned as found on the document or whether it is normalized and returned in a standard format.

If present on the document, the original label of each text field is also included.

Document Classification

The amount of data that the Digital Identity Service can extract from an identity document depends on how precisely it can classify this document.

There are 3 levels of classification: full classification, partial classification, and document not recognized (described in the sections below).

The Digital Identity Service tries to classify the document up to the level that allows the processing of all requested document sources:

  1. It will try to fully classify the document if the processing of the visual zone or barcodes was requested.

  2. Otherwise, it will only try to recognize the travel document type of the document.

  3. If the document was not at least partially classified, it will be processed as an unknown document.

The classification of a document can be affected by classification advice that can be optionally provided in the create document request payload.

It can be also affected by optional advice on the type of page in the add document page request payload.

Full classification

A full classification means the Digital Identity Service knows the type of the document, its issuing country, the exact edition, and the type of the travel document if the document is compliant with travel document specifications.

Only documents that have Level 2 support can be fully classified.

Any document source on a fully classified document can be processed. That means the Digital Identity Service can:

  • OCR textual data from the visual zone

  • parse data from the machine-readable zone

  • decode data from barcodes

  • extract biometric information from the document portrait

  • check input for tampering by inspecting the color profile of the image

  • identify image fields: signature, fingerprint, ghost portrait and document portrait

Partial classification

A partial classification means the Digital Identity Service knows the type of the travel document.

With a partially classified document, only the machine-readable zone and the document portrait sources can be processed. That means the Digital Identity Service can:

  • parse data from the machine-readable zone

  • extract biometric information from the document portrait

Partial classification is possible for any document with Level 1 support.

A document can be partially classified only after a page containing a machine-readable zone is provided. That means:

  • A TD1 document can be partially classified after the back page is provided. If the front page was provided first, the document will stay unrecognized until the back page is added.

  • TD2 and TD3 documents can be partially classified after the front page is provided.

Document not recognized

If the Digital Identity Service is unable to recognize either the document’s exact edition or its travel document type, the document will be processed as an unknown document.

With an unknown document, the Digital Identity Service can only process the document portrait source. If a portrait is present on the provided page, the Digital Identity Service can:

  • extract biometric information from the document portrait

The system only keeps the last provided page for an unknown document. If there are multiple images provided and the document is still unknown, all previous pages are replaced with the last one.

Classification of an additional page

Once the document is at least partially classified, any page added later has to match the existing classification.

  • That means if the document is fully classified, then it will only accept pages from the same document edition.

  • If the document is partially classified, then it will accept pages from documents with the same travel document type.

  • If the document is not recognized, it will accept pages of any type.

It is possible to increase the level of classification of a document with an additional page. For example, the exact edition of a document that is only partially classified as a travel document of TD1 type can be later specified by recognizing it from an additional page. The recognized edition has to be compliant with the already recognized type of travel document. The classification level will move from partial classification to full classification.

If the document was classified incorrectly, you have to delete the whole document and start again. You can improve the classification by providing classification advice and/or by providing images of better quality.

Create Document

To create an identity document for a customer you have to make a PUT /document request on the customer resource.

You can improve the performance of the document processing by providing classification advice and/or specifying data sources on the document that you want to process.

The response will contain a link to the newly created customer document resource.

There can be at most one document for a customer. You can replace the existing one by creating a new document for the customer.
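
A request sketch with curl is shown below. The payload field names (advice, classification, countries) and the country code are illustrative only; see swagger.json for the exact schema:

curl -X PUT -H "Authorization: Bearer ${TOKEN}" -H 'Content-Type: application/json' \
  -d '{"advice": {"classification": {"countries": ["FRA"]}}}' \
  http://localhost:8080/api/v1/customers/<customer-id>/document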

Classification Advice

If you know upfront what type of document will be uploaded, you can improve the performance of the classification by providing classification advice.

With the classification advice, you can influence how the document will be recognized. You can restrict potential candidates by specifying allowed countries, document types, editions, and/or travel document types.

If no advice is provided, the system will perform the classification considering all supported document types.

Document Sources

You can improve the performance of the document processing by specifying what parts of the document need to be processed.

You can provide a list of document sources that need to be processed. If the list of sources is not provided in the request, or if it is empty, then the Digital Identity Service will try to process all of them.

Table 5. Supported document sources

  • visual zone – read data from text fields; crop image fields: signature, ghost portrait, fingerprint. Requirement: the document page needs to be fully recognized.

  • machine-readable zone – parse data from the machine-readable zone. Requirement: the type of machine-readable travel document needs to be recognized.

  • document portrait – extract biometric data from the document portrait. Requirement: a document portrait needs to be present on the provided page.

  • barcode – extract data encoded in barcodes. Requirement: the document page needs to be fully recognized.

Add Document Page

To add a page to the identity document for a customer you have to make a PUT /pages request on the customer’s document resource.

You can improve the performance of the page’s processing by specifying whether it is a front or a back page in the classification advice.

The optional classification advice in the add document page request can specify only the type of page. To provide advice on the type of document, you have to use the classification advice in the create document request.
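
A request sketch with curl is shown below. The payload field names (image.data, classificationAdvice.pageTypes) and the FRONT page type value are illustrative only; see swagger.json for the exact schema:

curl -X PUT -H "Authorization: Bearer ${TOKEN}" -H 'Content-Type: application/json' \
  -d '{"image": {"data": "<base64-encoded-page-image>"}, "classificationAdvice": {"pageTypes": ["FRONT"]}}' \
  http://localhost:8080/api/v1/customers/<customer-id>/document/pages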

A successful response will contain info about the classified document type and the recognized type of page. It will also contain the position of the detected document in the input image, the confidence, and a link to the newly created document page resource.

The response may contain a list of warnings.

An unsuccessful response will contain the error code.

When a page for a document is provided, the Digital Identity Service will try to recognize the type of page and the type of document. This process is called classification and is described in the chapter Document Classification.

Image requirements

Ideally, the photo of the identity document should be captured with Innovatrics’ auto-capture components, whether mobile or browser-based. These components ensure the quality requirements mentioned below:

  • The supported image formats are JPEG, PNG or GIF

  • The document image must be large enough — when the document card is normalized, the text height must be at least 32 px (document card height is approximately 1000 px)

  • The document card edges must be clearly visible and be placed at least 10 px inside the image area

  • The image must be sharp enough for the human eye to recognize the text

  • The image should not contain objects or a background with visible edges, as this can confuse the process of detecting the card in the image