DOT Digital Identity Service
v1.12.0
Overview
Digital Identity Service enables two main features:
Customer onboarding
Face biometry
Customer onboarding is the basic use case of DOT. The customer provides a selfie and photos of an identity card, and must pass a liveness check. The provided data can be checked for inconsistencies, and based on the results the client decides whether to onboard the customer.
The biometric processing of face images allows the client to support specific use cases that need face biometry.
API Reference
The Digital Identity Service API reference is published here
Distribution package contents
The distribution package can be found in our CRM portal. It contains these files:
Your sales representative will provide credentials for the CRM login.
config – The configuration folder
  application.yml – The application configuration file, see Application configuration
  logback-spring.xml – The logging configuration file
doc – The documentation folder
  Innovatrics_DOT_Digital_Identity_Service_1.12.0_Technical_Documentation.html – Technical documentation
  Innovatrics_DOT_Digital_Identity_Service_1.12.0_Technical_Documentation.pdf – Technical documentation
  swagger.json – Swagger API file
  EULA.txt – The license agreement
docker – The Docker folder
  Dockerfile – The text document that contains all the commands to assemble a Docker image, see Docker
  entrypoint.sh – The entry point script
  withCache
    Dockerfile – The text document that contains all the commands to assemble a Docker image, with both server and cache running in one container
    entrypoint.sh – The entry point script
    install_memcached.sh – The script to install memcached
libs – The libraries folder
  libsam.so – The Innovatrics OCR library
  libiface.so – The Innovatrics IFace library
  libinnoonnxruntime.so – The Innovatrics runtime library
  solvers – The Innovatrics IFace library solvers
dot-digital-identity-service.jar – The executable JAR file, see How to run
Innovatrics_DOT_Digital_Identity_Service_1.12.0_postman_collection.json – Postman collection
Installation
System requirements
Ubuntu 18.04 (64-bit)
Steps
Install the following packages:

OpenJDK 17 Runtime Environment (Headless JRE) (openjdk-17-jre-headless)
userspace USB programming library (libusb-0.1)
GCC OpenMP (GOMP) support library (libgomp1)
Locales

apt-get update
apt-get install -y openjdk-17-jre-headless libusb-0.1 libgomp1 locales
Set the locale:

sed -i '/en_US.UTF-8/s/^# //g' /etc/locale.gen && locale-gen
export LANG=en_US.UTF-8; export LANGUAGE=en_US:en; export LC_ALL=en_US.UTF-8
Extract the Digital Identity Service distribution package to any folder.
Link the application libraries:
ldconfig /local/path/to/current/dir/libs
Replace the path /local/path/to/current/dir in the command with your current path. Keep /libs as a suffix in the path.
Activate the DOT license
Please contact your sales representative or sales@innovatrics.com to receive a license. Once a license has been received, please deploy as described below.
Copy your license file iengine.lic for Innovatrics IFace SDK 4.20.0 into {DOT_DIGITAL_IDENTITY_SERVICE_DIR}/license/
How to run
As Digital Identity Service is a stand-alone Spring Boot application with an embedded servlet container, there is no need for deployment on a pre-installed web server.
Digital Identity Service needs a running memcached instance or cluster. memcached must be configured via the externalized configuration first.
Digital Identity Service can be run from the application folder:
java -Dspring.config.additional-location=file:config/application.yml -Dlogging.config=file:config/logback-spring.xml -DLOGS_DIR=logs -Djna.library.path=libs/ -jar dot-digital-identity-service.jar
An embedded Tomcat web server will be started and the application will listen on port 8080 (or another configured port).
Docker
To build a Docker image, use the Dockerfile and the entrypoint.sh script. A Dockerfile example and an entrypoint.sh script example can also be found in the Appendix.
The Docker image should be built as follows:
cd docker
cp ../dot-digital-identity-service.jar .
cp ../libs/libsam.so.* .
cp ../libs/libiface.so.* .
cp ../libs/libinnoonnxruntime.so.* .
cp -r ../libs/solvers/ ./solvers
docker build \
  --build-arg JAR_FILE=dot-digital-identity-service.jar \
  --build-arg SAM_OCR_LIB=libsam.so.* \
  --build-arg IFACE_LIB=libiface.so.* \
  --build-arg INNOONNXRUNTIME_LIB=libinnoonnxruntime.so.* \
  -t dot-digital-identity-service .
Digital Identity Service needs a running memcached instance or cluster. memcached must be configured via the externalized configuration first.
Run the container according to the instructions below:
docker run \
  -v /local/path/to/license/dir/:/srv/dot-digital-identity-service/license \
  -v /local/path/to/config/dir/:/srv/dot-digital-identity-service/config \
  -v /local/path/to/logs/dir/:/srv/dot-digital-identity-service/logs \
  -p 8080:8080 \
  dot-digital-identity-service
Replace the path /local/path/to/license/dir/ in the command with your local path to the license directory.
Replace the path /local/path/to/config/dir/ in the command with your local path to the config directory (from the distribution package).
Important: Replace the path /local/path/to/logs/dir/ in the command with your local path to the logs directory (you need to create this directory, mounted on a persistent drive). The volume mount into the container is mandatory; otherwise the application will not start successfully.
Logging
Digital Identity Service logs to the console and also writes a log file (dot-digital-identity-service.log). The log file is located in a directory defined by the LOGS_DIR system property. Log files rotate when they reach 5 MB in size; the maximum history is 5 files by default.
API Transaction Counter Log
Separate log files following the filename pattern dot-digital-identity-service-transaction-counter.log.%d{yyyy-MM-dd}.%i.gz are located in the directory defined by the LOGS_DIR system property. The %d{yyyy-MM-dd} template represents the date, and %i represents the index of the log window within the day, starting at 0. These log files contain information about counts of API calls (transactions). The same rolling policy is applied as for the application log, except that the maximum history of these log files is 455 files.
For proper billing of transactions, please make sure to always send all transaction logs.
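To make the pattern above concrete, the following sketch expands it into an actual rolled file name (the date below is just an example value, not taken from a real deployment):

```shell
# %d{yyyy-MM-dd} becomes the calendar date of the rolled window,
# %i becomes the window index within that day (starting at 0).
PATTERN_DATE='2024-05-31'
PATTERN_INDEX=0
ROLLED_NAME="dot-digital-identity-service-transaction-counter.log.${PATTERN_DATE}.${PATTERN_INDEX}.gz"
echo "$ROLLED_NAME"
# → dot-digital-identity-service-transaction-counter.log.2024-05-31.0.gz
```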
Docker: Persisting log files in local filesystem
When Digital Identity Service is run as a Docker container, log files can be persisted so they remain accessible even after the container no longer exists. This is achieved by using Docker volumes. To find out how to run the container, see Docker.
Monitoring
Information such as build or license info can be accessed at /api/v1/info. Information about available endpoints can be viewed at /swagger-ui.html.
The health endpoint, accessible at /api/v1/health, provides information about the health of the application. This feature can be used by an external tool such as Spring Boot Admin.
The application also supports exposing metrics in the standardized Prometheus format. These are accessible at /api/v1/prometheus. The endpoint can be exposed in your configuration:
management:
  endpoints:
    web:
      exposure:
        include: health, info, prometheus
For more information, see Spring Boot documentation, sections Endpoints and Metrics. Spring Boot Actuator Documentation also provides info about other monitoring endpoints that can be enabled.
Tracing
The OpenTracing API with the Jaeger implementation is used for tracing purposes. The Digital Identity Service tracing implementation supports SpanContext extraction from an HTTP request using the HTTP Headers format. For more information, see the OpenTracing Specification. Tracing is disabled by default. To enable Jaeger tracing with X-B3-TraceId and X-B3-SpanId headers propagated into tracing and logs:
Set these application properties:
opentracing:
  jaeger:
    enabled: true
    enable-b3-propagation: true
    udp-sender:
      host: jaegerhost
      port: portNumber
For more information about Jaeger configuration, see Jaeger Client Lib Docs.
Architecture
Digital Identity Service is a semi-stateful service. It temporarily retains intermediate results and images in an external cache. This enables the exposed API to flexibly use only the methods needed for a specific use case, without repeating expensive operations. Another advantage is that the user can provide data when available, without the need to cache on the user’s side.
The Digital Identity Service can be horizontally scaled. Multiple instances of the service can share the same cache or a cache cluster.
The services of Digital Identity Service are better suited to shorter-lived processes. The cache can nevertheless be configured to support various use cases and processes.
Cache
The Digital Identity Service currently supports Memcached as a cache implementation.
Various tools exist to monitor the performance of your Memcached server, and we recommend using one.
Memcached configuration
The cache is configurable via the externalized configuration.
It can be configured either with an AWS ElastiCache config endpoint, or with a list of hosted memcached servers.
Efficient memory usage
For optimal performance, the expiration of records must be configured according to the nature of the implemented process:
A short expiration time results in lower memory usage and higher throughput for short requests.
A long expiration time enables longer processing of cached records, at the cost of higher memory requirements.
Memory consumption for longer processes can be lowered by cleaning records once no longer needed. The API provides deletion methods for each resource.
The expiration of records can be configured independently for the onboarding API and for face operations.
Property | Description |
---|---|
innovatrics.dot.dis.cache | |
 | The time in seconds to persist all data created and used by the Onboarding API. Example value: 1800 |
 | The time in seconds to persist face records created and used by the Face API. Example value: 600 |
innovatrics.dot.dis.persistence.memcached | |
 | The host and port of the AWS ElastiCache config endpoint. Format: |
 | The list of host and port pairs of the memcached instances. Only used if the AWS ElastiCache config endpoint is not configured. Format: |
 | The memcached read timeout in milliseconds. Example value: 2000 |
 | The memcached write timeout in milliseconds. Example value: 2000 |
 | The memcached operation timeout in milliseconds. Example value: 5000 |
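As a sketch only, the cache section of application.yml could look like the following. The child key names below (config-endpoint, servers, and the timeout keys) are assumptions for illustration, not confirmed property names — check the application.yml shipped in the distribution package for the exact keys:

```yaml
innovatrics:
  dot:
    dis:
      persistence:
        memcached:
          # Either an AWS ElastiCache config endpoint (hypothetical key name)...
          # config-endpoint: my-cluster.cfg.euc1.cache.amazonaws.com:11211
          # ...or a static list of memcached host:port pairs (hypothetical key name):
          servers: memcached-1:11211,memcached-2:11211
          read-timeout: 2000        # milliseconds
          write-timeout: 2000       # milliseconds
          operation-timeout: 5000   # milliseconds
```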
Authentication and authorization
The Digital Identity Service API is secured with API key authentication, hence an HTTP Authorization header needs to be sent with every request.
The header must contain a Bearer token, which is a UTF-8 Base64-encoded string that consists of two parts, delimited by a colon:
Token part | Description |
---|---|
key | A unique identifier that is received with your license |
secret | A unique string that is received with your license |
The server will return an HTTP 401 Unauthorized response for every request that either does not contain the Authorization header, or whose header contents are invalid (e.g. malformed Base64, or an invalid API key or secret).
Some endpoints are not secured by design (such as /metrics, /health or /info) and do not require any authentication.
Authorization header creation
The following is an example snippet of the structure of API key and secret in the license file:
{
"contract": {
"dot": {
"authentication": {
"apiKeyAndSecrets": [
{
"key": "some-api-key",
"secret": "mb7DZQ6JwesRHkWPbjKVDgGHXxrAHFd6"
}
]
}
},
...
},
...
}
You will need to encode the key and secret parts into a valid UTF-8 Base64 string (the two parts delimited by a colon), e.g.:
some-api-key:mb7DZQ6JwesRHkWPbjKVDgGHXxrAHFd6
The encoding can be performed with the bash command below (note the -n flag, which prevents echo from appending a trailing newline that would change the encoded value):
echo -n 'some-api-key:mb7DZQ6JwesRHkWPbjKVDgGHXxrAHFd6' | base64 -w 0
Once the aforementioned token has been encoded into Base64, each request must contain the Authorization header, which consists of the Bearer keyword and the encoded key and secret:
Bearer c29tZS1hcGkta2V5Om1iN0RaUTZKd2VzUkhrV1BiaktWRGdHSFh4ckFIRmQ2
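Putting the pieces together, a minimal sketch of building the header value from the example key and secret above (variable names are illustrative only):

```shell
# API key and secret from the example license snippet above.
API_KEY='some-api-key'
API_SECRET='mb7DZQ6JwesRHkWPbjKVDgGHXxrAHFd6'

# printf (unlike a bare echo) emits no trailing newline, which would
# otherwise change the Base64 output; -w 0 disables line wrapping.
TOKEN=$(printf '%s:%s' "$API_KEY" "$API_SECRET" | base64 -w 0)

echo "Authorization: Bearer $TOKEN"
# → Authorization: Bearer c29tZS1hcGkta2V5Om1iN0RaUTZKd2VzUkhrV1BiaktWRGdHSFh4ckFIRmQ2
```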
Data isolation
The resources created with one API key are accessible only with that particular API key. This is to prevent any unauthorized access by isolating the created resources in the cache.
Image Data Downloader
The Digital Identity Service API supports two ways to provide an image in its requests:
Base64-encoded image data
a URL to a remote image
Images provided as a URL are downloaded by the Image Data Downloader.
The connection timeout and the read timeout for the Image Data Downloader are configurable via properties.
Property | Description |
---|---|
innovatrics.dot.dis.data-downloader | |
 | The connection timeout for the image data downloader in milliseconds. Default value: 2000 |
 | The read timeout for the image data downloader in milliseconds. Default value: 30000 |
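As an illustration of the first option, an image file can be Base64-encoded on the client before being embedded in a request payload. The JSON field name image.data below is an assumption for illustration only — consult swagger.json in the distribution package for the actual request schema:

```shell
# Create a stand-in file for a real JPEG selfie (illustration only).
printf 'fake-image-bytes' > /tmp/selfie.jpg

# Base64-encode it without line wrapping, as required for a JSON string value.
IMAGE_B64=$(base64 -w 0 /tmp/selfie.jpg)

# Embed it in a request payload (the field name is hypothetical).
printf '{"image": {"data": "%s"}}\n' "$IMAGE_B64" > /tmp/payload.json
cat /tmp/payload.json
```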
Logging Transactions via Countly
For billing purposes, all transactions performed must be reported by every running instance of the Digital Identity Service.
The Digital Identity Service can be configured to periodically publish metadata about executed transactions to Countly. The URL of the Countly server must be configured to set up automatic reporting.
No sensitive details are stored; only information about transaction counts, the outcome of operations, and the quality of inputs. The collected statistics may subsequently be used to improve system performance in your environment.
All data published to Countly is also logged to the dot-digital-identity-service-countly-event.log file. If an external Countly server cannot be integrated to set up automatic reporting, transactions can be reported by sending this file’s contents.
Property | Description |
---|---|
innovatrics.dot.countly-connector | |
 | The URL of the Countly server. If the property is not configured, transactions are not automatically reported. |
 | The update interval in seconds for reporting transactions. Default value: 60 |
Customer Onboarding
The Customer Onboarding API enables a fully digital process to remotely verify a person’s identity and enroll them as a new customer.
During onboarding, a person registers with a company or government entity. They provide their identity document and one or more selfies to prove their identity.
With a digital onboarding process powered by Digital Identity Service, a company can easily and securely convert a person into a trusted customer.
Standard Onboarding Flow
The recommended customer onboarding process looks like this:
To use any part of the Customer Onboarding API, the create customer endpoint must be called first. The customer will be persisted for a configurable amount of time (see the config section). Once created, additional actions can be performed while the record is persisted.
The data-gathering steps (2-4) can be performed in any order. Extracted data can be deleted or replaced by repeating the same action with different inputs.
The results of the get customer request (5) or inspection steps (6-7) depend on data previously gathered.
Once the onboarding has been completed, the customer can be deleted to reduce required memory. Deleting a customer will remove any related data, such as selfies and document pages. Otherwise, the data will expire after a configured amount of time.
Actions for onboarding a customer have to be performed sequentially; parallel processing of the same customer is not allowed. If there are concurrent requests on any resource belonging to the same customer, only one such request will succeed and the rest will fail with an error (409 Conflict). For example, the front and back pages of the document cannot be uploaded in parallel.
Create Customer
To create a customer, a POST /customers request must be made.
The response will contain a link to the newly created customer resource, as well as the ID of the customer.
Add Selfie
To provide a selfie for a customer, a PUT /selfie request must be made on the customer resource.
Either a Base64-encoded image or a URL to the input image must be provided; providing both is not permitted.
A successful response will contain the position of the detected face in the input image, the confidence, and a link to the newly-created customer selfie resource. The response may also contain a list of warnings. An unsuccessful response will contain an error code.
The face position is represented by the face rectangle.
The detection confidence contains a score from the interval <0.0,1.0>. Values near 1.0 indicate high confidence that a human face was detected.
Each customer can have at most one selfie. An existing selfie can be replaced by adding a new one.
Once the face has been detected, you can:
Compare biometric data from the selfie with data extracted from other sources
Get any extracted biometric information from the selfie via the Get Customer request
Face Detection Configuration
Face detection on a customer’s selfie is configurable. The speed, accuracy, and other aspects can be adjusted according to needs and available resources. Find more details about image requirements, face detection speed-accuracy modes, and face size ratio in the Face API section of this document.
Liveness Check
Liveness check allows verification of interaction with a live, physically present person. It can distinguish live faces from photos, videos, 2D/3D masks, and other attacks.
The Digital Identity Service provides various approaches to verify liveness:
The liveness check generally comprises the 3 following steps:
Create Liveness Check
To create a liveness check, a PUT /liveness request must be made on the customer resource.
The response will contain a link to the newly-created customer’s liveness resource.
Add Selfie to Liveness Check
To add a selfie, a POST /liveness/selfies request must be made on the customer’s liveness resource.
To reuse a selfie that was already added as the customer’s selfie, a reference to it must be specified in the payload.
The other option is to provide a new selfie for the liveness check. In this case, either a Base64-encoded image or a URL to the input image must be provided; providing both is not permitted.
For each selfie added to the liveness check, the assertion must be specified. The provided assertion will determine if and how the selfie will be used for the selected liveness method evaluation in the next step.
The successful response will be empty.
If the quality of the selfie does not fully match the requirements for evaluation, the response will contain a warning. Such a selfie can still be used to evaluate liveness, but the result is not guaranteed to be reliable. If you do not wish to proceed with this selfie, delete the liveness resource and start again by creating a new one.
If the selfie was not accepted, the response will contain an error code.
Multiple selfies can be added to one liveness check.
The Digital Identity Service will try to detect a face on every selfie provided. The configuration of face detection on selfies is explained in this chapter.
Evaluate Liveness
To evaluate liveness, a POST /liveness/evaluation request must be made on the customer’s liveness resource.
The type of liveness check to be evaluated must be specified.
A successful response will contain a score from the interval <0.0,1.0>. Values near 1.0 indicate high confidence that the associated selfies contained a live person.
An unsuccessful response will contain an error code.
The evaluation can be repeated for different types of liveness on the same liveness resource. Only selfies with a relevant assertion will be used for a given type of liveness.
Passive Liveness Check
The passive liveness check is a process of determining whether the presented face is a real person without requiring the user to perform any additional actions.
It is recommended to perform this check on the customer’s selfie. A user can add the existing customer’s selfie to the liveness check by providing a reference to it.
To add a selfie for passive liveness evaluation, the assertion must be set to NONE. Only selfies with this assertion will be evaluated for passive liveness.
To evaluate passive liveness, the type of liveness needs to be specified as PASSIVE_LIVENESS.
Passive liveness can be evaluated once at least one selfie with the correct assertion has been added. If there are multiple selfies with the corresponding assertion, the returned score will be the average of their individual scores.
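A quick sketch of the averaging behaviour — assuming, hypothetically, that three NONE-assertion selfies scored 0.91, 0.87 and 0.95, the evaluation would return their mean:

```shell
# Average of three hypothetical per-selfie passive-liveness scores.
awk 'BEGIN { printf "%.2f\n", (0.91 + 0.87 + 0.95) / 3 }'
# → 0.91
```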
Eye-gaze Liveness Check
Eye-gaze liveness check is the process of determining whether the presented faces belong to a real person, by requiring the user to follow an object displayed on the screen with their eyes.
This check is recommended for applications where security is paramount, and is recommended as an additional step after performing the passive liveness check.
Follow these steps to implement the eye-gaze liveness check:
generate object movement instructions randomly on your application server
send these instructions to the client and ask the customer to follow the movement of the object with his/her eyes
capture photos of the customer while he/she follows the object, and add them to the liveness with a corresponding assertion
Selfies for eye-gaze liveness need to have one of the following assertion values: EYE_GAZE_TOP_LEFT, EYE_GAZE_TOP_RIGHT, EYE_GAZE_BOTTOM_LEFT, EYE_GAZE_BOTTOM_RIGHT.
Each of these assertions corresponds to the position of the object at the moment the photo was taken.
Selfies with assertions need to be provided sequentially, in the order captured. Parallel processing is not allowed.
Eye-gaze liveness can be evaluated only once the required number of selfies with relevant assertions has been added.
The minimum number of selfies for eye-gaze liveness is configurable via a property.
Smile Liveness Check
Smile liveness check is the process of determining whether the presented faces belong to a real person by requiring the user to change his/her expression.
Follow these steps to implement the smile liveness check:
ask the customer to maintain a neutral expression and then smile
capture photos of the customer with both expressions, and add them to the liveness with a corresponding assertion
Selfies with assertions need to be provided sequentially. Parallel processing is not allowed.
Smile liveness can be evaluated only once selfies with both SMILE and NEUTRAL assertions have been added.
Customer Document Operations
The Onboarding API provides services to recognize and process a customer’s photo identity documents. (Only identity documents containing a photo of the holder are usable for remote identity verification.)
The process starts with creating an identity document. At this point, information about the document type and/or edition can be provided. The parts of the document to be processed can also be specified.
The second step is to upload pictures of the document pages. The system will try to detect and classify the document on the picture.
Once at least one page has been successfully recognized, it is possible to:
Compare biometric data from the document portrait with the selfie
Get any extracted information from the document via the Get Customer request
Supported Identity Documents
The Digital Identity Service can support identity documents of the following types:
Passports
Identity cards
Driving licenses
Foreigner permanent residence cards
and other cards of similar format that include the holder’s photo
Support for document recognition comes in two levels:
Level 1 support
Level 1 support includes all documents compliant with ICAO machine-readable travel document specification.
The Digital Identity Service can process the document portrait and parse data from the machine-readable zone of documents with this level of support.
Level 2 support
For Level 2 support, the Digital Identity Service needs to be trained to support each individual document type and its edition.
Once the document is supported, the Digital Identity Service can process any data available on it.
The list of documents with Level 2 support can be found via the get metadata endpoint.
If a required ID document type does not have Level 2 support, contact Innovatrics to request support for that document type in a future version of the Digital Identity Service.
Get Metadata for documents with Level 2 support
To get the full list of documents with Level 2 support, make a GET /metadata request.
The response contains a list of documents supported by the current version of the Digital Identity Service and the metadata for each document.
The metadata for an individual document contains a list of its pages. For each page, there is a list of text fields that the Digital Identity Service was trained to OCR.
For each text field, there is information about whether the field’s value is returned as found on the document, or whether it is normalized and returned in a standard format.
If present on the document, there is also the original label for each text field.
Document Classification
The amount of data that the Digital Identity Service can extract from an identity document depends on how precisely it can classify this document.
There are 3 levels of classification:
The Digital Identity Service tries to classify the document up to the level that allows the processing of all requested document sources:
It will try to fully classify the document if the processing of visual zone or barcodes was requested.
Otherwise, it will only try to recognize the travel document type of the document.
If the document was not at least partially classified, it will be processed as an unknown document.
The classification of a document can be affected by classification advice that can be optionally provided in the create document request payload.
It can be also affected by optional advice on the type of page in the add document page request payload.
Full classification
A full classification means the Digital Identity Service knows the type of the document, its issuing country, the exact edition, and the type of travel document if the document is compliant with travel document specifications.
Only documents that have Level 2 support can be fully classified.
Any document source on a fully classified document can be processed. That means the Digital Identity Service can:
OCR textual data from the visual zone
parse data from the machine-readable zone
decode data from barcodes
extract biometric information from the document portrait
check input for tampering by inspecting the color profile of the image
identify image fields: signature, fingerprint, ghost portrait and document portrait
Partial classification
A partial classification means the Digital Identity Service knows the type of the travel document.
With a partially classified document, only the machine-readable zone and the document portrait sources can be processed. That means the Digital Identity Service can:
parse data from the machine-readable zone
extract biometric information from the document portrait
Partial classification is possible for any document with Level 1 support.
A document can be partially classified only after a page containing a machine-readable zone is provided. That means:
A TD1 document can be partially classified after the back page is provided. If the front page was provided first, it will remain unrecognized until the back page is added.
TD2 and TD3 documents can be partially classified after the front page is provided.
Document not recognized
If the Digital Identity Service was unable to recognize either the document’s exact edition or its travel document type, the document will be processed as an unknown document.
With an unknown document, the Digital Identity Service can only process the document portrait source. If a portrait is present on the provided page, the Digital Identity Service can:
extract biometric information from the document portrait
The system keeps only the last provided page for an unknown document. If multiple images are provided and the document is still unknown, all previous pages are replaced by the last one.
Classification of an additional page
Once the document is at least partially classified, any page added later has to match the existing classification.
That means if the document is fully classified, then it will only accept pages from the same document edition.
If the document is partially classified, then it will accept pages from documents with the same travel document type.
If the document is not recognized, it will accept pages of any type.
The level of classification of a document can be increased with an additional page. For example, the exact edition of a document that is only partially classified as a travel document of TD1 type can subsequently be specified by recognizing it from an additional page. The recognized edition has to be compliant with the already recognized type of travel document. The classification level will move from partial classification to full classification.
If the document was classified incorrectly, the whole document needs to be deleted and the process started again. Classification can be improved by providing classification advice and/or by providing images of better quality.
Create Document
To create an identity document for a customer, make a PUT /document request on the customer resource.
The performance of document processing can be improved by providing classification advice and/or specifying the data sources on the document to be processed.
The response will contain a link to the newly created customer document resource.
There can be at most one document per customer. An existing document can be replaced by creating a new document for the customer.
Classification Advice
If it’s known upfront what type of document will be uploaded, performance of the classification can be improved by providing classification advice.
Classification advice can influence how the document will be recognized. Potential candidates can be restricted by specifying allowed countries, document types, editions, and/or travel document types.
If no advice is provided, the system will perform the classification considering all supported document types.
Document Sources
The performance of document processing can be improved by specifying what parts of the document need to be processed.
Provide a list of document sources that need to be processed. If the list of sources is not provided in the request, or if it is empty, then the Digital Identity Service will try to process all of them.
Document Source | Description | Requirements |
---|---|---|
visual zone | | document page needs to be fully recognized |
machine-readable zone | | the type of machine-readable travel document needs to be recognized |
document portrait | | document portrait needs to be present on the provided page |
barcode | | document page needs to be fully recognized |
Add Document Page
To add a page to the identity document for a customer, make a PUT /pages request on the customer’s document resource.
The performance of the page’s processing can be improved by specifying whether it is a front or a back page in the classification advice.
The optional classification advice in the add document page request can specify only the type of page. To provide advice on the type of document, use the classification advice in the create document request.
A successful response will contain info about the classified document type and the recognized type of page. It will also contain the position of the detected document in the input image, the confidence, and a link to the newly created document page resource.
The response may contain a list of warnings.
An unsuccessful response will contain an error code.
When a page for a document is provided, the Digital Identity Service will try to recognize the type of page and the type of document. This process is called classification and is described in chapter Document Classification.
Image requirements
Ideally, the photo of the identity document should be captured with Innovatrics’ auto-capture components, whether in mobile libraries or browser-based. These components ensure the quality requirements mentioned below:
The supported image formats are JPEG and PNG
The document image must be large enough — when the document card is normalized, the text height must be at least 32 px (document card height is approximately 1000 px)
The document card edges must be clearly visible and be placed at least 10 px inside the image area
The image must be sharp enough for the human eye to recognize the text
The image should not contain objects or a background with visible edges (see the example below), as this can confuse the process of detecting the card in the image.