The Cloud Matcher is distributed as a set of Docker images hosted on the GitLab Container Registry and a set of docker-compose files, scripts and configuration files hosted on GitHub. The Docker images provide an easy way of deploying and scaling the SmartFace Cloud Matcher with all the benefits of containerization.

To pull the Docker images you will need credentials (username and password) available on our Customer Portal. Your sales representative will provide the credentials for the Customer Portal login.

Please note: if you still use the old CRM portal, please visit the old CRM.

Get The Cloud Matcher

System requirements

The deployment is limited to Linux platforms supported by the Docker technology. For more information see the Docker Install documentation. Docker Compose is only one of the ways to orchestrate SmartFace Platform containers; you can find more information in the Docker Compose documentation.

Docker Compose is used as the orchestration engine because it is simple and easy to use. More robust orchestration engines for production workloads are available, such as Kubernetes, Nomad or Docker Swarm. It is up to you which engine you use to deploy the SmartFace Cloud Matcher Docker images.

List of requirements:

  • CPU supporting the AVX2 instruction set, e.g. Intel Haswell microarchitecture or AMD Zen family
  • Docker engine (version 20.10.0 and higher), Docker CLI
  • Docker Compose (version 1.29.0 and higher)
  • Git for version control
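
You can verify the version minimums above from the shell. The sketch below uses sort -V (GNU coreutils) to compare version strings; the "have" values are placeholders, so substitute the output of docker --version and docker-compose --version on your machine:

```shell
# Returns success when the installed version meets the required minimum.
# sort -V orders version strings; if the minimum sorts first, we are new enough.
version_ok() {
  min="$1"; have="$2"
  [ "$(printf '%s\n' "$min" "$have" | sort -V | head -n1)" = "$min" ]
}

# Placeholder installed versions; replace with your real ones.
version_ok "20.10.0" "24.0.7" && echo "docker ok"
version_ok "1.29.0"  "1.29.2" && echo "compose ok"
```

On Linux you can also check for AVX2 support with `grep avx2 /proc/cpuinfo`.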

Download The Cloud Matcher

You can download the SmartFace distribution using the Git command:
git clone

Distribution contents

Once the repository is cloned into your local directory you will see the smartface folder with a set of files and folders inside. The included sf-docker folder contains several preset use case examples that can be used right away. The examples are organized in folders and each example preset has its own folder; the contents vary. The list of example presets is:

  • access-control - preconfigured for the Access Control use case
  • all-in-one - an All-in-One setup known from previous versions; contains all available services
  • cloud-matcher - a Cloud Matcher deployment sample
  • multi-camera - a Multi Camera setup
  • multi-server - sample of the SmartFace distributed on 3 servers
  • nvidia-jetson - a sample of SmartFace running on [Nvidia Jetson](jetson-developer-kits) devices
  • rapid-video-processing - sample preset for video processing and investigation
  • single-camera - sample for an easy one camera setup
  • special \ jetson-cloud-matcher - a sample of the Cloud Matcher running on [Nvidia Jetson](jetson-developer-kits) devices
  • special \ sf-with-keycloak - a sample of SmartFace using Keycloak authentication

On this page we focus on the cloud-matcher example and omit the other presets. With this in mind we get the file structure below. Please see the description of each file included:

  • api folder with the Swagger API file
    • swagger.json Swagger API file
  • sf-docker folder with Docker Compose files and configuration files
    • cloud-matcher
      • sf_dependencies folder with the SmartFace dependencies Docker Compose files
        • docker-compose.yml a Docker Compose configuration file that sets up the Docker containers for the SmartFace dependencies
        • etc_rmq folder with configuration files for RabbitMQ
          • enabled_plugins file to set RabbitMQ plugins
          • rabbitmq.conf configuration file for RabbitMQ
      • .env configuration file to set up the SmartFace Platform
      • ‘read me’ file about how to deploy and set up a basic installation
      • docker-compose.yml a Docker Compose configuration file that sets up the Docker containers for the SmartFace Platform (general use case)
      • initialization script for the SmartFace Cloud Matcher use case
  • windows
    • ‘read me’ file about how to set up and manage Windows services
  • ‘read me’ file about how to get started

Installation Steps

  1. Enter a preferred server location such as /srv/smartface/ and get the distribution contents using Git command:
    git clone

  2. Log in to the container registry with the credentials provided by our Customer Portal (please note: for the old CRM portal visit the old CRM), using the command:
    docker login -u <username> -p <password>

  3. Identify the hardware ID of your machine using the command:
    docker run

  4. Obtain a license for your hardware ID (identified in the previous step) from our Customer Portal

  5. Copy the license obtained from our Customer Portal to the /srv/smartface/sf-docker/cloud-matcher directory

  6. To initialize and run the SmartFace Cloud Matcher, run the initialization script from the sf-docker/cloud-matcher directory. The script orchestrates the initialization of Docker and the setup of the Docker containers with the SmartFace Cloud Matcher in mind, including the initial database and dependencies setup.

⚠️ It is suggested to use a location other than the default for production deployments. Do not use the git working location for your production files, to avoid any unexpected changes when updating the git directory.


It is suggested that you add your user to the Docker user group so that the sudo command is not needed for running Docker containers. This can be done with the commands below:

Create the Docker group.
$ sudo groupadd docker

Add your user to the Docker group. $USER expands to your current username; replace it if you are adding a different user.
$ sudo usermod -aG docker $USER

Log out and log back in to apply the changes; alternatively, run the command below:
$ newgrp docker

Custom Deployment

In case you would like to perform a custom and/or large deployment of the SmartFace Cloud Matcher, e.g. spawn more instances of the respective services, deploy an MS SQL database instead of PostgreSQL, use a different orchestration engine, or make any other custom change, you can deploy and configure the SmartFace Cloud Matcher based on your needs and the use case for which it will be used.

The configuration can be applied in the provided docker-compose.yml, sf_dependencies/docker-compose.yml and .env files.

Adjusting configuration files


docker-compose.yml

This file provides the configuration for the Docker Compose setup. Inside the file you can see a list of Docker containers, their parameters and inputs, and the setup of the internal network.

Example of a container definition:

   image: ${REGISTRY}sf-detector:${SF_VERSION}
   container_name: SFDetectCpu
   environment:
     - RabbitMQ__Hostname
     - RabbitMQ__Username
     - RabbitMQ__Password
     - RabbitMQ__Port
     - AppSettings__Log-RollingFile-Enabled=false
   volumes:
     - "./iengine.lic:/etc/innovatrics/iengine.lic"

Terms used:
image - the Docker image and path to be used
container_name - the name to be used in Docker listings
environment - a list of environment variables for this Docker container
volumes - the path mapping for the license file

You can add additional Docker containers and their setup in a similar manner to add additional functionality for your installation.
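
As an illustration, a hypothetical extra service could be declared alongside the existing ones. The service name, image name and container name below are made up; the environment variables and license volume mirror the example above:

```yaml
my-extra-service:
  image: ${REGISTRY}sf-some-service:${SF_VERSION}   # hypothetical image name
  container_name: SFMyExtraService                  # hypothetical container name
  restart: unless-stopped
  environment:
    - RabbitMQ__Hostname
    - RabbitMQ__Username
    - RabbitMQ__Password
    - RabbitMQ__Port
  volumes:
    - "./iengine.lic:/etc/innovatrics/iengine.lic"
```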


sf_dependencies/docker-compose.yml

This is another docker-compose configuration file. For practical purposes the containers are separated into two docker-compose files: once the dependencies are set up they usually do not change as often, and they do not need to be replaced or restarted while minor changes are made to the rest of the Docker installation.

The setup of data volumes, the SQL and NoSQL databases, RabbitMQ and Jaeger tracing is done here.

An example of a SQL container setup is below:

   image: ""
   container_name: mssql
   ports:
     - "1433:1433"
   environment:
     - SA_PASSWORD=Test1234
   restart: unless-stopped
   volumes:
     - mssqldata:/var/opt/mssql

image - the Docker image and path to be used
container_name - the name to be used in Docker listings
ports - the ports to be used by this container
environment - a list of environment variables for this Docker container; in this sample it shows connection and run parameters for the MS SQL database
restart - you can set up a restart policy here; for more information please visit the Docker documentation
volumes - the volumes this Docker container has access to


.env

This configuration file contains information about the Docker environment setup. You can customize several settings here, such as the desired version of SmartFace, the database used and its connection setup, or the RabbitMQ configuration.

RabbitMQ connection setup example:

# RMQ config

Database configuration example:

ConnectionStrings__CoreDbContext=Server=pgsql;Database=smartface;Username=postgres;Password=Test1234;Trust Server Certificate=true;
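
If you deploy the bundled MS SQL container instead of PostgreSQL, the connection string changes accordingly. The line below is a sketch using standard ADO.NET connection-string keys and the sample credentials shown on this page; your server name, database name and password may differ:

```
ConnectionStrings__CoreDbContext=Server=mssql;Database=smartface;User Id=sa;Password=Test1234;TrustServerCertificate=true;
```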

Version setup example:

# Version


Scaling

You can scale the desired containers up. This means that instead of one Docker container, a set of X containers is created each time the containers start. This is very useful for scaling up the containers that provide processing power for your tasks, namely the detectors, extractors, liveness, and matchers.

The number of containers running is proportional to the number of tasks that can be performed at once.

This can easily be done by adding a replicas: X line in the container configuration under the deploy: section, where X is the number of containers to be created.

    image: ${REGISTRY}sf-extractor:${SF_VERSION}
    restart: unless-stopped
    deploy:
      replicas: 5
    environment:
      - RabbitMQ__Hostname
      - RabbitMQ__Username
      - RabbitMQ__Password
      - RabbitMQ__Port
      - AppSettings__Log-RollingFile-Enabled=false
      - AppSettings__USE_JAEGER_APP_SETTINGS
      # - Gpu__GpuEnabled=true
      # - Gpu__GpuNeuralRuntime=Tensor
    volumes:
      - "./iengine.lic:/etc/innovatrics/iengine.lic"
      # - "/var/tmp/innovatrics/tensor-rt:/var/tmp/innovatrics/tensor-rt"
    # runtime: nvidia
⚠️ To apply scaling changes, please restart the docker-compose.

Cloud Scaling

On top of the on-premise deployment, the SmartFace Cloud Matcher supports deployments in the cloud. Cloud deployments enable scaling the cloud environment to match your needs using the same services as the on-premise deployments. Scaling can be done over a variable number of cloud machines, so it is up to you and your custom installation to achieve your goals. The deployment can also be done with other container orchestration systems, such as Kubernetes or Nomad, instead of Docker Compose.

Applying Changes to the Configuration

The configuration files are read and used every time docker-compose starts. This means that to apply a change in the configuration, we need to restart the containers. We can do so safely by running a set of commands in the appropriate folder. Changes related to the dependencies need to be applied in the sf_dependencies folder; changes related to the main containers need to be run in the folder with the main docker-compose.yml file.

To restart Docker containers safely please run the following set of commands:

docker-compose stop;
docker-compose down;
docker-compose up -d

The parameter -d runs the containers in detached mode, i.e. they run in the background, do not occupy the terminal, and do not show log information as it is generated.

After changes to the setup of containers it is possible to remove existing orphan containers (containers not defined in the docker-compose file) by adding the parameter --remove-orphans, such as docker-compose up -d --remove-orphans.


Updating the Cloud Matcher’s version

Git allows you to pull recent changes directly from GitHub each time a new release comes out. However, it is not best practice to use the GitHub working repository directly as a production environment, as your specific configuration and setup could be affected and possibly reset to the defaults. It is good practice to keep your installation files separate from the location where your Git clone is initialized.

For a version update without an additional upgrade to the set of containers running or any major platform changes, you can do the update by:

  1. updating the # Version section of the .env file to match the release numbers as per the information available on the release page:
# Version
  2. stopping the docker compose:
docker-compose stop;
docker-compose down;
  3. running the initialization script again
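
The version bump in step 1 can be sketched from the shell. SF_VERSION is the variable referenced by the compose files; the file path and version numbers below are placeholders for demonstration:

```shell
# Demonstration on a stand-in file; in a real update you would edit
# the .env in sf-docker/cloud-matcher.
printf 'SF_VERSION=5.0.0\n' > /tmp/demo.env
# Rewrite the SF_VERSION line to the new release number.
sed -i 's/^SF_VERSION=.*/SF_VERSION=5.1.0/' /tmp/demo.env
# Verify the new value before restarting the containers.
grep '^SF_VERSION=' /tmp/demo.env
```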

As explained, the guide above is applicable only if there are no significant changes requiring additional configuration or file structure changes. If you want to know more about updates and upgrades between versions, please take a look at the releases page.


Logs

Each container creates its own logs during its lifetime. You can access them in real time if needed. To do so, you need to know the name of the container you are interested in.

To find out the list of containers and their names you can use this command:
docker ps -a

The NAMES column shows the names of the containers.

Once you have the name of a container you would like to know more about, you can invoke its logs with the command:
docker logs <ContainerName>

To have the logs continuously fed to your screen, use the additional parameter -f, such as:
docker logs <ContainerName> -f

To see a continuous log of events as they happen across the whole setup at once (useful for real-time debugging), you can run the command below in the folder where the docker-compose.yml file is located:
docker-compose logs --tail=0 -f

Due to the nature of the Docker container system it might be useful to locate all logs on the file system directly. You can do so with the following command:
sudo du -h $(docker inspect --format='{{.LogPath}}' $(docker ps -qa))

All the logs can be scraped and organized into a central log system such as Grafana Loki within a customized setup and deployment.


Tracing

The SmartFace Cloud Matcher provides tracing, allowing you to trace the events and steps performed during REST API calls. Jaeger is used as the tracing engine. By default it is turned off. To turn it on you need to edit the .env configuration file and set the AppSettings__USE_JAEGER_APP_SETTINGS variable to true.
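
The toggle in the .env file looks like this (variable name as used throughout this deployment):

```
AppSettings__USE_JAEGER_APP_SETTINGS=true
```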

Once the .env configuration is updated, you need to restart the Docker containers to apply the changes (see Applying Changes to the Configuration above). Once Jaeger tracing is enabled, you can visit its web interface on port 16686 (such as http://localhost:16686).

On the main page of the web interface you can use the Search function to find tracings you are interested in.

Once you find a tracing you are interested in you can click on the tracing to get more information.

More detailed information is provided including each step and how long it takes to perform the step.

More information about Jaeger tracing can be found in the official Jaeger documentation.