Advanced Features

In addition to real-time matching of a detected face against faces stored in watchlists, SmartFace is also capable of searching among all detected faces stored by SmartFace from all real-time camera streams or from offline video files processed by Rapid Video Processing.

The SmartFace Face Search feature is provided by the SmartFace Face Matcher service. This service keeps all available face templates in memory for the best search performance. The list of face templates is constantly updated with new templates from camera streams and offline video processing.

For detailed instructions on how to perform a face search, please see the guides section.

Face search HW requirements

The Face Matcher service keeps all available face templates in memory for the best search performance. If you have a long history of detected faces in the database from many high-traffic cameras, you need to consider allocating dedicated HW resources just to run this service.

⚠️ It is important to allocate enough resources to Face Matcher, otherwise it can have a major impact on the core functionality of the SmartFace system. In case you do not use the Face Search feature, it is recommended to remove Face Matcher from your system design.
ℹ️ All measurements were done on an x86 PC with the following configuration: i7-7700 CPU, 3.60 GHz, Microsoft SQL Server 2016. All times may vary depending on your system HW and SW configuration.

Memory footprint of Face Matcher

A reference measurements table is available below:

Faces in DB | Memory usage
1,151,554   | 1.9 GB
2,403,044   | 3.5 GB

Example: If your face database contains e.g. 10,000,000 faces, you should count on approximately 15 GB of memory for the loaded templates, plus roughly a 20% increase in memory during search requests.

Initial loading time

It takes approximately 3.5 minutes to initially load 1 million face templates from the database into Face Matcher if the database is on the same server as the service itself. The loading time increases linearly with the number of faces.

Example: If your face database contains e.g. 10,000,000 faces, the initial load time will be approximately 35 minutes.

Search time

The total duration of a face search consists of two parts:

  • matching time - the in-memory match of all loaded templates against the uploaded face

  • results insert time - the time needed to insert SearchSessionObjects into the database

Matching time on 1 million face templates in memory is approximately 500 ms.

The results insert time depends on the number of results that need to be inserted, the database access speed and the available HW.

Reference measurements are available below:

Number of results | Insert time
1,017,358         | 40s
100,000           | 2s

Example: If your face database contains e.g. 10,000,000 faces and you expect 1% of them to match (~100,000), you can calculate the search time as follows:
Search Time = Matching Time + Results Insert Time = 0.5s * 10 + 2s = approx. 7 seconds

Cleanup process time

A reference measurements table is available below:

Session object count | Approx. cleanup time
100                  | Instant
1,000                | Instant
10,000               | Instant
100,000              | 2s
1,000,000            | 20s

Grouping

Grouping is a process of organizing detected faces into groups based on their biometric similarity. In SmartFace Platform, each group of similar faces represents a unique individual. Therefore, an individual consists of a set of faces which are similar based on their matching score. The grouping functionality, the same as matching, is based on the comparison of biometric templates extracted from detected faces. For more information about this topic, see Matching.

The grouping process is limited by the number of faces which can be grouped. Because of this limitation, the grouping process behaves differently when performed on live video streams than on uploaded video and image files. See the explanation below.

Images and video files

Images and video files contain a limited number of faces, therefore grouping may be performed on all the faces detected in the uploaded files. The faces are compared with each other and grouped into individuals based on their similarity.

For example, you may upload a gallery of photos from an event (team building, wedding, etc.). SmartFace Platform automatically detects faces and extracts templates. After that, grouping organizes all similar faces into individuals. If you later upload other photos into the same collection, SmartFace Platform will again compare the newly detected faces with all faces in the specific collection and group similar ones into individuals.

Live video streams

A live video stream is a continuously incoming input and may contain an unlimited number of detected faces. Therefore, it is not possible to process all faces by grouping at once. It is necessary to limit their number by defining a time period over which SmartFace Platform groups faces continuously in time. Faces detected during the defined time period are compared with each other and grouped into individuals.

For example, if you define a time period of 8 hours for grouping, SmartFace Platform groups faces of a person who reappears in front of the cameras within this time period into the same unique individual. If the time difference between the appearances of this person is greater than this time period, SmartFace Platform assigns the newly detected faces of this person to a different individual.

Step by Step Guide to Grouping

For more information about how to set up the Grouping Feature, please read the guide.

Watchlist autolearn

Watchlist Autolearn is a feature developed to considerably increase the accuracy of identification of people registered in watchlists. This feature is mostly useful in access control use cases, where people registered in watchlists recur periodically, usually on a daily basis.

Every day, Watchlist Autolearn automatically selects a face image from all the matches of a person against a watchlist member and adds the image to the corresponding watchlist member.

The face images are collected over a user-defined time period. At the end of this time period (the default value is 30 days), the oldest image added to the watchlist member is replaced by a new image from the last day. This means that over this time period, watchlist members accumulate multiple images of themselves which are updated periodically and which represent their current appearance.

The feature ensures that SmartFace Platform can match a person with the corresponding watchlist member with a higher accuracy. In addition, matching isn’t influenced by changes in the face of a person, as the collected images reflect the current face of the matched person.

Selection of the face image added to the watchlist member is based on the selection threshold. Watchlist Autolearn adds to the watchlist only a face with a matching score equal to or higher than the selection threshold.

Autolearn face clustering

To increase the positive impact of Watchlist Autolearn, the faces selected daily are added into separate collections. These collections are called clusters. Currently, two clusters are supported:

  • No mask cluster
  • Face mask cluster

Matched faces where a face mask is present are added to the face mask cluster. Matched faces where a face mask is not present are added to the no-mask cluster. Because the autolearn faces are split into multiple clusters, SmartFace Platform can optimize the selection of faces on a per-cluster basis and also optimize face matching.

⚠️ When updating SmartFace Platform from previous versions, which do not support face mask detection and face clustering, Watchlist Autolearn will automatically sort faces into clusters based on face mask presence on its first run after the installation. Note that some autolearn faces may be removed from the watchlist because they do not conform to any cluster, i.e. the extraction is not able to assess whether a mask is present or not.

Configuration of Watchlist Autolearn

Via REST API

The Watchlist Autolearn feature can be set and configured with the REST API using the PUT /api/v1/Setup/Watchlists/AutoLearn endpoint.

By default, the autolearn feature is disabled. It can be configured via the WatchlistAutoLearnConfig configuration, whose parameters are described below.

  • Enabled (default: false) - A flag which indicates whether the Watchlist Autolearn feature should be started at the defined ExecutionStartTime.

  • ExecutionStartTime (default: null) - Time of day (UTC) when Watchlist Autolearn runs. The format is hour:minute:second, for example: 23:00:00.

  • SelectionThreshold (default: 50) - The minimal threshold for the Selection strategy, used for the no mask cluster. A higher threshold can decrease the chance of adding an incorrect face image to the watchlist member. A lower threshold can cause watchlist poisoning, when a face image which doesn't belong to the watchlist member might be added. We recommend setting the value higher than your matching threshold.

  • MaskedSelectionThreshold (default: 70) - The minimal threshold for the Selection strategy, used for the face mask cluster. A higher threshold can decrease the chance of adding an incorrect face image to the watchlist member. A lower threshold can cause watchlist poisoning, when a face image which doesn't belong to the watchlist member might be added. We recommend setting the value higher than your matching threshold.

  • MaxAutoLearnFacesCount (default: 30) - The maximum number of Watchlist Autolearn faces stored for one watchlist member in one cluster. Only one autolearn face is added per day.

  • NoFaceMaskConfidenceThreshold (default: -3000) - Required face mask confidence for a face to be selected into the no face mask cluster. The detected face needs to have a FaceMaskConfidence lower than this value.

  • FaceMaskConfidenceThreshold (default: 3000) - Required face mask confidence for a face to be selected into the face mask cluster. The detected face needs to have a FaceMaskConfidence higher than this value.

Matched faces with a FaceMaskConfidence value between the configured NoFaceMaskConfidenceThreshold and FaceMaskConfidenceThreshold (default -3000 to 3000) are not added to the selection by the autolearn.

⚠️ The Watchlist Autolearn feature does not work if the collection of data from the processed inputs is disabled. It is possible to enable the data storage and store only match results for Watchlist Autolearn purposes. For more information, see Data storage configuration.
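
As an illustration only, a configuration request might look like the sketch below. The property names mirror the parameters above, but the exact request body and the API host and port depend on your deployment and should be verified against your API reference; the threshold values are placeholders:

# Hypothetical example; replace the host, port and values with your own
curl -X PUT "http://<smartface-api-host>:<api-port>/api/v1/Setup/Watchlists/AutoLearn" \
  -H "Content-Type: application/json" \
  -d '{
        "Enabled": true,
        "ExecutionStartTime": "23:00:00",
        "SelectionThreshold": 55,
        "MaskedSelectionThreshold": 75,
        "MaxAutoLearnFacesCount": 30,
        "NoFaceMaskConfidenceThreshold": -3000,
        "FaceMaskConfidenceThreshold": 3000
      }'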

Via the SmartFace Station

Watchlist Autolearn can also be set up using SmartFace Station. For more information, please read the SmartFace Station manual.

Health Checks

The SmartFace Platform offers a comprehensive Health Check feature designed to ensure the smooth operation and optimal performance of the system. It enables you to evaluate the performance and stability of the platform. Performing regular health checks is vital to proactively identify potential issues and implement measures to uphold a resilient system.

By conducting Health checks, users can gain valuable insights into the overall health of various components within each SmartFace Platform service. This feature analyzes various components and metrics, providing a comprehensive assessment of the system’s performance. It offers an efficient way to monitor critical aspects and promptly address any concerns that may arise.

This document describes the configuration options and endpoints related to Health checks.

How to enable Health Checks

The Health Check functionality has been available in the SmartFace Platform since version 5.4.20. For more information about available versions and how to update, please visit SmartFace Release Packages.

The health checks are available for each service and by default they run on port 6060 within each service. Depending on your monitoring method, it might be necessary to expose each service's health check on a custom port of the SmartFace server. This can easily be done in the docker-compose.yml file, where you add your own port mapping as in the code sample below:

...
    ports:
      - 6062:6060
...

The sample above binds the service's inner port 6060 to the public port 6062. You need to do this for each service whose health check port should be publicly reachable. For example, for the matcher service the setup might look like the sketch below.
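
A minimal sketch, assuming the matcher service is named matcher in your docker-compose.yml (the service name and the rest of the service definition depend on your deployment; only the ports mapping is relevant here):

...
  matcher:                  # placeholder service name, use the one from your compose file
    ...
    ports:
      - 6062:6060           # internal health check port 6060 published as 6062
...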

Configuration

The health check configuration is managed through environment variables (or the standard appsettings.json / console arguments). The variables that control the behavior of the health checks are described below.

Please note that the default setup should cover most cases; the environment variables below are not meant to be changed unless necessary.

  • HealthCheck__Host - The host on which the health check endpoint is available. The default value * indicates that the endpoint is accessible on all available network interfaces.

  • HealthCheck__Port - The port number on which the health check endpoint listens for incoming requests. The default port is set to 6060.

  • HealthCheck__Tags__db - Controls the evaluation of database health checks. By default, all health checks are evaluated. However, in certain deployments where some SmartFace Platform services can run without a database (e.g., API), this setting can be used to turn off the evaluation of database health checks. Setting this environment variable to false will disable the evaluation of database health checks.

  • HealthCheck__Tags__rmq - Controls the evaluation of health checks for the RabbitMQ (RMQ) service. By default, all health checks are evaluated. Setting this value to false will disable the evaluation of RMQ health checks.

In specific cases you may want to set HealthCheck__Tags__db or HealthCheck__Tags__rmq, for example when you do not expect the service to communicate with the database or RabbitMQ. In that case, add the variable with the value false to the service configuration, as in the example below.
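
A minimal sketch, assuming the variable is passed through the environment section of the service in docker-compose.yml (the service name api is a placeholder):

...
  api:                                 # placeholder service name
    ...
    environment:
      ...
      - HealthCheck__Tags__db=false    # disable database health check evaluation for this service
...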

If you make such a change, the service needs to be restarted. The easiest way to achieve this is to run the command below:

docker-compose up -d

How to listen to Health Checks

Once the health checks are enabled and the ports are exposed, you can connect to an individual service's health check via the specified port on the defined endpoints.

Available Endpoints

The default health check server listens on port 6060 and accepts requests on the following paths:

  • /healthz/ready - This endpoint is used to check the readiness of a service. A service that has not finished its initialization is not yet ready. Returns HTTP 200 when the service is healthy and available for handling requests, and HTTP 503 when the service is unhealthy or not ready to handle requests.

  • /healthz/live - This endpoint is used to check the liveness of a service. A service that is not able to recover from an issue is not considered live. Returns HTTP 200 when the service is running and healthy, and HTTP 503 when the service is not alive or experiencing issues.

The provided endpoints enable you to perform general health checks and obtain detailed information about the status of individual services.

For example, if the api service is published on port 6063, you can access its health check at http://your-smartface-ip:6063/healthz/ready or http://your-smartface-ip:6063/healthz/live.
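
A simple monitoring probe can rely only on the returned status code. A minimal sketch, assuming the host and port mapping from the example above:

# curl -f exits with a non-zero code on HTTP 503, so the check can be scripted
curl -fsS http://your-smartface-ip:6063/healthz/ready > /dev/null \
  && echo "api service is ready" \
  || echo "api service is NOT ready"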

The response JSON structure may vary depending on the specific service you are using. An example is below:

{
  "status": "Healthy",
  "results": 
  {
    "RabbitMQHealthCheck": 
    {
      "status": "Healthy",
      "description": null,
      "data": {},
      "tags": 
      [
        "rmq"
      ]
    },
    "CoreDbContext": 
    {
      "status": "Healthy",
      "description": null,
      "data": {},
      "tags": 
      [
        "db"
      ]
    },
    "S3HealthCheck": 
    {
      "status": "Healthy",
      "description": null,
      "data": {},
      "tags": []
    },
    "WatchlistMatcherLoadHealthCheck": 
    {
      "status": "Healthy",
      "description": "The watchlist matcher database successfully initialized",
      "data": 
      {
        "InitialMatcherLoadTimeMs": 629,
        "InitialMemberLoadCount": 15
      },
      "tags": []
    },
    "MatcherRpcServerHealthCheck": 
    {
      "status": "Healthy",
      "description": "The matcher RPC server is ready",
      "data": {},
      "tags": []  
    }
  }
}

Each of the sub-results gathers information about a different aspect of the service:

  • RabbitMQHealthCheck - information on whether the connection to RabbitMQ works as expected

  • CoreDbContext - information on whether the connection to the SQL database works as expected

  • S3HealthCheck - information on whether the connection to the MinIO object storage works as expected

Some services also have their own specific results, such as these for the Matcher service:

  • WatchlistMatcherLoadHealthCheck - information about the Watchlists being loaded by the service

  • MatcherRpcServerHealthCheck - information about the Matcher's communication with the RPC server
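
To quickly identify the failing dependency when a service reports Unhealthy, you can filter the per-check breakdown. A minimal sketch, assuming the jq utility is available and the same host and port as in the example above:

# Print the names of sub-checks whose status is not "Healthy"
curl -s http://your-smartface-ip:6063/healthz/ready \
  | jq -r '.results | to_entries[] | select(.value.status != "Healthy") | .key'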

A sample output of the live endpoint would look like this:

{
  "status": "Healthy"
}