The REST API for the Cloud Matcher is a subset of the SmartFace Platform REST API endpoints. The SmartFace Platform allows you to control many of its aspects directly via REST API. The list of endpoints is substantial; however, the most used endpoints relate to creating watchlists, registering their members, searching incoming images against existing watchlists, checking two faces against each other, and performing a liveness check to prevent spoof attacks. The SmartFace Cloud Matcher covers all of these most used endpoints.

The SmartFace Cloud Matcher API documentation is available here

How to use the Cloud Matcher REST API

Use REST API programmatically

The Cloud Matcher communicates with the outside world through its REST API. By default the API is accessible on port 8098 of the server where the Cloud Matcher is installed. It also provides a Swagger web interface that documents all available endpoints and services.

You can connect and make calls to the SmartFace Cloud Matcher from any programming language that supports REST API calls. The Swagger web interface helps you understand the schemas and request parameters.

Use REST API via Web Interface

By accessing your server installation on port 8098 (such as http://localhost:8098) you will see a list of endpoints available in several groups.

Watchlist group of endpoints

When you click the Try it out button you can make a call directly from the web interface. Once you have prepared the request, click Execute to send it.

Once the request is executed you will receive a response.

The most used functionalities

You can find the list of the most used SmartFace Cloud Matcher functionalities below. They are organized into 5 categories:

Watchlist management

Retrieve information about existing watchlists

You can easily retrieve information about existing watchlist(s) by using these 2 endpoints:

Retrieve information about ALL Watchlists

GET /api/v1/Watchlists

This endpoint lists all existing watchlists and provides information about them. The output is paginated, i.e. the list of Watchlists is split into pages, each containing a set number of Watchlists. The page number, page size and sorting order can be specified, and you can also request the total item count.

The optional parameters are added after the endpoint URL. For example, to get a paginated list of Watchlists in ascending order with 10 Watchlists per page, including the total count of Watchlists, use this call: http://localhost:8098/api/v1/Watchlists?Ascending=true&PageSize=10&ShowTotalCount=true
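If you build such calls programmatically, the query string can be assembled from a plain dictionary. A minimal sketch using only the Python standard library (the parameter names are the ones shown in the example URL above):

```python
from urllib.parse import urlencode

# Build the query string for a paginated watchlist listing.
base_url = "http://localhost:8098/api/v1/Watchlists"
params = {
    "Ascending": "true",       # sort watchlists in ascending order
    "PageSize": 10,            # 10 watchlists per page
    "ShowTotalCount": "true",  # include the total item count in the response
}
request_url = f"{base_url}?{urlencode(params)}"
print(request_url)
# → http://localhost:8098/api/v1/Watchlists?Ascending=true&PageSize=10&ShowTotalCount=true
```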

A successful response contains a set of values for each found watchlist, including:

  • threshold – the default Threshold compared against the Matching Score when doing Matching
  • previewColor – the color of the watchlist name when shown in SmartFace Station (currently not used in SmartFace Cloud Matcher). The color is in hexadecimal format, such as #012abc or #01a
  • id – the unique identifier used internally by SmartFace Cloud Matcher to reference a watchlist. When referring to a watchlist in your requests, this is the value to use.

Retrieve information about ONE Watchlist

GET /api/v1/Watchlists/{id}

This endpoint allows you to retrieve information about a specific watchlist if you know its unique identifier (id). You get information about one watchlist at a time.

For a sample watchlist with id equal to 8f02f8b6-dd02-4dd1-bc24-bc559ce16705 you would have a request URL looking like this: http://localhost:8098/api/v1/Watchlists/8f02f8b6-dd02-4dd1-bc24-bc559ce16705

Create a new watchlist

POST /api/v1/Watchlists

You can create a new watchlist using this endpoint. Before you proceed you need to decide on several parameters: displayName, fullName, threshold, and previewColor.

Recommended default threshold is 40. This value can be adjusted to match your specific needs.
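A create-watchlist call can be sketched with the Python standard library as below. The field names follow the parameters listed above; the sample values ("VIP Guests", the preview color) are illustrative assumptions:

```python
import json
import urllib.request

# Assemble the POST /api/v1/Watchlists request (values are examples only).
payload = {
    "displayName": "VIP Guests",
    "fullName": "VIP Guests - Main Entrance",
    "threshold": 40,            # recommended default matching threshold
    "previewColor": "#012abc",  # hexadecimal color
}
req = urllib.request.Request(
    "http://localhost:8098/api/v1/Watchlists",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would send the request to a running instance.
```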

Change a watchlist’s attributes

PUT /api/v1/Watchlists

You can change a watchlist using this endpoint. If a watchlist with the given unique identifier (id) does not exist, a new watchlist with the desired parameters is created. The input parameters match the request parameters used when creating a watchlist.

Delete a watchlist

DELETE /api/v1/Watchlists/{id}

Deletes a watchlist by its id. Deleting a watchlist does not delete the associated watchlist members. To remove watchlist members you need to delete them directly.

Watchlist members management

You can manage Watchlist Members for each Watchlist. One Watchlist member can be linked to more than one Watchlist.

Get information about existing watchlist members

There are a few ways to get information about Watchlist Members.


GET /api/v1/WatchlistMembers

Using this endpoint you can get a paged list of all Watchlist Members. You can order the list and split it into pages sized to your liking.

# Sample output:
{
  "totalItemsCount": null,
  "items": [
    {
      "displayName": "John Wick",
      "fullName": "John Wick",
      "note": null,
      "labels": [],
      "id": "45d9843b-6776-4f9c-a000-90da442420f0",
      "createdAt": "2022-10-05T12:53:47.726661Z",
      "updatedAt": "2022-10-05T12:53:56.620247Z"
    },
    {
      "displayName": "James Bond",
      "fullName": "James Bond",
      "note": "",
      "labels": [],
      "id": "03e4a2d6-98f6-4565-8081-e75bbb9efb34",
      "createdAt": "2022-10-26T11:44:37.456912Z",
      "updatedAt": null
    },
    {
      "displayName": "Lara Croft",
      "fullName": "Lara Croft",
      "note": "",
      "labels": [],
      "id": "0ebfeebc-3859-49d8-a798-acd273b4a017",
      "createdAt": "2022-11-21T11:51:30.675498Z",
      "updatedAt": null
    }
  ],
  "pageSize": 3,
  "pageNumber": 1,
  "previousPage": null,
  "nextPage": null
}
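A paged response like the sample above is plain JSON, so extracting the member ids for later calls is straightforward. A short sketch (the response text below is a trimmed-down version of the sample):

```python
import json

# Pull the member ids out of a paged WatchlistMembers response.
response_text = """{
  "totalItemsCount": null,
  "items": [
    {"displayName": "John Wick", "id": "45d9843b-6776-4f9c-a000-90da442420f0"},
    {"displayName": "James Bond", "id": "03e4a2d6-98f6-4565-8081-e75bbb9efb34"}
  ],
  "pageSize": 3,
  "pageNumber": 1,
  "nextPage": null
}"""
page = json.loads(response_text)
member_ids = [item["id"] for item in page["items"]]
print(member_ids)
# → ['45d9843b-6776-4f9c-a000-90da442420f0', '03e4a2d6-98f6-4565-8081-e75bbb9efb34']
```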


GET /api/v1/WatchlistMembers/{id}

You can get information about a single Watchlist Member as long as you know the member’s id. You can easily get the id of a member from GET /api/v1/WatchlistMembers.

Linked to watchlist

GET /api/v1/Watchlists/{id}/WatchlistMembers

You can also get a list of Watchlist Members linked to any Watchlist using this endpoint. You need to know the Watchlist Id value. The Id value can be retrieved from GET /api/v1/Watchlists.

Register new watchlist member

POST /api/v1/WatchlistMembers/Register

You can register a watchlist member using this endpoint. It allows a complex enrollment: you can enroll using a face image, set a custom id, choose the watchlists to enroll into, and also set registration conditions which need to be fulfilled to enroll a member.

If you preset an id, it will be used for the member you are enrolling. If you omit it, an id will be set automatically.

"id": "juraj-custom-id",

To register a new watchlist member you need to provide an image that contains the member’s face. The image is provided in base64 encoding as part of the REST API call.

"images": [
    {
        "faceId": null,
        "data": "<insert-image-base64-here>"
    }
]
ℹ️ Base64 is a common encoding used to represent data as a text. This allows you to send an image via REST API to SmartFace Cloud Matcher to be processed.
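Producing the base64 text for the "data" field takes only a few lines of standard-library Python. A small helper sketch ("face.jpg" is a placeholder path):

```python
import base64

def image_to_base64(path: str) -> str:
    """Return the base64 text of an image file, as expected in the
    "data" field of the register request body."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

# Example usage, with faceId left null as in the snippet above:
# images = [{"faceId": None, "data": image_to_base64("face.jpg")}]
```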

You need to select at least one Watchlist to enroll into using this endpoint. You can however set several Watchlists at once if needed.

# Choosing Watchlists to enroll into
"watchlistIds": [
    "Watchlist-id-one", "Watchlist-id-two", "Watchlist-id-three"
]

Setting up the detector configuration ensures the quality of the enrollment pictures, so you will not register a member using an image with too low a resolution or otherwise low quality. The minFaceSize and maxFaceSize parameters define the range of face sizes accepted by the detector.

# Setting up the detector for enrollment
"faceDetectorConfig": {
    "minFaceSize": 30,
    "maxFaceSize": 600,
    "maxFaces": 20,
    "confidenceThreshold": 1450
}

As one image can contain several faces, you can also set the maximum number of faces detected at once.

# Maximum amount of faces to be detected on the incoming image
"maxFaces": 20,

Additional customer information called labels can be added. These labels need to be defined once for the whole Cloud Matcher, but once defined they can be passed to the Cloud Matcher for each member during enrollment. For more information about labels please read here.

A Watchlist Member can be enrolled into several Watchlists. However during the registration phase possibly not all Watchlists needed are known. You can adjust this by freely linking and unlinking Members to and from Watchlists.

POST /api/v1/WatchlistMembers/LinkToWatchlist

This endpoint allows you to set a list of Members(N) to be linked into a Watchlist(1).

POST /api/v1/WatchlistMembers/UnlinkFromWatchlist

This endpoint allows you to set a list of Members(N) to be unlinked from a Watchlist(1).
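A request body for the link/unlink endpoints above might look like the sketch below. Note that the property names (watchlistId, watchlistMembersIds) are assumptions, not taken from this text; check the Swagger schema for the exact names:

```python
import json

# Hypothetical LinkToWatchlist body: N member ids linked into one watchlist.
# Field names are assumptions; verify them against the Swagger schema.
link_request = {
    "watchlistId": "8f02f8b6-dd02-4dd1-bc24-bc559ce16705",
    "watchlistMembersIds": [
        "45d9843b-6776-4f9c-a000-90da442420f0",
        "03e4a2d6-98f6-4565-8081-e75bbb9efb34",
    ],
}
body = json.dumps(link_request)
```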

Delete watchlist member (from all watchlists)

DELETE /api/v1/WatchlistMembers/{id}

This endpoint allows you to completely remove a member from the system including its membership in all Watchlists the member was enrolled into.


Face identification

When you provide an image, you can identify the person(s) in it by finding a match among the Watchlist Members in the Cloud Matcher. The identification process goes through a few steps: detection, extraction and matching. These steps are seamless and are all done in the background when an identification is requested.

When SmartFace detects a face, the system creates a cropped image of the found face. The crop is used for the identification process. A Face Match is successful when the Matching Score is higher than the Matching Threshold.

Face identification can provide additional information. You can show/hide this information as required.

"faceFeaturesConfig": {
    "age": true,
    "gender": true,
    "faceMask": true,
    "noseTip": true,
    "yawAngle": true,
    "pitchAngle": true,
    "rollAngle": true
}

SmartFace Cloud Matcher can provide information on whether the detected person(s) wear a face mask. The confidence threshold can be adjusted.

"faceMaskConfidenceRequest": {
    "faceMaskThreshold": 3000
}

When you are identifying person(s) in an image you might also be interested in the liveness of the detected person(s). The liveness/spoof check can be done as part of the identification process without the need to run a separate liveness check. For a better understanding of how to evaluate liveness using the Cloud Matcher’s REST API please see the liveness subsection.

POST /api/v1/Watchlists/Search

Identification in all watchlists

If you would like to search all watchlists using an image of person(s), use the endpoint POST /api/v1/Watchlists/Search. As you do not have a specific Watchlist in mind, you can omit the list of Watchlists by keeping the watchlistIds array empty.

"watchlistIds": []

Identification in chosen watchlist(s)

If you would like to search only within chosen watchlists using an image of person(s), use the same endpoint POST /api/v1/Watchlists/Search, but specify the watchlists to be searched by listing their Watchlist Ids (watchlistIds) in the array.

"watchlistIds": [
    "Watchlist-id-one", "Watchlist-id-two"
]
ℹ️ You can get the Watchlist Ids from the endpoint GET /api/v1/Watchlists.

Get top 10 candidates

By default, when you call the search (identification) endpoint, the maxResultCount parameter is set to 1. This means you will get only the best-matching person, i.e. the face with the highest score above the threshold. Depending on your threshold levels and the similarity of the Watchlist Members, you can request more candidates as results of the search endpoint.

This can be useful for receiving a list of similar candidates. It is recommended to set the threshold lower than the standard score so that more faces pass it. For example, instead of a threshold of 40 you can lower it to 20 and set maxResultCount to 10. In this case you can receive up to 10 Watchlist Members that are likely to be similar to each other.
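A search body tuned for a top-10 candidate list might look like the sketch below. The placement of threshold and maxResultCount in the body, and the "image" field nesting, are assumptions; consult the Swagger schema for the exact layout:

```python
import json

# Search request returning up to 10 candidates: threshold lowered from
# the recommended 40 to 20, maxResultCount raised from 1 to 10.
search_request = {
    "image": {"data": "<insert-image-base64-here>"},
    "watchlistIds": [],     # empty array: search all watchlists
    "threshold": 20,
    "maxResultCount": 10,
}
body = json.dumps(search_request)
```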


SmartFace Cloud Matcher provides information about Watchlist Members and whether they are part of given Watchlist(s). On top of this, the Cloud Matcher can provide additional information about individual photos/images even if the detected persons are not, or are not meant to be, registered in any Watchlists.

Verification one to one comparison

Sometimes it is useful to find out whether 2 images depict the same person. The confidence that two faces actually belong to the same person can be retrieved using the POST /api/v1/Faces/Verify endpoint. To verify two images against each other you need to provide probeImage and referenceImage as base64 encoded strings. To ensure the quality of the images used for verification, please set the configuration of the face detector.
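A verify-request builder can be sketched as below. The nesting of the image fields and the placement of faceDetectorConfig are assumptions to be confirmed against Swagger; "probe.jpg" and "reference.jpg" are placeholder paths:

```python
import base64
import json

def to_base64(path: str) -> str:
    """Base64-encode an image file for the request body."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

def build_verify_request(probe_path: str, reference_path: str) -> str:
    """Assemble a hypothetical POST /api/v1/Faces/Verify body."""
    body = {
        "probeImage": {"data": to_base64(probe_path)},
        "referenceImage": {"data": to_base64(reference_path)},
        "faceDetectorConfig": {
            "minFaceSize": 30,
            "maxFaceSize": 600,
            "maxFaces": 1,
            "confidenceThreshold": 1450,
        },
    }
    return json.dumps(body)
```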


Liveness check

Passive liveness detection is the process of determining whether the presented face belongs to a real person, without requiring the user to perform any additional action. This check is recommended for applications where the user’s experience and a seamless app flow are paramount. SmartFace Cloud Matcher provides passive liveness detection in two ways:

  • by direct passive liveness call POST /api/v1/Faces/SpoofCheck

  • integrated in watchlist search process POST /api/v1/Watchlists/Search

Presentation attack detection

Our algorithms have been trained to detect real faces and various kinds of spoof attacks. These include:

  • Faces displayed on electronic screen
  • Printed faces
  • 2D masks
  • 3D masks

Passive Liveness steps

In order to evaluate passive liveness, the following steps must be performed:

  • Face detection – find the position of the face and its attributes in the image
  • Passive liveness evaluation – evaluate the facial image attributes and calculate the passive liveness score

Face detection

The first step of performing the passive liveness detection is face detection. This is an important step because there may be no face or multiple faces present in the picture. Once a face is detected, it can be used to evaluate passive liveness. There are two face detection modes available:

  • balanced mode - provides lower latency,
  • accurate mode - is more precise.

Passive liveness evaluation

A facial image with sufficient background should be submitted for evaluation. Recommended minimum requirements for the image are:

  • face should not be near the edge of the image
  • not too strong backlight or sidelight
  • no overexposed or underexposed images
  • image should not be cropped or compressed between the capture and processing step

There are two passive liveness checks available in SmartFace Cloud Matcher for different usages:

  • distant passive liveness – use on wild images (not recommended for selfies)
  • nearby passive liveness – use on selfie/enrollment images

Passive liveness conditions

When evaluating a liveness for provided image you can adjust Liveness Conditions in the spoofCheckConfig. There are two sets of Liveness Conditions. One set is for the nearby Liveness Check, the other one is for the distant Liveness Check.

"spoofCheckConfig": {
    "distantLivenessScoreThreshold": 90,
    "nearbyLivenessScoreThreshold": 90,
    "distantLivenessConditions": "default",
    "nearbyLivenessConditions": "default",
    "keepEvaluatingConditionsAfterFirstFail": false
}

The default values can be adjusted as needed; however, the default sets are recommended unless the environmental conditions of the image require it.

Default conditions to start distant passive liveness evaluation

Example for distant Liveness Condition string:

FACE_CONFIDENCE: [1000; 10000] && FACE_SIZE: [30; inf] && FACE_RELATIVE_AREA: [0.009; inf] && FACE_RELATIVE_AREA_IN_IMAGE: [0.9; inf] && YAW_ANGLE: [-20; 20] && PITCH_ANGLE: [-20; 20] && SHARPNESS_RAW: [2000; inf]

  • FACE_CONFIDENCE <0, 10000> – confidence score of the face reported by face detection.
  • FACE_SIZE – face size, the maximum of the eye distance and the eye-mouth distance.
  • FACE_RELATIVE_AREA – area of the face relative to the image size.
  • FACE_RELATIVE_AREA_IN_IMAGE – area of the face visible in the image relative to the total area of the face. This value implies the percentage of the face area outside the image.
  • YAW_ANGLE – rotation of the head around the Y-axis of the camera reference frame, as per DIN9300.
  • PITCH_ANGLE – rotation of the head around the X-axis of the camera reference frame, as per DIN9300.
  • SHARPNESS_RAW – sharpness of the image before normalization.
Default conditions to start nearby passive liveness evaluation

Example for nearby Liveness Condition string:

FACE_CONFIDENCE: [1000; 10000] && FACE_SIZE: [60; inf] && FACE_RELATIVE_AREA: [0.25; inf] && YAW_ANGLE: [-20; 20] && PITCH_ANGLE: [-20; 20] && BRIGHTNESS: [-7800; 5000] && CONTRAST: [-5000; 6000] && SHARPNESS_RAW: [4000; inf] && UNIQUE_INTENSITY_LEVELS: [500; 10000]

  • FACE_CONFIDENCE <0, 10000> – confidence score of the face reported by face detection.
  • FACE_SIZE – face size, the maximum of the eye distance and the eye-mouth distance.
  • FACE_RELATIVE_AREA – area of the face relative to the image size.
  • YAW_ANGLE – rotation of the head around the Y-axis of the camera reference frame, as per DIN9300.
  • PITCH_ANGLE – rotation of the head around the X-axis of the camera reference frame, as per DIN9300.
  • BRIGHTNESS <-10000 ~ dark, 10000 ~ light> – evaluates whether the face area is correctly exposed.
  • CONTRAST <-10000 ~ low, 10000 ~ high> – evaluates whether the face area has enough contrast.
  • SHARPNESS_RAW <-10000 ~ blurry, 10000 ~ sharp> – sharpness of the image before normalization.
  • UNIQUE_INTENSITY_LEVELS <-10000 ~ few levels, 10000 ~ enough levels> – evaluates whether the face area has an appropriate number of unique intensity levels.
⚠️ If the conditions for passive liveness are not fulfilled, the evaluation will not be done. You can add additional attributes, or modify the ranges of existing ones in the default conditions, before the passive liveness detection process runs.
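The condition strings above follow a simple grammar: NAME: [low; high] clauses joined by &&, with inf for an unbounded upper limit. A small parser sketch (purely illustrative, not part of the Cloud Matcher API):

```python
import re

def parse_liveness_conditions(condition_string: str) -> dict:
    """Parse a Liveness Condition string such as
    "FACE_SIZE: [30; inf] && YAW_ANGLE: [-20; 20]"
    into a dict mapping attribute name to a (low, high) range;
    "inf" becomes float('inf')."""
    conditions = {}
    for clause in condition_string.split("&&"):
        match = re.match(r"\s*(\w+):\s*\[([^;]+);\s*([^\]]+)\]\s*", clause)
        if not match:
            raise ValueError(f"unparsable clause: {clause!r}")
        name, low, high = match.groups()
        conditions[name] = (float(low), float(high))
    return conditions

ranges = parse_liveness_conditions(
    "FACE_CONFIDENCE: [1000; 10000] && FACE_SIZE: [30; inf] && YAW_ANGLE: [-20; 20]"
)
print(ranges["FACE_SIZE"])
# → (30.0, inf)
```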

Passive liveness thresholds

The final decision whether a face is real, or spoof, should be determined by the passive liveness score and threshold. If the score is above the threshold, this can be interpreted as accepted. If the score is below the threshold, it is rejected.

Setting the correct threshold depends on the security/convenience balance that is required for the specific use case.

Thresholds for distant passive liveness depend strongly on the face size. You can find recommended thresholds for the default minimal face size (~30) and for face size 80 in the tables below:

(table: False Accept Rate [%] / False Reject Rate [%] / Threshold, for FACE_SIZE > 30)
(table: False Accept Rate [%] / False Reject Rate [%] / Threshold, for FACE_SIZE > 80)

Thresholds for nearby passive liveness:

(table: False Accept Rate [%] / False Reject Rate [%] / Threshold)

Let’s set the threshold of nearby passive liveness to 89.76. If we have a representative set of 10,000 real faces, on average 220 of them will be incorrectly marked as spoofs even though they are real faces (False Reject Rate). If we have a set of 10,000 spoofs, on average 10 of them will be wrongly marked as real faces (False Accept Rate).
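The arithmetic behind the worked example above is simply rate × sample size. The 2.2% and 0.1% rates below are inferred from the 220/10,000 and 10/10,000 figures in the text:

```python
# False Reject Rate and False Accept Rate applied to 10,000 samples each.
real_faces = 10_000
spoofs = 10_000
false_reject_rate = 0.022  # 2.2% of real faces rejected as spoofs
false_accept_rate = 0.001  # 0.1% of spoofs accepted as real faces

print(round(real_faces * false_reject_rate))  # → 220 real faces rejected
print(round(spoofs * false_accept_rate))      # → 10 spoofs accepted
```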

Parameters for the Liveness call

When setting up a call to the POST /api/v1/Faces/SpoofCheck endpoint, the parameter spoofDetectorResourceIds represents an array of liveness detectors that should be run during the spoof check.

"spoofDetectorResourceIds": [
    "liveness_nearby_any_remote"
]

Possible values are: liveness_distant_any_remote or liveness_distant_cpu_remote or liveness_distant_gpu_remote for a distant passive Liveness Check and liveness_nearby_any_remote or liveness_nearby_cpu_remote or liveness_nearby_gpu_remote for a nearby passive Liveness Check.

⚠️ If you want to use the gpu version of spoof detector resources you need to have GPU acceleration properly set up. For more information about setting up the GPU acceleration please read this page.

You can set thresholds for the liveness checks, i.e. the number your liveness score is compared against to decide whether the Liveness Check has passed. The threshold for the distant liveness check is set as distantLivenessScoreThreshold, and the threshold for the nearby Liveness Check as nearbyLivenessScoreThreshold. Enabling keepEvaluatingConditionsAfterFirstFail gives you full information about all the reasons why the liveness check was not performed, instead of only the first one.

"spoofCheckConfig": {
    "distantLivenessScoreThreshold": 90,
    "nearbyLivenessScoreThreshold": 90,
    "distantLivenessConditions": "default",
    "nearbyLivenessConditions": "default",
    "keepEvaluatingConditionsAfterFirstFail": false
}
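Putting the pieces together, a SpoofCheck request body could be assembled as below. The "image" field layout is an assumption; the detector resource id and spoofCheckConfig values are taken from this section:

```python
import json

# Assemble a hypothetical POST /api/v1/Faces/SpoofCheck body combining
# a nearby detector resource with the default spoofCheckConfig.
spoof_check_request = {
    "image": {"data": "<insert-image-base64-here>"},
    "spoofDetectorResourceIds": ["liveness_nearby_any_remote"],
    "spoofCheckConfig": {
        "distantLivenessScoreThreshold": 90,
        "nearbyLivenessScoreThreshold": 90,
        "distantLivenessConditions": "default",
        "nearbyLivenessConditions": "default",
        "keepEvaluatingConditionsAfterFirstFail": False,
    },
}
body = json.dumps(spoof_check_request)
```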

Responses of the Liveness call

The POST /api/v1/Faces/SpoofCheck endpoint responds with information about whether the Liveness Check was performed and whether it succeeded. If it succeeded, the Liveness Check score is provided. The first-level information is an aggregation of both the nearby and distant Liveness Checks; each check then provides its own information. If a face is detected but the liveness check is not done because the condition string was not fulfilled, the notPerformedReasons.reasonMessage field states which condition(s) were not fulfilled.

{
  "performed": true,
  "passed": true,
  "distantLivenessSpoofCheck": {
    "performed": true,
    "passed": true,
    "score": 0,
    "notPerformedReasons": [
      {
        "reasonMessage": "string"
      }
    ]
  },
  "nearbyLivenessSpoofCheck": {
    "performed": true,
    "passed": true,
    "score": 0,
    "notPerformedReasons": [
      {
        "reasonMessage": "string"
      }
    ]
  }
}

Terms explained:
performed is true if all the conditions for the Liveness Check were fulfilled and the check was evaluated.
passed is true if the evaluated score passed the liveness threshold.
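Interpreting the response therefore means checking performed before passed: a check only counts as live when it was actually evaluated AND passed. A short sketch (the reasonMessage text below is an invented example, not a real Cloud Matcher message):

```python
import json

# Evaluate a SpoofCheck response shaped like the sample above.
response = json.loads("""{
  "performed": true,
  "passed": false,
  "nearbyLivenessSpoofCheck": {
    "performed": false,
    "passed": false,
    "score": 0,
    "notPerformedReasons": [
      {"reasonMessage": "FACE_SIZE out of range"}
    ]
  }
}""")

nearby = response["nearbyLivenessSpoofCheck"]
if not nearby["performed"]:
    # Conditions were not fulfilled; report why the check was skipped.
    for reason in nearby["notPerformedReasons"]:
        print(reason["reasonMessage"])
elif nearby["passed"]:
    print("live face")
else:
    print("possible spoof")
# prints: FACE_SIZE out of range
```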

SpoofCheck response flowchart