DOT iOS Face library
v3.8.1
Introduction
DOT iOS Face provides components for the digital onboarding process using the latest Innovatrics IFace image processing library. It wraps the core functionality of the library into a higher-level module which is easy to integrate into an iOS application. Components with a UI are available as UIViewController
classes and can be embedded into the application’s existing UI or presented using the standard methods (for example: show(), push(), present()
).
List of UI components
- FACE CAPTURE
A visual component for capturing good quality photos and creating templates suitable for verification.
- FACE CAPTURE SIMPLE
A visual component for capturing photos and creating templates suitable for verification without considering photo quality requirements.
- LIVENESS CHECK
A visual component which performs the liveness detection based on object tracking. An object is shown on the screen and the user is instructed to follow the movement of this object.
- LIVENESS CHECK 2
A visual component for capturing face photos and templates with combination of the liveness detection based on the object tracking. An object is shown on the screen and the user is instructed to follow the movement of this object.
List of non-UI components
- FACE DETECTOR
A component for performing face detection on an image and creating templates as well as computing face features and ICAO attributes.
- TEMPLATE VERIFIER
A component for performing template verification.
- FACE IMAGE VERIFIER
A component for performing face image verification.
Requirements
Xcode 11.4+
iOS 10.1+
Swift or Objective-C
CocoaPods
Distribution
DOT iOS Face is distributed as an XCFramework (DOT.xcframework) using CocoaPods, with its dependencies stored in our public GitHub repository.
Two versions of the library are available: standard (dot-face
) and with passive liveness evaluation (dot-face-passive-liveness
). The version with passive liveness includes the complete functionality of the standard version as well as on-device passive liveness evaluation.
The first step is to insert the following line at the top of your Podfile
.
source 'https://github.com/innovatrics/innovatrics-podspecs'
Then, specify the dependency on the DOT library in your Podfile
. The dependencies of this library will be downloaded alongside it. If you wish to use the passive liveness library, specify pod 'dot-face-passive-liveness'
in your Podfile
instead.
source 'https://github.com/innovatrics/innovatrics-podspecs'
use_frameworks!
target 'YOUR_TARGET' do
pod 'dot-face'
end
Note | If a CocoaPods problem with
|
Sample project
The DOT iOS Kit Sample demo project demonstrates the usage and configuration of DOT iOS Face.
To run the sample, install pods and then open the DOTSample.xcworkspace
file. The sample project requires the Evaluation license. For more information on how to obtain this license, see Licensing.
Licensing
In order to use DOT iOS Face in other apps, it must be licensed. The license can be compiled into the application, as it is bound to the Bundle Identifier specified in the General tab in Xcode.
First, you should generate a License ID. Then, set the Bundle Identifier and run the app. To retrieve the License ID, use this code sample (required only once, for license generation):
import DOT
...
let licenseId = DOTHandler.licenseId
...
After you have generated the License ID, please contact your Innovatrics representative and provide this ID. Your representative will then provide you with the license.
Permissions
Set the following permission in Info.plist
:
<key>NSCameraUsageDescription</key>
<string>Your usage description</string>
Initialization
Before using any of the DOT iOS Face components, you must initialize DOT iOS Face with the license. The binary license content must be passed to DOTHandler.initialize(with: YOUR_LICENSE)
. To create the license object, use the License
class.
if let path = Bundle.main.path(forResource: "your_license_path", ofType: "lic") {
    do {
        let license = try License(path: path)
        DOTHandler.initialize(with: license)
    } catch {
        print(error)
    }
}
Face detection Confidence threshold
One of the basic DOT parameters is the face detection Confidence threshold. The interval of the Confidence score is [0, 10000] and the default value of the threshold is 600. Faces with a Confidence lower than this value are ignored.
You can override this value by using the faceDetectionConfidenceThreshold
argument of the DOTHandler.initialize()
method:
DOTHandler.initialize(with: license, faceDetectionConfidenceThreshold: 1500)
Finish using DOT iOS Face features
When you have completed a process (e.g. onboarding) using DOT iOS Face, it is usually good practice to close it in order to free memory. Close DOT iOS Face only after the complete process has finished, not within the lifecycle of individual iOS components. This can be performed using the DOTHandler.deinitialize()
method. The next time you want to use the DOT iOS Face components, call the DOTHandler.initialize()
method again.
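The lifecycle described above can be sketched as follows. This is a minimal sketch; the OnboardingCoordinator type and its flow are illustrative assumptions, not part of the DOT API:

```swift
import DOT

// Illustrative coordinator: initialize DOT iOS Face once before the
// onboarding flow starts and deinitialize it once the whole flow is done.
final class OnboardingCoordinator {

    func start() {
        guard let path = Bundle.main.path(forResource: "your_license_path", ofType: "lic"),
              let license = try? License(path: path) else { return }
        DOTHandler.initialize(with: license)
        // ... present Face Capture / Liveness Check controllers ...
    }

    func finish() {
        // Called only after the complete process is finished,
        // never from the lifecycle of an individual component.
        DOTHandler.deinitialize()
    }
}
```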
Localization
String resources can be overridden in your application and alternative strings for supported languages can be provided following these two steps:
1. Add your own Localizable.strings file to your project using the standard iOS localization mechanism. To change a specific text, override the corresponding key in this Localizable.strings file.
2. Set the localization bundle to the bundle of your application (preferably during the application launch in your AppDelegate):

import DOT

DotFaceLocalization.bundle = .main
Custom localization
You can override the standard iOS localization mechanism by providing your own translation dictionary and setting the DotFaceLocalization.useLocalizationDictionary
flag to true
.
import DOT
guard let localizableUrl = Bundle.main.url(forResource: "Localizable", withExtension: "strings", subdirectory: nil, localization: "de"),
let dictionary = NSDictionary(contentsOf: localizableUrl) as? [String: String]
else { return }
DotFaceLocalization.useLocalizationDictionary = true
DotFaceLocalization.localizationDictionary = dictionary
"face_capture.instruction_step.center" = "Center your face";
"face_capture.instruction_step.close" = "Move back";
"face_capture.instruction_step.far" = "Move closer";
"face_capture.instruction_step.light" = "Turn towards light";
"face_capture.instruction_step.glassStatus" = "Remove glasses";
"face_capture.instruction_step.backgroundUniformity" = "Plain background required";
"face_capture.instruction_step.pitchHigh" = "Lower your chin";
"face_capture.instruction_step.pitchLow" = "Lift your chin";
"face_capture.instruction_step.yawHigh" = "Look left";
"face_capture.instruction_step.yawLow" = "Look right";
"face_capture.instruction_step.eyeStatus" = "Open your eyes";
"face_capture.instruction_step.mouthStatus" = "Close your mouth";
"face_capture.instruction_step.capture" = "Stay still!";
"liveness.state.watchObject" = "Watch the object";
"liveness.state.lowQuality" = "Turn towards light";
"liveness.state.noFace" = "Look straight";
"liveness.state.tooClose" = "Move back";
"liveness.state.tooFar" = "Move closer";
Logging
By default, logging is disabled. You can enable it by setting the logLevel
property of the DOTHandler
class.
DOTHandler.logLevel = .error
Available log levels:
- error
- verbose
- none
The appropriate place for this call is before the initialization of DOT. Each log message starts with the prefix DOT
.
Note | Please note that logging should be used just for debugging purposes as it might produce a lot of log messages. |
UI Components
Controllers configuration
Components containing UI are embedded into the application as controllers. All controllers can be embedded inside your own controller or presented in the standard way.
Controllers requiring runtime interaction provide public methods, for example: requestFaceCapture()
.
let controller = FaceCaptureController.create(configuration: .init(), style: .init())
controller.delegate = self
navigationController?.pushViewController(controller, animated: true)
You can embed a controller to your own controller. For example:
private lazy var faceCaptureSimpleController: FaceCaptureSimpleController = {
    let faceCaptureConfiguration = FaceCaptureSimpleConfiguration()
    let controller = FaceCaptureSimpleController.create(configuration: faceCaptureConfiguration, style: .init())
    controller.delegate = self
    return controller
}()

override func viewDidLoad() {
    super.viewDidLoad()
    addChild(faceCaptureSimpleController)
    faceCaptureSimpleController.view.translatesAutoresizingMaskIntoConstraints = true
    faceCaptureSimpleController.view.frame = containerView.bounds
    containerView.addSubview(faceCaptureSimpleController.view)
    faceCaptureSimpleController.didMove(toParent: self)
}
Dark mode support
Each UI component contains the style
parameter, which allows you to change the color and the font of the component.
If you want to use a different color for the Dark mode or the Light mode, follow this setup:
let backgroundColor = UIColor { (traitCollection: UITraitCollection) -> UIColor in
    if traitCollection.userInterfaceStyle == .dark {
        return UIColor.black
    } else {
        return UIColor.white
    }
}

let style = FaceCaptureStyle(backgroundColor: backgroundColor)
Face Capture
Integration with controller
A controller that guides the user with instructions to obtain quality images suitable for verification. The following options can be configured for Face Capture:
- (Required) [.init()] FaceCaptureStyle style – The color and font customization for your controller
- (Required) [.init()] FaceCaptureConfiguration configuration
- (Optional) [.front] AVCaptureDevice.Position cameraPosition – The front or back camera
- (Optional) [true] Bool showCheckAnimation – Shows a checkmark animation after enrollment
- (Optional) [0.10] Double minFaceSizeRatio – The minimum ratio of the face size to the width of the shorter side of the image
- (Optional) [0.24] Double maxFaceSizeRatio – The maximum ratio of the face size to the width of the shorter side of the image
- (Optional) Set<QualityAttributeConfiguration> qualityAttributeConfigurations – The desired quality attributes which the output image must meet
Optionally, you can initialize FaceCaptureConfiguration
with a QualityAttributePreset
; see the Quality Attributes section.
If a face present in an image has a face size outside this interval, it won’t be detected. Please note that a wider face size interval results in lower performance (detection fps).
You can respond to the FaceCaptureController
events, using its delegate FaceCaptureControllerDelegate
.
public protocol FaceCaptureControllerDelegate: class {
//Optional
func faceCaptureDidLoad(_ controller: FaceCaptureController)
func faceCaptureDidAppear(_ controller: FaceCaptureController)
func faceCaptureWillAppear(_ controller: FaceCaptureController)
func faceCaptureDidDisappear(_ controller: FaceCaptureController)
func faceCaptureWillDisappear(_ controller: FaceCaptureController)
func faceCaptureCameraInitFailed(_ controller: FaceCaptureController)
func faceCaptureNoCameraPermission(_ controller: FaceCaptureController)
func faceCapture(_ controller: FaceCaptureController, stateChanged state: CaptureState, withImage image: DOTImage?)
func faceCapture(_ controller: FaceCaptureController, previewSizeChanged size: CGSize)
//Required
func faceCapture(_ controller: FaceCaptureController, didCapture captureCandidate: CaptureCandidate)
func faceCaptureDidFailed(_ controller: FaceCaptureController)
}
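A minimal delegate implementation might look like the sketch below; only the required methods are shown, and the host controller name and the way the result is stored are illustrative assumptions:

```swift
import UIKit
import DOT

final class DemoFaceCaptureController: UIViewController, FaceCaptureControllerDelegate {

    private var candidate: CaptureCandidate?

    // Required: keep the capture result (image, template, scores) for later use.
    func faceCapture(_ controller: FaceCaptureController, didCapture captureCandidate: CaptureCandidate) {
        self.candidate = captureCandidate
    }

    // Required: react to a failed capture, e.g. by leaving the screen.
    func faceCaptureDidFailed(_ controller: FaceCaptureController) {
        navigationController?.popViewController(animated: true)
    }
}
```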
Note | The face size is defined as the larger of the two: the eye distance and the eye to mouth distance (distances are shown in the picture below). |
The CaptureCandidate
object provides the following data:
- fullImage: UIImage? – The original face image
- croppedFaceImage: UIImage? – The face image cropped according to the ICAO standard
- template: Template? – The extracted template
- glassStatusScore: Double – The glass status score
- glassStatusDependenciesFulfilled: Bool – Indicates whether the dependencies are fulfilled for an accurate evaluation of the glass status
- passiveLivenessScore: Double – The passive liveness score
- passiveLivenessDependenciesFulfilled: Bool – Indicates whether the dependencies are fulfilled for an accurate passive liveness evaluation
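When consuming these values, the passive liveness score should only be trusted if the dependency flag is set. The helper below is an illustrative sketch; the function and its threshold parameter are not part of the DOT API:

```swift
// Illustrative helper: accept the passive liveness result only when the
// dependencies for an accurate evaluation were fulfilled during capture.
func acceptPassiveLiveness(score: Double,
                           dependenciesFulfilled: Bool,
                           threshold: Double) -> Bool {
    guard dependenciesFulfilled else { return false }
    return score >= threshold
}
```

With a CaptureCandidate, this would be called with candidate.passiveLivenessScore and candidate.passiveLivenessDependenciesFulfilled, using a threshold chosen for your use case.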
Note | Please note that passive liveness features are available only if you use pod 'dot-face-passive-liveness' . |
CaptureState
events are emitted when the user enters each step.
- presenceStep
- positionStep
- proximityStep
- glassStatusStep
- backgroundUniformityStep
- pitchAngleStep
- yawAngleStep
- eyeStatusStep
- mouthStatusStep
- lightStep
Quality attributes of the output image
You may adjust quality attributes of the output image by creating a set of QualityAttributeConfiguration
objects and passing it to FaceCaptureConfiguration.qualityAttributeConfigurations
.
These quality attributes are grouped into three QualityAttributePresets
: verification, ICAO and passive liveness. Each preset has its own provider with the following functions:
func qualityAttributeConfigurationSet() -> Set<QualityAttributeConfiguration>
func qualityAttributeConfiguration(_ qualityAttribute: QualityAttribute) -> QualityAttributeConfiguration?
Available quality providers:
- VerificationQualityProvider – The resulting image is suitable for verification.
- PassiveLivenessQualityProvider – The resulting image is suitable for the evaluation of passive liveness.
- IcaoQualityProvider – The resulting image passes ICAO checks.

You can use these providers to apply a predefined set of quality attributes for the chosen use case. You can also modify these sets of quality attributes according to your requirements.
You may add a quality attribute with custom or with default ranges. To add a quality attribute with the default ranges, use QualityAttributeConfigurationRegistry
.
For example, if you wish to capture an image suitable for verification, but you also want to make sure users don’t wear glasses, you can use the following implementation:
let registry = QualityAttributeConfigurationRegistry()
let verificationProvider = VerificationQualityProvider()
var customSet = verificationProvider.qualityAttributeConfigurationSet()
if let glassStatus = registry.qualityAttributeConfiguration(.glassStatus) {
    customSet.insert(glassStatus)
}

if let config = try? FaceCaptureConfiguration(qualityAttributeConfigurations: customSet) {
    let controller = FaceCaptureController.create(configuration: config, style: .init())
    controller.delegate = self
    navigationController?.pushViewController(controller, animated: true)
}
Face Capture Simple
Integration with Controller
The controller for obtaining images for verification without considering any photo quality requirements. The following options can be configured for FaceCaptureSimpleController
:
- (Required) [.init()] FaceCaptureSimpleStyle style – The color and font customization for your controller
- (Required) [.init()] FaceCaptureSimpleConfiguration configuration
- (Optional) [.front] AVCaptureDevice.Position cameraPosition – The front or back camera
- (Optional) [0.10] Double minFaceSizeRatio – The minimum ratio of the face size to the width of the shorter side of the image
- (Optional) [0.24] Double maxFaceSizeRatio – The maximum ratio of the face size to the width of the shorter side of the image
If a face present in an image has a face size outside this interval, it won’t be detected. Please note that a wider face size interval results in lower performance (detection fps).
You can interact with FaceCaptureSimpleController
using its delegate FaceCaptureSimpleControllerDelegate
.
public protocol FaceCaptureSimpleControllerDelegate: class {
//Optional
func faceCaptureSimpleDidLoad(_ controller: FaceCaptureSimpleController)
func faceCaptureSimpleDidAppear(_ controller: FaceCaptureSimpleController)
func faceCaptureSimpleWillAppear(_ controller: FaceCaptureSimpleController)
func faceCaptureSimpleDidDisappear(_ controller: FaceCaptureSimpleController)
func faceCaptureSimpleWillDisappear(_ controller: FaceCaptureSimpleController)
func faceCaptureSimpleCameraInitFailed(_ controller: FaceCaptureSimpleController)
func faceCaptureSimpleNoCameraPermission(_ controller: FaceCaptureSimpleController)
func faceCaptureSimple(_ controller: FaceCaptureSimpleController, previewSizeChanged size: CGSize)
//Required
func faceCaptureSimple(_ controller: FaceCaptureSimpleController, didCapture captureCandidate: CaptureCandidate)
func faceCaptureSimpleDidFailed(_ controller: FaceCaptureSimpleController)
}
Capture is started when requestFaceCapture()
is called. In the example below, it is started on faceCaptureSimpleDidAppear
.
extension DemoFaceCaptureSimpleController: FaceCaptureSimpleControllerDelegate {

    func faceCaptureSimple(_ controller: FaceCaptureSimpleController, didCapture captureCandidate: CaptureCandidate) {
        self.candidate = captureCandidate
    }

    func faceCaptureSimpleDidAppear(_ controller: FaceCaptureSimpleController) {
        controller.requestFaceCapture()
    }

    func faceCaptureSimpleDidFailed(_ controller: FaceCaptureSimpleController) {
        controller.stopFaceCapture()
    }
}
Liveness Detection
A view controller that displays a moving or fading object on the screen.
The Liveness detection controller is configured by the LivenessCheckConfiguration
class which has the following attributes:
- (Required) [.init()] LivenessCheckStyle style – The color and font customization for your controller
- (Required) [.init()] FaceCaptureConfiguration configuration
- (Required) [-] TransitionType transitionType – The transition type used for the liveness detection object animation: .move or .fade
- (Optional) [DOTSegment] segments – Segments for the object animation
- (Optional) [50] Int dotSize – The dot size for the animation in dp
- (Optional) [.white] UIColor backgroundColor – The background color for the dot animation
- (Optional) [-] UIImage dotImage – A custom image to be used instead of the dot
- (Optional) [0.10] Double minFaceSizeRatio – The minimum ratio of the face size to the width of the shorter side of the image
- (Optional) [0.24] Double maxFaceSizeRatio – The maximum ratio of the face size to the width of the shorter side of the image
- (Optional) [4] Int minValidSegmentsCount – The minimum number of valid captured segments. The value can be within the interval [4, 7].
- (Optional) [0.5] Double proximityTolerance – The tolerance of the face size ratio (the tolerance of the distance between the face and the camera). A value greater than 1.0 disables the proximity check.
To start the liveness detection process, call startLivenessCheck()
.
To stop the liveness detection process, call stopLivenessCheck()
.
To reset the view, call restartTransitionView()
.
The table below describes how FAR and FRR change with the threshold value.
FAR levels | FAR [%] | FRR [%] | Threshold |
---|---|---|---|
1:5 | 19.792 | 4.255 | 0.923700869083 |
1:10 | 9.375 | 6.383 | 0.977084677219 |
1:20 | 4.167 | 6.383 | 0.9999781847 |
1:50 | 1.042 | 9.574 | 0.999994039536 |
1:100 | 0.000 | 9.574 | 0.99999976158 |
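The thresholds from the table can be applied to the score delivered in the checkDoneWith delegate callback. The decision function below is an illustrative sketch, not part of the DOT API; choosing a FAR level (and thus a threshold) is an application decision:

```swift
// Illustrative decision: the liveness check passes when the score reaches
// the threshold picked from the FAR table (here FAR 1:50).
let farThreshold: Float = 0.999994039536

func livenessPassed(score: Float, threshold: Float) -> Bool {
    return score >= threshold
}
```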
You can interact with LivenessCheckController
using its delegate LivenessCheckControllerDelegate
.
public protocol LivenessCheckControllerDelegate: class {
//Required
/// Check if Liveness Detection should start running on load
func livenessCheckInitialStart(_ controller: LivenessCheckController) -> Bool
/// Tells you that liveness detection didn't start because camera initialization failed
func livenessCheckCameraInitFailed(_ controller: LivenessCheckController)
/// Tells you that state of liveness detection has changed
func livenessCheck(_ controller: LivenessCheckController, stateChanged state: LivenessContextState)
/// Tells you that liveness detection did finish with score and captured frames from all segments
func livenessCheck(_ controller: LivenessCheckController, checkDoneWith score: Float, capturedSegmentImages segmentImagesList: [SegmentImage])
/// Tells you that liveness detection couldn't be validated because you don't have enough segments
func livenessCheckNoMoreSegments(_ controller: LivenessCheckController)
/// Tells you that liveness detection failed because no eyes were detected on camera
func livenessCheckNoEyesDetected(_ controller: LivenessCheckController)
//Optional
func livenessCheckDidLoad(_ controller: LivenessCheckController)
func livenessCheckWillDisappear(_ controller: LivenessCheckController)
func livenessCheckDidDisappear(_ controller: LivenessCheckController)
func livenessCheckWillAppear(_ controller: LivenessCheckController)
func livenessCheckDidAppear(_ controller: LivenessCheckController)
/// Tells you that you don't have permission to use camera
func livenessCheckNoCameraPermission(_ controller: LivenessCheckController)
}
The liveness detection follows [DOTSegment] segments
and renders an object in the specified corners of the screen. For the best accuracy it is recommended to display the object in at least three different corners.
If the user’s eyes can’t be detected in the first segment, the process will be terminated with the livenessCheckNoEyesDetected
delegate call. If the eyes aren’t detected in any of the segments, the process will set the isValid
flag to false
in the corresponding SegmentImage
in livenessCheck(_ controller: LivenessCheckController, checkDoneWith score: Float, capturedSegmentImages segmentImagesList: [SegmentImage])
.
The process is automatically finished when the number of accepted items in segmentImagesList: [SegmentImage]
reaches minValidSegmentsCount
. After that, livenessCheck(_ controller: LivenessCheckController, checkDoneWith score: Float, capturedSegmentImages segmentImagesList: [SegmentImage])
is called and the score can be evaluated. The order of items in the segmentImagesList: [SegmentImage]
output corresponds to the order of items in the segments: [DOTSegment]
input.
The process fails with the livenessCheckNoMoreSegments
delegate call, when all the segments in segments: [DOTSegment]
were displayed, but it wasn’t possible to collect a number of accepted images specified in minValidSegmentsCount
.
You can use SegmentImage
items for verification purposes, even when the eyes weren’t detected in a segment and the isValid
flag is set to false
.
For a better user experience, it is recommended to provide the user more attempts, so the size of segments: [DOTSegment]
should be greater than minValidSegmentsCount
. However, this should be limited, as it is better to terminate the process if the user is failing in many segments. The recommended implementation of segment generation:
let configuration = LivenessCheckConfiguration(transitionType: .move,
                                               segments: [DOTSegment(targetPosition: .bottomLeft, duration: 1000),
                                                          DOTSegment(targetPosition: .bottomRight, duration: 1000),
                                                          DOTSegment(targetPosition: .topLeft, duration: 1000),
                                                          DOTSegment(targetPosition: .bottomLeft, duration: 1000),
                                                          DOTSegment(targetPosition: .topRight, duration: 1000)])
If you want to perform a server side validation of the liveness detection, please follow this recommended approach:
The object movement is generated on your server and then rendered on the device using segments: [DOTSegment]
. When the process is finished successfully, the segmentImagesList: [SegmentImage]
is transferred to the server to evaluate the liveness detection. Please note that segments: [DOTSegment]
is no longer transferred and you should store it in the session of the server.
You can evaluate the liveness detection by combining the corresponding segmentImagesList: [SegmentImage]
with segments: [DOTSegment]
and sending the request to DOT Core Server. If the user could finish the process without using all segments, the remaining items of segments: [DOTSegment]
should be dropped to match the number of items in segmentImagesList: [SegmentImage]
.
Liveness Detection 2
A view controller that displays a moving or fading object on the screen.
The Liveness detection 2 controller is configured by the LivenessCheck2Configuration
class which has the following attributes:
- (Required) [-] TransitionType transitionType – The transition type used for the liveness detection object animation: .move or .fade
- (Optional) [DOTSegment] segments – Segments for the object animation
- (Optional) [50] Int dotSize – The dot size for the animation in dp
- (Optional) [1] Double cameraPreviewOverlayAlpha – The overlay alpha
- (Optional) [-] UIImage dotImage – A custom image to be used instead of the dot
- (Optional) [.front] CaptureSide captureSide – The front or back camera
- (Optional) [0.25] Double positionTolerance – The tolerance of the distance between the face center and the screen center. The value can be within the interval [0, 1].
- (Optional) [0.35] Double lightScoreThreshold – The minimum value of the Light score
- (Optional) [0.5] Double proximityTolerance – The tolerance of the face size ratio (the tolerance of the distance between the face and the camera). A value greater than 1.0 disables the proximity check.
- (Optional) [0.10] Double minFaceSizeRatio – The minimum ratio of the face size to the width of the shorter side of the image
- (Optional) [0.24] Double maxFaceSizeRatio – The maximum ratio of the face size to the width of the shorter side of the image
- (Optional) [4] Int minValidSegmentsCount – The minimum number of valid captured segments. The value can be within the interval [4, 7].
- (Optional) [.init()] LivenessCheck2Style style – The color and font customization for your controller
To start the liveness detection 2 process, call startLivenessCheck()
.
To stop the liveness detection 2 process, call stopLivenessCheck()
.
To reset the view, call restartTransitionView()
.
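Setting up the controller could look like the snippet below. This is a sketch under assumptions: the create(configuration:style:) factory and the configuration initializer are assumed to mirror the other controllers in this document:

```swift
// Assumed to mirror FaceCaptureController.create(configuration:style:).
let configuration = LivenessCheck2Configuration(transitionType: .move)
let controller = LivenessCheck2Controller.create(configuration: configuration, style: .init())
controller.delegate = self
navigationController?.pushViewController(controller, animated: true)
```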
You can interact with LivenessCheck2Controller
using its delegate LivenessCheck2ControllerDelegate
.
public protocol LivenessCheck2ControllerDelegate: class {
/// Check if Liveness Detection should start after the camera has successfully initialized and is ready to capture images
func livenessCheck2InitialStart(_ controller: LivenessCheck2Controller) -> Bool
/// Tells you that liveness detection didn't start because camera initialization failed
func livenessCheck2CameraInitFailed(_ controller: LivenessCheck2Controller)
/// Tells you that state of liveness detection has changed
func livenessCheck2(_ controller: LivenessCheck2Controller, livenessStateChanged state: LivenessContextState)
/// Tells you that liveness detection did finish with score and captured frames from all segments
func livenessCheck2(_ controller: LivenessCheck2Controller, checkDoneWith score: Float, capturedKeyFrames keyFrames: [DOTImage?])
/// Called when face capture has failed
func livenessCheck2FaceCaptureFailed(_ controller: LivenessCheck2Controller)
/// Tells you that liveness detection couldn't be validated because you don't have enough segments
func livenessCheck2NoMoreSegments(_ controller: LivenessCheck2Controller)
/// Tells you that liveness detection failed because no eyes were detected on camera
func livenessCheck2NoEyesDetected(_ controller: LivenessCheck2Controller)
/// Tells you that you don't have permission to use camera
func livenessCheck2NoCameraPermission(_ controller: LivenessCheck2Controller)
/// Called on face capture state change.
func livenessCheck2(_ controller: LivenessCheck2Controller, captureStateChanged captureState: FaceCaptureState)
/// Called when face capture has finished
func livenessCheck2(_ controller: LivenessCheck2Controller, didSuccess detectedFace: DetectedFace)
func livenessCheck2DidLoad(_ controller: LivenessCheck2Controller)
func livenessCheck2WillDisappear(_ controller: LivenessCheck2Controller)
func livenessCheck2DidDisappear(_ controller: LivenessCheck2Controller)
func livenessCheck2WillAppear(_ controller: LivenessCheck2Controller)
func livenessCheck2DidAppear(_ controller: LivenessCheck2Controller)
}
Non-UI components
Face detector
The FaceDetector
class provides the face detection functionality without the use of UI components. Face detection stops when maximumFaces
is reached.
To perform detection, call the following method on a background thread:
func detectFaces(image: FaceImage, maximumFaces: Int) -> [DetectedFace]
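A typical call site dispatches the detection to a background queue and returns to the main queue with the results. This is a sketch; the FaceDetector() default initializer is an assumption:

```swift
import UIKit
import DOT

// Illustrative: run the synchronous detection off the main thread.
func detect(in image: UIImage) {
    let faceImage = FaceImage(image: image)
    DispatchQueue.global(qos: .userInitiated).async {
        let detector = FaceDetector()   // assumed default initializer
        let faces = detector.detectFaces(image: faceImage, maximumFaces: 1)
        DispatchQueue.main.async {
            // Use faces.first?.template, faces.first?.confidence, ...
            print("Detected \(faces.count) face(s)")
        }
    }
}
```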
Template Verifier
In order to verify face templates (1:1), use the TemplateVerifier
class. The recommended approach is to create face templates using FaceCapture
or FaceDetector
and use only templates for verification.
func match(referenceTemplate: Template, probeTemplate: Template) throws -> NSNumber
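For example, two templates extracted earlier (e.g. from CaptureCandidate.template) can be matched as sketched below; the TemplateVerifier() default initializer is an assumption:

```swift
import DOT

// Illustrative 1:1 verification of two previously extracted templates.
func verificationScore(reference: Template, probe: Template) -> Double? {
    let verifier = TemplateVerifier()   // assumed default initializer
    return (try? verifier.match(referenceTemplate: reference, probeTemplate: probe))?.doubleValue
}
```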
Face Image Verifier
In order to verify or identify face images (1:1 or 1:N), use the FaceImageVerifier
class. It is also possible to verify a face image or face image array against a template (This is the recommended approach if you already have an available reference template).
func match(referenceFaceImage: FaceImage, probeFaceImages: [FaceImage]) throws -> [NSNumber]
func match(referenceFaceTemplate: Template, probeFaceImages: [FaceImage]) throws -> [NSNumber]
func match(referenceFaceImage: FaceImage, probeFaceImage: FaceImage) throws -> NSNumber
func match(referenceFaceTemplate: Template, probeFaceImage: FaceImage) throws -> NSNumber
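A 1:N match against an existing reference template could be sketched as below; the FaceImageVerifier() default initializer is an assumption:

```swift
import UIKit
import DOT

// Illustrative 1:N identification: match one reference template
// against a list of probe images.
func identificationScores(reference: Template, probeImages: [UIImage]) -> [Double] {
    let verifier = FaceImageVerifier()  // assumed default initializer
    let probes = probeImages.map { FaceImage(image: $0) }
    let scores = (try? verifier.match(referenceFaceTemplate: reference, probeFaceImages: probes)) ?? []
    return scores.map { $0.doubleValue }
}
```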
Common classes
FaceImage
The entity which can be used for face detection and verification.
To create FaceImage
:
public init(image: UIImage, minFaceSizeRatio: Double = 0.02, maxFaceSizeRatio: Double = 0.5)
DetectedFace
This entity provides information about a detected face. The following properties and methods are available:
- UIImage image – Creates a full (original) image of the face.
- Int eyeDistance – The distance between the eyes in the original image.
- Double confidence – The Confidence score of the face detection. It also represents the quality of the detected face.
- Template template – The face template which can be used for verification.
- UIImage croppedFace – Creates an ICAO full frontal image of the face. If boundaries of the normalized image leak outside of the original image, a white background is applied.
- func features(_ features: [FeatureWrapper]) -> [FeaturePoint] – Creates a collection of significant points of the detected face. Positions are absolute to the original input image.
- func attributes(_ attributes: [AttributeWrapper]) -> [IcaoAttribute] – Creates a collection of ICAO attributes that can be used for a detailed face quality assessment.
Face attributes
You can get these face attributes using the Face
class:
Name | Description |
---|---|
glassStatus | The face attribute for evaluating glasses presence. Glasses values are within the interval [-10000,10000]. Values near -10000 indicate 'no glasses present', values near 10000 indicate 'glasses present'. The decision threshold is around 0.This attribute can be also taken as an ICAO feature. |
passiveLiveness | The face attribute for evaluating the passive liveness score of a face. Passive liveness score values are within the interval [-10000,10000]. Values near -10000 indicate 'face not live', values near 10000 indicate 'face live'. You can use |
Also, you should check the evaluateAttributeCondition()
method of the Face
class to ensure all conditions for the attribute computation are met. If these conditions aren’t met, the attribute score will still be computed, but the accuracy of the results isn’t guaranteed.
Note | Please note that the passive liveness features are available only if you use pod 'dot-face-passive-liveness' . |
ICAO attributes
You can get the ICAO attributes using the DetectedFace
class. The following attributes can be taken as ICAO features:
Name | Description |
---|---|
sharpness | The face attribute for evaluating whether an area of the face image is blurred. Sharpness values are within the interval [-10000,10000]. Values near -10000 indicate 'very blurred', values near 10000 indicate 'very sharp'. The decision threshold is around 0. |
brightness | The face attribute for evaluating whether an area of the face is correctly exposed. Brightness values are within the interval [-10000,10000]. Values near -10000 indicate 'too dark', values near 10000 indicate 'too light', values around 0 indicate OK. The decision thresholds are around -5000 and 5000. |
contrast | The face attribute for evaluating whether an area of the face has enough contrast. Contrast values are within the interval [-10000,10000]. Values near -10000 indicate 'very low contrast', values near 10000 indicate 'very high contrast', values around 0 indicate OK. The decision thresholds are around -5000 and 5000. |
uniqueIntensityLevels | The face attribute for evaluating whether an area of the face has an appropriate number of unique intensity levels. Unique intensity levels values are within the interval [-10000,10000]. Values near -10000 indicate 'very few unique intensity levels', values near 10000 indicate 'enough unique intensity levels'. The decision threshold is around 0. |
shadow | The face attribute for evaluating whether an area of the face is overshadowed. Shadow values are within the interval [-10000,10000]. Values near -10000 indicate 'very strong global shadows present', values near 10000 indicate 'no global shadows present'. The decision threshold is around 0. |
specularity | The face attribute for evaluating whether spotlights are present on the face. Specularity values are within the interval [-10000,10000]. Values near -10000 indicate 'very strong specularity present', values near 10000 indicate 'no specularity present'. The decision threshold is around 0. |
eyeStatusRight | The face attribute for evaluating the right eye status. Right eye values are within the interval [-10000,10000]. Values near -10000 indicate 'closed, narrowed or bulged eye', values near 10000 indicate 'normally opened eye'. The decision threshold is around 0. |
eyeStatusLeft | The face attribute for evaluating the left eye status. Left eye values are within the interval [-10000,10000]. Values near -10000 indicate 'closed, narrowed or bulged eye', values near 10000 indicate 'normally opened eye'. The decision threshold is around 0. |
mouthStatus | The face attribute for evaluating the mouth status. Mouth status values are within the interval [-10000,10000]. Values near -10000 indicate 'open mouth, smile showing teeth or round lips present', values near 10000 indicate 'mouth with no expression'. The decision threshold is around 0. |
backgroundUniformity | The face attribute for evaluating whether the background is uniform. Background uniformity values are within the interval [-10000,10000]. Values near -10000 indicate 'very non-uniform background present', values near 10000 indicate 'uniform background present'. The decision threshold is around 0. |
rollAngle | The face attribute representing the rotation angle of the head, relative to the camera reference frame, around the Z-axis as per DIN 9300. |
yawAngle | The face attribute representing the rotation angle of the head, relative to the camera reference frame, around the Y-axis as per DIN 9300. |
pitchAngle | The face attribute representing the rotation angle of the head, relative to the camera reference frame, around the X-axis as per DIN 9300. |
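All quality attributes above share the [-10000,10000] scale, so interpreting a raw score comes down to comparing it against the decision thresholds from the table. The following standalone helper is a sketch of that logic; the enum and function are not part of the SDK, and the thresholds are the approximate values stated above.

```swift
import Foundation

/// Decision for a face attribute score in the interval [-10000, 10000].
enum AttributeDecision {
    case ok
    case tooLow
    case tooHigh
}

/// Evaluates a score against an acceptance band [lowerBound, upperBound].
/// Single-threshold attributes (e.g. sharpness, threshold around 0) can
/// pass lowerBound: 0 and upperBound: 10_000, so any non-negative score
/// is accepted; banded attributes (brightness, contrast) use roughly
/// [-5_000, 5_000].
func evaluate(score: Int, lowerBound: Int, upperBound: Int) -> AttributeDecision {
    if score < lowerBound { return .tooLow }
    if score > upperBound { return .tooHigh }
    return .ok
}

// Sharpness: decision threshold around 0, so 3200 is acceptable.
let sharpness = evaluate(score: 3_200, lowerBound: 0, upperBound: 10_000)

// Brightness: OK roughly between -5000 and 5000, so -7400 is too dark.
let brightness = evaluate(score: -7_400, lowerBound: -5_000, upperBound: 5_000)
```

The same helper works for any attribute in the table once its thresholds are known.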
Changelog
3.8.1 - 2021-06-24
Added
- support for the portraitUpsideDown interface orientation in all UI components
3.8.0 - 2021-06-17
Changed
- updated IFace to 4.10.0 - improved background uniformity algorithm
Removed
- FaceAttributeId.yaw, .roll and .pitch; use .yawAngle, .rollAngle and .pitchAngle instead
3.7.1 - 2021-05-10
Fixed
- updated IFace to 4.9.1 - minor issue fix
- updated glass status range in QualityAttributeConfigurationRegistry
3.7.0 - 2021-05-03
Changed
- updated IFace to 4.9.0 - improved glass status evaluation
3.6.0 - 2021-04-13
Changed
- updated IFace to 4.8.0 - improved passive liveness algorithm
3.5.1 - 2021-03-19
Added
- FaceCaptureStyle.hintTextColor and .hintBackgroundColor
Changed
- renamed style properties to be consistent across all UI components
- added 'Color' suffix to names of style properties which represent UIColor
3.5.0 - 2021-03-17
Added
- DotFaceLocalization class to improve the localization mechanism
- CaptureCandidate.init() to initialize with DetectedFace
- public access to CaptureCandidate.detectedFace
Changed
- updated IFace to 4.4.0
- renamed Attribute to FaceAttributeId
- renamed Feature to FaceFeature
- changed range of eyeStatus in QualityAttributeConfigurationRegistry
- removed DOTHandler.localizationBundle; use DotFaceLocalization.bundle instead
- changed liveness localization keys
- renamed CaptureState.yawStep and .pitchStep to .yawAngleStep and .pitchAngleStep
- renamed QualityAttribute.yaw and .pitch to .yawAngle and .pitchAngle
- ICAO attributes now have yawAngle, pitchAngle and rollAngle instead of yaw, pitch and roll
3.4.2 - 2020-12-16
Added
- support for iOS Simulator arm64 architecture
3.4.1 - 2020-11-25
Fixed
- FaceCaptureController user interface issues
3.4.0 - 2020-09-03
Changed
- updated IFace to 3.13.1
- renamed CaptureCandidate.glassStatusDependenciesFulfilled to CaptureCandidate.glassStatusConditionFulfilled
- renamed CaptureCandidate.passiveLivenessDependenciesFulfilled to CaptureCandidate.passiveLivenessConditionFulfilled
- removed Face.attributeIsDependencyFulfilled, added Face.evaluateAttributeCondition
3.3.1 - 2020-08-18
Fixed
- FaceCaptureController layout warnings
3.3.0 - 2020-08-14
Fixed
- make sure all background tasks are stopped when LivenessCheckController.stopLivenessCheck() is called
3.2.2 - 2020-08-11
Fixed
- improved interface of DOTCamera
3.2.1 - 2020-08-06
Fixed
- crash in DOTImage if CGImage is nil
Changed
- init DOTImage with CGImage instead of UIImage
- updated eye status QualityAttributeConfiguration ranges
3.2.0 - 2020-07-30
Changed
- on-screen messages during face capture remain shown longer to minimize instruction flickering
- changed ranges of QualityAttributeConfigurationRegistry
- removed detected face indicator after face capture finished
3.1.0 - 2020-07-10
Added
- DOTRange
- QualityAttribute
- QualityAttributeConfiguration
- QualityAttributeConfigurationRegistry
- QualityAttributePreset
- VerificationQualityProvider
- ICAOQualityProvider
- PassiveLivenessQualityProvider
Changed
- removed useAlternativeInstructions, requestFullImage, requestCropImage, requestTemplate and lightScoreThreshold from FaceCaptureConfiguration
- added qualityAttributeConfigurations: Set<QualityAttributeConfiguration> to FaceCaptureConfiguration
- added static func validate(configuration: FaceCaptureConfiguration) to FaceCaptureConfiguration
- removed requestFullImage, requestCropImage and requestTemplate from FaceCaptureSimpleConfiguration
- changed func faceCapture(_ controller: FaceCaptureController, stateChanged state: FaceCaptureState) to func faceCapture(_ controller: FaceCaptureController, stateChanged state: CaptureState, withImage image: DOTImage?) in FaceCaptureControllerDelegate
- changed func livenessCheck2(_ controller: LivenessCheck2Controller, captureStateChanged captureState: FaceCaptureState, withImage image: DOTImage?) to func livenessCheck2(_ controller: LivenessCheck2Controller, stateChanged state: CaptureState, withImage image: DOTImage?) in LivenessCheck2ControllerDelegate
3.0.1 - 2020-07-02
Fixed
- drawing of the circle around the face during face capture
- face capture hint label not updating correctly
3.0.0 - 2020-06-15
Changed
- updated IFace to 3.10.0
- FaceCaptureControllerDelegate returns CaptureCandidate instead of FaceCaptureImage
- FaceCaptureSimpleControllerDelegate returns CaptureCandidate instead of FaceCaptureImage