DOT iOS Face library
v4.0.0
Introduction
DOT iOS Face, part of the DOT iOS libraries family, provides components for the digital onboarding process using the latest Innovatrics IFace image processing library. It wraps the core functionality of the IFace library into a higher-level module that is easy to integrate into an iOS application.
Requirements
DOT iOS Face has the following requirements:
Xcode 11.4+
iOS 11.0+
Swift or Objective-C
CocoaPods
Distribution
Modularization
DOT iOS Face is divided into a core module and optional feature modules. This enables you to reduce the size of the library and include only the modules that are actually used in your use case.
DOT iOS Face consists of the following modules:
- dot-face-core (Required) - provides API for all the features and functionalities
- dot-face-detection (Optional) - enables the face detection feature
- dot-face-verification (Optional) - enables the template extraction and verification feature
- dot-face-eye-gaze-liveness (Optional) - enables the eye gaze liveness feature
- dot-face-passive-liveness (Optional) - enables the passive liveness feature
Each feature module can have other modules as dependencies and cannot be used without them, see the table below.
| Module | Dependency |
| dot-face-core | - |
| dot-face-detection | dot-face-core |
| dot-face-verification | dot-face-core |
| dot-face-eye-gaze-liveness | dot-face-core, dot-face-detection |
| dot-face-passive-liveness | dot-face-core, dot-face-detection |
For example, if you want to use Eye Gaze Liveness, you will have to use these three modules: dot-face-core (always required), dot-face-eye-gaze-liveness and dot-face-detection (required by dot-face-eye-gaze-liveness).
CocoaPods
DOT iOS Face is distributed with CocoaPods as a set of XCFramework packages. Each module is distributed as a single XCFramework package, see the table below.
| Module | Swift module | Module class | XCFramework |
| dot-face-core | DotFaceCore | - | DotFaceCore.xcframework |
| dot-face-detection | DotFaceDetection | DotFaceDetectionModule | DotFaceDetection.xcframework |
| dot-face-verification | DotFaceVerification | DotFaceVerificationModule | DotFaceVerification.xcframework |
| dot-face-eye-gaze-liveness | DotFaceEyeGazeLiveness | DotFaceEyeGazeLivenessModule | DotFaceEyeGazeLiveness.xcframework |
| dot-face-passive-liveness | DotFacePassiveLiveness | DotFacePassiveLivenessModule | DotFacePassiveLiveness.xcframework |
In order to integrate DOT iOS Face into your project, the first step is to add the following line at the top of your Podfile:
source 'https://github.com/innovatrics/innovatrics-podspecs'
Then, add the module(s) you want to use to your Podfile. Dependencies of the specified module(s) will be added to the project automatically.
The following Podfile shows how to use all modules:
source 'https://github.com/innovatrics/innovatrics-podspecs'
use_frameworks!
target 'YOUR_TARGET' do
  pod 'dot-face-verification'
  pod 'dot-face-eye-gaze-liveness'
  pod 'dot-face-passive-liveness'
end
The following Podfile shows how to use only the dot-face-detection module:
source 'https://github.com/innovatrics/innovatrics-podspecs'
use_frameworks!
target 'YOUR_TARGET' do
  pod 'dot-face-detection'
end
Supported Architectures
DOT iOS Face provides all supported architectures in the distributed XCFramework package.
Device binary contains: arm64.
Simulator binary contains: x86_64, arm64.
Licensing
In order to use DOT iOS Face in your app, it must be licensed. The license can be compiled into the application as it is bound to the Bundle Identifier specified in the General tab in Xcode.
The license ID can be retrieved as follows – required only once for license generation:
import DotFaceCore
...
print("LicenseId: " + DotFace.shared.licenseId)
...
In order to obtain the license, please contact your Innovatrics’ representative specifying the License ID.
Permissions
Set the following permission in Info.plist
:
<key>NSCameraUsageDescription</key>
<string>Your usage description</string>
Basic Setup
Initialization
Before using any of the DOT iOS Face components, you need to initialize the library with the license and the list of feature modules you want to use. Each module can be accessed via its singleton *Module class.
The following code snippet shows how to initialize DOT iOS Face with all feature modules. If you want to handle initialization and deinitialization events of the DotFace class, you need to implement DotFaceDelegate.
import DotFaceCore
import DotFacePassiveLiveness
import DotFaceVerification
import DotFaceEyeGazeLiveness
import DotFaceDetection
if let url = Bundle.main.url(forResource: "YOUR_LICENSE", withExtension: "lic") {
    do {
        let license = try Data(contentsOf: url)
        let configuration = DotFaceConfiguration(license: license,
                                                 modules: [
                                                     DotFacePassiveLivenessModule.shared,
                                                     DotFaceVerificationModule.shared,
                                                     DotFaceEyeGazeLivenessModule.shared,
                                                     DotFaceDetectionModule.shared])
        DotFace.shared.setDelegate(self)
        DotFace.shared.initialize(configuration: configuration)
    } catch {
        print(error.localizedDescription)
    }
}
After the initialization has successfully finished, you can use all added features by importing only the DotFaceCore Swift module in your source files. Keep in mind that if you try to use any feature which was not added during initialization, DOT iOS Face will generate a fatal error.
DOT Face Configuration
You can configure DotFace using the DotFaceConfiguration class.
let configuration = try DotFaceConfiguration(license: license,
                                             modules: modules,
                                             faceDetectionConfidenceThreshold: 0.1)
Face Detection Confidence Threshold (faceDetectionConfidenceThreshold)
The interval of the confidence score is [0.0, 1.0] and the default value of the threshold is 0.06. Faces with a confidence score lower than this value are ignored.
Deinitialization
When you have finished using DOT iOS Face, it is usually good practice to close it in order to free memory. You can close DOT iOS Face only after the complete process is finished, not within the life cycle of individual components. This can be performed using the DotFace.shared.deinitialize() method. If you want to use the DOT iOS Face components again, you need to call DotFace.shared.initialize() again.
The following code snippet shows how to deinitialize DOT iOS Face:
DotFace.shared.deinitialize()
Logging
DOT iOS Face supports logging using the global Logger class. You can set the log level as follows:
import DotFaceCore
Logger.logLevel = .debug
Log levels:
info
debug
warning
error
none
Each log message contains the dot-face tag. Keep in mind that logging should be used for debugging purposes only.
Components
Overview
DOT iOS Face provides both non-UI and UI components. Non-UI components are aimed at developers who want to build their own UI using the DOT iOS Face functionality. UI components are built on top of non-UI components. Components with UI are available as UIViewController classes and can be embedded into the application's existing UI or presented using the standard methods.
List of Non-UI Components
- FACE DETECTOR
A component for performing face detection on an image, creating templates and evaluating face attributes.
- TEMPLATE MATCHER
A component for performing template matching.
- FACE MATCHER
A component for performing face matching.
List of UI Components
- FACE AUTO CAPTURE
A visual component for capturing good quality face photos and creating templates suitable for matching.
- FACE SIMPLE CAPTURE
A visual component for capturing face photos and creating templates suitable for matching without considering photo quality requirements.
- EYE GAZE LIVENESS
A visual component which performs liveness detection based on object tracking. An object is shown on the screen and the user is instructed to follow its movement with their eyes.
Non-UI Components
Face Detector
The FaceDetector class provides the face detection functionality. Face detection stops when maximumFaces is reached.
This component requires the dot-face-detection module.
Create FaceDetector:
let faceDetector = FaceDetector()
To perform detection, call the following method on the background thread:
let detectedFaces = faceDetector.detect(faceImage: faceImage, maximumFaces: 10)
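Detection is a blocking call, so one way to keep the UI responsive is to dispatch it to a background queue, for example with GCD. A minimal sketch, assuming faceDetector and faceImage were created as shown above and in the Common Classes section:
DispatchQueue.global(qos: .userInitiated).async {
    // Blocking call; runs off the main thread.
    let detectedFaces = faceDetector.detect(faceImage: faceImage, maximumFaces: 10)
    DispatchQueue.main.async {
        // Hand the result back to the main thread for UI updates.
        print("Detected faces: \(detectedFaces.count)")
    }
}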
Template Matcher
In order to match face templates (1:1), use the TemplateMatcher class. The recommended approach is to create face templates using FaceDetector or the Face Auto Capture component and use only templates for matching.
This component requires the dot-face-verification module.
Create TemplateMatcher:
let templateMatcher = TemplateMatcher()
To perform matching, call the following method on the background thread:
let result = try? templateMatcher.match(referenceTemplate: referenceTemplate, probeTemplate: probeTemplate)
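For illustration, a hedged end-to-end 1:1 sketch that combines FaceDetector and TemplateMatcher: detect one face per image, create the templates (createTemplate() is described in the DetectedFace section) and match them. Treating the detection result as an array is an assumption here; error handling is simplified.
DispatchQueue.global(qos: .userInitiated).async {
    let faceDetector = FaceDetector()
    let templateMatcher = TemplateMatcher()
    guard let referenceFace = faceDetector.detect(faceImage: referenceFaceImage, maximumFaces: 1).first,
          let probeFace = faceDetector.detect(faceImage: probeFaceImage, maximumFaces: 1).first,
          let referenceTemplate = try? referenceFace.createTemplate(),
          let probeTemplate = try? probeFace.createTemplate(),
          let result = try? templateMatcher.match(referenceTemplate: referenceTemplate,
                                                  probeTemplate: probeTemplate)
    else { return }
    // `result` holds the matching outcome; evaluate it against your own threshold.
}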
Face Matcher
In order to match face images (1:1), use the FaceMatcher class. It is also possible to match a face image against a template, which is the recommended approach if you already have an available reference template.
This component requires the dot-face-detection and dot-face-verification modules.
Create FaceMatcher:
let faceMatcher = FaceMatcher()
To perform matching, call one of the following methods on the background thread:
let result = try? faceMatcher.match(referenceFaceImage: referenceImage, probeFaceImage: probeImage)
let result = try? faceMatcher.match(referenceTemplate: referenceTemplate, probeFaceImage: probeImage)
UI Components
View Controller Configuration
Components containing UI are embedded into the application as view controllers. All view controllers can be embedded into your own view controller or presented directly. Each view controller can be configured using its *Configuration class, and its appearance can be customized using its *Style class.
To present a view controller:
let controller = FaceAutoCaptureViewController.create(configuration: .init(), style: .init())
controller.delegate = self
navigationController?.pushViewController(controller, animated: true)
To embed a view controller into your own view controller:
override func viewDidLoad() {
    super.viewDidLoad()

    addChild(viewController)
    view.addSubview(viewController.view)
    viewController.view.translatesAutoresizingMaskIntoConstraints = false
    NSLayoutConstraint.activate([
        viewController.view.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor),
        viewController.view.leadingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.leadingAnchor),
        viewController.view.bottomAnchor.constraint(equalTo: view.safeAreaLayoutGuide.bottomAnchor),
        viewController.view.trailingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.trailingAnchor)
    ])
    viewController.didMove(toParent: self)
}
Safe Area
DOT iOS Face view controllers ignore the safe area layout guide when laying out their subviews. Therefore, if you push a DOT iOS Face view controller using UINavigationController, for example, you will get an incorrect layout. If you want to respect the safe area layout guide, embed the DOT iOS Face view controller in a container view controller and set up the layout constraints accordingly, as in the embedding example above.
Face Auto Capture
The view controller with instructions for obtaining quality face images suitable for matching.
This component requires the dot-face-detection module.
The following properties are available in FaceAutoCaptureConfiguration
:
- (Optional) [CameraFacing.front] cameraFacing: CameraFacing – Camera facing. Possible values: CameraFacing.front, CameraFacing.back
- (Optional) [CameraPreviewScaleType.fit] cameraPreviewScaleType: CameraPreviewScaleType – The camera preview scale type
- (Optional) [0.10] minFaceSizeRatio: Double – The minimum ratio of the face size to the width of the shorter side of the image
- (Optional) [0.30] maxFaceSizeRatio: Double – The maximum ratio of the face size to the width of the shorter side of the image
- (Optional) [false] isCheckAnimationEnabled: Bool – Shows a checkmark animation after enrollment
- (Optional) qualityAttributes: Set<QualityAttribute> – Provide the required quality attributes of the output image
The following properties are available in FaceAutoCaptureStyle
:
- (Optional) [UIColor.white] backgroundColor: UIColor - Background color of the top level view.
- (Optional) [UIColor(red: 1.0, green: 1.0, blue: 1.0, alpha: 0.8)] backgroundOverlayColor: UIColor - Background color of the overlay view.
- (Optional) [UIColor(red: 1.0, green: 1.0, blue: 1.0, alpha: 1.0)] circleOutlineColor: UIColor - Color of the circle in the overlay view.
- (Optional) [UIColor(red: 1.0, green: 1.0, blue: 1.0, alpha: 0.5)] trackingCircleColor: UIColor - Tracking circle color.
- (Optional) [UIColor(red: 0.53, green: 0.71, blue: 0.38, alpha: 1.0)] progressValidColor: UIColor - Overlay circle color for the finished state.
- (Optional) [UIColor(red: 0.93, green: 0.52, blue: 0.0, alpha: 1.0)] progressIntermediateColor: UIColor - Overlay circle color for the almost fulfilled state.
- (Optional) [UIColor(red: 0.86, green: 0.26, blue: 0.20, alpha: 1.0)] progressInvalidColor: UIColor - Overlay circle color for the not fulfilled state.
- (Optional) [UIColor(red: 0.53, green: 0.71, blue: 0.38, alpha: 1.0)] tickColor: UIColor - Animated tick color.
- (Optional) [UIFont.systemFont(ofSize: 12)] hintFont: UIFont - Hint label font.
- (Optional) [UIColor.black] hintTextColor: UIColor - Hint label text color.
- (Optional) [UIColor.white] hintBackgroundColor: UIColor - Hint view background color.
If a face in the image has a size outside the minimum/maximum face size interval, it won't be detected. Please note that a wider face size interval results in lower performance (detection FPS).
You can handle the FaceAutoCaptureViewController events using its delegate, FaceAutoCaptureViewControllerDelegate.
@objc(DOTFaceAutoCaptureViewControllerDelegate) public protocol FaceAutoCaptureViewControllerDelegate: AnyObject {
@objc optional func faceAutoCaptureViewControllerViewDidLoad(_ viewController: FaceAutoCaptureViewController)
@objc optional func faceAutoCaptureViewControllerViewDidLayoutSubviews(_ viewController: FaceAutoCaptureViewController)
@objc optional func faceAutoCaptureViewControllerViewWillAppear(_ viewController: FaceAutoCaptureViewController)
@objc optional func faceAutoCaptureViewControllerViewDidAppear(_ viewController: FaceAutoCaptureViewController)
@objc optional func faceAutoCaptureViewControllerViewWillDisappear(_ viewController: FaceAutoCaptureViewController)
@objc optional func faceAutoCaptureViewControllerViewDidDisappear(_ viewController: FaceAutoCaptureViewController)
@objc optional func faceAutoCaptureViewControllerViewWillTransition(_ viewController: FaceAutoCaptureViewController)
/// Tells the delegate that you have no permission for camera usage.
@objc optional func faceAutoCaptureViewControllerNoCameraPermission(_ viewController: FaceAutoCaptureViewController)
/// Tells the delegate that the capture step has changed.
@objc func faceAutoCaptureViewController(_ viewController: FaceAutoCaptureViewController, stepChanged captureStepId: CaptureStepId, with detectedFace: DetectedFace?)
/// Tells the delegate that the face was captured.
@objc func faceAutoCaptureViewController(_ viewController: FaceAutoCaptureViewController, captured detectedFace: DetectedFace)
}
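A minimal sketch of a conforming delegate, implementing the two required methods and the camera permission callback; YourViewController is a placeholder for your own class:
extension YourViewController: FaceAutoCaptureViewControllerDelegate {

    func faceAutoCaptureViewController(_ viewController: FaceAutoCaptureViewController,
                                       stepChanged captureStepId: CaptureStepId,
                                       with detectedFace: DetectedFace?) {
        // Track the current capture step, e.g. to drive custom UI.
    }

    func faceAutoCaptureViewController(_ viewController: FaceAutoCaptureViewController,
                                       captured detectedFace: DetectedFace) {
        // The capture has finished; create a template for later matching.
        let template = try? detectedFace.createTemplate()
    }

    func faceAutoCaptureViewControllerNoCameraPermission(_ viewController: FaceAutoCaptureViewController) {
        // Prompt the user to allow camera access in Settings.
    }
}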
CaptureStepId events are emitted when the user enters each step.
presence
position
proximity
glassStatus
backgroundUniformity
pitchAngle
yawAngle
eyeStatus
mouthStatus
light
Quality Attributes of the Output Image
You may adjust the quality requirements for the output image. To do this, use one of the QualityProvider implementations with recommended values and pass the configuration via FaceAutoCaptureConfiguration by setting qualityAttributes. You can also provide your own implementation according to your needs.
For example, if you wish to capture an image suitable for matching but you also want to make sure a user doesn’t wear glasses, you can use the following implementation:
let registry = DefaultQualityAttributeRegistry()
let provider = MatchingQualityProvider()
var customSet = provider.getQualityAttributes()
customSet.insert(registry.findBy(qualityAttributeId: .glassStatus))
let configuration = try! FaceAutoCaptureConfiguration(qualityAttributes: customSet)
let controller = FaceAutoCaptureViewController.create(configuration: configuration, style: .init())
controller.delegate = self
navigationController?.pushViewController(controller, animated: true)
See DefaultQualityAttributeRegistry for default values and all available quality attributes.
Available quality providers:
- MatchingQualityProvider – The resulting image suitable for matching.
- PassiveLivenessQualityProvider – The resulting image suitable for evaluation of the passive liveness.
- IcaoQualityProvider – The resulting image passing ICAO checks.
Face Simple Capture
The view controller for obtaining images for matching without considering any photo quality requirements. This component requires the dot-face-detection module.
The following properties are available in FaceSimpleCaptureConfiguration
:
- (Optional) [CameraFacing.front] cameraFacing: CameraFacing – Camera facing. Possible values: CameraFacing.front, CameraFacing.back
- (Optional) [CameraPreviewScaleType.fit] cameraPreviewScaleType: CameraPreviewScaleType – The camera preview scale type
- (Optional) [0.10] minFaceSizeRatio: Double – The minimum ratio of the face size to the width of the shorter side of the image
- (Optional) [0.30] maxFaceSizeRatio: Double – The maximum ratio of the face size to the width of the shorter side of the image
The following properties are available in FaceSimpleCaptureStyle
:
- (Optional) [UIColor.white] backgroundColor: UIColor - Background color of the top level view.
- (Optional) [UIColor(red: 1.0, green: 1.0, blue: 1.0, alpha: 0.5)] trackingCircleColor: UIColor - Tracking circle color.
If a face in the image has a size outside the minimum/maximum face size interval, it won't be detected. Please note that a wider face size interval results in lower performance (detection FPS).
You can handle the FaceSimpleCaptureViewController events using its delegate, FaceSimpleCaptureViewControllerDelegate.
@objc(DOTFaceSimpleCaptureViewControllerDelegate) public protocol FaceSimpleCaptureViewControllerDelegate: AnyObject {
@objc optional func faceSimpleCaptureViewControllerViewDidLoad(_ viewController: FaceSimpleCaptureViewController)
@objc optional func faceSimpleCaptureViewControllerViewDidLayoutSubviews(_ viewController: FaceSimpleCaptureViewController)
@objc optional func faceSimpleCaptureViewControllerViewWillAppear(_ viewController: FaceSimpleCaptureViewController)
@objc optional func faceSimpleCaptureViewControllerViewDidAppear(_ viewController: FaceSimpleCaptureViewController)
@objc optional func faceSimpleCaptureViewControllerViewWillDisappear(_ viewController: FaceSimpleCaptureViewController)
@objc optional func faceSimpleCaptureViewControllerViewDidDisappear(_ viewController: FaceSimpleCaptureViewController)
@objc optional func faceSimpleCaptureViewControllerViewWillTransition(_ viewController: FaceSimpleCaptureViewController)
/// Tells the delegate that you have no permission for camera usage.
@objc optional func faceSimpleCaptureViewControllerNoCameraPermission(_ viewController: FaceSimpleCaptureViewController)
/// Tells the delegate that the face was captured.
@objc func faceSimpleCaptureViewController(_ viewController: FaceSimpleCaptureViewController, captured detectedFace: DetectedFace)
}
You need to call the requestCapture() method in order to request a capture.
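A hedged usage sketch: present the component, request a capture on demand (for example from a button action) and receive the result through the delegate. Calling requestCapture() directly on FaceSimpleCaptureViewController is an assumption here, and SimpleCaptureCoordinator is a hypothetical helper class; the create(configuration:style:) factory follows the pattern from View Controller Configuration.
import UIKit
import DotFaceCore

final class SimpleCaptureCoordinator: NSObject, FaceSimpleCaptureViewControllerDelegate {

    private weak var controller: FaceSimpleCaptureViewController?

    func start(in navigationController: UINavigationController) {
        let controller = FaceSimpleCaptureViewController.create(configuration: .init(), style: .init())
        controller.delegate = self
        self.controller = controller
        navigationController.pushViewController(controller, animated: true)
    }

    // Call this e.g. from a button action.
    func captureTapped() {
        controller?.requestCapture()
    }

    func faceSimpleCaptureViewController(_ viewController: FaceSimpleCaptureViewController,
                                         captured detectedFace: DetectedFace) {
        // Create a template from the captured face for later matching.
        let template = try? detectedFace.createTemplate()
    }
}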
Eye Gaze Liveness
The view controller with a moving or fading object on the screen. This component requires the dot-face-detection and dot-face-eye-gaze-liveness modules.
The following properties are available in EyeGazeLivenessConfiguration
:
- (Required) [-] segments: [Segment] – Array of segments for the object animation
- (Optional) [0.10] minFaceSizeRatio: Double – The minimum ratio of the face size to the width of the shorter side of the image
- (Optional) [0.30] maxFaceSizeRatio: Double – The maximum ratio of the face size to the width of the shorter side of the image
- (Optional) [0.5] proximityTolerance: Double – The tolerance of the face size ratio (the tolerance of the distance between the face and the camera). A value greater than 1.0 disables the proximity check
- (Optional) [4] minValidSegmentCount: Int – The minimum number of valid captured segments. The value can be within the interval [4, 7]
- (Optional) [TransitionType.move] transitionType: TransitionType – The transition type used for the liveness detection object animation. Possible values: TransitionType.move, TransitionType.fade
- (Optional) [-] objectImage: UIImage? – The moving object image
- (Optional) [CGSize(width: 50, height: 50)] objectImageSize: CGSize – Size of the moving object
The following properties are available in EyeGazeLivenessStyle
:
- (Optional) [UIColor.white] backgroundColor: UIColor - Background color of the top level view.
- (Optional) [UIColor.black] objectColor: UIColor - Moving object color.
- (Optional) [UIFont.systemFont(ofSize: 12)] hintFont: UIFont - Hint label font.
- (Optional) [UIColor.black] hintTextColor: UIColor - Hint label text color.
- (Optional) [UIColor(red: 0.9, green: 0.9, blue: 0.9, alpha: 1.0)] hintBackgroundColor: UIColor - Hint view background color.
You can customize the color of the default objectImage or replace it with a custom image.
To start the liveness detection process, call the start() method.
You can handle the EyeGazeLivenessViewController events using its delegate, EyeGazeLivenessViewControllerDelegate.
@objc(DOTEyeGazeLivenessViewControllerDelegate) public protocol EyeGazeLivenessViewControllerDelegate: AnyObject {
@objc optional func eyeGazeLivenessViewControllerViewDidLoad(_ viewController: EyeGazeLivenessViewController)
@objc optional func eyeGazeLivenessViewControllerViewDidLayoutSubviews(_ viewController: EyeGazeLivenessViewController)
@objc optional func eyeGazeLivenessViewControllerViewWillAppear(_ viewController: EyeGazeLivenessViewController)
@objc optional func eyeGazeLivenessViewControllerViewDidAppear(_ viewController: EyeGazeLivenessViewController)
@objc optional func eyeGazeLivenessViewControllerViewWillDisappear(_ viewController: EyeGazeLivenessViewController)
@objc optional func eyeGazeLivenessViewControllerViewDidDisappear(_ viewController: EyeGazeLivenessViewController)
@objc optional func eyeGazeLivenessViewControllerViewWillTransition(_ viewController: EyeGazeLivenessViewController)
/// Tells the delegate that you have no permission for camera usage.
@objc optional func eyeGazeLivenessViewControllerNoCameraPermission(_ viewController: EyeGazeLivenessViewController)
/// Tells the delegate that the eye gaze liveness state has changed.
@objc func eyeGazeLivenessViewController(_ viewController: EyeGazeLivenessViewController, stateChanged state: EyeGazeLivenessState)
/// Tells the delegate that eye gaze liveness has finished with score and captured segments.
@objc func eyeGazeLivenessViewController(_ viewController: EyeGazeLivenessViewController, finished score: Float, with segmentImages: [SegmentImage])
/// Tells the delegate that eye gaze liveness cannot continue because there are no segments left.
@objc func eyeGazeLivenessViewControllerNoMoreSegments(_ viewController: EyeGazeLivenessViewController)
/// Tells the delegate that eye gaze liveness failed, because no eyes were detected.
@objc func eyeGazeLivenessViewControllerEyesNotDetected(_ viewController: EyeGazeLivenessViewController)
/// Tells the delegate that face tracking has failed.
@objc func eyeGazeLivenessViewControllerFaceTrackingFailed(_ viewController: EyeGazeLivenessViewController)
}
The liveness detection follows segments: [Segment] and renders an object in the specified corners of the screen. For the best accuracy, it is recommended to display the object in at least three different corners.
If the user's eyes can't be detected in the first segment, the process is terminated with the eyeGazeLivenessViewControllerEyesNotDetected(_:) callback.
The process is automatically finished when the number of valid items in segmentImages: [SegmentImage] reaches minValidSegmentCount. After that, the eyeGazeLivenessViewController(_:finished:with:) callback is called and the score can be evaluated.
The process fails with the eyeGazeLivenessViewControllerNoMoreSegments(_:) callback when all the segments in segments: [Segment] were displayed but it wasn't possible to collect the number of valid images specified in minValidSegmentCount. You can use the segmentImages: [SegmentImage] items for matching purposes even when the eyes weren't detected in a segment.
For a better user experience, it is recommended to give the user more attempts, so the size of segments: [Segment] should be greater than minValidSegmentCount. However, this should be limited, as it is better to terminate the process if the user keeps failing in many segments. The recommended way to generate segments is to use a RandomSegmentsGenerator:
let generator = RandomSegmentsGenerator()
let segmentCount = 8
let segmentDurationMillis = 1000
let segments = generator.generate(segmentCount: segmentCount, segmentDurationMillis: segmentDurationMillis)
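A hedged sketch of wiring the component together with the generated segments. The EyeGazeLivenessConfiguration(segments:) initializer and calling start() directly on the view controller are assumptions following the patterns above; the delegate callbacks come from the protocol listing.
let configuration = EyeGazeLivenessConfiguration(segments: segments)
let controller = EyeGazeLivenessViewController.create(configuration: configuration, style: .init())
controller.delegate = self
navigationController?.pushViewController(controller, animated: true)

// In the delegate:
func eyeGazeLivenessViewControllerViewDidAppear(_ viewController: EyeGazeLivenessViewController) {
    // Start the liveness detection once the view is visible.
    viewController.start()
}

func eyeGazeLivenessViewController(_ viewController: EyeGazeLivenessViewController,
                                   finished score: Float,
                                   with segmentImages: [SegmentImage]) {
    // Evaluate `score` against your threshold, or send the images to a server.
}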
If you want to perform a server-side validation of the liveness detection, please follow this recommended approach: the object movement is generated on your server and then rendered on the device using segments: [Segment]. When the process finishes successfully, segmentImages: [SegmentImage] is transferred to the server to evaluate the liveness detection. Please note that segments: [Segment] is not transferred with it, so you should store it in the server session.
You can evaluate the liveness detection by combining the corresponding segmentImages: [SegmentImage] with segments: [Segment] and sending the request to the DOT Core Server. If the user finished the process without using all segments, the remaining items of segments: [Segment] should be dropped to match the number of items in segmentImages: [SegmentImage], as shown in the sketch below.
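For example, dropping the unused segments so that both arrays have the same length could look like this (a one-line sketch, assuming segments and segmentImages are the arrays described above):
// Keep only the segments that were actually displayed, so that the
// arrays sent to the server for evaluation have matching lengths.
let usedSegments = Array(segments.prefix(segmentImages.count))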
Customization of UI Components
Localization
String resources can be overridden in your application and alternative strings for supported languages can be provided in these two steps:
1. Add your own Localizable.strings file to your project using the standard iOS localization mechanism. To change a specific text, override the corresponding key in this Localizable.strings file.
2. Set the localization bundle to the bundle of your application (preferably during the application launch in your AppDelegate).
Use this setup if you want to use the standard iOS localization mechanism, which means your iOS application uses the system-defined locale.
import DotFaceCore
Localization.bundle = .main
Custom Localization
You can override the standard iOS localization mechanism by providing your own translation dictionary and setting the Localization.useLocalizationDictionary flag to true. Use this setup if you do not want to use the standard iOS localization mechanism, which means your iOS application ignores the system-defined locale and uses its own custom locale.
import DotFaceCore
guard let localizableUrl = Bundle.main.url(forResource: "Localizable", withExtension: "strings", subdirectory: nil, localization: "de"),
let dictionary = NSDictionary(contentsOf: localizableUrl) as? [String: String]
else { return }
Localization.useLocalizationDictionary = true
Localization.localizationDictionary = dictionary
"dot.face_auto_capture.instruction.face_centering" = "Center your face";
"dot.face_auto_capture.instruction.face_too_close" = "Move back";
"dot.face_auto_capture.instruction.face_too_far" = "Move closer";
"dot.face_auto_capture.instruction.lighting" = "Turn towards light";
"dot.face_auto_capture.instruction.glasses_present" = "Remove glasses";
"dot.face_auto_capture.instruction.background_nonuniform" = "Plain background required";
"dot.face_auto_capture.instruction.pitch_too_high" = "Lower your chin";
"dot.face_auto_capture.instruction.pitch_too_low" = "Lift your chin";
"dot.face_auto_capture.instruction.yaw_too_right" = "Look left";
"dot.face_auto_capture.instruction.yaw_too_left" = "Look right";
"dot.face_auto_capture.instruction.eye_status_low" = "Open your eyes";
"dot.face_auto_capture.instruction.mouth_status_low" = "Close your mouth";
"dot.face_auto_capture.instruction.capturing" = "Stay still!";
"dot.eye_gaze_liveness.instruction.watch_object" = "Watch the object";
"dot.eye_gaze_liveness.instruction.lighting" = "Turn towards light";
"dot.eye_gaze_liveness.instruction.face_not_present" = "Look straight";
"dot.eye_gaze_liveness.instruction.face_too_close" = "Move back";
"dot.eye_gaze_liveness.instruction.face_too_far" = "Move closer";
Common Classes
ImageSize
Class which represents a size of an image. To create an instance:
let imageSize = ImageSize(width: 100, height: 100)
BgrRawImage
Class which represents an image.
To create an instance from CGImage:
let bgrRawImage = BgrRawImageFactory.create(cgImage: cgImage)
To create CGImage from BgrRawImage:
let cgImage = CGImageFactory.create(bgrRawImage: bgrRawImage)
FaceImage
Class which represents a face image and can be used for face detection and matching. To create an instance:
let faceImage = FaceImage(image: bgrRawImage)
let faceImage = try? FaceImage(image: bgrRawImage, minFaceSizeRatio: 0.05, maxFaceSizeRatio: 0.3)
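In practice the input often starts as a UIImage; a small sketch of the conversion chain through CGImage using the factories above (uiImage is assumed to be provided by your app):
import UIKit
import DotFaceCore

if let cgImage = uiImage.cgImage {
    // Convert to the library's raw image representation, then wrap it.
    let bgrRawImage = BgrRawImageFactory.create(cgImage: cgImage)
    let faceImage = FaceImage(image: bgrRawImage)
}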
DetectedFace
This class represents the face detection result. The following properties and methods are available:
- image: BgrRawImage – Get the full (original) image of the face.
- confidence: Double – The confidence score of the face detection. It also represents the quality of the detected face.
- createFullFrontalImage() → BgrRawImage – Creates an ICAO full frontal image of a face. If boundaries of the normalized image leak outside of the original image, a white background is applied.
- createTemplate() throws → Template – The face template which can be used for matching. This method requires the dot-face-verification module.
- evaluateFaceAspects() throws → FaceAspects – Evaluates face aspects.
- evaluateFaceQuality() throws → FaceQuality – Evaluates face attributes that can be used for a detailed face quality assessment.
- evaluateFaceQuality(faceQualityQuery: FaceQualityQuery) throws → FaceQuality – Evaluates only specific face attributes that can be used for a detailed face quality assessment. This is the recommended way of face quality evaluation for performance reasons.
- evaluatePassiveLiveness() throws → FaceAttribute – Evaluates passive liveness. This method requires the dot-face-passive-liveness module.
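Taken together, a hedged sketch of detecting a face and evaluating it on a background thread, using only the methods listed above (treating the detection result as an array is an assumption; faceImage is created as shown earlier):
DispatchQueue.global(qos: .userInitiated).async {
    guard let detectedFace = FaceDetector().detect(faceImage: faceImage, maximumFaces: 1).first else {
        return
    }
    print("Detection confidence: \(detectedFace.confidence)")

    // Requires the dot-face-passive-liveness module.
    let passiveLiveness = try? detectedFace.evaluatePassiveLiveness()

    // Requires the dot-face-verification module.
    let template = try? detectedFace.createTemplate()
}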
Appendix
Changelog
4.0.0 - 2021-09-28
Added
DotFaceDetection module and DotFaceDetectionModule class
DotFaceVerification module and DotFaceVerificationModule class
DotFacePassiveLiveness module and DotFacePassiveLivenessModule class
DotFaceEyeGazeLiveness module and DotFaceEyeGazeLivenessModule class
Logger class to improve the logging mechanism
DotFaceDelegate to handle DotFace events
DotFaceConfiguration to wrap all configuration options of DotFace into a single object
BgrRawImage to represent an image suitable for biometric operations
BgrRawImageConverter and CGImageConverter to convert between CGImage and BgrRawImage
protocol SegmentsGenerator and class RandomSegmentsGenerator
FaceQuality and FaceQualityQuery
FaceImageQuality and FaceImageQualityQuery
HeadPose and HeadPoseQuery
HeadPoseAttribute
Wearables and WearablesQuery
Glasses
Expression and ExpressionQuery
EyesExpression and EyesExpressionQuery
FaceAttribute
FaceAspects
Changed
minimal required iOS version to iOS 11.0
DOT iOS Face is split into multiple iOS libraries, see sections Distribution and Initialization in the integration manual
renamed module name DOT to DotFaceCore
renamed DOTHandler to DotFace
DotFace to singleton
DOTHandler.initialize(with license: License? = nil, faceDetectionConfidenceThreshold: Int = 600) to .initialize(configuration: DotFaceConfiguration)
localization keys
renamed DotFaceLocalization to Localization
component "Liveness Detection" to "Eye Gaze Liveness" and all related API
component "Face Capture" to "Face Auto Capture" and all related API
component "Face Capture Simple" to "Face Simple Capture" and all related API
FaceImageVerifier to FaceMatcher
TemplateVerifier to TemplateMatcher
matching scores, face confidence, face attribute scores and attribute quality values are in interval [0.0, 1.0]
FaceImage now uses BgrRawImage
DetectedFace now uses BgrRawImage
renamed QualityAttributeConfiguration to QualityAttribute
renamed DOTRange to ValueRange
renamed QualityAttributeConfigurationProvider to DefaultQualityAttributeRegistry
renamed VerificationQualityProvider to MatchingQualityProvider
renamed DOTSegment to Segment
renamed DotPosition to Corner
default value of FaceAutoCaptureConfiguration.isCheckAnimationEnabled to false
Removed
component "Liveness Detection 2" and all related API
DOTHandler.authorizeCamera()
use native API insteadDOTHandler.logLevel
useLogger.logLevel
insteadLicense
use nativeData
type to represent license fileDOTCamera
and all related API, camera can be configured using "Configuration" classes of UI componentsFace
all API was moved toDetectedFace
CaptureCandidate
useDetectedFace
insteadFaceAttributeScore
IcaoAttribute
andIcaoRangeStatus
3.8.2 - 2021-07-23
Fixed
fixed issue with active liveness making it more seamless
3.8.1 - 2021-06-24
Added
support for interface orientation portraitUpsideDown to all UI components
3.8.0 - 2021-06-17
Changed
updated IFace to 4.10.0 - improved background uniformity algorithm
Removed
FaceAttributeId.yaw, .roll and .pitch, use .yawAngle, .rollAngle and .pitchAngle instead
3.7.1 - 2021-05-10
Fixed
updated IFace to 4.9.1 - minor issue
updated glass status range in QualityAttributeConfigurationRegistry
3.7.0 - 2021-05-03
Changed
updated IFace to 4.9.0 - improved glass status evaluation
3.6.0 - 2021-04-13
Changed
updated IFace to 4.8.0 - improved Passive Liveness algorithm
3.5.1 - 2021-03-19
Added
FaceCaptureStyle.hintTextColor and .hintBackgroundColor
Changed
renamed style properties to be consistent across all UI components
added 'Color' suffix to names of style properties which represent UIColor
3.5.0 - 2021-03-17
Added
DotFaceLocalization class to improve the localization mechanism
CaptureCandidate.init() to initialize with DetectedFace
public access to CaptureCandidate.detectedFace
Changed
updated IFace to 4.4.0
renamed Attribute to FaceAttributeId
renamed Feature to FaceFeature
range of eyeStatus in QualityAttributeConfigurationRegistry
removed DOTHandler.localizationBundle, use DotFaceLocalization.bundle instead
liveness localization keys
CaptureState.yawStep and .pitchStep to .yawAngleStep and .pitchAngleStep
QualityAttribute.yaw and .pitch to .yawAngle and .pitchAngle
ICAO attributes now have yawAngle, pitchAngle and rollAngle instead of yaw, pitch and roll
3.4.2 - 2020-12-16
Added
support for iOS Simulator arm64 architecture
3.4.1 - 2020-11-25
Fixed
FaceCaptureController user interface issues
3.4.0 - 2020-09-03
Changed
updated IFace to 3.13.1
CaptureCandidate.glassStatusDependenciesFulfilled to CaptureCandidate.glassStatusConditionFulfilled
CaptureCandidate.passiveLivenessDependenciesFulfilled to CaptureCandidate.passiveLivenessConditionFulfilled
removed Face.attributeIsDependencyFulfilled, added Face.evaluateAttributeCondition
3.3.1 - 2020-08-18
Fixed
FaceCaptureController layout warnings
3.3.0 - 2020-08-14
Fixed
make sure all background tasks are stopped when LivenessCheckController.stopLivenessCheck() is called
3.2.2 - 2020-08-11
Fixed
improved interface of DOTCamera
3.2.1 - 2020-08-06
Fixed
crash in DOTImage if CGImage is nil
Changed
init DOTImage with CGImage instead of UIImage
updated eye status QualityAttributeConfiguration ranges
3.2.0 - 2020-07-30
Changed
on screen messages during face capture remain shown longer to minimize instruction flickering
changed ranges of QualityAttributeConfigurationRegistry
removed detected face indicator after face capture finished
3.1.0 - 2020-07-10
Added
DOTRange
QualityAttribute
QualityAttributeConfiguration
QualityAttributeConfigurationRegistry
QualityAttributePreset
VerificationQualityProvider
ICAOQualityProvider
PassiveLivenessQualityProvider
Changed
removed useAlternativeInstructions, requestFullImage, requestCropImage, requestTemplate and lightScoreThreshold from FaceCaptureConfiguration
added qualityAttributeConfigurations: Set<QualityAttributeConfiguration> to FaceCaptureConfiguration
added static func validate(configuration: FaceCaptureConfiguration) to FaceCaptureConfiguration
removed requestFullImage, requestCropImage and requestTemplate from FaceCaptureSimpleConfiguration
changed func faceCapture(_ controller: FaceCaptureController, stateChanged state: FaceCaptureState) to func faceCapture(_ controller: FaceCaptureController, stateChanged state: CaptureState, withImage image: DOTImage?) in FaceCaptureControllerDelegate
changed func livenessCheck2(_ controller: LivenessCheck2Controller, captureStateChanged captureState: FaceCaptureState, withImage image: DOTImage?) to func livenessCheck2(_ controller: LivenessCheck2Controller, stateChanged state: CaptureState, withImage image: DOTImage?) in LivenessCheck2ControllerDelegate
3.0.1 - 2020-07-02
Fixed
draw circle around face during face capture
face capture hint label not updating correctly
3.0.0 - 2020-06-15
Changed
updated IFace to 3.10.0
FaceCaptureControllerDelegate returns CaptureCandidate instead of FaceCaptureImage
FaceCaptureSimpleControllerDelegate returns CaptureCandidate instead of FaceCaptureImage