DOT iOS Face Lite library
v1.1.1
Introduction
DOT iOS Face Lite provides components for face capture and related functionalities which are easy to integrate into an iOS application.
Requirements
Xcode 11.4+
iOS 11.0+
Swift or Objective-C
CocoaPods
Distribution
CocoaPods
DOT iOS Face Lite is distributed as an XCFramework - DotFaceLite.xcframework - via CocoaPods, with its dependencies stored in our public GitHub repository. It can be easily integrated into Xcode using a custom podspec source. The first step is to insert the following line at the top of your Podfile:
source 'https://github.com/innovatrics/innovatrics-podspecs'
Then the DOT iOS Face Lite dependency must be specified in the Podfile. The dependencies of DOT iOS Face Lite will be downloaded alongside it.
source 'https://github.com/innovatrics/innovatrics-podspecs'
use_frameworks!
target 'YOUR_TARGET' do
pod 'dot-face-lite'
end
Supported Architectures
DOT iOS Face Lite provides all supported architectures in the distributed XCFramework package.
The device binary contains: arm64.
The simulator binary contains: x86_64, arm64.
Permissions
Set the following permission in Info.plist:
<key>NSCameraUsageDescription</key>
<string>Your usage description</string>
Basic Setup
Logging
DOT iOS Face Lite supports logging using a global Logger
class. You can set the log level as follows:
import DotFaceLite
Logger.logLevel = .debug
Log levels:
info
debug
warning
error
none
Each log message contains the dot-face-lite tag. Keep in mind that logging should be used for debugging purposes only.
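Since logging is meant for debugging only, a common approach is to enable verbose logging in debug builds and silence it in release builds. A minimal sketch:

```swift
import DotFaceLite

// Enable verbose SDK logging only in debug builds; silence it in release builds.
#if DEBUG
Logger.logLevel = .debug
#else
Logger.logLevel = .none
#endif
```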
Components
Overview
DOT iOS Face Lite provides both non-UI and UI components. Non-UI components are intended for developers who want to build their own UI on top of the DOT iOS Face Lite functionality. UI components are built on top of non-UI components. Components with UI are available as UIViewController classes and can be embedded into the application's existing UI or presented using the standard methods.
List of Non-UI Components
- FACE DETECTOR
A component for performing face detection on an image.
- FACE AUTO CAPTURE CONTROLLER
A component for capturing a good quality image of a human face.
List of UI Components
- FACE AUTO CAPTURE
A visual component for capturing a good quality image of a human face.
Non-UI Components
Face Detector
The FaceDetector class provides face detection functionality.
Create a FaceDetector:
let faceDetector = FaceDetector()
To perform detection, call the following method on a background thread:
let result = try? faceDetector.detect(bgraRawImage: bgraRawImage)
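Face detection is CPU intensive, so a common pattern is to convert the camera frame and run the detector off the main thread, then hop back to the main thread with the result. A minimal sketch, assuming you have already obtained a CGImage frame (how you acquire the frame and consume the result is app-specific):

```swift
import DotFaceLite
import UIKit

let faceDetector = FaceDetector()

func detectFace(in cgImage: CGImage) {
    DispatchQueue.global(qos: .userInitiated).async {
        // Convert the frame to the SDK image type and run detection off the main thread.
        let bgraRawImage = BgraRawImageFactory.create(cgImage: cgImage)
        let result = try? faceDetector.detect(bgraRawImage: bgraRawImage)
        DispatchQueue.main.async {
            // Consume the detection result on the main thread, e.g. update the UI.
            _ = result
        }
    }
}
```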
Face Auto Capture Controller
The FaceAutoCaptureController
class provides a stateful face auto capture functionality.
Create a FaceAutoCaptureController:
let configuration = FaceAutoCaptureControllerConfiguration(
    minValidFramesInRowToStartCandidateSelection: 2,
    candidateSelectionDurationMillis: 1000,
    detectionNormalizedRectangle: detectionNormalizedRectangle,
    validators: validators)
let controller = FaceAutoCaptureController(configuration: configuration)
FaceAutoCaptureControllerConfiguration properties:
validators: [FaceAutoCaptureDetectionValidator] (Required) [-] - Array of validators which will be used to validate the input image.
minValidFramesInRowToStartCandidateSelection: Int (Optional) [2] - Minimum number of valid frames in a row to start candidate selection.
candidateSelectionDurationMillis: Int (Optional) [1000] - Duration of the candidate selection phase in milliseconds.
detectionNormalizedRectangle: RectangleDouble (Optional) [-] - Crop the input image to the normalized detection rectangle and use that for face detection.
You can use detectionNormalizedRectangle
to specify the region in the input image which will be used for face detection. For example, if you want to ignore top 30% and bottom 30% of the input image, you can do it as follows:
let detectionNormalizedRectangle = RectangleDouble(left: 0, top: 0.3, right: 1.0, bottom: 0.7)
If detectionNormalizedRectangle is set to nil (the default), the full input image is used for face detection.
To capture a good quality face image, repeatedly call the process()
method using the camera frames:
faceAutoCaptureController.process(bgraRawImage: bgraRawImage, timestampMillis: timestampMillis)
The controller evaluates the face image requirements for each frame. Once the controller detects enough (minValidFramesInRowToStartCandidateSelection) valid frames in a row, candidate selection starts with a duration of candidateSelectionDurationMillis milliseconds. After the candidate selection is finished, the best face image candidate is returned via the delegate and the face auto capture process is over.
In case you want to force the capture event, call the requestCapture() method. The next call to the process() method will then return the input image as the result via the delegate, and the face auto capture process will be finished.
faceAutoCaptureController.requestCapture()
In case you want to restart the face auto capture process, call the restart()
method.
faceAutoCaptureController.restart()
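Putting the controller calls together, a typical integration feeds each camera frame to process() from the capture output callback. The conversion from CMSampleBuffer to BgraRawImage is app-specific; makeBgraRawImage(from:) below is a hypothetical helper, not part of the SDK:

```swift
import AVFoundation
import DotFaceLite

// Called for every camera frame, e.g. from
// AVCaptureVideoDataOutputSampleBufferDelegate.captureOutput(_:didOutput:from:).
func handle(sampleBuffer: CMSampleBuffer) {
    // Hypothetical helper: convert the sample buffer to the SDK image type.
    guard let bgraRawImage = makeBgraRawImage(from: sampleBuffer) else { return }
    let timestampMillis = Int(CMSampleBufferGetPresentationTimeStamp(sampleBuffer).seconds * 1000)
    faceAutoCaptureController.process(bgraRawImage: bgraRawImage, timestampMillis: timestampMillis)
}
```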
UI Components
View Controller Configuration
Components containing UI are embedded into the application as view controllers. All view controllers can be embedded into your own view controller or presented directly. Each view controller can be configured using its *Configuration class, and its appearance can be customized using its *Style class.
To present a view controller:
let controller = FaceAutoCaptureViewController.create(configuration: .init(), style: .init())
controller.delegate = self
navigationController?.pushViewController(controller, animated: true)
To embed a view controller into your own view controller:
override func viewDidLoad() {
super.viewDidLoad()
addChild(viewController)
view.addSubview(viewController.view)
viewController.view.translatesAutoresizingMaskIntoConstraints = false
viewController.didMove(toParent: self)
NSLayoutConstraint.activate([
viewController.view.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor),
viewController.view.leadingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.leadingAnchor),
viewController.view.bottomAnchor.constraint(equalTo: view.safeAreaLayoutGuide.bottomAnchor),
viewController.view.trailingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.trailingAnchor)
])
}
Safe Area
DOT iOS Face Lite view controllers ignore the safe area layout guide when laying out their subviews. Therefore, if you, for example, push a DOT iOS Face Lite view controller using UINavigationController, you will get an incorrect layout. If you want to respect the safe area layout guide, embed the DOT iOS Face Lite view controller in a container view controller and set up the layout constraints accordingly.
Face Auto Capture
A view controller with a placeholder, used for capturing face images.
The following properties are available in FaceAutoCaptureConfiguration:
qualityAttributeThresholds: QualityAttributeThresholds (Optional) [-] - Customize thresholds for the quality attributes of the face auto capture result.
cameraFacing: CameraFacing (Optional) [CameraFacing.front] - Camera facing. Possible values: CameraFacing.front, CameraFacing.back.
cameraPreviewScaleType: CameraPreviewScaleType (Optional) [CameraPreviewScaleType.fit] - The camera preview scale type. Possible values: CameraPreviewScaleType.fit, CameraPreviewScaleType.fill.
isDetectionLayerVisible: Bool (Optional) [false] - Use this flag to show or hide the detection circle during the face auto capture process.
The following properties are available in FaceAutoCaptureStyle:
backgroundColor: UIColor (Optional) [UIColor.black] - Background color of the top level view.
instructionFont: UIFont (Optional) [UIFont.systemFont(ofSize: 16, weight: .semibold)] - Instruction label font.
instructionTextColor: UIColor (Optional) [UIColor(red: 2.0/255.0, green: 27.0/255.0, blue: 65.0/255.0, alpha: 1.0)] - Instruction label text color.
instructionCandidateSelectionTextColor: UIColor (Optional) [UIColor(red: 2.0/255.0, green: 27.0/255.0, blue: 65.0/255.0, alpha: 1.0)] - Instruction label text color during capture.
instructionBackgroundColor: UIColor (Optional) [UIColor(red: 248.0/255.0, green: 251.0/255.0, blue: 251.0/255.0, alpha: 1.0)] - Instruction background color.
instructionCandidateSelectionBackgroundColor: UIColor (Optional) [UIColor(red: 0, green: 191.0/255.0, blue: 178.0/255.0, alpha: 1)] - Instruction background color during capture.
placeholderColor: UIColor (Optional) [UIColor.white] - Placeholder color.
placeholderCandidateSelectionColor: UIColor (Optional) [UIColor(red: 0, green: 191.0/255.0, blue: 178.0/255.0, alpha: 1)] - Placeholder color during capture.
detectionLayerColor: UIColor (Optional) [UIColor.white] - Detection layer color.
overlayColor: UIColor (Optional) [UIColor(red: 19.0/255.0, green: 19.0/255.0, blue: 19.0/255.0, alpha: 0.5)] - Overlay color; a semi-transparent color is recommended.
You can handle the FaceAutoCaptureViewController events using its delegate, FaceAutoCaptureViewControllerDelegate.
@objc(DOTFLFaceAutoCaptureViewControllerDelegate) public protocol FaceAutoCaptureViewControllerDelegate: AnyObject {
/// Tells the delegate that the face was captured.
@objc func faceAutoCaptureViewController(_ viewController: FaceAutoCaptureViewController, captured result: FaceAutoCaptureResult)
/// Tells the delegate that the new detection was processed.
@objc optional func faceAutoCaptureViewController(_ viewController: FaceAutoCaptureViewController, detected detection: FaceAutoCaptureDetection)
/// Tells the delegate that the candidate selection phase has started.
@objc optional func faceAutoCaptureViewControllerCandidateSelectionStarted(_ viewController: FaceAutoCaptureViewController)
/// Tells the delegate that you have no permission for camera usage.
@objc optional func faceAutoCaptureViewControllerNoCameraPermission(_ viewController: FaceAutoCaptureViewController)
@objc optional func faceAutoCaptureViewControllerViewDidLoad(_ viewController: FaceAutoCaptureViewController)
@objc optional func faceAutoCaptureViewControllerViewDidLayoutSubviews(_ viewController: FaceAutoCaptureViewController)
@objc optional func faceAutoCaptureViewControllerViewWillAppear(_ viewController: FaceAutoCaptureViewController)
@objc optional func faceAutoCaptureViewControllerViewDidAppear(_ viewController: FaceAutoCaptureViewController)
@objc optional func faceAutoCaptureViewControllerViewWillDisappear(_ viewController: FaceAutoCaptureViewController)
@objc optional func faceAutoCaptureViewControllerViewDidDisappear(_ viewController: FaceAutoCaptureViewController)
@objc optional func faceAutoCaptureViewControllerViewWillTransition(_ viewController: FaceAutoCaptureViewController)
}
In order to start the face auto capture process, call the start() method.
In case you want to handle detection data, implement the faceAutoCaptureViewController(_:detected:) delegate callback. This callback is called with each processed camera frame. There is also the faceAutoCaptureViewControllerCandidateSelectionStarted(_:) delegate callback, which is called only once during the whole process, when candidate selection starts.
In case you want to force the capture event, call the requestCapture()
method.
In case you want to restart the face auto capture process, call the restart()
method. The whole process will start from the beginning.
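Putting the delegate callbacks together, a host view controller conformance might look like the following sketch. HostViewController is a hypothetical name; only the mandatory capture callback and the camera permission callback are implemented here:

```swift
import DotFaceLite
import UIKit

extension HostViewController: FaceAutoCaptureViewControllerDelegate {

    func faceAutoCaptureViewController(_ viewController: FaceAutoCaptureViewController,
                                       captured result: FaceAutoCaptureResult) {
        // The capture process has finished; leave the component and use the result.
        navigationController?.popViewController(animated: true)
    }

    func faceAutoCaptureViewControllerNoCameraPermission(_ viewController: FaceAutoCaptureViewController) {
        // The app has no camera permission; guide the user to Settings.
    }
}
```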
Customization of UI Components
Localization
String resources can be overridden in your application, and alternative strings for supported languages can be provided, by following these two steps:
1. Add your own Localizable.strings file to your project using the standard iOS localization mechanism. To change a specific text, override the corresponding key in this Localizable.strings file.
2. Set the localization bundle to the bundle of your application (preferably during the application launch in your AppDelegate).
Use this setup if you want to use the standard iOS localization mechanism, which means your iOS application uses the system defined locale.
import DotFaceLite
Localization.bundle = .main
Custom Localization
You can override the standard iOS localization mechanism by providing your own translation dictionary and setting the Localization.useLocalizationDictionary flag to true. Use this setup if you do not want to use the standard iOS localization mechanism, which means your iOS application ignores the system defined locale and uses its own custom locale.
import DotFaceLite
guard let localizableUrl = Bundle.main.url(forResource: "Localizable", withExtension: "strings", subdirectory: nil, localization: "de"),
let dictionary = NSDictionary(contentsOf: localizableUrl) as? [String: String]
else { return }
Localization.useLocalizationDictionary = true
Localization.localizationDictionary = dictionary
"dot.face_lite_auto_capture.instruction.candidate_selection" = "Hold still...";
"dot.face_lite_auto_capture.instruction.face_centering" = "Center your face";
"dot.face_lite_auto_capture.instruction.face_not_present" = "Present your face";
"dot.face_lite_auto_capture.instruction.face_too_far" = "Move closer";
"dot.face_lite_auto_capture.instruction.face_too_close" = "Move back";
"dot.face_lite_auto_capture.instruction.lighting" = "Turn towards light";
Common Classes
ImageSize
A class which represents the size of an image. To create an instance:
let imageSize = ImageSize(width: 100, height: 100)
BgraRawImage
A class which represents an image.
To create an instance from CGImage
:
let bgraRawImage = BgraRawImageFactory.create(cgImage: cgImage)
To create an instance from CIImage
:
let bgraRawImage = BgraRawImageFactory.create(ciImage: ciImage, ciContext: ciContext)
To create CGImage
from BgraRawImage
:
let cgImage = CGImageFactory.create(bgraRawImage: bgraRawImage)
To create CIImage
from BgraRawImage
:
let ciImage = CIImageFactory.create(bgraRawImage: bgraRawImage, ciContext: ciContext)
Appendix
Changelog
1.1.1 - 2022-08-15
Fixed
crash when camera device is not available
camera session lifecycle
camera permission issue
1.0.0 - 2022-05-19
First release