DOT Android Face library
v3.8.0
Introduction
DOT Android Face, part of the DOT Android libraries family, provides components for the digital onboarding process using the latest Innovatrics IFace image processing library. It wraps the core functionality of the library into a higher-level module which is easy to integrate into an Android application. The library contains IFace binaries for the armeabi-v7a, arm64-v8a, x86 and x86_64 architectures.
Components overview
DOT Android Face provides both UI and non-UI components. UI components are available as abstract fragments which can be extended and embedded into the application's existing activity, providing more control. These abstract fragments are built on top of non-UI components. Non-UI components are aimed at developers who want to build their own UI using the DOT Android Face functionality.
List of UI components
- FACE CAPTURE
A visual component for capturing good quality photos and creating templates suitable for verification.
- FACE CAPTURE SIMPLE
A visual component for capturing photos and creating templates suitable for verification without considering photo quality requirements.
- LIVENESS CHECK
A visual component which performs the liveness detection based on object tracking. An object is shown on the screen and the user is instructed to follow the movement of this object.
- LIVENESS CHECK 2
A visual component for capturing face photos and templates, combined with liveness detection based on object tracking. An object is shown on the screen and the user is instructed to follow the movement of this object.
List of non-UI components
- FACE DETECTOR
A component for performing face detection on an image and creating templates as well as computing face features and ICAO attributes.
- TEMPLATE VERIFIER
A component for performing template verification.
- FACE IMAGE VERIFIER
A component for performing face image verification.
Requirements
Android API level 14
Distribution
The library is distributed as an *.aar package stored in the Innovatrics public Maven repository and can be easily integrated into an Android Studio project.
The first step is to include the Innovatrics public Maven repository and the Google repository in your top-level build.gradle file.
allprojects {
    repositories {
        jcenter()
        google()
        maven {
            url 'http://maven.innovatrics.com/releases'
        }
    }
}
Then, specify the dependency on the DOT Android Face library in the module's build.gradle file. Dependencies of this library will be downloaded alongside the library. Version x.y.z must be replaced with the current version of the library.
Two versions of the library are available: standard, and with passive liveness evaluation. The version with passive liveness includes the complete functionality of the standard version plus on-device passive liveness evaluation.
Standard version
dependencies {
    …
    implementation 'com.innovatrics.android:dot-face:x.y.z'
    …
}
Version with passive liveness evaluation
dependencies {
    …
    implementation 'com.innovatrics.android:dot-face-passive-liveness:x.y.z'
    …
}
Sample project
The sample project demonstrates the usage and configuration of DOT Android Face. To run the sample, first import it into Android Studio. A temporary license bound to the sample project is bundled with it.
Permissions
DOT Android Face declares the following permission in AndroidManifest.xml:
<uses-permission android:name="android.permission.CAMERA" />
Supported architectures
DOT Android Face provides binaries for the armeabi-v7a, arm64-v8a, x86 and x86_64 architectures. If APK splits are not specified, the generated APK file will contain binaries for all available architectures. However, DOT Android Face binaries are too large to embed all variants into a single APK file, so we recommend using APK splits. To generate armeabi-v7a, arm64-v8a, x86 and x86_64 APKs, add the following section into your module build.gradle:
splits {
    abi {
        enable true
        reset()
        include 'armeabi-v7a', 'arm64-v8a', 'x86', 'x86_64'
        universalApk false
    }
}
If you do not specify this section, the resulting application can become excessively large.
Proguard
For applications that use Proguard, add the following rules to the Proguard configuration file:
-dontwarn com.sun.jna.**
-dontwarn com.innovatrics.commons.pc.**
# JNA
-keep class com.sun.jna.** { *; }
# Innovatrics IFace
-keep class com.innovatrics.iface.** { *; }
Licensing
In order to use DOT Android Face in other apps, it must be licensed. The license can be compiled into the application as it is bound to the application ID specified in build.gradle:
defaultConfig {
    applicationId "com.innovatrics.android.dot.sample"
    …
}
The license ID, required only once for license generation, can be retrieved as follows:
Log.i(TAG, "LicenseId: " + DotFace.getInstance().getLicenseId());
To obtain the license, please contact your Innovatrics’ representative specifying the License ID. If the application uses build flavors with different application IDs, each flavor must contain a separate license.
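If each build flavor needs its own license, one common pattern is to keep the license file in a flavor-specific raw resource directory so the correct license is bundled automatically. The flavor names, application IDs, and file name below are purely hypothetical:

```groovy
android {
    flavorDimensions "environment"
    productFlavors {
        // Hypothetical flavors; each has its own application ID and
        // therefore needs its own license file.
        demo {
            applicationId "com.example.onboarding.demo"
        }
        production {
            applicationId "com.example.onboarding"
        }
    }
}
// Place each license in the matching source set, e.g.:
//   src/demo/res/raw/dot_license
//   src/production/res/raw/dot_license
```
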
Initialization
Before using any of the DOT Android Face components, you should initialize DOT Android Face with the license. The binary license content must be passed into DotFace.initAsync(). The LicenseUtils.loadRawLicense() utility can be used to load the license from raw resources.
...
if (!DotFace.getInstance().isInitialized()) {
    initialize();
}
...

private void initialize() {
    byte[] license = LicenseUtils.loadRawLicense(context, R.raw.demo_license);
    DotFace.InitializationListener initializationListener = createInitializationListener();
    DotFace.getInstance().initAsync(context, license, initializationListener);
}

private DotFace.InitializationListener createInitializationListener() {
    return new DotFace.InitializationListener() {

        @Override
        public void onSuccess() {
            // implementation
        }

        @Override
        public void onFailure(DotFace.InitializationException exception) {
            // implementation
        }
    };
}
As a result of initialization, a dot folder is created under the application files folder.
DOT Face parameters
You can configure global DOT Face parameters using the DotFaceParameters DTO. Pass it to the DotFace.initAsync() method. Here is an example of building such an object:
DotFaceParameters dotFaceParameters = new DotFaceParameters.Builder()
        .faceDetectionConfidenceThreshold(1000f)
        .build();
Face detection confidence threshold (faceDetectionConfidenceThreshold)
The interval of the confidence score is [0, 10000] and the default value of the threshold is 600. Faces with a confidence score lower than this value are ignored.
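The thresholding behaviour can be illustrated with a plain-Java sketch. The helper below is hypothetical and not part of the library; it only mirrors the rule that faces scoring below the configured threshold are ignored:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class ConfidenceFilter {

    // Hypothetical illustration: faces whose confidence score falls below
    // the configured threshold are dropped, mirroring how the detector
    // ignores them internally.
    static List<Float> keepAboveThreshold(List<Float> confidences, float threshold) {
        return confidences.stream()
                .filter(c -> c >= threshold)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // The default threshold is 600 on the [0, 10000] confidence scale.
        List<Float> detections = Arrays.asList(250f, 600f, 4200f, 9800f);
        System.out.println(keepAboveThreshold(detections, 600f)); // [600.0, 4200.0, 9800.0]
    }
}
```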
Finish using DOT Android Face features
When a process (e.g. onboarding) using DOT Android Face has been completed, it is usually good practice to free the resources it uses. You can do this by calling DotFace.closeAsync(). If you want to use the DOT Android Face components again after that point, you need to call DotFace.initAsync() again. This shouldn't be performed within the lifecycle of individual Android components.
Logging
By default, logging is disabled. You can enable it by using the following method from the com.innovatrics.android.commons.Logger class:
Logger.setLoggingEnabled(true);
The appropriate place for this call is within the onCreate() method of your android.app.Application subclass. Each log message tag starts with the dot-face: prefix.
This setting enables logging for all DOT Android libraries.
Please note that logging should be used only for debugging purposes, as it might produce a lot of log messages.
Fragment configuration (UI components)
Components containing UI are embedded into the application as fragments from the Android Support Library. All fragments are abstract; they must be subclassed and their abstract methods overridden.
Fragments requiring runtime interaction provide public methods, for example startLivenessCheck().
public class DemoLivenessCheckFragment extends LivenessCheckFragment {

    @Override
    protected void onCameraReady() {
        startLivenessCheck();
    }

    …
}
For configuration not intended to be changed at runtime, fragment arguments are available.
FaceCaptureArguments faceCaptureArguments = new FaceCaptureArguments.Builder().build();

Bundle arguments = new Bundle();
arguments.putSerializable(FaceCaptureFragment.ARGUMENTS, faceCaptureArguments);

Fragment fragment = new DemoFaceCaptureFragment();
fragment.setArguments(arguments);

getSupportFragmentManager()
        .beginTransaction()
        .replace(android.R.id.content, fragment)
        .commit();
Arguments are either wrapped by the *Arguments class, in which case you must put them as a Serializable under the ARGUMENTS key of the fragment, or they are defined in abstract classes with the ARG_ prefix.
The Builder.build() method throws an IllegalArgumentException if any of the arguments is not valid. Keep in mind to handle the exception.
UI Components
Face Capture
The fragment with instructions for obtaining quality images suitable for verification. In order to configure the behaviour of FaceCaptureFragment, use FaceCaptureArguments (see Fragment configuration (UI components)).
The following arguments are wrapped in FaceCaptureArguments:
- (Optional) [ID of the first front facing camera] int cameraId – ID of the camera to use. If a device does not have a camera with the specified ID, onCameraAccessFailed() is called
- (Optional) [CENTER_INSIDE] ScaleType cameraPreviewScaleType – The camera preview scale type. Possible values: CENTER_INSIDE, CENTER_CROP
- (Optional) [0.10] double minFaceSizeRatio – The minimum ratio of the face size to the width of the shorter side of the image
- (Optional) [0.24] double maxFaceSizeRatio – The maximum ratio of the face size to the width of the shorter side of the image
- (Optional) [0.30] double lightScoreThreshold – The light score threshold to accept a face image from the camera
- (Optional) [true] boolean showCheckAnimation – Shows a checkmark animation after enrollment (or a static icon on devices which don't support animation)
- (Optional) Set<QualityAttributeConfiguration> qualityConfiguration – Sets the required quality which the output image must meet
If a face present in an image has a face size outside the configured minimum and maximum face size interval, it won't be detected. Please note that a wider face size interval results in lower performance (detection FPS).
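As an illustration of how these ratios relate to detectability, the hypothetical helper below checks a face size against the configured interval. It is a sketch of the rule described above, not library code:

```java
public class FaceSizeRatioCheck {

    // Illustrative only: the ratio of the detected face size to the
    // shorter side of the image, checked against the configured interval.
    static boolean isDetectable(double faceSize, int imageWidth, int imageHeight,
                                double minRatio, double maxRatio) {
        double shorterSide = Math.min(imageWidth, imageHeight);
        double ratio = faceSize / shorterSide;
        return ratio >= minRatio && ratio <= maxRatio;
    }

    public static void main(String[] args) {
        // Defaults for Face Capture: minFaceSizeRatio 0.10, maxFaceSizeRatio 0.24.
        System.out.println(isDetectable(120, 720, 1280, 0.10, 0.24)); // ratio ~0.167: true
        System.out.println(isDetectable(40, 720, 1280, 0.10, 0.24));  // ratio ~0.056, too small: false
    }
}
```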
To use the fragment, create a subclass of FaceCaptureFragment
and override appropriate callbacks:
public class DemoFaceCaptureFragment extends FaceCaptureFragment {

    @Override
    protected void onCameraInitFailed() {
        // Callback implementation
    }

    @Override
    protected void onCameraAccessFailed() {
        // Callback implementation
    }

    @Override
    protected void onNoCameraPermission() {
        // Callback implementation
    }

    @Override
    protected void onCaptureStateChange(CaptureStepId captureState, Photo photo) {
        // Callback implementation
    }

    @Override
    protected void onCaptureSuccess(DetectedFace detectedFace) {
        // Callback implementation
    }
}
The face size is defined as the larger of the two: the eye distance and the eye-to-mouth distance (the distances are shown in the picture below).
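This definition can be expressed directly. The helper below is illustrative only, not part of the library API:

```java
public class FaceSize {

    // Per the definition above: the face size is the larger of the eye
    // distance and the eye-to-mouth distance, both measured in the image.
    static float faceSize(float eyeDistance, float eyeMouthDistance) {
        return Math.max(eyeDistance, eyeMouthDistance);
    }

    public static void main(String[] args) {
        System.out.println(faceSize(62f, 75f)); // 75.0
    }
}
```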
CaptureStepId events are emitted when the user enters each step:
- PRESENCE
- PROXIMITY
- POSITION
- BACKGROUND_UNIFORMITY
- PITCH_ANGLE
- YAW_ANGLE
- EYE_STATUS
- GLASS_STATUS
- MOUTH_STATUS
- LIGHT
Quality attributes of the output image
You may adjust quality requirements for the output image. To do so, use one of the QualityProvider implementations with recommended values and pass this configuration via FaceCaptureArguments by setting qualityConfiguration. You can also extend the default implementations according to your needs.
For example, if you wish to capture an image suitable for verification but you also want to make sure a user doesn’t wear glasses, you can use the following implementation:
static class VerificationWithGlassStatusQualityProvider extends VerificationQualityProvider {

    final Set<QualityAttributeConfiguration> configurationSet;

    public VerificationWithGlassStatusQualityProvider() {
        configurationSet = new HashSet<>(super.getQualityAttributeConfigurationSet());
        configurationSet.add(new DefaultQualityRegistry().getQualityAttributeConfigurationForId(QualityAttributeId.GLASS_STATUS));
    }

    @Override
    public Set<QualityAttributeConfiguration> getQualityAttributeConfigurationSet() {
        return configurationSet;
    }

    @Override
    public QualityAttributeConfiguration getQualityAttributeConfigurationForId(QualityAttributeId qualityAttributeId) {
        return QualityAttributeConfiguration.getQualityAttributeById(configurationSet, qualityAttributeId);
    }
}
See DefaultQualityRegistry for default values and all available quality configuration attributes.
Available quality providers:
- VerificationQualityProvider – The resulting image is suitable for verification.
- PassiveLivenessQualityProvider – The resulting image is suitable for evaluation of the passive liveness.
- IcaoQualityProvider – The resulting image passes ICAO checks.
Face Capture Simple
The fragment for obtaining images for verification without considering any photo quality requirements.
In order to configure the behavior of FaceCaptureSimpleFragment, use FaceCaptureSimpleArguments (see Fragment configuration (UI components)).
The following arguments are wrapped in FaceCaptureSimpleArguments:
- (Optional) [ID of the first front facing camera] int cameraId – ID of the camera to use. If a device does not have a camera with the specified ID, onCameraAccessFailed() is called
- (Optional) [CENTER_INSIDE] ScaleType cameraPreviewScaleType – The camera preview scale type. Possible values: CENTER_INSIDE, CENTER_CROP
- (Optional) [0.10] double minFaceSizeRatio – The minimum ratio of the face size to the width of the shorter side of the image
- (Optional) [0.24] double maxFaceSizeRatio – The maximum ratio of the face size to the width of the shorter side of the image
If a face present in an image has a face size outside the configured minimum and maximum face size interval, it won't be detected. Please note that a wider face size interval results in lower performance (detection FPS).
To use the fragment, create a subclass of FaceCaptureSimpleFragment and override the appropriate callbacks:
public class DemoFaceCaptureSimpleFragment extends FaceCaptureSimpleFragment {

    @Override
    public void onResume() {
        super.onResume();
        requestPhoto();
    }

    @Override
    protected void onCameraInitFailed() {
        // Callback implementation
    }

    @Override
    protected void onCameraAccessFailed() {
        // Callback implementation
    }

    @Override
    protected void onNoCameraPermission() {
        // Callback implementation
    }

    @Override
    protected void onCapture(DetectedFace detectedFace) {
        // Callback implementation
    }
}
Capture starts when the requestPhoto() method is called. In the example above, it starts immediately in onResume().
Liveness Detection
The fragment with a moving or a fading object on the screen.
In order to configure the behavior of LivenessCheckFragment, use LivenessCheckArguments (see Fragment configuration (UI components)).
The following arguments are wrapped in LivenessCheckArguments:
- (Optional) [ID of the first front facing camera] int cameraId – ID of the camera to use. If a device does not have a camera with the specified ID, onCameraAccessFailed() is called. The specified camera must be a front facing camera, otherwise IllegalArgumentException is thrown
- (Optional) [800x600] CameraSize preferredCameraSize – Sets the camera resolution for liveness detection (the resolution must be supported by the device; if it isn't, the default resolution strategy is applied)
- (Optional) [0.10] double minFaceSizeRatio – The minimum ratio of the face size to the width of the shorter side of the image
- (Optional) [0.28] double maxFaceSizeRatio – The maximum ratio of the face size to the width of the shorter side of the image
- (Optional) [0.5] double proximityTolerance – The tolerance of the face size ratio (the tolerance of the distance between the face and the camera). A value greater than 1.0 disables the proximity check
- (Required) [-] List<SegmentConfiguration> segmentList – Segments for the object animation
- (Optional) [4] int minValidSegmentCount – The minimum number of valid captured segments. The value can be within the interval [4, 7]
- (Optional) [FADING] LivenessCheckArguments.TransitionType transitionType – The transition type used for the liveness detection object animation. Possible values: MOVING, FADING
- (Optional) [50] int dotSize – The dot size for the animation in dp
- (Optional) [-] Integer dotColorResId – The color resource ID for the dot animation
- (Optional) [-] Integer drawableResId – The drawable resource ID for a custom drawable used instead of the dot
- (Optional) [-] Integer backgroundColorResId – The color resource ID for the liveness screen background
To restart the liveness detection process, call startLivenessCheck(). To reset the view, call restartTransitionView().
The table below describes how FAR and FRR change with the threshold value.
FAR levels | FAR [%] | FRR [%] | Threshold |
---|---|---|---|
1:5 | 19.792 | 4.255 | 0.923700869083 |
1:10 | 9.375 | 6.383 | 0.977084677219 |
1:20 | 4.167 | 6.383 | 0.9999781847 |
1:50 | 1.042 | 9.574 | 0.999994039536 |
1:100 | 0.000 | 9.574 | 0.99999976158 |
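Assuming the score reported by onLivenessCheckDone() is accepted when it meets or exceeds the chosen threshold, a decision step could be sketched as below. The helper and the pass/fail convention are assumptions for illustration, not library API:

```java
public class LivenessDecision {

    // Assumption: a liveness check passes when the score returned by
    // onLivenessCheckDone() meets or exceeds the chosen threshold.
    static boolean isLive(float score, double threshold) {
        return score >= threshold;
    }

    public static void main(String[] args) {
        double threshold = 0.9999781847; // the 1:20 FAR level from the table above
        System.out.println(isLive(0.99999f, threshold)); // true
        System.out.println(isLive(0.95f, threshold));    // false
    }
}
```

Picking a higher threshold from the table lowers FAR (accepting spoofs) at the cost of a higher FRR (rejecting genuine users).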
public class DemoLivenessCheckFragment extends LivenessCheckFragment {

    @Override
    protected void onCameraReady() {
        startLivenessCheck();
    }

    @Override
    protected void onCameraInitFailed() {
        // Callback implementation
    }

    @Override
    protected void onCameraAccessFailed() {
        // Callback implementation
    }

    @Override
    protected void onLivenessStateChange(FaceLivenessState faceLivenessState) {
        if (faceLivenessState == FaceLivenessState.LOST) {
            restartTransitionView();
            startLivenessCheck();
        }
    }

    @Override
    protected void onLivenessCheckDone(float score, List<SegmentPhoto> segmentPhotoList) {
        // Callback implementation
    }

    @Override
    protected void onLivenessCheckFailedNoMoreSegments() {
        // Callback implementation
    }

    @Override
    protected void onLivenessCheckFailedEyesNotDetected() {
        // Callback implementation
    }

    @Override
    protected void onLivenessCheckFailedFaceTrackingFailed() {
        // Callback implementation
    }

    @Override
    protected void onNoCameraPermission() {
        // Callback implementation
    }

    @Override
    protected void onNoFrontCamera() {
        // Callback implementation
    }
}
The liveness detection follows List<SegmentConfiguration> segmentList
and renders an object in the specified corners of the screen. For the best accuracy it is recommended to display the object in at least three different corners.
If the user’s eyes can’t be detected in the first segment, the process terminates with the onLivenessCheckFailedEyesNotDetected() callback. If the eyes aren’t detected in any of the other segments, the corresponding SegmentPhoto is marked with the accepted flag set to false in onLivenessCheckDone().
The process is automatically finished when the number of accepted items in segmentPhotoList reaches minValidSegmentCount. After that, onLivenessCheckDone is called and the score can be evaluated.
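The completion rule can be sketched in plain Java. The helper below is hypothetical, standing in for the internal check on the accepted flags of the captured segment photos:

```java
import java.util.Arrays;
import java.util.List;

public class SegmentProgress {

    // Illustrative sketch: the check is complete once the number of
    // accepted segment photos reaches minValidSegmentCount.
    static boolean isFinished(List<Boolean> acceptedFlags, int minValidSegmentCount) {
        long accepted = acceptedFlags.stream().filter(a -> a).count();
        return accepted >= minValidSegmentCount;
    }

    public static void main(String[] args) {
        List<Boolean> flags = Arrays.asList(true, true, false, true, true);
        System.out.println(isFinished(flags, 4)); // true: 4 accepted segments
    }
}
```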
The process fails with the onLivenessCheckFailedNoMoreSegments() callback when all segments in List<SegmentConfiguration> segmentList have been displayed but it wasn’t possible to collect the number of accepted images specified by minValidSegmentCount.
You can use SegmentPhoto items for verification purposes, even when the eyes weren’t detected in a segment and the accepted flag is set to false.
For a better user experience, it is recommended to give the user more attempts, so the size of List<SegmentConfiguration> segmentList should be greater than minValidSegmentCount. However, this should be limited, as it is better to terminate the process if the user fails in many segments. The recommended implementation of segment generation can be found in the DOT Android Kit Sample:
private List<SegmentConfiguration> createSegmentConfigurationList() {
    List<SegmentConfiguration> list = new ArrayList<>();
    for (int i = 0; i < 8; i++) {
        DotPosition position = DotPosition.getRandomPositionExclude(Arrays.asList(
                i > 0 ? list.get(i - 1).getTargetPosition() : null,
                i > 1 ? list.get(i - 2).getTargetPosition() : null));
        list.add(new SegmentConfiguration(position.name(), 1000));
    }
    return list;
}
If you want to perform a server-side validation of the liveness detection, please follow this recommended approach:
The object movement is generated on your server and then rendered on the device using List<SegmentConfiguration> segmentList. When the process finishes successfully, the List<SegmentPhoto> segmentPhotoList is transferred to the server to evaluate the liveness detection. Please note that segmentList is not transferred back, so you should store it in the server session.
You can evaluate the liveness detection by combining the corresponding segmentPhotoList with segmentList and sending the request to DOT Core Server. If the user finished the process without using all segments, the remaining items of segmentList should be dropped to match the number of items in segmentPhotoList.
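The trimming step can be sketched as follows. The helper is hypothetical and uses plain strings in place of real segment configurations:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SegmentListTrimmer {

    // Server-side sketch: drop the unused tail of the stored segment list
    // so it lines up with the segment photos actually captured on device.
    static <T> List<T> trimToMatch(List<T> segmentList, int segmentPhotoCount) {
        return new ArrayList<>(segmentList.subList(0, segmentPhotoCount));
    }

    public static void main(String[] args) {
        List<String> segmentList = Arrays.asList("TOP_LEFT", "TOP_RIGHT",
                "BOTTOM_LEFT", "BOTTOM_RIGHT", "TOP_LEFT", "BOTTOM_LEFT");
        // The user finished after 4 segments, so 4 photos were uploaded.
        System.out.println(trimToMatch(segmentList, 4));
        // [TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_RIGHT]
    }
}
```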
Liveness Detection 2
The fragment with a moving or a fading object on the screen. In order to configure the behavior of LivenessCheck2Fragment, use LivenessCheck2Arguments (see Fragment configuration (UI components)).
The following arguments are wrapped in LivenessCheck2Arguments:
- (Optional) [ID of the first front facing camera] int cameraId – ID of the camera to use. If a device does not have a camera with the specified ID, onCameraAccessFailed() is called. The specified camera must be a front facing camera, otherwise IllegalArgumentException is thrown
- (Optional) [CENTER_INSIDE] ScaleType cameraPreviewScaleType – The camera preview scale type. Possible values: CENTER_INSIDE, CENTER_CROP
- (Optional) [800x600] CameraSize preferredCameraSize – Sets the camera resolution for the liveness detection (the resolution must be supported by the device; if it isn't, the default resolution strategy is applied)
- (Optional) [0.10] double minFaceSizeRatio – The minimum ratio of the face size to the width of the shorter side of the image
- (Optional) [0.28] double maxFaceSizeRatio – The maximum ratio of the face size to the width of the shorter side of the image
- (Optional) [0.25] double positionTolerance – The tolerance of the distance between the face center and the screen center. The value can be within the interval [0, 1]
- (Optional) [0.5] double proximityTolerance – The tolerance of the face size ratio (the tolerance of the distance between the face and the camera). A value greater than 1.0 disables the proximity check
- (Optional) [0.35] double lightScoreThreshold – The minimum value of the light score
- (Optional) [false] boolean requestFullImage – Requests the original face image
- (Optional) [false] boolean requestCropImage – Requests the face image cropped to the ICAO standard
- (Optional) [false] boolean requestTemplate – Requests the template extraction
- (Required) [-] List<SegmentConfiguration> segmentList – Segments for the object animation
- (Optional) [4] int minValidSegmentCount – The minimum number of valid captured segments. The value can be within the interval [4, 7]
- (Optional) [FADING] LivenessCheck2Arguments.TransitionType transitionType – The transition type used for the liveness detection object animation. Possible values: MOVING, FADING
- (Optional) [50] int dotSize – The dot size for the animation in dp
- (Optional) [-] Integer dotColorResId – The color resource ID for the dot animation
To restart the liveness detection process, call startLivenessCheck().
public class DemoLivenessCheck2Fragment extends LivenessCheck2Fragment {

    @Override
    protected void onCameraInitFail() {
        // Callback implementation
    }

    @Override
    protected void onCameraAccessFailed() {
        // Callback implementation
    }

    @Override
    protected void onNoCameraPermission() {
        // Callback implementation
    }

    @Override
    protected void onCaptureStateChange(CaptureStepId captureState, Photo photo) {
        // Callback implementation
    }

    @Override
    protected void onCaptureSuccess(DetectedFace detectedFace) {
        // Callback implementation
    }

    @Override
    protected void onLivenessStateChange(FaceLivenessState faceLivenessState) {
        if (faceLivenessState == FaceLivenessState.LOST) {
            startLivenessCheck();
        }
    }

    @Override
    protected void onLivenessCheckDone(float score, List<SegmentPhoto> segmentPhotoList) {
        // Callback implementation
    }

    @Override
    protected void onLivenessCheckFailNoMoreSegments() {
        // Callback implementation
    }

    @Override
    protected void onLivenessCheckFailEyesNotDetected() {
        // Callback implementation
    }

    @Override
    protected void onLivenessCheckFailFaceTrackingFailed() {
        // Callback implementation
    }
}
Customization of UI components
Strings
You can override the string resources in your application and provide alternative strings for supported languages using the standard Android localization mechanism.
<!-- Face Capture -->
<string name="dot_face_capture_instruction_step_position_centering">Center your face</string>
<string name="dot_face_capture_instruction_step_position_too_close">Move back</string>
<string name="dot_face_capture_instruction_step_position_face_not_straight">Look straight</string>
<string name="dot_face_capture_instruction_step_position_eye_status_low">Open your eyes</string>
<string name="dot_face_capture_instruction_step_position_mouth_status_low">Close your mouth</string>
<string name="dot_face_capture_instruction_step_position_pitch_high">Lower your chin</string>
<string name="dot_face_capture_instruction_step_position_pitch_low">Lift your chin</string>
<string name="dot_face_capture_instruction_step_position_yaw_high">Look left</string>
<string name="dot_face_capture_instruction_step_position_yaw_low">Look right</string>
<string name="dot_face_capture_instruction_step_position_too_far">Move closer</string>
<string name="dot_face_capture_instruction_step_lighting">Turn towards light</string>
<string name="dot_face_capture_instruction_step_remove_glasses">Remove glasses</string>
<string name="dot_face_capture_instruction_step_background_uniformity_invalid">Plain background required</string>
<string name="dot_face_capture_instruction_step_capture">Stay still…</string>
<!-- Liveness Detection -->
<string name="dot_liveness_check_instruction_watch_object">Watch the object</string>
<string name="dot_liveness_check_instruction_watch_object_no_eyes">Can\'t see your eyes. Watch the object.</string>
<string name="dot_liveness_check_instruction_look_straight">Look straight</string>
<string name="dot_liveness_check_instruction_low_quality_face">Move towards light</string>
Colors
You may customize the colors used by DOT Android Face in your application. To use custom colors, override the specific color.
<!-- Common -->
<color name="dot_common_background">#e1e1e1</color>
<color name="dot_common_error">#dc4431</color>
<!-- Face Capture -->
<color name="dot_face_capture_background_overlay">#e1ffffff</color>
<color name="dot_face_capture_circle_outline">#ffffff</color>
<color name="dot_face_capture_tracking_circle_outline">#1e000000</color>
<color name="dot_face_capture_tracking_circle_background">#78ffffff</color>
<color name="dot_face_capture_progress_valid">#88b661</color>
<color name="dot_face_capture_progress_intermediate">#ed8500</color>
<color name="dot_face_capture_progress_invalid">#dc4232</color>
<color name="dot_face_capture_instruction_text">#ff000000</color>
<color name="dot_face_capture_instruction_text_background">#ffffffff</color>
<color name="dot_face_capture_instruction_text_stay_still">#ffffffff</color>
<color name="dot_face_capture_instruction_text_background_stay_still">#88b661</color>
<!-- Liveness Detection -->
<color name="dot_liveness_check_instruction_text">#ff000000</color>
<color name="dot_liveness_check_instruction_text_background">#ffffffff</color>
Styles
You can style the text views and buttons by overriding the parent style in the application. The default style is AppCompat.
<style name="TextAppearance.Dot.Medium" parent="TextAppearance.AppCompat.Medium" />
<style name="Widget.Dot.Button.Colored" parent="Widget.AppCompat.Button.Colored" />
Non-UI components
Face detector
The FaceDetector class provides the face detection functionality without the use of UI components. Face detection stops when maximumFaces is reached.
To perform detection, call the following method on a background thread:
List<DetectedFace> detectFaces(FaceImage image, int maximumFaces);
Template Verifier
In order to verify face templates (1:1), use the TemplateVerifier class. The recommended approach is to create face templates using FaceCapture or FaceDetector and use only the templates for verification.
float match(byte[] referenceTemplate, byte[] probeTemplate) throws TemplateVerifierException;
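A typical decision step on top of the returned match score might look like the sketch below. The helper class and threshold value are placeholders; the score scale and an appropriate threshold are not specified here and must be determined for your deployment:

```java
public class VerificationDecision {

    // Placeholder threshold, NOT a value from the library documentation;
    // calibrate against your own data before use.
    static final float DECISION_THRESHOLD = 0.5f;

    // Assumption for this sketch: higher match() scores mean a closer match.
    static boolean isSamePerson(float matchScore) {
        return matchScore >= DECISION_THRESHOLD;
    }

    public static void main(String[] args) {
        System.out.println(isSamePerson(0.82f)); // true with this placeholder threshold
        System.out.println(isSamePerson(0.12f)); // false
    }
}
```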
Face Image Verifier
In order to verify or identify face images (1:1 or 1:N), use the FaceImageVerifier class. It is also possible to verify a face image or an array of face images against a template (this is the recommended approach if you already have a reference template available).
List<Float> match(FaceImage referenceFaceImage, List<FaceImage> probeFaceImageList) throws FaceImageVerifierException;
List<Float> match(byte[] referenceTemplate, List<FaceImage> probeFaceImageList) throws FaceImageVerifierException;
float match(FaceImage referenceFaceImage, FaceImage probeFaceImage) throws FaceImageVerifierException;
float match(byte[] referenceTemplate, FaceImage probeFaceImage) throws FaceImageVerifierException;
Common classes
FaceImage
The entity which can be used for face detection and verification.
To create FaceImage from Bitmap:
public static FaceImage create(Bitmap image)
or:
public static FaceImage create(Bitmap image, double minFaceSizeRatio, double maxFaceSizeRatio)
DetectedFace
This entity provides information about a detected face. The following methods are available:
- Bitmap createFullImage() – Creates a full (original) image of the face.
- float getEyeDistance() – The distance between the eyes in the original image.
- float getConfidence() – The confidence score of the face detection. It also represents the quality of the detected face.
- FaceTemplateData createTemplate() – The face template which can be used for verification.
- Bitmap createCroppedImage() – Creates an ICAO full frontal image of the face. If boundaries of the normalized image leak outside of the original image, a white background is applied.
- Map<FaceFeatureId, FaceFeaturePoint> createFaceFeatures() – Creates a collection of significant points of the detected face. Positions are absolute to the original input image.
- Map<com.innovatrics.android.dot.face.facemodel.FaceAttributeId, FaceAttribute> createFaceAttributes(List<com.innovatrics.android.dot.face.facemodel.FaceAttributeId> attributeIdList) – Creates a collection of face attributes.
- Map<IcaoAttributeId, IcaoAttribute> createIcaoAttributes(List<IcaoAttributeId> attributeIdList) – Creates a collection of ICAO attributes that can be used for a detailed face quality assessment.
Face attributes
You can get these face attributes using the DetectedFace class:
Name | Description |
---|---|
glassStatus | The face attribute for evaluating glasses presence. Glasses values are within the interval [-10000,10000]. Values near -10000 indicate 'no glasses present', values near 10000 indicate 'glasses present'. The decision threshold is around 0. This attribute can also be taken as an ICAO feature. |
passiveLiveness | The face attribute for evaluating the passive liveness score of a face. Passive liveness score values are within the interval [-10000,10000]. Values near -10000 indicate 'face not live', values near 10000 indicate 'face live'. You can use the |
Also, you should check the isDependenciesFulfilled() method of the FaceAttribute to ensure all dependencies needed for evaluation are fulfilled. If the dependencies aren’t fulfilled, the attribute score will still be computed, but the accuracy of the results isn’t guaranteed.
ICAO attributes
You can get the ICAO attributes using the DetectedFace class. The following attributes can be taken as ICAO features:
| Name | Description |
|---|---|
| backgroundUniformity | The face attribute for evaluating whether the background is uniform. Background uniformity values are within the interval [-10000,10000]. Values near -10000 indicate 'very non-uniform background present', values near 10000 indicate 'uniform background present'. The decision threshold is around 0. |
| brightness | The face attribute for evaluating whether an area of the face is correctly exposed. Brightness values are within the interval [-10000,10000]. Values near -10000 indicate 'too dark', values near 10000 indicate 'too light', values around 0 indicate OK. The decision thresholds are around -5000 and 5000. |
| contrast | The face attribute for evaluating whether an area of the face has sufficient contrast. Contrast values are within the interval [-10000,10000]. Values near -10000 indicate 'very low contrast', values near 10000 indicate 'very high contrast', values around 0 indicate OK. The decision thresholds are around -5000 and 5000. |
| eyeStatusLeft | The face attribute for evaluating the left eye status. Left eye values are within the interval [-10000,10000]. Values near -10000 indicate 'closed, narrowed or bulged eye', values near 10000 indicate 'normally opened eye'. The decision threshold is around 0. |
| eyeStatusRight | The face attribute for evaluating the right eye status. Right eye values are within the interval [-10000,10000]. Values near -10000 indicate 'closed, narrowed or bulged eye', values near 10000 indicate 'normally opened eye'. The decision threshold is around 0. |
| mouthStatus | The face attribute for evaluating the mouth status. Mouth status values are within the interval [-10000,10000]. Values near -10000 indicate 'open mouth, smile showing teeth or round lips present', values near 10000 indicate 'mouth with no expression'. The decision threshold is around 0. |
| pitchAngle | The face attribute representing the angle of rotation of the head towards the camera reference frame around the X-axis, as per DIN9300. |
| rollAngle | The face attribute representing the angle of rotation of the head towards the camera reference frame around the Z-axis, as per DIN9300. |
| shadow | The face attribute for evaluating whether an area of the face is overshadowed. Shadow values are within the interval [-10000,10000]. Values near -10000 indicate 'very strong global shadows present', values near 10000 indicate 'no global shadows present'. The decision threshold is around 0. |
| sharpness | The face attribute for evaluating whether an area of the face image is blurred. Sharpness values are within the interval [-10000,10000]. Values near -10000 indicate 'very blurred', values near 10000 indicate 'very sharp'. The decision threshold is around 0. |
| specularity | The face attribute for evaluating whether spotlights are present on the face. Specularity values are within the interval [-10000,10000]. Values near -10000 indicate 'very strong specularity present', values near 10000 indicate 'no specularity present'. The decision threshold is around 0. |
| uniqueIntensityLevels | The face attribute for evaluating whether an area of the face has an appropriate number of unique intensity levels. Unique intensity levels values are within the interval [-10000,10000]. Values near -10000 indicate 'very few unique intensity levels', values near 10000 indicate 'enough unique intensity levels'. The decision threshold is around 0. |
| yawAngle | The face attribute representing the angle of rotation of the head towards the camera reference frame around the Y-axis, as per DIN9300. |
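As a worked illustration of the ranges above, the two threshold patterns in the table can be applied as follows. This is a hypothetical helper: the class and method names are not part of the DOT Android Face API, only the documented score semantics are.

```java
// Hypothetical helper illustrating the documented ICAO score semantics;
// IcaoScoreCheck and its methods are NOT part of the DOT Android Face API.
final class IcaoScoreCheck {
    // Single-threshold attributes (e.g. sharpness, shadow, specularity):
    // values near 10000 are good, the decision threshold is around 0.
    static boolean passesZeroThreshold(int score) {
        return score > 0;
    }

    // Two-threshold attributes (brightness, contrast): values around 0
    // indicate OK, the decision thresholds are around -5000 and 5000.
    static boolean withinMidRange(int score) {
        return score > -5000 && score < 5000;
    }
}
```

For example, a sharpness score of 8000 passes the zero threshold, while a brightness score of 8000 fails the mid-range check because the image is too light.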
Appendix
Changelog
3.8.0 - 2021-06-17
Changed
Update IFace to 4.10.0 - improved background uniformity algorithm.
Fixed
Requesting camera permission if it is already denied.
3.7.1 - 2021-05-10
Fixed
Update IFace to 4.9.1 - minor issue.
Update glass status range in DefaultQualityRegistry.
3.7.0 - 2021-05-03
Changed
Update IFace to 4.9.0 - improved glass status evaluation.
3.6.0 - 2021-04-12
Changed
Update IFace to 4.8.0 - improved passive liveness algorithm.
3.5.0 - 2021-03-17
Added
DotFaceParameters DTO.
DotFace.InitializationException exception.
Changed
Update IFace to 4.4.0 - face templates are incompatible and must be regenerated.
Signature of DotFace.initAsync() method.
Signature of DotFace.closeAsync() method.
DotFace.Listener to DotFace.InitializationListener and DotFace.CloseListener.
Ranges of DefaultQualityRegistry.
CaptureStepId.PITCH to CaptureStepId.PITCH_ANGLE.
CaptureStepId.YAW to CaptureStepId.YAW_ANGLE.
IcaoAttributeId.PITCH to IcaoAttributeId.PITCH_ANGLE.
IcaoAttributeId.ROLL to IcaoAttributeId.ROLL_ANGLE.
IcaoAttributeId.YAW to IcaoAttributeId.YAW_ANGLE.
QualityAttributeId.PITCH to QualityAttributeId.PITCH_ANGLE.
QualityAttributeId.YAW to QualityAttributeId.YAW_ANGLE.
Fixed
DotFace.initAsync() behavior when DOT Android Face is already initialized.
DotFace.closeAsync() behavior when DOT Android Face is not initialized.
3.4.0 - 2021-02-01
Changed
Update target Android SDK version to 30 (Android 11).
FaceCaptureArguments: change cameraFacing to cameraId.
FaceCaptureSimpleArguments: change cameraFacing to cameraId.
LivenessCheckArguments: change cameraFacing to cameraId.
LivenessCheck2Arguments: change cameraFacing to cameraId.
3.3.1 - 2020-09-23
Fixed
Animations not working in rare cases for active liveness.
3.3.0 - 2020-09-04
Changed
Adjusted default ranges for quality providers.
Update IFace to 3.13.1 - face templates are incompatible and must be regenerated.
Background uniformity calculation improved and added to IcaoQualityProvider.
3.2.2 - 2020-08-04
Fixed
QualityProvider and QualityAttributeId added to public API.
3.2.1 - 2020-07-31
Added
Add stay still instruction color configuration.
Fixed
Stay still indicator not colored during capture.
3.2.0 - 2020-07-30
Changed
On-screen messages during face capture remain shown longer to minimize instruction flickering.
Changed ranges of DefaultQualityRegistry and made it public.
Removed detected face indicator in FaceCaptureFragment during animation if showCheckAnimation is set.
Fixed
Fix camera preview freezing.
3.1.1 - 2020-07-13
Added
New FaceAttributes section to documentation.
On device passive liveness evaluation provided by FaceAttributes. Artifact dot-face-passive-liveness must be used for this functionality.
QualityProvider implementations - VerificationQualityProvider, PassiveLivenessQualityProvider, IcaoQualityProvider - which can be used by FaceCaptureFragment.
New CaptureStepId events available for FaceCaptureFragment - PITCH, YAW, EYE_STATUS, GLASS_STATUS and MOUTH_STATUS. These events are added by a specific QualityProvider and instructions for these steps can be customized; see documentation for details.
Changed
Removed alternative instructions for FaceCaptureFragment.
Fixed
Crash in Liveness Detection when track is called without init.
Crash during premature finish of Liveness Detection 2.
Bug which prevented liveness detection from completing when animations are disabled.
Rare crash during face capture.
3.0.0 - 2020-06-02
Changed
New major release: DOT Android Kit becomes DOT Android Face - library focused on facial recognition.
Update IFace to 3.10.1 - face templates are incompatible and must be regenerated.
Removed
onCaptureFail() in FaceCaptureFragment and onFaceCaptureFail() in LivenessCheck2Fragment. Need for these callbacks was eliminated by internal rework.
Calculate min and max face size ratio from width of the image in FaceDetector. Keep calculation from shorter side (height) in landscape mode for UI components.
Fixed
Rare sudden change of dot direction in dot tracking liveness detection.
Crash during premature finish of Liveness Detection 2.