Palm Verification

Palm Verification is a method for authenticating a live person online while preserving their privacy. It requires no special hardware—only a common smartphone. The feature includes client components for automatic palm photo capture, while the server provides palm image comparison and presentation attack detection.

When creating an account that uses palm verification, the user must take a photo of their palm. This reference image is stored on the server and used for comparison with future palm images during verification.

Benefits

Compared to facial biometrics, palm biometrics offer several advantages:

  • Security - Palm images are not typically available on social media.
  • Privacy - Palm images cannot be matched against facial images or ID documents, allowing biometric use without compromising privacy.
  • Consent - Capturing a palm image requires active consent. A person must present their palm to the camera, making it difficult to capture without permission.
  • Accuracy - A palm contains more biometric information than a single fingerprint or iris.

Palm Photo Autocapture

Palm auto-capture is a client component that automatically captures a palm image suitable for verification once all quality requirements are met—no manual trigger is required.

Palm auto-capture is provided as a component of the mobile app libraries.

For web development, there is the Palm auto-capture web component.

Palm Comparison and Storage

To compare two palms, the process includes:

  • Template extraction - Detect the palm in the image and create its biometric representation for comparison.
  • Presentation attack detection (liveness) - Detect whether the image shows a genuine palm rather than a paper or screen presentation.
  • Comparison - Compare two palm templates and return a similarity score.
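The three steps above can be sketched as a pipeline. All names and the placeholder logic below are illustrative assumptions, not the actual DIS implementation:

```python
from dataclasses import dataclass

# Illustrative sketch of the palm comparison pipeline.
# Function names, the PalmTemplate type, and the placeholder logic
# are hypothetical; the real extraction, liveness, and comparison
# run as models on the server.

@dataclass
class PalmTemplate:
    features: bytes  # compact binary biometric representation

def extract_template(image: bytes) -> PalmTemplate:
    # Placeholder: a real implementation detects the palm in the
    # image and extracts its biometric features.
    return PalmTemplate(features=image[:32])

def is_live(image: bytes, threshold: float = 0.8) -> bool:
    # Placeholder liveness check rejecting paper/screen presentations.
    liveness_score = 0.9  # would come from the liveness model
    return liveness_score >= threshold

def compare(reference: PalmTemplate, probe: PalmTemplate) -> float:
    # Placeholder similarity in [0, 1]; higher means more similar.
    matching = sum(a == b for a, b in zip(reference.features, probe.features))
    return matching / max(len(reference.features), 1)

def verify(reference: PalmTemplate, probe_image: bytes,
           threshold: float = 0.5) -> bool:
    # Liveness gates the comparison: a spoofed probe never matches.
    if not is_live(probe_image):
        return False
    return compare(reference, extract_template(probe_image)) >= threshold
```

Note that extraction runs only on the probe image during authentication; the reference template is already stored on the server.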

Template extraction

A palm template is a set of biometric features extracted from an image, stored in binary form and significantly smaller than the original photo. Once a reference template is stored on the server, extraction is performed only on the probe image during authentication. Note: Only templates generated by the same mode and product version are compatible. Some product upgrades may require template regeneration, as noted in the product changelog.
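The compatibility rule can be made explicit in code. The wrapper type and its fields below are hypothetical, used only to illustrate the mode/version constraint:

```python
from dataclasses import dataclass

# Hypothetical versioned template wrapper illustrating the rule that
# only templates generated by the same mode and product version are
# comparable. The type and field names are assumptions.

@dataclass(frozen=True)
class StoredTemplate:
    mode: str     # extraction mode used to generate the template
    version: str  # product version that generated the template
    data: bytes   # binary biometric features

def compatible(a: StoredTemplate, b: StoredTemplate) -> bool:
    # Incompatible templates cannot be compared; the reference must be
    # regenerated from the original image after such an upgrade.
    return (a.mode, a.version) == (b.mode, b.version)
```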

Comparison (formerly Matching)

The comparison process calculates the similarity between two templates, producing a comparison score. The higher the score, the greater the similarity. This process is typically very fast.

The Digital Identity Service (DIS) provides the palm comparison functions via dedicated API endpoints.
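A call to such an endpoint might be assembled as in the sketch below. The host, path, and payload shape are assumptions for illustration only; consult the DIS API reference for the actual endpoints and request contract:

```python
import json

# Hypothetical request builder for a server-side palm comparison call.
# DIS_BASE_URL, the endpoint path, and the payload fields are all
# illustrative assumptions, not the documented DIS API.

DIS_BASE_URL = "https://dis.example.com"  # placeholder host

def build_comparison_request(reference_b64: str, probe_b64: str) -> tuple:
    """Return (url, json_body) for a palm comparison request."""
    url = f"{DIS_BASE_URL}/palm/comparison"  # illustrative path
    body = json.dumps({
        "referenceImage": {"data": reference_b64},  # base64-encoded image
        "probeImage": {"data": probe_b64},
    })
    return url, body
```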

Comparison threshold and comparison decision

The decision whether two palm images belong to the same person is based on whether the comparison score exceeds a selected threshold (score range is 0–1). The score is not a probability percentage; its scale is non-linear.
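A minimal sketch of that decision rule (the function name is ours; the 0–1 score range and the recommended default of 0.5 come from this document):

```python
def comparison_decision(score: float, threshold: float = 0.5) -> bool:
    """Decide whether two palms match based on their comparison score.

    Scores lie in [0, 1] but are not probabilities: the scale is
    non-linear, so a score of 0.6 does not mean "60% likely the same
    person". The default threshold of 0.5 is the recommended value.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("comparison score must be in [0, 1]")
    return score >= threshold
```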

Comparison Accuracy

The decision threshold affects two types of error:

  • False Match Rate (FMR, formerly FAR) - The proportion of comparison trials that result in a false match on a given dataset with a given threshold. A false match occurs when the comparison decides that two palm images of different hands match.
  • False Non-Match Rate (FNMR, formerly FRR) - The proportion of comparison trials that result in a false non-match on the same dataset with the same threshold. A false non-match occurs when the comparison decides that two palm images of the same hand do not match.

Comparison accuracy is expressed as a combination of FMR @ x% FNMR.
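Both rates can be measured on labeled comparison trials. The helper below is an illustrative sketch, assuming impostor trials (different hands) and genuine trials (same hand) have already been scored:

```python
def fmr_fnmr(impostor_scores, genuine_scores, threshold):
    """Compute (FMR, FNMR) at a given threshold.

    impostor_scores: comparison scores from pairs of different hands
    genuine_scores:  comparison scores from pairs of the same hand
    """
    # A false match: an impostor pair scores at or above the threshold.
    false_matches = sum(s >= threshold for s in impostor_scores)
    # A false non-match: a genuine pair scores below the threshold.
    false_non_matches = sum(s < threshold for s in genuine_scores)
    return (false_matches / len(impostor_scores),
            false_non_matches / len(genuine_scores))
```

Raising the threshold lowers FMR at the cost of FNMR, and vice versa, which is why accuracy is reported as one rate at a fixed value of the other.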

Threshold for comparison

The recommended threshold for declaring a match between two palms is 0.5.

Palm Presentation Attack Detection (Liveness)

Palm liveness detection determines whether the palm in an image belongs to a real person, without requiring active user interaction or response to a challenge.

Liveness scores, Threshold and Accuracy

The decision whether a palm photo is a bona-fide presentation (genuine) is based on whether the liveness score exceeds a selected threshold (score range is 0–1). The score is not a probability percentage; its scale is non-linear.

Decision threshold affects the following mistakes:

  • APCER (Attack Presentation Classification Error Rate, formerly FAR): Attack presentation images classified as bona-fide presentations are false accepts. The rate of this error on a given dataset with a given threshold is the APCER.
  • BPCER (Bona-fide Presentation Classification Error Rate, formerly FRR): Bona-fide presentation images classified as attacks are false rejects. The rate of this error on a given dataset with a given threshold is the BPCER.

The measured accuracy of the algorithm depends on dataset quality and representativeness. Real-world results vary by project and can be influenced by factors such as user demographics, prevailing lighting conditions, phone camera quality, and the configuration of the client-side auto-capture components. Innovatrics' dataset is large and representative enough for the results to be extrapolated to real-world conditions.

Use case type                           | Threshold | Performance
----------------------------------------|-----------|--------------------------
Convenience (minimum rejected attempts) | 0.768     | 2.32% APCER @ 0.5% BPCER
Balanced (equal error rate)             | 0.799     | 1.04% both APCER & BPCER
Security (minimum accepted frauds)      | 0.825     | 1.87% BPCER @ 0.5% APCER
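The thresholds below are the operating points from the table; the selector helper itself is an illustrative convenience, not part of the product API:

```python
# Liveness thresholds taken from the operating points in the table above.
# The mapping keys and the helper function are our own illustration.

LIVENESS_THRESHOLDS = {
    "convenience": 0.768,  # minimum rejected attempts
    "balanced": 0.799,     # equal error rate
    "security": 0.825,     # minimum accepted frauds
}

def liveness_decision(score: float, use_case: str = "balanced") -> bool:
    """Accept the presentation as bona fide at the chosen operating point."""
    return score >= LIVENESS_THRESHOLDS[use_case]
```

A higher threshold rejects more attacks (lower APCER) at the cost of rejecting more genuine users (higher BPCER), so the use case determines which operating point fits.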