
Liveness Detection Framework: Implementation Guide

Copyright © 2022 Amazon Web Services, Inc. and/or its affiliates. All rights reserved.

Amazon's trademarks and trade dress may not be used in connection with any product or service that is not
Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or
discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may
or may not be affiliated with, connected to, or sponsored by Amazon.

Table of Contents
Welcome
Cost
    Example cost estimate: 50,000 challenge attempts per month
Architecture overview
Solution components
    Create challenge API workflow
    Put challenge frame API workflow
    Verify challenge response API workflow
Security
    IAM roles
    Cross-origin resource sharing (CORS)
    Security HTTP headers
    Data retention
    File handling
    Tracing
    Amazon Cognito user pools
Design considerations
    Nose challenge
    Pose challenge
    Custom challenge
    Regional deployments
        Supported deployment Regions
AWS CloudFormation template
Automated deployment
    Deployment overview
    Step 1. Launch the stack
    Step 2. Sign in to the web interface
Additional resources
Create a custom challenge
API reference
    Create challenge API
    Put challenge frame API
    Verify challenge response API
Uninstall the solution
    Deleting the AWS CloudFormation stack
    Deleting the Amazon S3 buckets
    Deleting the Amazon DynamoDB table
Source code
Revisions
Contributors
Notices
AWS glossary


Incorporate liveness detection mechanisms into your applications to address spoofing attacks
Publication date: January 2022

Facial recognition has become a widely used mechanism for identity verification applications. It provides
a low-friction user experience and a safer approach than password-based alternatives. Even though
current technology is capable of identifying a person's face with high accuracy, counterfeiters still
circumvent such systems by impersonating other users by using static photos, video replays, and masks.

Such vulnerabilities against spoofing attacks can be overcome by augmenting a facial recognition system
with some form of liveness detection. Liveness detection is any technique used to identify spoofing
attempts by determining whether the source of a biometric sample is a live human being or a fake
representation. This is accomplished through algorithms that analyze images captured through cameras
(and sometimes other types of sensor data) in order to detect signs of reproduced samples.

The Liveness Detection Framework solution helps you implement liveness detection mechanisms into
your applications by means of an extensible architecture. It comprises a set of APIs to process and verify
liveness challenges, along with two different types of challenges provided as reference implementations.
In addition to those, you can extend the framework and implement your own liveness detection
algorithms. This solution also includes a sample web application fully integrated with the APIs. You can
use it as a reference to create your own front end that fits your business needs.

This implementation guide describes architectural considerations and configuration steps for deploying
Liveness Detection Framework in the Amazon Web Services (AWS) Cloud. It includes instructions to
launch and configure the AWS services required to deploy this solution using AWS best practices for
security and availability.

The guide is intended for IT architects and developers who have practical experience architecting in the
AWS Cloud.


Cost
You are responsible for the cost of the AWS services used while running the solution, which can vary
based on the following factors:

• Number of images processed by Amazon Rekognition per month: the solution uses the DetectFaces
operation to extract face metadata from each image.
• Amount of data served by Amazon CloudFront: static assets such as HTML, JavaScript, and images files
are served by Amazon CloudFront.
• Number of calls to AWS Secrets Manager: API tokens are signed using an AWS Secrets Manager secret.
• Number of images stored in the Amazon Simple Storage Service (Amazon S3) bucket per month: the
solution stores all captured user images in Amazon S3.
• Number of Amazon DynamoDB write/read requests per month: the solution records all challenge
attempts in DynamoDB.
• Number of Amazon API Gateway requests per month: all solution requests go through API Gateway.
• Number of AWS Lambda invocations per month: the backend logic runs on an AWS Lambda function.
• Number of Amazon Cognito monthly active users.

This solution is based entirely on serverless AWS services. Therefore, when the solution is not in use, you
only pay for data stored in S3 and DynamoDB and for the AWS Secrets Manager’s secret.

We recommend creating a budget through AWS Cost Explorer to help manage costs. For full details, refer
to the pricing webpage for each AWS service used in this solution.

Example cost estimate: 50,000 challenge attempts per month
The following table provides a monthly cost breakdown example for deploying this solution with the
default parameters in the US East (N. Virginia) Region, excluding free tier. This example assumes that:

• 50,000 challenge attempts are performed per month
• Only the two provided challenge types are activated
• Challenge attempts are equally divided between the two challenge types (25,000 attempts for each
challenge)
• Size for processed images is 480 by 480 pixels, with an average size of 100KB
• For each nose challenge attempt, 10 images are captured
• 50% of the traffic comes from the United States and 50% comes from Europe and Israel
• 2,000 monthly active users signing in

From the assumptions above, we derive the following:

• For each nose challenge attempt, 10 calls to Amazon Rekognition's DetectFaces API are performed
(one per image).
• For each pose challenge attempt, one call to Amazon Rekognition's DetectFaces API is performed.
• For each nose challenge attempt, 12 API calls are performed (one to start the challenge, 10 to send
each image, and one to verify the challenge). In total: 25,000 x 12 = 300,000.


• For each pose challenge attempt, three API calls are performed (one to start the challenge, one to
send the frame, and one to verify the challenge). In total: 25,000 x 3 = 75,000.
• The total number of API calls is equal to the number of AWS Lambda requests and AWS Secrets
Manager API calls, because each API call is backed by the AWS Lambda function and the function uses
AWS Secrets Manager.
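
These totals can be reproduced with a few lines of arithmetic. The following sketch merely restates the assumptions above; it is not part of the solution:

import math  # not strictly needed; plain integer arithmetic suffices

NOSE_ATTEMPTS = 25_000
POSE_ATTEMPTS = 25_000
FRAMES_PER_NOSE_ATTEMPT = 10

# DetectFaces calls: one per nose frame, one per pose attempt.
detect_faces_calls = NOSE_ATTEMPTS * FRAMES_PER_NOSE_ATTEMPT + POSE_ATTEMPTS * 1

# API calls per attempt: one create, one per frame, one verify.
nose_api_calls = NOSE_ATTEMPTS * (1 + FRAMES_PER_NOSE_ATTEMPT + 1)  # 300,000
pose_api_calls = POSE_ATTEMPTS * (1 + 1 + 1)                        # 75,000
total_api_calls = nose_api_calls + pose_api_calls                   # 375,000

print(detect_faces_calls)  # 275000 Rekognition calls
print(total_api_calls)     # 375000 API Gateway, Lambda, and Secrets Manager calls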

AWS service           Dimensions                                   Cost

Amazon Rekognition    275,000 DetectFaces API calls                $275.00

Amazon Cognito        2,000 monthly active users                   $11.00

Amazon CloudFront     38 GB served                                 $4.14

AWS Secrets Manager   1 secret; 375,000 API calls                  $2.28

Amazon S3             28 GB stored; 275,000 PUT requests           $2.03

Amazon DynamoDB       0.2 GB data stored; 375,000 write request    $1.94
                      units; 50,000 read request units

Amazon API Gateway    375,000 REST API requests                    $1.31

AWS Lambda            375,000 requests; 75,000,000 ms compute      $0.70
                      duration (512 MB memory allocated)

TOTAL                                                              $298.40 / month

Note
Average cost per challenge attempt: $0.005968


Architecture overview
We leverage Amazon Rekognition to detect the facial details needed to verify the challenge. The
solution’s architecture is composed of a web application that serves as the user front end, and a
serverless backend with APIs that are invoked by the front end.

The client device allows the user to access the sample web application. The sample web application
captures user images (frames) using the device embedded camera and invokes the solution APIs in the
AWS Cloud.

Deploying this solution with the default parameters builds the following environment in the AWS Cloud.

Figure 1: Liveness Detection Framework architecture

The AWS CloudFormation template deploys the following infrastructure:

1. An Amazon CloudFront distribution to serve the web application to the client device.
2. An Amazon S3 source bucket to host the sample web application static files (HTML, JavaScript, and
CSS).
3. Amazon API Gateway to expose the REST/HTTP API endpoints invoked by the client device.
4. AWS Lambda function to process API requests. All liveness detection logic runs inside that function.
5. An Amazon DynamoDB table to store information about each user’s challenge attempts, such as user
ID, timestamp, and challenge-related parameters.
6. An Amazon S3 object storage bucket that holds user images captured by the client device and
uploaded via the APIs.
7. Amazon Rekognition for identifying faces in an image along with their position and landmarks, such
as eyes, nose, and mouth.
8. AWS Secrets Manager to store the secrets used to sign tokens.
9. Amazon Cognito user pool to provide user access control to the API calls.


Note
Although the architecture is fully serverless and scalable, with many simultaneous users, you can
reach the maximum transactions per second (TPS) for Amazon Rekognition. Service quotas vary
by AWS Region and can be increased through the AWS Support Center.


Solution components
The solution supports several workflows. These include the create challenge API, put challenge frame
API, and verify challenge response API workflows.

Create challenge API workflow


When the user initiates an action that requires a challenge, the client application starts one by
invoking the API, passing the user's ID and the image dimensions from the device camera. The API
then returns the challenge parameters so that the client device can prompt the user with
instructions on how to perform the challenge.

Figure 2: Create challenge API workflow

1. The user opens your app on their client device.
2. The client device accesses the static files hosted on an Amazon S3 bucket, served through an Amazon
CloudFront distribution.
3. The client device passes the username and password entered by the user to Amazon Cognito. After
successful authentication, Amazon Cognito returns an access token that is used by the client device in
all subsequent requests to the API. All endpoints are protected with Amazon Cognito and, therefore,
require an access token.
4. The client device issues a POST HTTP request to the API Gateway /challenge endpoint, passing the
user ID and the device camera image dimensions. API Gateway forwards the request to the Lambda
Challenge function.


5. The Lambda Challenge function receives the request and selects which type of challenge will
be presented to the user. Based on the challenge type, it generates the challenge parameter
values. It also generates a challenge ID and a security token, signed with a secret stored in AWS
Secrets Manager. The Lambda Challenge function stores the challenge parameters in the Amazon
DynamoDB Challenges table and returns them to the client device.
6. The client device receives the challenge parameters and shows the user instructions for performing
the challenge.

Put challenge frame API workflow


While the user interacts with the challenge, the client device uses the embedded camera to capture one
or more images and uploads the frames, one by one, to the API.

Figure 3: Put challenge frame API workflow

1. The user interacts with the camera on their client device while it captures images.
2. The client device issues a PUT HTTP request to the /challenge/{id}/frame API endpoint,
passing the image and the security token. Amazon API Gateway forwards the request to the Lambda
Challenge function.
3. The Lambda Challenge function validates the security token. If it is valid, it stores the image in the
Amazon S3 Frames bucket. It also updates the challenge record in the DynamoDB Challenges table
with the image S3 URL.

These steps are repeated for as many images as required by the challenge type until the user completes
all challenge instructions.

Verify challenge response API workflow


After the user successfully completes the challenge instructions, the client device invokes the API for
final verification.


Figure 4: Verify challenge response API workflow

1. The user follows the instructions and completes the challenge on the client device.
2. The client device issues a POST HTTP request to the /challenge/{id}/verify API endpoint,
passing the security token, to start the challenge verification in the AWS Cloud. Amazon API Gateway
forwards the request to the Lambda Challenge function.
3. The Lambda Challenge function validates the security token. If it is valid, it looks up the challenge
data in the DynamoDB Challenges table. Then, it invokes Amazon Rekognition to analyze the
image(s) stored in the Amazon S3 Frames bucket. The Lambda Challenge function then runs the
verification logic specific to the challenge type. The final result (success or fail) is returned to the
client device.
4. The client device displays the final result to the user.

During the final verification, the Lambda Challenge function invokes the DetectFaces operation
from Amazon Rekognition Image for each frame image. For each detected face, the operation returns
the facial details. From the details captured by the DetectFaces operation, the solution uses the
bounding box coordinates of the face, facial landmark coordinates, pose, and other attributes, such as
smile and eyes open or closed.
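
For reference, a DetectFaces call against a stored frame looks roughly like the following; the bucket and object key are placeholders:

import boto3

rekognition = boto3.client('rekognition')

# Analyze one stored frame. Attributes=['ALL'] returns landmarks, pose, and
# attributes such as Smile and EyesOpen in addition to the bounding box.
response = rekognition.detect_faces(
    Image={'S3Object': {'Bucket': 'frames-bucket', 'Name': 'challenge-id/frame-0.jpg'}},
    Attributes=['ALL'],
)
for face in response['FaceDetails']:
    print(face['BoundingBox'], face['Pose'], face['Smile'], face['EyesOpen'])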


Security
When you build systems on AWS infrastructure, security responsibilities are shared between you and
AWS. This shared model reduces your operational burden because AWS operates, manages, and controls
the components including the host operating system, the virtualization layer, and the physical security
of the facilities in which the services operate. For more information about AWS security, visit AWS Cloud
Security.

IAM roles
AWS Identity and Access Management (IAM) roles allow you to assign granular access policies and
permissions to services and users on the AWS Cloud. This solution creates IAM roles that grant the
solution’s AWS Lambda functions access to create Regional resources.

Cross-origin resource sharing (CORS)


The AWS Lambda function that implements the APIs supports CORS HTTP headers, as configured in
the Chalice microframework. As a sample implementation, the default configuration allows API calls
from any origin. If deployed to production, we recommend that you apply a more fine-grained CORS
configuration.
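
A minimal sketch of a tighter Chalice CORS configuration; the CloudFront domain and route body are placeholders, and the solution's actual route definitions may differ:

from chalice import Chalice, CORSConfig

app = Chalice(app_name='liveness-backend')

# Restrict CORS to the web application's origin instead of allowing any origin.
cors_config = CORSConfig(
    allow_origin='https://dxxxxxxxxxxxx.cloudfront.net',  # your CloudFront domain
    allow_headers=['Authorization', 'Content-Type'],
    max_age=600,
)

@app.route('/challenge', methods=['POST'], cors=cors_config)
def create_challenge():
    ...  # challenge creation logic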

Security HTTP headers


The sample web application is provided as a reference implementation for development purposes.
If deployed to production, we recommend that the web hosting service add security HTTP headers
to prevent attacks such as man-in-the-middle (MITM) and cross-site scripting (XSS). If you use Amazon S3 and
Amazon CloudFront to host the web application, consider using Lambda@Edge functions to generate the
HTTP headers.
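
A minimal sketch of such a Lambda@Edge handler, attached to the distribution's origin-response event; the header values shown are illustrative defaults, not the solution's configuration:

# Lambda@Edge origin-response handler that adds common security HTTP headers
# before CloudFront caches and returns the response.
def handler(event, context):
    response = event['Records'][0]['cf']['response']
    headers = response['headers']
    headers['strict-transport-security'] = [
        {'key': 'Strict-Transport-Security', 'value': 'max-age=63072000; includeSubDomains'}]
    headers['content-security-policy'] = [
        {'key': 'Content-Security-Policy', 'value': "default-src 'self'"}]
    headers['x-content-type-options'] = [
        {'key': 'X-Content-Type-Options', 'value': 'nosniff'}]
    headers['x-frame-options'] = [
        {'key': 'X-Frame-Options', 'value': 'DENY'}]
    return response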

Data retention
The Amazon S3 buckets used in this solution might store sensitive data, such as user images and related
metadata. For security reasons, such sensitive data should be stored only long enough to satisfy the
business requirements of the application. If the solution is deployed to production, we recommend that
you delete user images after they are no longer needed. Consider using S3 Lifecycle policies to
automatically expire objects, or the Amazon S3 Intelligent-Tiering storage class to optimize storage costs.
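
A minimal sketch of such a lifecycle rule applied with boto3; the bucket name and the 7-day retention period are placeholder assumptions:

import boto3

s3 = boto3.client('s3')

# Expire captured frame images after 7 days; adjust the bucket name and
# retention period to your own business requirements.
s3.put_bucket_lifecycle_configuration(
    Bucket='my-frames-bucket',  # hypothetical bucket name
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'expire-challenge-frames',
            'Filter': {'Prefix': ''},
            'Status': 'Enabled',
            'Expiration': {'Days': 7},
        }]
    },
)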

File handling
The put challenge frame API receives JPEG file content sent by the sample web client application. In a
production environment, other untrusted sources could attempt to send malicious content to the API.
Therefore, we recommend additional handling of the file content, such as format and
size validation, malware detection, and Content Disarm and Reconstruction (CDR).


Tracing
This solution doesn’t include tracing capabilities. Consider using AWS X-Ray. This service collects data
about requests that your application serves, and provides tools that you can use to view, filter, and gain
insights into that data to identify issues and opportunities for optimization.

Amazon Cognito user pools


You can add multi-factor authentication (MFA) to a user pool to protect the identity of your users. MFA
adds a second authentication method that doesn't rely solely on user name and password. You can
choose SMS text messages or time-based one-time passwords (TOTP) as second factors when signing
in your users. You can also use adaptive authentication with its risk-based model to predict when you
might need another authentication factor. Adaptive authentication is part of the user pool advanced
security features, which also include protections against compromised credentials. Learn more in Adding
multi-factor authentication (MFA) to a user pool and Adding advanced security to a user pool in the
Amazon Cognito Developer Guide.


Design considerations
This solution deploys a framework that supports different types of liveness challenges.

The framework backend is implemented in Python and built on top of the Chalice microframework. In
the backend, the framework architecture provides all of the API implementations, plus extension points
for integrating logic specific to your application and custom challenges.

The framework's front-end web application is implemented using the React JavaScript library and
TypeScript. The web application is a sample implementation that demonstrates how a client
application should interact with the backend APIs and provide a user experience for performing the
liveness challenges. Use it as a reference to build a custom web or mobile application.
Important
The sample web application is intended for demonstration purposes only. We strongly
recommend that you customize it to best meet your security, performance, and usage standards.

The framework considers the following assumptions about supported liveness challenges:

• To deliver challenge instructions to the user and run the challenge-specific workflow, the front end
might require some parameter definitions provided by the backend when a challenge attempt is
initiated.
• The challenge verification logic is based on one or more static images of the user, captured by a
client device camera. Verification logic cannot rely on videos, only on multiple individual frame images.
• The challenge verification logic is based on the following metadata extracted from each image: face
bounding boxes, facial landmark coordinates (eyes, nose, mouth, etc.), face pose (pitch, roll, yaw),
attributes (gender, age, beard, glasses, mouth open, eyes open, smile) and emotion (angry, calm,
confused, disgusted, happy, surprised, sad). For more details about Amazon Rekognition API types,
refer to Data types in the Amazon Rekognition Developer Guide.
• The challenge verification logic can be represented as a state machine with one or more states.
• When multiple types of challenge are used, the backend is responsible for defining the selected
challenge type when a challenge attempt is initiated by the front end. The selection logic can use
metadata provided by the front end.

Based on these assumptions, the framework exposes the following extension points in the form of
Python function decorators:

• Challenge type selection logic: This is an application-wide extension point. It is used to define
which challenge type a user should complete when the front end initiates a challenge. The challenge
selection can be based on custom client metadata provided by the front end. Exposed as the
@challenge_type_selector decorator.
• Challenge parameters definition logic: This challenge-specific extension point is used to define
the parameter values for a certain challenge attempt. The logic runs when the front end initiates a
challenge, immediately after the challenge type is selected. Exposed as the @challenge_params
decorator.
• Challenge verification logic: This challenge-specific extension point is used to define how a challenge
attempt is verified, based on the challenge parameters and the face metadata extracted from the
images. If the challenge requires multiple images, such as video frames, the logic must be defined
as a state machine that processes one image at a time. To define the state machine logic, the
@challenge_state decorator is exposed.


Included in the framework are two types of liveness challenges (nose challenge and pose challenge),
which can be used as-is, customized, or used as a reference for implementing new custom challenges.

Nose challenge
This challenge is an active liveness detection approach that prompts the user to position their face inside
an oval area in the center of the image and then move their nose to a target point.

Figure 5: Nose challenge user experience

When a nose challenge is initiated, its challenge parameters definition logic expects to receive the
image dimensions from the client device camera, specifically imageWidth and imageHeight metadata
attributes. Based on these dimensions, the logic determines the coordinates for the central oval area
(areaTop, areaLeft, areaWidth, and areaHeight) and the random target nose position (noseTop,
noseLeft, noseWidth, and noseHeight) and returns them as the challenge parameters.
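
A hedged sketch of how such a parameters function might derive these values, assuming the framework's @challenge_params decorator is in scope; the proportions are illustrative, and the shipped nose.py module may compute them differently:

import random

@challenge_params(challenge_type='NOSE')
def nose_challenge_params(client_metadata):
    # Illustrative geometry only: center an oval covering ~60% of the frame
    # and pick a random target rectangle for the nose tip.
    width = int(client_metadata['imageWidth'])
    height = int(client_metadata['imageHeight'])
    area_width, area_height = int(width * 0.6), int(height * 0.6)
    nose_size = int(width * 0.05)
    return {
        'imageWidth': width,
        'imageHeight': height,
        'areaLeft': (width - area_width) // 2,
        'areaTop': (height - area_height) // 2,
        'areaWidth': area_width,
        'areaHeight': area_height,
        'noseLeft': random.randint(0, width - nose_size),
        'noseTop': random.randint(0, height - nose_size),
        'noseWidth': nose_size,
        'noseHeight': nose_size,
    }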

Based on these parameters, the front end displays the device camera feed and instructs the user to
perform the movements. As the user performs the challenge, the front end must also continually capture
frames and upload them to the backend API. After the user has concluded the movement, the front end
invokes the verification API.
Note
The face-api.js library is used in the front end to detect the user's face and landmarks, providing
real-time feedback as the user performs the challenge. Liveness validation occurs only in
the backend, in the verification API, using Amazon Rekognition. Results from the front-end
library are not used for liveness validation in any way.

The nose challenge verification logic is represented by a state machine that processes the frames
uploaded for a certain challenge attempt. For each frame, the state machine checks the detected face
metadata and either advances to the next step, fails, or succeeds in the challenge. The state machine is
represented below:

Figure 6: Nose challenge verification states


• Face state: Checks if there is one, and only one, face detected in the frame image. If that is the case,
the verification advances to the next state. Otherwise, the challenge fails.
• Area state: Checks if the user's face is positioned inside the central area. If the face is fitted in the area
before the specified timeout, the verification advances to the next state. Otherwise, the challenge fails.
• Nose state: Checks if the user's nose is at the target position. If the nose reaches the target position
before the specified timeout, the challenge succeeds. Otherwise, the challenge fails.
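
A hedged sketch of the first two states, assuming the framework's decorator and return constants are in scope, that frame['rekMetadata'] holds the DetectFaces FaceDetails list, and that a timed_out helper (hypothetical) implements the timeout check:

@challenge_state(challenge_type='NOSE', first=True, next_state='area_state')
def face_state(params, frame, context):
    # Exactly one face must be present in the frame; otherwise fail immediately.
    faces = frame['rekMetadata']  # assumed: the DetectFaces FaceDetails list
    return STATE_NEXT if len(faces) == 1 else CHALLENGE_FAIL

@challenge_state(challenge_type='NOSE', next_state='nose_state')
def area_state(params, frame, context):
    face = frame['rekMetadata'][0]
    box = face['BoundingBox']  # Rekognition boxes are ratios of the image size
    left = box['Left'] * params['imageWidth']
    top = box['Top'] * params['imageHeight']
    right = left + box['Width'] * params['imageWidth']
    bottom = top + box['Height'] * params['imageHeight']
    inside = (left >= params['areaLeft'] and top >= params['areaTop'] and
              right <= params['areaLeft'] + params['areaWidth'] and
              bottom <= params['areaTop'] + params['areaHeight'])
    if inside:
        return STATE_NEXT
    if timed_out(frame, context):  # hypothetical timeout helper
        return CHALLENGE_FAIL
    return STATE_CONTINUE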

Pose challenge
This challenge is an active liveness detection approach that prompts the user to reproduce a certain
pose.

Figure 7: Pose challenge user experience

The pose is random and combines eyes and mouth position variations. Eyes must be open (looking
forward), closed, looking left, or looking right. The mouth must be closed or smiling.

When a pose challenge is initiated, the backend returns how the eyes and the mouth should look in the
pose. The client device uses that information to generate an image with the corresponding pose and asks
the user to reproduce it. The user then needs to take a selfie (self-portrait photo). After the user takes a
selfie, they can compare the result with the pose and, if the user doesn’t think they look the same, they
can retake the photo. The user can retake the photo as many times as necessary. When ready, the photo
is uploaded to the backend for verification.

The backend verifies the following using the photo sent by the client device:

1. There's one, and only one, face in the photo.
2. The confidence of the face detection is high (above a configurable threshold value).
3. The face is not rotated.
4. The eyes are positioned as required by the challenge (the user is looking in the correct direction, or the
eyes are closed).
5. The mouth is positioned as required by the challenge (closed or smiling).

If all verifications pass, the challenge is considered successfully performed. Otherwise, the challenge
fails.
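
A hedged sketch of checks 2 through 5 against a single DetectFaces result; the thresholds are illustrative rather than the solution's configured values, and the left/right gaze check is omitted for brevity:

CONFIDENCE_THRESHOLD = 90.0  # illustrative value
MAX_ROLL_DEGREES = 15.0      # illustrative value

def verify_pose(face, eyes_expected, mouth_expected):
    # face is one entry from the DetectFaces FaceDetails list; check 1
    # (exactly one face) is assumed to have been performed by the caller.
    if face['Confidence'] < CONFIDENCE_THRESHOLD:                    # check 2
        return False
    if abs(face['Pose']['Roll']) > MAX_ROLL_DEGREES:                 # check 3
        return False
    if eyes_expected == 'closed' and face['EyesOpen']['Value']:      # check 4
        return False
    if mouth_expected == 'smiling' and not face['Smile']['Value']:   # check 5
        return False
    return True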
Note
Simple challenges are generally easy for users; however, they are more susceptible to spoofing
attacks. Keep this in mind when using this challenge as-is. You could present this challenge in
low-risk scenarios, or you could extend it by adding more facial expressions or hand gestures
into the mix.


Custom challenge
This solution allows you to implement custom challenges using the framework. For details, refer to
Create a custom challenge.

Regional deployments
This solution uses the Amazon Rekognition service, which is not currently available in all AWS Regions.
You must launch this solution in an AWS Region where Amazon Rekognition is available.

Supported deployment Regions


Liveness Detection Framework is supported in the following AWS Regions:

Region name

US East (N. Virginia)
US East (Ohio)
US West (Northern California)
US West (Oregon)
Asia Pacific (Mumbai)
Asia Pacific (Seoul)
Asia Pacific (Singapore)
Asia Pacific (Sydney)
Asia Pacific (Tokyo)
Canada (Central)
Europe (Frankfurt)
Europe (Ireland)
Europe (London)


AWS CloudFormation template


To automate deployment, this solution uses the following AWS CloudFormation template, which you can
download before deployment:

liveness-detection-framework.template: Use this template to launch the solution and all associated
components. The default configuration deploys Amazon Rekognition, Amazon Cognito, Amazon
CloudFront, AWS Secrets Manager, Amazon S3, Amazon DynamoDB, Amazon API Gateway, and AWS
Lambda, but you can customize the template to meet your specific needs.


Automated deployment
Before you launch the automated deployment, review the architecture, components, and other
considerations in this guide. Follow the step-by-step instructions in this section to configure and deploy
the solution into your account.

Time to deploy: Approximately 10 minutes

Deployment overview
Use the following steps to deploy this solution on AWS. For detailed instructions, follow the links for
each step.

Step 1. Launch the stack

• Launch the AWS CloudFormation template into your AWS account.
• Review the template's parameters and enter or adjust the default values as needed.

Step 2. Sign in to the web interface

• Retrieve the URL.

Step 1. Launch the stack


This automated AWS CloudFormation template deploys the Liveness Detection Framework solution in
the AWS Cloud.
Note
You are responsible for the cost of the AWS services used while running this solution. For more
details, visit the Cost section in this guide, and refer to the pricing webpage for each AWS
service used in this solution.

1. Sign in to the AWS Management Console and select the button to launch the
liveness-detection-framework.template AWS CloudFormation template.

Alternatively, you can download the template as a starting point for your own implementation.
2. The template launches in the US East (N. Virginia) Region by default. To launch the solution in a
different AWS Region, use the Region selector in the console navigation bar.
Note
This solution uses the Amazon Rekognition service, which is not currently available in all
AWS Regions. You must launch this solution in an AWS Region where Amazon Rekognition is
available. For the most current availability by Region, refer to the AWS Regional Services List.


3. On the Create stack page, verify that the correct template URL is in the Amazon S3 URL text box and
choose Next.
4. On the Specify stack details page, assign a name to your solution stack. For information about
naming character limitations, refer to IAM and STS Limits in the AWS Identity and Access Management
User Guide.
5. Under Parameters, review the parameters for this solution template and modify them as necessary.
This solution uses the following default values.

Parameter      Default             Description

AdminEmail     <Requires input>    The email address of the system administrator.
                                   Note: You will receive your username and
                                   temporary password at this address.

AdminName      <Requires input>    The name of the system administrator.
6. Choose Next.
7. On the Configure stack options page, leave all the values and configurations as default and choose
Next.
8. On the Review page, review and confirm the settings. Check the boxes under Capabilities,
acknowledging that the template creates AWS Identity and Access Management (IAM) resources and
requires the CAPABILITY_AUTO_EXPAND capability.
9. Choose Create stack to deploy the stack. You can view the status of the stack in the AWS
CloudFormation console in the Status column. You should receive a CREATE_COMPLETE status in
approximately 10 minutes.
Note
In addition to the primary AWS Lambda function, this solution includes a website custom
resource Lambda function that runs only during initial configuration or when updating or
deleting resources. When you run this solution, you will notice the Lambda function in the
AWS Management Console. Do not delete the website custom resource Lambda function, as it
is needed to manage associated resources.

Step 2. Sign in to the web interface


After the AWS CloudFormation stack is created, you can sign in to the web interface. The solution sends
an email containing your admin username and a temporary password. Use the following procedure to
sign in to the web interface for the first time.

1. Sign in to the AWS CloudFormation console and select the solution’s stack.
2. Choose the Outputs tab.
3. Under the Key column, locate URL, and select the link.
4. From the sign-in page, enter the username and temporary password provided in the invitation email.
5. From the Change password page, follow the prompts to create a new password. Password
requirements: minimum of 6 characters, with at least one uppercase character, one lowercase
character, one number, and one symbol.
6. After signing in, select the liveness detection challenge and follow the steps.


Additional resources
AWS services

• Amazon Cognito
• AWS CloudFormation
• AWS Lambda
• Amazon Simple Storage Service
• Amazon API Gateway
• Amazon DynamoDB
• Amazon Rekognition
• AWS Secrets Manager
• Amazon CloudFront

Related projects

• AWS Chalice


Create a custom challenge


To implement a custom challenge using the framework, you must edit the source code for the backend
part of the solution. Refer to the GitHub repository for the source code.

First, create a new Python module inside the chalicelib directory. You can use the module
custom.py as a template. Inside the new module, implement the challenge parameters definition logic
and the challenge verification logic.

The framework requires you to define a string value to identify your custom challenge type. For example,
for the nose challenge, the identifier is 'NOSE', and for the pose challenge, it is 'POSE'. Choose a
different identifier for your custom challenge and use it consistently in all functions.

Challenge parameters definition

For the challenge parameters definition logic, modify the function decorated with the
@challenge_params decorator. The following sample code is for a challenge parameters definition
function, as provided in the custom.py module.

@challenge_params(challenge_type='CUSTOM')
def custom_challenge_params(client_metadata):
    params = dict()
    params.update(client_metadata)
    return params

Set the decorator attribute challenge_type with the value of your custom challenge identifier. The
function receives the input parameter client_metadata, which is a dictionary that might contain
custom attributes provided by the front end when it calls the create challenge API. You can use these
client-provided attributes inside your logic to modify your parameter values. The function must return a
dictionary containing attributes representing your custom challenge parameters. The returned dictionary
should also include the input client metadata attributes.

Challenge verification

For the challenge verification logic, you must determine whether your challenge will be based on a
single image or multiple images. In the case of a single image, your verification state machine contains
only one state. In the case of multiple images, it can contain one or more states. For each state, you
must implement
a function decorated with the @challenge_state decorator. When the verify challenge response API
is called, the framework is responsible for invoking your custom state functions to process each frame
metadata. The following sample code is for a first state (or single state) function, as provided in the
custom.py module.

@challenge_state(challenge_type='CUSTOM', first=True, next_state='second_state')
def first_state(params, frame, context):
    if True:  # placeholder condition: replace with your verification logic
        return STATE_NEXT
    return STATE_CONTINUE

Set the decorator attribute challenge_type with the value of your custom challenge identifier.
For the first state, set the attribute first to True. If your logic has more states after the first,
indicate which one is next by setting the attribute next_state to the name of the function that
represents the next state.

The function receives the following input parameters:


• params: Dictionary containing the challenge parameters.
• frame: Dictionary containing information about the current frame image to be processed by the state.
The face metadata detected by Amazon Rekognition can be found in the rekMetadata attribute.
• context: Dictionary containing context information that can be shared across states and frame
iterations. You can use this dictionary's attributes to store variables to be accessed during the
processing of the next frames by the current state or the next states.

As a result of processing frame metadata, the function must return one of the following values:

• STATE_CONTINUE: Signals the framework to stay in the current state for processing the next frame.
• STATE_NEXT: Signals the framework to advance to the next state for processing the next frame.
• CHALLENGE_FAIL: Signals the framework that the challenge is considered not valid and ends the
state machine processing.
• CHALLENGE_SUCCESS: Signals the framework that the challenge was successfully validated and ends
the state machine processing.

In case your challenge contains only one state, the return value must be either CHALLENGE_FAIL or
CHALLENGE_SUCCESS.

The following sample code is for functions that implement other states after the first, as provided in the
custom.py module.

@challenge_state(challenge_type='CUSTOM', next_state='last_state')
def second_state(params, frame, context):
    if True:  # placeholder condition
        return STATE_NEXT
    return STATE_CONTINUE

@challenge_state(challenge_type='CUSTOM')
def last_state(params, frame, context):
    if True:  # placeholder condition
        return CHALLENGE_SUCCESS
    return CHALLENGE_FAIL

Set the decorator attribute challenge_type with the value of your custom challenge identifier.
If more states follow the current one, indicate which is next by setting the attribute
next_state to the name of the function that represents the next state. If your state is the last
one, do not set a value for the attribute next_state.

These other state functions receive the same input parameters and must return the same values as those
described for the first state function.

For the last state function, the return value must be either CHALLENGE_FAIL or CHALLENGE_SUCCESS.

Challenge type selection logic

After you have implemented your custom challenge module, you must modify the application-
wide challenge type selection logic to include your new challenge. To do this, edit the file
app.py. The default logic randomly selects one of the provided challenges: the nose challenge or the
pose challenge. The following default code is for the challenge type selection function, decorated with
the @challenge_type_selector decorator.

@challenge_type_selector
def random_challenge_selector(client_metadata):
    app.log.debug('random_challenge_selector')
    if CLIENT_CHALLENGE_SELECTION and 'challengeType' in client_metadata:
        return client_metadata['challengeType']
    return random.choice(['POSE', 'NOSE'])

The function receives the input parameter client_metadata, which is a dictionary that can contain
custom attributes provided by the front end when it calls the create challenge API. You can use
these client-provided attributes inside your logic to modify your challenge type selection. The
default implementation allows the client side to specify a preferred challenge type via the custom
attribute challengeType. If the environment variable CLIENT_CHALLENGE_SELECTION is set
to True, it returns the preferred challenge type. For your customized challenge selection function,
you can implement the logic that best fits your use case and include any other attributes in the
client_metadata as required, making sure your front end provides the new attributes when invoking
the API. The function must return a string value identifier for the selected challenge type.
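
For example, a replacement selector could route high-risk actions to the stricter nose challenge. This is a hedged sketch; the riskLevel attribute is hypothetical and would have to be sent by your front end:

@challenge_type_selector
def risk_based_selector(client_metadata):
    # Hypothetical policy: use the nose challenge for actions the front end
    # flags as high risk, and the simpler pose challenge otherwise.
    if client_metadata.get('riskLevel') == 'high':
        return 'NOSE'
    return 'POSE'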

Challenge configuration

Additionally, for the framework to run your custom module and invoke your decorated custom functions,
you must include an import statement in the file app.py.

The following sample code is to import the provided custom.py module. If you want to create your own
module file, modify the statement accordingly.

import_module('chalicelib.nose')
import_module('chalicelib.pose')
import_module('chalicelib.custom') # <-- Importing the custom module


API reference
Create challenge API
POST /challenge

Request

{
    "string": "string",
    ...
}

The request body can send client metadata to the backend, as one or more pairs of attribute names and
values. Each pair is in the form "name": "value". The default implementation of the framework uses
the following attributes:

• imageWidth: Width of images captured by the client device.
• imageHeight: Height of images captured by the client device.
• challengeType: Preferred challenge type selected by the user.

Additional custom attributes can be defined as required by custom challenges and framework
extensions.

Response

{
    "id": "string",
    "token": "string",
    "type": "string",
    "params": {
        "string": "string",
        ...
    }
}

The response body contains the following attributes:

• id: The generated ID for the challenge attempt.
• token: The security token generated for the challenge attempt, which must be provided in subsequent
API calls.
• type: The string identifier for the type of challenge selected by the API.
• params: The challenge parameters for the challenge type selected by the API. Parameters are
specified as one or more name-value pairs, in the form "name": "value".
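
As an illustration, a client could call this endpoint as follows. This is a hedged sketch: the API base URL is a placeholder, requests is a third-party HTTP client, and the exact Authorization header format depends on how the Amazon Cognito authorizer is configured:

import requests  # third-party HTTP client, for illustration only

API_BASE = 'https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/api'  # placeholder
ACCESS_TOKEN = 'cognito-access-token'  # obtained after signing in with Amazon Cognito

resp = requests.post(
    f'{API_BASE}/challenge',
    headers={'Authorization': ACCESS_TOKEN},
    json={'imageWidth': '480', 'imageHeight': '480', 'challengeType': 'NOSE'},
)
challenge = resp.json()
print(challenge['id'], challenge['type'], challenge['params'])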

Put challenge frame API


PUT /challenge/{id}/frame


The API path must contain the id parameter, which is the challenge ID returned by the create challenge
API.

Request

{
    "token": "string",
    "timestamp": "string",
    "frameBase64": "string"
}

The request body must contain the following attributes:

• token: The security token generated by the create challenge API.
• timestamp: The timestamp when the frame was captured, as the number of milliseconds since
January 1, 1970, 00:00:00 UTC.
• frameBase64: Captured frame image in JPEG format, encoded as a base64 string.
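
For illustration, a request body for this endpoint could be assembled as follows; the identifiers are placeholders taken from a prior create challenge response:

import base64
import time

import requests  # third-party HTTP client, for illustration only

API_BASE = 'https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/api'  # placeholder
CHALLENGE_ID = 'challenge-id-from-create-response'  # placeholder
TOKEN = 'token-from-create-response'                # placeholder
ACCESS_TOKEN = 'cognito-access-token'               # placeholder

# Read one captured frame and encode it as base64.
with open('frame.jpg', 'rb') as f:
    frame_b64 = base64.b64encode(f.read()).decode('ascii')

body = {
    'token': TOKEN,
    'timestamp': str(int(time.time() * 1000)),  # milliseconds since the Unix epoch
    'frameBase64': frame_b64,
}
resp = requests.put(
    f'{API_BASE}/challenge/{CHALLENGE_ID}/frame',
    headers={'Authorization': ACCESS_TOKEN},
    json=body,
)
print(resp.json()['message'])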

Response

{
    "message": "string"
}

The response body contains the following attribute:

• message: Success or error message.

Verify challenge response API


POST /challenge/{id}/verify

The API path must contain the id parameter, which is the challenge ID returned by the create challenge
API.

Request

{
    "token": "string"
}

The request body must contain the following attribute:

• token: The security token generated by the create challenge API.

Response

{
    "success": boolean
}

The response body contains the following attribute:


• success: Boolean value indicating whether the challenge succeeded or failed.


Uninstall the solution


You can uninstall the Liveness Detection Framework solution by deleting the AWS CloudFormation
stacks. You must manually delete the Amazon S3 buckets and Amazon DynamoDB table created by this
solution. AWS Solutions Implementations do not automatically delete buckets and tables in case you
have stored data to retain.

Deleting the AWS CloudFormation stack


1. Sign in to the AWS CloudFormation console.
2. On the Stacks page, select the solution’s stack.
3. Choose Delete.

Deleting the Amazon S3 buckets


This solution is configured to retain the solution-created Amazon S3 buckets if you decide to delete
the stacks, to prevent accidental data loss. After uninstalling the solution, you can manually delete
these S3 buckets if you do not need to retain the data. Follow these steps to
delete the Amazon S3 buckets.
Note
Before attempting to delete all the Amazon S3 buckets, each S3 bucket must be empty. Do this
by repeating steps 1-4 for each bucket.

1. Sign in to the Amazon S3 console.
2. Choose Buckets from the left navigation pane.
3. Locate the S3 bucket to empty.
4. Select the S3 bucket and choose Empty.

After all buckets are empty, proceed to delete the buckets:

5. Locate the <stack-name>-backend*-challengebucket-<id> S3 bucket, select it, and choose Delete.
6. Locate the <stack-name>-backend*-loggingbucket-<id> S3 bucket, select it, and choose Delete.
7. Locate the <stack-name>-backend*-trailbucket-<id> S3 bucket, select it, and choose Delete.
8. Locate the <stack-name>-client*-staticwebsitebucket-<id> S3 bucket, select it, and choose Delete.
9. Locate the <stack-name>-client*-loggingbucket-<id> S3 bucket, select it, and choose Delete.

Deleting the Amazon DynamoDB table


After uninstalling the solution, you can manually delete the Amazon DynamoDB table if you do not need
to retain the data. Follow these steps to delete the Amazon DynamoDB table.

1. Sign in to the Amazon DynamoDB console.


2. Locate the <stack-name>-BackendStack-<id>-ChallengeTable-<id> table.
3. Select the table and choose Delete table.


Source code
Visit our GitHub repository to download the source files for this solution and to share your
customizations with others.


Revisions
Date            Change

January 2022    Initial release


Contributors
• David Laredo
• Henrique Fugita
• Rafael Werneck
• Rafael Ribeiro Martins
• Lucas Otsuka


Notices
Customers are responsible for making their own independent assessment of the information in this
document. This document: (a) is for informational purposes only, (b) represents AWS current product
offerings and practices, which are subject to change without notice, and (c) does not create any
commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS products or services
are provided “as is” without warranties, representations, or conditions of any kind, whether express or
implied. AWS responsibilities and liabilities to its customers are controlled by AWS agreements, and this
document is not part of, nor does it modify, any agreement between AWS and its customers.

Liveness Detection Framework is licensed under the terms of the Apache License Version 2.0,
available at The Apache Software Foundation.

Liveness Detection Framework uses the Amazon Rekognition service. Customers should review the Use
cases that involve public safety and the general AWS Service Terms.


AWS glossary
For the latest AWS terminology, see the AWS glossary in the AWS General Reference.
