Liveness Detection Framework
Implementation Guide
Amazon's trademarks and trade dress may not be used in connection with any product or service that is not
Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or
discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may
or may not be affiliated with, connected to, or sponsored by Amazon.
Table of Contents
Welcome
Cost
    Example cost estimate: 50,000 challenge attempts per month
Architecture overview
Solution components
    Create challenge API workflow
    Put challenge frame API workflow
    Verify challenge response API workflow
Security
    IAM roles
    Cross-origin resource sharing (CORS)
    Security HTTP headers
    Data retention
    File handling
    Tracing
    Amazon Cognito user pools
Design considerations
    Nose challenge
    Pose challenge
    Custom challenge
    Regional deployments
        Supported deployment Regions
AWS CloudFormation template
Automated deployment
    Deployment overview
    Step 1. Launch the stack
    Step 2. Sign in to the web interface
Additional resources
Create a custom challenge
API reference
    Create challenge API
    Put challenge frame API
    Verify challenge response API
Uninstall the solution
    Deleting the AWS CloudFormation stack
    Deleting the Amazon S3 buckets
    Deleting the Amazon DynamoDB table
Source code
Revisions
Contributors
Notices
AWS glossary
Facial recognition has become a widely used mechanism for identity verification applications. It provides
a low-friction user experience and a safer approach than password-based alternatives. Even though
current technology can identify a person's face with high accuracy, fraudsters can still circumvent such
systems by impersonating other users with static photos, video replays, and masks.
Such vulnerabilities against spoofing attacks can be overcome by augmenting a facial recognition system
with some form of liveness detection. Liveness detection is any technique used to identify spoofing
attempts by determining whether the source of a biometric sample is a live human being or a fake
representation. This is accomplished through algorithms that analyze images captured through cameras
(and sometimes other types of sensor data) in order to detect signs of reproduced samples.
The Liveness Detection Framework solution helps you implement liveness detection mechanisms into
your applications by means of an extensible architecture. It comprises a set of APIs to process and verify
liveness challenges, along with two different types of challenges provided as reference implementations.
In addition to those, you can extend the framework and implement your own liveness detection
algorithms. This solution also includes a sample web application fully integrated with the APIs. You can
use it as a reference to create your own front end that fits your business needs.
This implementation guide describes architectural considerations and configuration steps for deploying
Liveness Detection Framework in the Amazon Web Services (AWS) Cloud. It includes instructions to
launch and configure the AWS services required to deploy this solution using AWS best practices for
security and availability.
The guide is intended for IT architects and developers who have practical experience architecting in the
AWS Cloud.
Cost
You are responsible for the cost of the AWS services used while running the solution, which can vary
based on the following factors:
• Number of images processed by Amazon Rekognition per month: the solution uses the DetectFaces
operation to extract face metadata from each image.
• Amount of data served by Amazon CloudFront: static assets such as HTML, JavaScript, and images files
are served by Amazon CloudFront.
• Number of calls to AWS Secrets Manager: API tokens are signed using an AWS Secrets Manager secret.
• Number of images stored in the Amazon Simple Storage Service (Amazon S3) bucket per month: the
solution stores all captured user images in Amazon S3.
• Number of Amazon DynamoDB write/read requests per month: the solution records all challenge
attempts in DynamoDB.
• Number of Amazon API Gateway requests per month: all solution requests go through API Gateway.
• Number of AWS Lambda invocations per month: the backend logic runs on an AWS Lambda function.
• Number of Amazon Cognito monthly active users.
This solution is based entirely on serverless AWS services. Therefore, when the solution is not in use, you
only pay for data stored in Amazon S3 and DynamoDB and for the AWS Secrets Manager secret.
We recommend creating a budget through AWS Cost Explorer to help manage costs. For full details, refer
to the pricing webpage for each AWS service used in this solution.
Example cost estimate: 50,000 challenge attempts per month
• For each nose challenge attempt, 10 calls to Amazon Rekognition’s DetectFaces API are performed
(one per image).
• For each pose challenge attempt, one call to Amazon Rekognition’s DetectFaces API is performed.
• For each nose challenge attempt, 12 API calls are performed (one to start the challenge, 10 to send
each image, and one to verify the challenge). In total: 25,000 x 12 = 300,000.
• For each pose challenge attempt, three API calls are performed (one to start the challenge, one to
send the frame, and one to verify the challenge). In total: 25,000 x 3 = 75,000.
• The total number of API calls is equal to the number of AWS Lambda requests and AWS Secrets
Manager API calls, because each API call is backed by the AWS Lambda function and the function uses
AWS Secrets Manager.
Note
Average cost per challenge attempt: $0.005968
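As a sanity check, the per-service call counts above can be reproduced with a few lines of arithmetic, assuming the even 25,000/25,000 split between challenge types that this estimate uses:

```python
# Back-of-the-envelope check of the example estimate's call counts
# (50,000 attempts/month, split evenly between nose and pose challenges).
nose_attempts = 25_000
pose_attempts = 25_000

# Rekognition DetectFaces calls: 10 per nose attempt, 1 per pose attempt.
detect_faces_calls = nose_attempts * 10 + pose_attempts * 1

# API Gateway requests: 12 per nose attempt, 3 per pose attempt.
# This also equals the Lambda invocation and Secrets Manager call counts.
api_requests = nose_attempts * 12 + pose_attempts * 3

print(detect_faces_calls)  # 275000
print(api_requests)        # 375000
```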
Architecture overview
We leverage Amazon Rekognition to detect the facial details needed to verify the challenge. The
solution’s architecture is composed of a web application that serves as the user front end, and a
serverless backend with APIs that are invoked by the front end.
The client device allows the user to access the sample web application. The sample web application
captures user images (frames) using the device embedded camera and invokes the solution APIs in the
AWS Cloud.
Deploying this solution with the default parameters builds the following environment in the AWS Cloud.
1. An Amazon CloudFront distribution to serve the web application to the client device.
2. An Amazon S3 source bucket to host the sample web application static files (HTML, JavaScript, and
CSS).
3. Amazon API Gateway to expose the REST/HTTP API endpoints invoked by the client device.
4. AWS Lambda function to process API requests. All liveness detection logic runs inside that function.
5. An Amazon DynamoDB table to store information about each user’s challenge attempts, such as user
ID, timestamp, and challenge-related parameters.
6. An Amazon S3 object storage bucket that holds user images captured by the client device and
uploaded via the APIs.
7. Amazon Rekognition for identifying faces in an image along with their position and landmarks, such
as eyes, nose, and mouth.
8. AWS Secrets Manager to store the secrets used to sign tokens.
9. Amazon Cognito user pool to provide user access control to the API calls.
Note
Although the architecture is fully serverless and scalable, with many simultaneous users, you can
reach the maximum transactions per second (TPS) for Amazon Rekognition. Service quotas vary
by AWS Region and can be increased through the AWS Support Center.
Solution components
The solution supports several workflows. These include the create challenge API, put challenge frame
API, and verify challenge response API workflows.
Create challenge API workflow
5. The Lambda Challenge function receives the request and selects which type of challenge will
be presented to the user. Based on the challenge type, it generates the challenge parameter
values. It also generates a challenge ID and a security token, signed with a secret stored in AWS
Secrets Manager. The Lambda Challenge function stores the challenge parameters in the Amazon
DynamoDB Challenges table and returns them to the client device.
6. The client device receives the challenge parameters and shows the user instructions for performing
the challenge.
Put challenge frame API workflow
1. The user interacts with the camera on their client device while it captures images.
2. The client device issues a PUT HTTP request to the /challenge/{id}/frame API endpoint,
passing the image and the security token. Amazon API Gateway forwards the request to the Lambda
Challenge function.
3. The Lambda Challenge function validates the security token. If it is valid, it stores the image in the
Amazon S3 Frames bucket. It also updates the challenge record in the DynamoDB Challenges table
with the image S3 URL.
These steps are repeated for as many images as required by the challenge type until the user completes
all challenge instructions.
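As a rough illustration of step 2, a client could assemble the PUT request as follows. The base URL, challenge ID, token header name, and payload below are placeholders, not the solution's actual values; check the deployed API's expectations before relying on them.

```python
import urllib.request

# Hypothetical values; a real client receives the API base URL, challenge ID,
# and signed security token from the create challenge API response.
api_base = "https://example.execute-api.us-east-1.amazonaws.com/prod"
challenge_id = "3f2c9a"
token = "signed-security-token"
jpeg_bytes = b"\xff\xd8\xff\xe0" + b"\x00" * 16  # stand-in JPEG payload

req = urllib.request.Request(
    url=f"{api_base}/challenge/{challenge_id}/frame",
    data=jpeg_bytes,
    method="PUT",
    headers={
        "Content-Type": "image/jpeg",
        # The header name carrying the token is an assumption.
        "Authorization": token,
    },
)
# urllib.request.urlopen(req) would actually send the frame; omitted here.
```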
Verify challenge response API workflow
1. The user follows the instructions and completes the challenge on the client device.
2. The client device issues a POST HTTP request to the /challenge/{id}/verify API endpoint,
passing the security token, to start the challenge verification in the AWS Cloud. Amazon API Gateway
forwards the request to the Lambda Challenge function.
3. The Lambda Challenge function validates the security token. If it is valid, it looks up the challenge
data in the DynamoDB Challenges table. Then, it invokes Amazon Rekognition to analyze the
image(s) stored in the Amazon S3 Frames bucket. The Lambda Challenge function then runs the
verification logic specific to the challenge type. The final result (success or fail) is returned to the
client device.
4. The client device displays the final result to the user.
During the final verification, the Lambda Challenge function invokes the DetectFaces operation from
Amazon Rekognition Image for each frame image. For each detected face, the operation returns the
facial details. Of all the details returned by the DetectFaces operation, the solution uses the
bounding box coordinates of the face, facial landmark coordinates, pose, and other attributes, such as
smile and eyes open or closed.
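To make that concrete, the following sketch pulls those fields out of a hand-written, DetectFaces-shaped response. The sample is trimmed to the attributes the solution uses; real Rekognition responses include confidence scores and many more fields.

```python
# Trimmed, hand-written sample in the documented DetectFaces response shape.
response = {
    "FaceDetails": [
        {
            "BoundingBox": {"Left": 0.3, "Top": 0.2, "Width": 0.4, "Height": 0.5},
            "Landmarks": [
                {"Type": "nose", "X": 0.51, "Y": 0.46},
                {"Type": "eyeLeft", "X": 0.42, "Y": 0.35},
            ],
            "Pose": {"Pitch": 2.0, "Roll": -1.5, "Yaw": 10.0},
            "Smile": {"Value": False, "Confidence": 98.0},
            "EyesOpen": {"Value": True, "Confidence": 99.0},
        }
    ]
}

faces = response["FaceDetails"]
assert len(faces) == 1  # the challenges require exactly one detected face

face = faces[0]
nose = next(lm for lm in face["Landmarks"] if lm["Type"] == "nose")
print(nose["X"], nose["Y"])   # 0.51 0.46
print(face["Pose"]["Yaw"])    # 10.0
```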
Security
When you build systems on AWS infrastructure, security responsibilities are shared between you and
AWS. This shared model reduces your operational burden because AWS operates, manages, and controls
the components including the host operating system, the virtualization layer, and the physical security
of the facilities in which the services operate. For more information about AWS security, visit AWS Cloud
Security.
IAM roles
AWS Identity and Access Management (IAM) roles allow you to assign granular access policies and
permissions to services and users on the AWS Cloud. This solution creates IAM roles that grant the
solution’s AWS Lambda functions access to create Regional resources.
Data retention
The Amazon S3 buckets used in this solution might store sensitive data, such as user images and related
metadata. For security reasons, such sensitive data should be stored only long enough to satisfy the
business requirements of the application. If the solution is deployed to production, we recommend that
you delete user images after they are no longer needed. Consider configuring S3 Lifecycle rules on the
Amazon S3 buckets to automatically expire objects, or the S3 Intelligent-Tiering storage class to reduce
storage costs.
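For example, object expiration could be configured with an S3 Lifecycle rule similar to the following sketch. The 30-day retention period and the empty prefix are assumptions to adapt to your business requirements; the configuration could be applied with boto3's put_bucket_lifecycle_configuration or in the S3 console.

```python
# Sketch of an S3 Lifecycle configuration that expires captured frames.
# The retention period (30 days) and prefix ("" = whole bucket) are assumptions.
lifecycle_config = {
    "Rules": [
        {
            "ID": "expire-challenge-frames",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},      # apply to every object in the bucket
            "Expiration": {"Days": 30},    # delete objects 30 days after creation
        }
    ]
}
```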
File handling
The put challenge frame API receives JPEG file content sent by the sample web client application. In a
production environment, other untrusted sources could attempt to send malicious content to the API.
Therefore, we recommend that you perform additional handling to the file content, such as format and
size validation, malware detection, and Content Disarm and Reconstruction (CDR).
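A minimal sketch of such server-side handling, checking only payload size and the JPEG magic bytes, might look as follows. The size limit is an arbitrary assumption, and checks like these are no substitute for real malware detection or CDR.

```python
MAX_FRAME_BYTES = 512 * 1024  # assumed limit; tune to your camera resolution

def looks_like_jpeg(data: bytes) -> bool:
    """Cheap sanity check: JPEG files start with FF D8 and end with FF D9."""
    return len(data) >= 4 and data[:2] == b"\xff\xd8" and data[-2:] == b"\xff\xd9"

def validate_frame(data: bytes) -> bool:
    """Reject payloads that are too large or not JPEG-shaped."""
    return len(data) <= MAX_FRAME_BYTES and looks_like_jpeg(data)

print(validate_frame(b"\xff\xd8" + b"\x00" * 10 + b"\xff\xd9"))  # True
print(validate_frame(b"GIF89a"))                                  # False
```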
Tracing
This solution doesn’t include tracing capabilities. Consider using AWS X-Ray. This service collects data
about requests that your application serves, and provides tools that you can use to view, filter, and gain
insights into that data to identify issues and opportunities for optimization.
Design considerations
This solution deploys a framework that supports different types of liveness challenges.
The framework backend is implemented in Python and built on top of the Chalice microframework. In
the backend, the framework architecture provides all of the API implementations and extension points to
integrate logic specifically for your application and custom challenges.
The framework’s front-end web application is implemented using React JavaScript library and TypeScript
syntax language. The web application is a sample implementation that demonstrates how a client
application should interact with the backend APIs and provide a user experience for performing the
liveness challenges. Use it as a reference to build a custom web or mobile application.
Important
The sample web application is intended for demonstration purposes only. We strongly
recommend that you customize it to best meet your security, performance, and usage standards.
The framework considers the following assumptions about supported liveness challenges:
• To deliver challenge instructions to the user and run the challenge-specific workflow, the front end
might require some parameter definitions provided by the backend when a challenge attempt is
initiated.
• The challenge verification logic is based on one or more static images of the user, captured by a
client device camera. Verification logic cannot rely on video streams, only on multiple individual frame
images.
• The challenge verification logic is based on the following metadata extracted from each image: face
bounding boxes, facial landmark coordinates (eyes, nose, mouth, etc.), face pose (pitch, roll, yaw),
attributes (gender, age, beard, glasses, mouth open, eyes open, smile) and emotion (angry, calm,
confused, disgusted, happy, surprised, sad). For more details about Amazon Rekognition API types,
refer to Data types in the Amazon Rekognition Developer Guide.
• The challenge verification logic can be represented as a state machine with one or more states.
• When multiple types of challenge are used, the backend is responsible for defining the selected
challenge type when a challenge attempt is initiated by the front end. The selection logic can use
metadata provided by the front end.
Based on these assumptions, the framework exposes the following extension points in the form of
Python function decorators:
• Challenge type selection logic: This is an application-wide extension point. It is used to define
which challenge type a user should complete when the front end initiates a challenge. The challenge
selection can be based on custom client metadata provided by the front end. Exposed as the
@challenge_type_selector decorator.
• Challenge parameters definition logic: This challenge-specific extension point is used to define
the parameter values for a certain challenge attempt. The logic runs when the front end initiates a
challenge, immediately after the challenge type is selected. Exposed as the @challenge_params
decorator.
• Challenge verification logic: This challenge-specific extension point is used to define how a challenge
attempt is verified, based on the challenge parameters and the face metadata extracted from the
images. If the challenge requires multiple images, such as video frames, the logic must be defined
as a state machine that processes one image at a time. To define the state machine logic, the
@challenge_state decorator is exposed.
Included in the framework are two types of liveness challenges (nose challenge and pose challenge),
which can be used as-is, customized, or used as a reference for implementing new custom challenges.
Nose challenge
This challenge is an active liveness detection approach that prompts the user to position their face inside
an oval area in the center of the image and then move their nose to a target point.
When a nose challenge is initiated, its challenge parameters definition logic expects to receive the
image dimensions from the client device camera, specifically imageWidth and imageHeight metadata
attributes. Based on these dimensions, the logic determines the coordinates for the central oval area
(areaTop, areaLeft, areaWidth, and areaHeight) and the random target nose position (noseTop,
noseLeft, noseWidth, and noseHeight) and returns them as the challenge parameters.
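A parameters definition function along these lines might be sketched as follows. The sizing rules (a half-frame central area and a 1/16-frame target) are illustrative assumptions, not the solution's actual geometry.

```python
import random

def nose_challenge_params(client_metadata):
    # Illustrative sizing only; the solution's real rules may differ.
    width = client_metadata["imageWidth"]
    height = client_metadata["imageHeight"]

    # Central area: half the frame in each dimension, centered.
    area = {
        "areaWidth": width // 2,
        "areaHeight": height // 2,
        "areaLeft": width // 4,
        "areaTop": height // 4,
    }
    # Random target nose position, fully inside the central area.
    nose_w, nose_h = width // 16, height // 16
    nose = {
        "noseWidth": nose_w,
        "noseHeight": nose_h,
        "noseLeft": random.randint(
            area["areaLeft"], area["areaLeft"] + area["areaWidth"] - nose_w),
        "noseTop": random.randint(
            area["areaTop"], area["areaTop"] + area["areaHeight"] - nose_h),
    }
    # Echo the client metadata back alongside the computed parameters.
    return {**client_metadata, **area, **nose}

params = nose_challenge_params({"imageWidth": 640, "imageHeight": 480})
```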
Based on these parameters, the front end displays the device camera feed and instructs the user to
perform the movements. As the user performs the challenge, the front end must also continually capture
frames and upload them to the backend API. After the user has concluded the movement, the front end
invokes the verification API.
Note
The face-api.js library is used in the front end to detect the user’s face and landmarks to
provide real-time feedback as the user performs the challenge. Liveness validation occurs in
the backend only, in the verification API, using Amazon Rekognition. Results from the front-end
library are not used for any means of user liveness validation.
The nose challenge verification logic is represented by a state machine that processes the frames
uploaded for a certain challenge attempt. For each frame, the state machine checks the detected face
metadata and either advances to the next step, fails, or succeeds in the challenge. The state machine is
represented below:
• Face state: Checks if there is one, and only one, face detected in the frame image. If that is the case,
the verification advances to the next state. Otherwise, the challenge fails.
• Area state: Checks if the user's face is positioned inside the central area. If the face is fitted in the area
before the specified timeout, the verification advances to the next state. Otherwise, the challenge fails.
• Nose state: Checks if the user's nose is at the target position. If the nose reaches the target position
before the specified timeout, the challenge succeeds. Otherwise, the challenge fails.
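The three states above can be sketched as plain functions, framework decorators omitted. The frame dictionary keys and return strings are illustrative stand-ins for the real face metadata and framework constants.

```python
# Plain-Python sketch of the nose challenge state machine described above.
def face_state(frame):
    # Exactly one face must be detected, otherwise the challenge fails.
    return "NEXT" if frame["face_count"] == 1 else "FAIL"

def area_state(frame):
    # Face must fit the central area before the timeout.
    if frame["elapsed"] > frame["timeout"]:
        return "FAIL"
    return "NEXT" if frame["face_in_area"] else "CONTINUE"

def nose_state(frame):
    # Nose must reach the random target position before the timeout.
    if frame["elapsed"] > frame["timeout"]:
        return "FAIL"
    return "SUCCESS" if frame["nose_on_target"] else "CONTINUE"
```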
Pose challenge
This challenge is an active liveness detection approach that prompts the user to reproduce a certain
pose.
The pose is random and combines eyes and mouth position variations. Eyes must be opened (looking
forward), closed, looking left, or looking right. The mouth must be closed or smiling.
When a pose challenge is initiated, the backend returns how the eyes and the mouth should look in the
pose. The client device uses that information to generate an image with the corresponding pose and asks
the user to reproduce it. The user then needs to take a selfie (self-portrait photo). After the user takes a
selfie, they can compare the result with the pose and, if the user doesn’t think they look the same, they
can retake the photo. The user can retake the photo as many times as necessary. When ready, the photo
is uploaded to the backend for verification.
The backend verifies the photo sent by the client device, checking that exactly one face is detected and
that the eyes and mouth match the requested pose. If all verifications pass, the challenge is considered
successfully performed. Otherwise, the challenge fails.
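Generating such a random pose can be sketched in a few lines. The attribute names and values below are illustrative, not the solution's actual field names.

```python
import random

# Possible variations described above; names are illustrative.
EYES = ["open", "closed", "looking_left", "looking_right"]
MOUTH = ["closed", "smiling"]

def random_pose():
    """Combine one random eye variation with one random mouth variation."""
    return {"eyes": random.choice(EYES), "mouth": random.choice(MOUTH)}

pose = random_pose()
```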
Note
Simple challenges are generally easy for users; however, they are more susceptible to spoofing
attacks. Keep this in mind when using this challenge as it is. You could present this challenge in
low-risk scenarios or you could extend it by adding more facial expressions or add hand gestures
into the mix.
Custom challenge
This solution allows you to implement custom challenges using the framework. For details, refer to
Create a custom challenge (p. 19).
Regional deployments
This solution uses the Amazon Rekognition service, which is not currently available in all AWS Regions.
You must launch this solution in an AWS Region where Amazon Rekognition is available.
AWS CloudFormation template

liveness-detection-framework.template: Use
this template to launch the solution and all associated components. The default configuration deploys
Amazon Rekognition, Amazon Cognito, Amazon CloudFront, AWS Secrets Manager, Amazon S3, Amazon
DynamoDB, Amazon API Gateway, and AWS Lambda, but you can customize the template to meet your
specific needs.
Automated deployment
Before you launch the automated deployment, review the architecture, components, and other
considerations in this guide. Follow the step-by-step instructions in this section to configure and deploy
the solution into your account.
Deployment overview
Use the following steps to deploy this solution on AWS. For detailed instructions, follow the links for
each step.
Step 1. Launch the stack
1. Sign in to the AWS Management Console and select the button to launch the
liveness-detection-framework.template AWS CloudFormation template.
Alternatively, you can download the template as a starting point for your own implementation.
2. The template launches in the US East (N. Virginia) Region by default. To launch the solution in a
different AWS Region, use the Region selector in the console navigation bar.
Note
This solution uses the Amazon Rekognition service, which is not currently available in all
AWS Regions. You must launch this solution in an AWS Region where Amazon Rekognition is
available. For the most current availability by Region, refer to the AWS Regional Services List.
3. On the Create stack page, verify that the correct template URL is in the Amazon S3 URL text box and
choose Next.
4. On the Specify stack details page, assign a name to your solution stack. For information about
naming character limitations, refer to IAM and STS Limits in the AWS Identity and Access Management
User Guide.
5. Under Parameters, review the parameters for this solution template and modify them as necessary.
This solution uses the following default values.
Step 2. Sign in to the web interface
1. Sign in to the AWS CloudFormation console and select the solution’s stack.
2. Choose the Outputs tab.
3. Under the Key column, locate URL, and select the link.
4. On the sign-in page, enter the username and temporary password provided in the invitation email.
5. On the Change password page, follow the prompts to create a new password. Password
requirements: a minimum of six characters, with at least one uppercase character, one lowercase
character, one number, and one symbol.
6. After signing in, select the liveness detection challenge and follow the steps.
Additional resources
AWS services
Related projects
• AWS Chalice
Create a custom challenge
First, create a new Python module inside the chalicelib directory. You can use the module
custom.py as a template. Inside the new module, implement the challenge parameters definition logic
and the challenge verification logic.
The framework requires you to define a string value to identify your custom challenge type. For example,
for the nose challenge, the identifier is 'NOSE', and for the pose challenge, it is 'POSE'. Choose a
different identifier for your custom challenge and use it consistently in all functions.
For the challenge parameters definition logic, modify the function decorated with the
@challenge_params decorator. The following sample code is for a challenge parameters definition
function, as provided in the custom.py module.
@challenge_params(challenge_type='CUSTOM')
def custom_challenge_params(client_metadata):
    params = dict()
    params.update(client_metadata)
    return params
Set the decorator attribute challenge_type with the value of your custom challenge identifier. The
function receives the input parameter client_metadata, which is a dictionary that might contain
custom attributes provided by the front end when it calls the create challenge API. You can use these
client-provided attributes inside your logic to modify your parameter values. The function must return a
dictionary containing attributes representing your custom challenge parameters. The returned dictionary
should also include the input client metadata attributes.
Challenge verification
For the challenge verification logic, you must determine if your challenge will be based on individual or
multiple images. In the case of individual images, your verification state machine contains only one state.
In the case of multiple images, it can contain one or more states. For each state, you must implement
a function decorated with the @challenge_state decorator. When the verify challenge response API
is called, the framework is responsible for invoking your custom state functions to process each frame
metadata. The following sample code is for a first state (or single state) function, as provided in the
custom.py module.
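The first-state sample is not reproduced here; based on the later-state examples from custom.py, it likely has the following shape. The decorator and return-value constants below are minimal stand-ins, defined only so the sketch runs on its own; in the real module they are imported from the framework, and the `if True:` placeholder stands for checks on the frame's face metadata.

```python
# Minimal stand-ins for the framework's constants and decorator (illustration only).
STATE_CONTINUE, STATE_NEXT = "STATE_CONTINUE", "STATE_NEXT"

def challenge_state(challenge_type, first=False, next_state=None):
    def decorator(func):
        return func  # the real framework registers the state function here
    return decorator

# Sketch of the likely first-state function shape.
@challenge_state(challenge_type='CUSTOM', first=True, next_state='second_state')
def first_state(params, frame, context):
    if True:  # placeholder: replace with checks on the frame's face metadata
        return STATE_NEXT
    return STATE_CONTINUE
```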
Set the decorator attribute challenge_type with the value of your custom challenge identifier.
For the first state, set the attribute first to True. In case your logic has more states after the first,
indicate which one is the next by setting the attribute next_state with the name of the function that
represents the next state.
As a result of processing frame metadata, the function must return one of the following values:
• STATE_CONTINUE: Signals the framework to stay in the current state for processing the next frame.
• STATE_NEXT: Signals the framework to advance to the next state for processing the next frame.
• CHALLENGE_FAIL: Signals the framework that the challenge is considered not valid and ends the
state machine processing.
• CHALLENGE_SUCCESS: Signals the framework that the challenge was successfully validated and ends
the state machine processing.
In case your challenge contains only one state, the return value must be either CHALLENGE_FAIL or
CHALLENGE_SUCCESS.
The following sample code is for functions that implement other states after the first, as provided in the
custom.py module.
@challenge_state(challenge_type='CUSTOM', next_state='second_state')
def second_state(params, frame, context):
    if True:
        return STATE_NEXT
    return STATE_CONTINUE

@challenge_state(challenge_type='CUSTOM')
def last_state(params, frame, context):
    if True:
        return CHALLENGE_SUCCESS
    return CHALLENGE_FAIL
Set the decorator attribute challenge_type to the value of your custom challenge identifier.
If more states follow, indicate which one is next by setting the attribute next_state to the name
of the function that represents the next state. If the state is the last one, do not set a value for the
attribute next_state.
These state functions receive the same input parameters, and must return the same values, as those
described for the first-state function.
For the last state function, the return value must be either CHALLENGE_FAIL or CHALLENGE_SUCCESS.
After you have implemented your custom challenge module, you must modify the application-
wide challenge type selection logic to include your new challenge. To do this, edit the file
app.py. The default logic randomly selects one of the provided challenges: the nose challenge or the pose
challenge. The following default code is for the challenge type selection function, decorated with the
@challenge_type_selector decorator.

@challenge_type_selector
def random_challenge_selector(client_metadata):
    app.log.debug('random_challenge_selector')
    if CLIENT_CHALLENGE_SELECTION and 'challengeType' in client_metadata:
        return client_metadata['challengeType']
    return random.choice(['POSE', 'NOSE'])
The function receives the input parameter client_metadata, a dictionary that can contain
custom attributes provided by the front end when it calls the create challenge API. You can use
these client-provided attributes in your logic to modify the challenge type selection. The
default implementation allows the client side to specify a preferred challenge type via the custom
attribute challengeType: if the environment variable CLIENT_CHALLENGE_SELECTION is set
to True, the function returns the preferred challenge type. For your customized challenge selection
function, implement the logic that best fits your use case, including any other attributes in
client_metadata as required, and make sure your front end provides the new attributes when invoking
the API. The function must return a string value identifying the selected challenge type.
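As a sketch of a customized selector, the following function honors the client's challengeType hint when enabled, and otherwise branches on a hypothetical platform attribute. The platform attribute and the decorator stand-in are illustrative assumptions, not part of the framework's defaults:

```python
import random

CLIENT_CHALLENGE_SELECTION = True  # stands in for reading the environment variable

def challenge_type_selector(func):
    # Illustrative no-op stand-in for the framework's decorator.
    return func

@challenge_type_selector
def custom_challenge_selector(client_metadata):
    # Honor an explicit client preference when enabled.
    if CLIENT_CHALLENGE_SELECTION and 'challengeType' in client_metadata:
        return client_metadata['challengeType']
    # 'platform' is a hypothetical custom attribute that the front end
    # would have to send in the create challenge request body.
    if client_metadata.get('platform') == 'mobile':
        return 'NOSE'
    return random.choice(['POSE', 'NOSE', 'CUSTOM'])
```

The front end would then pass "platform" alongside any other client metadata when it invokes the create challenge API.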
Challenge configuration
Additionally, for the framework to run your custom module and invoke your decorated custom functions,
you must include an import statement in the file app.py.
The following sample code is to import the provided custom.py module. If you want to create your own
module file, modify the statement accordingly.
import_module('chalicelib.nose')
import_module('chalicelib.pose')
import_module('chalicelib.custom') # <-- Importing the custom module
API reference
Create challenge API
POST /challenge
Request
{
    "string": "string",
    ...
}
The request body can send client metadata to the backend as one or more pairs of attribute names and
values. Each pair is in the form "name": "value". The default implementation of the framework uses
the following attribute:
• challengeType – (optional) the client's preferred challenge type, honored only when the environment variable CLIENT_CHALLENGE_SELECTION is set to True.
Additional custom attributes can be defined as required by custom challenges and framework
extensions.
Response
{
    "id": "string",
    "token": "string",
    "type": "string",
    "params": {
        "string": "string",
        ...
    }
}
Put challenge frame API
The API path must contain the id parameter, which is the challenge ID returned by the create challenge
API.
Request
{
    "token": "string",
    "timestamp": "string",
    "frameBase64": "string"
}
Response
{
    "message": "string"
}
Verify challenge response API
The API path must contain the id parameter, which is the challenge ID returned by the create challenge
API.
Request
{
    "token": "string"
}
Response
{
    "success": boolean
}
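Putting the three APIs together, a client creates a challenge, sends a sequence of frames, and then verifies. The sketch below only builds the method, path, and JSON body for each call; the HTTP methods for the frame and verify calls, and the path shapes embedding the challenge id, are assumptions, since the guide states only that the path must contain the id parameter:

```python
import json

def create_challenge_request(client_metadata):
    # POST /challenge with the client metadata as the JSON body (from this guide).
    return ('POST', '/challenge', json.dumps(client_metadata))

def put_challenge_frame_request(challenge_id, token, timestamp, frame_base64):
    # Method and path shape are assumed; the guide only says the path
    # must contain the challenge id.
    body = {'token': token, 'timestamp': timestamp, 'frameBase64': frame_base64}
    return ('PUT', f'/challenge/{challenge_id}/frame', json.dumps(body))

def verify_challenge_response_request(challenge_id, token):
    # Method and path shape are assumed, as above.
    return ('POST', f'/challenge/{challenge_id}/verify', json.dumps({'token': token}))
```

A client would send one put-frame request per captured frame, using the token returned by the create challenge API, before making the single verify call.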
Deleting the AWS CloudFormation stack
Deleting the Amazon DynamoDB table
Source code
Visit our GitHub repository to download the source files for this solution and to share your
customizations with others.
Revisions
Date Change
Contributors
• David Laredo
• Henrique Fugita
• Rafael Werneck
• Rafael Ribeiro Martins
• Lucas Otsuka
Notices
Customers are responsible for making their own independent assessment of the information in this
document. This document: (a) is for informational purposes only, (b) represents AWS current product
offerings and practices, which are subject to change without notice, and (c) does not create any
commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS products or services
are provided “as is” without warranties, representations, or conditions of any kind, whether express or
implied. AWS responsibilities and liabilities to its customers are controlled by AWS agreements, and this
document is not part of, nor does it modify, any agreement between AWS and its customers.
Liveness Detection Framework is licensed under the terms of the Apache License Version 2.0,
available at The Apache Software Foundation.
Liveness Detection Framework uses the Amazon Rekognition service. Customers should review Use
cases that involve public safety in the Amazon Rekognition documentation, and the general AWS Service Terms.
AWS glossary
For the latest AWS terminology, see the AWS glossary in the AWS General Reference.