
CN109977764B - Living body recognition method, device, terminal and storage medium based on plane detection - Google Patents

Living body recognition method, device, terminal and storage medium based on plane detection

Info

Publication number
CN109977764B
CN109977764B (application CN201910111148.5A)
Authority
CN
China
Prior art keywords
living body
posture
local
head
feature points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910111148.5A
Other languages
Chinese (zh)
Other versions
CN109977764A (en
Inventor
王路生
陆进
陈斌
宋晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201910111148.5A
Publication of CN109977764A
Priority to PCT/CN2019/118553 (WO2020164284A1)
Application granted
Publication of CN109977764B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G06V40/166: Detection; Localisation; Normalisation using acquisition arrangements
    • G06V40/168: Feature extraction; Face representation
    • G06V40/40: Spoof detection, e.g. liveness detection
    • G06V40/45: Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract


The present invention belongs to the field of liveness recognition technology, and in particular, relates to a liveness recognition method, device, terminal and storage medium based on plane detection. The liveness recognition method includes: obtaining a plane image of the detection object; extracting facial feature points on the plane image; determining the overall head posture corresponding to the plane image based on the extracted facial feature points; dividing the extracted facial feature points into multiple local feature groups, and determining the local head posture corresponding to each local feature group; calculating the posture difference between the local head posture and the overall head posture; judging whether the detection object is alive based on the posture difference. The present invention realizes liveness recognition through plane image detection, which is beneficial to improving recognition efficiency and reducing equipment costs.

Description

Living body identification method, device, terminal and storage medium based on plane detection
Technical Field
The invention belongs to the technical field of living body identification, and particularly relates to a living body identification method, device, terminal and storage medium based on plane detection.
Background
Currently, liveness recognition is mainly used in authentication scenarios to verify the genuine physiological characteristics of a detected object. For example, in face recognition applications, combinations of actions such as blinking, opening the mouth, shaking the head and nodding are detected to verify whether the current object is a real living person. Common liveness attacks such as photos, face swapping, masks, occlusion and screen replay can thus be effectively resisted, screening out fraudulent conduct and protecting users' interests.
Existing face liveness detection methods mainly suffer from two problems. First, the computation is expensive: for example, three-dimensional depth information may be required, or an optical flow method may be used to compute the non-rigid motion of the face, making the calculation process complex. Second, the equipment cost is high, because additional biometric recognition devices are needed for combined recognition: for example, an extra infrared human-body detector to measure the temperature of the detected object, or a sound collection device for combined voice recognition. Thus, prior-art liveness recognition methods involve either a complex calculation process or the higher cost of additional recognition equipment.
Disclosure of Invention
In view of the above, the present invention provides a living body identification method, apparatus, terminal and storage medium based on plane detection, so as to solve the problems of complex calculation process or higher cost of needing to be equipped with additional identification equipment in the living body identification method in the prior art.
A first aspect of an embodiment of the present invention provides a living body identification method based on plane detection, which may include:
Acquiring a plane image of a detection object;
Extracting face feature points on the plane image;
Determining the overall head pose corresponding to the planar image based on the extracted face feature points;
Dividing the extracted face feature points into a plurality of local feature groups, and determining the local head pose corresponding to each local feature group;
calculating the posture difference between the local head posture and the overall head posture;
and judging whether the detection object is a living body or not based on the posture difference.
A second aspect of an embodiment of the present invention provides a living body recognition apparatus based on plane detection, which may include:
a planar image acquisition unit for acquiring a planar image of a face of a detection object;
the feature point extraction unit is used for extracting face feature points of the face plane image;
The overall posture determining unit is used for determining the overall head posture corresponding to the plane image based on the extracted face feature points;
the local pose determining unit is used for dividing the extracted face feature points into a plurality of local feature groups and determining the local head pose corresponding to each local feature group;
A posture difference calculating unit for calculating a posture difference between the local head posture and the overall head posture;
And a living body judgment unit configured to judge whether the detection object is a living body based on the posture difference.
A third aspect of the embodiments of the present invention provides an identification terminal, including a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, the processor implementing the steps of the living body identification method based on plane detection in the first aspect of the present invention and any one of the possible implementation manners of the first aspect when the processor executes the computer readable instructions.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing computer-readable instructions which, when executed by a processor, implement the steps of the method for plane detection-based living body identification in the first aspect of the present invention and any one of the possible implementations of the first aspect.
Compared with the prior art, the invention has the beneficial effects that:
The invention acquires a planar image of the detection object, uses the face feature points on the planar image to determine the overall head pose and the local head poses of the detection object, and judges whether the detection object is a living body from the pose difference between them. This solves the problems of existing liveness recognition approaches, namely a complex calculation process or the higher cost of additional identification equipment.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of one embodiment of a method for identifying a living body based on planar detection in an embodiment of the present invention;
FIG. 2 is a flow chart of an embodiment of step S103 in the embodiment shown in FIG. 1;
FIG. 3 is a flowchart of another embodiment of a method for identifying a living body based on planar detection according to an embodiment of the present invention;
FIG. 4 is a block diagram showing an embodiment of a living body recognition apparatus based on plane detection in the embodiment of the present invention;
fig. 5 is a schematic block diagram of an identification terminal in an embodiment of the present invention.
Detailed Description
In order to make the objects, features and advantages of the present invention more comprehensible, the technical solutions in the embodiments of the present invention are described in detail below with reference to the accompanying drawings, and it is apparent that the embodiments described below are only some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, a flowchart of an embodiment of a method for identifying a living body based on plane detection according to an embodiment of the present invention may include:
In step S101, a planar image of the detection object is acquired.
In the embodiment of the invention, a planar image of a detection object is first acquired, wherein the planar image refers to a two-dimensional image, and the planar image can be acquired through a terminal provided with an image sensor. For example, a planar image of the detection object may be acquired by a mobile phone terminal provided with a camera.
In step S102, face feature points on the planar image are extracted.
In the embodiment of the invention, face feature points are pixels, or sets of pixels, on the planar image that can characterize facial features. Each face feature point reflects one feature of the face; specifically, it may be a single pixel on the planar image or a set of adjacent pixels, for example a pixel block formed by several adjacent pixels.
In the embodiment of the invention, the extracted face feature points may be a predefined plurality of face feature points, for example, the extracted face feature points may include pixels or a set of pixels corresponding to face parts such as nose tip, chin, left corner of left eye, right corner of right eye, left corner of mouth, right corner of mouth, and the like on a planar image.
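As an illustration, the six predefined feature points named above (nose tip, chin, left corner of the left eye, right corner of the right eye, and the two mouth corners) can be paired with 3D coordinates of a generic reference face model. The coordinates below are conventional values seen in head-pose-estimation examples, not figures from the patent; the names and helper function are likewise hypothetical.

```python
import numpy as np

# Hypothetical 3D reference coordinates (in millimetres) for the six
# predefined face feature points; illustrative values only, not from the patent.
MODEL_POINTS_3D = {
    "nose_tip":        np.array([0.0,     0.0,    0.0]),
    "chin":            np.array([0.0,  -330.0,  -65.0]),
    "left_eye_left":   np.array([-225.0, 170.0, -135.0]),
    "right_eye_right": np.array([225.0,  170.0, -135.0]),
    "mouth_left":      np.array([-150.0, -150.0, -125.0]),
    "mouth_right":     np.array([150.0,  -150.0, -125.0]),
}

def feature_point_array(names):
    """Stack the named model points into an (N, 3) array."""
    return np.stack([MODEL_POINTS_3D[n] for n in names])
```

In a real pipeline these names would be matched against landmarks detected on the planar image by a face-landmark detector.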
In step S103, the overall head pose corresponding to the planar image is determined based on the extracted face feature points.
In an embodiment of the present invention, for a given planar image, the head pose of the person in it can be described by pitch (rotation up and down, e.g., lowering or raising the head), yaw (rotation left and right, e.g., turning the head left or right), and roll (in-plane rotation, e.g., tilting the head toward the left or right shoulder). The overall head pose, i.e., the orientation of the face in three-dimensional space, can be expressed in terms of these three angles.
In the embodiment of the invention, the different orientations of the face in three-dimensional space are reflected on the planar image mainly as different position distributions of the face feature points. Therefore, the orientation of the face in three-dimensional space can be determined from the position distribution of the extracted face feature points; that is, the overall head pose corresponding to the planar image can be obtained from the position distribution of the face feature points on the planar image.
In one implementation, the overall head pose of the planar image may be determined by building a neural network for overall head pose estimation, training it with deep learning on planar images whose head poses are known, and feeding the planar image whose head pose is to be determined into the trained network.
In addition, in another implementation, the overall head pose may instead be estimated with an appearance-based approach, using the apparent features of the face on the planar image.
Optionally, as shown in fig. 2, an embodiment of the step S103 may include:
Step S1031, obtaining a position distribution of the extracted face feature points on the planar image, to obtain a first position distribution.
Step S1032, adjusting the gesture of the preset three-dimensional face model, and acquiring the position distribution of the projection of the face feature points on the three-dimensional face model on the two-dimensional plane in the adjustment process, so as to obtain the second position distribution.
In the embodiment of the invention, the face feature points on the three-dimensional face model are in one-to-one correspondence with the face feature points on the extracted planar image.
Step S1033, obtaining a spatial pose of the three-dimensional face model when the second position distribution is consistent with the first position distribution, and obtaining a target pose.
Step S1034, determining the target pose as the overall head pose corresponding to the planar image.
In the embodiment of the invention, a standard three-dimensional face model can be preset. The model is rotated from a reference direction (such as facing forward) to adjust its face orientation, while the position of each face feature point on the model is monitored. When the position distribution of the model's face feature points projected onto a two-dimensional plane is consistent, or nearly consistent, with the position distribution of the feature points extracted from the planar image, the azimuthal rotation information of the model is recorded; this rotation is the head pose of the three-dimensional face model, i.e., the overall head pose corresponding to the planar image.
Specifically, the projection of the face feature points on the three-dimensional face model onto the two-dimensional plane can be computed with a camera calibration method and the direct linear transform (DLT) method, i.e., the conversion from the (three-dimensional) world coordinate system to the (two-dimensional) image coordinate system.
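The world-to-image conversion described above can be sketched as a forward projection: rotate the 3D model points by a candidate pose, project them with a pinhole camera model, and compare the projected distribution with the extracted one. This is a minimal NumPy sketch under assumed camera parameters (focal length, image center, model depth, and the Euler-angle convention are all illustrative choices); in practice a calibration/DLT solver such as OpenCV's solvePnP would recover the pose directly.

```python
import numpy as np

def euler_to_rotation(pitch, yaw, roll):
    """Rotation matrix from pitch/yaw/roll in radians, composed as Rz @ Ry @ Rx."""
    cx, sx = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    cz, sz = np.cos(roll), np.sin(roll)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project_points(points_3d, pose, focal=800.0, center=(320.0, 240.0), depth=1000.0):
    """Project rotated 3D model points to the image plane with a pinhole model.

    `focal`, `center` and `depth` are assumed camera parameters, not values
    from the patent.
    """
    rotated = points_3d @ euler_to_rotation(*pose).T
    z = rotated[:, 2] + depth          # place the model in front of the camera
    u = focal * rotated[:, 0] / z + center[0]
    v = focal * rotated[:, 1] / z + center[1]
    return np.stack([u, v], axis=1)
```

The pose search then amounts to varying `pose` until `project_points` reproduces the extracted first position distribution.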
In step S104, the extracted face feature points are divided into a plurality of local feature groups, and the local head pose corresponding to each local feature group is determined.
In the embodiment of the invention, the extracted face feature points are grouped to obtain a plurality of local feature groups, and for each local feature group, the three-dimensional face model is adjusted in the same way, so that the local head gesture corresponding to the local feature group can be obtained.
Optionally, the step S104 may include:
dividing the extracted face feature points into a plurality of local feature groups, with a specified number of face feature points per group;
calculating the local spatial pose corresponding to each local feature group when the three-dimensional face model is in the target pose;
and determining the local spatial pose corresponding to each local feature group when the three-dimensional face model is in the target pose as the local head pose corresponding to that local feature group.
In the embodiment of the present invention, the specified number is an integer not less than 3, and the specified number is smaller than the number of the extracted face feature points, for example, the specified number may be 3, that is, every three face feature points serve as a local feature group.
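One straightforward (assumed) grouping scheme for the step above is to enumerate every combination of three feature points; the text only fixes the group size as a specified number of at least three, so the exhaustive enumeration below is an illustrative choice.

```python
from itertools import combinations

def make_local_groups(feature_names, group_size=3):
    """Enumerate every local feature group of `group_size` feature points."""
    if group_size < 3:
        raise ValueError("a local plane needs at least three points")
    return list(combinations(feature_names, group_size))

# Hypothetical names for the six extracted feature points.
names = ["nose_tip", "chin", "left_eye_left",
         "right_eye_right", "mouth_left", "mouth_right"]
groups = make_local_groups(names)
```

With six feature points and groups of three, this yields twenty local feature groups.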
For example, the three face feature points of the nose tip, the left corner of the left eye and the right corner of the right eye are divided into one group to obtain a first local feature group; when the three-dimensional face model is in the target pose, the orientation of the plane defined by these three feature points on the model is determined as the local head pose corresponding to the first local feature group.
Similarly, the three face feature points of the chin, the left corner of the left eye and the right corner of the right eye are divided into another group to obtain a second local feature group; when the model is in the target pose, the orientation of the plane defined by the chin and the two eye corners is determined as the local head pose corresponding to the second local feature group.
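The orientation of the plane through a group of three feature points, as used above, is its unit normal, which can be computed with a cross product:

```python
import numpy as np

def plane_normal(p1, p2, p3):
    """Unit normal of the plane through three 3D feature points."""
    n = np.cross(p2 - p1, p3 - p1)
    norm = np.linalg.norm(n)
    if norm == 0:
        raise ValueError("the three points are collinear")
    return n / norm
```

The sign of the normal depends on the point ordering; for comparing angles against the overall pose vector, a consistent ordering per group suffices.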
In step S105, a posture difference between the partial head posture and the overall head posture is calculated.
In the embodiment of the invention, the posture difference between the local head posture corresponding to each local feature group and the overall head posture of the planar image is counted, and whether the detection object is a living body can be further judged through the counted posture difference.
Further, the step S105 may include:
acquiring a pose vector when the three-dimensional face model is in the target pose;
obtaining a plane normal vector of each local feature group when the three-dimensional face model is in the target pose;
and calculating the included angle between each plane normal vector and the pose vector, where the size of the included angle represents the magnitude of the pose difference.
In the embodiment of the invention, the overall head pose and the local head poses are represented as vectors: the overall head pose by the pose vector of the three-dimensional face model in the target pose, and each local head pose by the normal vector of the plane on which the corresponding local feature group lies when the model is in the target pose. The included angle between each plane normal vector and the pose vector is then calculated, and this angle represents the pose difference between the local head pose and the overall head pose: the larger the angle, the larger the difference; the smaller the angle, the smaller the difference.
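The included angle used as the pose-difference measure can be computed from the dot product of the two vectors; this minimal sketch returns it in degrees:

```python
import numpy as np

def angle_between(v1, v2):
    """Included angle in degrees between two vectors (the pose difference measure)."""
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clip guards against floating-point drift just outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

The same function serves both the overall pose vector and each local plane normal.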
In step S106, it is determined whether the detection object is a living body based on the posture difference.
In the embodiment of the present invention, the pose difference reflects the deviation between a local plane and the overall pose of the head. In practice, for a living object this difference generally does not exceed a threshold, so comparing the pose difference with the threshold determines whether the detected object is a living body. For example, when the pose difference between the local head pose of some local feature group and the overall head pose is greater than a set threshold, it can be determined that the detection object is not a living body.
Further, the step 106 may include:
counting a first number of local feature groups whose included angle with the pose vector is larger than a first preset threshold;
If the first number is greater than a first specified value, determining that the detection object is not a living body;
And if the first number is not greater than the first specified value, determining that the detection object is a living body.
In the embodiment of the invention, to improve recognition accuracy, the first number of local feature groups whose included angle with the pose vector exceeds the first preset threshold is counted. When the first number is greater than a first specified value, the detection object is judged not to be a living body; when it is not greater, the detection object is judged to be a living body.
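The counting rule above can be sketched as follows; the 15-degree threshold and the allowance of one outlier group are illustrative placeholders, since the text leaves the first preset threshold and first specified value as tunable parameters:

```python
def judge_liveness(angles_deg, angle_threshold=15.0, max_outliers=1):
    """Return True if the object is judged a living body.

    `angles_deg` holds, per local feature group, the included angle between
    its plane normal and the overall pose vector. Threshold values are
    assumed, not taken from the patent.
    """
    first_number = sum(1 for a in angles_deg if a > angle_threshold)
    return first_number <= max_outliers
```

A natural face keeps every local plane close to the overall pose, so `first_number` stays small; a flat replay or photo pushes many groups past the threshold.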
For example, when a liveness attack is performed with a tablet computer or display (the detected object is a person shown on the screen), the face's overall head pose in the replayed video stays essentially fixed while the angle between the display plane and the camera orientation changes. Because the replayed head image is flat, the angles between the local planes (local head poses) and the camera cannot match those of a natural face, i.e., an angle difference (pose difference) appears between the two kinds of vectors.
As another example, in a handheld photo attack, the overall head pose is likewise unchanged, but deformation of the paper and changes in the handheld tilt angle can be detected from the normal vectors of the local planes. Therefore, the embodiment of the invention calculates the included angles between the obtained groups of plane normal vectors (local head poses) and the overall head pose vector and performs the liveness judgment on that basis: for a genuine living target, these included angles remain within the preset threshold.
In summary, the present method can solve the problems of existing liveness recognition approaches, namely a complex calculation process or the higher cost of extra recognition equipment: on the one hand, the recognition process involves no three-dimensional depth information, which simplifies the calculation and helps improve recognition efficiency; on the other hand, it requires no additional biometric recognition equipment, which reduces the equipment cost of liveness recognition.
Referring to fig. 3, another embodiment of a living body identification method based on plane detection according to an embodiment of the present invention may include:
Step S301, a planar image of the detection object is acquired.
And step S302, extracting the face characteristic points on the plane image.
Step S303, determining the overall head pose corresponding to the planar image based on the extracted face feature points.
Step S304, dividing the extracted face feature points into a plurality of local feature groups, and determining the local head pose corresponding to each local feature group.
Step S305, calculating a posture difference between the local head posture and the global head posture.
In the embodiment of the present invention, the steps S301 to S305 can be specifically referred to the steps S101 to S105 in the embodiment shown in fig. 1, and are not described herein again.
Step S306, outputting an operation instruction for instructing the detection target to perform the specified head operation.
Step S307, monitoring a posture difference change rate of the detection object when the head motion is performed;
step S308, determining whether the detection object is a living body based on the posture difference change rate.
In the embodiment of the invention, to further improve recognition accuracy, the detection object is asked in practice to perform one or more specified pose changes, such as turning the head left, right, up or down. Since an attack image is fixed and cannot reproduce these changes, this improves the effectiveness of the algorithm.
While the detection object's head performs the pose change, the change rate of the included angle between each plane normal vector and the head pose vector can be estimated in real time. This change rate reflects how the pose difference between the overall head pose and the local head poses evolves during the head action, and the liveness judgment can be made from it.
In the specific implementation, a plane image of the detected object after the head motion is performed may be obtained, a posture difference may be obtained based on the plane image in the same manner as described above, and the posture difference may be compared with a posture difference corresponding to the plane image before the head motion is performed, so as to obtain a posture difference change rate.
Further, the step S308 may include:
Counting a second number of local feature groups with the attitude difference change rate larger than a second preset threshold;
if the second number is greater than a second specified value, determining that the detection object is not a living body;
And if the second number is not greater than the second specified value, determining that the detection object is a living body.
In the embodiment of the invention, the second number of local feature groups whose pose difference change rate exceeds the second preset threshold is counted: the detection object is judged not to be a living body when the second number is greater than the second specified value, and to be a living body when it is not.
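The change-rate decision of steps S306 to S308 can be sketched in the same style; the angle lists, time step and thresholds below are illustrative assumptions rather than values from the patent:

```python
def judge_liveness_by_change_rate(angles_before, angles_after, dt=1.0,
                                  rate_threshold=20.0, max_outliers=1):
    """Judge liveness from how fast each local pose difference changes.

    `angles_before` / `angles_after` hold, per local feature group, the
    included angle (degrees) between its plane normal and the overall pose
    vector before and after the requested head action; `dt` is the elapsed
    time. All parameter values are assumed.
    """
    rates = [abs(a1 - a0) / dt for a0, a1 in zip(angles_before, angles_after)]
    second_number = sum(1 for r in rates if r > rate_threshold)
    return second_number <= max_outliers
```

A real head keeps its local planes moving consistently with the overall pose, so the per-group rates stay small; a bent photo or tilted screen produces large, inconsistent rates.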
In summary, this embodiment can likewise solve the problems of existing liveness recognition approaches, namely a complex calculation process or the higher cost of extra recognition equipment: on the one hand, the recognition process involves no three-dimensional depth information, which simplifies the calculation and helps improve recognition efficiency; on the other hand, it requires no additional biometric recognition equipment, which reduces the equipment cost of liveness recognition.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not limit the implementation of the embodiments of the present invention.
Corresponding to the living body identification method based on plane detection described in the above embodiments, fig. 4 shows a block diagram of one embodiment of a living body identification device based on plane detection provided in an embodiment of the present invention.
In the present embodiment, the living body recognition apparatus 4 based on plane detection may include a plane image acquisition unit 41, a feature point extraction unit 42, an overall posture determination unit 43, a local posture determination unit 44, a posture difference calculation unit 45, and a living body judgment unit 46.
A planar image acquisition unit 41 for acquiring a face planar image of a detection object;
a feature point extracting unit 42, configured to extract a face feature point of the face plane image;
an overall pose determining unit 43, configured to determine an overall head pose corresponding to the planar image based on the extracted face feature points;
A local pose determining unit 44, configured to divide the extracted face feature points into a plurality of local feature groups, and determine a local head pose corresponding to each local feature group;
A posture difference calculating unit 45 for calculating a posture difference of the partial head posture and the overall head posture;
A living body judgment unit 46 for judging whether the detection object is a living body based on the posture difference.
Optionally, the living body recognition device 4 based on the plane detection may further include:
The first acquisition unit is used for acquiring the position distribution of the extracted face feature points on the plane image to obtain first position distribution;
The second acquisition unit is used for adjusting the gesture of a preset three-dimensional face model, and acquiring the position distribution of the projection of the face feature points on the three-dimensional face model on a two-dimensional plane in the adjustment process to obtain second position distribution, wherein the face feature points on the three-dimensional face model are in one-to-one correspondence with the face feature points on the extracted plane image;
the third acquisition unit is used for acquiring the spatial pose of the three-dimensional face model when the second position distribution is consistent with the first position distribution, so as to obtain a target pose;
the overall posture determining unit 43 is specifically configured to determine the target posture as an overall head posture corresponding to the planar image.
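The "adjust the three-dimensional model until the projected distribution matches" step of the first through third acquisition units can be illustrated with a deliberately minimal sketch: an orthographic projection and a brute-force search over yaw only. A production system would estimate the full rotation (e.g. with a perspective-n-point solver); everything here is an assumption for illustration:

```python
import numpy as np

def yaw_matrix(deg):
    """Rotation about the vertical axis by `deg` degrees."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def fit_yaw(model_pts, observed_2d, candidates=np.arange(-90, 91, 1)):
    """Rotate the 3D model feature points, project orthographically
    (drop z), and keep the yaw whose projected distribution best
    matches the observed 2D landmark distribution."""
    best, best_err = 0.0, np.inf
    for deg in candidates:
        proj = (model_pts @ yaw_matrix(deg).T)[:, :2]
        err = np.sum((proj - observed_2d) ** 2)
        if err < best_err:
            best, best_err = deg, err
    return best
```

When the observed points are an exact projection of the rotated model, the search recovers the rotation, i.e. the target posture of the third acquisition unit.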
Optionally, the living body recognition device 4 based on the plane detection may further include:
the feature group dividing unit is used for dividing the extracted face feature points into a plurality of local feature groups by taking a specified number of face feature points as a group, wherein the specified number is smaller than the number of the extracted face feature points;
the space gesture calculation unit is used for calculating the local space gesture corresponding to each local feature group when the three-dimensional face model is in the target gesture;
The local pose determination unit 44 is specifically configured to determine, as a local head pose corresponding to each local feature group, a local spatial pose corresponding to each local feature group when the three-dimensional face model is in the target pose.
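The patent does not specify how the feature group dividing unit forms its groups; one simple assumption is contiguous chunks of a specified size (smaller than the total count), with any remainder dropped:

```python
def split_into_groups(points, group_size=3):
    """Divide extracted face feature points into local feature groups of a
    specified number of points; points that do not fill a group are dropped.
    The chunking strategy is an illustrative assumption."""
    return [points[i:i + group_size]
            for i in range(0, len(points) - group_size + 1, group_size)]
```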
Optionally, the living body recognition device 4 based on the plane detection may further include:
the first vector acquisition unit is used for acquiring an attitude vector of the three-dimensional face model when the three-dimensional face model is in the target attitude;
the second vector acquisition unit is used for acquiring plane normal vectors of each local feature group when the three-dimensional face model is in the target attitude;
The posture difference calculating unit 45 is specifically configured to calculate an included angle between each plane normal vector and the posture vector, where the size of the included angle represents the size of the posture difference.
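If each local feature group contains three non-collinear model points, the plane normal vector used by the posture difference calculating unit 45 can be taken from a cross product, and the included angle from a dot product. A sketch under that assumption:

```python
import numpy as np

def plane_normal(p0, p1, p2):
    """Unit normal of the plane through three non-collinear 3D feature points."""
    n = np.cross(p1 - p0, p2 - p0)
    return n / np.linalg.norm(n)

def included_angle_deg(normal, pose_vector):
    """Included angle (degrees) between a group's plane normal
    and the overall posture vector."""
    cos_a = np.dot(normal, pose_vector) / (
        np.linalg.norm(normal) * np.linalg.norm(pose_vector))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
```

A group lying in the image plane has normal (0, 0, 1); its angle to a frontal pose vector is 0°, and to a sideways vector 90°, which is the quantity the unit compares against the first preset threshold.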
Optionally, the living body recognition device 4 based on the plane detection may further include:
The first quantity counting unit is used for counting the first quantity of the local feature groups with the included angles with the attitude vectors larger than a first preset threshold value;
The living body judging unit 46 is specifically configured to judge that the detection object is not a living body if the first number is greater than a first specified value, and judge that the detection object is a living body if the first number is not greater than the first specified value.
Optionally, the living body recognition device 4 based on the plane detection may further include:
an action instruction unit configured to output an action instruction for instructing the detection object to perform a specified head action;
a difference monitoring unit for monitoring a posture difference change rate when the detection object performs the head motion;
the living body judgment unit 46 is also configured to judge whether the detection object is a living body based on the posture difference change rate.
Optionally, the living body recognition device 4 based on the plane detection may further include:
A second number statistics unit, configured to count a second number of local feature groups with the change rate of the posture difference being greater than a second preset threshold;
The living body determining unit 46 is specifically further configured to determine that the detection object is not a living body if the second number is greater than a second specified value, and determine that the detection object is a living body if the second number is not greater than the second specified value.
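The motion-based check of the difference monitoring unit and second number statistics unit can be sketched as follows. The patent does not define the change rate precisely; here it is approximated as the peak-to-peak variation of each group's included angle over the recorded frames, and the parameter names are assumptions:

```python
import numpy as np

def judge_by_change_rate(angle_series, second_threshold=20.0,
                         second_specified_value=2):
    """angle_series: array of shape (n_frames, n_groups) holding each
    group's included angle (degrees) while the subject performs the
    instructed head action.

    Per claim 6, the object is judged not to be a living body when the
    number of groups whose change rate exceeds the second preset
    threshold is greater than the second specified value."""
    # Change rate per group, approximated as peak-to-peak variation.
    rates = angle_series.max(axis=0) - angle_series.min(axis=0)
    second_number = int(np.sum(rates > second_threshold))
    return second_number <= second_specified_value
```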
In summary, the present method and device can solve the problems that existing living body recognition approaches either involve a complex calculation process or incur high cost because additional recognition equipment must be provided. On the one hand, the recognition process does not involve three-dimensional depth information, which simplifies the calculation and helps improve living body recognition efficiency; on the other hand, it requires no additional biological feature recognition equipment, which reduces the equipment cost of living body recognition.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, for the specific working processes of the above-described apparatus, modules and units, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated herein.
Each of the foregoing embodiments is described with its own emphasis; for parts that are not described or detailed in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Fig. 5 shows a schematic block diagram of an identification terminal according to an embodiment of the present invention, and for convenience of explanation, only a portion related to the embodiment of the present invention is shown.
In this embodiment, the identification terminal 5 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The identification terminal 5 may comprise a processor 50, a memory 51, and computer readable instructions 52 stored in the memory 51 and executable on the processor 50, for example computer readable instructions for performing the above-mentioned plane detection-based living body identification method. The processor 50, when executing the computer readable instructions 52, implements the steps in each of the above-described embodiments of the plane detection-based living body identification method, such as steps S101 through S106 shown in fig. 1. Alternatively, the processor 50, when executing the computer readable instructions 52, implements the functions of the units in the apparatus embodiments described above, such as the units 41 to 46 shown in fig. 4.
Illustratively, the computer readable instructions 52 may be partitioned into one or more modules/units, which are stored in the memory 51 and executed by the processor 50 to implement the present invention. The one or more modules/units may be a series of computer readable instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution of the computer readable instructions 52 in the identification terminal 5.
The processor 50 may be a central processing unit (Central Processing Unit, CPU), or another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
The memory 51 may be an internal storage unit of the identification terminal 5, such as a hard disk or a memory of the identification terminal 5. The memory 51 may also be an external storage device of the identification terminal 5, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) provided on the identification terminal 5. Further, the memory 51 may include both an internal storage unit and an external storage device of the identification terminal 5. The memory 51 is used for storing the computer readable instructions as well as other instructions and data required by the identification terminal 5. The memory 51 may also be used to temporarily store data that has been output or is to be output.
The functional units in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, the software product comprising a number of computer readable instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods according to the embodiments of the present invention. The storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium that can store computer readable instructions.
The embodiments described above are only intended to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those skilled in the art that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. A living body identification method based on plane detection, characterized by comprising:
Acquiring a plane image of a detection object;
Extracting face feature points on the plane image;
Determining the corresponding overall head gesture of the planar image based on the extracted face feature points, wherein the overall head gesture is the orientation of a face in a three-dimensional space, and determining the orientation of the face in the three-dimensional space on the planar image according to the position distribution of the extracted face feature points on the planar image;
Dividing the extracted face feature points into a plurality of local feature groups, and determining the local head gesture corresponding to each local feature group;
calculating the posture difference between the local head posture and the overall head posture;
judging whether the detection object is a living body based on the posture difference;
the calculating the pose difference of the local head pose and the overall head pose includes:
Acquiring a posture vector of the three-dimensional face model when the three-dimensional face model is in a target posture;
Obtaining a plane normal vector of each local feature group when the three-dimensional face model is in the target attitude;
and calculating an included angle between each plane normal vector and the attitude vector, wherein the size of the included angle represents the size of the attitude difference.
2. The planar detection-based living body recognition method according to claim 1, wherein the determining the overall head pose corresponding to the planar image based on the extracted face feature points includes:
acquiring the position distribution of the extracted face feature points on the plane image to obtain first position distribution;
adjusting the gesture of a preset three-dimensional face model, and acquiring the position distribution of the projection of the face feature points on the three-dimensional face model on a two-dimensional plane in the adjustment process to obtain second position distribution, wherein the face feature points on the three-dimensional face model are in one-to-one correspondence with the face feature points on the extracted plane image;
acquiring the spatial pose of the three-dimensional face model when the second position distribution is consistent with the first position distribution, and obtaining a target pose;
and determining the target gesture as the integral head gesture corresponding to the plane image.
3. The planar detection-based living body recognition method according to claim 2, wherein the dividing the extracted face feature points into a plurality of local feature groups and determining the local head pose corresponding to each local feature group includes:
Dividing the extracted face feature points into a plurality of local feature groups by taking a specified number of face feature points as a group, wherein the specified number is smaller than the number of the extracted face feature points;
calculating a local spatial gesture corresponding to each local feature group when the three-dimensional face model is in the target gesture;
And determining the local space gesture corresponding to each local feature group when the three-dimensional face model is in the target gesture as the local head gesture corresponding to the local feature group.
4. The living body recognition method based on planar detection according to claim 1, wherein the determining whether the detection object is a living body based on the posture difference includes:
counting a first number of local feature groups with included angles with the gesture vectors being larger than a first preset threshold;
If the first number is greater than a first specified value, determining that the detection object is not a living body;
And if the first number is not greater than the first specified value, determining that the detection object is a living body.
5. The plane detection-based living body recognition method according to any one of claims 1 to 3, further comprising, after calculating a posture difference of the local head posture and the overall head posture:
Outputting an action instruction for instructing the detection object to perform a specified head action;
monitoring the posture difference change rate of the detected object when the head acts;
Correspondingly, the determining whether the detection object is a living body based on the posture difference specifically includes:
judging whether the detection object is a living body based on the posture difference change rate.
6. The living body recognition method based on planar detection according to claim 5, wherein the judging whether the detection object is a living body based on the posture difference change rate includes:
Counting a second number of local feature groups with the attitude difference change rate larger than a second preset threshold;
if the second number is greater than a second specified value, determining that the detection object is not a living body;
And if the second number is not greater than the second specified value, determining that the detection object is a living body.
7. A living body recognition apparatus based on plane detection, characterized by comprising:
a planar image acquisition unit for acquiring a planar image of a face of a detection object;
the feature point extraction unit is used for extracting face feature points of the face plane image;
The overall posture determining unit is used for determining the overall head posture corresponding to the planar image based on the extracted face feature points, wherein the overall head posture is the orientation of the face in the three-dimensional space, and the orientation of the face in the three-dimensional space on the planar image is determined according to the position distribution of the extracted face feature points on the planar image;
the local pose determining unit is used for dividing the extracted face feature points into a plurality of local feature groups and determining the local head pose corresponding to each local feature group;
A posture difference calculating unit for calculating a posture difference between the local head posture and the overall head posture;
A living body judgment unit configured to judge whether the detection object is a living body based on the posture difference;
the calculating the pose difference of the local head pose and the overall head pose includes:
Acquiring a posture vector of the three-dimensional face model when the three-dimensional face model is in a target posture;
Obtaining a plane normal vector of each local feature group when the three-dimensional face model is in the target attitude;
and calculating an included angle between each plane normal vector and the attitude vector, wherein the size of the included angle represents the size of the attitude difference.
8. An identification terminal comprising a memory, a processor and computer readable instructions stored in the memory and executable on the processor, characterized in that the processor, when executing the computer readable instructions, implements the steps of the plane detection based living body identification method according to any one of claims 1 to 6.
9. A computer-readable storage medium storing computer-readable instructions, which when executed by a processor implement the steps of the plane detection-based living body identification method according to any one of claims 1 to 6.
CN201910111148.5A 2019-02-12 2019-02-12 Living body recognition method, device, terminal and storage medium based on plane detection Active CN109977764B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910111148.5A CN109977764B (en) 2019-02-12 2019-02-12 Living body recognition method, device, terminal and storage medium based on plane detection
PCT/CN2019/118553 WO2020164284A1 (en) 2019-02-12 2019-11-14 Method and apparatus for recognising living body based on planar detection, terminal, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910111148.5A CN109977764B (en) 2019-02-12 2019-02-12 Living body recognition method, device, terminal and storage medium based on plane detection

Publications (2)

Publication Number Publication Date
CN109977764A CN109977764A (en) 2019-07-05
CN109977764B true CN109977764B (en) 2024-12-31

Family

ID=67076912

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910111148.5A Active CN109977764B (en) 2019-02-12 2019-02-12 Living body recognition method, device, terminal and storage medium based on plane detection

Country Status (2)

Country Link
CN (1) CN109977764B (en)
WO (1) WO2020164284A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109977764B (en) * 2019-02-12 2024-12-31 平安科技(深圳)有限公司 Living body recognition method, device, terminal and storage medium based on plane detection
CN111639582B (en) * 2020-05-26 2023-10-10 清华大学 Living body detection method and equipment
CN112560742A (en) * 2020-12-23 2021-03-26 杭州趣链科技有限公司 Human face in-vivo detection method, device and equipment based on multi-scale local binary pattern
CN113724418B (en) * 2021-08-26 2023-07-04 广州小鹏自动驾驶科技有限公司 Data processing method, device and readable storage medium
CN116934848B (en) * 2022-03-31 2024-11-19 腾讯科技(深圳)有限公司 Data processing method, device, equipment and medium
CN115019400B (en) * 2022-07-19 2023-03-03 北京拙河科技有限公司 Illegal behavior detection method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105550637A (en) * 2015-12-04 2016-05-04 小米科技有限责任公司 Contour point positioning method and contour point positioning device
CN108062544A (en) * 2018-01-19 2018-05-22 百度在线网络技术(北京)有限公司 For the method and apparatus of face In vivo detection

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104392220A (en) * 2014-11-27 2015-03-04 苏州福丰科技有限公司 Three-dimensional face recognition airport security inspection method based on cloud server
CN105426827B (en) * 2015-11-09 2019-03-08 北京市商汤科技开发有限公司 Living body verification method, device and system
CN107729838A (en) * 2017-10-12 2018-02-23 中科视拓(北京)科技有限公司 A Head Pose Estimation Method Based on Deep Learning
CN109977764B (en) * 2019-02-12 2024-12-31 平安科技(深圳)有限公司 Living body recognition method, device, terminal and storage medium based on plane detection

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105550637A (en) * 2015-12-04 2016-05-04 小米科技有限责任公司 Contour point positioning method and contour point positioning device
CN108062544A (en) * 2018-01-19 2018-05-22 百度在线网络技术(北京)有限公司 For the method and apparatus of face In vivo detection

Also Published As

Publication number Publication date
CN109977764A (en) 2019-07-05
WO2020164284A1 (en) 2020-08-20

Similar Documents

Publication Publication Date Title
CN109977764B (en) Living body recognition method, device, terminal and storage medium based on plane detection
US11747898B2 (en) Method and apparatus with gaze estimation
US10832039B2 (en) Facial expression detection method, device and system, facial expression driving method, device and system, and storage medium
JP6528764B2 (en) Face matching device, method, and recording medium
US9881204B2 (en) Method for determining authenticity of a three-dimensional object
JP6334735B2 (en) Image feature detection using edge vectors
WO2020000908A1 (en) Method and device for face liveness detection
JP5552519B2 (en) Construction of face feature vector
WO2020103700A1 (en) Image recognition method based on micro facial expressions, apparatus and related device
CN102375970B (en) A kind of identity identifying method based on face and authenticate device
JP5159950B2 (en) Image processing apparatus, method, and program
EP2993614A1 (en) Method and apparatus for facial recognition
CN110675487A (en) Three-dimensional face modeling and recognizing method and device based on multi-angle two-dimensional face
CN112257696B (en) Sight estimation method and computing equipment
US10650260B2 (en) Perspective distortion characteristic based facial image authentication method and storage and processing device thereof
KR102476016B1 (en) Apparatus and method for determining position of eyes
JP6071002B2 (en) Reliability acquisition device, reliability acquisition method, and reliability acquisition program
CN110852310A (en) Three-dimensional face recognition method and device, terminal equipment and computer readable medium
JP6487642B2 (en) A method of detecting a finger shape, a program thereof, a storage medium of the program, and a system for detecting a shape of a finger.
CN113298158B (en) Data detection method, device, equipment and storage medium
WO2018103416A1 (en) Method and device for detecting facial image
CN111639582A (en) Living body detection method and apparatus
US20250029425A1 (en) Live human face detection method and apparatus, computer device, and storage medium
CN104636727A (en) Face recognition method applicable to multiple expressions and multiple gestures
CN113870190B (en) Vertical line detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant