CN102375970B - Face-based identity authentication method and authentication device - Google Patents
Face-based identity authentication method and authentication device
- Publication number
- CN102375970B CN201010254201.6A
- Authority
- CN
- China
- Prior art keywords
- face
- result
- frame
- tracking
- authentication
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Collating Specific Patterns (AREA)
Abstract
The invention provides a face-based identity authentication method and authentication device. The method comprises: performing face detection on each frame of image captured by a camera device to obtain a face image, and tracking the face image to obtain a tracking result; performing pose analysis, wherein if the face pose in the obtained pose information does not jump relative to the corresponding face pose in the previous frame of image, the pose analysis result is no jump; performing face comparison; and performing statistical analysis on the multi-frame tracking results, the multi-frame pose analysis results, and the multi-frame face image authentication results, wherein if the statistical analysis result meets a preset condition, the identity authentication is passed. The invention can distinguish a photograph from a real person using only 2-dimensional plane images, solving the technical problem in the prior art that using three-dimensional information for photograph discrimination greatly reduces system recognition speed and cannot meet application requirements.
Description
Technical Field
The present invention relates to identity authentication technology based on face images, and in particular to a face-based identity authentication method and identity authentication device.
Background
With the development of face recognition technology, applications related to face recognition have gradually increased. As an application of face recognition technology, face authentication systems are increasingly used in automatic access control, face login, and the like.
A face authentication system collects a face image of the person to be authenticated with a camera and compares it with the image of the corresponding identity in an information base. If the comparison passes, the person to be authenticated is considered to have the same identity as the corresponding entry in the information base and the authentication passes; otherwise, the authentication fails. An impostor may hold up a photograph of a person in the information base for authentication, and if the authentication system cannot distinguish the photograph from a real person, the impostor holding the photograph will be authenticated. Therefore, the system needs a function for distinguishing photographs from real persons, and this is one of the problems a face authentication system must solve.
The main difference between a photograph and a real person is that the photograph is two-dimensional while the real person is three-dimensional. Using this difference, photographs and real persons can be distinguished by three-dimensional face reconstruction, including methods such as binocular image synthesis. However, the amount of three-dimensional data is large, the computation is slow, and a binocular camera also requires operations such as calibration. Such methods therefore not only occupy more resources but also greatly reduce the recognition speed of the system, making it difficult to meet real-time application requirements.
Disclosure of Invention
The object of the present invention is to provide a face-based identity authentication method and identity authentication device that can distinguish a photograph from a real person using 2-dimensional plane images, solving the technical problem in the prior art that using three-dimensional information for photograph discrimination greatly reduces system recognition speed and cannot meet application requirements.
In order to achieve the above object, in one aspect, a face-based identity authentication method is provided, including:
performing face detection on each frame of image captured by a camera device to obtain a face image, and tracking the face image to obtain a tracking result, wherein if the tracking result is that the face is tracked, the face and the corresponding face in the previous frame of image belong to the same person;
performing pose analysis on the face image to obtain pose information, wherein if the face pose in the pose information does not jump relative to the corresponding face pose in the previous frame of image, the pose analysis result is no jump;
comparing the face image with images in a designated database, wherein if the comparison result is the same class or the similarity is greater than a designated threshold, the face image is considered to belong to a person in the designated database and the authentication result of the current frame face image is pass; otherwise, the authentication result is fail;
and performing statistical analysis on the multi-frame tracking results, the multi-frame pose analysis results, and the multi-frame face image authentication results, wherein if the statistical analysis result meets a preset condition, the identity authentication is passed.
Preferably, in the above method, the preset conditions are:
the number of consecutive frames in which the tracking result is that the face is tracked and the pose analysis result is no jump is greater than a first threshold; and,
the ratio of the number of frames whose face image authentication result is pass to the number of consecutive frames in which the face is tracked and the pose analysis result is no jump is greater than a second threshold; and,
the variation of the face pose over the consecutive frames in which the face is tracked and the pose analysis result is no jump is greater than a third threshold.
Preferably, in the above method, the preset conditions are:
the face poses in the 1st to nth frames of face images whose pose analysis result is no jump conform to a first pose range; and,
the face poses in the (n+k)th to (n+k+m)th frames of face images whose pose analysis result is no jump conform to a second pose range; wherein n and m are integers greater than 1, and k is an integer greater than or equal to 1.
Preferably, in the above method, before the step of performing face detection and tracking on each frame of image captured by the camera device, the method further includes: prompting, by voice or image, the person under test in front of the lens to change his or her head pose as required.
Preferably, the method further includes: if the tracking result is that the face is not tracked, the face tracking of the current frame is broken, the statistical analysis result is cleared, and statistics restart; or if the pose analysis result is a jump, the pose tracking of the current frame is broken, the statistical analysis result is cleared, and statistics restart.
Preferably, in the above method, the step of performing pose analysis on the face image specifically includes: measuring a first distance between a first key point and a second key point in the face image, measuring a second distance between a third key point and a fourth key point, and taking the ratio of the first distance to the second distance as a pose parameter; if the variation of the pose parameter of the face image relative to the pose parameter of the previous frame image is smaller than a preset pose variation threshold, no pose jump has occurred in the face image.
Preferably, in the above method, the first key point is the left nostril, the second key point is the left mouth corner, the third key point is the right nostril, and the fourth key point is the right mouth corner.
In order to achieve the above object, an embodiment of the present invention further provides a face-based identity authentication device, including:
a face detection tracking module, configured to: perform face detection on each frame of image captured by a camera device to obtain a face image, and track the face image to obtain a tracking result, wherein if the tracking result is that the face is tracked, the face and the corresponding face in the previous frame of image belong to the same person;
a pose analysis module, configured to: perform pose analysis on the face image to obtain pose information, wherein if the face pose in the pose information does not jump relative to the corresponding face pose in the previous frame of image, the pose analysis result is no jump;
a face authentication module, configured to: compare the face image with images in a designated database, wherein if the comparison result is the same class or the similarity is greater than a designated threshold, the face image is considered to belong to a person in the designated database and the authentication result of the current frame face image is pass; otherwise, the authentication result is fail;
a multi-frame authentication result integration module, configured to: perform statistical analysis on the multi-frame tracking results, the multi-frame pose analysis results, and the multi-frame face image authentication results, wherein if the statistical analysis result meets a preset condition, the identity authentication is passed.
Preferably, in the above device, the preset conditions are: the number of consecutive frames in which the tracking result is that the face is tracked and the pose analysis result is no jump is greater than a first threshold; the ratio of the number of frames whose face image authentication result is pass to the number of consecutive frames in which the face is tracked and the pose analysis result is no jump is greater than a second threshold; and the variation of the face pose over the consecutive frames in which the face is tracked and the pose analysis result is no jump is greater than a third threshold; or,
the preset conditions are: the face poses in the 1st to nth frames of face images whose pose analysis result is no jump conform to a first pose range; and the face poses in the (n+k)th to (n+k+m)th frames of face images whose pose analysis result is no jump conform to a second pose range; wherein n and m are integers greater than 1, and k is an integer greater than or equal to 1.
Preferably, the above apparatus further comprises:
and the prompt and authentication result output module is used for: prompting a tested person in front of the lens to change the head posture according to requirements through voice or images; and outputting the authentication result of the tested person through voice or images.
Preferably, the above apparatus further comprises a reset module, configured to:
if the tracking result is that the face is not tracked, the face tracking of the current frame is broken, and the reset module clears the statistical analysis result of the multi-frame authentication result integration module and restarts its statistics; or if the pose analysis result is a jump, the pose tracking of the current frame is broken, and the reset module clears the statistical analysis result of the multi-frame authentication result integration module and restarts its statistics.
The invention has at least the following technical effects:
1) By combining face tracking, pose analysis, and face authentication, the invention can improve authentication performance and prevent an impostor from passing authentication with a photograph.
2) The embodiment of the invention performs no three-dimensional analysis; it only analyzes, from a two-dimensional perspective, the differences among face images captured at different angles, and it can also determine the change in the angle of the head relative to the camera device.
Drawings
FIG. 1 is a flow chart of the steps of a method provided by an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a processing procedure for a current frame according to an embodiment of the present invention;
FIG. 3 is a block diagram of an apparatus provided by an embodiment of the present invention;
FIG. 4 is a schematic view of the rotation angle of the head according to the embodiment of the present invention;
fig. 5 is a schematic diagram of authentication sample collection according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the following detailed description of the embodiments is provided with reference to the accompanying drawings.
Fig. 1 is a flowchart illustrating the steps of a method according to an embodiment of the present invention. As shown in fig. 1, a method for performing face authentication using plane images includes:
step 101, performing face detection on each frame of image captured by a camera device to obtain a face image, and tracking the face image to obtain a tracking result, wherein if the tracking result is that the face is tracked, the face and the corresponding face in the previous frame of image belong to the same person;
step 102, performing pose analysis on the face image to obtain pose information, wherein if the face pose in the pose information does not jump relative to the corresponding face pose in the previous frame of image, the pose analysis result is no jump;
step 103, comparing the face image with images in a designated database, wherein if the comparison result is the same class or the similarity is greater than a designated threshold, the face image is considered to belong to a person in the designated database and the authentication result of the current frame face image is pass; otherwise, the authentication result of the current frame face image is fail;
step 104, performing statistical analysis on the multi-frame tracking results, the multi-frame pose analysis results, and the multi-frame face image authentication results, wherein if the statistical analysis result meets a preset condition, the identity authentication is passed.
The embodiment of the invention performs no three-dimensional analysis; it only analyzes, from a two-dimensional perspective, the differences among face images captured at different angles, and it can also determine the change in the angle of the head relative to the camera device.
The preset conditions may be: the number of consecutive frames in which the tracking result is that the face is tracked and the pose analysis result is no jump is greater than a first threshold; the ratio of the number of frames whose face image authentication result is pass to the number of consecutive frames in which the face is tracked and the pose analysis result is no jump is greater than a second threshold; and the variation of the face pose over the consecutive frames in which the face is tracked and the pose analysis result is no jump is greater than a third threshold.
The preset conditions may also be: the face poses in the 1st to nth frames of face images whose pose analysis result is no jump conform to a first pose range; and the face poses in the (n+k)th to (n+k+m)th frames of face images whose pose analysis result is no jump conform to a second pose range; wherein n and m are integers greater than 1, and k is an integer greater than or equal to 1.
If the tracking result is that the face is not tracked, the face tracking of the current frame is broken, the statistical analysis result is cleared, and statistics restart; or if the pose analysis result is a jump, the pose tracking of the current frame is broken, the statistical analysis result is cleared, and statistics restart.
The pose analysis on the face image in step 102 specifically includes: measuring a first distance between a first key point and a second key point in the face image, measuring a second distance between a third key point and a fourth key point, and taking the ratio of the first distance to the second distance as a pose parameter; if the variation of the pose parameter of the face image relative to the pose parameter of the previous frame image is smaller than a preset pose variation threshold, no pose jump has occurred in the face image. For example, the first key point is the left nostril, the second key point is the left mouth corner, the third key point is the right nostril, and the fourth key point is the right mouth corner.
Before step 101, the method may further include: prompting, by voice or image, the person under test in front of the lens to change his or her head pose as required. The head pose change may be shaking the head left and right or nodding up and down.
Thus, the embodiment of the invention provides a face authentication method based on face detection, tracking, and pose analysis, which requires the user to turn the head left and right during authentication so that the head pose in the captured images changes. Meanwhile, face tracking confirms that the face whose pose changes is the same face; if two photographs with different poses are used for authentication, neither the face nor the pose can be tracked across the photograph switch. A real person can thereby be distinguished from a photograph.
Fig. 2 is a flowchart of the processing of a current frame according to an embodiment of the present invention. As shown in fig. 2, the processing includes:
step 201, inputting a current frame image;
step 202, performing face detection and tracking;
step 203, determining whether a face has been tracked; if so, executing step 204, otherwise executing step 206;
step 204, performing pose tracking;
step 205, determining whether the pose change is continuous; if so, executing step 207, otherwise executing step 206;
step 206, resetting the information of the integration module, since the face in the current frame image is a new target, and proceeding to step 208;
step 207, determining whether the face target in the current frame image has already passed authentication; if so, executing step 209, otherwise executing step 208;
step 208, performing pose classification and face authentication on the current frame image;
step 209, performing multi-frame authentication result integration;
step 210, outputting the authentication result.
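For illustration only, the fig. 2 flow can be summarized by the following Python sketch; the module interface (detect_and_track, pose_continuous, classify_and_authenticate) and the state object are hypothetical names introduced here to mirror steps 201 to 210, not part of the patent.

```python
def process_frame(frame, state, modules):
    """One pass of the fig. 2 flow (steps 201-210) over a single input frame."""
    face = modules.detect_and_track(frame)                  # steps 202-203
    if face is None or not modules.pose_continuous(face):   # steps 204-205
        state.reset()                                       # step 206: treat as a new target
    elif state.already_passed(face):                        # step 207: already authenticated
        state.integrate(face)                               # step 209
        return state.output()                               # step 210
    if face is not None:
        modules.classify_and_authenticate(face, state)      # step 208
        state.integrate(face)                               # step 209
    return state.output()                                   # step 210
```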
An embodiment of the present invention further provides a face authentication device. Fig. 3 is a structural diagram of the face authentication device provided in the embodiment of the present invention, which includes:
a face detection tracking module 301, configured to: perform face detection and tracking on each frame of image captured by a camera device to obtain a face image, wherein if the tracking result is that the face is tracked, the face and the corresponding face in the previous frame of image belong to the same person;
a pose analysis module 302, configured to: perform pose analysis on the face image to obtain pose information, wherein if the face pose in the pose information does not jump relative to the corresponding face pose in the previous frame of image, the pose analysis result is no jump;
a face authentication module 303, configured to: compare the face image with images in a designated database, wherein if the comparison result is the same class or the similarity is greater than a designated threshold, the face image is considered to belong to a person in the designated database and the authentication result of the current frame face image is pass; otherwise, the authentication result is fail;
a multi-frame authentication result integration module 304, configured to: perform statistical analysis on the multi-frame tracking results, the multi-frame pose analysis results, and the multi-frame face image authentication results, wherein if the statistical analysis result meets a preset condition, the identity authentication is passed.
The device may further include a prompt and authentication result output module, configured to: prompt, by voice or image, the person under test in front of the lens to change his or her head pose as required; and output the authentication result of the person under test by voice or image.
The device may further include a reset module, configured to: if the tracking result is that the face is not tracked, the face tracking of the current frame is broken, and the reset module clears the statistical analysis result of the multi-frame authentication result integration module and restarts its statistics; or if the pose analysis result is a jump, the pose tracking of the current frame is broken, and the reset module clears the statistical analysis result of the multi-frame authentication result integration module and restarts its statistics.
Hereinafter, each module is described in detail.
A face detection tracking module 301.
Face detection and tracking are mature technologies. Face detection mostly uses methods based on the Adaboost algorithm (an iterative algorithm whose core idea is to train different weak classifiers on the same training set and then combine the weak classifiers into a stronger final classifier); a face classifier is trained on a large number of face and non-face images. Face tracking uses the MeanShift algorithm (an iterative procedure: compute the shift mean of the current point, move the point to that mean, take it as the new starting point, and continue until a stopping condition is met), statistical-model-based methods, and the like; face tracking can follow the same face through a video. If the tracking algorithm performs well, a face detected in the current frame will be tracked in the next frame as long as it has not disappeared. If the faces in the previous and current frames are not linked by tracking, the faces in the video can no longer be considered the same face. Therefore, face tracking can determine whether the face in the current frame and the face detected in the previous frame are the same face. If the current frame is tracked, the face in the current frame and the face in the previous frame are the same face; if the face already passed authentication in the previous frame, a pass signal is given directly and no further processing is performed. If the face did not pass authentication in the previous frame, possibly because there was not yet enough data to meet the pass requirement, the face is input to the pose analysis and face authentication modules and processing continues. If the current frame is not tracked, a new face has been detected in the current frame; the data of the previous face in the multi-frame authentication result integration module is cleared, and then the face is input to the pose analysis and face authentication modules for subsequent processing.
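As an illustration only, the following Python sketch performs per-frame detection with OpenCV's bundled Haar cascade (an Adaboost-style detector) and uses a simple overlap test as a stand-in for the MeanShift or statistical-model tracker described above; the function names, parameters, and thresholds are assumptions, not part of the patent.

```python
import cv2

# Adaboost-style face detector shipped with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(frame_bgr):
    """Return the largest detected face box (x, y, w, h), or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda box: box[2] * box[3])

def is_same_face(prev_box, cur_box, iou_threshold=0.3):
    """Treat the current face as tracked if it overlaps the previous box enough
    (a simplified continuity check standing in for a real tracker)."""
    if prev_box is None or cur_box is None:
        return False
    ax, ay, aw, ah = prev_box
    bx, by, bw, bh = cur_box
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return union > 0 and inter / union >= iou_threshold
```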
A pose analysis module 302.
The goal of pose analysis is to determine the angle of the current face relative to the camera: if the face rotates while being tracked, it is not a photograph. The pose of a single photograph is fixed, and even if several photographs with different poses are used, the face is not tracked across the photograph switch, so each newly appearing face still has only one pose. Here, several photographs means two or more.
Fig. 4 is a schematic view of the rotation angles of the head according to the embodiment of the present invention; the head can rotate about three axes. Rotation about the Z axis, i.e., in-plane rotation, does not change the relationship between the face imaging plane and the camera, so it cannot distinguish a photograph from a real person. Rotation about the X axis changes the angle of the nose and other features relative to the camera, but this vertical change of the nose is not easy to detect. Rotation about the Y axis makes the nose cast shadows and occlusions on either side, and the left and right sides of the face are also shadowed and occluded; the change to the face is large, so this direction of change is chosen to distinguish a real person from a photograph.
Pose analysis can be obtained by training the relationship between pose and imaging. Pose changes are continuous: the head rotates about its central axis through a continuous range of angles, so the camera images the face at a continuously varying angle relative to the frontal direction. In this case the poses can simply be divided into three classes: front, left, and right.
When the pose changes, the facial structure changes considerably, which makes face detection and tracking harder and also degrades face authentication performance. To ensure that the face can still be tracked, the pose change must not be too large; limit lines on both sides may be used to bound the head rotation amplitude of the person to be authenticated.
For the application environment of the invention, the exact angle of the face relative to the camera need not be determined; only the degree of pose change between different frames is needed. The invention therefore provides a simple and effective representation of pose change, using the relative positions of several facial key points. Candidate key points are the eyeballs, eye corners, nose tip, nostrils, and mouth corners. The eyeballs can rotate and their positions change greatly, localization of the eyeballs and eye corners is affected by glasses, and the nose tip is hard to localize in non-frontal images; the nostrils and mouth corners can be detected reliably, although sometimes only one mouth corner or one nostril is visible. Here, the face pose is represented by the nostrils and mouth corners and their relative positions, as follows: define the distance between the left nostril and the left mouth corner as a, and the distance between the right nostril and the right mouth corner as b. The ratio of a to b changes when the head rotates, but does not change when a photograph is rotated left and right. The pose and the left and right directions are defined in the figures.
A simple pose classification can be made from the ratio of a to b, namely:
a/b < 1: left;
a/b = 1: front;
a/b > 1: right.
Other similar rules may also be defined from the key points, or a pose classifier may be used. First, poses are divided into three classes: front, left, and right; pitch is not used here because a pitch change can be realized with photographs. Then a large number of face samples are selected for each pose; sample selection must consider the application environment, for example the face tracking requirement, so the selected samples must be trackable. Features are then extracted from the samples, such as Gabor features, ASM (active shape model) features, or LBP (local binary pattern) features; feature selection may be performed, for example with the Adaboost method; and a pose classifier is trained, commonly an SVM (support vector machine) classifier. With the pose classifier, a given face image can be classified by pose.
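A minimal sketch of such a pose classifier, assuming feature vectors (for example LBP or Gabor responses) have already been extracted for each training face; the scikit-learn API usage and the three pose labels are illustrative choices rather than the patent's own implementation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

POSES = ("left", "front", "right")

def train_pose_classifier(features, labels):
    """features: (n_samples, n_dims) array of face descriptors;
    labels: pose index per sample (0=left, 1=front, 2=right)."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(np.asarray(features), np.asarray(labels))
    return clf

def classify_pose(clf, feature_vector):
    """Predict the pose class of a single face descriptor."""
    return POSES[int(clf.predict(np.asarray(feature_vector).reshape(1, -1))[0])]
```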
Since head rotation is continuous, the change in the ratio a/b should follow a certain rule: the difference between the ratios of consecutive frames should not be too large, which is ensured by maintaining a sufficient frame rate. If the difference between the ratios of two consecutive frames exceeds a threshold, a pose jump is considered to have occurred, i.e., the pose is not tracked and the two frames do not belong to the same face.
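The key-point pose parameter and the jump test can be sketched as follows, assuming the key-point coordinates come from some landmark detector; the tolerance band and the jump threshold are illustrative values, not values specified by the patent.

```python
import math

def pose_parameter(left_nostril, left_mouth, right_nostril, right_mouth):
    """Ratio a/b, with a = left nostril to left mouth corner distance and
    b = right nostril to right mouth corner distance."""
    a = math.dist(left_nostril, left_mouth)
    b = math.dist(right_nostril, right_mouth)
    return a / b

def classify_pose_from_ratio(ratio, tolerance=0.1):
    # A ratio close to 1 is treated as frontal; the tolerance band is an assumption.
    if ratio < 1.0 - tolerance:
        return "left"
    if ratio > 1.0 + tolerance:
        return "right"
    return "front"

def pose_jumped(prev_ratio, cur_ratio, jump_threshold=0.25):
    """A jump means the pose is not tracked: the two frames are not the same face."""
    return abs(cur_ratio - prev_ratio) >= jump_threshold
```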
A face authentication module 303.
Face authentication compares the input face image with the sample corresponding to the identity selected by the user; if they are of the same class the authentication passes, otherwise it fails. The user may select the identity by swiping a card, clicking with a mouse, and the like (or no identity is selected, in which case the face to be recognized is compared with all entries in the database, and authentication passes if any comparison passes). The face authentication process first extracts features, then feeds the feature vector into a classifier, and decides whether authentication passes from the classifier output. Common features include PCA dimension-reduced features, Gabor features, LBP features, histogram features, and the like; feature selection may be applied after extraction if necessary. Common classifiers include boosting classifiers, SVM classifiers, Bayesian classifiers, intra/inter-class classifiers, and the like. Through face authentication, it can be determined whether the current face image passes authentication.
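As a hedged illustration of the per-frame comparison, the sketch below scores an input descriptor against enrolled descriptors by cosine similarity and applies a pass threshold; the descriptor source, the similarity measure, and the threshold value are assumptions standing in for whichever features and classifier an actual system uses.

```python
import numpy as np

def cosine_similarity(u, v):
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def authenticate_frame(face_descriptor, enrolled_descriptors, threshold=0.8):
    """Return True if the face matches any enrolled descriptor above the threshold
    (the 'no identity selected' case); with a selected identity, check that entry only."""
    return any(cosine_similarity(face_descriptor, ref) >= threshold
               for ref in enrolled_descriptors)
```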
A multi-frame authentication result integration module 304.
The multi-frame authentication result integration module is one of the main contributions of the invention and a main factor affecting the authentication effect. Two possible strategies are proposed here:
Strategy one: authentication passes when the following conditions are satisfied, where th1, th2, and th3 are preset thresholds that may be set from experience and the actual application environment. If the face authentication algorithm performs well, the required number of tracked frames can be small, the required ratio of authenticated frames to tracked frames can be high, and the requirement on pose change can be high.
the number of tracked frames is greater than th1;
the number of authenticated frames divided by the number of tracked frames is greater than th2;
the pose change is greater than or equal to th3.
Here, the number of tracked frames refers to the number of frames tracked by both face tracking and pose tracking.
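A minimal sketch of strategy one, assuming the per-frame results have already been accumulated; the concrete values of th1, th2, and th3 and the use of the a/b ratio range as the pose-change measure are assumptions.

```python
def strategy_one_passes(tracked_frames, authenticated_frames, pose_ratios,
                        th1=15, th2=0.6, th3=0.3):
    """tracked_frames: count of frames where both face and pose tracking held;
    authenticated_frames: count of those frames whose face authentication passed;
    pose_ratios: the pose parameter (a/b) recorded for each tracked frame."""
    if tracked_frames <= th1 or not pose_ratios:
        return False
    if authenticated_frames / tracked_frames <= th2:
        return False
    pose_change = max(pose_ratios) - min(pose_ratios)
    return pose_change >= th3
```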
Strategy two uses prompted sample collection. Fig. 5 is a schematic diagram of authentication sample collection according to the embodiment of the present invention. As shown in fig. 5, if the samples of frames 1 to n satisfy the requirement of pose 1 and pass authentication, and the samples of frames n+1 to m satisfy the requirement of pose 2 and pass authentication, the face authentication passes. At least two poses are required, for example a frontal face and a right or left side face. Prompting may be done by voice, by an on-screen image, and the like; sample collection may also proceed in left-middle-right order. A sketch of this check is given below.
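Strategy two can be sketched as a check over the two prompted segments; the segment boundaries n and m follow the prompting sequence of fig. 5, and the record format and pose labels are assumptions.

```python
def strategy_two_passes(frame_records, n, m):
    """frame_records: per-frame (pose_label, authenticated) tuples in capture order.
    Frames 1..n must hold pose 1 and pass; frames n+1..m must hold pose 2 and pass."""
    first, second = frame_records[:n], frame_records[n:m]
    return (len(first) > 0 and len(second) > 0 and
            all(pose == "pose1" and ok for pose, ok in first) and
            all(pose == "pose2" and ok for pose, ok in second))
```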
Strategy one and strategy two each have advantages and disadvantages and can be chosen according to the authentication scenario. Strategy one needs no prompting: the person to be authenticated behaves naturally and the pose changes naturally; its recognition module may authenticate face images in only one pose (for example, only frontal faces) or may support multiple poses, but it needs more frames to guarantee authentication performance, and because there is no prompt, a user who does not know that the pose must change may fail to pass. Strategy two requires prompting and a classifier that supports multiple poses; because of the prompt, the user can cooperate more easily, authentication is passed on faces in two or more poses, and the false recognition rate is further reduced.
After the authentication passes, the authentication result is output.
The authentication result output module can output the authentication result by voice, voice prompt, image prompt, and the like.
From the above, the embodiments of the present invention have the following advantages:
1) By combining face tracking, pose analysis, and face authentication, the invention can improve authentication performance and prevent an impostor from passing authentication with a photograph.
2) The embodiment of the invention performs no three-dimensional analysis; it only analyzes, from a two-dimensional perspective, the differences among face images captured at different angles, and it can also determine the change in the angle of the head relative to the camera device.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications should also be regarded as falling within the protection scope of the present invention.
Claims (9)
1. A face-based identity authentication method, characterized by comprising:
performing face detection on each frame of image captured by a camera device to obtain a face image, and tracking the face image to obtain a tracking result, wherein if the tracking result is that the face is tracked, the face and the corresponding face in the previous frame of image belong to the same person;
performing pose analysis on the face image to obtain pose information, wherein if the face pose in the pose information does not jump relative to the corresponding face pose in the previous frame of image, the pose analysis result is no jump;
comparing the face image with images in a designated database, wherein if the comparison result is the same class or the similarity is greater than a designated threshold, the face image is considered to belong to a person in the designated database and the authentication result of the current frame face image is pass; otherwise, the authentication result is fail;
performing statistical analysis on the multi-frame tracking results, the multi-frame pose analysis results, and the multi-frame face image authentication results, wherein if the statistical analysis result meets preset conditions, the identity authentication is passed; the preset conditions being: the number of consecutive frames in which the tracking result is that the face is tracked and the pose analysis result is no jump is greater than a first threshold; the ratio of the number of frames whose face image authentication result is pass to the number of consecutive frames in which the face is tracked and the pose analysis result is no jump is greater than a second threshold; and the variation of the face pose over the consecutive frames in which the face is tracked and the pose analysis result is no jump is greater than a third threshold.
2. The identity authentication method according to claim 1, wherein the preset conditions are:
the face poses in the 1st to nth frames of face images whose pose analysis result is no jump conform to a first pose range; and,
the face poses in the (n+k)th to (n+k+m)th frames of face images whose pose analysis result is no jump conform to a second pose range; wherein n and m are integers greater than 1, and k is an integer greater than or equal to 1.
3. The identity authentication method according to claim 1 or 2, wherein before the step of performing face detection and tracking on each frame of image captured by the camera device, the method further comprises: prompting, by voice or image, the person under test in front of the lens to change his or her head pose as required.
4. The identity authentication method of claim 1, further comprising: if the tracking result is that the face is not tracked, the face tracking of the current frame is broken, the statistical analysis result is cleared, and statistics restart; or if the pose analysis result is a jump, the pose tracking of the current frame is broken, the statistical analysis result is cleared, and statistics restart.
5. The identity authentication method according to claim 1, wherein the step of performing pose analysis on the face image specifically comprises: measuring a first distance between a first key point and a second key point in the face image, measuring a second distance between a third key point and a fourth key point, and taking the ratio of the first distance to the second distance as a pose parameter; if the variation of the pose parameter of the face image relative to the pose parameter of the previous frame image is smaller than a preset pose variation threshold, no pose jump has occurred in the face image.
6. The identity authentication method of claim 5, wherein the first key point is the left nostril, the second key point is the left mouth corner, the third key point is the right nostril, and the fourth key point is the right mouth corner.
7. A face-based identity authentication device, comprising:
a face detection tracking module, configured to: perform face detection on each frame of image captured by a camera device to obtain a face image, and track the face image to obtain a tracking result, wherein if the tracking result is that the face is tracked, the face and the corresponding face in the previous frame of image belong to the same person;
a pose analysis module, configured to: perform pose analysis on the face image to obtain pose information, wherein if the face pose in the pose information does not jump relative to the corresponding face pose in the previous frame of image, the pose analysis result is no jump;
a face authentication module, configured to: compare the face image with images in a designated database, wherein if the comparison result is the same class or the similarity is greater than a designated threshold, the face image is considered to belong to a person in the designated database and the authentication result of the current frame face image is pass; otherwise, the authentication result is fail;
a multi-frame authentication result integration module, configured to: perform statistical analysis on the multi-frame tracking results, the multi-frame pose analysis results, and the multi-frame face image authentication results, wherein if the statistical analysis result meets preset conditions, the identity authentication is passed; the preset conditions being: the number of consecutive frames in which the tracking result is that the face is tracked and the pose analysis result is no jump is greater than a first threshold; the ratio of the number of frames whose face image authentication result is pass to the number of consecutive frames in which the face is tracked and the pose analysis result is no jump is greater than a second threshold; and the variation of the face pose over the consecutive frames in which the face is tracked and the pose analysis result is no jump is greater than a third threshold; or, the preset conditions being: the face poses in the 1st to nth frames of face images whose pose analysis result is no jump conform to a first pose range; and the face poses in the (n+k)th to (n+k+m)th frames of face images whose pose analysis result is no jump conform to a second pose range; wherein n and m are integers greater than 1, and k is an integer greater than or equal to 1.
8. The identity authentication device of claim 7, further comprising:
a prompt and authentication result output module, configured to: prompt, by voice or image, the person under test in front of the lens to change his or her head pose as required; and output the authentication result of the person under test by voice or image.
9. The identity authentication device of claim 7, further comprising a reset module configured to:
if the tracking result is that the face is not tracked, the face tracking of the current frame is broken, and the reset module clears the statistical analysis result of the multi-frame authentication result integration module and restarts its statistics; or if the pose analysis result is a jump, the pose tracking of the current frame is broken, and the reset module clears the statistical analysis result of the multi-frame authentication result integration module and restarts its statistics.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201010254201.6A CN102375970B (en) | 2010-08-13 | 2010-08-13 | Face-based identity authentication method and authentication device
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201010254201.6A CN102375970B (en) | 2010-08-13 | 2010-08-13 | Face-based identity authentication method and authentication device
Publications (2)
Publication Number | Publication Date |
---|---|
CN102375970A CN102375970A (en) | 2012-03-14 |
CN102375970B true CN102375970B (en) | 2016-03-30 |
Family
ID=45794557
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201010254201.6A Active CN102375970B (en) | Face-based identity authentication method and authentication device
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102375970B (en) |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102779274B (en) * | 2012-07-19 | 2015-02-25 | 冠捷显示科技(厦门)有限公司 | Intelligent television face recognition method based on binocular camera |
CN103020607B (en) * | 2012-12-27 | 2017-05-03 | Tcl集团股份有限公司 | Face recognition method and face recognition device |
CN104348619B (en) * | 2013-07-31 | 2018-12-14 | 联想(北京)有限公司 | Verify the method and terminal device of identity |
CN103500330B (en) * | 2013-10-23 | 2017-05-17 | 中科唯实科技(北京)有限公司 | Semi-supervised human detection method based on multi-sensor and multi-feature fusion |
CN103605969B (en) * | 2013-11-28 | 2018-10-09 | Tcl集团股份有限公司 | A kind of method and device of face typing |
TWI557004B (en) * | 2014-01-10 | 2016-11-11 | Utechzone Co Ltd | Identity authentication system and its method |
CN115457664A (en) * | 2015-01-19 | 2022-12-09 | 创新先进技术有限公司 | Living body face detection method and device |
CN105989338A (en) * | 2015-02-13 | 2016-10-05 | 多媒体影像解决方案有限公司 | Face recognition method and system thereof |
CN104794458A (en) * | 2015-05-07 | 2015-07-22 | 北京丰华联合科技有限公司 | Fuzzy video person identifying method |
CN106203242B (en) * | 2015-05-07 | 2019-12-24 | 阿里巴巴集团控股有限公司 | Similar image identification method and equipment |
CN105407098A (en) * | 2015-11-26 | 2016-03-16 | 小米科技有限责任公司 | Identity verification method and device |
CN106897658B (en) * | 2015-12-18 | 2021-12-14 | 腾讯科技(深圳)有限公司 | Method and device for identifying living body of human face |
CN108021846A (en) * | 2016-11-01 | 2018-05-11 | 杭州海康威视数字技术股份有限公司 | A kind of face identification method and device |
CN106778198A (en) * | 2016-11-23 | 2017-05-31 | 北京小米移动软件有限公司 | Perform the safety certifying method and device of operation |
CN106682591B (en) * | 2016-12-08 | 2020-04-07 | 广州视源电子科技股份有限公司 | Face recognition method and device |
CN107748869B (en) * | 2017-10-26 | 2021-01-22 | 奥比中光科技集团股份有限公司 | 3D face identity authentication method and device |
CN107633165B (en) * | 2017-10-26 | 2021-11-19 | 奥比中光科技集团股份有限公司 | 3D face identity authentication method and device |
CN107609383B (en) * | 2017-10-26 | 2021-01-26 | 奥比中光科技集团股份有限公司 | 3D face identity authentication method and device |
WO2019205009A1 (en) | 2018-04-25 | 2019-10-31 | Beijing Didi Infinity Technology And Development Co., Ltd. | Systems and methods for identifying a body motion |
CN109034130A (en) * | 2018-08-31 | 2018-12-18 | 深圳市研本品牌设计有限公司 | A kind of unmanned plane and storage medium for news tracking |
CN108921145A (en) * | 2018-08-31 | 2018-11-30 | 深圳市研本品牌设计有限公司 | Based on hot spot character news method for tracing and system |
CN111091028A (en) * | 2018-10-23 | 2020-05-01 | 北京嘀嘀无限科技发展有限公司 | Method and device for recognizing shaking motion and storage medium |
CN110008673B (en) * | 2019-03-06 | 2022-02-18 | 创新先进技术有限公司 | Identity authentication method and device based on face recognition |
CN111783677B (en) * | 2020-07-03 | 2023-12-01 | 北京字节跳动网络技术有限公司 | Face recognition method, device, server and computer readable medium |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1293446C (en) * | 2005-06-02 | 2007-01-03 | 北京中星微电子有限公司 | Non-contact type visual control operation system and method |
US8682029B2 (en) * | 2007-12-14 | 2014-03-25 | Flashfoto, Inc. | Rule-based segmentation for objects with frontal view in color images |
CN100592322C (en) * | 2008-01-04 | 2010-02-24 | 浙江大学 | Computer Automatic Discrimination Method of Photographic Face and Live Human Face |
CN101710383B (en) * | 2009-10-26 | 2015-06-10 | 北京中星微电子有限公司 | Method and device for identity authentication |
CN101770613A (en) * | 2010-01-19 | 2010-07-07 | 北京智慧眼科技发展有限公司 | Social insurance identity authentication method based on face recognition and living body detection |
2010-08-13: Application CN201010254201.6A filed in China (CN); granted as patent CN102375970B, status Active.
Also Published As
Publication number | Publication date |
---|---|
CN102375970A (en) | 2012-03-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102375970B (en) | Face-based identity authentication method and authentication device | |
Shao et al. | Deep convolutional dynamic texture learning with adaptive channel-discriminability for 3D mask face anti-spoofing | |
CN106557726B (en) | Face identity authentication system with silent type living body detection and method thereof | |
CN107346422B (en) | Living body face recognition method based on blink detection | |
Kähm et al. | 2d face liveness detection: An overview | |
Chakraborty et al. | An overview of face liveness detection | |
CN102004899B (en) | Human face identifying system and method | |
Zhang et al. | Fast and robust occluded face detection in ATM surveillance | |
CN102385703B (en) | A kind of identity identifying method based on face and system | |
CN105740779B (en) | Method and device for detecting living human face | |
WO2015149534A1 (en) | Gabor binary pattern-based face recognition method and device | |
JP2008146539A (en) | Face authentication device | |
KR20160066380A (en) | Method and apparatus for registering face, method and apparatus for recognizing face | |
CN105574509B (en) | A kind of face identification system replay attack detection method and application based on illumination | |
JP6071002B2 (en) | Reliability acquisition device, reliability acquisition method, and reliability acquisition program | |
CN107480586B (en) | Detection method of biometric photo counterfeiting attack based on facial feature point displacement | |
CN103593648B (en) | Face recognition method for open environment | |
CN105138967B (en) | Biopsy method and device based on human eye area active state | |
CN105512618A (en) | Video tracking method | |
WO2013075295A1 (en) | Clothing identification method and system for low-resolution video | |
Paul et al. | Extraction of facial feature points using cumulative histogram | |
CN112766065A (en) | Mobile terminal examinee identity authentication method, device, terminal and storage medium | |
US20250029425A1 (en) | Live human face detection method and apparatus, computer device, and storage medium | |
Nikitin et al. | Face anti-spoofing with joint spoofing medium detection and eye blinking analysis | |
Sutoyo et al. | Unlock screen application design using face expression on android smartphone |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C41 | Transfer of patent application or patent right or utility model | ||
TR01 | Transfer of patent right |
Effective date of registration: 20160517 Address after: 519031 Guangdong city of Zhuhai province Hengqin Baohua Road No. 6, room 105 -478 Patentee after: GUANGDONG ZHONGXING ELECTRONICS CO., LTD. Address before: 100083, Haidian District, Xueyuan Road, Beijing No. 35, Nanjing Ning building, 15 Floor Patentee before: Beijing Vimicro Corporation |