CN117037241A - Face living body detection method based on light supplementing lamp control - Google Patents
- Publication number
- CN117037241A (application number CN202310899604.3A)
- Authority
- CN
- China
- Prior art keywords
- face
- sequence
- living body
- brightness
- color
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V40/168: Human faces; feature extraction; face representation
- G06N3/0464: Neural-network architectures; convolutional networks [CNN, ConvNet]
- G06V10/141: Image acquisition; control of illumination
- G06V10/143: Image acquisition; sensing or illuminating at different wavelengths
- G06V10/56: Extraction of image or video features relating to colour
- G06V10/82: Image or video recognition using neural networks
- G06V40/172: Human faces; classification, e.g. identification
- G06V40/45: Spoof detection; detection of the body part being alive
Abstract
The application discloses a face living body detection method based on light supplementing lamp control, relating to the technical field of face detection and comprising the following steps: S1, acquiring a face image with a face camera; S2, performing color amplification on the face detection area and calculating a face color sequence S_A; S3, recording the brightness sequence of the light supplementing lamp changes; S4, after the light supplementing lamp compensation, obtaining the face color sequence S_B with the method of S1; S5, taking the difference between the face color sequences S_A and S_B to obtain a face living body characteristic value sequence; S6, establishing a face living body judgment model; S7, raising an alarm when the model judges a non-living face. The application effectively solves the problem that motion amplification alone cannot judge whether a face played back as video is a living body, and improves the effect of face living body detection.
Description
Technical Field
The application belongs to the technical field of face detection, and particularly relates to a face living body detection method based on light supplementing lamp control.
Background
Existing living body detection methods mainly comprise depth-camera living body detection, active-cooperation face living body authentication, pupil-change judgment using a light source, and face motion-amplification or color-amplification living body detection. The advantages and disadvantages of these conventional face living body detection schemes are described below.
1) Depth camera-based human face living body detection:
This approach performs face living body verification with various types of depth cameras, such as binocular depth cameras, TOF depth cameras and structured-light depth cameras. Examples include CN202211490094.6 (face detection method, device, equipment and storage medium based on a depth camera), CN201810812209.6 (fast driver identity authentication method and system) and CN202211532984.9 (binocular face living body detection method, device, equipment and storage medium). Its advantages are strong anti-counterfeiting performance, living body verification without any cooperating action, and broad applicability, since it works in the dark or under insufficient light; its disadvantages are high cost and the need for special 3D depth imaging equipment.
2) Face in-vivo authentication based on active coordination:
the method is guided by interaction, and the user cooperates with the instruction action of the corresponding prompt, so that the aim of living body authentication is fulfilled. Such as CN202310206478.9, a reading-based face biopsy method, CN202211153492.9, a face biopsy method, apparatus, electronic device, storage medium, etc. The method has the advantages that whether the real person interaction operation is realized can be accurately judged through the active coordination of the random instruction; the method has the defects that the flow is complicated, and the method is difficult to apply to application scenes of face access control.
3) Living body detection of pupil change by light source:
This approach judges whether a face is a living body from the change in pupil size caused by a change in the external light source, as in CN201810086129.7 (a method and equipment for detecting a living face using the pupil diameter). Stimulating the pupil with an external light source can effectively determine whether a living face is present, but the imaging requirements are high, since the pupil size must be clearly detectable, and reflections from sunglasses or glasses noticeably affect the detection.
4) Color amplification or motion amplification based human face living body detection:
This approach analyzes the micro-motion of the face region through motion amplification, or judges the living state through color amplification by analyzing the change in light absorption of facial blood and measuring the heart rate or the change in facial blood volume. Examples include CN202111404746.5 (a face image living body detection method, device, computer equipment and storage medium), CN202111147201.0 (a non-contact heart rate detection device based on multiple filtering and mixed amplification), CN201910711588.4 (a pedestrian face anti-fraud method) and CN202110535959.5 (a non-contact mental stress evaluation system). Its advantage is that face living body judgment can be realized with the existing access-control camera, without additional equipment. Its defect is that, although it resolves static picture attacks well, video played on a mobile phone or tablet still contains the tiny facial motion or color change in the footage captured by the access-control camera, which impairs the living body judgment.
Therefore, providing a face living body detection method based on light supplementing lamp control that overcomes these difficulties of the prior art is a problem to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, and to overcome the defects of the prior art, the present application provides a face living body detection method based on light supplementing lamp control, implemented by the following scheme.
The method establishes a relation model between the brightness change of the light supplementing lamp and the face living body characteristic value: face detection is performed on the image acquired by the face camera, color amplification is performed on the face detection area to obtain a face color sequence S_A, and the brightness sequence of the light supplementing lamp changes is recorded; after the light supplementing lamp compensation, the face color sequence S_B is obtained with the method of S1; the difference between S_A and S_B yields a face living body characteristic value sequence; finally, a face living body judgment model is established, and an alarm is raised when the model judges a non-living face.
In order to achieve the above purpose, the present application adopts the following technical scheme:
A face living body detection method based on light supplementing lamp control comprises the following steps:
S1, acquiring a face image with a face camera;
S2, performing color amplification on the face detection area in the face image, and calculating the face blood volume change and a face color sequence S_A;
S3, controlling the light supplementing lamp according to the face blood volume change obtained by face color amplification, recording the brightness sequence of the light supplementing lamp changes, and obtaining a face image after light supplementing;
S4, performing color amplification on the face detection area in the light-supplemented face image of S3, and calculating the face blood volume change after light supplementing and a light-supplemented face color sequence S_B;
S5, taking the difference between the face color sequence S_A and the light-supplemented face color sequence S_B to obtain a face living body characteristic value sequence S;
S6, establishing a face living body judgment model using the brightness change sequence of the light supplementing lamp and the face living body characteristic value sequence S;
S7, raising an alarm when the judgment model judges that the current face image is a non-living face.
In the above method, optionally, the algorithm flow of face color amplification in S2 is as follows:
S201, regarding the face image sequence as a four-dimensional signal I(x, y, c, t), where the 3rd dimension is color and the 4th dimension is time;
S202, performing Laplacian pyramid decomposition on each frame in the sequence, where each pyramid layer represents a different spatial frequency of the original image;
S203, performing band-pass filtering on each pixel in each layer of the pyramid;
S204, multiplying each layer of filtered signals by a specific amplification factor, and adding the result to the original signals before the filtering to obtain a new pyramid;
S205, synthesizing the layers of the new pyramid to obtain the final color-amplified face sequence image.
In the above method, optionally, the face color sequence S_A is extracted as follows:
For the face image obtained through color amplification, let H(i, j) be the brightness of the color-amplified face image at pixel (i, j), with i the image abscissa, j the image ordinate, w the image width and h the image height. The average brightness Y of the color-amplified face image is calculated as:
Y = (1 / (w·h)) · Σ_{i=1..w} Σ_{j=1..h} H(i, j)
Within the sequence time t, the face color sequence S_A is:
S_A = {Y_1, Y_2, ···, Y_t}.
In the above method, optionally, the brightness change sequence of S3 is calculated as follows:
For the face color sequence S_A, let the minimum face brightness in the sequence be Y_min and the maximum be Y_max, and let the brightness adjustment range of the light supplementing lamp be K. For the face brightness Y_t at any time t in S_A, the corresponding desired adjustment value L_t of the light supplementing lamp brightness (Y_t normalized into the adjustment range K) is:
L_t = K · (Y_t − Y_min) / (Y_max − Y_min)
The brightness change sequence L of the light supplementing lamp is then:
L = {L_1, L_2, ···, L_t}.
In the above method, optionally, the light supplementing lamp adjustment and compensation method is as follows:
The adjusted target brightness of the light supplementing lamp is the brightness before adjustment plus the desired adjustment value L_t. Once the face color sequence S_A of time t has been collected and the desired brightness adjustment values have been calculated, brightness adjustment of the light supplementing lamp is started; taking the lamp brightness before starting as the reference, the lamp brightness is adjusted over the subsequent time t according to the brightness change sequence L.
In the above method, optionally, the face color sequence S_B of S4 is extracted as follows:
After the light supplementing lamp adjustment has been started, within the same time t, the face color sequence S_B after light supplementing compensation is obtained with the same calculation method as for S_A.
In the above method, optionally, the face living body characteristic value sequence of S5 is calculated as follows:
After the face color sequence S_B has been computed, subtract S_B from the face color sequence S_A to obtain the face living body characteristic value sequence S, i.e.
S = S_A − S_B.
In the above method, optionally, the face living body judgment model of S6 is constructed as follows:
A one-dimensional convolutional neural network is adopted, with input L the brightness change sequence of the light supplementing lamp and input S the face living body characteristic value sequence. The two inputs pass through independent one-dimensional convolutions, are added at the Dropout layer, and after fusion a softmax layer performs the classification judgment. Once the brightness change sequence L and the face living body characteristic value sequence S have been collected, they are sent to the face living body judgment model for classification: when the model judges the face to be living, the system passes verification; when the model judges the face to be non-living, the system raises an alarm.
Compared with the prior art, the face living body detection method based on light supplementing lamp control provided by the application mainly has the following beneficial effects:
By establishing a relation model between the brightness change of the light supplementing lamp and the face living body characteristic value, the application effectively solves the problem that motion amplification alone cannot judge whether a face played back as video is a living body, and effectively improves the face living body detection effect. The method extracts how a living face responds to illumination of changing brightness; this characteristic differs markedly from a face played back as video, whose brightness merely follows the light supplementing lamp, so it has a clear effect on classifying living and non-living faces.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present application, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a workflow diagram of the face living body detection method based on light supplementing lamp control disclosed by the application;
FIG. 2 is a diagram of the one-dimensional convolutional neural network of the face living body detection method based on light supplementing lamp control according to the embodiment.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In the present disclosure, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between such entities or actions. The terms "comprise", "include", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
Referring to fig. 1, the application discloses a face living body detection method based on light supplementing lamp control, with the following specific working procedure:
S1, acquiring a face image with a face camera;
S2, performing color amplification on the face detection area in the face image, and calculating the face blood volume change and a face color sequence S_A;
S3, controlling the light supplementing lamp according to the face blood volume change obtained by face color amplification, recording the brightness sequence of the light supplementing lamp changes, and obtaining a face image after light supplementing;
S4, performing color amplification on the face detection area in the light-supplemented face image of S3, and calculating the face blood volume change after light supplementing and a light-supplemented face color sequence S_B;
S5, taking the difference between the face color sequence S_A and the light-supplemented face color sequence S_B to obtain a face living body characteristic value sequence S;
S6, establishing a face living body judgment model using the brightness change sequence of the light supplementing lamp and the face living body characteristic value sequence S;
S7, raising an alarm when the judgment model judges that the current face image is a non-living face.
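The S1–S7 flow above can be sketched as a small driver. Everything here is an illustrative stand-in, not the patent's implementation: per-frame mean brightness stands in for the color-amplified sequences, and `classify` is a hypothetical trained judgment model.

```python
import numpy as np

def detect_liveness(frames_a, frames_b, lamp_levels, classify):
    """Sketch of steps S1-S7.

    frames_a: face frames captured before lamp compensation (S1/S2),
    frames_b: face frames captured after lamp compensation (S4),
    lamp_levels: recorded lamp brightness sequence (S3),
    classify: trained judgment model (hypothetical callable, S6).
    """
    # S2/S4: per-frame mean brightness stands in for the color-amplified
    # face color sequences S_A and S_B.
    s_a = np.array([f.mean() for f in frames_a])
    s_b = np.array([f.mean() for f in frames_b])
    s = s_a - s_b                                     # S5: feature sequence
    is_living = classify(np.asarray(lamp_levels), s)  # S6: judgment model
    return "pass" if is_living else "alarm"           # S7: alarm if non-living
```

A real deployment would replace the mean-brightness placeholder with the color amplification and sequence extraction detailed in the following paragraphs.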
Further, the face color sequence S_A is extracted from the images acquired by the face camera:
For the video stream acquired by the face camera, face detection is performed first and the face area is located. Before the light supplementing lamp is adjusted, a face picture sequence of t seconds is acquired from the face region for extraction of the face color sequence S_A.
Further, color amplification is performed on the acquired face image:
In the acquired t-second face picture sequence, blood passes through the blood vessels with each heart beat; the larger the blood volume passing through the vessels, the more light the blood absorbs and the less light the skin surface reflects. Within the t-second window the face image therefore shows a slight color variation following the inflow of blood at each heart beat, but the difference is too small for the human eye to resolve. A color amplification algorithm can amplify this color change to a level the human eye can distinguish, which makes extraction of a clear face color sequence S_A easier. There are many color amplification methods; the color amplification algorithm of Eulerian video magnification is taken as the example here.
Furthermore, the algorithm flow of face color amplification in S2 is as follows:
S201, regarding the face image sequence as a four-dimensional signal I(x, y, c, t), where the 3rd dimension is color and the 4th dimension is time;
S202, performing Laplacian pyramid decomposition on each frame in the sequence, where each pyramid layer represents a different spatial frequency of the original image;
S203, performing band-pass filtering on each pixel in each layer of the pyramid;
S204, multiplying each layer of filtered signals by a specific amplification factor, and adding the result to the original signals before the filtering to obtain a new pyramid;
S205, synthesizing the layers of the new pyramid to obtain the final color-amplified face sequence image.
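A minimal, single-scale sketch of S201–S205: the full Laplacian pyramid (S202/S205) is omitted, and each pixel's time series is band-pass filtered directly with an ideal FFT filter over a typical heart-rate band, scaled, and added back. The band limits and the factor `alpha` are illustrative assumptions, not values from the patent.

```python
import numpy as np

def color_amplify(video, fs, lo=0.8, hi=3.0, alpha=50.0):
    """video: (t, h, w, c) float array; fs: frame rate in Hz.

    S203: band-pass each pixel's time series over lo..hi Hz.
    S204: multiply the filtered signal by alpha and add it back.
    """
    t = video.shape[0]
    freqs = np.fft.rfftfreq(t, d=1.0 / fs)   # temporal frequency axis
    spec = np.fft.rfft(video, axis=0)
    spec[(freqs < lo) | (freqs > hi)] = 0    # ideal band-pass filter
    band = np.fft.irfft(spec, n=t, axis=0)   # filtered color variation
    return video + alpha * band              # amplified sequence
```

For a 30 fps clip, a heartbeat near 1 Hz falls inside the pass band, so its color variation is magnified while slower lighting drift and faster noise are left untouched.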
Still further, the face color sequence S_A is extracted as follows:
For the face image obtained through color amplification, let H(i, j) be the brightness of the color-amplified face image at pixel (i, j), with i the image abscissa, j the image ordinate, w the image width and h the image height. The average brightness Y of the color-amplified face image is calculated as:
Y = (1 / (w·h)) · Σ_{i=1..w} Σ_{j=1..h} H(i, j)
Within the sequence time t, the face color sequence S_A is:
S_A = {Y_1, Y_2, ···, Y_t}.
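The average-brightness computation above, applied frame by frame, yields S_A; a direct sketch (one sample per frame, a simplifying assumption):

```python
import numpy as np

def face_color_sequence(frames):
    """frames: iterable of 2-D arrays, H[i, j] being the brightness of a
    color-amplified face image of width w and height h.

    Returns S_A = {Y_1, ..., Y_t}, where each Y is the average brightness,
    i.e. the sum of H(i, j) over all pixels divided by w*h.
    """
    return np.array([h.mean() for h in frames])  # mean == sum / (w * h)
```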
further, the calculation of the sequence of light supplement lamp control and brightness change:
in order to examine the living face and the non-living face, the face blood changes extracted by the color amplification of the face area are used for driving the light-compensating lamp to synchronously change the brightness: when the facial blood volume is large, the brightness of the light supplementing lamp is improved; when the amount of blood on the face is small, the brightness of the light supplementing lamp is reduced. For non-living human faces, the brightness change of the light supplementing lamp only affects the linear change of the brightness of the whole image; in the case of a real face, the light absorbed by blood changes with the change of the blood volume and the light intensity, so that the imaging effect after color amplification is different from that of a non-living face, and whether the face is a real face is determined based on the difference. In addition, the brightness change of the light supplementing lamp is synchronous with the change of the facial blood volume, so that the consistency of the color amplification calculation result and the trend of the light supplementing lamp before control is started can be maintained, and the calculation of the follow-up feature extraction is facilitated.
Further, before the light supplement lamp is not adjusted, an expected value of the light supplement lamp adjusted along with the facial blood change is calculated first.
Further, the brightness change sequence of S3 is calculated as follows:
For the face color sequence S_A, let the minimum face brightness in the sequence be Y_min and the maximum be Y_max, and let the brightness adjustment range of the light supplementing lamp be K. For the face brightness Y_t at any time t in S_A, the corresponding desired adjustment value L_t of the light supplementing lamp brightness (Y_t normalized into the adjustment range K) is:
L_t = K · (Y_t − Y_min) / (Y_max − Y_min)
The brightness change sequence L of the light supplementing lamp is then:
L = {L_1, L_2, ···, L_t}.
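Assuming L_t maps Y_t linearly onto the lamp's adjustment range K (a natural reading of the statistics above, since the original equation image is not preserved in this text), the brightness change sequence can be computed as:

```python
import numpy as np

def lamp_brightness_sequence(s_a, k):
    """Map each face brightness Y_t in S_A onto the lamp's adjustment
    range K: L_t = K * (Y_t - Y_min) / (Y_max - Y_min)."""
    s_a = np.asarray(s_a, dtype=float)
    y_min, y_max = s_a.min(), s_a.max()
    if y_max == y_min:                  # flat sequence: nothing to adjust
        return np.zeros_like(s_a)
    return k * (s_a - y_min) / (y_max - y_min)
```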
still further, the light supplementing lamp adjusting, controlling and compensating method comprises the following steps:
desired value L for adjusting brightness of light supplementing lamp t Adding the adjusted target brightness of the light supplement which is the brightness before the adjustment; face color sequence S at elapsed time t A And after calculating the expected value of the brightness adjustment of the light supplementing lamp, starting the brightness adjustment of the light supplementing lamp, taking the brightness of the light supplementing lamp before starting as a reference, and adjusting the brightness of the corresponding light supplementing lamp according to the brightness change sequence L of the light supplementing lamp in the subsequent time t.
Further, the face color sequence S_B of S4 is extracted as follows:
After the light supplementing lamp adjustment has been started, within the same time t, the face color sequence S_B after light supplementing compensation is obtained with the same calculation method as for S_A.
Further, the face living body characteristic value sequence of S5 is calculated as follows:
After the face color sequence S_B has been computed, subtract S_B from the face color sequence S_A to obtain the face living body characteristic value sequence S, i.e.
S = S_A − S_B.
Further, this characteristic value reflects the real absorption, by the face, of the light cast on it by the light supplementing lamp while the lamp changes periodically with the facial blood.
Referring to fig. 2, the face living body judgment model of S6 is constructed as follows:
A one-dimensional convolutional neural network is adopted, with input L the brightness change sequence of the light supplementing lamp and input S the face living body characteristic value sequence. The two inputs pass through independent one-dimensional convolutions, are added at the Dropout layer, and after fusion a softmax layer performs the classification judgment. Once the brightness change sequence L and the face living body characteristic value sequence S have been collected, they are sent to the face living body judgment model for classification: when the model judges the face to be living, the system passes verification; when the model judges the face to be non-living, the system raises an alarm.
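A minimal numpy forward pass matching the two-branch shape of Fig. 2: each input goes through its own one-dimensional convolution, the branches are added (Dropout acts as an identity at inference time), and a dense softmax layer yields the living/non-living probabilities. The kernel sizes, pooling step and weights here are illustrative placeholders, not the trained model.

```python
import numpy as np

def conv1d(x, w):
    """Valid 1-D convolution (cross-correlation) of sequence x with kernel w."""
    n = len(x) - len(w) + 1
    return np.array([np.dot(x[i:i + len(w)], w) for i in range(n)])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def judge(l_seq, s_seq, w_l, w_s, w_out):
    """Two-branch 1-D CNN forward pass.

    l_seq: lamp brightness change sequence L; s_seq: feature sequence S
    (same length); w_l, w_s: per-branch conv kernels (same length);
    w_out: dense weights, one per class.
    Returns [p_living, p_non_living].
    """
    a = np.maximum(conv1d(l_seq, w_l), 0)   # branch 1: conv + ReLU
    b = np.maximum(conv1d(s_seq, w_s), 0)   # branch 2: conv + ReLU
    merged = a + b                          # addition at the Dropout layer
    pooled = merged.mean()                  # global average pooling
    return softmax(w_out * pooled)          # dense + softmax (2 classes)
```

In practice the weights would be learned on labeled (L, S) pairs from living and non-living faces; this sketch only fixes the data flow of the architecture.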
In this specification, the embodiments are described in a progressive manner; identical and similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, system embodiments are described relatively briefly because they are substantially similar to the method embodiments; refer to the description of the method embodiments for the relevant parts. The systems and system embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement the application without undue effort.
Those skilled in the art will further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both.
To clearly illustrate this interchangeability of hardware and software, various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (8)
1. A face living body detection method based on fill-light control, characterized by comprising the following steps:
S1, acquiring a face image with a face camera;
S2, performing color amplification on the face detection area in the face image, and calculating the facial blood volume change and the face color sequence S_A;
S3, controlling the fill light according to the facial blood volume change obtained by face color amplification, recording the fill-light brightness change sequence, and obtaining the face image after light supplementation;
S4, performing color amplification on the face detection area in the supplemented face image of S3, and calculating the supplemented facial blood volume change and the supplemented face color sequence S_B;
S5, taking the difference between the face color sequence S_A and the supplemented face color sequence S_B to obtain the face living body feature value sequence S;
S6, establishing a face living body judgment model using the fill-light brightness change sequence and the face living body feature value sequence S;
S7, raising an alarm when the judgment model judges that the current face image is a non-living face.
2. The face living body detection method based on fill-light control according to claim 1, wherein
the face color amplification algorithm in S2 proceeds as follows:
S201, treating the face image sequence as a four-dimensional signal I(x, y, c, t), where the 3rd dimension is color and the 4th dimension is time;
S202, performing Laplacian pyramid decomposition on each frame in the sequence, where each pyramid level represents a different spatial frequency of the original image;
S203, band-pass filtering each pixel in each level of the pyramid;
S204, multiplying each filtered level by a specific amplification factor and adding it to the original signal before the frequency-domain filtering to obtain a new pyramid;
S205, synthesizing the levels of the new pyramid to obtain the final color-amplified face image sequence.
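The amplification in steps S201-S205 can be sketched numerically. This is a minimal illustration, not the patented implementation: it applies a temporal band-pass (an FFT frequency mask over the time axis) to one synthetic "pyramid level" instead of a full Laplacian pyramid, and the frame data, band limits, and amplification factor are chosen only for illustration.

```python
import numpy as np

def bandpass_amplify(frames, fps, f_lo, f_hi, alpha):
    """Band-pass each pixel over time and add the amplified band back
    to the original signal (one pyramid level of S203-S204)."""
    # frames: (T, H, W, C) float array; FFT along the time axis
    spectrum = np.fft.fft(frames, axis=0)
    freqs = np.fft.fftfreq(frames.shape[0], d=1.0 / fps)
    band = (np.abs(freqs) >= f_lo) & (np.abs(freqs) <= f_hi)
    filtered = np.fft.ifft(spectrum * band[:, None, None, None], axis=0).real
    # S204: multiply the filtered signal by an amplification factor
    # and add it to the original signal
    return frames + alpha * filtered

# Synthetic example: a faint 1.2 Hz "pulse" riding on a constant image
fps, T = 30, 90
t = np.arange(T) / fps
frames = 0.5 + 0.001 * np.sin(2 * np.pi * 1.2 * t)[:, None, None, None]
frames = np.broadcast_to(frames, (T, 4, 4, 3)).copy()

amplified = bandpass_amplify(frames, fps, f_lo=0.8, f_hi=2.0, alpha=50.0)
pulse_before = frames[:, 0, 0, 0].max() - frames[:, 0, 0, 0].min()
pulse_after = amplified[:, 0, 0, 0].max() - amplified[:, 0, 0, 0].min()
print(pulse_after > 10 * pulse_before)  # the pulse band is strongly amplified
```

Only the frequencies inside the pass band are boosted, so the invisible pulsation becomes large while the constant background is untouched; this is the property the method relies on to expose facial blood volume changes.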
3. The face living body detection method based on fill-light control according to claim 1, wherein
the extraction method of the face color sequence S_A comprises the following steps:
for the face image obtained through color amplification, let H(i, j) be the brightness of the color-amplified face image, i the image abscissa, j the image ordinate, w the image width, and h the image height; the average brightness Y of the color-amplified face image is calculated as:
Y = (1/(w*h)) * Σ_{i=1..w} Σ_{j=1..h} H(i, j)
Within the sequence time t, the face color sequence S_A is:
S_A = {Y_1, Y_2, ..., Y_t}.
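The S_A extraction in claim 3 amounts to averaging the brightness of every pixel in each color-amplified frame and collecting the per-frame means into a sequence. A minimal sketch, assuming the frames are already reduced to a single brightness channel H(i, j):

```python
import numpy as np

def face_color_sequence(frames):
    """Per-frame average brightness Y = (1/(w*h)) * sum of H(i, j),
    collected over the sequence time t into S_A = {Y_1, ..., Y_t}."""
    # frames: (t, h, w) array of brightness values H(i, j)
    return frames.mean(axis=(1, 2))

# Toy example: 5 frames of a 4x6 "face region" with rising brightness
frames = np.stack([np.full((4, 6), 10.0 * k) for k in range(1, 6)])
S_A = face_color_sequence(frames)
print(S_A)  # one mean brightness value Y_t per frame: 10, 20, 30, 40, 50
```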
4. The face living body detection method based on fill-light control according to claim 1, wherein
the S3 brightness change sequence is calculated as follows:
the face color sequence S_A is analyzed statistically: let Y_min be the minimum face brightness in the sequence, Y_max the maximum, and K the brightness adjustment range of the fill light; for the face brightness Y_t corresponding to any time t in the face color sequence S_A, the desired fill-light brightness adjustment value L_t is:
L_t = K * (Y_t - Y_min) / (Y_max - Y_min)
The fill-light brightness change sequence L is:
L = {L_1, L_2, ..., L_t}.
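Claim 4 maps each face brightness value Y_t into the fill light's adjustment range K. The patent's exact L_t formula is not legible in this text extraction, so the linear normalization below is an assumption, consistent only with the quantities the claim names (Y_min, Y_max, K):

```python
import numpy as np

def lamp_adjustment_sequence(S_A, K):
    """Desired fill-light brightness adjustment L_t for each face
    brightness Y_t in S_A, normalized into the adjustment range K.
    The linear mapping here is an assumption, not the patent's formula."""
    S_A = np.asarray(S_A, dtype=float)
    y_min, y_max = S_A.min(), S_A.max()
    return K * (S_A - y_min) / (y_max - y_min)

S_A = [10.0, 20.0, 30.0, 40.0, 50.0]   # face color sequence from claim 3
L = lamp_adjustment_sequence(S_A, K=100.0)
print(L)  # 0, 25, 50, 75, 100: spans the lamp's full adjustment range
```

Whatever the exact formula, the key design point survives: the lamp is driven by a signal derived from the measured facial pulsation, so a real face and a replayed face respond differently to it.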
5. The face living body detection method based on fill-light control according to claim 4, wherein
the fill-light adjustment control compensation method is as follows:
the desired fill-light brightness adjustment value L_t added to the brightness before adjustment gives the adjusted target fill-light brightness; once the face color sequence S_A over time t and the desired fill-light brightness adjustment values have been calculated, fill-light adjustment begins: taking the lamp brightness at the start as the reference, the fill-light brightness is adjusted over the subsequent time t according to the fill-light brightness change sequence L.
6. The face living body detection method based on fill-light control according to claim 1, wherein
the extraction method of the S4 face color sequence S_B is as follows:
after the fill light is turned on and adjusted, within the same time t, the fill-light-compensated face color sequence S_B is obtained by the same calculation method as the face color sequence S_A.
7. The face living body detection method based on fill-light control according to claim 1, wherein
the S5 face living body feature value sequence is calculated as follows:
after the face color sequence S_B has been calculated, the face color sequence S_B is subtracted from the face color sequence S_A to obtain the face living body feature value sequence S, i.e.
S = S_A - S_B.
8. The face living body detection method based on fill-light control according to claim 1, wherein
the S6 face living body judgment model is built as follows:
a one-dimensional convolutional neural network is adopted: input L is the fill-light brightness change sequence L, and input S is the face living body feature value sequence S; each input passes through its own one-dimensional convolution, the two branches are added at the Dropout layer, and after fusion a softmax layer makes the final classification decision; once the fill-light brightness change sequence L and the face living body feature value sequence S have been collected, they are fed into the face living body judgment model for classification: when the model judges the face to be a living body, the system passes verification; when it judges the face not to be a living body, the system raises an alarm.
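The judgment model in claim 8 can be sketched as a forward pass: two independent one-dimensional convolutions, elementwise addition of the branches (Dropout acts as identity at inference), and a softmax over the two classes. The weights, kernel size, and pooling below are illustrative assumptions, not the patent's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernel):
    """Valid 1-D convolution of sequence x with one kernel."""
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def liveness_forward(L_seq, S_seq, w_L, w_S, w_out):
    """Each input through its own 1-D conv, branches added together
    (the Dropout layer is identity at inference), then a softmax head."""
    h = conv1d(L_seq, w_L) + conv1d(S_seq, w_S)   # fuse branches by addition
    pooled = np.array([h.mean(), h.max()])        # crude global pooling
    return softmax(w_out @ pooled)                # P(live), P(not live)

t = 32
L_seq = rng.normal(size=t)   # fill-light brightness change sequence L
S_seq = rng.normal(size=t)   # face living body feature value sequence S
w_L = rng.normal(size=5)     # illustrative, untrained weights
w_S = rng.normal(size=5)
w_out = rng.normal(size=(2, 2))

probs = liveness_forward(L_seq, S_seq, w_L, w_S, w_out)
is_live = probs[0] > probs[1]
print(probs.sum())  # softmax output: the two class probabilities sum to 1
```

Feeding both L and S lets the classifier learn whether the measured facial response actually tracks the lamp stimulus, which is the discriminative signal between a living face and a replayed one.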
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310899604.3A CN117037241A (en) | 2023-07-20 | 2023-07-20 | Face living body detection method based on light supplementing lamp control |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117037241A true CN117037241A (en) | 2023-11-10 |
Family
ID=88636379
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310899604.3A Pending CN117037241A (en) | 2023-07-20 | 2023-07-20 | Face living body detection method based on light supplementing lamp control |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |