CN110781770A - Living body detection method, device and equipment based on face recognition - Google Patents
- Publication number
- CN110781770A (application CN201910947729.2A)
- Authority
- CN
- China
- Prior art keywords
- feature map
- living body
- face recognition
- preliminary
- detected
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a living body detection method based on face recognition, which comprises the following steps: acquiring an original picture of a user to be detected; segmenting the original picture by using a pyramid scene parsing network to obtain a preliminary feature map corresponding to the original picture; cutting the preliminary feature map by using a preset face detection frame to cut out a region feature map containing face features; calculating a confidence coefficient obtained after the preliminary feature map and the region feature map are fused; and determining the result of the living body detection according to the magnitude relation between the confidence coefficient and a preset confidence coefficient threshold value. The invention also discloses a living body detection device based on face recognition and a living body detection apparatus based on face recognition. By adopting the embodiments of the invention, spatio-temporal information can be fully utilized, the noise influence of a single picture is reduced, the fitting capability to actual conditions is high, and the recognition accuracy is high.
Description
Technical Field
The invention relates to the field of face recognition, in particular to a living body detection method, a living body detection device and living body detection equipment based on face recognition.
Background
With the wide application of face recognition technology, attacks using fake faces such as face photos, face videos, and three-dimensional masks have become common, and face living body detection has therefore received growing attention from industry and academia. Face living body detection is gradually becoming an indispensable link in face recognition systems. In terms of the type of image processed, the currently common face living body detection approach follows the traditional face recognition pipeline: living body textures are judged by means of an improved LBP (local binary pattern) scheme, the stability of living body detection is achieved by combining this scheme with a tracking technique, and matched embedded hardware equipment is provided.
However, traditional methods depend mainly on hand-crafted design, which is complicated, has limited effect, and fits actual conditions poorly, so there is a clear bottleneck in the range of situations they can handle. Moreover, schemes that discriminate on texture alone rely on a single type of feature, have a limited application range, and yield relatively low recognition accuracy.
Disclosure of Invention
The embodiments of the invention aim to provide a living body detection method, a living body detection device, and living body detection equipment based on face recognition, which can make full use of spatio-temporal information, reduce the noise influence of a single picture, and achieve a high fitting capability to actual conditions and high recognition accuracy.
In order to achieve the above object, an embodiment of the present invention provides a living body detection method based on face recognition, including:
acquiring an original picture of a user to be detected;
segmenting the original picture by utilizing a pyramid scene parsing network to obtain a preliminary feature map corresponding to the original picture;
cutting the preliminary feature map by using a preset face detection frame to cut out a regional feature map containing face features;
calculating a confidence coefficient obtained after the preliminary feature map and the region feature map are fused;
and determining the result of the living body detection according to the magnitude relation between the confidence coefficient and a preset confidence coefficient threshold value.
As an improvement of the above scheme, after the obtaining of the preliminary feature map corresponding to the original picture, the method further includes:
partitioning the preliminary feature map;
and performing fine-grained identification on the partitioned preliminary feature map.
As an improvement of the above scheme, after the obtaining of the original picture of the user to be detected, the method further includes:
cutting the original picture by using the face detection frame to cut a picture to be detected containing face characteristics;
3D reconstruction is carried out on the picture to be detected, and a depth map corresponding to the picture to be detected is obtained;
fusing the depth map and the region feature map to generate a fused feature map;
and sequencing the fusion feature maps according to a preset time sequence, and fusing the sequenced fusion feature maps by utilizing convolution to generate a space-time feature map.
As an improvement of the above scheme, the calculating of the confidence degree obtained by fusing the preliminary feature map and the region feature map specifically includes:
and calculating the confidence coefficient obtained by fusing the preliminary feature map, the region feature map and the space-time feature map.
As an improvement of the above scheme, the determining of the result of the living body detection according to a magnitude relationship between the confidence coefficient and a preset confidence coefficient threshold value specifically includes:
when the confidence coefficient is greater than or equal to a preset confidence coefficient threshold value, judging that the user to be detected is a living body;
and when the confidence coefficient is smaller than a preset confidence coefficient threshold value, judging that the user to be detected is a non-living body.
In order to achieve the above object, an embodiment of the present invention further provides a living body detection device based on face recognition, including:
the original picture acquiring unit is used for acquiring an original picture of a user to be detected;
a preliminary feature map generation unit, configured to segment the original picture by using a pyramid scene parsing network to obtain a preliminary feature map corresponding to the original picture;
the region feature map generating unit is used for cutting the preliminary feature map by using a preset face detection frame so as to cut out a region feature map containing face features;
the confidence coefficient calculation unit is used for calculating the confidence coefficient obtained after the preliminary feature map and the region feature map are fused;
and the living body judging unit is used for determining the result of the living body detection according to the magnitude relation between the confidence coefficient and a preset confidence coefficient threshold value.
As an improvement of the above, the apparatus further comprises:
and the fine-grained discrimination unit is used for partitioning the preliminary feature map and performing fine-grained identification on the partitioned preliminary feature map.
As an improvement of the above, the apparatus further comprises:
the picture generating unit to be detected is used for cutting the original picture by using the face detection frame so as to cut out a picture to be detected containing face characteristics;
the depth map generating unit is used for carrying out 3D reconstruction on the picture to be detected to obtain a depth map corresponding to the picture to be detected;
a fusion feature map generation unit, configured to fuse the depth map and the region feature map to generate a fusion feature map;
and the space-time feature map generating unit is used for sequencing the fusion feature maps according to a preset time sequence and fusing the sequenced fusion feature maps by convolution to generate the space-time feature map.
As an improvement of the above scheme, the confidence coefficient calculating unit is specifically configured to calculate a confidence coefficient obtained by fusing the preliminary feature map, the region feature map, and the spatio-temporal feature map.
In order to achieve the above object, an embodiment of the present invention further provides a face recognition-based living body detection device, which includes a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, and when the processor executes the computer program, the face recognition-based living body detection method according to any one of the above embodiments is implemented.
Compared with the prior art, the embodiments of the invention provide a living body detection method, a living body detection device and living body detection equipment based on face recognition. Firstly, 3D reconstruction and segmentation detection are simultaneously carried out on an original picture; then, the preliminary feature map obtained by image segmentation and the depth map obtained by 3D reconstruction are fused, and space-time feature fusion is subsequently performed to obtain a space-time feature map; meanwhile, the preliminary feature map obtained by image segmentation is cropped by the face detection frame and subjected to fine-grained identification to obtain a region feature map; finally, the space-time feature map, the region feature map and the preliminary feature map are fused to obtain the probability that the user to be detected is a living body. The method makes full use of spatio-temporal information, reduces the noise influence of a single picture, has high accuracy, does not need user cooperation, and can meet the requirements of practical application.
Drawings
Fig. 1 is a flowchart of a living body detection method based on face recognition according to an embodiment of the present invention;
FIG. 2 is a flow chart for generating spatiotemporal feature maps provided by an embodiment of the present invention;
FIG. 3 is a flowchart of another living body detection method based on face recognition according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a living body detecting device based on face recognition according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a living body detection device based on face recognition according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a living body detection method based on face recognition according to an embodiment of the present invention; the living body detection method based on the face recognition comprises the following steps:
s11, acquiring an original picture of a user to be detected;
s12, segmenting the original picture by utilizing a pyramid scene analysis network to obtain a preliminary feature map corresponding to the original picture;
s13, cutting the preliminary feature map by using a preset face detection frame to cut out a regional feature map containing face features;
s14, calculating a confidence coefficient obtained after the preliminary feature map and the region feature map are fused;
and S15, determining the result of the living body detection according to the magnitude relation between the confidence coefficient and the preset confidence coefficient threshold value.
Specifically, in step S11, an original picture of the user to be detected may be acquired by a camera device. Illustratively, after the original picture is obtained, it may be preprocessed, for example by random cropping, flipping, and scaling.
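As a concrete illustration of this preprocessing step, the following sketch applies random cropping, flipping, and scaling with torchvision; the crop size, flip probability, and scale range are assumed values chosen for illustration and are not prescribed by this embodiment.

```python
# Minimal preprocessing sketch using torchvision; the crop size, flip
# probability, and scale range are assumed values, not prescribed here.
import torchvision.transforms as T

preprocess = T.Compose([
    T.RandomResizedCrop(224, scale=(0.8, 1.0)),  # random crop and rescale
    T.RandomHorizontalFlip(p=0.5),               # random flip
    T.ToTensor(),                                # PIL image -> CHW float tensor in [0, 1]
])

# original_picture is a PIL.Image captured by the camera device:
# input_tensor = preprocess(original_picture)
```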
Specifically, in step S12, the pyramid scene parsing network (PSPNet) makes an overall, pixel-level judgment on the original picture to obtain a mask and the output of the network layer immediately preceding the mask. The mask is a binary image containing only black and white, where the white part is the segmented image region and the black part is the segmented background; the output of the layer preceding the mask is the network output obtained after removing the part of the segmentation network that directly produces the mask, and this output is the preliminary feature map used in the embodiments of the present invention. Illustratively, the mask can be used to supervise the training of the network during the training process, and it is convenient for describing and locating the position of the preliminary feature map used subsequently.
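To make these two outputs concrete, the following sketch wraps a generic segmentation backbone and returns both the pre-mask feature map (the preliminary feature map) and a thresholded mask; the backbone, channel count, and threshold are assumptions, since no particular PSPNet implementation is prescribed here.

```python
import torch
import torch.nn as nn

class SegmentationWithFeatures(nn.Module):
    """Sketch: expose the feature map just before the mask head of a
    segmentation network. The backbone, channel count, and threshold are
    assumptions, not the patented model."""
    def __init__(self, backbone: nn.Module, feat_channels: int = 512):
        super().__init__()
        self.backbone = backbone                    # e.g. a PSPNet-style encoder with pyramid pooling
        self.mask_head = nn.Conv2d(feat_channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor):
        feat = self.backbone(x)                     # preliminary feature map, shape (N, C, H, W)
        mask = torch.sigmoid(self.mask_head(feat))  # soft foreground/background mask in [0, 1]
        return feat, (mask > 0.5).float()           # feature map plus a binary (black/white) mask
```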
Specifically, in step S13, the preliminary feature map is cut using a preset face detection frame, so as to cut out a region feature map containing face features. The face features include facial structures such as the eyes, nose, and mouth. Illustratively, face detection algorithms such as Viola-Jones, Histogram of Oriented Gradients (HOG) based detectors, and convolutional neural networks may be used for face detection.
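A minimal sketch of this cutting step is given below: it maps a face detection frame from original-image coordinates onto the preliminary feature map and slices out the corresponding region. The box format and the stride between the image and the feature map are assumptions for illustration.

```python
import torch

def crop_region_feature_map(feat: torch.Tensor, box_xyxy, stride: int = 8) -> torch.Tensor:
    """Sketch: cut the region feature map out of the preliminary feature map.

    feat     -- preliminary feature map of shape (N, C, H, W)
    box_xyxy -- face detection frame (x1, y1, x2, y2) in original-image pixels (assumed format)
    stride   -- assumed downsampling factor between the original picture and the feature map
    """
    x1, y1, x2, y2 = (int(round(v / stride)) for v in box_xyxy)
    h, w = feat.shape[2], feat.shape[3]
    x1, y1 = max(0, x1), max(0, y1)
    x2 = min(w, max(x2, x1 + 1))     # keep at least one column
    y2 = min(h, max(y2, y1 + 1))     # keep at least one row
    return feat[:, :, y1:y2, x1:x2]  # region feature map containing the face
```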
Further, after the preliminary feature map is obtained, fine-grained discrimination needs to be performed on the preliminary feature map in order to distinguish differences in the background-noise distribution of different regions. At this time, the method further includes:
partitioning the preliminary feature map, and performing fine-grained identification on the partitioned preliminary feature map. The specific fine-grained discrimination process may refer to fine-grained discrimination processes in the prior art, and is not described herein again.
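As an illustration of the partitioning step, the sketch below splits the preliminary feature map into a regular grid of blocks that can then be passed to a fine-grained classifier; the grid size is an assumed value.

```python
import torch

def partition_feature_map(feat: torch.Tensor, grid: int = 4):
    """Sketch: split the preliminary feature map (N, C, H, W) into a grid x grid
    set of blocks for fine-grained identification; the grid size is assumed."""
    _, _, h, w = feat.shape
    bh, bw = h // grid, w // grid
    blocks = []
    for i in range(grid):
        for j in range(grid):
            blocks.append(feat[:, :, i * bh:(i + 1) * bh, j * bw:(j + 1) * bw])
    return blocks  # each block can then be scored by a fine-grained classifier
```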
Specifically, in steps S14 to S15, the confidence coefficient obtained by fusing the preliminary feature map and the region feature map is calculated, and the result of the living body detection is determined according to the magnitude relationship between the confidence coefficient and a preset confidence coefficient threshold value. When the confidence coefficient is greater than or equal to the preset confidence coefficient threshold value, the user to be detected is judged to be a living body; when the confidence coefficient is smaller than the preset confidence coefficient threshold value, the user to be detected is judged to be a non-living body, in which case the acquired original picture may be a spoofed face such as a face photo, a face video, or a three-dimensional mask. Preferably, the confidence coefficient threshold value can be flexibly adjusted according to the actual scene.
Further, after the original picture is obtained in step S11, the method further includes steps S21 to S24. Referring to fig. 2, fig. 2 is a flowchart for generating a spatiotemporal feature map according to an embodiment of the present invention; the steps include:
s21, cutting the original picture by using the face detection frame to cut a picture to be detected containing face characteristics;
s22, carrying out 3D reconstruction on the picture to be detected to obtain a depth map corresponding to the picture to be detected;
s23, fusing the depth map and the region feature map to generate a fused feature map;
s24, sequencing the fusion feature maps according to a preset time sequence, and fusing the sequenced fusion feature maps by convolution to generate a space-time feature map; illustratively, in embodiments of the present invention the convolution may be a 3D convolution, but in other embodiments the convolution may be any convolution capable of generating a spatiotemporal feature map, and is within the scope of the present invention.
Firstly, after the picture to be detected is obtained, 3D reconstruction is performed on the picture to be detected by using a trained 4-layer (convolution + ReLU + pooling) module to obtain a depth map corresponding to the picture to be detected; then, the depth map and the preliminary feature map obtained in step S12 are fused by using a 2-layer residual module (the basic building block of a ResNet-50 network) to generate a fused feature map (it should be noted that when a plurality of original pictures are input, a corresponding plurality of depth maps, preliminary feature maps, and fused feature maps are obtained); finally, the fused feature maps are sequenced according to a preset time sequence, and the sequenced fused feature maps are fused by convolution to generate a space-time feature map.
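The following sketch puts these three pieces together: a 4-stage (convolution + ReLU + pooling) depth-estimation module, a 2-layer residual fusion block, and a 3D convolution over the time-ordered fused feature maps. The channel widths, kernel sizes, and the use of channel concatenation before the residual block are assumptions; only the overall structure follows the description above.

```python
import torch
import torch.nn as nn

def conv_relu_pool(cin, cout):
    # one (convolution + ReLU + pooling) stage, as in the described depth module
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))

class DepthModule(nn.Module):
    """Sketch of the 4-layer (convolution + ReLU + pooling) 3D-reconstruction
    module; the channel widths are assumptions."""
    def __init__(self):
        super().__init__()
        self.stages = nn.Sequential(
            conv_relu_pool(3, 32),
            conv_relu_pool(32, 64),
            conv_relu_pool(64, 128),
            conv_relu_pool(128, 1),   # single-channel depth map at reduced resolution
        )

    def forward(self, x):
        return self.stages(x)

class ResidualFusion(nn.Module):
    """Sketch of a 2-layer residual block (similar in spirit to the ResNet-50
    building block); it is applied to the channel-concatenated depth map and
    feature map (the concatenation is an assumption)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))

def spatiotemporal_fuse(fused_maps, conv3d: nn.Conv3d):
    """Stack the time-ordered fused feature maps and fuse them with a 3D
    convolution; conv3d is a trained nn.Conv3d layer of the network."""
    seq = torch.stack(fused_maps, dim=2)  # list of T tensors (N, C, H, W) -> (N, C, T, H, W)
    return conv3d(seq)                    # space-time feature map
```

Here `fused_maps` would be the per-frame outputs of `ResidualFusion`, already sorted according to the preset time sequence.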
Then, the step S14 specifically includes:
and calculating the confidence coefficient obtained by fusing the preliminary feature map, the region feature map and the space-time feature map.
Illustratively, the structure that fuses the preliminary feature map, the region feature map, and the spatiotemporal feature map employs 3D convolution + a residual convolution module (ResNet block) + pooling. Furthermore, the preliminary feature map, the region feature map, and the spatiotemporal feature map may additionally be fused by, but not limited to, an attention mechanism, feature map concatenation (concat), feature map dot multiplication, feature map addition (sum), and lightweight techniques similar to depthwise separable convolution, and the result of the fusion is the confidence coefficient.
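A minimal sketch of such a fusion head is shown below, using stacking of the three feature maps followed by 3D convolution, a residual block, pooling, and a sigmoid output as the confidence coefficient, together with the threshold decision of steps S14 to S15. The channel count, the assumption that the three maps have been resized to a common shape, and the default threshold of 0.5 are illustrative choices, not values fixed by the embodiment.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Sketch of the fusion structure (3D convolution + residual block + pooling);
    the channel count and the use of simple stacking are assumptions."""
    def __init__(self, channels):
        super().__init__()
        self.conv3d = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.res = nn.Sequential(
            nn.Conv3d(channels, channels, 3, padding=1),
            nn.ReLU(),
            nn.Conv3d(channels, channels, 3, padding=1),
        )
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Linear(channels, 1)

    def forward(self, prelim, region, spatiotemporal):
        # stack the three maps along a new depth axis: (N, C, 3, H, W);
        # this assumes they were resized to a common (C, H, W) beforehand
        x = torch.stack([prelim, region, spatiotemporal], dim=2)
        x = torch.relu(self.conv3d(x))
        x = torch.relu(x + self.res(x))               # residual convolution module
        x = self.pool(x).flatten(1)                   # global pooling -> (N, C)
        return torch.sigmoid(self.fc(x)).squeeze(1)   # confidence coefficient in [0, 1]

def is_living_body(confidence: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    # steps S14 to S15: living body if the confidence coefficient reaches the preset threshold
    return confidence >= threshold
```

Stacking the three maps along a depth axis lets the 3D convolution mix information across them, which matches the role the description assigns to the 3D convolution in the fusion structure; attention, concatenation, dot multiplication, or addition could be substituted as listed above.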
Further, the above process can refer to fig. 3.
Compared with the prior art, the embodiments of the invention provide a living body detection method based on face recognition. Firstly, 3D reconstruction and segmentation detection are simultaneously carried out on an original picture; then, the preliminary feature map obtained by image segmentation and the depth map obtained by 3D reconstruction are fused, and space-time feature fusion is subsequently performed to obtain a space-time feature map; meanwhile, the preliminary feature map obtained by image segmentation is cropped by the face detection frame and subjected to fine-grained identification to obtain a region feature map; finally, the space-time feature map, the region feature map and the preliminary feature map are fused to obtain the probability that the user to be detected is a living body. The method makes full use of spatio-temporal information, reduces the noise influence of a single picture, has high accuracy, does not need user cooperation, and can meet the requirements of practical application.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a living body detecting device 100 based on face recognition according to an embodiment of the present invention; the living body detection device 100 based on the face recognition includes:
an original picture acquiring unit 101, configured to acquire an original picture of a user to be detected;
a preliminary feature map generation unit 102, configured to segment the original picture by using a pyramid scene parsing network to obtain a preliminary feature map corresponding to the original picture;
the regional feature map generating unit 103 is configured to cut the preliminary feature map by using a preset face detection frame to cut a regional feature map including face features;
a confidence coefficient calculation unit 104, configured to calculate a confidence coefficient obtained by fusing the preliminary feature map and the region feature map;
and a living body judging unit 105, configured to determine a result of the living body detection according to a magnitude relationship between the confidence and a preset confidence threshold.
Optionally, the living body determination unit 105 is specifically configured to:
when the confidence coefficient is greater than or equal to a preset confidence coefficient threshold value, judging that the user to be detected is a living body;
and when the confidence coefficient is smaller than a preset confidence coefficient threshold value, judging that the user to be detected is a non-living body.
Further, the living body detection device 100 based on face recognition further includes:
and a fine-grained discrimination unit 106, configured to partition the preliminary feature map and perform fine-grained identification on the partitioned preliminary feature map.
Further, the living body detection device 100 based on face recognition further includes:
a to-be-detected picture generating unit 107, configured to cut the original picture by using the face detection frame, so as to cut out a to-be-detected picture including face features;
a depth map generating unit 108, configured to perform 3D reconstruction on the picture to be detected to obtain a depth map corresponding to the picture to be detected;
a fusion feature map generation unit 109 for fusing the depth map and the region feature map to generate a fusion feature map;
and a spatiotemporal feature map generation unit 110, configured to sort the fused feature maps according to a preset time sequence, and fuse the sorted fused feature maps by using convolution to generate a spatiotemporal feature map.
Then, the confidence calculating unit 104 is specifically configured to calculate a confidence obtained by fusing the preliminary feature map, the region feature map, and the spatio-temporal feature map.
For a specific working process of the living body detecting device 100 based on face recognition, reference may be made to the working process of the living body detecting method based on face recognition described in the foregoing embodiment, and details are not repeated here.
Compared with the prior art, the embodiments of the invention provide a living body detection device 100 based on face recognition. Firstly, 3D reconstruction and segmentation detection are simultaneously carried out on an original picture; then, the preliminary feature map obtained by image segmentation and the depth map obtained by 3D reconstruction are fused, and space-time feature fusion is subsequently performed to obtain a space-time feature map; meanwhile, the preliminary feature map obtained by image segmentation is cropped by the face detection frame and subjected to fine-grained identification to obtain a region feature map; finally, the space-time feature map, the region feature map and the preliminary feature map are fused to obtain the probability that the user to be detected is a living body. The device makes full use of spatio-temporal information, reduces the noise influence of a single picture, has high accuracy, does not need user cooperation, and can meet the requirements of practical application.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a living body detecting apparatus 200 based on face recognition according to an embodiment of the present invention. The living body detecting apparatus 200 based on face recognition of this embodiment includes: a processor 201, a memory 202 and a computer program stored in said memory 202 and executable on said processor 201. The processor 201, when executing the computer program, implements the steps in each of the above-described embodiments of the face recognition-based living body detection method, such as step S11 shown in fig. 1. Alternatively, the processor 201, when executing the computer program, implements the functions of the modules/units in the above-mentioned device embodiments, such as the original picture acquiring unit 101.
Illustratively, the computer program may be partitioned into one or more modules/units that are stored in the memory 202 and executed by the processor 201 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used for describing the execution process of the computer program in the living body detection device 200 based on the face recognition. For example, the computer program may be divided into an original image acquisition unit 101, a preliminary feature map generation unit 102, a region feature map generation unit 103, a confidence calculation unit 104, a living body judgment unit 105, a fine-grained discrimination unit 106, a to-be-detected image generation unit 107, a depth map generation unit 108, a fusion feature map generation unit 109, and a spatiotemporal feature map generation unit 110, and specific functions of each unit may refer to functions of each unit in the living body detection apparatus 100 based on face recognition, and are not described herein again.
The living body detecting device 200 based on the face recognition may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The living body detection device 200 based on face recognition may include, but is not limited to, a processor 201 and a memory 202. It will be understood by those skilled in the art that the schematic diagram is merely an example of the living body detection device 200 based on the face recognition, and does not constitute a limitation of the living body detection device 200 based on the face recognition, and may include more or less components than those shown, or combine some components, or different components, for example, the living body detection device 200 based on the face recognition may further include an input-output device, a network access device, a bus, etc.
The Processor 201 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The processor 201 is the control center of the face recognition based living body detecting apparatus 200, and uses various interfaces and lines to connect the various parts of the whole face recognition based living body detecting apparatus 200.
The memory 202 may be used for storing the computer programs and/or modules, and the processor 201 implements various functions of the face recognition based liveness detection device 200 by running or executing the computer programs and/or modules stored in the memory 202 and calling data stored in the memory 202. The memory 202 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to the use of the device (such as audio data, a phonebook, etc.), and the like. Further, the memory 202 may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, an internal memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash memory card (Flash Card), at least one magnetic disk storage device, a Flash memory device, or other non-volatile solid-state storage device.
Wherein, the integrated modules/units of the living body detecting device 200 based on the human face recognition can be stored in a computer readable storage medium if they are implemented in the form of software functional units and sold or used as independent products. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by the processor 201, the steps of the method embodiments described above may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
It should be noted that the above-described device embodiments are merely illustrative, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiment of the apparatus provided by the present invention, the connection relationship between the modules indicates that there is a communication connection between them, and may be specifically implemented as one or more communication buses or signal lines. One of ordinary skill in the art can understand and implement it without inventive effort.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.
Claims (10)
1. A living body detection method based on face recognition is characterized by comprising the following steps:
acquiring an original picture of a user to be detected;
segmenting the original picture by utilizing a pyramid scene parsing network to obtain a preliminary feature map corresponding to the original picture;
cutting the preliminary feature map by using a preset face detection frame to cut out a regional feature map containing face features;
calculating a confidence coefficient obtained after the preliminary feature map and the region feature map are fused;
and determining the result of the living body detection according to the magnitude relation between the confidence coefficient and a preset confidence coefficient threshold value.
2. The living body detection method based on face recognition as claimed in claim 1, wherein after obtaining the preliminary feature map corresponding to the original picture, the method further comprises:
partitioning the preliminary feature map;
and performing fine-grained identification on the partitioned preliminary feature map.
3. The living body detecting method based on face recognition as claimed in claim 1, wherein after obtaining the original picture of the user to be detected, the method further comprises:
cutting the original picture by using the face detection frame to cut a picture to be detected containing face characteristics;
3D reconstruction is carried out on the picture to be detected, and a depth map corresponding to the picture to be detected is obtained;
fusing the depth map and the region feature map to generate a fused feature map;
and sequencing the fusion feature maps according to a preset time sequence, and fusing the sequenced fusion feature maps by utilizing convolution to generate a space-time feature map.
4. The living body detection method based on face recognition according to claim 3, wherein the calculating the confidence degree obtained by fusing the preliminary feature map and the region feature map specifically comprises:
and calculating the confidence coefficient obtained by fusing the preliminary feature map, the region feature map and the space-time feature map.
5. The living body detection method based on face recognition according to claim 1, wherein the determining the result of the living body detection according to the magnitude relationship between the confidence degree and the preset confidence degree threshold specifically comprises:
when the confidence coefficient is greater than or equal to a preset confidence coefficient threshold value, judging that the user to be detected is a living body;
and when the confidence coefficient is smaller than a preset confidence coefficient threshold value, judging that the user to be detected is a non-living body.
6. A living body detection device based on face recognition is characterized by comprising:
the original picture acquiring unit is used for acquiring an original picture of a user to be detected;
a preliminary feature map generation unit, configured to segment the original picture by using a pyramid scene parsing network to obtain a preliminary feature map corresponding to the original picture;
the region feature map generating unit is used for cutting the preliminary feature map by using a preset face detection frame so as to cut out a region feature map containing face features;
the confidence coefficient calculation unit is used for calculating the confidence coefficient obtained after the preliminary characteristic diagram and the region characteristic diagram are fused;
and the living body judging unit is used for determining the result of the living body detection according to the magnitude relation between the confidence coefficient and a preset confidence coefficient threshold value.
7. The living body detecting device based on face recognition as recited in claim 6, wherein the device further comprises:
and the fine-grained discrimination unit is used for partitioning the preliminary feature map and performing fine-grained identification on the partitioned preliminary feature map.
8. The living body detecting device based on face recognition as recited in claim 6, wherein the device further comprises:
the picture generating unit to be detected is used for cutting the original picture by using the face detection frame so as to cut out a picture to be detected containing face characteristics;
the depth map generating unit is used for carrying out 3D reconstruction on the picture to be detected to obtain a depth map corresponding to the picture to be detected;
a fusion feature map generation unit, configured to fuse the depth map and the region feature map to generate a fusion feature map;
and the space-time characteristic diagram generating unit is used for sequencing the fusion characteristic diagrams according to a preset time sequence and fusing the sequenced fusion characteristic diagrams by convolution to generate the space-time characteristic diagram.
9. The living body detecting device based on face recognition as claimed in claim 8, wherein the confidence calculating unit is specifically configured to calculate a confidence obtained by fusing the preliminary feature map, the region feature map, and the spatiotemporal feature map.
10. A face recognition based liveness detection device comprising a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, the processor implementing the face recognition based liveness detection method according to any one of claims 1 to 5 when executing the computer program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910947729.2A CN110781770B (en) | 2019-10-08 | 2019-10-08 | Living body detection method, device and equipment based on face recognition |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110781770A true CN110781770A (en) | 2020-02-11 |
CN110781770B CN110781770B (en) | 2022-05-06 |
Family
ID=69384809
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910947729.2A Active CN110781770B (en) | 2019-10-08 | 2019-10-08 | Living body detection method, device and equipment based on face recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110781770B (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180165512A1 (en) * | 2015-06-08 | 2018-06-14 | Beijing Kuangshi Technology Co., Ltd. | Living body detection method, living body detection system and computer program product |
CN108594997A (en) * | 2018-04-16 | 2018-09-28 | 腾讯科技(深圳)有限公司 | Gesture framework construction method, apparatus, equipment and storage medium |
CN108985343A (en) * | 2018-06-22 | 2018-12-11 | 深源恒际科技有限公司 | Automobile damage detecting method and system based on deep neural network |
CN109815797A (en) * | 2018-12-17 | 2019-05-28 | 北京飞搜科技有限公司 | Biopsy method and device |
CN110163078A (en) * | 2019-03-21 | 2019-08-23 | 腾讯科技(深圳)有限公司 | The service system of biopsy method, device and application biopsy method |
CN110097090A (en) * | 2019-04-10 | 2019-08-06 | 东南大学 | A kind of image fine granularity recognition methods based on multi-scale feature fusion |
CN110119728A (en) * | 2019-05-23 | 2019-08-13 | 哈尔滨工业大学 | Remote sensing images cloud detection method of optic based on Multiscale Fusion semantic segmentation network |
CN110210457A (en) * | 2019-06-18 | 2019-09-06 | 广州杰赛科技股份有限公司 | Method for detecting human face, device, equipment and computer readable storage medium |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111368666A (en) * | 2020-02-25 | 2020-07-03 | 上海蠡图信息科技有限公司 | Living body detection method based on novel pooling and attention mechanism double-current network |
CN111368666B (en) * | 2020-02-25 | 2023-08-18 | 上海蠡图信息科技有限公司 | Living body detection method based on novel pooling and attention mechanism double-flow network |
CN111680563A (en) * | 2020-05-09 | 2020-09-18 | 苏州中科先进技术研究院有限公司 | Living body detection method and device, electronic equipment and storage medium |
CN111680563B (en) * | 2020-05-09 | 2023-09-19 | 苏州中科先进技术研究院有限公司 | Living body detection method, living body detection device, electronic equipment and storage medium |
CN112507903A (en) * | 2020-12-15 | 2021-03-16 | 平安科技(深圳)有限公司 | False face detection method and device, electronic equipment and computer readable storage medium |
CN112507903B (en) * | 2020-12-15 | 2024-05-10 | 平安科技(深圳)有限公司 | False face detection method, false face detection device, electronic equipment and computer readable storage medium |
CN113095284A (en) * | 2021-04-30 | 2021-07-09 | 平安国际智慧城市科技股份有限公司 | Face selection method, device, equipment and computer readable storage medium |
CN113743379A (en) * | 2021-11-03 | 2021-12-03 | 杭州魔点科技有限公司 | Light-weight living body identification method, system, device and medium for multi-modal characteristics |
CN114360015A (en) * | 2021-12-30 | 2022-04-15 | 杭州萤石软件有限公司 | Living body detection method, living body detection device, living body detection equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN110781770B (en) | 2022-05-06 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||