CN108280418A - Method and device for spoof recognition of facial images - Google Patents
- Publication number
- CN108280418A CN108280418A CN201810048661.XA CN201810048661A CN108280418A CN 108280418 A CN108280418 A CN 108280418A CN 201810048661 A CN201810048661 A CN 201810048661A CN 108280418 A CN108280418 A CN 108280418A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
Abstract
The present disclosure relates to a method and device for spoof recognition of facial images. The method includes: acquiring a facial image to be recognized; extracting local features of the face in the facial image with a trained first neural network to obtain a local feature value of the facial image; extracting depth features of the face in the facial image with a trained second neural network to obtain a depth feature value of the facial image; fusing the local feature value and the depth feature value to obtain a fusion value of the facial image; and comparing the fusion value with a threshold, the spoof recognition result of the facial image being determined from the comparison result. The disclosed approach offers high recognition accuracy and good robustness, and can withstand a variety of image spoofing attacks.
Description
Technical Field

The present disclosure relates to the technical field of image recognition, and in particular to a method and device for spoof recognition of facial images.

Background

Face recognition is a biometric technology that identifies a person from facial features. With the continuous development of information technology and the broad application of computer technology, many effective face recognition algorithms have been developed, such as the Fisherface method, local feature analysis, and subspace methods; in particular, after the eigenface method was proposed, face recognition made further significant progress. As research has deepened, face recognition algorithms have reached a high level of maturity, and face recognition has become a mainstream biometric technology widely used in practice, for example for network account login, banking system login, access control, and face payment. Biometric technology closely combines computing with optics, acoustics, biosensors, and biostatistics, exploiting the body's inherent physiological characteristics (such as fingerprints, faces, and irises) and behavioral characteristics (such as handwriting, voice, and gait) to identify or verify individuals.

As biometric systems are widely deployed in real-world applications, including mobile phone authentication and access control, biometric spoofing, also known as presentation attacks (PA), is becoming an ever greater threat: forged biometric samples are presented to the recognition system in an attempt to be authenticated. Because the face is the most easily acquired biometric trait, face spoofing takes many different forms, including print attacks, replay attacks, 3D masks, and more. Traditional face recognition systems are therefore highly vulnerable to such presentation attacks.
Summary of the Invention

To overcome the problems in the related art, the present disclosure provides a method and device for spoof recognition of facial images, which address the low face recognition accuracy caused by facial image spoofing in traditional methods.

According to one aspect of the embodiments of the present disclosure, a spoof recognition method for facial images is provided, the method including:

acquiring a facial image to be recognized;

extracting local features of the face in the facial image with a trained first neural network to obtain a local feature value of the facial image;

extracting depth features of the face in the facial image with a trained second neural network to obtain a depth feature value of the facial image;

fusing the local feature value and the depth feature value to obtain a fusion value of the facial image; and

comparing the fusion value with a threshold, and determining the spoof recognition result of the facial image from the comparison result.
In one possible implementation, acquiring the facial image to be recognized includes:

acquiring a first image and a second image of the face to be recognized, the first image and the second image being produced by different imaging methods; and

obtaining the facial image to be recognized from the first image and the second image.

In one possible implementation, extracting local features of the face in the facial image with the trained first neural network to obtain the local feature value of the facial image includes:

randomly determining sub-regions of the face in the facial image;

extracting local features in the sub-regions with the trained first neural network to obtain sub-region features of the facial image; and

obtaining the local feature value of the facial image from the sub-region features.

In one possible implementation, extracting depth features of the face in the facial image with the trained second neural network to obtain the depth feature value of the facial image includes:

determining a depth calculation area of the face in the facial image; and

extracting depth features of the depth calculation area with the trained second neural network to obtain the depth feature value of the facial image.

In one possible implementation, comparing the fusion value with the threshold and determining the spoof recognition result of the facial image from the comparison result includes:

comparing the fusion value with the threshold, and determining from the comparison result whether the facial image is a live image or a spoof image.
According to another aspect of the embodiments of the present disclosure, a spoof recognition device for facial images is provided, including:

a facial image acquisition module, configured to acquire a facial image to be recognized;

a local feature acquisition module, configured to extract local features of the face in the facial image with a trained first neural network to obtain a local feature value of the facial image;

a depth feature acquisition module, configured to extract depth features of the face in the facial image with a trained second neural network to obtain a depth feature value of the facial image;

a fusion module, configured to fuse the local feature value and the depth feature value to obtain a fusion value of the facial image; and

a recognition result acquisition module, configured to compare the fusion value with a threshold and determine the spoof recognition result of the facial image from the comparison result.

In one possible implementation, the facial image acquisition module includes:

a first image acquisition sub-module, configured to acquire a first image and a second image of the face to be recognized, the first image and the second image being produced by different imaging methods; and

a second image acquisition sub-module, configured to obtain the facial image to be recognized from the first image and the second image.

In one possible implementation, the local feature acquisition module includes:

a sub-region determination sub-module, configured to randomly determine sub-regions of the face in the facial image;

a sub-region feature acquisition sub-module, configured to extract local features in the sub-regions with the trained first neural network to obtain sub-region features of the facial image; and

a local feature acquisition sub-module, configured to obtain the local feature value of the facial image from the sub-region features.

In one possible implementation, the depth feature acquisition module includes:

a depth calculation area sub-module, configured to determine a depth calculation area of the face in the facial image; and

a depth feature acquisition sub-module, configured to extract depth features of the depth calculation area with the trained second neural network to obtain the depth feature value of the facial image.

In one possible implementation, the recognition result acquisition module includes:

a recognition result acquisition sub-module, configured to compare the fusion value with the threshold and determine from the comparison result whether the facial image is a live image or a spoof image.
According to another aspect of the embodiments of the present disclosure, a spoof recognition device for facial images is provided, including:

a processor; and

a memory for storing processor-executable instructions;

wherein the processor is configured to execute the method described in any one of the embodiments of the present disclosure.

According to another aspect of the embodiments of the present disclosure, a non-volatile computer-readable storage medium is provided, on which computer program instructions are stored, wherein the computer program instructions, when executed by a processor, cause the processor to execute the method described in any one of the embodiments of the present disclosure.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects: by extracting local features and depth features of the facial image to be recognized, a fused feature value is obtained; after comparing the fused feature value with a threshold, the spoof recognition result of the facial image is obtained. The disclosed approach offers high recognition accuracy and good robustness, and can withstand a variety of image spoofing attacks.

Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief Description of the Drawings

The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the specification, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart of a spoof recognition method for facial images according to an exemplary embodiment;

Fig. 2 is a flowchart of a spoof recognition method for facial images according to an exemplary embodiment;

Fig. 3 is a flowchart of a spoof recognition method for facial images according to an exemplary embodiment;

Fig. 4 is a flowchart of a spoof recognition method for facial images according to an exemplary embodiment;

Fig. 5 is a flowchart of a spoof recognition method for facial images according to an exemplary embodiment;

Fig. 6 is a flowchart of a spoof recognition method for facial images according to an exemplary embodiment;

Fig. 7 is a block diagram of a spoof recognition device for facial images according to an exemplary embodiment;

Fig. 8 is a block diagram of a spoof recognition device for facial images according to an exemplary embodiment;

Fig. 9 is a block diagram of a spoof recognition device for facial images according to an exemplary embodiment;

Fig. 10 is a block diagram of a spoof recognition device for facial images according to an exemplary embodiment.
Detailed Description

Various exemplary embodiments, features, and aspects of the present disclosure are described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings denote elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless specifically indicated.

The word "exemplary" is used here to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as superior to other embodiments.

In addition, numerous specific details are given in the following detailed description to better illustrate the present disclosure. Those skilled in the art will understand that the present disclosure can be practiced without some of these specific details. In some instances, methods, means, components, and circuits well known to those skilled in the art are not described in detail, so as to highlight the gist of the present disclosure.
Existing facial image recognition methods include:

1. Texture-based methods: the texture features of a face are highly discriminative, so extracting texture features from face images often yields good classification and recognition results. Texture feature extraction methods can generally be grouped into four categories: statistical methods, model-based methods, structural methods, and signal processing methods.

2. Methods based on capturing facial motion, such as eyeball and lip movements: whether the image shows a real face is determined by extracting the facial motion of the person in the image.

3. Methods based on image quality and reflectance: whether the image shows a real face is determined by extracting and comparing the illumination and noise information in the image.

However, there is little correlation between the pixel density of extracted texture features and the various attack methods, so extracting texture features that work reliably is very difficult. Capturing facial motion has some advantages for recognizing static images, but it is ineffective against video or image replay attacks. Methods based on image quality and reflectance place high demands on the image and are sensitive to noise, which hinders stable recognition.
Fig. 1 is a flowchart of a spoof recognition method for facial images according to an exemplary embodiment. As shown in Fig. 1, the method may include:

Step S10: acquiring a facial image to be recognized.

Step S20: extracting local features of the face in the facial image with the trained first neural network to obtain a local feature value of the facial image.

Step S30: extracting depth features of the face in the facial image with the trained second neural network to obtain a depth feature value of the facial image.

Step S40: fusing the local feature value and the depth feature value to obtain a fusion value of the facial image.

Step S50: comparing the fusion value with a threshold, and determining the spoof recognition result of the facial image from the comparison result.
The facial image to be recognized may be of several kinds, for example a live facial image captured on the spot, or a non-live facial image that may be a spoof.

Extracting only local facial features is not enough to resist existing spoofing attacks. The present disclosure divides the face in the image to be recognized into multiple sub-regions and extracts the local features of each sub-region. It also extracts the overall depth features of the image, using the depth of the face itself as the reference. After fusing the extracted local and overall features, the fusion value is compared with a set threshold, and the comparison result determines whether the facial image to be recognized is a spoof image, improving the recognition accuracy for spoof images.
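The decision procedure described above, two feature values fused and compared against a threshold, can be sketched as follows. This is a minimal illustration, not the patent's implementation: the two scoring functions are hypothetical stubs standing in for the trained first and second networks, and the equal weights and 0.5 threshold are illustrative choices.

```python
import numpy as np

# Hypothetical stand-ins for the two trained networks; in the disclosure these
# scores would come from the patch-based CNN and the depth-based CNN.
def local_feature_score(face):
    """Spoof score in [0, 1] derived from local (patch) features -- stubbed here."""
    return 0.2

def depth_feature_score(face):
    """Spoof score in [0, 1] derived from the estimated face depth -- stubbed here."""
    return 0.4

def spoof_recognition(face, threshold=0.5, w_local=0.5, w_depth=0.5):
    """Fuse the two feature values and compare with a threshold: a fused
    score above the threshold is classified as a spoof, otherwise as live."""
    fused = w_local * local_feature_score(face) + w_depth * depth_feature_score(face)
    return ("spoof" if fused > threshold else "live"), fused

label, fused = spoof_recognition(np.zeros((128, 128, 3)))
```

With the stubbed scores above, the fused score is 0.3, below the threshold, so the image is classified as live.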
Fig. 2 is a flowchart of a spoof recognition method for facial images according to an exemplary embodiment. As shown in Fig. 2, this embodiment differs from the one above in that step S10 includes:

Step S11: acquiring a first image and a second image of the face to be recognized, the first image and the second image being produced by different imaging methods.

Step S12: obtaining the facial image to be recognized from the first image and the second image.
In one possible implementation, the first image is acquired by infrared imaging, so the first image is an infrared image; the second image is acquired by visible-light imaging, so the second image is a visible-light image. The first and second images are fused to obtain the facial image to be recognized. Image fusion can be performed at three levels, from low to high: data-level fusion, feature-level fusion, and decision-level fusion.

Data-level fusion, also called pixel-level fusion, processes the raw data collected by the sensors directly to obtain the fused image, and preserves as much of the original scene data as possible. Feature-level fusion preserves the information carried by the features of the different images, for example the heat of an object as captured by infrared light, or its brightness as captured by visible light. Decision-level fusion is mainly driven by subjective requirements and can employ rules such as Bayesian methods, Dempster-Shafer evidence theory, and voting.
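Pixel-level (data-level) fusion of the infrared and visible-light images can be illustrated with a simple weighted average of corresponding pixels. The patent does not specify a fusion rule, so the `alpha` weight here is a hypothetical choice:

```python
import numpy as np

def pixel_level_fusion(ir_img, vis_img, alpha=0.5):
    """Weighted-average pixel-level fusion of an infrared image and a
    visible-light image of the same size; alpha weights the infrared channel."""
    ir = ir_img.astype(np.float32)
    vis = vis_img.astype(np.float32)
    fused = alpha * ir + (1.0 - alpha) * vis
    return np.clip(fused, 0, 255).astype(np.uint8)

ir = np.full((4, 4), 200, dtype=np.uint8)   # toy infrared image
vis = np.full((4, 4), 100, dtype=np.uint8)  # toy visible-light image
fused = pixel_level_fusion(ir, vis)
```

With equal weights, each fused pixel is the mean of the two source pixels (here 150).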
Face recognition is then performed on the fused image using image recognition technology, including face detection with a trained face detection neural network, to obtain the facial image.
Fig. 3 is a flowchart of a spoof recognition method for facial images according to an exemplary embodiment. As shown in Fig. 3, this embodiment differs from the one above in that step S20 includes:

Step S21: randomly determining sub-regions of the face in the facial image.

Step S22: extracting local features in the sub-regions with the trained first neural network to obtain sub-region features of the facial image.

Step S23: obtaining the local feature value of the facial image from the sub-region features.

In one possible implementation, the sub-regions randomly determined in the facial image may be sub-regions defined by facial parts, such as the eye, nose, and mouth regions, or sub-regions smaller than the individual parts of the face. The trained first neural network extracts local features from each sub-region, for example SIFT (Scale-Invariant Feature Transform) features, LBP (Local Binary Patterns) features, or HOG (Histogram of Oriented Gradients) features. The extracted local features of the sub-regions are then processed to obtain the local feature value of the facial image to be recognized.
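The random sub-region selection of step S21 can be sketched as random square crops from the face image. The patch size and count below are illustrative, not values from the patent:

```python
import numpy as np

def random_patches(face, n_patches=8, patch_size=32, rng=None):
    """Randomly crop square sub-regions (patches) from a face image;
    each patch keeps all channels of the source image."""
    rng = np.random.default_rng(rng)
    h, w = face.shape[:2]
    patches = []
    for _ in range(n_patches):
        y = rng.integers(0, h - patch_size + 1)
        x = rng.integers(0, w - patch_size + 1)
        patches.append(face[y:y + patch_size, x:x + patch_size])
    return patches

face = np.zeros((128, 128, 3), dtype=np.uint8)  # toy face image
patches = random_patches(face, n_patches=8, patch_size=32, rng=0)
```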
Fig. 4 is a flowchart of a spoof recognition method for facial images according to an exemplary embodiment. As shown in Fig. 4, this embodiment differs from the one above in that step S30 includes:

Step S31: determining a depth calculation area of the face in the facial image.

Step S32: extracting depth features of the depth calculation area with the trained second neural network to obtain the depth feature value of the facial image.

In one possible implementation, depth in the present disclosure means the depth of the face region referenced to some position on the face itself, not depth referenced to another object, nor the distance from the face region to some external position. To compute the depth of the facial image, it is not necessary to process every region of the image. One approach is to first determine a depth calculation area in the facial image and compute the depth of the face only within that area. Within the determined depth calculation area, the second neural network extracts depth features, from which the depth feature value of the facial image is obtained.
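The face-relative depth idea can be illustrated as follows: re-referencing the depth map to the face's own nearest point removes the camera distance, and the remaining spread separates a flat spoof surface from a live face with relief. The toy depth maps and the spread statistic are illustrative assumptions, not the patent's computation:

```python
import numpy as np

def face_relative_depth(depth_map):
    """Re-reference a face depth map to the face's own nearest point, so the
    camera-to-face distance cancels out; return the normalized map and its
    spread (a flat spoof surface has zero spread)."""
    relative = depth_map - depth_map.min()
    return relative, float(relative.std())

# Toy examples: a printed photo is flat; a live face has relief.
flat = np.full((8, 8), 500.0)  # constant distance to a printed photo
live = np.fromfunction(
    lambda y, x: 500.0 + ((x - 3.5) ** 2 + (y - 3.5) ** 2) / 5.0, (8, 8))
_, flat_spread = face_relative_depth(flat)
_, live_spread = face_relative_depth(live)
```

The flat map yields zero spread regardless of its absolute distance, while the live map retains its relief after normalization.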
Fig. 5 is a flowchart of a spoof recognition method for facial images according to an exemplary embodiment. As shown in Fig. 5, this embodiment differs from the one above in that step S50 includes:

Step S51: comparing the fusion value with the threshold, and determining from the comparison result whether the facial image is a live image or a spoof image.

In one possible implementation, the local feature value and the depth feature value of the facial image are fused to obtain the fusion value, which is then compared with the set threshold. If the fusion value is above the threshold, the facial image to be recognized is a spoof image; if it is below the threshold, the facial image is a live image.
To better illustrate the method of the present disclosure, the following is an exemplary embodiment. Fig. 6 is a flowchart of a spoof recognition method for facial images according to another exemplary embodiment. As shown in Fig. 6, the method includes:

Step 1: acquiring images with different imaging methods, for example an infrared image captured with an infrared camera and a visible-light image captured with a visible-light camera. In practice, a binocular camera can acquire the infrared and visible-light images simultaneously.

Step 2: fusing the infrared image and the visible-light image to obtain the input image (see the related description in the embodiments above).

Step 3: detecting the face in the input image to obtain the facial image for analysis, including performing face detection on the input image using image recognition technology (see the related description in the embodiments above).
Step 4: feeding the facial image into two neural networks. In Fig. 6, the upper and lower branches mark the processing steps of the two CNNs (convolutional neural networks). The upper CNN branch processes the local features of the facial image: local patch features are extracted, that is, sub-regions are randomly selected in the facial image and the local features of each sub-region are extracted separately. In the lower CNN branch, the facial image is fed into a depth-based CNN to extract the overall depth features of the image.

As sensing environments and image spoofing techniques multiply, extracting a single kind of feature can no longer cover all attacks, so convolutional neural networks are trained on massive data to distinguish live from spoof samples. For the patch-based CNN, a deep convolutional network is trained end to end to learn rich appearance features, distinguishing live from non-live face images using patches randomly extracted from the face image. For the depth-based CNN, a fully convolutional network (FCN) is trained to estimate the depth of the face image, on the assumption that print or replay attacks have a flat depth map while a live face has normal facial depth. The depth-based CNN can detect face attacks independently, based on appearance or depth cues.
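Both networks are built from stacked convolutional layers. As a minimal illustration of the kind of operation their early layers perform, a single "valid" 2-D convolution followed by a ReLU can be written in plain NumPy; the toy kernel below is an illustrative stand-in for a learned filter, not a weight from either network:

```python
import numpy as np

def conv2d_valid_relu(x, kernel):
    """Minimal single-channel 'valid' 2-D convolution followed by a ReLU,
    as appears in the early layers of either network; real models stack
    many such layers with learned kernels."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)  # ReLU non-linearity

patch = np.ones((5, 5))                              # featureless toy patch
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])   # toy difference filter
feat = conv2d_valid_relu(patch, edge_kernel)
```

On a constant patch the difference kernel responds with zeros everywhere, showing that such a filter fires only on intensity edges.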
Step 5: in the upper CNN branch, the extracted local features of each sub-region are fed into the patch-based CNN for processing. In the lower branch, the depth features of the facial image are obtained from the output of the depth-based CNN. The depth features exploit the whole face, describing a live face as a 3D object and a non-live face as a flat plane.

Estimating depth from a single RGB image is a fundamental problem in computer vision. For face images, face reconstruction from one or more images can be viewed as one way of estimating depth. The present disclosure estimates the depth of live and non-live faces with the face itself as the depth reference, rather than treating the fixed distance between the face and the camera that captures it as the depth.
Step 6. In the upper CNN stream, a liveness score is estimated for each local feature, yielding the local-feature score of the face image. In the lower CNN stream, SVM (Support Vector Machine) classification is performed on the holistic features to obtain the depth-feature score of the face image.
The upper CNN is trained end-to-end and assigns a score to each patch randomly extracted from the face image; the face image is then assigned the average of these scores. The lower CNN estimates the depth map of the face image and assigns the image a liveness score based on the estimated depth map.
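The two scoring rules above can be sketched as follows. The averaging of per-patch scores follows the text directly; the mapping from a depth map to a liveness score (here simply the spread of depth values, so a flat map scores low) is an illustrative stand-in, since the disclosure instead classifies the depth features with an SVM:

```python
import numpy as np

def image_score_from_patches(patch_scores):
    """Assign the face image the average of its per-patch scores."""
    return float(np.mean(patch_scores))

def liveness_score_from_depth(depth_map):
    """Illustrative proxy: a live face has varied depth, a flat spoof does not."""
    d = np.asarray(depth_map, dtype=np.float64)
    return float(d.std())

print(image_score_from_patches([0.5, 1.0, 0.0]))  # 0.5
```

A flat depth map (e.g. `np.full((32, 32), 2.0)`) yields a liveness score of exactly 0 under this proxy, mirroring the flat-depth assumption for print and replay attacks.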
Step 7. The local-feature score and the depth-feature score are fused, and the fused score is compared with a set threshold.
Step 8. Based on the comparison result, the face image is judged to be a fake image or a live image. The fused output of the local-feature score and the depth-feature score is called the spoof score. If the spoof score is above the predefined threshold, the face image or video clip is classified as a non-live image.
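Steps 7 and 8 reduce to a fusion followed by a threshold test. The equal weighting below is an assumption for illustration; the disclosure only states that the two scores are fused and the result compared with a threshold:

```python
def classify_face(local_score, depth_score, threshold=0.5, weight=0.5):
    """Fuse the two branch scores into a spoof score and threshold it.
    Higher scores mean 'more likely spoofed', matching step 8 above."""
    spoof_score = weight * local_score + (1.0 - weight) * depth_score
    return "non-live" if spoof_score > threshold else "live"

print(classify_face(0.8, 0.9))  # non-live
print(classify_face(0.1, 0.2))  # live
```

The `weight` parameter makes the trade-off between the two branches explicit: score fusion degrades gracefully if one branch is unreliable for a given attack type.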
This embodiment proposes a new method of face-verification anti-spoofing using two convolutional neural networks: one network extracts local features, the other extracts depth features, and a depth map obtained independently of the global feature extraction verifies whether the input image is a real entity. By building a two-stream neural network that recognizes local features and holistic depth features separately, then fusing and comparing their discrimination scores, the embodiment recognizes live faces faster and more conveniently, checks for liveness before identity verification, and effectively detects spoofing attacks such as photos, videos, and 3D masks.
This embodiment captures images simultaneously with an infrared camera and a visible-light camera and uses a two-stream neural network for local and depth discrimination. The technique is novel, highly accurate, and robust, and can cope with a variety of image spoofing attacks.
Fig. 7 is a block diagram of a spoofing recognition apparatus for face images according to an exemplary embodiment. As shown in Fig. 7, the apparatus includes:
a face image acquisition module 61, configured to acquire a face image to be recognized;
a local feature acquisition module 62, configured to extract local features of the face in the face image using a trained first neural network, obtaining local feature values of the face image;
a depth feature acquisition module 63, configured to extract depth features of the face in the face image using a trained second neural network, obtaining depth feature values of the face image;
a fusion module 64, configured to fuse the local feature values and the depth feature values, obtaining a fused value of the face image;
a recognition result acquisition module 65, configured to compare the fused value with a threshold and judge the spoofing recognition result of the face image from the comparison result.
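The module structure of Fig. 7 maps naturally onto a small pipeline. A sketch, assuming `patch_net` and `depth_net` stand in for the two trained networks as callables returning scalar scores (their real interfaces are not given in the disclosure, and the equal-weight fusion is likewise an assumption):

```python
class SpoofRecognitionApparatus:
    """Mirrors modules 61-65 of Fig. 7."""

    def __init__(self, patch_net, depth_net, threshold=0.5):
        self.patch_net = patch_net    # trained first neural network
        self.depth_net = depth_net    # trained second neural network
        self.threshold = threshold

    def recognize(self, face_image):
        local_value = self.patch_net(face_image)               # module 62
        depth_value = self.depth_net(face_image)               # module 63
        fused_value = 0.5 * (local_value + depth_value)        # module 64
        return "spoof" if fused_value > self.threshold else "live"  # module 65

apparatus = SpoofRecognitionApparatus(lambda img: 0.9, lambda img: 0.8)
print(apparatus.recognize(None))  # spoof
```

Keeping the two networks behind plain callables matches the modular decomposition of the apparatus: either branch can be retrained or replaced without touching the fusion and thresholding logic.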
Fig. 8 is a block diagram of a spoofing recognition apparatus for face images according to an exemplary embodiment. As shown in Fig. 8,
in one possible implementation, the face image acquisition module 61 includes:
a first image acquisition sub-module 611, configured to acquire a first image and a second image of the face to be recognized, the first image and the second image being captured with different imaging methods;
a second image acquisition sub-module 612, configured to obtain the face image to be recognized from the first image and the second image.
In one possible implementation, the local feature acquisition module 62 includes:
a sub-region determination sub-module 621, configured to randomly determine sub-regions of the face in the face image;
a sub-region feature acquisition sub-module 622, configured to extract local features from the sub-regions using the trained first neural network, obtaining sub-region features of the face image;
a local feature acquisition sub-module 623, configured to obtain the local feature values of the face image from the sub-region features.
In one possible implementation, the depth feature acquisition module 63 includes:
a depth calculation area sub-module 631, configured to determine a depth calculation area of the face in the face image;
a depth feature acquisition sub-module 632, configured to extract depth features of the depth calculation area using the trained second neural network, obtaining the depth feature values of the face image.
In one possible implementation, the recognition result acquisition module 65 includes:
a recognition result acquisition sub-module 651, configured to compare the fused value with a threshold and judge, from the comparison result, whether the face image is a live image or a spoofed image.
Fig. 9 is a block diagram of an apparatus 800 for spoofing recognition of face images according to an exemplary embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like.
Referring to Fig. 9, the apparatus 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls the overall operation of the apparatus 800, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 802 may include one or more processors 820 to execute instructions to complete all or some of the steps of the above method. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components; for example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation of the apparatus 800. Examples of such data include instructions for any application or method operating on the apparatus 800, contact data, phonebook data, messages, images, videos, and the like. The memory 804 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
The power component 806 provides power to the various components of the apparatus 800. The power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 800.
The multimedia component 808 includes a screen providing an output interface between the apparatus 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the panel. The touch sensors may sense not only the boundary of a touch or swipe action but also the duration and pressure associated with it. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the apparatus 800 is in an operation mode such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front and rear camera may be a fixed optical lens system or may have focusing and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operation mode such as a call mode, a recording mode, or a speech recognition mode. The received audio signal may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, the audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. The buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.
The sensor component 814 includes one or more sensors for providing status assessments of various aspects of the apparatus 800. For example, the sensor component 814 can detect the open/closed state of the apparatus 800 and the relative positioning of components, such as its display and keypad; it can also detect a change in position of the apparatus 800 or one of its components, the presence or absence of user contact with the apparatus 800, the orientation or acceleration/deceleration of the apparatus 800, and temperature changes of the apparatus 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, and an optical sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the apparatus 800 and other devices. The apparatus 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, such as the memory 804 including computer program instructions, which can be executed by the processor 820 of the apparatus 800 to complete the above method.
Fig. 10 is a block diagram of an apparatus 1900 for spoofing recognition of face images according to an exemplary embodiment. For example, the apparatus 1900 may be provided as a server. Referring to Fig. 10, the apparatus 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by a memory 1932 for storing instructions executable by the processing component 1922, such as application programs. The application programs stored in the memory 1932 may include one or more modules, each corresponding to a set of instructions. The processing component 1922 is configured to execute the instructions to perform the above method.
The apparatus 1900 may also include a power component 1926 configured to perform power management of the apparatus 1900, a wired or wireless network interface 1950 configured to connect the apparatus 1900 to a network, and an input/output (I/O) interface 1958. The apparatus 1900 can operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, such as the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the apparatus 1900 to complete the above method.
The present disclosure may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to implement various aspects of the present disclosure.
A computer-readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, or semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of computer-readable storage media include: a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove with instructions recorded thereon, and any suitable combination of the above. As used herein, a computer-readable storage medium is not to be construed as a transient signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through a fiber-optic cable), or an electrical signal transmitted through a wire.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to the respective computing/processing devices, or to an external computer or external storage device via a network, for example the Internet, a local area network, a wide area network, and/or a wireless network. The network may comprise copper transmission cables, optical-fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium within the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA) can be personalized by utilizing state information of the computer-readable program instructions, and the electronic circuitry can execute the computer-readable program instructions so as to implement aspects of the present disclosure.
Aspects of the present disclosure are described herein with reference to flowcharts and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks therein, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium and direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions that implement aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, another programmable data processing apparatus, or another device to cause a series of operational steps to be performed on it, producing a computer-implemented process, such that the instructions executed on the computer, other programmable apparatus, or other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of instructions, which comprises one or more executable instructions for implementing the specified logical functions. In some alternative implementations, the functions noted in a block may occur out of the order noted in the figures; for example, two successive blocks may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks therein, can be implemented by a special-purpose hardware-based system that performs the specified functions or acts, or by a combination of special-purpose hardware and computer instructions.
Various embodiments of the present disclosure have been described above; the foregoing description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or technical improvements over technologies on the market, or to enable other persons of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (12)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711314867 | 2017-12-12 | ||
CN2017113148674 | 2017-12-12 |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108280418A true CN108280418A (en) | 2018-07-13 |
Family
ID=62804076
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810048661.XA Pending CN108280418A (en) | 2017-12-12 | 2018-01-18 | The deception recognition methods of face image and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108280418A (en) |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109034102A (en) * | 2018-08-14 | 2018-12-18 | 腾讯科技(深圳)有限公司 | Human face in-vivo detection method, device, equipment and storage medium |
CN109344747A (en) * | 2018-09-17 | 2019-02-15 | 平安科技(深圳)有限公司 | A kind of recognition methods that distorting figure, storage medium and server |
CN109583375A (en) * | 2018-11-30 | 2019-04-05 | 中山大学 | A kind of the facial image illumination recognition methods and system of multiple features fusion |
CN109886244A (en) * | 2019-03-01 | 2019-06-14 | 北京视甄智能科技有限公司 | A kind of recognition of face biopsy method and device |
CN109948467A (en) * | 2019-02-28 | 2019-06-28 | 中国科学院深圳先进技术研究院 | Method, device, computer equipment and storage medium for face recognition |
CN110059579A (en) * | 2019-03-27 | 2019-07-26 | 北京三快在线科技有限公司 | For the method and apparatus of test alive, electronic equipment and storage medium |
CN110414437A (en) * | 2019-07-30 | 2019-11-05 | 上海交通大学 | Fusion tampered face detection analysis method and system based on convolutional neural network model |
CN110490060A (en) * | 2019-07-10 | 2019-11-22 | 特斯联(北京)科技有限公司 | A kind of security protection head end video equipment based on machine learning hardware structure |
WO2020019760A1 (en) * | 2018-07-27 | 2020-01-30 | 北京市商汤科技开发有限公司 | Living body detection method, apparatus and system, and electronic device and storage medium |
WO2020151489A1 (en) * | 2019-01-25 | 2020-07-30 | 杭州海康威视数字技术股份有限公司 | Living body detection method based on facial recognition, and electronic device and storage medium |
CN111666901A (en) * | 2020-06-09 | 2020-09-15 | 创新奇智(北京)科技有限公司 | Living body face detection method and device, electronic equipment and storage medium |
CN111767760A (en) * | 2019-04-01 | 2020-10-13 | 北京市商汤科技开发有限公司 | Living body detection method and device, electronic device and storage medium |
CN112085035A (en) * | 2020-09-14 | 2020-12-15 | 北京字节跳动网络技术有限公司 | Image processing method, image processing device, electronic equipment and computer readable medium |
CN112434647A (en) * | 2020-12-09 | 2021-03-02 | 浙江光珀智能科技有限公司 | Human face living body detection method |
CN112668453A (en) * | 2020-12-24 | 2021-04-16 | 平安科技(深圳)有限公司 | Video identification method and related equipment |
CN113051998A (en) * | 2019-12-27 | 2021-06-29 | 豪威科技股份有限公司 | Robust anti-spoofing technique using polarization cues in near infrared and visible wavelength bands in biometric identification techniques |
CN113450806A (en) * | 2021-05-18 | 2021-09-28 | 科大讯飞股份有限公司 | Training method of voice detection model, and related method, device and equipment |
CN113537173A (en) * | 2021-09-16 | 2021-10-22 | 中国人民解放军国防科技大学 | A Face Image Authenticity Recognition Method Based on Facial Patch Mapping |
CN113610071A (en) * | 2021-10-11 | 2021-11-05 | 深圳市一心视觉科技有限公司 | Face living body detection method and device, electronic equipment and storage medium |
CN113627263A (en) * | 2021-07-13 | 2021-11-09 | 支付宝(杭州)信息技术有限公司 | Exposure method, device and equipment based on face detection |
CN114627537A (en) * | 2022-03-21 | 2022-06-14 | 平安普惠企业管理有限公司 | Micro-expression recognition model training method, anti-fraud method and related equipment |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106874871A (en) * | 2017-02-15 | 2017-06-20 | 广东光阵光电科技有限公司 | A kind of recognition methods of living body faces dual camera and identifying device |
CN107358157A (en) * | 2017-06-07 | 2017-11-17 | 阿里巴巴集团控股有限公司 | A kind of human face in-vivo detection method, device and electronic equipment |
CN107368810A (en) * | 2017-07-20 | 2017-11-21 | 北京小米移动软件有限公司 | Method for detecting human face and device |
-
2018
- 2018-01-18 CN CN201810048661.XA patent/CN108280418A/en active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106874871A (en) * | 2017-02-15 | 2017-06-20 | 广东光阵光电科技有限公司 | A kind of recognition methods of living body faces dual camera and identifying device |
CN107358157A (en) * | 2017-06-07 | 2017-11-17 | 阿里巴巴集团控股有限公司 | A kind of human face in-vivo detection method, device and electronic equipment |
CN107368810A (en) * | 2017-07-20 | 2017-11-21 | 北京小米移动软件有限公司 | Method for detecting human face and device |
Non-Patent Citations (1)
Title |
---|
YOUSEF ATOUM,YAOJIE LIU ET AL.: "Face Anti-Spoofing Using Patch and Depth-Based CNNs", 《2017 IEEE INTERNATIONAL JOINT CONFERENCE ON BIOMETRICS》 * |
Cited By (35)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7026225B2 (en) | 2018-07-27 | 2022-02-25 | ベイジン センスタイム テクノロジー デベロップメント カンパニー, リミテッド | Biological detection methods, devices and systems, electronic devices and storage media |
KR20200081450A (en) * | 2018-07-27 | 2020-07-07 | 베이징 센스타임 테크놀로지 디벨롭먼트 컴퍼니 리미티드 | Biometric detection methods, devices and systems, electronic devices and storage media |
US11321575B2 (en) | 2018-07-27 | 2022-05-03 | Beijing Sensetime Technology Development Co., Ltd. | Method, apparatus and system for liveness detection, electronic device, and storage medium |
KR102391792B1 (en) * | 2018-07-27 | 2022-04-28 | 베이징 센스타임 테크놀로지 디벨롭먼트 컴퍼니 리미티드 | Biometric detection methods, devices and systems, electronic devices and storage media |
JP2021503659A (en) * | 2018-07-27 | 2021-02-12 | ベイジン センスタイム テクノロジー デベロップメント カンパニー, リミテッド | Biodetection methods, devices and systems, electronic devices and storage media |
WO2020019760A1 (en) * | 2018-07-27 | 2020-01-30 | 北京市商汤科技开发有限公司 | Living body detection method, apparatus and system, and electronic device and storage medium |
CN109034102A (en) * | 2018-08-14 | 2018-12-18 | 腾讯科技(深圳)有限公司 | Human face in-vivo detection method, device, equipment and storage medium |
CN109344747B (en) * | 2018-09-17 | 2024-01-05 | 平安科技(深圳)有限公司 | Tampered image recognition method, storage medium and server |
CN109344747A (en) * | 2018-09-17 | 2019-02-15 | 平安科技(深圳)有限公司 | Tampered image recognition method, storage medium and server |
CN109583375B (en) * | 2018-11-30 | 2021-04-06 | 中山大学 | A multi-feature fusion method and system for face image illumination recognition |
CN109583375A (en) * | 2018-11-30 | 2019-04-05 | 中山大学 | Multi-feature fusion method and system for face image illumination recognition |
WO2020151489A1 (en) * | 2019-01-25 | 2020-07-30 | 杭州海康威视数字技术股份有限公司 | Living body detection method based on facial recognition, and electronic device and storage medium |
US11830230B2 (en) | 2019-01-25 | 2023-11-28 | Hangzhou Hikvision Digital Technology Co., Ltd. | Living body detection method based on facial recognition, and electronic device and storage medium |
CN109948467A (en) * | 2019-02-28 | 2019-06-28 | 中国科学院深圳先进技术研究院 | Method, device, computer equipment and storage medium for face recognition |
CN109886244A (en) * | 2019-03-01 | 2019-06-14 | 北京视甄智能科技有限公司 | Face recognition liveness detection method and device |
CN110059579A (en) * | 2019-03-27 | 2019-07-26 | 北京三快在线科技有限公司 | Liveness detection method and apparatus, electronic device and storage medium |
CN111767760A (en) * | 2019-04-01 | 2020-10-13 | 北京市商汤科技开发有限公司 | Living body detection method and device, electronic device and storage medium |
CN110490060B (en) * | 2019-07-10 | 2020-09-11 | 特斯联(北京)科技有限公司 | Security front-end video device based on machine learning hardware architecture |
CN110490060A (en) * | 2019-07-10 | 2019-11-22 | 特斯联(北京)科技有限公司 | Security front-end video device based on machine learning hardware architecture |
CN110414437A (en) * | 2019-07-30 | 2019-11-05 | 上海交通大学 | Fusion tampered face detection analysis method and system based on convolutional neural network model |
CN113051998A (en) * | 2019-12-27 | 2021-06-29 | 豪威科技股份有限公司 | Robust anti-spoofing technique using polarization cues in near infrared and visible wavelength bands in biometric identification techniques |
CN111666901A (en) * | 2020-06-09 | 2020-09-15 | 创新奇智(北京)科技有限公司 | Live face detection method and device, electronic device and storage medium |
CN112085035A (en) * | 2020-09-14 | 2020-12-15 | 北京字节跳动网络技术有限公司 | Image processing method, image processing device, electronic equipment and computer readable medium |
CN112434647A (en) * | 2020-12-09 | 2021-03-02 | 浙江光珀智能科技有限公司 | Face liveness detection method |
WO2022134418A1 (en) * | 2020-12-24 | 2022-06-30 | 平安科技(深圳)有限公司 | Video recognition method and related device |
CN112668453B (en) * | 2020-12-24 | 2023-11-14 | 平安科技(深圳)有限公司 | Video identification method and related equipment |
CN112668453A (en) * | 2020-12-24 | 2021-04-16 | 平安科技(深圳)有限公司 | Video identification method and related equipment |
CN113450806A (en) * | 2021-05-18 | 2021-09-28 | 科大讯飞股份有限公司 | Training method of voice detection model, and related method, device and equipment |
CN113450806B (en) * | 2021-05-18 | 2022-08-05 | 合肥讯飞数码科技有限公司 | Training method of voice detection model, and related method, device and equipment |
CN113627263A (en) * | 2021-07-13 | 2021-11-09 | 支付宝(杭州)信息技术有限公司 | Exposure method, device and equipment based on face detection |
CN113627263B (en) * | 2021-07-13 | 2023-11-17 | 支付宝(杭州)信息技术有限公司 | Exposure method, device and equipment based on face detection |
CN113537173A (en) * | 2021-09-16 | 2021-10-22 | 中国人民解放军国防科技大学 | A Face Image Authenticity Recognition Method Based on Facial Patch Mapping |
CN113537173B (en) * | 2021-09-16 | 2022-03-18 | 中国人民解放军国防科技大学 | A Face Image Authenticity Recognition Method Based on Facial Patch Mapping |
CN113610071A (en) * | 2021-10-11 | 2021-11-05 | 深圳市一心视觉科技有限公司 | Face liveness detection method and device, electronic device and storage medium |
CN114627537A (en) * | 2022-03-21 | 2022-06-14 | 平安普惠企业管理有限公司 | Micro-expression recognition model training method, anti-fraud method and related equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108280418A (en) | Spoofing recognition method and device for face images | |
CN108197586B (en) | Face recognition method and device | |
US10339402B2 (en) | Method and apparatus for liveness detection | |
US10762368B2 (en) | Facial recognition-based authentication | |
US9652663B2 (en) | Using facial data for device authentication or subject identification | |
JP6650946B2 (en) | System and method for performing fingerprint-based user authentication using images captured with a mobile device | |
De Marsico et al. | Firme: Face and iris recognition for mobile engagement | |
WO2020062969A1 (en) | Action recognition method and device, and driver state analysis method and device | |
US9710691B1 (en) | Touchless fingerprint matching systems and methods | |
CN110287671B (en) | Verification method and device, electronic equipment and storage medium | |
JP2018032391A (en) | Liveness inspection method and apparatus | |
CN105678242B (en) | Focusing method and device in handheld-certificate capture mode | |
US10685251B2 (en) | Methods and systems for detecting user liveness | |
KR20120139100A (en) | Apparatus and method for security management using face recognition | |
TWI752105B (en) | Feature image acquisition method, acquisition device, and user authentication method | |
CN108197585A (en) | Recognition method and device | |
CN110612530A (en) | Method for selecting a frame for use in face processing | |
WO2016197389A1 (en) | Method and device for detecting living object, and mobile terminal | |
CN107977636B (en) | Face detection method and device, terminal and storage medium | |
CN111259757A (en) | Image-based living body identification method, device and equipment | |
US11810401B1 (en) | Methods and systems for enhancing user liveness detection | |
Ohki et al. | Efficient spoofing attack detection against unknown sample using end-to-end anomaly detection | |
Wang et al. | Enhancing QR Code System Security by Verifying the Scanner's Gripping Hand Biometric | |
US20250037509A1 (en) | System and method for determining liveness using face rotation | |
JP2012221002A (en) | User detection apparatus, user detection method, and user detection program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20180713 |