
CN105023010B - A kind of human face in-vivo detection method and system - Google Patents


Info

Publication number
CN105023010B
CN105023010B (application CN201510500115.1A)
Authority
CN
China
Prior art keywords
face
feature point
matching
images
depth information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510500115.1A
Other languages
Chinese (zh)
Other versions
CN105023010A (en)
Inventor
李卫军
师亚亭
宁欣
刘文杰
孙琳钧
董肖莉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Wave Kingdom Co ltd
Institute of Semiconductors of CAS
Original Assignee
Shenzhen Wave Kingdom Co ltd
Institute of Semiconductors of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Wave Kingdom Co ltd and Institute of Semiconductors of CAS
Priority to CN201510500115.1A
Publication of CN105023010A
Application granted
Publication of CN105023010B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a face liveness detection method and system. The method comprises the steps of: simultaneously acquiring two images of a recognition object through two cameras; locating the face regions in the two images with a face classifier to obtain a face image corresponding to each image; locating feature points in the two face images; performing fast stereo matching of the feature points on the localized face images; computing the disparity of the two face images at each matched feature point to obtain the depth value of the matched feature point; and judging, from the depth values of a plurality of matched feature points, whether the face of the recognition object is a live face. The invention requires little cooperation from the user, offers high security, strong concealment, strong pose adaptability and a wide range of application, matches quickly, and provides a good user experience.

Description

Method and system for face liveness detection

Technical Field

The present invention relates to the technical field of pattern recognition, and in particular to a method and system for face liveness detection.

Background

With the development of biometric technology, face recognition has matured: under good lighting and pose conditions, face recognition systems can already perform fairly accurate face detection and recognition. However, in systems such as access control and login, a user can deceive the system with a photograph or other illegitimate means. For such systems, the higher the recognition rate, the greater the security risk; liveness detection has therefore become a necessary safeguard for their security.

In the prior art, some face liveness detection methods used in face recognition systems require the user to cooperate by performing specific actions. Such algorithms are simple, but they easily expose the decision criteria, making them easy for an attacker to imitate, and the required actions take time, limiting the user experience. Other methods, such as blink detection and optical-flow detection, generally require the user to face the camera precisely, so their pose adaptability is poor and the user experience is likewise unsatisfactory.

Therefore, how to overcome the above shortcomings is a technical problem to be urgently solved by those skilled in the art.

Summary of the Invention

In view of this, the present invention aims to provide a face liveness detection method and system that are highly pose-adaptive and secure, require no user cooperation, run fast, and offer a good user experience.

Specifically, the face liveness detection method comprises the steps of:

S1. Simultaneously acquiring two images of the recognition object through two cameras;

S2. Locating the face regions in the two images with a face classifier to obtain a face image corresponding to each image;

S3. Locating feature points in the two face images;

S4. Performing fast stereo matching of the feature points on the localized face images;

S5. Computing the disparity of the two face images at each matched feature point to obtain the depth value of the matched feature point;

S6. Judging, from the depth values of a plurality of matched feature points, whether the face of the recognition object is a live face.

Preferably, in an embodiment of the present invention, the two cameras are arranged side by side or one above the other to form a binocular stereo vision system.

Preferably, in an embodiment of the present invention, the face classifier is based on at least one of the following: skin color and geometric characteristics, integral projection, template matching, and line connectivity.

Preferably, in an embodiment of the present invention, the feature points include all or some of the key points in at least one of the following regions: face contour, eyebrows, eyes, nose and mouth.

Preferably, in an embodiment of the present invention, feature point localization in step S3 uses at least one of the following algorithms: deep-learning-based algorithms, active shape model algorithms, active appearance model algorithms, and cascaded shape regression algorithms.

Preferably, in an embodiment of the present invention, step S4 specifically comprises: taking a feature point in one face image as a reference feature point, and searching around the corresponding feature point in the other face image using the sum of absolute differences (SAD) method to obtain the matched feature point corresponding to the reference feature point.

Preferably, in an embodiment of the present invention, the reference feature point and its matched feature point have the same vertical coordinate; or, when the vertical coordinates of a reference feature point and its corresponding matched feature point differ, the vertical coordinate of the matched feature point is forced to equal that of the corresponding reference feature point, yielding a matching point; two pixels are then taken on each side of the matching point, the matching point and these pixels are treated as candidate points for the matched feature point, the accumulated error centered on each candidate point is computed, and the candidate with the smallest accumulated error is selected as the matched feature point.

Preferably, in an embodiment of the present invention, step S6 uses at least one of the following: a support vector machine (SVM) algorithm, a feature extraction algorithm based on principal component analysis (PCA), or a linear discriminant analysis (LDA) algorithm; or, step S6 comprises:

S61. Determining the matched feature point with the smallest depth value as the minimum matched feature point, and obtaining the depth value of the minimum matched feature point;

S62. Subtracting the depth value of the minimum matched feature point from the depth value of each matched feature point to obtain the relative depth value of each matched feature point;

S63. Computing the sum, or the sum of squares, of the relative depth values of the matched feature points as a comparison value; when the comparison value is smaller than a preset threshold, the recognition object is judged to be non-live, otherwise it is judged to be live.

Preferably, in an embodiment of the present invention, after step S63 the method further comprises:

S64. For a recognition object judged live in step S63, estimating the face size from the feature point localization results and computing the face aspect ratio of the left and right face images; when the aspect ratios of both the left and right face images fall outside a preset interval, the recognition object is judged to be non-live.
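The depth comparison of steps S61-S63 and the aspect-ratio guard of step S64 can be sketched as below; the function names and the numeric thresholds are illustrative assumptions, not values fixed by the patent:

```python
# Sketch of the liveness decision of steps S61-S64.  The thresholds and
# sample depth values are made-up illustrations, not patent parameters.

def depth_liveness(depths, threshold, squared=False):
    """S61-S63: a flat surface (photo) has near-equal depths, so the sum
    of relative depths stays below the threshold and is judged non-live."""
    d_min = min(depths)                          # S61: minimum depth value
    rel = [d - d_min for d in depths]            # S62: relative depths
    score = sum(r * r for r in rel) if squared else sum(rel)  # S63
    return score >= threshold                    # True -> judged live

def aspect_ratio_ok(ratio_left, ratio_right, lo, hi):
    """S64: reject only when BOTH images' face aspect ratios fall
    outside the preset interval [lo, hi] (e.g. a bent photograph)."""
    outside_l = not (lo <= ratio_left <= hi)
    outside_r = not (lo <= ratio_right <= hi)
    return not (outside_l and outside_r)

flat_face = [100, 101, 100, 100]   # near-constant depths, like a photo
real_face = [95, 100, 103, 98]     # nose/eyes closer than the contour
```

A real face spreads its feature point depths (nose closer than the contour), so its relative-depth sum clears the threshold while a photograph's does not.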

Preferably, in an embodiment of the present invention, before step S1 the method further comprises a step: S0, calibrating the binocular stereo vision system formed by the two cameras.

Preferably, in an embodiment of the present invention, before step S2 the method further comprises a step of performing image preprocessing, including stereo rectification, on the two images.

Preferably, in an embodiment of the present invention, the stereo rectification of the images comprises: reprojecting the image planes of the two cameras according to the calibration result of step S0, so that the two images lie precisely in the same plane and their rows are exactly aligned in a frontally parallel configuration.

In another aspect, an embodiment of the present invention further provides a face liveness detection system, comprising:

two cameras, configured to simultaneously acquire two images of the recognition object;

a face classifier, configured to locate the face regions in the two images acquired by the two cameras and obtain a face image corresponding to each image;

a feature point localization unit, configured to locate feature points in the two face images;

a stereo matching unit, configured to perform fast stereo matching of feature points on the localized face images;

a computation unit, configured to compute the disparity of the two face images at each matched feature point and obtain the depth value of the matched feature point;

a processing unit, configured to judge, from the depth values of a plurality of matched feature points, whether the face of the recognition object is a live face.

In at least one solution of the present invention, two cameras simulate the human eyes: they capture two or more two-dimensional images of the user (the recognition object) from different viewpoints, which are then converted through a series of techniques into three-dimensional information in the world coordinate system. The intrinsic and extrinsic parameters of the cameras are determined by calibration; the captured two-dimensional images are then processed so that they lie in the same plane; the pixels of the left and right images are stereo-matched so that each point of the three-dimensional object is marked in both two-dimensional images; finally, following the parallax principle of the human eyes, the model is placed in an affine space and the depth value of the object is recovered by computation.

When a binocular stereo vision system photographs a user, facial features such as the nose and eyes have depths clearly smaller than the face contour, and when the head pose changes, the depth difference between the facial features and one side of the contour becomes even more pronounced. A photograph or video, by contrast, cannot exhibit such discriminative depth differences; even if a large printed image is bent so as to show a visible depth difference, it necessarily sacrifices the normal proportions of the face. By computing the depth values of different feature points in the face image and combining them with prior knowledge, it can be determined whether the user is live.

It follows that, with the solution of the present invention, face liveness detection requires little cooperation from the user, offers high security, strong concealment, strong pose adaptability and a wide range of application. Because feature point localization is used during image matching, the positions of matched feature points can be found accurately by searching only a small neighborhood, which greatly simplifies the matching process, speeds up matching and improves the user experience.

Brief Description of the Drawings

The accompanying drawings, which form a part of the present invention, are provided for a further understanding of the invention; the illustrative embodiments and their description serve to explain the invention and do not unduly limit it. In the drawings:

Fig. 1 is a schematic diagram of the steps of the face liveness detection method according to an embodiment of the present invention;

Fig. 2a is a schematic diagram of the left and right images before stereo rectification in the face liveness detection method according to an embodiment of the present invention;

Fig. 2b is a schematic diagram of the left and right images after stereo rectification in the face liveness detection method according to an embodiment of the present invention;

Fig. 3 shows the feature point localization results and the positions of the matched feature points in the right image in the face liveness detection method according to an embodiment of the present invention;

Fig. 4a is a schematic diagram of the feature point depth values of a real face in the face liveness detection method according to an embodiment of the present invention;

Fig. 4b is a schematic diagram of the feature point depth values of a face photograph in the face liveness detection method according to an embodiment of the present invention;

Fig. 5 is a schematic structural diagram of the face liveness detection system provided by an embodiment of the present invention.

Detailed Description

It should be noted that the description of specific structures in this section, and the order of that description, merely illustrate specific embodiments and should not be taken to limit the scope of protection of the present invention in any way. Moreover, where no conflict arises, the embodiments in this section and the features within them may be combined with one another.

Referring to Figs. 1 to 5, the face liveness detection method and system of the embodiments of the present invention are described in detail below with reference to the drawings.

As shown in Fig. 1, the face liveness detection method of the present invention may comprise:

Step 1: simultaneously capture two images of the recognition object (hereinafter illustrated with a user) through two cameras, i.e. a left image and a right image;

Step 2: locate the face regions in the two images with a face classifier to obtain a face image corresponding to each image;

Step 3: locate feature points in the two face images;

Step 4: perform fast stereo matching of the feature points on the localized face images;

Step 5: compute the disparity of the two face images at each matched feature point to obtain the depth value of the matched feature point;

Step 6: judge, from the depth values of a plurality of matched feature points, whether the user's face is a live face.

In a specific implementation, the two cameras may be 5-megapixel network cameras of the same model, fixed side by side to form a binocular stereo vision system. In practice, the horizontal distance between the two cameras is set so that they lie in the same plane and can acquire images of the user simultaneously. The distance between the cameras and the face is about 0.5 m under ordinary indoor lighting; both the left and right images are 640x480; face detection, feature point localization, liveness detection and the recognition system can all be implemented on a PC.

As a preferred mode, before step 1 the method of this embodiment may further comprise calibrating the binocular stereo vision system formed by the two cameras. A specific implementation flow may be:

Step 11: calibrate the two cameras separately; the calibration may specifically include the intrinsic parameter matrix and distortion vector of each camera.

Step 12: calibrate the binocular stereo vision system formed by the two cameras; the calibration may specifically include the rotation matrix and translation vector of the system.

Step 13: obtain the stereo rectification parameters and the reprojection matrix.

In this mode, step 11 may use the checkerboard calibration method; the intrinsic parameter matrix may include the horizontal focal length, vertical focal length and principal point of the camera, and the distortion vector may consist of the radial and tangential distortion coefficients.

In this mode, from the calibration result of step 11, lens distortion in the radial and tangential directions can be removed mathematically so that an undistorted image is output. Let (xp, yp) be the position of a point without distortion and (xd, yd) its distorted position; with radial distortion coefficients k1, k2, tangential distortion coefficients p1, p2 and r^2 = xd^2 + yd^2, the standard distortion model gives:

xp = xd(1 + k1*r^2 + k2*r^4) + 2*p1*xd*yd + p2*(r^2 + 2*xd^2)
yp = yd(1 + k1*r^2 + k2*r^4) + p1*(r^2 + 2*yd^2) + 2*p2*xd*yd

This yields an image free of lens distortion.

In this mode, the rotation matrix and translation vector of step 12 describe the position of the right camera relative to the left camera, expressed mathematically as XR = R*XL + T, where XL and XR are the three-dimensional position vectors of an arbitrary point P in space in the reference coordinate systems of the left and right cameras, and R and T are the rotation matrix and translation vector of the binocular stereo vision system.
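As a quick numeric illustration of the relation XR = R*XL + T, the sketch below maps a point from the left-camera frame to the right-camera frame; the rotation, baseline and point are made-up example values, not calibration output:

```python
import numpy as np

def left_to_right(X_L, R, T):
    """Map a 3-D point from the left-camera frame to the right-camera
    frame via the extrinsic relation X_R = R @ X_L + T."""
    return R @ X_L + T

# Illustrative extrinsics for a rectified side-by-side rig: identity
# rotation and a pure horizontal baseline of 60 mm (assumed values).
R = np.eye(3)
T = np.array([-60.0, 0.0, 0.0])     # baseline along x, in mm
X_L = np.array([10.0, 20.0, 500.0]) # a point 500 mm in front of the left camera
X_R = left_to_right(X_L, R, T)      # same point, right-camera coordinates
```

With an identity rotation the point simply shifts along x by the baseline, which is exactly the geometry the later disparity computation relies on.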

In this mode, step 13 may use the Bouguet algorithm to obtain the stereo rectification parameters of the binocular stereo vision system, use inverse mapping to obtain the rectification lookup tables for the left and right views, and obtain the reprojection matrix used to reproject two-dimensional image points into three dimensions.

In addition, before the face classifier of step 2 locates the face regions in the two images, the method of this embodiment may further comprise image preprocessing, including stereo rectification, of the two images. The stereo rectification may, based on the foregoing calibration results, look up the left and right rectification tables and reproject the image planes of the two cameras so that the left and right images lie precisely in the same plane and their rows are exactly aligned in a frontally parallel configuration, i.e. a given point lies on the same pixel row in both cameras. The left and right images before and after rectification are shown in Figs. 2a and 2b.

In this mode, for the face localization of step 2, the preprocessing may further apply grayscale conversion and filtering to the rectified images to obtain high-quality, frontally parallel aligned grayscale images. The face localization method may be one based on template matching: the Haar-like wavelet feature values of the left and right grayscale images are computed quickly with integral images and fed to an offline-trained Adaboost cascade classifier, which detects the face regions in the left and right images. Of course, localization may also be based on skin color and geometric characteristics, integral projection, line connectivity, or other methods.
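The integral-image trick behind the fast Haar-like feature computation mentioned above can be sketched as follows: once the integral image is built, the sum over any rectangle costs four lookups, and a two-rectangle Haar-like feature is a difference of two such sums. The tiny image and feature layout are illustrative only:

```python
# Minimal sketch of integral-image based Haar-like features (the
# mechanism used by Adaboost cascade face detectors); the 2x4 test
# image is a made-up example.

def integral_image(img):
    """Build a (h+1)x(w+1) summed-area table: ii[y][x] is the sum of
    all pixels above and to the left of (x, y)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = img[y][x] + ii[y][x + 1] + ii[y + 1][x] - ii[y][x]
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w-by-h rectangle at (x, y) in four table lookups."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect(ii, x, y, w, h):
    """Horizontal two-rectangle Haar-like feature: left half minus right half."""
    return rect_sum(ii, x, y, w // 2, h) - rect_sum(ii, x + w // 2, y, w // 2, h)

img = [[1, 1, 0, 0],
       [1, 1, 0, 0]]   # bright-left / dark-right pattern
ii = integral_image(img)
```

The feature value is large exactly where the bright/dark contrast matches the template, which is what the cascade thresholds on.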

In this mode, step 3 locates feature points in the detected face images; the feature points are the position coordinates of the face contour, eyebrows, eyes, nose and mouth. This embodiment uses a cascaded shape regression algorithm for feature point localization; in other embodiments, deep-learning-based algorithms, active shape model algorithms, active appearance model algorithms and the like may of course be used instead. A cascaded shape regression algorithm such as the two-level cascaded shape regression with shape-indexed features learns a vector regression function directly, estimating the face shape from the image itself while minimizing the shape alignment error on the training set. Combining two-level cascaded regression, shape-indexed features and correlation-based feature selection, it trains a non-parametric regression model quickly and accurately, runs extremely fast, and is an efficient, high-precision algorithm for precise face feature point localization. Its specific steps are:

Step 31: load the offline-trained two-level cascaded shape regressor based on shape-indexed features;

Step 32: randomly select L training sample shapes from the loaded regressor as initial shapes for the feature points of the face region to be localized;

Step 33: compute the estimated user face shape obtained when one of the selected training samples is used as the initial shape;

Step 34: take the average of the L estimated face shapes as the user's face shape.

Here, a shape is the vector formed by the positions of the face feature points, expressed mathematically as S = {x1, y1, x2, y2, ..., xn, yn}, where (xi, yi) are the pixel coordinates of the i-th feature point.

In this mode, step 33 specifically comprises the following flow:

Step 331: compute F grayscale difference features;

Step 332: obtain the update of the current shape, and update the current shape;

Step 333: after completing the prescribed number of regression iterations of shape updates, obtain the final estimated shape of the current user's face.

The grayscale difference features of step 331 may be obtained as follows: according to the F pairs of feature point indices and relative position information stored in the regressor, read the grayscale values of the corresponding indexed features on the user's face image and compute the grayscale difference of each pair of indexed feature points.

The shape update of step 332 may be obtained as follows: compare the F grayscale differences obtained in step 331 with the corresponding thresholds to form an F-bit binary value, and look up the update shape corresponding to this binary value in the trained regressor.

The specific update rule of step 332 may be:

Si = Si-1 + δS

where Si-1 is the estimate preceding the current estimated shape and δS is the update shape.
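The lookup-and-update loop of steps 331-333 can be sketched as below; the "regressor" here is a toy table keyed by the F-bit binary code of grayscale differences, an illustrative stand-in for the trained model rather than the patent's regressor:

```python
# Sketch of the cascaded shape regression update S_i = S_{i-1} + dS.
# Each stage holds (grayscale diffs, thresholds, lookup table); all
# structures below are made-up toy data, not a trained model.

def binary_code(gray_diffs, thresholds):
    """Step 332a: compare the F grayscale differences against their
    thresholds to form an F-bit binary code (bit f set when the f-th
    difference exceeds its threshold)."""
    code = 0
    for f, (d, t) in enumerate(zip(gray_diffs, thresholds)):
        if d > t:
            code |= 1 << f
    return code

def regress_shape(shape, stages):
    """Steps 332b-333: at each stage, look up the shape update dS for
    the observed binary code and add it to the current estimate."""
    for gray_diffs, thresholds, table in stages:
        dS = table[binary_code(gray_diffs, thresholds)]
        shape = [s + d for s, d in zip(shape, dS)]   # S_i = S_{i-1} + dS
    return shape
```

In the real algorithm the grayscale differences are re-sampled from the image at shape-indexed positions each iteration; here they are fixed inputs to keep the sketch self-contained.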

In addition, the fast stereo matching of feature points in step 4 may use a small sum of absolute differences (SAD) window to find matched feature points between the two rectified images. With each feature point in the left image as a reference point, an SAD sliding window searches for a match around the corresponding feature point in the right image; the point with the smallest accumulated absolute error is the matched feature point, in the right image, of the left-image feature point.

In this mode, the accumulated absolute error at a given position of the SAD sliding window during search matching may be computed as:

SAD(x, y) = Σ(i,j in window) | IL(xL + i, yL + j) − IR(x + i, y + j) |

where IL and IR are the left and right grayscale images, (xL, yL) is the reference feature point in the left image, and (x, y) is the current window center in the right image.

Because step 3 has already localized the face feature points of the left and right images accurately, and the image size is 640x480, an SAD sliding window of a predetermined size (e.g. 5x5) is used, and the matching search is performed within a small window of a predetermined number of pixels (e.g. 5) centered on the corresponding right-image feature point.

In this approach, the matched feature points obtained after the fast stereo matching of step 4 may be required to have the same ordinate. When the ordinates of a pair of matched feature points differ, the ordinate of the matched feature point in the right image may be forced equal to the ordinate of the corresponding reference feature point in the left image, giving a matching point M′. Two pixels are then taken on each side of M′, and these five points are used as candidate points for M′; the accumulated error centered on each candidate point is computed, and the candidate with the smallest accumulated error is selected as the matching point. Figure 3 shows the positional relationship between the feature point localization results and the matched feature points in the right image; as the figure shows, the matched feature point lies near the localized feature point, which saves matching time.
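The row-forcing refinement can be sketched in the same spirit. This is a minimal sketch; the SAD helper and the synthetic image pair are illustrative stand-ins, not the patent's implementation:

```python
def sad(left, right, lx, ly, rx, ry, half=2):
    """Accumulated absolute error between two (2*half+1) x (2*half+1) windows."""
    return sum(abs(left[ly + j][lx + i] - right[ry + j][rx + i])
               for j in range(-half, half + 1)
               for i in range(-half, half + 1))

def refine_match(left, right, ref, match):
    """If the matched point's ordinate differs from the reference point's,
    force it equal (point M'), take two pixels on each side of M', and keep
    the candidate with the smallest accumulated error."""
    (lx, ly), (mx, my) = ref, match
    if my == ly:
        return match                      # ordinates already agree
    candidates = [(mx + dx, ly) for dx in (-2, -1, 0, 1, 2)]
    return min(candidates, key=lambda p: sad(left, right, lx, ly, p[0], p[1]))

# Synthetic rectified pair: the right view is the left view shifted by 2 px.
left_img = [[(7 * x + 13 * y) % 251 for x in range(16)] for y in range(16)]
right_img = [[(7 * (x - 2) + 13 * y) % 251 for x in range(16)] for y in range(16)]
```

Given a reference point (8, 8) and a raw match on the wrong row, such as (10, 9), the refinement forces the row back to 8 and re-selects among the five candidates.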

In addition, step 5 may include:

Step 51: calculating the disparity of the left and right face images at each matched feature point;

Step 52: obtaining the depth information value of the feature point from its disparity.

The disparity in step 51 may be the horizontal (abscissa) distance between the left and right matched feature points.

The depth information value of a matched feature point in step 52 may be obtained from the principle of similar triangles: for a rectified pair, the depth is Z = f·T/d, where T is the distance between the projection centers of the two cameras, f is the focal length of the two cameras of the same model, and d is the disparity of the matched feature points. In the present invention, two cameras of the same model are used, and T and f are calibrated in advance. Given the disparity d and a two-dimensional image point (x, y), the reprojection matrix Q from step 1 can be used to compute the three-dimensional coordinates corresponding to the two-dimensional image point:

Q · (x, y, d, 1)ᵀ = (X, Y, Z, W)ᵀ

so that (X/W, Y/W, Z/W) are the three-dimensional coordinates of the point and Z/W is the depth of the two-dimensional image point.
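Both the similar-triangles depth and the reprojection through Q can be sketched as follows. The Q shown is the standard rectified-stereo reprojection matrix (an assumed form; sign conventions vary between libraries), and f, T, and the principal point (cx, cy) are illustrative values, not calibration results from the patent:

```python
def depth_from_disparity(f, T, d):
    """Similar triangles for a rectified pair: Z = f * T / d."""
    return f * T / d

def reproject(Q, x, y, d):
    """Compute Q * (x, y, d, 1)^T and dehomogenize to get (X, Y, Z)."""
    v = [x, y, d, 1.0]
    X, Y, Z, W = (sum(Q[r][c] * v[c] for c in range(4)) for r in range(4))
    return (X / W, Y / W, Z / W)

# Assumed rectified-stereo reprojection matrix built from focal length f,
# baseline T, and principal point (cx, cy):
f, T, cx, cy = 800.0, 0.06, 320.0, 240.0
Q = [[1.0, 0.0, 0.0, -cx],
     [0.0, 1.0, 0.0, -cy],
     [0.0, 0.0, 0.0, f],
     [0.0, 0.0, 1.0 / T, 0.0]]

point = reproject(Q, 320.0, 240.0, 40.0)  # 40 px disparity at the principal point
```

With these numbers the third coordinate of `point` coincides with `depth_from_disparity(f, T, 40.0)`, i.e. the two routes to the depth agree.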

In addition, judging whether the face is a living body according to the depth information values of matched feature points at different parts, as described in step 6, may comprise the following specific steps:

Step 61: determining the matched feature point with the smallest depth information value as the minimum matched feature point, and obtaining its depth information value;

Step 62: subtracting the depth information value of the minimum matched feature point from the depth information value of each matched feature point, to obtain the relative depth information value of each matched feature point;

Step 63: computing the sum of squares (or the sum) of the relative depth information values of the matched feature points and comparing it with a preset threshold; if the value is smaller than the threshold, the recognition object is judged to be a non-living body, and otherwise a living body.
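Steps 61 to 63 amount to a flatness test on the relative depths. A minimal sketch follows; the depth values and the threshold are illustrative placeholders, not measured data:

```python
def is_live(depths, threshold):
    """Steps 61-63: subtract the minimum depth from every matched feature
    point's depth and compare the sum of squares of the relative depths
    with a preset threshold; flat (photo-like) faces fall below it."""
    d_min = min(depths)
    relative = [d - d_min for d in depths]
    score = sum(r * r for r in relative)
    return score >= threshold

# Illustrative depth values (metres), not from the patent:
real_face = [1.200, 1.195, 1.180, 1.210, 1.230]   # varied facial relief
photo_face = [1.200, 1.201, 1.200, 1.199, 1.200]  # nearly planar
```

With a threshold of, say, 1e-4, the varied-relief set passes as living while the nearly planar set is rejected.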

When step 6 is carried out, the matched-feature-point depth information values obtained with the user's real face (a living body) and with a photograph of the user's face as the recognition object can be seen in Figures 4a and 4b: Figure 4a is a schematic diagram of the feature point depth information values of a real face in the face liveness detection method of this embodiment of the invention, and Figure 4b is the corresponding schematic diagram for a face photograph. Comparing Figures 4a and 4b, the depth information values vary markedly across matched feature points when the recognition object is the user's real face, whereas the variation is comparatively slight when the recognition object is a photograph of the user's face.

Step 64: estimating the face size from the feature point localization results and calculating the face aspect ratio of each of the two face images; when the face aspect ratios of both the left and right face images fall outside a preset interval, the recognition object is judged to be a non-living body.

The purpose of step 64 is to further screen recognition objects that have already been judged to be living bodies in step 63, thereby improving the recognition rate and preventing fraudulent authentication.

In practical applications, it is possible to pass the judgment of step 63 by bending a large photograph (close in size to a real face); that is, step 63 can potentially be spoofed. To prevent this kind of fraudulent authentication, the embodiment of the present invention further includes a face aspect ratio judgment step, specifically:

For a user already judged to be a living body in step 63, this step calculates the face aspect ratio of each of the left and right face images, and judges whether the recognition object is a bent large photograph by checking whether the aspect ratio falls outside the preset interval.
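A sketch of the aspect-ratio check follows; the interval bounds `lo` and `hi` are illustrative placeholders, since the patent only speaks of a preset interval:

```python
def aspect_ratio_ok(points, lo=1.2, hi=1.6):
    """Step 64 sketch: estimate face height/width from the bounding box of
    the localized feature points and check it against a preset interval."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    return lo <= height / width <= hi

def is_bent_photo(left_pts, right_pts):
    """Judge non-living only when the ratios of BOTH images leave the interval."""
    return not aspect_ratio_ok(left_pts) and not aspect_ratio_ok(right_pts)
```

A face box 100 px wide and 140 px tall (ratio 1.4) passes the illustrative interval; a box stretched to 200 px tall (ratio 2.0) does not, and only when both views are out of range is the object flagged.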

In practical applications, after the first and second judgments that the face aspect ratio falls outside the preset interval, the method may return to step 1 to re-acquire images and judge again; only when the measurement is out of range three times (or some other predetermined number of times) is the object judged to be a non-living body. Judging multiple times in this way avoids misjudgments, and this constraint prevents fraudulent authentication by bending a large photograph. If the object is finally judged to be a living body, the method proceeds to the face recognition stage.
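The repeated-judgment constraint can be sketched as a simple retry loop; here `ratio_out_of_range` is a stand-in callback for one full acquire-and-judge pass (steps 1 through 6 plus the aspect-ratio check), and the default of three retries follows the text:

```python
def judge_with_retries(ratio_out_of_range, retries=3):
    """Declare the object non-living only when the aspect ratio falls outside
    the preset interval on `retries` consecutive acquisitions; any in-range
    measurement passes the object on to face recognition."""
    for _ in range(retries):
        if not ratio_out_of_range():
            return "live"
    return "non-live"
```

A callback that is out of range every time yields "non-live"; one in-range measurement at any point yields "live" without consuming the remaining retries.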

In addition, over the whole process, the binocular stereo vision system was used to capture 200 images each of real users and of face photographs. During capture the users continuously varied their pose, expression, and position relative to the cameras, and the face photographs were laid flat, moved nearer and farther, bent, and so on. Experiments with both 6-inch and 12-inch photographs distinguished living users from photographs with complete accuracy.

It should be noted that the solutions of the above embodiments apply not only to binocular stereo vision systems whose cameras are fixed side by side, but also to systems whose cameras are fixed one above the other (the two cameras are arranged vertically, with the vertical separation set so that the two cameras lie in the same plane and can acquire user images simultaneously). For the vertical arrangement, the aforementioned left image corresponds to the upper image, the right image to the lower image, the abscissa to the ordinate, and the ordinate to the abscissa.

It should be noted that the classification method of step 6 — judging whether the face is a living body according to the depth information values of matched feature points at different parts — is not limited to the scheme above; methods such as support vector machines (SVM), feature extraction based on principal component analysis (PCA), and linear discriminant analysis (LDA) may also be used, as long as the corresponding function can be realized.

It should be noted that the image preprocessing operation may select different preprocessing methods according to the different modes of the face classifier.

After the face liveness detection method of the embodiment of the present invention is applied, two cameras can be used to simulate human eyes: the two cameras capture two or more two-dimensional images of the user from different angles, which are finally converted, through a series of techniques, into three-dimensional physical information in the world coordinate system. The intrinsic and extrinsic parameters of the cameras are determined through camera calibration; the captured two-dimensional images are then processed so that they lie in the same plane; the pixels of the left and right images are stereo-matched, marking the points of the three-dimensional object in the two two-dimensional images; and, following the parallax principle of the biomimetic human eye, the model is placed in an affine space and computationally reconstructed to obtain the depth information values of the object.

When a binocular stereo vision system photographs a user, the measured depth of facial organs such as the nose and eyes is clearly less than that of the face contour, and when the head pose changes the depth difference between the facial organs and one side of the contour becomes even more pronounced. Images and videos, by contrast, cannot exhibit such discriminative depth differences, and even if a large image can exhibit clear depth differences when bent, it necessarily sacrifices the normal facial proportions. By computing the depth information values of different matched feature points in the face image and combining them with prior knowledge, it can be judged whether the user is a living body. As can be seen from the above, compared with the prior art, the solution of the embodiment of the present invention not only adapts well to pose variation and offers high security, but also requires no user cooperation, runs fast, and provides a good user experience.

In addition, as shown in Figure 5, an embodiment of the present invention further provides a face liveness detection system, which may include two cameras, a face classifier, a feature point localization unit, a stereo matching unit, a calculation unit, and a processing unit. The two cameras acquire images of the user respectively; the face classifier locates the face regions in the two images acquired by the cameras; the feature point localization unit performs feature point localization on the face images obtained by the face classifier; the stereo matching unit performs fast stereo matching of feature points on the localized face images; the calculation unit computes the depth information values of the matched feature points in the face images; and the processing unit judges whether the user's face is a living body according to the depth information values of matched feature points at different parts.

In a specific implementation, the functions of the above units and of the face classifier are not limited to being realized on a PC, as long as the corresponding functions of these parts can be realized. For other extensions and explanations of the face liveness detection system, refer to the related description of the method embodiment, which is not repeated here; the system likewise achieves the corresponding technical effects.

Those of ordinary skill in the art will understand that some of the steps/units/modules of the above embodiments may be implemented by hardware under the direction of program instructions. The aforementioned program may be stored in a computer-readable storage medium and, when executed, performs the steps corresponding to the units of the above embodiments. The aforementioned storage medium includes various media capable of storing program code, such as ROM, RAM, magnetic disks, or optical disks.

The specific embodiments described above further explain in detail the purpose, technical solutions, and beneficial effects of the present invention. It should be understood that the above are merely specific embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (13)

1. A face liveness detection method, characterized in that the method comprises the steps of:
S1. simultaneously acquiring two images of a recognition object through two cameras, respectively;
S2. locating the face regions in the two images with a face classifier, obtaining face images corresponding to the two images respectively;
S3. performing feature point localization on the two face images;
S4. performing fast stereo matching of feature points on the face images after feature point localization;
S5. calculating the disparity of the two face images at the matched feature points, and obtaining the depth information values of the matched feature points;
S6. judging, according to the depth information values of a plurality of matched feature points, whether the face of the recognition object is a living body;
wherein step S6 comprises:
S61. determining the matched feature point with the smallest depth information value as the minimum matched feature point, and obtaining the depth information value of the minimum matched feature point;
S62. subtracting the depth information value of the minimum matched feature point from the depth information value of each matched feature point, respectively, to obtain the relative depth information value of each matched feature point;
S63. calculating the sum or the sum of squares of the relative depth information values of the matched feature points as a comparison value; when the comparison value is smaller than a preset threshold, judging the recognition object to be a non-living body, and otherwise judging it to be a living body.

2. The face liveness detection method of claim 1, characterized in that the two cameras are arranged side by side or one above the other to form a binocular stereo vision system.

3. The face liveness detection method of claim 1, characterized in that the face classifier is based on at least one of the following: skin color and geometric characteristics, integral projection, template matching, and line connectivity.

4. The face liveness detection method of claim 1, characterized in that the feature points include all or some of the key points located in at least one of the following regions: face contour, eyebrows, eyes, nose, and mouth.

5. The face liveness detection method of claim 1, characterized in that, in step S3, the feature points are localized with at least one of the following: deep-learning algorithms, active shape model algorithms, active appearance model algorithms, and cascaded shape regression algorithms.

6. The face liveness detection method of claim 1, characterized in that step S4 specifically comprises: taking a feature point in one face image as a reference feature point, and obtaining the matched feature point corresponding to the reference feature point around the corresponding feature point in the other face image by sum-of-absolute-differences (SAD) accumulation.

7. The face liveness detection method of claim 6, characterized in that the ordinate of the reference feature point is the same as that of the matched feature point; or, when the ordinates of a reference feature point and its corresponding matched feature point differ, the ordinate of the matched feature point is forced equal to the ordinate of its corresponding reference feature point to obtain a matching point, two pixels are taken on each side of the matching point, the matching point and these pixels are used as candidate points for the matched feature point, the accumulated error centered on each candidate point is calculated, and the candidate point with the smallest accumulated error is selected as the matched feature point.

8. The face liveness detection method of claim 1, characterized in that step S6 adopts at least one of the following: a support vector machine (SVM) algorithm, a feature extraction algorithm based on principal component analysis (PCA), and a linear discriminant analysis (LDA) algorithm.

9. The face liveness detection method of claim 8, characterized in that, after step S63, the method further comprises:
S64. for a recognition object judged to be a living body in step S63, estimating the face size according to the feature point localization results, and calculating the face aspect ratios of the left and right face images; when the face aspect ratios of both face images fall outside a preset interval, judging the recognition object to be a non-living body.

10. The face liveness detection method of any one of claims 2 to 9, characterized in that, before step S1, the method further comprises the step of: S0. calibrating the binocular stereo vision system formed by the two cameras.

11. The face liveness detection method of claim 10, characterized in that, before step S2, the method further comprises the step of: performing image preprocessing operations, including stereo rectification, on the two images.

12. The face liveness detection method of claim 11, characterized in that the stereo rectification of the images comprises: reprojecting the image planes of the two cameras according to the calibration result of step S0, so that the two images lie exactly in the same plane and the rows of the two images are exactly aligned in a frontal parallel configuration.

13. A face liveness detection system, characterized by comprising:
two cameras for simultaneously acquiring two images of a recognition object, respectively;
a face classifier for locating the face regions in the two images acquired by the two cameras, obtaining face images corresponding to the two images respectively;
a feature point localization unit for performing feature point localization on the two face images;
a stereo matching unit for performing fast stereo matching of feature points on the face images after feature point localization;
a calculation unit for calculating the disparity of the two face images at the matched feature points and obtaining the depth information values of the matched feature points; and
a processing unit for judging, according to the depth information values of a plurality of matched feature points, whether the face of the recognition object is a living body;
wherein the processing unit is further configured to:
determine the matched feature point with the smallest depth information value as the minimum matched feature point, and obtain the depth information value of the minimum matched feature point;
subtract the depth information value of the minimum matched feature point from the depth information value of each matched feature point, respectively, to obtain the relative depth information value of each matched feature point; and
calculate the sum or the sum of squares of the relative depth information values of the matched feature points as a comparison value; when the comparison value is smaller than a preset threshold, judge the recognition object to be a non-living body, and otherwise judge it to be a living body.
CN201510500115.1A 2015-08-17 2015-08-17 A kind of human face in-vivo detection method and system Active CN105023010B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510500115.1A CN105023010B (en) 2015-08-17 2015-08-17 A kind of human face in-vivo detection method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510500115.1A CN105023010B (en) 2015-08-17 2015-08-17 A kind of human face in-vivo detection method and system

Publications (2)

Publication Number Publication Date
CN105023010A CN105023010A (en) 2015-11-04
CN105023010B true CN105023010B (en) 2018-11-06

Family

ID=54412965

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510500115.1A Active CN105023010B (en) 2015-08-17 2015-08-17 A kind of human face in-vivo detection method and system

Country Status (1)

Country Link
CN (1) CN105023010B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI734454B (en) 2020-04-28 2021-07-21 鴻海精密工業股份有限公司 Identity recognition device and identity recognition method
TWI740143B (en) 2019-05-16 2021-09-21 立普思股份有限公司 Face recognition system for three-dimensional living body recognition, three-dimensional living body recognition method and storage medium

Families Citing this family (78)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105335722B (en) * 2015-10-30 2021-02-02 商汤集团有限公司 Detection system and method based on depth image information
CN105243376A (en) * 2015-11-06 2016-01-13 北京汉王智远科技有限公司 Living body detection method and device
CN108020158A (en) * 2016-11-04 2018-05-11 浙江大华技术股份有限公司 A kind of three-dimensional position measuring method and device based on ball machine
EP3374967B1 (en) * 2015-11-11 2023-01-04 Zhejiang Dahua Technology Co., Ltd Methods and systems for binocular stereo vision
CN105512637A (en) * 2015-12-22 2016-04-20 联想(北京)有限公司 Image processing method and electric device
CN105740779B (en) * 2016-01-25 2020-11-13 北京眼神智能科技有限公司 Method and device for detecting living human face
CN105740780B (en) * 2016-01-25 2020-07-28 北京眼神智能科技有限公司 Method and device for detecting living human face
CN105574518B (en) * 2016-01-25 2020-02-21 北京眼神智能科技有限公司 Method and device for face liveness detection
CN105740775B (en) * 2016-01-25 2020-08-28 北京眼神智能科技有限公司 A three-dimensional face living body recognition method and device
CN105956518A (en) * 2016-04-21 2016-09-21 腾讯科技(深圳)有限公司 Face identification method, device and system
CN105868733A (en) * 2016-04-21 2016-08-17 腾讯科技(深圳)有限公司 Face in-vivo validation method and device
CN105930710B (en) * 2016-04-22 2019-11-12 北京旷视科技有限公司 Living body detection method and apparatus
CN107346410B (en) * 2016-05-05 2020-03-06 杭州海康威视数字技术股份有限公司 Picture processing method and device
CN105912912B (en) * 2016-05-11 2018-12-18 青岛海信电器股份有限公司 A kind of terminal user ID login method and system
CN106599660A (en) * 2016-12-02 2017-04-26 宇龙计算机通信科技(深圳)有限公司 Terminal safety verification method and terminal safety verification device
CN106682607A (en) * 2016-12-23 2017-05-17 山东师范大学 Offline face recognition system and offline face recognition method based on low-power-consumption embedded and infrared triggering
CN106897675B (en) * 2017-01-24 2021-08-17 上海交通大学 A face detection method based on the combination of binocular visual depth feature and apparent feature
CN107169405B (en) * 2017-03-17 2020-07-03 上海云从企业发展有限公司 Method and device for living body identification based on binocular camera
CN106998332B (en) * 2017-05-08 2020-06-30 深圳市牛鼎丰科技有限公司 Secure login method and device, storage medium and computer equipment
CN107368778A (en) * 2017-06-02 2017-11-21 深圳奥比中光科技有限公司 Method for catching, device and the storage device of human face expression
CN109255282B (en) * 2017-07-14 2021-01-05 深圳荆虹科技有限公司 Biological identification method, device and system
CN107273875A (en) * 2017-07-18 2017-10-20 广东欧珀移动通信有限公司 Face living body detection method and related product
CN107274508A (en) * 2017-07-26 2017-10-20 南京多伦科技股份有限公司 A kind of vehicle-mounted timing have the records of distance by the log terminal and using the terminal recognition methods
US20190034857A1 (en) * 2017-07-28 2019-01-31 Nuro, Inc. Food and beverage delivery system on autonomous and semi-autonomous vehicle
CN108875331B (en) * 2017-08-01 2022-08-19 北京旷视科技有限公司 Face unlocking method, device and system and storage medium
US10867161B2 (en) * 2017-09-06 2020-12-15 Pixart Imaging Inc. Auxiliary filtering device for face recognition and starting method for electronic device
CN107590463A (en) 2017-09-12 2018-01-16 广东欧珀移动通信有限公司 Face identification method and Related product
CN107862244A (en) * 2017-09-21 2018-03-30 曙光信息产业(北京)有限公司 A kind of face identification method and device based on wearable intelligent glasses
CN109558764B (en) * 2017-09-25 2021-03-16 杭州海康威视数字技术股份有限公司 Face recognition method and device and computer equipment
CN108875508B (en) * 2017-11-23 2021-06-29 北京旷视科技有限公司 Living body detection algorithm updating method, device, client, server and system
CN107958236B (en) * 2017-12-28 2021-03-19 深圳市金立通信设备有限公司 Face recognition sample image generation method and terminal
CN108229375B (en) * 2017-12-29 2022-02-08 百度在线网络技术(北京)有限公司 Method and device for detecting face image
CN109993024A (en) * 2017-12-29 2019-07-09 技嘉科技股份有限公司 Authentication device, authentication method, and computer-readable storage medium
CN108389053B (en) * 2018-03-19 2021-10-29 广州逗号智能零售有限公司 Payment method, payment device, electronic equipment and readable storage medium
CN108876835A (en) * 2018-03-28 2018-11-23 北京旷视科技有限公司 Depth information detection method, device and system and storage medium
WO2019196074A1 (en) * 2018-04-12 2019-10-17 深圳阜时科技有限公司 Electronic device and facial recognition method therefor
US10956714B2 (en) * 2018-05-18 2021-03-23 Beijing Sensetime Technology Development Co., Ltd Method and apparatus for detecting living body, electronic device, and storage medium
CN108764091B (en) * 2018-05-18 2020-11-17 北京市商汤科技开发有限公司 Living body detection method and apparatus, electronic device, and storage medium
CN108960097B (en) * 2018-06-22 2021-01-08 维沃移动通信有限公司 Method and device for obtaining face depth information
CN109035516A (en) * 2018-07-25 2018-12-18 深圳市飞瑞斯科技有限公司 Control method, apparatus, equipment and the storage medium of smart lock
CN109241832B (en) * 2018-07-26 2021-03-30 维沃移动通信有限公司 A method and terminal device for face liveness detection
CN109086718A (en) * 2018-08-02 2018-12-25 深圳市华付信息技术有限公司 Biopsy method, device, computer equipment and storage medium
CN109190528B (en) * 2018-08-21 2021-11-30 厦门美图之家科技有限公司 Living body detection method and device
CN109376595B (en) * 2018-09-14 2023-06-23 杭州宇泛智能科技有限公司 Monocular RGB camera living body detection method and system based on human eye attention
CN109299696B (en) * 2018-09-29 2021-05-18 成都臻识科技发展有限公司 Face detection method and device based on double cameras
CN109492551B (en) * 2018-10-25 2023-03-24 腾讯科技(深圳)有限公司 Living body detection method and device and related system applying living body detection method
CN109325472B (en) * 2018-11-01 2022-05-27 四川大学 A face detection method based on depth information
CN111382607B (en) * 2018-12-28 2024-06-25 北京三星通信技术研究有限公司 Living body detection method, living body detection device and face authentication system
CN109447049B (en) * 2018-12-28 2020-07-31 豪威科技(武汉)有限公司 Light source quantitative design method and stereoscopic vision system
CN111444744A (en) * 2018-12-29 2020-07-24 北京市商汤科技开发有限公司 Living body detection method, living body detection device, and storage medium
CN111383255B (en) * 2018-12-29 2024-04-12 北京市商汤科技开发有限公司 Image processing method, device, electronic equipment and computer readable storage medium
CN109753930B (en) * 2019-01-03 2021-12-24 京东方科技集团股份有限公司 Face detection method and face detection system
CN109688400A (en) * 2019-01-04 2019-04-26 Oppo广东移动通信有限公司 Electronic equipment and mobile platform
CN109815669A (en) * 2019-01-14 2019-05-28 平安科技(深圳)有限公司 Authentication method and server based on face recognition
CN110008820A (en) * 2019-01-30 2019-07-12 广东世纪晟科技有限公司 Silent in-vivo detection method
CN109948439B (en) * 2019-02-13 2023-10-31 平安科技(深圳)有限公司 Living body detection method, living body detection system and terminal equipment
EP3938953A4 (en) * 2019-03-12 2022-12-28 Element, Inc. Detecting spoofing of facial recognition with mobile devices
CN109887016B (en) * 2019-03-25 2021-04-20 北京奇艺世纪科技有限公司 Similarity calculation method and device
CN110298230A (en) * 2019-05-06 2019-10-01 深圳市华付信息技术有限公司 Silent biopsy method, device, computer equipment and storage medium
CN110095819A (en) * 2019-05-16 2019-08-06 安徽天帆智能科技有限责任公司 A kind of water survival system and lifesaving method based on In vivo detection technology
CN110321793A (en) * 2019-05-23 2019-10-11 平安科技(深圳)有限公司 Check enchashment method, apparatus, equipment and computer readable storage medium
CN110363111B (en) * 2019-06-27 2023-08-25 平安科技(深圳)有限公司 Face living body detection method, device and storage medium based on lens distortion principle
CN110458069A (en) * 2019-08-02 2019-11-15 深圳市华方信息产业有限公司 A kind of method and system based on face recognition Added Management user's on-line study state
CN111932754B (en) * 2019-08-19 2021-12-28 北京戴纳实验科技有限公司 Laboratory entrance guard verification system and verification method
CN110942032B (en) * 2019-11-27 2022-07-15 深圳市商汤科技有限公司 Living body detection method and device, and storage medium
CN111160178B (en) * 2019-12-19 2024-01-12 深圳市商汤科技有限公司 Image processing method and device, processor, electronic equipment and storage medium
CN111046845A (en) * 2019-12-25 2020-04-21 上海骏聿数码科技有限公司 Living body detection method, device and system
CN111160233B (en) * 2019-12-27 2023-04-18 中国科学院苏州纳米技术与纳米仿生研究所 Human face in-vivo detection method, medium and system based on three-dimensional imaging assistance
CN111652086B (en) * 2020-05-15 2022-12-30 汉王科技股份有限公司 Face living body detection method and device, electronic equipment and storage medium
CN111898553B (en) * 2020-07-31 2022-08-09 成都新潮传媒集团有限公司 Method and device for distinguishing virtual image personnel and computer equipment
CN112232109B (en) * 2020-08-31 2024-06-04 奥比中光科技集团股份有限公司 Living body face detection method and system
CN112347904B (en) * 2020-11-04 2023-08-01 杭州锐颖科技有限公司 Living body detection method, device and medium based on binocular depth and picture structure
CN114596599A (en) * 2020-11-20 2022-06-07 中移动信息技术有限公司 A face recognition living body detection method, device, equipment and computer storage medium
CN112414559B (en) * 2021-01-25 2021-04-20 湖南海讯供应链有限公司 Living body non-contact temperature measurement system and method
CN112801038B (en) * 2021-03-02 2022-07-22 重庆邮电大学 A multi-view face liveness detection method and system
CN114429568A (en) * 2022-01-28 2022-05-03 北京百度网讯科技有限公司 Image processing method and device
JP7457991B1 (en) * 2023-03-14 2024-03-29 有限会社バラエティーエム・ワン Impersonation detection system and impersonation detection program
CN116798130A (en) * 2023-06-15 2023-09-22 广州朗国电子科技股份有限公司 Face anti-counterfeiting method, device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103530599A (en) * 2013-04-17 2014-01-22 Tcl集团股份有限公司 Method and system for distinguishing real face and picture face
CN103824087A (en) * 2012-11-16 2014-05-28 广州三星通信技术研究有限公司 Detection positioning method and system of face characteristic points
CN103927747A (en) * 2014-04-03 2014-07-16 北京航空航天大学 Face matching space registration method based on human face biological characteristics

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
JP3977776B2 (en) * 2003-03-13 2007-09-19 株式会社東芝 Stereo calibration device and stereo image monitoring device using the same

Cited By (2)

Publication number Priority date Publication date Assignee Title
TWI740143B (en) 2019-05-16 2021-09-21 立普思股份有限公司 Face recognition system for three-dimensional living body recognition, three-dimensional living body recognition method and storage medium
TWI734454B (en) 2020-04-28 2021-07-21 鴻海精密工業股份有限公司 Identity recognition device and identity recognition method

Also Published As

Publication number Publication date
CN105023010A (en) 2015-11-04

Similar Documents

Publication Publication Date Title
CN105023010B (en) A kind of human face in-vivo detection method and system
CN109558764B (en) Face recognition method and device and computer equipment
CN104933389B (en) Finger vein-based identification method and device
CN107609383B (en) 3D face identity authentication method and device
CN107748869B (en) 3D face identity authentication method and device
CN107633165B (en) 3D face identity authentication method and device
CN109271950B (en) Face living body detection method based on mobile phone forward-looking camera
US10262190B2 (en) Method, system, and computer program product for recognizing face
CN109670390B (en) Living face recognition method and system
CN106355147A (en) Acquiring method and detecting method of live face head pose detection regression apparatus
CN104915656B (en) A kind of fast human face recognition based on Binocular vision photogrammetry technology
CN109840565A (en) A kind of blink detection method based on eye contour feature point aspect ratio
CN106897675A (en) The human face in-vivo detection method that binocular vision depth characteristic is combined with appearance features
Medioni et al. Identifying noncooperative subjects at a distance using face images and inferred three-dimensional face models
CN108182397B (en) Multi-pose multi-scale human face verification method
CN110909693A (en) 3D face living body detection method and device, computer equipment and storage medium
CN109583304A (en) A kind of quick 3D face point cloud generation method and device based on structure optical mode group
CN110598556A (en) Human body shape and posture matching method and device
ES3012985T3 (en) Method, system and computer program product for point of gaze estimation
CN109376518A (en) Method and related equipment for preventing privacy leakage based on face recognition
CN110956114A (en) Face living body detection method, device, detection system and storage medium
CN105608448B (en) A method and device for extracting LBP features based on facial key points
CN112257641A (en) Face recognition living body detection method
CN110909634A (en) Visible light and double infrared combined rapid in vivo detection method
CN104200200A (en) System and method for realizing gait recognition by virtue of fusion of depth information and gray-scale information

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100083 Beijing Haidian District Tsinghua East Road 35, Chinese Academy of Sciences Semiconductor Research Institute 1 building 432

Applicant after: Institute of Semiconductors, Chinese Academy of Sciences

Applicant after: Shenzhen Weifu Vision Co., Ltd.

Address before: 100083 Beijing Haidian District Tsinghua East Road 35, Chinese Academy of Sciences Semiconductor Research Institute 1 building 432

Applicant before: Institute of Semiconductors, Chinese Academy of Sciences

Applicant before: SHENZHEN WEIFU SECURITY CO., LTD.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant