
CN105243386A - Face liveness judgment method and system


Info

Publication number: CN105243386A (granted as CN105243386B)
Application number: CN201410327039.4A
Authority: CN (China)
Prior art keywords: eye, face, eye image, image, feature
Legal status: Active (granted)
Inventors: 黄磊 (Huang Lei), 蔡利君 (Cai Lijun)
Original assignee: Hanwang Technology Co Ltd
Priority/filing date: 2014-07-10

Landscapes

  • Image Analysis (AREA)

Abstract

One aspect of the present invention provides a face liveness judgment method for distinguishing a real face from a spoofed face, comprising: acquiring a certain number of face images; acquiring the eye image of each face image and the features of that eye image; performing gaze prediction from the acquired eye image features and a gaze estimation model to obtain a corresponding number of gaze positions; quantizing the gaze positions against a gaze dictionary to build a gaze histogram; and computing the information entropy of the gaze histogram to make the liveness judgment, where an entropy greater than 0 indicates a real face and otherwise a spoofed face. The method requires neither additional equipment nor user cooperation, and because gaze behavior is uncertain and difficult for others to capture through a camera or similar device, it can effectively distinguish real faces from spoofed faces.

Description

Face liveness judgment method and system

Technical Field

The present invention relates to the technical field of identity authentication, and in particular to a face liveness judgment method and system.

Background Art

Face recognition is an effective identity authentication technology, but with its widespread use, various methods of impersonating a face to pass authentication have appeared; authenticating with a non-genuine, spoofed face is called a "face attack". Common forms of face attack include presenting photos, video playback, or 3D models to pass identity verification. Photos and video playback are the most common attacks, since attackers can easily obtain material about legitimate users from mobile devices or surveillance cameras. Moreover, with the development of modern technology, synthesizing a 3D face and obtaining a model is no longer difficult; for example, the service at ThatsMyFace.com can reconstruct a 3D face and produce a custom 3D model from one frontal and one profile photo. Therefore, for the sake of information security, adding a liveness detection function to face recognition systems has attracted increasing attention.

Analysis of common face attacks shows that, compared with a real face, a photo face is planar and suffers quality loss and blurring from secondary acquisition; a video face exhibits phenomena such as LCD reflections; and the motion of a 3D model face is rigid. These are the weaknesses of current face attack methods, and correspondingly, existing liveness detection methods in face recognition fall into three main categories: motion-based methods, texture-based methods, and methods that fuse the two.

Motion-based methods mainly analyze the motion trend of an image frame sequence, for example using optical flow to estimate the motion of different facial regions and thereby distinguish a real face from a photo attack, or using optical flow to estimate the rigid-body motion of an object and thereby judge liveness. Besides involuntary head movement, other biological behaviors of legitimate users, such as blinking, are also used to judge liveness. Texture-based methods exploit the micro-texture changes, such as quality degradation and blurring, introduced when a spoofed face is re-acquired, and perform the liveness judgment by finding salient features that distinguish a single frame of a real face from a spoofed one.

However, although both kinds of methods can resist face attacks to some extent, many problems remain. For example, motion-based methods can resist photo attacks but are largely ineffective against warped-photo and video attacks. Texture-based methods can effectively learn a discriminative model of real versus spoofed faces, yet they discard the correlation information between image frames.

In addition, liveness detection methods based on multiple modalities and multiple spectra exist. Multi-modal liveness detection combines the face with other biometric traits, such as posture, voice, fingerprint, and hand shape, for identity authentication. In general, a multi-modal biometric authentication system resists attacks better than a system based on a single trait. Multi-spectral liveness detection distinguishes real and spoofed faces by their different reflectance under near-infrared light. Although these two classes of methods achieve good liveness detection performance, they require additional equipment, and multi-modal systems even require user cooperation, increasing the burden on users.

Summary of the Invention

In view of the above problems, the present invention provides a face liveness judgment method for distinguishing real faces from spoofed faces in face recognition; it requires neither additional equipment nor user cooperation and can effectively distinguish real faces from spoofed faces.

One aspect of the present invention provides a face liveness judgment method for distinguishing a real face from a spoofed face in face recognition, comprising: a step of acquiring a certain number of face images; a step of acquiring the eye image of each face image and the features of that eye image; a step of performing gaze prediction from the acquired eye image features and a gaze estimation model to obtain a corresponding number of gaze positions; a step of quantizing the corresponding number of gaze positions against a gaze dictionary and building a gaze histogram; and a step of obtaining the information entropy from the gaze histogram and making the liveness judgment, wherein if the entropy is greater than 0, a real face is determined; otherwise, a spoofed face is determined.

The face liveness judgment method provided by the present invention requires neither additional equipment nor user cooperation, and because gaze behavior is uncertain and difficult for others to capture through a camera or similar device, it can effectively distinguish real faces from spoofed faces.

Another aspect of the present invention provides a liveness judgment system, comprising: an image acquisition unit for collecting a certain number of face images; a feature extraction unit for acquiring the eye image of each face image and the features of that eye image; a gaze estimation unit that performs gaze prediction from the acquired eye image features and a gaze estimation model to obtain a corresponding number of gaze positions; a gaze statistics unit that quantizes the corresponding number of gaze positions against a gaze dictionary and builds a gaze histogram; and a liveness judgment unit that obtains the information entropy from the gaze histogram and performs the liveness judgment, wherein if the entropy is greater than 0, a real face is determined; otherwise, a spoofed face is determined.

The liveness judgment system provided by the present invention exploits the uncertainty of gaze behavior and the difficulty of capturing it through a camera or similar device, and can thus effectively distinguish real faces from spoofed faces.

Brief Description of the Drawings

Fig. 1 is a flowchart of a method for establishing a gaze estimation model according to an embodiment of the present invention;

Fig. 2 shows a data acquisition system for establishing the gaze estimation model according to an embodiment of the present invention;

Fig. 3 is a schematic diagram of acquiring eye images and features according to an embodiment of the present invention;

Fig. 4 is a flowchart of a method for acquiring eye images and features according to an embodiment of the present invention;

Fig. 5 is a flowchart of the face liveness judgment method according to an embodiment of the present invention;

Fig. 6 is a schematic diagram of gaze dictionary construction according to an embodiment of the present invention;

Fig. 7 is a structural schematic diagram of the face liveness judgment system according to an embodiment of the present invention;

Fig. 8 is a detailed structural schematic diagram of the feature extraction unit of the face liveness judgment system shown in Fig. 7.

Detailed Description

To help those skilled in the art better understand the technical solution of the present invention, the face liveness judgment method and system provided by the present invention are described in detail below with reference to the accompanying drawings and specific embodiments. In the drawings, identical or equivalent components bear the same reference numerals. The following is merely a preferred embodiment of the face liveness judgment method and system of the present invention; the invention is not limited to the structures described below.

The face liveness judgment method and system of the present invention distinguish real faces from spoofed faces in face recognition. Because the process of shifting attention is uncertain, gaze behavior is a kind of biological information that is difficult for others to capture through surveillance cameras or similar devices; moreover, compared with a spoofed face, the gaze movement of a real face shows greater uncertainty. The present invention therefore introduces gaze into the liveness judgment problem to distinguish real faces from spoofed faces effectively. The basic idea of the method and system is to acquire eye images and their features, estimate the gaze from the eye image features and a gaze estimation model, then quantize and aggregate the predicted gaze positions, and finally make the liveness judgment, i.e., decide whether the recognized subject is a real face or a spoofed face.

To this end, the present invention first needs to establish a suitable gaze estimation model with which gaze changes can be reliably detected. Existing gaze estimation models fall mainly into appearance-based and feature-based methods. Feature-based methods extract local eye features (such as the iris contour, pupil, and glints) and establish a mapping between these features and viewpoints; however, such methods generally require a high-definition camera and often multiple light sources or multiple cameras. Appearance-based methods first locate the eye region and then directly map the eye image to viewpoint coordinates on the screen, making full use of the gaze information. Because the face liveness judgment method proposed by the present invention operates under natural light without additional equipment or light sources (in fact, with only one USB camera at a resolution of 640×480 pixels), this embodiment models the gaze direction with an appearance-based method. Furthermore, considering the micro-texture differences between real and spoofed faces, the present invention first extracts local texture features from the eye image and then establishes the mapping between local texture feature vectors and viewpoints with a regression model. In this embodiment, the gaze estimation model is built by having the user gaze in turn at 9 viewpoints on a computer screen, collecting frontal images of the user under the 9 gaze directions, and learning the mapping between the user's eye image features and the screen viewpoints with a regression model. The establishment and solving of the gaze estimation model of the present invention are described below with reference to Figs. 1 to 4.

Fig. 1 is a flowchart of the method for establishing the gaze estimation model according to an embodiment of the present invention; Fig. 2 shows the data acquisition system used to establish it. As shown in Fig. 1, the method comprises: step S1, data acquisition; step S2, extraction of eye images and features; and step S3, model establishment and solving.

To build the statistical gaze estimation model, the relevant data are first collected. The data acquisition system of this embodiment, shown in Fig. 2(a), consists of a 19-inch LCD monitor (1440×900 pixels) and a USB camera with a resolution of 640×480 pixels. Nine fixed points on the monitor, called viewpoints, are placed as shown in Fig. 2(b); the coordinates of each viewpoint are in pixels. In Fig. 2(b), the labels 1 to 9 denote the viewpoints, and the values in parentheses to the lower right, upper right, or lower left of each label give the viewpoint's position on the screen; for example, the first viewpoint is at (88, 83).

In step S1, the subject sits about 50-60 cm from the computer screen and keeps the head as still as possible. During acquisition, the subject is first asked to fixate on the first viewpoint (88, 83) while keeping the head still; the frontal images captured by the camera during this period are saved, and saving stops after 30 frames. The subject's gaze is then guided to the second viewpoint (552, 83), and another 30 frontal frames are saved; this continues until 270 frames covering all 9 viewpoints have been collected. In this way, images of 50 subjects gazing at the 9 viewpoints were collected, 13,500 frontal images in total; images with closed eyes were removed manually, leaving 12,698 valid images.

Note that, as described above, 30 frames may be collected per viewpoint, but any other suitable number may be used; the method is not limited to 30 frames. There is no definitive range for model building: more samples make the model more accurate but also more complex. Balancing accuracy against complexity, this embodiment uses 30 frames, as in the acquisition sketch below.
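
A minimal sketch of the step S1 acquisition loop follows, assuming OpenCV and a single USB camera at device index 0. The 9 viewpoint coordinates and the 30-frames-per-viewpoint choice come from the text; the function name `collect_session` and the guidance step are illustrative.

```python
import cv2

VIEWPOINTS = [(88, 83), (552, 83), (1016, 83),
              (88, 440), (552, 440), (1016, 440),
              (88, 797), (552, 797), (1016, 797)]

def collect_session(frames_per_viewpoint=30):
    """Capture frames_per_viewpoint frontal frames for each of the 9 viewpoints."""
    cap = cv2.VideoCapture(0)
    data = []  # list of (viewpoint, frame) pairs; closed-eye frames are culled later
    for vp in VIEWPOINTS:
        # In the real setup the subject is guided to fixate on vp before capture.
        for _ in range(frames_per_viewpoint):
            ok, frame = cap.read()
            if ok:
                data.append((vp, frame))
    cap.release()
    return data
```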

After step S1 is completed, step S2 extracts the eye images and their features. The method for extracting eye images and features in this embodiment is described below with reference to Figs. 3 and 4.

As shown in Fig. 3, extracting eye images and features in this embodiment comprises:

Step S20, face localization and detection of the inner and outer eye corner points, for example with the classic Viola and Jones method; taking the left eye as an example, the result is shown in Fig. 4(a).

Step S21, eye image alignment, i.e., cropping and aligning the eye region according to the eye corner positions. Concretely: a 64×32 standard eye image template is defined from statistics of the collected face regions, with the inner and outer eye corner positions set to (54, 20) and (9, 20), respectively; the face region is then scaled and rotated according to the position of the outer eye corner to obtain the aligned eye image, as shown in Fig. 4(b).

Further, taking the left eye as an example, the scale and rotation transformation proceeds as follows: 1) connect the inner and outer eye corners and rotate this segment so the two corners lie on the same horizontal line, applying the same rotation to all other pixels of the face region; 2) scale the now-horizontal eye corners so they coincide with the inner and outer corner positions defined on the standard template, applying the same scaling to all other pixels of the face region; 3) finally, crop the rotated and scaled face region to the size of the standard eye image template to obtain the aligned eye image.

Note that the standard eye image template applies to everyone: once the inner and outer corners of the left eye are located, rotation and scaling bring the detected corners into agreement with the positions defined on the template, yielding an aligned eye image. Eye image alignment is an important preprocessing step in face recognition; in this embodiment it is performed before model building to compensate for small head movements in the captured images.
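
A minimal sketch of the alignment in steps 1)-3) of S21, assuming OpenCV and NumPy; the function names are illustrative. The two corner correspondences fully determine a similarity transform (rotation, uniform scale, and translation), so the rotation and scaling steps can be applied in a single warp; the 64×32 template and the corner positions (54, 20) and (9, 20) are from the text.

```python
import cv2
import numpy as np

TEMPLATE_SIZE = (64, 32)          # (width, height) of the standard eye template
TPL_INNER = np.float32([54, 20])  # inner eye corner on the template
TPL_OUTER = np.float32([9, 20])   # outer eye corner on the template

def similarity_transform(src, dst):
    """2x3 matrix of the similarity transform mapping the two src points
    onto the two dst points."""
    d_src, d_dst = src[1] - src[0], dst[1] - dst[0]
    scale = np.linalg.norm(d_dst) / np.linalg.norm(d_src)
    angle = np.arctan2(d_dst[1], d_dst[0]) - np.arctan2(d_src[1], d_src[0])
    c, s = scale * np.cos(angle), scale * np.sin(angle)
    rot = np.float32([[c, -s], [s, c]])
    t = dst[0] - rot @ src[0]
    return np.hstack([rot, t.reshape(2, 1)])

def align_eye(face_img, inner_xy, outer_xy):
    """Warp the face region so the detected eye corners land on the template
    corners, then crop to the 64x32 template."""
    m = similarity_transform(np.float32([inner_xy, outer_xy]),
                             np.float32([TPL_INNER, TPL_OUTER]))
    return cv2.warpAffine(face_img, m, TEMPLATE_SIZE)
```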

Step S22, extraction of the eye image features. To fully exploit the micro-texture differences between real and spoofed faces, the eye image is further divided into r×c sub-regions (4×2 in this embodiment, as shown in Fig. 4(c)), and a Dual-Histogram Local Binary Pattern (DH-LBP) feature is extracted from each sub-region. DH-LBP is a refinement of the Local Binary Pattern (LBP) histogram that greatly reduces the feature dimensionality while retaining LBP's good discriminative power. The DH-LBP features of all sub-regions are concatenated to form the eye image feature, which is 128-dimensional, as shown in Fig. 4(d).

It will be appreciated that, on the one hand, the eye image features can represent the user's gaze direction; on the other hand, because of printing artifacts, display reflections, and similar effects, real and spoofed faces show clear micro-texture differences, so local texture features distinguish them well.
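
The following sketch illustrates the sub-region texture feature of step S22. The DH-LBP descriptor is not fully specified here, so a standard uniform-LBP histogram per sub-region (via scikit-image) stands in for it; with these settings the feature is 80-dimensional rather than the 128 dimensions of DH-LBP, but the 4×2 grid and the concatenation follow the text.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def eye_feature(eye_img, rows=2, cols=4, n_points=8, radius=1):
    """Concatenated per-sub-region LBP histograms of a 64x32 aligned eye image."""
    lbp = local_binary_pattern(eye_img, n_points, radius, method="uniform")
    n_bins = n_points + 2  # the uniform mapping has P + 2 distinct labels
    h, w = lbp.shape
    feats = []
    for i in range(rows):
        for j in range(cols):
            patch = lbp[i * h // rows:(i + 1) * h // rows,
                        j * w // cols:(j + 1) * w // cols]
            hist, _ = np.histogram(patch, bins=n_bins, range=(0, n_bins))
            feats.append(hist / max(hist.sum(), 1))  # normalize each sub-histogram
    return np.concatenate(feats)  # 2 x 4 regions x 10 bins = 80-D
```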

After step S2 is completed, step S3 establishes and solves the model. The establishment and solving of the gaze estimation model are described in detail below.

Steps S1 and S2 yield a data set $\{(x_i, y_i)\}_{i=1}^{N}$, where $x_i \in \mathbb{R}^n$ ($n = 128$ in this embodiment, determined by the dimensionality of the eye image feature) is the eye image feature of a sample and $y_i = (p_{x,i}, p_{y,i}) \in \mathbb{R}^2$ is the two-dimensional coordinate of the corresponding viewpoint. This embodiment establishes the mapping between eye image features and the corresponding viewpoint coordinates with the following two-dimensional regression model:

$$y = w^T x + b, \qquad w = (w_1, w_2) \in \mathbb{R}^{n \times 2}, \quad b = (b_1, b_2) \in \mathbb{R}^2 \tag{1}$$

where $w$ and $b$ are the model parameters to be determined,

and the corresponding loss function is defined as:

$$\text{loss} = \sum_{i=1}^{N} \left\| y_i - \hat{y}_i \right\|^2 \tag{2}$$

where $\hat{y}_i$ is the corresponding predicted value. In general, Eq. (1) can be solved by least squares. However, for this problem least squares has the following shortcomings: 1) gaze estimation with a non-HD camera is a complex nonlinear problem, especially for the 9 widely spaced calibration points; 2) least squares is sensitive to points off the fitted line; 3) least squares minimizes only the empirical risk, so its generalization ability is weak. For these reasons, this embodiment solves the problem with the more robust Support Vector Regression (SVR). To apply SVR, Eq. (1) is split into two one-dimensional regression models:

$$p_x = w_1^T x + b_1, \qquad p_y = w_2^T x + b_2 \tag{3}$$

That is, the x and y coordinates are treated separately. To keep the fitted function as smooth as possible, Eq. (1) is further converted into the following two optimization problems:

$$\text{minimize}\ \tfrac{1}{2}\|w_1\|^2 \quad \text{subject to}\ |p_{x,i} - w_1^T x_i - b_1| \le \varepsilon_1 \tag{4}$$

and

$$\text{minimize}\ \tfrac{1}{2}\|w_2\|^2 \quad \text{subject to}\ |p_{y,i} - w_2^T x_i - b_2| \le \varepsilon_2 \tag{5}$$

where $\varepsilon_1$ and $\varepsilon_2$ are the maximum allowed deviations between the predicted and actual values. In this embodiment the libsvm toolbox is used to solve (4) and (5), obtaining $(w_1^*, b_1^*)$ and $(w_2^*, b_2^*)$; that is, for a test sample $z$, the x and y coordinates of its gaze position are predicted as $f_x(z)$ and $f_y(z)$:

$$f_x(z) = \sum_{i=1}^{n_1} w_1^* \langle z, x_i \rangle + b_1^*, \qquad f_y(z) = \sum_{i=1}^{n_2} w_2^* \langle z, y_i \rangle + b_2^* \tag{6}$$

where $x_i$ and $y_i$ are the support vectors and $n_1$, $n_2$ are the numbers of support vectors. Compared with linear regression, nonlinear regression can model the data more effectively; the usual way to turn a linear model into a nonlinear one is the kernel trick. With a kernel function $k$ defined, the solution of Eq. (1) becomes:

$$f_x(z) = \sum_{i=1}^{n_x} w_x^* \, k(z, x_i) + b_x^*, \qquad f_y(z) = \sum_{i=1}^{n_y} w_y^* \, k(z, y_i) + b_y^* \tag{7}$$

This embodiment solves the problem with the radial basis function (RBF) kernel; for test data $z$, the predicted gaze position is obtained from Eq. (7) as $\hat{y} = (\hat{p}_x, \hat{p}_y) = (f_x(z), f_y(z))$.
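
A minimal training sketch under these definitions: two independent RBF-kernel SVRs map the eye feature to the x and y viewpoint coordinates, as in Eqs. (3)-(7). scikit-learn's `SVR` wraps libsvm, the toolbox named above; the `epsilon` parameter plays the role of $\varepsilon_1$ and $\varepsilon_2$, and its value here is illustrative.

```python
import numpy as np
from sklearn.svm import SVR

def fit_gaze_model(X, Y, epsilon=10.0):
    """X: (N, d) eye feature matrix (d = 128 in the text); Y: (N, 2) viewpoint
    coordinates in pixels. Returns the two fitted regressors."""
    fx = SVR(kernel="rbf", epsilon=epsilon).fit(X, Y[:, 0])
    fy = SVR(kernel="rbf", epsilon=epsilon).fit(X, Y[:, 1])
    return fx, fy

def predict_gaze(fx, fy, Z):
    """Predicted gaze positions (Eq. (7)) for a batch of eye features Z."""
    return np.column_stack([fx.predict(Z), fy.predict(Z)])
```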

The gaze estimation model (7) established through steps S1-S3 focuses on predicting changes in gaze rather than on estimation accuracy; with only a single 640×480-pixel USB camera under natural light, it can detect gaze changes well.

It will be appreciated that although this embodiment uses nonlinear regression to establish and solve the gaze estimation model in step S3, other regression methods, such as partial least squares regression, could also be used.

Further, the biggest difference between the gaze estimation model of this embodiment and existing models lies in the application and hence the emphasis. Existing gaze estimation models are mainly used for human-computer interaction and therefore stress estimation accuracy; they require many calibration points or high-definition cameras, and some even require multiple cameras. For the application of the present invention, liveness judgment, the proposed model emphasizes gaze change rather than estimation accuracy, and it needs only a single 640×480-pixel USB camera under natural light to detect gaze changes well.

As described above, once the gaze estimation model has been established, it can be used for liveness judgment. The face liveness judgment method of the present invention is described below with reference to Figs. 5 and 6.

Fig. 5 is a flowchart of the face liveness judgment method according to an embodiment of the present invention; Fig. 6 is a schematic diagram of gaze dictionary construction. As shown in Fig. 5, the face liveness judgment method of this embodiment comprises the following steps:

Step S100, acquiring face images. In step S100, a certain number of face images are captured by an image acquisition unit such as a USB camera, for example 100 frames collected over 10 seconds.

Step S200, acquiring the eye images and their features. In step S200, the eye image and the features of that eye image are obtained for each face image, for example for each of the aforementioned 100 frames. The extraction of the eye image and its features is as described above and is not repeated here.

Step S300, gaze prediction, i.e., performing gaze prediction from the features of the acquired eye images and the gaze estimation model to obtain the corresponding number of gaze positions. Specifically, in step S300 the gaze estimation model (7) predicts the user's sequence of gaze positions over a period of time. For example, for a given user, if $M$ frames are captured in the current period, extracting eye images and features from the image sequence yields the $M$ predicted gaze positions $\{\hat{y}_i = (\hat{p}_{x,i}, \hat{p}_{y,i})\}_{i=1}^{M}$.

Step S400, gaze quantization and statistics, i.e., quantizing the corresponding number of gaze positions according to the gaze dictionary and building the gaze histogram.

The bag-of-words (BoW) model is widely used in information retrieval: a document is represented by a histogram of the frequency of each word. This embodiment extends BoW to a visual bag-of-words model by generalizing the words of a document to "visual words" in images; since the quantity to be counted here is the gaze direction, the "visual word" becomes concretely a "gaze direction". The present invention calls this the "bag-of-gaze-words model" and calls the entries of the gaze dictionary "gaze words". As with BoW, construction proceeds in two steps: building the gaze dictionary (codebook) and generating the gaze histogram. In the present invention the gaze words are obtained by gridding the user's gaze range: if the gaze range is m×n pixels and is divided evenly into an r×c grid, the set of (r+1)×(c+1) grid points constitutes the gaze dictionary. For example, as shown in Fig. 6, when the face liveness judgment method of this embodiment is applied in a 640×480-pixel window, the user's gaze range is first determined (say a 600×400-pixel rectangle, ignoring the margin), and dividing the 600×400 region into 2×2 sub-regions (r and c need not be 2) yields the 9 grid points, shown in red, that form the gaze dictionary. Preferably, the gaze dictionary of this embodiment takes the 9 viewpoints used to build the gaze estimation model as gaze words, i.e., {(88,83), (552,83), (1016,83), (88,440), (552,440), (1016,440), (88,797), (552,797), (1016,797)}.
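
A small sketch of the grid-based dictionary construction just described; the default arguments reproduce the 600×400, 2×2 example of Fig. 6 (the function name and `origin` argument are illustrative).

```python
import numpy as np

def build_gaze_dictionary(m=600, n=400, r=2, c=2, origin=(0, 0)):
    """(r+1) x (c+1) grid points of an m x n pixel gaze range."""
    xs = origin[0] + np.linspace(0, m, c + 1)
    ys = origin[1] + np.linspace(0, n, r + 1)
    return np.array([(x, y) for y in ys for x in xs])
```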

After the gaze dictionary has been built, the predicted sequence of gaze positions $\{\hat{y}_i\}_{i=1}^{M}$ is quantized against it: the gaze word nearest to each predicted position receives a vote (this embodiment uses the Euclidean distance), and the votes for each word are counted into a histogram, which is normalized to form the gaze histogram.
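
A sketch of the quantization and histogram step, assuming NumPy; the dictionary is the preferred 9-viewpoint gaze dictionary listed above, and nearest-word assignment uses the Euclidean distance as in the text.

```python
import numpy as np

GAZE_DICT = np.array([(88, 83), (552, 83), (1016, 83),
                      (88, 440), (552, 440), (1016, 440),
                      (88, 797), (552, 797), (1016, 797)], dtype=float)

def gaze_histogram(gaze_positions, dictionary=GAZE_DICT):
    """gaze_positions: (M, 2) predicted positions; returns the normalized histogram."""
    d = np.linalg.norm(gaze_positions[:, None, :] - dictionary[None, :, :], axis=2)
    votes = np.bincount(d.argmin(axis=1), minlength=len(dictionary))
    return votes / votes.sum()
```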

Step S500, liveness judgment, i.e., obtaining the information entropy from the gaze histogram and making the liveness judgment.

Specifically, for the normalized gaze histogram $H = \{p_1, \ldots, p_9\}$ (satisfying $\sum_{i=1}^{9} p_i = 1$), the entropy value is obtained according to the definition of entropy:

$$\text{entropy} = -\sum_{i=1}^{9} p_i \log(p_i) \tag{8}$$

If $\exists\, \{p_{i_k} > 0\}_{k=1}^{l}$ with $1 < l \le 9$, then

$$\text{entropy} = -\sum_{i=1}^{9} p_i \log(p_i) = -\sum_{k=1}^{l} p_{i_k} \log(p_{i_k}) > 0 \tag{9}$$

which indicates that the gaze directions have been quantized into $l$ different gaze words; the corresponding entropy is then greater than 0.

If only a single $p_{i_0} > 0$ exists, then

$$\text{entropy} = -\sum_{i=1}^{9} p_i \log(p_i) = -p_{i_0} \log(p_{i_0}) = 0 \tag{10}$$

which indicates that the gaze direction has been quantized into only one gaze word; the corresponding entropy then equals 0. That is, under this quantization, the entropy is greater than 0 when the gaze direction changes and equals 0 when it does not.

In summary, this embodiment sets the threshold to 0 and sets the liveness criterion as follows: if the entropy of Eq. (8) satisfies entropy > 0, the subject is judged to be a real face; otherwise, a spoofed face.
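
A sketch of the decision rule of Eqs. (8)-(10); with the convention $0 \cdot \log 0 = 0$, the entropy is 0 exactly when all votes fall into a single gaze word, i.e., when the quantized gaze never moves.

```python
import numpy as np

def is_live(hist):
    """hist: normalized gaze histogram; True means the subject is a real face."""
    p = hist[hist > 0]
    entropy = -np.sum(p * np.log(p))  # Eq. (8)
    return entropy > 0                # threshold set to 0 as in the text
```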

In addition to the face liveness judgment method, the present invention also provides a liveness judgment system for distinguishing real faces from spoofed faces in face recognition. The liveness judgment system of the present invention is described below with reference to Figs. 7 and 8.

Fig. 7 is a structural schematic diagram of the liveness judgment system according to an embodiment of the present invention; Fig. 8 is a detailed structural schematic diagram of the feature extraction unit of the liveness judgment system shown in Fig. 7.

As shown in Fig. 7, the liveness judgment system of this embodiment comprises an image acquisition unit 100, a feature extraction unit 200, a gaze estimation unit 300, a gaze statistics unit 400, and a liveness judgment unit 500.

The image acquisition unit 100, for example a 640×480-pixel camera, collects a certain number of face images.

The feature extraction unit 200 acquires the eye image of each face image and the features of that eye image. It comprises a localization and detection module 201, an eye image alignment module 202, and a feature extraction module 203. The localization and detection module 201 performs face localization and detection of the inner and outer eye corner points, for example with the classic Viola and Jones method; taking the left eye as an example, see Fig. 4(a). The eye image alignment module 202 crops and aligns the eye region according to the eye corner positions to obtain an aligned eye image; specifically, it defines a standard eye image template from statistics of the collected face images, sets the inner and outer eye corner positions, and scales and rotates the face region according to the position of the outer eye corner to obtain the aligned eye image, as detailed above and not repeated here. The feature extraction module 203 acquires the features of the eye image from the aligned eye image; specifically, it divides the eye image into r×c sub-regions, extracts a dual-histogram local binary pattern feature from each sub-region, and concatenates the dual-histogram local binary pattern features of all sub-regions to form the features of the eye image.

The gaze estimation unit 300 performs gaze prediction from the features of the acquired eye images and the gaze estimation model to obtain the corresponding number of gaze positions; that is, it predicts the user's sequence of gaze positions over a period of time according to model (7). For example, for a given user, if $M$ frames are captured in the current period, extracting eye images and features from the image sequence yields the $M$ predicted gaze positions $\{\hat{y}_i = (\hat{p}_{x,i}, \hat{p}_{y,i})\}_{i=1}^{M}$.

The gaze statistics unit 400 quantizes the corresponding number of gaze positions according to the gaze dictionary and builds the gaze histogram. The gaze statistics and quantization proceed as described above and are not repeated here.

The liveness judgment unit 500 obtains the information entropy from the gaze histogram and performs the liveness judgment: if the entropy is greater than 0, the subject is judged to be a real face; otherwise, a spoofed face.
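
The sketches above compose into the Fig. 7 pipeline as follows; `detect_eye_corners` is a hypothetical placeholder for the Viola-Jones-based module 201, returning the corner positions and the face region for a frame, and `fx`, `fy` are the regressors returned by the `fit_gaze_model` sketch.

```python
import numpy as np

def judge_liveness(frames, fx, fy, detect_eye_corners):
    feats = []
    for frame in frames:
        inner, outer, face = detect_eye_corners(frame)  # module 201 (placeholder)
        eye = align_eye(face, inner, outer)             # module 202
        feats.append(eye_feature(eye))                  # module 203
    gaze = predict_gaze(fx, fy, np.array(feats))        # unit 300, Eq. (7)
    hist = gaze_histogram(gaze)                         # unit 400
    return is_live(hist)                                # unit 500, Eq. (8)
```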

The above embodiments are merely exemplary embodiments used to illustrate the principles of the present invention; the invention is not limited thereto. Those of ordinary skill in the art may make various modifications and improvements without departing from the spirit and essence of the present invention, and such modifications and improvements are also regarded as falling within the protection scope of the present invention.

Claims (12)

1. A face liveness judgment method for distinguishing a real face from a spoofed face in face recognition, characterized by comprising:
a step of acquiring a certain number of face images;
a step of acquiring the eye image of each face image and the features of the eye image;
a step of performing gaze prediction according to the features of the acquired eye images and a gaze estimation model to obtain a corresponding number of gaze positions;
a step of quantizing the corresponding number of gaze positions according to a gaze dictionary and building a gaze histogram; and
a step of obtaining the information entropy from the gaze histogram and performing the liveness judgment, wherein
if the entropy is greater than 0, a real face is determined; otherwise, a spoofed face is determined.
2. The face liveness judgment method according to claim 1, characterized in that the step of acquiring the eye image of each face image and the features of the eye image comprises:
a step of performing face localization and detection of the inner and outer eye corner points;
a step of cropping and aligning the eye region according to the eye corner positions to obtain an aligned eye image; and
a step of acquiring the features of the eye image from the aligned eye image.
3. The face liveness judgment method according to claim 2, characterized in that the step of cropping and aligning the eye region according to the eye corner positions to obtain an aligned eye image comprises:
a step of defining a standard eye image template from statistics of the collected face images and setting the inner and outer eye corner positions; and
a step of scaling and rotating the face region according to the position of the outer eye corner to obtain the aligned eye image.
4. The face liveness judgment method according to claim 2, characterized in that the step of acquiring the features of the eye image from the aligned eye image is specifically:
dividing the eye image into r×c sub-regions, extracting a dual-histogram local binary pattern feature from each sub-region, and concatenating the dual-histogram local binary pattern features of all sub-regions to form the features of the eye image.
5. The face liveness judgment method according to any one of claims 1-4, characterized by further comprising:
a step of establishing the gaze estimation model, this step comprising:
a step of collecting data, the data being face images of a user gazing in turn at a set number of viewpoints;
a step of acquiring the eye image of each face image and the features of the eye image; and
a step of establishing and solving the gaze estimation model according to the features of the eye images and the coordinates of the corresponding viewpoints.
6. The face liveness judgment method according to claim 5, characterized in that the gaze estimation model is:

$$f_x(z) = \sum_{i=1}^{n_x} w_x^* \, k(z, x_i) + b_x^*, \qquad f_y(z) = \sum_{i=1}^{n_y} w_y^* \, k(z, y_i) + b_y^*$$

where $x_i \in \mathbb{R}^n$ is the feature of an eye image and $y_i = (p_{x,i}, p_{y,i}) \in \mathbb{R}^2$ is the two-dimensional coordinate of the corresponding viewpoint.
7. A liveness judgment system for distinguishing a real face from a spoofed face in face recognition, characterized by comprising:
an image acquisition unit for collecting a certain number of face images;
a feature extraction unit for acquiring the eye image of each face image and the features of the eye image;
a gaze estimation unit that performs gaze prediction according to the features of the acquired eye images and a gaze estimation model to obtain a corresponding number of gaze positions;
a gaze statistics unit that quantizes the corresponding number of gaze positions according to a gaze dictionary and builds a gaze histogram; and
a liveness judgment unit that obtains the information entropy from the gaze histogram and performs the liveness judgment, wherein
if the entropy is greater than 0, a real face is determined; otherwise, a spoofed face is determined.
8. The liveness judgment system according to claim 7, characterized in that the feature extraction unit comprises:
a localization and detection module for performing face localization and detection of the inner and outer eye corner points;
an eye image alignment module for cropping and aligning the eye region according to the eye corner positions to obtain an aligned eye image; and
a feature extraction module that acquires the features of the eye image from the aligned eye image.
9. The liveness judgment system according to claim 8, characterized in that the eye image alignment module specifically performs the following operations:
defining a standard eye image template from statistics of the collected face images and setting the inner and outer eye corner positions; and
scaling and rotating the face region according to the position of the outer eye corner to obtain the aligned eye image.
10. The liveness judgment system according to claim 8, characterized in that the feature extraction module specifically performs the following operations:
dividing the eye image into r×c sub-regions, extracting a dual-histogram local binary pattern feature from each sub-region, and concatenating the dual-histogram local binary pattern features of all sub-regions to form the features of the eye image.
11. The liveness judgment system according to claim 7, characterized in that the image acquisition unit is a camera of 640×480 pixels.
12. The liveness judgment system according to any one of claims 7-11, characterized in that the gaze estimation model is:

$$f_x(z) = \sum_{i=1}^{n_x} w_x^* \, k(z, x_i) + b_x^*, \qquad f_y(z) = \sum_{i=1}^{n_y} w_y^* \, k(z, y_i) + b_y^*$$

where $x_i \in \mathbb{R}^n$ is the feature of an eye image and $y_i = (p_{x,i}, p_{y,i}) \in \mathbb{R}^2$ is the two-dimensional coordinate of the corresponding viewpoint.
Priority Applications (1)

CN201410327039.4A, filed 2014-07-10 (priority date 2014-07-10) by Hanwang Technology Co Ltd: Face liveness judgment method and system; granted as CN105243386B.

Publications (2)

CN105243386A (application publication), published 2016-01-13
CN105243386B (granted publication), published 2019-02-05

Family

Family ID 55041027; single family application CN201410327039.4A (CN, active), also published as CN105243386B.


Legal Events

Code  Description
C06   Publication
PB01  Publication
C10   Entry into substantive examination
SE01  Entry into force of request for substantive examination
GR01  Patent grant