
CN1137662C - Multimodal face recognition method based on component principal component analysis - Google Patents


Info

Publication number
CN1137662C
CN1137662C (application CNB011365773A / CN01136577A)
Authority
CN
China
Prior art keywords
face
eyes
feature
nose
mouth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB011365773A
Other languages
Chinese (zh)
Other versions
CN1341401A (en)
Inventor
苏光大
张翠平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University
Priority to CNB011365773A
Publication of CN1341401A
Application granted
Publication of CN1137662C
Anticipated expiration
Expired - Fee Related


Landscapes

  • Image Processing (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The present invention belongs to the technical fields of image processing, computer vision and pattern recognition. The method comprises: locating the face image; extracting five kinds of face components — the bare face, eyebrows+eyes, eyes, nose tip and mouth — from the whole face; extracting these five components from every face image in the training set, from every known face image, and from every face image to be recognized; forming the five kinds of eigencomponents with the eigenface method of principal component analysis and computing the projection feature values of the components of the known faces; building a database containing the projection feature values of the known faces' components, compressed images of the known faces, and the personal identity files of the known persons, and computing the projection feature values of the components of the faces to be recognized; and performing multimodal global and local face recognition against the known-face database by computing similarities and ranking by similarity. The invention achieves a high recognition rate.

Description

Multimodal Face Recognition Method Based on Component Principal Component Analysis

Technical Field

The invention belongs to the technical fields of image processing, computer vision and pattern recognition, and in particular relates to a face recognition method.

Face recognition draws on many disciplines, including image processing, computer vision, pattern recognition and neural networks, and is closely related to the findings of neurophysiology and neurobiology on the structure of the human brain. The difficulties of face recognition are:

(1) Plastic deformation of the face caused by expression

(2) Diversity of facial appearance caused by pose

(3) Changes of the face caused by age

(4) Multiplicity of face patterns caused by factors such as hairstyle, beard, glasses and makeup

(5) Differences among face images caused by factors such as illumination angle, illumination intensity and sensor characteristics

These factors make face recognition a thorny and challenging subject, and hence a hot research topic in recent years.

Existing face recognition methods all operate on the whole face, mainly using principal component analysis (PCA), elastic matching, neural networks, or geometric features. The PCA method in current use, for example, computes eigenfaces for the whole face and the projection feature values of the whole face. Because of the five difficulties listed above, whole-face recognition can hardly reach a high recognition rate.

Summary of the Invention

The purpose of the present invention is to overcome the deficiencies of the prior art by proposing a multimodal face recognition method based on component principal component analysis: face components are first extracted, and principal component analysis and multimodal recognition are then applied to the components, so as to reach a high recognition rate.

The multimodal face recognition method based on component principal component analysis proposed by the present invention is characterized by comprising the following steps:

1) Locate the face image with template matching and projection histograms, determining the coarse face region and the basic positions of the left and right eyeballs, the nose tip, the mouth and the chin tip;

2) Extract five kinds of face components from the whole face: the bare face, eyebrows+eyes, eyes, nose tip and mouth;

3) Apply steps 1) and 2) to every face image in the training set to extract its five components, and use the eigenface method of principal component analysis on the extracted bare faces, eyebrows+eyes, eyes, nose tips and mouths to form the eigen bare-face, eigen (eyes+eyebrows), eigen-eyes, eigen-nose and eigen-mouth respectively;

4) Apply steps 1) and 2) to the face image of every known person, use the projection-feature-value analysis of principal component analysis on the extracted five components to obtain the projection feature values of the known face's bare face, eyes+eyebrows, eyes, nose and mouth, and build a database containing these projection feature values together with a compressed image of each known face and the person's identity file;

5) Apply steps 1) and 2) to the face image of every person to be recognized and, with the same projection-feature-value analysis, obtain the projection feature values of the bare face, eyes+eyebrows, eyes, nose and mouth of the face to be recognized;

6) Perform multimodal global and local face recognition of the face to be recognized against the known-face database by computing similarities and ranking by similarity.

The multimodal global face recognition method may combine the eigenface, eigen [eyes+eyebrows], eigen-eyes, eigen-nose and eigen-mouth recognizers.

The multimodal local face recognition method may use a single recognizer — eigenface, eigen [eyes+eyebrows], eigen-eyes, eigen-nose or eigen-mouth — or any combination of these recognizers.

Features and Effects of the Invention

The invention extracts face components and then applies principal component analysis and multimodal recognition to them, achieving a high recognition rate.

Brief Description of the Drawings

Figure 1 is a schematic diagram of the face image localization result of an embodiment of the invention. Figure 2 is a schematic diagram of the face template used for matching. Figure 3: division of the region to be matched. Figure 4: directional integral projection of the gradient. Figure 5: search for the valley point. Figure 6: gradient integral projection of the two nostrils on the X axis. Figure 7: gray-level integral projection of the part below the eyeballs. Figure 8: the five components (bare face, eyebrows+eyes, eyes, nose tip, mouth) extracted from the whole face. Figure 9: schematic diagram of forming eigenfaces by PCA. Figure 10: block diagram of the face recognition system. Figure 11: local face (bare face, eyes+eyebrows, eyes) recognition query. Figure 12: global face recognition query.

Detailed Description of the Embodiments

An embodiment of the component-PCA-based multimodal face recognition method proposed by the invention is described in detail below with reference to the drawings.

This embodiment comprises the following steps:

1) Locate the face image. The standard face image size is 330×480 (width×height). Localization proceeds in two stages, coarse and fine. Coarse localization finds the coarse face region in the image; fine localization then determines the positions of the two eyeballs within that region, after which the nose tip, mouth and chin tip are located. The result is shown in Figure 1: the white rectangle is the coarse face region, the white dots in the eyeballs mark the left and right eyeball positions found by fine localization, the small white circles at the nose tip and the chin mark those two positions, and the small white circle in the mouth marks the mouth centre. Coarse localization uses template matching; Figure 2 shows the 48×33-pixel face template used. As shown in Figure 3, each candidate match region of the face image is divided into 9 sub-regions of 16×11 pixels each.

Before template matching, gradient statistics Mg1–Mg9 are computed for the nine sub-regions of the candidate region. From the gradient distribution of a face, a candidate region is skipped without any matching computation if it satisfies any of the following conditions:

          Mg1<Mg4 or Mg3<Mg6 or Mg1<Mg2 or Mg3<Mg2

The matching computation uses the matching function of equation (1).
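The pre-screening test can be sketched directly. The 3×3 row-major sub-region numbering (regions 1–3 on the top row) is an assumption, since Figure 3 is not reproduced here:

```python
def prescreen(mg):
    """Decide whether to skip the expensive template match for a candidate
    region.  mg is the list of the nine gradient sums Mg1..Mg9 computed
    over the 3x3 grid of sub-regions.  Per the four conditions above, the
    region is skipped (returns True) when the upper-corner sub-regions
    (1 and 3) are not clearly stronger in gradient than their neighbours."""
    Mg = [None] + list(mg)          # shift to 1-based indexing: Mg[1]..Mg[9]
    return (Mg[1] < Mg[4] or Mg[3] < Mg[6] or
            Mg[1] < Mg[2] or Mg[3] < Mg[2])
```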

          S = S_sym + S_0 + S_hist                                (1)

where S_sym measures the left–right symmetry of the candidate region, S_0 is the template matching result, and S_hist is the match between the X-axis integral projection histograms of sub-regions 1 and 4. In practice S_0 is computed after the means of the left and right halves have been adjusted to the same value according to S_sym. S_0 is the correlation between the template and the candidate region: with the mean and variance of the candidate region and of the template adjusted to the same values, matching is performed by subtraction followed by the absolute value.

Precise face localization is achieved by locating the eyeballs; sub-regions 1 and 3 are where the eyeballs may appear. The eyeballs are located by searching for gray-level valley points, using a bottom-up search as shown in Figure 4, where the integral projection is that of the gradient map. When the peak at point A in Figure 4 is reached, the approximate vertical position Ay of the eyes has been found; the region of interest is then y ∈ (Ay−delta, Ay+delta), and x is varied over a wide range in search of the valley (lowest) point of this region. The position of the valley point is determined with the model of Figure 5 by computing the ratio of equation (2), in which sub-region 5 is the region containing the eyeball:

          S = Σ_{n=1..9} Gray(n) / Gray(5)                        (2)

Equation (2) expresses the gray-level ratio between the inside and the outside of sub-region 5, where Gray(n) is the gray-level sum of sub-region n.
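A minimal sketch of the ratio in equation (2), assuming the 3×3 sub-region model of Figure 5 with the candidate eyeball in the centre sub-region (region 5):

```python
import numpy as np

def valley_ratio(patch):
    """Equation (2): the gray-level sums of all nine sub-regions divided
    by the gray-level sum of the centre sub-region (region 5).  A dark
    centre -- a gray-level valley such as an eyeball -- yields a large
    ratio.  patch is a 2-D array whose sides are multiples of 3."""
    h, w = patch.shape
    sh, sw = h // 3, w // 3
    gray = [patch[r * sh:(r + 1) * sh, c * sw:(c + 1) * sw].sum()
            for r in range(3) for c in range(3)]   # Gray(1)..Gray(9)
    return sum(gray) / gray[4]                     # region 5 is index 4
```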

The centreline between the two nostrils is extracted mainly from the peak information of the gradient integral projection of the nostrils on the X axis, as shown in Figure 6.

In Figure 6, L1 is the x coordinate of the left nostril, L2 that of the right nostril, and x0 that of the centreline between the two nostrils. A gray-level integral projection is then taken of the part of the face below the eyeballs, as shown in Figure 7, where y1 is the Y coordinate of the chin tip, y2 that of the mouth centre and y3 that of the nose tip; their X coordinate is x0 in every case.
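The integral projections used in these two steps are simple axis sums. A sketch follows; reducing the peak analysis to a plain argmax is a simplifying assumption, since the text works with the full projection curves:

```python
import numpy as np

def integral_projection(img, axis):
    """Sum projection of an image onto one axis: axis=0 gives the X-axis
    (per-column) projection used for the nostril centreline, axis=1 the
    Y-axis (per-row) projection used for nose tip, mouth and chin."""
    return np.asarray(img, dtype=float).sum(axis=axis)

def projection_peak(img, axis):
    """Index of the strongest projection bin (simplified peak picking)."""
    return int(np.argmax(integral_projection(img, axis)))
```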

This completes the localization of the left and right eyeballs, the nose tip, the mouth centre and the chin tip.

2) Extract the face components. Using the positions of the left and right eyeballs, nose tip, mouth centre and chin tip determined by the localization, five components — the bare face, eyebrows+eyes, eyes, nose tip and mouth — are cut out of the whole face, as shown in Figure 8. The image sizes are 90×120 for the bare face, 182×70 for eyebrows+eyes, 160×40 for the eyes, 78×32 for the nose tip and 90×68 for the mouth.
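The component sizes above can drive a simple cropping sketch. Only the sizes come from the text; centring each window on a single landmark is an assumption, as the exact crop geometry relative to the eyeballs, nose tip and mouth centre is not specified here:

```python
import numpy as np

# (width, height) of each component image, from the text
COMPONENT_SIZES = {
    "bare_face":  (90, 120),
    "brows_eyes": (182, 70),
    "eyes":       (160, 40),
    "nose_tip":   (78, 32),
    "mouth":      (90, 68),
}

def crop_centered(img, cx, cy, size):
    """Cut a (w, h) window centred on landmark (cx, cy), clamped so the
    window stays inside the image (img is indexed [row, column])."""
    w, h = size
    x0 = max(0, min(cx - w // 2, img.shape[1] - w))
    y0 = max(0, min(cy - h // 2, img.shape[0] - h))
    return img[y0:y0 + h, x0:x0 + w]
```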

3) Analyse the bare faces, eyebrows+eyes, eyes, nose tips and mouths extracted from the training set by PCA (Principal Component Analysis) to form the eigen bare-face, eigen (eyes+eyebrows), eigen-eyes, eigen-nose and eigen-mouth, and then compute the projection feature values of the five components both of the known faces and of the faces to be recognized.

More than 1000 face images are selected, and each is processed with steps 1) and 2) to form a bare-face training set, an (eyes+eyebrows) training set, an eye training set, a nose training set and a mouth training set. PCA is applied to each of these sets to form the eigenface, eigen (eyes+eyebrows), eigen-eyes, eigen-nose and eigen-mouth. Figure 9 illustrates forming eigenfaces from the training-set face images by PCA, where X is the number of pixels of the bare face, N the number of face images in the training set, and D the number of eigenfaces retained.

The eigen (eyes+eyebrows), eigen-eyes, eigen-nose and eigen-mouth are formed in the same way as the eigenfaces.

With the eigenface, eigen (eyes+eyebrows), eigen-eyes, eigen-nose and eigen-mouth available, the projection feature values of the bare face, eyes+eyebrows, eyes, nose and mouth are extracted, both for the known faces and for the faces to be recognized. The procedure is as follows.

Let the N face vectors form an n×N matrix, where n is the number of pixels of a face image and N the number of training face images. Then

          C = (1/N) X X^T,   X = (X_1, X_2, ..., X_N)             (3)

where X_k = (x_1k, x_2k, ..., x_nk), k = 1, 2, ..., N.

In computing the eigenvectors and eigenvalues of C, the dimension of X X^T is very large (n×n), so singular value decomposition is used: X^T X, which is only N×N, is computed instead, and the eigenvectors and eigenvalues of C are obtained from it indirectly. X X^T and X^T X have the same (nonzero) eigenvalues, and their eigenvectors are related by

          u_k = (1/√λ_k) X φ_k                                    (4)

In (4), u_k is an eigenvector of X X^T, φ_k the corresponding eigenvector of X^T X, and λ_k the shared eigenvalue of both X X^T and X^T X.

For the matrix R there exists a matrix Φ such that

          R × Φ = Φ × Λ                                           (5)

where Λ contains the eigenvalues of R, Λ = diag{λ_1, λ_2, ..., λ_N}. Equation (5) expands into N equations:

          R × φ_k = λ_k × φ_k,   k = 1, 2, ..., N                 (6)

The eigenvalues λ_k in (6) are obtained from

          |R − λ_k × I| = 0                                       (7)

Sort the obtained λ_k in descending order, take the D largest eigenvalues and keep the D corresponding eigenvectors φ_k.

Compute the eigenvectors u_k of the matrix C from equation (4).

Taking C in turn over the bare faces, eyes+eyebrows, eyes, noses and mouths separated from the training-set faces, equations (3)–(7) yield the eigenface, eigen (eyes+eyebrows), eigen-eyes, eigen-nose and eigen-mouth. This is the eigenface method of PCA (Principal Component Analysis).
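A compact sketch of the eigenface computation of equations (3)–(7), diagonalising the small N×N matrix X^T X in place of the n×n covariance. This is a NumPy illustration, not the patent's implementation; mean subtraction of the columns is assumed to be done by the caller:

```python
import numpy as np

def eigencomponents(X, D):
    """Eigenface-style PCA for one component type.

    X: n x N matrix whose N columns are mean-subtracted, flattened
    component images (n pixels each, n >> N).  Rather than diagonalising
    the n x n matrix X X^T, we diagonalise the N x N matrix X^T X, whose
    nonzero eigenvalues are the same; each small eigenvector phi_k maps
    to an eigenvector of X X^T via u_k = X phi_k / sqrt(lambda_k), as in
    equation (4).  Returns the D leading u_k as columns, plus their
    eigenvalues in descending order."""
    small = X.T @ X                        # N x N instead of n x n
    lam, phi = np.linalg.eigh(small)       # eigh returns ascending order
    order = np.argsort(lam)[::-1][:D]      # keep the D largest
    U = X @ phi[:, order] / np.sqrt(lam[order])
    return U, lam[order]
```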

4) Extract the projection feature values of the known faces' bare face, eyes+eyebrows, eyes, nose and mouth, and build the known-face database.

The bare face, eyes+eyebrows, eyes, nose and mouth are separated from each known face image with the methods of steps 1) and 2) to form the face features. The projection feature values of each component of the known face are then formed by equation (8), the projection-feature-value analysis of PCA (Principal Component Analysis):

          B = u_k^T × q,   k = 1, 2, ..., D                       (8)

In (8), q is in turn the known face's bare face, eyes+eyebrows, eyes, nose or mouth, and u_k^T the corresponding eigenface, eigen (eyes+eyebrows), eigen-eyes, eigen-nose or eigen-mouth obtained from the training set; D = 60.

The projection feature values of a known face's bare face, eyes+eyebrows, eyes, nose and mouth are concatenated in that order into the face's projection feature value string. On this basis a database for face recognition is built, containing the known face images (compressed with the JPEG method), the projection feature value strings of the known faces, and the personal identity files.
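Equations (8)/(9) and the concatenation step can be sketched as follows; the per-component eigenbases `U` are assumed to come from the PCA step, and the dictionary keys are illustrative names:

```python
import numpy as np

# fixed concatenation order from the text
ORDER = ["bare_face", "brows_eyes", "eyes", "nose_tip", "mouth"]

def project(U, q):
    """Equations (8)/(9): the D projection feature values U^T q of one
    mean-subtracted, flattened component image q (D = 60 in the text);
    U holds the retained eigencomponents as columns."""
    return U.T @ q

def feature_string(eigenbases, components):
    """Concatenate the five per-component projections, in the order bare
    face, eyes+eyebrows, eyes, nose, mouth, into one projection feature
    value string."""
    return np.concatenate([project(eigenbases[k], components[k])
                           for k in ORDER])
```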

5) Extract the projection feature values of the bare face, eyes+eyebrows, eyes, nose and mouth of the face to be recognized. The components are separated from the image with the methods of steps 1) and 2) to form the face features, and the projection feature values of each component are formed by equation (9), again the projection-feature-value analysis of PCA:

          A = u_k^T × q,   k = 1, 2, ..., D                       (9)

In (9), q is in turn the bare face, eyes+eyebrows, eyes, nose or mouth of the face to be recognized, and u_k^T the corresponding eigenface, eigen (eyes+eyebrows), eigen-eyes, eigen-nose or eigen-mouth obtained from the training set.

6) Recognize faces by global face recognition and local face recognition. The recognition process compares the features of the face to be recognized with the features stored in the database, computes the similarity, sorts the database faces by similarity in descending order, and displays in that order the retrieved persons' photographs, identity files and similarities to the query face, thereby identifying the person to be recognized or persons who resemble that person in appearance. The similarity between the face to be recognized and a known face is computed by equation (10):

          R = 1 − ‖A − B‖ / (‖A‖ + ‖B‖)                           (10)

In (10), A is the projection feature value string of the face to be recognized and B the projection feature value string of a known face in the database.
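Equation (10) translates directly into code:

```python
import numpy as np

def similarity(A, B):
    """Equation (10): R = 1 - ||A - B|| / (||A|| + ||B||).  R is 1 when
    the two projection feature value strings are identical and decreases
    toward 0 as they diverge."""
    A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
    return 1.0 - np.linalg.norm(A - B) / (np.linalg.norm(A) +
                                          np.linalg.norm(B))
```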

In global face recognition, the projection feature values of the bare face, eyes+eyebrows, eyes, nose and mouth are weighted in the ratio 5:6:4:3:2 — both for the known face and for the face to be recognized — and the similarity is then computed by equation (10).

In local face recognition, any combination of bare face, eyes+eyebrows, eyes, nose and mouth is selected interactively; the text counts the number of combinations as 5!, i.e. 120 face recognition modes in all. The projection feature values of the components are still weighted in the ratio 5:6:4:3:2.
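A sketch of the weighting step for both modes, under the reading that each component's projection feature values are scaled by its weight before a single equation-(10) comparison; the component names are illustrative:

```python
import numpy as np

WEIGHTS = {  # the 5:6:4:3:2 weighting from the text
    "bare_face": 5, "brows_eyes": 6, "eyes": 4, "nose_tip": 3, "mouth": 2,
}

def weighted_string(features, selected=None):
    """Scale each selected component's projection feature values by its
    weight and concatenate.  Global mode uses all five components; local
    mode passes any interactively chosen subset."""
    parts = selected or ["bare_face", "brows_eyes", "eyes",
                         "nose_tip", "mouth"]
    return np.concatenate([WEIGHTS[p] * np.asarray(features[p], dtype=float)
                           for p in parts])

def match(query, known, selected=None):
    """Equation (10) applied to the weighted feature strings."""
    A = weighted_string(query, selected)
    B = weighted_string(known, selected)
    return 1.0 - np.linalg.norm(A - B) / (np.linalg.norm(A) +
                                          np.linalg.norm(B))
```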

The structure of a face recognition system using the method of this embodiment is shown in Figure 10. The six steps of the method, plus image input, are programmed in VC++ to form the modules of Figure 10, installed on five PCs (four as servers, one as client); the system also includes a camera and a Microtek E6 scanner for image input.

The face recognition system built on this method uses a client/server architecture, with the face-matching algorithm embedded in the servers. The known-face database is built through the registration module. The client sends query requests and the features of the face to be recognized to the server through the recognition query module, and receives the query results returned by the server. To speed up recognition queries, several servers connected as a computer cluster perform the queries in parallel. The face input module accepts face images from the scanner or camera; after the quality assessment module and the feature extraction module, the recognition query module performs the recognition query.

The database used for recognition queries contains three parts: the personal identity files, the face images (JPEG-compressed) and the face projection feature values. The four clustered servers comprise one master server and three slave servers; to reduce network traffic the database is partitioned in advance, the one large database being split into four smaller ones stored on the four servers.

The master database server is a Dawning (Sugon) PC server configured with two PIII 500 CPUs, 512 MB of memory and two 9 GB SCSI disks, running Windows NT 4.0 and Oracle8.

The system uses three database slave servers, each configured with a PIII 733 MHz CPU, 512 MB of memory and a 15 GB IDE disk, running Windows NT 4.0 and the Oracle8 database.

The embodiment holds records of 92,000 known persons.

The client is configured with a PII 266 CPU and 128 MB of memory, running Windows 98. The whole system is connected by a network whose hub is a 100 Mb switch.

The face recognition system composed of the above method has been tested in actual operation with very clear results, as the following examples show.

Example 1: As shown in Figure 11, with local face recognition (bare face, eyes+eyebrows, eyes) by this method, the match between the suspect's composite face (left part of the figure) and the real suspect's face is 87.88%, and the real suspect ranks 182nd among the 92,000 face images. With global whole-face recognition, the real suspect ranks 16,617th among the 92,000 images (not shown in the figure).

Example 2: As shown in Figure 12, with global face recognition by this method, the queried face (left part of the figure) returns, from the 92,000-face image database, photographs of the queried person from different periods in the top three places.

Claims (3)

1. A multi-mode face identification method based on component principal component analysis, characterized by comprising the following steps:
1) locating the face image by template matching and projection histograms, determining a coarse face region and the reference positions of the left and right eyeballs, the nose, the mouth, and the chin apex;
2) extracting five kinds of face components from the whole face: naked face, eyebrows + eyes, eyes, nose, and mouth;
3) applying steps 1) and 2) to every face image in the training set to extract its five face components (naked face, eyebrows + eyes, eyes, nose, mouth), and, from the components extracted over the training set, using the eigenface method within principal component analysis to form the feature naked face, feature eyebrows + eyes, feature eyes, feature nose, and feature mouth, respectively;
4) applying steps 1) and 2) to every known person's face image to extract its five face components (naked face, eyebrows + eyes, eyes, nose, mouth), and using projection-feature-value analysis within principal component analysis to extract the projection feature values of the five face components of each known face; and establishing a database containing, for each known person, the projection feature values of the five face components, a compressed image of the face, and the person's identity record;
5) applying steps 1) and 2) to each face image to be identified to extract its five face components (naked face, eyebrows + eyes, eyes, nose, mouth), and using projection-feature-value analysis within principal component analysis to extract the projection feature values of the naked face, eyes + eyebrows, eyes, nose, and mouth of the face to be identified;
6) performing multi-mode whole-face recognition and partial-face recognition of the face to be identified against the known-face database by computing similarities and ranking the results by similarity.
2. The multi-mode face identification method based on component principal component analysis of claim 1, characterized in that said multi-mode whole-face recognition comprises comprehensive recognition combining the feature naked face, feature eyebrows + eyes, feature eyes, feature nose, and feature mouth.
3. The multi-mode face identification method based on component principal component analysis of claim 1, characterized in that said multi-mode partial-face recognition comprises recognition using a single one of the feature naked face, feature eyebrows + eyes, feature eyes, feature nose, and feature mouth, or recognition using any combination thereof.
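The eigenface training and projection-feature-value extraction recited in steps 3)–5) can be sketched in Python. This is a minimal illustration under the standard eigenface formulation; the Gram-matrix ("snapshot") trick for the eigendecomposition and the cosine similarity measure are assumptions, as the patent does not spell out these implementation details.

```python
import numpy as np

def train_eigenfaces(faces, k):
    """Compute the top-k eigenfaces for one face component.

    faces: (n, d) array, one flattened, size-normalized face component per row.
    Returns (mean, eigenfaces) where eigenfaces has shape (k, d).
    """
    mean = faces.mean(axis=0)
    A = faces - mean                        # centered data, (n, d)
    # Snapshot trick: eigendecompose the small (n, n) Gram matrix
    # instead of the huge (d, d) covariance matrix.
    gram = A @ A.T
    vals, vecs = np.linalg.eigh(gram)       # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]      # keep the k largest
    eigenfaces = (A.T @ vecs[:, order]).T   # map back to pixel space, (k, d)
    eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)
    return mean, eigenfaces

def project(face, mean, eigenfaces):
    """Projection feature values of one face component (steps 4 and 5)."""
    return eigenfaces @ (face - mean)

def similarity(w1, w2):
    """Cosine similarity between two projection-feature vectors (step 6)."""
    return float(w1 @ w2 / (np.linalg.norm(w1) * np.linalg.norm(w2) + 1e-12))
```

In the method claimed above, this training step would be run five times, once per component (naked face, eyebrows + eyes, eyes, nose, mouth), yielding five separate eigenface bases against which both known and query faces are projected.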
CNB011365773A 2001-10-19 2001-10-19 Main unit component analysis based multimode human face identification method Expired - Fee Related CN1137662C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB011365773A CN1137662C (en) 2001-10-19 2001-10-19 Main unit component analysis based multimode human face identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB011365773A CN1137662C (en) 2001-10-19 2001-10-19 Main unit component analysis based multimode human face identification method

Publications (2)

Publication Number Publication Date
CN1341401A CN1341401A (en) 2002-03-27
CN1137662C true CN1137662C (en) 2004-02-11

Family

ID=4673750

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB011365773A Expired - Fee Related CN1137662C (en) 2001-10-19 2001-10-19 Main unit component analysis based multimode human face identification method

Country Status (1)

Country Link
CN (1) CN1137662C (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100416592C (en) * 2005-12-23 2008-09-03 北京海鑫科金高科技股份有限公司 Human face automatic identifying method based on data flow shape
CN108334869A (en) * 2018-03-21 2018-07-27 北京旷视科技有限公司 Selection, face identification method and the device and electronic equipment of face component

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1296872C (en) * 2003-11-11 2007-01-24 易连科技股份有限公司 A Method of Quickly Establishing a Plane Model of Human Face Image
CN1627317A (en) * 2003-12-12 2005-06-15 北京阳光奥森科技有限公司 Method for obtaining image of human faces by using active light source
CN1319013C (en) * 2005-03-16 2007-05-30 沈阳工业大学 Combined recognising method for man face and ear characteristics
CN1319014C (en) * 2005-03-16 2007-05-30 沈阳工业大学 Personal identity recognising method based on pinna geometric parameter
JP4653606B2 (en) 2005-05-23 2011-03-16 株式会社東芝 Image recognition apparatus, method and program
CN100412885C (en) * 2005-05-23 2008-08-20 株式会社东芝 Image recognition apparatus and method
CN100412884C (en) * 2006-04-10 2008-08-20 中国科学院自动化研究所 Fast Face Detection Method Based on Local Descriptor
CN100444191C (en) * 2006-11-08 2008-12-17 中山大学 Multi-expression whole face contour detection method based on active shape model
JP4337064B2 (en) * 2007-04-04 2009-09-30 ソニー株式会社 Information processing apparatus, information processing method, and program
CN101305913B (en) * 2008-07-11 2010-06-09 华南理工大学 A video-based face beauty evaluation method
WO2011120206A1 (en) * 2010-04-02 2011-10-06 Nokia Corporation Methods and apparatuses for face detection
CN101819631B (en) * 2010-04-16 2012-12-26 深圳大学 Identity identification method and identity identification system
CN101853397A (en) * 2010-04-21 2010-10-06 中国科学院半导体研究所 A bionic face detection method based on human visual characteristics
CN102043966B (en) * 2010-12-07 2012-11-28 浙江大学 Face recognition method based on combination of partial principal component analysis (PCA) and attitude estimation
CN103065130B (en) * 2012-12-31 2015-12-09 华中科技大学 A kind of target identification method of three-dimensional fuzzy space
CN103268654A (en) * 2013-05-30 2013-08-28 苏州福丰科技有限公司 Electronic lock based on three-dimensional face identification
CN105095917B (en) * 2015-08-31 2019-08-06 小米科技有限责任公司 Image processing method, device and terminal
CN109993042A (en) * 2017-12-29 2019-07-09 国民技术股份有限公司 A face recognition method and device thereof
CN109446893A (en) * 2018-09-14 2019-03-08 百度在线网络技术(北京)有限公司 Face identification method, device, computer equipment and storage medium
CN109241943A (en) * 2018-10-09 2019-01-18 深圳市三宝创新智能有限公司 Non-alignment face feature extraction method, device, computer equipment and storage medium
CN109684917A (en) * 2018-11-14 2019-04-26 南宁学院 A kind of fast human face recognition
CN112766013A (en) * 2019-10-21 2021-05-07 深圳君正时代集成电路有限公司 Recognition method for performing multistage screening in face recognition
CN116386119A (en) * 2023-05-09 2023-07-04 北京维艾狄尔信息科技有限公司 Body-building footpath-based identity recognition method, body-building footpath-based identity recognition system, body-building footpath-based identity recognition terminal and storage medium

Also Published As

Publication number Publication date
CN1341401A (en) 2002-03-27

Similar Documents

Publication Publication Date Title
CN1137662C (en) Main unit component analysis based multimode human face identification method
Li et al. A comprehensive survey on 3D face recognition methods
Sawant et al. Age invariant face recognition: a survey on facial aging databases, techniques and effect of aging
Wang et al. Robust 3D face recognition by local shape difference boosting
Abaza et al. A survey on ear biometrics
Gao et al. Robust visual similarity retrieval in single model face databases
CN102938065B (en) Face feature extraction method and face identification method based on large-scale image data
CN102799870B (en) Based on the single training image per person method of the consistent LBP of piecemeal and sparse coding
CN102682309B (en) A face registration method and device based on template learning
CN107330383A (en) A kind of face identification method based on depth convolutional neural networks
CN101493887B (en) Eyebrow image segmentation method based on semi-supervised learning and hash index
CN1266642C (en) Multi-category-based human face classifying and identifying method
CN105138954A (en) Image automatic screening, query and identification system
CN105825163A (en) Retrieval system and method of face image
CN111126240A (en) A three-channel feature fusion face recognition method
CN108615007B (en) Three-dimensional face recognition method, device and storage medium based on feature tensor
CN106096517A (en) A kind of face identification method based on low-rank matrix Yu eigenface
CN1304114A (en) Identity identification method based on multiple biological characteristics
Angadi et al. Face recognition through symbolic modeling of face graphs and texture
CN104966075B (en) A kind of face identification method and system differentiating feature based on two dimension
CN105469117A (en) Image recognition method and device based on robust characteristic extraction
Whitehill et al. A discriminative approach to frame-by-frame head pose tracking
Abushanap et al. A survey of human face recognition for partial face view
Rajalakshmi et al. A review on classifiers used in face recognition methods under pose and illumination variation
Wong et al. Optimal linear combination of facial regions for improving identification performance

Legal Events

Date Code Title Description
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C06 Publication
PB01 Publication
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20040211

Termination date: 20101019