CN107424141B - Face image quality evaluation method based on probability block - Google Patents
- Publication number
- CN107424141B CN107424141B CN201710185733.0A CN201710185733A CN107424141B CN 107424141 B CN107424141 B CN 107424141B CN 201710185733 A CN201710185733 A CN 201710185733A CN 107424141 B CN107424141 B CN 107424141B
- Authority
- CN
- China
- Prior art keywords
- image
- probability
- block
- feature vector
- face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
Abstract
Description
Technical Field

The invention relates to the field of facial image quality assessment, and in particular to a probability-block-based facial image quality assessment method.

Background Art

With the rapid development of surveillance systems and the falling cost of deploying them, surveillance is applied in ever more aspects of daily life. However, low resolution, blurred images, large pose variation, and low contrast make video-based identity inference under surveillance conditions challenging [1].

In recent years, many methods have been proposed to handle face recognition in low-quality images [2]. One approach assumes that poor-quality images are outliers within a sequence; however, when most images in the sequence are of poor quality, such methods classify the good-quality images as outliers. Another approach is explicit subset selection: facial quality is assessed automatically for each image, and a subset consisting of high-quality images is selected. This improves recognition performance and reduces the overall computational load, but a good definition of "face quality" is hard to pin down.

ISO/IEC 19794-5 and ICAO 9303 are facial image standards used for facial quality evaluation; building on these standards, many methods have been proposed to analyze various facial and image attributes.

Since face recognition performance is affected by several factors simultaneously, detecting only one or two quality attributes is not sufficient for robust subset selection. Nasrollahi and Moeslund [3] proposed a weighted quality fusion method that combines out-of-plane rotation, sharpness, brightness, and image resolution; Rúa et al. [4] proposed a similar quality assessment method using asymmetry analysis and two sharpness measures; Hsu et al. [5] proposed learning fusion parameters over multiple quality scores to maximize correlation with the matching scores between face image pairs. Because these properties are measured separately and affect facial quality differently, such methods struggle to combine them into the single quality score needed for image selection. Luo [6] proposed a learning-based method in which a quality model is trained to match manually labeled quality scores; given the subjective nature of human labeling, however, this approach may not yield the best quality model for face recognition.

The main challenges currently facing facial image quality assessment are: alignment errors, pose variation, image shadows, and low image sharpness severely constrain the selection of the face image with the best overall quality; and cast shadows, automatic face localization errors, and related problems make face recognition in video considerably harder.
Summary of the Invention

The present invention provides a probability-block-based facial image quality assessment method. It avoids the impact on face recognition of alignment errors caused by cast shadows, blur, and automatic face localization, and improves recognition accuracy, as described below.

A probability-block-based facial image quality assessment method comprises the following steps:

normalizing the image with a logarithmic transformation, which amplifies low-intensity pixels and compresses high-intensity pixels, reducing intensity differences between skin tones;

normalizing each block of the transformed image to zero mean and unit variance, and extracting a two-dimensional DCT feature vector from each block;

computing the probability of each feature vector with a location-specific probabilistic model, and integrating the local probabilities into an overall quality score that reflects image quality.

The step of normalizing each block of the transformed image to zero mean and unit variance and extracting a two-dimensional DCT feature vector is specifically:

to accommodate contrast variation between face images, each block of the transformed image is normalized to zero mean and unit variance; from each block, a two-dimensional DCT feature vector is extracted, the 0th DCT component, which carries no information after normalization, is discarded, and the first d low-frequency components, which contain generic facial texture, are retained.

The step of computing the probability of each feature vector with a location-specific probabilistic model and integrating the local probabilities into an overall quality score reflecting image quality is specifically:

the model at each location is trained on frontal face images with frontal illumination and neutral expressions; all training face images are first scaled and aligned to a fixed size so that each eye lies at a fixed position;

the probability of each block's feature vector is computed with the location-specific probabilistic model, and, assuming the models at different locations are independent, the local probabilities are integrated into an overall quality score.

The beneficial effects of the technical solution provided by the present invention are:

1. The method has the best overall performance, identifying the most frontal, well-aligned, well-lit, and sharp images;

2. For each given set of face images, the proposed method ranks the images by quality; selecting a subset containing only the top-quality images significantly improves recognition accuracy.
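As a hedged illustration of point 2, the ranking-and-selection step can be sketched in a few lines; the image handles and score values below are hypothetical, and only the sort-by-score selection reflects the method described here:

```python
import numpy as np

def select_top_quality(images, scores, k):
    """Rank a face-image set by quality score and keep the top-k subset."""
    order = np.argsort(scores)[::-1]        # indices of highest scores first
    return [images[j] for j in order[:k]]

# Hypothetical per-frame quality scores (log-probabilities are negative):
scores = np.array([-310.2, -295.7, -401.9, -288.3])
subset = select_top_quality(["f0", "f1", "f2", "f3"], scores, k=2)
print(subset)   # the two highest-scoring frames
```

Only this high-quality subset is then passed to the face recognizer, which both improves accuracy and reduces the matching workload.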
Brief Description of the Drawings

Figure 1 illustrates the probability-block-based facial image quality assessment method.

Detailed Description

To make the objectives, technical solutions, and advantages of the present invention clearer, embodiments of the invention are described in further detail below.
Embodiment 1

To solve the problems above, face selection and face recognition must be carried out comprehensively, automatically, and accurately. Research shows that patch-based local analysis, by quantifying the similarity of a given face to a probabilistic face model, can output a single score per image without locating facial features and without resorting to fusion; the score reflects the degree of alignment error, pose variation, shadowing, and image sharpness. An embodiment of the present invention proposes a probability-block-based facial image quality assessment method; see Figure 1 and the description below.

101: normalize the image with a logarithmic transformation, amplifying low-intensity pixels and compressing high-intensity pixels to reduce intensity differences between skin tones;

102: normalize each block of the transformed image to zero mean and unit variance, and extract a two-dimensional DCT (Discrete Cosine Transform) feature vector;

103: compute the probability of each feature vector with a location-specific probabilistic model, and integrate the local probabilities into an overall quality score reflecting image quality.

In step 102, normalizing each block of the transformed image to zero mean and unit variance and extracting a two-dimensional DCT feature vector is specifically:

to accommodate contrast variation between face images, each block of the transformed image is normalized to zero mean and unit variance; from each block, a two-dimensional DCT feature vector is extracted, the 0th DCT component, which carries no information after normalization, is discarded, and the first d low-frequency components, which contain generic facial texture, are retained.

Further, in step 103, computing the probability of each feature vector with a location-specific probabilistic model and integrating the local probabilities into an overall quality score is specifically:

the model at each location is trained on frontal face images with frontal illumination and neutral expressions; all training face images are first scaled and aligned to a fixed size so that each eye lies at a fixed position;

the probability of each block's feature vector is computed with the location-specific probabilistic model, and, assuming the models at different locations are independent, the local probabilities are integrated into an overall quality score.

In summary, this embodiment of the present invention avoids the impact on face recognition of alignment errors caused by cast shadows, blur, and automatic face localization, and improves recognition accuracy.
Embodiment 2

The scheme of Embodiment 1 is described in further detail below with reference to the specific formulas and Figure 1:

201: normalize image I with a logarithmic transformation, amplifying low-intensity pixels and compressing high-intensity pixels to reduce intensity differences between skin tones;

For a given image I, nonlinear preprocessing (a logarithmic transformation) is applied to reduce the dynamic range of the data; the normalized image I_log is computed as:

I_log(r, c) = ln[I(r, c) + 1]

where I(r, c) is the pixel intensity at position (r, c). Logarithmic normalization amplifies low-intensity pixels and compresses high-intensity pixels, helping to reduce intensity differences between skin tones.
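The logarithmic normalization above can be sketched directly; this is a minimal sketch, and the example pixel values are illustrative only:

```python
import numpy as np

def log_normalize(image: np.ndarray) -> np.ndarray:
    """Logarithmic normalization: I_log(r, c) = ln[I(r, c) + 1].

    Amplifies low-intensity pixels and compresses high-intensity ones,
    reducing intensity differences between skin tones.
    """
    return np.log(image.astype(np.float64) + 1.0)

# A dark pixel (10) is amplified relative to a bright one (200):
# the 20x intensity ratio shrinks to roughly 2.2x after the transform.
face = np.array([[10, 200], [128, 255]], dtype=np.uint8)
print(log_normalize(face))
```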
202: normalize each block of the transformed image I_log to zero mean and unit variance, and extract a two-dimensional DCT feature vector;

The transformed image I_log is divided into N overlapping blocks; each block b_i has a size of n×n pixels and overlaps adjacent blocks by t pixels. To accommodate contrast variation between face images, each block is normalized to zero mean and unit variance.

From each block, a two-dimensional DCT feature vector is extracted; the 0th DCT component is discarded and the first d low-frequency components are retained. The low-frequency components preserve generic facial texture while largely omitting person-specific information; at the same time, cast shadows and changes in pose and alignment alter this local texture.
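The blocking, per-block normalization, and low-frequency 2-D DCT extraction can be sketched as follows. The block size n = 8, overlap t = 4, dimension d = 15, and the zig-zag ordering used to pick the low-frequency coefficients are illustrative assumptions; the text fixes none of these values:

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= np.sqrt(1.0 / n)
    C[1:, :] *= np.sqrt(2.0 / n)
    return C

def block_dct_features(img: np.ndarray, n: int = 8, t: int = 4, d: int = 15):
    """Split img into n x n blocks overlapping by t pixels, normalize each
    block to zero mean / unit variance, and keep the first d low-frequency
    2-D DCT coefficients (zig-zag order), skipping the 0th (DC) component."""
    C = dct_matrix(n)
    step = n - t
    # Zig-zag ordering of the n x n coefficient grid, lowest frequencies first.
    order = sorted(((r, c) for r in range(n) for c in range(n)),
                   key=lambda rc: (rc[0] + rc[1], rc[0]))
    feats = []
    for r0 in range(0, img.shape[0] - n + 1, step):
        for c0 in range(0, img.shape[1] - n + 1, step):
            b = img[r0:r0 + n, c0:c0 + n].astype(np.float64)
            b = (b - b.mean()) / (b.std() + 1e-8)   # zero mean, unit variance
            coef = C @ b @ C.T                       # 2-D DCT-II
            # skip the DC term (index 0), keep the next d coefficients
            feats.append([coef[r, c] for r, c in order[1:d + 1]])
    return np.asarray(feats)    # shape: (N blocks, d)
```

On a 32×32 face image with these settings, this yields N = 49 blocks of d = 15 coefficients each, one feature vector x_i per block location.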
203: compute the probability of each feature vector x_i with the location-specific probabilistic model, and integrate the local probabilities into an overall quality score reflecting image quality.

The model at each location is trained on frontal face images with frontal illumination and neutral expressions; all training face images are first scaled and aligned to a fixed size so that each eye lies at a fixed position.

For each block location i, the probability of the feature vector x_i is computed with the location-specific probabilistic model:

p(x_i) = (2π)^(−d/2) |Σ_i|^(−1/2) exp[−(1/2)(x_i − μ_i)^T Σ_i^(−1)(x_i − μ_i)]

where x_i is the feature vector, μ_i and Σ_i are the mean and covariance matrix of the normal distribution, T denotes transposition, and d is the number of low-frequency components.
Assuming the model at each location is independent, the overall probability quality score Q(I) of an image I consisting of N blocks is computed as:

Q(I) = ∏_{i=1}^{N} p(x_i)

The resulting quality score expresses the probabilistic similarity of a given face image to an "ideal" face image, where the "ideal" face is represented by a set of training images.

The higher the quality score, the better the image quality; conversely, the lower the score, the worse the image quality.
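Under the independence assumption, working with the sum of per-block Gaussian log-probabilities (a monotone transform of the product of probabilities, so it ranks images identically) avoids numerical underflow. The following is a minimal sketch in which the training routine, the covariance regularization term, and the array shapes are illustrative assumptions, not a fixed part of the method:

```python
import numpy as np

def train_location_models(training_feats: np.ndarray):
    """training_feats: (num_images, N_blocks, d) DCT features from frontal,
    well-lit, neutral-expression training faces. Returns, for each block
    location i, the tuple (mu_i, inv(Sigma_i), log|Sigma_i|)."""
    models = []
    for i in range(training_feats.shape[1]):
        X = training_feats[:, i, :]
        mu = X.mean(axis=0)
        d = X.shape[1]
        sigma = np.cov(X, rowvar=False) + 1e-6 * np.eye(d)  # regularized
        models.append((mu, np.linalg.inv(sigma), np.linalg.slogdet(sigma)[1]))
    return models

def quality_score(feats: np.ndarray, models) -> float:
    """Sum of per-block Gaussian log-probabilities: under independence,
    log Q(I) = sum_i log p(x_i | mu_i, Sigma_i)."""
    d = feats.shape[1]
    score = 0.0
    for x, (mu, inv_sigma, logdet) in zip(feats, models):
        diff = x - mu
        score += -0.5 * (d * np.log(2 * np.pi) + logdet
                         + diff @ inv_sigma @ diff)
    return score
```

A probe image whose block features sit close to the per-location training means receives a higher score than one whose features lie far away, which is exactly the ranking behavior described above.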
In summary, this embodiment of the present invention avoids the impact on face recognition of alignment errors caused by cast shadows, blur, and automatic face localization, and improves recognition accuracy.
Embodiment 3

The feasibility of the schemes in Embodiments 1 and 2 is verified below with specific examples, formulas, and Tables 1, 2, and 3:

The FERET and PIE databases are used to verify the method's performance in correctly selecting images with desired properties; the ChokePoint database is used to verify its effectiveness in selecting video-based image subsets for face recognition.

The FERET (Facial Recognition Technology) database is one of the most widely used face databases in face recognition. Its 'fb' subset contains blurred images and alignment-error images; the alignment errors arise from horizontal and vertical displacement (shifts of 0, ±2, ±4, ±6, ±8 pixels), in-plane rotation (0°, ±10°, ±20°, ±30°), and scale change (scaling factors of 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3). This subset is used to simulate blurred images as well as the four types of alignment-error images.

The PIE (Pose, Illumination, Expression) database, created at Carnegie Mellon University, contains face images with varying pose, illumination, and expression. Its illumination subset is used to evaluate performance under various lighting conditions. In this embodiment, the frontal-view images are divided into six subsets by light-source angle; subset 1 has the most frontal lighting, while subset 6 has the largest light-source angles (54°–67°). This subset is used to verify the performance of the proposed method in the presence of cast shadows.

The ChokePoint database is a video dataset designed for face recognition and verification experiments under real-world surveillance conditions. It consists of 25 subjects (19 male, 6 female) recorded at portal 1 and 29 subjects (23 male, 6 female) at portal 2, for a total of 48 video sequences and 64,204 face images.

The method is compared with the following three methods to evaluate how well the proposed quality assessment identifies the best-quality face images when presented with a mixture of good- and poor-quality images:

Asym_shrp [4]: a score-fusion method combining pixel-based asymmetry analysis with two sharpness analyses;

Gabor_asym [7]: an asymmetry analysis method based on Gabor features;

DFFS [8]: the classic Distance From Face Space method.

To demonstrate the method's effectiveness for quality assessment across different face recognition systems, and without loss of generality, Multi-Region Histograms (MRH) and Local Binary Patterns (LBP) are each used to extract features from every face image, and the Mutual Subspace Method (MSM) is used to classify the face sets.
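For illustration, a basic 8-neighbor LBP descriptor can be sketched as below; this is a generic LBP formulation and not necessarily the exact MRH/LBP configuration used in these experiments:

```python
import numpy as np

def lbp_8neighbor(img: np.ndarray) -> np.ndarray:
    """Basic 8-neighbor Local Binary Patterns: each interior pixel is coded
    by thresholding its 8 neighbors against the center value."""
    c = img[1:-1, 1:-1]
    neighbors = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                 img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                 img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, nb in enumerate(neighbors):
        codes |= ((nb >= c).astype(np.uint8) << bit)
    return codes

def lbp_histogram(img: np.ndarray) -> np.ndarray:
    """256-bin normalized LBP histogram used as a simple face descriptor."""
    h = np.bincount(lbp_8neighbor(img).ravel(), minlength=256)
    return h / h.sum()
```

Such per-image histograms (here over the whole face; region-wise variants are common) are what the quality-ranked image subsets feed into the downstream recognizer.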
Table 1

The shift, rotation, and scale-change results in Table 1 show that the method consistently achieves the best or near-best performance across most variations. Gabor_asym performs poorly at detecting images with various sharpness changes; Asym_shrp addresses this by combining asymmetry analysis with two image sharpness measures, yet its overall performance is still poor; DFFS handles alignment errors worse than the proposed method and fails to detect the images with the best sharpness.
Table 2

The results on the six PIE illumination subsets in Table 2 show that the method obtains good results even in the presence of cast shadows, with the proposed method and Asym_shrp achieving the best performance (frontally illuminated faces are marked as high quality); by contrast, Gabor_asym confuses subsets 1 and 4, and DFFS incorrectly labels most of the face images in subset 4, which contain significant shadows, as having the highest quality.
Table 3

The video-based face recognition verification results in Table 3 show that, as the subset size varies from 4 to 16 and regardless of which facial feature extraction algorithm is used, the proposed quality measure consistently yields better face verification performance than the other three methods. The experimental results verify the feasibility and superiority of the method.
References:

[1] Dong Haibo. Surveillance video image quality assessment [D]. Shanghai Jiao Tong University, 2013.

[2] Yang Yu. Research on key technologies of face recognition in video images [D]. Hebei University of Engineering, 2014.

[3] K. Nasrollahi and T. B. Moeslund. Face quality assessment system in video sequences. In BIOID, Lecture Notes in Computer Science (LNCS), volume 5372, pages 10–18, 2008.

[4] E. A. Rúa, J. L. A. Castro, and C. G. Mateo. Quality-based score normalization and frame selection for video-based person authentication. In BIOID, Lecture Notes in Computer Science (LNCS), pages 1–9, 2008.

[5] R.-L. V. Hsu, J. Shah, and B. Martin. Quality assessment of facial images. In Biometrics Symposium, 2006.

[6] H. Luo. A training-based no-reference image quality assessment algorithm. In International Conference on Image Processing (ICIP), pages 2973–2976, 2004.

[7] J. Sang, Z. Lei, and S. Z. Li. Face image quality evaluation for ISO/IEC standards 19794-5 and 29794-5. In ICB, Lecture Notes in Computer Science (LNCS), volume 5558, pages 229–238, 2009.

[8] H. Bae and S. Kim. Real-time face detection and recognition using hybrid-information extracted from face space and facial features. Image and Vision Computing, 23(13):1181–1191, 2005.
Those skilled in the art will understand that the accompanying drawing is merely a schematic diagram of a preferred embodiment, and that the embodiment numbers above are for description only and do not indicate the relative merits of the embodiments.

The above are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.
Claims (2)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710185733.0A CN107424141B (en) | 2017-03-26 | 2017-03-26 | Face image quality evaluation method based on probability block |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710185733.0A CN107424141B (en) | 2017-03-26 | 2017-03-26 | Face image quality evaluation method based on probability block |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107424141A CN107424141A (en) | 2017-12-01 |
CN107424141B true CN107424141B (en) | 2020-07-28 |
Family
ID=60424254
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710185733.0A Active CN107424141B (en) | 2017-03-26 | 2017-03-26 | Face image quality evaluation method based on probability block |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107424141B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108447058B (en) * | 2018-03-30 | 2020-07-14 | 北京理工大学 | Image quality evaluation method and system |
CN110070010B (en) * | 2019-04-10 | 2022-06-14 | 武汉大学 | Face attribute association method based on pedestrian re-recognition |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102855631A (en) * | 2012-08-23 | 2013-01-02 | 浙江大学 | Method for extracting visual energy information for image quality evaluation |
CN103996192A (en) * | 2014-05-12 | 2014-08-20 | 同济大学 | Non-reference image quality evaluation method based on high-quality natural image statistical magnitude model |
CN105976351A (en) * | 2016-03-31 | 2016-09-28 | 天津大学 | Central offset based three-dimensional image quality evaluation method |
CN106127741A (en) * | 2016-06-17 | 2016-11-16 | 西安电子科技大学 | Non-reference picture quality appraisement method based on improvement natural scene statistical model |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7418148B2 (en) * | 2003-09-29 | 2008-08-26 | Casio Computer Co., Ltd. | Image compression method and image compression device |
CN101179707A (en) * | 2007-09-21 | 2008-05-14 | 清华大学 | Multi-view collaborative target tracking measurement method for wireless network video images |
US9014471B2 (en) * | 2010-09-17 | 2015-04-21 | I.C.V.T. Ltd. | Method of classifying a chroma downsampling error |
US9129422B2 (en) * | 2013-02-25 | 2015-09-08 | Siemens Aktiengesellschaft | Combined surface reconstruction and registration for laparoscopic surgery |
US9256950B1 (en) * | 2014-03-06 | 2016-02-09 | Google Inc. | Detecting and modifying facial features of persons in images |
CN105225206A (en) * | 2014-11-22 | 2016-01-06 | 四川大学 | A kind of outdoor optical based on image is according to environment rebuilt method |
- 2017-03-26 CN CN201710185733.0A patent/CN107424141B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102855631A (en) * | 2012-08-23 | 2013-01-02 | 浙江大学 | Method for extracting visual energy information for image quality evaluation |
CN103996192A (en) * | 2014-05-12 | 2014-08-20 | 同济大学 | Non-reference image quality evaluation method based on high-quality natural image statistical magnitude model |
CN105976351A (en) * | 2016-03-31 | 2016-09-28 | 天津大学 | Central offset based three-dimensional image quality evaluation method |
CN106127741A (en) * | 2016-06-17 | 2016-11-16 | 西安电子科技大学 | Non-reference picture quality appraisement method based on improvement natural scene statistical model |
Also Published As
Publication number | Publication date |
---|---|
CN107424141A (en) | 2017-12-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Wong et al. | Patch-based probabilistic image quality assessment for face selection and improved video-based face recognition | |
Davison et al. | Objective micro-facial movement detection using facs-based regions and baseline evaluation | |
Chingovska et al. | The 2nd competition on counter measures to 2D face spoofing attacks | |
Komulainen et al. | Face spoofing detection using dynamic texture | |
Almaev et al. | Local gabor binary patterns from three orthogonal planes for automatic facial expression recognition | |
Singh et al. | Muhavi: A multicamera human action video dataset for the evaluation of action recognition methods | |
CN110490158B (en) | Robust face alignment method based on multistage model | |
Nishiyama et al. | Facial deblur inference using subspace analysis for recognition of blurred faces | |
Zhang et al. | Fast and robust occluded face detection in ATM surveillance | |
Dawoud et al. | Fast template matching method based optimized sum of absolute difference algorithm for face localization | |
Faraji et al. | Face recognition under varying illuminations using logarithmic fractal dimension-based complete eight local directional patterns | |
Sae-Bae et al. | Towards automatic detection of child pornography | |
Skodras et al. | Precise localization of eye centers in low resolution color images | |
Yap et al. | A survey on facial wrinkles detection and inpainting: datasets, methods, and challenges | |
Rukhiran et al. | Effecting of environmental conditions to accuracy rates of face recognition based on IoT solution | |
Kroon et al. | Eye localization in low and standard definition content with application to face matching | |
CN103605993A (en) | Image-to-video face identification method based on distinguish analysis oriented to scenes | |
Juneja | Multiple feature descriptors based model for individual identification in group photos | |
Li et al. | Face liveness detection and recognition using shearlet based feature descriptors | |
CN107424141B (en) | Face image quality evaluation method based on probability block | |
Annaby et al. | Fast template matching and object detection techniques using φ-correlation and binary circuits | |
Lin et al. | A feature-based gender recognition method based on color information | |
TW201118763A (en) | Face recognition method based on SIFT features and head pose estimation | |
Padole et al. | Compensating for pose and illumination in unconstrained periocular biometrics | |
Kotwal et al. | CNN patch pooling for detecting 3D mask presentation attacks in NIR |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |