
CN108090873A - Pyramid face image super-resolution reconstruction method based on regression model - Google Patents


Info

Publication number
CN108090873A
Authority
CN
China
Prior art keywords
image
resolution
low
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711381261.2A
Other languages
Chinese (zh)
Other versions
CN108090873B (en)
Inventor
于明
熊敏
刘依
郭迎春
于洋
师硕
毕容甲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei University of Technology
Original Assignee
Hebei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei University of Technology
Priority to CN201711381261.2A
Publication of CN108090873A
Application granted
Publication of CN108090873B
Legal status: Expired - Fee Related
Anticipated expiration


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 — Geometric image transformations in the plane of the image
    • G06T 3/40 — Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4053 — Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T 3/4076 — Scaling based on super-resolution using the original low-resolution images to iteratively correct the high-resolution images
    • G06T 3/4007 — Scaling based on interpolation, e.g. bilinear interpolation

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides a pyramid face image super-resolution reconstruction method based on a regression model, relating to image enhancement or restoration. Exploiting the non-local similarity of images, the method searches the feature image corresponding to a low-resolution face image of the test set for blocks similar to the block being reconstructed, obtains the set of positions of all similar blocks, and takes the face image blocks of all low-resolution training images at those positions as the low-resolution training set for the low-resolution face image block of the test set. A constraint is then constructed from the sum of the distances between the feature image blocks of the low-resolution face image blocks of the test set and the feature image blocks of the low-resolution face image blocks of the training set, and the distances between the feature image blocks of the interpolation-enlarged face image blocks of the test set and the feature image blocks of the high-resolution face image blocks of the training set. The method overcomes many defects of the prior art in face image reconstruction.

Description

Pyramid Face Image Super-Resolution Reconstruction Method Based on a Regression Model

Technical Field

The technical solution of the present invention relates to image enhancement or restoration, and in particular to a pyramid face image super-resolution reconstruction method based on a regression model.

Background Art

During image acquisition, limitations of the imaging system and environmental factors cause the acquired image to deviate from the real scene. Improving the spatial resolution and quality of images has long been an important problem that image acquisition technology seeks to solve. Although the hardware of imaging systems keeps improving, raising image quality by upgrading the hardware is very costly. With hardware already at a high level, improving image quality through software has become a cost-effective alternative, and image super-resolution (SR) reconstruction is an effective method of this kind.

Broadly speaking, image super-resolution reconstruction methods fall into methods based on multiple images and methods based on a single image. Because the latter are widely applicable and learn well, they have become a research focus in recent years. For example, Reference 1 proposes a multi-layer face super-resolution method based on position-constrained neighborhood embedding and intermediate dictionaries: it uses the manifold constraint of the local geometric structure of image blocks for super-resolution reconstruction, captures the degradation process of the image, and enhances the consistency between the reconstructed high-resolution face image and the original high-resolution image by constructing intermediate dictionaries, thereby reconstructing high-quality face images. However, directly applying a mapping learned from low-resolution images to high-resolution images is not entirely appropriate, and the differences between low-resolution and high-resolution images easily introduce reconstruction errors. CN103824272B discloses a face super-resolution reconstruction method based on K-nearest-neighbor re-identification, which uses the geometric information of both the low-resolution and the high-resolution manifolds to update the identified neighboring image blocks and computes the weight coefficients from the re-identified neighbors; the information provided by the high-resolution images compensates for the lack of information in the low-resolution images, and the quality of the reconstructed image improves considerably. However, after one K-nearest-neighbor search the method performs a second neighbor search over all high-resolution image blocks and then selects the most frequently recurring blocks as training blocks; the two search passes and one comparison pass reduce the efficiency of the reconstruction. Both of these neighborhood-embedding face super-resolution methods suffer from blurring caused by over-fitting or under-fitting.
To address this problem, sparse priors have been introduced into face super-resolution reconstruction. CN103325104A proposes a face image super-resolution reconstruction method based on iterative sparse representation: the estimated high-resolution face image is first expressed linearly with a high-resolution face image dictionary, the high-resolution estimate is then refined by local linear regression, and the result finally converges iteratively to a stable value to give the reconstructed face image. This solves the problem of inaccurate face alignment in face super-resolution, but the dictionary training and the iterative convergence take a long time, leading to poor real-time performance. The reconstructed face images produced by the above neighborhood-embedding and sparse-representation methods still cannot meet the demand for high-quality images. To make full use of the similarity among different face images, Reference 2 proposes a position-patch-based face super-resolution reconstruction method that assumes image blocks at the same position in different face images share the same structure and reconstructs the face image directly from the set of training blocks at that position; however, because it uses all blocks at the same position, it ignores the influence of the few blocks that differ greatly, and the reconstructed image is prone to local blurring. Reference 3 adds a low-rank constraint to select training blocks of the same category as the input block for reconstruction, but it relies too heavily on the training set and does not exploit the properties of the input image itself. Reference 4 builds a weight matrix from the distance between the input block and the training blocks at the same position to solve for the mapping matrix; however, these position-patch methods learn the mapping between face image blocks only from low-resolution blocks, without considering the relationships among high-resolution face image blocks, which can degrade the reconstruction, and the reconstruction process does not reflect the degradation of the image, so the reconstructed high-resolution face images exhibit local ghosting.

In short, the prior art in face image super-resolution reconstruction does not address how the differences among high-resolution images affect the quality of the reconstructed image, the reconstruction process cannot faithfully reflect the degradation of the face image, and the reconstructed face images still exhibit local ghosting.

The sources of the prior-art literature referred to above are as follows:

Reference 1: Jiang, J., Hu, R., Wang, Z., & Han, Z. (2014). Face super-resolution via multilayer locality-constrained iterative neighbor embedding and intermediate dictionary learning. IEEE Transactions on Image Processing, 23(10), 4220-4231.

Reference 2: Ma, X., Zhang, J., & Qi, C. (2010). Hallucinating face by position-patch. Pattern Recognition, 43(6), 2224-2236.

Reference 3: Gao, G., Jing, X. Y., Huang, P., Zhou, Q., Wu, S., & Yue, D. (2016). Locality-constrained double low-rank representation for effective face hallucination. IEEE Access, 4, 8775-8786.

Reference 4: Jiang, J., Chen, C., Ma, J., Wang, Z., Wang, Z., & Hu, R. (2017). SRLSP: A face image super-resolution algorithm using smooth regression with local structure prior. IEEE Transactions on Multimedia, 19(1), 27-40.

Summary of the Invention

The technical problem to be solved by the present invention is to provide a pyramid face image super-resolution reconstruction method based on a regression model. Gradient features are extracted from the high- and low-resolution face images of the training set to obtain the corresponding gradient feature images, and the high- and low-resolution training images and their gradient feature images are each partitioned into overlapping blocks. Gradient features are extracted from the low-resolution face image of the test set to obtain its feature image, in which non-local similarity is used to search for blocks similar to the block being reconstructed; the low-resolution face image blocks of the training set at the same positions as the similar blocks are then used to enlarge the training set for reconstructing that face image block. During reconstruction, a constraint is built from the distances between the feature image blocks corresponding to the low-resolution face image block of the test set and the feature image blocks corresponding to the low-resolution face image blocks of the training set, together with the distances between the feature image blocks corresponding to the interpolation-enlarged face image block of the test set and those corresponding to the high-resolution face image blocks of the training set, making the regression of the reconstruction smoother. Finally, by partitioning the low-resolution face image of the test set and the enlarged high-resolution face image at different block scales, a pyramid model is constructed to achieve face super-resolution reconstruction. The method of the present invention overcomes the problems of the prior art that, during face image reconstruction, the influence of the differences among the high-resolution images of the training set on the quality of the reconstructed image is not considered, that the reconstruction process cannot faithfully reflect the degradation of the face image, and that the reconstructed face image still exhibits local ghosting.

The technical solution adopted by the present invention to solve this technical problem is a pyramid face image super-resolution reconstruction method based on a regression model, with the following specific steps:

A. Training process for the low-resolution and high-resolution face image sets of the training set:

Step 1: Expand the low-resolution and high-resolution face image sets of the training set:

Exploiting the left-right symmetry of face images, the low-resolution and high-resolution face image sets of the training set are expanded by horizontal flipping; the image size is unchanged and the number of images is doubled, yielding the expanded low-resolution face image set P_l and the expanded high-resolution face image set P_h, where l denotes a low-resolution image of size a*b pixels, h denotes a high-resolution image of size (d*a)*(d*b) pixels, d is the magnification factor, and M is the number of images;
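
A minimal sketch of this augmentation step, assuming the training images are available as NumPy arrays; the function and variable names below are illustrative and not taken from the patent:

```python
import numpy as np

def augment_by_flipping(images):
    """Expand a list of face images (2-D NumPy arrays) by appending a
    horizontally flipped copy of each one; sizes stay unchanged and the
    number of images doubles."""
    return images + [np.fliplr(img) for img in images]

# P_l_aug = augment_by_flipping(P_l_images)   # expanded low-resolution set
# P_h_aug = augment_by_flipping(P_h_images)   # expanded high-resolution set
```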

Step 2: Extract gradient features from the expanded low-resolution face image set P_l and high-resolution face image set P_h:

For each face image in the expanded low-resolution face image set P_l and high-resolution face image set P_h, the first-order and second-order gradients are extracted as components to form a gradient feature, yielding the low-resolution face gradient feature image set G_l for P_l and the high-resolution face gradient feature image set G_h for P_h;
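
The patent does not spell out the gradient operators; a common choice, sketched below under that assumption, is to stack horizontal and vertical first derivatives together with a Laplacian as the second-order component:

```python
import numpy as np
from scipy.ndimage import convolve, laplace

def gradient_feature_image(img):
    """Stack first-order gradients and a second-order gradient (Laplacian)
    of a grayscale image into one feature image.  The exact operators are
    an assumption; the patent only states first- and second-order gradients."""
    img = img.astype(np.float64)
    gx = convolve(img, np.array([[-1.0, 0.0, 1.0]]))      # horizontal first derivative
    gy = convolve(img, np.array([[-1.0], [0.0], [1.0]]))  # vertical first derivative
    g2 = laplace(img)                                     # second-order component
    return np.stack([gx, gy, g2], axis=-1)                # H x W x 3 feature image
```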

Step 3: Partition the expanded high-resolution face image set P_h and its corresponding high-resolution face gradient feature image set G_h into blocks:

Each face image in the expanded high-resolution face image set P_h and the corresponding high-resolution face gradient feature image are partitioned into overlapping blocks of size R_1*R_1 pixels, where R_1 is 8 to 12. The current block overlaps its upper and lower neighboring blocks by K_1 rows of pixels and its left and right neighboring blocks by K_1 columns of pixels, with 0 ≤ K_1 ≤ R_1/2. All blocks of each high-resolution face image and of its corresponding gradient feature image are then numbered from top to bottom and from left to right as 1, 2, ..., U, where U is the total number of blocks per image; blocks with the same number are called blocks at the same position. This completes the partitioning of the expanded high-resolution face image set P_h and its corresponding high-resolution face gradient feature image set G_h;
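
A sketch of the overlapping partition described above, numbered top to bottom and left to right; the step size R_1 - K_1 and the handling of the image border are assumptions where the patent leaves room:

```python
def extract_overlapping_blocks(img, block, overlap):
    """Split a 2-D (or H x W x C) array into overlapping blocks of size
    block x block, adjacent blocks sharing `overlap` rows/columns.
    Blocks are returned in top-to-bottom, left-to-right order, so the list
    index plays the role of the block number j."""
    step = block - overlap
    h, w = img.shape[:2]
    blocks, positions = [], []
    for top in range(0, h - block + 1, step):
        for left in range(0, w - block + 1, step):
            blocks.append(img[top:top + block, left:left + block])
            positions.append((top, left))
    return blocks, positions
```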

Step 4: Partition the expanded low-resolution face image set P_l and its corresponding low-resolution face gradient feature image set G_l into blocks:

In the same way as for the high-resolution face image set P_h, each low-resolution face image in the expanded low-resolution face image set P_l and the corresponding low-resolution face gradient feature image are partitioned into overlapping blocks of size (R_1/d)*(R_1/d) pixels, where R_1 is 8 to 12. The current block overlaps its upper and lower neighboring blocks by K_1/d rows of pixels and its left and right neighboring blocks by K_1/d columns of pixels. All blocks of each low-resolution face image and of its corresponding gradient feature image are then numbered from top to bottom and from left to right as 1, 2, ..., U, where U is the total number of blocks per image; blocks with the same number are called blocks at the same position. This completes the partitioning of the expanded low-resolution face image set P_l and its corresponding low-resolution face gradient feature image set G_l;

This completes Part A, the training process for the low-resolution face image set P_l and the high-resolution face image set P_h of the training set;

B. Reconstruction process for a low-resolution face image of the test set:

Step 5: Enlarge the low-resolution face image of the test set to obtain an enlarged high-resolution face image:

The low-resolution face image to be tested is input into the computer as the low-resolution face image I_tl of the test set. A low-resolution face image of the test set is enlarged by bicubic interpolation, and the enlarged image is taken as the enlarged high-resolution face image I_th of the test set, so that I_th has the same size as the high-resolution face images of the training set;
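
A sketch of the bicubic enlargement to the training-set high-resolution size; the use of OpenCV here is an assumption, any bicubic resampler would do:

```python
import cv2

def upscale_bicubic(lr_img, d):
    """Enlarge a low-resolution test image by the factor d with bicubic
    interpolation so it matches the high-resolution training size."""
    h, w = lr_img.shape[:2]
    return cv2.resize(lr_img, (w * d, h * d), interpolation=cv2.INTER_CUBIC)

# I_th = upscale_bicubic(I_tl, d=2)   # d = 2 in Embodiment 1
```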

Step 6: Extract gradient features from the low-resolution face image I_tl and the enlarged high-resolution face image I_th of the test set:

The first-order and second-order gradients of the low-resolution face image I_tl and of the enlarged high-resolution face image I_th obtained in Step 5 are extracted as components of their respective gradient features, yielding the corresponding low-resolution face gradient feature image g_tl and high-resolution face gradient feature image g_th;

Step 7: Partition the enlarged high-resolution face image I_th of the test set and its corresponding high-resolution face gradient feature image g_th into blocks:

The enlarged high-resolution face image I_th obtained in Step 5 and the corresponding high-resolution face gradient feature image g_th obtained in Step 6 are each partitioned into overlapping blocks of size R_1*R_1 pixels, where R_1 is 8 to 12, so that the block size equals that of the high-resolution face images of the training set. The current block overlaps its upper and lower neighboring blocks by K_1 rows of pixels and its left and right neighboring blocks by K_1 columns of pixels. All blocks of each face image are then numbered from top to bottom and from left to right as 1, 2, ..., U, where U is the total number of blocks per image; blocks with the same number are called blocks at the same position;

Step 8: Partition the low-resolution face image I_tl of the test set and its corresponding low-resolution face gradient feature image g_tl into blocks:

The low-resolution face image I_tl obtained in Step 5 and the corresponding low-resolution face gradient feature image g_tl obtained in Step 6 are each partitioned into overlapping blocks of size (R_1/d)*(R_1/d) pixels, where R_1 is 8 to 12, so that the block size equals that of the low-resolution face images of the training set. The current block overlaps its upper and lower neighboring blocks by K_1/d rows of pixels and its left and right neighboring blocks by K_1/d columns of pixels. All blocks of each face image are then numbered from top to bottom and from left to right as 1, 2, ..., U, where U is the total number of blocks per image; blocks with the same number are called blocks at the same position;

Step 9: Use the low-resolution face gradient feature image g_tl corresponding to the low-resolution face image I_tl of the test set to find the numbers of the similar blocks:

The image blocks of the low-resolution face image I_tl of the test set obtained in Step 8 are reconstructed from top to bottom and from left to right. Taking the reconstruction of the j-th block as an example, the non-local similarity of the low-resolution face gradient feature image g_tl corresponding to I_tl is used to find blocks similar to the j-th block within I_tl. Let g_tl,j be the j-th face gradient feature image block of g_tl. All face image blocks of g_tl are scanned from top to bottom and from left to right, excluding the j-th block itself, and the Euclidean distance between each scanned face gradient feature image block and the j-th face gradient feature image block is computed. The distances of all low-resolution face gradient feature image blocks are then sorted in ascending order, and the n blocks with the smallest distances are taken as the similar image blocks of the j-th low-resolution face gradient feature image block g_tl,j. Let the set of numbers of these similar low-resolution face gradient feature image blocks be [v_1, v_2, ..., v_n], with a corresponding set of low-resolution face gradient feature image blocks. This completes the process of finding the numbers of the similar blocks using the low-resolution face gradient feature image g_tl corresponding to the low-resolution face image I_tl of the test set;
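
A sketch of this non-local search for the n most similar blocks of g_tl,j among the test image's own gradient feature blocks; block arrays are flattened before the Euclidean distance is taken:

```python
import numpy as np

def find_similar_block_ids(feature_blocks, j, n):
    """Return the indices [v_1, ..., v_n] of the n feature blocks closest
    in Euclidean distance to block j, excluding block j itself.
    `feature_blocks` is the list of gradient feature blocks of g_tl."""
    query = feature_blocks[j].ravel()
    dists = []
    for idx, blk in enumerate(feature_blocks):
        if idx == j:
            continue
        dists.append((np.linalg.norm(blk.ravel() - query), idx))
    dists.sort(key=lambda t: t[0])
    return [idx for _, idx in dists[:n]]
```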

Step 10: Use the position numbers of the similar blocks to form the set of image blocks at those numbers from all images of the expanded low-resolution face gradient feature image set G_l of the training set:

For the i-th (i = 1, 2, ..., M) face image of the expanded low-resolution face gradient feature image set G_l of the training set obtained in Step 2, the face feature image block numbered j and the image blocks whose numbers belong to the similar-block number set [v_1, v_2, ..., v_n] of Step 9 form a set; gathering these blocks over all images of G_l gives the block set of position j and of the similar positions [v_1, v_2, ..., v_n].

For ease of notation this set is written in abbreviated form in what follows; its size M*(1+n) indicates that it is drawn from M face images, each contributing 1+n image blocks;

Step 11: Use the position numbers of the similar blocks to form the set of image blocks at those numbers from all images of the expanded high-resolution face gradient feature image set G_h of the training set:

For the i-th (i = 1, 2, ..., M) image of the expanded high-resolution face gradient feature image set G_h of the training set obtained in Step 2, the image blocks numbered j and those whose numbers belong to the similar-block number set [v_1, v_2, ..., v_n] of Step 9 form a set; gathering these blocks over all images of G_h gives the corresponding block set.

For ease of notation, this set is likewise written in abbreviated form below;

Step 12: Use the position numbers of the similar blocks to form the set of image blocks at those numbers from all face images of the expanded low-resolution face image set P_l:

For the i-th (i = 1, 2, ..., M) face image of the expanded low-resolution face image set P_l of Step 1, the image blocks numbered j and those whose numbers belong to the similar-block number set [v_1, v_2, ..., v_n] of Step 9 form a set; gathering these blocks over all images of P_l gives the corresponding block set.

For ease of notation, this set is likewise written in abbreviated form below;

Step 13: Use the position numbers of the similar blocks to form the set of image blocks at those numbers from all face images of the expanded high-resolution face image set P_h:

For the i-th (i = 1, 2, ..., M) face image of the expanded high-resolution face image set P_h of Step 1, the image blocks numbered j and those whose numbers belong to the similar-block number set [v_1, v_2, ..., v_n] of Step 9 form a set; gathering these blocks over all images of P_h gives the corresponding block set.

For ease of notation, this set is likewise written in abbreviated form below;

Step 14: Compute the weight matrix corresponding to the j-th face image block:

First, formula (9) is used to compute the set of Euclidean distances between the j-th block g_tl,j of the gradient feature image corresponding to the low-resolution face image I_tl of the test set in Step 8 and all face image blocks of the set obtained in Step 10. Then, formula (10) is used to compute the set of Euclidean distances between the j-th block g_th,j of the high-resolution face gradient feature image g_th corresponding to the enlarged high-resolution face image I_th of the test set in Step 7 and all image blocks of the set obtained in Step 11.

After these distances are obtained, the weight matrix W_j of the j-th block is computed from them by formula (11), in which α is a smoothing factor;
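
The displayed formulas (9)-(11) are not reproduced in this text; the sketch below therefore assumes a common form of such a weighting, an exponential kernel over the sum of the low- and high-resolution feature distances with smoothing factor α, placed on the diagonal of W_j. Treat it as an assumed reading, not the patent's exact formula:

```python
import numpy as np

def weight_matrix(lr_feat_block, lr_train_blocks, hr_feat_block, hr_train_blocks, alpha):
    """Diagonal weight matrix W_j built from the Euclidean distances of
    g_tl,j to the low-resolution training feature blocks (formula (9)) and
    of g_th,j to the high-resolution training feature blocks (formula (10)).
    The exponential combination below is an assumed reading of formula (11)."""
    d_l = np.array([np.linalg.norm(lr_feat_block.ravel() - b.ravel()) for b in lr_train_blocks])
    d_h = np.array([np.linalg.norm(hr_feat_block.ravel() - b.ravel()) for b in hr_train_blocks])
    return np.diag(np.exp(-(d_l + d_h) / alpha))
```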

Step 15: Compute the mapping matrix corresponding to the j-th face image block:

The mapping from the j-th low-resolution face image block of the training set to the corresponding j-th high-resolution face image block is first written as a simple linear mapping, formula (12), where A_j denotes the mapping matrix of the j-th face image block and T denotes matrix transposition; the optimal mapping matrix is obtained from formula (13):

Since the relationship between high-resolution and low-resolution face image blocks is not a simple mapping, the distance matrix obtained in Step 14 is used to impose a smoothness constraint on formula (13), yielding the smooth regression formula (14), in which tr(·) denotes the trace of a matrix. To make the mapping smoother, a regularization term is added, giving formula (15), where F denotes the Frobenius norm and λ balances the reconstruction error against the sparsity of A_j. Simplification then yields the mapping matrix corresponding to the j-th block, formula (16), in which E denotes the identity matrix;
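
Formulas (12)-(16) are likewise not reproduced here. Under the stated reading (a linear map A_j from low- to high-resolution blocks, weighted by W_j and Frobenius-regularized by λ), the minimizer has the familiar weighted ridge-regression form sketched below; the exact arrangement of transposes in the patent's formula (16) may differ:

```python
import numpy as np

def mapping_matrix(X_lr, Y_hr, W, lam):
    """Weighted, regularized least-squares estimate of the mapping A_j.

    X_lr : p x N matrix, low-resolution training blocks as columns
    Y_hr : q x N matrix, corresponding high-resolution blocks as columns
    W    : N x N diagonal weight matrix from Step 14
    lam  : regularization weight (lambda in formula (15))

    Returns A_j such that a high-resolution block is predicted as A_j.T @ x.
    The closed form A_j = (X W X^T + lam*E)^(-1) X W Y^T is an assumed
    reading of the patent's formula (16)."""
    p = X_lr.shape[0]
    E = np.eye(p)
    return np.linalg.solve(X_lr @ W @ X_lr.T + lam * E, X_lr @ W @ Y_hr.T)

# usage sketch: y_hat = mapping_matrix(X, Y, W_j, lam).T @ x_tl_j
```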

Step 16: Reconstruct the low-resolution face image blocks of the test set to obtain high-resolution face image blocks:

Using the mapping matrix obtained in Step 15, the high-frequency information of the high-resolution face image block corresponding to the face image block I_tl,j of the low-resolution face image I_tl of the test set is obtained; this high-frequency information is then interpolated into I_tl,j to give the reconstructed face image block I'_th,j;

Step 17: Combine all reconstructed image blocks into the reconstructed high-resolution face image:

All reconstructed face image blocks are combined according to their numbers, from top to bottom and from left to right; overlapping regions are averaged during the combination, yielding the reconstructed high-resolution face image I'_th;
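
A sketch of this recomposition: each reconstructed block is accumulated at its recorded position and overlapping pixels are averaged by dividing by a count map:

```python
import numpy as np

def assemble_blocks(blocks, positions, out_shape):
    """Place reconstructed blocks back at their (top, left) positions and
    average the pixels where neighboring blocks overlap."""
    acc = np.zeros(out_shape, dtype=np.float64)
    cnt = np.zeros(out_shape, dtype=np.float64)
    for blk, (top, left) in zip(blocks, positions):
        h, w = blk.shape[:2]
        acc[top:top + h, left:left + w] += blk
        cnt[top:top + h, left:left + w] += 1.0
    cnt[cnt == 0] = 1.0          # guard against uncovered border pixels
    return acc / cnt
```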

Step 18: Construct the pyramid face super-resolution reconstruction model:

(18.1) The image I'_th obtained in Step 17 is reduced by nearest-neighbor interpolation to obtain the reduced low-resolution face image I'_tl, so that the reduced face image has the same size as I_tl;

(18.2) All low-resolution face images of the training set are reconstructed with Steps 1 to 17. The reconstruction of the i-th low-resolution face image of the training set proceeds as follows: it is taken as the low-resolution face image of the test set, the low- and high-resolution image sets of the training set serve as the training set, a high-resolution image is reconstructed with Steps 1 to 17, and nearest-neighbor interpolation is then used to reduce its dimensions;

(18.3) The block size of the high-resolution face images is set to R_2*R_2 pixels, where R_2 is 6 to 10 and R_2 ≠ R_1, and the number of overlapping pixels between high-resolution image blocks is K_2; the block size of the low-resolution face images is (R_2/d)*(R_2/d) pixels, where d is the reduction factor and takes the same value as in Step 1, and the number of overlapping pixels between low-resolution image blocks is K_2/d. With the image I'_tl obtained in (18.1) as the low-resolution face image of the test set and the images obtained in (18.2) as the training set, the face image super-resolution reconstruction process is carried out once more to obtain the final reconstructed face image;
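
A high-level sketch of this two-level pyramid, assuming a helper reconstruct(...) that implements Steps 5-17 for a given block size and overlap; the helper and its signature are illustrative, not defined by the patent:

```python
def nearest_neighbor_downscale(img, d):
    """Reduce an image by the factor d with nearest-neighbor sampling."""
    return img[::d, ::d]

def pyramid_reconstruct(reconstruct, I_tl, train_lr, train_hr, d=2,
                        R1=8, K1=4, R2=6, K2=4):
    """Two-level pyramid of Step 18.  `reconstruct(lr, train_lr, train_hr,
    block, overlap)` is assumed to implement Steps 5-17 for one block size."""
    # First level: original training pairs, block size R1, overlap K1.
    I_th1 = reconstruct(I_tl, train_lr, train_hr, block=R1, overlap=K1)
    I_tl1 = nearest_neighbor_downscale(I_th1, d)                      # (18.1)

    # (18.2): reconstruct every low-resolution training image the same way,
    # then downscale, giving the training pairs for the second level.
    train_hr2 = [reconstruct(x, train_lr, train_hr, block=R1, overlap=K1)
                 for x in train_lr]
    train_lr2 = [nearest_neighbor_downscale(x, d) for x in train_hr2]

    # Second level (18.3): a different block size R2 != R1, overlap K2.
    return reconstruct(I_tl1, train_lr2, train_hr2, block=R2, overlap=K2)
```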

This completes Part B, the reconstruction process for the low-resolution face images of the test set, and with it the pyramid face image super-resolution reconstruction based on the regression model.

In the above pyramid face image super-resolution reconstruction method based on a regression model: in Step 1, when the low-resolution and high-resolution face image sets of the training set are expanded, the high-resolution size is (d*a)*(d*b) pixels, where the multiple d is 2; in Step 3, when the expanded high-resolution face image set P_h and its corresponding high-resolution face gradient feature image set G_h are partitioned, the overlap with the left and right neighboring blocks is K_1 columns of pixels, where K_1 is 4; in Step 4, when the expanded low-resolution face image set P_l and its corresponding low-resolution face gradient feature image set G_l are partitioned, each block is (R_1/d)*(R_1/d) pixels with d equal to 2, and the overlap with the left and right neighboring blocks is K_1/d columns of pixels with K_1 equal to 4; in Step 7, when the enlarged high-resolution face image I_th of the test set and its corresponding high-resolution face gradient feature image g_th are partitioned, the current block overlaps its upper and lower neighboring blocks by K_1 rows of pixels and its left and right neighboring blocks by K_1 columns of pixels, where K_1 is 4; in Step 8, when the low-resolution face image I_tl of the test set and its corresponding low-resolution face gradient feature image g_tl are partitioned, each block is (R_1/d)*(R_1/d) pixels with d equal to 2, and the overlap with the left and right neighboring blocks is K_1/d columns of pixels with K_1 equal to 4; and in (18.3) of Step 18, when the pyramid face super-resolution reconstruction model is constructed, the number of overlapping pixels between high-resolution image blocks is K_2, where K_2 is 4, and the block size of the low-resolution face images is (R_2/d)*(R_2/d) pixels, where the reduction factor d takes the same value as in Step 1, namely 2.
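
For reference, the parameter choices stated above can be collected in one place; the dictionary below merely restates those values, and the R1 and R2 entries shown are one admissible choice within the stated ranges:

```python
# Preferred parameter settings restated from the paragraph above.
PARAMS = {
    "d": 2,    # magnification / reduction factor
    "R1": 8,   # first-level high-resolution block size (any value in 8..12, R1 != R2)
    "K1": 4,   # first-level block overlap in rows and columns
    "R2": 6,   # second-level high-resolution block size (any value in 6..10)
    "K2": 4,   # second-level block overlap
}
# Low-resolution blocks are (R/d) x (R/d) with overlap K/d at each level.
```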

The known techniques used in the present invention are gradient features, non-local similarity, and linear regression.

The beneficial effects of the present invention are as follows. Compared with the prior art, the outstanding substantive features and notable advances of the present invention are:

(1) Exploiting the non-local similarity of images, the present invention searches the feature image corresponding to a low-resolution face image of the test set for blocks similar to the block being reconstructed, obtains the set of positions of all similar blocks, and takes the face image blocks of all low-resolution training images at those positions as the low-resolution training set for the low-resolution face image block of the test set. Unlike the methods of References 1, 2, 3, and 4, which either use only the set of face image blocks at a single position of the low-resolution training images or compare every face image block of the low-resolution training set with the low-resolution face image block of the test set and take the closest blocks as the low-resolution training set, the present method obtains the low-resolution training set corresponding to a low-resolution face image block of the test set more accurately and more efficiently.

(2) The present invention constructs its constraint from the distances between the feature image blocks corresponding to the low-resolution face image blocks of the test set and those corresponding to the low-resolution face image blocks of the training set, together with the distances between the feature image blocks corresponding to the interpolation-enlarged face image blocks of the test set and those corresponding to the high-resolution face image blocks of the training set. Compared with the method of CN103824272B, the present invention directly combines the distances between low-resolution face image blocks and the distances between high-resolution face image blocks, and only needs to search a single feature image once for similar blocks to obtain their positions; it does not need to sort both the distances between the low-resolution face image blocks of the test set and the distances between the interpolation-enlarged high-resolution face image blocks of the test set and the high-resolution face image blocks of the training set, nor to search all low-resolution and high-resolution face image blocks of the training set when computing distances. It therefore achieves higher search efficiency while still obtaining an accurate constraint, which is an outstanding substantive feature.

(3) The present invention builds a pyramid model for face image reconstruction using different block sizes, ensuring that the reconstruction covers several scales and thereby effectively fusing the features of face images at different scales, so that image details are restored more clearly. The pyramid model overcomes the inability of existing face super-resolution reconstruction methods to faithfully reflect the image degradation process, ensuring that the reconstructed face image is closer to the real face image; it also overcomes the prior-art problems of not considering the influence of the differences among high-resolution images on the quality of the reconstructed image and of the reconstruction process failing to reflect the degradation of the face image.

(4) Exploiting the left-right symmetry of face images, the present invention expands the data set by horizontal flipping, obtaining a more informative training set and ensuring that, even with small samples, there are enough similar image blocks to reconstruct the input block.

(5) Exploiting the non-local similarity of images, the present invention uses this similarity to build the set of training image blocks at the positions of the input block and of its similar blocks, enriching the set of blocks at the same position used in position-patch-based face super-resolution reconstruction and thereby ensuring the quality of face image reconstruction.

(6) The present invention constructs the weight matrix from the sum of the distances between the low-resolution image blocks of the test set and those of the training set and the distances between the interpolation-enlarged image blocks of the test set and the high-resolution image blocks of the training set. It can therefore use low-resolution and high-resolution image information simultaneously, avoiding inaccurate reconstruction when the low-resolution images differ greatly; the weight constraint also makes the reconstruction process smoother and the restored image details more accurate.

Description of the Drawings

The present invention is further described below with reference to the accompanying drawings and embodiments.

Figure 1 is a schematic flow diagram of the method of the present invention.

Figure 2 is a schematic diagram of the blocking process for a high-resolution face image in the method of the present invention.

Figure 3 is a schematic diagram of the interpolation process in the method of the present invention.

Figure 4 shows sample images from the FERET and CAS-PEAL-R1 databases; the first row shows samples from the FERET database and the second row shows samples from the CAS-PEAL-R1 database.

Figure 5 shows the results of reconstructing images from the FERET database with six different methods: Bicubic, ANR, A+, LINE, SRLSP, and the method of the present invention.

Figure 6 shows the results of reconstructing images from the CAS-PEAL-R1 database with six different methods: Bicubic, ANR, A+, LINE, SRLSP, and the method of the present invention.

Detailed Description of the Embodiments

The embodiment shown in Figure 1 indicates that the flow of the method of the present invention is as follows. A. Training process for the low-resolution and high-resolution face image sets of the training set: expand the low-resolution and high-resolution face image sets of the training set → extract gradient features from the expanded low-resolution face image set P_l and high-resolution face image set P_h → partition the expanded high-resolution face image set P_h and its corresponding high-resolution face gradient feature image set G_h into blocks → partition the expanded low-resolution face image set P_l and its corresponding low-resolution face gradient feature image set G_l into blocks. B. Reconstruction process for a low-resolution face image of the test set: enlarge the low-resolution face image of the test set to obtain an enlarged high-resolution face image → extract gradient features from the low-resolution face image I_tl and the enlarged high-resolution face image I_th of the test set → partition the enlarged high-resolution face image I_th of the test set and its corresponding high-resolution face gradient feature image g_th into blocks → partition the low-resolution face image I_tl of the test set and its corresponding low-resolution face gradient feature image g_tl into blocks → use the low-resolution face gradient feature image g_tl corresponding to I_tl to find the numbers of the similar blocks → use the position numbers of the similar blocks to form the set of image blocks at those numbers from all images of the expanded low-resolution face gradient feature image set G_l → use the position numbers of the similar blocks to form the set of image blocks at those numbers from all images of the expanded high-resolution face gradient feature image set G_h → use the position numbers of the similar blocks to form the set of image blocks at those numbers from all face images of the expanded low-resolution face image set P_l → use the position numbers of the similar blocks to form the set of image blocks at those numbers from all face images of the expanded high-resolution face image set P_h → compute the weight matrix corresponding to the j-th face image block → compute the mapping matrix corresponding to the j-th face image block → reconstruct the low-resolution face image blocks of the test set to obtain high-resolution face image blocks → combine all reconstructed image blocks to obtain the reconstructed high-resolution face image → construct the pyramid face super-resolution reconstruction model.

The embodiment shown in Figure 2 indicates that, in the figure, R_1 is the block size and K_1 is the number of overlapping pixels between blocks. In the method of the present invention the blocking process for a high-resolution face image is as follows: the high-resolution face image blocks have size R_1*R_1, and the current block overlaps its upper and lower neighboring blocks by K_1 rows of pixels and its left and right neighboring blocks by K_1 columns of pixels. The blocking process for a low-resolution face image is similar.

The embodiment shown in Figure 3 indicates that, in the figure, LR denotes low resolution and HR denotes high resolution; each black dot represents a low-resolution pixel of a low-resolution face image block, and each white dot represents a high-resolution pixel of the reconstructed high-frequency information. The interpolation process in the method of the present invention is: input LR image → LR image block → add HR information → output HR image block; that is, the obtained high-frequency information is interpolated, from top to bottom and from left to right, into the low-resolution face image block to obtain the reconstructed high-resolution face image block.

The embodiment shown in Figure 4 presents sample images from the FERET and CAS-PEAL-R1 databases; the first row shows samples from the FERET database and the second row shows samples from the CAS-PEAL-R1 database. The FERET database contains 200 subjects; in this embodiment, one frontal face image from each of 80 randomly selected men and one from each of 70 randomly selected women form the training set, and one frontal face image from each of 28 men and 22 women selected from the remaining subjects is used for testing. The CAS-PEAL-R1 database contains 1040 subjects; in this embodiment, one frontal face image from each of 103 randomly selected men and one from each of 97 randomly selected women form the training set, and one frontal face image from each of 57 randomly selected men and 43 randomly selected women is used for testing.

The embodiment shown in Figure 5 presents the results of reconstructing images from the FERET database with six different methods: Bicubic, ANR, A+, LINE, SRLSP, and the method of the present invention. Each row corresponds to the same face image from the FERET database; from top to bottom the rows are the five selected face images. In each row, from left to right, are the high-resolution face images reconstructed by Bicubic, ANR, A+, LINE, SRLSP, and the method of the present invention, followed by the original high-resolution face image, with the Bicubic method serving as the baseline. It can be seen that the face images obtained by Bicubic are the most blurred, the ANR and A+ methods restore the details of the eyes and mouth rather poorly, and although the LINE and SRLSP methods restore details better, local ghosting is severe. The method of the present invention overcomes the ghosting while preserving detail restoration and yields the best reconstructed face images.

The embodiment shown in Fig. 6 presents reconstruction results on the CAS-PEAL-R1 database obtained with six different methods: Bicubic, ANR, A+, LINE, SRLSP and the method of the present invention. Each row corresponds to the same face image in the CAS-PEAL-R1 database, and the rows from top to bottom are the five selected face images. In each row, from left to right are the high-resolution face images reconstructed by Bicubic, ANR, A+, LINE, SRLSP and the method of the present invention, followed by the original high-resolution face image. With the Bicubic method as the baseline, it can be seen that the face images obtained by Bicubic are the most blurred, the ANR and A+ methods recover the details of the eyes and mouth only vaguely, and although the LINE and SRLSP methods recover details better, they exhibit local ghosting and some images also show jagged edges. The method of the present invention not only recovers image details most clearly but also overcomes the local ghosting and jagged edges of the other methods, and yields the best reconstructed face images.

Example 1

The regression-model-based pyramid face image super-resolution reconstruction method of this example comprises the following specific steps:

A. Training process of the low-resolution face image set and the high-resolution face image set in the training set:

The first step, expanding the low-resolution face image set and the high-resolution face image set in the training set:

According to the symmetry of face images, the low-resolution face image set and the high-resolution face image set in the training set are expanded by left-right flipping; the image size is unchanged and the number of images is doubled, giving the expanded low-resolution face image set $P_l$ and the expanded high-resolution face image set $P_h$, where $l$ denotes a low-resolution image of size a×b pixels, $h$ denotes a high-resolution image of size (d·a)×(d·b) pixels, d is the magnification factor, the value of d is 2, and M is the number of images;
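For illustration only, a minimal sketch of this mirror-flip expansion, assuming the training images are held as lists of NumPy arrays (the function and variable names are illustrative, not part of the patent):

```python
import numpy as np

def expand_by_mirroring(images):
    """Double a list of face images by appending the left-right flipped copy of each one."""
    expanded = list(images)
    expanded += [np.fliplr(img) for img in images]  # horizontal flip keeps the image size unchanged
    return expanded

# P_l and P_h then each hold 2*M images of a*b and (d*a)*(d*b) pixels respectively
```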

The second step, extracting gradient features from the expanded low-resolution face image set $P_l$ and the expanded high-resolution face image set $P_h$:

For each face image in the expanded low-resolution face image set $P_l$ and the expanded high-resolution face image set $P_h$, the first-order gradient and the second-order gradient are extracted as components to form a gradient feature, giving the low-resolution face gradient feature image set $G_l$ of $P_l$ and the high-resolution face gradient feature image set $G_h$ of $P_h$;
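The patent does not fix the exact gradient operators; the sketch below assumes simple finite differences for the first-order gradient and a Laplacian-style response for the second-order component, stacked as feature channels:

```python
import numpy as np

def gradient_feature(img):
    """Stack first- and second-order gradients of a grayscale image as one feature image."""
    img = img.astype(np.float64)
    gy, gx = np.gradient(img)        # first-order gradients along rows and columns
    gyy, _ = np.gradient(gy)
    _, gxx = np.gradient(gx)
    second = gxx + gyy               # second-order (Laplacian-like) component
    return np.stack([gx, gy, second], axis=-1)
```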

The third step, blocking the expanded high-resolution face image set $P_h$ and its corresponding high-resolution face gradient feature image set $G_h$:

Each face image in the expanded high-resolution face image set $P_h$ and the corresponding high-resolution face gradient feature image are divided into overlapping blocks of size R1×R1 pixels, where R1 is 8; the overlap is K1 rows of pixels between the current block and the vertically adjacent blocks and K1 columns of pixels between the current block and the horizontally adjacent blocks, where K1 is 4. All blocks of each high-resolution face image and of its corresponding gradient feature image are then numbered 1, 2, ..., U in order from top to bottom and from left to right, where U is the total number of blocks per image; blocks with the same number are called blocks at the same position. This completes the blocking of the expanded high-resolution face image set $P_h$ and of its corresponding high-resolution face gradient feature image set $G_h$;
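A sketch of the overlapping blocking with top-to-bottom, left-to-right numbering, assuming a block size R1 and an overlap of K1 pixels (so the step between blocks is R1−K1); the helper name is illustrative:

```python
import numpy as np

def extract_blocks(img, block, overlap):
    """Return overlapping blocks numbered from top-left to bottom-right (block u is blocks[u-1])."""
    step = block - overlap
    h, w = img.shape[:2]
    blocks = []
    # assumes the image size is compatible with the chosen block size and overlap
    for top in range(0, h - block + 1, step):
        for left in range(0, w - block + 1, step):
            blocks.append(img[top:top + block, left:left + block])
    return blocks

# high-resolution images:  extract_blocks(p_h, block=8, overlap=4)
# low-resolution images:   extract_blocks(p_l, block=8 // 2, overlap=4 // 2)
```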

The fourth step, blocking the expanded low-resolution face image set $P_l$ and its corresponding low-resolution face gradient feature image set $G_l$:

In the same blocking manner as for the high-resolution face image set $P_h$, each low-resolution face image in the expanded low-resolution face image set $P_l$ and the corresponding low-resolution face gradient feature image are divided into overlapping blocks of size (R1/d)×(R1/d) pixels, where R1 is 8 and d is 2; the overlap is K1/d rows of pixels between the current block and the vertically adjacent blocks and K1/d columns of pixels between the current block and the horizontally adjacent blocks, where K1 is 4. All blocks of each low-resolution face image and of its corresponding gradient feature image are then numbered 1, 2, ..., U in order from top to bottom and from left to right, where U is the total number of blocks per image; blocks with the same number are called blocks at the same position. This completes the blocking of the expanded low-resolution face image set $P_l$ and of its corresponding low-resolution face gradient feature image set $G_l$;

At this point, A, the training process of the training-set low-resolution face image set $P_l$ and high-resolution face image set $P_h$, is complete;

B. Reconstruction process of a low-resolution face image in the test set:

The fifth step, enlarging the low-resolution face image in the test set to obtain an enlarged high-resolution face image:

The low-resolution face image to be tested is input into the computer to obtain the test-set low-resolution face image $I_{tl}$; a low-resolution face image of the test set is enlarged by bicubic interpolation, and the enlarged image is taken as the enlarged high-resolution face image $I_{th}$ of the test set, so that $I_{th}$ has the same size as the high-resolution face images in the training set;
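A sketch of the bicubic enlargement of a test image, assuming OpenCV is available (the function name is illustrative):

```python
import cv2

def enlarge_bicubic(lr_img, d=2):
    """Enlarge a low-resolution test image by factor d with bicubic interpolation."""
    h, w = lr_img.shape[:2]
    return cv2.resize(lr_img, (w * d, h * d), interpolation=cv2.INTER_CUBIC)
```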

The sixth step, extracting gradient features from the test-set low-resolution face image $I_{tl}$ and the enlarged high-resolution face image $I_{th}$:

The first-order and second-order gradients of the test-set low-resolution face image $I_{tl}$ and of the enlarged high-resolution face image $I_{th}$ obtained in the fifth step are extracted as components to form their respective gradient features, giving the corresponding low-resolution face gradient feature image $g_{tl}$ and high-resolution face gradient feature image $g_{th}$;

The seventh step, blocking the enlarged high-resolution face image $I_{th}$ of the test set and its corresponding high-resolution face gradient feature image $g_{th}$:

The enlarged high-resolution face image $I_{th}$ of the test set obtained in the fifth step and the corresponding high-resolution face gradient feature image $g_{th}$ of the sixth step are each divided into overlapping blocks of size R1×R1 pixels, where R1 is 8, so that the block size equals that of the high-resolution face images in the training set; the overlap is K1 rows of pixels between the current block and the vertically adjacent blocks and K1 columns of pixels between the current block and the horizontally adjacent blocks, where K1 is 4. All blocks of each face image are then numbered 1, 2, ..., U in order from top to bottom and from left to right, where U is the total number of blocks per image; blocks with the same number are called blocks at the same position;

The eighth step, blocking the test-set low-resolution face image $I_{tl}$ and its corresponding low-resolution face gradient feature image $g_{tl}$:

The test-set low-resolution face image $I_{tl}$ obtained in the fifth step and the corresponding low-resolution face gradient feature image $g_{tl}$ of the sixth step are each divided into overlapping blocks of size (R1/d)×(R1/d) pixels, where R1 is 8 and d is 2, so that the block size equals that of the low-resolution face images in the training set; the overlap is K1/d rows of pixels between the current block and the vertically adjacent blocks and K1/d columns of pixels between the current block and the horizontally adjacent blocks, where K1 is 4. All blocks of each face image are then numbered 1, 2, ..., U in order from top to bottom and from left to right, where U is the total number of blocks per image; blocks with the same number are called blocks at the same position;

The ninth step, using the low-resolution face gradient feature image $g_{tl}$ corresponding to the test-set low-resolution face image $I_{tl}$ to find the numbers of similar blocks:

The image blocks of the test-set low-resolution face image $I_{tl}$ obtained in the eighth step are reconstructed in order from top to bottom and from left to right. Taking the reconstruction of the j-th image block as an example, the non-local similarity of the low-resolution face gradient feature image $g_{tl}$ corresponding to $I_{tl}$ is used to search within $I_{tl}$ for blocks similar to the j-th block. Let the j-th face gradient feature block of $g_{tl}$ be $g_{tl,j}$. All face image blocks of $g_{tl}$ are scanned in order from top to bottom and from left to right, the scanned blocks not coinciding with the j-th block; the Euclidean distance between each scanned face gradient feature block and the j-th face gradient feature block is computed, the distances of all low-resolution face gradient feature blocks are sorted in ascending order, and the n blocks with the smallest distances are taken as the similar blocks of the j-th low-resolution face gradient feature block $g_{tl,j}$. The set of numbers of the similar low-resolution face gradient feature blocks is denoted $[v_1, v_2, \ldots, v_n]$, and the set of low-resolution face gradient feature blocks corresponding to this number set is $[g_{tl,v_1}, g_{tl,v_2}, \ldots, g_{tl,v_n}]$. This completes the process of finding the numbers of similar blocks using the low-resolution face gradient feature image $g_{tl}$ corresponding to the test-set low-resolution face image $I_{tl}$;
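A sketch of this non-local search for the n most similar blocks, assuming the gradient-feature blocks of $g_{tl}$ are already available as a list of arrays (names are illustrative):

```python
import numpy as np

def similar_block_numbers(blocks, j, n):
    """Return the 1-based numbers of the n blocks closest (Euclidean distance) to block j."""
    query = blocks[j - 1].ravel()
    dists = np.array([np.linalg.norm(b.ravel() - query) for b in blocks])
    dists[j - 1] = np.inf                    # the block itself is excluded from the search
    order = np.argsort(dists)                # ascending distance
    return [int(v) + 1 for v in order[:n]]   # [v1, v2, ..., vn]
```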

The tenth step, using the position numbers of the similar blocks to form, from all images of the expanded low-resolution face gradient feature image set $G_l$ in the training set, the set of image blocks at the same numbers:

For the i-th (i = 1, 2, ..., M) face image of the expanded low-resolution face gradient feature image set $G_l$ of the second step, the face feature image block numbered j and the image blocks whose numbers belong to the similar-block number set $[v_1, v_2, \ldots, v_n]$ of the ninth step form a set; then the set formed, over all images of $G_l$, by the blocks numbered j and by the blocks of the number set $[v_1, v_2, \ldots, v_n]$ is:

$$S_{G_{l,j}} = [g_{l,j}^1, g_{l,v_1}^1, g_{l,v_2}^1, \ldots, g_{l,v_n}^1, \ldots, g_{l,j}^M, g_{l,v_1}^M, g_{l,v_2}^M, \ldots, g_{l,v_n}^M] \quad (1),$$

For convenience of writing, $S_{G_{l,j}}$ is written as:

$$S_{G_{l,j}} = [g_{l,j_1}, g_{l,j_2}, \ldots, g_{l,j_{M(1+n)}}] \quad (2),$$

where M·(1+n) indicates that there are M face images, each contributing 1+n image blocks;

The eleventh step, using the position numbers of the similar blocks to form, from all images of the expanded high-resolution face gradient feature image set $G_h$ in the training set, the set of image blocks at the same numbers:

For the i-th (i = 1, 2, ..., M) image of the expanded high-resolution face gradient feature image set $G_h$ of the second step, the image blocks numbered j and those of the similar-block number set $[v_1, v_2, \ldots, v_n]$ of the ninth step form a set; then the set formed, over all images of $G_h$, by the blocks numbered j and $[v_1, v_2, \ldots, v_n]$ is:

$$S_{G_{h,j}} = [g_{h,j}^1, g_{h,v_1}^1, g_{h,v_2}^1, \ldots, g_{h,v_n}^1, \ldots, g_{h,j}^M, g_{h,v_1}^M, g_{h,v_2}^M, \ldots, g_{h,v_n}^M] \quad (3),$$

For convenience of writing, $S_{G_{h,j}}$ is written as:

$$S_{G_{h,j}} = [g_{h,j_1}, g_{h,j_2}, \ldots, g_{h,j_{M(1+n)}}] \quad (4),$$

The twelfth step, using the position numbers of the similar blocks to form, from all face images of the expanded low-resolution face image set $P_l$, the set of image blocks at the same numbers:

For the i-th (i = 1, 2, ..., M) face image of the expanded low-resolution face image set $P_l$ of the first step, the image blocks numbered j and those of the similar-block number set $[v_1, v_2, \ldots, v_n]$ of the ninth step form a set; then the set formed, over all images of $P_l$, by the blocks numbered j and $[v_1, v_2, \ldots, v_n]$ is:

$$S_{P_{l,j}} = [p_{l,j}^1, p_{l,v_1}^1, p_{l,v_2}^1, \ldots, p_{l,v_n}^1, \ldots, p_{l,j}^M, p_{l,v_1}^M, p_{l,v_2}^M, \ldots, p_{l,v_n}^M] \quad (5),$$

For convenience of writing, $S_{P_{l,j}}$ is written as:

$$S_{P_{l,j}} = [p_{l,j_1}, p_{l,j_2}, \ldots, p_{l,j_{M(1+n)}}] \quad (6),$$

The thirteenth step, using the position numbers of the similar blocks to form, from all face images of the expanded high-resolution face image set $P_h$, the set of image blocks at the same numbers:

For the i-th (i = 1, 2, ..., M) face image of the expanded high-resolution face image set $P_h$ of the first step, the image blocks numbered j and those of the similar-block number set $[v_1, v_2, \ldots, v_n]$ of the ninth step form a set; then the set formed, over all images of $P_h$, by the blocks numbered j and $[v_1, v_2, \ldots, v_n]$ is:

$$S_{P_{h,j}} = [p_{h,j}^1, p_{h,v_1}^1, p_{h,v_2}^1, \ldots, p_{h,v_n}^1, \ldots, p_{h,j}^M, p_{h,v_1}^M, p_{h,v_2}^M, \ldots, p_{h,v_n}^M] \quad (7),$$

For convenience of writing, $S_{P_{h,j}}$ is written as:

$$S_{P_{h,j}} = [p_{h,j_1}, p_{h,j_2}, \ldots, p_{h,j_{M(1+n)}}] \quad (8),$$

The fourteenth step, computing the weight matrix corresponding to the j-th face image block:

First, formula (9) below is used to compute the set of Euclidean distances between the j-th block $g_{tl,j}$ of the gradient feature image corresponding to the test-set low-resolution face image $I_{tl}$ of the eighth step and all face image blocks of $S_{G_{l,j}}$ obtained in the tenth step; then formula (10) below is used to compute the set of Euclidean distances between the j-th block $g_{th,j}$ of the high-resolution face gradient feature image $g_{th}$ corresponding to the enlarged high-resolution face image $I_{th}$ of the seventh step and all image blocks of $S_{G_{h,j}}$ of the eleventh step:

$$dist\_g_{tl,j}\_S_{G_{l,j}} = [dist(g_{tl,j}, g_{l,j_1}), dist(g_{tl,j}, g_{l,j_2}), \ldots, dist(g_{tl,j}, g_{l,j_{M(1+n)}})] \quad (9),$$

$$dist\_g_{th,j}\_S_{G_{h,j}} = [dist(g_{th,j}, g_{h,j_1}), dist(g_{th,j}, g_{h,j_2}), \ldots, dist(g_{th,j}, g_{h,j_{M(1+n)}})] \quad (10),$$

After the above distances are obtained, the weight matrix $W_j$ of the j-th block is obtained from formula (11):

where α is the smoothing factor;
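Formula (11) is not reproduced above; the sketch below therefore assumes a common Gaussian weighting of the two distance sets from formulas (9) and (10), with α as the smoothing factor. This matches the described role of $W_j$ but is only an assumption, not the literal formula of the patent:

```python
import numpy as np

def weight_matrix(dist_lr, dist_hr, alpha):
    """Assumed form of W_j: diagonal Gaussian weights built from the two distance vectors."""
    d = np.asarray(dist_lr) + np.asarray(dist_hr)   # combine LR- and HR-feature distances
    w = np.exp(-d / alpha)                          # smaller distance -> larger weight
    return np.diag(w)                               # W_j has size M*(1+n) x M*(1+n)
```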

The fifteenth step, computing the mapping matrix corresponding to the j-th face image block:

The mapping from the j-th low-resolution face image block of the training set to the corresponding j-th high-resolution face image block is written as a simple mapping relation, giving the formula:

$$S_{P_{h,j}} = A_j^T S_{P_{l,j}} \quad (12),$$

where $A_j$ denotes the mapping matrix of the j-th face image block and T denotes matrix transposition; the optimal mapping matrix is obtained from formula (13) below:

$$A_j = \min_{A_j} \left\| S_{P_{h,j}} - A_j^T S_{P_{l,j}} \right\| \quad (13),$$

Since the relation between high-resolution and low-resolution face image blocks is not a simple mapping, the distance matrix obtained in the fourteenth step is used to impose a smoothing constraint on formula (13), giving the following smooth regression formula (14):

where tr(·) denotes the trace of a matrix; to make the mapping process smoother, a regularization term is added, giving the following formula (15):

where F denotes the Frobenius norm and λ balances the reconstruction error against the sparsity of $A_j$; the mapping matrix corresponding to the j-th block is obtained by simplification:

where E denotes the identity matrix;
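The smoothed and regularized formulas following (13) are not reproduced above; the sketch below solves the weighted, λ-regularized least-squares problem implied by formulas (12)–(13), which yields the usual closed form involving the identity matrix E. It is an assumed reconstruction, not the literal formula of the patent:

```python
import numpy as np

def mapping_matrix(S_l, S_h, W, lam):
    """Assumed closed form of A_j for S_h ≈ A_j.T @ S_l with diagonal weights W and ridge term lam*E."""
    dim_l = S_l.shape[0]
    E = np.eye(dim_l)
    left = S_l @ W @ S_l.T + lam * E       # (dim_l x dim_l), invertible thanks to lam*E
    right = S_l @ W @ S_h.T                # (dim_l x dim_h)
    return np.linalg.solve(left, right)    # A_j, so that A_j.T @ S_l approximates S_h
```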

The sixteenth step, reconstructing the low-resolution face image blocks of the test set to obtain high-resolution face image blocks:

By means of the mapping matrix $A_j$, the high-frequency information of the high-resolution face image block corresponding to the face image block $I_{tl,j}$ of the test-set low-resolution face image $I_{tl}$ is obtained, and this high-frequency information is then interpolated into $I_{tl,j}$ to obtain the reconstructed face image block $I'_{th,j}$;
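A minimal sketch of how the learned $A_j$ could be applied to one vectorized test block to predict its high-frequency information; the subsequent pixel-wise insertion illustrated in Fig. 3 is omitted and the names are illustrative:

```python
import numpy as np

def block_high_frequency(A_j, lr_block):
    """Predict the high-frequency information of the j-th block from its low-resolution pixels."""
    x = lr_block.reshape(-1, 1)   # column vector of the low-resolution block
    return A_j.T @ x              # high-frequency information of the high-resolution block
```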

The seventeenth step, combining all reconstructed image blocks into the reconstructed high-resolution face image:

All reconstructed face image blocks are combined according to their numbers in order from top to bottom and from left to right; overlapping regions are averaged during the combination, giving the reconstructed high-resolution face image $I'_{th}$;
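A sketch of this block re-assembly, accumulating each reconstructed block at its numbered position and averaging where blocks overlap (names are illustrative):

```python
import numpy as np

def assemble(blocks, img_shape, block, overlap):
    """Place numbered blocks back in top-to-bottom, left-to-right order and average overlaps."""
    acc = np.zeros(img_shape, dtype=np.float64)
    cnt = np.zeros(img_shape, dtype=np.float64)
    step = block - overlap
    h, w = img_shape
    idx = 0
    for top in range(0, h - block + 1, step):
        for left in range(0, w - block + 1, step):
            acc[top:top + block, left:left + block] += blocks[idx]
            cnt[top:top + block, left:left + block] += 1.0
            idx += 1
    return acc / np.maximum(cnt, 1.0)   # mean of the overlapping contributions
```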

The eighteenth step, constructing the pyramid face super-resolution reconstruction model:

(18.1) The image $I'_{th}$ obtained in the seventeenth step is downscaled by nearest-neighbor interpolation to obtain the reduced low-resolution face image $I'_{tl}$, so that the reduced face image has the same size as $I_{tl}$;

(18.2) All low-resolution face images of the training set are reconstructed with the first to seventeenth steps above. The procedure for reconstructing the i-th low-resolution face image of the training set is: this image is taken as the test-set low-resolution face image, the low-resolution and high-resolution images of the training set are taken as the training set, the high-resolution image is reconstructed with the first to seventeenth steps above, and the reconstructed high-resolution image is then downscaled by nearest-neighbor interpolation;

(18.3) The block size of the high-resolution face images is taken as R2×R2 pixels with R2 equal to 6, and the number of overlapping pixels between high-resolution blocks is K2 with K2 equal to 4; the block size of the low-resolution face images is (R2/d)×(R2/d) pixels, where d is the reduction factor and takes the same value 2 as in the first step, and the number of overlapping pixels between low-resolution blocks is K2/d. The image $I'_{tl}$ obtained in (18.1) is taken as the test-set low-resolution face image and the images obtained in (18.2) are taken as the training set, and the face image super-resolution reconstruction process is performed once more to obtain the final reconstructed face image;
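A high-level sketch of the two-pass pyramid described in this step, with nearest-neighbor downscaling between the passes. Here `super_resolve` stands for the first to seventeenth steps and is assumed rather than defined in the patent text; for brevity the sketch rebuilds each training image against the full training set, whereas the exact composition of the training set used in (18.2) is not spelled out above:

```python
import cv2

def pyramid_reconstruction(I_tl, train_lr, train_hr, super_resolve):
    """Two-pass pyramid: reconstruct, downscale, rebuild the training set, reconstruct again."""
    # first pass: HR block size 8, overlap 4
    I_th1 = super_resolve(I_tl, train_lr, train_hr, block=8, overlap=4)
    h, w = I_tl.shape[:2]
    I_tl1 = cv2.resize(I_th1, (w, h), interpolation=cv2.INTER_NEAREST)            # (18.1)

    # (18.2): every training LR image is reconstructed the same way and downscaled
    rebuilt_hr = [super_resolve(x, train_lr, train_hr, block=8, overlap=4) for x in train_lr]
    rebuilt_lr = [cv2.resize(y, (y.shape[1] // 2, y.shape[0] // 2),
                             interpolation=cv2.INTER_NEAREST) for y in rebuilt_hr]

    # second pass: HR block size 6, overlap 4, gives the final reconstruction     # (18.3)
    return super_resolve(I_tl1, rebuilt_lr, rebuilt_hr, block=6, overlap=4)
```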

At this point, B, the reconstruction process of the low-resolution face images of the test set, is complete, which finally completes the regression-model-based pyramid face image super-resolution reconstruction.

Example 2

Example 2 is the same as Example 1 except that the value of R1 in the third, fourth, seventh and eighth steps is 10 and the value of R2 in (18.3) of the eighteenth step is 8.

Example 3

Example 3 is the same as Example 1 except that the value of R1 in the third, fourth, seventh and eighth steps is 12 and the value of R2 in (18.3) of the eighteenth step is 10.

The known techniques used in the above examples are gradient features, non-local similarity and linear regression.

Claims (2)

1.基于回归模型的金字塔人脸图像超分辨率重建方法,其特征在于具体步骤如下:1. the super-resolution reconstruction method of pyramid face image based on regression model, it is characterized in that concrete steps are as follows: A.训练集中低分辨率人脸图像集和高分辨率人脸图像集的训练过程:A. The training process of the low-resolution face image set and the high-resolution face image set in the training set: 第一步,扩充训练集中低分辨率人脸图像集和高分辨率人脸图像集:The first step is to expand the low-resolution face image set and high-resolution face image set in the training set: 根据人脸图像对称的特性,对训练集中的低分辨率人脸图像集和高分辨率人脸图像集通过左右翻转的方式进行扩充,图像的尺寸不变,数量扩充两倍,分别得到扩充的低分辨率人脸图像集和扩充的高分辨率人脸图像集其中l表示低分辨率图像,尺寸为a*b像素,h表示高分辨率图像,尺寸为(d*a)*(d*b)像素,d是倍数,M表示图像的数量;According to the symmetrical characteristics of face images, the low-resolution face image set and high-resolution face image set in the training set are expanded by flipping left and right. The size of the image remains the same, and the number is doubled, respectively. A collection of low-resolution face images and an expanded set of high-resolution face images Among them, l represents a low-resolution image with a size of a*b pixels, h represents a high-resolution image with a size of (d*a)*(d*b) pixels, d is a multiple, and M represents the number of images; 第二步,对扩充后的低分辨率人脸图像集Pl和高分辨率人脸图像集Ph分别提取梯度特征:In the second step, gradient features are extracted from the expanded low-resolution face image set P l and high-resolution face image set P h respectively: 对扩充后的低分辨率人脸图像集Pl和高分辨率人脸图像集Ph中的每幅人脸图像,分别提取一阶梯度和二阶梯度作为分量构成一个梯度特征,得到低分辨率人脸图像集Pl中的低分辨率人脸梯度特征图像集和高分辨率人脸图像集Ph中的高分辨率人脸梯度特征图像集 For each face image in the expanded low-resolution face image set P l and high-resolution face image set Ph , extract the first-order gradient and second-order gradient as components to form a gradient feature, and obtain the low-resolution The low-resolution face gradient feature image set in the high-rate face image set P l and the high-resolution face gradient feature image set in the high-resolution face image set Ph 第三步,对扩充后的高分辨率人脸图像集Ph及其对应的高分辨率人脸梯度特征图像集Gh分别进行分块:The third step is to block the expanded high-resolution face image set Ph and its corresponding high-resolution face gradient feature image set G h respectively : 对扩充后的高分辨率人脸图像集Ph中的每一幅人脸图像及相应的高分辨率人脸梯度特征图像分别进行有重叠的分块,每个分块大小为R1*R1像素,R1的数值为8~12,重叠的方式是当前块与上下相邻图像块之间分别重叠K1行像素,与左右相邻图像块之间重叠K1列像素,且0≤K1≤R1/2,然后采用从上到下和从左到右的顺序对每幅高分辨率人脸图像及其对应的梯度特征图像的所有分块进行编号,编号分别为1,2,...,U,U为每幅图像分块总数,编号相同的图像块称为相同位置的图像块,由此完成对扩充后的高分辨率人脸图像集Ph及其对应的高分辨率人脸梯度特征图像集Gh分别进行分块;For each face image in the expanded high-resolution face image set Ph and the corresponding high-resolution face gradient feature image Carry out overlapped blocks respectively, each block size is R 1 * R 1 pixels, and the value of R 1 is 8-12. The overlapping method is to overlap K 1 row of pixels between the current block and the upper and lower adjacent image blocks. , overlap K 1 columns of pixels between the left and right adjacent image blocks, and 0≤K 1 ≤R 1 /2, and then use the order from top to bottom and from left to right for each high-resolution face image and its corresponding gradient feature image All the blocks of each image are numbered, and the numbers are 1, 2,..., U, and U are the total number of blocks of each image. 
The image blocks with the same number are called the image blocks at the same position, thus completing the expansion of the high The high-resolution face image set Ph and its corresponding high-resolution face gradient feature image set G h are respectively divided into blocks; 第四步,对扩充后的低分辨率人脸图像集Pl及其对应的低分辨率人脸梯度特征图像集Gl分别进行分块:The fourth step is to block the expanded low-resolution face image set P l and its corresponding low-resolution face gradient feature image set G l respectively: 与上述高分辨率人脸图像集Ph的分块方式相同,对扩充后的低分辨率人脸图像集Pl中的每一幅低分辨率人脸图像及相应的低分辨率人脸梯度特征图像分别进行有重叠的分块,每块大小为(R1/d)*(R1/d)像素,R1的数值为8~12,重叠的方式是当前图像块与上下相邻图像块之间重叠K1/d行像素,与左右相邻图像块之间重叠K1/d列像素,然后采用从上到下和从左到右的顺序对每幅低分辨率人脸图像及其对应的梯度特征图像的所有分块分别进行编号,编号分别为1,2,...,U,U为每幅图像分块总数,编号相同的图像块称为相同位置的图像块,由此完成对扩充后的低分辨率人脸图像集Pl及其对应的低分辨率人脸梯度特征图像集Gl分别进行分块;In the same way as the above-mentioned high-resolution face image set P h , for each low-resolution face image in the expanded low-resolution face image set P l and the corresponding low-resolution face gradient feature image Carry out overlapped blocks respectively, the size of each block is (R 1 /d)*(R 1 /d) pixels, the value of R 1 is 8~12, the way of overlapping is between the current image block and the upper and lower adjacent image blocks Overlap K 1 /d row pixels between the left and right adjacent image blocks, and overlap K 1 /d column pixels between the left and right adjacent image blocks, and then use the order from top to bottom and from left to right for each low-resolution face image and its corresponding gradient feature image All the blocks of each image are numbered respectively, and the numbers are 1, 2,..., U, U is the total number of blocks of each image, and the image blocks with the same number are called the image blocks of the same position, thus completing the expansion of the The low-resolution face image set P l and its corresponding low-resolution face gradient feature image set G l are respectively divided into blocks; 至此,完成A.训练集低分辨率人脸图像集Pl和高分辨率人脸图像集Ph的训练过程;So far, complete the training process of A. training set low-resolution face image set P l and high-resolution face image set P h ; B.测试集中低分辨率人脸图像的重建过程:B. 
Reconstruction process of low-resolution face images in the test set: 第五步,放大测试集中的低分辨率人脸图像得到放大的高分辨率人脸图像:The fifth step is to enlarge the low-resolution face images in the test set to obtain enlarged high-resolution face images: 将需要测试的低分辨率人脸图像输入到计算机中得到测试集中的低分辨率人脸图像Itl,采用双三次插值的方式放大测试集中的某一幅低分辨率人脸图像,得到放大的图像作为测试集中的放大的高分辨率人脸图像Ith,使得测试集中的放大的高分辨率人脸图像Ith与训练集中的高分辨率人脸图像尺寸相等;Input the low-resolution face image to be tested into the computer to obtain the low-resolution face image I tl in the test set, and use bicubic interpolation to enlarge a certain low-resolution face image in the test set to obtain the enlarged The image is used as the enlarged high-resolution face image I th in the test set, so that the enlarged high-resolution face image I th in the test set is the same as the high-resolution face image in the training set equal in size; 第六步,对测试集中的低分辨率人脸图像Itl和放大的高分辨率人脸图像Ith分别提取梯度特征:The sixth step is to extract gradient features from the low-resolution face image I tl and the enlarged high-resolution face image I th in the test set: 分别提取上述第五步得到的测试集中的低分辨率人脸图像Itl和放大的高分辨率人脸图像Ith的一阶梯度和二阶梯度作为分量构成各自的梯度特征,得到它们各自对应的低分辨率人脸梯度特征图像gtl和高分辨率人脸梯度特征图像gthExtract the first-order gradient and second-order gradient of the low-resolution face image I tl and the enlarged high-resolution face image I th in the test set obtained in the fifth step above respectively as components to form their respective gradient features, and obtain their corresponding The low-resolution face gradient feature image g tl and the high-resolution face gradient feature image g th ; 第七步,对测试集中的放大的高分辨率人脸图像Ith及其对应的高分辨率人脸梯度特征图像gth进行分块:The seventh step is to block the enlarged high-resolution face image I th and its corresponding high-resolution face gradient feature image g th in the test set: 对上述第五步中得到的测试集中的放大的高分辨率人脸图像Ith及其对应的上述第六步中的高分辨率人脸梯度特征图像gth分别进行有重叠的分块,每块大小为R1*R1像素,R1的数值为8~12,使分块大小与训练集中高分辨率人脸图像的分块大小相同,重叠的方式是当前图像块与上下相邻图像块之间重叠K1行像素,与左右相邻图像块之间重叠K1列像素,然后采用从上到下和从左到右的顺序对每幅人脸图像的所有分块分别进行编号,编号分别为1,2,...,U,U为每幅图像分块总数,编号相同的图像块称为相同位置的图像块;The enlarged high-resolution face image I th in the test set obtained in the fifth step above and the corresponding high-resolution face gradient feature image g th in the sixth step above are respectively divided into overlapping blocks, each The block size is R 1 * R 1 pixels, and the value of R 1 is 8 to 12, so that the block size is the same as the block size of the high-resolution face image in the training set, and the overlapping method is that the current image block and the upper and lower adjacent images Overlap K 1 row of pixels between the blocks, and overlap K 1 column of pixels between the left and right adjacent image blocks, and then use the order from top to bottom and from left to right to number all the blocks of each face image respectively, The numbers are 1, 2,..., U, U is the total number of blocks of each image, and the image blocks with the same number are called image blocks at the same position; 第八步,对测试集中的低分辨率人脸图像Itl及其对应的低分辨率人脸梯度特征图像gtl进行分块:The eighth step is to block the low-resolution face image I tl and its corresponding low-resolution face gradient feature image g tl in the test set: 对上述第五步得到的测试集中的低分辨率人脸图像Itl及其对应的上述第六步中的低分辨率人脸梯度特征图像gtl分别进行有重叠的分块,每块大小为(R1/d)*(R1/d),R1的数值为8~12,使分块大小与训练集中低分辨率人脸图像的分块大小相同,重叠的方式是当前图像块与上下相邻图像块之间重叠K1/d行像素,与左右相邻图像块之间重叠K1/d列像素,然后采用从上到下和从左到右的顺序对每幅人脸图像的所有分块分别进行编号,编号分别为1,2,...,U,U为每幅图像分块总数,编号相同的图像块称为相同位置的图像块;The low-resolution face image I tl in the test set obtained in the fifth step above and the corresponding low-resolution face gradient feature image g tl in the 
above-mentioned sixth step are divided into overlapping blocks, each block size is (R 1 /d)*(R 1 /d), the value of R 1 is 8 to 12, so that the block size is the same as the block size of the low-resolution face image in the training set, and the overlapping method is that the current image block and Overlap K 1 /d rows of pixels between the upper and lower adjacent image blocks, and overlap K 1 /d columns of pixels between the left and right adjacent image blocks, and then use the order from top to bottom and from left to right for each face image All the blocks of are numbered respectively, the numbers are 1, 2,..., U, U is the total number of blocks of each image, and the image blocks with the same number are called image blocks at the same position; 第九步,利用测试集中的低分辨率人脸图像Itl对应的低分辨率人脸梯度特征图像gtl求相似块的编号:The ninth step is to use the low-resolution face gradient feature image g tl corresponding to the low-resolution face image I tl in the test set to find the number of similar blocks: 按照从上到下和从左到右的顺序对上述第八步中得到的测试集中的低分辨率人脸图像Itl的图像块进行重建,以对第j块图像块进行重建为例,利用测试集中的低分辨率人脸图像Itl对应的低分辨率人脸梯度特征图像gtl的非局部相似性,在测试集中低分辨率人脸图像Itl中寻找第j块图像块的相似块,设测试集中的低分辨率人脸图像Itl对应的低分辨率人脸梯度特征图像gtl的第j块人脸梯度特征图像块为gtl,j,对低分辨率人脸梯度特征图像gtl中的所有人脸图像块采用从上到下和从左到右的顺序进行扫描,扫描的图像块与第j块图像块不重复,计算扫描到的人脸梯度特征图像块与第j块人脸梯度特征图像块的欧式距离,然后按照距离从小到大的顺序对所有低分辨率人脸梯度特征图像块的距离进行排序,取距离最小的前n块作为第j块低分辨率人脸梯度特征图像块gtl,j的相似图像块,设相似低分辨率人脸梯度特征图像块的编号集合为[v1,v2,...,vn],该编号集合对应的低分辨率人脸梯度特征图像块的集合为由此完成利用测试集中的低分辨率人脸图像Itl对应的低分辨率人脸梯度特征图像gtl求相似块的编号的过程;According to the order from top to bottom and from left to right, the image blocks of the low-resolution face image I t1 in the test set obtained in the above eighth step are reconstructed, taking the reconstruction of the jth image block as an example, using The non-local similarity of the low-resolution face gradient feature image g tl corresponding to the low-resolution face image I tl in the test set, and find the similar block of the jth image block in the low-resolution face image I tl in the test set , let the jth face gradient feature image block of the low-resolution face gradient feature image g tl corresponding to the low-resolution face image I tl in the test set be g tl,j , for the low-resolution face gradient feature image All face image blocks in g tl are scanned from top to bottom and from left to right, the scanned image block is not repeated with the jth image block, and the scanned face gradient feature image block and the jth image block are calculated The Euclidean distance of the face gradient feature image block, and then sort the distances of all low-resolution face gradient feature image blocks in the order of distance from small to large, and take the first n blocks with the smallest distance as the jth low-resolution face block Similar image blocks of face gradient feature image block g tl,j , set the number set of similar low-resolution face gradient feature image blocks as [v 1 ,v 2 ,...,v n ], the corresponding low The set of resolution face gradient feature image blocks is This completes the process of seeking the numbering of similar blocks using the low-resolution face gradient feature image g t1 corresponding to the low-resolution face image I t1 in the test set; 第十步,利用相似块的位置编号求训练集中扩充后的低分辨率人脸梯度特征图像集Gl中的所有图像在相同编号处的图像块组成的集合:In the tenth step, use the position numbers of the similar blocks to obtain the set of image blocks at the same number for all images in the expanded low-resolution face gradient feature image set G l in the training set: 
对上述第二步中的训练集中扩充后的低分辨率人脸梯度特征图像集Gl中的第i,i=1,2,...,M幅人脸图像中编号为j的人脸特征图像块和上述第九步中的相似低分辨率人脸梯度特征图像块的编号集合为[v1,v2,...,vn]中相同的图像块组成集合则训练集中扩充后的低分辨率人脸梯度特征图像集Gl中所有图像中编号为j的图像块和相似低分辨率人脸梯度特征图像块的编号集合[v1,v2,...,vn]的图像块组成的集合为:For the i, i=1, 2, ..., M face images in the low-resolution face gradient feature image set G l expanded in the training set in the second step above The numbered set of the face feature image block numbered j and the similar low-resolution face gradient feature image block in the ninth step above is the same image block in [v 1 ,v 2 ,...,v n ] Form a collection Then in the expanded low-resolution face gradient feature image set Gl in the training set, the image block numbered j in all images and the numbered set of similar low -resolution face gradient feature image blocks [v 1 ,v 2 , .. .,v n ] set of image blocks for: <mrow> <msub> <mi>S</mi> <msub> <mi>G</mi> <mrow> <mi>l</mi> <mo>,</mo> <mi>j</mi> </mrow> </msub> </msub> <mo>=</mo> <mo>&amp;lsqb;</mo> <msubsup> <mi>g</mi> <mrow> <mi>l</mi> <mo>,</mo> <mi>j</mi> </mrow> <mn>1</mn> </msubsup> <mo>,</mo> <msubsup> <mi>g</mi> <mrow> <mi>l</mi> <mo>,</mo> <msub> <mi>v</mi> <mn>1</mn> </msub> </mrow> <mn>1</mn> </msubsup> <mo>,</mo> <msubsup> <mi>g</mi> <mrow> <mi>l</mi> <mo>,</mo> <msub> <mi>v</mi> <mn>2</mn> </msub> </mrow> <mn>1</mn> </msubsup> <mo>,</mo> <mn>...</mn> <mo>,</mo> <msubsup> <mi>g</mi> <mrow> <mi>l</mi> <mo>,</mo> <msub> <mi>v</mi> <mi>n</mi> </msub> </mrow> <mn>1</mn> </msubsup> <mo>,</mo> <mn>...</mn> <mo>,</mo> <msubsup> <mi>g</mi> <mrow> <mi>l</mi> <mo>,</mo> <mi>j</mi> </mrow> <mi>M</mi> </msubsup> <mo>,</mo> <msubsup> <mi>g</mi> <mrow> <mi>l</mi> <mo>,</mo> <msub> <mi>v</mi> <mn>1</mn> </msub> </mrow> <mi>M</mi> </msubsup> <mo>,</mo> <msubsup> <mi>g</mi> <mrow> <mi>l</mi> <mo>,</mo> <msub> <mi>v</mi> <mn>2</mn> </msub> </mrow> <mi>M</mi> </msubsup> <mo>,</mo> <mn>...</mn> <mo>,</mo> <msubsup> <mi>g</mi> <mrow> <mi>l</mi> <mo>,</mo> <msub> <mi>v</mi> <mi>n</mi> </msub> </mrow> <mi>M</mi> </msubsup> <mo>&amp;rsqb;</mo> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>1</mn> <mo>)</mo> </mrow> <mo>,</mo> </mrow> <mrow><msub><mi>S</mi><msub><mi>G</mi><mrow><mi>l</mi><mo>,</mo><mi>j</mi></mrow></msub></msub><mo>=</mo><mo>&amp;lsqb;</mo><msubsup><mi>g</mi><mrow><mi>l</mi><mo>,</mo><mi>j</mi></mrow><mn>1</mn></msubsup><mo>,</mo><msubsup><mi>g</mi><mrow><mi>l</mi><mo>,</mo><msub><mi>v</mi><mn>1</mn></msub></mrow><mn>1</mn></msubsup><mo>,</mo><msubsup><mi>g</mi><mrow><mi>l</mi><mo>,</mo><msub><mi>v</mi><mn>2</mn></msub></mrow><mn>1</mn></msubsup><mo>,</mo><mn>...</mn><mo>,</mo><msubsup><mi>g</mi><mrow><mi>l</mi><mo>,</mo><msub><mi>v</mi><mi>n</mi></msub></mrow><mn>1</mn></msubsup><mo>,</mo><mn>...</mn><mo>,</mo><msubsup><mi>g</mi><mrow><mi>l</mi><mo>,</mo><mi>j</mi></mrow><mi>M</mi></msubsup><mo>,</mo><msubsup><mi>g</mi><mrow><mi>l</mi><mo>,</mo><msub><mi>v</mi><mn>1</mn></msub></mrow><mi>M</mi></msubsup><mo>,</mo><msubsup><mi>g</mi><mrow><mi>l</mi><mo>,</mo><msub><mi>v</mi><mn>2</mn></msub></mrow><mi>M</mi></msubsup><mo>,</mo><mn>...</mn><mo>,</mo><msubsup><mi>g</mi><mrow><mi>l</mi><mo>,</mo><msub><mi>v</mi><mi>n</mi></msub></mrow><mi>M</mi></msubsup><mo>&amp;rsqb;</mo><mo>-</mo><mo>-</mo><mo>-</mo><mrow><mo>(</mo><mn>1</mn><mo>)</mo></mrow><mo>,</mo></mrow> 为方便书写,将记为:For convenience of writing, the Recorded as: <mrow> <msub> <mi>S</mi> <msub> <mi>G</mi> <mrow> <mi>l</mi> <mo>,</mo> <mi>j</mi> </mrow> </msub> </msub> <mo>=</mo> <mo>&amp;lsqb;</mo> <msub> <mi>g</mi> <mrow> <mi>l</mi> <mo>,</mo> <mi>j</mi> <mn>1</mn> 
</mrow> </msub> <mo>,</mo> <msub> <mi>g</mi> <mrow> <mi>l</mi> <mo>,</mo> <mi>j</mi> <mn>2</mn> </mrow> </msub> <mo>,</mo> <mn>...</mn> <mo>,</mo> <msub> <mi>g</mi> <mrow> <mi>l</mi> <mo>,</mo> <mi>j</mi> <mi>M</mi> <mo>*</mo> <mrow> <mo>(</mo> <mn>1</mn> <mo>+</mo> <mi>n</mi> <mo>)</mo> </mrow> </mrow> </msub> <mo>&amp;rsqb;</mo> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>2</mn> <mo>)</mo> </mrow> <mo>,</mo> </mrow> <mrow><msub><mi>S</mi><msub><mi>G</mi><mrow><mi>l</mi><mo>,</mo><mi>j</mi></mrow></msub></msub><mo>=</mo><mo>&amp;lsqb;</mo><msub><mi>g</mi><mrow><mi>l</mi><mo>,</mo><mi>j</mi><mn>1</mn></mrow></msub><mo>,</mo><msub><mi>g</mi><mrow><mi>l</mi><mo>,</mo><mi>j</mi><mn>2</mn></mrow></msub><mo>,</mo><mn>...</mn><mo>,</mo><msub><mi>g</mi><mrow><mi>l</mi><mo>,</mo><mi>j</mi><mi>M</mi><mo>*</mo><mrow><mo>(</mo><mn>1</mn><mo>+</mo><mi>n</mi><mo>)</mo></mrow></mrow></msub><mo>&amp;rsqb;</mo><mo>-</mo><mo>-</mo><mo>-</mo><mrow><mo>(</mo><mn>2</mn><mo>)</mo></mrow><mo>,</mo></mrow> 其中M*(1+n)表示有M幅人脸图像,每幅人脸图像有1+n个图像块;Wherein M*(1+n) represents that there are M face images, and each face image has 1+n image blocks; 第十一步,利用相似块的位置编号求训练集中扩充后的高分辨率人脸梯度特征图像集Gh中的所有图像在相同编号处的图像块组成的集合:In the eleventh step, use the position number of the similar block to obtain the set of image blocks at the same number for all images in the expanded high-resolution face gradient feature image set G h in the training set: 对上述第二步中的训练集中扩充后的高分辨率人脸梯度特征图像集Gh中的第i,i=1,2,...,M幅图像中编号为j和上述第九步中的相似低分辨率人脸梯度特征图像块的编号集合为[v1,v2,...,vn]的图像块组成集合则训练集中扩充后的高分辨率人脸梯度特征图像集Gh中所有图像编号为j和[v1,v2,...,vn]的图像块组成的集合为:For the i-th, i=1, 2,...,M images in the high-resolution face gradient feature image set G h expanded in the training set in the second step above A set of image blocks whose numbers are j and similar low-resolution face gradient feature image blocks in the ninth step above are [v 1 ,v 2 ,...,v n ] Then in the expanded high-resolution face gradient feature image set G h in the training set, it is a set composed of all image numbers j and [v 1 ,v 2 ,...,v n ] image blocks for: <mrow> <msub> <mi>S</mi> <msub> <mi>G</mi> <mrow> <mi>h</mi> <mo>,</mo> <mi>j</mi> </mrow> </msub> </msub> <mo>=</mo> <mo>&amp;lsqb;</mo> <msubsup> <mi>g</mi> <mrow> <mi>h</mi> <mo>,</mo> <mi>j</mi> </mrow> <mn>1</mn> </msubsup> <mo>,</mo> <msubsup> <mi>g</mi> <mrow> <mi>h</mi> <mo>,</mo> <msub> <mi>v</mi> <mn>1</mn> </msub> </mrow> <mn>1</mn> </msubsup> <mo>,</mo> <msubsup> <mi>g</mi> <mrow> <mi>h</mi> <mo>,</mo> <msub> <mi>v</mi> <mn>2</mn> </msub> </mrow> <mn>1</mn> </msubsup> <mn>...</mn> <mo>,</mo> <msubsup> <mi>g</mi> <mrow> <mi>h</mi> <mo>,</mo> <msub> <mi>v</mi> <mi>n</mi> </msub> </mrow> <mn>1</mn> </msubsup> <mo>,</mo> <mn>...</mn> <mo>,</mo> <msubsup> <mi>g</mi> <mrow> <mi>h</mi> <mo>,</mo> <mi>j</mi> </mrow> <mi>M</mi> </msubsup> <mo>,</mo> <msubsup> <mi>g</mi> <mrow> <mi>h</mi> <mo>,</mo> <msub> <mi>v</mi> <mn>1</mn> </msub> </mrow> <mi>M</mi> </msubsup> <mo>,</mo> <msubsup> <mi>g</mi> <mrow> <mi>h</mi> <mo>,</mo> <msub> <mi>v</mi> <mn>2</mn> </msub> </mrow> <mi>M</mi> </msubsup> <mn>...</mn> <mo>,</mo> <msubsup> <mi>g</mi> <mrow> <mi>h</mi> <mo>,</mo> <msub> <mi>v</mi> <mi>n</mi> </msub> </mrow> <mi>M</mi> </msubsup> <mo>&amp;rsqb;</mo> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>3</mn> <mo>)</mo> </mrow> <mo>,</mo> </mrow> 
<mrow><msub><mi>S</mi><msub><mi>G</mi><mrow><mi>h</mi><mo>,</mo><mi>j</mi></mrow></msub></msub><mo>=</mo><mo>&amp;lsqb;</mo><msubsup><mi>g</mi><mrow><mi>h</mi><mo>,</mo><mi>j</mi></mrow><mn>1</mn></msubsup><mo>,</mo><msubsup><mi>g</mi><mrow><mi>h</mi><mo>,</mo><msub><mi>v</mi><mn>1</mn></msub></mrow><mn>1</mn></msubsup><mo>,</mo><msubsup><mi>g</mi><mrow><mi>h</mi><mo>,</mo><msub><mi>v</mi><mn>2</mn></msub></mrow><mn>1</mn></msubsup><mn>...</mn><mo>,</mo><msubsup><mi>g</mi><mrow><mi>h</mi><mo>,</mo><msub><mi>v</mi><mi>n</mi></msub></mrow><mn>1</mn></msubsup><mo>,</mo><mn>...</mn><mo>,</mo><msubsup><mi>g</mi><mrow><mi>h</mi><mo>,</mo><mi>j</mi></mrow><mi>M</mi></msubsup><mo>,</mo><msubsup><mi>g</mi><mrow><mi>h</mi><mo>,</mo><msub><mi>v</mi><mn>1</mn></msub></mrow><mi>M</mi></msubsup><mo>,</mo><msubsup><mi>g</mi><mrow><mi>h</mi><mo>,</mo><msub><mi>v</mi><mn>2</mn></msub></mrow><mi>M</mi></msubsup><mn>...</mn><mo>,</mo><msubsup><mi>g</mi><mrow><mi>h</mi><mo>,</mo><msub><mi>v</mi><mi>n</mi></msub></mrow><mi>M</mi></msubsup><mo>&amp;rsqb;</mo><mo>-</mo><mo>-</mo><mo>-</mo><mrow><mo>(</mo><mn>3</mn><mo>)</mo></mrow><mo>,</mo></mrow> 为方便书写,将记为:For convenience of writing, the Recorded as: <mrow> <msub> <mi>S</mi> <msub> <mi>G</mi> <mrow> <mi>h</mi> <mo>,</mo> <mi>j</mi> </mrow> </msub> </msub> <mo>=</mo> <mo>&amp;lsqb;</mo> <msub> <mi>g</mi> <mrow> <mi>h</mi> <mo>,</mo> <mi>j</mi> <mn>1</mn> </mrow> </msub> <mo>,</mo> <msub> <mi>g</mi> <mrow> <mi>h</mi> <mo>,</mo> <mi>j</mi> <mn>2</mn> </mrow> </msub> <mo>,</mo> <mn>...</mn> <mo>,</mo> <msub> <mi>g</mi> <mrow> <mi>h</mi> <mo>,</mo> <mi>j</mi> <mi>M</mi> <mo>*</mo> <mrow> <mo>(</mo> <mn>1</mn> <mo>+</mo> <mi>n</mi> <mo>)</mo> </mrow> </mrow> </msub> <mo>&amp;rsqb;</mo> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>4</mn> <mo>)</mo> </mrow> <mo>,</mo> </mrow> <mrow><msub><mi>S</mi><msub><mi>G</mi><mrow><mi>h</mi><mo>,</mo><mi>j</mi></mrow></msub></msub><mo>=</mo><mo>&amp;lsqb;</mo><msub><mi>g</mi><mrow><mi>h</mi><mo>,</mo><mi>j</mi><mn>1</mn></mrow></msub><mo>,</mo><msub><mi>g</mi><mrow><mi>h</mi><mo>,</mo><mi>j</mi><mn>2</mn></mrow></msub><mo>,</mo><mn>...</mn><mo>,</mo><msub><mi>g</mi><mrow><mi>h</mi><mo>,</mo><mi>j</mi><mi>M</mi><mo>*</mo><mrow><mo>(</mo><mn>1</mn><mo>+</mo><mi>n</mi><mo>)</mo></mrow></mrow></msub><mo>&amp;rsqb;</mo><mo>-</mo><mo>-</mo><mo>-</mo><mrow><mo>(</mo><mn>4</mn><mo>)</mo></mrow><mo>,</mo></mrow> 第十二步,利用相似块的位置编号求扩充后低分辨率人脸图像集Pl中所有人脸图像在相同编号处的图像块组成的集合:In the twelfth step, use the position numbers of the similar blocks to find the set of all face images in the image blocks at the same number in the expanded low-resolution face image set P1 : 对上述第一步中的扩充后低分辨率人脸图像集Pl中的第i,i=1,2,...,M幅人脸图像中编号为j和上述第九步中的相似低分辨率人脸梯度特征图像块的编号集合为[v1,v2,...,vn]的图像块组成集合则Pl中所有图像编号为j和[v1,v2,...,vn]的图像块组成的集合为:For the first i, i= 1 , 2,..., M pieces of face images in the expanded low-resolution face image set P1 in the above first step A set of image blocks whose numbers are j and similar low-resolution face gradient feature image blocks in the ninth step above are [v 1 ,v 2 ,...,v n ] Then all the image numbers in P l are j and the set composed of image blocks [v 1 ,v 2 ,...,v n ] for: <mrow> <msub> <mi>S</mi> <msub> <mi>P</mi> <mrow> <mi>l</mi> <mo>,</mo> <mi>j</mi> </mrow> </msub> </msub> <mo>=</mo> <mo>&amp;lsqb;</mo> <msubsup> <mi>p</mi> <mrow> <mi>l</mi> <mo>,</mo> <mi>j</mi> </mrow> <mn>1</mn> </msubsup> <mo>,</mo> <msubsup> <mi>p</mi> <mrow> <mi>l</mi> <mo>,</mo> <msub> <mi>v</mi> <mn>1</mn> </msub> </mrow> <mn>1</mn> </msubsup> <mo>,</mo> 
<msubsup> <mi>p</mi> <mrow> <mi>l</mi> <mo>,</mo> <msub> <mi>v</mi> <mn>2</mn> </msub> </mrow> <mn>1</mn> </msubsup> <mn>...</mn> <mo>,</mo> <msubsup> <mi>p</mi> <mrow> <mn>1</mn> <mo>,</mo> <msub> <mi>v</mi> <mi>n</mi> </msub> </mrow> <mn>1</mn> </msubsup> <mo>,</mo> <mn>...</mn> <mo>,</mo> <msubsup> <mi>p</mi> <mrow> <mi>l</mi> <mo>,</mo> <mi>j</mi> </mrow> <mi>M</mi> </msubsup> <mo>,</mo> <msubsup> <mi>p</mi> <mrow> <mi>l</mi> <mo>,</mo> <msub> <mi>v</mi> <mn>1</mn> </msub> </mrow> <mi>M</mi> </msubsup> <mo>,</mo> <msubsup> <mi>p</mi> <mrow> <mi>l</mi> <mo>,</mo> <msub> <mi>v</mi> <mn>2</mn> </msub> </mrow> <mi>M</mi> </msubsup> <mn>...</mn> <mo>,</mo> <msubsup> <mi>p</mi> <mrow> <mi>l</mi> <mo>,</mo> <msub> <mi>v</mi> <mi>n</mi> </msub> </mrow> <mi>M</mi> </msubsup> <mo>&amp;rsqb;</mo> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>5</mn> <mo>)</mo> </mrow> <mo>,</mo> </mrow> <mrow><msub><mi>S</mi><msub><mi>P</mi><mrow><mi>l</mi><mo>,</mo><mi>j</mi></mrow></msub></msub><mo>=</mo><mo>&amp;lsqb;</mo><msubsup><mi>p</mi><mrow><mi>l</mi><mo>,</mo><mi>j</mi></mrow><mn>1</mn></msubsup><mo>,</mo><msubsup><mi>p</mi><mrow><mi>l</mi><mo>,</mo><msub><mi>v</mi><mn>1</mn></msub></mrow><mn>1</mn></msubsup><mo>,</mo><msubsup><mi>p</mi><mrow><mi>l</mi><mo>,</mo><msub><mi>v</mi><mn>2</mn></msub></mrow><mn>1</mn></msubsup><mn>...</mn><mo>,</mo><msubsup><mi>p</mi><mrow><mn>1</mn><mo>,</mo><msub><mi>v</mi><mi>n</mi></msub></mrow><mn>1</mn></msubsup><mo>,</mo><mn>...</mn><mo>,</mo><msubsup><mi>p</mi><mrow><mi>l</mi><mo>,</mo><mi>j</mi></mrow><mi>M</mi></msubsup><mo>,</mo><msubsup><mi>p</mi><mrow><mi>l</mi><mo>,</mo><msub><mi>v</mi><mn>1</mn></msub></mrow><mi>M</mi></msubsup><mo>,</mo><msubsup><mi>p</mi><mrow><mi>l</mi><mo>,</mo><msub><mi>v</mi><mn>2</mn></msub></mrow><mi>M</mi></msubsup><mn>...</mn><mo>,</mo><msubsup><mi>p</mi><mrow><mi>l</mi><mo>,</mo><msub><mi>v</mi><mi>n</mi></msub></mrow><mi>M</mi></msubsup><mo>&amp;rsqb;</mo><mo>-</mo><mo>-</mo><mo>-</mo><mrow><mo>(</mo><mn>5</mn><mo>)</mo></mrow><mo>,</mo></mrow> 为方便书写,将记为:For convenience of writing, the Recorded as: <mrow> <msub> <mi>S</mi> <msub> <mi>P</mi> <mrow> <mi>l</mi> <mo>,</mo> <mi>j</mi> </mrow> </msub> </msub> <mo>=</mo> <mo>&amp;lsqb;</mo> <msub> <mi>p</mi> <mrow> <mi>l</mi> <mo>,</mo> <mi>j</mi> <mn>1</mn> </mrow> </msub> <mo>,</mo> <msub> <mi>p</mi> <mrow> <mi>l</mi> <mo>,</mo> <mi>j</mi> <mn>2</mn> </mrow> </msub> <mo>,</mo> <mn>...</mn> <mo>,</mo> <msub> <mi>p</mi> <mrow> <mi>l</mi> <mo>,</mo> <mi>j</mi> <mi>M</mi> <mo>*</mo> <mrow> <mo>(</mo> <mn>1</mn> <mo>+</mo> <mi>n</mi> <mo>)</mo> </mrow> </mrow> </msub> <mo>&amp;rsqb;</mo> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>6</mn> <mo>)</mo> </mrow> <mo>,</mo> </mrow> <mrow><msub><mi>S</mi><msub><mi>P</mi><mrow><mi>l</mi><mo>,</mo><mi>j</mi></mrow></msub></msub><mo>=</mo><mo>&amp;lsqb;</mo><msub><mi>p</mi><mrow><mi>l</mi><mo>,</mo><mi>j</mi><mn>1</mn></mrow></msub><mo>,</mo><msub><mi>p</mi><mrow><mi>l</mi><mo>,</mo><mi>j</mi><mn>2</mn></mrow></msub><mo>,</mo><mn>...</mn><mo>,</mo><msub><mi>p</mi><mrow><mi>l</mi><mo>,</mo><mi>j</mi><mi>M</mi><mo>*</mo><mrow><mo>(</mo><mn>1</mn><mo>+</mo><mi>n</mi><mo>)</mo></mrow></mrow></msub><mo>&amp;rsqb;</mo><mo>-</mo><mo>-</mo><mo>-</mo><mrow><mo>(</mo><mn>6</mn><mo>)</mo></mrow><mo>,</mo></mrow> 第十三步,利用相似块的位置编号求扩充后高分辨率人脸图像集Ph中所有人脸图像在相同编号处的图像块组成的集合:The thirteenth step, use the position numbers of similar blocks to find the set of image blocks at the same number of all face images in the expanded high-resolution face image set Ph 
: For the i-th, i = 1, 2, ..., M, face image of the expanded high-resolution face image set $P_h$ from the first step above, the image blocks numbered j, together with the image blocks whose numbers lie in the set $[v_1, v_2, ..., v_n]$ of similar low-resolution face gradient feature image blocks from the ninth step above, form a set; the image blocks of all images in $P_h$ numbered j and $[v_1, v_2, ..., v_n]$ then form the set

$$ S_{P_{h,j}} = \big[\, p_{h,j}^{1},\, p_{h,v_1}^{1},\, p_{h,v_2}^{1}, \ldots, p_{h,v_n}^{1},\ \ldots,\ p_{h,j}^{M},\, p_{h,v_1}^{M},\, p_{h,v_2}^{M}, \ldots, p_{h,v_n}^{M} \,\big] \qquad (7), $$

For convenience of writing, $S_{P_{h,j}}$ is abbreviated as

$$ S_{P_{h,j}} = \big[\, p_{h,j1},\, p_{h,j2},\, \ldots,\, p_{h,jM*(1+n)} \,\big] \qquad (8), $$

In the fourteenth step, calculate the weight matrix corresponding to the j-th face image block:

First, formula (9) below gives the set of Euclidean distances between $g_{tl,j}$, the j-th face image block of the gradient feature image corresponding to the low-resolution face image $I_{tl}$ of the test set in the eighth step above, and all face image blocks of the set $S_{G_{l,j}}$ obtained in the tenth step above; then formula (10) below gives the set of Euclidean distances between $g_{th,j}$, the j-th image block of the high-resolution face gradient feature image $g_{th}$ corresponding to the enlarged high-resolution face image $I_{th}$ of the test set in the seventh step above, and all image blocks of the set $S_{G_{h,j}}$ in the eleventh step above:

$$ dist\_g_{tl,j}\_S_{G_{l,j}} = \big[\, dist(g_{tl,j}, g_{l,j1}),\, dist(g_{tl,j}, g_{l,j2}),\, \ldots,\, dist(g_{tl,j}, g_{l,jM*(1+n)}) \,\big] \qquad (9), $$

$$ dist\_g_{th,j}\_S_{G_{h,j}} = \big[\, dist(g_{th,j}, g_{h,j1}),\, dist(g_{th,j}, g_{h,j2}),\, \ldots,\, dist(g_{th,j}, g_{h,jM*(1+n)}) \,\big] \qquad (10), $$
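As a rough Python illustration of formulas (9) and (10), the sketch below computes one Euclidean distance set between a test gradient-feature patch and a stack of training feature patches; the function name, array names, and array layout are assumptions made for the example and are not part of the claims.

```python
import numpy as np

def euclidean_distance_set(test_patch, training_patches):
    """Distances between one test feature patch and each training feature patch.

    test_patch:       1-D array, a flattened gradient-feature patch (g_tl,j or g_th,j).
    training_patches: 2-D array of shape (M*(1+n), patch_dim), one flattened
                      training patch per row (the sets S_G_l,j or S_G_h,j).
    Returns a 1-D array of length M*(1+n), as in formulas (9) and (10).
    """
    diffs = training_patches - test_patch[np.newaxis, :]
    return np.sqrt(np.sum(diffs ** 2, axis=1))

# Hypothetical usage for one block index j:
# dist_l = euclidean_distance_set(g_tl_j, S_G_l_j)   # formula (9)
# dist_h = euclidean_distance_set(g_th_j, S_G_h_j)   # formula (10)
```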
After the above distances are obtained, the weight matrix $W_j$ of the j-th block is obtained from formula (11), in which $\alpha$ is the smoothing factor;

In the fifteenth step, calculate the mapping matrix corresponding to the j-th face image block:

Writing the mapping from the j-th low-resolution face image blocks of the training set to the corresponding j-th high-resolution face image blocks as a simple mapping relation gives the formula

$$ S_{P_{h,j}} = A_j^{T} S_{P_{l,j}} \qquad (12), $$

where $A_j$ denotes the mapping matrix of the j-th face image block and $T$ denotes matrix transposition; the optimal mapping matrix is obtained from formula (13):

$$ A_j = \arg\min_{A_j} \big\| S_{P_{h,j}} - A_j^{T} S_{P_{l,j}} \big\| \qquad (13), $$

Since the relation between high-resolution face image blocks and low-resolution face image blocks is not a simple mapping, the distance matrix obtained in the fourteenth step is used to impose a smoothness constraint on formula (13), giving the smooth regression formula (14):

$$ A_j' = \arg\min_{A_j} \big\| S_{P_{h,j}} - A_j^{T} S_{P_{l,j}} \big\|_{W_j} \qquad (14), $$

where $\mathrm{tr}(\cdot)$ denotes the trace of a matrix; to make the mapping process smoother, a regularization term is added, yielding formula (15):

$$ A_j'' = \arg\min_{A_j} \big\| S_{P_{h,j}} - A_j^{T} S_{P_{l,j}} \big\|_{W_j} + \lambda \big\| A_j \big\|_F^{2} \qquad (15), $$

where $F$ denotes the Frobenius norm and $\lambda$ balances the reconstruction error against the sparsity of $A_j$; simplification yields the mapping matrix corresponding to the j-th image block:

$$ A_j'' = \big( S_{P_{l,j}} W_j S_{P_{l,j}}^{T} + \lambda E \big)^{-1} S_{P_{h,j}} W_j S_{P_{l,j}}^{T} \qquad (16), $$

where $E$ denotes the identity matrix;
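Staying with Python for illustration, the sketch below shows one way the fourteenth and fifteenth steps could be realised numerically. Formula (11) is only referenced above, so the Gaussian-kernel weighting and the summation of the two distance sets are assumptions; the closed form coded here is the textbook weighted ridge solution of an objective shaped like formula (15) under an assumed column-wise data layout, offered as an illustration rather than a verbatim transcription of formula (16).

```python
import numpy as np

def weight_matrix(dist_l, dist_h, alpha=1.0):
    # Assumed stand-in for formula (11): a diagonal weight matrix from a
    # Gaussian kernel over the two distance sets of formulas (9) and (10).
    w = np.exp(-(np.asarray(dist_l) + np.asarray(dist_h)) / alpha)
    return np.diag(w)

def mapping_matrix(S_Pl_j, S_Ph_j, W_j, lam=1e-3):
    # Weighted, l2-regularised least squares in the spirit of formula (15):
    #   A_j = (S_Pl_j W_j S_Pl_j^T + lam*E)^(-1) S_Pl_j W_j S_Ph_j^T,
    # so that A_j^T @ S_Pl_j approximates S_Ph_j as in formula (12).
    # Columns of S_Pl_j / S_Ph_j are assumed to hold the flattened
    # low-/high-resolution training patches of block j.
    E = np.eye(S_Pl_j.shape[0])
    lhs = S_Pl_j @ W_j @ S_Pl_j.T + lam * E
    rhs = S_Pl_j @ W_j @ S_Ph_j.T
    return np.linalg.solve(lhs, rhs)          # shape: (dim_l, dim_h)
```

With patches stored column-wise as assumed here, applying the transposed result to a low-resolution patch matrix plays the role of the mapping in formula (12).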
In the sixteenth step, reconstruct the low-resolution face image blocks of the test set to obtain high-resolution face image blocks:

Through the mapping matrix obtained in the fifteenth step, the high-frequency information of the high-resolution face image block corresponding to the face image block $I_{tl,j}$ of the low-resolution face image $I_{tl}$ in the test set is obtained, and this high-frequency information is then interpolated into $I_{tl,j}$ to obtain the reconstructed face image block $I'_{th,j}$;

In the seventeenth step, combine all reconstructed image blocks into the reconstructed high-resolution face image:

In top-to-bottom, left-to-right order, all reconstructed face image blocks are combined according to their numbers, the overlapping parts being averaged during the combination (a sketch of this overlap averaging is given after this claim), to obtain the reconstructed high-resolution face image $I'_{th}$;

In the eighteenth step, construct the pyramid face super-resolution reconstruction model:

(18.1) Reduce the dimension of the $I'_{th}$ obtained in the seventeenth step above with the nearest-neighbor interpolation method to obtain the reduced low-resolution face image $I'_{tl}$, so that the reduced face image has the same size as $I_{tl}$;

(18.2) Reconstruct all low-resolution face images of the training set with the first through seventeenth steps above; the i-th low-resolution face image of the training set is reconstructed by taking it as the low-resolution face image of the test set and the training-set images as the training set, reconstructing the high-resolution image with the first through seventeenth steps above, and then reducing its dimension with the nearest-neighbor interpolation method to obtain the corresponding reduced image;

(18.3) Take the block size of the high-resolution face image as $R_2*R_2$ pixels, where $R_2$ has a value of 6 to 10 and $R_2 \neq R_1$, the number of overlapping pixels between high-resolution image blocks as $K_2$, and the block size of the low-resolution face image as $(R_2/d)*(R_2/d)$ pixels, where $d$ is the reduction factor and takes the same value as the $d$ of the first step, with $K_2/d$ overlapping pixels between low-resolution image blocks; take the $I'_{tl}$ obtained in (18.1) as the low-resolution face image of the test set and the reduced images obtained in (18.2), together with the corresponding high-resolution face images, as the training set, and perform the face image super-resolution reconstruction process once more to obtain the final reconstructed face image;

At this point, B. the reconstruction process of the low-resolution face images in the test set is complete, and the regression-model-based pyramid face image super-resolution reconstruction is finally complete.
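The overlap-averaging recombination of the seventeenth step can be pictured with the following Python sketch; the regular patch grid, the stride convention, and the function name are assumptions made for illustration only.

```python
import numpy as np

def combine_patches(patches, image_shape, patch_size, step):
    # Reassemble reconstructed patches into one image, averaging the
    # overlapping regions (seventeenth step). Patches are assumed to be
    # ordered top-to-bottom, left-to-right on a regular grid whose stride
    # equals patch_size minus the overlap.
    acc = np.zeros(image_shape, dtype=np.float64)
    cnt = np.zeros(image_shape, dtype=np.float64)
    idx = 0
    for r in range(0, image_shape[0] - patch_size + 1, step):
        for c in range(0, image_shape[1] - patch_size + 1, step):
            acc[r:r + patch_size, c:c + patch_size] += patches[idx]
            cnt[r:r + patch_size, c:c + patch_size] += 1.0
            idx += 1
    return acc / np.maximum(cnt, 1.0)   # mean where patches overlap
```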
2. The regression-model-based pyramid face image super-resolution reconstruction method according to claim 1, characterized in that: in the first step of expanding the low-resolution face image set and the high-resolution face image set of the training set, the size is (d*a)*(d*b) pixels, $d$ being a multiple whose value is 2; in the third step of dividing the expanded high-resolution face image set $P_h$ and its corresponding high-resolution face gradient feature image set $G_h$ into blocks, $K_1$ columns of pixels overlap between horizontally adjacent image blocks, the value of $K_1$ being 4; in the fourth step of dividing the expanded low-resolution face image set $P_l$ and its corresponding low-resolution face gradient feature image set $G_l$ into blocks, each block has a size of $(R_1/d)*(R_1/d)$ pixels, the value of $d$ being 2, and $K_1/d$ columns of pixels overlap between horizontally adjacent image blocks, the value of $K_1$ being 4; in the seventh step of dividing the enlarged high-resolution face image $I_{th}$ of the test set and its corresponding high-resolution face gradient feature image $g_{th}$ into blocks, the overlap is such that the current image block overlaps vertically adjacent image blocks by $K_1$ rows of pixels and horizontally adjacent image blocks by $K_1$ columns of pixels, the value of $K_1$ being 4; in the eighth step of dividing the low-resolution face image $I_{tl}$ of the test set and its corresponding low-resolution face gradient feature image $g_{tl}$ into blocks, each block has a size of $(R_1/d)*(R_1/d)$, the value of $d$ being 2, and $K_1/d$ columns of pixels overlap between horizontally adjacent image blocks, the value of $K_1$ being 4; in the eighteenth step of constructing the pyramid face super-resolution reconstruction model, the number of overlapping pixels between high-resolution image blocks in (18.3) is $K_2$, the value of $K_2$ being 4, and the block size of the low-resolution face image is $(R_2/d)*(R_2/d)$ pixels, $d$ being the reduction factor with the same value as the $d$ of the first step, namely 2.
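Purely for orientation, the parameter values recited in this claim can be grouped into a configuration structure such as the Python sketch below; the key names and the example value $R_2 = 8$ are inventions of the sketch, not part of the claim.

```python
# Illustrative parameter set for the two pyramid passes, using the values
# recited in claim 2; key names and the choice R2 = 8 are for this sketch only.
PYRAMID_CONFIG = {
    "scale_factor_d": 2,           # reduction/enlargement multiple d
    "first_pass": {
        "hr_overlap_K1": 4,        # overlap between adjacent HR blocks (pixels)
        "lr_overlap": 4 // 2,      # K1 / d overlap between adjacent LR blocks
    },
    "second_pass": {
        "hr_patch_size_R2": 8,     # any value in 6..10 with R2 != R1
        "hr_overlap_K2": 4,
        "lr_patch_size": 8 // 2,   # (R2 / d) pixels per side
        "lr_overlap": 4 // 2,      # K2 / d
    },
}
```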
CN201711381261.2A 2017-12-20 2017-12-20 Pyramid face image super-resolution reconstruction method based on regression model Expired - Fee Related CN108090873B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711381261.2A CN108090873B (en) 2017-12-20 2017-12-20 Pyramid face image super-resolution reconstruction method based on regression model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711381261.2A CN108090873B (en) 2017-12-20 2017-12-20 Pyramid face image super-resolution reconstruction method based on regression model

Publications (2)

Publication Number Publication Date
CN108090873A 2018-05-29
CN108090873B CN108090873B (en) 2021-03-05

Family

ID=62177638

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711381261.2A Expired - Fee Related CN108090873B (en) 2017-12-20 2017-12-20 Pyramid face image super-resolution reconstruction method based on regression model

Country Status (1)

Country Link
CN (1) CN108090873B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102842115A (en) * 2012-05-31 2012-12-26 哈尔滨工业大学(威海) Compressed sensing image super-resolution reconstruction method based on double dictionary learning
CN103093444A (en) * 2013-01-17 2013-05-08 西安电子科技大学 Image super-resolution reconstruction method based on self-similarity and structural information constraint
US20170178293A1 (en) * 2014-02-13 2017-06-22 Thomson Licensing Method for performing super-resolution on single images and apparatus for performing super-resolution on single images
CN105550988A (en) * 2015-12-07 2016-05-04 天津大学 Super-resolution reconstruction algorithm based on improved neighborhood embedding and structure self-similarity
CN107067367A (en) * 2016-09-08 2017-08-18 南京工程学院 A kind of Image Super-resolution Reconstruction processing method
CN107341776A (en) * 2017-06-21 2017-11-10 北京工业大学 Single frames super resolution ratio reconstruction method based on sparse coding and combinatorial mapping

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
GUANGWEI GAO et al.: "Locality-Constrained Double Low-Rank Representation for Effective Face Hallucination", 《IEEE ACCESS》 *
JUNJUN JIANG et al.: "SRLSP: A Face Image Super-Resolution Algorithm Using Smooth Regression with Local Structure Prior", 《IEEE TRANSACTIONS ON MULTIMEDIA》 *
JUNJUN JIANG: "Face Super-Resolution via Multilayer Locality-Constrained Iterative Neighbor Embedding and Intermediate Dictionary Learning", 《IEEE TRANSACTIONS ON IMAGE PROCESSING》 *
XIANG MA et al.: "Hallucinating face by position-patch", 《PATTERN RECOGNITION》 *
卫保国 et al.: "Face super-resolution method based on constrained patch reconstruction" (基于约束块重建的人脸超分辨率方法), 《计算机仿真》 *
薛翠红 et al.: "Pyramid face super-resolution algorithm based on the MAP framework" (基于MAP框架的金字塔人脸超分辨率算法), 《计算机工程》 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109559278A (en) * 2018-11-28 2019-04-02 山东财经大学 Super resolution image reconstruction method and system based on multiple features study
CN109559278B (en) * 2018-11-28 2019-08-09 山东财经大学 Super-resolution image reconstruction method and system based on multi-feature learning
CN109949240A (en) * 2019-03-11 2019-06-28 厦门美图之家科技有限公司 A kind of image processing method and calculate equipment
CN110189255A (en) * 2019-05-29 2019-08-30 电子科技大学 Face detection method based on two-level detection
CN110189255B (en) * 2019-05-29 2023-01-17 电子科技大学 Face detection method based on two-level detection

Also Published As

Publication number Publication date
CN108090873B (en) 2021-03-05

Similar Documents

Publication Publication Date Title
CN107610194B (en) Magnetic resonance image super-resolution reconstruction method based on multi-scale fusion CNN
CN109255755B (en) Image super-resolution reconstruction method based on multi-column convolutional neural network
CN102354397B (en) A face image super-resolution reconstruction method based on the similarity of facial features and organs
CN103093444B (en) Image super-resolution reconstruction method based on self-similarity and structural information constraint
CN101719266B (en) Affine transformation-based frontal face image super-resolution reconstruction method
CN102968766B (en) Dictionary database-based adaptive image super-resolution reconstruction method
CN110111256A (en) Image Super-resolution Reconstruction method based on residual error distillation network
CN103279933B (en) A kind of single image super resolution ratio reconstruction method based on bilayer model
CN105550988A (en) Super-resolution reconstruction algorithm based on improved neighborhood embedding and structure self-similarity
CN101556690A (en) Image super-resolution method based on overcomplete dictionary learning and sparse representation
Xin et al. Residual attribute attention network for face image super-resolution
CN101299235A (en) Method for reconstructing human face super resolution based on core principle component analysis
CN107341765A (en) A kind of image super-resolution rebuilding method decomposed based on cartoon texture
Zhu et al. Generative adversarial image super‐resolution through deep dense skip connections
CN108090873A (en) Pyramid face image super-resolution reconstruction method based on regression model
CN117333750A (en) Spatial registration and local-global multi-scale multi-modal medical image fusion method
He et al. Remote sensing image super-resolution using deep–shallow cascaded convolutional neural networks
CN106651772A (en) Super-resolution reconstruction method of satellite cloud picture
Zeng et al. Densely connected transformer with linear self-attention for lightweight image super-resolution
CN118799179A (en) Dual-domain network reconstruction method for hyperspectral image super-resolution based on progressive hybrid convolution
CN103325104B (en) Based on the face image super-resolution reconstruction method of iteration sparse expression
CN105427249A (en) Wind power image quality enhancing method based on robustness nuclear norm regular regression
CN116228823A (en) An artificial intelligence-based method for unsupervised cascade registration of magnetic resonance images
CN114022362B (en) An image super-resolution method based on pyramid attention mechanism and symmetric network
Shao et al. SRWGANTV: image super-resolution through wasserstein generative adversarial networks with total variational regularization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210305

Termination date: 20211220