CN109035142A - Satellite image super-resolution method using an adversarial network combined with aerial image priors - Google Patents
- Publication number
- Publication number: CN109035142A (application CN201810777731.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- loss
- resolution
- model
- satellite
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
Abstract
Description
Technical Field
The present invention belongs to the technical field of image super-resolution, and in particular relates to a satellite image super-resolution method based on multi-scale perceptual loss and a generative adversarial network combined with an aerial image prior.
Background Art
Image resolution is an important indicator of image quality: a higher-resolution image shows more detail more clearly. However, because image acquisition is constrained by hardware and the external environment, the acquired images often have low resolution, which raises the problem of how to obtain high-resolution images from low-resolution ones. At present, as the number of satellites grows, they can cover more than 90% of the Earth's surface, so the area that can be monitored by satellite is far larger than what can be covered by images obtained through other means; satellite images, however, suffer from low resolution for many reasons. Compared with aerial images, for example, satellite images are relatively blurry and lack detail, while aerial images cover far less area than satellite images. How to obtain higher-resolution satellite imagery is therefore of great significance and value.
In the field of image super-resolution, combining deep neural networks with the traditional super-resolution problem has brought new breakthroughs. With the development of computer hardware, the cost of accelerating large-scale computation has dropped significantly, lowering the cost of training deep neural networks; this has greatly benefited researchers and allowed the technology to be applied widely across many fields. From SRCNN, the first network to combine deep learning with super-resolution, to SRGAN, a super-resolution algorithm built on Generative Adversarial Nets (GAN), network parameters are trained on pairs of low-resolution and high-resolution images to obtain a model that maps low-resolution images to high-resolution ones, generating a high-resolution image when only the low-resolution image is available.
The image super-resolution problem is described as follows:
Image super-resolution is the process of obtaining a corresponding high-resolution image from a low-resolution image; such techniques break through the limitations of the original system's imaging hardware to obtain a clearer image. Super-resolution methods generally fall into two categories: methods based on a single image and methods based on multiple images. Single-image super-resolution enlarges a low-resolution image and improves its resolution through a reconstruction algorithm. Multi-image super-resolution reconstructs a high-resolution image by fusing a sequence of multiple similar frames.
In single-image super-resolution, the algorithm establishes the relationship between low-resolution and high-resolution images and then reconstructs the high-resolution image from the low-resolution one. Traditional algorithms simulate the causes of low resolution in various ways, building degradation models that fit the process by which low-resolution images are generated, thereby constructing the relationship between low- and high-resolution images and predicting the high-resolution result. This simulation process can be described by the following formula:
I_L = H·I_H + n
where I_L is the low-resolution image, I_H is the high-resolution image corresponding to I_L, H is the degradation model that generates the low-resolution image, and n is the noise introduced in that process. The degradation model H can in turn be expressed as:
H = D_Sub × B × G
where D_Sub denotes the downsampling operator, B the blur factor, and G the geometric deformation factor.
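The degradation process above can be sketched numerically. The following is a minimal numpy illustration in which G (geometric deformation) is assumed to be the identity, B is a small separable Gaussian blur, D_Sub is strided subsampling, and all sizes are toy values rather than parameters from the patent.

```python
import numpy as np

def degrade(i_h, scale=2, blur_sigma=1.0, noise_sigma=0.01, rng=None):
    """Simulate I_L = H·I_H + n with H = D_Sub × B × G.

    G is taken as the identity here; B is a small separable Gaussian
    blur; D_Sub is strided subsampling; n is additive Gaussian noise.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # B: build a 1-D Gaussian kernel and blur separably (rows, then columns)
    radius = 2
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * blur_sigma**2))
    k /= k.sum()
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, i_h)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    # D_Sub: downsample by keeping every `scale`-th pixel
    low = blurred[::scale, ::scale]
    # n: additive noise
    return low + noise_sigma * rng.standard_normal(low.shape)

i_h = np.ones((8, 8))      # toy "high-resolution" image
i_l = degrade(i_h, scale=2)
print(i_l.shape)           # (4, 4)
```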
The main approaches to inverting the above degradation model are interpolation-based methods, image-reconstruction-based methods, and learning-based methods. Interpolation methods achieve super-resolution by decomposing the image, interpolating, and returning the interpolated values; they run fast, can be parallelized, and can meet real-time requirements. However, interpolation cannot predict the high-frequency information lost between the low- and high-resolution images, so the resulting images lack texture detail and sharp edges. Reconstruction-based super-resolution algorithms are divided into spatial-domain and frequency-domain methods: a correspondence between low- and high-resolution images is established in the spatial or frequency domain, and a hand-designed correspondence model implements the mapping from low to high resolution; classic examples include projection onto convex sets and maximum a posteriori estimation. The drawback of such methods is that hand-designed models cannot adapt to the wide variety of image details to be restored; they achieve good results only on small amounts of data and cannot further improve detail sharpness as the data grows.
Learning-based methods, like reconstruction-based ones, also establish the relationship between low- and high-resolution images, but they use external training samples to acquire prior knowledge of that relationship and thereby realize the conversion from low to high resolution. Examples include methods based on manifold learning, sparse representation, and deep neural networks. Sparse-representation and similar learning methods are limited by the size of the constructed dictionary and the difficulty of guaranteeing data sparsity, so they cannot deliver stable super-resolution results. Deep-network approaches, such as those based on residual networks or generative adversarial networks, must train a large number of parameters on pairs of low- and high-resolution images; they likewise require large amounts of paired data, tend to overfit during training, and show poor robustness at test time. Moreover, when predicting the high-frequency content of the high-resolution image, information is still missing, so texture-rich regions look overly smooth.
Satellite image super-resolution is further constrained by practical conditions. Very-high-resolution satellite images are currently unobtainable, so paired high- and low-resolution satellite data are hard to come by, and many super-resolution methods that require such pairs cannot be applied directly to this task. Satellite image acquisition is also severely affected by noise, leaving obvious grain noise in the images; running single-image super-resolution directly would amplify that noise and degrade clarity. As auxiliary data, aerial images cover far less area than satellite images, but the two share many similar scenes, and aerial images are much sharper than satellite images. Moreover, the available aerial and satellite data are not paired: they were not captured at the same place and time. How to denoise and super-resolve satellite images under these limited conditions, and how to use sharp aerial image data to enhance the clarity of satellite data, remain open problems.
Summary of the Invention
The technical problem addressed by the present invention is to overcome the above deficiencies of the prior art by providing a satellite image super-resolution method based on multi-scale perceptual loss and a generative adversarial network combined with an aerial image prior. It compensates for the lack of a clear-image prior (no sharp satellite images being available) in ordinary super-resolution algorithms that use satellite images only, generating clearer satellite images. Even when only satellite data are used, the added multi-scale perceptual loss produces sharper super-resolved images than other methods.
The present invention adopts the following technical scheme:
A satellite image super-resolution method using an adversarial network combined with an aerial image prior: pairs of level-16 noisy images and their corresponding level-16 noise-free images are used to train a denoising model, and aerial data are then used to train an image super-resolution model. Aerial images are used to build an external prior dictionary with a GMM, which guides the reconstruction of the less clear internal satellite images, completing the post-processing of the generated super-resolved images; Gaussian filtering is then used for image sharpening. The result is a high-resolution version of the original satellite image, improving visual quality over the original.
Specifically, the method comprises the following steps:
S1. Define the generator, the discriminator, and the multi-scale perceptual loss network of the generative adversarial network;
S2. Downsample images extracted at level 18 from the existing satellite data to level 16; take the resulting level-16 satellite images as the denoising targets I_D_H and the satellite data extracted at level 16 as the noisy images I_D_L, forming image pairs; denote the generated noise-free satellite image by I_D_GH;
S3. Using the image pairs formed in step S2, perform initialization training of the generator in the denoising model. During initialization, use the mean squared error as the loss function: compute the pixel-wise MSE between each generated image and its corresponding target to obtain the generator MSE loss loss_MSE, then compute the gradients and back-propagate them to adjust the model parameters;
S4. After 100 epochs of initialization training, train the complete model: compute the losses and corresponding gradients and back-propagate them to adjust the parameters of the generator and the discriminator; the perceptual loss network VGG19 keeps its parameters fixed;
S5. Train for 200 epochs with the above settings until convergence and save the model. The trained generator is used for denoising; the denoised image I_D_GH serves as the input to image super-resolution. Define the satellite image super-resolution model;
S6. Repeat steps S3 to S5 to complete the super-resolution network training process and the denoising model, then generate the super-resolved image I_SR_GH and build an external prior dictionary with a Gaussian mixture model;
S7. To build the GMM external prior dictionary, split the sharp level-17 aerial images into 15×15 patches and group them preliminarily by Euclidean distance;
S8. Reconstruct the satellite image according to the reconstructed internal patch groups, then apply an image sharpening operation to the reconstructed satellite image to obtain the final result.
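The Gaussian-filter-based sharpening used after reconstruction is commonly realized as unsharp masking. The sketch below illustrates that technique under the assumption (not spelled out in the patent) that sharpening adds back an amplified high-frequency residual; the kernel size and `amount` are illustrative values.

```python
import numpy as np

def gaussian_blur(img, sigma=1.0, radius=2):
    # Separable Gaussian blur with a small 1-D kernel
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def unsharp_mask(img, sigma=1.0, amount=1.5):
    """Sharpen by adding back the high-frequency residual img - blur(img)."""
    return img + amount * (img - gaussian_blur(img, sigma))

img = np.zeros((9, 9)); img[4, 4] = 1.0   # toy image: single bright pixel
sharp = unsharp_mask(img)
print(sharp.shape)                        # (9, 9)
```

The center pixel ends up brighter than in the input while its blurred surroundings are slightly suppressed, which is the edge-enhancing behavior sharpening aims for.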
Further, in step S1, the generator of the generative adversarial network is defined as a residual network containing 16 residual modules, each with three convolutional layers;
the discriminator is defined as a 10-layer convolutional neural network whose convolutional layers use dilated convolution;
the multi-scale perceptual loss is defined using a VGG19 network pre-trained on the IMAGENET 1000-class classification database as the perceptual loss network; the multi-scale feature maps of layers conv2_2, conv3_4, and conv4_4 are used to construct the multi-scale perceptual loss.
Further, in step S3, the generator MSE loss loss_MSE is:

loss_MSE = MSE(I_D_GH, I_D_H).
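As a concrete illustration of this loss, a minimal numpy version with toy 2×2 patches standing in for the denoised image and its target:

```python
import numpy as np

def mse_loss(generated, target):
    """Pixel-wise mean squared error used to initialize the generator."""
    return np.mean((generated - target) ** 2)

i_d_gh = np.array([[0.1, 0.2], [0.3, 0.4]])   # toy generated (denoised) patch
i_d_h  = np.array([[0.0, 0.2], [0.3, 0.5]])   # toy clean target patch
print(mse_loss(i_d_gh, i_d_h))                # ≈ 0.005
```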
Further, in step S4, during model training, the generator MSE loss loss_MSE, the perceptual loss loss_vgg, and the adversarial loss loss_GAN are added with weights to form the overall generator loss:

loss_G = loss_MSE + loss_vgg + loss_GAN.
Further, the perceptual loss loss_vgg is:

loss_vgg = 10^-6 × (loss_mse_conv2_2 + loss_mse_conv3_4 + loss_mse_conv4_4)

loss_mse_conv2_2 = MSE(f_i_conv2_2, f_t_conv2_2)

loss_mse_conv3_4 = MSE(f_i_conv3_4, f_t_conv3_4)

loss_mse_conv4_4 = MSE(f_i_conv4_4, f_t_conv4_4)

where f_i_conv2_2, f_i_conv3_4, and f_i_conv4_4 are the conv2_2, conv3_4, and conv4_4 feature maps obtained by feeding the generated image into the perceptual model, and f_t_conv2_2, f_t_conv3_4, and f_t_conv4_4 are the corresponding conv2_2, conv3_4, and conv4_4 feature maps obtained by feeding the generated image's target into the perceptual model;
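A minimal numpy sketch of how the three per-layer feature MSEs combine into loss_vgg; the random arrays here merely stand in for VGG19 activations, which the patent takes from a frozen pre-trained network:

```python
import numpy as np

def mse(a, b):
    return np.mean((a - b) ** 2)

def multiscale_perceptual_loss(feats_gen, feats_tgt, weight=1e-6):
    """loss_vgg = 1e-6 * sum of per-layer feature-map MSEs.

    feats_gen / feats_tgt map layer names (conv2_2, conv3_4, conv4_4)
    to feature maps taken from a frozen, pre-trained VGG19.
    """
    return weight * sum(mse(feats_gen[k], feats_tgt[k]) for k in feats_gen)

rng = np.random.default_rng(0)
# Stand-in feature maps; in the real model these come from VGG19 activations.
layers = ["conv2_2", "conv3_4", "conv4_4"]
fg = {k: rng.standard_normal((4, 4)) for k in layers}
ft = {k: fg[k] + 0.1 for k in layers}   # target features offset by 0.1
loss = multiscale_perceptual_loss(fg, ft)
print(loss)
```

With each layer's MSE equal to about 0.01, the weighted sum comes out near 3e-8, showing how the 10^-6 weight keeps the perceptual term small relative to the pixel loss.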
The adversarial loss loss_GAN is:

loss_GAN = 10^-4 × cross_entropy(I_D_GH, True)

cross_entropy(I_D_GH, True) = log(D(I_D_GH))

where D(·) is the discriminator.
Further, in step S4, the discriminator loss loss_D during overall training is defined as:

loss_D = loss_1 + loss_2

loss_1 = sigmoid_cross_entropy(I_D_GH, False)

loss_2 = sigmoid_cross_entropy(I_D_H, True).
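The adversarial and discriminator terms can be illustrated with toy logits. This sketch assumes `sigmoid_cross_entropy` is the usual binary cross-entropy on sigmoid outputs (the patent does not spell out its exact form); the 10^-4 weight follows the loss_GAN definition above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_cross_entropy(logits, is_real):
    """Binary cross-entropy of the discriminator's sigmoid output
    against a real/fake label, averaged over the batch."""
    p = sigmoid(logits)
    target = 1.0 if is_real else 0.0
    eps = 1e-12
    return -np.mean(target * np.log(p + eps) + (1 - target) * np.log(1 - p + eps))

# Toy discriminator logits for generated and real images
d_fake = np.array([-1.0, 0.5])   # D(I_D_GH)
d_real = np.array([2.0, 1.5])    # D(I_D_H)

# Discriminator: push fakes toward "False", reals toward "True"
loss_d = sigmoid_cross_entropy(d_fake, False) + sigmoid_cross_entropy(d_real, True)
# Generator adversarial term: push fakes toward "True", scaled by 1e-4
loss_gan = 1e-4 * sigmoid_cross_entropy(d_fake, True)
print(loss_d > 0 and loss_gan > 0)   # True
```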
Further, in step S5, the super-resolution model comprises a generator, a perceptual model, and a discriminator; the perceptual model and the discriminator share the structures used in the denoising model. The generator of the image super-resolution model is defined as follows:

the main body of the network is built by stacking multiple residual modules, and a sub-pixel convolution layer performs the image magnification.
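The magnification step of a sub-pixel convolution layer is a depth-to-space rearrangement (pixel shuffle). Below is a minimal numpy sketch of that rearrangement alone, without the convolutions that would precede it in the actual generator:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Depth-to-space rearrangement used by a sub-pixel convolution layer:
    (C*r*r, H, W) -> (C, H*r, W*r)."""
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)      # split channels into r×r sub-positions
    x = x.transpose(0, 3, 1, 4, 2)    # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

x = np.arange(16, dtype=float).reshape(4, 2, 2)  # 4 channels, 2×2
y = pixel_shuffle(x, 2)                          # 1 channel, 4×4
print(y.shape)   # (1, 4, 4)
```

Each output 2×2 block interleaves one pixel from each of the four input channels, which is how the layer trades channel depth for spatial resolution.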
Further, the generator of the super-resolution model is trained on aerial data: the input consists of image pairs formed by the low-resolution level-16 aerial image I_SR_L and its corresponding high-resolution level-17 aerial image I_SR_H, and the generator output is I_SR_GH. The generator loss is defined as:

loss_MSE_SR = MSE(I_SR_GH, I_SR_H).
Further, step S7 proceeds as follows:

S701. Build a GMM from the grouped image patches, apply SVD to the covariance matrices of the resulting model, and construct a dictionary that serves as the external prior guiding the subsequent reconstruction of the satellite images;

S702. Take the output I_SR_GH of the preceding super-resolution model as the internal image input; split it into 15×15 patches and cluster the patches under the guidance of the GMM built for the external prior dictionary;

S703. Use the dictionary formed from the external prior to guide the internal image patches in building an internal dictionary;

S704. Sparsely encode over the internal dictionary and, combined with the original internal patch groups, reconstruct new internal patch groups.
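Step S701 can be sketched with numpy: fit a mean and covariance per patch group and take the SVD of the covariance, whose singular vectors form an orthogonal dictionary for that group. The group assignments, patch dimension (9 instead of 15×15 = 225), and data here are toy stand-ins, not the patent's actual aerial patches.

```python
import numpy as np

def build_prior_dictionary(patch_groups):
    """For each group of flattened external (aerial) patches, fit a Gaussian
    (mean + covariance) and take the SVD of the covariance; the left singular
    vectors form an orthogonal dictionary for that group (step S701)."""
    dictionaries = []
    for patches in patch_groups:           # patches: (n_patches, patch_dim)
        mu = patches.mean(axis=0)
        cov = np.cov(patches - mu, rowvar=False)
        u, s, _ = np.linalg.svd(cov)
        dictionaries.append((mu, u, s))    # dictionary atoms are columns of u
    return dictionaries

rng = np.random.default_rng(0)
# Toy stand-ins for flattened aerial patches, two pre-made groups
groups = [rng.standard_normal((50, 9)) for _ in range(2)]
dicts = build_prior_dictionary(groups)
mu, u, s = dicts[0]
print(u.shape, np.allclose(u.T @ u, np.eye(9)))   # (9, 9) True
```

Because the covariance is symmetric positive semi-definite, its SVD coincides with its eigendecomposition, so the atoms are the principal directions of each patch group.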
Compared with the prior art, the present invention has at least the following beneficial effects:
The present invention provides a satellite image super-resolution method that combines an adversarial network with an aerial image prior. Aimed at the practical situation where satellite image resolution and visual quality need to be improved but no pairs of satellite images and corresponding sharp aerial images exist, it designs a satellite image super-resolution pipeline with three parts, image denoising, image super-resolution, and image post-processing, that progressively improves the final super-resolution result within the range of available data.
Further, both the denoising model and the super-resolution model in the pipeline are built from generative adversarial networks, to which a multi-scale perceptual loss is added to further improve their denoising and super-resolution performance. The role of the perceptual loss is to constrain, in the feature domain, the image produced by the generator against its target, making the generated image visually closer to the real target image. The multi-scale perceptual loss combines perceptual losses at several scales, imposing stronger constraints and further improving the generated results.
Further, as different modules of the generative adversarial network, the generator and the discriminator play different roles, and their definitions here matter for both satellite image denoising and satellite image super-resolution. The generator mainly builds a loss over point-to-point pixels, and its network body focuses on extracting the high-frequency information of the image (through the residual structure). The discriminator attends more to the high-level semantic layer, ensuring consistency between the generated image and the real target image, and requires a larger receptive field (realized through dilated convolution). The multi-scale perceptual loss constrains the generated image against the real target image in the feature domain, implemented here with a network pre-trained on IMAGENET.
Further, the generator produces noise-free images that resemble the real sharp image at the pixel level, so its loss function uses the MSE of pixel-wise differences.
Further, the discriminator constrains the similarity between the generated image and the real sharp image at the high-level semantic layer. The cross-entropy function, a loss based on the discrimination probability, is used so that the generated image and the real target image have the highest probability of being judged semantically as the same class; that is, the generated image is made as similar as possible to the real target image.
Further, in the super-resolution model the discriminator and the perceptual model play the same roles as in the denoising model, so the same structures are used. The generator's network body is also similar (it still needs to generate more high-frequency information and likewise adopts the residual structure), but because the super-resolution model must output an image larger than its low-resolution input, a design combining a sub-pixel convolution layer with ordinary convolution layers is used here.
Further, in reality it is impossible to obtain pairs of sharper satellite images (approaching the clarity of aerial images) and low-resolution satellite images, which limits how well satellite image super-resolution can perform. The present invention therefore proposes an image post-processing method applied after super-resolution to further improve visual quality: a GMM built from sharp aerial data forms an external prior dictionary that guides the satellite images in building an internal dictionary, reconstructing sharper satellite images.
In summary, the present invention realizes the denoising model and the image super-resolution model through combined constraints at the pixel level, the semantic level, and the multi-scale feature domain. Because no paired satellite training data exist, aerial images are introduced both to train the super-resolution model and to build the GMM dictionary in image post-processing, guiding the reconstruction of sharper satellite images.
The technical solution of the present invention is described in further detail below with reference to the accompanying drawings and embodiments.
Brief Description of the Drawings
Fig. 1 is the overall flowchart;
Fig. 2 is the structure of the generator in the denoising model;
Fig. 3 is the structure of the discriminator in the denoising model;
Fig. 4 is the structure of VGG19 in the denoiser;
Fig. 5 is the structure of the generator in the image super-resolution model;
Fig. 6 is the flowchart of building the GMM from aerial images to guide satellite image reconstruction;
Fig. 7 shows results of the present invention;
Fig. 8 compares the results of the present invention with others.
Detailed Description of the Embodiments
The present invention provides a satellite image super-resolution method based on multi-scale perceptual loss and a generative adversarial network combined with an aerial-image prior. First, a denoising model is trained on image pairs consisting of noisy level-16 images and their corresponding noise-free level-16 images; a super-resolution model is then trained on clear aerial data. Because no satellite/aerial image pairs exist, the post-processing of the generated super-resolution images uses clear aerial images to build an external GMM prior dictionary, which guides the reconstruction of the less clear satellite images. After reconstruction, Gaussian filtering is used for image sharpening to further improve quality. The final result is a high-resolution version of the original satellite image with improved visual quality. The experimental section also demonstrates the effectiveness of this scheme, which offers a practical approach to satellite image super-resolution and image quality improvement under real-world data constraints.
Referring to Fig. 1, the satellite image super-resolution method of the present invention, based on multi-scale perceptual loss and a generative adversarial network combined with an aerial-image prior, comprises the following steps:
S1. The generative adversarial network that performs denoising consists of three parts: a generator, a discriminator, and a VGG19 network pre-trained on the ImageNet database;
S101. Define the generator of the generative adversarial network. A residual network is used as the generator, containing 16 residual modules, each with three convolutional layers. Since only denoising is required at this stage, the image is not upscaled here. The specific structure is shown in Fig. 2.
S102. Define the discriminator structure. The discriminator is a 10-layer convolutional neural network whose convolutional layers use dilated (atrous) convolutions; by setting the dilation rate, the receptive field is enlarged without pooling layers, improving the discriminator's accuracy. The specific structure is shown in Fig. 3. The discriminator contains 10 convolutional layers with 64, 128, 256, 512, 1024, 512, 256, 128, 128, 128 kernels respectively, i.e. the kernel count first increases and then decreases. The first 7 layers use 4*4 kernels with stride 2, applied as sliding convolutions; the increasing kernel count captures as many feature types as possible. The last layer uses 1*1 kernels to reduce the parameter count: since the number of channels grows with the earlier kernel counts, this layer is added to bring it back down.
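How dilation enlarges the receptive field without pooling can be sketched with a generic receptive-field calculation (illustrative helper code, not taken from the patent; the layer configurations below are examples):

```python
def receptive_field(layers):
    """Receptive field of stacked convolutions.

    layers: list of (kernel_size, stride, dilation) tuples, first layer first.
    A dilation of d turns a k-wide kernel into an effective width d*(k-1)+1,
    which is how a discriminator can grow its receptive field without pooling.
    """
    rf, jump = 1, 1  # receptive field and distance between adjacent outputs
    for k, s, d in layers:
        k_eff = d * (k - 1) + 1
        rf += (k_eff - 1) * jump
        jump *= s
    return rf

# Two plain 3x3 stride-1 layers see 5 input pixels...
plain = receptive_field([(3, 1, 1), (3, 1, 1)])
# ...while a single 3x3 layer with dilation 2 already sees 5.
dilated = receptive_field([(3, 1, 2)])
```

Stacking strided 4*4 layers, as in the discriminator above, compounds the growth further because each stride doubles the jump between outputs.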
S103. Define the multi-scale perceptual loss. The VGG19 network pre-trained on the ImageNet 1000-class classification database serves as the perceptual loss network. Unlike other perceptual losses, multi-scale feature maps from the conv2_2, conv3_4, and conv4_4 layers are used to build a multi-scale perceptual loss that improves the quality of the images produced by the generator. The specific structure is shown in Fig. 4; it contains two kinds of convolution modules, the first with two convolutional layers and one pooling layer, the second with four convolutional layers and one pooling layer. All convolutional layers use 3*3 kernels with stride 1, and the kernel counts increase layer by layer, similarly to the discriminator: 64, 64, 128, 128, 256, 256, 256, 256, 512, 512, 512, 512, 512, 512, 512, 512. Here conv2_2, conv3_4, and conv4_4 are the outputs of the second, third, and fourth convolution modules, respectively.
S2. Images extracted from existing satellite data at level 18 are downsampled to level 16 (level 16 is the commonly available level; level-18 data is costly to acquire). The level-16 data obtained this way is relatively clear, but because level-18 satellite data is expensive, such clear data is very scarce.
The level-16 satellite images obtained this way serve as the denoising targets I_D_H, and ordinary satellite data extracted directly at level 16 serves as the noisy images I_D_L, forming image pairs in this manner; the generated noise-free satellite image is denoted I_D_GH;
S3. Using the image pairs formed in step S2, the generator of the denoising model is trained in an initialization phase. During initialization, the mean squared error (MSE) between the pixels of the image produced by the generator and its corresponding target image is used as the loss function; the gradient is computed and backpropagated to adjust the model parameters. loss_MSE is defined as follows:
loss_MSE = MSE(I_D_GH, I_D_H)
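A minimal NumPy sketch of this pixel-wise loss (illustrative only, not the patent's actual training code):

```python
import numpy as np

def mse_loss(generated, target):
    """loss_MSE: mean squared error over all pixels of the two images."""
    generated = np.asarray(generated, dtype=np.float64)
    target = np.asarray(target, dtype=np.float64)
    return float(np.mean((generated - target) ** 2))
```

In the actual training loop this scalar would be differentiated with respect to the generator parameters; here it only illustrates the definition.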
S4. After roughly 100 epochs of initialization training (one epoch means that all images in the training set have been used once), training of the complete model is carried out.
At this stage all three networks participate in training, but VGG19 does not update its parameters; it only outputs the perceptual loss, which is passed to the generator and the discriminator for their parameter updates. During full training, the generator's loss function differs from that of the initialization training.
During full training, the generator loss consists of three parts: the MSE loss, the perceptual loss, and the adversarial loss, which are added with weights to form the overall generator loss:
loss_G = loss_MSE + loss_vgg + loss_GAN
where loss_MSE is the same as the initialization loss and loss_vgg is the perceptual loss:
loss_vgg = 10^(-6) × (loss_mse_conv2_2 + loss_mse_conv3_4 + loss_mse_conv4_4)
loss_mse_conv2_2 = MSE(f_i_conv2_2, f_t_conv2_2)
loss_mse_conv3_4 = MSE(f_i_conv3_4, f_t_conv3_4)
loss_mse_conv4_4 = MSE(f_i_conv4_4, f_t_conv4_4)
where f_i_conv2_2, f_i_conv3_4, f_i_conv4_4 are the conv2_2, conv3_4, and conv4_4 feature maps obtained by feeding the generated image into the perceptual model, and f_t_conv2_2, f_t_conv3_4, f_t_conv4_4 are the corresponding conv2_2, conv3_4, and conv4_4 feature maps obtained by feeding the target image into the perceptual model;
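Assuming the three feature maps have already been extracted from the VGG19 layers, the weighted multi-scale perceptual loss above reduces to the following sketch (the feature extraction itself is omitted):

```python
import numpy as np

def perceptual_loss(gen_feats, target_feats, weight=1e-6):
    """loss_vgg: 1e-6 times the sum of per-layer feature-map MSEs
    (conv2_2, conv3_4, conv4_4 in the patent's setting)."""
    per_layer = [np.mean((fg - ft) ** 2)
                 for fg, ft in zip(gen_feats, target_feats)]
    return weight * float(sum(per_layer))
```

The 10^(-6) weight keeps the feature-domain term on a comparable scale to the pixel-domain MSE when the three losses are summed.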
loss_GAN is the adversarial loss:
loss_GAN = 10^(-4) × cross_entropy(I_D_GH, True)
cross_entropy(I_D_GH, True) = log(D(I_D_GH))
where D(·) is the discriminator.
During full training, the discriminator loss function is defined as:
loss_D = loss_1 + loss_2
loss_1 = sigmoid_cross_entropy(I_D_GH, False)
loss_2 = sigmoid_cross_entropy(I_D_H, True)
where loss_D is the discriminator loss; the loss and the corresponding gradients are computed and backpropagated to update the parameters of the discriminator.
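A sketch of the two sigmoid cross-entropy terms on scalar discriminator logits (an assumed standard formulation; the patent does not spell out the implementation):

```python
import numpy as np

def sigmoid_cross_entropy(logit, is_real):
    """Binary cross-entropy between sigmoid(logit) and label 1 (real) or 0 (fake)."""
    p = 1.0 / (1.0 + np.exp(-logit))
    return -np.log(p) if is_real else -np.log(1.0 - p)

def discriminator_loss(logit_fake, logit_real):
    """loss_D = loss_1 + loss_2: penalize calling fakes real and reals fake."""
    return (sigmoid_cross_entropy(logit_fake, False)
            + sigmoid_cross_entropy(logit_real, True))
```

An undecided discriminator (both logits at 0) pays 2·ln 2, while confident correct decisions cost almost nothing, which is what drives the adversarial game.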
S5. With the above settings, train for 200 epochs until convergence and save the model. The trained generator is used for the subsequent denoising processing; the denoised image I_D_GH serves as the input to the following image super-resolution step. Next, the satellite image super-resolution model is defined.
The super-resolution model likewise consists of three parts: a generator, a perceptual model, and a discriminator. The perceptual model and the discriminator use the same structures as in the denoising model above.
Define the generator of the image super-resolution model: the main body of the generator again uses a residual network, built by stacking multiple residual modules; image upscaling is then performed with sub-pixel convolution (subpixel) layers. The specific structure is shown in Fig. 5. The super-resolution generator is similar to the generator of the denoising model defined above, stacking multiple residual modules whose convolutional layers all use 3*3 kernels with 64 kernels each; the subsequent sub-pixel convolution layers and the convolutional layers connected to them use 256 kernels, again of size 3*3. In the x2 super-resolution model, the first sub-pixel convolution layer uses scale = 1 and the second uses scale = 2.
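The sub-pixel (pixel shuffle) layer rearranges channel groups into spatial positions; a NumPy sketch of just that rearrangement (the learned convolutions preceding it are omitted):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Sub-pixel convolution rearrangement: (C*r^2, H, W) -> (C, H*r, W*r).

    Each group of r*r channels is interleaved into an r-by-r spatial block,
    which is how the generator upscales by a factor r after its 256-kernel
    convolution layers.
    """
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)  # -> (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)
```

With scale = 1 the layer is an identity rearrangement; with scale = 2 it produces the x2 output described above.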
The generator of the super-resolution model is trained on aerial data: the input consists of image pairs formed by a low-resolution level-16 aerial image I_SR_L and its corresponding high-resolution level-17 aerial image I_SR_H, and the generator output is I_SR_GH.
The generator's loss function is defined as:
loss_MSE_SR = MSE(I_SR_GH, I_SR_H)
S6. Repeat steps S3–S5 to complete the training of the super-resolution network and the denoising model, then generate the super-resolution image I_SR_GH. To further incorporate the sharp priors of aerial images into the satellite images, a Gaussian mixture model (GMM) is used here to build an external prior dictionary that guides internal image reconstruction, combined with image sharpening, to further improve the quality of the generated super-resolution satellite images;
S7. Build the GMM external prior dictionary to guide the internal reconstruction of clearer satellite images (a technique originally used for image denoising). Because no image pairs can be formed between aerial and satellite images, the previously proposed generative adversarial network cannot be trained on them directly; building a GMM external prior dictionary indirectly introduces the rich detail of clear aerial images into the generated super-resolution satellite images. To build the external prior dictionary, the clear level-17 aerial images are split into 15*15 patches, which are then preliminarily grouped by Euclidean distance, as shown in Fig. 6;
S701. Fit a GMM to the grouped image patches, apply SVD decomposition to the covariance matrices of the resulting model, and build a dictionary that serves as the external prior guiding the subsequent satellite image reconstruction;
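For one Gaussian component, the dictionary atoms are the eigenvectors of its covariance matrix; a sketch of that SVD step on an empirical covariance (the GMM fit itself is assumed already done, and the cluster below is synthetic):

```python
import numpy as np

def component_dictionary(patches):
    """SVD of the empirical covariance of one patch cluster.

    Returns an orthonormal basis U (the sub-dictionary for this Gaussian
    component) and the singular values (energy captured per atom).
    """
    patches = np.asarray(patches, dtype=np.float64)
    cov = np.cov(patches, rowvar=False)  # (dim, dim) covariance
    u, s, _ = np.linalg.svd(cov)         # cov is symmetric PSD
    return u, s
```

Because the covariance is symmetric positive semi-definite, U is orthonormal and the singular values are returned in descending order, so truncating U gives a compact sub-dictionary per component.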
S702. Take the I_SR_GH output by the super-resolution model as the internal image input, split it into 15*15 patches, and cluster the patches under the guidance of the GMM built for the external prior dictionary;
S703. At the same time, use the dictionary formed from the external prior to guide the internal image patches in building an internal dictionary;
S704. Sparsely encode the patches over the internal dictionary and, combined with the original internal patch groups, reconstruct new internal patch groups.
S8. Reconstruct the satellite image from the reconstructed internal patch groups and apply image sharpening so that the edges become clearer, yielding the final result image.
By combining a multi-scale perceptual loss with a generative adversarial network, the present invention achieves satellite image super-resolution under certain practical constraints. Satellite images are used to train a network for denoising, aerial images are used to train a network for image super-resolution, and a Gaussian mixture model extracts feature priors from clear aerial images to further reconstruct the super-resolved result. A final Gaussian filtering pass sharpens the edges in the image, ultimately producing a clearer satellite image.
The present invention addresses image super-resolution and image quality improvement under constrained conditions. The multi-scale perceptual loss imposes multi-scale constraints on the feature domain of the generated image, producing better results.
To make the purpose, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below in conjunction with the accompanying drawings. The described embodiments are only a part of the embodiments of the present invention, not all of them. The components of the embodiments generally described and illustrated in the drawings herein may be arranged and designed in a variety of different configurations. Accordingly, the following detailed description of the embodiments provided in the drawings is not intended to limit the scope of the claimed invention but merely represents selected embodiments. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
A. Experimental Conditions
1. Datasets
The experiments use the satellite image data and aerial image data provided by a satellite image super-resolution project. The datasets are non-public and are only partially shown here. The satellite image data comprises:
Data type 1: satellite images extracted at level 16 (containing obvious granular noise), of low clarity;
Data type 2: satellite images extracted at level 18 (granular noise is not obvious), of slightly higher clarity. Downsampling the level-18 imagery to level 16 yields satellite images clearer than those extracted directly at level 16. However, because level-18 satellite imagery is costly, it is difficult to obtain in quantity, and level-16 imagery is what is commonly available. This project therefore trains the image super-resolution model on a small amount of level-18 imagery and then, through super-resolution (assisted by clear aerial images), obtains from low-clarity level-16 inputs images that match or even exceed level-18 quality, which is of significant research value. Only a small amount of this data type is available, but it overlaps with the coverage area of data type 1, so a small number of image pairs can be formed for model training.
Data type 3: clear aerial data, sharper than satellite imagery due to the lower capture altitude and different capture method. At the same zoom level as the satellite imagery, the aerial images are much clearer and contain rich texture information. However, aerial coverage and sources are limited: no aerial images of the same locations and similar time periods as data types 1 and 2 could be obtained, so no satellite/aerial image pairs exist and the aerial data cannot be used directly for training, as shown in Table 1.
Table 1. Datasets and their distribution
2. Experimental Requirements
The experiments are divided into three parts: denoising model training, image super-resolution model training, and image post-processing experiments.
Denoising model training: image pairs are formed from satellite images extracted at level 16 (noisy) and satellite images extracted at level 18 and downsampled to level 16 (noise-free but of moderate clarity). These pairs are used as training data for the generative adversarial network proposed in this scheme. After training, feeding a noisy satellite image into the generator model yields a noise-free satellite image. To verify the robustness of the model, testing uses satellite images of urban areas different from those used in training, likewise noisy images extracted at level 16.
Image super-resolution model training: training uses level-17 aerial images and the level-16 aerial images obtained by downsampling them, forming image pairs. After training the proposed generative adversarial network, feeding a noise-free level-16 satellite image into the generator model produces the corresponding level-17 high-resolution image, whose visual quality is then compared.
Image post-processing experiment: satellite images that have undergone denoising and super-resolution are post-processed to further improve image quality. First, the GMM external prior dictionary trained on clear level-17 aerial images serves as a guide; the level-17 satellite image obtained by super-resolution is input, the internal dictionary is built under the guidance of the external prior, and the image is reconstructed, incorporating the sharp priors of the aerial imagery. On this basis, Gaussian-filter sharpening is applied to obtain the final post-processed image, whose clarity and visual quality are compared with those of the original image.
3. Parameter Settings
The same settings are used for training the denoising model and the image super-resolution model. First the generator is trained in its initialization phase with an initial learning rate of 0.0001 for 100 epochs (one pass through all training data is one epoch). For full network training, the initial learning rate is again set to 0.0001 and the training period to 200 epochs; the learning rate decays once, to 0.00001, when training reaches the halfway point.
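The step schedule described above can be written as a small helper (a sketch of the stated schedule only):

```python
def learning_rate(epoch, total_epochs=200, initial=1e-4, decayed=1e-5):
    """0.0001 for the first half of training, decayed once to 0.00001 afterwards."""
    return initial if epoch < total_epochs // 2 else decayed
```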
In image post-processing, building the GMM external prior dictionary uses the following parameters: patch stride 3 and patch size 15*15; during clustering, the 10 image patches closest in Euclidean distance form one group; the GMM contains 32 Gaussian components, i.e. 32 classes are fitted. Image sharpening uses Gaussian filtering with a filter radius of 1.5 and a sharpening strength of 2.
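Gaussian-filter sharpening of this kind is commonly implemented as an unsharp mask; a NumPy sketch under that assumption (the patent does not spell out the exact formula), using the stated sigma 1.5 and strength 2:

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius):
    """Normalized 1-D Gaussian kernel of width 2*radius + 1."""
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def unsharp_mask(img, sigma=1.5, amount=2.0):
    """Sharpen: img + amount * (img - blur(img)), with a separable Gaussian blur."""
    k = gaussian_kernel_1d(sigma, radius=int(3 * sigma))
    blur = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    blur = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blur)
    return img + amount * (img - blur)
```

Flat regions are left unchanged (the blurred and original values coincide), while intensity steps are overshot on both sides, which is what makes edges look crisper.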
B. Evaluation Criteria
Because the actual test inputs are satellite images extracted at level 16 (containing noise), no corresponding clear level-17 satellite images exist, so common metrics such as PSNR and SSIM cannot be applied directly. The effectiveness of the scheme is instead demonstrated by visual comparison of selected test results.
C. Comparative Experiments
Referring to Fig. 7 and Fig. 8, the test results listed above demonstrate the effectiveness of the proposed scheme under practical conditions. The restrictive conditions in the background of this scheme prevent general image super-resolution algorithms from being trained directly; a pipeline of image processing algorithms is needed to achieve the desired effect. Starting from the original noisy level-16 satellite image, the final generated image not only removes the noise but is also super-resolved to level 17 (i.e. both width and height are doubled). With the help of clear aerial imagery (from different locations), the clarity of the generated level-17 satellite images is further improved.
The above is merely illustrative of the technical concept of the present invention and does not limit its protection scope; any modification made on the basis of the technical solutions in accordance with the technical concept proposed by the present invention falls within the protection scope of the claims of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810777731.5A CN109035142B (en) | 2018-07-16 | 2018-07-16 | A satellite image super-resolution method based on adversarial network combined with aerial image priors |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109035142A true CN109035142A (en) | 2018-12-18 |
CN109035142B CN109035142B (en) | 2020-06-19 |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111351502A (en) * | 2018-12-21 | 2020-06-30 | 赫尔环球有限公司 | Method, apparatus and computer program product for generating an overhead view of an environment from a perspective view |
US11720992B2 (en) | 2018-12-21 | 2023-08-08 | Here Global B.V. | Method, apparatus, and computer program product for generating an overhead view of an environment from a perspective image |
CN109801221A (en) * | 2019-01-18 | 2019-05-24 | 腾讯科技(深圳)有限公司 | Training method for generative adversarial network, image processing method, apparatus, and storage medium |
CN109886875A (en) * | 2019-01-31 | 2019-06-14 | 深圳市商汤科技有限公司 | Image super-resolution reconstruction method and device, and storage medium |
CN109886875B (en) * | 2019-01-31 | 2023-03-31 | 深圳市商汤科技有限公司 | Image super-resolution reconstruction method and device and storage medium |
CN110009568A (en) * | 2019-04-10 | 2019-07-12 | 大连民族大学 | A generator construction method for super-resolution reconstruction of Manchu-script images |
CN110070505A (en) * | 2019-04-12 | 2019-07-30 | 北京迈格威科技有限公司 | Method and apparatus for enhancing noise robustness of image classification models |
CN110119780A (en) * | 2019-05-10 | 2019-08-13 | 西北工业大学 | Hyperspectral image super-resolution reconstruction method based on generative adversarial network |
CN110119780B (en) * | 2019-05-10 | 2020-11-27 | 西北工业大学 | A Generative Adversarial Network-Based Super-resolution Reconstruction Method for Hyperspectral Images |
CN110120024A (en) * | 2019-05-20 | 2019-08-13 | 百度在线网络技术(北京)有限公司 | Image processing method, apparatus, device, and storage medium |
US11645735B2 (en) | 2019-05-20 | 2023-05-09 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for processing image, device and computer readable storage medium |
CN110120024B (en) * | 2019-05-20 | 2021-08-17 | 百度在线网络技术(北京)有限公司 | Image processing method, device, equipment and storage medium |
WO2021052261A1 (en) * | 2019-09-17 | 2021-03-25 | 中国科学院空天信息创新研究院 | Image super-resolution reconstruction method and apparatus for sharpening of label data |
US11257185B2 (en) * | 2019-09-17 | 2022-02-22 | Maxar International Sweden Ab | Resolution enhancement of aerial images or satellite images |
CN110807762A (en) * | 2019-09-19 | 2020-02-18 | 温州大学 | An intelligent segmentation method of retinal blood vessel images based on GAN |
CN110807762B (en) * | 2019-09-19 | 2021-07-06 | 温州大学 | An intelligent segmentation method of retinal blood vessel images based on GAN |
CN111209854A (en) * | 2020-01-06 | 2020-05-29 | 苏州科达科技股份有限公司 | Method and device for recognizing unbelted driver and passenger and storage medium |
CN112270654A (en) * | 2020-11-02 | 2021-01-26 | 浙江理工大学 | Image denoising method based on multi-channel GAN |
CN112686801B (en) * | 2021-01-05 | 2023-06-20 | 金陵科技学院 | Water Quality Monitoring Method Based on Aerial Images and Series Echo State Network |
CN113535996A (en) * | 2021-05-27 | 2021-10-22 | 中国人民解放军火箭军工程大学 | A method and device for preparing road image data set based on aerial imagery |
CN113535996B (en) * | 2021-05-27 | 2023-08-04 | 中国人民解放军火箭军工程大学 | A road image data set preparation method and device based on aerial images |
CN113361508A (en) * | 2021-08-11 | 2021-09-07 | 四川省人工智能研究院(宜宾) | Cross-view-angle geographic positioning method based on unmanned aerial vehicle-satellite |
CN116342392A (en) * | 2023-04-03 | 2023-06-27 | 兰州大学 | Single remote sensing image super-resolution method based on deep learning |
CN116342392B (en) * | 2023-04-03 | 2025-05-30 | 兰州大学 | Single remote sensing image super-resolution method based on deep learning |
CN118570457A (en) * | 2024-08-05 | 2024-08-30 | 山东航天电子技术研究所 | An image super-resolution method driven by remote sensing target recognition task |
Also Published As
Publication number | Publication date |
---|---|
CN109035142B (en) | 2020-06-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109035142B (en) | A satellite image super-resolution method based on adversarial network combined with aerial image priors | |
CN109377459B (en) | Super-resolution deblurring method based on generative adversarial network | |
CN110889895B (en) | Face video super-resolution reconstruction method fusing single-frame reconstruction network | |
CN112837224A (en) | A super-resolution image reconstruction method based on convolutional neural network | |
CN109522855B (en) | Low-resolution pedestrian detection method, system and storage medium combining ResNet and SENet | |
CN108171762A (en) | Fast reconstruction system and method for similar images based on deep-learning compressed sensing
CN113284051B (en) | Face super-resolution method based on frequency-decomposition multi-attention mechanism
CN109190684A (en) | SAR image sample generation method based on sketch and structural generative adversarial network
CN116739899B (en) | Image super-resolution reconstruction method based on SAUGAN network | |
CN110533683A (en) | Image group analysis method fusing traditional features and deep features
Rivadeneira et al. | Thermal image super-resolution challenge-pbvs 2021 | |
CN117078516B (en) | Mine image super-resolution reconstruction method based on residual hybrid attention | |
CN112967327A (en) | Monocular depth method based on combined self-attention mechanism | |
CN117114984A (en) | Remote sensing image super-resolution reconstruction method based on generative adversarial network | |
CN107194893A (en) | Depth image super-resolution method based on convolutional neural networks
CN113096015A (en) | Image super-resolution reconstruction method based on progressive sensing and ultra-lightweight network | |
CN117689579A (en) | SAR-assisted thick cloud removal method for remote sensing images with progressive double decoupling
Wang et al. | Image generation and recognition technology based on attention residual GAN | |
CN116596782A (en) | Image restoration method and system | |
Lu et al. | GradDT: Gradient-guided despeckling transformer for industrial imaging sensors | |
Gupta et al. | MCNeRF: Monte Carlo rendering and denoising for real-time NeRFs | |
CN112200752A (en) | Multi-frame image deblurring system and method based on ER network | |
Ye et al. | MRA-IDN: A lightweight super-resolution framework of remote sensing images based on multiscale residual attention fusion mechanism | |
CN113191947B (en) | Image super-resolution method and system | |
Mu et al. | Underwater image enhancement using a mixed generative adversarial network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||