
CN109712083B - Single image defogging method based on convolutional neural network - Google Patents


Info

Publication number
CN109712083B
Authority
CN
China
Prior art keywords
image
layer
neural network
output
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811492894.5A
Other languages
Chinese (zh)
Other versions
CN109712083A (en)
Inventor
张登银
钱雯
朱虹
陈灿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications
Priority to CN201811492894.5A
Publication of CN109712083A
Application granted
Publication of CN109712083B

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention proposes a single image dehazing method based on a convolutional neural network. The method first constructs a training set as the input of a deep convolutional neural network model comprising a shallow neural network and a deep neural network. The shallow network extracts and fuses features of the RGB color space of the hazy image and outputs its scene depth map; building on the shallow network, the deep network applies multi-scale mapping, pooling, convolution, and related operations to the scene depth map and outputs the transmittance map of the hazy image. Finally, the haze-free image is recovered from the transmittance, the atmospheric light value, and the atmospheric scattering model. By extracting and fusing RGB color-space features of the hazy image, the invention builds a shallow convolutional neural network and connects it with a multi-scale deep neural network to form an end-to-end model that dehazes images in a variety of scenes and, in particular, avoids color distortion in dark environments.

Description

A single image dehazing method based on a convolutional neural network

Technical Field

The invention relates to single image dehazing methods, and in particular to a single image dehazing method based on a convolutional neural network.

Background

Owing to waste incineration, construction dust, vehicle exhaust, and similar causes, many Chinese cities are shrouded in smog, which has drawn sustained public attention as an environmental problem in recent years. Images captured in hazy weather lose contrast and color saturation and therefore become unclear, which limits their usefulness. For example, blurred traffic surveillance video captured in fog introduces errors into image recognition and processing and hinders accurate recording of traffic information. Improving image quality in hazy weather and reducing the impact of haze on outdoor imaging is therefore an urgent theoretical and practical need.

With the development of computer technology, video and image dehazing algorithms have received extensive attention and are widely applied in civilian and military fields such as remote sensing, target detection, and traffic surveillance.

Current image dehazing algorithms fall into three main types. The first type is image-enhancement dehazing. It ignores the cause of image degradation and recasts dehazing as contrast enhancement; the enhanced image has higher contrast and better matches human aesthetic preferences, but information is lost in processing and distortion can appear. The second type is image-restoration dehazing. It analyzes the problem from the perspective of image degradation, builds a model of hazy imaging, derives the degradation process, and restores the dehazed image accordingly; the processed image is clearer and more natural, with less loss of detail. However, the dehazing quality depends on the choice of model parameters, and inaccurate parameters directly degrade the restored image. In recent years, with the continuous development of deep learning, it has been applied with good results to more and more image-processing tasks, such as image classification, object recognition, and face recognition, so deep-learning-based dehazing can be regarded as a third type of algorithm. In existing deep-learning dehazing methods, hazy images are mostly synthesized from haze-free images via the atmospheric scattering model with randomly set parameters; the synthesized hazy images are fed into a learning network that outputs the image transmittance, and the haze-free image is finally computed by inverting the model. The convolutional neural network (CNN) is a deep learning model that uses weight sharing and local receptive fields to reduce the number of parameters and connections, lowering network complexity while remaining highly adaptable. CNNs are therefore widely used in image-processing research, and their application to image recognition is a current research focus.

Although restoration-based dehazing performs relatively well, the simplified physical model assumes single scattering in a homogeneous atmosphere, so it does not generalize, for example, to non-uniform haze or sky regions, and it tends to cause color distortion in dark environments.

Summary of the Invention

Purpose of the invention: To solve the above technical problems, the present invention proposes a new image dehazing method based on a convolutional neural network and multi-channel color information fusion, so as to achieve the goal of image dehazing.

Technical scheme: To achieve the above technical effect, the present invention adopts the following technical scheme:

A single image dehazing method based on a convolutional neural network, comprising steps (1) to (9) performed in sequence:

(1) Construct training and test samples: obtain a number of haze-free images, add haze of different densities to them to obtain hazy images, convert the hazy and haze-free images into image blocks in HDF5 format, and divide the image blocks of the hazy images and of the haze-free images into two parts according to a preset ratio, one part serving as training samples and the other as test samples;
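The patent fixes only the storage format of step (1) (paired patches in HDF5, split by a preset ratio), not a concrete layout or API. A minimal sketch with h5py, assuming 64×64 RGB patches and an illustrative 80/20 split:

```python
# Sketch of step (1): store paired hazy/clear patches in HDF5.
# Dataset names, patch shape, and the split ratio are assumptions.
import h5py
import numpy as np

def save_patch_pairs(hazy, clear, path, train_ratio=0.8):
    """hazy, clear: float32 arrays of shape (N, 64, 64, 3), pairs aligned by index."""
    split = int(len(hazy) * train_ratio)          # the "preset ratio"
    with h5py.File(path, "w") as f:
        f.create_dataset("train/hazy",  data=hazy[:split])
        f.create_dataset("train/clear", data=clear[:split])
        f.create_dataset("test/hazy",   data=hazy[split:])
        f.create_dataset("test/clear",  data=clear[split:])

# Example with random stand-in patches:
hazy  = np.random.rand(100, 64, 64, 3).astype(np.float32)
clear = np.random.rand(100, 64, 64, 3).astype(np.float32)
save_patch_pairs(hazy, clear, "dehaze_patches.h5")
```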

(2) Construct a multi-scale deep convolutional network model comprising a shallow convolutional neural network and a deep convolutional neural network;

The shallow convolutional neural network comprises an input layer, a convolutional layer, a fully connected layer, a pooling layer, and an output layer cascaded in sequence. The input layer maps the input image block i into the R, G, B color space, and the convolutional layer convolves the values of the R, G, and B color channels of the input image block with Gaussian filters; the result of the convolution is

$$F_1^c = W_1 * I^c + B_1, \quad c \in \{R, G, B\}$$

where $I^c$ denotes the pixel-value matrix of one color channel of the input image block in the R, G, B color space, and $W_1$ and $B_1$ denote the weight coefficient matrix and bias matrix of the corresponding convolutional network, respectively;

The fully connected layer merges the results of the above convolutional layer; the merged result is

$$F_1 = F_1^R \cap F_1^G \cap F_1^B$$

where ∩ denotes the connection operation;

After the pooling layer downsamples the result of the fully connected layer, the high-dimensional feature vector $F_2$ of the input image block is obtained:

$$F_2(x) = \max_{y \in \Omega(x)} F_1(y)$$

where $F_2(x)$ denotes the feature value at pixel x of the input image block, and Ω(x) is a region of the input image block centered at pixel x. The pooling result $F_2$ is passed through the output layer to the deep convolutional neural network; $F_2$ is the scene depth matrix of the input image block;

The deep convolutional neural network comprises an input layer, a multi-scale mapping unit, a multi-scale connection layer, a pooling layer, a convolutional layer, a BReLU activation layer, and an output layer;

The input layer receives the high-dimensional feature vector $F_2$ output by the shallow convolutional neural network, and the multi-scale mapping unit maps $F_2$ to $F_3^{(s)}$:

$$F_3^{(s)} = W_3^{(s)} * F_2 + B_3^{(s)}, \quad s = 1, 2, 3$$

where $W_3$ denotes the weight coefficient matrices corresponding to the three groups of different-scale convolutional networks in the multi-scale mapping unit, $B_3$ the bias matrix, and * the convolution operation;

The multi-scale connection layer merges the outputs of the multi-scale mapping unit to obtain $F_3$:

$$F_3 = F_3^{(1)} \cap F_3^{(2)} \cap F_3^{(3)}$$

where $F_3^{(1)}$, $F_3^{(2)}$, $F_3^{(3)}$ are the outputs of the three groups of different-scale convolutional networks;

The pooling layer then downsamples the output of the multi-scale connection layer; the output of the pooling layer is $F_4$:

$$F_4(x) = \max_{y \in \Omega(x)} F_3(y)$$

The convolutional layer maps the pooling result $F_4$ to $F_5$:

$$F_5 = W_5 * F_4 + B_5$$

where $W_5$ denotes the weight coefficient matrix of this convolutional layer, $B_5$ the bias matrix, and * the convolution operation;

The BReLU activation layer applies the bilateral rectified linear unit (BReLU) activation function to perform nonlinear regression on the convolutional layer output $F_5$, yielding $F_6$:

$$F_6(x) = \min(a_{max}, F_5(x))$$

where $a_{max}$ is the upper amplitude of the BReLU activation function;

Finally, let $t = F_6$; the output layer outputs t, the transmittance matrix of the input image block;
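To make the two-stage structure of step (2) concrete, the following PyTorch sketch mirrors the layer sequence above. The kernel sizes (3/5/7 in the multi-scale unit), channel widths, stride-1 pooling, and the use of channel concatenation for the connection operation ∩ are assumptions; the patent's "Gaussian filters" are realized here as Gaussian-initialized convolution weights per step (4), with fixed Gaussian kernels being an alternative reading.

```python
# Minimal sketch of the shallow + deep network; layer sizes are assumptions.
import torch
import torch.nn as nn

class ShallowNet(nn.Module):
    def __init__(self, filters_per_channel=32):
        super().__init__()
        # One filter bank per RGB channel: F1^c = W1 * I^c + B1
        self.banks = nn.ModuleList(
            [nn.Conv2d(1, filters_per_channel, 5, padding=2) for _ in range(3)]
        )
        for conv in self.banks:                    # Gaussian init, std 0.001 (step (4))
            nn.init.normal_(conv.weight, mean=0.0, std=0.001)
            nn.init.zeros_(conv.bias)
        self.pool = nn.MaxPool2d(3, stride=1, padding=1)  # 3x3 max pooling

    def forward(self, img):                        # img: (N, 3, H, W)
        feats = [bank(img[:, c:c + 1]) for c, bank in enumerate(self.banks)]
        f1 = torch.cat(feats, dim=1)               # connection operation (∩)
        return self.pool(f1)                       # F2: scene depth features

class DeepNet(nn.Module):
    def __init__(self, in_ch=96, a_max=1.0):
        super().__init__()
        self.a_max = a_max
        # Multi-scale mapping unit: three conv groups with different kernel sizes
        self.scales = nn.ModuleList(
            [nn.Conv2d(in_ch, 16, k, padding=k // 2) for k in (3, 5, 7)]
        )
        self.pool = nn.MaxPool2d(3, stride=1, padding=1)
        self.conv = nn.Conv2d(48, 1, 5, padding=2)  # F5 = W5 * F4 + B5

    def forward(self, f2):
        f3 = torch.cat([s(f2) for s in self.scales], dim=1)  # multi-scale connection
        f4 = self.pool(f3)
        f5 = self.conv(f4)
        return torch.clamp(f5, max=self.a_max)      # BReLU: F6 = min(a_max, F5)

class DehazeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.shallow, self.deep = ShallowNet(), DeepNet()

    def forward(self, img):
        return self.deep(self.shallow(img))         # t: transmittance map

t = DehazeNet()(torch.rand(1, 3, 64, 64))           # -> (1, 1, 64, 64)
```

Stride-1 pooling keeps the spatial size, so the output t has the same resolution as the input patch.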

(3) Construct the loss function:

For a single training sample i, the loss function is:

$$L = \frac{1}{2}\left\| t_i - t_i^* \right\|^2$$

When there are multiple training samples, the loss function is:

$$L = \frac{1}{n}\sum_{i=1}^{n} \frac{1}{2}\left\| t_i - t_i^* \right\|^2 + \frac{\lambda}{2}\sum_{j,i}\left\| W_{ji} \right\|^2$$

where $t_i$ denotes the transmission map of training sample i, n is the number of training samples, λ denotes the decay parameter, $W_{ji}$ denotes the weight coefficient matrix $W_j$ for training sample i, and $t_i^*$ denotes the actual transmittance matrix of training sample i;
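A sketch of this two-term loss under the reconstruction above, MSE data term plus weight decay over the weight matrices only; the exact normalization is an assumption:

```python
# Loss of step (3): mean squared error on the transmittance plus a
# weight-decay term with coefficient lambda_; biases are excluded,
# matching the later statement that the regularizer is independent of B.
import torch

def dehaze_loss(t_pred, t_true, model, lambda_=1e-4):
    mse = 0.5 * ((t_pred - t_true) ** 2).mean()
    decay = sum((p ** 2).sum() for name, p in model.named_parameters()
                if "weight" in name)
    return mse + 0.5 * lambda_ * decay
```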

(4) For each $W_{ji}$, randomly initialize its components from a Gaussian distribution with mean 0 and standard deviation 0.001; initialize $B_{ji}$ to 0, where $B_{ji}$ denotes the bias matrix $B_j$ for training sample i; initialize $\Delta W_{ji} = 0$ and $\Delta B_{ji} = 0$;

(5) For each sample i, use the backpropagation algorithm to compute the partial derivatives $\partial L/\partial W_{ji}$ and $\partial L/\partial B_{ji}$, and accumulate the changes in $W_{ji}$ and $B_{ji}$:

$$\Delta W_{ji} := \Delta W_{ji} + \frac{\partial L}{\partial W_{ji}}$$

$$\Delta B_{ji} := \Delta B_{ji} + \frac{\partial L}{\partial B_{ji}}$$

(6) Update:

$$W_{ji} := W_{ji} - \alpha\left[\frac{1}{n}\Delta W_{ji} + \lambda W_{ji}\right]$$

$$B_{ji} := B_{ji} - \alpha\left[\frac{1}{n}\Delta B_{ji}\right]$$

where α is the learning rate;

(7) Substitute the updated $W_{ji}$ and $B_{ji}$ into the loss function and repeat steps (5) to (7) until the value of the loss function is minimized; the multi-scale deep convolutional network model is then trained, and the method proceeds to step (8);

(8) Input a new hazy image into the trained multi-scale deep convolutional network model; the output is taken as the initial transmittance of the new hazy image;

(9) Estimate the atmospheric light intensity A at the time the new hazy image was captured, and recover the corresponding haze-free image according to the atmospheric scattering model.
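Step (9) inverts the atmospheric scattering model restated in the next paragraph, I = J·t + A(1 − t), for J. A numpy sketch; the lower bound t0 on the transmittance is an assumption added to keep the division stable:

```python
# Sketch of step (9): solve I = J*t + A*(1 - t) for J = (I - A)/t + A.
# The floor t0 (0.1 here) is a common stabilizer, not specified by the patent.
import numpy as np

def recover_scene(I, t, A, t0=0.1):
    """I: (H, W, 3) hazy image in [0, 1]; t: (H, W) transmittance; A: scalar."""
    t = np.clip(t, t0, 1.0)[..., np.newaxis]
    return np.clip((I - A) / t + A, 0.0, 1.0)
```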

Further, $t_i^*$ is calculated from the atmospheric scattering model:

$$I = J \times t^* + A(1 - t^*)$$

where I denotes the light intensity matrix of training sample i, J denotes the light intensity matrix of the image block in the original haze-free image corresponding to training sample i, and A is the atmospheric light intensity at the time the hazy image corresponding to training sample i was captured.

Further, the method for estimating the atmospheric light intensity A at the time the hazy image was captured is as follows:

Estimate the dark channel map of the hazy image:

$$I^{dark}(x) = \min_{y \in \Omega(x)}\left(\min_{c} I^c(y)\right)$$

where $I^{dark}$ denotes the dark channel estimate of the pixel matrix I of the hazy image, $I^c$ denotes the value of one color channel of I, and y denotes a pixel;

Sort the pixel values of the obtained dark channel map in descending order and take the top one-thousandth of the pixels as candidate points; map the candidate points to the corresponding positions in the hazy image, compute the brightness values at those positions, and take the maximum computed brightness value as the atmospheric light estimate A.
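A numpy sketch of this estimate, assuming a 15×15 minimum-filter window for Ω(x) and per-pixel brightness taken as the mean over RGB; neither choice is fixed by the text:

```python
# Dark-channel-based estimate of atmospheric light A, following the
# description above; window size and brightness measure are assumptions.
import numpy as np
from scipy.ndimage import minimum_filter

def estimate_A(I, window=15):
    """I: (H, W, 3) hazy image in [0, 1]; returns a scalar estimate of A."""
    dark = minimum_filter(I.min(axis=2), size=window)   # I_dark
    n_top = max(1, dark.size // 1000)                   # top one-thousandth
    candidates = np.argsort(dark.ravel())[-n_top:]      # brightest dark-channel pixels
    brightness = I.reshape(-1, 3).mean(axis=1)          # brightness in the hazy image
    return float(brightness[candidates].max())
```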

Beneficial effects: Compared with the prior art, the present invention has the following advantages:

1. Compared with physical-model-based image restoration algorithms, in particular the classical dark channel prior, the invention exploits the diversity of the sample set and the generality of the network structure, and achieves good dehazing on non-uniform media and on flat regions of hazy images;

2. By fusing the information of the three RGB color-space channels of the hazy image, the invention designs an end-to-end deep convolutional neural network that avoids the color distortion caused by dehazing in dark environments;

3. By combining the convolutional neural network with image prior information, the invention estimates the medium transmittance of the hazy image more accurately and achieves a better dehazing effect; the dehazed image is more realistic and natural.

Description of the Drawings

Fig. 1 is a flowchart of the single image dehazing method based on a convolutional neural network according to the invention;

Fig. 2 is a structural diagram of the shallow convolutional neural network of the invention;

Fig. 3 is a structural diagram of the deep convolutional neural network of the invention.

Detailed Description

The present invention is further described below with reference to the accompanying drawings and embodiments.

Fig. 1 shows a flowchart of the single image dehazing method based on a convolutional neural network according to the invention. The method comprises:

Step 1: Obtain the PASCAL VOC dataset together with haze-free images downloaded from the Internet as the haze-free image set of the training samples;

Step 2: Use Perlin noise to add haze of different densities to the haze-free image set of step 1, yielding a hazy image set. Crop the images of the hazy and haze-free sets into 64×64 image blocks and store them in the HDF5 data format; then split the hazy and haze-free image blocks proportionally into two parts, one for training and one for testing, to facilitate training. To adapt to haze densities under different weather conditions and learn the features of images at different haze levels, haze with densities of 10, 20, 30, 40, 50, 60, 70, 80, 90, and 100 is synthesized onto the haze-free images to obtain the hazy image set; 2506 pairs of hazy and haze-free images are selected as training samples and the remaining 502 pairs as test samples;
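The embodiment synthesizes haze with Perlin noise. The sketch below keeps the same recipe, a smooth noise field modulating the transmittance in I = J·t + A(1 − t), but substitutes a dependency-free bilinearly upsampled random grid for a real Perlin implementation; all constants are illustrative:

```python
# Fog synthesis sketch: smooth noise drives a spatially varying transmittance.
import numpy as np

def smooth_noise(h, w, grid=8, seed=0):
    """Bilinear upsampling of a coarse random grid; a stand-in for Perlin noise."""
    rng = np.random.default_rng(seed)
    coarse = rng.random((grid, grid))
    ys = np.linspace(0, grid - 1, h)
    xs = np.linspace(0, grid - 1, w)
    y0, x0 = ys.astype(int), xs.astype(int)
    y1, x1 = np.minimum(y0 + 1, grid - 1), np.minimum(x0 + 1, grid - 1)
    fy, fx = (ys - y0)[:, None], (xs - x0)[None, :]
    top = coarse[y0][:, x0] * (1 - fx) + coarse[y0][:, x1] * fx
    bot = coarse[y1][:, x0] * (1 - fx) + coarse[y1][:, x1] * fx
    return top * (1 - fy) + bot * fy

def add_fog(J, density, A=1.0, seed=0):
    """J: clear image (H, W, 3) in [0, 1]; density in [0, 1] scales fog strength."""
    h, w, _ = J.shape
    t = 1.0 - density * smooth_noise(h, w, seed=seed)   # transmittance map
    return J * t[..., None] + A * (1.0 - t[..., None])  # I = J*t + A*(1 - t)
```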

Step 3: Using the HDF5-format training samples of step 2 as input, design an end-to-end multi-scale deep convolutional network model comprising a shallow convolutional neural network and a deep convolutional neural network;

The structure of the shallow convolutional neural network is shown in Fig. 2; it comprises one input layer, one convolutional layer, one fully connected layer, one pooling layer, and one output layer.

The convolutional layer contains 32 Gaussian filters; the three channels of the RGB color space of the input image block are each convolved with the 32 parallel Gaussian filters, so that every input image block is represented by a high-dimensional feature vector:

$$F_1^c = W_1 * I^c + B_1, \quad c \in \{R, G, B\}$$

where $I^c$ denotes the pixel-value matrix of one color channel of the input image block in the R, G, B color space, and $W_1$ and $B_1$ denote the weight coefficient matrix and bias matrix of the corresponding convolutional network, respectively;

The fully connected layer merges the results of the above convolutional layer; the merged result is

$$F_1 = F_1^R \cap F_1^G \cap F_1^B$$

where ∩ denotes the connection operation;

The pooling layer downsamples the high-dimensional feature vector output by the fully connected layer using max pooling with a 3×3 filter, yielding the high-dimensional feature vector $F_2$ of the input image block:

$$F_2(x) = \max_{y \in \Omega(x)} F_1(y)$$

where $F_2(x)$ denotes the feature value at pixel x of the input image block, and Ω(x) is a region of the input image block centered at pixel x;

The pooling result $F_2$ is passed through the output layer to the deep convolutional neural network; $F_2$ is the scene depth matrix of the input image block;

The structure of the deep convolutional neural network is shown in Fig. 3; it comprises an input layer, a multi-scale mapping unit, a multi-scale connection layer, a pooling layer, a convolutional layer, a BReLU activation layer, and an output layer;

The input layer receives the high-dimensional feature vector $F_2$ output by the shallow convolutional neural network, and the multi-scale mapping unit maps $F_2$ to $F_3^{(s)}$:

$$F_3^{(s)} = W_3^{(s)} * F_2 + B_3^{(s)}, \quad s = 1, 2, 3$$

where $W_3$ denotes the weight coefficient matrices corresponding to the three groups of different-scale convolutional networks in the multi-scale mapping unit, $B_3$ the bias matrix, and * the convolution operation;

The multi-scale connection layer merges the outputs of the multi-scale mapping unit to obtain $F_3$:

$$F_3 = F_3^{(1)} \cap F_3^{(2)} \cap F_3^{(3)}$$

where $F_3^{(1)}$, $F_3^{(2)}$, $F_3^{(3)}$ are the outputs of the three groups of different-scale convolutional networks;

The pooling layer then downsamples the output of the multi-scale connection layer; the output of the pooling layer is $F_4$:

$$F_4(x) = \max_{y \in \Omega(x)} F_3(y)$$

The convolutional layer maps the pooling result $F_4$ to $F_5$:

$$F_5 = W_5 * F_4 + B_5$$

where $W_5$ denotes the weight coefficient matrix of this convolutional layer, $B_5$ the bias matrix, and * the convolution operation;

The BReLU activation layer applies the bilateral rectified linear unit (BReLU) activation function to perform nonlinear regression on the convolutional layer output $F_5$, yielding $F_6$:

$$F_6(x) = \min(a_{max}, F_5(x))$$

where $a_{max}$ is the upper amplitude of the BReLU activation function. The gradient of the BReLU activation function is:

$$\frac{\partial F_6(x)}{\partial F_5(x)} = \begin{cases} 1, & F_5(x) < a_{max} \\ 0, & \text{otherwise} \end{cases}$$

Finally, let $t = F_6$; the output layer outputs t, the transmittance matrix of the input image block;
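A tiny numpy check of the one-sided BReLU used here and the gradient just stated; both follow directly from $F_6 = \min(a_{max}, F_5)$:

```python
# One-sided BReLU forward pass and its gradient with respect to F5.
import numpy as np

def brelu(f5, a_max=1.0):
    return np.minimum(a_max, f5)            # F6 = min(a_max, F5)

def brelu_grad(f5, a_max=1.0):
    return (f5 < a_max).astype(f5.dtype)    # 1 where F5 < a_max, else 0
```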

Step 4: Construct the loss function:

For a single training sample i, the loss function is:

$$L = \frac{1}{2}\left\| t_i - t_i^* \right\|^2$$

When there are multiple training samples, the loss function is:

$$L = \frac{1}{n}\sum_{i=1}^{n} \frac{1}{2}\left\| t_i - t_i^* \right\|^2 + \frac{\lambda}{2}\sum_{j,i}\left\| W_{ji} \right\|^2$$

where $t_i$ denotes the transmission map of training sample i, n is the number of training samples, λ denotes the decay parameter, and $W_{ji}$ denotes the weight coefficient matrix $W_j$ for training sample i; $t_i^*$ denotes the actual transmittance matrix of training sample i, calculated from the atmospheric scattering model:

$$I = J \times t^* + A(1 - t^*)$$

where I denotes the light intensity matrix of training sample i, J denotes the light intensity matrix of the image block in the original haze-free image corresponding to training sample i, and A is the atmospheric light intensity at the time the hazy image corresponding to training sample i was captured. In the loss function, the first term on the right-hand side is the mean squared error term and the second term is the regularization term. The regularization term is independent of the biases $B_{ji}$ and only controls the magnitude of the weights, so it is also called the weight decay term; the decay parameter λ of the weight decay term determines the relative weight of the two terms in the loss function. The key to training is to minimize the loss function by continually adjusting the weights $W_{ji}$ and biases $B_{ji}$.

During training, all weight parameters $W_{ji}$ and bias parameters $B_{ji}$ are first initialized. The filter weights of every layer of the network model are randomly initialized from a Gaussian distribution with mean 0 and standard deviation 0.001, and the initial biases are set to 0.

After initialization, the stochastic gradient descent algorithm is used to update the weights $W_{ji}$ and biases $B_{ji}$ according to the update rules:

$$W_{ji} := W_{ji} - \alpha\left[\frac{1}{n}\Delta W_{ji} + \lambda W_{ji}\right]$$

$$B_{ji} := B_{ji} - \alpha\left[\frac{1}{n}\Delta B_{ji}\right]$$

where α denotes the learning rate. The partial derivatives $\partial L/\partial W_{ji}$ and $\partial L/\partial B_{ji}$ in the two formulas above are obtained by the backpropagation algorithm, i.e., by differentiating the loss function with respect to the weights $W_{ji}$ and the biases $B_{ji}$.

The main steps of the backpropagation algorithm are: first forward-propagate the given sample to obtain the output values of all network nodes, then compute the total error, and take the partial derivative of the total error with respect to a node to obtain that node's influence on the final output.

The complete network model training procedure is therefore as follows:

Initialize the parameters of each layer of the network;

For each sample i:

a: compute $\partial L/\partial W_{ji}$ and $\partial L/\partial B_{ji}$ by backpropagation;

b: compute the changes in the parameters $W_{ji}$ and $B_{ji}$:

$$\Delta W_{ji} := \Delta W_{ji} + \frac{\partial L}{\partial W_{ji}}$$

$$\Delta B_{ji} := \Delta B_{ji} + \frac{\partial L}{\partial B_{ji}}$$

c: complete the parameter update:

$$W_{ji} := W_{ji} - \alpha\left[\frac{1}{n}\Delta W_{ji} + \lambda W_{ji}\right]$$

$$B_{ji} := B_{ji} - \alpha\left[\frac{1}{n}\Delta B_{ji}\right]$$

d: substitute the updated weights $W_{ji}$ and biases $B_{ji}$ into the loss function and repeat steps a to d until the loss function is minimized; the update then ends and the method proceeds to Step 5. An Nvidia GeForce GTX 1050 8G GPU is used for acceleration during training.
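In a modern framework, steps a to d collapse into a standard SGD loop: backpropagation supplies the partial derivatives, and the λ·W term is folded into the optimizer's weight_decay. A sketch reusing the DehazeNet sketch defined earlier; the dummy loader and all hyperparameters are assumptions:

```python
# Steps a-d as an SGD loop: loss.backward() is step a, optimizer.step()
# performs the accumulate-and-update of steps b-c.
import torch

model = DehazeNet()                              # sketch defined earlier
opt = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)

# Dummy batch stand-in for an HDF5-backed DataLoader:
loader = [(torch.rand(8, 3, 64, 64), torch.rand(8, 1, 64, 64))]

for epoch in range(10):                          # epoch count is illustrative
    for hazy, t_true in loader:
        opt.zero_grad()
        t_pred = model(hazy)
        loss = torch.nn.functional.mse_loss(t_pred, t_true)
        loss.backward()                          # step a: backprop partials
        opt.step()                               # steps b-c: update W, B
```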

Step 5: Input a new hazy image into the trained multi-scale deep convolutional network model; the output is taken as the initial transmittance of the new hazy image;

Step 6: Estimate the atmospheric light intensity A at the time the new hazy image was captured, and recover the corresponding haze-free image according to the atmospheric scattering model. The atmospheric light intensity A is estimated as follows:

Estimate the dark channel map of the hazy image:

$$I^{dark}(x) = \min_{y \in \Omega(x)}\left(\min_{c} I^c(y)\right)$$

where $I^{dark}$ denotes the dark channel estimate of the pixel matrix I of the hazy image, $I^c$ denotes the value of one color channel of I, and y denotes a pixel.

Sort the pixel values of the obtained dark channel map in descending order and take the top one-thousandth of the pixels as candidate points; map the candidate points to the corresponding positions in the hazy image, compute the brightness values at those positions, and take the maximum computed brightness value as the atmospheric light estimate A.

The above is only a preferred embodiment of the present invention. It should be noted that those of ordinary skill in the art can make several improvements and refinements without departing from the principle of the invention, and these improvements and refinements shall also fall within the protection scope of the invention.

Claims (3)

1. A single image dehazing method based on a convolutional neural network, characterized by comprising steps (1) to (9) performed in sequence:

(1) Construct training and test samples: obtain a number of haze-free images, add haze of different densities to them to obtain hazy images, convert the hazy and haze-free images into image blocks in HDF5 format, and divide the image blocks of the hazy images and of the haze-free images into two parts according to a preset ratio, one part serving as training samples and the other as test samples;

(2) Construct a multi-scale deep convolutional network model comprising a shallow convolutional neural network and a deep convolutional neural network;

the shallow convolutional neural network comprises an input layer, a convolutional layer, a fully connected layer, a pooling layer, and an output layer cascaded in sequence; the input layer maps the input image block i into the R, G, B color space, and the convolutional layer convolves the values of the R, G, and B color channels of the input image block with Gaussian filters; the result of the convolution is

$$F_1^c = W_1 * I^c + B_1, \quad c \in \{R, G, B\}$$

where $I^c$ denotes the pixel-value matrix of one color channel of the input image block in the R, G, B color space, and $W_1$ and $B_1$ denote the weight coefficient matrix and bias matrix of the corresponding convolutional network, respectively;

the fully connected layer merges the results of the above convolutional layer; the merged result is

$$F_1 = F_1^R \cap F_1^G \cap F_1^B$$

after the pooling layer downsamples the result of the fully connected layer, the high-dimensional feature vector $F_2$ of the input image block is obtained:

$$F_2(x) = \max_{y \in \Omega(x)} F_1(y)$$

where $F_2(x)$ denotes the feature value at pixel x of the input image block, and Ω(x) is a region of the input image block centered at pixel x; the pooling result $F_2$ is passed through the output layer to the deep convolutional neural network; $F_2$ is the scene depth matrix of the input image block;

the deep convolutional neural network comprises an input layer, a multi-scale mapping unit, a multi-scale connection layer, a pooling layer, a convolutional layer, a BReLU activation layer, and an output layer;

the input layer receives the high-dimensional feature vector $F_2$ output by the shallow convolutional neural network, and the multi-scale mapping unit maps $F_2$ to $F_3^{(s)}$:

$$F_3^{(s)} = W_3^{(s)} * F_2 + B_3^{(s)}, \quad s = 1, 2, 3$$

where $W_3$ denotes the weight coefficient matrices corresponding to the three groups of different-scale convolutional networks in the multi-scale mapping unit, $B_3$ the bias matrix, and * the convolution operation;

the multi-scale connection layer merges the outputs of the multi-scale mapping unit to obtain $F_3$:

$$F_3 = F_3^{(1)} \cap F_3^{(2)} \cap F_3^{(3)}$$

where $F_3^{(1)}$, $F_3^{(2)}$, $F_3^{(3)}$ are the outputs of the three groups of different-scale convolutional networks;

the pooling layer then downsamples the output of the multi-scale connection layer; the output of the pooling layer is $F_4$:

$$F_4(x) = \max_{y \in \Omega(x)} F_3(y)$$

the convolutional layer maps the pooling result $F_4$ to $F_5$:

$$F_5 = W_5 * F_4 + B_5$$

where $W_5$ denotes the weight coefficient matrix of this convolutional layer, $B_5$ the bias matrix, and * the convolution operation;

the BReLU activation layer applies the bilateral rectified linear unit (BReLU) activation function to perform nonlinear regression on the convolutional layer output $F_5$, yielding $F_6$:

$$F_6(x) = \min(a_{max}, F_5(x))$$

where $a_{max}$ is the upper amplitude of the BReLU activation function;

finally, let $t = F_6$; the output layer outputs t, the transmittance matrix of the input image block;

(3) Construct the loss function:

for a single training sample, the loss function is:

$$L = \frac{1}{2}\left\| t - t^* \right\|^2$$

when there are multiple training samples, the loss function is:

$$L = \frac{1}{n}\sum_{i=1}^{n} \frac{1}{2}\left\| t_i - t_i^* \right\|^2 + \frac{\lambda}{2}\sum_{j,i}\left\| W_{ji} \right\|^2$$

where t denotes the transmission map of a training sample, n is the number of training samples, λ denotes the decay parameter, $W_{ji}$ denotes the weight coefficient matrix for training sample i, and $t^*$ denotes the actual transmittance matrix of a training sample;

(4) For each $W_{ji}$, randomly initialize its components from a Gaussian distribution with mean 0 and standard deviation 0.001; initialize $B_{ji}$ to 0, where $B_{ji}$ denotes the bias matrix of training sample i; initialize $\Delta W_{ji} = 0$ and $\Delta B_{ji} = 0$;

(5) For each sample i, use the backpropagation algorithm to compute the partial derivatives $\partial L/\partial W_{ji}$ and $\partial L/\partial B_{ji}$, and accumulate the changes in $W_{ji}$ and $B_{ji}$:

$$\Delta W_{ji} := \Delta W_{ji} + \frac{\partial L}{\partial W_{ji}}$$

$$\Delta B_{ji} := \Delta B_{ji} + \frac{\partial L}{\partial B_{ji}}$$

(6) Update:

$$W_{ji} := W_{ji} - \alpha\left[\frac{1}{n}\Delta W_{ji} + \lambda W_{ji}\right]$$

$$B_{ji} := B_{ji} - \alpha\left[\frac{1}{n}\Delta B_{ji}\right]$$

(7) Substitute the updated $W_{ji}$ and $B_{ji}$ into the loss function and repeat steps (5) to (7) until the value of the loss function is minimized; the multi-scale deep convolutional network model is then trained, and the method proceeds to step (8);

(8) Input a new hazy image into the trained multi-scale deep convolutional network model; the output is taken as the initial transmittance of the new hazy image;

(9) Estimate the atmospheric light intensity A at the time the new hazy image was captured, and recover the corresponding haze-free image according to the atmospheric scattering model.

2. The single image dehazing method based on a convolutional neural network according to claim 1, characterized in that $t^*$ is calculated from the atmospheric scattering model:

$$I = J \times t^* + A(1 - t^*)$$

where I denotes the light intensity matrix of the training sample, J denotes the light intensity matrix of the image block in the original haze-free image corresponding to the training sample, and A is the atmospheric light intensity at the time the hazy image corresponding to the training sample was captured.

3. The single image dehazing method based on a convolutional neural network according to claim 2, characterized in that the method for estimating the atmospheric light intensity A at the time the hazy image was captured is:

estimate the dark channel map of the hazy image:

$$I^{dark}(x) = \min_{y \in \Omega(x)}\left(\min_{c} I^c(y)\right)$$

where $I^{dark}$ denotes the dark channel estimate of the pixel matrix I of the hazy image, $I^c$ denotes the value of one color channel of I, and y denotes a pixel;

sort the pixel values of the obtained dark channel map in descending order and take the top one-thousandth of the pixels as candidate points; map the candidate points to the corresponding positions in the hazy image, compute the brightness values at those positions, and take the maximum computed brightness value as the atmospheric light estimate A.
CN201811492894.5A 2018-12-06 2018-12-06 Single image defogging method based on convolutional neural network Active CN109712083B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811492894.5A CN109712083B (en) 2018-12-06 2018-12-06 Single image defogging method based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811492894.5A CN109712083B (en) 2018-12-06 2018-12-06 Single image defogging method based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN109712083A CN109712083A (en) 2019-05-03
CN109712083B true CN109712083B (en) 2021-02-12

Family

ID=66255491

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811492894.5A Active CN109712083B (en) 2018-12-06 2018-12-06 Single image defogging method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN109712083B (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110097522B (en) * 2019-05-14 2021-03-19 燕山大学 Single outdoor image defogging method based on multi-scale convolution neural network
CN110310238B (en) * 2019-06-18 2023-01-10 华南农业大学 A single image deraining method based on compressed reward-punishment neural network that reuses original information
CN110363727B (en) * 2019-07-24 2020-06-12 中国人民解放军火箭军工程大学 Image dehazing method based on multi-scale dark channel prior cascaded deep neural network
CN110570363A (en) * 2019-08-05 2019-12-13 浙江工业大学 Image dehazing method based on Cycle-GAN with pyramid pooling and multi-scale discriminator
CN110930320B (en) * 2019-11-06 2022-08-16 南京邮电大学 Image defogging method based on lightweight convolutional neural network
CN111192219B (en) * 2020-01-02 2022-07-26 南京邮电大学 Image defogging method based on improved inverse atmospheric scattering model convolution network
CN111539250B (en) * 2020-03-12 2024-02-27 上海交通大学 Image fog concentration estimation method, system and terminal based on neural network
CN111369472B (en) * 2020-03-12 2021-04-23 北京字节跳动网络技术有限公司 Image defogging method and device, electronic equipment and medium
CN111507914B (en) * 2020-04-10 2023-08-08 北京百度网讯科技有限公司 Training method, repairing method, device, equipment and medium for face repairing model
CN113706395A (en) * 2020-05-21 2021-11-26 无锡科美达医疗科技有限公司 Image defogging method based on antagonistic neural network
CN111681180B (en) * 2020-05-25 2022-04-26 厦门大学 Priori-driven deep learning image defogging method
CN111814753A (en) * 2020-08-18 2020-10-23 深延科技(北京)有限公司 Target detection method and device under foggy weather condition
CN112365476B (en) * 2020-11-13 2023-12-08 南京信息工程大学 Fog day visibility detection method based on double-channel depth network
CN112364136B (en) * 2021-01-12 2021-04-23 平安国际智慧城市科技股份有限公司 Keyword generation method, device, equipment and storage medium
CN113763259B (en) * 2021-02-18 2025-03-18 北京沃东天骏信息技术有限公司 Image defogging method and device
CN112950589A (en) * 2021-03-03 2021-06-11 桂林电子科技大学 Dark channel prior defogging algorithm of multi-scale convolution neural network
CN112991225B (en) * 2021-04-14 2025-04-11 集美大学 Defogging method for sea surface target image
CN113191980A (en) * 2021-05-12 2021-07-30 大连海事大学 Underwater image enhancement method based on imaging model
CN113658059B (en) * 2021-07-27 2024-03-26 西安理工大学 Remote sensing image defogging enhancement method based on deep learning
CN115587946A (en) * 2022-10-12 2023-01-10 中电莱斯信息系统有限公司 A remote sensing image defogging method based on multi-scale network
CN115689932B (en) * 2022-11-09 2025-07-01 重庆邮电大学 An image dehazing method based on deep neural network

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8350933B2 (en) * 2009-04-08 2013-01-08 Yissum Research Development Company Of The Hebrew University Of Jerusalem, Ltd. Method, apparatus and computer program product for single image de-hazing
CN106127702B (en) * 2016-06-17 2018-08-14 兰州理工大学 A kind of image defogging method based on deep learning
KR101938945B1 (en) * 2016-11-07 2019-01-15 한국과학기술원 Method and system for dehazing image using convolutional neural network
CN107958465A (en) * 2017-10-23 2018-04-24 华南农业大学 A Single Image Dehazing Method Based on Deep Convolutional Neural Network
CN107749052A (en) * 2017-10-24 2018-03-02 中国科学院长春光学精密机械与物理研究所 Image defogging method and system based on deep learning neutral net
CN108550130A (en) * 2018-04-23 2018-09-18 南京邮电大学 A kind of multiple dimensioned transmission plot fusion method of image pyramid model

Also Published As

Publication number Publication date
CN109712083A (en) 2019-05-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant