CN112767280B - Single image raindrop removing method based on loop iteration mechanism - Google Patents
- Publication number: CN112767280B (application CN202110134465.6A)
- Authority: CN (China)
- Prior art keywords: image, network, convolution, module, raindrop
- Legal status: Active
Classifications
- G06T5/73 — Image enhancement or restoration: Deblurring; Sharpening (G: Physics; G06: Computing or Calculating; G06T: Image data processing or generation)
- G06T2207/20081 — Indexing scheme for image analysis or enhancement, special algorithmic details: Training; Learning
- G06T2207/20084 — Indexing scheme for image analysis or enhancement, special algorithmic details: Artificial neural networks [ANN]
Landscapes
- Physics & Mathematics; Engineering & Computer Science; Image Processing; Image Analysis
Abstract
Description
Technical Field
The present invention relates to the fields of image and video processing and computer vision, and in particular to a method for removing raindrops from a single image based on a loop iteration mechanism.
Background Art
With the rapid development of the Internet and multimedia technology, images have become an indispensable medium for human communication and information exchange, and they matter to many aspects of modern society. Image acquisition, however, often takes place outdoors, where adverse weather such as rain, fog, or snow is unavoidable. Such weather degrades the visual quality of captured images and videos in contrast, saturation, visibility, and so on. Rain is a common weather phenomenon in daily life: raindrops adhering to window glass, a windshield, or a camera lens obstruct the visibility of the background scene, reduce image quality, and cause severe degradation. Details in the image become unrecognizable, its usefulness drops sharply, and high-level visual understanding tasks such as object detection, person re-identification, and image segmentation become harder. In rainy weather, raindrops inevitably adhere to the camera lens, blurring parts of the background scene or occluding parts of the foreground. Removing raindrops from a single image to recover a clean background is therefore of great significance.
Although single-image rain-streak removal has been well explored, far less work addresses single-image raindrop removal, and the well-explored rain-streak removal methods cannot be applied directly to raindrops. Raindrops are not as dense as rain streaks, but they are generally larger and can completely occlude the background. A raindrop's appearance depends on many parameters, and its shape usually differs from that of thin, vertical rain streaks. Moreover, the physical model of raindrops is entirely different from that of rain streaks, which makes raindrop removal more difficult.
Existing single-image raindrop removal methods fall roughly into two categories: model-based methods and deep-learning-based methods. Model-based methods typically first apply a filter to decompose the image into high-frequency and low-frequency parts, and then use dictionary learning to distinguish rain components from non-rain components in the high-frequency part. Most of them rely on manually preset parameters for feature extraction and optimization, cannot capture raindrop features well, and therefore perform poorly at raindrop removal.
Deep-learning-based methods are data-driven: a convolutional neural network is trained on a large amount of data. The strong feature-learning and representation capability of CNNs extracts image features well and yields better raindrop removal. These methods, however, fail to balance removal performance against network size: they either achieve good performance at the cost of a large number of parameters, which greatly limits their potential value in practical applications with limited computing resources, or keep the parameter count small at the cost of poor performance. Designing an efficient and practical single-image raindrop removal method is therefore likely to become a focus of future research.
Summary of the Invention
The purpose of the present invention is to provide a single-image raindrop removal method based on a loop iteration mechanism that significantly improves raindrop removal performance while greatly reducing the number of network parameters.
To achieve the above object, the technical scheme of the present invention is a single-image raindrop removal method based on a loop iteration mechanism, comprising the following steps:
Step A: Preprocess the training pairs of original raindrop-degraded images and clean images to obtain a dataset of image-patch pairs composed of raindrop-degraded patches and clean patches.
Step B: Based on the idea of divide and conquer, and motivated by iteratively removing rain, design a convolutional neural network for single-image raindrop removal.
Step C: Design a target loss function for optimizing the network. With the patch dataset as training data, compute the gradient of each parameter of the designed network by back-propagation according to the designed loss, update the parameters by stochastic gradient descent, and thereby learn the optimal parameters of the single-image raindrop removal network.
Step D: Feed the image to be tested into the designed network, and use the trained network to predict the clean image after raindrop removal.
In an embodiment of the present invention, step A is implemented as follows: the original raindrop-degraded image and its corresponding clean image are cropped in the same way into patches of size W×W, with a crop taken every m pixels so as to avoid redundant overlapping crops; after cropping, each W×W raindrop patch corresponds one-to-one to a W×W clean patch.
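The dicing of step A can be sketched as follows (a minimal sketch; the patch size W = 128 and stride m = 64 are illustrative values, not taken from the patent):

```python
import numpy as np

def dice_pair(rainy, clean, W=128, m=64):
    """Cut an aligned (rainy, clean) image pair into W x W patch pairs,
    sliding the crop window by m pixels in both directions."""
    assert rainy.shape == clean.shape
    H, Wd = rainy.shape[:2]
    pairs = []
    for y in range(0, H - W + 1, m):
        for x in range(0, Wd - W + 1, m):
            # identical crop applied to both images keeps the pairs aligned
            pairs.append((rainy[y:y + W, x:x + W], clean[y:y + W, x:x + W]))
    return pairs

rainy = np.zeros((256, 256, 3))
clean = np.zeros((256, 256, 3))
patches = dice_pair(rainy, clean, W=128, m=64)
# 3 x 3 window positions -> 9 one-to-one patch pairs
```

Each element of `patches` is one (raindrop patch, clean patch) training pair.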
In an embodiment of the present invention, step B is implemented as follows:
Step B1: Design a multi-stage raindrop removal network, a convolutional neural network based on a loop iteration mechanism, composed of several stage sub-networks with identical structure and shared network parameters.
Step B2: Design the stage sub-network, which extracts raindrop-related features so as to remove raindrops better.
Step B3: Design the context aggregation module and the attention context aggregation module within the stage sub-network, which aggregate the spatial context information the network otherwise lacks.
In an embodiment of the present invention, step B2 is implemented as follows:
Step B21: The input of each stage sub-network is the channel-wise concatenation of the de-rained patch produced by the previous stage sub-network and the corresponding original raindrop patch; for the first stage, the input is the channel-wise concatenation of two copies of the original raindrop patch.
Step B22: Feed the concatenated result of step B21 into a convolutional layer with ReLU activation to convert the image into a feature map, outputting features according to the following formula:
F0 = Conv1(Concat(Io, It-1))
where Conv1 denotes the convolutional layer with ReLU activation, Io is the original raindrop image patch, It-1 is the de-rained patch obtained in the previous stage (for the first stage, It-1 = Io), Concat denotes the channel-wise concatenation operation, and F0 is the extracted feature map.
Step B23: Feed the feature map F0 into a convolutional long short-term memory (ConvLSTM) module, composed of a forget gate f, an input gate i, and an output gate o, computed as follows:
ft = σ(Wxf * F0 + Whf * Ht-1 + Wcf ⊙ Ct-1 + bf)
it = σ(Wxi * F0 + Whi * Ht-1 + Wci ⊙ Ct-1 + bi)
Ct = ft ⊙ Ct-1 + it ⊙ tanh(Wxc * F0 + Whc * Ht-1 + bc)
ot = σ(Wxo * F0 + Who * Ht-1 + Wco ⊙ Ct + bo)
F1 = Ht = ot ⊙ tanh(Ct)
where at stage t the inputs of the forget gate ft and the input gate it are formed from three parts: the feature map F0, the output Ht-1 of the ConvLSTM module at the previous stage (t-1), and the previous cell state Ct-1; the input of the output gate ot is formed from the feature map F0, the previous output Ht-1, and the current cell state Ct. W* and b* are the weight and bias parameters of the corresponding convolution kernels, tanh denotes the hyperbolic tangent, σ denotes the sigmoid function, the operator * denotes convolution, and the operator ⊙ denotes element-wise multiplication. Ct is the cell state at the current stage t, which is fed to the ConvLSTM module of the next stage, and Ht is the feature map output by the ConvLSTM module at the current stage t; for convenience of description, Ht is written as F1.
In this multi-stage formulation, each time step of the recurrence above corresponds to one stage. For the first stage of the network, which has no previous stage, the recurrent inputs of the forget gate and the input gate are set to 0.
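The gate equations of step B23 can be sketched in PyTorch as follows. This is a sketch under assumptions: 32 channels, 3×3 kernels, and storing the Hadamard peephole weights Wcf, Wci, Wco as per-pixel parameters are illustrative choices not specified by the patent.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """ConvLSTM gates of step B23: Wx*, Wh* are convolutions (operator *),
    the peephole weights Wc* act element-wise on the cell state (operator ⊙)."""
    def __init__(self, ch=32, size=(64, 64)):
        super().__init__()
        self.conv_x = nn.Conv2d(ch, 4 * ch, 3, padding=1)              # Wxf, Wxi, Wxc, Wxo (+ biases b*)
        self.conv_h = nn.Conv2d(ch, 4 * ch, 3, padding=1, bias=False)  # Whf, Whi, Whc, Who
        self.w_cf = nn.Parameter(torch.zeros(1, ch, *size))            # peephole weights Wcf
        self.w_ci = nn.Parameter(torch.zeros(1, ch, *size))            # Wci
        self.w_co = nn.Parameter(torch.zeros(1, ch, *size))            # Wco

    def forward(self, f0, h_prev, c_prev):
        xf, xi, xc, xo = self.conv_x(f0).chunk(4, dim=1)
        hf, hi, hc, ho = self.conv_h(h_prev).chunk(4, dim=1)
        f = torch.sigmoid(xf + hf + self.w_cf * c_prev)  # forget gate ft
        i = torch.sigmoid(xi + hi + self.w_ci * c_prev)  # input gate it
        c = f * c_prev + i * torch.tanh(xc + hc)         # cell state Ct
        o = torch.sigmoid(xo + ho + self.w_co * c)       # output gate ot
        h = o * torch.tanh(c)                            # Ht = F1
        return h, c

cell = ConvLSTMCell(ch=32, size=(64, 64))
f0 = torch.zeros(1, 32, 64, 64)
h = c = torch.zeros(1, 32, 64, 64)   # first stage: recurrent inputs start at 0
h1, c1 = cell(f0, h, c)
```

The returned pair (h, c) corresponds to (F1, Ct) fed forward into the next stage.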
Step B24: Feed the output F1 of the ConvLSTM module into the designed sequence of context aggregation modules and attention context aggregation modules, in order: a context aggregation module with dilation rate 2 → an attention context aggregation module with dilation rate 2 → a context aggregation module with dilation rate 2 → a context aggregation module with dilation rate 4 → an attention context aggregation module with dilation rate 4 → a context aggregation module with dilation rate 4, computed as:
F2 = CAU4(SECAU4(CAU4(CAU2(SECAU2(CAU2(F1))))))
where CAUr(*) denotes a context aggregation module with dilation rate r, and SECAUr(*) denotes an attention context aggregation module with dilation rate r.
Step B25: Feed the output F2 of step B24 into a standard residual module and then into a convolutional layer with ReLU activation to complete the feature-map-to-image conversion, outputting the 3-channel de-rained image of the current stage sub-network t according to the following formula:
It = Conv2(Res(F2))
where Res(*) denotes the standard residual module, Conv2 denotes the convolutional layer with ReLU activation, and It is the de-rained image of the current stage sub-network t.
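The loop-iteration mechanism of steps B1 and B21 — one shared-parameter stage applied repeatedly, each stage fed the channel-wise concatenation of the original patch and the previous stage's output — can be sketched as follows. The tiny two-layer StageNet is a placeholder for the real stage sub-network of steps B22 to B25, and T = 4 stages is an assumed setting.

```python
import torch
import torch.nn as nn

class StageNet(nn.Module):
    """Placeholder stage sub-network: takes a 6-channel input
    (two concatenated RGB images) and outputs a 3-channel image."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.body(x)

def iterative_derain(stage, i_o, T=4):
    """Loop iteration: the SAME stage network (shared parameters) is applied
    T times; stage t sees Concat(Io, I_{t-1}) on the channel axis, and the
    first stage uses I_0 = Io itself (step B21)."""
    i_t = i_o
    for _ in range(T):
        i_t = stage(torch.cat([i_o, i_t], dim=1))
    return i_t

out = iterative_derain(StageNet(), torch.zeros(1, 3, 64, 64), T=4)
```

Because every stage reuses the same parameters, the parameter count is that of a single stage regardless of T, which is the mechanism the patent credits for its small network size.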
In an embodiment of the present invention, step B3 is implemented as follows:
Step B31: In the context aggregation module, the input feature F is first sent to a smoothed dilated convolution block, computed as:
F3 = Dilatedr(Sep(F))
where F3 is the output feature of the smoothed dilated convolution block, F is the input of the context aggregation module, Sep(*) is a separable shared convolutional layer, i.e., a channel-wise separable convolution whose parameters are shared across all channels, and Dilatedr(*) is a dilated convolution. The dilation rate r enlarges the receptive field so that spatial context information is aggregated effectively for better feature extraction: r indicates how many zeros separate adjacent elements of the convolution kernel. When r = 1, the dilated convolution is identical to an ordinary convolution, with kernel elements directly adjacent and no zeros between them; when r > 1, r-1 zeros are inserted between kernel elements to enlarge the receptive field. The dilation rate r mentioned in step B24 is exactly the dilation rate of this dilated convolution.
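The receptive-field arithmetic implied by this description can be checked directly: inserting r−1 zeros between the elements of a k×k kernel expands its span to k + (k−1)(r−1) pixels.

```python
def effective_kernel(k, r):
    """Span in pixels of a k x k kernel with dilation rate r:
    r - 1 zeros are inserted between adjacent kernel elements,
    so r = 1 recovers an ordinary convolution."""
    return k + (k - 1) * (r - 1)

# 3x3 kernel: dilation 1 -> 3 pixels, dilation 2 -> 5, dilation 4 -> 9
assert effective_kernel(3, 1) == 3
assert effective_kernel(3, 2) == 5
assert effective_kernel(3, 4) == 9
```

This is why the module sequence of step B24 alternates rates 2 and 4: the later modules cover a 9-pixel span with only 3×3 kernels.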
The only difference between the attention context aggregation module and the context aggregation module lies in this step: the attention variant adds a channel attention module, and all subsequent steps are identical. The attention context aggregation module is computed as:
F3 = SE(Dilatedr(Sep(F)))
where SE(*) denotes the channel attention module.
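The channel attention module SE(*) is not detailed in the text; a common squeeze-and-excitation formulation would look like the following sketch (the reduction ratio 8 is an assumed hyperparameter):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention: global average pooling
    produces one descriptor per channel, a small bottleneck MLP maps it to
    a per-channel weight in (0, 1), and the input is rescaled channel-wise."""
    def __init__(self, ch=32, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // reduction), nn.ReLU(),
            nn.Linear(ch // reduction, ch), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3))).view(b, c, 1, 1)  # squeeze -> excite
        return x * w                                       # reweight channels

se = SEBlock(ch=32)
y = se(torch.zeros(2, 32, 16, 16))
```

The rescaling preserves the feature-map shape, so SE(*) can be inserted between Dilatedr and the next block without changing any other dimension.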
Step B32: In both the attention context aggregation module and the context aggregation module, the feature F3 output by step B31 is sent into a residual block built from a self-calibrated convolution, computed as:
F4 = LeakyReLU(F3 + SCC(F3))
F4 is the output of the residual block built from a self-calibrated convolution; the block comprises a self-calibrated convolution, a LeakyReLU function, and a residual connection. LeakyReLU(*) is defined as:
LeakyReLU(x) = x, if x > 0; a·x, otherwise
where x is the input value of the LeakyReLU function and a is a fixed linear coefficient.
SCC(*) is the self-calibrated convolution, defined as follows:
First, the output feature F3 of step B31 is fed into 1×1 convolutional layers without activation functions:
X1, X2 = Conv1×1(F3)
where Conv1×1 denotes a 1×1 convolutional layer, and X1 and X2 are feature maps whose channel count has been halved by the 1×1 convolution: if F3 has C channels, X1 and X2 each have C/2 channels.
X1 and X2 are then sent into their respective branches; X1 enters the self-calibration branch, computed as:
T1 = AvgPoolr(X1)
X′1 = Up(T1 * K2)
Y′1 = (X1 * K3) ⊙ σ(X1 + X′1)
Y1 = Y′1 * K4
where AvgPoolr(*) is average pooling with stride r, Up(*) is an upsampling operation, * is the convolution operation, ⊙ is the element-wise multiplication operator, + is the element-wise addition operator, and σ is the sigmoid activation function; K2, K3, and K4 are convolution kernels of identical size; Y1 is the output of the self-calibration branch.
Meanwhile, X2 is sent into the corresponding plain convolution branch, computed as:
Y2 = X2 * K1
Finally, the outputs of the two branches are concatenated along the channel dimension so that the channel count is restored to the original C of the input feature map:
Y = Concat(Y1, Y2)
where Concat(*) is the channel concatenation operation and Y is the output of the self-calibrated convolution module.
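Putting the two branches of step B32 together, the self-calibrated convolution SCC(*) can be sketched as follows. This is a sketch under assumptions: two separate 1×1 convolutions producing X1 and X2, 3×3 kernels for K1 through K4, and pooling stride r = 4 are illustrative choices where the patent leaves details open.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfCalibratedConv(nn.Module):
    """Self-calibrated convolution: one half of the channels is modulated by
    a sigmoid gate computed from a downsampled view of itself, the other half
    passes through a plain convolution; outputs are re-concatenated to C."""
    def __init__(self, ch=32, r=4):
        super().__init__()
        half = ch // 2
        self.split1 = nn.Conv2d(ch, half, 1)   # 1x1 conv -> X1 (no activation)
        self.split2 = nn.Conv2d(ch, half, 1)   # 1x1 conv -> X2 (no activation)
        self.k1 = nn.Conv2d(half, half, 3, padding=1)
        self.k2 = nn.Conv2d(half, half, 3, padding=1)
        self.k3 = nn.Conv2d(half, half, 3, padding=1)
        self.k4 = nn.Conv2d(half, half, 3, padding=1)
        self.r = r

    def forward(self, f3):
        x1, x2 = self.split1(f3), self.split2(f3)
        # self-calibration branch
        t1 = F.avg_pool2d(x1, self.r)                          # T1 = AvgPool_r(X1)
        x1_up = F.interpolate(self.k2(t1), size=x1.shape[2:])  # X1' = Up(T1 * K2)
        y1_ = self.k3(x1) * torch.sigmoid(x1 + x1_up)          # Y1' = (X1*K3) ⊙ σ(X1 + X1')
        y1 = self.k4(y1_)                                      # Y1 = Y1' * K4
        # plain convolution branch
        y2 = self.k1(x2)                                       # Y2 = X2 * K1
        return torch.cat([y1, y2], dim=1)                      # Y = Concat(Y1, Y2)

scc = SelfCalibratedConv(ch=32, r=4)
y = scc(torch.zeros(1, 32, 32, 32))
```

The output channel count equals the input's, so SCC(F3) can be added to F3 by the residual connection F4 = LeakyReLU(F3 + SCC(F3)).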
In an embodiment of the present invention, step C is implemented as follows:
Step C1: Use a structural similarity (SSIM) loss function as the constraint to optimize the convolutional neural network for single-image raindrop removal:
L = -(1/N) Σi=1…N SSIM(Ŷi, Yi)
where SSIM is the structural similarity index. Given training image pairs (Xi, Yi), with i = 1, …, N (N being the total number of training samples), Xi is an input image patch corrupted by raindrops, Yi is the corresponding clean image patch, and Ŷi is the de-rained clean patch predicted by the network for the training pair (Xi, Yi).
Step C2: Randomly divide the image-patch dataset into several batches, each containing the same number of patches, and train the designed network until the loss value L computed in step C1 converges below a threshold or the number of iterations reaches its limit; then save the trained model, completing the network training process.
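A minimal numerical sketch of the SSIM objective of step C1 follows. It uses a single-window simplification (production implementations compute SSIM over an 11×11 Gaussian sliding window), and the negative-mean-SSIM form of the loss is an assumption consistent with minimizing L while driving SSIM toward 1.

```python
import numpy as np

def ssim_global(x, y, C1=0.01 ** 2, C2=0.03 ** 2):
    """Single-window SSIM between two images with values in [0, 1]:
    luminance term (means) times contrast/structure term (variances, covariance)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

def ssim_loss(pred, target):
    # minimizing -SSIM pushes the predicted patch toward the clean patch
    return -ssim_global(pred, target)

img = np.random.default_rng(0).random((64, 64))
# a patch compared with itself has SSIM = 1, i.e. loss = -1
```

SSIM is bounded above by 1 (attained only for identical inputs), so the loss L is bounded below by −1 and its convergence toward that bound is the stopping signal used in step C2.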
Compared with the prior art, the present invention has the following beneficial effects. Based on the idea of divide and conquer, the method decomposes the raindrop removal task into a loop-iteration process and designs a multi-stage convolutional neural network that removes raindrops cyclically, achieving better image restoration results. It adopts recent smoothed dilated convolution and self-calibrated convolution modules to aggregate spatial context information better and to eliminate the artifacts that ordinary dilated convolution produces during raindrop removal. The method builds a dedicated convolutional neural network for image raindrop removal that preserves image quality after removal while keeping the network parameters smaller than those of other methods, giving it high practical value.
Description of Drawings
Fig. 1 is a flowchart of the implementation of the method of the present invention.
Fig. 2 is a structural diagram of the model of the single-image raindrop removal method based on a loop iteration mechanism in an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention are described in detail below with reference to the accompanying drawings.
本发明提供了一种基于循环迭代机制的单幅图像雨滴去除方法,包括以下步骤:The invention provides a method for removing raindrops from a single image based on a loop iteration mechanism, comprising the following steps:
步骤A、对原始附着雨滴退化图像和干净图像的训练图像对进行预处理,得到原始附着雨滴退化图像和干净图像的训练图像对组成的图像块数据集;Step A, preprocessing the training image pair of the original attached raindrop degraded image and the clean image to obtain an image block dataset composed of the original attached raindrop degraded image and the training image pair of the clean image;
步骤B、基于“分而治之”的思想,利用不断迭代去雨的动机,设计一个单幅图像雨滴去除的卷积神经网络;Step B. Based on the idea of "divide and conquer", design a convolutional neural network for raindrop removal from a single image by using the motivation of continuous iterative rain removal;
步骤C、设计一个用于优化网络的目标损失函数loss,以图像块数据集为训练数据,根据所设计的目标损失函数loss,利用反向传播方法计算所设计的单幅图像雨滴去除卷积神经网络中各参数的梯度,并利用随机梯度下降方法更新参数,最终学习到单幅图像雨滴去除卷积神经网络的最优参数;Step C. Design a target loss function loss for optimizing the network, take the image block data set as training data, and use the back propagation method to calculate the designed single image raindrop removal convolution neural network according to the designed target loss function loss. The gradient of each parameter in the network, and use the stochastic gradient descent method to update the parameters, and finally learn the optimal parameters of the single image raindrop removal convolutional neural network;
步骤D、将待测图像输入到所设计的单幅图像雨滴去除卷积神经网络,利用训练好的单幅图像雨滴去除卷积神经网络预测生成雨滴去除之后的干净图像。Step D: Input the image to be tested into the designed single image raindrop removal convolutional neural network, and use the trained single image raindrop removal convolutional neural network to predict and generate a clean image after raindrop removal.
以下为本发明的具体实现过程。The following is the specific implementation process of the present invention.
如图1所示,一种基于循环迭代机制的单幅图像雨滴去除方法,包括以下步骤:As shown in Figure 1, a method for removing raindrops from a single image based on a loop iteration mechanism includes the following steps:
步骤A、对原始附着雨滴退化图像和干净图像的训练图像对进行预处理,得到原始附着雨滴退化图像和干净图像的训练图像对组成的图像块数据集;Step A, preprocessing the training image pair of the original attached raindrop degraded image and the clean image to obtain an image block dataset composed of the original attached raindrop degraded image and the training image pair of the clean image;
步骤B、基于“分而治之”的思想,利用不断迭代去雨的动机,设计一个单幅图像雨滴去除的卷积神经网络;Step B. Based on the idea of "divide and conquer", design a convolutional neural network for raindrop removal from a single image by using the motivation of continuous iterative rain removal;
步骤C、设计一个用于优化网络的目标损失函数loss,以图像块数据集为训练数据,根据所设计的目标损失函数loss,利用反向传播方法计算所设计的单幅图像雨滴去除卷积神经网络中各参数的梯度,并利用随机梯度下降方法更新参数,最终学习到单幅图像雨滴去除卷积神经网络的最优参数;Step C. Design a target loss function loss for optimizing the network, take the image block data set as training data, and use the back propagation method to calculate the designed single image raindrop removal convolution neural network according to the designed target loss function loss. The gradient of each parameter in the network, and use the stochastic gradient descent method to update the parameters, and finally learn the optimal parameters of the single image raindrop removal convolutional neural network;
步骤D、将待测图像输入到所设计的单幅图像雨滴去除卷积神经网络,利用训练好的单幅图像雨滴去除卷积神经网络预测生成雨滴去除之后的干净图像。Step D: Input the image to be tested into the designed single image raindrop removal convolutional neural network, and use the trained single image raindrop removal convolutional neural network to predict and generate a clean image after raindrop removal.
进一步地,所述步骤A包括以下步骤:Further, the step A includes the following steps:
步骤A1:将原始附着雨滴退化图像和其对应的干净图像按照一致的方式进行切块,得到W×W尺寸的图像块,同时为了避免重叠切块,每隔m个像素点进行切块。切块后W×W的附着雨滴图像块和W×W的干净图像块一一对应。Step A1: Divide the original degraded image with attached raindrops and its corresponding clean image into blocks in a consistent manner to obtain image blocks of W×W size. At the same time, in order to avoid overlapping dicing, dicing is performed every m pixels. After dicing, the W×W attached raindrop image blocks correspond to the W×W clean image blocks one-to-one.
进一步地,所述步骤B包括以下步骤:Further, the step B includes the following steps:
步骤B1、设计一个多阶段的雨滴去除网络,该网络是基于循环迭代机制的卷积神经网络,具体由多个网络结构相同并且网络参数共享的阶段子网络构成;Step B1, designing a multi-stage raindrop removal network, which is a convolutional neural network based on a loop iteration mechanism, and is specifically composed of multiple stage sub-networks with the same network structure and shared network parameters;
步骤B2、设计阶段子网络,用于提取雨滴的相关特征,以更好地去除雨滴;Step B2, a sub-network in the design stage, which is used to extract the relevant features of raindrops to better remove raindrops;
步骤B3、设计阶段子网络中的上下文聚合模块和注意力上下文聚合模块,用于聚合其网络中缺乏的空间上下文信息;Step B3, the context aggregation module and the attention context aggregation module in the sub-network in the design stage are used to aggregate the spatial context information lacking in its network;
进一步地,所述步骤B2包括以下步骤:Further, the step B2 includes the following steps:
步骤B21、将上一阶段子网络得到的雨滴去除后的图像块与其对应的原始附着雨滴图像块在通道上拼接的结果作为每个阶段子网络的输入。注意,对于第一个阶段的子网络,其输入则为两张原始附着雨滴图像块在通道上拼接的结果;Step B21 , the result of splicing the image block after the raindrop removal obtained by the sub-network in the previous stage and the corresponding original attached raindrop image block on the channel is used as the input of the sub-network in each stage. Note that for the sub-network in the first stage, its input is the result of splicing two original attached raindrop image patches on the channel;
步骤B22、将步骤B21得到的拼接后的结果输入到一个激活函数为ReLU的卷积层,进行图像到特征图的转换,按如下公式输出特征:Step B22, input the spliced result obtained in step B21 into a convolutional layer whose activation function is ReLU, convert the image to the feature map, and output the feature according to the following formula:
其中,Conv1代表激活函数为ReLU的卷积层,Io为原始附着雨滴图像块,It-1代表上一阶段得到的雨滴去除后的图像块,对于第一阶段,It-1为Io,代表按照通道拼接特征操作,F0代表所提取的特征图。Among them, Conv1 represents the convolutional layer whose activation function is ReLU, I o is the original attached raindrop image block, I t-1 represents the image block obtained in the previous stage after raindrop removal, for the first stage, I t-1 is I o , represents the operation according to the channel stitching feature, and F 0 represents the extracted feature map.
步骤B23、将特征图F0输入到一个卷积长短期记忆网络模块,由遗忘门f、输入门i及输出门o构成,按如下公式计算:Step B23: Input the feature map F 0 into a convolutional long short-term memory network module, which is composed of a forgetting gate f, an input gate i and an output gate o, and is calculated according to the following formula:
ft=σ(Wxf*F0+Whf*Ht-1+Wcf⊙Ct-1+bf)f t =σ(W xf *F 0 +W hf *H t-1 +W cf ⊙C t-1 +b f )
it=σ(Wxi*F0+Whi*Ht-1+Wci⊙Ct-1+bi)i t =σ(W xi *F 0 +W hi *H t-1 +W ci ⊙C t-1 +b i )
Ct=ft⊙Ct-1+it⊙tanh(Wxc*F0+Whc*Ht-1+bc)C t =f t ⊙C t-1 +i t ⊙tanh(W xc *F 0 +W hc *H t-1 +b c )
ot=σ(Wxo*F0+Who*Ht-1+Wco⊙Ct+bo)o t =σ(W xo *F 0 +W ho *H t-1 +W co ⊙C t +b o )
F1=Ht=ot⊙tanh(Ct)F 1 =H t =o t ⊙tanh(C t )
其中,t时刻遗忘门ft和输入门it的输入都是由特征图F0、上一时刻(即t-1)卷积长短期记忆网络模块的输出Ht-1及上一时刻细胞信息状态Ct-1这三部分构成的,t时刻输出门ot的输入则由特征图F0、上一时刻(即t-1)卷积长短期记忆网络模块的输出Ht-1及t时刻细胞信息状态Ct这三部分构成。W*和b*分别为其对应卷积核的权重参数和偏差参数,tanh表示正切函数,σ表示Sigmoid函数,运算符*表示卷积操作,运算符⊙表示点乘操作。Ct为当前t时刻的细胞信息状态,该细胞信息状态会馈送到下一时刻的卷积长短期记忆网络模块,Ht表示当前t时刻的卷积长短期记忆网络模块输出的特征图。为了方便接下来方法的描述,我们将Ht记为F1。Among them, the input of the forget gate f t and the input gate it at time t are the feature map F 0 , the output H t-1 of the convolutional long short-term memory network module at the previous time (ie t-1) and the cell at the previous time. The information state C t-1 is composed of these three parts. The input of the output gate o t at time t is composed of the feature map F 0 , the output H t-1 of the convolutional long short-term memory network module at the previous time (ie t-1), and The cell information state C t at time t is composed of three parts. W * and b * are the weight parameters and bias parameters of their corresponding convolution kernels, respectively, tanh represents the tangent function, σ represents the Sigmoid function, operator * represents the convolution operation, and operator ⊙ represents the dot product operation. C t is the cell information state at the current time t, which will be fed to the convolutional long-term and short-term memory network module at the next time, and H t represents the feature map output by the convolutional long-term and short-term memory network module at the current time t. To facilitate the description of the following method, we denote H t as F 1 .
Our method adopts a multi-stage scheme, so each time step described above corresponds to one stage of the network. Note that the first stage of the network has no previous stage, so the previous-stage inputs to its forget gate and input gate are set to 0.
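As an illustrative sketch only (not the patent's implementation), the gate equations of step B23 can be written directly in NumPy. Here the convolutions with the W_* kernels are replaced by scalar multiplications so the recurrence stays readable; a real ConvLSTM would apply learned convolution kernels to the feature maps instead.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def convlstm_step(F0, H_prev, C_prev, W, b):
    """One step of the gate equations from step B23.
    Convolutions (*) are stubbed as scalar products for illustration."""
    f = sigmoid(W["xf"] * F0 + W["hf"] * H_prev + W["cf"] * C_prev + b["f"])
    i = sigmoid(W["xi"] * F0 + W["hi"] * H_prev + W["ci"] * C_prev + b["i"])
    C = f * C_prev + i * np.tanh(W["xc"] * F0 + W["hc"] * H_prev + b["c"])
    o = sigmoid(W["xo"] * F0 + W["ho"] * H_prev + W["co"] * C + b["o"])
    H = o * np.tanh(C)       # F1 = H_t = o_t ⊙ tanh(C_t)
    return H, C

# First stage: the previous-stage inputs are zero, as noted in the text.
F0 = np.random.default_rng(0).standard_normal((4, 4))
W = {k: 0.5 for k in ["xf", "hf", "cf", "xi", "hi", "ci",
                      "xc", "hc", "xo", "ho", "co"]}
b = {k: 0.0 for k in ["f", "i", "c", "o"]}
H, C = convlstm_step(F0, np.zeros_like(F0), np.zeros_like(F0), W, b)
print(H.shape)  # (4, 4)
```

Because H is the product of a sigmoid gate and a tanh, every entry stays strictly inside (-1, 1), regardless of the input scale.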
Step B24: Input the output F_1 of the ConvLSTM module into the designed sequence of context aggregation modules and attention context aggregation modules, in the order: context aggregation module with dilation rate 2 → attention context aggregation module with dilation rate 2 → context aggregation module with dilation rate 2 → context aggregation module with dilation rate 4 → attention context aggregation module with dilation rate 4 → context aggregation module with dilation rate 4, computed as:

F_2 = CAU_4(SECAU_4(CAU_4(CAU_2(SECAU_2(CAU_2(F_1))))))

where CAU_r(·) denotes the context aggregation module with dilation rate r, and SECAU_r(·) denotes the attention context aggregation module with dilation rate r.
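The nested notation above applies the innermost module first. As a minimal sketch with the modules stubbed out (the real CAU/SECAU are defined in step B3), the application order can be made explicit:

```python
from functools import reduce

# Stubs standing in for CAU_r / SECAU_r; each just records its name so
# the left-to-right application order of the nested formula is visible.
def cau(r):
    return lambda trace: trace + [f"CAU{r}"]

def secau(r):
    return lambda trace: trace + [f"SECAU{r}"]

# F2 = CAU4(SECAU4(CAU4(CAU2(SECAU2(CAU2(F1)))))) — innermost first.
pipeline = [cau(2), secau(2), cau(2), cau(4), secau(4), cau(4)]
trace = reduce(lambda x, module: module(x), pipeline, [])
print(trace)  # ['CAU2', 'SECAU2', 'CAU2', 'CAU4', 'SECAU4', 'CAU4']
```

In a real implementation each stub would be replaced by the module of step B3 operating on feature maps.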
Step B25: Feed the output F_2 of step B24 into a standard residual module, then into a convolutional layer with ReLU activation to convert the feature map back into an image, producing the 3-channel raindrop-removed image of the current-stage sub-network t:

I_t = Conv_2(Res(F_2))

where Res(·) denotes the standard residual module, Conv_2 denotes the convolutional layer with ReLU activation, and I_t is the raindrop-removed image of the current-stage sub-network t.
Further, step B3 comprises the following steps:
Step B31: In the context aggregation module, the input feature F is first fed into a smoothed dilated convolution module:

F_3 = Dilated_r(Sep(F))

where F_3 is the output feature of the smoothed dilated convolution module, F is the input of the context aggregation module, Sep(·) is a separable shared convolution layer, i.e. a channel-wise separable convolution whose parameters are shared across all channels, and Dilated_r(·) is a dilated (atrous) convolution. The dilation rate r enlarges the receptive field, effectively aggregating spatial context information for better feature extraction. The dilation rate r indicates how many zeros are inserted between adjacent elements of the convolution kernel: when r = 1, the dilated convolution is identical to an ordinary convolution, with kernel elements adjacent to one another and no zeros between them; when r > 1, r-1 zeros are inserted between adjacent kernel elements to enlarge the receptive field. The dilation rate r mentioned in step B24 is exactly the dilation rate of this dilated convolution.
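The zero-insertion just described can be shown concretely. This sketch only constructs the dilated kernel to illustrate how the effective size grows to k + (k-1)(r-1); it is not the patent's smoothed dilated convolution itself.

```python
import numpy as np

def dilate_kernel(kernel, r):
    """Insert r-1 zeros between kernel elements, as described in step B31.
    For a k x k kernel the effective size becomes k + (k-1)*(r-1)."""
    k = kernel.shape[0]
    k_eff = k + (k - 1) * (r - 1)
    out = np.zeros((k_eff, k_eff))
    out[::r, ::r] = kernel       # original weights land r apart
    return out

k = np.ones((3, 3))
print(dilate_kernel(k, 1).shape)  # (3, 3) — identical to ordinary convolution
print(dilate_kernel(k, 2).shape)  # (5, 5) — receptive field enlarged
```

Note that the number of nonzero weights stays at 9; only their spacing (and hence the receptive field) changes.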
The only difference between the attention context aggregation module and the context aggregation module lies in this step: the attention context aggregation module adds a channel attention module, and all subsequent steps are identical. The attention context aggregation module is computed as:

F_3 = SE(Dilated_r(Sep(F)))

where SE(·) denotes the channel attention module.
Step B32: In both the attention context aggregation module and the context aggregation module, the feature F_3 output by step B31 is fed into a residual module built from a self-calibrated convolution:

F_4 = LeakyReLU(F_3 + SCC(F_3))

where F_4 is the output of this residual module, which consists of a self-calibrated convolution, a LeakyReLU function and a residual connection. LeakyReLU(·) is defined as:

LeakyReLU(x) = x, if x ≥ 0; a·x, otherwise

where x is the input value of the LeakyReLU function and a is a fixed linear coefficient.
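A one-line sketch of this activation; the coefficient a = 0.01 is a common default, as the patent leaves the constant unspecified:

```python
import numpy as np

def leaky_relu(x, a=0.01):
    """LeakyReLU as defined above: x for x >= 0, a*x for x < 0."""
    return np.where(x >= 0, x, a * x)

out = leaky_relu(np.array([-2.0, 0.0, 3.0]))
print(out)  # negative inputs are scaled by a, non-negative pass through
```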
SCC(·) is the self-calibrated convolution, defined as follows:

First, the output feature F_3 of step B31 is fed into a 1×1 convolutional layer without an activation function:

X_1, X_2 = Conv_{1×1}(F_3)

where Conv_{1×1} is a 1×1 convolutional layer and X_1, X_2 are the feature maps whose channel count is halved by the 1×1 convolution; i.e., if F_3 has C channels, then X_1 and X_2 each have C/2 channels.
X_1 and X_2 are then sent to their respective branches, with X_1 entering the self-calibration branch, computed as:

T_1 = AvgPool_r(X_1)

X'_1 = Up(T_1 * K_2)

Y'_1 = (X_1 * K_3) ⊙ σ(X_1 + X'_1)

Y_1 = Y'_1 * K_4

where AvgPool_r(·) is average pooling with stride r, Up(·) is the upsampling operation, * is the convolution operation, ⊙ is the elementwise multiplication operator, + is the elementwise addition operator, and σ is the sigmoid activation function. K_2, K_3 and K_4 are convolution kernels of identical size. Y_1 is the output of the self-calibration branch.
Meanwhile, X_2 is sent to the corresponding convolution branch, computed as:

Y_2 = X_2 * K_1

Finally, the outputs of the two branches are concatenated along the channel dimension, restoring the channel count to the original number of channels C of the input feature map:

Y = Concat(Y_1, Y_2)

where Concat(·,·) is the channel concatenation operation and Y is the output of the self-calibrated convolution module.
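A shape-level sketch of the two-branch structure above, with the convolutions K_1 to K_4 stubbed as identity, pooling and upsampling done via reshape/repeat, and H and W assumed divisible by r. It is meant only to show the channel split, the sigmoid calibration gate, and the concatenation that restores the channel count:

```python
import numpy as np

def scc_sketch(F3, r=2):
    """Shape-level sketch of the self-calibrated convolution (step B32).
    Convolutions with K1..K4 are stubbed as identity for clarity."""
    C, H, W = F3.shape
    X1, X2 = F3[: C // 2], F3[C // 2 :]            # 1x1 conv halves channels

    # Self-calibration branch
    T1 = F3[: C // 2].reshape(C // 2, H // r, r, W // r, r).mean(axis=(2, 4))
    X1_up = np.repeat(np.repeat(T1, r, axis=1), r, axis=2)   # Up(T1 * K2)
    gate = 1.0 / (1.0 + np.exp(-(X1 + X1_up)))               # σ(X1 + X'1)
    Y1 = X1 * gate                                 # (X1*K3) ⊙ gate, then *K4

    Y2 = X2                                        # plain branch: X2 * K1
    return np.concatenate([Y1, Y2], axis=0)        # channel concat back to C

F3 = np.random.default_rng(1).standard_normal((8, 4, 4))
print(scc_sketch(F3).shape)  # (8, 4, 4) — channel count restored
```

The low-resolution pooled path lets each position modulate itself with context from a larger neighborhood, which is the point of the self-calibration design.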
Further, step C comprises the following steps:
Step C1: We use a common structural-similarity loss as the constraint to optimize our network model, of the form:

L = −∑_{i=1}^{N} SSIM(Ŷ_i, Y_i)

where SSIM denotes the structural similarity loss function. Given training image pairs (X_i, Y_i), where i = 1, …, N (the total number of training samples), X_i is an image patch of an input image corrupted by raindrops, Y_i is the corresponding clean image patch, and Ŷ_i denotes the clean, raindrop-removed patch predicted by the network for the training pair (X_i, Y_i).
Step C2: Randomly divide the image-patch data set into several batches, each containing the same number of image patches, and train and optimize the designed network until the L value computed in step C1 converges below a threshold or the number of iterations reaches a threshold; then save the trained model, completing the network training process.
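The batching in step C2 can be sketched as follows; the file-name pairs are placeholders, and dropping the final incomplete batch is one common convention (the patent only requires equal-sized batches):

```python
import random

def make_batches(pairs, batch_size, seed=0):
    """Step C2: shuffle the (rainy, clean) patch pairs and split them
    into equal-sized batches; a trailing incomplete batch is dropped."""
    rng = random.Random(seed)
    pairs = pairs[:]              # copy so the caller's list is untouched
    rng.shuffle(pairs)
    return [pairs[i : i + batch_size]
            for i in range(0, len(pairs) - batch_size + 1, batch_size)]

# Hypothetical patch pairs, for illustration only.
pairs = [(f"rain_{i}.png", f"clean_{i}.png") for i in range(10)]
batches = make_batches(pairs, batch_size=4)
print(len(batches), len(batches[0]))  # 2 4
```

In the full training loop, each epoch would reshuffle, run the network on every batch, and stop once the loss or iteration threshold of step C2 is met.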
The above are preferred embodiments of the present invention. All changes made according to the technical solution of the present invention, whose resulting functional effects do not exceed the scope of the technical solution of the present invention, fall within the protection scope of the present invention.
Claims (3)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110134465.6A CN112767280B (en) | 2021-02-01 | 2021-02-01 | Single image raindrop removing method based on loop iteration mechanism |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN112767280A CN112767280A (en) | 2021-05-07 |
| CN112767280B true CN112767280B (en) | 2022-06-14 |
Family
ID=75704404
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202110134465.6A Active CN112767280B (en) | 2021-02-01 | 2021-02-01 | Single image raindrop removing method based on loop iteration mechanism |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN112767280B (en) |
Families Citing this family (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113450288B (en) * | 2021-08-04 | 2022-09-06 | 广东工业大学 | Single image rain removing method and system based on deep convolutional neural network and storage medium |
| CN113610329B (en) * | 2021-10-08 | 2022-01-04 | 南京信息工程大学 | A short-term now-rainfall forecasting method based on a two-stream convolutional long short-term memory network |
| CN113920029B (en) * | 2021-10-20 | 2024-11-05 | 常州微亿智造科技有限公司 | Rain line removal device in industrial inspection |
| CN116152074B (en) * | 2021-11-19 | 2025-12-16 | 重庆大学 | Lightweight image rain removing algorithm based on deep learning |
| CN115797213A (en) * | 2022-12-09 | 2023-03-14 | 武汉大学 | Region-aware loop iterative raindrop removal method |
| CN118864287B (en) * | 2024-09-23 | 2024-11-29 | 华侨大学 | Progressive single image rain and snow removal method, device and readable medium |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10304193B1 (en) * | 2018-08-17 | 2019-05-28 | 12 Sigma Technologies | Image segmentation and object detection using fully convolutional neural network |
| CN111861925A (en) * | 2020-07-24 | 2020-10-30 | 南京信息工程大学滨江学院 | An Image Rain Removal Method Based on Attention Mechanism and Gated Recurrent Unit |
| CN112085678A (en) * | 2020-09-04 | 2020-12-15 | 国网福建省电力有限公司检修分公司 | A method and system for removing raindrops from patrol images of power equipment |
| CN112132756A (en) * | 2019-06-24 | 2020-12-25 | 华北电力大学(保定) | Attention mechanism-based single raindrop image enhancement method |
| CN112184566A (en) * | 2020-08-27 | 2021-01-05 | 北京大学 | An image processing method and system for removing attached water mist and water droplets |
| CN112184573A (en) * | 2020-09-15 | 2021-01-05 | 西安理工大学 | Context aggregation residual single image rain removing method based on convolutional neural network |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10803378B2 (en) * | 2017-03-15 | 2020-10-13 | Samsung Electronics Co., Ltd | System and method for designing efficient super resolution deep convolutional neural networks by cascade network training, cascade network trimming, and dilated convolutions |
-
2021
- 2021-02-01 CN CN202110134465.6A patent/CN112767280B/en active Active
Non-Patent Citations (3)
| Title |
|---|
| Joint Rain Detection and Removal from a Single Image with Contextualized Deep Networks;Wenhan Yang et al.;《 IEEE Transactions on Pattern Analysis and Machine Intelligence》;20200601;第42卷(第6期);全文 * |
| Removing Rain in Videos: A Large-Scale Database and a Two-Stream ConvLSTM Approach;Tie Liu et al.;《2019 IEEE International Conference on Multimedia and Expo (ICME)》;20190805;全文 * |
| Research on a dual-LSTM rain removal algorithm for light-field images; Ding Yuyang et al.; Computer Engineering and Applications; 2020, No. 18; full text * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN112767280A (en) | 2021-05-07 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||























