
CN113269763A - Underwater image definition recovery method based on depth image recovery and brightness estimation - Google Patents


Info

Publication number
CN113269763A
CN113269763A
Authority
CN
China
Prior art keywords
image
depth
underwater
restoration
brightness
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110620221.9A
Other languages
Chinese (zh)
Other versions
CN113269763B (en)
Inventor
张维石
杨彤雨
周景春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Maritime University filed Critical Dalian Maritime University
Priority to CN202110620221.9A priority Critical patent/CN113269763B/en
Publication of CN113269763A publication Critical patent/CN113269763A/en
Application granted granted Critical
Publication of CN113269763B publication Critical patent/CN113269763B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/0002: Inspection of images, e.g. flaw detection
    • G06T5/00: Image enhancement or restoration
    • G06T5/73: Deblurring; Sharpening
    • G06T5/90: Dynamic range modification of images or parts thereof
    • G06T7/10: Segmentation; Edge detection
    • G06T7/11: Region-based segmentation
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image
    • G06T2207/20: Special algorithmic details
    • G06T2207/20021: Dividing image into blocks, subimages or windows
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/30: Assessment of water resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)

Abstract



The invention provides an underwater image clarity restoration method based on depth map restoration and brightness estimation. The method comprises the following steps. First, equalization is applied to the underwater image. Second, a monocular depth estimation model estimates the relative depth of the equalized image; an image segmentation strategy then isolates the portion of the background whose depth was estimated incorrectly, and that portion is re-estimated. Guided filtering smooths the re-estimated region, and a depth normalization operation converts the relative depth into absolute depth. The image pixels are divided into multiple equal intervals by depth value, and the potential minimum pixels of the degraded image are searched within each interval. Using an underwater image formation model, the parameters are fitted per channel and the backscatter is estimated and removed. Finally, an automatic brightness estimation method selects the optimal brightness parameter, which is applied to the backscatter-free image to adjust brightness and remove the underwater color cast.


Description

Underwater Image Clarity Restoration Method Based on Depth Map Restoration and Brightness Estimation

Technical Field

The invention relates to the technical field of image processing, and in particular to an underwater image clarity restoration method based on depth map restoration and brightness estimation.

Background Art

Underwater images play an important role in marine geological exploration, marine ecological protection, and marine resource development. However, because water and suspended particles absorb and scatter light, underwater images and videos commonly suffer from low contrast, low clarity, and poor color. Severely degraded underwater images have little practical value. To improve underwater image quality, two families of methods are commonly used: underwater image enhancement and underwater image restoration. In recent years, with the rapid progress of deep learning in image processing, restoring the clarity of underwater images with deep learning has also become a popular research direction.

Underwater image enhancement methods mainly use filters, histogram equalization, and similar techniques to restore image color and saturation. Although such methods can improve the visual quality of an image, they ignore the relationship between the degree of degradation and scene depth, and therefore cannot recover the true colors of the scene. Underwater image restoration methods instead invert the degradation process through an underwater imaging model. They typically rely on prior knowledge to solve for the model parameters, but prior-based restoration fails when the chosen prior does not match the target scene. Deep-learning-based methods are still immature. On the one hand, their parameter estimates are fixed after training, so they lack flexibility in complex underwater environments: when a new underwater image type differs from the water types in the training set, the trained model may not produce satisfactory results. On the other hand, the limitations of deep learning itself, such as the large number of parameters needed to learn complex mapping functions and the difficulty of assembling a suitable training set, also restrict its practical value.

The underwater image clarity restoration method based on depth map restoration and brightness estimation proposed here not only accounts for the influence of imaging range on underwater image degradation but also uses a more accurate imaging model. This ensures stable and better restoration results when processing underwater images from different water types. In addition, the color correction it employs yields significant improvements in brightness, chroma, and contrast.

Summary of the Invention

In view of the technical problems described above, the invention provides an underwater image clarity restoration method based on depth map restoration and brightness estimation. The invention uses the refined depth map and the minimum pixels to estimate and remove backscatter according to a physical underwater imaging model, automatically selects the optimal brightness parameter via information entropy, and adjusts the overall brightness of the image to obtain a vividly colored underwater image.

The technical means adopted by the present invention are as follows:

An underwater image clarity restoration method based on depth map restoration and brightness estimation, comprising the following steps:

Step S01: stretch the contrast of the original RGB image so that its minimum and maximum pixel values are 0 and 255, respectively, and use a monocular depth estimation model to estimate the relative depth map of the contrast-stretched image.

Step S02: use a segmentation method to isolate the incorrectly estimated background region of the contrast-stretched image, re-estimate the depth of that region, and smooth the re-estimated region with guided filtering.

Step S03: select depth upper and lower bounds according to the actual depth of the scene and perform depth normalization to obtain the absolute depth map of the original RGB image.

Step S04: divide the pixels of the original RGB image into 10 equal groups by depth value in ascending order; within each group, sort the pixels by the sum of their RGB values in ascending order and take the first 200 pixels.

Step S05: according to the underwater image formation model, use the minimum pixels and their depth values to fit, per channel, the values of A_c, β_c^b, J_c, and β_c^D, where A_c denotes the atmospheric light, β_c^b the backscattering coefficient, J_c the undegraded underwater image, and β_c^D the bandwidth coefficient.

Step S06: using the underwater image formation model and the fitted parameter values A_c, β_c^b, J_c, and β_c^D, estimate and remove the backscatter.

Step S07: on the backscatter-free image, sort the pixel values of the three channels separately; among the pixels between the 0.5th and 2nd percentiles, take values at intervals of 0.15 as candidate brightness parameters.

Step S08: each brightness parameter corresponds to an image enhanced with that parameter; select the image with the highest information entropy as the final enhancement result.

Further, the image contrast stretching formula in step S01 is:

y = 255 × (x − x_min) / (x_max − x_min)

where x_min and x_max denote the minimum and maximum pixel values in the original image, x denotes each pixel of the image, and y denotes the contrast-stretched result.
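As an illustrative sketch of this stretch (the patent gives no code; the function and variable names here are our own, and the stretch is applied to each channel independently):

```python
def stretch_contrast(pixels):
    """Min-max stretch one channel so its minimum maps to 0 and its maximum to 255.

    pixels: flat list of intensity values from a single channel.
    """
    x_min, x_max = min(pixels), max(pixels)
    if x_max == x_min:
        return [0.0 for _ in pixels]  # flat channel: nothing to stretch
    # y = 255 * (x - x_min) / (x_max - x_min)
    return [255.0 * (x - x_min) / (x_max - x_min) for x in pixels]
```

Applying this independently to R, G, and B guarantees the 0 and 255 extremes required in step S01.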

Further, the image segmentation method in step S02 uses the Mahalanobis distance to measure the similarity between points in RGB space. For any point z in RGB space and the mean color m, the Mahalanobis distance D(z, m) is given by:

D(z, m) = [(z − m)^T C^(−1) (z − m)]^(1/2)

where C denotes the covariance matrix of the selected samples.
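A minimal pure-Python sketch of this distance (names are ours; in practice C is the covariance of selected background samples and z, m are RGB triples):

```python
import math

def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting (A is n x n)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def mahalanobis(z, m, C):
    """D(z, m) = sqrt((z - m)^T C^(-1) (z - m))."""
    d = [zi - mi for zi, mi in zip(z, m)]
    u = solve_linear(C, d)  # u = C^(-1) (z - m), without forming the inverse explicitly
    return math.sqrt(sum(di * ui for di, ui in zip(d, u)))
```

Solving the linear system instead of inverting C is the standard numerically stable choice.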

Further, in step S03, each relative depth value x in the original depth map is converted into an absolute depth value y by a linear transformation:

y = (x − x_min) / (x_max − x_min) × (y_max − y_min) + y_min

where x_min and x_max denote the minimum and maximum depth values in the original depth map, and y_min and y_max denote the minimum and maximum of the target depth range.
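The linear depth conversion can be sketched as follows (a sketch; the 0.5 to 10 m default range is an illustrative choice of ours, not a value from the patent):

```python
def to_absolute_depth(rel_depths, y_min=0.5, y_max=10.0):
    """Linearly map relative depth values onto the absolute range [y_min, y_max] (meters).

    rel_depths: relative depth values produced by the monocular estimator.
    """
    x_min, x_max = min(rel_depths), max(rel_depths)
    scale = (y_max - y_min) / (x_max - x_min)
    return [(x - x_min) * scale + y_min for x in rel_depths]
```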

Further, the underwater image formation model in step S05 is:

I_c = J_c · e^(−β_c^D · z) + A_c · (1 − e^(−β_c^b · z))

where I_c denotes the captured underwater image, z the scene depth, A_c the atmospheric light, β_c^b the backscattering coefficient, J_c the undegraded underwater image, and β_c^D the bandwidth coefficient (c ∈ {R, G, B}).

Further, in step S08 the optimal enhanced image is obtained automatically as:

D^r = argmax_i H(D_ri)

where H(D_ri) denotes the information entropy of D_ri, the image obtained by adjusting the brightness of D_c with the candidate parameter W_c^i; the candidates W_c^i are, after sorting the pixel values of the three channels, the pixel values between the 0.5th and 2nd percentiles of each channel, taken at intervals of 0.15; D_c is the image after backscatter removal; W_0 = 0.1; and D^r is the optimal enhanced image. The information entropy is computed as:

H = −Σ_i p_i · log2(p_i)

where i denotes the gray level of a pixel and p_i denotes the proportion of pixels with gray level i in the whole image.

Compared with the prior art, the present invention has the following advantages:

1. Regarding the color distortion produced by image enhancement methods and the large transmission estimation errors of traditional DCP-based methods, the invention starts from the degradation mechanism of underwater images and uses a new physical underwater imaging model to estimate backscatter. The de-scattering effect is pronounced, and the restoration result is close to the true, undegraded underwater scene.

2. The invention only needs the depth map of the image and does not need to estimate the transmission or the background light; compared with traditional restoration methods, it has lower complexity.

3. The invention has better practicality and robustness, and can handle many types of underwater images without problems such as over-enhancement or artifacts.

For the above reasons, the present invention can be widely applied in image processing and related fields.

Brief Description of the Drawings

To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.

Fig. 1 is a flowchart of the underwater image clarity restoration method of the present invention.

Fig. 2 compares the present invention with other underwater image restoration methods in a near-coast scene. Fig. 2-1 is the original underwater image (marine life); Fig. 2-2 is the result of the Li et al. UWCNN method; Fig. 2-3 is the result of the Peng et al. GDCP method; Fig. 2-4 is the result of the Peng et al. IBLA method; Fig. 2-5 is the result of the method of the present invention.

Fig. 3 compares the present invention with other underwater image methods in turbid water. Fig. 3-1 is the original underwater image (sea turtle); Fig. 3-2 is the result of the Li et al. UWCNN method; Fig. 3-3 is the result of the Peng et al. GDCP method; Fig. 3-4 is the result of the Peng et al. IBLA method; Fig. 3-5 is the result of the method of the present invention.

Detailed Description of the Embodiments

To help those skilled in the art better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on these embodiments, all other embodiments obtained by persons of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

It should be noted that the terms "first", "second", and the like in the description, the claims, and the drawings are used to distinguish similar objects and do not necessarily describe a specific order or sequence. Data so labeled may be interchanged where appropriate, so that the embodiments described here can be practiced in orders other than those illustrated. Furthermore, the terms "comprising" and "having" and their variants are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not limited to the steps or units expressly listed, but may include other steps or units not expressly listed or inherent to the process, method, product, or device.

Embodiment 1

As shown in Fig. 1, the present invention provides an underwater image clarity restoration method based on depth map restoration and brightness estimation, comprising the following steps:

Step S01: stretch the contrast of the original RGB image so that its minimum and maximum pixel values are 0 and 255, respectively, and use a monocular depth estimation model to estimate the relative depth map of the contrast-stretched image. The contrast stretching formula is:

y = 255 × (x − x_min) / (x_max − x_min)

where x_min and x_max denote the minimum and maximum pixel values in the original image, x denotes each pixel of the image, and y denotes the contrast-stretched result.

Step S02: use a segmentation method to isolate the incorrectly estimated background region of the contrast-stretched image, re-estimate its depth, and smooth the re-estimated region with guided filtering. The segmentation uses the Mahalanobis distance to measure the similarity between points in RGB space; for any point z in RGB space and the mean color m, the Mahalanobis distance D(z, m) is:

D(z, m) = [(z − m)^T C^(−1) (z − m)]^(1/2)

where C denotes the covariance matrix of the selected samples. Since the background region of an image is the region farthest from the camera, the depth of the sample points in the segmented region is re-estimated as the depth value at the 1st percentile of the original depth map sorted in ascending order.

Step S03: according to the actual depth of the scene, select approximate depth upper and lower bounds (within 0 to 20 meters) and perform depth normalization to obtain the absolute depth map of the original RGB image. Each relative depth value x in the original depth map is converted into an absolute depth value y by a linear transformation:

y = (x − x_min) / (x_max − x_min) × (y_max − y_min) + y_min

where x_min and x_max denote the minimum and maximum depth values in the original depth map, and y_min and y_max denote the minimum and maximum of the target depth range, i.e., the true depth estimation range of the image (in meters).

Step S04: divide the pixels of the original RGB image into 10 equal groups by depth value in ascending order; within each group, sort the pixels by the sum of their RGB values in ascending order and take the first 200 pixels.
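Step S04 can be sketched as below (a sketch; the function name and the (r, g, b, depth) tuple layout are our own assumptions):

```python
def darkest_pixels_per_depth_group(pixels, n_groups=10, n_keep=200):
    """Split pixels into n_groups equal bins by ascending depth, then keep the
    n_keep pixels with the smallest R+G+B sum inside each bin.

    pixels: list of (r, g, b, depth) tuples.
    """
    by_depth = sorted(pixels, key=lambda p: p[3])
    size = len(by_depth) // n_groups
    groups = [by_depth[i * size:(i + 1) * size] for i in range(n_groups)]
    groups[-1].extend(by_depth[n_groups * size:])  # leftovers go to the deepest bin
    return [sorted(g, key=lambda p: p[0] + p[1] + p[2])[:n_keep] for g in groups]
```

The kept pixels are the per-depth-interval "minimum pixels" used for the backscatter fit in step S05.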

Step S05: according to the underwater image formation model, use the minimum pixels and their depth values to fit, per channel, the values of A_c, β_c^b, J_c, and β_c^D, where A_c denotes the atmospheric light, β_c^b the backscattering coefficient, J_c the undegraded underwater image, and β_c^D the bandwidth coefficient. The underwater image formation model is:

I_c = J_c · e^(−β_c^D · z) + A_c · (1 − e^(−β_c^b · z))

where I_c denotes the captured underwater image and z the scene depth.
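Given fitted per-channel parameters, the backscatter can be evaluated and subtracted as sketched below (a sketch consistent with the model above; the residual J·e^(−β_D·z) term reflects how a darkest-pixel backscatter fit is commonly posed, and all names are ours):

```python
import math

def backscatter(z, A, beta_b, J, beta_D):
    """Backscatter curve fitted to the darkest pixels at depth z for one channel:
    B(z) = A * (1 - exp(-beta_b * z)) + J * exp(-beta_D * z)."""
    return A * (1.0 - math.exp(-beta_b * z)) + J * math.exp(-beta_D * z)

def remove_backscatter(intensities, depths, A, beta_b, J, beta_D):
    """Subtract the modeled backscatter from each pixel of one channel, clamping at 0."""
    return [max(0.0, I - backscatter(z, A, beta_b, J, beta_D))
            for I, z in zip(intensities, depths)]
```

At z = 0 the curve reduces to the residual direct term, and for large z it saturates at A, matching the behavior of the formation model above.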

Step S06: using the underwater image formation model and the fitted parameter values A_c, β_c^b, J_c, and β_c^D, estimate and remove the backscatter.

Step S07: on the backscatter-free image, sort the pixel values of the three channels separately; among the pixels between the 0.5th and 2nd percentiles, take values at intervals of 0.15 as candidate brightness parameters.

Step S08: each brightness parameter corresponds to an image enhanced with that parameter; the image with the highest information entropy is selected as the final enhancement result. The optimal enhanced image is obtained automatically as:

D^r = argmax_i H(D_ri)

where H(D_ri) denotes the information entropy of D_ri, the image obtained by adjusting the brightness of D_c with the candidate parameter W_c^i; the candidates W_c^i are, after sorting the pixel values of the three channels, the pixel values between the 0.5th and 2nd percentiles of each channel, taken at intervals of 0.15; D_c is the image after backscatter removal; W_0 = 0.1; and D^r is the optimal enhanced image. The information entropy is computed as:

H = −Σ_i p_i · log2(p_i)

where i denotes the gray level of a pixel and p_i denotes the proportion of pixels with gray level i in the whole image.
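The entropy criterion can be sketched as follows (names are ours; candidate images are represented as flat lists of gray levels):

```python
import math

def entropy(gray_levels):
    """Shannon entropy H = -sum_i p_i * log2(p_i) over the gray-level histogram."""
    n = len(gray_levels)
    counts = {}
    for g in gray_levels:
        counts[g] = counts.get(g, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def pick_most_informative(candidates):
    """Return the candidate image (a list of gray levels) with the highest entropy."""
    return max(candidates, key=entropy)
```

A uniform image has zero entropy, so the selection naturally favors candidates whose brightness adjustment spreads the histogram rather than saturating it.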

To verify the de-scattering effectiveness of the present invention, underwater images of different scenes were selected as the test set, and the experimental results were compared qualitatively and quantitatively with those of Li et al. UWCNN (C. Li, S. Anwar, and F. Porikli, "Underwater scene prior inspired deep underwater image and video enhancement," Pattern Recognit. 98, 1-11 (2020)), Peng et al. GDCP (Y. Peng, K. Cao, and P. C. Cosman, "Generalization of the Dark Channel Prior for Single Image Restoration," IEEE Trans. Image Process. 27(6), 2856-2868 (2018)), and Peng et al. IBLA (Y. Peng and P. C. Cosman, "Underwater Image Restoration Based on Image Blurriness and Light Absorption," IEEE Trans. Image Process. 26(4), 1579-1594 (2017)).

As shown in Fig. 2, the present invention is compared with the other underwater image restoration methods in a near-coast scene (marine life). The comparison shows that the image contrast in the result of the method of the present invention is significantly improved and is better than that of the other methods (Li et al. UWCNN, Peng et al. GDCP, Peng et al. IBLA). The method of the present invention can therefore correct color, enhance image contrast, and improve the visual quality of the image.

As shown in Fig. 3, the present invention is compared with the other methods in turbid water (sea turtle). Compared with the UWCNN, GDCP, and IBLA methods, the color restoration of the sea turtle by the present method is noticeably better and the image is clearer. The method of the present invention can therefore correct color, enhance image contrast, and improve the visual quality of the image.

To verify the robustness of the present invention, the no-reference image quality metrics UIQM and UCIQE were compared; the specific data are given in Table 1 and Table 2. A larger no-reference quality score indicates better chroma, saturation, and contrast of the generated image, and thus a better visual effect. On both metrics, the images processed by the method of the present invention score higher than those of the other methods, demonstrating that the method can effectively improve image color and contrast.

Table 1. No-reference image quality metric (UIQM) of the results of the present method and other methods

          Raw image   UWCNN    GDCP     IBLA     Ours
Image 1   1.1217      1.2694   1.4964   1.4435   1.6065
Image 2   0.6224      0.5232   0.9900   1.0983   1.4034

Table 2. No-reference image quality metric (UCIQE) of the results of the present method and other methods

          Raw image   UWCNN    GDCP     IBLA     Ours
Image 1   0.4328      0.4572   0.5594   0.5431   0.6577
Image 2   0.3220      0.3126   0.3730   0.4754   0.6341

Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (6)

1. An underwater image clarity restoration method based on depth map restoration and brightness estimation, characterized in that it comprises the following steps:

Step S01: stretch the contrast of the original RGB image so that its minimum and maximum pixel values are 0 and 255 respectively, and estimate the relative depth map of the contrast-stretched image with a monocular depth estimation model;

Step S02: segment out the wrongly estimated background region of the contrast-stretched image with a segmentation method, re-estimate the depth of that region, and smooth the re-estimated region with guided filtering;

Step S03: select upper and lower depth limits according to the actual depth of the scene and perform depth normalization to obtain the absolute depth map of the original RGB image;

Step S04: divide the pixels of the original RGB image evenly into 10 groups by depth value in ascending order; within each group, sort the pixels by the sum of their RGB values in ascending order and take the first 200 pixels;

Step S05: according to the underwater image formation model, fit the values of A_c, β_c^B, J_c and β_c^D from the selected minimum pixels and their depth values, where A_c denotes the atmospheric light, β_c^B the backscattering coefficient, J_c the undegraded underwater image, and β_c^D the bandwidth coefficient;

Step S06: estimate and remove the backscatter using the underwater image formation model and the fitted values of A_c, β_c^B, J_c and β_c^D;

Step S07: on the backscatter-removed image, sort the pixel values of each of the three channels separately; among the pixels ranked between the 0.5% and 2% quantiles, take values at intervals of 0.15 as brightness parameters;

Step S08: each brightness parameter corresponds to one image enhanced with that parameter; select the image with the highest information entropy as the final enhancement result.
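A minimal NumPy/SciPy sketch of steps S04–S06 (darkest-pixel selection per depth bin, then a backscatter fit and removal). The single-term saturating backscatter model used here is a simplification of the patent's full formation model, and all function names and the synthetic data are our assumptions, not the patent's exact formulation:

```python
import numpy as np
from scipy.optimize import curve_fit

def darkest_pixels_per_bin(img, depth, n_bins=10, k=200):
    """Step S04: split all pixels into n_bins equal groups by depth,
    keep the k pixels with the smallest R+G+B sum in each group."""
    flat_rgb = img.reshape(-1, 3)
    order = np.argsort(depth.ravel())
    picked = []
    for group in np.array_split(order, n_bins):
        sums = flat_rgb[group].sum(axis=1)
        picked.extend(group[np.argsort(sums)[:k]])
    return np.array(picked)

def backscatter(z, A, beta_B):
    # Saturating backscatter term A * (1 - e^{-beta_B * z}); the patent's
    # full model also carries a direct-signal term, dropped here for brevity.
    return A * (1.0 - np.exp(-beta_B * z))

# Synthetic scene: backscatter plus a small residual scene signal.
rng = np.random.default_rng(0)
depth = rng.uniform(0.5, 10.0, (64, 64))
img = np.dstack([backscatter(depth, 0.8, 0.4)] * 3)
img = img + rng.uniform(0.0, 0.05, img.shape)

idx = darkest_pixels_per_bin(img, depth)
z_s = depth.ravel()[idx]
v_s = img.reshape(-1, 3)[idx, 0]            # red-channel sample values
(A_fit, beta_fit), _ = curve_fit(backscatter, z_s, v_s, p0=(0.5, 0.1))
descattered = img[..., 0] - backscatter(depth, A_fit, beta_fit)
```

Selecting the darkest pixels per depth range biases the samples toward pure backscatter, which is why the fit recovers the scattering parameters even though the scene radiance is unknown.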
2. The underwater image clarity restoration method based on depth map restoration and brightness estimation according to claim 1, characterized in that the image contrast stretching formula in step S01 is:

y = (x − x_min) / (x_max − x_min) × 255

where x_min and x_max denote the minimum and maximum pixel values in the original image, respectively, x denotes each pixel of the image, and y denotes the contrast-stretched image.
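The linear stretch of claim 2 can be sketched in NumPy as follows (a minimal illustration; the function name is ours):

```python
import numpy as np

def stretch_contrast(img):
    """Linear contrast stretch: map [min, max] of the image to [0, 255]."""
    x = img.astype(np.float64)
    x_min, x_max = x.min(), x.max()
    return (x - x_min) / (x_max - x_min) * 255.0

img = np.array([[50, 100], [150, 200]], dtype=np.uint8)
out = stretch_contrast(img)   # minimum maps to 0, maximum maps to 255
```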
3. The underwater image clarity restoration method based on depth map restoration and brightness estimation according to claim 1, characterized in that the image segmentation method in step S02 uses the Mahalanobis distance to measure the similarity between points in RGB space; the Mahalanobis distance D(z, m) between any point z in RGB space and the average color m is given by:

D(z, m) = [(z − m)^T C^(−1) (z − m)]^(1/2)

where C denotes the covariance matrix of the selected samples.
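A small sketch of Mahalanobis-distance color segmentation as described in claim 3, applied to a synthetic two-region image. The threshold value and all names here are our assumptions; the patent does not fix a threshold:

```python
import numpy as np

def mahalanobis_mask(img, sample_mask, thresh=3.0):
    """Mark pixels whose Mahalanobis distance to the mean color of a
    sample region is below `thresh` (assumed threshold)."""
    pixels = img.reshape(-1, 3).astype(np.float64)
    sample = pixels[sample_mask.ravel()]
    m = sample.mean(axis=0)                       # average color
    C = np.cov(sample, rowvar=False)              # sample covariance
    Cinv = np.linalg.inv(C + 1e-8 * np.eye(3))    # regularized inverse
    d = pixels - m
    D = np.sqrt(np.einsum('ij,jk,ik->i', d, Cinv, d))
    return (D < thresh).reshape(img.shape[:2])

# Synthetic image: bluish "background" on top, reddish region below.
rng = np.random.default_rng(0)
img = np.zeros((16, 16, 3))
img[:8] = [30.0, 60.0, 200.0]
img[8:] = [200.0, 60.0, 30.0]
img += rng.normal(0.0, 5.0, img.shape)

sample_mask = np.zeros((16, 16), dtype=bool)
sample_mask[:8] = True                            # sample the background
mask = mahalanobis_mask(img, sample_mask)
```

Unlike a plain Euclidean threshold, the Mahalanobis distance accounts for the covariance of the sampled colors, so elongated color clusters are segmented correctly.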
4. The underwater image clarity restoration method based on depth map restoration and brightness estimation according to claim 1, characterized in that in step S03 each relative depth value x in the original depth map is converted into an absolute depth value y by a linear transformation, the specific conversion formula being:

y = (x − x_min) / (x_max − x_min) × (y_max − y_min) + y_min

where x_max and x_min denote the maximum and minimum depth values in the original depth map, and y_min and y_max denote the minimum and maximum of the target depth values to be converted to.
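The linear conversion of claim 4 amounts to rescaling the relative depth range onto the chosen absolute range; a minimal sketch (names are ours):

```python
import numpy as np

def to_absolute_depth(rel, d_near, d_far):
    """Linearly map a relative depth map onto the scene range [d_near, d_far]."""
    r = rel.astype(np.float64)
    r_min, r_max = r.min(), r.max()
    return (r - r_min) / (r_max - r_min) * (d_far - d_near) + d_near

rel = np.array([[0.1, 0.4], [0.7, 1.0]])      # relative depths from the model
abs_depth = to_absolute_depth(rel, 0.5, 8.0)  # scene spans 0.5 m to 8 m (assumed)
```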
5. The underwater image clarity restoration method based on depth map restoration and brightness estimation according to claim 1, characterized in that the underwater image formation model in step S05 is:

I_c(x) = J_c(x) e^(−β_c^D z(x)) + A_c (1 − e^(−β_c^B z(x)))

where A_c denotes the atmospheric light, β_c^B the backscattering coefficient, J_c the undegraded underwater image, and β_c^D the bandwidth coefficient.
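The formation model of claim 5 can be simulated directly; as a sanity check, at large depths the observed image converges to the atmospheric light A_c, since the direct signal is fully attenuated and the backscatter saturates. A hedged sketch (the per-channel form and all names are our reading of the claim):

```python
import numpy as np

def underwater_image(J, z, A, beta_B, beta_D):
    """Forward model: I = J * e^{-beta_D * z} + A * (1 - e^{-beta_B * z}),
    applied per channel; z broadcasts over the last (channel) axis."""
    direct = J * np.exp(-beta_D * z[..., None])
    back = A * (1.0 - np.exp(-beta_B * z[..., None]))
    return direct + back

J = np.full((4, 4, 3), 0.5)          # flat undegraded scene (assumed)
A = np.array([0.1, 0.3, 0.8])        # bluish atmospheric light (assumed)
z_far = np.full((4, 4), 50.0)        # very distant scene
I_far = underwater_image(J, z_far, A, beta_B=0.2, beta_D=0.3)
```

With z = 50 the direct term is ~e^(−15) and the backscatter term is within ~5e−5 of A, so I_far ≈ A in every channel.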
6. The underwater image clarity restoration method based on depth map restoration and brightness estimation according to claim 1, characterized in that in step S08 the optimal enhanced image is obtained automatically as follows:

H_max = max over x_c of H( D_c / max(x_c, W_0) )

where H_max denotes the maximum information entropy; x_c denotes the brightness parameters, i.e., after sorting the pixel values of each of the three channels, the values taken at intervals of 0.15 among the pixels ranked between the 0.5% and 2% quantiles; D_c is the image after backscatter removal; W_0 = 0.1; the enhanced image attaining H_max is the optimal enhanced image. The information entropy is computed as:

H = − Σ_i p_i log2 p_i

where i denotes the gray level of a pixel and p_i denotes the proportion of pixels with gray level i in the whole image.
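The entropy-maximizing selection of claim 6 can be sketched as a simple search over candidate brightness parameters. The division-by-brightness enhancement form used below is our assumption; the patent only fixes W_0 = 0.1 and the entropy criterion:

```python
import numpy as np

def entropy(gray_u8):
    """Shannon entropy of an 8-bit image: H = -sum_i p_i * log2(p_i)."""
    hist = np.bincount(gray_u8.ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def best_brightness(D, candidates, W0=0.1):
    """Enhance with each candidate brightness parameter and keep the most
    entropic result (enhancement form D / max(x, W0) is assumed)."""
    best_x, best_img, best_H = None, None, -1.0
    for x in candidates:
        J = np.clip(D / max(x, W0), 0.0, 1.0)
        H = entropy((J * 255).astype(np.uint8))
        if H > best_H:
            best_x, best_img, best_H = x, J, H
    return best_x, best_img, best_H

rng = np.random.default_rng(0)
D = rng.uniform(0.0, 0.4, (32, 32))   # stand-in for the de-scattered image
best_x, best_img, best_H = best_brightness(D, [0.2, 0.4, 0.6, 0.8])
```

For this synthetic input the parameter 0.4 wins: it maps the full data range onto [0, 1] without clipping, spreading pixels over the most gray levels and hence maximizing entropy.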
CN202110620221.9A 2021-06-03 2021-06-03 Underwater Image Definition Restoration Method Based on Depth Map Restoration and Brightness Estimation Active CN113269763B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110620221.9A CN113269763B (en) 2021-06-03 2021-06-03 Underwater Image Definition Restoration Method Based on Depth Map Restoration and Brightness Estimation

Publications (2)

Publication Number Publication Date
CN113269763A true CN113269763A (en) 2021-08-17
CN113269763B CN113269763B (en) 2023-07-21

Family

ID=77234232

Country Status (1)

Country Link
CN (1) CN113269763B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115082412A (en) * 2022-07-05 2022-09-20 中国科学院合肥物质科学研究院 A kind of ELMs filamentous structure extraction method
CN115496694A (en) * 2022-09-30 2022-12-20 湖南科技大学 Method of Restoring and Enhancement of Underwater Image Based on Improved Image Formation Model

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017048927A1 (en) * 2015-09-18 2017-03-23 The Regents Of The University Of California Cameras and depth estimation of images acquired in a distorting medium
CN112488948A (en) * 2020-12-03 2021-03-12 大连海事大学 Underwater image restoration method based on black pixel point estimation backscattering

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HAN Hui; ZHOU Yan; CAI Chendong: "Underwater image restoration based on color attenuation prior and white balance", Computer and Modernization, no. 04 *

Also Published As

Publication number Publication date
CN113269763B (en) 2023-07-21

Similar Documents

Publication Publication Date Title
CN111047530B (en) Underwater image color correction and contrast enhancement method based on multi-feature fusion
CN106530246B Image defogging method and system based on dark and non-local priors
CN111292257B (en) A Retinex-based Image Enhancement Method in Dark Vision Environment
CN110288550B (en) Single-image defogging method for generating countermeasure network based on priori knowledge guiding condition
CN105354865B (en) Method and system for automatic cloud detection of multi-spectral remote sensing satellite images
CN109816605B (en) A MSRCR image dehazing method based on multi-channel convolution
CN102063706B (en) Rapid defogging method
CN106204509B (en) Infrared and visible light image fusion method based on regional characteristics
CN112488948B (en) Underwater image restoration method based on black pixel point estimation back scattering
CN114118144A (en) Anti-interference accurate aerial remote sensing image shadow detection method
CN101783012A (en) Automatic image defogging method based on dark primary colour
CN105701785B (en) The image haze minimizing technology of Weighted T V transmissivities optimization is divided based on sky areas
CN109377450B (en) Edge protection denoising method
CN113888536B (en) Printed matter double image detection method and system based on computer vision
CN110503140B (en) Deep migration learning and neighborhood noise reduction based classification method
CN112200746A (en) Dehazing method and device for foggy traffic scene images
CN113269763A (en) Underwater image definition recovery method based on depth image recovery and brightness estimation
CN105678245A (en) Target position identification method based on Haar features
CN116843581B (en) Image enhancement method, system, device and storage medium for multi-scene graph
CN107203980B (en) Underwater target detection image enhancement method of self-adaptive multi-scale dark channel prior
CN112488955A (en) Underwater image restoration method based on wavelength compensation
CN104036469B (en) Method for eliminating word seen-through effect of image during document scanning
CN109376782A (en) Support vector machine cataract grading method and device based on eye image features
CN118735790B (en) Adaptive image-enhanced underwater vision SLAM system and implementation method
CN108564534A (en) A kind of picture contrast method of adjustment based on retrieval

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant