
CN107833189A - Underwater target detection image enhancement method based on contrast-limited adaptive histogram equalization - Google Patents

Underwater target detection image enhancement method based on contrast-limited adaptive histogram equalization Download PDF

Info

Publication number
CN107833189A
CN107833189A CN201711038835.6A
Authority
CN
China
Prior art keywords
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711038835.6A
Other languages
Chinese (zh)
Inventor
马金祥
肖进
赵宇
柴济民
杜文汉
范新南
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changzhou Institute of Technology
Original Assignee
Changzhou Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changzhou Institute of Technology filed Critical Changzhou Institute of Technology
Priority to CN201711038835.6A priority Critical patent/CN107833189A/en
Publication of CN107833189A publication Critical patent/CN107833189A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/40 Image enhancement or restoration using histogram techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30181 Earth observation
    • G06T2207/30184 Infrastructure

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Medicines Containing Antibodies Or Antigens For Use As Internal Diagnostic Agents (AREA)

Abstract

The invention discloses an underwater target detection image enhancement method based on contrast-limited adaptive histogram equalization. Its steps include: computing, for the grayscale image corresponding to the original color image, the 4-direction Sobel edge detectors, the gradient image and an adaptive gain function; transforming the original color image from RGB space to HSI space by a nonlinear transformation; enhancing the intensity component of the HSI image with the contrast-limited adaptive histogram equalization algorithm; converting the enhanced HSI image back to RGB space; applying a generalized bounded multiplication based on the adaptive gain function to the R, G and B components of the enhanced RGB image to obtain an enhanced image that incorporates the gradient information of the original image; displaying the enhanced image; and quantitatively evaluating the enhanced image. The invention makes full use of the texture of the original image for enhancement, so that the processed image has improved visual quality and rich gradient information.

Description

Image enhancement method for underwater target detection based on contrast-limited adaptive histogram equalization

Technical Field

The invention belongs to the field of image information processing, and in particular relates to an underwater target detection image enhancement method based on contrast-limited adaptive histogram equalization.

Background Art

Underwater target detection images suffer from non-uniform brightness, low signal-to-noise ratio and low contrast. Commonly used underwater image enhancement algorithms fall into two broad categories, modifying the illumination of the underwater image or suppressing image contrast to preserve edges, but both inevitably reduce the visual quality of the detection image. Traditional contrast-based enhancement algorithms have serious limitations: histogram equalization enhances the image globally but amplifies existing noise or introduces new noise. Local histogram equalization, i.e. adaptive histogram equalization (AHE), overcomes the inability of global histogram equalization to adapt to the local gray-level distribution, but produces obvious block artifacts after equalization. Contrast-limited adaptive histogram equalization (CLAHE) therefore has clear advantages. However, because of the optical properties of water and the influence of suspended particles, plankton and water flow, directly transplanting existing CLAHE results to underwater detection images still gives unsatisfactory enhancement. Combining the rich gradient information of the original image with the contrast-limited adaptive histogram equalization algorithm makes the details of the enhanced image richer and clearer, and effectively improves the contrast and information entropy of the whole image.

Summary of the Invention

The technical problem to be solved by the invention is, facing the practical need for precise localization and accurate description of underwater target detection images under non-uniform brightness, low signal-to-noise ratio and low contrast, to develop an underwater target detection image enhancement method based on contrast-limited adaptive histogram equalization that denoises and enhances underwater target detection images and improves their visual quality.

The invention is implemented by the following scheme:

The underwater target detection image enhancement method based on contrast-limited adaptive histogram equalization comprises the following steps:

Step 1: acquire the original color image for underwater target detection.

Step 2: compute, for the grayscale image corresponding to the original color image, the 4-direction Sobel edge detectors, the gradient image and the adaptive gain function.

Step 3: transform the original color image from RGB space to HSI space by a nonlinear transformation.

Step 4: enhance the intensity component of the HSI image with the contrast-limited adaptive histogram equalization algorithm.

Step 5: convert the enhanced HSI image back to RGB space.

Step 6: apply a generalized bounded multiplication based on the adaptive gain function to the R, G and B components of the enhanced RGB image to obtain an enhanced image that incorporates the gradient information of the original image.

Step 7: transform the R, G and B components of the enhanced image from the range [0,1] to the range [0,255] and display the enhanced image.

Step 8: quantitatively evaluate the enhanced image in terms of mean value, contrast, information entropy and color scale.
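For orientation, the sketch below strings the eight steps together in Python with NumPy and OpenCV. It is a simplified reading rather than the patented method itself: OpenCV's HSV space stands in for HSI, a two-direction Sobel magnitude stands in for the 4-direction detector, and a plain power law stands in for the generalized bounded multiplication; the per-step sketches in the detailed description follow the formulas more closely.

```python
import cv2
import numpy as np

def enhance_underwater_image(bgr):
    """Simplified end-to-end sketch of the eight steps (see the assumptions above)."""
    # Step 2 (simplified): Sobel gradient magnitude of the grayscale image -> adaptive gain
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float64)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    g = np.sqrt(gx ** 2 + gy ** 2)
    gn = np.log(g + 1.0 + 1e-6) / np.log(g.max() + 2.0)   # normalized gradient, roughly (0, 1)
    lam = 2.0 ** gn + 0.1                                  # adaptive gain with a = 1, b = 0.1

    # Steps 3-5 (simplified): CLAHE on the luminance channel only, chroma left untouched
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    hsv[..., 2] = clahe.apply(hsv[..., 2])
    out = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR).astype(np.float64) / 255.0

    # Step 6 (simplified): gain-weighted power law applied per channel
    out = out ** (1.0 / lam[..., None])

    # Step 7: stretch to the display range
    out = (out - out.min()) / (out.max() - out.min() + 1e-12)
    return (255.0 * out).astype(np.uint8)
```

Called as enhance_underwater_image(cv2.imread('frame.png')), it returns an 8-bit image ready for display; the file name is only an example.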

Further, in step 2:

Compute the grayscale image Gray(I) corresponding to the original color image I. Because the human eye is particularly sensitive to high-frequency information such as edges, the noise-robust Sobel operator is chosen to obtain the edge gradient image. In addition to the traditional Sobel filtering directions (0° and 90°), filtering in the two diagonal directions (45° and 135°) is added.

The Sobel edge detector masks in the four directions are defined as:

S1 = ( -1 0 1 ; -2 0 2 ; -1 0 1 ), S2 = ( -1 -2 -1 ; 0 0 0 ; 1 2 1 ), S3 = ( 0 1 2 ; -1 0 1 ; -2 -1 0 ), S4 = ( 2 1 0 ; 1 0 -1 ; 0 -1 -2 )

Let Z(i,j) denote the 3×3 image neighborhood of pixel (i,j); then Z(i,j) can be written as:

Z(i,j) = ( z(i-1,j-1) z(i-1,j) z(i-1,j+1) ; z(i,j-1) z(i,j) z(i,j+1) ; z(i+1,j-1) z(i+1,j) z(i+1,j+1) )

where z(i,j) is the gray value of pixel (i,j).

The gradient components of pixel (i,j) in the four directions are defined as:

G_k(i,j) = ΣΣ z(i+m-1, j+n-1) × S_k(m,n),  k = 1,2,3,4

The gradient image at pixel (i,j) is defined as:

g(i,j) = sqrt( Σ_{k=1..4} G_k²(i,j) )

The gradient image is normalized as:

g_n(i,j) = log( g(i,j) + 1 + δ1 ) / log( max(g(i,j)) + δ2 )

where δ1 and δ2 are small perturbations that ensure g_n(i,j) ∈ (0,1).

The adaptive gain function λ(i,j) at pixel (i,j) is expressed as:

λ(i,j) = 2^[ a × g_n(i,j) ] + b

where a and b are adjustable positive variables used to control the mean of the adaptive gain function λ(i,j).

Further, in step 3:

The original color image is transformed from RGB space to HSI space by the nonlinear transformation:

H = θ if B ≤ G, and H = 360° − θ if B > G
S = 1 − 3·min(R, G, B) / (R + G + B)
I = (R + G + B) / 3

where θ is the hue angle, given by the standard arccos expression in terms of R, G and B.

Before the conversion, the R, G, B values are first normalized to [0,1]; after the conversion, the S and I components lie in [0,1] and H lies in [0,360].

Further, in step 4:

The intensity component I of the HSI image is enhanced with the contrast-limited adaptive histogram equalization algorithm, while the color information (H, S) is kept unchanged.

Further, in step 5:

The enhanced HSI image is converted back to RGB space. Let the converted RGB image be denoted R′G′B′; the conversion is carried out sector by sector:

(a) RG sector (0° ≤ H < 120°);

(b) GB sector (120° ≤ H < 240°), where H is first replaced by H − 120°;

(c) BR sector (240° ≤ H < 360°), where H is first replaced by H − 240°.

Further, in step 6:

The R′, G′ and B′ components of the enhanced RGB image are each subjected to a generalized bounded multiplication based on the adaptive gain function, yielding the enhanced image R″G″B″ that incorporates the gradient information of the original image. The generalized bounded multiplication C″(i,j) = λ(i,j) ⊗ C′(i,j) is applied channel-wise to C′ ∈ {R′, G′, B′}; its explicit form is given in claim 6.

Further, in step 7:

The R, G and B components of the enhanced image are transformed from the range [0,1] to the range [0,255], i.e. R̃(i,j) = 255 × R″(i,j), G̃(i,j) = 255 × G″(i,j), B̃(i,j) = 255 × B″(i,j), and the enhanced image is displayed.

The final output image can be expressed as:

R_out(i,j) = ( R̃(i,j) − R̃_min ) / ( R̃_max − R̃_min ), and analogously for G_out(i,j) and B_out(i,j),

where R̃_max, R̃_min (and the corresponding G̃, B̃ extrema) are the maximum and minimum values of the respective channels.

Further, in step 8:

The enhanced image RGB_out is evaluated quantitatively in terms of mean value, contrast, information entropy and color scale. The corresponding quantitative evaluation indices are:

Mean value: computed from μ_R, μ_G and μ_B, the means of the three color channels of RGB_out.

Contrast: computed from the gray-level co-occurrence matrix P(i,j; d,θ_k), where θ_k is the inter-pixel angle, θ_k = (k−1) × 45°, k = 1,2,3,4.

Information entropy: the entropy of the gray-level distribution of the enhanced image.

Color scale: computed from α = R − G and β = (R + G)/2 − B, where μ_α and μ_β are the means of α and β, and σ_α and σ_β are their standard deviations.

Notes on image enhancement with the contrast-limited adaptive histogram equalization algorithm:

(1) The contrast-limited adaptive histogram equalization algorithm has two parameters: the clip limit (CL) and the block size (BZ). As the clip limit increases, the image becomes brighter and smoother; as the block size increases, the dynamic range of the image increases. The enhancement of image quality, however, depends mainly on the clip limit rather than on the block size. In practice both parameters need to be set reasonably.

(2) According to the desired degree of contrast enhancement, the parameters a and b of the adaptive gain function λ(i,j) can be adjusted appropriately to obtain enhanced images with different contrast.

(3) When evaluating the enhanced image quantitatively, one should not demand high contrast alone; contrast, information entropy and color information should be considered together.

The beneficial effects achieved by the invention are as follows:

The method of the invention can denoise and enhance an underwater target detection image with non-uniform brightness, low signal-to-noise ratio and low contrast using only the information of that single image. Contrast-limited adaptive histogram equalization is first applied in HSI space, the rich gradient information of the image itself is then used for adaptive gain, and finally the enhanced image is assessed with comprehensive quantitative indices covering mean value, contrast, information entropy and color scale. The 4-direction Sobel edge detector used in the invention makes full use of the rich gradient information of the image itself, so that the processed image has improved visual quality and rich texture information.

Brief Description of the Drawings

Fig. 1 is the flow chart of the underwater target detection image enhancement method based on contrast-limited adaptive histogram equalization according to the invention.

Detailed Description of the Embodiments

The invention is further described below with reference to the accompanying drawing. The following embodiment is only intended to illustrate the technical solution of the invention more clearly and does not limit the scope of protection of the invention.

Referring to Fig. 1, the invention is an underwater target detection image enhancement method based on contrast-limited adaptive histogram equalization. The overall flow is shown in Fig. 1 and the specific implementation steps are as follows:

Step 1: acquire the original color image I of the underwater target.

Step 2: compute the grayscale image Gray(I) corresponding to the original color image I. Because the human eye is particularly sensitive to high-frequency information such as edges, the Sobel operator, which has a certain robustness to noise, is chosen to obtain the edge gradient image. On top of the traditional Sobel filtering directions (0° and 90°), filtering in two diagonal directions (45° and 135°) is added, which strengthens the ability to smooth noise.

The Sobel edge detector masks in the four directions are defined as:

S1 = ( -1 0 1 ; -2 0 2 ; -1 0 1 ), S2 = ( -1 -2 -1 ; 0 0 0 ; 1 2 1 ), S3 = ( 0 1 2 ; -1 0 1 ; -2 -1 0 ), S4 = ( 2 1 0 ; 1 0 -1 ; 0 -1 -2 )

Let Z(i,j) denote the 3×3 image neighborhood of pixel (i,j); then Z(i,j) can be written as:

Z(i,j) = ( z(i-1,j-1) z(i-1,j) z(i-1,j+1) ; z(i,j-1) z(i,j) z(i,j+1) ; z(i+1,j-1) z(i+1,j) z(i+1,j+1) )

where z(i,j) is the gray value of pixel (i,j).

The gradient components of pixel (i,j) in the four directions are defined as:

G_k(i,j) = ΣΣ z(i+m-1, j+n-1) × S_k(m,n),  k = 1,2,3,4

The gradient image at pixel (i,j) is defined as:

g(i,j) = sqrt( Σ_{k=1..4} G_k²(i,j) )

The gradient image is normalized as:

g_n(i,j) = log( g(i,j) + 1 + δ1 ) / log( max(g(i,j)) + δ2 )

where δ1 and δ2 are small perturbations that ensure g_n(i,j) ∈ (0,1).

To obtain an image with rich gradient information, the adaptive gain function λ(i,j) at pixel (i,j) can be expressed as:

λ(i,j) = 2^[ a × g_n(i,j) ] + b

where a and b are adjustable positive variables used to control the mean of the adaptive gain function λ(i,j).
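A minimal Python sketch of this step (NumPy and SciPy assumed). The values of a, b, δ1 and δ2 are placeholders, since the patent leaves them adjustable; choosing δ2 > 1 + δ1 keeps g_n(i,j) inside (0,1).

```python
import numpy as np
from scipy.ndimage import correlate

# The four masks S1..S4 (0, 90, 45 and 135 degrees) exactly as defined above.
S = [np.array(m, dtype=np.float64) for m in (
    [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]],     # S1: 0 degrees
    [[-1, -2, -1], [0, 0, 0], [1, 2, 1]],     # S2: 90 degrees
    [[0, 1, 2], [-1, 0, 1], [-2, -1, 0]],     # S3: 45 degrees
    [[2, 1, 0], [1, 0, -1], [0, -1, -2]])]    # S4: 135 degrees

def adaptive_gain(gray, a=1.0, b=0.1, delta1=1e-6, delta2=2.0):
    """Return the normalized gradient g_n(i,j) and the gain lambda(i,j) = 2**(a*g_n) + b."""
    gk = [correlate(gray, k, mode='nearest') for k in S]        # G_k(i,j), k = 1..4
    g = np.sqrt(sum(c ** 2 for c in gk))                        # gradient image g(i,j)
    gn = np.log(g + 1.0 + delta1) / np.log(g.max() + delta2)    # normalized gradient
    lam = 2.0 ** (a * gn) + b                                   # adaptive gain
    return gn, lam
```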

Step 3: transform the original color image from RGB space to HSI space by the nonlinear transformation:

H = θ if B ≤ G, and H = 360° − θ if B > G
S = 1 − 3·min(R, G, B) / (R + G + B)
I = (R + G + B) / 3

where θ is the hue angle, given by the standard arccos expression in terms of R, G and B (see the sketch below for its explicit form).

Before the conversion, the R, G, B values should first be normalized to [0,1]; after the conversion, the S and I components lie in [0,1] and H lies in [0,360].
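A sketch of this conversion in Python with NumPy; the arccos form of θ is the standard hue-angle expression, assumed here because the printed formula is not legible in this copy. R, G and B are floats already normalized to [0,1].

```python
import numpy as np

def rgb_to_hsi(rgb):
    """RGB (float, [0,1]) -> H in degrees [0,360), S and I in [0,1]."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    num = 0.5 * ((R - G) + (R - B))
    den = np.sqrt((R - G) ** 2 + (R - B) * (G - B)) + 1e-12
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))   # standard hue angle
    H = np.where(B <= G, theta, 360.0 - theta)
    S = 1.0 - 3.0 * np.minimum(np.minimum(R, G), B) / (R + G + B + 1e-12)
    I = (R + G + B) / 3.0
    return H, S, I
```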

Step 4: enhance the intensity component (I) of the HSI image with the contrast-limited adaptive histogram equalization algorithm, keeping the color information (H, S) unchanged.

Let the enhanced intensity component be denoted I′; the enhanced HSI image is then denoted HSI′.
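A sketch of this step using OpenCV's CLAHE as the underlying implementation (the patent does not prescribe a particular one). OpenCV operates on 8-bit data, so the intensity component is temporarily rescaled; clipLimit and tileGridSize correspond to the CL and BZ parameters discussed in the notes below.

```python
import cv2
import numpy as np

def equalize_intensity(I, clip_limit=2.0, tile_grid=(8, 8)):
    """Apply CLAHE to the intensity I in [0,1]; H and S stay untouched."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    I8 = np.clip(I * 255.0, 0, 255).astype(np.uint8)
    return clahe.apply(I8).astype(np.float64) / 255.0   # enhanced intensity I' back in [0,1]
```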

Step 5: convert the enhanced HSI image back to RGB space. Let the converted RGB image be denoted R′G′B′; the conversion is carried out sector by sector:

(a) RG sector (0° ≤ H < 120°);

(b) GB sector (120° ≤ H < 240°), where H is first replaced by H − 120°;

(c) BR sector (240° ≤ H < 360°), where H is first replaced by H − 240°.
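The per-sector formulas themselves are not legible in this copy; the sketch below therefore uses the standard Gonzalez-Woods HSI-to-RGB sector formulas, which match the three sectors and the hue shifts listed above.

```python
import numpy as np

def hsi_to_rgb(H, S, I):
    """Sector-wise HSI -> RGB (standard textbook form; H in degrees)."""
    H = np.mod(H, 360.0)
    chans = {'R': np.empty_like(I), 'G': np.empty_like(I), 'B': np.empty_like(I)}
    # (sector start, channel set to I(1-S), channel with the cosine term, remaining channel)
    sectors = ((0.0, 'B', 'R', 'G'), (120.0, 'R', 'G', 'B'), (240.0, 'G', 'B', 'R'))
    for start, flat, cosine, rest in sectors:
        m = (H >= start) & (H < start + 120.0)
        h = np.radians(H[m] - start)                     # shifted hue, as in sectors (b) and (c)
        chans[flat][m] = I[m] * (1.0 - S[m])
        chans[cosine][m] = I[m] * (1.0 + S[m] * np.cos(h) / np.cos(np.radians(60.0) - h))
        chans[rest][m] = 3.0 * I[m] - (chans[flat][m] + chans[cosine][m])
    return np.clip(np.stack([chans['R'], chans['G'], chans['B']], axis=-1), 0.0, 1.0)
```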

Step 6: apply the generalized bounded multiplication based on the adaptive gain function to the R′, G′ and B′ components of the enhanced RGB image, obtaining the enhanced image R″G″B″ that incorporates the gradient information of the original image. The generalized bounded multiplication C″(i,j) = λ(i,j) ⊗ C′(i,j) is applied channel-wise to C′ ∈ {R′, G′, B′}; its explicit form is given in claim 6.
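The printed form of the operator φ is only partially legible in this copy; the sketch below assumes the log-ratio mapping φ(x) = ln((1−x)/x), whose inverse is the logistic function, which is one reading consistent with the 1/(...^λ + 1) shape given in claim 6.

```python
import numpy as np

def bounded_multiply(channel, lam, eps=1e-6):
    """Generalized bounded multiplication lambda (x) C', assuming phi(x) = ln((1-x)/x)."""
    x = np.clip(channel, eps, 1.0 - eps)                 # keep the ratio finite at 0 and 1
    return 1.0 / (((1.0 - x) / x) ** lam + 1.0)          # equals x when lam == 1
```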

Step 7: transform the R, G and B components of the enhanced image from the range [0,1] to the range [0,255], i.e. R̃(i,j) = 255 × R″(i,j), G̃(i,j) = 255 × G″(i,j), B̃(i,j) = 255 × B″(i,j), and display the enhanced image.

The final output image can be expressed as:

R_out(i,j) = ( R̃(i,j) − R̃_min ) / ( R̃_max − R̃_min ), and analogously for G_out(i,j) and B_out(i,j),

where R̃_max, R̃_min (and the corresponding G̃, B̃ extrema) are the maximum and minimum values of the respective channels.
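A small sketch of the scaling prescribed above. Note that R_out itself lies in [0,1] after the min/max stretch; the final multiplication by 255 and the 8-bit cast are added here only so the result can be displayed.

```python
import numpy as np

def to_display(channel):
    """channel = R'', G'' or B'' in [0,1]; returns an 8-bit channel for display."""
    c = 255.0 * channel                                  # scale to [0, 255]
    c = (c - c.min()) / (c.max() - c.min() + 1e-12)      # per-channel min/max stretch
    return np.round(255.0 * c).astype(np.uint8)
```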

Step 8: evaluate the enhanced image RGB_out quantitatively in terms of mean value, contrast, information entropy and color scale. The corresponding quantitative evaluation indices are:

Mean value: computed from μ_R, μ_G and μ_B, the means of the three color channels of RGB_out.

Contrast: computed from the gray-level co-occurrence matrix P(i,j; d,θ_k), where θ_k is the inter-pixel angle, θ_k = (k−1) × 45°, k = 1,2,3,4.

Information entropy: the entropy of the gray-level distribution of the enhanced image.

Color scale: computed from α = R − G and β = (R + G)/2 − B, where μ_α and μ_β are the means of α and β, and σ_α and σ_β are their standard deviations.
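A sketch of the four indices in Python, using scikit-image for the gray-level co-occurrence matrix. Where the printed formulas are not legible in this copy (mean value, entropy, color scale), standard forms are filled in as assumptions and marked as such in the comments.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def evaluate(rgb):
    """Mean, GLCM contrast, information entropy and color scale for a float RGB image in [0,1]."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mean = (R.mean() + G.mean() + B.mean()) / 3.0        # assumption: average of the channel means

    gray = np.clip(255.0 * (0.299 * R + 0.587 * G + 0.114 * B), 0, 255).astype(np.uint8)
    glcm = graycomatrix(gray, distances=[1],
                        angles=np.deg2rad([0, 45, 90, 135]), levels=256, normed=True)
    contrast = graycoprops(glcm, 'contrast').mean()      # averaged over the four angles theta_k

    p = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p /= p.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))      # Shannon entropy of the gray levels

    alpha, beta = R - G, (R + G) / 2.0 - B
    color_scale = (np.sqrt(alpha.std() ** 2 + beta.std() ** 2)
                   + 0.3 * np.sqrt(alpha.mean() ** 2 + beta.mean() ** 2))  # Hasler-Suesstrunk form
    return mean, contrast, entropy, color_scale
```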

Notes on image enhancement with the contrast-limited adaptive histogram equalization algorithm:

(1) The contrast-limited adaptive histogram equalization algorithm has two parameters: the clip limit (CL) and the block size (BZ). As the clip limit increases, the image becomes brighter and smoother; as the block size increases, the dynamic range of the image increases. The enhancement of image quality, however, depends mainly on the clip limit rather than on the block size. In practice both parameters need to be set reasonably; the snippet below shows how they map onto a common implementation.
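Assuming OpenCV's CLAHE (not prescribed by the patent), CL corresponds to clipLimit and BZ to tileGridSize; the numeric values below are placeholders.

```python
import cv2

clahe_gentle = cv2.createCLAHE(clipLimit=1.5, tileGridSize=(8, 8))     # lower CL: less brightening
clahe_strong = cv2.createCLAHE(clipLimit=4.0, tileGridSize=(16, 16))   # higher CL, larger BZ
```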

(2) According to the desired degree of contrast enhancement, the parameters a and b of the adaptive gain function λ(i,j) can be adjusted appropriately to obtain enhanced images with different contrast.

(3) When evaluating the enhanced image quantitatively, one should not demand high contrast alone; contrast, information entropy and color information should be considered together.

The above is only a preferred embodiment of the invention. It should be pointed out that a person of ordinary skill in the art can make several improvements and modifications without departing from the technical principle of the invention, and these improvements and modifications should also be regarded as falling within the scope of protection of the invention.

Claims (8)

1. An underwater target detection image enhancement method based on contrast-limited adaptive histogram equalization, comprising the following steps:

Step 1: acquire the original color image for underwater target detection.

Step 2: compute, for the grayscale image corresponding to the original color image, the 4-direction Sobel edge detectors, the gradient image and the adaptive gain function.

Step 3: transform the original color image from RGB space to HSI space by a nonlinear transformation.

Step 4: enhance the intensity component of the HSI image with the contrast-limited adaptive histogram equalization algorithm.

Step 5: convert the enhanced HSI image back to RGB space.

Step 6: apply a generalized bounded multiplication based on the adaptive gain function to the R, G and B components of the enhanced RGB image to obtain an enhanced image that incorporates the gradient information of the original image.

Step 7: transform the R, G and B components of the enhanced image from the range [0,1] to the range [0,255] and display the enhanced image.

Step 8: quantitatively evaluate the enhanced image in terms of mean value, contrast, information entropy and color scale.

2. The method according to claim 1, characterized in that in step 2:

the grayscale image Gray(I) corresponding to the original color image I is computed; because the human eye is sensitive to high-frequency information such as edges, the noise-robust Sobel operator is chosen to obtain the edge gradient image; in addition to the traditional Sobel filtering directions (0° and 90°), filtering in the two diagonal directions (45° and 135°) is added;

the Sobel edge detector masks in the four directions are defined as:

S1 = ( -1 0 1 ; -2 0 2 ; -1 0 1 )
S2 = ( -1 -2 -1 ; 0 0 0 ; 1 2 1 )
S3 = ( 0 1 2 ; -1 0 1 ; -2 -1 0 )
S4 = ( 2 1 0 ; 1 0 -1 ; 0 -1 -2 )

let Z(i,j) denote the 3×3 image neighborhood of pixel (i,j):

Z(i,j) = ( z(i-1,j-1) z(i-1,j) z(i-1,j+1) ; z(i,j-1) z(i,j) z(i,j+1) ; z(i+1,j-1) z(i+1,j) z(i+1,j+1) )

where z(i,j) is the gray value of pixel (i,j);

the gradient components of pixel (i,j) in the four directions are defined as:

G_k(i,j) = ΣΣ z(i+m-1, j+n-1) × S_k(m,n),  k = 1,2,3,4

the gradient image at pixel (i,j) is defined as:

g(i,j) = sqrt( Σ_{k=1..4} G_k²(i,j) )

the gradient image is normalized as:

g_n(i,j) = log( g(i,j) + 1 + δ1 ) / log( max(g(i,j)) + δ2 )

where δ1 and δ2 are small perturbations that ensure g_n(i,j) ∈ (0,1);

the adaptive gain function λ(i,j) at pixel (i,j) is expressed as:

λ(i,j) = 2^[ a × g_n(i,j) ] + b

where a and b are adjustable positive variables used to control the mean of the adaptive gain function λ(i,j).

3. The method according to claim 1, characterized in that in step 3 the original color image is transformed from RGB space to HSI space by the nonlinear transformation

H = θ if B ≤ G, and H = 360 − θ if B > G
S = 1 − 3·min(R, G, B) / (R + G + B)
I = (R + G + B) / 3

where θ is the hue angle; before the conversion, the R, G, B values are first normalized to [0,1], and after the conversion the S and I components lie in [0,1] and H lies in [0,360].

4. The method according to claim 1, characterized in that in step 4 the intensity component I of the HSI image is enhanced with the contrast-limited adaptive histogram equalization algorithm while the color information (H, S) is kept unchanged.

5. The method according to claim 1, characterized in that in step 5 the enhanced HSI image is converted back to RGB space; the converted RGB image is denoted R′G′B′ and the conversion is carried out sector by sector:

(a) RG sector (0° ≤ H < 120°);

(b) GB sector (120° ≤ H < 240°), with H = H − 120°;

(c) BR sector (240° ≤ H < 360°), with H = H − 240°.

6. The method according to claim 1, characterized in that in step 6 the R′, G′ and B′ components of the enhanced RGB image are each subjected to the generalized bounded multiplication based on the adaptive gain function, yielding the enhanced image R″G″B″ that incorporates the gradient information of the original image; the generalized bounded multiplication is expressed as:

R″(i,j) = λ(i,j) ⊗ R′(i,j) = φ⁻¹[ λ(i,j) R′(i,j) ] = 1 / ( (R′(i,j))^λ(i,j) + 1 )
G″(i,j) = λ(i,j) ⊗ G′(i,j) = φ⁻¹[ λ(i,j) G′(i,j) ] = 1 / ( (G′(i,j))^λ(i,j) + 1 )
B″(i,j) = λ(i,j) ⊗ B′(i,j) = φ⁻¹[ λ(i,j) B′(i,j) ] = 1 / ( (B′(i,j))^λ(i,j) + 1 )

7. The method according to claim 1, characterized in that in step 7 the R, G and B components of the enhanced image are transformed from the range [0,1] to the range [0,255] and the enhanced image is displayed:

R̃(i,j) = 255 × R″(i,j)
G̃(i,j) = 255 × G″(i,j)
B̃(i,j) = 255 × B″(i,j)

the final output image can be expressed as:

R_out(i,j) = ( R̃(i,j) − R̃_min ) / ( R̃_max − R̃_min )
G_out(i,j) = ( G̃(i,j) − G̃_min ) / ( G̃_max − G̃_min )
B_out(i,j) = ( B̃(i,j) − B̃_min ) / ( B̃_max − B̃_min )

where R̃_max, R̃_min, G̃_max, G̃_min, B̃_max and B̃_min are the maximum and minimum values of the respective channels.

8. The method according to claim 1, characterized in that in step 8 the enhanced image RGB_out is evaluated quantitatively in terms of mean value, contrast, information entropy and color scale, with the following quantitative evaluation indices:

mean value: computed from μ_R, μ_G and μ_B, the means of the three color channels of RGB_out;

contrast: computed from the gray-level co-occurrence matrix P(i,j; d,θ_k), where θ_k is the inter-pixel angle, θ_k = (k−1) × 45°, k = 1,2,3,4;

information entropy: the entropy of the gray-level distribution;

color scale: computed from α = R − G and β = (R + G)/2 − B, where μ_α and μ_β are the means of α and β, and σ_α and σ_β are their standard deviations.
CN201711038835.6A 2017-10-30 2017-10-30 The Underwater Target Detection image enchancing method of the limited self-adapting histogram equilibrium of contrast Pending CN107833189A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711038835.6A CN107833189A (en) 2017-10-30 2017-10-30 The Underwater Target Detection image enchancing method of the limited self-adapting histogram equilibrium of contrast

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711038835.6A CN107833189A (en) 2017-10-30 2017-10-30 The Underwater Target Detection image enchancing method of the limited self-adapting histogram equilibrium of contrast

Publications (1)

Publication Number Publication Date
CN107833189A true CN107833189A (en) 2018-03-23

Family

ID=61651071

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711038835.6A Pending CN107833189A (en) 2017-10-30 2017-10-30 The Underwater Target Detection image enchancing method of the limited self-adapting histogram equilibrium of contrast

Country Status (1)

Country Link
CN (1) CN107833189A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108932706A (en) * 2018-08-14 2018-12-04 长沙全度影像科技有限公司 A kind of contrast and saturation degree Enhancement Method of color image
CN109658343A (en) * 2018-11-05 2019-04-19 天津大学 The underwater picture Enhancement Method of color combining conversion and adpative exposure
CN110009581A (en) * 2019-03-18 2019-07-12 深圳市华星光电技术有限公司 Image processing method, device and storage medium
CN110675332A (en) * 2019-08-20 2020-01-10 广东技术师范大学 An Enhancement Method of Metal Corrosion Image Quality
CN112598607A (en) * 2021-01-06 2021-04-02 安徽大学 Endoscope image blood vessel enhancement algorithm based on improved weighted CLAHE
CN112950565A (en) * 2021-02-25 2021-06-11 山东英信计算机技术有限公司 Method and device for detecting and positioning water leakage of data center and data center
CN114445300A (en) * 2022-01-29 2022-05-06 赵恒� Nonlinear underwater image gain algorithm for hyperbolic tangent deformation function transformation
CN114612340A (en) * 2022-03-25 2022-06-10 郑骐骥 Image data denoising method and system based on step-by-step contrast enhancement
CN116309203A (en) * 2023-05-19 2023-06-23 中国人民解放军国防科技大学 A method and device for unmanned platform motion estimation with polarization vision adaptive enhancement

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101527786A (en) * 2009-03-31 2009-09-09 西安交通大学 Method for strengthening definition of sight important zone in network video
CN103440635A (en) * 2013-09-17 2013-12-11 厦门美图网科技有限公司 Learning-based contrast limited adaptive histogram equalization method
CN103632339A (en) * 2012-08-21 2014-03-12 张晓光 Single image defogging method based on variation Retinex and apparatus
CN106056559A (en) * 2016-06-30 2016-10-26 河海大学常州校区 Dark-channel-prior-method-based non-uniform-light-field underwater target detection image enhancement method
CN107220950A (en) * 2017-05-31 2017-09-29 常州工学院 A kind of Underwater Target Detection image enchancing method of adaptive dark channel prior

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101527786A (en) * 2009-03-31 2009-09-09 西安交通大学 Method for strengthening definition of sight important zone in network video
CN103632339A (en) * 2012-08-21 2014-03-12 张晓光 Single image defogging method based on variation Retinex and apparatus
CN103440635A (en) * 2013-09-17 2013-12-11 厦门美图网科技有限公司 Learning-based contrast limited adaptive histogram equalization method
CN106056559A (en) * 2016-06-30 2016-10-26 河海大学常州校区 Dark-channel-prior-method-based non-uniform-light-field underwater target detection image enhancement method
CN107220950A (en) * 2017-05-31 2017-09-29 常州工学院 A kind of Underwater Target Detection image enchancing method of adaptive dark channel prior

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ZHIYUAN XU 等: "Fog Removal from Color Images using Contrast Limited Adaptive Histogram Equalization", 《2009 2ND INTERNATIONAL CONGRESS ON IMAGE AND SIGNAL PROCESSING》 *
张亚飞 等: "基于HSI和局部同态滤波的彩色图像增强算法", 《计算机应用与软件》 *
毕国玲 等: "基于照射_反射模型和有界运算的多谱段图像增强", 《物理学报》 *
闫钧宣 等: "HIS空间亮度信息的多尺度Retinex图像增强研究", 《计算机工程与应用》 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108932706A (en) * 2018-08-14 2018-12-04 长沙全度影像科技有限公司 A kind of contrast and saturation degree Enhancement Method of color image
CN109658343A (en) * 2018-11-05 2019-04-19 天津大学 The underwater picture Enhancement Method of color combining conversion and adpative exposure
CN109658343B (en) * 2018-11-05 2023-06-16 天津大学 Underwater Image Enhancement Method Combining Color Transformation and Adaptive Exposure
CN110009581A (en) * 2019-03-18 2019-07-12 深圳市华星光电技术有限公司 Image processing method, device and storage medium
CN110009581B (en) * 2019-03-18 2021-02-02 深圳市华星光电技术有限公司 Image processing method, device and storage medium
CN110675332A (en) * 2019-08-20 2020-01-10 广东技术师范大学 An Enhancement Method of Metal Corrosion Image Quality
CN112598607B (en) * 2021-01-06 2022-11-18 安徽大学 Endoscopic image vessel enhancement algorithm based on improved weighted CLAHE
CN112598607A (en) * 2021-01-06 2021-04-02 安徽大学 Endoscope image blood vessel enhancement algorithm based on improved weighted CLAHE
CN112950565A (en) * 2021-02-25 2021-06-11 山东英信计算机技术有限公司 Method and device for detecting and positioning water leakage of data center and data center
CN114445300A (en) * 2022-01-29 2022-05-06 赵恒� Nonlinear underwater image gain algorithm for hyperbolic tangent deformation function transformation
CN114612340A (en) * 2022-03-25 2022-06-10 郑骐骥 Image data denoising method and system based on step-by-step contrast enhancement
CN116309203A (en) * 2023-05-19 2023-06-23 中国人民解放军国防科技大学 A method and device for unmanned platform motion estimation with polarization vision adaptive enhancement
CN116309203B (en) * 2023-05-19 2023-08-01 中国人民解放军国防科技大学 Unmanned platform motion estimation method and device with polarization vision self-adaptation enhancement

Similar Documents

Publication Publication Date Title
CN107833189A (en) Underwater target detection image enhancement method based on contrast-limited adaptive histogram equalization
Wang et al. Biologically inspired image enhancement based on Retinex
CN106056559B (en) Nonuniform illumination Underwater Target Detection image enchancing method based on dark channel prior
CN104574293B (en) Multiple dimensioned Retinex image sharpenings algorithm based on bounded computing
Li et al. Insulator defect detection for power grid based on light correction enhancement and YOLOv5 model
Chen et al. Robust image and video dehazing with visual artifact suppression via gradient residual minimization
CN104156921B (en) An Adaptive Image Enhancement Method for Images with Low Illumination or Uneven Brightness
CN111292257B (en) A Retinex-based Image Enhancement Method in Dark Vision Environment
CN105678700B (en) Image interpolation method and system based on prediction gradient
Yuan et al. A region-wised medium transmission based image dehazing method
CN108932700A (en) Self-adaption gradient gain underwater picture Enhancement Method based on target imaging model
CN102663714B (en) Saliency-based method for suppressing strong fixed-pattern noise in infrared image
CN111161222B (en) A Visual Saliency Based Defect Detection Method for Printing Cylinders
CN106971153B (en) Illumination compensation method for face image
CN103886565B (en) Nighttime color image enhancement method based on purpose optimization and histogram equalization
CN109949247A (en) A Gradient Domain Adaptive Gain Underwater Image Enhancement Method Based on YIQ Spatial Optical Imaging Model
WO2021114564A1 (en) Enhancement method for low-contrast infrared image
CN107220950A (en) A kind of Underwater Target Detection image enchancing method of adaptive dark channel prior
CN105701785B (en) The image haze minimizing technology of Weighted T V transmissivities optimization is divided based on sky areas
Wang et al. Enhancement for dust-sand storm images
Mu et al. Low and non-uniform illumination color image enhancement using weighted guided image filtering
CN109961415A (en) An adaptive gain underwater image enhancement method based on HSI space optical imaging model
CN104463819A (en) Method and apparatus for filtering an image
CN110473152A (en) Based on the image enchancing method for improving Retinex algorithm
CN107203980B (en) Underwater target detection image enhancement method of self-adaptive multi-scale dark channel prior

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180323