CN104881854B - High dynamic range images fusion method based on gradient and monochrome information - Google Patents
- Publication number: CN104881854B
- Application number: CN201510262164.6A
- Authority
- CN
- China
- Legal status: Expired - Fee Related (an assumption, not a legal conclusion)
Abstract
The invention relates to digital image processing technology and provides a computationally fast and effective multi-exposure image fusion method. The method requires no complex image decomposition and reconstruction: fusion is completed by pixel-wise weighting, so that the fused image displays more detail and has higher quality. To this end, the technical solution adopted by the invention is a high dynamic range image fusion method based on gradient and brightness information, comprising the following steps: 1) for each input multi-exposure image, compute its local contrast weight factor L_i(x,y); 2) convert the original image to the hue (H), saturation (S) and intensity (I) colour space, extract the intensity component I, and compute its brightness weight factor H_i(x,y); 3) estimate the weights; 4) refine the weight map with a recursive filter to obtain the final weight function of each input image; 5) perform weighted fusion of the multi-exposure images. The invention is mainly applied to digital image processing.
Description
Technical Field
The invention relates to digital image processing technology and, in particular, to high dynamic range image processing. Specifically, it relates to a high dynamic range image fusion method based on gradient and brightness information.
Background Art
Natural scenes have a very large dynamic range, and ordinary camera equipment can hardly capture all the brightness levels of a scene. Under strong light, images captured by ordinary cameras are often under-exposed or over-exposed, so detail is lost in regions of the image that are too dark or too bright. To solve this problem, a series of images of the same scene is usually captured at different exposures, and a high dynamic range (HDR) image is generated by processing the image sequence. High dynamic range images provide a larger dynamic range and more image detail, and better reflect the visual appearance of the real environment.
Several multi-exposure image fusion algorithms have already been proposed. Goshtasby [1] proposed a block-based multi-exposure fusion technique. This method divides each multi-exposure image into a series of blocks of the same size and uses an information-entropy index to measure block quality, so as to find the best-exposed block for each region. The selected blocks are then fused with a monotonically decreasing blending function. To maximize the information content of the fused image, the optimal block size and the width of the blending function are iteratively optimized by gradient ascent. Because of the large amount of complex computation required, this algorithm is not efficient; moreover, it does not fuse well at object boundaries. Mertens et al. [2] use contrast, saturation and exposedness in RGB space as fusion guidance measures to generate weight maps, and complete the fusion with a Laplacian pyramid. The algorithm first applies a Laplacian decomposition to the multi-exposure images and a Gaussian decomposition to the weight maps, then multiplies each level of the Laplacian pyramid by the corresponding level of the Gaussian pyramid; the fused image is finally generated by pyramid reconstruction. The result of this fusion algorithm is unclear in dark areas of the image, with many under-exposed pixels, and its running time increases noticeably with the number of pyramid levels and the image size. Wei Zhang et al. [3] use the gradient information of the input images for multi-exposure fusion: the gradient magnitude and direction are used to construct a consistency measure and a visibility measure respectively, and the two factors are multiplied to obtain the weight map; the final fused image is obtained by combining the original image sequence with the weight maps. Shen Jianbing et al. [4] proposed a novel boosting Laplacian pyramid fusion algorithm that measures exposure quality with a combination of local and global weight factors and lets this weight map guide the enhancement process. The images it generates have better colour and texture information, but the algorithm again relies on pyramid decomposition and reconstruction and is therefore computationally inefficient.
It can be seen that, in order to obtain good fusion results, existing multi-exposure fusion algorithms usually require filtering or image decomposition and reconstruction, which is computationally expensive, while algorithms with simple fusion processes give unsatisfactory results. The present invention therefore proposes a simple direct fusion method for multi-exposure images, which uses the gradient magnitude and the intensity component to guide the weight maps through the fusion process and expands the dynamic range of the image.
References
[1] Goshtasby A A. Fusion of multi-exposure images. Image and Vision Computing, 2005, 23(6): 611-618.
[2] Mertens T, Kautz J, Van Reeth F. Exposure fusion. 15th Pacific Conference on Computer Graphics and Applications (PG'07), IEEE, 2007: 382-390.
[3] Zhang W, Cham W K. Gradient-directed composition of multi-exposure images. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010: 530-536.
[4] Shen J, Zhao Y, Yan S, et al. Exposure fusion using boosting Laplacian pyramid. IEEE Transactions on Cybernetics, 2014, 44(9): 1579-1590.
Summary of the Invention
To overcome the deficiencies of the prior art, the invention aims to develop a computationally fast and effective multi-exposure image fusion method. The method requires no complex image decomposition and reconstruction: fusion is completed by pixel-wise weighting, so that the fused image displays more detail and has higher quality. To this end, the technical solution adopted by the invention is a high dynamic range image fusion method based on gradient and brightness information, comprising the following steps:
1) For each input multi-exposure image, compute its local contrast weight factor L_i(x,y).
2) Convert the original image to the hue (H), saturation (S) and intensity (I) colour space, extract the intensity component I, and compute its brightness weight factor H_i(x,y).
3) Weight estimation: combine the two image quality measures into the per-pixel fusion weight W_i(x,y) = L_i(x,y)·H_i(x,y); normalizing over the N input images gives the normalized weight function W_i′(x,y):

W_i′(x,y) = L_i(x,y)·H_i(x,y) / Σ_{j=1..N} L_j(x,y)·H_j(x,y)

where ε < 10^-12 is the small constant used in the two factors.
4) Refine the weight maps with a recursive filter to obtain the final weight function W_i(x,y) of each input image:
W_i(x,y) = RF(W_i′(x,y), sigma_s, sigma_r)    (9)
where RF denotes the recursive filter operator, and sigma_s and sigma_r are the filter parameters.
5) Perform weighted fusion of the multi-exposure images using W_i(x,y) as the weights to obtain the final fused image F(x,y):

F(x,y) = Σ_{i=1..N} W_i(x,y)·I_i(x,y)

where I_i(x,y) is the intensity of the pixel at position (x,y) in the i-th input image.
The local contrast weight factor L_i(x,y) is calculated as follows: apply the Sobel gradient operator to the greyscale image of the i-th input to obtain the gradient magnitude g_i(x,y); adding the small constant ε gives the unnormalized local contrast weight factor, which is then normalized over the input images to yield L_i(x,y).
For the brightness weight factor, poorly exposed pixels are first rejected to obtain an input image B_i(x,y) containing only well-exposed pixels: l_i(x,y), the intensity of the pixel at position (x,y) in the i-th input image, is compared against a threshold t that defines the acceptable exposure range, and pixels whose intensity falls outside that range are discarded. Pixels with intensity 0.5 are assigned weight 1, and the weights of the other pixels decrease monotonically as the intensity rises above or falls below 0.5, giving the initial brightness weight map.
To avoid singularities, a very small value is added to the brightness weight, giving the improved brightness weight factor.
Normalization then yields the final brightness weight factor H_i(x,y).
Compared with the prior art, the technical features and effects of the invention are as follows:
Subjectively, the fused image obtained with the technical solution of the invention has good overall tonal balance, achieves particularly good fusion in the dark regions of the image, and has good colour fidelity. Because fusion is performed by directly multiplying the weight maps with the original multi-exposure image sequence, no complex image decomposition is needed and the computational complexity of the algorithm is low.
Brief Description of the Drawings
Figure 1. The "Garage" multi-exposure image sequence and fusion results: (a) the "Garage" LDR image sequence; (b) the algorithm of Mertens et al.; (c) the algorithm of Wei Zhang et al.; (d) the algorithm of the present invention.
Figure 2. Flowchart of the proposed scheme.
Detailed Description
The dynamic range of an image sensor is far smaller than the range of brightness variation in nature, so a single shot cannot produce a clear image of a scene whose brightness varies drastically. Multi-exposure image fusion is an effective way to expand the dynamic range of an image; for scenes with large brightness variation and no fast-moving objects it can significantly improve image clarity and contrast, revealing more image detail. High dynamic range technology can be widely applied in traffic video, biomedicine, satellite remote sensing, games and other industries that need to display images with high dynamic detail. Developing methods that extend image dynamic range therefore has high research and application value.
The invention directly weights and fuses multi-exposure images based on two image quality measures. Through a weight function formed from the two fusion measures, a weight map is computed for each multi-exposure image. The two weight factors are as follows:
1. Local contrast weight factor. For an input multi-exposure image, the gradient magnitude reflects the clarity of the image and the amount of information it contains. In under-exposed or over-exposed regions the gradient values of the pixels are very small, whereas in well-exposed regions they are relatively large. The gradient magnitude can therefore be used as a measure of exposure quality. The local contrast weight factor L_i(x,y) of a multi-exposure image is computed as follows: apply the Sobel gradient operator to the greyscale image of the i-th input to obtain the gradient magnitude g_i(x,y); adding the very small constant ε, which avoids singularities, gives the unnormalized local contrast weight factor, which is then normalized over the input images to yield L_i(x,y).
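As an illustration, the local contrast factor described above can be sketched in NumPy. The 3x3 Sobel kernels are standard; the edge padding and the normalization of the factor across the exposure stack are assumptions of this sketch rather than details fixed by the patent:

```python
import numpy as np

def sobel_magnitude(gray):
    """Gradient magnitude g_i via 3x3 Sobel kernels (edge-replicated borders)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    p = np.pad(gray, 1, mode="edge")
    gx = np.zeros(gray.shape, dtype=float)
    gy = np.zeros(gray.shape, dtype=float)
    for dy in range(3):                       # accumulate the 3x3 correlation
        for dx in range(3):
            win = p[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
            gx += kx[dy, dx] * win
            gy += ky[dy, dx] * win
    return np.hypot(gx, gy)

def local_contrast_weights(grays, eps=1e-12):
    """Per-image contrast weights L_i: gradient magnitude plus eps,
    normalized across the exposure stack (the normalization is assumed)."""
    g = np.stack([sobel_magnitude(im) + eps for im in grays])
    return g / g.sum(axis=0)
```

For a real exposure stack, `grays` would hold the greyscale versions of the input images scaled to [0, 1].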
2. Brightness weight factor. The intensity of a pixel often reflects how well it is exposed: pixels with suitable intensity have better colour and clearer detail. The invention therefore selects the intensity component as the second image quality measure. To prevent pixels with very small or very large intensity from interfering with the fusion, poorly exposed pixels are rejected to obtain an input image B_i(x,y) containing only well-exposed pixels: l_i(x,y), the intensity of the pixel at position (x,y) in the i-th input image, is compared against a threshold t that defines the acceptable exposure range, and pixels outside that range are discarded. When constructing the brightness weight factor, the invention assigns weight 1 to pixels with intensity 0.5; the weights of the other pixels decrease monotonically as the intensity rises above or falls below 0.5, giving the initial brightness weight map.
To avoid singularities, a very small value is added to the brightness weight, giving the improved brightness weight factor.
Normalization then yields the final brightness weight factor H_i(x,y).
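A minimal sketch of the brightness factor, operating on the HSI intensity channel scaled to [0, 1]. The hard rejection outside [t, 1-t] and the linear falloff away from 0.5 are assumed shapes; the text states only that badly exposed pixels are discarded and that the weight peaks at intensity 0.5 and decreases on both sides:

```python
import numpy as np

def brightness_weights(intensities, t=0.1, eps=1e-12):
    """Brightness weights H_i from the intensity channels l_i in [0, 1].
    Falloff and rejection rule are assumptions of this sketch."""
    ws = []
    for l in intensities:
        w = 1.0 - 2.0 * np.abs(l - 0.5)       # 1 at l = 0.5, falling off linearly
        w[(l < t) | (l > 1.0 - t)] = 0.0      # discard badly exposed pixels
        ws.append(w + eps)                    # small value avoids singularities
    ws = np.stack(ws)
    return ws / ws.sum(axis=0)                # normalize across the stack
```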
The steps of the multi-exposure image sequence fusion algorithm are as follows:
1) For each input multi-exposure image, compute its local contrast weight factor.
2) Convert the original image to the hue, saturation and intensity (HSI) colour space, extract the intensity component I, and compute its brightness weight factor.
3) Weight estimation. Combine the two image quality measures into the per-pixel fusion weight W_i(x,y) = L_i(x,y)·H_i(x,y); normalizing over the N input images gives the normalized weight function W_i′(x,y):

W_i′(x,y) = L_i(x,y)·H_i(x,y) / Σ_{j=1..N} L_j(x,y)·H_j(x,y)
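The weight-estimation step can be sketched as follows, taking stacks of shape (N, height, width) for the two factors. The per-pixel product form W_i = L_i·H_i is an assumption about how the two measures are combined:

```python
import numpy as np

def combined_weights(L, H, eps=1e-12):
    """Combine contrast (L) and brightness (H) factors per pixel and
    renormalize so the weights sum to 1 across the exposure stack."""
    W = L * H + eps          # eps guards against an all-zero pixel column
    return W / W.sum(axis=0)
```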
4) Refine the weight maps with a recursive filter, so that regions of similar exposure receive similar weights and seam artifacts are avoided in the fused image, obtaining the final weight function W_i(x,y) of each input image:
W_i(x,y) = RF(W_i′(x,y), sigma_s, sigma_r)    (9)
where RF denotes the recursive filter operator, and sigma_s and sigma_r are the filter parameters.
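As a simplified stand-in for the RF operator of Eq. (9), the sketch below applies a separable first-order recursive (exponential) filter controlled by sigma_s. The actual recursive filter is edge-aware and also uses the range parameter sigma_r; that term is omitted here, so this only illustrates the recursive smoothing structure:

```python
import numpy as np

def recursive_smooth(w, sigma_s=60.0):
    """First-order recursive smoothing of a weight map, applied along
    rows then columns, forward then backward (edge-awareness omitted)."""
    a = np.exp(-np.sqrt(2.0) / sigma_s)   # feedback coefficient in (0, 1)
    out = w.astype(float).copy()
    for axis in (0, 1):
        out = np.moveaxis(out, axis, 0)
        for i in range(1, out.shape[0]):              # forward pass
            out[i] += a * (out[i - 1] - out[i])
        for i in range(out.shape[0] - 2, -1, -1):     # backward pass
            out[i] += a * (out[i + 1] - out[i])
        out = np.moveaxis(out, 0, axis)
    return out
```

Each update is a convex combination of neighbouring values, so the smoothed weights stay within the range of the input map.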
5) Perform weighted fusion of the multi-exposure images using W_i(x,y) as the weights to obtain the final fused image F(x,y):

F(x,y) = Σ_{i=1..N} W_i(x,y)·I_i(x,y)
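Step 5 is a per-pixel weighted sum over the exposure stack, which can be sketched as:

```python
import numpy as np

def fuse(images, weights):
    """Weighted sum of the exposure stack. For colour input, the same
    weight map is broadcast over the R, G and B channels, matching the
    per-channel processing described in the experiments."""
    stack = np.stack([im.astype(float) for im in images])
    if stack.ndim == 4:                    # (N, H, W, 3) colour stack
        weights = weights[..., np.newaxis]
    return (weights * stack).sum(axis=0)
```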
To verify the effect of the algorithm, the algorithm described above was applied to fuse multi-exposure image sequences and the fusion results were analysed. For colour images, the R, G and B channels are processed separately during fusion, and only the intensity component is used when computing the brightness factor. To compare the quality of the fused images, the results of the proposed algorithm were compared with the algorithms of Mertens et al. [2] and Wei Zhang et al. [3], and the experimental results were analysed both subjectively and objectively.
The objective experiments measure the results with five image quality indicators: image mean, colour entropy, average gradient, information entropy and standard deviation. The image mean reflects the average brightness of the image. Colour entropy reflects the total amount of information contained in the R, G and B channels. The average gradient corresponds to the sharpness of the image and reflects its ability to express fine contrast. Information entropy reflects the amount of information contained in the image and is an important indicator of how information-rich it is. The standard deviation reflects the dispersion of grey levels about the mean.
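The grey-level indicators can be sketched as follows; the average-gradient definition below (mean absolute forward difference) is one common variant and is an assumption, as is the 256-bin histogram used for the entropy (colour entropy would repeat the entropy per RGB channel):

```python
import numpy as np

def objective_metrics(gray):
    """Image mean, standard deviation, average gradient and information
    entropy for a greyscale image with values in [0, 1]."""
    gx = np.diff(gray, axis=1)                       # horizontal differences
    gy = np.diff(gray, axis=0)                       # vertical differences
    avg_grad = (np.abs(gx).mean() + np.abs(gy).mean()) / 2.0
    hist, _ = np.histogram(gray, bins=256, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                                     # drop empty bins
    entropy = -(p * np.log2(p)).sum()
    return {"mean": gray.mean(), "std": gray.std(),
            "avg_grad": avg_grad, "entropy": entropy}
```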
The experimental results for the "Garage" image sequence are shown in Figure 1 and Table 1.
Table 1. Experimental results for the objective indicators
The experimental results in Figure 1 show that the algorithm of Mertens et al. reproduces colour well, but detail in the dark area of the garage is unclear, many pixels are under-exposed, and the overall image is dark; moreover, because it requires Laplacian pyramid decomposition and reconstruction, the method is computationally expensive. Similarly, the algorithm of Wei Zhang et al. renders the garage area dark and lacks detail. Subjectively, the fused image obtained with the technical solution of the invention has good overall tonal balance, achieves particularly good fusion in the dark regions, and has good colour fidelity. Because fusion is performed by directly multiplying the weight maps with the original multi-exposure image sequence, no complex image decomposition is needed and the computational complexity of the algorithm is low. On the objective indicators, Table 1 shows that the fused image generated by the proposed algorithm scores higher than the other two algorithms on the mean, colour entropy, information entropy and standard deviation indicators, consistent with the subjective visual evaluation. This shows that the proposed algorithm outperforms the other two algorithms in extracting and processing detail information.
In practical applications, to obtain the best fusion results, the parameters of the algorithm are set as follows: ε ≤ 10^-12, t in the range 0.05 to 0.2, sigma_s = 60 and sigma_r = 2. In the experiments above, ε = 10^-12 and t = 0.1. Taking multi-exposure image sequences as the experimental subject and following the steps described above with these parameter values, fused images with good visual quality are obtained. The experimental results show that the proposed algorithm performs well on both subjective visual and objective quantitative criteria.
Claims (3)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201510262164.6A CN104881854B (en) | 2015-05-20 | 2015-05-20 | High dynamic range images fusion method based on gradient and monochrome information |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN104881854A CN104881854A (en) | 2015-09-02 |
| CN104881854B true CN104881854B (en) | 2017-10-31 |
Family
ID=53949339
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201510262164.6A Expired - Fee Related CN104881854B (en) | 2015-05-20 | 2015-05-20 | High dynamic range images fusion method based on gradient and monochrome information |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN104881854B (en) |
Families Citing this family (37)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105744159B (en) * | 2016-02-15 | 2019-05-24 | 努比亚技术有限公司 | An image synthesis method and device |
| CN106169182B (en) * | 2016-05-25 | 2019-08-09 | 西安邮电大学 | A method for synthesizing multiple images with different exposures |
| CN106251365A (en) * | 2016-07-22 | 2016-12-21 | 北京邮电大学 | Many exposure video fusion method and device |
| CN106355569A (en) * | 2016-08-29 | 2017-01-25 | 努比亚技术有限公司 | Image generating device and method thereof |
| CN107993189B (en) * | 2016-10-27 | 2021-06-18 | 瑞芯微电子股份有限公司 | Image tone dynamic adjustment method and device based on local blocking |
| CN108713318A (en) * | 2016-10-31 | 2018-10-26 | 华为技术有限公司 | A kind of processing method and equipment of video frame |
| CN106488150B (en) * | 2016-11-25 | 2019-04-19 | 阿依瓦(北京)技术有限公司 | The system for generating high dynamic range images based on Heterogeneous Computing |
| CN106780463B (en) * | 2016-12-15 | 2019-07-05 | 华侨大学 | It is a kind of to expose fused image quality appraisal procedures with reference to entirely more |
| CN108205796B (en) * | 2016-12-16 | 2021-08-10 | 大唐电信科技股份有限公司 | Multi-exposure image fusion method and device |
| CN106920221B (en) * | 2017-03-10 | 2019-03-26 | 重庆邮电大学 | Take into account the exposure fusion method that Luminance Distribution and details are presented |
| CN107424124B (en) * | 2017-03-31 | 2020-03-17 | 北京臻迪科技股份有限公司 | Image enhancement method and device |
| CN107220956A (en) * | 2017-04-18 | 2017-09-29 | 天津大学 | A kind of HDR image fusion method of the LDR image based on several with different exposures |
| EP3418972A1 (en) * | 2017-06-23 | 2018-12-26 | Thomson Licensing | Method for tone adapting an image to a target peak luminance lt of a target display device |
| CN108288253B (en) * | 2018-01-08 | 2020-11-27 | 厦门美图之家科技有限公司 | HDR image generation method and device |
| CN108090886B (en) * | 2018-01-11 | 2022-04-22 | 南京大学 | High dynamic range infrared image display and detail enhancement method |
| CN108717690B (en) * | 2018-05-21 | 2022-03-04 | 电子科技大学 | A method for synthesizing high dynamic range images |
| CN109001230A (en) * | 2018-05-28 | 2018-12-14 | 中兵国铁(广东)科技有限公司 | Welding point defect detection method based on machine vision |
| CN109146798A (en) * | 2018-07-10 | 2019-01-04 | 西安天盈光电科技有限公司 | image detail enhancement method |
| CN109166076B (en) * | 2018-08-10 | 2020-08-07 | 影石创新科技股份有限公司 | Multi-camera splicing brightness adjusting method and device and portable terminal |
| CN109712091B (en) * | 2018-12-19 | 2021-03-23 | Tcl华星光电技术有限公司 | Image processing method, device and electronic device |
| CN110211077B (en) * | 2019-05-13 | 2021-03-09 | 杭州电子科技大学上虞科学与工程研究院有限公司 | Multi-exposure image fusion method based on high-order singular value decomposition |
| CN110355465B (en) * | 2019-08-21 | 2023-07-25 | 广东福维德焊接股份有限公司 | High dynamic vision system based on industrial camera and synthesis method |
| CN110580696A (en) * | 2019-08-30 | 2019-12-17 | 金陵科技学院 | A Detail Preserving Method for Fast Fusion of Multi-exposure Images |
| CN110738627B (en) * | 2019-09-04 | 2022-04-26 | Tcl华星光电技术有限公司 | Multi-exposure image fusion device and multi-exposure image fusion method |
| CN111080565B (en) * | 2019-12-11 | 2023-07-18 | 九江学院 | Exposure fusion method, device and storage medium based on image quality variation rule |
| CN112241953B (en) * | 2020-10-22 | 2023-07-21 | 江苏美克医学技术有限公司 | Sample image fusion method and device based on multi-focus image fusion and HDR algorithm |
| CN113129391B (en) * | 2021-04-27 | 2023-01-31 | 西安邮电大学 | Multi-exposure fusion method based on multi-exposure image feature distribution weight |
| CN113222869B (en) * | 2021-05-06 | 2024-03-01 | 杭州海康威视数字技术股份有限公司 | An image processing method |
| CN113327534B (en) * | 2021-05-20 | 2023-02-28 | Tcl华星光电技术有限公司 | Data compensation method and data compensation device for display panel |
| CN113240609A (en) * | 2021-05-26 | 2021-08-10 | Oppo广东移动通信有限公司 | Image denoising method and device and storage medium |
| CN113674186B (en) * | 2021-08-02 | 2024-10-22 | 中国科学院长春光学精密机械与物理研究所 | Image synthesis method and device based on self-adaptive adjustment factors |
| CN113962914B (en) * | 2021-09-28 | 2026-01-13 | 影石创新科技股份有限公司 | Image fusion method, device, computer equipment and storage medium |
| CN115278090B (en) * | 2022-09-28 | 2022-12-06 | 江苏游隼微电子有限公司 | Single-frame four-exposure WDR processing method based on line exposure |
| CN115760663B (en) * | 2022-11-14 | 2023-09-22 | 辉羲智能科技(上海)有限公司 | Method for synthesizing high dynamic range images from low dynamic range images based on multiple frames and multiple exposures |
| CN115731146B (en) * | 2022-12-26 | 2023-05-12 | 中国人民解放军战略支援部队航天工程大学 | Multi-exposure image fusion method based on color gradient histogram feature optical flow estimation |
| CN117764931A (en) * | 2023-12-06 | 2024-03-26 | 中国人民解放军63811部队 | Optimization method of multi-source live images for space launch missions |
| CN118799316B (en) * | 2024-09-12 | 2025-03-14 | 宁德时代新能源科技股份有限公司 | Tab detection method, device, electronic equipment, imaging system and storage medium |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102063712A (en) * | 2010-11-04 | 2011-05-18 | 北京理工大学 | Multi-exposure image fusion method based on sub-band structure |
| CN103247036A (en) * | 2012-02-10 | 2013-08-14 | 株式会社理光 | Multiple-exposure image fusion method and device |
| CN103473749A (en) * | 2013-01-09 | 2013-12-25 | 深圳信息职业技术学院 | Method and apparatus based on total variation image fusion |
| CN104077759A (en) * | 2014-02-28 | 2014-10-01 | 西安电子科技大学 | Multi-exposure image fusion method based on color perception and local quality factors |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20130031574A (en) * | 2011-09-21 | 2013-03-29 | 삼성전자주식회사 | Image processing method and image processing apparatus |
2015
- 2015-05-20: CN application CN201510262164.6A filed; granted as patent CN104881854B (status: not active, Expired - Fee Related)
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111212241A (en) * | 2020-01-13 | 2020-05-29 | HoloMatic Technology (Beijing) Co., Ltd. | Automatic exposure control method for high-speed autonomous driving based on image gradient and entropy fusion |
| CN111212241B (en) * | 2020-01-13 | 2021-06-18 | HoloMatic Technology (Beijing) Co., Ltd. | Automatic exposure control method for high-speed autonomous driving based on image gradient and entropy fusion |
Also Published As
| Publication number | Publication date |
|---|---|
| CN104881854A (en) | 2015-09-02 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN104881854B (en) | High dynamic range images fusion method based on gradient and monochrome information | |
| Ram Prabhakar et al. | Deepfuse: A deep unsupervised approach for exposure fusion with extreme exposure image pairs | |
| Kou et al. | Multi-scale exposure fusion via gradient domain guided image filtering | |
| Raman et al. | Bilateral Filter Based Compositing for Variable Exposure Photography. | |
| CN107845128B (en) | Multi-exposure high-dynamic image reconstruction method with multi-scale detail fusion | |
| Ma et al. | Multi-exposure image fusion: A patch-wise approach | |
| Nejati et al. | Fast exposure fusion using exposedness function | |
| Nair et al. | Color image dehazing using surround filter and dark channel prior | |
| CN106127718B (en) | A multi-exposure image fusion method based on wavelet transform | |
| CN107220956A (en) | An HDR image fusion method based on several LDR images with different exposures | |
| Sidike et al. | Adaptive trigonometric transformation function with image contrast and color enhancement: Application to unmanned aerial system imagery | |
| Park et al. | Generation of high dynamic range illumination from a single image for the enhancement of undesirably illuminated images | |
| CN112070692B (en) | A single backlight image enhancement method based on virtual exposure | |
| Huo et al. | Fast fusion-based dehazing with histogram modification and improved atmospheric illumination prior | |
| CN105959510A (en) | Rapid video defogging method | |
| Huang et al. | A color multi-exposure image fusion approach using structural patch decomposition | |
| CN106157305A (en) | Rapid high-dynamic-range image generation based on local features | |
| Singh et al. | Anisotropic diffusion for details enhancement in multiexposure image fusion | |
| Kao | High dynamic range imaging by fusing multiple raw images and tone reproduction | |
| Singh et al. | Weighted least squares based detail enhanced exposure fusion | |
| Vanmali et al. | Low complexity detail preserving multi-exposure image fusion for images with balanced exposure | |
| Han et al. | Automatic illumination and color compensation using mean shift and sigma filter | |
| Wang et al. | Single low-light image brightening using learning-based intensity mapping | |
| Chen et al. | Low‐light image enhancement based on exponential Retinex variational model | |
| Le et al. | Fused logarithmic transform for contrast enhancement |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| EXSB | Decision made by sipo to initiate substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2017-10-31; Termination date: 2021-05-20 | |