CN104616273B - A multi-exposure image fusion method based on the Laplacian pyramid - Google Patents
A multi-exposure image fusion method based on the Laplacian pyramid
- Publication number
- CN104616273B, CN201510038868.5A, CN201510038868A
- Authority
- CN
- China
- Prior art keywords
- image
- laplacian pyramid
- layer
- pyramid
- decomposition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Image Processing (AREA)
Abstract
Description
Technical Field
The present invention relates to the fields of computer vision and image/video processing, with an emphasis on image processing.
Background Art
Dynamic range is the ratio between the maximum and minimum values of a physical quantity. For a real scene, it is the ratio of the radiance of the brightest point to that of the darkest point. A high dynamic range image covers a very wide dynamic range, reproduces the natural scene well, retains the detail in the scene, and gives viewers a realistic visual experience. High dynamic range imaging therefore plays an important role in image and video applications. When a real scene has a high dynamic range, the captured image always contains overexposed and underexposed regions even if the camera exposure is set correctly. High dynamic range imaging technology obtains high dynamic range images and displays these detail-rich images on low dynamic range devices.
The high dynamic range imaging techniques most commonly used at present are multi-exposure image fusion methods, which fall into three classes: direct fusion, region-based fusion, and layer-based fusion. Direct fusion merges the input images directly according to their weight maps; the quality of the weight maps is the key to a high-quality result, and obtaining high-quality weight maps makes the method very complicated. Region-based fusion first divides the input images into a number of regions, then selects the best block for each region across all input images, and finally fuses the selected regions; because both the block selection and the fusion require pixel-level operations, the algorithm runs slowly. Layer-based fusion decomposes the input images with a multi-scale framework, processes the decomposed layers, and then reconstructs the result. The common layer-based methods are fusion based on the Laplacian pyramid and fusion based on subbands. Subband-based fusion can produce high-quality high dynamic range images with rich detail, but obtaining its gain-control map is complicated, so its complexity is high and its real-time performance poor. Multi-exposure fusion based on the Laplacian pyramid is currently the most widely used way to obtain high dynamic range images, but the traditional method loses image detail during multi-scale decomposition and reconstruction and does not compensate for the lost information, so the final high dynamic range image lacks detail. It is therefore important to propose a new Laplacian-pyramid-based multi-exposure fusion method that yields high dynamic range images with rich detail and high sharpness. Figure 1 shows the traditional fusion framework based on the Laplacian pyramid.
The main purpose of the present invention is to realize a fusion method with good real-time performance that generates high dynamic range images with rich detail and high sharpness. To achieve this goal, a new multi-exposure image fusion method based on the Laplacian pyramid decomposition framework is proposed.
Summary of the Invention
To overcome the shortcomings of the background art, the present invention designs a multi-exposure image fusion method based on Laplacian pyramid decomposition, so that the image generated by fusing the multi-exposure images contains rich detail and high sharpness.
The technical solution of the present invention is a multi-exposure image fusion method based on Laplacian pyramid decomposition. To address the drawback that the traditional Laplacian-pyramid fusion method loses image detail during multi-scale decomposition and reconstruction, the method enhances the detail of the input images to a certain degree, offsetting part of the detail loss caused by pyramid downsampling and upsampling, and thereby achieves the purpose of the invention. The method comprises:
Step 1: Select images of the same scene captured with different exposures, each of which contains some detail of the scene and all of which have the same size; resize the images so that their dimensions satisfy the conditions for pyramid decomposition;
Step 2: Perform detail enhancement on each selected image;
Step 3: Perform Laplacian pyramid decomposition on each enhanced image;
Step 4: Obtain the weight map of each image selected in Step 1;
Step 5: Perform the same Laplacian pyramid decomposition as in Step 3 on each weight map;
Step 6: Fuse the decomposed images from Step 3 using the decomposed weight maps from Step 5;
Step 7: Reconstruct the fused images to obtain the final image.
Further, in Step 2 a spatial-domain detail enhancement method is applied to each selected image, as shown in the following formula:
O(x,y) = I(x,y) + α × [I(x,y) - LP(I(x,y))]
where O(x,y) is the output image after detail enhancement, I(x,y) is the original input image, LP(I(x,y)) is the smoothed (low-pass filtered) input image, and α is the weight factor controlling the amount of detail enhancement.
Further, Step 3 decomposes each enhanced image with the Laplacian pyramid built from its Gaussian pyramid, as shown in the following formulas:
Lap_l = G_l - expand(G_{l+1}),  0 ≤ l < n
Lap_n = G_n
where G_l is the l-th level of the Gaussian pyramid of image G_0, expand(G_{l+1}) is the (l+1)-th Gaussian level after one step of interpolation (upsampling), n is the total number of Gaussian pyramid levels, and Lap_l is the l-th level of the Laplacian pyramid.
Further, Step 4 determines the weight map of each image according to three quality-evaluation measures: detail, saturation, and exposure.
Further, Step 6 comprises:
Step 6-1: For each image, fuse each level of its Laplacian pyramid with the same level of its decomposed weight map, obtaining one fused image per level;
Step 6-2: Fuse the level-wise results of all images at the same level, obtaining n fused images, where n is the number of levels of the Laplacian pyramid decomposition.
Further, Step 7 comprises:
Step 7-1: Compute the image feature C of each fused image obtained in Step 6 from the per-pixel overexposure values, where w and h are the width and height of the image, and e_r, e_g and e_b are the overexposure of the pixel at (w,h) in the r, g and b channels respectively;
Step 7-2: Construct an enhancement factor for each level of the fused pyramid from the level index l, which denotes the l-th level of the fused Laplacian pyramid, and the image feature C;
Step 7-3: Reconstruct the image by combining the enhancement factors with the fused pyramid levels, where Res is the reconstructed result image, l is the level of the Laplacian pyramid, n is the total number of pyramid levels, and Lap_{l,l} is the fused Laplacian pyramid image of level l after being interpolated (upsampled) l times.
The present invention provides a multi-exposure image fusion method based on Laplacian pyramid decomposition. To address the drawback that the traditional Laplacian-pyramid fusion method loses image detail during multi-scale decomposition and reconstruction, the method enhances the detail of the input images to a certain degree, offsetting part of the detail loss caused by pyramid downsampling and upsampling, so that the image generated by fusing the multi-exposure images contains rich detail and high sharpness.
Brief Description of the Drawings
Figure 1 is the flow chart of the traditional fusion method based on the Laplacian pyramid;
Figure 2 is the flow chart of the multi-exposure image fusion method based on Laplacian pyramid decomposition according to the present invention.
Detailed Description
The main idea of this new multi-exposure image fusion method is to enhance the detail of the input images to a certain degree with an image detail enhancement method, offsetting part of the detail loss caused by pyramid downsampling and upsampling, and to use a new reconstruction method that enhances the detail and sharpness of the image once more. The present invention mainly comprises the following steps:
A. Select a multi-exposure image sequence of the same scene as input images, and resize them so that their dimensions satisfy the conditions for pyramid decomposition;
B. Enhance the detail of each input image;
C. Perform Laplacian pyramid decomposition on the enhanced input images;
D. Obtain the weight map of each original image;
E. Fuse the images decomposed in step C using the weight maps;
F. Reconstruct the pyramid.
The input images selected in step A are images of the same scene captured with different exposures. Each image contains some detail of the scene, all images have the same size, and there is no misalignment between them. The width and height of the images are adjusted to powers of two while keeping the loss of image size minimal.
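As an illustration of this size adjustment (the patent does not prescribe a particular routine), the following sketch rounds each dimension down to the nearest power of two; the rounding rule, the function names and the use of OpenCV are assumptions.

```python
import cv2
import numpy as np

def largest_power_of_two(x: int) -> int:
    """Largest power of two that does not exceed x."""
    return 1 << (x.bit_length() - 1)

def resize_for_pyramid(img: np.ndarray) -> np.ndarray:
    """Resize an image so that its width and height are powers of two,
    losing as little of the original size as possible."""
    h, w = img.shape[:2]
    new_w, new_h = largest_power_of_two(w), largest_power_of_two(h)
    return cv2.resize(img, (new_w, new_h), interpolation=cv2.INTER_AREA)
```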
Step B enhances the detail of the images with a spatial-domain detail enhancement technique, which enhances the image detail to a certain extent. The detail enhancement algorithm is given by formula (1).
O(x,y) = I(x,y) + α × [I(x,y) - LP(I(x,y))]    (1)
where I(x,y) is the original input image, O(x,y) is the output image after detail enhancement, LP(I(x,y)) is the smoothed (low-pass filtered) input image, and α is the weight factor controlling the amount of detail enhancement.
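A minimal sketch of formula (1), assuming the smoothing filter LP is a Gaussian blur, the input is a float image in [0, 1], and α = 0.5 (the patent fixes none of these choices):

```python
import cv2
import numpy as np

def enhance_detail(img: np.ndarray, alpha: float = 0.5, ksize: int = 5) -> np.ndarray:
    """Formula (1): O = I + alpha * (I - LP(I)).
    LP is taken here as a Gaussian blur; img is float32 in [0, 1]."""
    smoothed = cv2.GaussianBlur(img, (ksize, ksize), 0)
    return np.clip(img + alpha * (img - smoothed), 0.0, 1.0)
```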
Step C decomposes each input image of the differently exposed set into a Laplacian pyramid. Laplacian pyramids are widely used in image analysis and image processing; they decompose an image into multiple levels of different resolutions. The Laplacian pyramid is built on the Gaussian pyramid. The Laplacian pyramid of an image G_0 is constructed as shown in formula (2).
Lap_l = G_l - expand(G_{l+1}),  0 ≤ l < n;  Lap_n = G_n    (2)
where G_l is the l-th level of the Gaussian pyramid of image G_0, expand(G_{l+1}) is the (l+1)-th Gaussian level after one step of interpolation (upsampling), n is the total number of Gaussian pyramid levels, and Lap_l is the l-th level of the Laplacian pyramid. The total number of levels n is the base-2 logarithm of the smaller of the adjusted image's width and height, minus one; for example, if the width is the smaller dimension and equals 32, then n = 4.
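The decomposition in formula (2) can be sketched with OpenCV's pyramid primitives; the library choice is an assumption, and the image is assumed to be float32 so that the band-pass residuals can hold negative values:

```python
import cv2
import numpy as np

def build_laplacian_pyramid(img: np.ndarray, n_levels: int):
    """Formula (2): Lap_l = G_l - expand(G_{l+1}) for l < n, and Lap_n = G_n.
    Returns [Lap_0, ..., Lap_{n-1}, G_n], finest level first."""
    gaussian = [img]
    for _ in range(n_levels):
        gaussian.append(cv2.pyrDown(gaussian[-1]))
    laplacian = []
    for l in range(n_levels):
        size = (gaussian[l].shape[1], gaussian[l].shape[0])
        expanded = cv2.pyrUp(gaussian[l + 1], dstsize=size)
        laplacian.append(gaussian[l] - expanded)
    laplacian.append(gaussian[n_levels])  # Lap_n = G_n
    return laplacian
```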
Step D obtains the weight map of each original input image. A weight map reflects, to some extent, the quality of the pixels in an image; it is a grayscale image, and regions with higher values indicate higher image quality in those regions. In the present invention the quality of an image is evaluated by its detail, its saturation, and its well-exposedness. Obtaining the weight map further comprises:
D1. Compute the detail measure. The gradient of each pixel of the grayscale image is used as the detail measure. The detail D(x,y) of the image at (x,y) is computed as shown in formulas (3) and (4).
ΔI_x = |∂I(x,y)/∂x|,  ΔI_y = |∂I(x,y)/∂y|    (3)
where I(x,y) is the gray value of the pixel at (x,y), ΔI_x is the horizontal gradient magnitude at (x,y), and ΔI_y is the vertical gradient magnitude at (x,y).
D(x,y) = ΔI_x + ΔI_y    (4)
D2. Compute the saturation. As the exposure time increases, the color saturation of the image gradually decreases until it disappears completely, so saturation is a necessary evaluation factor for image quality. The saturation factor S is obtained as the standard deviation of the R, G and B channels, as shown in formula (5).
S(x,y) = sqrt(((R - μ)^2 + (G - μ)^2 + (B - μ)^2) / 3),  μ = (R + G + B) / 3    (5)
where R, G and B are the pixel values of the pixel at (x,y) in the red, green and blue channels, and μ is their mean.
D3. Compute the well-exposedness. Overexposed regions of an image generally appear white, with pixel values equal to 1, and underexposed regions appear black, with pixel values equal to 0. Pixels whose values are close to neither 1 nor 0 should therefore be kept, and the closer a pixel value is to 0.5, the higher the weight it should receive. The exposure measure E at (x,y) is computed as shown in formula (6).
E(x,y) = exp(-(r - 0.5)^2 / (2θ^2)) × exp(-(g - 0.5)^2 / (2θ^2)) × exp(-(b - 0.5)^2 / (2θ^2))    (6)
where r, g and b are the pixel values of the pixel at (x,y) in the red, green and blue channels, and the parameter θ = 0.2 controls the width of the weight function.
D4. Combine the measures to generate the weight map. The three measures of each pixel are combined by multiplication to form the final weight. The weight function of the k-th image is given by formulas (7) and (8).
W_k(x,y) = D_k(x,y) × S_k(x,y) × E_k(x,y)    (7)
where D_k(x,y), S_k(x,y) and E_k(x,y) are the detail, saturation and well-exposedness of the k-th image at (x,y). To obtain consistent results, the weight maps are normalized so that, at every position, the weights of all input images sum to one. The normalization is given by formula (8).
W_nor,k(x,y) = W_k(x,y) / Σ_{k'=1..n} W_k'(x,y)    (8)
where n is the number of input images and W_nor,k(x,y) is the normalized weight map.
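A compact sketch of formulas (3) to (8); beyond what the text states, the discrete gradient (np.gradient) and the small constant added to the denominator are assumptions:

```python
import cv2
import numpy as np

def weight_maps(images, theta=0.2):
    """Formulas (3)-(8) for a list of float32 RGB images in [0, 1]:
    detail x saturation x well-exposedness, normalized across images."""
    raw = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
        # (3)-(4): sum of horizontal and vertical gradient magnitudes
        detail = np.abs(np.gradient(gray, axis=1)) + np.abs(np.gradient(gray, axis=0))
        # (5): standard deviation over the R, G, B channels
        saturation = img.std(axis=2)
        # (6): Gaussian curve around 0.5, multiplied over the channels
        exposedness = np.prod(np.exp(-((img - 0.5) ** 2) / (2 * theta ** 2)), axis=2)
        raw.append(detail * saturation * exposedness)             # (7)
    stack = np.stack(raw, axis=0)
    stack /= stack.sum(axis=0, keepdims=True) + 1e-12             # (8)
    return list(stack)
```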
Step E fuses the decomposed images using the weight maps. The weight map of each input image is first decomposed with a Gaussian pyramid into the same number of levels as the input image. The image fused with the decomposed weight maps contains rich detail, high saturation and moderate exposure. Fusion combines each level of the Laplacian pyramid of an input image with the same level of the Gaussian pyramid of that image's weight map, and then combines all of the weighted images of the same level, as shown in formula (9).
Lap(F)_l = Σ_k G_l(W_k) × Lap_l(I_k)    (9)
The l-th level of the fused Laplacian pyramid is the weighted average of the l-th Laplacian levels of all input images, where Lap(F)_l is the l-th level of the fused Laplacian pyramid, G_l(W_k) is the l-th level of the Gaussian pyramid of the weight map of the k-th image, and Lap_l(I_k) is the l-th level of the Laplacian pyramid of the k-th input image.
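A sketch of formula (9), reusing build_laplacian_pyramid from the sketch above; the helper names are assumptions:

```python
import cv2
import numpy as np

def build_gaussian_pyramid(img, n_levels):
    """[G_0, ..., G_n] of a single-channel weight map."""
    pyr = [img]
    for _ in range(n_levels):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def fuse_pyramids(images, weights, n_levels):
    """Formula (9): per-level weighted average of the Laplacian pyramids,
    weighted by the Gaussian pyramids of the normalized weight maps."""
    lap_pyrs = [build_laplacian_pyramid(img, n_levels) for img in images]
    w_pyrs = [build_gaussian_pyramid(w, n_levels) for w in weights]
    fused = []
    for l in range(n_levels + 1):
        level = sum(w_pyrs[k][l][..., None] * lap_pyrs[k][l]
                    for k in range(len(images)))
        fused.append(level)
    return fused
```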
Step F reconstructs the pyramid. Reconstruction of the Laplacian pyramid interpolates (upsamples) each level of the fused Laplacian pyramid until, after a number of upsampling steps, it has the same size as the original input images, and then adds the levels pixel by pixel; the number of upsampling steps applied to a level equals the level's position in the Laplacian pyramid. The new reconstruction method proposed in the present invention makes the reconstructed image contain rich detail and high sharpness. The reconstruction further comprises:
F1. Compute the feature of the fused image. Compared with the original input images, the quality of the fused image is greatly improved, not only in overall detail but also in saturation and exposure. Levels 0 to n-1 of the Laplacian pyramid are residual images, while level n, the highest level, is level n of the Gaussian pyramid and to some extent shares the characteristics of the original input image; likewise, level n of the fused Laplacian pyramid reflects, to some degree, characteristics of the final reconstructed image. In the present invention, the degree of overexposure of the fused Laplacian pyramid is computed to control the amount of enhancement, so as to avoid regions that become overexposed through excessive enhancement. The fused image feature is computed as shown in formula (10).
where w and h are the width and height of the image, and e_r, e_g and e_b are the overexposure of the pixel at (w,h) in the red, green and blue channels, computed from the pixel values r, g and b of that pixel in the respective channels as shown in formulas (11), (12) and (13).
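Formulas (10) to (13) are not reproduced above. Purely to illustrate the idea of measuring the overexposure of the top pyramid level, the feature C could be approximated as the fraction of overexposed channel values; the threshold and the averaging are assumptions, not the patent's formulas:

```python
import numpy as np

def overexposure_feature(top_level, threshold=0.9):
    """Hypothetical stand-in for formulas (10)-(13): average fraction of
    overexposed channel values in the top (image) level of the fused pyramid."""
    e = (top_level > threshold).astype(np.float32)   # per-pixel e_r, e_g, e_b
    return float(e.mean())
```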
F2. Construct the enhancement factors. The enhancement factor is constructed from the feature of the fused image, which controls the exposure of the result, together with additional enhancement terms. Because the levels of the fused pyramid have been filtered to different degrees, each level has lost sharpness and detail to a different extent, and during reconstruction the levels must be filtered again, causing further loss of detail and sharpness. The present invention therefore assigns each level a weight according to its sharpness and detail: high levels, whose loss of detail and sharpness is more severe, receive lower weights, while low levels, which are richer in detail and sharper, receive higher weights. The enhancement factor of the fused Laplacian pyramid image at level l is constructed as shown in formula (14), where l denotes the l-th level of the fused Laplacian pyramid and C is the image feature.
F3. Reconstruct the image with the enhancement factors. Reconstruction interpolates (upsamples) each level of the Laplacian pyramid a different number of times until it reaches the size of the original image, then combines the enhancement factor of each level with the upsampled image of that level and reconstructs the result, as shown in formula (15), where Res is the reconstructed result image, l is the level of the Laplacian pyramid, n is the total number of pyramid levels, and Lap_{l,l} is the fused Laplacian pyramid image of level l after being interpolated (upsampled) l times.
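Formulas (14) and (15) are likewise not reproduced above, so the following sketch only shows the structure of the reconstruction: each fused level is upsampled to full resolution, scaled by a per-level factor, and the results are summed. The per-level factors beta would come from formula (14); passing none (all ones) gives the plain reconstruction.

```python
import cv2
import numpy as np

def reconstruct(fused_pyramid, beta=None):
    """Upsample each fused level back to full resolution, scale it by a
    per-level factor, and sum (the structure of formula (15)).
    fused_pyramid: [Lap(F)_0, ..., Lap(F)_n], float32 in [0, 1], finest first.
    beta: per-level enhancement factors beta_0..beta_n; None means all ones."""
    n = len(fused_pyramid) - 1
    if beta is None:
        beta = [1.0] * (n + 1)
    result = np.zeros_like(fused_pyramid[0], dtype=np.float32)
    for l, level in enumerate(fused_pyramid):
        up = level
        for _ in range(l):                       # upsample l times (Lap_{l,l})
            h, w = up.shape[:2]
            up = cv2.pyrUp(up, dstsize=(2 * w, 2 * h))
        result += beta[l] * up
    return np.clip(result, 0.0, 1.0)
```

Under these assumptions the whole pipeline reads: resize and enhance the inputs, build their Laplacian pyramids and the Gaussian pyramids of the normalized weight maps, fuse level by level with fuse_pyramids, and feed the fused pyramid together with the enhancement factors into reconstruct.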
The flow chart of the present invention is shown in Figure 2.
For multi-exposure image fusion based on the Laplacian pyramid, the present invention proposes a new fusion method. Instead of the traditional pyramid reconstruction, a new reconstruction method is used: each level of the fused Laplacian pyramid is interpolated (upsampled) until it has the same size as the original image, enhancement factors are then applied to these images to strengthen their detail and sharpness, and finally the enhanced images are added together.
During reconstruction, the levels of the Laplacian pyramid are assigned different weights according to their sharpness and detail. The higher the level an image occupied in the Laplacian pyramid before upsampling, the less detail and sharpness it contains, so it should be assigned a lower weight; images from lower levels contain more detail and sharpness and should be assigned higher weights. This guarantees that the fused image contains high detail and sharpness. To keep the amount of enhancement moderate, the feature of the fused image is also folded into the enhancement factor. The highest level of the fused Laplacian pyramid is not a residual layer but an image layer, and to some extent it reflects the characteristics of the fused image; the present invention obtains the image feature by computing the degree of overexposure of this level. When the exposure is very high, the enhancement is reduced to avoid overexposure; when the exposure is low, the enhancement is moderately increased.
This new reconstruction method based on Laplacian pyramid decomposition adds enhancement factors during reconstruction, which ensures that the reconstructed high dynamic range image contains rich detail and high sharpness while also controlling the overexposure caused by excessive enhancement. The new multi-exposure fusion method thus strikes a balance between enhancement and its degree.
Compared with other methods, the method described in the present invention yields richer detail and higher sharpness with a satisfactory running speed. The experimental results of this patent are measured with conventional image quality criteria: saturation, contrast and sharpness. The method was compared with three other multi-exposure fusion methods for obtaining high dynamic range images; the results are shown in Tables 1, 2 and 3. Table 1 lists the saturation of the high dynamic range images obtained with the four fusion methods; the data show that the images produced by the present invention have the highest saturation, while the images produced by the other methods are under-saturated. Table 2 lists the contrast of the images; the result images of the present invention have the same contrast values as the method proposed by Mertens et al., so to some extent the results of the present invention can be considered to have the highest contrast. Table 3 lists the detail of the images; except for the first group of experiments, the result images of the present invention contain far more detail than the other results. The experimental results and the data analysis fully demonstrate the advantages of the present invention and agree with its motivation: improving image quality, especially image detail. The speed of the present invention is similar to that of comparable methods, so it has high practical value.
Table 1. Saturation of the images
Table 2. Contrast of the images
Table 3. Detail of the images
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510038868.5A CN104616273B (en) | 2015-01-26 | 2015-01-26 | A multi-exposure image fusion method based on the Laplacian pyramid |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104616273A CN104616273A (en) | 2015-05-13 |
CN104616273B true CN104616273B (en) | 2017-07-07 |
Family
ID=53150706
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510038868.5A Expired - Fee Related CN104616273B (en) | 2015-01-26 | 2015-01-26 | A multi-exposure image fusion method based on the Laplacian pyramid |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104616273B (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102063712A (en) * | 2010-11-04 | 2011-05-18 | 北京理工大学 | Multi-exposure image fusion method based on sub-band structure |
CN104182955A (en) * | 2014-09-05 | 2014-12-03 | 西安电子科技大学 | Image fusion method and device based on controllable pyramid transformation |
Non-Patent Citations (3)
- Mertens T. et al., "Exposure Fusion: A Simple and Practical Alternative to High Dynamic Range Photography," Computer Graphics Forum, vol. 28, no. 1, Mar. 31, 2009, pp. 161-171.
- Wu Xiaojun, "High Dynamic Range Image Generation Based on Multi-Exposure Images," China Master's Theses Full-text Database, Information Science and Technology, Mar. 15, 2012, abstract and sections 2.2, 3.1-3.2, 4.1.
- Zhang Qiuju et al., "Image Enhancement Based on Adaptive Smoothing Filtering," Modern Electronics Technique, vol. 32, no. 22, Nov. 15, 2013, section 2.
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3820142A4 (en) * | 2019-08-14 | 2021-09-08 | SZ DJI Technology Co., Ltd. | Image processing method and apparatus, image photographing device, and mobile terminal |
Also Published As
Publication number | Publication date |
---|---|
CN104616273A (en) | 2015-05-13 |
Legal Events

Date | Code | Title | Description
---|---|---|---
 | C06 / PB01 | Publication |
 | C10 / SE01 | Entry into force of request for substantive examination |
 | GR01 | Patent grant |
 | CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2017-07-07; Termination date: 2020-01-26