CN104715451B - Image seamless fusion method based on consistent optimization of color and transparency - Google Patents
Image seamless fusion method based on consistent optimization of color and transparency
- Publication number
- CN104715451B CN201510107663.8A CN201510107663A
- Authority
- CN
- China
- Prior art keywords
- fusion
- transparency
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Links
- 238000007500 overflow downdraw method Methods 0.000 title claims abstract description 17
- 230000004927 fusion Effects 0.000 claims abstract description 101
- 238000000034 method Methods 0.000 claims abstract description 34
- 238000005457 optimization Methods 0.000 claims abstract description 29
- 238000004364 calculation method Methods 0.000 claims description 8
- 238000009499 grossing Methods 0.000 claims description 5
- 230000010354 integration Effects 0.000 claims 1
- 238000010606 normalization Methods 0.000 claims 1
- 238000004422 calculation algorithm Methods 0.000 abstract description 13
- 230000011218 segmentation Effects 0.000 abstract description 12
- 230000000694 effects Effects 0.000 abstract description 7
- 238000012545 processing Methods 0.000 abstract description 7
- 230000006870 function Effects 0.000 description 37
- 230000007704 transition Effects 0.000 description 9
- 238000010586 diagram Methods 0.000 description 8
- 238000013178 mathematical model Methods 0.000 description 5
- 230000008859 change Effects 0.000 description 4
- 238000005315 distribution function Methods 0.000 description 3
- 238000002474 experimental method Methods 0.000 description 3
- 238000002156 mixing Methods 0.000 description 3
- 230000008569 process Effects 0.000 description 3
- 230000009466 transformation Effects 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000010367 cloning Methods 0.000 description 1
- 230000000052 comparative effect Effects 0.000 description 1
- 238000005094 computer simulation Methods 0.000 description 1
- 238000000354 decomposition reaction Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000001914 filtration Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 230000002452 interceptive effect Effects 0.000 description 1
- 238000012423 maintenance Methods 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 230000035772 mutation Effects 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 230000035945 sensitivity Effects 0.000 description 1
- 238000000926 separation method Methods 0.000 description 1
Landscapes
- Image Processing (AREA)
- Image Generation (AREA)
Abstract
The invention discloses an image seamless fusion method based on consistent optimization of color and transparency, belonging to the field of image and graphics processing. The technical solution is as follows: first, in the color channel, the fusion labels of the foreground and background images in the overlapping region are computed with a graph-cut energy optimization algorithm; second, in the transparency channel, an energy function that follows the distribution of the gradient differences between foreground and background is constructed and solved for the transparency; finally, the images are seamlessly blended according to the transparency. The method obtains a better segmentation result, which in turn improves the transparency distribution of the foreground and background images in the overlapping region and makes the transparency values more uniform. Compared with other fusion methods, the invention produces a better fusion result.
Description
Technical Field
The invention belongs to the field of image and graphics processing, and in particular relates to an image seamless fusion method based on consistent optimization of color and transparency.
Background Art
Image fusion refers to combining two or more images into a new image. It plays an important role in virtual reality, digital media, and design applications. Image fusion techniques can be divided into three levels: pixel-level fusion, feature-level fusion, and decision-level fusion. Pixel-level fusion processes the source image data directly and is the most basic tool in image fusion. As an important application of pixel-level fusion, seamless image fusion is at the core of image and video editing.
At present, there has been a large amount of research on seamless image fusion, which falls mainly into two categories. The first is multi-resolution fusion. (Zhu Ruihui, Wan Min, Fan Guobin. Image fusion method based on pyramid transformation [J]. Computer Simulation, 2008, 24(12): 178-180) decomposes the original images into multiple resolutions with the Laplacian pyramid and then fuses the decomposed layers using region-based feature measurements. (Gu Xiafang. Discussion and comparison of image fusion methods based on wavelet transform [J]. China West Science and Technology, 2009, 8(26): 21-22) applies multi-level wavelet decomposition to the source images to obtain a series of sub-images, performs feature selection in the transform domain to create a fused image, and finally reconstructs the fused image by the inverse transform. Methods of this type decompose the original images into multiple resolutions and recompose the decomposed multi-scale images, achieving a fusion transition over the whole image, which reduces the sensitivity to registration errors to a certain extent; however, repeated filtering attenuates the signal, so the final composite tends to be darker or blurred. The second category is gradient-domain fusion, e.g. (Pérez P, Gangnet M, Blake A. Poisson image editing [C] // ACM Transactions on Graphics (TOG). ACM, 2003, 22(3): 313-318), which uses the image gradient field to formulate image fusion as the minimization of an objective function and realizes the fusion by solving a Poisson equation. This Poisson-equation-based method has to solve a large sparse linear system, which is computationally expensive, memory-hungry, and not easy to implement. (Farbman Z, Hoffer G, Lipman Y, et al. Coordinates for instant image cloning [C] // ACM Transactions on Graphics (TOG). ACM, 2009, 28(3): 67) proposes a fast seamless fusion algorithm based on mean-value coordinates, which interpolates the boundary pixels of the fusion region using the mean-value coordinates of every position inside it to obtain the color of each pixel in the region. The method is computationally efficient, easy to implement, memory-saving, and capable of real-time fusion. Gradient-field methods of this kind fuse in the color channel, so after the foreground object is blended into the background image its hue shifts with the background.
When the background colors and textures of the source images differ greatly, seamless image fusion can be extended to collage applications, and related work exists. Rother et al. (Rother C, Bordeaux L, Hamadi Y, et al. Autocollage [C] // ACM Transactions on Graphics (TOG). ACM, 2006, 25(3): 847-852) proposed a Poisson collage method based on the transparency channel, which solves for the transparency of the source images in the overlapping region by minimizing an energy function and then blends them according to the transparency. However, its segmentation quality is poor: important information in the source images may be lost in the output collage, the integrity of the foreground objects cannot be guaranteed, and the transition across the seam of the composite is not smooth enough.
Summary of the Invention
In order to overcome the above defects of the prior art, the object of the present invention is to provide an image seamless fusion method based on consistent optimization of color and transparency. The method obtains a better segmentation result, which in turn improves the transparency distribution of the foreground and background images in the overlapping region and makes the transparency values more uniform.
The present invention is realized through the following technical solution:
An image seamless fusion method based on consistent optimization of color and transparency, whose technical solution is:
First, the fusion labels of the foreground and background images in the overlapping region are computed in the color channel with a graph-cut energy optimization algorithm; second, an energy function that follows the distribution of the gradient differences between foreground and background is constructed in the transparency channel and solved for the transparency; finally, the images are seamlessly blended according to the transparency.
The method comprises the following steps:
1) Computing the fusion labels
Construct an energy function for the color and transparency of the image fusion, solve it in steps by separating the variables, and solve in the color channel with the graph-cut energy optimization method to obtain the fusion labels of the overlapping region.
2) Computing the transparency
Fix the fusion labels l(r), separate from the unified energy function the single-variable function containing α(r), optimize it, and obtain the transparency of the foreground and background images in the region to be fused by solving a system of linear equations.
3) Normalizing the transparency values
Normalize the transparency values α_A and α_B of the foreground and background images respectively, so that the foreground and background images can be blended according to transparency:

α′_A = α_A / (α_A + α_B),  α′_B = α_B / (α_A + α_B)
4) Image fusion
The foreground image f_A and the background image f_B are fused in the region to be fused according to the normalized transparencies α′_A and α′_B, yielding the fused output image:
f = α′_A·f_A + α′_B·f_B.
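As an informal illustration only (not part of the patent text), steps 3) and 4) can be written in a few lines of Python/NumPy. The function name, the per-pixel sum-to-one normalization, and the small epsilon guard are assumptions of this sketch:

```python
import numpy as np

def blend_by_transparency(fA, fB, alphaA, alphaB, eps=1e-8):
    """Blend foreground fA and background fB over the region to be fused.

    fA, fB         : (H, W, 3) float images, already aligned on the overlap region.
    alphaA, alphaB : (H, W) transparencies of the foreground/background from step 2).
    The transparencies are normalized per pixel so that they sum to one (step 3)),
    then used as blending weights (step 4)): f = a'_A * fA + a'_B * fB.
    """
    s = alphaA + alphaB + eps                 # eps avoids division by zero
    aA = (alphaA / s)[..., None]              # a'_A
    aB = (alphaB / s)[..., None]              # a'_B
    return aA * fA + aB * fB
```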
The energy function in step 1) is:
E = min ∫∫_Ω ‖l(r) − α(r)‖² + λ(r)‖∇l‖² + w(r)‖∇α‖² dr
where Ω is the overlapping region to be fused; l(r) is a binary function giving the fusion label of each pixel in the fusion region; the two boundary curves of the region to be fused are the foreground contour line and the closed curve drawn by the user; α(r) is the transparency distribution function; λ(r) and w(r) are the weights of the color-based fusion-label energy term and of the transparency energy term, respectively; ∇l is the gradient of the label function, ∇α is the gradient of the transparency distribution, and dr denotes the integration element.
In step 1), after separating the variables, the resulting single-variable function is:
E(l) = min ∫∫_Ω ‖l(r) − α(r)‖² + λ(r)‖∇l‖² dr
where Ω is the overlapping region to be fused. The energy function is solved by graph-cut energy optimization, with data term and smoothness term given by:
E_s(p, q, l_p, l_q) = |l_p − l_q| · (D(p) + D(q))
D(v) = ‖I_1(v) − I_2(v)‖² + α‖∇I_1(v) − ∇I_2(v)‖²
In the above formulas, α = 2 and ∇ is the gradient operator; p and q denote neighboring pixels, l_p and l_q their fusion labels, and I_1 and I_2 the foreground and background images to be fused. The formulas show that the larger the intensity and gradient differences between the foreground and background images at pixels of the region to be fused, the larger the smoothness term, so the seam prefers to pass where the two images agree; in other words, the closer the color and gradient information of neighboring pixels, the more consistent their fusion labels remain.
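For readers who want to experiment, the following Python/NumPy sketch evaluates the data quantity D(v) and the pairwise smoothness weight over the overlap region. It is only a sketch: the image layout (aligned float arrays), the forward-difference gradient, and the function names are assumptions, not part of the patent:

```python
import numpy as np

def data_term_D(I1, I2):
    """Per-pixel D(v) = ||I1 - I2||^2 + a * ||grad I1 - grad I2||^2, with a = 2,
    for two aligned (H, W, 3) float images covering the overlap region."""
    a = 2.0
    color_diff = np.sum((I1 - I2) ** 2, axis=-1)          # ||I1(v) - I2(v)||^2

    def grad(I):
        # simple forward differences as a discrete gradient (an illustrative choice)
        gy = np.diff(I, axis=0, append=I[-1:, :, :])
        gx = np.diff(I, axis=1, append=I[:, -1:, :])
        return gx, gy

    g1x, g1y = grad(I1)
    g2x, g2y = grad(I2)
    grad_diff = np.sum((g1x - g2x) ** 2 + (g1y - g2y) ** 2, axis=-1)
    return color_diff + a * grad_diff

def smoothness_weight(D, p, q):
    """Pairwise cost for neighbors p, q (tuples of indices); it is charged
    only when their labels differ, i.e. multiplied by |l_p - l_q| in E_s."""
    return D[p] + D[q]
```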
Meanwhile, to prevent the energy function from falling into a local-minimum solution, hard data-constraint terms are imposed on the boundary pixels of the overlapping region.
In these constraints, A denotes the background image and B the foreground image; the two boundaries of the overlapping region are the foreground contour line and the closed curve drawn by the user; l_A and l_B denote the labels belonging to the background image and the foreground image, respectively.
When collaging source images whose colors and textures differ greatly, if a boundary pixel of the overlapping region falls inside one of the images, that boundary pixel is forced to take the label of that image.
The single-variable function E_k(α) obtained in step 2) by separating out the variable α(r) is:
E_k(α) = min ∫∫_Ω ‖l(r) − α(r)‖² + w(r)‖∇α‖² dr

where w(r) is a weighting factor determined by the gradient-difference statistics of the two images; it involves the average gradient of the foreground and background images, the gradients ∇I_1 and ∇I_2 of the foreground and background images at pixel r, and the mean square gradient σ² of the foreground and background images. l(r) is a binary function, l_r is the fusion label of pixel r, and α(r) is the transparency value at pixel r. The formula expresses that the transparency distribution of the overlapping region should agree with the fusion-label result as far as possible, while the transparency values of neighboring pixels remain close.
Substituting the fusion labels of the foreground image and of the background image into the above formula and solving it gives their transparencies α_A and α_B over the overlapping region to be fused. The quadratic energy function can be solved through a system of linear equations.
The system contains one equation for each pixel p of the overlapping region Ω to be fused, where N_p denotes the four-neighborhood of pixel p, w is the weighting factor described above, α(p) is the transparency value of pixel p, and l(p) is its fusion label. This objective energy function guarantees that the transparency values of the two images to be fused vary uniformly over the overlapping region, so that the transition at the seam after fusion is smooth, avoiding the blurring and color-bleeding artifacts that arise in color-channel fusion.
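The linear system itself is only described in words above, so the following sketch assumes the standard Euler–Lagrange form of the discrete quadratic energy, (α(p) − l(p)) + w·Σ_{q∈N_p}(α(p) − α(q)) = 0, with a constant weight w standing in for the spatially varying w(r). Under those assumptions the system can be assembled and solved with SciPy:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def solve_transparency(labels, w=1.0):
    """Solve for the transparency field alpha over the overlap region.

    labels : (H, W) float array of binary fusion labels l(p) for one source image.
    w      : constant smoothness weight (a stand-in for the patent's w(r)).

    Uses the per-pixel equation (1 + w*|N_p|) * alpha(p) - w * sum(alpha(q)) = l(p)
    on a 4-connected grid.
    """
    H, W = labels.shape
    n = H * W
    idx = np.arange(n).reshape(H, W)

    rows, cols, vals = [], [], []
    b = labels.astype(np.float64).ravel()

    for y in range(H):
        for x in range(W):
            p = idx[y, x]
            nbrs = []
            if y > 0:     nbrs.append(idx[y - 1, x])
            if y < H - 1: nbrs.append(idx[y + 1, x])
            if x > 0:     nbrs.append(idx[y, x - 1])
            if x < W - 1: nbrs.append(idx[y, x + 1])
            rows.append(p); cols.append(p); vals.append(1.0 + w * len(nbrs))
            for q in nbrs:
                rows.append(p); cols.append(q); vals.append(-w)

    A = sp.csc_matrix((vals, (rows, cols)), shape=(n, n))
    alpha = spsolve(A, b)
    return np.clip(alpha.reshape(H, W), 0.0, 1.0)
```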
Compared with the prior art, the present invention has the following beneficial technical effects:
The invention constructs a unified energy function to compute the labels and the transparency of the foreground and background images over the fusion region and solves it in steps: the fusion labels of the foreground and background images are first computed in the color channel with a graph-cut optimization algorithm, then an energy function that follows the distribution of the gradient differences between foreground and background is constructed in the transparency channel to solve for the transparency of the foreground and background images in the overlapping region, and finally the images are blended according to the transparency. In addition, the proposed seamless fusion algorithm can collage source images whose colors and textures differ greatly. The advantages are mainly the following:
(1) The invention combines color and transparency for seamless image fusion, using their consistent optimization to preserve the hue and obtain a continuous boundary transition during fusion. Compared with the prior art, the method does not change the original hue of the foreground object after fusion.
(2) The proposed seamless fusion method can be extended to collage applications by improving and constraining the boundary conditions. Compared with other collage algorithms, the way the fusion boundary is computed in this invention yields a better segmentation, which in turn improves the transparency distribution of the foreground and background images in the overlapping region and makes the transparency values more uniform; as a result, the output composite transitions more smoothly across the seam.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of solving the fusion-boundary seam based on graph-cut optimization in the present invention;
Fig. 2 shows the seam-solving result of seamless image fusion in a specific example of the present invention;
Figs. 3, 4 and 5 show comparative experimental results of seamlessly fusing source images with target images of different backgrounds, in which 3-1, 4-1 and 5-1 are the foreground images, 3-2, 4-2 and 5-2 are the results of the prior art, and 3-3, 4-3 and 5-3 are the results of the method of the present invention;
Fig. 6 is a schematic diagram of solving the graph-cut energy seam for collage;
Fig. 7 compares the experimental results of collaging two images;
in which 7-1 and 7-2 are the two selected foregrounds to be collaged, 7-3 shows the seam-solving result, 7-4 is the result of the prior art, and 7-5 is the result of the method of the present invention;
Fig. 8 compares the experimental results of collaging multiple images;
in which 8-1 is the result of the prior art and 8-2 is the result of the method of the present invention.
Detailed Description
The present invention is described in further detail below with reference to specific embodiments, which explain rather than limit the invention.
The image seamless fusion method of the present invention based on consistent optimization of color and transparency establishes a unified mathematical model to compute the fusion labels and the transparency of the foreground and background images in the overlapping region, and solves it with a two-step optimization strategy: the fusion labels of the foreground and background images in the overlapping region are first computed in the color channel with a graph-cut energy optimization algorithm; the transparency is then solved in the transparency channel from an energy function that follows the distribution of the gradient differences between foreground and background, and the images are blended according to the transparency.
The method comprises the following steps:
1) Computing the fusion labels
Construct an energy function for the color and transparency of the image fusion, solve it in steps by separating the variables, and solve in the color channel with the graph-cut energy optimization method to obtain the fusion labels of the overlapping region.
The energy function is:
E = min ∫∫_Ω ‖l(r) − α(r)‖² + λ(r)‖∇l‖² + w(r)‖∇α‖² dr
where l(r) is a binary function giving the fusion label of each pixel in the fusion region; the two boundary curves of the region to be fused are the foreground contour line and the closed curve drawn by the user; α(r) is the transparency distribution function; λ(r) and w(r) are the weights of the color-based fusion-label energy term and of the transparency energy term, respectively. The energy function expresses that the transparency values of the foreground and background should agree with the fusion labels as far as possible, and that the label results and transparencies of neighboring pixels inside the overlapping region should be as similar as possible, so that the transparency is distributed evenly over the overlapping region and the fusion boundary becomes smooth.
Separating the variables gives the single-variable function:
E(l) = min ∫∫_Ω ‖l(r) − α(r)‖² + λ(r)‖∇l‖² dr
where Ω is the overlapping region to be fused. The energy function is solved by graph-cut energy optimization, with data term and smoothness term given by:
E_s(p, q, l_p, l_q) = |l_p − l_q| · (D(p) + D(q))
D(v) = ‖I_1(v) − I_2(v)‖² + α‖∇I_1(v) − ∇I_2(v)‖²
In the above formulas, α = 2 and ∇ is the gradient operator. The formulas show that the larger the intensity and gradient differences between the foreground and background images at pixels of the region to be fused, the larger the smoothness term; that is, the closer the color and gradient information of neighboring pixels, the more consistent their fusion labels remain.
Meanwhile, to prevent the energy function from falling into a local-minimum solution, hard data-constraint terms are imposed on the boundary pixels of the overlapping region.
In these constraints, A denotes the background image and B the foreground image, and the two boundary curves are the foreground contour line of the overlapping region and the closed curve drawn by the user.
When collaging source images whose colors and textures differ greatly, the boundary condition is modified as follows: if a boundary pixel of the overlapping region falls inside one of the images, that boundary pixel is forced to take the label of that image.
2) Computing the transparency
Fix the fusion labels l(r), separate from the unified energy function the single-variable function containing α(r), optimize it, and obtain the transparency of the foreground and background images in the region to be fused by solving a system of linear equations.
The single-variable function E(α) obtained by separating out the variable α(r) is:
E(α) = min ∫∫_Ω ‖l(r) − α(r)‖² + w(r)‖∇α‖² dr

where w(r) is the weighting factor built from the average gradient of the foreground and background images, and σ² is the mean square gradient of the image. The formula expresses that the transparency distribution of the overlapping region should agree with the fusion-label result as far as possible, while the transparency values of neighboring pixels remain close.
Substituting the fusion labels of the foreground image and of the background image into the above formula and solving it gives their transparencies α_A and α_B over the overlapping region to be fused. The quadratic energy function can be solved through a system of linear equations.
The system contains one equation for each pixel p of the overlapping region, where N_p denotes the four-neighborhood of pixel p. The objective energy function guarantees that the transparency values of the two images to be fused vary uniformly over the overlapping region, so that the transition at the seam after fusion is smooth, avoiding the blurring and color-bleeding artifacts that arise in color-channel fusion.
3) Normalizing the transparency values
Normalize the transparency values α_A and α_B of the foreground and background images respectively, so that the foreground and background images can be blended according to transparency:

α′_A = α_A / (α_A + α_B),  α′_B = α_B / (α_A + α_B)
4) Image fusion
The foreground image f_A and the background image f_B are fused in the region to be fused according to the normalized transparencies α′_A and α′_B, yielding the fused output image:
f = α′_A·f_A + α′_B·f_B.
The specific implementation is as follows:
Step 1. Fusion-label computation
The user is first asked to roughly draw a closed curve around the foreground object to be inserted, and the Grabcut algorithm (Rother C, Kolmogorov V, Blake A. Grabcut: Interactive foreground extraction using iterated graph cuts [C] // ACM Transactions on Graphics (TOG). ACM, 2004, 23(3): 309-314) is used for segmentation to obtain the edge contour line enclosing the foreground object. The overlapping region consists of the pixels lying between the user-drawn closed curve and the edge contour line. The established mathematical model is solved by separation of variables: the variable l(r) is first separated from the unified model and turned into a graph-cut energy optimization, the corresponding graph-cut energy terms are computed from the color and gradient information of the overlapping region, and the problem is then solved with the max-flow/min-cut algorithm; the resulting minimum cut is the seam, and the fusion label of every pixel in the overlapping region is obtained at the same time. The boundary conditions for the graph-cut energy optimization are as follows: for seamless image fusion, a pixel lying on the user-specified closed curve of the overlapping region is given an infinite data-energy term for the background label and a zero data-energy term for the foreground label; conversely, a pixel lying on the foreground contour line is given an infinite data-energy term for the foreground label and a zero data-energy term for the background label.
Of the two boundary curves, one is the foreground contour line and the other is the closed curve drawn by the user; l_A and l_B are the labels represented by the background and foreground images, respectively. A schematic of the seam solved by graph-cut optimization is shown in Fig. 1, and Fig. 2 shows the seam-solving result of seamless image fusion with a Mother's Day poster as the background. For collage applications, if a pixel on the boundary of the overlapping region lies inside one of the images, the data-energy term for labeling this pixel with that image's label is infinite while the data-energy terms for the other labels are zero, as illustrated by the schematic of the collage-based graph-cut seam in Fig. 6, with the boundary constraints as described above.
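Purely as an illustration of how this label step could be set up in code (this is not the patent's implementation), the sketch below uses the PyMaxflow library; the boundary masks, the constant INF stand-in for the infinite data-energy terms, the per-node approximation of the pairwise weight, and the source ↔ foreground / sink ↔ background convention are all assumptions:

```python
import numpy as np
import maxflow  # PyMaxflow

INF = 1e9  # stand-in for the "infinite" data-energy term

def fusion_labels(D, on_user_curve, on_fg_contour):
    """Binary fusion labels over the overlap region via max-flow/min-cut.

    D             : (H, W) per-pixel quantity D(v) from the data/smoothness terms.
    on_user_curve : (H, W) bool mask of pixels on the user-drawn closed curve.
    on_fg_contour : (H, W) bool mask of pixels on the foreground contour line.
    Returns an (H, W) array of 0/1 labels (1 = foreground, by this sketch's convention).
    """
    H, W = D.shape
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes((H, W))

    # Pairwise smoothness E_s(p,q) = D(p) + D(q), approximated with per-node grid edges
    g.add_grid_edges(nodes, weights=D, symmetric=True)

    # Hard boundary constraints (terminal edges): pin the two contours to opposite labels
    src = np.where(on_fg_contour, INF, 0.0)   # source terminal ~ foreground label
    snk = np.where(on_user_curve, INF, 0.0)   # sink terminal   ~ background label
    g.add_grid_tedges(nodes, src, snk)

    g.maxflow()
    # get_grid_segments is True for sink-side (background) pixels
    return (~g.get_grid_segments(nodes)).astype(np.uint8)
```

Pinning each contour to one terminal through an effectively uncuttable edge is the usual way to express such hard label constraints in a max-flow formulation.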
Step 2. Transparency computation
With l(r) fixed, the variable α(r) is separated from the unified mathematical model, giving a quadratic energy function that follows the distribution of the gradient differences between the foreground and the background. Taking its partial derivatives yields a linear equation in the transparency distribution function at each pixel, and the equations of all pixels of the overlapping region are assembled into a linear system that is solved for the transparency of the foreground and background images over the overlapping region. The transparency is a real-valued function, α(r) ∈ [0, 1]. For seamless image fusion, suppose the foreground and the background carry labels 1 and 0, respectively. If the label of a pixel after segmentation is 1, the transparency of the foreground image at that pixel is close to 1 and that of the background image is close to 0; conversely, if a pixel is labeled 0 after segmentation, the transparency of the background image at that pixel is approximately 1 and that of the foreground image is approximately 0. This shows that the transparency values of the foreground and background images over the overlapping region agree with the fusion labels.
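As a complement to the sparse direct solve sketched earlier, a simple Gauss–Seidel sweep illustrates the per-pixel linear equation described above; the constant weight w and the fixed iteration count are assumptions made for this sketch, since the patent's spatially varying weighting factor is not reproduced in the text:

```python
import numpy as np

def gauss_seidel_alpha(labels, w=1.0, iters=200):
    """Iteratively relax alpha toward the solution of the per-pixel equations
    (alpha(p) - l(p)) + w * sum_{q in N_p} (alpha(p) - alpha(q)) = 0
    over a 4-connected grid (constant w assumed for illustration)."""
    H, W = labels.shape
    alpha = labels.astype(np.float64).copy()   # the labels are a natural starting point
    for _ in range(iters):
        for y in range(H):
            for x in range(W):
                nb = []
                if y > 0:     nb.append(alpha[y - 1, x])
                if y < H - 1: nb.append(alpha[y + 1, x])
                if x > 0:     nb.append(alpha[y, x - 1])
                if x < W - 1: nb.append(alpha[y, x + 1])
                # solve the single-pixel equation for alpha(p) given its neighbors
                alpha[y, x] = (labels[y, x] + w * sum(nb)) / (1.0 + w * len(nb))
    return np.clip(alpha, 0.0, 1.0)
```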
Step 3. Normalizing the transparency values
The transparencies of the foreground and background images over the overlapping region obtained in step 2 are normalized to give their relative transparencies in the overlapping region.
Step 4. Image fusion
The foreground and background images are blended with weights given by the normalized transparencies to obtain the fused output image. Source and target images with different backgrounds were selected for seamless fusion experiments, and the results were compared with reference [13] (Chen T, Zhu J Y, Shamir A, et al. Motion-Aware Gradient Domain Video Composition [J]. Image Processing, IEEE Transactions on, 2013, 22(7): 2532-2544), as shown in Figs. 3, 4 and 5, where 3-1, 4-1 and 5-1 are the foreground images, 3-2, 4-2 and 5-2 are the results of the prior art (reference [13]), and 3-3, 4-3 and 5-3 are the results of the method of the present invention. The experimental results show that, compared with other image fusion algorithms, the proposed algorithm fuses in the color and transparency channels jointly, so it effectively avoids color jumps at saturated pixels without changing the original hue of the foreground image; at the same time, the algorithm makes the transparency values evenly distributed, so the transition across the fusion boundary is smoother.
By constraining the boundary conditions, the algorithm of the present invention is extended to collage applications. Figs. 7 and 8 show comparative experimental results for collaging two and multiple images, respectively, against the prior art: reference [1] is (Pérez P, Gangnet M, Blake A. Poisson image editing [C] // ACM Transactions on Graphics (TOG). ACM, 2003, 22(3): 313-318) and reference [5] is (Rother C, Bordeaux L, Hamadi Y, et al. Autocollage [C] // ACM Transactions on Graphics (TOG). ACM, 2006, 25(3): 847-852). In the figures, 7-1 and 7-2 are the two selected foregrounds to be collaged, 7-3 shows the seam-solving result, 7-4 is the result of the prior art (reference [1]), 7-5 is the result of the method of the present invention, 8-1 is the result of the prior art (reference [5]), and 8-2 is the result of the method of the present invention. The two-image collage results show that the prior-art boundary after fusion is rather abrupt and prone to color bleeding and blurring, whereas the segmentation obtained with the present invention is more reasonable and the transition at the seam is smoother. The multi-image collage experiments show that, when the colors and textures of the source images differ greatly, the present invention achieves a continuous boundary transition after collaging, unlike the other methods.
In summary, the method of the present invention establishes a unified mathematical model for the color and transparency of image fusion and solves it with a two-step optimization strategy: the fusion labels are first computed in the color channel with a graph-cut optimization algorithm, and the model is then optimized to solve for the optimal transparency and realize the fusion. Compared with other fusion methods, the present invention produces a better fusion result.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510107663.8A CN104715451B (en) | 2015-03-11 | 2015-03-11 | Image seamless fusion method based on consistent optimization of color and transparency |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510107663.8A CN104715451B (en) | 2015-03-11 | 2015-03-11 | Image seamless fusion method based on consistent optimization of color and transparency |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104715451A CN104715451A (en) | 2015-06-17 |
CN104715451B true CN104715451B (en) | 2018-01-05 |
Family
ID=53414746
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510107663.8A Expired - Fee Related CN104715451B (en) | 2015-03-11 | 2015-03-11 | Image seamless fusion method based on consistent optimization of color and transparency |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104715451B (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106407887B (en) * | 2016-08-24 | 2020-07-31 | 重庆大学 | Method and device for obtaining step size of candidate frame search |
CN108986058B (en) * | 2018-06-22 | 2021-11-19 | 华东师范大学 | Image fusion method for brightness consistency learning |
CN109192054B (en) * | 2018-07-27 | 2020-04-28 | 阿里巴巴集团控股有限公司 | Data processing method and device for map region merging |
CN112016630B (en) * | 2020-09-03 | 2024-03-19 | 平安科技(深圳)有限公司 | Training method, device, equipment and storage medium based on image classification model |
CN112214273B (en) * | 2020-10-14 | 2023-04-21 | 合肥芯颖科技有限公司 | Digital clock display method and device, electronic equipment and storage medium |
CN112954452B (en) * | 2021-02-08 | 2023-07-18 | 广州酷狗计算机科技有限公司 | Video generation method, device, terminal and storage medium |
CN113660531B (en) * | 2021-08-20 | 2024-05-17 | 北京市商汤科技开发有限公司 | Video processing method and device, electronic equipment and storage medium |
CN116188330B (en) * | 2023-04-28 | 2023-07-04 | 北京友通上昊科技有限公司 | Spliced image gray level processing method, image splicing method, device and medium |
CN116437205B (en) * | 2023-06-02 | 2023-08-11 | 华中科技大学 | Depth of field expansion method and system for multi-view multi-focal length imaging |
CN116703794B (en) * | 2023-06-06 | 2024-04-30 | 深圳市歌华智能科技有限公司 | Multi-image fusion method in HSV color space |
CN117522717B (en) * | 2024-01-03 | 2024-04-19 | 支付宝(杭州)信息技术有限公司 | Image synthesis method, device and equipment |
- 2015-03-11 CN CN201510107663.8A patent/CN104715451B/en not_active Expired - Fee Related
Non-Patent Citations (4)
Title |
---|
Carsten Rother et al. Grabcut: Interactive foreground extraction using iterated graph cuts. ACM Transactions on Graphics (TOG), 2004, 23(3), Sections 2-4. *
Ehsan Shahrian Varnousfaderani et al. Weighted Color and Texture Sample Selection for Image Matting. IEEE Transactions on Image Processing, Nov. 2013, 22(11): 4260-4270. *
Cao Nan et al. Image seamless stitching algorithm based on SIFT feature matching. Computers and Applied Chemistry, Feb. 2011, 28(2): 242-244. *
Fu Xinyuan et al. Image seamless fusion algorithm based on matting technology. Journal of Image and Graphics, Jun. 2008, 13(6): 1083-1088. *
Also Published As
Publication number | Publication date |
---|---|
CN104715451A (en) | 2015-06-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104715451B (en) | Image seamless fusion method based on consistent optimization of color and transparency | |
EP3830792B1 (en) | Joint unsupervised object segmentation and inpainting | |
Kopf et al. | Depixelizing pixel art | |
Sun et al. | Image vectorization using optimized gradient meshes | |
CN103177446B (en) | Based on the accurate extracting method of display foreground of neighborhood and non-neighborhood smoothing prior | |
CN109712223B (en) | Three-dimensional model automatic coloring method based on texture synthesis | |
US20080238942A1 (en) | Object-Based Image Inpainting | |
CN101639935A (en) | Digital human serial section image segmentation method based on geometric active contour target tracking | |
Zeng et al. | Region-based bas-relief generation from a single image | |
CN104952102B (en) | Towards the unified antialiasing method of delay coloring | |
Richardt et al. | Vectorising bitmaps into semi‐transparent gradient layers | |
CN107862664A (en) | A kind of image non-photorealistic rendering method and system | |
Ostyakov et al. | Seigan: Towards compositional image generation by simultaneously learning to segment, enhance, and inpaint | |
Ramanarayanan et al. | Constrained texture synthesis via energy minimization | |
CN102005061A (en) | Method for reusing cartoons based on layering/hole-filling | |
CN104361581B (en) | The CT scan data dividing method being combined based on user mutual and volume drawing | |
Bertalmío et al. | Pde-based image and surface inpainting | |
CN103198496B (en) | A kind of method of abstracted image being carried out to vector quantization | |
Wu et al. | An Effective Content-Aware Image Inpainting Method. | |
Wong et al. | Abstracting images into continuous-line artistic styles | |
CN107330957A (en) | A kind of image processing method with mapping interaction relation between figure layer | |
Cao et al. | Automatic motion-guided video stylization and personalization | |
CN106558050A (en) | A kind of obvious object dividing method based on three threshold value of self adaptation | |
Shen et al. | Video composition by optimized 3D mean‐value coordinates | |
Liu et al. | Recent development in image completion techniques |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20180105 |