
CN104751432A - Image reconstruction based visible light and infrared image fusion method

Image reconstruction based visible light and infrared image fusion method

Info

Publication number
CN104751432A
CN104751432A CN201510101000.5A CN201510101000A
Authority
CN
China
Prior art keywords
image
sign
gradient
error
infrared
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510101000.5A
Other languages
Chinese (zh)
Other versions
CN104751432B (en)
Inventor
傅志中
牛婷婷
徐进
李晓峰
周宁
郑婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201510101000.5A priority Critical patent/CN104751432B/en
Publication of CN104751432A publication Critical patent/CN104751432A/en
Application granted granted Critical
Publication of CN104751432B publication Critical patent/CN104751432B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses an image reconstruction based visible light and infrared image fusion method, belonging to the field of image processing. The method sequentially comprises the steps of calculating the gradient of a registered infrared image and the gradient of the visible light luminance image, estimating the noise strength of the luminance image, calculating a weighted sum of the gradients, reconstructing a luminance image from the weighted gradients, normalizing the reconstructed image, and combining the reconstructed luminance image with the original visible light color components. The method solves the problem of conspicuous boundary artifacts in existing image splicing and fusion, and can be applied to the fusion of multiple registered images.

Description

A Visible Light and Infrared Image Fusion Method Based on Image Reconstruction

Technical Field

The invention belongs to the field of image processing, and in particular relates to a visible light and infrared image fusion method based on image reconstruction.

Background Art

Image fusion refers to the comprehensive processing of image information captured by multiple sensors in order to obtain a more complete and reliable description of the observed scene. By integrating the complementary information of different image sources, image fusion overcomes the limitations and differences of single-sensor images in geometric, spectral and spatial resolution, and removes the redundancy of multi-source image information; it improves image quality while raising the efficiency of image understanding and recognition, which facilitates the localization, identification and interpretation of physical phenomena and events. Image fusion is generally divided into three levels: pixel-level fusion, feature-level fusion and decision-level fusion; the present invention belongs to pixel-level fusion. Both the input and the output of pixel-level fusion are image data, and its goal is to extract the information of interest from the images to be fused and integrate it into the fused image.

With the development of multi-scale analysis tools, methods represented by the DWT and its improved variants such as the non-subsampled contourlet transform (NSCT) have been widely applied to image fusion [Kong W, Zhang L, Lei Y. Novel fusion method for visible light and infrared images based on NSST–SF–PCNN [J]. Infrared Physics & Technology, 2014, 65:103-112]. The fusion rule is another crucial factor in fusion algorithms based on multi-scale analysis. Fusion rules can generally be divided into three categories: pixel-based, window-based and region-based rules [Ye Chuanqi. Research on multi-sensor image fusion algorithms based on multi-scale decomposition [D]. Doctoral dissertation, Xidian University, 2009]. Pixel-based rules such as the traditional weighted average yield fused images with low contrast. Window-based rules, such as fusion algorithms based on the statistical characteristics of window regions [Zhang Qiang, Guo Baolong. An infrared and visible light image fusion algorithm based on the non-subsampled Contourlet transform [J]. Journal of Infrared and Millimeter Waves, 2007, 06:476-480], take the correlation between neighboring pixels into account and improve the fusion result to a certain extent. Region-based rules treat all pixels that make up a region as a whole during fusion, as in the image fusion algorithm based on region segmentation [Liu Kun, Guo Lei, Li Hui-hua, et al. Fusion of infrared and visible light images based on region segmentation [J]. Chinese Journal of Aeronautics, 2009, 22(1):75-80]; the fused images have better overall visual quality and fusion artifacts are better suppressed. However, when the above methods fuse images of different sizes, or when the images contain strong gradient changes, obvious fusion and splicing traces remain in the fused image. Boundary filtering can reduce these traces, but it also reduces image sharpness. The present invention adopts an image reconstruction approach with few parameters to adjust and achieves seamless fusion without filtering the fused image.

Summary of the Invention

Aiming at the deficiencies of the prior art, the present invention provides a visible light and infrared image fusion method based on image reconstruction, which solves the problem of fusion traces remaining in the fused image after existing splicing and fusion methods. The flow is shown in Figure 1 and mainly comprises the following steps:

Step 01. Read the m infrared images IR_i(x,y) and n visible light images IM_j(x,y) to be fused, where i = 1…m, j = 1…n; transform each visible light image IM_j(x,y) into YUV space, whose three components are {IV_j, Cb_j, Cr_j}, where IV_j is the luminance component and Cb_j and Cr_j are the chrominance components;
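For illustration only: the patent does not fix a particular RGB-to-YUV matrix for step 01, so the sketch below uses full-range BT.601 YCbCr coefficients (NumPy). The function names and the assumption that pixel values lie in [0, 1] are mine, not the patent's.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Split an RGB image (H x W x 3, float in [0, 1]) into luminance and
    chrominance planes; BT.601 full-range coefficients are assumed, the
    patent only requires some YUV-type decomposition."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b             # luminance IV
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 0.5   # chrominance Cb
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 0.5   # chrominance Cr
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    """Inverse transform, used in step 11 to rebuild the fused color image."""
    cb, cr = cb - 0.5, cr - 0.5
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)
```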

Step 02. Apply steps 03 to 06 to each infrared image and each visible light image;

Step 03. Approximate the gradient image GR_i(x,y) of the infrared image IR_i(x,y) and the gradient image GV_j(x,y) of the corresponding visible light luminance component IV_j(x,y) with first-order forward or backward differences; the following formulas use the first-order forward difference:

GR_i(x,y) = ∇IR_i(x,y) ≈ (IR_i(x+1,y) - IR_i(x,y), IR_i(x,y+1) - IR_i(x,y));

GV_j(x,y) = ∇IV_j(x,y) ≈ (IV_j(x+1,y) - IV_j(x,y), IV_j(x,y+1) - IV_j(x,y));
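As a concrete reading of the formulas above, a NumPy sketch of the forward-difference gradient follows. Treating the first array axis as x and leaving the last row/column of each component at zero are implementation choices the patent does not prescribe.

```python
import numpy as np

def forward_gradient(img):
    """First-order forward-difference gradient of a 2-D image, as in step 03.
    Returns the x and y components at the same size as the input."""
    img = img.astype(np.float64)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:-1, :] = img[1:, :] - img[:-1, :]   # IR(x+1, y) - IR(x, y)
    gy[:, :-1] = img[:, 1:] - img[:, :-1]   # IR(x, y+1) - IR(x, y)
    return gx, gy
```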

Step 04. Apply mean filtering to the gradient image IR_i(x,y) to obtain the mean image IRM_i(x,y), and apply mean filtering to the gradient image IV_j(x,y) to obtain the mean image IVM_j(x,y);

Step 05. Subtract the mean image IRM_i(x,y) from the gradient image IR_i(x,y) to obtain the error image IRE_i(x,y), and subtract the mean image IVM_j(x,y) from the gradient image IV_j(x,y) to obtain the error image IVE_j(x,y);

Step 06. Calculate the noise standard deviations of the error images IRE_i(x,y) and IVE_j(x,y), respectively;

The noise standard deviation of the error image IRE_i(x,y) is obtained as follows:

(06-1) Compute the standard deviation σ_R0,i of the error image IRE_i(x,y);

(06-2) Delete from the error image IRE_i(x,y) the error points lying outside three standard deviations, i.e. beyond 3σ_R0,i;

(06-3) Repeat steps (06-1) to (06-2) iteratively until the relative error between the standard deviations of two consecutive iterations is less than 10%, that is, until the relative error of the standard deviation σ_R(p+1),i obtained in the (p+1)-th iteration with respect to the standard deviation σ_Rp,i obtained in the p-th iteration is less than 10%; the standard deviation σ_R(p+1),i of the (p+1)-th iteration is then recorded as the noise standard deviation σ_R,i of the error image IRE_i(x,y);

The noise standard deviation σ_V,j of the error image IVE_j(x,y) is computed in the same way as σ_R,i;
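A minimal sketch of the noise estimate of steps 04 to 06 (NumPy/SciPy): subtract a local mean, then iterate 3σ clipping until the standard deviation changes by less than 10%. The 3×3 window and the iteration cap are my assumptions, and the patent's wording leaves open whether the mean filter acts on the image itself or on its gradient components, so the function simply accepts any 2-D array.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def estimate_noise_std(arr, win=3, tol=0.10, max_iter=50):
    """Steps 04-06: local-mean removal followed by iterative 3-sigma clipping.
    `win` and `max_iter` are parameters of this sketch; `tol` mirrors the
    patent's 10% stopping criterion."""
    arr = arr.astype(np.float64)
    err = (arr - uniform_filter(arr, size=win)).ravel()   # error image IRE / IVE
    sigma_prev = err.std()
    if sigma_prev == 0.0:
        return 0.0
    for _ in range(max_iter):
        err = err[np.abs(err) <= 3.0 * sigma_prev]        # drop points outside 3*sigma
        sigma = err.std()
        if abs(sigma - sigma_prev) / sigma_prev < tol:    # relative change < 10%
            return sigma
        sigma_prev = sigma
    return sigma_prev
```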

Step 07. Calculate the weighting coefficients μ_R,i and μ_V,j:

μ_R,i = σ_V,i / (σ_Rs + σ_Vs),  μ_V,j = σ_R,j / (σ_Rs + σ_Vs)

where σ_Rs = σ_R,1 + σ_R,2 + … + σ_R,m and σ_Vs = σ_V,1 + σ_V,2 + … + σ_V,n;

Step 08. Obtain the weighted sum of the gradient images, G(x,y) = [Gx(x,y), Gy(x,y)], as follows:

Gx(x,y) = μ_R,1*GR_1,x(x,y)*[sign[|GR_1,x(x,y)| - σ_R,1]+1]/2
        + μ_R,2*GR_2,x(x,y)*[sign[|GR_2,x(x,y)| - σ_R,2]+1]/2
        + …
        + μ_R,m*GR_m,x(x,y)*[sign[|GR_m,x(x,y)| - σ_R,m]+1]/2
        + μ_V,1*GV_1,x(x,y)*[sign[|GV_1,x(x,y)| - σ_V,1]+1]/2
        + μ_V,2*GV_2,x(x,y)*[sign[|GV_2,x(x,y)| - σ_V,2]+1]/2
        + …
        + μ_V,n*GV_n,x(x,y)*[sign[|GV_n,x(x,y)| - σ_V,n]+1]/2;

Gy(x,y) = μ_R,1*GR_1,y(x,y)*[sign[|GR_1,y(x,y)| - σ_R,1]+1]/2
        + μ_R,2*GR_2,y(x,y)*[sign[|GR_2,y(x,y)| - σ_R,2]+1]/2
        + …
        + μ_R,m*GR_m,y(x,y)*[sign[|GR_m,y(x,y)| - σ_R,m]+1]/2
        + μ_V,1*GV_1,y(x,y)*[sign[|GV_1,y(x,y)| - σ_V,1]+1]/2
        + μ_V,2*GV_2,y(x,y)*[sign[|GV_2,y(x,y)| - σ_V,2]+1]/2
        + …
        + μ_V,n*GV_n,y(x,y)*[sign[|GV_n,y(x,y)| - σ_V,n]+1]/2;

where GR_i,x(x,y) and GR_i,y(x,y) are the x and y components of the gradient image GR_i(x,y), and GV_j,x(x,y) and GV_j,y(x,y) are the x and y components of the gradient image GV_j(x,y), i.e. GR_i(x,y) = [GR_i,x(x,y), GR_i,y(x,y)] and GV_j(x,y) = [GV_j,x(x,y), GV_j,y(x,y)];
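The factor [sign[|·| - σ]+1]/2 acts as a gate: it keeps a gradient sample only where its magnitude exceeds the estimated noise level, which is how noise in flat regions is suppressed. Below is a NumPy sketch for the single-pair case (m = n = 1); the helper names follow the earlier sketches and are not the patent's notation.

```python
import numpy as np

def gate(g, sigma):
    """[sign(|g| - sigma) + 1] / 2: 1 where the gradient magnitude exceeds the
    noise level, 0 where it does not (0.5 exactly at the threshold)."""
    return (np.sign(np.abs(g) - sigma) + 1.0) / 2.0

def weighted_gradient_sum(gr_x, gr_y, gv_x, gv_y, sigma_r, sigma_v):
    """Steps 07-08 for one infrared/visible pair (m = n = 1)."""
    mu_r = sigma_v / (sigma_r + sigma_v)   # weight of the infrared gradients
    mu_v = sigma_r / (sigma_r + sigma_v)   # weight of the visible gradients
    gx = mu_r * gr_x * gate(gr_x, sigma_r) + mu_v * gv_x * gate(gv_x, sigma_v)
    gy = mu_r * gr_y * gate(gr_y, sigma_r) + mu_v * gv_y * gate(gv_y, sigma_v)
    return gx, gy
```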

Step 09. Solve the Poisson equation ∇²I_Re(x,y) = div(G(x,y)) to obtain the reconstructed luminance image I_Re(x,y);
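The patent does not say how the Poisson equation is solved. One standard choice for gradient-domain reconstruction, sketched here with SciPy, is a cosine-transform solver with Neumann boundary conditions; the backward-difference divergence and the boundary handling are implementation assumptions, not part of the claimed method.

```python
import numpy as np
from scipy.fft import dctn, idctn

def divergence(gx, gy):
    """Backward-difference divergence, the discrete counterpart of the
    forward-difference gradient of step 03."""
    div = np.zeros_like(gx)
    div[1:, :] += gx[1:, :] - gx[:-1, :]
    div[0, :]  += gx[0, :]
    div[:, 1:] += gy[:, 1:] - gy[:, :-1]
    div[:, 0]  += gy[:, 0]
    return div

def solve_poisson_dct(div):
    """Solve lap(I) = div with Neumann boundaries via DCT-II.  The solution is
    unique only up to an additive constant, which the normalization of
    step 10 removes anyway."""
    m, n = div.shape
    d = dctn(div, type=2, norm='ortho')
    u = np.arange(m).reshape(-1, 1)
    v = np.arange(n).reshape(1, -1)
    denom = (2.0 * np.cos(np.pi * u / m) - 2.0) + (2.0 * np.cos(np.pi * v / n) - 2.0)
    denom[0, 0] = 1.0                 # avoid division by zero at the DC term
    d /= denom
    d[0, 0] = 0.0                     # fix the free constant to zero
    return idctn(d, type=2, norm='ortho')
```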

Step 10. Read the maximum pixel value I_maxRe and the minimum pixel value I_minRe of the image I_Re(x,y), and normalize the image I_Re(x,y): I_Reunify(x,y) = (I_Re(x,y) - I_minRe)/(I_maxRe - I_minRe);

Step 11. Randomly select one image from the n visible light images, and combine its chrominance components Cb and Cr with the reconstructed, normalized luminance image I_Reunify(x,y) to obtain the fused image {I_Reunify, Cb, Cr}.
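Chaining the helper functions sketched above gives an end-to-end picture of steps 01 to 11 for one registered infrared/visible pair of equal size; this is only an illustrative composition, not the patent's reference implementation.

```python
def fuse_pair(ir, vis_rgb):
    """Sketch of steps 01-11 for one registered infrared image `ir` and one
    visible light image `vis_rgb` of the same size (floats in [0, 1]);
    reuses the helpers sketched in the preceding steps."""
    iv, cb, cr = rgb_to_ycbcr(vis_rgb)                      # step 01
    gr_x, gr_y = forward_gradient(ir)                       # step 03
    gv_x, gv_y = forward_gradient(iv)
    sigma_r = estimate_noise_std(ir)                        # steps 04-06 (see the note
    sigma_v = estimate_noise_std(iv)                        # on image vs gradient domain)
    gx, gy = weighted_gradient_sum(gr_x, gr_y, gv_x, gv_y,
                                   sigma_r, sigma_v)        # steps 07-08
    i_re = solve_poisson_dct(divergence(gx, gy))            # step 09
    i_re = (i_re - i_re.min()) / (i_re.max() - i_re.min())  # step 10: normalize
    return ycbcr_to_rgb(i_re, cb, cr)                       # step 11: add Cb, Cr back
```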

The beneficial effects of the present invention are:

The image fusion method of the present invention eliminates the fusion traces present in fused images produced by existing splicing and fusion algorithms, and at the same time suppresses noise in flat regions during fusion.

Brief Description of the Drawings

Figure 1 is a flow chart of the visible light and infrared image fusion method based on image reconstruction provided by the present invention;

Figure 2 is the RGB visible light image used in the embodiment; the image is 1024 pixels wide and 1024 pixels high, its field of view is one third of that of the infrared image, and it lies in the middle of the infrared image;

Figure 3 is the infrared image used in the embodiment; the image is 320 pixels wide and 240 pixels high, and its field of view is three times that of the visible light image;

Figure 4 is the image obtained by fusing Figure 2 and Figure 3 with a traditional multi-scale image fusion method;

Figure 5 is the fused image obtained with the image fusion method provided by the present invention in the embodiment.

Detailed Description of the Embodiments

The present invention is further described below in conjunction with the accompanying drawings and an embodiment.

Embodiment

The purpose of this embodiment is to fuse one infrared image with one visible light image, which specifically comprises the following steps:

Step 01. Read the visible light image IM(x,y), shown in Figure 2, and transform it into YUV space with components {IV, Cb, Cr}, where IV(x,y) is the luminance component and Cb and Cr are the chrominance components; the image size is 1024×1024, its field of view is one third of that of the infrared image, and it is centered on the infrared image. Read the infrared image IR(x,y), shown in Figure 3; the image size is 320×240 and its field of view is three times that of the visible light image;

Step 02. According to the field-of-view ratio between the visible light and infrared images, rescale the visible light image to the pixel scale of the infrared image: compress the visible light image linearly to one third of its size in both the horizontal and vertical directions, giving a size of 341×341, with the image center coinciding with the center of the infrared image; round floating-point coordinates to the integer pixel grid;
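One plausible way to implement this scaling and centring (SciPy), under the assumption that the uncovered border of the infrared-sized canvas is zero-filled and that anything falling outside it is cropped; the patent specifies neither choice, nor the interpolation used.

```python
import numpy as np
from scipy.ndimage import zoom

def register_visible_to_infrared(iv, ir_shape, fov_ratio=3.0):
    """Embodiment step 02, one possible reading: shrink the visible luminance
    by the field-of-view ratio and paste it centred on a canvas with the
    infrared image's shape."""
    small = zoom(iv.astype(np.float64), 1.0 / fov_ratio, order=1)  # bilinear shrink
    canvas = np.zeros(ir_shape, dtype=np.float64)
    sh, sw = small.shape
    ih, iw = ir_shape
    top, left = (ih - sh) // 2, (iw - sw) // 2      # centres of canvas and image coincide
    t0, l0 = max(top, 0), max(left, 0)              # clip the paste to the canvas
    t1, l1 = min(top + sh, ih), min(left + sw, iw)
    canvas[t0:t1, l0:l1] = small[t0 - top:t1 - top, l0 - left:l1 - left]
    return canvas
```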

Step 03. Use the first-order forward difference to obtain the gradient images GR(x,y) and GV(x,y) of the infrared image IR(x,y) and the luminance component image IV(x,y):

GR(x,y) = ∇IR(x,y) ≈ (IR(x+1,y) - IR(x,y), IR(x,y+1) - IR(x,y)),

GV(x,y) = ∇IV(x,y) ≈ (IV(x+1,y) - IV(x,y), IV(x,y+1) - IV(x,y));

Step 04. Apply mean filtering to the gradient image IR(x,y) to obtain the mean image IRM(x,y), and apply mean filtering to the gradient image IV(x,y) to obtain the mean image IVM(x,y);

Step 05. Subtract the mean image IRM(x,y) from the gradient image IR(x,y) to obtain the error image IRE(x,y), and subtract the mean image IVM(x,y) from the gradient image IV(x,y) to obtain the error image IVE(x,y);

Step 06. Calculate the standard deviations σ_R,i and σ_V,j of the error images IRE(x,y) and IVE(x,y), respectively;

Step 07. Calculate the weighting coefficients μ_R,i and μ_V,j;

Step 08. Obtain the weighted sum of the gradient images, G(x,y) = [Gx(x,y), Gy(x,y)];

Step 09. Solve the Poisson equation ∇²I_Re(x,y) = div(G(x,y)) to obtain the reconstructed luminance image I_Re(x,y);

Step 10. Read the maximum pixel value I_maxRe and the minimum pixel value I_minRe of the image I_Re(x,y), and normalize the image I_Re(x,y): I_Reunify(x,y) = (I_Re(x,y) - I_minRe)/(I_maxRe - I_minRe);

Step 11. Combine the reconstructed, normalized luminance image I_Reunify(x,y) with the color components Cb and Cr of the original visible light image to obtain the fused image {I_Reunify, Cb, Cr}, as shown in Figure 5; the output is a 320×240 color image.

Figure 4 shows the image obtained by fusing Figure 2 and Figure 3 with a traditional multi-scale image fusion method. The comparison shows that the fused image obtained with the method provided by the present invention handles image boundaries better than other methods: the boundaries in the fused image are natural and smooth.

Claims (3)

1. A visible light and infrared image fusion method based on image reconstruction, comprising the following steps:
Step 01. Read the m infrared images IR_i(x,y) and n visible light images IM_j(x,y) to be fused, where i = 1…m, j = 1…n; transform each visible light image IM_j(x,y) into YUV space, whose three components are {IV_j, Cb_j, Cr_j}, where IV_j is the luminance component and Cb_j and Cr_j are the chrominance components;
Step 02. Apply steps 03 to 06 to each infrared image and each visible light image;
Step 03. Approximate the gradient image GR_i(x,y) of the infrared image IR_i(x,y) and the gradient image GV_j(x,y) of the corresponding luminance component IV_j(x,y) with first-order forward or backward differences;
Step 04. Apply mean filtering to the gradient image IR_i(x,y) to obtain the mean image IRM_i(x,y), and apply mean filtering to the gradient image IV_j(x,y) to obtain the mean image IVM_j(x,y);
Step 05. Subtract the mean image IRM_i(x,y) from the gradient image IR_i(x,y) to obtain the error image IRE_i(x,y), and subtract the mean image IVM_j(x,y) from the gradient image IV_j(x,y) to obtain the error image IVE_j(x,y);
Step 06. Calculate the noise standard deviations of the error images IRE_i(x,y) and IVE_j(x,y), respectively;
The noise standard deviation of the error image IRE_i(x,y) is obtained as follows:
(06-1) Compute the standard deviation σ_R0,i of the error image IRE_i(x,y);
(06-2) Delete from the error image IRE_i(x,y) the error points lying outside three standard deviations, i.e. beyond 3σ_R0,i;
(06-3) Repeat steps (06-1) to (06-2) iteratively until the relative error between the standard deviations of two consecutive iterations is less than 10%, that is, until the relative error of the standard deviation σ_R(p+1),i obtained in the (p+1)-th iteration with respect to the standard deviation σ_Rp,i obtained in the p-th iteration is less than 10%; the standard deviation σ_R(p+1),i of the (p+1)-th iteration is then recorded as the noise standard deviation σ_R,i of the error image IRE_i(x,y);
The noise standard deviation σ_V,j of the error image IVE_j(x,y) is computed in the same way as σ_R,i;
Step 07. Calculate the weighting coefficients μ_R,i and μ_V,j:
μ_R,i = σ_V,i / (σ_Rs + σ_Vs),  μ_V,j = σ_R,j / (σ_Rs + σ_Vs)
where σ_Rs = σ_R,1 + σ_R,2 + … + σ_R,m and σ_Vs = σ_V,1 + σ_V,2 + … + σ_V,n;
Step 08. Obtain the weighted sum of the gradient images, G(x,y) = [Gx(x,y), Gy(x,y)];
Step 09. Solve the Poisson equation ∇²I_Re(x,y) = div(G(x,y)) to obtain the reconstructed luminance image I_Re(x,y);
Step 10. Read the maximum pixel value I_maxRe and the minimum pixel value I_minRe of the image I_Re(x,y) and normalize the image I_Re(x,y): I_Reunify(x,y) = (I_Re(x,y) - I_minRe)/(I_maxRe - I_minRe);
Step 11. Randomly select one image from the n visible light images and combine its chrominance components Cb and Cr with the reconstructed, normalized luminance image I_Reunify(x,y) to obtain the fused image {I_Reunify, Cb, Cr}.
2. The visible light and infrared image fusion method based on image reconstruction according to claim 1, characterized in that the gradient images GR_i(x,y) and GV_j(x,y) in step 03 are obtained with the first-order forward difference approximation:
GR_i(x,y) = ∇IR_i(x,y) ≈ (IR_i(x+1,y) - IR_i(x,y), IR_i(x,y+1) - IR_i(x,y)),
GV_j(x,y) = ∇IV_j(x,y) ≈ (IV_j(x+1,y) - IV_j(x,y), IV_j(x,y+1) - IV_j(x,y)).
3. The visible light and infrared image fusion method based on image reconstruction according to claim 1, characterized in that the weighted sum of the gradient images, G(x,y) = [Gx(x,y), Gy(x,y)], is obtained as follows:
Gx(x,y) = μ_R,1*GR_1,x(x,y)*[sign[|GR_1,x(x,y)| - σ_R,1]+1]/2
        + μ_R,2*GR_2,x(x,y)*[sign[|GR_2,x(x,y)| - σ_R,2]+1]/2
        + …
        + μ_R,m*GR_m,x(x,y)*[sign[|GR_m,x(x,y)| - σ_R,m]+1]/2
        + μ_V,1*GV_1,x(x,y)*[sign[|GV_1,x(x,y)| - σ_V,1]+1]/2
        + μ_V,2*GV_2,x(x,y)*[sign[|GV_2,x(x,y)| - σ_V,2]+1]/2
        + …
        + μ_V,n*GV_n,x(x,y)*[sign[|GV_n,x(x,y)| - σ_V,n]+1]/2;
Gy(x,y) = μ_R,1*GR_1,y(x,y)*[sign[|GR_1,y(x,y)| - σ_R,1]+1]/2
        + μ_R,2*GR_2,y(x,y)*[sign[|GR_2,y(x,y)| - σ_R,2]+1]/2
        + …
        + μ_R,m*GR_m,y(x,y)*[sign[|GR_m,y(x,y)| - σ_R,m]+1]/2
        + μ_V,1*GV_1,y(x,y)*[sign[|GV_1,y(x,y)| - σ_V,1]+1]/2
        + μ_V,2*GV_2,y(x,y)*[sign[|GV_2,y(x,y)| - σ_V,2]+1]/2
        + …
        + μ_V,n*GV_n,y(x,y)*[sign[|GV_n,y(x,y)| - σ_V,n]+1]/2;
where GR_i,x(x,y) and GR_i,y(x,y) are the x and y components of the gradient image GR_i(x,y), and GV_j,x(x,y) and GV_j,y(x,y) are the x and y components of the gradient image GV_j(x,y), i.e. GR_i(x,y) = [GR_i,x(x,y), GR_i,y(x,y)] and GV_j(x,y) = [GV_j,x(x,y), GV_j,y(x,y)].
CN201510101000.5A 2015-03-09 2015-03-09 A visible light and infrared image fusion method based on image reconstruction Expired - Fee Related CN104751432B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510101000.5A CN104751432B (en) 2015-03-09 2015-03-09 A visible light and infrared image fusion method based on image reconstruction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510101000.5A CN104751432B (en) 2015-03-09 2015-03-09 A visible light and infrared image fusion method based on image reconstruction

Publications (2)

Publication Number Publication Date
CN104751432A true CN104751432A (en) 2015-07-01
CN104751432B CN104751432B (en) 2017-06-16

Family

ID=53591053

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510101000.5A Expired - Fee Related CN104751432B (en) 2015-03-09 2015-03-09 A visible light and infrared image fusion method based on image reconstruction

Country Status (1)

Country Link
CN (1) CN104751432B (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104966108A (en) * 2015-07-15 2015-10-07 武汉大学 Visible light and infrared image fusion method based on gradient transfer
CN105554483A (en) * 2015-07-16 2016-05-04 宇龙计算机通信科技(深圳)有限公司 Image processing method and terminal
CN106204541A (en) * 2016-06-29 2016-12-07 南京雅信科技集团有限公司 The track foreign body intrusion detection method being combined with infrared light based on visible ray
CN106847149A (en) * 2016-12-29 2017-06-13 武汉华星光电技术有限公司 A kind of tone mapping of high dynamic contrast image and display methods
CN108154493A (en) * 2017-11-23 2018-06-12 南京理工大学 A kind of pseudo- color blending algorithm of the dual-band infrared image based on FPGA
CN108288259A (en) * 2018-01-06 2018-07-17 昆明物理研究所 A kind of gray scale fusion Enhancement Method based on color space conversion
CN108717689A (en) * 2018-05-16 2018-10-30 北京理工大学 Middle LONG WAVE INFRARED image interfusion method and device applied to naval vessel detection field under sky and ocean background
CN109584174A (en) * 2019-01-29 2019-04-05 电子科技大学 A Gradient Minimization Method for Infrared Image Edge Preserving Denoising
CN109712070A (en) * 2018-12-04 2019-05-03 天津津航技术物理研究所 A kind of infrared panoramic image split-joint method based on graph cut
CN109919884A (en) * 2019-01-30 2019-06-21 西北工业大学 Infrared and visible light image fusion method based on Gaussian filter weighting
CN110517210A (en) * 2019-07-08 2019-11-29 河北工业大学 Multi-exposure Welding Area Image Fusion Method Based on Haar Wavelet Gradient Reconstruction
CN111738969A (en) * 2020-06-19 2020-10-02 无锡英菲感知技术有限公司 Image fusion method and device and computer readable storage medium
CN112907493A (en) * 2020-12-01 2021-06-04 航天时代飞鸿技术有限公司 Multi-source battlefield image rapid mosaic fusion algorithm under unmanned aerial vehicle swarm cooperative reconnaissance
CN114648564A (en) * 2022-05-23 2022-06-21 四川大学 Visible light and infrared image optimized registration method and system for unsteady state target
DE102021006300A1 (en) 2021-12-22 2023-06-22 Diehl Defence Gmbh & Co. Kg Process for generating an initial image

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102609927A (en) * 2012-01-12 2012-07-25 北京理工大学 Foggy visible light/infrared image color fusion method based on scene depth
CN102982518A (en) * 2012-11-06 2013-03-20 扬州万方电子技术有限责任公司 Fusion method of infrared image and visible light dynamic image and fusion device of infrared image and visible light dynamic image
US20140168444A1 (en) * 2012-12-14 2014-06-19 Korea University Research And Business Foundation Apparatus and method for fusing images

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102609927A (en) * 2012-01-12 2012-07-25 北京理工大学 Foggy visible light/infrared image color fusion method based on scene depth
CN102982518A (en) * 2012-11-06 2013-03-20 扬州万方电子技术有限责任公司 Fusion method of infrared image and visible light dynamic image and fusion device of infrared image and visible light dynamic image
US20140168444A1 (en) * 2012-12-14 2014-06-19 Korea University Research And Business Foundation Apparatus and method for fusing images

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Feng Xin et al.: "Infrared and visible image fusion based on the Shearlet transform", Journal of Optoelectronics·Laser *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104966108A (en) * 2015-07-15 2015-10-07 武汉大学 Visible light and infrared image fusion method based on gradient transfer
CN105554483A (en) * 2015-07-16 2016-05-04 宇龙计算机通信科技(深圳)有限公司 Image processing method and terminal
CN105554483B (en) * 2015-07-16 2018-05-15 宇龙计算机通信科技(深圳)有限公司 A kind of image processing method and terminal
CN106204541A (en) * 2016-06-29 2016-12-07 南京雅信科技集团有限公司 The track foreign body intrusion detection method being combined with infrared light based on visible ray
CN106847149B (en) * 2016-12-29 2020-11-13 武汉华星光电技术有限公司 Tone mapping and displaying method for high dynamic contrast image
CN106847149A (en) * 2016-12-29 2017-06-13 武汉华星光电技术有限公司 A kind of tone mapping of high dynamic contrast image and display methods
CN108154493A (en) * 2017-11-23 2018-06-12 南京理工大学 A kind of pseudo- color blending algorithm of the dual-band infrared image based on FPGA
CN108154493B (en) * 2017-11-23 2021-11-30 南京理工大学 FPGA-based dual-waveband infrared image pseudo-color fusion algorithm
CN108288259A (en) * 2018-01-06 2018-07-17 昆明物理研究所 A kind of gray scale fusion Enhancement Method based on color space conversion
CN108717689A (en) * 2018-05-16 2018-10-30 北京理工大学 Middle LONG WAVE INFRARED image interfusion method and device applied to naval vessel detection field under sky and ocean background
CN109712070A (en) * 2018-12-04 2019-05-03 天津津航技术物理研究所 A kind of infrared panoramic image split-joint method based on graph cut
CN109584174A (en) * 2019-01-29 2019-04-05 电子科技大学 A Gradient Minimization Method for Infrared Image Edge Preserving Denoising
CN109919884A (en) * 2019-01-30 2019-06-21 西北工业大学 Infrared and visible light image fusion method based on Gaussian filter weighting
CN110517210B (en) * 2019-07-08 2021-09-03 河北工业大学 Multi-exposure welding area image fusion method based on Haar wavelet gradient reconstruction
CN110517210A (en) * 2019-07-08 2019-11-29 河北工业大学 Multi-exposure Welding Area Image Fusion Method Based on Haar Wavelet Gradient Reconstruction
CN111738969A (en) * 2020-06-19 2020-10-02 无锡英菲感知技术有限公司 Image fusion method and device and computer readable storage medium
CN111738969B (en) * 2020-06-19 2024-05-28 无锡英菲感知技术有限公司 Image fusion method, device and computer readable storage medium
CN112907493A (en) * 2020-12-01 2021-06-04 航天时代飞鸿技术有限公司 Multi-source battlefield image rapid mosaic fusion algorithm under unmanned aerial vehicle swarm cooperative reconnaissance
CN112907493B (en) * 2020-12-01 2024-07-23 航天时代飞鸿技术有限公司 Multi-source battlefield image rapid mosaic fusion algorithm under unmanned aerial vehicle bee colony collaborative reconnaissance
DE102021006300A1 (en) 2021-12-22 2023-06-22 Diehl Defence Gmbh & Co. Kg Process for generating an initial image
CN114648564A (en) * 2022-05-23 2022-06-21 四川大学 Visible light and infrared image optimized registration method and system for unsteady state target
CN114648564B (en) * 2022-05-23 2022-08-23 四川大学 Visible light and infrared image optimization registration method and system for unsteady state target

Also Published As

Publication number Publication date
CN104751432B (en) 2017-06-16

Similar Documents

Publication Publication Date Title
CN104751432B (en) A kind of visible ray and infrared image fusion method based on Image Reconstruction
CN108230264B (en) A single image dehazing method based on ResNet neural network
Bennett et al. Multispectral bilateral video fusion
CN104620282B (en) For suppressing the method and system of the noise in image
CN105205794B (en) A kind of synchronous enhancing denoising method of low-light (level) image
CN101950416B (en) Real-time image defogging enhancement method based on bilateral filtering
CN108269244B (en) An Image Dehazing System Based on Deep Learning and Prior Constraints
CN104253930B (en) A real-time video defogging method
CN102982513B (en) A kind of adapting to image defogging method capable based on texture
US9870600B2 (en) Raw sensor image and video de-hazing and atmospheric light analysis methods and systems
CN103996178A (en) Sand and dust weather color image enhancing method
CN104504722B (en) Method for correcting image colors through gray points
CN103455991A (en) Multi-focus image fusion method
CN107958465A (en) A Single Image Dehazing Method Based on Deep Convolutional Neural Network
CN103313068B (en) White balance corrected image processing method and device based on gray edge constraint gray world
CN105959510A (en) Video rapid defogging method
CN103226816A (en) Haze image medium transmission rate estimation and optimization method based on quick gaussian filtering
CN104933728A (en) Mixed motion target detection method
CN111325688A (en) Unmanned aerial vehicle image defogging method fusing morphological clustering and optimizing atmospheric light
CN109598736A (en) The method for registering and device of depth image and color image
CN103971335B (en) A kind of image super-resolution rebuilding method based on confidence level kernel regression
Zhang et al. Multisensor infrared and visible image fusion via double joint edge preservation filter and nonglobally saliency gradient operator
CN110580684A (en) image enhancement method based on black-white-color binocular camera
CN102222321A (en) Blind reconstruction method for video sequence
CN107945119B (en) Intra-image correlation noise estimation method based on Bayer pattern

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170616

Termination date: 20200309