
CN1545064A - Infrared and visible light image fusion method - Google Patents


Info

Publication number
CN1545064A
CN1545064A CNA2003101089334A CN200310108933A
Authority
CN
China
Prior art keywords
image
fusion
information
visible light
infrared
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2003101089334A
Other languages
Chinese (zh)
Other versions
CN1273937C (en)
Inventor
敬忠良
王宏
李建勋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiao Tong University
Original Assignee
Shanghai Jiao Tong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiao Tong University filed Critical Shanghai Jiao Tong University
Priority to CN 200310108933 priority Critical patent/CN1273937C/en
Publication of CN1545064A publication Critical patent/CN1545064A/en
Application granted granted Critical
Publication of CN1273937C publication Critical patent/CN1273937C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Links

Images

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to a method for fusing infrared and visible light images. On the basis of separate multi-resolution decompositions of the infrared and visible light images, the different imaging characteristics of the two modalities are exploited to extract relative "target" and "background" information from the original images. The images are thereby divided into a background region, a target region, and the edge part between them; a distinct fusion rule is applied to each of the three parts to determine the multi-resolution representation of the fused image, and the fused image is finally obtained by the inverse multi-resolution transform. By fusing infrared and visible light images of the same scene with this target-region-based method, the target information is preserved to the greatest extent while the edge and detail information of the background is enhanced, and the quality of the fused image is considerably improved.

Description

Infrared and visible light image fusion method

Technical field:

The invention relates to a target-region-based image fusion method for fusing infrared and visible light images of the same scene; it can be widely applied in systems such as optical imaging, target surveillance, and security inspection.

Background art:

Image fusion is the fusion of visual information within multi-sensor information fusion. It exploits the different imaging mechanisms of various imaging sensors to provide complementary information across images, increasing the information content, reducing the volume of raw image data, and improving adaptability to the environment, so as to obtain more reliable and more accurate useful information for observation or further processing. Image fusion is an emerging technology that integrates sensors, signal processing, image processing, and artificial intelligence. In recent years it has become a very important and useful technique for image analysis and computer vision, with broad application prospects in automatic target recognition, computer vision, remote sensing, robotics, medical image processing, and military applications.

In general, because the infrared radiation characteristics of a target and its background differ, an infrared image provides relatively complete target information, but the background information it contains is blurred; conversely, a visible light image provides more comprehensive background information, but the target information is not salient. Through image fusion, a fused image in which both target and background information are accurate and complete can be obtained. Among methods for fusing infrared and visible light images, the representative approach is multi-resolution image fusion: the input images are decomposed into representations at different resolutions, fusion operations based on fusion rules are applied to these representations, and the fused image is obtained by multi-resolution reconstruction. Multi-resolution fusion methods include pyramid schemes such as the Laplacian pyramid, contrast pyramid, and gradient pyramid algorithms. With the development and deepening study of wavelet theory, the discrete wavelet transform has become another very useful tool for multi-resolution image fusion, and fusion methods based on the discrete wavelet transform and on wavelet frames have been widely applied in the field.

In most previous research on fusion algorithms, information such as the imaging characteristics of the original images was not exploited during the fusion operation to improve algorithm performance, and no fusion operations were tailored to the different kinds of information contained in the original images.

Summary of the invention:

The object of the present invention is to address the deficiencies of the prior art by proposing a method for fusing infrared and visible light images that improves the quality of the fused image and achieves a more satisfactory fusion result.

To achieve this object, the method of the present invention, on the basis of separate multi-resolution decompositions of the infrared and visible light images, exploits the different imaging characteristics of the two modalities to obtain relative "target" and "background" information from the original images. The images are thereby divided into three parts: the background region, the target region, and the edge part between them. Three different fusion rules, one for each part, determine the multi-resolution representation of the fused image, and the fused image is finally obtained by the inverse multi-resolution transform.

The method of the present invention comprises the following specific steps:

1. On the basis of registering the infrared and visible light images of the same scene, each image is decomposed with the à trous ("with holes") wavelet algorithm, using the two-dimensional convolution operator derived from the B3-spline scaling function for the actual computation; each image is split into detail information in different frequency bands and approximation information in the lowest band.

2. The infrared image is segmented into regions by thresholding, and the target region is then determined from the imaging characteristics and target image features (including grayscale, texture, and contour information). In general, a target appears in an infrared image as pixels with relatively high gray values. Once the target region of the infrared image has been determined, the original images are divided into three parts: the background region, the target region, and the edge part between them.

3. During fusion, for the target region, the wavelet coefficients of the corresponding region of the infrared image are taken directly as the fused coefficients. For the edge part between the target and background regions, the absolute values of the infrared and visible light wavelet coefficients are compared, and the coefficient with the larger absolute value is taken as the fused coefficient. For the background region, a fusion measure is determined that considers both a wavelet coefficient itself and its correlation with the coefficients in its neighborhood; the size of this measure gauges how much information a coefficient carries. The fusion measures of the infrared and visible light images are compared, and the coefficient with the larger measure is taken as the fused coefficient.

4. The inverse wavelet transform is applied to the wavelet coefficients obtained for the regions to produce the fused image.

The method of the present invention has the following beneficial effects:

When the à trous wavelet algorithm is used to decompose the images, its translation invariance reduces both erroneous selection of fusion coefficients and the influence of registration errors on the fusion result. In the à trous wavelet transform, the resulting wavelet planes all have the same size, so the correspondence between the coefficients of the different planes is easy to establish, which facilitates the fusion operation. Because the à trous algorithm involves no convolution during reconstruction, region-based fusion introduces fewer artifacts along the edges between regions. The target-region-based fusion rules preserve the target information to the greatest extent while enhancing the edge and detail information of the background. The image fusion method of the invention greatly improves the quality of the fused image, which is of real significance and practical value for the subsequent processing and image display stages of an application system.

Brief description of the drawings:

Figure 1 is a schematic diagram of the target-region-based infrared and visible light image fusion method of the present invention.

As shown in the figure, the infrared and visible light images are each decomposed by the wavelet transform (the à trous algorithm) into wavelet coefficient representations at several resolutions. The relative "target" and "background" information obtained from the original images guides the fusion decision, yielding the fused wavelet coefficient representations at the different resolutions. Finally, the inverse wavelet transform is applied to the fused coefficients to obtain the fused image.

Figure 2 is a flow chart of the fusion decision of the present invention.

As shown in the figure, on the basis of region segmentation of the infrared image, the target region is determined from the imaging characteristics; the infrared and visible light images are then divided into three parts: the background region, the target region, and the edge part between them. Different fusion rules are applied to the three regions to obtain the fusion decision map.

Figure 3 shows the original infrared and visible light images of the present invention and compares the fusion result with those of a pixel-based fusion method and two window-based fusion methods.

Here, (a) is the original visible light image; (b) is the original infrared image; (c) is the result of the pixel-based fusion method; (d) and (e) are the results of two window-based fusion methods; and (f) is the result of the target-region-based fusion method of the present invention.

Detailed description of the embodiments:

For a better understanding of the technical solution of the present invention, embodiments of the invention are further described below in conjunction with the accompanying drawings.

The flow of the method is shown in Figure 1. First, the infrared and visible light images are each decomposed by the wavelet transform into wavelet coefficient representations at several resolutions. The relative "target" and "background" information obtained from the original images (the visible light image is richer in background information, while the infrared image is richer in target information) guides the fusion decision, yielding the fused wavelet coefficient representations at the different resolutions. Finally, the inverse wavelet transform is applied to the fused coefficients to obtain the fused image.

The fusion decision flow of the present invention is shown in Figure 2. On the basis of region segmentation of the infrared image, the target region is determined from the imaging characteristics; the infrared and visible light images are then divided into three parts: the background region, the target region, and the edge part between them. Different fusion rules are applied to the three regions to obtain the fusion decision map.

The invention is implemented concretely in the following steps:

1. On the basis of registering the infrared and visible light images of the same scene, the à trous wavelet algorithm is used to decompose the original visible light image and the original infrared image separately, splitting each image into detail information in different frequency bands and approximation information in the lowest band.

The basic idea of the à trous wavelet algorithm is to decompose a signal or image into detail information in different frequency bands and approximation information in the lowest band. The detail information constitutes the wavelet planes, each of which has the same size as the original image. For an image f(x, y), the following sequence of images is obtained level by level:

$$f_k(x, y) = L_k\bigl(f_{k-1}(x, y)\bigr), \qquad f_0(x, y) = f(x, y), \qquad k = 1, 2, \ldots, N \qquad (1)$$

where $f_k(x, y)$ is the approximation image at scale $k$ and $L_k$ denotes low-pass filtering at that scale.

The differences between approximation images at adjacent scales constitute the coefficients of the wavelet transform, i.e. the wavelet planes:

$$\omega_k(x, y) = f_{k-1}(x, y) - f_k(x, y), \qquad k = 1, 2, \ldots, N \qquad (2)$$

With the B3-spline scaling function, the resulting two-dimensional convolution operator is

$$\frac{1}{256}\begin{bmatrix} 1 & 4 & 6 & 4 & 1 \\ 4 & 16 & 24 & 16 & 4 \\ 6 & 24 & 36 & 24 & 6 \\ 4 & 16 & 24 & 16 & 4 \\ 1 & 4 & 6 & 4 & 1 \end{bmatrix} \qquad (3)$$
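As an illustration of the decomposition above, one level of the à trous transform can be sketched in pure Python (a minimal sketch, not the patent's implementation; the function names and the mirror boundary handling are assumptions). The 5×5 operator (3) is applied separably as the 1-D filter (1, 4, 6, 4, 1)/16, dilated by 2^(k-1) at level k, and the wavelet plane is the difference between successive approximations:

```python
# Sketch of one a trous decomposition level (pure Python; images are
# lists of lists of floats). Names like `atrous_level` are illustrative.

B3 = [1/16, 4/16, 6/16, 4/16, 1/16]  # 1-D B3-spline filter; its outer
                                     # product gives the 5x5 operator (3)

def smooth(img, step):
    """Convolve separably with the B3 filter dilated by `step`
    (the 'holes' of the a trous algorithm), mirror boundary handling."""
    h, w = len(img), len(img[0])

    def reflect(i, n):
        while i < 0 or i >= n:
            i = -i - 1 if i < 0 else 2 * n - i - 1
        return i

    # horizontal pass
    tmp = [[sum(B3[j + 2] * img[y][reflect(x + j * step, w)]
                for j in range(-2, 3)) for x in range(w)]
           for y in range(h)]
    # vertical pass
    return [[sum(B3[j + 2] * tmp[reflect(y + j * step, h)][x]
                 for j in range(-2, 3)) for x in range(w)]
            for y in range(h)]

def atrous_level(f_prev, k):
    """Return (f_k, w_k): the approximation at scale k and the wavelet
    plane w_k = f_{k-1} - f_k, both the same size as the input."""
    f_k = smooth(f_prev, 2 ** (k - 1))
    w_k = [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(f_prev, f_k)]
    return f_k, w_k
```

Because the filter coefficients sum to one, a constant image is left unchanged and its wavelet planes are identically zero, which is one quick sanity check on such an implementation.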

2. The infrared image is segmented into regions by thresholding, and the target region is then determined from the imaging characteristics and target image features (including grayscale, texture, and contour information).

A target appears in an infrared image as pixels with relatively high gray values, so the target region is determined from the differences in pixel gray values in the image together with the motion information of the target. In practice, it cannot be ruled out that some non-target regions are segmented as target regions; since such a region shares some characteristics of a target region, it is still treated as one whenever it cannot be determined not to be. Here, selection as target region takes priority over selection as any other region.

Once the target region has been determined, the infrared and visible light images can be divided into three parts: the background region, the target region, and the edge part between them.
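The partition into the three parts can be sketched as follows (a minimal pure-Python illustration; the fixed threshold and the one-pixel edge band around the target are assumptions, since the patent fixes neither):

```python
# Partition an infrared image into target, edge, and background masks.
# The threshold value and the width of the edge band are assumptions.

def partition(ir, threshold, band=1):
    """Return boolean masks (target, edge, background) for image `ir`:
    target = pixels at or above `threshold`; edge = non-target pixels
    within `band` pixels of the target; background = everything else."""
    h, w = len(ir), len(ir[0])
    target = [[ir[y][x] >= threshold for x in range(w)] for y in range(h)]

    def near_target(y, x):
        return any(target[yy][xx]
                   for yy in range(max(0, y - band), min(h, y + band + 1))
                   for xx in range(max(0, x - band), min(w, x + band + 1)))

    edge = [[not target[y][x] and near_target(y, x)
             for x in range(w)] for y in range(h)]
    background = [[not target[y][x] and not edge[y][x]
                   for x in range(w)] for y in range(h)]
    return target, edge, background
```

The three masks are disjoint and cover the image, so each wavelet coefficient position falls under exactly one of the fusion rules below.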

3. Different image fusion rules are applied to the three parts. Let A denote the infrared image and B the visible light image; let X denote the target region, X̄ the background region, and E the edge part between the two regions. The fusion rules are as follows:

● Fusion of the "high-frequency" coefficients

(1) Target region. In general, the pixels inside the target region have essentially the same gray value or texture, so the point is not to compare "high-frequency" magnitudes but to preserve the target information of this region to the greatest extent. The fusion coefficients for this region are therefore chosen as

$$C_F^H(m, n) = C_A^H(m, n), \qquad (m, n) \in X \qquad (4)$$

(2) Edge part between the two regions. Here the positions of the coefficients are known; to render the edge information as well as possible, the fusion coefficients for this part are chosen as

$$C_F^H(m, n) = \begin{cases} C_A^H(m, n), & \left|C_A^H(m, n)\right| \ge \left|C_B^H(m, n)\right| \\ C_B^H(m, n), & \text{otherwise} \end{cases} \qquad (m, n) \in E \qquad (5)$$

(3) Background region. Compared with the target region, the background region contains rich edge and detail information. To render the detail of this region as well as possible, and allowing for detail of the target that was not extracted into the target region, the invention uses a new fusion measure to select the fusion coefficients so as to emphasize detail. The measure considers both a coefficient itself and its correlation with the coefficients in its neighborhood:

$$PI_X(m, n) = C_p(m, n) \cdot I_t(m, n) \qquad (6)$$

where PI_X(m, n) measures the information contained in the coefficient, C_p(m, n) reflects the coefficient's own feature information, and I_t(m, n) reflects its correlation with the coefficients in the neighborhood.

$$C_p(m, n) = \left|C_X^H(m, n)\right| \qquad (7)$$

where C_X^H(m, n) is a high-frequency coefficient of the image.

For the correlation with neighboring coefficients, the sign value of each high-frequency coefficient is determined first:

$$\mathrm{sign}(m, n) = \begin{cases} 1, & C_X^H(m, n) \ge 0 \\ 0, & \text{otherwise} \end{cases} \qquad (8)$$

I_t(m, n) is then expressed as

$$I_t(m, n) = p_X(m, n) \cdot \bigl(1 - p_X(m, n)\bigr) \qquad (9)$$

where p_X(m, n) is the probability that the coefficient at (m, n) has the same sign value as the coefficients in its neighborhood: if the coefficient agrees in sign with all of its neighbors, p_X = 1; if it agrees with none of them, p_X = 0.
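Equations (6) through (9) can be sketched as a single function (an illustrative pure-Python sketch; the 3×3 neighborhood is an assumption, since the patent does not specify the window size):

```python
def fusion_measure(coeff, y, x):
    """PI(m, n) = C_p * I_t for the high-frequency coefficient at (y, x):
    C_p is the coefficient magnitude, eq. (7); I_t = p * (1 - p), eq. (9),
    where p is the fraction of neighbors sharing the coefficient's sign
    value, eq. (8). A 3x3 neighborhood is assumed here."""
    h, w = len(coeff), len(coeff[0])
    sign = lambda v: 1 if v >= 0 else 0          # sign value, eq. (8)
    s0 = sign(coeff[y][x])
    neighbors = [coeff[yy][xx]
                 for yy in range(max(0, y - 1), min(h, y + 2))
                 for xx in range(max(0, x - 1), min(w, x + 2))
                 if (yy, xx) != (y, x)]
    p = sum(sign(v) == s0 for v in neighbors) / len(neighbors)
    c_p = abs(coeff[y][x])                       # eq. (7)
    i_t = p * (1 - p)                            # eq. (9)
    return c_p * i_t                             # eq. (6)
```

Note that the measure vanishes when a coefficient agrees in sign with all of its neighbors (p = 1) or with none of them (p = 0), and peaks when the neighborhood signs are mixed.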

The fusion coefficients for this part are chosen as

$$C_F^H(m, n) = \begin{cases} C_A^H(m, n), & PI_A(m, n) \ge PI_B(m, n) \\ C_B^H(m, n), & \text{otherwise} \end{cases} \qquad (m, n) \in \bar{X} \qquad (10)$$
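Rules (4), (5), and (10) can be combined into a single selection over the coefficient maps (a minimal sketch; the measure maps PI_A and PI_B of equation (6) are assumed to have been precomputed and are passed in):

```python
def fuse_high(cA, cB, piA, piB, target, edge):
    """Fused high-frequency coefficients: rule (4) in the target region,
    rule (5) on the edge band, rule (10) elsewhere (background).
    cA/cB are the infrared/visible coefficient maps; piA/piB are the
    precomputed fusion-measure maps PI of eq. (6)."""
    h, w = len(cA), len(cA[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if target[y][x]:
                out[y][x] = cA[y][x]             # (4): keep infrared
            elif edge[y][x]:
                out[y][x] = (cA[y][x]            # (5): larger magnitude
                             if abs(cA[y][x]) >= abs(cB[y][x])
                             else cB[y][x])
            else:
                out[y][x] = (cA[y][x]            # (10): larger PI
                             if piA[y][x] >= piB[y][x]
                             else cB[y][x])
    return out
```

The selection runs independently per position, so the same function can be applied to every wavelet plane of the decomposition.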

● Fusion of the low-frequency coefficients

4. The inverse wavelet transform is applied to the "high-frequency" and low-frequency fusion coefficients obtained above to produce the fused image.

The image is reconstructed as

$$f(x, y) = \sum_{k=1}^{N} \omega_k(x, y) + f_N(x, y) \qquad (12)$$
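Since each wavelet plane is the plain difference of successive approximations, reconstruction (12) is exact by construction for any smoothing filter L_k. A compact pure-Python sketch (illustrative function names; the `smooth` callable passed in stands for the B3-spline filtering of step 1):

```python
def decompose(img, levels, smooth):
    """Split img into wavelet planes w_1..w_N (w_k = f_{k-1} - f_k)
    and the lowest-band approximation f_N. `smooth(f, k)` returns the
    low-pass approximation of f at level k."""
    planes, f = [], img
    for k in range(1, levels + 1):
        f_next = smooth(f, k)
        planes.append([[a - b for a, b in zip(ra, rb)]
                       for ra, rb in zip(f, f_next)])
        f = f_next
    return planes, f

def reconstruct(planes, residual):
    """Eq. (12): the image is the sum of all wavelet planes plus the
    lowest-band approximation; no convolution is needed."""
    out = [row[:] for row in residual]
    for plane in planes:
        out = [[a + b for a, b in zip(ra, rb)]
               for ra, rb in zip(out, plane)]
    return out
```

With any choice of `smooth`, `reconstruct(*decompose(img, N, smooth))` returns `img` up to floating-point rounding, which is why the à trous scheme needs no convolution at reconstruction time and so disturbs the edges between fused regions less.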

The fusion result of the present invention was compared with those of other fusion methods; the evaluation is summarized in Table 1, and Figure 3 shows the fused images. In Figure 3, (a) is the original visible light image; (b) is the original infrared image; (c) is the result of the pixel-based fusion method; (d) and (e) are the results of two window-based fusion methods; and (f) is the result of the target-region-based fusion method of the present invention. The results show that the invention considerably improves the quality of the fused image and outperforms the other fusion methods on both metrics.

Table 1. Metric evaluation of the image fusion results

                        Target-region based   Pixel based   Window based (method 1)   Window based (method 2)
Mutual information      3.5953                1.4415        1.4468                    1.4479
Edge information        0.3228                0.3091        0.3106                    0.3193

Claims (1)

1. A method for fusing infrared and visible light images, characterized by comprising the following specific steps:
1) on the basis of registering the infrared and visible light images of the same scene, decomposing each image with the à trous wavelet algorithm, using the two-dimensional convolution operator derived from the B3-spline scaling function for the actual computation, so that each image is decomposed into detail information in different frequency bands and approximation information in the lowest band;
2) segmenting the infrared image into regions by thresholding, then determining the target region from the imaging characteristics and target image feature information; once the target region of the infrared image has been determined, dividing the original images into the background region, the target region, and the edge part between them;
3) during fusion, for the target region, taking the wavelet coefficients of the corresponding region of the infrared image directly as the fused coefficients; for the edge part between the target and background regions, comparing the absolute values of the infrared and visible light wavelet coefficients and taking the coefficient with the larger absolute value as the fused coefficient; for the background region, determining a fusion measure that considers both a coefficient itself and its correlation with neighboring coefficients, the size of the measure gauging how much information a coefficient carries, comparing the fusion measures of the infrared and visible light images, and taking the coefficient with the larger measure as the fused coefficient;
4) applying the inverse wavelet transform to the wavelet coefficients obtained for the regions to obtain the fused image.
CN 200310108933 2003-11-27 2003-11-27 Infrared and visible light image merging method Expired - Fee Related CN1273937C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200310108933 CN1273937C (en) 2003-11-27 2003-11-27 Infrared and visible light image merging method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 200310108933 CN1273937C (en) 2003-11-27 2003-11-27 Infrared and visible light image merging method

Publications (2)

Publication Number Publication Date
CN1545064A true CN1545064A (en) 2004-11-10
CN1273937C CN1273937C (en) 2006-09-06

Family

ID=34334947

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200310108933 Expired - Fee Related CN1273937C (en) 2003-11-27 2003-11-27 Infrared and visible light image merging method

Country Status (1)

Country Link
CN (1) CN1273937C (en)

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100383805C (en) * 2005-11-03 2008-04-23 复旦大学 A method for classifying urban features based on the fusion of satellite-borne microwave and infrared remote sensing
CN100442816C (en) * 2004-12-22 2008-12-10 索尼株式会社 Image processing device, image processing method, and imaging device
CN1873693B (en) * 2006-06-27 2010-05-12 上海大学 Image Fusion Method Based on Contourlet Transform and Improved Pulse-Coupled Neural Network
EP2194503A1 (en) * 2008-10-27 2010-06-09 Guangzhou SAT Infrared Technology Co., Ltd. Method and device for infrared and visible image fusion
CN101846751A (en) * 2010-05-14 2010-09-29 中国科学院上海技术物理研究所 Real-time image fusion system and method for detecting concealed weapons
CN101976436A (en) * 2010-10-14 2011-02-16 西北工业大学 Pixel-level multi-focus image fusion method based on correction of differential image
CN101493936B (en) * 2008-05-30 2011-03-23 内蒙古科技大学 Multi- resolution non-rigid head medicine image registration method based on image edge
CN102254314A (en) * 2011-07-17 2011-11-23 西安电子科技大学 Visible-light/infrared image fusion method based on compressed sensing
CN101527039B (en) * 2008-03-06 2011-12-28 河海大学 Automatic image registration and rapid super-resolution fusion method based on edge feature
CN102646272A (en) * 2012-02-23 2012-08-22 南京信息工程大学 Fusion method of wavelet meteorological satellite cloud image based on combination of local variance and weighting
CN102722864A (en) * 2012-05-18 2012-10-10 清华大学 Image enhancement method
CN104021537A (en) * 2014-06-23 2014-09-03 西北工业大学 Infrared and visible image fusion method based on sparse representation
CN104268847A (en) * 2014-09-23 2015-01-07 西安电子科技大学 Infrared light image and visible light image fusion method based on interactive non-local average filtering
CN104809714A (en) * 2015-04-29 2015-07-29 华东交通大学 Image fusion method based on multi-morphological sparse representation
CN104867123A (en) * 2010-04-23 2015-08-26 前视红外系统股份公司 Infrared Resolution And Contrast Enhancement With Fusion
CN104899848A (en) * 2015-07-02 2015-09-09 苏州科技学院 Self-adaptive multi-strategy image fusion method based on riemannian metric
CN104995910A (en) * 2012-12-21 2015-10-21 菲力尔系统公司 Infrared imaging enhancement with fusion

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101696947B (en) * 2009-10-13 2011-08-17 公安部第一研究所 Intelligent method for fusing X-ray dual-energy transmission with Compton backscatter images

Cited By (51)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11032492B2 (en) 2004-12-03 2021-06-08 Fluke Corporation Visible light and IR combined image camera
CN100442816C (en) * 2004-12-22 2008-12-10 索尼株式会社 Image processing device, image processing method, and imaging device
CN100383805C (en) * 2005-11-03 2008-04-23 复旦大学 A method for classifying urban features based on the fusion of satellite-borne microwave and infrared remote sensing
CN1873693B (en) * 2006-06-27 2010-05-12 上海大学 Image Fusion Method Based on Contourlet Transform and Improved Pulse-Coupled Neural Network
CN101527039B (en) * 2008-03-06 2011-12-28 河海大学 Automatic image registration and rapid super-resolution fusion method based on edge feature
CN101493936B (en) * 2008-05-30 2011-03-23 内蒙古科技大学 Multi-resolution non-rigid head medical image registration method based on image edges
EP2194503A1 (en) * 2008-10-27 2010-06-09 Guangzhou SAT Infrared Technology Co., Ltd. Method and device for infrared and visible image fusion
US10249032B2 (en) 2010-04-23 2019-04-02 Flir Systems Ab Infrared resolution and contrast enhancement with fusion
CN104867123B (en) * 2010-04-23 2019-02-19 前视红外系统股份公司 Infrared resolution and contrast enhancement with fusion
CN104867123A (en) * 2010-04-23 2015-08-26 前视红外系统股份公司 Infrared Resolution And Contrast Enhancement With Fusion
US11514563B2 (en) 2010-04-23 2022-11-29 Flir Systems Ab Infrared resolution and contrast enhancement with fusion
CN101846751A (en) * 2010-05-14 2010-09-29 中国科学院上海技术物理研究所 Real-time image fusion system and method for detecting concealed weapons
CN101846751B (en) * 2010-05-14 2012-11-14 中国科学院上海技术物理研究所 Real-time image fusion system and method for detecting concealed weapons
CN101976436A (en) * 2010-10-14 2011-02-16 西北工业大学 Pixel-level multi-focus image fusion method based on correction of differential image
CN101976436B (en) * 2010-10-14 2012-05-30 西北工业大学 A pixel-level multi-focus image fusion method based on difference map correction
CN102254314A (en) * 2011-07-17 2011-11-23 西安电子科技大学 Visible-light/infrared image fusion method based on compressed sensing
CN102646272A (en) * 2012-02-23 2012-08-22 南京信息工程大学 Fusion method of wavelet meteorological satellite cloud image based on combination of local variance and weighting
CN102722864B (en) * 2012-05-18 2014-11-26 清华大学 Image enhancement method
CN102722864A (en) * 2012-05-18 2012-10-10 清华大学 Image enhancement method
CN104995910A (en) * 2012-12-21 2015-10-21 菲力尔系统公司 Infrared imaging enhancement with fusion
CN104995910B (en) * 2012-12-21 2018-07-13 菲力尔系统公司 Infrared imaging enhancement with fusion
US10148895B2 (en) * 2013-12-19 2018-12-04 Flir Systems Ab Generating a combined infrared/visible light image having an enhanced transition between different types of image information
US10366496B2 (en) 2014-03-21 2019-07-30 Fluke Corporation Visible light image with edge marking for enhancing IR imagery
US9990730B2 (en) 2014-03-21 2018-06-05 Fluke Corporation Visible light image with edge marking for enhancing IR imagery
US10726559B2 (en) 2014-03-21 2020-07-28 Fluke Corporation Visible light image with edge marking for enhancing IR imagery
CN104021537A (en) * 2014-06-23 2014-09-03 西北工业大学 Infrared and visible image fusion method based on sparse representation
CN104268847A (en) * 2014-09-23 2015-01-07 西安电子科技大学 Infrared light image and visible light image fusion method based on interactive non-local average filtering
CN104268847B (en) * 2014-09-23 2017-04-05 西安电子科技大学 Infrared and visible light image fusion method based on interactive non-local mean filtering
CN104809714A (en) * 2015-04-29 2015-07-29 华东交通大学 Image fusion method based on multi-morphological sparse representation
CN104899848A (en) * 2015-07-02 2015-09-09 苏州科技学院 Self-adaptive multi-strategy image fusion method based on Riemannian metric
CN105069769A (en) * 2015-08-26 2015-11-18 哈尔滨工业大学 Low-light and infrared night vision image fusion method
US10152811B2 (en) 2015-08-27 2018-12-11 Fluke Corporation Edge enhancement for thermal-visible combined images and cameras
CN105739092B (en) * 2016-04-01 2018-05-15 深圳中科天衢能源安全技术有限公司 Dual-optical-path optical system and image fusion method thereof
CN105739092A (en) * 2016-04-01 2016-07-06 深圳中科天衢能源安全技术有限公司 Dual-optical-path optical system and image fusion method thereof
CN109478315B (en) * 2016-07-21 2023-08-01 前视红外系统股份公司 Fusion image optimization system and method
CN109478315A (en) * 2016-07-21 2019-03-15 前视红外系统股份公司 Fusion image optimization system and method
CN106408585B (en) * 2016-11-28 2019-03-15 江苏省山水生态环境建设工程有限公司 Ecological landscape slope monitoring system
CN106408585A (en) * 2016-11-28 2017-02-15 深圳万智联合科技有限公司 Ecological landscape slope monitoring system
CN106960202A (en) * 2017-04-11 2017-07-18 广西师范大学 Smiling face recognition method based on fusion of visible light and infrared images
CN106960202B (en) * 2017-04-11 2020-05-19 湖南灵想科技股份有限公司 Smiling face identification method based on visible light and infrared image fusion
CN107845109A (en) * 2017-11-17 2018-03-27 杨俊刚 Panoramic depth fusion method and system for light field array camera refocused images
CN108510455A (en) * 2018-03-27 2018-09-07 长春理工大学 Laser irradiator image fusion method and system
CN108510455B (en) * 2018-03-27 2020-07-17 长春理工大学 Method and system for image fusion of laser irradiator
CN108765324A (en) * 2018-05-16 2018-11-06 上海爱优威软件开发有限公司 Infrared-based image processing method and system
CN110246108A (en) * 2018-11-21 2019-09-17 浙江大华技术股份有限公司 Image processing method, apparatus and computer-readable storage medium
US11875520B2 (en) 2018-11-21 2024-01-16 Zhejiang Dahua Technology Co., Ltd. Method and system for generating a fusion image
CN110443776A (en) * 2019-08-07 2019-11-12 中国南方电网有限责任公司超高压输电公司天生桥局 Data registration and fusion method based on an unmanned aerial vehicle pod
CN110889817A (en) * 2019-11-19 2020-03-17 中国人民解放军海军工程大学 Image fusion quality evaluation method and device
CN111968068A (en) * 2020-08-18 2020-11-20 杭州海康微影传感科技有限公司 Thermal imaging image processing method and device
CN114519808A (en) * 2022-02-21 2022-05-20 烟台艾睿光电科技有限公司 Image fusion method, device and equipment and storage medium
WO2023155324A1 (en) * 2022-02-21 2023-08-24 烟台艾睿光电科技有限公司 Image fusion method and apparatus, device and storage medium

Also Published As

Publication number Publication date
CN1273937C (en) 2006-09-06

Similar Documents

Publication Publication Date Title
CN1273937C (en) Infrared and visible light image fusion method
Xiang et al. Crack detection algorithm for concrete structures based on super-resolution reconstruction and segmentation network
Bhat et al. Multi-focus image fusion techniques: a survey
CN1822046A (en) Infrared and visible light image fusion method based on fuzzy regional features
CN100342398C (en) Image processing method and apparatus
CN1581231A (en) Infrared and visible light dynamic image fusion method based on moving target detection
CN1251145C (en) Pyramid image fusion method integrating edge and texture information
Nie et al. Ghostsr: Learning ghost features for efficient image super-resolution
CN1568479A (en) Method and apparatus for discriminating between different regions of an image
CN1831556A (en) Super-resolution reconstruction method of small target in a single satellite remote sensing image
CN105913392A (en) Method for improving overall quality of degraded images in complex environments
CN1847782A (en) 2D Image Region Location Method Based on Raster Projection
Chengtao et al. A survey of image dehazing approaches
CN1284975C (en) An Optimal Method for Bilinear Interpolation and Wavelet Transform Fusion of Remote Sensing Images
CN114693524A (en) Accurate matching and fast stitching method for side-scan sonar images, device and storage medium
CN1489111A (en) Remote Sensing Image Fusion Method Based on Local Statistical Characteristics and Color Space Transformation
CN1770201A (en) An Adjustable Remote Sensing Image Fusion Method Based on Wavelet Transform
US8300965B2 (en) Methods and apparatus to perform multi-focal plane image acquisition and compression
CN1595433A (en) Recursive denoising method based on motion detection
CN101075350A (en) Assembly for converting two-dimensional cartoon into three-dimensional cartoon by dynamic outline technology
CN109242797B (en) Image denoising method, system and medium based on fusion of homogeneous and heterogeneous regions
CN106067163A (en) Image rain removal method and system based on wavelet analysis
CN118052737B (en) Unmanned aerial vehicle image defogging algorithm based on perception guidance of super-pixel scene priori
CN1928920A (en) Single-pixel-based threshold segmentation method in a three-dimensional scanning system
CN1588445A (en) Image fusion method based on directional filter units

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20060906

Termination date: 20091228