CN107301420A - A kind of thermal infrared imagery object detection method based on significance analysis - Google Patents
A kind of thermal infrared imagery object detection method based on significance analysis
- Publication number
- CN107301420A (application number CN201710527099.4A)
- Authority
- CN
- China
- Prior art keywords
- thermal infrared
- infrared imagery
- salient
- image
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a thermal infrared image target detection method based on saliency analysis, comprising: S1, constructing a global saliency map of the thermal infrared image; S2, constructing a local saliency map of the thermal infrared image; S3, constructing a fused saliency map of the thermal infrared image; S4, thermal infrared image target detection, in which threshold segmentation is applied to the fused saliency map to obtain a binary result map marking the target positions. The method of the invention compensates for the deficiencies of morphological filtering, median filtering and the two-dimensional least squares method, and can be used for image enhancement and target detection in thermal infrared images. The invention is based on the gray-level and orientation features of a single-band image, is not limited by the number of channels of the thermal infrared imager, and applies to both single-band and multi-band thermal infrared images; it achieves high detection accuracy, requires no auxiliary information or manual intervention, is fast to compute, and can be run automatically.
Description
Technical Field
The invention belongs to the field of remote sensing image processing, and in particular relates to a thermal infrared image target detection method based on saliency analysis.
Background Art
Thermal infrared imaging obtains the temperature of an object by receiving the radiation it emits in the thermal infrared band, and therefore has the ability to detect thermally anomalous targets. Its good performance at night and in adverse weather gives thermal infrared target detection wide application in target recognition and tracking, alarm systems, and forest fire monitoring. Because thermal infrared images have a low signal-to-noise ratio and lack texture information, target detection is generally performed by background suppression or target modeling.
Traditional thermal infrared image target detection methods include morphological filtering, median filtering and the two-dimensional least squares method. When the signal-to-noise ratio of the thermal infrared image is low, the structural information of the target is weak and noise interference is severe, morphological filtering can easily lower the signal-to-noise ratio of the original image or even lose the target. The window size of the median filter is difficult to determine, and while removing noise it also damages important image detail and preserves target edges poorly. When the two-dimensional least squares method is applied to thermal infrared target detection, the filter tends not to converge when it lies inside a target region whose correlation range is small, leading to inaccurate estimates of the target pixels. Traditional thermal infrared target detection algorithms are designed for specific targets and rely on prior knowledge of the target to set the filter window, so they lack generality.
Summary of the Invention
The object of the present invention is to provide, for thermal infrared remote sensing images, a thermal infrared image target detection method based on saliency analysis.
To achieve the above object, the thermal infrared image target detection method based on saliency analysis of the present invention comprises the following steps:
Step 1: read the thermal infrared image and construct the global saliency map of the thermal infrared image;
Step 2: read the thermal infrared image and construct the local saliency map of the thermal infrared image; this step further comprises the following sub-steps:
Step 2.1: extract the gray-level feature map of the thermal infrared image;
Step 2.2: extract the orientation feature map of the thermal infrared image;
Step 2.3: fuse the gray-level feature map and the orientation feature map to construct the local saliency map of the thermal infrared image, and normalize the local saliency map;
Step 3: normalize the global saliency map from Step 1 and combine the normalized global and local saliency maps to construct the fused saliency map of the thermal infrared image, as follows:
The fused saliency map is computed as S = SG ∘ SL, where ∘ may be any of [+, *, max]; that is, the normalized global saliency map SG and the normalized local saliency map SL are combined by addition, multiplication or taking the element-wise maximum, and the resulting fused saliency map is then normalized.
Step 4: thermal infrared image target detection: threshold segmentation is applied to the fused saliency map to obtain a binary result map marking the target positions.
Further, Step 1 is implemented as follows:
Step 1.1: read the thermal infrared image and apply Gaussian filtering as preprocessing to remove the influence of noise on target detection;
Step 1.2: compute the global saliency map as SG(x,y) = ||Iμ − I(x,y)||, where Iμ is the mean gray value of the thermal infrared image, I(x,y) is the gray value of the thermal infrared image at pixel (x,y), and || || denotes the Euclidean distance between the two gray values.
Further, Step 2.1 is implemented as follows:
Step 2.1.1: smooth the thermal infrared image with Gaussian functions of different scales to obtain a gray-level image pyramid from scale 0 to scale 8. Define the Gaussian convolution kernel G(x) = (1/(σ√(2π))) exp(−(x−μ)²/(2σ²)), with standard deviation σ, mean μ = 0 and scale factor x = [1, 2, ..., 8]; convolve the thermal infrared image with the Gaussian kernel and downsample level by level to build the gray-level image pyramid from scale 0 to scale 8;
Step 2.1.2: compute the gray-level feature map of the thermal infrared image with the center-surround difference operation, expressed as I(c,s) = |I(c) ⊖ I(s)|, where I(c) is the center-scale image of the gray-level feature pyramid, I(s) is the surround-scale image of the gray-level feature pyramid, and ⊖ denotes the center-surround difference operation.
Further, Step 2.2 is implemented as follows:
Step 2.2.1: define the two-dimensional Gabor function g(x,y) = (1/(2πσxσy)) exp(−(x²/(2σx²) + y²/(2σy²))), where (x,y) are pixel coordinates and σx and σy are the standard deviations of the Gaussian envelope in the x and y directions. Filtering is performed in the four orientations 0°, 45°, 90° and 135° using the Gabor filter template h(x,y) = g(x',y')cos(2πωf x'), where (x',y') = (x cos(θf) + y sin(θf), −x sin(θf) + y cos(θf)), ωf is the modulation frequency and θf ∈ {0°, 45°, 90°, 135°}; convolving the filter templates with the thermal infrared image yields the orientation features for 0°, 45°, 90° and 135°;
Step 2.2.2: repeat Step 2.1.1 for the four thermal infrared orientation features at 0°, 45°, 90° and 135° to build, for each orientation, an orientation feature pyramid from scale 0 to scale 8;
Step 2.2.3: for each of the four orientation feature pyramids, select the center-scale image I(c) and the surround-scale image I(s) and perform the center-surround difference operation, obtaining four groups of sub-orientation feature maps for 0°, 45°, 90° and 135°;
Step 2.2.4: linearly fuse the four groups of sub-orientation feature maps to obtain the final orientation feature map.
Further, Step 2.3 is implemented as follows:
Step 2.3.1: normalize the gray-level feature map; specifically, find the maximum pixel value max and the minimum pixel value min of the gray-level feature map and reset each pixel x of the gray-level feature map to the new value (x − min)/(max − min);
Step 2.3.2: repeat Step 2.3.1 to normalize the orientation feature map;
Step 2.3.3: compute the local saliency map by adding the normalized gray-level feature map and orientation feature map pixel by pixel, resampling the result to the size of the original thermal infrared image, and normalizing it.
Further, the segmentation threshold in Step 4 is computed as T = m + kσ, where m is the mean of the fused saliency map, σ is the variance of the fused saliency map, and k is an empirical threshold.
Further, the empirical threshold k takes a value of 5 to 10.
Further, the binary result map marking the target positions in Step 4 is obtained as follows: create a binary result map of the same size as the original thermal infrared image, compare the gray value of each pixel of the fused saliency map at coordinates (x,y) with the threshold T one by one, and if it is greater than T, set the corresponding pixel at (x,y) in the binary result map to 1.
The present invention has the following advantages and beneficial effects:
(1) High detection accuracy with few false detections; applicable to thermal infrared images acquired by different imaging platforms.
(2) Detection is based on the gray-level and orientation features of a single-band image, is not limited by the number of channels of the thermal infrared imager, and applies to both single-band and multi-band thermal infrared images.
(3) No auxiliary information or manual intervention is required; computation is fast and the processing can be automated.
(4) Strong extensibility: when conditions permit, the method of the invention can serve as an initial decision criterion within a composite decision scheme to obtain more accurate thermal infrared target detection results.
Brief Description of the Drawings
Fig. 1 is a flowchart of an embodiment of the present invention.
Fig. 2 shows thermal infrared target detection results of an embodiment of the present invention: (a) a thermal infrared image containing a car-engine target, (b) the corresponding fused saliency map, and (c) the binary result map obtained by segmenting the fused saliency map, with white indicating the target.
Fig. 3 shows thermal infrared target detection results of an embodiment of the present invention: (a) a thermal infrared image containing pedestrian targets, (b) the corresponding fused saliency map, and (c) the binary result map obtained by segmenting the fused saliency map, with white indicating the target.
Detailed Description of the Embodiments
To better understand the technical solution of the present invention, the invention is described in further detail below with reference to the accompanying drawings and embodiments.
This embodiment is implemented in the MATLAB 2011b environment and programmed in the MATLAB language; the whole process can be run automatically. The specific implementation is as follows:
Step 1: read the thermal infrared image and construct the global saliency map of the thermal infrared image; this step further comprises:
Step 1.1: read the thermal infrared image and apply Gaussian filtering as preprocessing to remove the influence of noise on target detection.
Step 1.2: compute the global saliency map as SG(x,y) = ||Iμ − I(x,y)||, where Iμ is the mean gray value of the thermal infrared image, I(x,y) is the gray value of the thermal infrared image at pixel (x,y), and || || denotes the Euclidean distance between the two gray values.
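A minimal sketch of Steps 1.1-1.2 in Python with NumPy/SciPy (the embodiment itself is written in MATLAB); the pre-filtering sigma of 2.0 is an illustrative assumption, not a value fixed by the patent for this step:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def global_saliency(image, smooth_sigma=2.0):
    """Global saliency map: distance of each pixel from the mean gray value (Steps 1.1-1.2)."""
    # Step 1.1: Gaussian pre-filtering to suppress noise (sigma is an assumed value)
    img = gaussian_filter(image.astype(np.float64), sigma=smooth_sigma)
    mu = img.mean()              # mean gray value I_mu of the image
    return np.abs(img - mu)      # ||I_mu - I(x,y)|| reduces to |I_mu - I(x,y)| for scalar gray values
```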
Step 2: construct the local saliency map of the thermal infrared image; the specific implementation of this step is as follows:
Step 2.1: extract the gray-level feature map of the thermal infrared image, which further comprises:
Step 2.1.1: build the gray-level feature pyramid of the thermal infrared image. The thermal infrared image is smoothed with Gaussian functions of different scales to obtain a gray-level image pyramid from scale 0 to scale 8. Define the Gaussian convolution kernel G(x) = (1/(σ√(2π))) exp(−(x−μ)²/(2σ²)); the parameters are set to standard deviation σ = 2, mean μ = 0 and scale factor x = [1, 2, ..., 8]. The thermal infrared image is convolved with the Gaussian kernel and downsampled level by level to build the gray-level image pyramid from scale 0 (the original image) to scale 8;
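A sketch of the pyramid construction (Step 2.1.1), assuming factor-2 downsampling between successive levels, which the patent does not state explicitly; sigma = 2 follows the embodiment:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gray_pyramid(image, levels=9, sigma=2.0):
    """Gray-level image pyramid from scale 0 (original image) to scale 8 (Step 2.1.1)."""
    pyramid = [image.astype(np.float64)]
    for _ in range(1, levels):
        smoothed = gaussian_filter(pyramid[-1], sigma=sigma)  # Gaussian smoothing with sigma = 2
        pyramid.append(smoothed[::2, ::2])                    # downsample by 2 (assumed dyadic pyramid)
    return pyramid
```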
Step 2.1.2: compute the gray-level feature map of the thermal infrared image with the center-surround difference operation, expressed as I(c,s) = |I(c) ⊖ I(s)|, where I(c) is the center-scale image of the gray-level feature pyramid, I(s) is the surround-scale image, and ⊖ denotes the center-surround difference. Concretely, the image I(s) is upsampled to the same resolution as I(c) and then subtracted pixel by pixel to obtain the gray-level feature map; I(c) may be chosen from the scale 0-2 images and I(s) from the scale 3-5 images.
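A sketch of the center-surround difference (Step 2.1.2); bilinear upsampling of I(s) is an assumption, since the patent only states that I(s) is upsampled to the resolution of I(c):

```python
import numpy as np
from scipy.ndimage import zoom

def center_surround(center, surround):
    """Center-surround difference |I(c) (-) I(s)|: upsample I(s) to the size of I(c), subtract, take the absolute value."""
    fy = center.shape[0] / surround.shape[0]
    fx = center.shape[1] / surround.shape[1]
    surround_up = zoom(surround, (fy, fx), order=1)           # bilinear upsampling (assumed)
    surround_up = surround_up[:center.shape[0], :center.shape[1]]
    return np.abs(center - surround_up)
```

With the pyramid from Step 2.1.1, the gray-level feature maps would then be obtained as, for example, center_surround(pyramid[c], pyramid[s]) with c in {0, 1, 2} and s in {3, 4, 5}.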
Step 2.2: extract the orientation feature map of the thermal infrared image, which further comprises:
Step 2.2.1: Gabor orientation feature extraction from the thermal infrared image. Define the two-dimensional Gabor function g(x,y) = (1/(2πσxσy)) exp(−(x²/(2σx²) + y²/(2σy²))), where (x,y) are pixel coordinates and σx and σy are the standard deviations of the Gaussian envelope in the x and y directions. Filtering is performed in the four orientations 0°, 45°, 90° and 135° using the Gabor filter template h(x,y) = g(x',y')cos(2πωf x'), where (x',y') = (x cos(θf) + y sin(θf), −x sin(θf) + y cos(θf)), ωf is the modulation frequency and θf ∈ {0°, 45°, 90°, 135°}; convolving the filter templates with the thermal infrared image yields the orientation features for 0°, 45°, 90° and 135°;
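A sketch of the Gabor orientation filtering (Step 2.2.1); the kernel size, sigma_x, sigma_y and the modulation frequency omega_f are illustrative assumptions, as the patent does not fix these values:

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(size=15, sigma_x=2.0, sigma_y=2.0, omega_f=0.25, theta_deg=0.0):
    """Gabor template h(x,y) = g(x',y') * cos(2*pi*omega_f*x') for one orientation theta_f."""
    theta = np.deg2rad(theta_deg)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    xp = x * np.cos(theta) + y * np.sin(theta)        # rotated coordinate x'
    yp = -x * np.sin(theta) + y * np.cos(theta)       # rotated coordinate y'
    g = np.exp(-0.5 * (xp ** 2 / sigma_x ** 2 + yp ** 2 / sigma_y ** 2)) / (2.0 * np.pi * sigma_x * sigma_y)
    return g * np.cos(2.0 * np.pi * omega_f * xp)

def orientation_features(image):
    """Orientation features obtained by convolving the image with the 0, 45, 90 and 135 degree templates."""
    img = image.astype(np.float64)
    return {t: convolve(img, gabor_kernel(theta_deg=t)) for t in (0, 45, 90, 135)}
```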
Step 2.2.2: build the orientation feature pyramids. Similarly to Step 2.1.1, for each of the four thermal infrared orientation features at 0°, 45°, 90° and 135°, build the corresponding orientation feature pyramid from scale 0 (the original image) to scale 8;
Step 2.2.3: compute the orientation feature maps with the center-surround difference operation. Similarly to Step 2.1.2, in each orientation feature pyramid select the center-scale image I(c) and the surround-scale image I(s) and perform the center-surround difference, obtaining four groups of sub-orientation feature maps for 0°, 45°, 90° and 135°.
Step 2.2.4: linearly fuse the four groups of sub-orientation feature maps to obtain the final orientation feature map.
Step 2.3: fuse the gray-level feature map and the orientation feature map to construct the local saliency map, which further comprises:
Step 2.3.1: normalize the gray-level feature map and the orientation feature map separately. Specifically, find the maximum pixel value max and the minimum pixel value min of the feature map and reset each pixel to the new value (x − min)/(max − min), where x is the original pixel value;
Step 2.3.2: compute the local saliency map by adding the normalized gray-level feature map and orientation feature map pixel by pixel, resampling the result to the size of the original thermal infrared image, and normalizing it to obtain the local saliency map.
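A sketch of the normalization and local-saliency fusion (Steps 2.3.1-2.3.2); bilinear resampling is assumed, and the guard against a constant feature map is an added safety check not mentioned in the patent:

```python
import numpy as np
from scipy.ndimage import zoom

def min_max_normalize(feature_map):
    """Min-max normalization: (x - min) / (max - min) (Step 2.3.1)."""
    lo, hi = float(feature_map.min()), float(feature_map.max())
    if hi <= lo:                                       # constant map: return zeros (added safety check)
        return np.zeros_like(feature_map, dtype=np.float64)
    return (feature_map - lo) / (hi - lo)

def local_saliency(gray_feature, orientation_feature, target_shape):
    """Local saliency map: normalized gray + orientation features, resampled to the original image size (Step 2.3.2)."""
    combined = min_max_normalize(gray_feature) + min_max_normalize(orientation_feature)
    fy = target_shape[0] / combined.shape[0]
    fx = target_shape[1] / combined.shape[1]
    resampled = zoom(combined, (fy, fx), order=1)      # bilinear resampling (assumed)
    return min_max_normalize(resampled)
```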
Step 3: construct the fused saliency map of the thermal infrared image; this step further comprises:
Step 3.1: normalize the global saliency map obtained in Step 1;
Step 3.2: construct the fused saliency map S as S = SG ∘ SL, where ∘ may be any of [+, *, max]; that is, the normalized global saliency map SG and the normalized local saliency map SL are combined by addition, multiplication or taking the element-wise maximum, and the resulting fused saliency map is normalized. This embodiment takes the addition operation as an example: SG and SL are added pixel by pixel and the result is normalized to obtain the fused saliency map.
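A sketch of the fusion step (Step 3.2), reusing the min_max_normalize helper sketched above; the embodiment's example corresponds to mode="+":

```python
import numpy as np

def fuse_saliency(s_global, s_local, mode="+"):
    """Fused saliency map S = S_G o S_L, where o is +, * or max (Step 3.2)."""
    s_g, s_l = min_max_normalize(s_global), min_max_normalize(s_local)
    if mode == "+":
        fused = s_g + s_l
    elif mode == "*":
        fused = s_g * s_l
    else:
        fused = np.maximum(s_g, s_l)                   # element-wise maximum
    return min_max_normalize(fused)
```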
Step 4: thermal infrared image target detection: threshold segmentation is applied to the fused saliency map to obtain a binary result map marking the target positions; this step further comprises:
Step 4.1: compute the segmentation threshold T = m + kσ, where m is the mean of the fused saliency map, σ is the variance of the fused saliency map, and k is an empirical threshold, generally chosen between 5 and 10;
Step 4.2: create a binary result map of the same size as the original thermal infrared image, compare the gray value of each pixel of the fused saliency map at coordinates (x,y) with the threshold T one by one, and if it is greater than T, set the corresponding pixel at (x,y) in the binary result map to 1.
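A sketch of the threshold segmentation (Steps 4.1-4.2); k = 7 is simply one value inside the 5-10 range given above, and sigma is taken as the variance because that is how the embodiment defines it:

```python
import numpy as np

def detect_targets(fused_saliency, k=7.0):
    """Binary result map from threshold T = m + k * sigma (Steps 4.1-4.2)."""
    m = fused_saliency.mean()                               # mean of the fused saliency map
    sigma = fused_saliency.var()                            # variance of the fused saliency map (as defined in Step 4.1)
    threshold = m + k * sigma
    return (fused_saliency > threshold).astype(np.uint8)    # 1 marks pixels labeled as target
```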
The specific embodiments described herein merely illustrate the spirit of the present invention. Those skilled in the art to which the present invention belongs may make various modifications or additions to the described embodiments, or replace them in similar ways, without departing from the spirit of the present invention or exceeding the scope defined by the appended claims.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710527099.4A CN107301420A (en) | 2017-06-30 | 2017-06-30 | A kind of thermal infrared imagery object detection method based on significance analysis |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710527099.4A CN107301420A (en) | 2017-06-30 | 2017-06-30 | A kind of thermal infrared imagery object detection method based on significance analysis |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107301420A true CN107301420A (en) | 2017-10-27 |
Family
ID=60135213
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710527099.4A Pending CN107301420A (en) | 2017-06-30 | 2017-06-30 | A kind of thermal infrared imagery object detection method based on significance analysis |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107301420A (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108648184A (en) * | 2018-05-10 | 2018-10-12 | 电子科技大学 | A kind of detection method of remote sensing images high-altitude cirrus |
CN109214439A (en) * | 2018-08-22 | 2019-01-15 | 电子科技大学 | A kind of infrared image icing River detection method based on multi-feature fusion |
CN109993744A (en) * | 2019-04-09 | 2019-07-09 | 大连海事大学 | Infrared target detection method under offshore backlight environment |
CN110008969A (en) * | 2019-04-15 | 2019-07-12 | 京东方科技集团股份有限公司 | The detection method and device in saliency region |
CN110738166A (en) * | 2019-10-14 | 2020-01-31 | 西南大学 | Infrared target recognition method and storage medium of fishery administration monitoring system based on PCNN and PCANet |
CN110765948A (en) * | 2019-10-24 | 2020-02-07 | 长沙品先信息技术有限公司 | Target detection and identification method and system based on unmanned aerial vehicle |
CN110796677A (en) * | 2019-10-29 | 2020-02-14 | 北京环境特性研究所 | Cirrus cloud false alarm source detection method based on multiband characteristics |
CN111126287A (en) * | 2019-12-25 | 2020-05-08 | 武汉大学 | A deep learning detection method for dense targets in remote sensing images |
CN112329674A (en) * | 2020-11-12 | 2021-02-05 | 北京环境特性研究所 | Frozen lake detection method and device based on multi-texture feature fusion |
CN112734745A (en) * | 2021-01-20 | 2021-04-30 | 武汉大学 | Unmanned aerial vehicle thermal infrared image heating pipeline leakage detection method fusing GIS data |
CN114663682A (en) * | 2022-03-18 | 2022-06-24 | 北京理工大学 | Target significance detection method for improving anti-interference performance |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160358035A1 (en) * | 2015-06-04 | 2016-12-08 | Omron Corporation | Saliency information acquisition device and saliency information acquisition method |
- 2017-06-30: Application CN201710527099.4A filed in China; published as CN107301420A; status Pending
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160358035A1 (en) * | 2015-06-04 | 2016-12-08 | Omron Corporation | Saliency information acquisition device and saliency information acquisition method |
Non-Patent Citations (5)
Title |
---|
LIBAO ZHANG et al.: "Global and Local Saliency Analysis for the Extraction of Residential Areas in High-Spatial-Resolution Remote Sensing Image", IEEE Transactions on Geoscience and Remote Sensing *
YAO XU et al.: "Thermal Anomaly Detection Based on Saliency Computation for District Heating System", IGARSS *
YAO Minghai et al.: "Fabric defect detection based on global and local saliency", Journal of Zhejiang University of Technology *
ZHANG Xiaolu et al.: "Adaptive multi-mode infrared small target detection based on wavelet transform", Laser & Infrared *
CHEN Yuanyuan: "Salient region extraction from images and its application in image retrieval", China Master's Theses Full-text Database (Master's), Information Science and Technology *
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108648184A (en) * | 2018-05-10 | 2018-10-12 | 电子科技大学 | A kind of detection method of remote sensing images high-altitude cirrus |
CN109214439A (en) * | 2018-08-22 | 2019-01-15 | 电子科技大学 | A kind of infrared image icing River detection method based on multi-feature fusion |
CN109214439B (en) * | 2018-08-22 | 2021-12-03 | 电子科技大学 | Infrared image frozen river detection method based on multi-feature fusion |
CN109993744A (en) * | 2019-04-09 | 2019-07-09 | 大连海事大学 | Infrared target detection method under offshore backlight environment |
CN109993744B (en) * | 2019-04-09 | 2022-09-09 | 大连海事大学 | An infrared target detection method in a marine backlight environment |
CN110008969A (en) * | 2019-04-15 | 2019-07-12 | 京东方科技集团股份有限公司 | The detection method and device in saliency region |
CN110008969B (en) * | 2019-04-15 | 2021-05-14 | 京东方科技集团股份有限公司 | Method and device for detecting image saliency region |
CN110738166A (en) * | 2019-10-14 | 2020-01-31 | 西南大学 | Infrared target recognition method and storage medium of fishery administration monitoring system based on PCNN and PCANet |
CN110738166B (en) * | 2019-10-14 | 2023-04-18 | 西南大学 | Fishing administration monitoring system infrared target identification method based on PCNN and PCANet and storage medium |
CN110765948A (en) * | 2019-10-24 | 2020-02-07 | 长沙品先信息技术有限公司 | Target detection and identification method and system based on unmanned aerial vehicle |
CN110796677A (en) * | 2019-10-29 | 2020-02-14 | 北京环境特性研究所 | Cirrus cloud false alarm source detection method based on multiband characteristics |
CN110796677B (en) * | 2019-10-29 | 2022-10-21 | 北京环境特性研究所 | Cirrus cloud false alarm source detection method based on multiband characteristics |
CN111126287B (en) * | 2019-12-25 | 2022-06-03 | 武汉大学 | A deep learning detection method for dense targets in remote sensing images |
CN111126287A (en) * | 2019-12-25 | 2020-05-08 | 武汉大学 | A deep learning detection method for dense targets in remote sensing images |
CN112329674A (en) * | 2020-11-12 | 2021-02-05 | 北京环境特性研究所 | Frozen lake detection method and device based on multi-texture feature fusion |
CN112329674B (en) * | 2020-11-12 | 2024-03-12 | 北京环境特性研究所 | Icing lake detection method and device based on multi-texture feature fusion |
CN112734745A (en) * | 2021-01-20 | 2021-04-30 | 武汉大学 | Unmanned aerial vehicle thermal infrared image heating pipeline leakage detection method fusing GIS data |
CN114663682A (en) * | 2022-03-18 | 2022-06-24 | 北京理工大学 | Target significance detection method for improving anti-interference performance |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107301420A (en) | A kind of thermal infrared imagery object detection method based on significance analysis | |
Zhang et al. | Object-oriented shadow detection and removal from urban high-resolution remote sensing images | |
CN111079556A (en) | Multi-temporal unmanned aerial vehicle video image change area detection and classification method | |
CN103077531B (en) | Based on the gray scale Automatic Target Tracking method of marginal information | |
CN109460764B (en) | Satellite video ship monitoring method combining brightness characteristics and improved interframe difference method | |
CN102654902A (en) | Contour vector feature-based embedded real-time image matching method | |
CN110033431A (en) | Non-contact detection device and detection method for detecting corrosion area on surface of steel bridge | |
CN106530281A (en) | Edge feature-based unmanned aerial vehicle image blur judgment method and system | |
CN111353371A (en) | Shoreline extraction method based on spaceborne SAR images | |
Yuan et al. | Combining maps and street level images for building height and facade estimation | |
CN110245566A (en) | A long-distance tracking method for infrared targets based on background features | |
CN105405138A (en) | Water surface target tracking method based on saliency detection | |
CN116704369A (en) | An object-oriented flood extraction method and system for fusion of optical and SAR remote sensing images | |
CN111583315A (en) | Novel visible light image and infrared image registration method and device | |
CN109063669B (en) | Bridge area ship navigation situation analysis method and device based on image recognition | |
Yang et al. | Fast and accurate vanishing point detection in complex scenes | |
CN114758139B (en) | Method for detecting accumulated water in foundation pit | |
CN114565653B (en) | A Matching Method for Heterogeneous Remote Sensing Images with Rotation Variation and Scale Difference | |
CN111783580B (en) | Pedestrian identification method based on human leg detection | |
CN114140698A (en) | Water system information extraction algorithm based on FasterR-CNN | |
CN114429593A (en) | Infrared small target detection method based on rapid guided filtering and application thereof | |
Majidi et al. | Aerial tracking of elongated objects in rural environments | |
CN112329796A (en) | Infrared imaging cirrus cloud detection method and device based on visual saliency | |
Du et al. | Shadow detection in high-resolution remote sensing image based on improved K-means | |
CN118968363B (en) | A remote sensing image classification and extraction method and system based on remote sensing index |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20171027 |