CN103886344A - Image type fire flame identification method - Google Patents
- Publication number: CN103886344A (application CN201410148888.3A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Landscapes
- Fire-Detection Mechanisms (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an image-based fire flame recognition method comprising the following steps: 1. image acquisition; 2. image processing: 201, image preprocessing; 202, fire recognition using a pre-established binary classification model, namely a support vector machine model that classifies the two categories "flame" and "no flame". The binary classification model is established as follows: Ⅰ, image information collection; Ⅱ, feature extraction; Ⅲ, training sample acquisition; Ⅳ, binary classification model establishment: Ⅳ-1, kernel function selection; Ⅳ-2, classification function determination, in which the conjugate gradient method optimizes the parameters C and D, and the optimized C and D are then converted into γ and σ²; Ⅴ, binary classification model training. The method has simple steps, is convenient to implement, easy to operate, highly reliable and effective in use, and can effectively solve the problems of existing video fire detection systems in complex environments: low reliability, high false-alarm and missed-alarm rates, and poor performance.
Description
Technical Field
The invention belongs to the technical field of image acquisition and processing, and in particular relates to an image-based fire flame recognition method.
Background Art
Fire is one of the major disasters in mines, seriously threatening human health, the natural environment and the safe production of coal mines. With the advance of science and technology, automatic fire detection has gradually become an important means of monitoring and early fire warning. At present, fire prediction and detection in underground coal mines mainly rely on monitoring the temperature effect of fire, the combustion products (smoke and gas effects) and the electromagnetic radiation effect. These existing methods, however, still need improvement in sensitivity and reliability and cannot respond to early-stage fires, so they no longer meet increasingly strict fire safety requirements. In particular, when obstructions exist in a large space, the spread of combustion products is affected by the height and area of the space; ordinary point-type smoke-sensing and temperature-sensing fire alarm systems cannot quickly capture the smoke and temperature changes produced by a fire and respond only after the fire has developed to a certain extent, making early detection difficult. The rapid development of video processing and pattern recognition technology is driving fire detection and early warning toward image-based, digital, large-scale and intelligent solutions. Fire detection based on video surveillance offers a wide detection range, short response time, low cost and insensitivity to the environment; combined with computer intelligence, it can provide more intuitive and richer information, which is of great significance to the safe production of coal mines.
At present, video fire detection technology is still in its infancy at home and abroad, and products differ in detection method, working principle, system structure and application scenario. Typical systems include the SigniFire™ system developed by axonx LLC (USA), the Alarm Eye VISFD distributed intelligent image fire detection system developed by DHF Intellvision (USA), the dual-band infrared/visible camera monitoring of Bosque (USA), and the VSD-8 power-station fire monitoring system jointly developed by ISL (Switzerland) and Magnox Electric. Domestically, the State Key Laboratory of Fire Science at the University of Science and Technology of China leads research on fire detection and automatic fire extinguishing, and Tianjin University, Xi'an Jiaotong University, Shenyang University of Technology and Shanghai Jiaotong University have also carried out active research; however, these image fire detection systems are mostly used for power stations, buildings and warehouses, and are still rarely applied in underground coal mines. In recent years, many researchers at home and abroad have studied in depth the key technology of such image-based fire detection systems, the flame image analysis algorithm, and made major contributions, mainly in the following aspects: ① video flame detection based on static flame features such as spectral characteristics (pixel brightness, chromaticity) and regional structure (shape, contour); ② video flame detection based on flame-colored moving regions; ③ video detection based on the flicker and time-frequency characteristics of flames. When applied in existing video fire detection systems, however, all of these algorithms show limitations to varying degrees: they cannot effectively remove interference in complex scenes, and false alarms and missed alarms are serious. Therefore, there is still a lack of an image-based fire flame recognition method with simple steps, convenient implementation, easy operation, high reliability and good performance that can effectively solve these problems of existing video fire detection systems in complex environments.
Summary of the Invention
The technical problem to be solved by the present invention is to overcome the above deficiencies in the prior art by providing an image-based fire flame recognition method whose steps are simple, which is convenient to implement, easy to operate, highly reliable and effective in use, and which can solve the problems of existing video fire detection systems in complex environments: low reliability, high false-alarm and missed-alarm rates, and poor performance.
To solve the above technical problems, the technical solution adopted by the present invention is an image-based fire flame recognition method, characterized in that it comprises the following steps:
Step 1, image acquisition: an image acquisition unit connected to a processor collects digital images of the area to be detected at a preset sampling frequency fs, and synchronously transmits the digital image collected at each sampling moment to the processor.
Step 2, image processing: the processor performs image processing on the digital images collected at each sampling moment in step 1 in chronological order, using the same processing method for every sampling moment; processing the digital image collected at any sampling moment comprises the following steps:
Step 201, image preprocessing, as follows:
Step 2011, image reception and synchronous storage: the processor synchronously stores the received digital image collected at the current sampling moment in a data memory connected to the processor.
Step 2012, image enhancement: the processor enhances the digital image collected at the current sampling moment to obtain an enhanced digital image.
Step 2013, image segmentation: the processor segments the digital image enhanced in step 2012 to obtain a target image.
Step 202, fire recognition: the target image from step 2013 is processed with a pre-established binary classification model to obtain the fire state category of the area to be detected at the current sampling moment; the fire state categories are "flame" and "no flame", and the binary classification model is a support vector machine model that classifies these two categories.
The binary classification model is established as follows:
Step Ⅰ, image information collection: using the image acquisition unit, collect multiple frames of digital image I of the area to be detected when a fire is present, and multiple frames of digital image II of the same area when no fire is present.
Step Ⅱ, feature extraction: perform feature extraction on the frames of digital image I and digital image II, extracting from each frame a group of characteristic parameters that represent and distinguish that image; each group contains M feature quantities, the M feature quantities are numbered, and together they form a feature vector, where M ≥ 2.
Step Ⅲ, training sample acquisition: from the feature vectors obtained in step Ⅱ, select the feature vectors of m1 frames of digital image I and m2 frames of digital image II to form a training sample set, where m1 and m2 are positive integers, m1 = 40 to 100 and m2 = 40 to 100; the number of training samples in the set is m1 + m2.
Step Ⅳ, binary classification model establishment, as follows:
Step Ⅳ-1, kernel function selection: a radial basis function is selected as the kernel function of the binary classification model.
Step Ⅳ-2, classification function determination: once the penalty factor γ and the kernel parameter σ² of the radial basis function selected in step Ⅳ-1 are determined, the classification function of the binary classification model is obtained and the establishment of the model is complete; here γ = C⁻², σ = D⁻¹, 0.01 < C ≤ 10, 0.01 < D ≤ 50.
When determining the penalty factor γ and the kernel parameter σ², the conjugate gradient method is first used to optimize the parameters C and D, and the optimized C and D are then converted into the penalty factor γ and the kernel parameter σ² according to γ = C⁻² and σ = D⁻¹.
Step Ⅴ, binary classification model training: the m1 + m2 training samples of the training sample set of step Ⅲ are input into the binary classification model established in step Ⅳ for training.
The above image-based fire flame recognition method is characterized in that: the total number of training samples in the training sample set of step Ⅲ is N, with N = m1 + m2; before the binary classification model is established in step Ⅳ, the N training samples are numbered, the p-th training sample having number p, where p is a positive integer and p = 1, 2, …, N; the p-th training sample is written (x_p, y_p), where x_p is its feature parameter and y_p is its category number, y_p = 1 or -1, with category 1 meaning flame and category -1 meaning no flame.
When the conjugate gradient method is used to optimize the parameters C and D in step Ⅳ-2, the m1 + m2 training samples of the training sample set of step Ⅲ are used, and the optimization proceeds as follows:
Step ⅰ, objective function determination: the objective function is the sum of squared leave-one-out prediction errors of the binary classification model over the m1 + m2 training samples, regarded as a function of C and D.
Step ⅱ, initial parameter setting: determine initial values C_1 and D_1 for the parameters C and D, and set the recognition error threshold ε, with ε > 0.
Step ⅲ, computation of the gradient g_k of the current iteration: compute the gradient g_k of the objective function of step ⅰ with respect to C_k and D_k, where k is the iteration number, k = 1, 2, …; if ||g_k|| ≤ ε, stop the computation, and C_k and D_k are the optimized parameters C and D; otherwise, go to step ⅳ.
Step ⅳ, computation of the search direction d_k of the current iteration: for k = 1, d_1 = -g_1; for k > 1, d_k = -g_k + β_{k-1}·d_{k-1}, where the Fletcher–Reeves coefficient β_{k-1} = ||g_k||² / ||g_{k-1}||².
Step ⅴ, determination of the search step λ_k of the current iteration: search along the direction d_k determined in step ⅳ, and find the step λ_k that minimizes the objective function along d_k.
Step ⅵ, parameter update: set (C_{k+1}, D_{k+1}) = (C_k, D_k) + λ_k·d_k.
Step ⅶ, set k = k + 1, then return to step ⅲ for the next iteration.
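The loop of steps ⅲ to ⅶ is a standard conjugate gradient iteration. A minimal sketch in Python, using the Fletcher–Reeves direction update and a golden-section line search; the toy quadratic objective `f`, its gradient and the search interval are illustrative assumptions standing in for the patent's leave-one-out error surface over (C, D):

```python
import math

def line_search(f, x, d, lo=0.0, hi=2.0, iters=60):
    """Golden-section search for the step length along d (step v, illustrative)."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    fx = lambda lam: f([xi + lam * di for xi, di in zip(x, d)])
    for _ in range(iters):
        c1 = b - phi * (b - a)
        c2 = a + phi * (b - a)
        if fx(c1) < fx(c2):
            b = c2
        else:
            a = c1
    return (a + b) / 2.0

def conjugate_gradient(f, grad, x0, eps=1e-8, max_iter=200):
    """Fletcher-Reeves conjugate gradient loop of steps iii-vii."""
    x = list(x0)
    g = grad(x)
    d = [-gi for gi in g]                                 # d_1 = -g_1
    for _ in range(max_iter):
        if math.sqrt(sum(gi * gi for gi in g)) <= eps:
            break                                         # ||g_k|| <= eps: stop (step iii)
        lam = line_search(f, x, d)                        # step v
        x = [xi + lam * di for xi, di in zip(x, d)]       # step vi
        g_new = grad(x)
        beta = sum(a * a for a in g_new) / max(sum(a * a for a in g), 1e-30)
        d = [-gn + beta * di for gn, di in zip(g_new, d)] # step iv
        g = g_new
    return x

# Toy objective standing in for the leave-one-out error surface over (C, D)
f = lambda p: (p[0] - 3.0) ** 2 + 2.0 * (p[1] - 1.0) ** 2
grad = lambda p: [2.0 * (p[0] - 3.0), 4.0 * (p[1] - 1.0)]
C_opt, D_opt = conjugate_gradient(f, grad, [0.0, 0.0])
```

On a quadratic with a near-exact line search the iteration converges in a handful of steps; with the patent's leave-one-out objective, `f` and `grad` would be replaced by the model's error and its derivatives with respect to C and D.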
The radial basis function selected in step Ⅳ-1 is K(x_s, x_t) = exp(-||x_s - x_t||² / (2σ²)), and its regression function is f(x) = Σ_t α_t·K(x, x_t) + b, where α_t and b are regression parameters, s is a positive integer with s = 1, 2, …, N, and t is a positive integer with t = 1, 2, …, N.
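With the RBF kernel and regression function above, training a least-squares SVM (the LS-SVM form named later in the advantages) amounts to solving a single linear system in the dual variables α_t and the bias b. A minimal sketch under the common LS-SVM formulation; the tiny two-feature "flame"/"no flame" data set and the chosen γ and σ² values are illustrative assumptions:

```python
import math

def rbf(xs, xt, sigma2):
    """K(x_s, x_t) = exp(-||x_s - x_t||^2 / (2 * sigma^2))."""
    d2 = sum((a - b) ** 2 for a, b in zip(xs, xt))
    return math.exp(-d2 / (2.0 * sigma2))

def _solve(A, rhs):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lssvm_train(X, y, gamma, sigma2):
    """Solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y] (common LS-SVM dual)."""
    n = len(X)
    A = [[0.0] * (n + 1) for _ in range(n + 1)]
    rhs = [0.0] * (n + 1)
    for t in range(n):
        A[0][t + 1] = 1.0
        A[t + 1][0] = 1.0
        rhs[t + 1] = float(y[t])
        for s in range(n):
            A[t + 1][s + 1] = rbf(X[t], X[s], sigma2) + (1.0 / gamma if s == t else 0.0)
    sol = _solve(A, rhs)
    return sol[0], sol[1:]                       # b, alpha

def lssvm_predict(X, alpha, b, sigma2, x):
    """f(x) = sum_t alpha_t * K(x, x_t) + b, classified by sign (+1 flame / -1 no flame)."""
    val = b + sum(a * rbf(x, xt, sigma2) for a, xt in zip(alpha, X))
    return 1 if val >= 0 else -1

# Toy "flame" (+1) vs "no flame" (-1) feature vectors (illustrative values)
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y = [1, 1, -1, -1]
b, alpha = lssvm_train(X, y, gamma=10.0, sigma2=0.5)
```

In the patent's setting the rows of `X` would be the M-dimensional feature vectors of the m1 + m2 training samples, with γ and σ² supplied by the conjugate gradient optimization.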
The above image-based fire flame recognition method is characterized in that: in step Ⅱ, M = 6, and the six feature quantities are area, similarity, moment characteristics, compactness, texture features and flicker characteristics.
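Two of the six feature quantities have especially simple pixel-level definitions on a segmented binary mask. A sketch, assuming area = number of foreground pixels and compactness = 4πA/P² (a common definition; the patent does not spell out its formulas in this passage):

```python
import math

def area(mask):
    """Flame area: number of foreground (1) pixels in the segmented mask."""
    return sum(sum(row) for row in mask)

def perimeter(mask):
    """Count foreground pixels with at least one 4-neighbour outside the region."""
    h, w = len(mask), len(mask[0])
    p = 0
    for i in range(h):
        for j in range(w):
            if not mask[i][j]:
                continue
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if ni < 0 or ni >= h or nj < 0 or nj >= w or not mask[ni][nj]:
                    p += 1
                    break
    return p

def compactness(mask):
    """Assumed definition: 4*pi*A / P^2 (1.0 for a perfect disc)."""
    a, p = area(mask), perimeter(mask)
    return 4 * math.pi * a / (p * p) if p else 0.0

# A 4x4 foreground square inside a 6x6 mask
square = [[1 if 1 <= i <= 4 and 1 <= j <= 4 else 0 for j in range(6)] for i in range(6)]
```

Flame regions are ragged, so their compactness differs markedly from that of regular shapes, which is what makes the quantity discriminative.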
The above image-based fire flame recognition method is characterized in that: in step ⅱ, C_1 and D_1 are determined either by grid search or by random sampling. With random sampling, C_1 is a value drawn at random from (0.01, 1] and D_1 a value drawn at random from (0.01, 50]. With grid search, a grid with step size 10⁻³ is first laid out; a three-dimensional grid map is then drawn with C and D as independent variables and the objective function of step ⅰ as the dependent variable; several parameter pairs (C, D) are found by grid search, and their averages are finally taken as C_1 and D_1.
The above image-based fire flame recognition method is characterized in that: the image enhancement in step 2012 uses an image enhancement method based on fuzzy logic.
The above image-based fire flame recognition method is characterized in that: the fuzzy-logic image enhancement proceeds as follows:
Step 20121, transformation from the image domain to the fuzzy domain: according to the membership function, map the gray value of each pixel of the image to be enhanced to its membership degree μ_gh in the fuzzy domain.
Step 20122, fuzzy enhancement in the fuzzy domain using the fuzzy enhancement operator: the operator used is μ′_gh = I_r(μ_gh) = I(I_{r-1}(μ_gh)), where r is the iteration count, a positive integer, r = 1, 2, …, and I(μ) = 2μ² for 0 ≤ μ ≤ 0.5, I(μ) = 1 - 2(1 - μ)² for 0.5 < μ ≤ 1.
Step 20123, inverse transformation from the fuzzy domain back to the image domain: according to formula (6), apply the inverse of the membership transformation of step 20121 to the μ′_gh obtained by fuzzy enhancement, obtain the gray value of each pixel of the enhanced image, and thereby obtain the enhanced digital image.
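Steps 20121 to 20123 can be sketched end to end. Because the patent's own membership function and formula (6) are not reproduced in this text, the sketch substitutes a linear membership function and the classical Pal–King contrast operator, both of which are assumptions:

```python
def fuzzy_enhance(img, r=2, x_max=255):
    """Gray -> fuzzy domain, apply the operator r times, map back (steps 20121-20123)."""
    def to_fuzzy(x):                 # linear membership (stand-in for the patent's function)
        return x / x_max
    def operator(mu):                # classical Pal-King contrast operator (assumed)
        return 2 * mu * mu if mu <= 0.5 else 1 - 2 * (1 - mu) ** 2
    def from_fuzzy(mu):              # inverse of the linear membership (stand-in for (6))
        return round(mu * x_max)
    out = []
    for row in img:
        new_row = []
        for x in row:
            mu = to_fuzzy(x)
            for _ in range(r):       # mu'_gh = I_r(mu_gh)
                mu = operator(mu)
            new_row.append(from_fuzzy(mu))
        out.append(new_row)
    return out

enhanced = fuzzy_enhance([[32, 64, 128, 192, 224]], r=2)
```

The operator pushes membership values below 0.5 toward 0 and values above 0.5 toward 1, which is the contrast-stretching effect the enhancement step relies on.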
The above image-based fire flame recognition method is characterized in that: before the transformation from the image domain to the fuzzy domain in step 20121, the gray threshold X_T is first selected by the maximum between-class variance method. Before X_T is selected, all gray values whose pixel count is 0 are found in the gray range of the image to be enhanced, and the processor marks all of them as calculation-free gray values. When the maximum between-class variance method selects X_T, the between-class variance is computed only with the gray values of the range other than the calculation-free gray values taken as thresholds; the maximum between-class variance is then found among the computed values, and the gray value corresponding to it is the gray threshold X_T.
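The accelerated threshold selection above can be sketched as a single histogram scan in which zero-count ("calculation-free") gray values are skipped; the bimodal example histogram is illustrative:

```python
def otsu_threshold(hist):
    """Maximum between-class variance; gray levels with hist[g] == 0 are skipped."""
    total = sum(hist)
    total_sum = sum(g * c for g, c in enumerate(hist))
    best_g, best_var = 0, -1.0
    w0 = s0 = 0
    for g, c in enumerate(hist):
        if c == 0:
            continue                 # "calculation-free" gray value: no variance computed
        w0 += c                      # pixels in class <= g
        s0 += g * c
        w1 = total - w0
        if w1 == 0:
            break                    # all remaining thresholds leave one class empty
        m0, m1 = s0 / w0, (total_sum - s0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_g = var, g
    return best_g

# Illustrative bimodal histogram: dark peak around 50-55, bright peak at 200
hist = [0] * 256
hist[50], hist[55], hist[200] = 100, 80, 100
t = otsu_threshold(hist)
```

Skipping empty bins leaves the result unchanged (empty bins cannot move the class means) while cutting the work to the number of occupied gray levels, which is the speed-up the patent claims.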
The above image-based fire flame recognition method is characterized in that: in step 1, the size of the digital image collected at each sampling moment is M1 × N1 pixels.
The image segmentation of step 2013 proceeds as follows:
Step 20131, two-dimensional histogram establishment: the processor establishes, for the image to be segmented, a two-dimensional histogram of pixel gray value versus neighbourhood average gray value. Any point of this histogram is written (i, j), where the abscissa i is the gray value of a pixel (m, n) of the image to be segmented and the ordinate j is the neighbourhood average gray value of that pixel. The frequency count of a point (i, j) is written C(i, j), and its frequency is written h(i, j), with h(i, j) = C(i, j) / (M1 × N1).
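Step 20131 can be sketched as follows; a 3×3 neighbourhood is assumed for the average gray value (the patent does not fix the neighbourhood size in this passage), and averages are rounded to integer gray levels:

```python
def histogram_2d(img):
    """h(i, j): frequency of (pixel gray value, 3x3 neighbourhood mean gray value)."""
    h, w = len(img), len(img[0])
    counts = {}
    for m in range(h):
        for n in range(w):
            # 3x3 neighbourhood clipped at the image border (assumed size)
            vals = [img[mm][nn]
                    for mm in range(max(0, m - 1), min(h, m + 2))
                    for nn in range(max(0, n - 1), min(w, n + 2))]
            j = round(sum(vals) / len(vals))
            key = (img[m][n], j)
            counts[key] = counts.get(key, 0) + 1
    total = h * w                              # M1 * N1 pixels
    return {k: c / total for k, c in counts.items()}   # h(i,j) = C(i,j) / (M1*N1)

hist2d = histogram_2d([[7, 7], [7, 7]])
```

Pairing each gray value with its neighbourhood mean is what lets the subsequent fuzzy-partition entropy separate genuine edges from isolated noise pixels.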
Step 20132, fuzzy parameter combination optimization: the processor calls a fuzzy parameter combination optimization module and uses the particle swarm optimization algorithm to optimize the fuzzy parameter combination used by the image segmentation method based on the maximum entropy of two-dimensional fuzzy partition, obtaining the optimized fuzzy parameter combination.
In this step, before the fuzzy parameter combination is optimized, the functional expression of the two-dimensional fuzzy entropy used in segmenting the image is first calculated from the two-dimensional histogram established in step 20131, and this expression is taken as the fitness function when the particle swarm optimization algorithm optimizes the fuzzy parameter combination.
Step 20133, image segmentation: using the fuzzy parameter combination optimized in step 20132, the processor classifies each pixel of the image to be segmented according to the image segmentation method based on the maximum entropy of two-dimensional fuzzy partition, completes the segmentation and obtains the segmented target image.
The above image-based fire flame recognition method is characterized in that: the image to be segmented in step 20131 consists of a target image O and a background image P, where the membership function of the target image O is μ_o(i, j) = μ_ox(i; a, b)·μ_oy(j; c, d) (1), and the membership function of the background image P is μ_b(i, j) = μ_bx(i; a, b)·μ_oy(j; c, d) + μ_ox(i; a, b)·μ_by(j; c, d) + μ_bx(i; a, b)·μ_by(j; c, d) (2). In formulas (1) and (2), μ_ox(i; a, b) and μ_oy(j; c, d) are one-dimensional membership functions of the target image O and both are S-functions; μ_bx(i; a, b) and μ_by(j; c, d) are one-dimensional membership functions of the background image P and both are S-functions, with μ_bx(i; a, b) = 1 - μ_ox(i; a, b) and μ_by(j; c, d) = 1 - μ_oy(j; c, d), where a, b, c and d are parameters controlling the shapes of the one-dimensional membership functions of the target image O and the background image P.
When the functional expression of the two-dimensional fuzzy entropy is calculated in step 20132, the minimum g_min and maximum g_max of the pixel gray values of the image to be segmented and the minimum s_min and maximum s_max of the neighbourhood average gray values are first determined from the two-dimensional histogram established in step 20131.
The functional expression of the two-dimensional fuzzy entropy calculated in step 20132 is H(a, b, c, d) = H_O + H_P, where H_O = -Σ_i Σ_j [h(i, j)·μ_o(i, j)/P_O]·ln[h(i, j)·μ_o(i, j)/P_O] and H_P = -Σ_i Σ_j [h(i, j)·μ_b(i, j)/P_P]·ln[h(i, j)·μ_b(i, j)/P_P], with P_O = Σ_i Σ_j h(i, j)·μ_o(i, j), P_P = Σ_i Σ_j h(i, j)·μ_b(i, j), and the sums running over i = g_min, …, g_max and j = s_min, …, s_max.
When the particle swarm optimization algorithm optimizes the fuzzy parameter combination in step 20132, the optimized combination is (a, b, c, d).
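Formulas (1) and (2) factor the two-dimensional memberships into products of one-dimensional S-functions and guarantee μ_o + μ_b = 1 at every point (i, j). A sketch assuming the standard Zadeh S-function rising from 0 at its first parameter to 1 at its second (the exact S-function form is not reproduced in this text):

```python
def s_fn(x, a, b):
    """Zadeh S-function: 0 at x <= a, 1 at x >= b, quadratic ramps in between (assumed form)."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    mid = (a + b) / 2.0
    if x <= mid:
        return 2.0 * ((x - a) / (b - a)) ** 2
    return 1.0 - 2.0 * ((x - b) / (b - a)) ** 2

def mu_object(i, j, a, b, c, d):
    """Formula (1): mu_o(i,j) = mu_ox(i;a,b) * mu_oy(j;c,d)."""
    return s_fn(i, a, b) * s_fn(j, c, d)

def mu_background(i, j, a, b, c, d):
    """Formula (2): the three complement products, so mu_o + mu_b = 1 everywhere."""
    ox, oy = s_fn(i, a, b), s_fn(j, c, d)
    bx, by = 1.0 - ox, 1.0 - oy
    return bx * oy + ox * by + bx * by
```

Since μ_o + μ_b = (μ_ox + μ_bx)(μ_oy + μ_by) = 1, the pair forms a genuine two-dimensional fuzzy partition, which is what the maximum-entropy criterion requires.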
The above image-based fire flame recognition method is characterized in that: the parameter combination optimization of the maximum entropy of two-dimensional fuzzy partition in step 20132 comprises the following steps:
Step Ⅱ-1, particle swarm initialization: one value of the parameter combination is taken as one particle, and multiple particles form an initialized particle swarm; a particle is written (a_k, b_k, c_k, d_k), where k is a positive integer, k = 1, 2, 3, …, K, K is a positive integer equal to the number of particles in the swarm, a_k is a random value of parameter a, b_k a random value of parameter b, c_k a random value of parameter c and d_k a random value of parameter d, with a_k < b_k and c_k < d_k.
Step Ⅱ-2, fitness function determination: the functional expression of the two-dimensional fuzzy entropy calculated in step 20132 is taken as the fitness function.
Step Ⅱ-3, particle fitness evaluation: the fitness of every particle at the current moment is evaluated, using the same evaluation method for all particles. To evaluate the fitness of the k-th particle at the current moment, its fitness value is first calculated according to the fitness function determined in step Ⅱ-2 and written fitness_k, and the calculated fitness_k is compared with Pbest_k: if fitness_k > Pbest_k, then Pbest_k = fitness_k, and the individual best position is updated to the current position of the k-th particle; here Pbest_k is the maximum fitness value reached so far by the k-th particle, i.e. its individual extremum, and the updated position is its individual optimal position; t is the current iteration number and is a positive integer.
After the fitness values of all particles at the current moment have been calculated according to the fitness function determined in step Ⅱ-2, the fitness value of the particle with the largest fitness at the current moment is written fitness_kbest and compared with gbest: if fitness_kbest > gbest, then gbest = fitness_kbest, and the swarm best position is updated to the position of that particle; here gbest is the global extremum at the current moment, and the updated position is the swarm's optimal position at the current moment.
Step Ⅱ-4, judging whether the iteration termination condition is met: if it is met, the parameter combination optimization is complete; otherwise, the position and velocity of each particle at the next moment are updated according to the particle swarm optimization algorithm, and the procedure returns to step Ⅱ-3. The termination condition of step Ⅱ-4 is that the current iteration number t reaches the preset maximum iteration number Imax, or that Δg ≤ e, where Δg = |gbest - gmax|, gbest is the global extremum at the current moment, gmax is the preset target fitness value, and e is a positive, preset deviation value.
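Steps Ⅱ-1 to Ⅱ-4 describe a standard global-best particle swarm optimizer. A minimal sketch with the usual inertia/cognitive/social velocity update; the coefficient values w = 0.7, c1 = c2 = 1.5 and the toy fitness standing in for the two-dimensional fuzzy entropy H(a, b, c, d) are illustrative assumptions:

```python
import random

def pso_maximize(fitness, bounds, n_particles=20, max_iter=100, e=1e-6, g_target=None):
    """Global-best PSO (steps II-1..II-4); w, c1, c2 are common default coefficients."""
    random.seed(0)
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                        # individual optimal positions
    pbest_val = [fitness(p) for p in pos]              # individual extrema Pbest_k
    g_idx = max(range(n_particles), key=lambda k: pbest_val[k])
    gbest, gbest_val = pbest[g_idx][:], pbest_val[g_idx]
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(max_iter):
        if g_target is not None and abs(gbest_val - g_target) <= e:
            break                                      # step II-4: |gbest - gmax| <= e
        for k in range(n_particles):
            for d in range(dim):
                vel[k][d] = (w * vel[k][d]
                             + c1 * random.random() * (pbest[k][d] - pos[k][d])
                             + c2 * random.random() * (gbest[d] - pos[k][d]))
                pos[k][d] = min(max(pos[k][d] + vel[k][d], bounds[d][0]), bounds[d][1])
            val = fitness(pos[k])                      # step II-3 evaluation
            if val > pbest_val[k]:
                pbest_val[k], pbest[k] = val, pos[k][:]
                if val > gbest_val:
                    gbest_val, gbest = val, pos[k][:]
    return gbest, gbest_val

# Toy fitness standing in for the 2-D fuzzy entropy over particles (a, b, c, d)
best, best_val = pso_maximize(
    lambda p: -sum((x - 0.5) ** 2 for x in p),
    bounds=[(0.0, 1.0)] * 4)
```

In the patent's setting the lambda would be replaced by H(a, b, c, d) computed from the two-dimensional histogram, with bounds chosen so that a < b and c < d hold within the gray range.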
Compared with the prior art, the present invention has the following advantages:
1. The method has simple steps, a reasonable design, convenient implementation and low input cost.
2. The image enhancement method adopted has simple steps, a reasonable design and a good enhancement effect. Given the low illumination and all-weather artificial lighting underground in coal mines, which lead to poor image quality, an image enhancement preprocessing method based on fuzzy logic is proposed on the basis of analyzing and comparing traditional enhancement algorithms. The method adopts a new membership function that reduces the loss of pixel information in the low-gray regions of the image, overcomes the contrast reduction caused by fuzzy enhancement, and improves adaptability. At the same time, a fast maximum between-class variance method is used for threshold selection, so the fuzzy enhancement threshold is chosen adaptively and quickly, which raises the computation speed and improves real-time performance. Images from different environments can be enhanced effectively, the detail information and quality of the images are improved, and the computation is fast enough to meet real-time requirements.
3. The image segmentation method adopted has simple steps, a reasonable design and a good segmentation effect. Since the one-dimensional maximum entropy method segments images with a low signal-to-noise ratio and low illumination poorly, segmentation based on the maximum entropy of two-dimensional fuzzy partition is adopted; this method takes into account gray information, spatial neighbourhood information and the inherent fuzziness of the image, but suffers from slow computation. In the present invention, the particle swarm optimization algorithm optimizes the fuzzy parameter combination, so that the optimized combination is obtained simply, quickly and accurately, which greatly raises the segmentation efficiency. Moreover, the particle swarm optimization algorithm adopted is reasonably designed and easy to implement: it adaptively adjusts the size of the local search space according to the current state of the swarm and the iteration number, achieving a higher search success rate and higher-quality solutions without slowing convergence; the segmentation effect is good, robustness is strong, the computation speed is raised, and real-time requirements are met.
4. Because segmentation based on the maximum entropy of two-dimensional fuzzy partition can segment flame images quickly and accurately, it overcomes the misclassification of noise points by traditional single-threshold algorithms; at the same time, optimizing the fuzzy parameter combination with the particle swarm optimization algorithm solves the nonlinear integer programming problem, so the segmented target keeps its shape well while the influence of noise is overcome. The present invention therefore combines segmentation based on the maximum entropy of two-dimensional fuzzy partition with particle swarm optimization for fast segmentation of infrared images: the parameter combination (a, b, c, d) is taken as the particle, the two-dimensional fuzzy partition entropy serves as the fitness function that directs the particles' search of the solution space, and once the two-dimensional histogram of the image is obtained, the PSO algorithm searches for the optimal combination (a, b, c, d) that maximizes the fitness function; finally the pixels of the image are classified by the maximum membership principle, achieving the segmentation. The segmentation method of the present invention also works very well on noisy, low-contrast infrared images with small targets.
5. In actual feature extraction, area, similarity, moment characteristics, compactness, texture features, and flicker characteristics are selected as the feature basis for fire image recognition. This retains the features that contribute most to classification while discarding redundant ones, reducing the feature dimension and completing the optimized selection of features.
6. The modeling method adopted for the binary classification model is simple, reasonably designed, convenient to implement, and effective in use, and the hyperparameters of the kernel function are optimized with the conjugate gradient method. Fire point recognition is carried out separately on the basis of the suitability of artificial neural networks for incomplete and fuzzy information and the small-sample, nonlinear, and high-dimensional advantages of support vector machines, so that the criteria complement each other; this overcomes the tendency of the traditional single-criterion judgment of fire hazards to raise false alarms. The commonly used cross-validation approach to parameter optimization is quite time-consuming and does not guarantee that the selected parameters give the classifier the best classification performance, while the other existing hyperparameter selection algorithms cannot select the penalty factor and the kernel parameter simultaneously. For the small-sample LS-SVM classification problem, the present invention therefore takes minimizing the sum of squared leave-one-out prediction errors as the objective and uses a gradient descent method to select the two hyperparameters of the small-sample, nonlinear LS-SVM model, the kernel parameter and the penalty factor, simultaneously.
The binary classification model established by the present invention not only has a high recognition rate and high classification accuracy but also requires little time, so the fire recognition process can be completed simply and quickly; when the category of the currently acquired image is recognized as flame, a fire has occurred, an alarm prompt is issued in time, and corresponding measures are taken. Aiming at the small-sample and nonlinear problems of fire recognition in the complex and special environment of coal mines, and exploiting the advantages of support vector machines in high dimensions, the present invention proposes a fire image recognition method based on the least squares support vector machine; on the basis of the fast leave-one-out method, the hyperparameters are optimized with the conjugate gradient method and the FR-LSSVM model is constructed.
In summary, the method of the present invention has simple steps, is convenient to implement and easy to operate, is highly reliable, and works well in practice; it can effectively solve the problems of low reliability, high false alarm and missed alarm rates, and poor performance that existing video fire detection systems exhibit in complex environments.
The technical solution of the present invention is described in further detail below with reference to the accompanying drawings and embodiments.
Description of Drawings
Fig. 1 is a flow chart of the method of the present invention.
Fig. 2 is a schematic circuit block diagram of the image acquisition and processing system used in the present invention.
Fig. 3 is a schematic structural diagram of the two-dimensional histogram established in the present invention.
Fig. 4 is a schematic diagram of the segmentation state during image segmentation in the present invention.
Explanation of reference signs:
1: CCD camera; 2: video capture card; 3: processor; 4: data memory.
Detailed Description
An image-type fire flame recognition method as shown in Fig. 1 comprises the following steps:
Step 1, image acquisition: an image acquisition unit is used to acquire digital images of the area to be detected at a preset sampling frequency fs, and the digital image acquired at each sampling moment is synchronously transmitted to the processor 3. The image acquisition unit is connected to the processor 3.
In this embodiment, the image acquisition unit comprises a CCD camera 1 and a video capture card 2; the CCD camera 1 is connected to the video capture card 2, and the video capture card 2 is connected to the processor 3.
In this embodiment, the size of the digital image acquired at each sampling moment is M1 × N1 pixels, where M1 is the number of pixels in each row and N1 is the number of pixels in each column of the acquired image.
Step 2, image processing: the processor 3 performs image processing on the digital images acquired at the sampling moments of step 1 in chronological order, with the same processing method applied to the image of each sampling moment; processing the digital image acquired at any sampling moment comprises the following steps:
Step 201, image preprocessing; the process is as follows:
Step 2011, image reception and synchronous storage: the processor 3 synchronously stores the received digital image acquired at the current sampling moment in the data memory 4, which is connected to the processor 3.
In this embodiment, the CCD camera 1 is an infrared CCD camera, and the CCD camera 1, video capture card 2, processor 3, and data memory 4 together form the image acquisition and preprocessing system shown in Fig. 2.
Step 2012, image enhancement: the processor 3 performs enhancement processing on the digital image acquired at the current sampling moment to obtain an enhanced digital image.
Step 2013, image segmentation: the processor 3 performs segmentation processing on the enhanced digital image of step 2012 to obtain the target image.
Step 202, fire recognition: the pre-established binary classification model is used to process the target image of step 2013 and determine the fire state category of the area to be detected at the current sampling moment; the fire state category comprises two classes, flame and no flame, and the binary classification model is a support vector machine model that classifies these two classes.
The binary classification model is established as follows:
Step I, image information collection: the image acquisition unit is used to collect multiple frames of digital image one of the area to be detected when a fire occurs and multiple frames of digital image two of the same area when no fire occurs.
Step II, feature extraction: feature extraction is performed on the multiple frames of digital image one and the multiple frames of digital image two, and from each digital image a group of characteristic parameters that can represent and distinguish that image is extracted; the group comprises M feature quantities, which are numbered and together form a feature vector, where M ≥ 2.
Step III, training sample acquisition: from the feature vectors of digital image one and digital image two obtained by the feature extraction of step II, the feature vectors of m1 frames of digital image one and of m2 frames of digital image two are selected to form the training sample set, where m1 and m2 are positive integers with m1 = 40 to 100 and m2 = 40 to 100; the number of training samples in the training sample set is m1 + m2.
In this embodiment, when obtaining the training samples, the image acquisition unit collects digital image sequence one of the area to be detected during a fire over a time period t1 and digital image sequence two of the same area when no fire occurs. The number of frames in sequence one is n1 = t1 × fs, where t1 is the sampling time of sequence one; the number of frames in sequence two is n2 = t2 × fs, where t2 is the sampling time of sequence two. Here n1 is not less than m1 and n2 is not less than m2. Then m1 digital images are selected from sequence one as flame samples, and m2 digital images are selected from sequence two as no-flame samples.
In this embodiment, m1 = m2.
Step IV, establishment of the binary classification model; the process is as follows:
Step IV-1, kernel function selection: a radial basis function is selected as the kernel function of the binary classification model.
Step IV-2, determination of the classification function: once the penalty factor γ and the kernel parameter σ^2 of the radial basis function selected in step IV-1 have been determined, the classification function of the binary classification model is obtained and the establishment of the model is completed, where γ = C^-2, σ = D^-1, 0.01 < C ≤ 10, and 0.01 < D ≤ 50.
When determining the penalty factor γ and the kernel parameter σ^2, the conjugate gradient method is first used to optimize the parameters C and D, and the optimized C and D are then converted into γ and σ^2 according to γ = C^-2 and σ = D^-1.
When the conjugate gradient method is used in step IV-2 to optimize C and D, the m1 + m2 training samples of the training sample set of step III are used for the optimization.
Step V, training of the binary classification model: the m1 + m2 training samples of the training sample set of step III are input into the binary classification model established in step IV for training.
In this embodiment, the total number of training samples in the training sample set of step III is N, with N = m1 + m2. Before the binary classification model is established in step IV, the N training samples are numbered: the p-th training sample is numbered p, where p is a positive integer and p = 1, 2, …, N, and is denoted (x_p, y_p), where x_p is its characteristic parameter (i.e. the feature vector) and y_p is its category number, y_p = 1 or −1; category number 1 indicates flame and −1 indicates no flame.
When the conjugate gradient method is used in step IV-2 to optimize the parameters C and D, the m1 + m2 training samples of step III are used, and the optimization process is as follows:
Step i, objective function determination: the objective function to be minimized is the sum of squared leave-one-out prediction errors sse(C, D).
Here the matrix A is the coefficient matrix of the LS-SVM linear system of formula (5.25), A = [0, 1^T; 1, Ω + C^2·I], the vector Y = (0, y_1, …, y_N)^T, and s = (b, a_1, …, a_N)^T, where Ω is the kernel matrix with entries Ω_pq = K(x_p, x_q).
Since the optimization problem of the least squares support vector machine (LS-SVM) is expressed in the following form:
min J(w, e) = (1/2)·w^T·w + (γ/2)·Σ_{p=1}^{N} e_p^2, subject to y_p = w^T·φ(x_p) + b + e_p, p = 1, …, N (5.22),
where w^T·φ(x_p) + b is the classification hyperplane in the high-dimensional feature space, w and b are the parameters of the hyperplane, e_p is the training error of the p-th training sample, Σ e_p^2 is the empirical risk, and w^T·w = ||w||^2 measures the complexity of the learning machine.
After the training sample set is determined, the performance of the LS-SVM model depends on the type of kernel function and the choice of two hyperparameters, the penalty factor γ and the kernel parameter σ^2. The classification accuracy of the LS-SVM model is related to the hyperparameter selection: the kernel parameter σ^2 represents the width of the radial basis function and strongly affects the smoothness of the model, while the penalty factor γ, also called the regularization parameter, controls the degree of penalty on erroneous samples and is closely related to the complexity of the LS-SVM model and the degree to which it fits the training samples.
In this embodiment, the radial basis function selected in step IV-1 is K(x_i, x_j) = exp(−||x_i − x_j||^2 / σ^2).
Formula (5.22) can then be written as: min J(w, e) = (1/2)·w^T·w + (1/(2C^2))·Σ_{p=1}^{N} e_p^2, subject to y_p = w^T·φ(x_p) + b + e_p, p = 1, …, N (5.23).
In formula (5.23), C^-2 replaces the penalty factor γ but plays the same role of balancing the complexity of the LS-SVM model against the empirical risk; σ is replaced by D^-1, so the radial basis function becomes K(x_i, x_j) = exp(−D^2·||x_i − x_j||^2).
According to the principle of the least squares support vector machine, formula (5.23) is transformed into the linear system of equations [0, 1^T; 1, Ω + C^2·I]·[b; a] = [0; y] (5.25), where 1 = (1, …, 1)^T, a = (a_1, …, a_N)^T, y = (y_1, …, y_N)^T, and Ω is the kernel matrix with entries Ω_pq = K(x_p, x_q).
Solving formula (5.25) yields the regression function f(x) = Σ_{p=1}^{N} a_p·K(x, x_p) + b. From formula (5.25) it also follows that s = A^-1·Y (5.28), where A denotes the coefficient matrix of the linear system, Y = (0, y_1, …, y_N)^T, and s = (b, a_1, …, a_N)^T.
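The training step above, solving the linear system of formula (5.25) and reading off s = A^-1·Y as in (5.28), can be sketched in Python. The block assumes the substitutions γ = C^-2 and σ = D^-1 used in the text; the function names are illustrative, not from the source:

```python
import numpy as np

def rbf_kernel_matrix(X, D):
    # K_pq = exp(-D^2 * ||x_p - x_q||^2), i.e. the RBF kernel with sigma = 1/D
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-(D ** 2) * sq)

def train_lssvm(X, y, C, D):
    # Build and solve the LS-SVM linear system A s = Y with
    # A = [[0, 1^T], [1, Omega + C^2 I]]; gamma = C^-2, so gamma^-1 = C^2
    N = X.shape[0]
    K = rbf_kernel_matrix(X, D)
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + (C ** 2) * np.eye(N)
    Y = np.concatenate(([0.0], y))
    s = np.linalg.solve(A, Y)          # s = A^{-1} Y, formula (5.28)
    return s[1:], s[0]                 # a = (a_1..a_N), b

def lssvm_predict(X_train, a, b, D, X_new):
    # classifier sign(sum_p a_p K(x, x_p) + b)
    sq = np.sum((X_new[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
    K = np.exp(-(D ** 2) * sq)
    return np.sign(K @ a + b)
```

With a small C (strong fit, since the ridge term is C^2) the model interpolates the training labels almost exactly.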
The established binary classification model is verified N times with the N training samples of the training sample set. In the p-th verification, the p-th training sample serves as the prediction set and the remaining N − 1 samples as the training set; after the LS-SVM parameters a_p and b have been solved from the training set, the p-th training sample is classified and the correctness of the result is recorded. After N verifications, the leave-one-out misclassification rate e_LOO can be calculated as e_LOO = n_err / N (5.29), where n_err is the number of misclassified samples. For each given set of hyperparameters (comprising C and D), the corresponding e_LOO can be computed, and the hyperparameter combination with the smallest e_LOO is selected as the optimized parameters.
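The leave-one-out loop just described can be sketched as follows. This is the naive version that retrains N times (the fast leave-one-out formulation of the text avoids the retraining); the helper `_train` is a minimal stand-in for the LS-SVM solver and is not from the source:

```python
import numpy as np

def _train(X, y, C, D):
    # minimal LS-SVM solve: A s = Y with A = [[0, 1^T], [1, K + C^2 I]]
    sq = np.sum((X[:, None] - X[None, :]) ** 2, axis=-1)
    K = np.exp(-(D ** 2) * sq)
    N = len(y)
    A = np.block([[np.zeros((1, 1)), np.ones((1, N))],
                  [np.ones((N, 1)), K + C ** 2 * np.eye(N)]])
    s = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return s[1:], s[0]

def loo_error(X, y, C, D):
    # e_LOO (formula 5.29): fraction of samples misclassified when each
    # is predicted by a model trained on the remaining N-1 samples
    N = len(y)
    wrong = 0
    for p in range(N):
        m = np.arange(N) != p
        a, b = _train(X[m], y[m], C, D)
        k = np.exp(-(D ** 2) * np.sum((X[p] - X[m]) ** 2, axis=-1))
        if np.sign(k @ a + b) != y[p]:
            wrong += 1
    return wrong / N
```

Scanning a grid of (C, D) pairs and keeping the pair with the smallest returned value reproduces the selection rule of the text.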
Since the leave-one-out prediction residuals can be expressed in closed form through the matrix A (formula (5.30)), the sum of squared leave-one-out prediction errors sse(C, D) is a differentiable function of C and D.
To minimize sse(C, D), the partial derivatives of formula (5.30) with respect to C and D are taken.
According to A·A^-1 = I (I being the identity matrix), it can be derived that ∂A^-1/∂C = −A^-1·(∂A/∂C)·A^-1 (5.36).
According to formulas (5.30) and (5.36), the gradients ∂sse/∂C (5.37) and ∂sse/∂D (5.38) are obtained; the intermediate terms ∂A/∂C and ∂A/∂D appearing in them can be calculated from formulas (5.32)-(5.35).
For each set of hyperparameters C and D, the gradients of sse(C, D) with respect to C and D can be calculated according to formulas (5.37) and (5.38). According to the LS-SVM principle, the selection of the LS-SVM hyperparameters is thus converted from a constrained optimization problem into an unconstrained one by replacing γ with C^-2 and σ with D^-1; this transformation does not affect the performance of the LS-SVM model, and the values of C and D do not affect the gradient computation.
Step ii, initial parameter setting: the initial values C_1 and D_1 of the parameters C and D are determined, and the recognition error threshold ε is set, with ε > 0.
Step iii, computation of the gradient g_k of the current iteration: the gradient g_k = (∂sse/∂C, ∂sse/∂D)^T of the objective function of step i, evaluated at C_k and D_k, is computed according to formulas (5.37) and (5.38), where k is the iteration number, k = 1, 2, …. If ||g_k|| ≤ ε, the computation stops, and C_k and D_k are the optimized parameters C and D; otherwise, go to step iv.
Step iv, computation of the search direction d_k of the current iteration: for k = 1, d_1 = −g_1; for k > 1, d_k = −g_k + β_{k−1}·d_{k−1}, where β_{k−1} = ||g_k||^2 / ||g_{k−1}||^2 is the conjugacy coefficient.
Step v, determination of the search step size λ_k of the current iteration: a search is performed along the direction d_k determined in step iv to find the step size λ_k = −(g_k^T·d_k)/(d_k^T·H·d_k) that minimizes the objective function along d_k.
Step vi, the parameters of the next iteration are obtained according to the update formula (C_{k+1}, D_{k+1}) = (C_k, D_k) + λ_k·d_k.
Step vii, set k = k + 1 and return to step iii for the next iteration.
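Steps iii to vii form a standard conjugate gradient loop, which can be sketched on a quadratic stand-in objective as below. The Fletcher-Reeves formula for the conjugacy coefficient and the exact step-size rule λ_k = −(g_k^T·d_k)/(d_k^T·H·d_k) are assumptions consistent with the H = A^T·A noted later in the text; the target point (0.14, 0.24) merely echoes the hyperparameter magnitudes reported in the embodiment:

```python
import numpy as np

def conjugate_gradient_descent(grad, H, x0, eps=1e-8, max_iter=50):
    # steps iii-vii: gradient, stopping test, conjugate direction,
    # exact step size, parameter update, next iteration
    x = np.asarray(x0, dtype=float)
    g = grad(x)                              # step iii: current gradient
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:         # step iii stopping test
            break
        lam = -(g @ d) / (d @ H @ d)         # step v: exact step along d
        x = x + lam * d                      # step vi: update
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves coefficient
        d = -g_new + beta * d                # step iv: conjugate direction
        g = g_new
    return x

# quadratic stand-in for sse(C, D) with Hessian H and minimum at (0.14, 0.24)
H = np.array([[2.0, 0.0], [0.0, 4.0]])
grad = lambda x: np.array([2.0 * (x[0] - 0.14), 4.0 * (x[1] - 0.24)])
opt = conjugate_gradient_descent(grad, H, [1.0, 1.0])
```

On a 2-dimensional quadratic, conjugate gradient with exact line search reaches the minimum in at most two iterations, which is why the text emphasizes its small iteration count.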
Finally, the optimized parameters C and D are obtained, together with the corresponding coefficient matrix A and solution vector s = A^-1·Y.
In this embodiment, after the binary classification model has been established, the classifier finally adopted is y(x) = sign(Σ_{p=1}^{N} a_p·K(x, x_p) + b).
In this embodiment, T in the step-size formula of step v denotes matrix transposition, and H is the autocorrelation matrix, H = A^T·A.
In actual operation, when C_1 and D_1 are determined in step ii, either a grid search method or random sampling is used. With random sampling, C_1 is a value drawn at random from (0.01, 1] and D_1 a value drawn at random from (0.01, 50]. With the grid search method, a grid is first divided with a step size of 10^-3, a three-dimensional grid map is made with C and D as independent variables and the objective function of step i as the dependent variable, multiple groups of C and D parameters are then found by grid search, and finally the averages of these groups are taken as C_1 and D_1.
In this embodiment, the grid search method is used to determine C_1 and D_1, and B groups of C and D parameters are found by grid search, where B is a positive integer and B = 5 to 20.
The conjugate gradient method has a simple algorithm, requires little storage, and converges quickly; it converts a multidimensional problem into a series of one-dimensional searches (the negative gradient being only the direction of locally fastest descent of the objective function), effectively reducing the number of iterations and the running time.
By contrast, when the grid search method is used to optimize C and D directly, a high-precision optimum can be found only with a very small step size, which is very time-consuming.
Combustion is a continuous and typically unstable physical process with various characterization parameters. The flame image in the early stage of a fire is mainly characterized by a growing flame area, edge jitter, an irregular shape, and a basically stable position. The criterion calculations for the area growth characteristic were developed and implemented on the Visual C++ platform [115], where the area change rate is defined as AR = |A(n+1) − A(n)| / (max{A(n), A(n+1)} + eps), in which AR denotes the area change rate of the highlighted region between adjacent frames, and A(n) and A(n+1) denote the areas of the suspicious region in the current frame and the next frame, respectively. To prevent the computed area change rate from becoming infinite when no suspicious flame region exists in either of two adjacent frames, a very small value eps is added to the denominator. In addition, to achieve normalization, the maximum of the highlighted-region areas of the two frames is taken as the denominator, so the final result lies in (0, 1).
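The area change rate above can be sketched on binary candidate-region masks (the original is implemented in Visual C++; this Python version is illustrative only):

```python
import numpy as np

def area_change_rate(region_prev, region_next, eps=1e-9):
    # AR = |A(n+1) - A(n)| / (max(A(n), A(n+1)) + eps)
    # regions are binary masks of the candidate flame area;
    # eps guards against division by zero when both masks are empty,
    # and the max(...) denominator normalizes AR into [0, 1)
    a0 = float(np.count_nonzero(region_prev))
    a1 = float(np.count_nonzero(region_next))
    return abs(a1 - a0) / (max(a0, a1) + eps)
```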
The shape similarity of images is usually measured with respect to known descriptors; such similarity measures can be constructed at any level of complexity. With the background difference method, let the known image sequence be f_h(x, y), h = 1, 2, …, N_0, where (x, y) are the coordinates of each pixel and N_0 is the number of frames, and let the reference image be f_o(x, y). A difference image sequence can then be defined as δ_h(x, y) = |f_h(x, y) − f_o(x, y)|, which represents the difference between each frame of the original sequence and the reference image. The difference image sequence is binarized to obtain the image sequence {b_h(x, y)}; pixels marked 1 in this sequence indicate regions that differ significantly from the reference image. Such a region is taken as a possible flame region: after filtering out the influence of isolated points, the pixels equal to 1 in each frame are labeled, giving the possible flame region Ω_h of each frame of the sequence. Once a suspicious flame region has been found, flames and disturbances are distinguished by computing the similarity of consecutive frame-change images. The similarity ξ_h of consecutive frame-change images is defined on the candidate regions Ω_h of adjacent frames; after several similarities have been obtained, the average of ξ_h over several consecutive frames is used as the criterion.
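The background-difference and binarization steps, together with one plausible overlap-based similarity for consecutive candidate regions (the exact formula for ξ_h is not reproduced in the source, so the intersection-over-union below is an assumption), can be sketched as:

```python
import numpy as np

def candidate_flame_mask(frame, background, thresh):
    # delta_h = |f_h - f_o|, then binarize: 1 marks pixels that differ
    # significantly from the reference (background) image
    delta = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    return (delta > thresh).astype(np.uint8)

def region_similarity(mask_a, mask_b):
    # overlap of the candidate regions Omega_h of adjacent frames,
    # measured here as intersection over union (an assumed definition)
    inter = np.count_nonzero(mask_a & mask_b)
    union = np.count_nonzero(mask_a | mask_b)
    return inter / union if union else 1.0
```

Averaging `region_similarity` over several consecutive frame pairs gives the criterion value used by the text.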
Moment characteristic: from the perspective of flame recognition, the centroid feature of the flame image is used, the centroid representing its stability. For a flame image, the centroid is first computed: M_00 is the zero-order moment of the target region, i.e. the area of the target region; the first-order moments M_10 and M_01 in the x and y directions are then computed, and the centroid is obtained as (x̄, ȳ) = (M_10/M_00, M_01/M_00).
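The centroid computation from the zero- and first-order moments can be sketched as:

```python
import numpy as np

def centroid(mask):
    # M00 = zero-order moment = area of the target region;
    # M10, M01 = first-order moments in x and y;
    # centroid = (M10 / M00, M01 / M00)
    ys, xs = np.nonzero(mask)
    m00 = len(xs)
    if m00 == 0:
        return None          # no target region present
    m10 = xs.sum()
    m01 = ys.sum()
    return (m10 / m00, m01 / m00)
```

Tracking this centroid across frames gives the stability measure the text describes.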
The edge changes of an early fire flame follow their own distinctive rules, and the simple and practical feature parameters compactness and eccentricity are used as one of the fire criteria to recognize these edge changes.
Compactness is usually used to describe the complexity of an object boundary; it is also called circularity or dispersion and is a feature quantity that measures the shape complexity of an object or region on the basis of its area and perimeter. It is defined as C_k = P_k^2 / (4π·A_k), k = 1, 2, …, n (4.7), where C_k denotes the compactness of the primitive numbered k; P_k is the perimeter of the k-th primitive, i.e. the boundary length of the suspicious primitive, which can be obtained by computing the boundary chain code; and A_k is the area of the k-th primitive, which for a grayscale image can be obtained by counting the bright points of the suspicious primitive and for a binary image by counting the pixels with value 1. Here n is the number of suspicious flame primitives in the image. Computing the perimeter is relatively complicated but can be done by extracting the boundary chain code.
The compactness of the flame region is calculated as follows:
① Calculate the area of the suspected flame region on the basis of the image segmentation;
② Detect consecutive perimeter pixels in the vertical direction and record their number N_x; detect consecutive boundary pixels in the horizontal direction and record their number N_y; compute the total number of boundary pixels S_N;
③ The number of even chain codes is N_E = N_x + N_y and the number of odd chain codes is N_O = S_N − N_E; the perimeter is then obtained from the perimeter formula P_k = N_E + √2·N_O;
④ Substitute the results of ① and ③ into formula (4.7) to calculate the compactness.
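Steps ① to ④ can be sketched as follows; the chain-code perimeter weights (1 for even codes, √2 for odd, i.e. diagonal, codes) follow the perimeter formula above, and the 4π normalization of the compactness, which makes a circle score exactly 1, is the common convention assumed here:

```python
import numpy as np

def chain_code_perimeter(n_even, n_odd):
    # even codes are horizontal/vertical unit moves, odd codes are
    # diagonal moves of length sqrt(2): P = N_E + sqrt(2) * N_O
    return n_even + np.sqrt(2.0) * n_odd

def compactness(area, perimeter):
    # C_k = P_k^2 / (4 * pi * A_k): equals 1 for a circle and grows
    # as the boundary becomes more complex
    return perimeter ** 2 / (4.0 * np.pi * area)
```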
The gray-level co-occurrence matrix is used to extract image texture features; Haralick et al. derived 14 features from the gray-level co-occurrence matrix. In this embodiment, the extracted texture features are five: contrast, entropy, energy, uniformity, and correlation.
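A small sketch of a gray-level co-occurrence matrix and the five features listed above; the exact normalizations vary between references, so the formulas below follow one common convention and are not necessarily the ones used in the source:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    # normalized gray-level co-occurrence matrix for pixel offset (dx, dy)
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

def haralick_features(P):
    # contrast, entropy, energy, uniformity (homogeneity), correlation
    i, j = np.indices(P.shape)
    eps = 1e-12
    contrast = np.sum((i - j) ** 2 * P)
    entropy = -np.sum(P * np.log(P + eps))
    energy = np.sum(P ** 2)
    homogeneity = np.sum(P / (1.0 + np.abs(i - j)))
    mu_i, mu_j = np.sum(i * P), np.sum(j * P)
    sd_i = np.sqrt(np.sum((i - mu_i) ** 2 * P))
    sd_j = np.sqrt(np.sum((j - mu_j) ** 2 * P))
    corr = np.sum((i - mu_i) * (j - mu_j) * P) / (sd_i * sd_j + eps)
    return contrast, entropy, energy, homogeneity, corr
```

A perfectly uniform image has zero contrast and energy 1, since all co-occurrences fall into a single matrix cell.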
A burning flame flickers; this characteristic is reflected in how the distribution of the pixels of a frame over the gray levels changes with time. By computing the changes of the edge pixels, the flicker law of the target pattern can be obtained. The flame flicker frequency generally lies in the low-frequency range of 10 to 20 Hz. Since video images are generally acquired at 25 Hz (25 frames/s), the sampling requirement for capturing the flicker frequency without distortion is not met, so it is difficult to obtain the characteristic spectrum directly from the video information. Regarding the flicker law, Toreyin proposed tracking the color value of a fixed pixel over time in the RGB model and analyzing the R component of this point with wavelets: if a flame is present, it causes drastic value changes at this point, and the high-frequency components of the wavelet decomposition are non-zero. Wang Zhenhua [73] proposed decomposing and reconstructing the flame-feature time series with the discrete wavelet transform and expressing the flicker law through area changes. Zhang Jinhua et al. pointed out that the flame height changes greatly during flickering, that this change is directly related to the flicker frequency, and that it differs greatly from interference sources; they therefore proposed a flame recognition method that uses the change of flame height in place of the flame flicker feature.
In this embodiment, the flicker characteristic is extracted by the method of Zhang Jinhua et al., which exploits the large variation of flame height during flickering.
In this embodiment, M = 6 in step II, and the six feature quantities are area, similarity, moment characteristic, compactness, texture feature, and flicker characteristic.
In actual operation, to test the performance of the established binary classification model, 81 training samples containing flame and no-flame samples were selected, each sample being 7-dimensional. For the 81 training samples, one group of data was taken out each time for prediction and classification, and the remaining 80 groups were used to optimize the hyperparameters. With initial values C_1 = 1 and D_1 = 1 and the conjugate gradient search, the mean of C was 0.1386 with a mean square deviation of 0.0286, and the mean of D was 0.2421 with a mean square deviation of 0.0273, showing that the selected hyperparameters are stable. The binary classification model adopted by the present invention (the FR-LSSVM model) was compared with three classification models, BP (a neural network model), LS-SVM (a least squares support vector machine model), and standard SVM (a support vector machine model); the recognition results are shown in Table 1:
Table 1. Comparison of the recognition results of different classification models
As can be seen from Table 1, in terms of recognition rate BP is the worst, LS-SVM with initial values selected only by grid search is also poor, and FR-LSSVM and standard SVM are clearly better than both. In terms of training time, FR-LSSVM and LS-SVM are significantly superior and standard SVM slightly so; optimal hyperparameters are difficult to find for the standard SVM, and BP neural network training is very time-consuming with a slightly lower recognition rate. The reasons are the small number of training samples (the sample count strongly affects the recognition rate, and the contained feature information is insufficient) and the shortcomings of neural networks in convergence and local minimization: each parameter choice relies on experience, so the parameter settings carry considerable uncertainty. The recognition rate can be improved by supplementing the training samples and further correcting the weights of the BP neural network. The recognition rate of the standard SVM is higher than that of LS-SVM, but its training and recognition times are both longer. The hyperparameter algorithm of FR-LSSVM is more systematic, less time-consuming, and more stable, reducing uncertainty; it is especially suitable for modeling small-sample, nonlinear problems and has significant advantages in speed and accuracy. In addition, these algorithms place high demands on image quality.
If the image resolution is low, if the target region in the image is occluded over a large area by obstacles or covered and surrounded by dust, or if the extracted target is incomplete or contains noise, the recognition rate may decrease.
In this embodiment, when image enhancement is performed in step 2012, an image enhancement method based on fuzzy logic is used for the enhancement processing.
In actual enhancement processing, when a fuzzy-logic image enhancement method (specifically the classic Pal-King fuzzy enhancement algorithm, i.e. the Pal algorithm) is used, the following defects exist:
① In the fuzzy transform and its inverse, the Pal algorithm uses a complex power function as the fuzzy membership function, which leads to poor real-time performance and a large computational load;
② In the fuzzy enhancement transform, many low gray values of the original image are forced to zero, causing a loss of low-gray information;
③ The fuzzy enhancement threshold (the crossover point X_c) is generally selected by experience or repeated comparison, without theoretical guidance, and is therefore arbitrary; the parameters F_d and F_e of the membership function are adjustable, and their reasonable selection is closely related to the image processing effect;
④ In the fuzzy enhancement transform, multiple iterations are used to enhance the image repeatedly, yet the number of iterations is chosen without guiding theoretical principles, and too many iterations affect edge details.
To overcome the above defects of the classic Pal-King fuzzy enhancement algorithm, in this embodiment the digital image, i.e. the image to be enhanced, is enhanced in step 2012 as follows:
Step 20121. Transform from the image domain to the fuzzy domain: according to the membership function of formula (7), the gray value of each pixel of the image to be enhanced is mapped to a fuzzy membership degree of the fuzzy set; the membership degrees of all pixels together form the fuzzy membership matrix of the fuzzy set.
Because μgh ∈ [0,1] in formula (7), the defect of the classic Pal-King algorithm, in which many low gray values of the original image are clipped to zero after the fuzzy transform, is avoided. Taking the threshold XT as the dividing line, the membership degree of gray level xgh is defined separately in the low-gray and high-gray regions of the image; defining the membership in the two regions separately minimizes the information loss in the low-gray region and thus preserves the quality of the enhancement.
In this embodiment, before the transform from the image domain to the fuzzy domain in step 20121, the gray threshold XT is first selected by the maximum between-class variance (Otsu) method.
Step 20122. Fuzzy enhancement in the fuzzy domain using the fuzzy enhancement operator μ'gh = Ir(μgh) = I(Ir-1(μgh)), where r is the number of iterations and is a positive integer, r = 1, 2, ….
Step 20123. Inverse transform from the fuzzy domain back to the image domain: according to formula (6), the μ'gh obtained from the fuzzy enhancement is inverse-transformed to recover the gray value of each pixel of the enhanced digital image, yielding the enhanced image.
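Steps 20121 to 20123 can be sketched as follows. Since the patent's formula images (7) and (6) are not reproduced in the text, the piecewise-linear membership split at XT and the classic contrast-intensification operator used below are illustrative assumptions, not the patent's exact expressions:

```python
import numpy as np

def fuzzy_enhance(img, x_t, r=2, eps=1e-6):
    """Sketch of steps 20121-20123: image domain -> fuzzy domain,
    r rounds of fuzzy enhancement, inverse transform back.
    The membership below x_t maps into [0, 0.5] instead of being
    clipped to zero, so low-gray information is preserved."""
    img = img.astype(np.float64)
    L = 256.0
    # Step 20121: membership defined separately below/above the threshold x_t
    mu = np.where(img <= x_t,
                  0.5 * img / max(x_t, eps),
                  0.5 + 0.5 * (img - x_t) / max(L - 1 - x_t, eps))
    # Step 20122: classic contrast-intensification operator, iterated r times
    for _ in range(r):
        mu = np.where(mu <= 0.5, 2.0 * mu ** 2, 1.0 - 2.0 * (1.0 - mu) ** 2)
    # Step 20123: inverse of the membership mapping restores gray values
    out = np.where(mu <= 0.5,
                   mu * 2.0 * x_t,
                   x_t + (mu - 0.5) * 2.0 * (L - 1 - x_t))
    return np.clip(out, 0, 255)
```

Each iteration pushes memberships away from 0.5, so pixels darker than XT get darker and pixels brighter than XT get brighter, while a pixel exactly at XT is a fixed point.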
Since the selection of the fuzzy enhancement threshold (the crossover point Xc) is the key to image enhancement in the Pal algorithm, in practice it must otherwise be obtained by experience or by repeated trial. The classic alternative is the maximum between-class variance method (Otsu), which is simple, stable and effective, and is widely used in practice. The Otsu threshold selection method removes the need for repeated manual trials: the computer determines the optimal threshold automatically from the gray-level statistics of the image. Its principle is to use the between-class variance as the criterion and to select the gray value that maximizes it as the optimal threshold, thereby achieving automatic selection of the fuzzy enhancement threshold and avoiding manual intervention during enhancement.
In this embodiment, before the gray threshold XT is selected by the maximum between-class variance method, the processor 3 first scans the gray range of the image to be enhanced for all gray values whose pixel count is zero and marks them as exempt from computation. When selecting XT by the maximum between-class variance method, the between-class variance is computed only with the remaining gray values of the gray range as candidate thresholds; the maximum of the computed between-class variances is then found, and the gray value at which it occurs is taken as the gray threshold XT.
When the traditional maximum between-class variance method (Otsu) is used to select the fuzzy enhancement threshold, let ns be the number of pixels with gray value s; the total number of pixels is N = Σs ns, and the probability of gray level s in the captured digital image is Ps = ns/N. A threshold XT divides the pixels of the image into two classes C0 and C1 by gray level, C0 = {0, 1, …, t} and C1 = {t+1, t+2, …, L-1}; let w0(t) and w1(t) denote the ratios of the pixel counts of C0 and C1 to the total pixel count, and μ0(t) and μ1(t) their respective mean gray values.
For C0: w0(t) = Σs=0..t Ps and μ0(t) = (1/w0(t))·Σs=0..t s·Ps;
for C1: w1(t) = Σs=t+1..L-1 Ps = 1 - w0(t) and μ1(t) = (1/w1(t))·Σs=t+1..L-1 s·Ps;
where μ is the statistical mean gray value of the whole image, so μ = w0μ0 + w1μ1;
the between-class variance is σ2(t) = w0(t)·(μ0(t) - μ)2 + w1(t)·(μ1(t) - μ)2, and the optimal threshold is the gray level that maximizes it: XT = arg max0≤t≤L-1 σ2(t) (8)
The process of automatically extracting the optimal fuzzy enhancement threshold XT is therefore: traverse all gray levels from 0 to L-1 and find the value of XT at which expression (8) attains its maximum. Because the pixel count of an image may be zero at some gray levels, the present invention proposes an improved fast Otsu method that reduces the number of variance computations.
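The plain Otsu scan described above can be sketched as follows (a minimal NumPy implementation of criterion (8), not code from the patent):

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Plain Otsu: pick the t maximising the between-class variance
    sigma^2(t) = w0*(mu0-mu)^2 + w1*(mu1-mu)^2 over all gray levels."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(np.float64)
    p = hist / hist.sum()                      # P_s, probability of gray level s
    w0 = np.cumsum(p)                          # w0(t) for every candidate t
    mu_cum = np.cumsum(p * np.arange(levels))  # running first moment
    mu = mu_cum[-1]                            # global mean gray value
    w1 = 1.0 - w0
    # guard empty classes; sigma^2 simplifies to (mu*w0 - mu_cum)^2 / (w0*w1)
    valid = (w0 > 0) & (w1 > 0)
    sigma2 = np.zeros(levels)
    sigma2[valid] = (mu * w0[valid] - mu_cum[valid]) ** 2 / (w0[valid] * w1[valid])
    return int(np.argmax(sigma2))
```

The closed form used in the last step is algebraically identical to σ2(t) = w0(μ0-μ)2 + w1(μ1-μ)2 after substituting μ0 = mu_cum/w0 and μ1 = (μ - mu_cum)/w1.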
Suppose the pixel count at gray level t' is zero, so that Pt' = 0.
If t'-1 is selected as the threshold, then w0(t'-1) = Σs=0..t'-1 Ps and μ0(t'-1) follows accordingly;
and when t' is selected as the threshold, Pt' = 0 gives w0(t') = w0(t'-1) and μ0(t') = μ0(t'-1).
It follows that:
σ2(t'-1) = σ2(t') (2.37)
Similarly, if there is a run of consecutive gray levels t1, t2, …, tn whose pixel counts are all zero, the same derivation gives:
σ2(t1-1) = σ2(t1) = σ2(t2-1) = σ2(t2) = … = σ2(tn-1) = σ2(tn) (2.38)
From the above, if the pixel count of a gray level is zero, the between-class variance with that level as the threshold need not be computed; it can simply be assigned the between-class variance of the nearest smaller gray level whose pixel count is non-zero. Therefore, to find the maximum between-class variance quickly, gray levels with equal between-class variance can be treated as a single gray level: gray values whose pixel count is zero are regarded as absent, and their between-class variance σ2(t) is directly assigned zero instead of being computed. This has no effect on the final threshold, but it speeds up the adaptive selection of the enhancement threshold.
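The identity (2.37)/(2.38) behind this shortcut can be checked numerically; a small sketch with an illustrative 8-level histogram (not data from the patent) follows:

```python
import numpy as np

def between_class_variance(p, t):
    """sigma^2(t) for a normalised histogram p, with class C0 = {0..t}."""
    w0 = p[:t + 1].sum()
    w1 = 1.0 - w0
    if w0 == 0 or w1 == 0:
        return 0.0
    mu = (p * np.arange(p.size)).sum()          # global mean
    mu0 = (p[:t + 1] * np.arange(t + 1)).sum() / w0
    mu1 = (mu - w0 * mu0) / w1
    return w0 * (mu0 - mu) ** 2 + w1 * (mu1 - mu) ** 2

# histogram with an empty gray level between the two populated ones
p = np.zeros(8)
p[[1, 6]] = 0.5
# P_2 = 0, so sigma^2(1) == sigma^2(2): the empty level need not be evaluated
assert abs(between_class_variance(p, 1) - between_class_variance(p, 2)) < 1e-12
```

Restricting the argmax scan to gray levels with non-zero pixel counts therefore cannot change the maximum that is found.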
In this embodiment, before the fuzzy enhancement in step 20122, the fuzzy set of the image to be enhanced obtained in step 20121 is first smoothed by low-pass filtering; the filter operator used is a 3×3 spatial low-pass mask.
Because images are easily contaminated by noise during generation and transmission, the fuzzy set of the image is smoothed to reduce noise before the enhancement is applied. In this embodiment, the smoothing of the image fuzzy set is realized by convolving the image fuzzy-set matrix with the 3×3 spatial low-pass filter operator.
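This smoothing can be sketched as below, assuming a uniform 1/9 averaging mask; the patent's actual 3×3 operator is not reproduced in the text:

```python
import numpy as np

def smooth_membership(mu):
    """3x3 spatial low-pass filtering of the fuzzy membership matrix,
    implemented as a convolution with an assumed uniform 1/9 mask."""
    padded = np.pad(mu, 1, mode='edge')          # replicate borders
    out = np.zeros_like(mu, dtype=np.float64)
    for di in (-1, 0, 1):                        # sum the nine shifted views
        for dj in (-1, 0, 1):
            out += padded[1 + di:1 + di + mu.shape[0],
                          1 + dj:1 + dj + mu.shape[1]]
    return out / 9.0
```

Averaging keeps the membership values inside [0,1] and leaves a constant membership matrix unchanged, so the smoothing does not shift the overall fuzzy level of the image.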
In this embodiment, the image segmentation in step 2013 proceeds as follows:
Step 20131. Two-dimensional histogram construction: the processor 3 builds, for the image to be segmented, a two-dimensional histogram of pixel gray value versus neighborhood mean gray value. Any point of this histogram is denoted (i, j), where the abscissa i is the gray value of a pixel (m, n) of the image to be segmented, and the ordinate j is the neighborhood mean gray value of that pixel. The frequency count of a point (i, j) of the histogram is denoted C(i, j), and its relative frequency is denoted h(i, j), i.e. C(i, j) divided by the total number of pixels.
In this embodiment, the neighborhood mean gray value g(m, n) of a pixel (m, n) is computed by averaging the gray values f over the d×d neighborhood centered on pixel (m, n).
Moreover, the neighborhood mean gray value g(m, n) and the pixel gray value f(m, n) have the same gray range [0, L), so the two-dimensional histogram built in step Ⅰ is a square region, as shown in Fig. 3, where L-1 is the maximum value of both g(m, n) and f(m, n).
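Step 20131 can be sketched as follows; rounding the neighborhood mean down to an integer bin is an assumption for illustration:

```python
import numpy as np

def two_d_histogram(img, d=3, levels=256):
    """Step 20131 sketch: joint relative frequency h(i, j) of pixel gray
    value i and d x d neighbourhood mean gray value j, both in [0, L)."""
    padded = np.pad(img.astype(np.float64), d // 2, mode='edge')
    # neighbourhood mean g(m, n) via summed shifts of the padded image
    g = np.zeros_like(img, dtype=np.float64)
    for di in range(d):
        for dj in range(d):
            g += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    g = np.floor(g / (d * d)).astype(np.int64)
    c = np.zeros((levels, levels), dtype=np.int64)   # counts C(i, j)
    np.add.at(c, (img.astype(np.int64).ravel(), g.ravel()), 1)
    return c / img.size                              # h(i, j) = C(i, j) / (M*N)
```

The resulting h sums to one, and for a homogeneous region the mass concentrates on the diagonal i ≈ j, which is exactly the property the 0#/1# region argument below relies on.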
In Fig. 3, the threshold vector (i, j) divides the two-dimensional histogram into four regions. Because the pixels inside the target image or inside the background image are strongly correlated, a pixel's gray value is very close to its neighborhood mean gray value; for pixels near the boundary between target and background, the difference between the pixel gray value and the neighborhood mean is pronounced. Accordingly, in Fig. 3 region 0# corresponds to the background image and region 1# to the target image, while regions 2# and 3# represent the distribution of boundary pixels and noise points. The optimal threshold should therefore be determined within regions 0# and 1#, from the pixel gray value and the neighborhood mean gray value, by the segmentation method of two-dimensional fuzzy-partition maximum entropy, so that the amount of information truly representing the target and the background is maximized.
Step 20132. Fuzzy parameter combination optimization: the processor 3 invokes the fuzzy-parameter-combination optimization module and uses the particle swarm optimization (PSO) algorithm to optimize the fuzzy parameter combination used by the image segmentation method based on two-dimensional fuzzy-partition maximum entropy, obtaining the optimized parameter combination.
In this step, before the fuzzy parameter combination is optimized, the functional expression of the two-dimensional fuzzy entropy for segmenting the image to be segmented is first derived from the two-dimensional histogram built in step 20131; this expression serves as the fitness function when the PSO algorithm optimizes the fuzzy parameter combination.
In this embodiment, the image to be segmented in step 20131 consists of a target image O and a background image P, where the membership function of the target image O is μo(i,j) = μox(i;a,b)·μoy(j;c,d) (1).
The membership function of the background image P is μb(i,j) = μbx(i;a,b)·μoy(j;c,d) + μox(i;a,b)·μby(j;c,d) + μbx(i;a,b)·μby(j;c,d) (2).
In formulas (1) and (2), μox(i;a,b) and μoy(j;c,d) are the one-dimensional membership functions of the target image O, and both are S-functions; μbx(i;a,b) and μby(j;c,d) are the one-dimensional membership functions of the background image P, and both are S-functions, with μbx(i;a,b) = 1 - μox(i;a,b) and μby(j;c,d) = 1 - μoy(j;c,d), where a, b, c and d are parameters controlling the shapes of the one-dimensional membership functions of the target image O and the background image P.
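Since the S-function definitions did not survive extraction, a commonly used two-parameter S-function is assumed below to illustrate formulas (1) and (2); it is not the patent's verbatim expression:

```python
import numpy as np

def s_func(x, a, b):
    """Assumed two-parameter S-shaped membership function, rising from 0
    at x = a to 1 at x = b through 0.5 at the midpoint."""
    x = np.asarray(x, dtype=np.float64)
    m = (a + b) / 2.0
    return np.where(x <= a, 0.0,
           np.where(x <= m, 2.0 * ((x - a) / (b - a)) ** 2,
           np.where(x <= b, 1.0 - 2.0 * ((x - b) / (b - a)) ** 2, 1.0)))

def mu_object(i, j, a, b, c, d):
    # mu_o(i, j) = mu_ox(i; a, b) * mu_oy(j; c, d), per formula (1)
    return s_func(i, a, b) * s_func(j, c, d)

def mu_background(i, j, a, b, c, d):
    # complement construction of formula (2): mu_bx = 1 - mu_ox, etc.
    ox, oy = s_func(i, a, b), s_func(j, c, d)
    bx, by = 1.0 - ox, 1.0 - oy
    return bx * oy + ox * by + bx * by
```

By construction μo + μb = (μox + μbx)(μoy + μby) = 1 at every point (i, j), so the two memberships form a fuzzy partition of the 2D histogram.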
When the functional expression of the two-dimensional fuzzy entropy is derived in step 20132, the minimum gmin and maximum gmax of the pixel gray values of the image to be segmented, and the minimum smin and maximum smax of the neighborhood mean gray values, are first determined from the two-dimensional histogram built in step 20131. In this embodiment, gmax = smax = L-1 and gmin = smin = 0, where L-1 = 255.
The functional expression of the two-dimensional fuzzy entropy computed in step 20132 is:
When the PSO algorithm is used in step 20132 to optimize the fuzzy parameter combination, the combination being optimized is (a, b, c, d).
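The entropy expression itself is not reproduced in the text; a commonly used form of two-dimensional fuzzy-partition entropy is sketched below as the fitness function. The formula H = -Po·ln(Po) - Pb·ln(Pb) is an assumption standing in for the patent's expression:

```python
import numpy as np

def fuzzy_entropy(h, mu_o, mu_b, eps=1e-12):
    """Assumed 2D fuzzy-partition entropy: Po and Pb are the
    membership-weighted masses of object and background taken over the
    2D histogram h(i, j); H peaks when the two masses are balanced."""
    p_o = float((h * mu_o).sum())
    p_b = float((h * mu_b).sum())
    return -(p_o * np.log(p_o + eps) + p_b * np.log(p_b + eps))
```

Maximizing this quantity over (a, b, c, d) drives the fuzzy partition toward the split that carries the most information about target versus background, which is the stated goal of step 20132.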
In this embodiment, the parameter-combination optimization of the two-dimensional fuzzy-partition maximum entropy in step 20132 comprises the following steps:
Step Ⅱ-1. Particle swarm initialization: each value of the parameter combination is taken as one particle, and multiple particles form the initialized swarm; a particle is denoted (ak, bk, ck, dk), where k is a positive integer with k = 1, 2, 3, …, K; K is a positive integer equal to the number of particles in the swarm; ak, bk, ck and dk are random values of the parameters a, b, c and d respectively, with ak < bk and ck < dk.
In this embodiment, K = 15.
In actual use, K can be chosen between 10 and 100 as required.
Step Ⅱ-2. Fitness function determination: the two-dimensional fuzzy entropy function derived in step 20132 is taken as the fitness function.
Step Ⅱ-3. Particle fitness evaluation: the fitness of every particle at the current moment is evaluated, using the same evaluation method for all particles. To evaluate the k-th particle, its fitness value at the current moment is first computed from the fitness function determined in step Ⅱ-2 and denoted fitnessk; fitnessk is then compared with Pbestk. If fitnessk > Pbestk, then Pbestk = fitnessk and the individual best position is updated to the k-th particle's current position. Here Pbestk is the maximum fitness value reached so far by the k-th particle, i.e. its individual extremum, and the associated position is its individual best position; t is the current iteration number and is a positive integer.
After the fitness values of all particles at the current moment have been computed from the fitness function determined in step Ⅱ-2, the largest of them is denoted fitnesskbest and compared with gbest. If fitnesskbest > gbest, then gbest = fitnesskbest and the global best position is updated to the position of the particle with the largest fitness value. Here gbest is the global extremum at the current moment, and the associated position is the swarm's best position at the current moment.
Step Ⅱ-4. Check of the iteration termination condition: if the condition is met, the parameter-combination optimization is complete; otherwise the position and velocity of each particle at the next moment are updated according to the particle swarm optimization algorithm, and the process returns to step Ⅱ-3.
The termination condition in step Ⅱ-4 is that the current iteration number t reaches the preset maximum number of iterations Imax, or that Δg ≤ e, where Δg = |gbest - gmax|, gbest is the global extremum at the current moment, gmax is the preset target fitness value, and e is a positive, preset tolerance.
In this embodiment, the maximum number of iterations Imax = 30. In actual use, Imax can be adjusted between 20 and 200 as required.
In this embodiment, when the swarm is initialized in step Ⅱ-1, (ak, ck) in the particle (ak, bk, ck, dk) is the initial velocity vector of the k-th particle and (bk, dk) is its initial position.
When the positions and velocities of the particles at the next moment are updated in step Ⅱ-4 according to the particle swarm optimization algorithm, the same update method is used for every particle. For the k-th particle, the velocity vector at the next moment is first computed from its current velocity vector, position, individual extremum Pbestk and the global extremum; its position at the next moment is then computed from its current position and the newly computed velocity vector.
Moreover, when the velocity and position of the k-th particle at the next moment are updated in step Ⅱ-4, the particle swarm update formulas with inertia weight ω and acceleration coefficients c1 and c2 are used, with ω decreasing linearly from ωmax to ωmin over the iterations.
In this embodiment, ωmax = 0.9, ωmin = 0.4 and c1 = c2 = 2.
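Steps Ⅱ-1 to Ⅱ-4 follow the canonical PSO scheme; a sketch using the parameter values quoted in the text (K = 15, Imax = 30, ωmax = 0.9, ωmin = 0.4, c1 = c2 = 2) on a toy fitness function is given below. The fitness is a placeholder; in the patent it would be the two-dimensional fuzzy entropy:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(fitness, lo, hi, n_particles=15, iters=30,
        w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    """Canonical PSO (maximisation) with linearly decreasing inertia weight."""
    dim = len(lo)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, (n_particles, dim))      # step II-1: init positions
    v = np.zeros_like(x)
    pbest = x.copy()                                 # individual best positions
    pbest_f = np.array([fitness(p) for p in x])      # Pbestk values
    g = pbest[np.argmax(pbest_f)].copy()             # global best position
    for t in range(iters):                           # step II-4 loop
        w = w_max - (w_max - w_min) * t / max(iters - 1, 1)
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])        # step II-3: evaluate
        better = f > pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmax(pbest_f)].copy()         # update gbest
    return g, pbest_f.max()
```

For the segmentation task, `fitness` would map a candidate (a, b, c, d) to its two-dimensional fuzzy entropy, and `lo`/`hi` would be the search ranges of step Ⅱ-1.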
In this embodiment, before the swarm initialization in step Ⅱ-1, the search ranges of ak, bk, ck and dk must first be determined. For the image to be segmented described in step Ⅰ, the minimum pixel gray value is gmin and the maximum is gmax; the neighborhood of a pixel (m, n) is d×d pixels, and the minimum and maximum neighborhood mean gray values are smin and smax. Then ak = gmin, …, gmax-1; bk = gmin+1, …, gmax; ck = smin, …, smax-1; dk = smin+1, …, smax.
In this embodiment, d = 5.
In actual use, the value of d can be adjusted as required.
Step 20133. Image segmentation: the processor 3 uses the fuzzy parameter combination optimized in step 20132 and classifies each pixel of the image to be segmented according to the image segmentation method based on two-dimensional fuzzy-partition maximum entropy, thereby completing the segmentation and obtaining the segmented target image.
In this embodiment, after the optimized fuzzy parameter combination (a, b, c, d) is obtained, pixels are classified by the maximum-membership principle: a pixel with μo(i,j) ≥ 0.5 is assigned to the target region, otherwise to the background region, as shown in Fig. 4. In Fig. 4, the squares where μo(i,j) ≥ 0.5 represent the target region after segmentation.
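The maximum-membership classification of step 20133 can be sketched as follows; as before, the two-parameter S-function form is an assumption, since the patent's exact expression is not reproduced in the text:

```python
import numpy as np

def s_func(x, a, b):
    # assumed two-parameter S membership (not verbatim from the patent)
    x = np.asarray(x, dtype=np.float64)
    m = (a + b) / 2.0
    return np.where(x <= a, 0.0,
           np.where(x <= m, 2.0 * ((x - a) / (b - a)) ** 2,
           np.where(x <= b, 1.0 - 2.0 * ((x - b) / (b - a)) ** 2, 1.0)))

def segment(img, g, params):
    """Step 20133: maximum-membership classification. Pixels whose object
    membership mu_o(i, j) = S(i; a, b) * S(j; c, d) reaches 0.5 are
    labelled target (flame), the rest background."""
    a, b, c, d = params
    mu_o = s_func(img, a, b) * s_func(g, c, d)
    return mu_o >= 0.5
```

Here `img` holds the pixel gray values i, `g` the neighborhood mean gray values j, and `params` the optimized combination (a, b, c, d) from step 20132; the returned boolean mask is the segmented target region.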
The above is only a preferred embodiment of the present invention and does not limit the present invention in any way. Any simple modification, change or equivalent structural variation made to the above embodiment according to the technical essence of the present invention still falls within the scope of protection of the technical solution of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410148888.3A CN103886344B (en) | 2014-04-14 | 2014-04-14 | A kind of Image Fire Flame recognition methods |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103886344A true CN103886344A (en) | 2014-06-25 |
CN103886344B CN103886344B (en) | 2017-07-07 |
Family
ID=50955227
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410148888.3A Active CN103886344B (en) | 2014-04-14 | 2014-04-14 | A kind of Image Fire Flame recognition methods |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103886344B (en) |
Cited By (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105976365A (en) * | 2016-04-28 | 2016-09-28 | 天津大学 | Nocturnal fire disaster video detection method |
CN106204553A (en) * | 2016-06-30 | 2016-12-07 | 江苏理工学院 | Image fast segmentation method based on least square method curve fitting |
CN106355812A (en) * | 2016-08-10 | 2017-01-25 | 安徽理工大学 | Fire hazard prediction method based on temperature fields |
CN107015852A (en) * | 2016-06-15 | 2017-08-04 | 珠江水利委员会珠江水利科学研究院 | A kind of extensive Hydropower Stations multi-core parallel concurrent Optimization Scheduling |
CN107209873A (en) * | 2015-01-29 | 2017-09-26 | 高通股份有限公司 | Hyper parameter for depth convolutional network is selected |
CN107316012A (en) * | 2017-06-14 | 2017-11-03 | 华南理工大学 | The fire detection and tracking of small-sized depopulated helicopter |
CN107704820A (en) * | 2017-09-28 | 2018-02-16 | 深圳市鑫汇达机械设计有限公司 | A kind of effective coal-mine fire detecting system |
CN108038510A (en) * | 2017-12-22 | 2018-05-15 | 湖南源信光电科技股份有限公司 | A kind of detection method based on doubtful flame region feature |
CN105809643B (en) * | 2016-03-14 | 2018-07-06 | 浙江外国语学院 | A kind of image enchancing method based on adaptive block channel extrusion |
CN108280755A (en) * | 2018-02-28 | 2018-07-13 | 阿里巴巴集团控股有限公司 | The recognition methods of suspicious money laundering clique and identification device |
CN108319964A (en) * | 2018-02-07 | 2018-07-24 | 嘉兴学院 | A kind of fire image recognition methods based on composite character and manifold learning |
CN108416968A (en) * | 2018-01-31 | 2018-08-17 | 国家能源投资集团有限责任公司 | Fire warning method and device |
CN108537150A (en) * | 2018-03-27 | 2018-09-14 | 秦广民 | Reflective processing system based on image recognition |
CN108664980A (en) * | 2018-05-14 | 2018-10-16 | 昆明理工大学 | A kind of sun crown ring structure recognition methods based on guiding filtering and wavelet transformation |
CN108765335A (en) * | 2018-05-25 | 2018-11-06 | 电子科技大学 | A kind of forest fire detection method based on remote sensing images |
CN108876741A (en) * | 2018-06-22 | 2018-11-23 | 中国矿业大学(北京) | A kind of image enchancing method under the conditions of complex illumination |
CN108875626A (en) * | 2018-06-13 | 2018-11-23 | 江苏电力信息技术有限公司 | A kind of static fire detection method of transmission line of electricity |
CN109145796A (en) * | 2018-08-13 | 2019-01-04 | 福建和盛高科技产业有限公司 | A kind of identification of electric power piping lane fire source and fire point distance measuring method based on video image convergence analysis algorithm |
CN109204106A (en) * | 2018-08-27 | 2019-01-15 | 浙江大丰实业股份有限公司 | Stage equipment mobile system |
CN109272496A (en) * | 2018-09-04 | 2019-01-25 | 西安科技大学 | Fire image recognition method for video surveillance of coal mine fire |
CN109584423A (en) * | 2018-12-13 | 2019-04-05 | 佛山单常科技有限公司 | A kind of intelligent unlocking system |
CN109685266A (en) * | 2018-12-21 | 2019-04-26 | 长安大学 | A kind of lithium battery bin fire prediction method and system based on SVM |
CN109887220A (en) * | 2019-01-23 | 2019-06-14 | 珠海格力电器股份有限公司 | Air conditioner and control method thereof |
CN109919071A (en) * | 2019-02-28 | 2019-06-21 | 沈阳天眼智云信息科技有限公司 | Flame identification method based on infrared multiple features combining technology |
CN110033040A (en) * | 2019-04-12 | 2019-07-19 | 华南师范大学 | A kind of flame identification method, system, medium and equipment |
CN110120142A (en) * | 2018-02-07 | 2019-08-13 | 中国石油化工股份有限公司 | A kind of fire hazard aerosol fog video brainpower watch and control early warning system and method for early warning |
CN110163278A (en) * | 2019-05-16 | 2019-08-23 | 东南大学 | A kind of flame holding monitoring method based on image recognition |
CN110334664A (en) * | 2019-07-09 | 2019-10-15 | 中南大学 | A statistical method, device, electronic equipment and medium for alloy precipitated phase fraction |
CN111105587A (en) * | 2019-12-31 | 2020-05-05 | 广州思瑞智能科技有限公司 | Intelligent flame detection method and device, detector and storage medium |
CN111476965A (en) * | 2020-03-13 | 2020-07-31 | 深圳信息职业技术学院 | Method for constructing fire detection model, fire detection method and related equipment |
CN112115766A (en) * | 2020-07-28 | 2020-12-22 | 辽宁长江智能科技股份有限公司 | Flame identification method, device, equipment and storage medium based on video picture |
CN112149509A (en) * | 2020-08-25 | 2020-12-29 | 浙江浙大中控信息技术有限公司 | Traffic signal lamp fault detection method integrating deep learning and image processing |
CN112215831A (en) * | 2020-10-21 | 2021-01-12 | 厦门市美亚柏科信息股份有限公司 | Method and system for evaluating quality of face image |
CN112396026A (en) * | 2020-11-30 | 2021-02-23 | 北京华正明天信息技术股份有限公司 | Fire image feature extraction method based on feature aggregation and dense connection |
CN113158719A (en) * | 2020-11-30 | 2021-07-23 | 齐鲁工业大学 | Image identification method for fire disaster of photovoltaic power station |
CN113806895A (en) * | 2021-08-18 | 2021-12-17 | 广西电网有限责任公司河池供电局 | Power transmission line pin-level defect identification model tuning method based on continuous learning |
CN114220046A (en) * | 2021-11-25 | 2022-03-22 | 中国民用航空飞行学院 | Fire image fuzzy membership recognition method based on gray comprehensive association degree |
CN114255446A (en) * | 2021-12-20 | 2022-03-29 | 黄浦区消防救援支队 | Flame heat flow real-time detection method based on machine vision and support vector machine |
CN114530025A (en) * | 2021-12-31 | 2022-05-24 | 武汉烽理光电技术有限公司 | Tunnel fire alarm method and device based on array grating and electronic equipment |
CN116701409A (en) * | 2023-08-07 | 2023-09-05 | 湖南永蓝检测技术股份有限公司 | Sensor data storage method for intelligent on-line detection of environment |
CN117152474A (en) * | 2023-07-25 | 2023-12-01 | 华能核能技术研究院有限公司 | High-temperature gas cooled reactor flame identification method based on K-means clustering algorithm |
KR20240005403A (en) * | 2022-07-05 | 2024-01-12 | 한국항공우주산업 주식회사 | Fire monitoring system of aircraft part's manufacturing environment |
CN117612319A (en) * | 2024-01-24 | 2024-02-27 | 上海意静信息科技有限公司 | Alarm information grading early warning method and system based on sensor and picture |
CN119227059A (en) * | 2024-10-08 | 2024-12-31 | 浙江图岳控股有限公司 | A security incident early warning method and system based on alarm data analysis |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101393603B (en) * | 2008-10-09 | 2012-01-04 | 浙江大学 | Method for recognizing and detecting tunnel fire disaster flame |
-
2014
- 2014-04-14 CN CN201410148888.3A patent/CN103886344B/en active Active
Non-Patent Citations (3)
Title |
---|
VIKSHANT KHANNA等: "Fire Detection Mechanism using Fuzzy Logic", 《INTERNATIONAL JOURNAL OF COMPUTER APPLICATION》 * |
孙福志等: "火灾识别中RS-SVM模型的应用", 《计算机工程与应用》 * |
赵敏等: "模糊聚类遗传算法在遗煤自燃火灾识别中的应用", 《煤炭技术》 * |
Cited By (59)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107209873B (en) * | 2015-01-29 | 2021-06-25 | 高通股份有限公司 | Hyper-parameter selection for deep convolutional networks |
CN107209873A (en) * | 2015-01-29 | 2017-09-26 | 高通股份有限公司 | Hyper parameter for depth convolutional network is selected |
CN105809643B (en) * | 2016-03-14 | 2018-07-06 | 浙江外国语学院 | A kind of image enchancing method based on adaptive block channel extrusion |
CN105976365A (en) * | 2016-04-28 | 2016-09-28 | 天津大学 | Nocturnal fire disaster video detection method |
CN107015852A (en) * | 2016-06-15 | 2017-08-04 | 珠江水利委员会珠江水利科学研究院 | A kind of extensive Hydropower Stations multi-core parallel concurrent Optimization Scheduling |
CN106204553B (en) * | 2016-06-30 | 2019-03-08 | 江苏理工学院 | Image fast segmentation method based on least square method curve fitting |
CN106204553A (en) * | 2016-06-30 | 2016-12-07 | 江苏理工学院 | Image fast segmentation method based on least square method curve fitting |
CN106355812A (en) * | 2016-08-10 | 2017-01-25 | 安徽理工大学 | Fire hazard prediction method based on temperature fields |
CN107316012A (en) * | 2017-06-14 | 2017-11-03 | 华南理工大学 | The fire detection and tracking of small-sized depopulated helicopter |
CN107316012B (en) * | 2017-06-14 | 2020-12-22 | 华南理工大学 | Fire detection and tracking method for small unmanned helicopter |
CN107704820A (en) * | 2017-09-28 | 2018-02-16 | 深圳市鑫汇达机械设计有限公司 | A kind of effective coal-mine fire detecting system |
CN108038510A (en) * | 2017-12-22 | 2018-05-15 | 湖南源信光电科技股份有限公司 | A kind of detection method based on doubtful flame region feature |
CN108416968A (en) * | 2018-01-31 | 2018-08-17 | 国家能源投资集团有限责任公司 | Fire warning method and device |
CN108319964A (en) * | 2018-02-07 | 2018-07-24 | 嘉兴学院 | A kind of fire image recognition methods based on composite character and manifold learning |
CN108319964B (en) * | 2018-02-07 | 2021-10-22 | 嘉兴学院 | A Fire Image Recognition Method Based on Hybrid Feature and Manifold Learning |
CN110120142A (en) * | 2018-02-07 | 2019-08-13 | 中国石油化工股份有限公司 | A kind of fire hazard aerosol fog video brainpower watch and control early warning system and method for early warning |
CN108280755A (en) * | 2018-02-28 | 2018-07-13 | 阿里巴巴集团控股有限公司 | The recognition methods of suspicious money laundering clique and identification device |
WO2019165817A1 (en) * | 2018-02-28 | 2019-09-06 | 阿里巴巴集团控股有限公司 | Method and device for recognizing suspicious money laundering group |
CN108537150A (en) * | 2018-03-27 | 2018-09-14 | 秦广民 | Reflective processing system based on image recognition |
CN108664980A (en) * | 2018-05-14 | 2018-10-16 | 昆明理工大学 | A kind of sun crown ring structure recognition methods based on guiding filtering and wavelet transformation |
CN108765335A (en) * | 2018-05-25 | 2018-11-06 | 电子科技大学 | A kind of forest fire detection method based on remote sensing images |
CN108875626A (en) * | 2018-06-13 | 2018-11-23 | 江苏电力信息技术有限公司 | A kind of static fire detection method of transmission line of electricity |
CN108876741B (en) * | 2018-06-22 | 2021-08-24 | 中国矿业大学(北京) | Image enhancement method under complex illumination condition |
CN108876741A (en) * | 2018-06-22 | 2018-11-23 | 中国矿业大学(北京) | A kind of image enchancing method under the conditions of complex illumination |
CN109145796A (en) * | 2018-08-13 | 2019-01-04 | 福建和盛高科技产业有限公司 | A kind of identification of electric power piping lane fire source and fire point distance measuring method based on video image convergence analysis algorithm |
CN109204106B (en) * | 2018-08-27 | 2020-08-07 | 浙江大丰实业股份有限公司 | Stage equipment moving system |
CN109204106A (en) * | 2018-08-27 | 2019-01-15 | 浙江大丰实业股份有限公司 | Stage equipment mobile system |
CN109272496B (en) * | 2018-09-04 | 2022-05-03 | 西安科技大学 | A fire image recognition method for video monitoring of coal mine fire |
CN109272496A (en) * | 2018-09-04 | 2019-01-25 | 西安科技大学 | Fire image recognition method for video surveillance of coal mine fire |
CN109584423A (en) * | 2018-12-13 | 2019-04-05 | 佛山单常科技有限公司 | A kind of intelligent unlocking system |
CN109685266A (en) * | 2018-12-21 | 2019-04-26 | 长安大学 | A kind of lithium battery bin fire prediction method and system based on SVM |
CN109887220A (en) * | 2019-01-23 | 2019-06-14 | 珠海格力电器股份有限公司 | Air conditioner and control method thereof |
CN109919071A (en) * | 2019-02-28 | 2019-06-21 | 沈阳天眼智云信息科技有限公司 | A flame identification method based on combined infrared multi-feature technology |
CN110033040A (en) * | 2019-04-12 | 2019-07-19 | 华南师范大学 | A flame identification method, system, medium and device |
CN110163278A (en) * | 2019-05-16 | 2019-08-23 | 东南大学 | A flame stability monitoring method based on image recognition |
CN110334664A (en) * | 2019-07-09 | 2019-10-15 | 中南大学 | A statistical method, device, electronic equipment and medium for alloy precipitated phase fraction |
CN111105587A (en) * | 2019-12-31 | 2020-05-05 | 广州思瑞智能科技有限公司 | Intelligent flame detection method and device, detector and storage medium |
CN111476965B (en) * | 2020-03-13 | 2021-08-03 | 深圳信息职业技术学院 | Construction method of fire detection model, fire detection method and related equipment |
CN111476965A (en) * | 2020-03-13 | 2020-07-31 | 深圳信息职业技术学院 | Method for constructing fire detection model, fire detection method and related equipment |
CN112115766A (en) * | 2020-07-28 | 2020-12-22 | 辽宁长江智能科技股份有限公司 | Flame identification method, device, equipment and storage medium based on video picture |
CN112149509A (en) * | 2020-08-25 | 2020-12-29 | 浙江浙大中控信息技术有限公司 | Traffic signal lamp fault detection method integrating deep learning and image processing |
CN112149509B (en) * | 2020-08-25 | 2023-05-09 | 浙江中控信息产业股份有限公司 | Traffic signal lamp fault detection method integrating deep learning and image processing |
CN112215831A (en) * | 2020-10-21 | 2021-01-12 | 厦门市美亚柏科信息股份有限公司 | Method and system for evaluating quality of face image |
CN112215831B (en) * | 2020-10-21 | 2022-08-26 | 厦门市美亚柏科信息股份有限公司 | Method and system for evaluating quality of face image |
CN113158719A (en) * | 2020-11-30 | 2021-07-23 | 齐鲁工业大学 | Image identification method for fire disaster of photovoltaic power station |
CN112396026A (en) * | 2020-11-30 | 2021-02-23 | 北京华正明天信息技术股份有限公司 | Fire image feature extraction method based on feature aggregation and dense connection |
CN112396026B (en) * | 2020-11-30 | 2024-06-07 | 北京华正明天信息技术股份有限公司 | Fire image feature extraction method based on feature aggregation and dense connection |
CN113806895A (en) * | 2021-08-18 | 2021-12-17 | 广西电网有限责任公司河池供电局 | Power transmission line pin-level defect identification model tuning method based on continuous learning |
CN114220046A (en) * | 2021-11-25 | 2022-03-22 | 中国民用航空飞行学院 | A fire image fuzzy membership recognition method based on grey comprehensive relational degree |
CN114255446A (en) * | 2021-12-20 | 2022-03-29 | 黄浦区消防救援支队 | Flame heat flow real-time detection method based on machine vision and support vector machine |
CN114530025B (en) * | 2021-12-31 | 2024-03-08 | 武汉烽理光电技术有限公司 | Tunnel fire alarming method and device based on array grating and electronic equipment |
CN114530025A (en) * | 2021-12-31 | 2022-05-24 | 武汉烽理光电技术有限公司 | Tunnel fire alarm method and device based on array grating and electronic equipment |
KR20240005403A (en) * | 2022-07-05 | 2024-01-12 | 한국항공우주산업 주식회사 | Fire monitoring system of aircraft part's manufacturing environment |
CN117152474A (en) * | 2023-07-25 | 2023-12-01 | 华能核能技术研究院有限公司 | A flame identification method for high-temperature gas-cooled reactors based on the K-means clustering algorithm |
CN116701409A (en) * | 2023-08-07 | 2023-09-05 | 湖南永蓝检测技术股份有限公司 | Sensor data storage method for intelligent on-line detection of environment |
CN116701409B (en) * | 2023-08-07 | 2023-11-03 | 湖南永蓝检测技术股份有限公司 | Sensor data storage method for intelligent on-line detection of environment |
CN117612319A (en) * | 2024-01-24 | 2024-02-27 | 上海意静信息科技有限公司 | Alarm information grading early warning method and system based on sensor and picture |
CN119227059A (en) * | 2024-10-08 | 2024-12-31 | 浙江图岳控股有限公司 | A security incident early warning method and system based on alarm data analysis |
CN119227059B (en) * | 2024-10-08 | 2025-06-03 | 浙江图岳控股有限公司 | A security incident early warning method and system based on alarm data analysis |
Also Published As
Publication number | Publication date |
---|---|
CN103886344B (en) | 2017-07-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103886344B (en) | An image-type fire flame recognition method | |
CN103871029B (en) | An image enhancement and segmentation method | |
CN103942557B (en) | An underground coal mine image preprocessing method | |
CN111126136B (en) | A Quantification Method of Smoke Concentration Based on Image Recognition | |
CN112069975B (en) | Comprehensive flame detection method based on ultraviolet, infrared and vision | |
CN113537099B (en) | Dynamic detection method for fire smoke in highway tunnel | |
Zhao et al. | SVM based forest fire detection using static and dynamic features | |
CN102332092B (en) | Flame detection method based on video analysis | |
CN108229458A (en) | An intelligent flame recognition method based on motion detection and multi-feature extraction | |
CN104809463B (en) | A High Accuracy Fire Flame Detection Method Based on Dense Scale Invariant Feature Transformation Dictionary Learning | |
CN108319964A (en) | A fire image recognition method based on composite features and manifold learning | |
CN101770644A (en) | A smoke and flame identification method for remote forest-fire video monitoring | |
CN109684922A (en) | A multi-model recognition method for finished dishes based on convolutional neural networks | |
CN102682303A (en) | Crowd exceptional event detection method based on LBP (Local Binary Pattern) weighted social force model | |
CN108921215A (en) | A smoke detection method based on a local-extremum co-occurrence model and energy analysis | |
CN105279485B (en) | A detection method for abnormal behavior of monitored targets under laser night vision | |
CN110309781A (en) | Remote sensing recognition method for house damage based on multi-scale spectral texture adaptive fusion | |
CN107909027A (en) | A fast human target detection method with occlusion handling | |
CN102496016A (en) | Infrared target detection method based on space-time cooperation framework | |
CN110334660A (en) | A forest fire monitoring method based on machine vision under foggy conditions | |
CN109034066A (en) | Building identification method based on multi-feature fusion | |
CN102169631A (en) | Manifold-learning-based traffic jam event cooperative detecting method | |
CN109886267A (en) | A saliency detection method for low-contrast images based on optimal feature selection | |
CN114495170A (en) | A method and system for pedestrian re-identification based on local suppression of self-attention | |
Xiao et al. | Traffic sign detection based on histograms of oriented gradients and boolean convolutional neural networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Effective date of registration: 20210125
Address after: 718, Block A, Haixing City Square, Keji Road, High-tech Zone, Xi'an City, Shaanxi Province, 710077
Patentee after: Xi'an zhicaiquan Technology Transfer Center Co.,Ltd.
Address before: No. 58, Middle Section, Yanta Road, Xi'an, Shaanxi, 710054
Patentee before: Xi'an University of Science and Technology
TR01 | Transfer of patent right | ||
Effective date of registration: 20211102
Address after: Room 308, Building 3, Dongying Software Park, No. 228, Nanyi Road, Development Zone, Dongying City, Shandong Province, 257000
Patentee after: Dongkai Shuke (Shandong) Industrial Park Co.,Ltd.
Address before: 718, Block A, Haixing City Square, Keji Road, High-tech Zone, Xi'an City, Shaanxi Province, 710077
Patentee before: Xi'an zhicaiquan Technology Transfer Center Co.,Ltd.