CN110648490A - Multi-factor flame identification method suitable for embedded platform - Google Patents
Multi-factor flame identification method suitable for embedded platform

- Publication number: CN110648490A (application number CN201910916354.3A)
- Authority: CN (China)
- Prior art keywords: fire, information, frames, quasi, fire information
- Prior art date: 2019-09-26
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G08B17/125 (G—Physics; G08—Signalling; G08B—Signalling or calling systems; order telegraphs; alarm systems; G08B17/00—Fire alarms; alarms responsive to explosion; G08B17/12—Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions): by using a video camera to detect fire or smoke
- G06T7/254 (G—Physics; G06—Computing; calculating or counting; G06T—Image data processing or generation, in general; G06T7/00—Image analysis; G06T7/20—Analysis of motion): analysis of motion involving subtraction of images
- G06V20/52 (G—Physics; G06—Computing; calculating or counting; G06V—Image or video recognition or understanding; G06V20/00—Scenes; scene-specific elements; G06V20/50—Context or environment of the image): surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06T2207/10016 (G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/10—Image acquisition modality): video; image sequence
- G06T2207/30232 (G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/30—Subject of image; context of image processing): surveillance
Abstract
The present application discloses a multi-factor flame identification method suitable for embedded platforms. The method includes: establishing a fire sample library drawn from online fire images and combustion-experiment images; acquiring live video frames from multiple sites; extracting the moving objects in each site's live video frames to obtain one or more quasi-fire regions; performing fire confirmation on the one or more quasi-fire regions against the fire sample library to judge whether fire information is present in them; and, if so, grading the fire information by fire level and generating the corresponding alarm signal according to the assigned fire level. By processing the acquired live video frames separately, the method analyzes in real time whether the monitored scene contains fire information that may develop into a fire, performs a second fire confirmation to further verify that the fire information is real, and grades confirmed fire information by level while raising an alarm, so that fire information is identified more accurately.
Description
Technical Field
The present application relates to the technical field of electronic intelligent fire protection, and in particular to a multi-factor flame identification method suitable for embedded platforms.
Background
At present there are three mainstream approaches to fire identification. The first uses traditional fire detection sensors to detect fire information; it generally suffers from long detection times and low accuracy. The second uses image recognition: traditional digital image processing in which the feature dimensions of fire are set manually, that is, several representative hand-designed features are used to characterize fire information. Because hand-crafted features can capture only a limited range of fire characteristics, fire information in different scenes or against different backgrounds cannot be represented properly, so this approach generally shows a high false-alarm rate and low robustness. The third uses deep learning to learn fire features from sample images automatically, replacing hand-designed feature dimensions and improving both robustness and accuracy. Figure 1 shows a conventional fire identification system: with a general-purpose deep learning approach, the live video collected by multiple video acquisition terminals is forwarded through a switch to a back-end server, which performs the computation centrally, resulting in an enormous computational load.
Summary of the Invention
In view of the deficiencies of the prior art, the present application provides a multi-factor flame identification method suitable for embedded platforms.
A multi-factor flame identification method suitable for an embedded platform disclosed in this application includes:

establishing a fire sample library, the fire sample library coming from online fire images and combustion-experiment images;

separately acquiring live video frames from multiple sites;

separately extracting the moving objects in each site's live video frames to obtain one or more quasi-fire regions;

performing fire confirmation on the one or more quasi-fire regions against the fire sample library, and judging whether fire information is present in the one or more quasi-fire regions;

if so, grading the fire information by fire level and generating the corresponding alarm signal according to the assigned fire level.
According to an embodiment of the present application, performing fire confirmation on the one or more quasi-fire regions against the fire sample library and judging whether fire information is present in them includes:

detecting the one or more quasi-fire regions with the BP neural network algorithm, the SSD algorithm, and the Yolo algorithm respectively, where, for detection with the BP neural network algorithm, online fire images and combustion-experiment images are drawn from the fire sample library to form the BP neural network training set;

outputting a fire confidence from each detection;

judging whether fire information is present according to the fire confidences.
According to an embodiment of the present application, when the BP neural network algorithm is used to detect the one or more quasi-fire regions, at least half of the online fire images and at least half of the combustion-experiment images contained in the fire sample library are drawn to form the BP neural network training set.
According to an embodiment of the present application, judging whether fire information is present according to the fire confidence includes: when detecting with the BP neural network algorithm, outputting the BP network fire confidence P, P ∈ [0,1], and judging whether fire information is present according to the fire confidence P.
According to an embodiment of the present application, the SSD algorithm and the Yolo algorithm are used respectively to detect the fire information, outputting the SSD fire confidence P_A and the Yolo fire confidence P_B. If P > 0.8 and P_A > 0.8, the fire information is large-fire information; if P > 0.6 and P_B > 0.7, the fire information is small-fire information.
According to an embodiment of the present application, grading the fire information by fire level and generating the corresponding alarm signal includes: if the fire information is judged to be large-fire information, acquiring between 30 and 60 consecutive live video frames; when large-fire information appears in all of those 30 to 60 consecutive live video frames, classifying the fire information as a level-1 fire warning and generating a level-1 alarm signal.

According to an embodiment of the present application, grading the fire information by fire level and generating the corresponding alarm signal includes: if the fire information is judged to be large-fire information, acquiring between 60 and 90 consecutive live video frames; when large-fire information appears in all of those 60 to 90 consecutive live video frames, classifying the fire information as a level-2 fire warning and generating a level-2 alarm signal.

According to an embodiment of the present application, grading the fire information by fire level and generating the corresponding alarm signal includes: if the fire information is judged to be large-fire information, acquiring more than 90 consecutive live video frames; when large-fire information appears in all of those more than 90 consecutive live video frames, classifying the fire information as a level-3 fire warning and generating a level-3 alarm signal.

According to an embodiment of the present application, grading the fire information by fire level and generating the corresponding alarm signal includes: if the fire information is judged to be small-fire information, acquiring between 15 and 30 consecutive live video frames; when small-fire information appears in all of those 15 to 30 consecutive live video frames, classifying the fire information as a level-0 fire warning and generating a level-0 alarm signal.
According to an embodiment of the present application, extracting the moving objects in the video frames to obtain the quasi-fire regions includes:

performing background modeling on the acquired live video frames with a Gaussian mixture model;

updating the parameters of the Gaussian mixture model to obtain a background image;

subtracting the obtained background image from the live video frame to extract the moving objects in the live video frame and obtain the quasi-fire regions, as sketched below.
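The three steps above can be illustrated with a minimal Python sketch built on OpenCV's stock Gaussian-mixture background subtractor (MOG2), assuming OpenCV 4-style bindings. This is only a stand-in under stated assumptions, not the disclosed implementation: the patent describes its own mixture model and parameter updates, and the video source path and area threshold here are placeholders.

```python
import cv2

cap = cv2.VideoCapture("live_feed.mp4")            # placeholder video source
subtractor = cv2.createBackgroundSubtractorMOG2()  # background model, updated every frame

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)              # frame minus the modeled background
    # Connected regions of the foreground mask are the moving objects,
    # i.e. the quasi-fire regions handed to the confirmation stage.
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    quasi_fire_regions = [cv2.boundingRect(c) for c in contours
                          if cv2.contourArea(c) > 100]  # placeholder area threshold
cap.release()
```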
In the multi-factor flame identification method of the present application suitable for embedded platforms, after live video frames are acquired from multiple sites, each site's frames are processed separately: the moving objects in each site's live video frames are extracted separately, and fire confirmation is then performed separately. That is, a distributed processing scheme is adopted, avoiding the enormous computational load of centralized processing. At the same time, by processing the acquired live video frames, the method analyzes in real time whether the monitored scene contains fire information that may develop into a fire, obtains quasi-fire regions, performs fire confirmation on those regions, and further judges whether fire information is actually present in them; if fire information is confirmed, it is graded by fire level and an alarm is raised. Moreover, when the method is applied to an existing intelligent fire alarm system, the combination can solve the problem of detecting fire information in large-space fire scenes and enlarge the detection range; compared with traditional sensor-type fire detectors, the method offers shorter detection times and higher accuracy, and the acquired live video frames can also be stored, which facilitates subsequent investigation and evidence collection at the fire scene.
Brief Description of the Drawings
The drawings described here are provided for further understanding of the present application and form a part of it. The schematic embodiments and their descriptions are used to explain the present application and do not constitute an improper limitation of it. In the drawings:
Figure 1 is a diagram of an existing fire identification system;

Figure 2 is a diagram of the multi-factor flame identification system for an embedded platform in the embodiment;

Figure 3 is the fire information identification flowchart in the embodiment;

Figure 4 is an image of the standard normal distribution in the embodiment;

Figure 5 is a schematic diagram of the process of extracting a moving object from the current live video frame with the Gaussian mixture model in the embodiment;

Figure 6 is the fire level grading flowchart in the embodiment.
Detailed Description of the Embodiments
Various embodiments of the present application are disclosed in the drawings below; for clarity, many practical details are explained together in the following description. It should be understood, however, that these practical details should not be used to limit the application; that is, in some embodiments of the present application these practical details are unnecessary. In addition, to simplify the drawings, some well-known structures and components are drawn in a simple schematic manner.
In addition, descriptions such as "first" and "second" in this application are for descriptive purposes only; they do not specifically denote an order or sequence, nor do they limit the application. They merely distinguish components or operations described with the same technical terms and are not to be understood as indicating or implying relative importance or implicitly specifying the number of technical features indicated. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. The technical solutions of the various embodiments may be combined with one another, but only on the basis that a person of ordinary skill in the art can realize them; when a combination of technical solutions is contradictory or cannot be realized, that combination should be deemed not to exist, and it falls outside the scope of protection claimed by this application.
The present application provides a multi-factor flame identification method suitable for embedded platforms, comprising four stages: the first stage is fire sample library construction, the second is moving-object extraction, the third is fire region confirmation, and the fourth is fire level grading and early warning. This application also specifically describes how to port the multi-factor flame identification method to an embedded platform, so that it can run on chips similar to the HiSilicon Hi3519A series. The multi-factor flame identification method suitable for embedded platforms is set out in detail below.
Figure 2 is a diagram of one multi-factor flame identification system for an embedded platform according to the present application. The system includes multiple video acquisition terminals, multiple algorithm boxes, a switch, a server, a display terminal, and an alarm terminal. Each video acquisition terminal is communicatively connected to one algorithm box, the algorithm boxes are communicatively connected to the switch, and the switch, the display terminal, and the alarm terminal are each communicatively connected to the server. The video acquisition terminals are installed in the respective monitored areas to capture live video frames of each area and transmit the captured frames to the corresponding algorithm box. Each algorithm box extracts the moving-object regions from the received live video frames to obtain quasi-fire regions and transmits them to the server. A fire sample library is established on the server; the server trains and tests on the images in the library to obtain the algorithm models, performs fire confirmation on the quasi-fire regions with the trained and tested models, and judges whether fire information is present. The server also grades the fire information by fire level and, according to the assigned level, controls the alarm terminal to generate the corresponding alarm signal. The multi-factor flame identification method of the present application is described in detail below.
In the multi-factor flame identification method of the present application, a fire sample library is established before fire identification is performed, so that the images in the library can be used to train and test the BP neural network algorithm and obtain the BP neural network model. To make fire confirmation more accurate, this example builds the library by combining online images with real-life fire images, so that the library contains both online fire images sourced from the Internet and combustion-experiment images obtained from real-life burning experiments. The online fire images and combustion-experiment images are collected via the Internet and via physical simulation respectively, and the library contains at least 100,000 images in total. The online fire images were obtained by crawling the Baidu and Google search engines and by downloading and screening 10,000 images from international open-source databases; the collection was made as varied as possible, covering fire images from different scenes, and totals 50,000 online images. For the combustion-experiment images, to make the library more comprehensive, varied, and realistic, the experiments combined different scenes (indoor and outdoor), different burning materials (beech wood, plastic, waste paper, fabric, and natural gas), and different interference sources (sunlight, incandescent lamps, mosquito coils, cigarettes, and yellow/red objects), drawing fully on scenes common in real life. Through a large number of burning experiments, 50,000 real-scene sample images were collected to form the combustion-experiment images. Together, the online fire images and the combustion-experiment images form a fire sample library of no fewer than 100,000 images. Table 1 below shows the composition of the images in the fire sample library.
Table 1. Sources of the images in the fire sample library
After the fire sample library has been built and its images tested and trained to obtain the BP neural network model, fire information identification begins. Please refer to Figure 3, the fire information identification flowchart. The video acquisition terminals capture live video frames and transmit them to their corresponding communicatively connected algorithm boxes; each algorithm box takes its frames and extracts the moving objects in them to obtain quasi-fire regions. To identify fire information in a live video frame, the region where the fire is occurring, that is, the fire region, must be extracted. Once a fire has started, both the fire region and the background image keep moving as the fire develops and as ambient air currents act on the scene. Therefore, to identify fire information, the moving objects in an acquired live video frame must first be extracted; the extracted moving objects make up the quasi-fire regions. Because the quasi-fire regions contain, besides the burning region, a large number of non-fire moving objects, accurate confirmation of fire information further requires rejecting the non-fire moving objects from the quasi-fire regions; fire confirmation can then be performed to judge whether the quasi-fire regions contain fire information. In this example, the moving objects in the live video frame are extracted first to obtain the quasi-fire regions, and fire confirmation follows, mainly for two reasons. 1. During the development of a fire, the fire region and parts of the background are necessarily in motion. Suppose the quasi-fire region composed of the moving objects extracted from a live video frame is M; the quasi-fire region necessarily includes the true fire region N plus some moving background regions, so M contains N. That is, extracting the moving objects screens out candidate quasi-fire regions, which makes a second confirmation of the fire region convenient: the moving background portions are rejected to obtain the true fire region. 2. After the quasi-fire region is extracted from the live video frame, it becomes the object of study, which reduces the number of pixels of the corresponding image in the frame and therefore the area that must be processed; this greatly benefits the algorithm's performance and reduces the computational load.
There are two common classes of specific methods for extracting moving objects from live video frames. The first class is differencing, specifically background differencing and inter-frame differencing. Both subtract two different frames and treat the differenced result as the moving object; the difference is that inter-frame differencing subtracts adjacent live video frames, while background differencing subtracts a background image from the current live video frame, so the construction of the background image directly affects the extraction of moving objects. Building the background image falls broadly into two categories. The first keeps the background image fixed and differences the current frame against it to obtain the moving objects, usually taking the first live video frame as the background image. In practice, however, the background usually changes. For example, a moving object may originally be part of the background; if the background image never changes, that object will be treated as background, and the extracted moving objects will be unsatisfactory. Likewise, in real life the background changes slowly under natural factors (such as illumination and wind), and the background image should follow these changes; if it does not, its error relative to the actual background grows over time, which also introduces large errors into the extracted moving objects. Both differencing schemes are sketched below.
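A minimal Python/OpenCV sketch of the two differencing schemes just described follows. The function names and the binarization threshold are illustrative assumptions, and frames are assumed to be single-channel grayscale images.

```python
import cv2
import numpy as np

def inter_frame_difference(prev_frame: np.ndarray, curr_frame: np.ndarray,
                           thresh: int = 25) -> np.ndarray:
    """Motion mask from two adjacent frames (inter-frame differencing)."""
    diff = cv2.absdiff(curr_frame, prev_frame)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask

def background_difference(background: np.ndarray, curr_frame: np.ndarray,
                          thresh: int = 25) -> np.ndarray:
    """Motion mask from the current frame and a background image (background differencing)."""
    diff = cv2.absdiff(curr_frame, background)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask
```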
In the second category of background construction, the background image changes slowly as the environment changes, keeping its error relative to the actual background small. To obtain an adaptive background image, a background modeling algorithm is usually used. Common modeling algorithms fall roughly into two kinds. The first kind stores the live video frames preceding the current moment, takes the newly appearing data in these stored frames as samples, and adds the samples into the background image according to certain rules; examples are median background modeling and mean background modeling. Median background modeling takes the median of the pixel values at each position across the stored live video frames and uses it as the pixel value at the corresponding position of the background image, while mean background modeling averages the pixel values at each position across the frames and uses the average as the background of the current frame. These methods give fairly good results, but because live video frames must be stored over a period of time as samples, they burden server memory, increase the amount of computation, and place relatively high demands on the hardware. The second kind of modeling algorithm overcomes these drawbacks: it does not store live video frames as samples but updates the existing background image recursively from the current frame, as in the Kalman filter model, the single Gaussian model, and the Gaussian mixture model. After repeated experimental comparison, this example adopts the Gaussian mixture model. How the Gaussian mixture model is used to extract the moving-object regions and obtain the quasi-fire regions is described in detail below.
In this example, extracting the moving objects in the video frames to obtain the quasi-fire regions includes: performing background modeling on the acquired live video frames with the Gaussian mixture modeling method; updating the parameters of the Gaussian mixture model to obtain a background image; and subtracting the obtained background image from the live video frame to extract the moving objects in the live video frame and obtain the quasi-fire regions.
For a Gaussian distribution, if a random variable X follows a Gaussian distribution with mathematical expectation μ and variance σ², written N(μ, σ²), its probability density function is

$f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$

The expected value μ of the normal distribution determines its location, and its standard deviation σ determines the spread of the distribution. The standard normal distribution usually referred to is the normal distribution with μ = 0 and σ = 1; Figure 4 shows an image of the standard normal distribution.
When no moving object is present in the scene, statistics of the pixel values at the same position at different times show that these values follow a single Gaussian distribution. In a real environment, however, external factors such as illumination and wind usually mean that a single Gaussian distribution can no longer account for the distribution of pixel values; several Gaussian distributions combined with different weights can then describe the statistics of the pixel values at one position, which is the Gaussian mixture model referred to in this example. The more Gaussian components there are, the more complex the background that can be described and the higher the accuracy, but the price is a larger amount of computation. To achieve satisfactory results while keeping computer hardware requirements in mind, 3 to 5 Gaussian components are generally appropriate in engineering practice.
Suppose the value of pixel i in the image at time t is $x_{it}$; then its probability density function is

$P(x_{it}) = \sum_{j=1}^{k} w_{jt}\,\eta\!\left(x_{it};\, u_{jt}, \sigma_{jt}^{2}\right)$

where $w_{jt}$ is the weight of the j-th Gaussian component of pixel i at time t (the larger the weight, the closer that Gaussian component is to the current pixel value), k is the number of Gaussian components used, and $\sum_{j=1}^{k} w_{jt} = 1$, that is, the weights of all Gaussian components used to model one pixel sum to 1. $\eta(x_{it};\, u_{jt}, \sigma_{jt}^{2})$ denotes the j-th single Gaussian component describing pixel i at time t, $u_{jt}$ its mean, and $\sigma_{jt}^{2}$ its variance. In the Gaussian mixture modeling algorithm, the desired effect is achieved mainly by adjusting the values of the mean and the variance, so the method for updating these two parameters is very important; the specific update method is introduced below. When the Gaussian mixture model is used for background modeling, the k single Gaussian components describing the same pixel must be sorted by their similarity to the current pixel: the larger the weight w, the higher the similarity to the current pixel, and the smaller σ, the smaller and more stable the variation of that group of pixel values. The ratio $w/\sigma$ can therefore describe this similarity: the larger its value, the higher the similarity and the more likely the component belongs to the background image. The Gaussian components are sorted in descending order of $w/\sigma$. Generally, moving objects have low similarity to the Gaussian components, while background pixels, which change little, have high similarity. A threshold T can therefore be defined: if the sum of the weights of the first d Gaussian components is just greater than or equal to T, the first d components are used as the background subset and the remaining k−d components as the foreground motion subset. The value of T directly affects the quality of the extracted motion foreground: the smaller T, the smaller d and the more limited the subset used to describe the background image, so T is generally taken as 0.75.
Next, the method for updating the parameters of the Gaussian mixture model is described in detail, so that the background image can be identified accurately from the updates. Before the parameters are updated, it is necessary to determine which Gaussian component the pixel most resembles. In general, the pixel is considered to match a component if $|x_{it} - u_{it}| \le \lambda\sigma$, where λ is the matching threshold, generally taken as 2.5. If $x_{it}$ matches the i-th Gaussian component, the parameters of that component are updated by the following equations:
$w_{i,t+1} = (1-\alpha)\,w_{i,t} + \alpha M_{i,t}$

$\rho_{i,t} = \alpha\,\eta\!\left(x_{i,t};\, u_{i,t}, \sigma^{2}\right)$

$u_{i,t+1} = (1-\rho_{i,t})\,u_{i,t} + \rho_{i,t}\,x_{i,t}$
Apart from the parameters of the matched Gaussian component, the other Gaussian components remain unchanged. Although the Gaussian mixture model is complex and computationally heavier, the extracted moving objects are of good quality, so it is widely used. Figure 5 is a schematic diagram of the process of extracting moving objects from the current live video frame with the Gaussian mixture model. The sample live video frames are background-modeled by the Gaussian mixture background modeling method, and the current live video frame is subtracted from the current background image to obtain the moving-object foreground. That is, subtracting the obtained background image from the live video frame extracts the moving objects in the frame and yields the quasi-fire regions: after a live video frame is obtained, the background image is obtained by Gaussian mixture modeling, and subtracting the background image from the live video frame gives the moving objects in the current frame.
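As a concrete illustration of the model and updates above, here is a minimal single-pixel Python/NumPy sketch. It is a sketch under stated assumptions rather than the patent's implementation: the component count k = 3, the learning rate α, and the initial means and variances are illustrative, and the variance update line follows the standard Gaussian-mixture background scheme even though only the weight and mean updates are written out above.

```python
import numpy as np

K, ALPHA, T, LAM = 3, 0.01, 0.75, 2.5   # assumed k and alpha; T and lambda from the text

w = np.full(K, 1.0 / K)                 # component weights w_j, summing to 1
mu = np.array([0.0, 128.0, 255.0])      # component means u_j (grayscale pixel values)
var = np.full(K, 225.0)                 # component variances sigma_j^2

def update(x: float) -> bool:
    """Update the per-pixel mixture with new value x; return True if x is background."""
    sigma = np.sqrt(var)
    matched = np.abs(x - mu) <= LAM * sigma        # matching test |x - u| <= 2.5*sigma
    M = np.zeros(K)
    if matched.any():
        # best match: the matching component closest to x in units of its own sigma
        j = int(np.argmin(np.where(matched, np.abs(x - mu) / sigma, np.inf)))
        M[j] = 1.0
        g = np.exp(-(x - mu[j]) ** 2 / (2 * var[j])) / np.sqrt(2 * np.pi * var[j])
        rho = ALPHA * g                            # rho = alpha * eta(x; u, sigma^2)
        mu[j] = (1 - rho) * mu[j] + rho * x        # mean update
        var[j] = (1 - rho) * var[j] + rho * (x - mu[j]) ** 2  # variance update (assumption)
    else:
        j = int(np.argmin(w / sigma))              # replace the least probable component
        mu[j], var[j] = x, 225.0
    w[:] = (1 - ALPHA) * w + ALPHA * M             # weight update
    w[:] = w / w.sum()
    order = np.argsort(-(w / np.sqrt(var)))        # sort components by w/sigma, descending
    d = int(np.searchsorted(np.cumsum(w[order]), T)) + 1
    background = order[:d]                         # first d components form the background subset
    return bool(matched[background].any())
```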
Please refer again to Figure 3. After the quasi-fire regions are obtained, fire confirmation must be performed on them to judge whether they contain fire information, that is, fire region confirmation: a second confirmation of the quasi-fire regions screened out by the moving-object extraction algorithm above. In this example, three artificial intelligence algorithms are used together for this confirmation: a BP neural network algorithm based on hand-crafted feature engineering, the SSD algorithm based on a deep convolutional neural network, and the Yolo algorithm based on a deep-learning convolutional neural network. Confirming the quasi-fire regions establishes whether the current live video frame contains fire information. That is, in this example the BP neural network algorithm, the SSD algorithm, and the Yolo algorithm are used respectively to detect the one or more quasi-fire regions; for detection with the BP neural network algorithm, online fire images and combustion-experiment images are drawn from the fire sample library to form the BP neural network training set; each detection outputs a fire confidence; and whether fire information is present is judged from the fire confidences. How the BP neural network algorithm, the SSD algorithm, and the Yolo algorithm are used for fire confirmation is explained in detail below.
A) The BP neural network algorithm based on hand-crafted feature engineering extracts three flame features (the regional curvature of the flame, the regional diffusion rate of the flame, and the rate of change of the flame's sharp corners) and uses these three features as the inputs of the BP neural network. As shown in Table 2, the BP neural network training set randomly draws at least 50% of the images from the fire sample library composed of online fire images and combustion-experiment images; when drawing at least 50%, at least half of the online fire images and at least half of the combustion-experiment images are drawn. For the BP neural network itself, a four-layer network was designed: an input layer of 3 units, two hidden layers (10 units each), and an output layer of one unit that outputs the BP network fire confidence P, P ∈ [0,1], where P = 0 means no fire information and P = 1 means fire information is present; the larger P, the higher the confidence. Extensive engineering experiments show that the fire confidence P output by the BP neural network rarely misses fires, with the miss probability stable at 0.01%, but false alarms do occur, with the false-alarm probability stable at 2%. In this application the judgment of the BP neural network algorithm is only one factor among several; its extremely low miss probability is exploited by using it as an auxiliary judgment module.
Table 2. Composition of the neural network training set

B) Large-target detection based on the SSD algorithm and small-target detection based on the Yolo algorithm. The SSD and Yolo algorithms are both target detection algorithms based on deep convolutional neural networks. In this application, extensive comparative experiments show that the SSD algorithm is sensitive to large fire targets but tends to overlook small fire targets (flames in the smoldering stage or just after ignition), whereas the Yolo algorithm is exactly the opposite: sensitive to small fire targets but prone to missing large fire targets. This application therefore uses the SSD algorithm for large-target detection and the Yolo algorithm for small-target detection, and finally integrates the SSD fire confidence P_A and the Yolo fire confidence P_B with the BP network fire confidence: when P > 0.8 and P_A > 0.8, large-fire information is concluded; when P > 0.6 and P_B > 0.7, small-fire information is concluded, that is, the ignition stage. The threshold on P depends on which of the two variables P_A and P_B it is combined with: 0.8 when judging large-fire information and 0.6 when judging small-fire information. The thresholds P_A = 0.8 and P_B = 0.7 were both obtained from engineering experiments. A sketch of this decision fusion follows.
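The following is a minimal Python sketch of the multi-factor decision just described. It assumes the three detectors have already produced their confidences for a quasi-fire region (p from the BP network, p_a from SSD, p_b from Yolo); only the threshold values come from the text above, and the function name is illustrative.

```python
from typing import Optional

def confirm_fire(p: float, p_a: float, p_b: float) -> Optional[str]:
    """Fuse BP (p), SSD (p_a) and Yolo (p_b) confidences into a fire judgment.

    Returns "large" for large-fire information, "small" for small-fire
    (ignition-stage) information, or None when no fire information is confirmed.
    """
    if p > 0.8 and p_a > 0.8:
        return "large"
    if p > 0.6 and p_b > 0.7:
        return "small"
    return None
```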
After the fire information has been confirmed, it is graded by fire level, and the corresponding alarm signal is generated according to that level. In this example the BP neural network, SSD, and Yolo algorithms are integrated as multiple factors that jointly decide whether fire information is present; the fire information is further distinguished into large fire targets and small fire targets, and at this stage the large and small fire targets are graded by level.
As shown in Figure 6, the fire level grading flowchart, four fire levels are designed in this example. If the fire information is judged to be small-fire information, between 15 and 30 consecutive live video frames are acquired; when small-fire information appears in all of those 15 to 30 consecutive frames, a level-0 fire warning is generated. This level corresponds to the ignition stage, which has not yet caused loss of property or life. When the fire information is judged to be large-fire information, between 30 and 60 consecutive live video frames are acquired; when large-fire information appears in all of those 30 to 60 consecutive frames, a level-1 fire warning is generated, corresponding to the early development of the fire. When the fire information is judged to be large-fire information, between 60 and 90 consecutive live video frames are acquired; when large-fire information appears in all of those 60 to 90 consecutive frames, a level-2 fire warning is generated, corresponding to the rapid development period of the fire. When the fire information is judged to be large-fire information, more than 90 consecutive live video frames are acquired; when large-fire information appears in all of those more than 90 consecutive frames, a level-3 fire warning is generated; reaching this level means a degree of property damage has already occurred and emergency fire rescue is required. The grading rule is sketched below.
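A minimal Python sketch of this grading rule follows. The function signature is illustrative (it assumes a fire label from the confirmation step and a count of consecutive frames in which that label appeared), and the boundary handling of the frame ranges is an assumption, since the text gives them as open intervals.

```python
from typing import Optional

def fire_level(label: str, consecutive_frames: int) -> Optional[int]:
    """Map a confirmed fire label and its consecutive-frame count to a warning level.

    Returns 0-3 for the four warning levels, or None when no level is reached yet.
    """
    if label == "small" and 15 <= consecutive_frames < 30:
        return 0          # ignition stage
    if label == "large":
        if consecutive_frames > 90:
            return 3      # emergency fire rescue required
        if consecutive_frames >= 60:
            return 2      # rapid development period
        if consecutive_frames >= 30:
            return 1      # early development of the fire
    return None
```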
When the multi-factor flame identification method of the present application is ported to run on a HiSilicon Hi3519A-series chip, the porting steps include setting up NFS, connecting the HiSilicon development board via a serial port, installing the HiSilicon cross-compiler, cross-compiling opencv 3.4.5, cross-compiling ncnn, and building the project, outputting a static library based on the arm architecture. Setting up NFS includes installing the NFS service, writing the configuration file, and restarting the NFS service. Connecting the development board via serial port generally involves attaching the serial cable to the board and installing the driver manually or automatically, checking the port number under Computer → Manage → Device Manager → Ports, accessing the serial port on the PC with SecureCRT at a baud rate of 115200, and connecting the board to the virtual machine with a shared directory; if the board has not been assigned an IP address, one can be configured manually. When installing the HiSilicon cross-compiler, the host machine is 64-bit while the cross-compiler targets a 32-bit development board, so dependency packages must be added first; the installation script under the installation package is then executed directly, after which the installation is tested for success.
Compared with traditional fire identification, the multi-factor flame identification method of this example suitable for embedded platforms has the following advantages:
1) When the BP neural network algorithm is used for fire information identification, the fire sample library behind the BP neural network training set contains on the order of 100,000 images, combining online images with real-scene data that accounts for 50%, so the images are varied and plentiful. The fire image sample libraries used by existing schemes are usually crawled from the Internet only; the crawled fire image samples cover rather uniform scenes, mostly severe fire scenes, and lack sample data from the ignition or smoldering stages. Moreover, those libraries are relatively small (within 10,000 images), which is not comprehensive enough for recognition by algorithm models or for studying fire characteristics, leaving the algorithms with weak robustness and transferability: they perform very well on the test set but have a low recognition rate in real scenes. This application collects images through three channels, the Internet, international open-source libraries, and burning experiments, to build an ideal training set that is very large and covers scenes common in daily life, providing an important guarantee for the training effect of the algorithms.
2) Combining the BP neural network algorithm based on hand-designed feature engineering with the SSD and YOLO algorithms based on deep convolutional neural networks forms a multi-factor decision scheme in which fire information is judged jointly, giving strong anti-interference capability and strong robustness. Current mainstream schemes use traditional digital image processing, and the feature dimensions set manually during feature engineering can hardly characterize all fires; for example, the diffusion rate of flames differs considerably across the stages of a fire's development, leaving the algorithms weak against interference and easily affected by strong light, weak light, and especially lighting at night. The multi-factor flame identification method of this application integrates a traditional digital image processing scheme with deep-learning-based target detection schemes to form a multi-factor fire identification scheme, and designs different algorithms for large fire targets and small fire targets for scene-specific recognition, greatly improving the anti-interference capability of fire identification; accuracy is also well guaranteed, with experiments showing it can be held stable at 99.5%.
3) The algorithm of the present application has low complexity, consumes fewer computing resources, and is suited to running on terminal platforms. Deep-learning fire detection algorithms are considerably more accurate than traditional digital image processing algorithms, but the gain comes at the expense of performance: mainstream deep-learning fire identification models are trained with general-purpose object detection algorithms whose neuron counts are huge, so recognition usually has to run in the background on high-performance CPU or GPU servers. The video streams of multiple cameras must then be fed into a single algorithm server for unified processing, which first means the server handles many video signals simultaneously, under enormous computational pressure and at high hardware cost, and second means server processing time plus signal transmission time leads to poor real-time performance. The present application first shrinks the live video frame and extracts moving targets to obtain quasi-fire regions, reducing the computation in the fire detection stage (see the second sketch after this list). In addition, the algorithm is converted to the lightweight ncnn framework and ported to the ARM architecture, so that it can run on ARM-based embedded terminal platforms, improving the algorithm's applicability and lowering hardware cost.
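As referenced in advantage 2), the following is a minimal sketch of the multi-factor joint decision idea, assuming each factor yields a per-region confidence in [0, 1]; the 0.5 threshold and the two-of-three voting rule are illustrative choices, not the decision logic claimed by this application.

```cpp
// Multi-factor joint decision sketch (illustrative thresholds and voting rule).
struct FactorScores {
    float bp;    // BP network score on hand-crafted color/dynamic features
    float ssd;   // SSD confidence for the candidate region
    float yolo;  // YOLO confidence for the candidate region
};

// Report fire only when at least two of the three factors agree, so a single
// detector misled by strong light or night-time lamps cannot raise an alarm.
bool isFire(const FactorScores& s, float thresh = 0.5f) {
    int votes = (s.bp > thresh) + (s.ssd > thresh) + (s.yolo > thresh);
    return votes >= 2;
}
```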
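As referenced in advantage 3), the following is a minimal sketch of the pre-filtering step, assuming OpenCV's MOG2 background subtractor stands in for the motion extraction; the 1/4 scale factor, the foreground threshold, and the minimum-area filter are illustrative assumptions, not the parameters of this application.

```cpp
// Quasi-fire region extraction sketch: shrink the frame, keep moving blobs.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Rect> quasiFireRegions(
        const cv::Mat& frame, cv::Ptr<cv::BackgroundSubtractorMOG2>& bg) {
    cv::Mat shrunk, mask;
    cv::resize(frame, shrunk, cv::Size(), 0.25, 0.25);  // cut per-frame compute
    bg->apply(shrunk, mask);                            // moving-object mask
    cv::threshold(mask, mask, 200, 255, cv::THRESH_BINARY);  // drop MOG2 shadows (127)

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Rect> regions;
    for (const auto& c : contours) {
        cv::Rect r = cv::boundingRect(c);
        if (r.area() > 50)                              // drop tiny noise blobs
            regions.push_back(r);
    }
    return regions;  // rectangles are in the downscaled coordinate frame
}
```

A caller would create the subtractor once with cv::createBackgroundSubtractorMOG2() and feed consecutive frames, passing only the returned rectangles to the heavier detectors.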
The above description is merely an embodiment of the present application and is not intended to limit it. Various modifications and variations of the present application are possible for those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall be included within the scope of the claims of the present application.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910916354.3A CN110648490B (en) | 2019-09-26 | 2019-09-26 | A multi-factor flame identification method suitable for embedded platforms |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910916354.3A CN110648490B (en) | 2019-09-26 | 2019-09-26 | A multi-factor flame identification method suitable for embedded platforms |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110648490A true CN110648490A (en) | 2020-01-03 |
CN110648490B CN110648490B (en) | 2021-07-27 |
Family
ID=69011420
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910916354.3A Active CN110648490B (en) | 2019-09-26 | 2019-09-26 | A multi-factor flame identification method suitable for embedded platforms |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110648490B (en) |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001356047A (en) * | 2000-06-14 | 2001-12-26 | Hochiki Corp | Flame detection device and detection sensitivity setting method thereof |
US20120179421A1 (en) * | 2010-12-07 | 2012-07-12 | Gautam Dasgupta | Emergency Response Management Apparatuses, Methods and Systems |
CN103150856A (en) * | 2013-02-28 | 2013-06-12 | 江苏润仪仪表有限公司 | Fire flame video monitoring and early warning system and fire flame detection method |
US20170363475A1 (en) * | 2014-01-23 | 2017-12-21 | General Monitors, Inc. | Multi-spectral flame detector with radiant energy estimation |
CN105336085A (en) * | 2015-09-02 | 2016-02-17 | 华南师范大学 | Remote large-space fire monitoring alarm method based on image processing technology |
CN107862287A (en) * | 2017-11-08 | 2018-03-30 | 吉林大学 | A kind of front zonule object identification and vehicle early warning method |
CN108108695A (en) * | 2017-12-22 | 2018-06-01 | 湖南源信光电科技股份有限公司 | Fire defector recognition methods based on Infrared video image |
CN110163889A (en) * | 2018-10-15 | 2019-08-23 | 腾讯科技(深圳)有限公司 | Method for tracking target, target tracker, target following equipment |
CN109800802A (en) * | 2019-01-10 | 2019-05-24 | 深圳绿米联创科技有限公司 | Visual sensor and object detecting method and device applied to visual sensor |
CN110378265A (en) * | 2019-07-08 | 2019-10-25 | 创新奇智(成都)科技有限公司 | A kind of incipient fire detection method, computer-readable medium and system |
Non-Patent Citations (2)
Title |
---|
Sun Chen: "Research and Design of Fire Detection Algorithm Based on Video Images", China Masters' Theses Full-text Database, Engineering Science and Technology I * |
Xiong Aimin, Wen Jiawen, He Yuanjing: "Design of a Large-Space Fire Alarm System Based on Image Pattern Recognition Technology", Electronic Science and Technology * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111242053A (en) * | 2020-01-16 | 2020-06-05 | 国网山西省电力公司电力科学研究院 | A kind of transmission line flame detection method and system |
CN111414514A (en) * | 2020-03-19 | 2020-07-14 | 山东雷火网络科技有限公司 | System and method for flame detection based on Shandong Jinnan province |
CN111414514B (en) * | 2020-03-19 | 2024-01-19 | 山东雷火网络科技有限公司 | System and method for flame detection in Shandong Jinan environment |
CN111681385A (en) * | 2020-05-12 | 2020-09-18 | 上海荷福人工智能科技(集团)有限公司 | Fire-fighting classification early-warning algorithm based on artificial intelligence and fire detection system |
CN112150750A (en) * | 2020-08-25 | 2020-12-29 | 航天信德智图(北京)科技有限公司 | Forest fire alarm monitoring system based on edge calculation |
CN112947147A (en) * | 2021-01-27 | 2021-06-11 | 上海大学 | Fire-fighting robot based on multi-sensor and cloud platform algorithm |
CN112907886A (en) * | 2021-02-07 | 2021-06-04 | 中国石油化工股份有限公司 | Refinery plant fire identification method based on convolutional neural network |
CN115376268A (en) * | 2022-10-21 | 2022-11-22 | 山东太平天下智慧科技有限公司 | Monitoring alarm fire-fighting linkage system based on image recognition |
CN117152675A (en) * | 2023-07-21 | 2023-12-01 | 华能(广东)能源开发有限公司汕头电厂 | Burner fire detection methods, devices and equipment |
Also Published As
Publication number | Publication date |
---|---|
CN110648490B (en) | 2021-07-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110648490A (en) | Multi-factor flame identification method suitable for embedded platform | |
CN109785289B (en) | A transmission line defect detection method, system and electronic device | |
CN111126136B (en) | A Quantification Method of Smoke Concentration Based on Image Recognition | |
CN109522819B (en) | Fire image identification method based on deep learning | |
CN106097346B (en) | A self-learning video fire detection method | |
CN108985192A (en) | A kind of video smoke recognition methods based on multitask depth convolutional neural networks | |
CN111754498A (en) | A Conveyor Belt Idler Detection Method Based on YOLOv3 | |
CN110084166A (en) | Substation's smoke and fire intelligent based on deep learning identifies monitoring method | |
CN114155457A (en) | Control method and control device based on flame dynamic identification | |
CN116563762A (en) | Fire detection method, system, medium, equipment and terminal of an oil and gas station | |
CN105975991B (en) | An improved extreme learning machine fire type identification method | |
CN118609303A (en) | Fire prevention early warning method and system for mountain photovoltaic power stations based on visual analysis | |
CN111539325A (en) | Forest fire detection method based on deep learning | |
CN116958643A (en) | An intelligent identification method of airborne pollen-allergenic plants based on YOLO network | |
CN116824346A (en) | Global attention-based detection model training and fire detection method and system | |
CN117274881A (en) | Semi-supervised video fire detection method based on consistency regularization and distribution alignment | |
CN111062350B (en) | Artificial intelligence based firework recognition algorithm | |
Praneash et al. | Forest fire detection using computer vision | |
CN115049986A (en) | Flame detection method and system based on improved YOLOv4 | |
CN118968425A (en) | A tunnel fire abnormal event detection method based on YOLO | |
CN118608498A (en) | Insulator defect detection method, device, terminal, storage medium and program product | |
CN117037054B (en) | Factory smoke detection method based on Gaussian smoke plume model and improved YOLOv4 |
CN118097202A (en) | A mine fire image recognition and fire extinguishing method | |
CN114639043B (en) | A method for detecting electric vehicles and electric vehicle batteries based on TensorRT accelerated reasoning | |
Jiang et al. | Deep learning of qinling forest fire anomaly detection based on genetic algorithm optimization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||