
CN104463253B - Fire exit safety detection method based on adaptive background learning - Google Patents


Info

Publication number
CN104463253B
CN104463253B (application CN201510004803.9A)
Authority
CN
China
Prior art keywords
image
gaussian
frame
pixel
background
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201510004803.9A
Other languages
Chinese (zh)
Other versions
CN104463253A (en)
Inventor
周雪
邹见效
徐红兵
杨武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN201510004803.9A
Publication of CN104463253A
Application granted
Publication of CN104463253B
Status: Expired - Fee Related
Anticipated expiration


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 — Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 — Character recognition
    • G06V30/19 — Recognition using electronic means
    • G06V30/192 — Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
    • G06V30/194 — References adjustable by an adaptive method, e.g. learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a fire exit safety detection method based on adaptive background learning. First, the first N frames of a fire exit surveillance video stream are used as training samples to train a Gaussian mixture model, and a background image is then derived from this model. Background subtraction against the background image extracts the foreground region of each detection image. The HOG feature vectors of the background image and of the detection image within the foreground region are compared to decide whether the region is a true foreground or a pseudo foreground caused by other interference, and hence whether the detection image contains an abnormality. If abnormalities persist for M consecutive frames, the fire exit has a safety problem. By building a Gaussian mixture model of the background, performing HOG-based foreground target detection, and applying an adaptive background update strategy, the invention achieves accurate detection of whether a fire exit is safe.

Description

Fire Exit Safety Detection Method Based on Adaptive Background Learning

Technical Field

The invention belongs to the technical field of computer vision, and more specifically relates to a fire exit safety detection method based on adaptive background learning.

Background Art

A fire exit is a passage used by firefighters to carry out rescues and evacuate trapped people when danger occurs. In an emergency such as a fire, firefighters, fire vehicles, and firefighting equipment must enter through the fire exit. Fire exit safety detection mainly covers two aspects: (1) detection of fire exit blockage, and (2) detection of abnormal opening of normally closed fire doors. If a fire exit is occupied by motor vehicles or other obstructions, firefighters, vehicles, and equipment may fail to reach the scene in time, delaying the rescue and causing huge economic losses and casualties. A fire door is a self-closing door structure with a rebound function: it maintains fire-resistance stability, integrity, and heat insulation for a certain period, and besides serving as an ordinary door it blocks fire, smoke, and high temperature. Only when it is closed can it effectively block thick smoke and flames after a blaze breaks out, buying time for evacuation and rescue. Detecting fire exit blockage and abnormal opening of normally closed fire doors is therefore particularly important.

Traditional fire exit safety inspection relies mainly on manual checks: dedicated staff verify whether exits are blocked and whether normally closed fire doors have been opened abnormally. This method is simple, requires no equipment, and is cheap, but it has two drawbacks: it cannot discover hidden safety hazards in the fire exit in time, and it depends heavily on staff and is highly subjective. Another method collects video and routes it to a monitoring room where dedicated staff watch the feeds. Although more convenient than the first method, reducing the staff workload and allowing centralized monitoring, it still depends heavily on personnel and cannot achieve real-time, automatic detection.

Summary of the Invention

The purpose of the present invention is to overcome the deficiencies of the prior art and propose a fire exit safety detection method based on adaptive background learning. By building a Gaussian mixture model of the background and combining HOG-based foreground target detection with an adaptive background update strategy, the method achieves accurate detection of whether a fire exit is safe.

To achieve the above purpose, the fire exit safety detection method based on adaptive background learning of the present invention comprises the following steps:

S1: Take the first N frames of the fire exit surveillance video stream as training samples and train a background Gaussian mixture model composed of three Gaussian components. The specific training steps are:

S1.1: Initialize the Gaussian mixture model. Gaussian component 1 is initialized from the first frame image as:

μ_{j,1}^1 = x_{j,1},  (σ_{j,1}^1)^2 = σ_init^2,  ω_{j,1}^1 = ω_init

where μ_{j,1}^1 denotes the mean of pixel j in component 1 after training on frame 1; j = 1, 2, ..., M, with M the number of pixels in the image; x_{j,1} is the value of pixel j in frame 1; (σ_{j,1}^1)^2 is the variance of pixel j in component 1 after frame 1, with σ_init the preset initial value; and ω_{j,1}^1 is the weight of pixel j in component 1 after frame 1, with ω_init the preset initial weight;

Gaussian components 2 and 3 are initialized with zero weight, so that they take effect only through the replacement update of step S1.2:

μ_{j,1}^2 = μ_{j,1}^3 = 0,  (σ_{j,1}^2)^2 = (σ_{j,1}^3)^2 = 0,  ω_{j,1}^2 = ω_{j,1}^3 = 0

where μ_{j,1}^2 and μ_{j,1}^3 denote the means of pixel j in components 2 and 3 after training on frame 1, (σ_{j,1}^2)^2 and (σ_{j,1}^3)^2 the corresponding variances, and ω_{j,1}^2 and ω_{j,1}^3 the corresponding weights;
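The initialization of step S1.1 can be sketched as follows. This is a minimal NumPy sketch; the (3, H, W) array layout and the zero initialization of components 2 and 3 are assumptions, since the patent's formula images did not survive extraction and only the parameter roles are given.

```python
import numpy as np

def init_mixture(first_frame, sigma_init=30.0, w_init=0.05):
    """Per-pixel 3-component Gaussian mixture initialized from frame 1.

    Component 1 takes the first frame's pixel values; components 2 and 3
    start empty (zero weight) and are filled later by replacement updates.
    """
    h, w = first_frame.shape
    mu = np.zeros((3, h, w))    # component means
    var = np.zeros((3, h, w))   # component variances
    wgt = np.zeros((3, h, w))   # component weights
    mu[0] = first_frame
    var[0] = sigma_init ** 2
    wgt[0] = w_init
    return mu, var, wgt
```

The embodiment values σ_init = 30 and ω_init = 0.05 given later in the description are used as defaults here.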

S1.2: Train the mixture model sequentially with frames 2, 3, ..., N to obtain the Gaussian mixture model. For frame t+1, training proceeds as follows:

Sort the three Gaussian components of pixel j obtained from the first t frames in descending order of ω_{j,t}^i / σ_{j,t}^i, i = 1, 2, 3. Then match the pixel value x_{j,t+1} of pixel j in frame t+1 against each component in turn; a component matches if

|x_{j,t+1} − μ_{j,t}^i| ≤ δ·σ_{j,t}^i

where δ is a constant greater than 0, and otherwise it does not match;

Once a component matches, it is updated with the pixel value x_{j,t+1} while the other components remain unchanged. The update formulas are:

ω_{j,t+1}^i = (1 − α)·ω_{j,t}^i + α
μ_{j,t+1}^i = (1 − α)·μ_{j,t}^i + α·x_{j,t+1}
(σ_{j,t+1}^i)^2 = (1 − α)·(σ_{j,t}^i)^2 + α·(x_{j,t+1} − μ_{j,t}^i)^2

where α is the preset model learning rate with 0 < α < 1; ω_{j,t}^i and ω_{j,t+1}^i denote the weight of the i-th component in pixel j's mixture model after training on frames t and t+1 respectively; μ_{j,t}^i and μ_{j,t+1}^i the corresponding means; and (σ_{j,t}^i)^2 and (σ_{j,t+1}^i)^2 the corresponding variances;

If the pixel value x_{j,t+1} matches none of the three components, the last (lowest-ranked) component is replaced according to x_{j,t+1}:

μ_{j,t+1}^3 = x_{j,t+1},  (σ_{j,t+1}^3)^2 = σ_init^2,  ω_{j,t+1}^3 = ω_init

After the update, the weights ω_{j,t+1}^i of the three components are normalized so that Σ_{i=1}^{3} ω_{j,t+1}^i = 1. Denote the means, variances, and weights of the trained mixture model by μ_{j,N}^i, (σ_{j,N}^i)^2, and ω_{j,N}^i;
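For one pixel, the per-frame training update of step S1.2 might look like the following sketch. The specific matched-component update formulas used here are the common Stauffer–Grimson-style forms driven directly by the learning rate α; they are an assumption, since the patent's own formula images were lost.

```python
import numpy as np

def train_pixel(models, x, alpha=0.05, delta=2.5, sigma_init=30.0, w_init=0.05):
    """One S1.2 training step for a single pixel.

    models: list of [weight, mean, variance], one entry per component.
    The matched component is updated, the others are left unchanged; if no
    component matches, the lowest-ranked one is replaced; weights are then
    renormalized to sum to 1.
    """
    # Rank components by weight/sigma, descending (empty components rank last).
    models.sort(key=lambda m: m[0] / (np.sqrt(m[2]) + 1e-12), reverse=True)
    matched = False
    for m in models:
        w, mu, var = m
        if var > 0 and abs(x - mu) <= delta * np.sqrt(var):  # |x - mu| <= delta*sigma
            m[0] = (1 - alpha) * w + alpha
            m[1] = (1 - alpha) * mu + alpha * x
            m[2] = (1 - alpha) * var + alpha * (x - mu) ** 2
            matched = True
            break
    if not matched:  # replace the lowest-ranked component
        models[-1][:] = [w_init, float(x), sigma_init ** 2]
    total = sum(m[0] for m in models)  # weight normalization
    for m in models:
        m[0] /= total
    return models
```

Calling this once per pixel per training frame (frames 2 through N) reproduces the loop of step S1.2.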

S2: For each detection image in the fire exit surveillance video stream, obtain its foreground binary image by background subtraction against the Gaussian mixture model. The specific method is:

S2.1: Sort the three Gaussian components of pixel j in descending order of ω_j^i / σ_j^i and take the first B_j components as the background distribution, where

B_j = argmin_b ( Σ_{i=1}^{b} ω_j^i > T )

and T is a preset threshold;

S2.2: Match the pixel value x′_j of pixel j in the detection image against the B_j Gaussian components. If any of them matches, pixel j is judged a background pixel and the corresponding pixel of the binary image is set to 0; otherwise pixel j is judged a foreground pixel and the corresponding binary-image pixel is set to 1;
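Steps S2.1–S2.2 for a single pixel can be sketched as follows. The descending ω/σ sort key and the match test |x′ − μ| ≤ δ·σ mirror the training step; δ = 2.5 and T = 0.8 are the values the embodiment suggests.

```python
import numpy as np

def classify_pixel(models, x, T=0.8, delta=2.5):
    """Background subtraction for one pixel (steps S2.1-S2.2).

    models: list of [weight, mean, variance] with weights summing to 1.
    Returns 0 for background, 1 for foreground.
    """
    ranked = sorted(models, key=lambda m: m[0] / (np.sqrt(m[2]) + 1e-12),
                    reverse=True)
    cum = 0.0
    for w, mu, var in ranked:
        if var > 0 and abs(x - mu) <= delta * np.sqrt(var):
            return 0          # matched one of the background components
        cum += w
        if cum > T:           # first B_j components exhausted
            break
    return 1                  # no background component matched
```

The loop stops once the cumulative weight exceeds T, so only the first B_j components are treated as background; a value that only fits a lower-ranked component is still foreground.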

S3: Use HOG features to judge the authenticity of the foreground blocks in the foreground binary image. The specific method is:

S3.1: Perform connected-component analysis on the foreground binary image obtained in step S2, group connected pixels into regions, extract the positions of the four boundaries, and obtain the bounding-rectangle coordinates of each connected region;

S3.2: For each pixel, take the mean of the Gaussian component with the largest weight among its three components as the background pixel value, forming a background image;

S3.3: For each connected region, extract the corresponding grayscale patches from the grayscale versions of the detection image and the background image according to its bounding-rectangle coordinates, extract the HOG feature vector of each patch, and compute the Euclidean distance D between the two vectors. If D is greater than a preset threshold T_D, the connected region is judged a true foreground block; otherwise it is a pseudo foreground block;
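A minimal sketch of the authenticity test in step S3.3, using a single-cell gradient-orientation histogram as a stand-in for a full block-normalized HOG descriptor. Both the simplification and the threshold value T_D = 0.5 are illustrative assumptions, not the patent's exact descriptor.

```python
import numpy as np

def orientation_histogram(patch, nbins=9):
    """L2-normalized gradient-orientation histogram (simplified HOG stand-in)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientations
    hist, _ = np.histogram(ang, bins=nbins, range=(0.0, 180.0), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def is_true_foreground(det_patch, bg_patch, T_D=0.5):
    """True foreground iff the Euclidean distance D between the two
    orientation histograms exceeds the threshold T_D (step S3.3)."""
    D = np.linalg.norm(orientation_histogram(det_patch)
                       - orientation_histogram(bg_patch))
    return D > T_D
```

Because the histograms are magnitude-weighted and normalized, a global brightness shift leaves them nearly unchanged, while a new object introduces new edges and drives D above the threshold.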

S4: If step S3 yields no connected region, or every connected region is a pseudo foreground block, this frame's detection image contains no abnormality; if at least one connected region is a true foreground block, the frame is abnormal. If abnormalities persist for M consecutive frames, where M is a preset frame count, the fire exit has a safety problem; otherwise the fire exit is safe.
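The frame-level decision of step S4 reduces to a consecutive-abnormal-frame counter. A sketch (the default M = 25 is an arbitrary illustrative value; the patent leaves M as a preset):

```python
def make_alarm_checker(M=25):
    """Returns a per-frame update function implementing step S4:
    alarm only once M consecutive frames have been abnormal."""
    count = 0
    def update(frame_abnormal):
        nonlocal count
        count = count + 1 if frame_abnormal else 0  # any normal frame resets
        return count >= M
    return update
```

Requiring M consecutive abnormal frames filters out transient detections (a person walking through the exit) while still flagging persistent blockage or an open fire door.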

In the fire exit safety detection method based on adaptive background learning of the present invention, the first N frames of the fire exit surveillance video stream are used as training samples to train a Gaussian mixture model, from which a background image is derived. Background subtraction against the background image extracts the foreground region of each detection image, and the HOG feature vectors of the background image and of the detection image within the foreground region decide whether the region is a true foreground or a pseudo foreground caused by other interference, and hence whether the detection image contains an abnormality. If abnormalities persist for M consecutive frames, the fire exit has a safety problem. The present invention has the following beneficial effects:

(1) The invention adaptively builds the background Gaussian mixture model of the monitored scene from training samples, so the obtained background image better matches the actual situation and the detection results based on it are more accurate;

(2) Using HOG feature vectors to judge the authenticity of foreground targets effectively eliminates interference caused by illumination changes and the like;

(3) The background Gaussian mixture model can also be updated online in real time from the detection images, using a strategy that updates only the background; this keeps the scene model current while avoiding the error of a long-present foreground target gradually merging into the background.

Brief Description of the Drawings

Fig. 1 is a flowchart of an embodiment of the fire exit safety detection method based on adaptive background learning of the present invention;

Fig. 2 is a flowchart of the training of the background Gaussian mixture model in Fig. 1;

Fig. 3 is a flowchart of the foreground binary image generation in Fig. 1;

Fig. 4 is a flowchart of the foreground-block authenticity judgment in Fig. 1;

Fig. 5 is a normal detection image of scene 1;

Fig. 6 is the background image obtained from the background Gaussian mixture model at the moment shown in Fig. 5;

Fig. 7 is the foreground binary image obtained for scene 1;

Fig. 8 is a detection image of scene 2 with a sudden illumination change;

Fig. 9 is the background image obtained from the background mixture model at the moment shown in Fig. 8;

Fig. 10 is the foreground binary image obtained for scene 2;

Fig. 11 shows the grayscale patches of the detection image and the background image extracted after connected-component analysis of scene 2, where Fig. 11(a) is the grayscale patch of the detection image and Fig. 11(b) is that of the background image;

Fig. 12 is the histogram of the HOG feature vectors of scene 2;

Fig. 13 is a detection image of scene 3 with a blocked passage;

Fig. 14 is the background image obtained from the background mixture model at the moment shown in Fig. 13;

Fig. 15 is the foreground binary image obtained for scene 3;

Fig. 16 shows the grayscale patches of the detection image and the background image extracted after connected-component analysis of scene 3, where Fig. 16(a) is the grayscale patch of the detection image and Fig. 16(b) is that of the background image;

Fig. 17 is the histogram of the HOG feature vectors of scene 3;

Fig. 18 illustrates a sudden-illumination-change scene, where Fig. 18(a) is the detection image, Fig. 18(b) the background image, and Fig. 18(c) the detection result;

Fig. 19 illustrates fire door opening scene 1, where Fig. 19(a) is the detection image, Fig. 19(b) the background image, and Fig. 19(c) the detection result;

Fig. 20 illustrates fire door opening scene 2, where Fig. 20(a) is the detection image, Fig. 20(b) the background image, and Fig. 20(c) the detection result;

Fig. 21 illustrates fire exit blocking scene 1, where Fig. 21(a) is the detection image, Fig. 21(b) the background image, and Fig. 21(c) the detection result;

Fig. 22 illustrates fire exit blocking scene 2, where Fig. 22(a) is the detection image, Fig. 22(b) the background image, and Fig. 22(c) the detection result;

Fig. 23 illustrates outdoor fire exit scene 2, where Fig. 23(a) is the detection image, Fig. 23(b) the background image, and Fig. 23(c) the detection result.

Detailed Description of the Embodiments

Specific embodiments of the present invention are described below with reference to the accompanying drawings, so that those skilled in the art can better understand the present invention. Note that in the following description, detailed descriptions of known functions and designs are omitted where they would obscure the main content of the present invention.

Embodiment

Fig. 1 is a flowchart of an embodiment of the fire exit safety detection method based on adaptive background learning of the present invention. As shown in Fig. 1, the method comprises the following steps:

S101: Train the background Gaussian mixture model:

Take the first N frames of the fire exit surveillance video stream as training samples and train, for each pixel, a background Gaussian mixture model composed of three Gaussian components.

Fig. 2 is a flowchart of the Gaussian mixture model training in Fig. 1. As shown in Fig. 2, the training comprises the following steps:

S201: Initialize the Gaussian mixture model:

First the Gaussian mixture model is initialized. In the present invention the background model of each pixel in the image is a mixture of three Gaussian components. Gaussian component 1 is initialized from frame t = 1 as:

μ_{j,1}^1 = x_{j,1},  (σ_{j,1}^1)^2 = σ_init^2,  ω_{j,1}^1 = ω_init

where μ_{j,1}^1 denotes the mean of pixel j in component 1 after training on frame 1; j = 1, 2, ..., M, with M the number of pixels in the image; x_{j,1} is the value of pixel j in frame 1; (σ_{j,1}^1)^2 is the variance of pixel j in component 1 after frame 1, with σ_init the preset initial value; and ω_{j,1}^1 is the weight of pixel j in component 1 after frame 1, with ω_init the preset initial weight. In this embodiment, σ_init is set to 30 and the initial weight ω_init to 0.05.

Gaussian components 2 and 3 are initialized with zero weight, so that they take effect only through the replacement update of step S209:

μ_{j,1}^2 = μ_{j,1}^3 = 0,  (σ_{j,1}^2)^2 = (σ_{j,1}^3)^2 = 0,  ω_{j,1}^2 = ω_{j,1}^3 = 0

where μ_{j,1}^2 and μ_{j,1}^3 denote the means of pixel j in components 2 and 3 after training on frame 1, (σ_{j,1}^2)^2 and (σ_{j,1}^3)^2 the corresponding variances, and ω_{j,1}^2 and ω_{j,1}^3 the corresponding weights.

S202: Sort the Gaussian components:

Sort the three Gaussian components of pixel j obtained from the first t frames in descending order of ω_{j,t}^i / σ_{j,t}^i.

S203: Set i = 1.

S204: Match Gaussian component i against frame t+1:

Match the pixel value x_{j,t+1} of pixel j in frame t+1 against Gaussian component i: the component matches if |x_{j,t+1} − μ_{j,t}^i| ≤ δ·σ_{j,t}^i, and otherwise does not. δ is a constant greater than 0, usually set to δ = 2.5.

S205: If the match succeeds, go to step S206; otherwise go to step S207.

S206: Update Gaussian component i and go to step S210. The update formulas are:

ω_{j,t+1}^i = (1 − α)·ω_{j,t}^i + α
μ_{j,t+1}^i = (1 − α)·μ_{j,t}^i + α·x_{j,t+1}
(σ_{j,t+1}^i)^2 = (1 − α)·(σ_{j,t}^i)^2 + α·(x_{j,t+1} − μ_{j,t}^i)^2

where α is the preset model learning rate with 0 < α < 1; ω_{j,t}^i and ω_{j,t+1}^i denote the weight of the i-th component in pixel j's mixture model after training on frames t and t+1 respectively; μ_{j,t}^i and μ_{j,t+1}^i the corresponding means; and (σ_{j,t}^i)^2 and (σ_{j,t+1}^i)^2 the corresponding variances.

S207: If i = 3, go to step S209; otherwise go to step S208.

S208: Set i = i + 1 and return to step S204.

S209: Replace the last (lowest-ranked) Gaussian component according to x_{j,t+1} and go to step S210. The replacement formulas are:

μ_{j,t+1}^3 = x_{j,t+1},  (σ_{j,t+1}^3)^2 = σ_init^2,  ω_{j,t+1}^3 = ω_init

S210: Normalize the weights:

After each update, normalize the weights ω_{j,t+1}^i of the three Gaussian components so that Σ_{i=1}^{3} ω_{j,t+1}^i = 1.

S211: If t = N − 1, go to step S213; otherwise go to step S212.

S212: Set t = t + 1 and return to step S202.

S213: Training is complete, yielding the Gaussian mixture model. Denote the means, variances, and weights of the trained mixture model by μ_{j,N}^i, (σ_{j,N}^i)^2, and ω_{j,N}^i.

In practical applications, if the surveillance video is grayscale, the pixel value is a gray level and each pixel needs only one Gaussian mixture model; if it is a color image with three RGB channels, each pixel has three mixture models, one per channel.

S102: Generate the foreground binary image of the detection image:

After the background Gaussian mixture model has been built, foreground target detection is performed on each detection frame. For each detection image in the surveillance video stream, the foreground binary image of that frame is obtained by background subtraction against the mixture model.

Fig. 3 is a flowchart of the foreground binary image generation. As shown in Fig. 3, it comprises the following steps:

S301: Select the B_j Gaussian components to compare against:

The Gaussian mixture model of pixel j actually describes the probability distribution of its value over time. To determine which components of the mixture are generated by the background, sort the three components of pixel j in descending order of ω_j^i / σ_j^i and take the first B_j components as the background distribution, where

B_j = argmin_b ( Σ_{i=1}^{b} ω_j^i > T )

T is a preset threshold whose meaning is the proportion of background pixels; in this embodiment T = 0.8.
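The computation of B_j in step S301 can be sketched as follows (the weight list is assumed to be pre-sorted in descending ω/σ order, as the step requires):

```python
def num_background_components(weights, T=0.8):
    """B_j: the smallest b such that the cumulative weight of the first
    b rank-sorted components exceeds T (step S301)."""
    cum = 0.0
    for b, w in enumerate(weights, start=1):
        cum += w
        if cum > T:
            return b
    return len(weights)  # fallback: all components are background
```

With weights (0.6, 0.3, 0.1) and T = 0.8, the cumulative sum first exceeds 0.8 at the second component, so B_j = 2 and only the two strongest components model the background.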

S302: Match against the B_j Gaussian components:

Match the pixel value x′_j of pixel j in the detection image against the B_j components; the matching method is the same as in step S204.

S303: If any Gaussian component matches, go to step S304; otherwise go to step S305.

S304: Pixel j is judged a background pixel; set the corresponding pixel of the binary image to 0.

S305: Pixel j is judged a foreground pixel; set the corresponding pixel of the binary image to 1.

In a color image, since each pixel has a mixture model per channel, the three channels may disagree on whether the pixel is foreground. Generally, if any of the three channels judges the pixel to be a foreground pixel, the pixel is treated as foreground; only if all three channels judge it to be background is it treated as a background pixel.

S103: Judge the authenticity of the foreground blocks:

Because of illumination changes in surveillance images, pseudo foreground targets may appear in the foreground binary image obtained by background subtraction. To eliminate the influence of transient illumination changes on the detection results, the present invention introduces the Histogram of Oriented Gradient (HOG) feature. The HOG feature mainly describes the edge and gradient information of an image; it is a feature descriptor used for object detection in computer vision and pattern recognition, built by computing and accumulating the gradient information of an image region into an orientation histogram. The normalization performed during its computation makes the descriptor invariant to photometric changes, and the edges and gradients of an image remain essentially unchanged under illumination changes, so HOG features can effectively rule out the effect of sudden illumination changes on detection. The present invention therefore uses the distance between the HOG feature vectors of the background and of the current frame image to judge the authenticity of a foreground target.

Fig. 4 is a flowchart of the foreground-block authenticity judgment. As shown in Fig. 4, the specific method of using HOG features to judge the authenticity of foreground blocks in the foreground binary image is:

S401: Find the connected components of the foreground binary image:

Perform connected-component analysis on the foreground binary image obtained in step S102, group connected pixels into regions, extract the positions of the four boundaries, and obtain the bounding-rectangle coordinates of each connected component.

Because of noise in the image, connected components that are too small are usually discarded directly during the analysis; only components larger than a size threshold are kept, and only these are cropped into blocks for HOG feature extraction.
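The connected-component analysis with small-region rejection (step S401) might be sketched as follows in plain NumPy/Python. The 4-connectivity and the min_area default are illustrative choices; the patent does not fix either.

```python
import numpy as np
from collections import deque

def connected_boxes(binary, min_area=50):
    """Bounding rectangles of 4-connected foreground regions (step S401),
    discarding regions smaller than min_area to suppress noise."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not seen[sy, sx]:
                # breadth-first flood fill of one connected region
                queue = deque([(sy, sx)])
                seen[sy, sx] = True
                ys, xs = [sy], [sx]
                while queue:
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                            ys.append(ny)
                            xs.append(nx)
                if len(ys) >= min_area:  # drop tiny (noise) regions
                    boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes
```

Each returned tuple (x_min, y_min, x_max, y_max) is the bounding rectangle used in step S404 to crop the grayscale patches.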

S402: Extract the background image:

Since the Gaussian mixture model of each pixel consists of several Gaussian components, a single value that best represents the background must be extracted for each pixel. According to the Gaussian mixture model, the mean of the component with the largest weight among the components of each pixel is taken as the pixel value of the corresponding background pixel, and these values form a background image.

If the surveillance video is grayscale, the representative background value of each pixel is the mean of the largest-weight component among its three Gaussian components. If the surveillance video is in color, the largest-weight component is extracted from the three Gaussian components of each channel separately, and the per-channel values are combined into a color background image.
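The background extraction of step S402 reduces to an argmax over per-pixel weights. A minimal NumPy sketch (the function name `extract_background` and the array layout are assumptions; for a color stream it would be applied once per RGB channel):

```python
import numpy as np

def extract_background(means, weights):
    """means, weights: float arrays of shape (H, W, K), one row of K Gaussian
    components per pixel. Returns an (H, W) image taking, at each pixel, the
    mean of the component with the largest weight -- the value that best
    represents the learned background."""
    best = np.argmax(weights, axis=2)                       # (H, W) index of heaviest component
    return np.take_along_axis(means, best[..., None], axis=2)[..., 0]
```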

S403: Obtain the grayscale images:

Obtain the grayscale images of the detection image and of the background image produced in step S402.

Clearly, if the surveillance video itself is grayscale, the background image obtained in step S402 is also grayscale. If the surveillance video is in color, the detection image and the background image must first be converted to grayscale.

S404: Extract the two HOG feature vectors of each connected region:

For each connected region, the corresponding grayscale patches are cropped from the grayscale detection image and the grayscale background image according to the coordinates of its bounding rectangle, and the HOG feature vectors of the two patches are extracted: K1 = (k11, k12, …, k1M) and K2 = (k21, k22, …, k2M), where M is the number of elements in each feature vector.

In this embodiment, the HOG feature vector is extracted as follows. First, the image patch is divided into blocks. Since the bounding rectangles of the connected regions differ in size, the present invention fixes the number of blocks and lets the number of pixels per block adapt. The block division works as follows: let X be the number of horizontal pixels and Y the number of vertical pixels of the bounding rectangle, and set a horizontal partition parameter P and a vertical partition parameter Q, both natural numbers. Each block then contains ⌊X/P⌋ horizontal pixels and ⌊Y/Q⌋ vertical pixels, where ⌊·⌋ denotes rounding down. Starting from the origin, the block slides horizontally with step ⌊X/2P⌋ until it reaches the horizontal edge, then moves vertically with step ⌊Y/2Q⌋, and the cycle repeats until the whole patch is partitioned. The final number of blocks is therefore (2P−1)×(2Q−1) (when X and Y divide evenly). The gradient distribution within each block is accumulated into a histogram, and the concatenated histograms form the HOG feature vector.

S405: Calculate the Euclidean distance between the two HOG feature vectors:

For the two HOG feature vectors K1 and K2 of each connected region obtained in step S404, the Euclidean distance D of the two vectors is calculated as: D = √( ∑_{m=1}^{M} (k1m − k2m)² ).

S406: Judge whether the Euclidean distance D is greater than the preset threshold TD; if so, go to step S407, otherwise go to step S408.

S407: Judge the connected region to be a real foreground block.

S408: Judge the connected region to be a false foreground block.
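Steps S405–S408 amount to a distance computation and a threshold test. A minimal sketch (function names and the sample threshold value are assumptions of this example, not values given by the patent):

```python
import math

def hog_distance(k1, k2):
    """Euclidean distance D between two HOG feature vectors of equal length M:
    D = sqrt(sum over m of (k1[m] - k2[m])**2)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(k1, k2)))

def is_real_foreground(k1, k2, t_d):
    """Step S406: the region is a real foreground block only when D exceeds
    the threshold t_d; otherwise it is taken as illumination-induced."""
    return hog_distance(k1, k2) > t_d
```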

S104: Abnormality judgment:

If the analysis in step S103 yields no connected regions, or every connected region is a false foreground block, there is no abnormality in this detection frame; if at least one connected region is a real foreground block, the frame is judged to contain an abnormality. In practice, a pedestrian may briefly pass through the passage or a safety door may be opened for a moment, so an abnormality found in a single frame does not establish a safety problem in the fire exit; one must wait for a period of time to rule out transient abnormalities. The present invention achieves this by judging M consecutive detection frames: if an abnormality is present in M consecutive frames, where M is a preset frame count, the fire exit has a safety problem and a warning or alarm is required; otherwise the fire exit is safe.

During long-term fire-exit safety detection, the scene changes with the environment (for example the lighting). To adapt to these changes, the Gaussian mixture model of each pixel can be learned and updated from the detection image after each frame has been processed. Updating foreground pixels would cause an obstruction that stays in the image for a long time, or an opened normally-closed fire door, to be absorbed into the background model so that the abnormality could no longer be detected; therefore the mixture models of foreground pixels must not be updated, i.e. the present invention updates only the background pixels. The update procedure is the same as the training update of step S101: the three Gaussian components are first sorted, then the pixel value of the background pixel is matched against them in turn; once a component matches, it is updated; if none of the three components matches, the last component is replaced with the pixel value of that pixel; after the update, the weights of the three components are normalized.

During online updating, to prevent the standard deviation of a Gaussian component from becoming so small that foreground detection becomes over-sensitive, a minimum value can be set for the standard deviation: when the computed standard deviation falls below the minimum, it is set to the minimum; otherwise it is left unchanged. In this embodiment the minimum is set to 5.
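The selective online update of one pixel's mixture, with the standard-deviation floor just described, can be sketched as follows. This is an illustrative approximation, not the patent's code: the dict layout and default parameter values are assumptions, and replacing the *weakest* component stands in for the patent's "last component after sorting".

```python
def update_pixel_gmm(models, x, is_foreground, alpha=0.01, delta=2.5,
                     sigma_min=5.0, sigma_init=30.0, omega_init=0.05):
    """Online update of one pixel's Gaussian mixture (list of dicts with keys
    'mu', 'sigma', 'omega'). Foreground pixels are left untouched so that a
    long-staying obstruction is never absorbed into the background; a matched
    background component is blended with learning rate alpha, and its standard
    deviation is clamped from below by sigma_min."""
    if is_foreground:
        return models                                  # foreground: model frozen
    for g in models:
        if abs(x - g['mu']) < delta * g['sigma']:      # matched this component
            g['omega'] = (1 - alpha) * g['omega'] + alpha
            g['mu'] = (1 - alpha) * g['mu'] + alpha * x
            var = (1 - alpha) * g['sigma'] ** 2 + alpha * (x - g['mu']) ** 2
            g['sigma'] = max(var ** 0.5, sigma_min)    # floor keeps detection stable
            break
    else:
        # no component matched: replace the weakest one with the new value
        weakest = min(models, key=lambda g: g['omega'])
        weakest.update(mu=x, sigma=sigma_init, omega=omega_init)
    total = sum(g['omega'] for g in models)
    for g in models:
        g['omega'] /= total                            # renormalise the weights
    return models
```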

Obviously, when the surveillance image is grayscale, each pixel has a single Gaussian mixture model to update; if the image is in color, the mixture models of the three RGB channels must all be updated.

Thus, during online updating, each pixel is automatically judged by its label: pixels belonging to the foreground keep their models unchanged, and only pixels belonging to the background are updated. In other words, the foreground target region is not updated while the background is; this performs the necessary scene update and at the same time avoids the error of a long-staying foreground target gradually merging into the background.

In practice, a small number of false detections occur in fire-exit safety detection. For example, an illumination change produces a false foreground that is not eliminated by the foreground confirmation step, and an obstruction or an opened normally-closed safety door is wrongly reported. Because the online update adopts a no-update policy for the foreground (here a false foreground), the background model of this false-foreground region never learns, while the current image after the illumination change also remains essentially unchanged; the region is then continually and wrongly detected as abnormal and cannot recover to the normal state on its own. Although such false detections are rare, they cause the detection system to alarm continuously without recovering, which has a very bad effect, so the situation must be handled. A manual flag can be added to the detection system to control program execution and restore the normal state. When such a false detection occurs and the operator finds that the alarm is spurious, the restart control flag is manually set to the restart state, the original background Gaussian mixture model is deleted, N images are extracted anew from the current surveillance video stream as training samples, the background mixture model is re-established, and safety detection proceeds on the new model, whereupon the detection result returns to the normal state.

To verify the technical effect of the present invention, a series of experiments was carried out on real fire-exit surveillance video sequences. To make it convenient to count the detection rate and the false detection rate, a normal unobstructed fire exit and a properly closed normally-closed fire door are taken as positive samples; an obstructed fire exit and an opened normally-closed fire door are taken as negative samples.

Next, three scenes captured at three different moments of the fire-exit surveillance video from the same camera are used for experimental verification: scene 1 is the normal unobstructed case, scene 2 is a sudden illumination change, and scene 3 is an obstructed fire exit. Fig. 5 is the normal detection image of scene 1. Fig. 6 is the background image obtained from the background Gaussian mixture model at the moment shown in Fig. 5. Fig. 7 is the foreground binary image obtained for scene 1. As shown in Figs. 5 to 7, since scene 1 is the normal unobstructed case, the foreground binary image of Fig. 7 contains no foreground; the connected-region analysis therefore yields no foreground connected region, and the judgment result is that no abnormality is present.

Fig. 8 is the detection image of the sudden illumination change in scene 2. Fig. 9 is the background image obtained from the background mixture model at the moment shown in Fig. 8. Fig. 10 is the foreground binary image obtained for scene 2. As shown in Fig. 10, because of the sudden illumination change, light spots appear in the detection image, and a foreground region therefore appears in the foreground binary image obtained by background subtraction.

Fig. 11 shows the grayscale patches of the detection image and the background image extracted after the connected-region analysis of scene 2, where Fig. 11(a) is the grayscale patch of the detection image and Fig. 11(b) that of the background image. Fig. 12 is the histogram of the HOG feature vectors of scene 2. As shown in Fig. 12, the HOG feature vectors of the detection image and of the background image in scene 2 are very close, and their Euclidean distance is smaller than the preset threshold; the connected region is therefore judged to be a false foreground, and the judgment result for scene 2 is that no abnormality is present.

Fig. 13 is the detection image of the obstructed passage in scene 3. As shown in Fig. 13, in scene 3 a pedestrian is passing through the fire exit and obstructs the passage. Fig. 14 is the background image obtained from the background mixture model at the moment shown in Fig. 13. Fig. 15 is the foreground binary image obtained for scene 3. As shown in Fig. 15, because of the passing pedestrian, a foreground region appears in the foreground binary image obtained by background subtraction.

Fig. 16 shows the grayscale patches of the detection image and the background image extracted after the connected-region analysis of scene 3, where Fig. 16(a) is the grayscale patch of the detection image and Fig. 16(b) that of the background image. Fig. 17 is the histogram of the HOG feature vectors of scene 3. As shown in Fig. 17, in scene 3 the HOG feature vectors of the detection image and of the background image differ considerably, and their Euclidean distance is greater than the preset threshold; the connected region is therefore judged to be a real foreground, and the judgment result for scene 3 is that an abnormality is present. However, since the pedestrian passes through the fire exit within a short time and causes no long-term obstruction, the number of consecutive abnormal frames stays below the preset frame threshold, and no warning or alarm is issued.

Next, the detection results on six different video scenes are summarized, covering sudden illumination changes, abnormal opening of normally-closed fire doors, obstruction of fire exits, and other situations.

Fig. 18 illustrates the sudden-illumination-change scene, where Fig. 18(a) is the detection image, Fig. 18(b) the background image, and Fig. 18(c) the detection result.

Fig. 19 illustrates fire-door-opening scene 1, where Fig. 19(a) is the detection image, Fig. 19(b) the background image, and Fig. 19(c) the detection result.

Fig. 20 illustrates fire-door-opening scene 2, where Fig. 20(a) is the detection image, Fig. 20(b) the background image, and Fig. 20(c) the detection result.

Fig. 21 illustrates fire-exit-obstruction scene 1, where Fig. 21(a) is the detection image, Fig. 21(b) the background image, and Fig. 21(c) the detection result.

Fig. 22 illustrates fire-exit-obstruction scene 2, where Fig. 22(a) is the detection image, Fig. 22(b) the background image, and Fig. 22(c) the detection result.

Fig. 23 illustrates the outdoor fire-exit scene, where Fig. 23(a) is the detection image, Fig. 23(b) the background image, and Fig. 23(c) the detection result.

Table 1 summarizes the quantitative detection results on the six scene surveillance videos.

Scene                            Detection rate    False detection rate
Sudden illumination change       >95%              <2%
Fire door opening scene 1        >95%              <2%
Fire door opening scene 2        >95%              <2%
Fire exit obstruction scene 1    >95%              <2%
Fire exit obstruction scene 2    >95%              <2%
Outdoor fire exit scene          >85%              <10%

Table 1

As Table 1 shows, the first five video scenes are indoors, with little interference, and are relatively simple, so the detection performs very well. The present invention can thus effectively overcome the influence of illumination changes and detects both the abnormal opening of fire doors and the obstruction of fire exits accurately. In the sixth, outdoor fire-exit scene, the light varies markedly and there is more interference from people, vehicles, and so on; the scene is relatively complex and the detection performance drops slightly, but it still remains good.

Although illustrative specific embodiments of the present invention have been described above so that those skilled in the art can understand the present invention, it should be clear that the present invention is not limited to the scope of these specific embodiments. To those of ordinary skill in the art, as long as various changes remain within the spirit and scope of the present invention as defined and determined by the appended claims, these changes are obvious, and all inventions and creations making use of the concept of the present invention fall under its protection.

Claims (4)

1. A fire-fighting-access safety detection method based on adaptive background learning, characterized by comprising the following steps:
s1: taking the first N frames of the fire-exit surveillance video stream as training samples, and training a background Gaussian mixture model composed of three Gaussian components, the training comprising:
s1.1: initializing the Gaussian mixture model, wherein Gaussian component 1 is initialized from the 1st frame image by the expression:

μ^1_{j,1} = x_{j,1},  (σ^1_{j,1})² = σ²_init,  ω^1_{j,1} = ω_init,

wherein μ^1_{j,1} denotes the mean of pixel j in Gaussian component 1 after training on the 1st frame, j = 1, 2, …, M, M being the number of pixels in the image, x_{j,1} denotes the value of pixel j in the 1st frame, (σ^1_{j,1})² denotes the variance of pixel j in Gaussian component 1 after training on the 1st frame, σ_init is a preset initial standard deviation, ω^1_{j,1} denotes the weight of pixel j in Gaussian component 1 after training on the 1st frame, and ω_init is a preset initial weight;

Gaussian components 2 and 3 are initialized by the expression:

μ^i_{j,1} = 0,  (σ^i_{j,1})² = σ²_init,  ω^i_{j,1} = (1 − ω_init)/2,  i = 2, 3,

wherein μ^i_{j,1}, (σ^i_{j,1})² and ω^i_{j,1} denote respectively the mean, variance and weight of pixel j in Gaussian components 2 and 3 after training on the 1st frame;
s1.2: training the Gaussian mixture model sequentially with the 2nd, 3rd, …, Nth frame images to obtain the trained mixture model, wherein training with the (t+1)th frame image, t = 1, 2, …, N−1, comprises:

matching the pixel value x_{j,t+1} of pixel j in the (t+1)th frame image in turn against the three Gaussian components of pixel j obtained from the first t frames, wherein the matching criterion is: if |x_{j,t+1} − μ^i_{j,t}| < δ·σ^i_{j,t}, i = 1, 2, 3, wherein δ is a constant greater than 0, the component is considered matched, otherwise it is not matched;

once a Gaussian component is matched successfully, updating that component with the pixel value x_{j,t+1} while keeping the other components unchanged, the update formulas being:

ω^i_{j,t+1} = (1 − α)·ω^i_{j,t} + α,
μ^i_{j,t+1} = (1 − α)·μ^i_{j,t} + α·x_{j,t+1},
(σ^i_{j,t+1})² = (1 − α)·(σ^i_{j,t})² + α·(x_{j,t+1} − μ^i_{j,t+1})²,

wherein α is a preset model learning rate with 0 < α < 1, ω^i_{j,t} and ω^i_{j,t+1} denote the weight of the ith Gaussian component of pixel j after training on the tth and (t+1)th frames respectively, μ^i_{j,t} and μ^i_{j,t+1} denote the corresponding means, and (σ^i_{j,t})² and (σ^i_{j,t+1})² the corresponding variances;

if the pixel value x_{j,t+1} matches none of the three Gaussian components, replacing the last Gaussian component with the pixel value x_{j,t+1}, the replacement update being:

μ^3_{j,t+1} = x_{j,t+1},  (σ^3_{j,t+1})² = σ²_init,  ω^3_{j,t+1} = ω_init;

after the update is completed, normalizing the weights ω^i_{j,t+1} of the three Gaussian components so that ∑_{i=1}^{3} ω^i_{j,t+1} = 1; the mean of the trained mixture model is denoted μ^i_{j,N}, the variance (σ^i_{j,N})², and the weight ω^i_{j,N};
S2: for each detection frame in the fire-exit surveillance video stream, obtaining the foreground binary image of the frame from the Gaussian mixture model by background subtraction, specifically comprising:

s2.1: sorting the three Gaussian components of pixel j in descending order of ω^i_{j,N}/σ^i_{j,N}, and taking the first B_j components as the background distribution, B_j being calculated as:

B_j = argmin_b ( ∑_{i=1}^{b} ω^i_{j,N} > T ),

wherein T denotes a preset threshold;

s2.2: matching the pixel value x′_j of pixel j in the detection image against the B_j Gaussian components; if any component is matched successfully, judging pixel j to be a background pixel and setting the corresponding pixel of the binary image to 0; otherwise judging pixel j to be a foreground pixel and setting the corresponding pixel of the binary image to 1;
S3: judging the authenticity of the foreground blocks in the foreground binary image with HOG features, specifically comprising:

s3.1: performing connected-region analysis on the foreground binary image obtained in step S2, grouping the pixels connected into blocks into regions, extracting the position information of the four boundaries, and obtaining the circumscribed rectangle coordinates of each connected region;

s3.2: taking the mean of the Gaussian component with the largest weight among the three components of each pixel as the value of the background pixel, thereby forming a background image;

s3.3: for each connected region, extracting the corresponding grayscale patches from the grayscale images of the detection image and of the background image according to its circumscribed rectangle coordinates, extracting the HOG feature vectors of the two patches, and calculating the Euclidean distance D of the two feature vectors; if D is greater than a preset threshold T_D, judging the connected region to be a real foreground block, otherwise judging it to be a false foreground block;
S4: if every connected region in step S3 is a false foreground block, the detection frame contains no abnormality; if at least one connected region is a real foreground block, the frame is judged to contain an abnormality; if abnormalities are present in M consecutive frames, M being a preset frame count, the fire exit has a safety problem, otherwise the fire exit is safe;
S5: after detection is completed, learning and updating the Gaussian mixture model from the detection image, specifically: first sorting the three Gaussian components in descending order of ω/σ, then matching them in turn against the pixel values of the background pixels; once a component is matched successfully, it is updated; if none of the three components is matched successfully, the last component is replaced with the pixel value of that pixel; after the update is completed, the weights of the three components are normalized.
2. The fire-fighting-access safety detection method based on adaptive background learning according to claim 1, wherein in step s3.3 the HOG feature vector is extracted as follows:

denoting the number of horizontal pixels of the circumscribed rectangle by X and the number of vertical pixels by Y, and setting a horizontal partition parameter P and a vertical partition parameter Q, both natural numbers, each block contains ⌊X/P⌋ horizontal pixels and ⌊Y/Q⌋ vertical pixels, wherein ⌊·⌋ denotes rounding down; the block is moved horizontally from the origin with step ⌊X/2P⌋ until it reaches the horizontal edge, then moved vertically with step ⌊Y/2Q⌋, and the cycle repeats until the image patch is fully partitioned; the gradient distribution within each block is counted, finally forming the HOG feature vector.
3. The fire-fighting-access safety detection method based on adaptive background learning according to claim 1, wherein in the learning update of step S5 a minimum value is set for the standard deviation, and when the computed standard deviation is smaller than the minimum value, the standard deviation is set to the minimum value.
4. The fire-fighting-access safety detection method based on adaptive background learning according to claim 1, further comprising handling false detections, specifically: when a false detection occurs, deleting the original background Gaussian mixture model, returning to step S1 to extract N images from the current surveillance video stream as training samples, re-establishing the background Gaussian mixture model, and performing safety detection based on the new mixture model.
CN201510004803.9A 2015-01-06 2015-01-06 Passageway for fire apparatus safety detection method based on adaptive background study Expired - Fee Related CN104463253B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510004803.9A CN104463253B (en) 2015-01-06 2015-01-06 Passageway for fire apparatus safety detection method based on adaptive background study

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510004803.9A CN104463253B (en) 2015-01-06 2015-01-06 Passageway for fire apparatus safety detection method based on adaptive background study

Publications (2)

Publication Number Publication Date
CN104463253A CN104463253A (en) 2015-03-25
CN104463253B true CN104463253B (en) 2018-02-02

Family

ID=52909267

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510004803.9A Expired - Fee Related CN104463253B (en) 2015-01-06 2015-01-06 Passageway for fire apparatus safety detection method based on adaptive background study

Country Status (1)

Country Link
CN (1) CN104463253B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110189355A (en) * 2019-05-05 2019-08-30 暨南大学 Safety evacuation channel occupancy detection method, device, electronic equipment and storage medium

Families Citing this family (8)

Publication number Priority date Publication date Assignee Title
CN108376406A (en) * 2018-01-09 2018-08-07 公安部上海消防研究所 A kind of Dynamic Recurrent modeling and fusion tracking method for channel blockage differentiation
CN110649573B (en) * 2019-09-05 2020-10-27 菱亚能源科技(深圳)股份有限公司 Control method and device for low-voltage switch cabinet group
CN110766915A (en) * 2019-09-19 2020-02-07 重庆特斯联智慧科技股份有限公司 Alarm method and system for identifying fire fighting access state
CN113553891A (en) * 2020-04-26 2021-10-26 杭州海康消防科技有限公司 Method and device for identifying state of fireproof door, electronic equipment and storage medium
CN112052821B (en) * 2020-09-15 2023-07-07 浙江智慧视频安防创新中心有限公司 Fire-fighting channel safety detection method, device, equipment and storage medium
CN112804447B (en) * 2020-12-30 2023-01-17 北京石头创新科技有限公司 A method, device, medium and electronic equipment for detecting near-field objects
CN112734791B (en) * 2021-01-18 2022-11-29 烟台南山学院 A Foreground and Background Separation Method for Online Video Based on Regularized Error Modeling
CN113591702A (en) * 2021-07-30 2021-11-02 中国工商银行股份有限公司 Video image foreground extraction method and device

Citations (2)

Publication number Priority date Publication date Assignee Title
CN103177237A (en) * 2011-12-22 2013-06-26 中国移动通信集团河北有限公司 Video monitoring method and device based on on-line lasers
CN103971382A (en) * 2014-05-21 2014-08-06 国家电网公司 Target detection method avoiding light influences

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US8509526B2 (en) * 2010-04-13 2013-08-13 International Business Machines Corporation Detection of objects in digital images

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN103177237A (en) * 2011-12-22 2013-06-26 中国移动通信集团河北有限公司 Video monitoring method and device based on on-line lasers
CN103971382A (en) * 2014-05-21 2014-08-06 国家电网公司 Target detection method avoiding light influences

Non-Patent Citations (2)

Title
Research and Simulation of Moving Object Detection Algorithms Based on Background Modeling; Fang Shuai et al.; Journal of System Simulation; January 2005; Vol. 17, No. 1; see sections 4-5 *
Background Modeling Method Based on Adaptive Learning Rate; Li Wei et al.; Computer Engineering; August 2011; Vol. 37, No. 15; see sections 2-3 *

Cited By (1)

Publication number Priority date Publication date Assignee Title
CN110189355A (en) * 2019-05-05 2019-08-30 暨南大学 Safety evacuation channel occupancy detection method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN104463253A (en) 2015-03-25

Similar Documents

Publication Publication Date Title
CN104463253B (en) Passageway for fire apparatus safety detection method based on adaptive background study
CN112069975B (en) Comprehensive flame detection method based on ultraviolet, infrared and vision
US10303955B2 (en) Foreground detector for video analytics system
CN108416250B (en) People counting method and device
CN106228150B (en) Video image-based smoke detection method
CN105513053B (en) One kind is used for background modeling method in video analysis
CN103903008B (en) A kind of method and system of the mist grade based on image recognition transmission line of electricity
Maksymiv et al. Real-time fire detection method combining AdaBoost, LBP and convolutional neural network in video sequence
CN105469105A (en) Cigarette smoke detection method based on video monitoring
CN110189355A (en) Safety evacuation channel occupancy detection method, device, electronic equipment and storage medium
CN109298785A (en) A man-machine joint control system and method for monitoring equipment
KR102107334B1 (en) Method, device and system for determining whether pixel positions in an image frame belong to a background or a foreground
CN105303191A (en) Method and apparatus for counting pedestrians in foresight monitoring scene
CN112132043B (en) Fire fighting channel occupation self-adaptive detection method based on monitoring video
CN111126293A (en) A method and system for detecting abnormality of flame and smoke
CN102982313A (en) Smog detecting method
CN106846362A (en) A kind of target detection tracking method and device
CN103530893A (en) Foreground detection method in camera shake scene based on background subtraction and motion information
CN104658152A (en) Video-based moving object intrusion alarm method
CN113223081A (en) High-altitude parabolic detection method and system based on background modeling and deep learning
CN115691034A (en) A smart home alarm method, system and storage medium for abnormal conditions
CN106570885A (en) Background modeling method based on brightness and texture fusion threshold value
CN101316371A (en) Flame detection method and device
CN115661735A (en) Target detection method and device and computer readable storage medium
CN117437686A (en) A safety management method and system applied to surveillance video of ship unloading system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 2018-02-02

Termination date: 2021-01-06