
CN101315701B - Moving Target Image Segmentation Method - Google Patents


Info

Publication number
CN101315701B
CN101315701B · CN2008100538305A · CN200810053830A
Authority
CN
China
Prior art keywords
image
background
segmentation
difference
moving target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN2008100538305A
Other languages
Chinese (zh)
Other versions
CN101315701A (en)
Inventor
明东
刘双迟
张希
程龙龙
万柏坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN2008100538305A priority Critical patent/CN101315701B/en
Publication of CN101315701A publication Critical patent/CN101315701A/en
Application granted granted Critical
Publication of CN101315701B publication Critical patent/CN101315701B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a moving target image segmentation method in the technical field of computer image processing. The method comprises the following steps: obtaining a background image with the least-median-of-squares method; obtaining a difference image with an indirect difference function; selecting a segmentation threshold T and binarizing the difference image to obtain a binarized foreground image; dynamically updating the background image from the current frame, the current background image, and the binarized foreground image; and using morphological filtering to eliminate the noise in the binary image and fill the gaps in the moving target image. The invention supports the effective use of surveillance systems and reliable evaluation of their performance, and can be widely applied in public security, fire fighting, customs, ports, stations, and other public places, yielding considerable social benefit and improved public-safety service.

Description

Moving Target Image Segmentation Method

Technical Field

The invention belongs to the technical field of computer image processing and relates to a moving target image segmentation method.

Background Art

Segmenting images of moving human bodies against a complex background is the most basic part of video-surveillance image processing: its aim is to extract the moving human body region from the background in the captured video sequence. Effective segmentation of the motion region is essential for later processing such as classification, tracking, and identification of surveillance targets. However, dynamic changes in the background image — changes in weather or lighting, background clutter, shadows of moving targets, occlusion between objects and the environment or between objects themselves, and even camera motion — make detecting moving human bodies a rather difficult task. It has therefore attracted strong interest from researchers at home and abroad and has become a much-studied frontier topic in video image information detection in recent years.

By algorithmic approach, methods for detecting moving human bodies in video under complex backgrounds fall roughly into two classes: motion segmentation and static segmentation. Motion segmentation algorithms exploit the motion properties of the moving target to distinguish it from the background of the image sequence. Static segmentation algorithms first segment a single frame statically from the gray level, texture, or contour gradient of each region, then estimate motion between consecutive frames by block matching, and finally merge the segmented regions to extract the moving target. For video with a static background and only simple global scene motion, a differential motion detection algorithm or a background-modeling motion detection algorithm is usually used. Differential motion detection thresholds the image difference between two or three adjacent frames to extract the motion region. Background-modeling motion detection is one of the most common motion segmentation methods: the algorithm first builds a statistical model from the complete video sequence to classify pixels (background vs. motion region) and generate a background image, then subtracts the background from each frame to obtain the moving target. In short, motion segmentation algorithms abound; the key is to find the most suitable one, i.e. to achieve the intended segmentation at the lowest cost.

Summary of the Invention

Targeting video from fixed-point surveillance of public places, which has a static background with panoramic motion, the invention proposes a moving target image segmentation method for complex-background video surveillance. The invention supports the effective use of surveillance systems and reliable evaluation of their performance, and can be widely applied in public security, fire fighting, customs, ports, stations, and other public places, yielding considerable social benefit and improved public-safety service.

The invention adopts the following technical scheme:

A moving target image segmentation method, each image segmentation comprising the following steps:

Step 1: obtain the background image by the least-median-of-squares (LMedS) method;

Step 2: obtain a difference image using the indirect difference function

f(a, b) = 1 − [2√((a+1)(b+1)) / ((a+1)+(b+1))] · [2√((256−a)(256−b)) / ((256−a)+(256−b))],

where a and b respectively denote the gray (intensity) values of the current image and the background image at the same pixel position, with 0 ≤ a, b ≤ 255 and 0 ≤ f(a, b) ≤ 1;

Step 3: select a segmentation threshold T and binarize the difference image by

M(x, y) = 1 if f(a(x, y), b(x, y)) ≥ T, and 0 otherwise,

to obtain a binarized foreground image;

Step 4: let I_t(x,y) be the sequence of N captured frames, let B_{n+1}(x,y) and B_n(x,y) respectively be the updated and current background images obtained from step 1, and let M_n(x,y) be the binarized foreground image obtained from step 3; dynamically update the background image according to

B_{n+1}(x,y) = M_n(x,y)·B_n(x,y) + [1 − M_n(x,y)]·[α·B_n(x,y) + (1 − α)·I_n(x,y)],

where α is a weighting coefficient;

Step 5: use morphological filtering to eliminate the noise in the binary image and fill the gaps in the moving target image. In the above method, in step 1 the background images of the three components R, G, and B can each be obtained from B(x, y) = arg{ min_p med_t (I_t(x,y) − p)² }, where p is the color image value to be determined at pixel position (x, y) and t is the frame index; in step 3 the threshold T can be determined by the maximum between-class-variance method; and in step 4, α can be set to 0.8.

The video data used by the invention have a static background with panoramic motion, and an algorithm combining dynamic background modeling with differential motion detection is used to segment the moving target image. Judging from the stage-by-stage results in Figs. 3, 4, and 5, the overall scheme of the moving human body contour segmentation method for complex-background video surveillance is appropriate: static background modeling, dynamic background update, differential motion segmentation, and morphological post-processing are all indispensable; the processing flow is sound, and the final segmentation is clean and sharp.

In modern surveillance systems, methods for automatically acquiring data about monitored objects fall roughly into two categories. The first uses sensors such as piezoelectric devices, infrared detectors, and inductive loop coils to measure parameters of the monitored object directly; these methods achieve high tracking and recognition rates but are easily damaged and inconvenient to install. The second category, to which the invention belongs, is based on image processing and pattern recognition and overcomes the limitations of the first; advances in image processing and recognition technology and the greatly improved cost-effectiveness of hardware make the proposed method practical. Compared with the first category, the invention delivers a marked technical effect: it adapts well to its environment, works stably over long periods, and can monitor without the knowledge of the monitored subject, which greatly improves the effectiveness of security monitoring and avoids the unnecessary friction and conflict that contact-style (first-category) monitoring can provoke.

Brief Description of the Drawings

Fig. 1: flow of moving human body contour image segmentation.

Fig. 2: difference functions for different background gray levels.

Fig. 3(a): motion segmentation result of the R component.

Fig. 3(b): motion segmentation result of the G component.

Fig. 3(c): motion segmentation result of the B component.

Fig. 3(d): result of the logical OR of (a), (b), and (c).

Fig. 4: result after morphological filtering.

Fig. 5: final moving target segmentation result.

Fig. 6: example of the dilation of a binary image: (a) a binary image; (b) the structuring element B, with "+" marking its reference point; (c) the dilation result.

Fig. 7: example of the erosion of a binary image: (a) a binary image; (b) the structuring element B, with "+" marking its reference point; (c) the erosion result.

Detailed Description of the Embodiments

The invention is described in detail below with reference to the accompanying drawings and embodiments.

The invention is illustrated by the embodiment of segmenting moving human body contour images in complex-background video surveillance. The whole segmentation process comprises static background modeling, dynamic background update, differential motion segmentation, and morphological post-processing, as shown in Fig. 1. Each step is described in further detail below in conjunction with the embodiment.

1. Static Background Modeling

The experimental data are from the gait database released by the Institute of Automation, Chinese Academy of Sciences, Beijing. It is a small database of 20 subjects; all sequences were captured outdoors with a single camera (Panasonic NV-DX100EN), and each subject was recorded walking in three directions (0°, 45°, 90°). Only the 0° data, i.e. the lateral walking sequences, are used here.

The invention models the background image with the least-median-of-squares method (LMedS), an algorithm founded on robust statistics. Robustness refers to the degree to which individual outliers influence a statistic, and robust statistics provides parameter estimation methods that tolerate outliers in the sample. Robust statistics has attracted wide attention in computer vision, chiefly because the input data of vision problems are frequently contaminated by outliers.

In robust statistics, to evaluate an algorithm's resistance to outlier perturbations, Hampel introduced the concept of the breakdown point (BP). Since the original definition is asymptotic and inconvenient to compute, Donoho and Huber defined the BP for a finite sample as:

ε*_n(T, X) = min{ m/n : β(m; T, X) = ∞ }    (1)

where T is the parameter estimator, n the size of the sample X, and β(m; T, X) the supremum of the difference between the two parameter estimates obtained before and after arbitrarily replacing m points of X. For a finite sample, the BP is the largest proportion of outliers the data may contain without invalidating the estimate obtained by the method; once the proportion of outliers in the data exceeds the BP, the estimate becomes very unstable.

There are essentially two ways to handle outliers: the accommodation-based approach, in which the method itself withstands the interference of outliers, and the approach of first rejecting the outliers and then processing the data by conventional means. The least-median-of-squares algorithm (LMedS) combines the two: under the accommodation premise, estimation is carried out with a least-mean-square (LMS) criterion, so it both suppresses outlier interference and achieves a good estimate. Rousseeuw and Leroy define the least-median-of-squares method as follows:

Given a sequence X = {x_1, x_2, …, x_i, …, x_N} in which the x_i are N observations of x, estimating x from the x_i with estimate θ̂, then:

θ̂ = arg{ min_θ med_i (x_i − θ)² }    (2)

where i = 1, 2, …, N.

The video sequences in the database were captured with a fixed camera, so in theory, absent any interference or other influence, the background is static. Tracking the gray value of a given point over time in the sequence, the curve should remain essentially flat; at points affected by moving objects or other environmental disturbances, however, the gray-level curve varies with time. The invention tracked and observed the gray levels of pixels at different positions in the gait sequences.

The static background modeling proceeds as follows.

Let I_t(x,y) denote the sequence of N captured frames, where t is the frame index (t = 1, 2, …, N) and (x, y) ∈ I_t; then the background B(x,y) is:

B(x, y) = arg{ min_p med_t (I_t(x,y) − p)² }    (3)

where p is the color (R, G, B) value to be determined at pixel position (x, y); if each component is an 8-bit image, p ranges over 0–255. t is the frame index, varying from 1 to N. (arg denotes the value of the unknown that satisfies the condition in braces.) The algorithm proceeds as follows (taking the R component as an example):

(i) select a pixel position (x, y);

(ii) set p = 0;

(iii) compute (I_1(x,y) − p)², (I_2(x,y) − p)², …, (I_N(x,y) − p)² in turn;

(iv) sort the results; if N is even, take the average of the (N/2)-th and (N/2 + 1)-th values, and if N is odd, take the ((N+1)/2)-th value; store the result in the array med, as med_0;

(v) set p = p + 1; while p ≤ 255, return to (iii) and repeat (iii), (iv), (v), storing the result as med_p; otherwise go to (vi);

(vi) find the minimum among med_0, med_1, …, med_255; the corresponding p is the background gray value at this pixel position;

(vii) select the next pixel position and return to (ii), until every pixel in the image has been processed.

Since the database images are in RGB format, the R, G, and B components are modeled separately here; combining them also yields a color background image in RGB format.
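The per-pixel search of steps (i)–(vii) can be sketched as follows. This is an illustrative numpy implementation, not code from the patent: the brute-force loop over p is vectorized, and np.median stands in for the sorting rule of step (iv).

```python
import numpy as np

def lmeds_background(frames):
    """For every pixel, pick the intensity p in 0..255 whose squared
    deviations from the N frame values have the smallest median."""
    frames = np.asarray(frames, dtype=np.int64)           # shape (N, H, W)
    p = np.arange(256, dtype=np.int64).reshape(-1, 1, 1, 1)
    sq = (frames[None] - p) ** 2                          # (256, N, H, W)
    med = np.median(sq, axis=1)                           # median over frames
    return np.argmin(med, axis=0).astype(np.uint8)        # best p per pixel
```

With a short synthetic sequence, a pixel crossed by a "moving object" in a minority of frames still recovers the background value, which is the point of the median criterion.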

2. Differential Motion Segmentation

To locate the moving target, the most common approach is to threshold the difference image obtained by subtracting the background model from the current image. A major shortcoming of this approach is that for low-contrast images the gray-level changes are too small to fix a segmentation threshold, i.e. it is hard to extract the moving target from the background completely and cleanly. The invention therefore performs the differencing with an indirect difference function instead, expressed as:

f(a, b) = 1 − [2√((a+1)(b+1)) / ((a+1)+(b+1))] · [2√((256−a)(256−b)) / ((256−a)+(256−b))]    (5)

where a and b respectively denote the gray (intensity) values of the current image and the background image at the same pixel position, with 0 ≤ a, b ≤ 255 and 0 ≤ f(a, b) ≤ 1. When a = b, f(a, b) = 0; when a ≠ b, f(a, b) grows with the difference between a and b. Moreover, the sensitivity of the difference function adapts automatically to the background gray value. Taking b = 5 and b = 100 as examples, the difference functions are shown in Fig. 2: the curves differ with b, and when b is small (b = 5) the difference function rises rapidly with a. The sensitivity thus increases automatically in low-contrast situations, and this adaptivity improves the accuracy of the segmentation.
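The difference function of Eq. (5) can be sketched as below, with the two geometric-to-arithmetic-mean ratios written out explicitly; the function name is illustrative, not from the patent.

```python
import numpy as np

def indirect_difference(a, b):
    """f(a, b) in [0, 1]: zero when a == b, growing with |a - b|,
    and more sensitive when the background gray value b is small."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    low = 2.0 * np.sqrt((a + 1) * (b + 1)) / ((a + 1) + (b + 1))
    high = 2.0 * np.sqrt((256 - a) * (256 - b)) / ((256 - a) + (256 - b))
    return 1.0 - low * high
```

Each factor is a ratio of a geometric mean to an arithmetic mean, hence at most 1, which keeps f in [0, 1] and makes f(a, a) = 0 exactly.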

Once the difference between the background image and the current image has been computed, a segmentation threshold T (0 ≤ T ≤ 1) must be selected. The invention uses the Otsu method (Otsu N. A threshold selection method from grey-level histograms. IEEE Trans. Systems, Man and Cybernetics, 1979, SMC-9(1), 62–66), i.e. T is determined by maximizing the between-class variance. The binarization is expressed as:

M(x, y) = 1 if f(a(x, y), b(x, y)) ≥ T, and 0 otherwise    (6)
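The between-class-variance maximization and the binarization of Eq. (6) can be sketched as follows. This is an illustrative histogram-based Otsu implementation over difference values in [0, 1]; the bin count and names are assumptions, not from the patent.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Return the threshold T maximizing the between-class variance
    of the two classes {v < T} and {v >= T}."""
    hist, edges = np.histogram(values, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                        # weight of the low class
    mu = np.cumsum(p * centers)              # cumulative mean
    mu_t = mu[-1]                            # overall mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    k = np.argmax(np.nan_to_num(sigma_b))    # best split bin
    return edges[k + 1]                      # split just above bin k

def binarize(f_img, T):
    """Eq. (6): foreground wherever the difference value reaches T."""
    return (f_img >= T).astype(np.uint8)
```

On a bimodal set of difference values, the returned T lands between the two modes and Eq. (6) then separates them cleanly.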

3. Dynamic Background Update

Although a static background model has been established above by the least-median-of-squares method, the background of a real video sequence does not stay fixed at all times, owing to noise and illumination changes. To obtain a more accurate differential motion segmentation (i.e. background subtraction), the background must be updated dynamically. The invention adopts the Kalman-filter approach of Karmann and Brandt (Karmann K, Brandt A. Moving object recognition using an adaptive background memory. In: Cappellini V, ed. Time-varying Image Processing and Moving Object Recognition 2. Elsevier, Amsterdam, The Netherlands, 1990). Let B_{n+1}(x,y) and B_n(x,y) be the updated and current backgrounds respectively; given the binarized foreground image M_n(x,y) obtained by motion detection on the current frame I_n(x,y), the background update is:

B_{n+1}(x,y) = M_n(x,y)·B_n(x,y) + [1 − M_n(x,y)]·[α·B_n(x,y) + (1 − α)·I_n(x,y)]

where α is a weighting coefficient; experimental analysis in the invention found α = 0.8 to be suitable. The update stops once the updated background is sufficiently close to the previous one.
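The update formula appears only as an image in the source; the sketch below implements the standard selective running-average update in the spirit of Karmann and Brandt, which matches the surrounding description (foreground pixels keep the old background, background pixels blend with the current frame under weight α = 0.8). Treat the exact formula as an assumption; names are illustrative.

```python
import numpy as np

def update_background(B, I, M, alpha=0.8):
    """One selective update step: where M == 1 (foreground) the old
    background is kept; elsewhere it is blended with the frame."""
    B = np.asarray(B, dtype=np.float64)
    I = np.asarray(I, dtype=np.float64)
    fg = np.asarray(M, dtype=bool)
    B_new = alpha * B + (1.0 - alpha) * I   # running average
    B_new[fg] = B[fg]                       # frozen under the target
    return B_new
```

Freezing the update under foreground pixels prevents the moving target itself from bleeding into the background model.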

The result of motion segmentation with the automatically updated background is shown in Fig. 3.

4. Morphological Post-processing

Noise is inevitable in the image after motion segmentation, and a few pixels of the moving target may be misclassified as background; the image therefore needs post-processing to achieve the best segmentation. The invention uses morphological filtering to eliminate the noise in the binary image and fill the gaps in the moving target.

In morphology, dilation and erosion are the most basic morphological transformations.

① Dilation

Dilation (also called expansion) is denoted by ⊕; the dilation of X by B is written X ⊕ B and is defined as

X ⊕ B = { x : (B̂)_x ∩ X ≠ ∅ }

The dilation process can be described as follows: the set B is first reflected about the origin to give B̂; B̂ is then translated by x to form (B̂)_x; finally, one collects the reference-point positions x for which (B̂)_x and X have a non-empty intersection. In other words, the dilation of X by B is the set of reference-point positions of the structuring element B at which the translate of B̂ intersects at least one non-zero element of X.

Example 1: dilation.

Fig. 6(a) shows a binary image: the shaded cells are pixels of high gray value (typically 1) and the white cells pixels of low gray value (typically 0); the upper-left corner has spatial coordinates (0, 0). Fig. 6(b) shows the structuring element B, with "+" marking its reference point. The dilation result is shown in Fig. 6(c), where black marks the newly added pixels. Comparing the result X ⊕ B with X, one finds that X has been expanded by a margin shaped by B; hence the name dilation.

② Erosion

Erosion is denoted by ⊖; the erosion of X by B is written X ⊖ B and is defined as

X ⊖ B = { x : (B)_x ⊆ X }

The erosion process can be described as follows: one collects the reference-point positions x for which B, translated by x, still lies entirely within X. In other words, the erosion of X by B is the set of reference-point positions of B at which B is completely contained in X.

Example 2: erosion.

Fig. 7(a) shows a binary image and Fig. 7(b) the structuring element B, with "+" marking the reference point. The erosion result is shown in Fig. 7(c), where black marks the pixels that remain after erosion. Comparing the result X ⊖ B with X, the region of X has shrunk: every part that cannot contain the structuring element has been eroded away.

In morphology, the opening A ∘ B is the result of first eroding A by B and then dilating the result by B:

A ∘ B = (A ⊖ B) ⊕ B

where ⊖ denotes the erosion operation and ⊕ the dilation operation. Opening completely removes object parts that cannot contain the structuring element: it smooths convex contours, breaks narrow connections, and removes small protrusions. Closing is exactly the opposite of opening: it is defined as the result of first dilating A by B and then eroding the result by B:

A • B = (A ⊕ B) ⊖ B

Closing fills holes smaller than the structuring element: it smooths the concave contours of objects and joins narrow gaps into slender bends. These properties of opening and closing can be used to filter noise and fill holes. Fig. 4 shows the result of the morphological filtering.
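The four operations above can be sketched in pure numpy with an offset-list structuring element (a full 3×3 element below); pixels outside the image are treated as 0. This is an illustrative sketch, not code from the patent.

```python
import numpy as np

def shift(X, dy, dx):
    """Translate a binary image, filling exposed borders with 0."""
    H, W = X.shape
    out = np.zeros_like(X)
    out[max(dy, 0):H + min(dy, 0), max(dx, 0):W + min(dx, 0)] = \
        X[max(-dy, 0):H + min(-dy, 0), max(-dx, 0):W + min(-dx, 0)]
    return out

SE3 = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]  # 3x3 element

def dilate(X, B=SE3):
    # X (+) B: union of X translated by every offset of B
    out = np.zeros_like(X)
    for dy, dx in B:
        out |= shift(X, dy, dx)
    return out

def erode(X, B=SE3):
    # X (-) B: positions where every offset of B lands on a 1
    out = np.ones_like(X)
    for dy, dx in B:
        out &= shift(X, -dy, -dx)
    return out

def opening(X, B=SE3):
    return dilate(erode(X, B), B)

def closing(X, B=SE3):
    return erode(dilate(X, B), B)
```

Opening deletes specks smaller than the element, while closing fills pinholes inside a blob, which is exactly the filter-and-fill role described above.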

After morphological filtering, the noise is not necessarily eliminated completely: stray noise may still form blobs of various sizes, and the moving target is usually the largest of them. A connected-region analysis is therefore applied to the image so that only the moving target is kept. The steps of the connected-region analysis are:

(i) label the connected regions;

(ii) count the number of pixels in each region;

(iii) find the region with the most pixels;

(iv) take that region as the moving target.
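Steps (i)–(iv) can be sketched as a BFS labeling that keeps only the largest blob. The 4-connectivity choice and all names are assumptions for illustration, not from the patent.

```python
import numpy as np
from collections import deque

def largest_component(M):
    """Label 4-connected regions of a binary image by BFS and
    return a mask containing only the largest region."""
    H, W = M.shape
    labels = np.zeros((H, W), dtype=int)
    sizes = {}
    next_label = 0
    for sy in range(H):
        for sx in range(W):
            if M[sy, sx] and not labels[sy, sx]:
                next_label += 1                 # new region found
                labels[sy, sx] = next_label
                q = deque([(sy, sx)])
                count = 0
                while q:                        # flood-fill the region
                    y, x = q.popleft()
                    count += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < H and 0 <= nx < W and M[ny, nx] and not labels[ny, nx]:
                            labels[ny, nx] = next_label
                            q.append((ny, nx))
                sizes[next_label] = count
    if not sizes:
        return np.zeros_like(M)
    best = max(sizes, key=sizes.get)            # region with most pixels
    return (labels == best).astype(M.dtype)
```

Applied after the morphological filtering, this removes residual noise blobs and retains the moving target alone.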

The finally obtained moving target segmentation is shown in Fig. 5.

Claims (4)

1. movement destination image partition method, each image segmentation adopts following step:
Step 1: adopt minimum meta variance method to obtain background image;
Step 2: adopt indirect difference function f ( a , b ) = 1 - 2 ( a + 1 ) ( b + 1 ) ( a + 1 ) + ( b + 1 ) 2 ( 256 - a ) ( 256 - b ) ( 256 - a ) + ( 256 - b ) Obtain difference image, a wherein, b represent present image and background image gray scale (intensity) value in same pixel position respectively, and 0≤a, and b≤255,0≤f (a, b)≤1;
Step 3: select segmentation threshold T, M ( x , y ) = 1 f ( a ( x , y ) , b ( x , y ) ) &GreaterEqual; T 0 Otherwise With the difference image binaryzation, obtain the binaryzation foreground image;
Step 4: let I(x,y)_t be the sequence of N captured frames, let B(x,y)_{n+1} and B(x,y)_n be the updated and current background images obtained according to step 1, and let M(x,y)_n be the binarized foreground image obtained according to step 3; the background image is then updated dynamically by a formula weighted by the coefficient α;
Step 5: use morphological filtering to remove the noise in the binary image and fill the gaps in the moving-target image.
2. The moving-target image segmentation method according to claim 1, wherein in step 1 the background images of the R, G, and B components are obtained respectively according to the formula B(x,y) = arg min_p { med_t (I(x,y)_t − p)² }, where p is the color value to be determined at pixel position (x,y) and t is the frame index.
3. The moving-target image segmentation method according to claim 1, wherein the threshold T in step 3 is determined by the maximum between-class variance (Otsu) method.
4. The moving-target image segmentation method according to claim 1, wherein in step 4 α is set to 0.8.
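Steps 2 and 3 of claim 1 can be sketched as follows. Note that the square roots in f(a,b) are a reconstruction (each factor read as the ratio of a geometric mean to an arithmetic mean, which satisfies the stated range 0 ≤ f ≤ 1 and gives f = 0 for identical pixels); the claim's formula is garbled in this text, so treat this as an assumption. The threshold and sample images are likewise illustrative.

```python
import numpy as np

def indirect_difference(a, b):
    """Reconstructed indirect difference of claim 1, step 2.

    Assumes each factor is geometric mean / arithmetic mean, so the
    result is 0 for a == b and approaches 1 for very different values.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    lo = 2.0 * np.sqrt((a + 1) * (b + 1)) / ((a + 1) + (b + 1))
    hi = 2.0 * np.sqrt((256 - a) * (256 - b)) / ((256 - a) + (256 - b))
    return 1.0 - lo * hi

# Identical gray values give zero difference; distant values approach 1.
assert abs(indirect_difference(100, 100)) < 1e-12
assert indirect_difference(0, 255) > 0.9

# Step 3: binarize the difference image against a threshold T (assumed 0.2).
T = 0.2
current = np.array([[10, 200], [10, 10]], dtype=np.uint8)
background = np.array([[10, 10], [10, 10]], dtype=np.uint8)
M = (indirect_difference(current, background) >= T).astype(np.uint8)
assert M[0, 1] == 1 and M[0, 0] == 0  # only the changed pixel is foreground
```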
CN2008100538305A 2008-07-11 2008-07-11 Moving Target Image Segmentation Method Active CN101315701B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008100538305A CN101315701B (en) 2008-07-11 2008-07-11 Moving Target Image Segmentation Method


Publications (2)

Publication Number Publication Date
CN101315701A CN101315701A (en) 2008-12-03
CN101315701B true CN101315701B (en) 2010-06-30

Family

ID=40106701

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008100538305A Active CN101315701B (en) 2008-07-11 2008-07-11 Moving Target Image Segmentation Method

Country Status (1)

Country Link
CN (1) CN101315701B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101510304B (en) * 2009-03-30 2014-05-21 北京中星微电子有限公司 Method, device and pick-up head for dividing and obtaining foreground image
CN101930610B (en) * 2009-06-26 2012-05-02 思创影像科技股份有限公司 Moving Object Detection Method Using Adaptive Background Model
CN101789128B (en) * 2010-03-09 2012-01-18 成都三泰电子实业股份有限公司 Target detection and tracking method based on DSP and digital image processing system
CN102096931B (en) * 2011-03-04 2013-01-09 中南大学 Moving target real-time detection method based on layering background modeling
CN103366569B (en) * 2013-06-26 2015-10-07 东南大学 The method and system of real-time grasp shoot traffic violation vehicle
CN103745216B (en) * 2014-01-02 2016-10-26 中国民航科学技术研究院 A kind of radar image clutter suppression method based on Spatial characteristic
WO2016011641A1 (en) * 2014-07-24 2016-01-28 徐勇 Adaptive sobs improvement method and video surveillance system based on the method
CN105657317B (en) * 2014-11-14 2018-10-16 澜至电子科技(成都)有限公司 A kind of interlaced video method for testing motion in video release of an interleave and its system
CN105335942A (en) * 2015-09-22 2016-02-17 成都融创智谷科技有限公司 Local enhancement image acquisition method of moving object on the basis of Canny operator
CN112074040B (en) * 2020-08-19 2023-05-30 福建众益太阳能科技股份公司 Solar intelligent monitoring street lamp and monitoring control method thereof
CN112418105B (en) * 2020-11-25 2022-09-27 湖北工业大学 Detection method of moving ship target in high maneuvering satellite time series remote sensing image based on difference method
CN113160109B (en) * 2020-12-15 2023-11-07 宁波大学 Cell image segmentation method based on anti-background difference
CN113411509B (en) * 2021-06-15 2023-09-26 西安微电子技术研究所 Satellite-borne autonomous vision processing system
CN115546890B (en) * 2022-09-20 2025-06-13 国武时代国际文化传媒(北京)有限公司 A correction guidance method and system based on user motion feature extraction
CN118135065B (en) * 2024-05-07 2024-08-06 山东汉鑫科技股份有限公司 Tunnel dynamic gray scale map generation method, system, storage medium and electronic equipment



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant