
CN101714256A - Omnibearing vision based method for identifying and positioning dynamic target - Google Patents

Omnibearing vision based method for identifying and positioning dynamic target

Info

Publication number
CN101714256A
CN101714256A (application CN200910228580A)
Authority
CN
China
Prior art keywords
particle
image
target
delta
omni
Prior art date
Legal status
Granted
Application number
CN200910228580A
Other languages
Chinese (zh)
Other versions
CN101714256B (en)
Inventor
丁承君
段萍
王南
张明路
Current Assignee
Hebei University of Technology
Original Assignee
Hebei University of Technology
Priority date
Filing date
Publication date
Application filed by Hebei University of Technology filed Critical Hebei University of Technology
Priority to CN2009102285809A priority Critical patent/CN101714256B/en
Publication of CN101714256A publication Critical patent/CN101714256A/en
Application granted granted Critical
Publication of CN101714256B publication Critical patent/CN101714256B/en
Expired - Fee Related
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of dynamic image analysis and relates to a method for identifying and positioning a dynamic target based on omnidirectional vision, comprising: Step 1: acquire an omnidirectional vision image sequence and preprocess it to obtain a binary image that separates the moving target from the background; Step 2: search local regions with the optical flow method, match feature points between adjacent frames of the image, and detect the moving target in the image sequence; Step 3: estimate the target's motion state with a particle filter algorithm and predict the parameters of the moving target in subsequent frames, completing the tracking process. Identifying and positioning dynamic targets with the proposed method markedly reduces the amount of computation and improves accuracy.

Figure 200910228580

Description

A Dynamic Target Recognition and Positioning Method Based on Omnidirectional Vision

Technical Field

The invention belongs to the technical field of dynamic image analysis and relates to a target recognition and positioning method based on omnidirectional vision.

Background Art

The basic task of dynamic image analysis is to detect motion information from image sequences and to identify and track moving targets. It involves image processing, image analysis, artificial intelligence and pattern recognition, computer vision, and other research fields; it is a very active branch of image processing and computer vision and has been widely applied in industrial production, medical care, national defense, and other areas, so research on it has great practical significance.

To identify and track moving targets, the optical flow field method is commonly used: the optical flow field is extracted from an image sequence containing moving targets acquired in real time, regions with large optical flow are selected as moving-target regions, and the velocity vector of the moving target is computed, thereby achieving tracking.

Previous optical-flow-based target detection methods fall into two categories: (1) differential optical flow techniques, which use the basic optical flow equation with additional constraints to obtain a dense optical flow field and then extract the moving target; the drawback is heavy computation and poor real-time performance; (2) feature optical flow techniques, which find and match feature points in the image to obtain a sparse optical flow field and extract the target; real-time performance improves, but the amount of information is insufficient, which easily causes missed detections. As for tracking, previous approaches often treated detection and tracking separately, tracking on the basis of target features only after detection is achieved; this increases algorithmic complexity and complicates handling the entry and exit of targets.

Summary of the Invention

The purpose of the invention is to address the above deficiencies of the prior art; the invention proposes an effective method for identifying and tracking a maneuvering target under omnidirectional vision. The method improves the real-time performance and robustness of recognition and tracking and gives a mobile robot the combined functions of landmark-based autonomous navigation and maneuvering-target tracking.

The technical scheme adopted by the invention is as follows:

A method for identifying and positioning a dynamic target based on omnidirectional vision, comprising the following steps:

Step 1: acquire an omnidirectional vision image sequence and preprocess it to obtain a binary image that separates the moving target from the background;

Step 2: search local regions with the optical flow method, match feature points between adjacent frames of the image, and detect the moving target in the image sequence;

Step 3: estimate the target's motion state with a particle filter algorithm, predict the parameters of the moving target in subsequent frames, and complete the tracking process.

As a preferred embodiment of the above method, step 2 is carried out as follows. Let the moving image function f(x, y) be continuous in the variables x and y. At time t, the gray value at a point a = (x, y) on the image is f_t(x, y); at time t + Δt, this point moves to a new position (x + Δx, y + Δy), with gray value f_{t+Δt}(x + Δx, y + Δy). The purpose of matching is to find the point corresponding to a such that f_t(x, y) = f_{t+Δt}(x + Δx, y + Δy) and such that, within a set M × N neighborhood of a = (x, y), the mean squared error MSE(Δx, Δy) is minimized; the minimizer is the optimal matching point opt = (Δx, Δy).

Let f = f_t(x, y) − f_{t+Δt}(x, y) and let ∇f be the gradient at pixel (Δx, Δy); then, letting

$$U=\sum_{m=1}^{M}\sum_{n=1}^{N}\nabla f^{T},\qquad V=\sum_{m=1}^{M}\sum_{n=1}^{N}f,$$

the optimal matching point opt = (Δx, Δy) = U⁻¹V is obtained; feature points in the image are found and matched, and the moving target of the image sequence is detected;

Step 3 is carried out as follows:

(1) Based on the result of step 2, locate the initial target and obtain its initial motion parameters:

P_init = (P_init^x, P_init^y). Let each particle represent one possible motion state, take the number of particles to be N, and set the initial particle weight w_i = 1; there are then N possible motion state parameters P_i = (P_i^X, P_i^Y), i ∈ 1, …, N.

(2) Perform the particle resampling process: eliminate particles with smaller weights and retain particles with larger weights;

(3) Enter the iterative process of the particle filter algorithm: in every frame from the second frame onward, perform the system state transition and system observation for each particle, compute the particle weights, and weight all particles to output the estimated value of the target state, completing the tracking process;

The state transition follows these formulas: for particle N_i, P_i^{Xt} = A_1 P_i^{Xt−1} + B_1 w_i^{t−1} and P_i^{Yt} = A_2 P_i^{Yt−1} + B_2 w_i^{t−1}, where A_1, A_2, B_1, B_2 are constants, A is taken to be 1, B is the particle propagation radius, and w is a random number in [−1, 1];

The system observation is performed as follows:

(1) After each particle's state transition, use its new coordinates to compute a minimum mean absolute difference function MAD_i;

(2) Let the probability density function p(z_k | x_k^i) be a Gaussian-shaped function of MAD_i with scale constant σ; the weight of each particle is then w_k^i = w_{k−1}^i p(z_k | x_k^i);

(3) Normalize the weight of each particle: w_k^i = w_k^i / Σ_{i=1}^{N} w_k^i;

(4) For further optimal estimation, assume the posterior probability at time t is known; the tracking parameter P is then expressed as p_{Xt}^{opt} = Σ_{i=1}^{N} w^i p_X^i and p_{Yt}^{opt} = Σ_{i=1}^{N} w^i p_Y^i. Afterwards, set t = t + 1 and return to the resampling step.

The substantive features of the invention are as follows: first, the omnidirectional vision image is preprocessed; then the optical flow method is used to find and match feature points in the image, yielding a sparse optical flow field; finally, a particle filter predicts the parameters of the moving target in subsequent frames, a matching matrix between adjacent frames is established, and the matching matrix is analyzed to judge the state of the moving target, so that the moving target is tracked effectively. Compared with existing methods, the proposed method markedly reduces the amount of computation and improves accuracy.

Brief Description of the Drawings

Fig. 1 is the overall flow chart of the optical flow-particle compound recognition tracker for omnidirectional vision environments according to the invention.

Detailed Description of the Embodiments

Referring to Fig. 1, the method of the invention for identifying and positioning a dynamic target based on omnidirectional vision comprises the following steps:

Step 1: acquire an omnidirectional vision image sequence and preprocess it to separate the target from the background, in preparation for the subsequent optical flow field computation. The image is pre-smoothed with a Gaussian low-pass filter and then gradient-sharpened to find the moving edges of objects. To segment the target object from the background, threshold segmentation is performed: a threshold is first selected directly from the histogram (for an image sequence, the threshold is adjusted dynamically), and the gray value of each pixel is compared with the threshold; if it exceeds the threshold, the pixel's gray value is set to 255 (background), otherwise to 0 (object), thereby distinguishing the moving target from the background. After thresholding, the image becomes a binary image with only the two gray values 0 and 255.
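For concreteness, a minimal sketch of this preprocessing step follows, assuming OpenCV and NumPy; the kernel size, Gaussian sigma, and the use of Otsu's method as a stand-in for the patent's dynamically adjusted histogram threshold are illustrative assumptions, not values fixed by the patent.

```python
import cv2
import numpy as np

def preprocess(frame, thresh=None):
    """Step 1 sketch: smooth, gradient-sharpen, and threshold a frame into
    a binary image (255 = background, 0 = moving object)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pre-smooth with a Gaussian low-pass filter (kernel/sigma assumed).
    smoothed = cv2.GaussianBlur(gray, (5, 5), 1.5)
    # Gradient sharpening: add the gradient magnitude back onto the image
    # to emphasize moving edges.
    gx = cv2.Sobel(smoothed, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(smoothed, cv2.CV_32F, 0, 1)
    sharpened = cv2.convertScaleAbs(smoothed.astype(np.float32)
                                    + cv2.magnitude(gx, gy))
    # Histogram-based threshold; Otsu stands in for the patent's
    # dynamically adjusted threshold.
    if thresh is None:
        _, binary = cv2.threshold(sharpened, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    else:
        _, binary = cv2.threshold(sharpened, thresh, 255, cv2.THRESH_BINARY)
    return binary
```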

Step 2: search local regions with the optical flow method and match feature points between adjacent frames of the image.

For an image sequence, the time interval between adjacent frames is very small, a spatial point moves little between two adjacent frames, and object positions in consecutive frames are strongly correlated.

Let the moving image function f(x, y) be continuous in the variables x and y. At time t, the gray value at a point a = (x, y) on the image is f_t(x, y); at time t + Δt, this point moves to a new position (x + Δx, y + Δy), with gray value f_{t+Δt}(x + Δx, y + Δy). The purpose of matching is to find the point corresponding to a with the same gray value as f_t(x, y), that is,

$$f_t(x,y)=f_{t+\Delta t}(x+\Delta x,\,y+\Delta y) \qquad (1)$$

and to make the mean squared error MSE(Δx, Δy) minimal within the set M × N neighborhood of the point a = (x, y).

$$\mathrm{MSE}(\Delta x,\Delta y)=\frac{1}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N}\bigl[f_t(x,y)-f_{t+\Delta t}(x+\Delta x,\,y+\Delta y)\bigr]^2 \qquad (2)$$

The (Δx, Δy) that minimizes MSE(Δx, Δy) is the optimal matching point opt = (Δx, Δy).

Set the first derivative of MSE(Δx, Δy) with respect to (Δx, Δy) to zero:

$$\left.\frac{\partial\,\mathrm{MSE}(\Delta x,\Delta y)}{\partial(\Delta x,\Delta y)}\right|_{(\Delta x,\Delta y)=\mathrm{opt}}=(0,0) \qquad (3)$$

From (2), we obtain

$$\frac{\partial\,\mathrm{MSE}(\Delta x,\Delta y)}{\partial(\Delta x,\Delta y)}=-\frac{2}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N}\bigl[f_t(x,y)-f_{t+\Delta t}(x+\Delta x,\,y+\Delta y)\bigr]\cdot\left(\frac{\partial f_{t+\Delta t}}{\partial\Delta x},\,\frac{\partial f_{t+\Delta t}}{\partial\Delta y}\right) \qquad (4)$$

Expanding with Taylor's formula:

$$\frac{\partial\,\mathrm{MSE}(\Delta x,\Delta y)}{\partial(\Delta x,\Delta y)}=-\frac{2}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N}\left[f_t(x,y)-f_{t+\Delta t}(x,y)-\left(\frac{\partial f_{t+\Delta t}}{\partial\Delta x},\,\frac{\partial f_{t+\Delta t}}{\partial\Delta y}\right)\cdot(\Delta x,\Delta y)\right]\cdot\left(\frac{\partial f_{t+\Delta t}}{\partial\Delta x},\,\frac{\partial f_{t+\Delta t}}{\partial\Delta y}\right) \qquad (5)$$

Let f = f_t(x, y) − f_{t+Δt}(x, y), and let ∇f denote the gradient at pixel (Δx, Δy).

(5) then simplifies to

$$\frac{\partial\,\mathrm{MSE}(\Delta x,\Delta y)}{\partial(\Delta x,\Delta y)}=-\frac{2}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N}\bigl[f-\nabla f^{T}\cdot(\Delta x,\Delta y)\bigr]\cdot\nabla f^{T} \qquad (6)$$

Furthermore, since $\nabla f\cdot\nabla f^{T}=1$,

the above formula simplifies to:

$$\frac{MN}{2}\cdot\left[\frac{\partial\,\mathrm{MSE}(\Delta x,\Delta y)}{\partial(\Delta x,\Delta y)}\right]^{T}=\sum_{m=1}^{M}\sum_{n=1}^{N}\nabla f^{T}\cdot(\Delta x,\Delta y)-\sum_{m=1}^{M}\sum_{n=1}^{N}f \qquad (7)$$

Let

$$U=\sum_{m=1}^{M}\sum_{n=1}^{N}\nabla f^{T},\qquad V=\sum_{m=1}^{M}\sum_{n=1}^{N}f$$

The optimal matching point opt = (Δx, Δy) = U⁻¹V is then obtained.

By finding and matching feature points in the image in this way, the moving target of the image sequence is detected.
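As a concrete illustration of the matching criterion, the sketch below brute-force minimizes the block MSE of formula (2) over a search window; the block size and search radius are assumed values, and in the patent's method the closed-form solution opt = U⁻¹V replaces such an exhaustive search.

```python
import numpy as np

def best_match(prev, curr, x, y, M=8, N=8, radius=4):
    """Brute-force minimizer of formula (2): find the displacement (dx, dy)
    of the M x N block at (x, y) in frame t (prev) that minimizes the mean
    squared error against frame t + dt (curr)."""
    block = prev[y:y + M, x:x + N].astype(np.float32)
    best_mse, opt = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            xs, ys = x + dx, y + dy
            if xs < 0 or ys < 0:
                continue  # displaced window left the image
            cand = curr[ys:ys + M, xs:xs + N].astype(np.float32)
            if cand.shape != block.shape:
                continue
            mse = float(np.mean((block - cand) ** 2))
            if mse < best_mse:
                best_mse, opt = mse, (dx, dy)
    return opt  # optimal matching displacement (dx, dy)
```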

Step 3: using the effective features of the target, estimate the target's motion state with a particle filter algorithm and predict the parameters of the moving target in subsequent frames, completing the tracking process.

First, perform particle initialization: locate the initial target block to obtain the template w_k (by manual initialization, automatic initialization, etc.); then obtain the initial state of the target, that is, its state P_init = (P_init^x, P_init^y) at the moment it first appears. Take the number of particles to be N (each particle represents one possible motion state) and set the initial particle weight w_i = 1; there are then N possible motion state parameters P_i = (P_i^X, P_i^Y), i ∈ 1, …, N, where each P_i may be chosen from points within a certain range around P_init.

Then perform the particle resampling process: eliminate particles with smaller weights and retain particles with larger weights.

Finally, preset the number of iterations and enter the iterative process of the particle filter algorithm. In every frame from the second frame onward, perform the system state transition and system observation for each particle, compute the particle weights, and weight all particles to output the estimated value of the target state.

State transition: for particle N_i,

$$P_i^{Xt}=A_1P_i^{Xt-1}+B_1w_i^{t-1} \qquad (8)$$

$$P_i^{Yt}=A_2P_i^{Yt-1}+B_2w_i^{t-1} \qquad (9)$$

where A_1, A_2, B_1, and B_2 are constants; A is generally taken to be 1, B is the particle propagation radius (the range over which a particle can spread during the state transition), and w is a random number in [−1, 1].
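A minimal sketch of this state transition applied to a whole particle set, with assumed propagation radii (the patent leaves B application-dependent):

```python
import numpy as np

def transition(px, py, B1=10.0, B2=10.0):
    """Formulas (8)-(9) applied to arrays of particle coordinates:
    new = A * old + B * w, with A1 = A2 = 1 and w uniform in [-1, 1]."""
    n = len(px)
    px_new = px + B1 * np.random.uniform(-1.0, 1.0, n)
    py_new = py + B2 * np.random.uniform(-1.0, 1.0, n)
    return px_new, py_new
```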

System observation: after each particle's state transition, a MAD_i is computed from the corresponding new coordinates. The probability density function p(z_k | x_k^i) is taken to be a Gaussian-shaped function of MAD_i with scale constant σ (10), where MAD is the minimum mean absolute difference function:

$$\mathrm{MAD}(i,j)=\frac{1}{M\times N}\sum_{m=1}^{M}\sum_{n=1}^{N}\bigl|T(m,n)-F(m+i,\,n+j)\bigr|$$

The weight of each particle is then

$$w_k^i=w_{k-1}^i\,p(z_k\mid x_k^i) \qquad (11)$$

Normalization:

$$w_k^i=w_k^i\Big/\sum_{i=1}^{N}w_k^i \qquad (12)$$

For further optimal estimation, assume the posterior probability at time t is known; the tracking parameter P can then be expressed as:

$$p_{Xt}^{\mathrm{opt}}=\sum_{i=1}^{N}w^{i}p_{X}^{i},\qquad p_{Yt}^{\mathrm{opt}}=\sum_{i=1}^{N}w^{i}p_{Y}^{i} \qquad (13)$$

Afterwards, set t = t + 1 and return to the resampling step.
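The pieces of step 3 can be combined into a single iteration. The sketch below assumes a Gaussian form exp(−MAD_i²/2σ²) for the observation density (the patent specifies it only as a density in MAD_i with constant σ) and assumed values for σ and the propagation radius; `mad` and `particle_filter_step` are illustrative names.

```python
import numpy as np

def mad(template, frame, x, y):
    """Formula (10): mean absolute difference between the target template T
    and the frame patch F at candidate position (x, y)."""
    M, N = template.shape
    if x < 0 or y < 0:
        return np.inf
    patch = frame[y:y + M, x:x + N].astype(np.float32)
    if patch.shape != template.shape:
        return np.inf  # candidate window left the image
    return float(np.mean(np.abs(template.astype(np.float32) - patch)))

def particle_filter_step(px, py, w, template, frame, radius=10.0, sigma=10.0):
    """One iteration of (8)-(13): state transition, MAD-based observation,
    weight update (11), normalization (12), weighted estimate (13), and
    resampling that favors high-weight particles."""
    n = len(px)
    # State transition (8)-(9) with A = 1 and B = radius.
    px = px + radius * np.random.uniform(-1.0, 1.0, n)
    py = py + radius * np.random.uniform(-1.0, 1.0, n)
    # Observation: assumed Gaussian-shaped likelihood in the MAD score.
    scores = np.array([mad(template, frame, int(x), int(y))
                       for x, y in zip(px, py)])
    w = w * np.exp(-scores ** 2 / (2.0 * sigma ** 2))  # weight update (11)
    total = w.sum()
    w = w / total if total > 0 else np.full(n, 1.0 / n)  # normalization (12)
    x_est = float(np.sum(w * px))  # weighted state estimate (13)
    y_est = float(np.sum(w * py))
    # Resample: duplicate heavy particles, discard light ones, reset weights.
    idx = np.random.choice(n, size=n, p=w)
    return px[idx], py[idx], np.ones(n), (x_est, y_est)
```

In use, the coordinate arrays would be initialized around P_init with unit weights, matching initialization step (1) above.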

Claims (5)

1. A method for identifying and positioning a dynamic target based on omnidirectional vision, comprising the following steps:
Step 1: acquiring an omnidirectional vision image sequence and preprocessing it to obtain a binary image that separates the moving target from the background;
Step 2: searching local regions with the optical flow method, matching feature points between adjacent frames of the image, and detecting the moving target in the image sequence;
Step 3: estimating the target's motion state with a particle filter algorithm and predicting the parameters of the moving target in subsequent frames to complete the tracking process.
2. The method for identifying and positioning a dynamic target based on omnidirectional vision according to claim 1, wherein step 2 is carried out as follows: let the moving image function f(x, y) be continuous in the variables x and y; at time t, the gray value at a point a = (x, y) on the image is f_t(x, y); at time t + Δt, this point moves to a new position (x + Δx, y + Δy), with gray value f_{t+Δt}(x + Δx, y + Δy); the purpose of matching is to find the point corresponding to a such that f_t(x, y) = f_{t+Δt}(x + Δx, y + Δy) and, within a set M × N neighborhood of a = (x, y), the mean squared error MSE(Δx, Δy) is minimized, the minimizer being the optimal matching point opt = (Δx, Δy);
letting f = f_t(x, y) − f_{t+Δt}(x, y), ∇f be the gradient at pixel (Δx, Δy), U = Σ_{m=1}^{M} Σ_{n=1}^{N} ∇f^T, and V = Σ_{m=1}^{M} Σ_{n=1}^{N} f, the optimal matching point opt = (Δx, Δy) = U⁻¹V is obtained; feature points are sought and matched in the image, and the moving target of the image sequence is detected.
3. The method for identifying and positioning a dynamic target based on omnidirectional vision according to claim 1, wherein step 3 is carried out as follows:
(1) based on the result of step 2, locating the initial target and obtaining its initial motion parameters P_init = (P_init^x, P_init^y); letting each particle represent one possible motion state, taking the number of particles to be N and the initial particle weight w_i = 1, there are N possible motion state parameters P_i = (P_i^X, P_i^Y), i ∈ 1, …, N;
(2) performing the particle resampling process, eliminating particles with smaller weights and retaining particles with larger weights;
(3) entering the iterative process of the particle filter algorithm: in every frame from the second frame onward, performing the system state transition and system observation for each particle, computing the particle weights, and weighting all particles to output the estimated value of the target state, completing the tracking process.
4. The method for identifying and positioning a dynamic target based on omnidirectional vision according to claim 3, wherein the state transition is performed according to the following formulas: for particle N_i, P_i^{Xt} = A_1 P_i^{Xt−1} + B_1 w_i^{t−1} and P_i^{Yt} = A_2 P_i^{Yt−1} + B_2 w_i^{t−1}, wherein A_1, A_2, B_1, B_2 are constants, A is taken to be 1, B is the particle propagation radius, and w is a random number in [−1, 1].
5. The method for identifying and positioning a dynamic target based on omnidirectional vision according to claim 4, wherein the system observation is performed as follows:
(1) after each particle's state transition, computing a minimum mean absolute difference function MAD_i from its new coordinates;
(2) taking the probability density function p(z_k | x_k^i) to be a Gaussian-shaped function of MAD_i with scale constant σ, the weight of each particle being w_k^i = w_{k−1}^i p(z_k | x_k^i);
(3) normalizing the weights of the particles: w_k^i = w_k^i / Σ_{i=1}^{N} w_k^i;
(4) for further optimal estimation, assuming the posterior probability at time t is known, expressing the tracking parameter P as p_{Xt}^{opt} = Σ_{i=1}^{N} w^i p_X^i and p_{Yt}^{opt} = Σ_{i=1}^{N} w^i p_Y^i; afterwards, t = t + 1 may be set again, returning to resampling.
CN2009102285809A 2009-11-13 2009-11-13 Omnibearing vision based method for identifying and positioning dynamic target Expired - Fee Related CN101714256B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009102285809A CN101714256B (en) 2009-11-13 2009-11-13 Omnibearing vision based method for identifying and positioning dynamic target

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2009102285809A CN101714256B (en) 2009-11-13 2009-11-13 Omnibearing vision based method for identifying and positioning dynamic target

Publications (2)

Publication Number Publication Date
CN101714256A true CN101714256A (en) 2010-05-26
CN101714256B CN101714256B (en) 2011-12-14

Family

ID=42417873

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009102285809A Expired - Fee Related CN101714256B (en) 2009-11-13 2009-11-13 Omnibearing vision based method for identifying and positioning dynamic target

Country Status (1)

Country Link
CN (1) CN101714256B (en)


Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102110297B (en) * 2011-03-02 2012-10-10 无锡慧眼电子科技有限公司 Detection method based on accumulated light stream and double-background filtration
CN102110297A (en) * 2011-03-02 2011-06-29 无锡慧眼电子科技有限公司 Detection method based on accumulated light stream and double-background filtration
WO2015014111A1 (en) * 2013-08-01 2015-02-05 华为技术有限公司 Optical flow tracking method and apparatus
US9536147B2 (en) 2013-08-01 2017-01-03 Huawei Technologies Co., Ltd. Optical flow tracking method and apparatus
CN104778677A (en) * 2014-01-13 2015-07-15 联想(北京)有限公司 Positioning method, device and equipment
CN106462960A (en) * 2014-04-23 2017-02-22 微软技术许可有限责任公司 Collaborative alignment of images
CN106483577A (en) * 2015-09-01 2017-03-08 中国航天科工集团第四研究院指挥自动化技术研发与应用中心 A kind of optical detecting gear
CN105975911B (en) * 2016-04-28 2019-04-19 大连民族大学 Filter-based energy-aware motion salient object detection method
CN105975911A (en) * 2016-04-28 2016-09-28 大连民族大学 Energy perception motion significance target detection algorithm based on filter
CN106447696A (en) * 2016-09-29 2017-02-22 郑州轻工业学院 Bidirectional SIFT (scale invariant feature transformation) flow motion evaluation-based large-displacement target sparse tracking method
CN106447696B (en) * 2016-09-29 2017-08-25 郑州轻工业学院 A kind of big displacement target sparse tracking that locomotion evaluation is flowed based on two-way SIFT
CN106950985A (en) * 2017-03-20 2017-07-14 成都通甲优博科技有限责任公司 A kind of automatic delivery method and device
CN107065866A (en) * 2017-03-24 2017-08-18 北京工业大学 A kind of Mobile Robotics Navigation method based on improvement optical flow algorithm
CN107764271A (en) * 2017-11-15 2018-03-06 华南理工大学 A kind of photopic vision dynamic positioning method and system based on light stream
CN107764271B (en) * 2017-11-15 2023-09-26 华南理工大学 Visible light visual dynamic positioning method and system based on optical flow
CN108053446A (en) * 2017-12-11 2018-05-18 北京奇虎科技有限公司 Localization method, device and electronic equipment based on cloud
CN108920997A (en) * 2018-04-10 2018-11-30 国网浙江省电力有限公司信息通信分公司 Judge that non-rigid targets whether there is the tracking blocked based on profile
CN109255329A (en) * 2018-09-07 2019-01-22 百度在线网络技术(北京)有限公司 Determine method, apparatus, storage medium and the terminal device of head pose
CN111147763A (en) * 2019-12-29 2020-05-12 眸芯科技(上海)有限公司 Image processing method based on gray value and application
CN111951949A (en) * 2020-01-21 2020-11-17 梅里医疗科技(洋浦)有限责任公司 Intelligent nursing interaction system for intelligent ward
CN111951949B (en) * 2020-01-21 2021-11-09 武汉博科国泰信息技术有限公司 Intelligent nursing interaction system for intelligent ward
CN114347030A (en) * 2022-01-13 2022-04-15 中通服创立信息科技有限责任公司 Robot vision following method and vision following robot
CN115962783A (en) * 2023-03-16 2023-04-14 太原理工大学 Positioning method of cutting head of heading machine and heading machine

Also Published As

Publication number Publication date
CN101714256B (en) 2011-12-14

Similar Documents

Publication Publication Date Title
CN101714256A (en) Omnibearing vision based method for identifying and positioning dynamic target
CN106875424B (en) A kind of urban environment driving vehicle Activity recognition method based on machine vision
CN106097391B (en) A kind of multi-object tracking method of the identification auxiliary based on deep neural network
CN101800890B (en) Multiple vehicle video tracking method in expressway monitoring scene
CN102646279B (en) Anti-shielding tracking method based on moving prediction and multi-sub-block template matching combination
CN102629385B (en) A target matching and tracking system and method based on multi-camera information fusion
CN102385690B (en) Target tracking method and system based on video image
CN104050477B (en) Infrared image vehicle detection method based on auxiliary road information and significance detection
CN106127807A (en) A kind of real-time video multiclass multi-object tracking method
CN109191497A (en) A kind of real-time online multi-object tracking method based on much information fusion
Elmezain et al. Hand trajectory-based gesture spotting and recognition using HMM
CN104036523A (en) Improved mean shift target tracking method based on surf features
CN101404086A (en) Target tracking method and device based on video
CN112989889B (en) Gait recognition method based on gesture guidance
CN103793715B (en) Underground Personnel Target Tracking Method Based on Scene Information Mining
CN110084830B (en) Video moving object detection and tracking method
CN110490904B (en) Weak and small target detection and tracking method
Nandhini et al. SIFT algorithm-based Object detection and tracking in the video image
CN103577832B (en) A kind of based on the contextual people flow rate statistical method of space-time
CN111161308A (en) Dual-band fusion target extraction method based on key point matching
CN102261916B (en) Vision-based lunar rover positioning method in sandy environment
CN106127766B (en) Method for tracking target based on Space Coupling relationship and historical models
Qing et al. A novel particle filter implementation for a multiple-vehicle detection and tracking system using tail light segmentation
Wang et al. Deep learning-based robust positioning scheme for imaging sonar guided dynamic docking of autonomous underwater vehicle
CN110334674A (en) A Plane Free Body Trajectory Recognition Tracking and Prediction Method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20111214

Termination date: 20141113

EXPY Termination of patent right or utility model