
CN102184551A - Automatic target tracking method and system by combining multi-characteristic matching and particle filtering - Google Patents

Automatic target tracking method and system by combining multi-characteristic matching and particle filtering

Info

Publication number
CN102184551A
CN102184551A CN2011101189182A CN201110118918A
Authority
CN
China
Prior art keywords
target
tracking
particle
particle filter
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2011101189182A
Other languages
Chinese (zh)
Inventor
魏颖
吴迪
贾同
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeastern University China
Original Assignee
Northeastern University China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeastern University China filed Critical Northeastern University China
Priority to CN2011101189182A priority Critical patent/CN102184551A/en
Publication of CN102184551A publication Critical patent/CN102184551A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to an automatic target tracking method and system combining multi-feature matching and particle filtering. The target tracking system consists of three parts: a video acquisition module, a tracking-algorithm computation module and an output control module. The video acquisition module initializes the capture card and acquires images in real time. The tracking-algorithm computation module provides three tracking modes (particle filter tracking based on gray-level template region matching, particle filter tracking based on a color probability distribution, and particle filter tracking based on SIFT feature matching), realizing target tracking in both translation space and affine space. The output control module uses the center of the tracked target position as the control command sent to the pan/tilt head, so that the camera follows the motion of the target object.

Description

Automatic target tracking method and system combining multi-feature matching and particle filtering
Technical field
The present invention relates to a target tracking method and system combining multi-feature matching and particle filtering, and in particular to target tracking in translation space and affine space using three methods: particle filter tracking based on gray-level template region matching, particle filter tracking based on a color probability distribution, and particle filter tracking based on SIFT feature matching.
Background technology
Target tracking results contain a large amount of spatio-temporal information about the moving units in a scene, and are widely used in military visual guidance, robot visual navigation, security monitoring, traffic control, medical diagnosis, virtual reality, battlefield warning, public safety surveillance, video compression, meteorological analysis and many other fields.
Since the 1980s, scholars at home and abroad have proposed many algorithms for target tracking in video images. Visual tracking methods can be divided into four classes. (1) Region-based tracking: its advantage is that when the target is not occluded the tracking accuracy is high and the tracking is very stable; its drawbacks are, first, that it is time-consuming, especially when the search region is large, and second, that it requires the target deformation to be small and occlusion not to be severe, otherwise the correlation accuracy drops and the target may be lost. (2) Feature-based tracking: its advantage is that even if part of the target is occluded, the tracking task can be completed as long as some features remain visible. The difficulty of this approach is determining a unique feature set for a given moving target, which is itself a pattern recognition problem; if too many features are adopted, system efficiency decreases and errors become more likely. (3) Deformable-template-based tracking: the commonly used deformable template is the active contour model proposed by Kass in 1987, also called the Snake model. The Snake model is well suited to tracking deformable objects, and tracks even better when combined with Kalman filtering, but it is mainly suitable for single-target tracking. (4) Model-based tracking: its advantage is that it can accurately evaluate the three-dimensional motion trajectory of the target and track reliably even when the pose of the moving object changes. Its drawback is that the accuracy of the motion analysis depends on the accuracy of the geometric model, and obtaining precise geometric models of all moving targets in real life is very difficult. This limits the use of model-based tracking algorithms; moreover, 3D-model-based tracking algorithms often require a large amount of computation time, making real-time moving-target tracking difficult to achieve.
The idea of Bayesian video object tracking is to convert the target tracking problem into a Bayesian estimation problem: given the prior probability of the target state, the maximum a posteriori probability of the target state is solved recursively as new measurements are obtained. The particle filter is a practical algorithm for solving this Bayesian probability. It realizes recursive Bayesian filtering through non-parametric Monte Carlo simulation, is applicable to any nonlinear system that can be represented by a state-space model, including nonlinear systems that the traditional Kalman filter cannot represent, and its accuracy can approach the optimal estimate. The particle filter method is flexible to use, is easy to implement with a parallel structure, and is quite practical; it has more practical value than traditional Bayesian filters (Kalman filter, grid filter, etc.). By analyzing the tracking problem, more effective and faster particle filter algorithms can be constructed.
The present invention describes a target tracking method and system combining multi-feature matching and particle filtering, and builds a particle-filter-based tracking framework together with its hardware and software implementation. The core content comprises target features at three different levels and their extraction methods: the gray-level feature, the color feature and the SIFT feature are used as descriptions of the target, and particle filter tracking based on each of these three features is realized. A BL-E854CB network video camera, a DR68 series pan/tilt head, a video capture card and a PC constitute the vision tracking system. VC++ 6.0 is used to implement particle filter tracking based on gray-level template region matching, particle filter tracking based on a color probability distribution, particle filter tracking based on SIFT feature matching, and CamShift tracking. The system can track automatically or semi-automatically extracted targets, and achieves stable target tracking when the moving target undergoes large scale change, rotation, affine deformation, brightness and contrast changes, or partial occlusion.
Summary of the invention
The invention provides a target tracking method and system combining multi-feature matching and particle filtering, and builds a particle-filter-based tracking framework together with its hardware and software implementation.
The particular content of the present invention is as follows:
(1) Proposal and design of particle filter target tracking methods based on three kinds of features
The core content of the present invention is the design of particle filter target tracking algorithms based on three kinds of features: particle filter tracking based on gray-level template region matching, particle filter tracking based on a color probability distribution, and particle filter tracking based on SIFT feature matching.
1. Particle filter tracking based on gray-level template region matching
Particle filter tracking based on gray-level template region matching takes the gray-level template of traditional region-matching tracking as the target description, represents the estimate of the target parameters by the weighted sum of the particles, and makes each particle's weight proportional to its matching value. Combining the region matching algorithm with the particle filter tracking method embodies both the intuitive, practical character of region-matching tracking and the advantage of the "multi-modal" tracking of the particle filter; it greatly improves the robustness of tracking, and is also a practical approach to searching the motion parameters in affine space.
2. Particle filter tracking based on a color probability distribution
The color feature of a target is one of the most basic perceptual features. Like contours, corner points and texture features, color is a low-level feature of the target; it gives a more intuitive impression and does not require particularly complex semantic description.
Color is a strong descriptor that can often simplify distinguishing and extracting objects from a scene. People can distinguish thousands of shades of color and brightness, but only a few dozen gray levels; this is why, in computer vision, images are often transformed from gray space into a color space for processing. For tracking non-rigid objects (such as human body tracking), color features are especially suitable.
From another angle, and exploiting the "multi-modality" advantage of the particle filter, the present invention first computes the color histogram of the target, then performs color histogram back-projection, and carries out particle filter prediction and observation output on the color probability distribution image. Each particle represents one possible pose of the target; the weight of a particle (the correlation between the measurement and the true pose of the target) is computed, and the estimate of the target pose is represented by the weighted sum of the particles.
3. Particle filter tracking based on SIFT feature matching
The SIFT (Scale Invariant Feature Transform) feature matching algorithm is one of the more successful algorithms in the feature matching research field at home and abroad. Its matching ability is strong: it extracts stable features and can handle matching between two images under translation, rotation, affine transformation, view change and illumination change; to a certain extent it even retains a fairly stable matching capability for images taken from arbitrary angles, so it can match features between two images that differ greatly. SIFT feature matching is rarely used in tracking, mainly because the computational complexity of SIFT features is very high, which limits its application to tracking techniques with demanding real-time requirements.
The present invention proposes a particle filter tracking technique based on SIFT feature matching. Instead of computing SIFT features over the entire image as in traditional SIFT matching, the method uses the prediction of the particle filter to compute SIFT features only in the neighborhood of the target. This significantly reduces unnecessary computation and thus the system running time, while still maintaining a certain degree of tracking stability under brightness change, viewpoint change, affine transformation, noise and similar conditions.
(2) Hardware and software design of the target tracking system
The present invention adopts hardware/software co-operation to build the target tracking device. The BL-E854CB network video camera, the DR68 series pan/tilt head and the video capture card constitute the target tracking hardware system; the hardware connections are shown in accompanying Figure 1. The software system comprises a video acquisition module, a tracking-algorithm computation module and an output control module; all software functions are realized with VC++ 6.0 on a PC, and the software structure is shown in accompanying Figure 2.
Description of drawings
Fig. 1 is the equipment connection block diagram
Fig. 2 is the system software structure diagram
Fig. 3 is the flow chart of the particle filter tracking algorithm based on gray-level template region matching
Fig. 4 is the flow chart of the particle filter algorithm based on the color probability distribution
Fig. 5 is the flow chart of the particle filter tracking algorithm based on SIFT feature matching
Fig. 6 is a photograph of the equipment
Fig. 7 is the control area diagram
Fig. 8 is the initial software interface
Fig. 9 is the main-thread flow chart
Fig. 10 is the flow chart of sub-thread 1
Fig. 11 shows the flow charts of sub-thread 2 and sub-thread 3, where (a) is the flow chart of sub-thread 2 and (b) is the flow chart of sub-thread 3
Embodiment
The present invention proposes a target tracking method and system combining multi-feature matching and particle filtering, and builds a particle-filter-based tracking framework together with its hardware and software implementation.
Specific embodiments of the present invention are as follows:
(1) The proposed particle filter target tracking methods based on three kinds of features
The core content of the present invention is the particle filter target tracking algorithms proposed for three kinds of features: particle filter tracking based on gray-level template region matching, particle filter tracking based on a color probability distribution, and particle filter tracking based on SIFT feature matching. The details are as follows.
1. Particle filter tracking algorithm based on gray-level template region matching
Particle filter tracking based on gray-level template region matching takes the gray-level template of traditional region-matching tracking as the target description, represents the estimate of the target parameters by the weighted sum of the particles, and makes each particle's weight proportional to its matching value. Combining the region matching algorithm with the particle filter tracking method embodies both the intuitive, practical character of region-matching tracking and the advantage of the "multi-modal" tracking of the particle filter; it can greatly improve the robustness of tracking and is also an effective way to search the motion parameters in affine space.
One point needs explanation: all the methods considered in the present invention aim at solving for the motion parameters of the target, i.e. the position, angle, scale and so on. In order to show the advantage of the particle filter over the traditional region matching algorithm, the present invention considers tracking in affine space. For the affine model, six parameters of the target need to be considered; here we ignore the tangential scale SXY and consider only five parameters T = (TX, TY, SX, SY, θ), where TX and TY are the horizontal and vertical coordinates of the target center point, SX and SY are the scales in the horizontal and vertical directions, and θ is the rotation angle of the target relative to the template. A particle thus represents one possible motion state of the target, i.e. one group of possible motion parameters T. According to the motion parameters of a particle, the deformation of the target template corresponding to that particle can be obtained; by computing the matching value between this deformed template and the real image, the particle is given a weight proportional to the matching value, and the posterior probability of the target state is represented by the particle weighting.
(1) Prior knowledge of the target
The prior knowledge of the target comprises the establishment of the gray-level template of the target and the initialization of the particles.
In the initial frame, the initial description of the target can be obtained with an automatic target detection method such as frame differencing, or through human-computer interaction. In the present invention, a rectangular frame is adopted to represent the target pose. The pose of a rectangular frame comprises five parameters:
X = (cx, cy, h, w, φ) (1)
where cx and cy are the center coordinates of the rectangular frame, h and w are its height and width, and φ is the angle between the h direction of the rectangular frame and the x axis, with initial value set to 90°.
Thus, when tracking begins, the prior knowledge of the target (at time k_0) comprises an image template f(a, b) of size m × n (a = 1…m, b = 1…n) and the initial motion parameters of the target:
T_init = (TX_init, TY_init, θ_init, SX_init, SY_init) (2)
Take the number of particles as Ns, with each particle weight ω^i initialized to 1. Each particle represents one possible motion state of the target and has five parameters:
T^i = (TX^i, TY^i, θ^i, SX^i, SY^i), i = 1, 2, …, Ns (3)
The initial values of the particle parameters are taken as:
TX^i = TX_init + b_1·ξ, TY^i = TY_init + b_2·ξ, θ^i = θ_init + b_3·ξ,
SX^i = SX_init + b_4·ξ, SY^i = SY_init + b_5·ξ, i = 1…Ns (4)
where ξ is a random number in [-1, 1] and b_1, b_2, b_3, b_4, b_5 are constants.
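As a concrete illustration, the initialization of equations (3) and (4) can be sketched in Python with NumPy. The function name `init_particles`, the seeding, and any default values are illustrative assumptions, not part of the patent:

```python
import numpy as np

def init_particles(T_init, Ns, b, seed=0):
    """Scatter Ns particles around the initial motion parameters
    T_init = (TX, TY, theta, SX, SY), as in equation (4): each
    parameter is perturbed by b_k * xi with xi uniform in [-1, 1]."""
    rng = np.random.default_rng(seed)
    xi = rng.uniform(-1.0, 1.0, size=(Ns, 5))
    particles = np.asarray(T_init, float) + np.asarray(b, float) * xi
    weights = np.ones(Ns)  # every particle starts equally important
    return particles, weights
```

Here the tuple `b` plays the role of the constants b_1…b_5 of equation (4).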
(2) System state transition
At each subsequent time k_t (t > 0), the system state transition equation is used to predict the state of each particle.
Take the first-order ARP equation:
x_t = A·x_{t-1} + B·w_{t-1} (5)
That is, for each particle i:
TX_t^i = A_1·TX_{t-1}^i + B_1·w_{t-1}, TY_t^i = A_2·TY_{t-1}^i + B_2·w_{t-1}, θ_t^i = A_3·θ_{t-1}^i + B_3·w_{t-1},
SX_t^i = A_4·SX_{t-1}^i + B_4·w_{t-1}, SY_t^i = A_5·SY_{t-1}^i + B_5·w_{t-1}, i = 1…Ns (6)
where A_1…A_5 and B_1…B_5 are constants, and w_{t-1} is a random number in [-1, 1].
In the simple case, take A_1 = A_2 = A_3 = A_4 = A_5 = 1 and call B_1…B_5 the propagation radius of the particles. The state transition then simply superimposes a disturbance on each state quantity.
When the state propagation of the target involves velocity or acceleration, a second-order ARP model is adopted, which can be expressed as:
x_t = A·x_{t-2} + B·x_{t-1} + C·w_{t-1} (7)
where A, B and C are constants, and w_{t-1} is a random number in [-1, 1].
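With A_1 = … = A_5 = 1, the first-order transition of equations (5) and (6) amounts to adding a bounded disturbance to every parameter. A minimal NumPy sketch (function name and argument shapes are illustrative):

```python
import numpy as np

def transition(particles, B, rng=None):
    """First-order ARP step of eq. (6) with A_1..A_5 = 1:
    x_t = x_{t-1} + B * w, with w uniform in [-1, 1].
    B holds the per-parameter propagation radii B_1..B_5."""
    rng = rng or np.random.default_rng()
    w = rng.uniform(-1.0, 1.0, size=particles.shape)
    return particles + np.asarray(B, float) * w
```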
(3) System observation
After each particle has been propagated it can be observed: the similarity between the possible target state represented by each particle and the true target state is measured, particles close to the true state are given larger weights, and the rest smaller weights.
The minimum average absolute difference function is taken as the instrument for measuring similarity, so a similarity value MAD^i can be computed for each particle, i = 1…Ns. The angle of the original template is 90°; for particle i, because its angle θ is predicted, it can take any direction and is generally not equal to 90°, so when computing MAD^i the particle region of particle i must first be rotated to 90° before the similarity value is computed with the minimum average absolute difference function.
The observation probability density function is defined as:
p(z_k | x_k^i) = exp{ -(1 / (2σ²))·MAD^i } (8)
where σ is a constant. The meaning of the above formula is a Gaussian modulation of the matching value.
The weight w_k^i of each particle is then computed recursively by formula (9):
w_k^i = w_{k-1}^i · p(z_k | x_k^i) (9)
Finally, the particle weights are normalized.
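Equations (8) and (9) can be sketched as follows. Here `mad` stands in for the minimum average absolute difference between the (rotation-corrected) particle region and the template; the names and the value of σ are illustrative assumptions:

```python
import numpy as np

def mad(template, patch):
    """Mean absolute difference between the gray-level template and a
    candidate patch of the same size (the matching value of the text)."""
    return float(np.mean(np.abs(template.astype(float) - patch.astype(float))))

def update_weights(w_prev, mads, sigma):
    """Eq. (8): Gaussian modulation p = exp(-MAD / (2 sigma^2)),
    eq. (9): recursive update w_k = w_{k-1} * p, then normalization."""
    p = np.exp(-np.asarray(mads, float) / (2.0 * sigma ** 2))
    w = np.asarray(w_prev, float) * p
    return w / w.sum()
```

A smaller MAD (a better match) yields a larger normalized weight, exactly as required by the text.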
(4) Posterior probability calculation
The posterior probability at time k_t, i.e. the target parameters T_t^opt sought in target tracking, can be represented by the weighted sum of the particles, that is:
TX_t^opt = Σ_{i=1}^{Ns} ω_t^i·TX_t^i, TY_t^opt = Σ_{i=1}^{Ns} ω_t^i·TY_t^i, θ_t^opt = Σ_{i=1}^{Ns} ω_t^i·θ_t^i,
SX_t^opt = Σ_{i=1}^{Ns} ω_t^i·SX_t^i, SY_t^opt = Σ_{i=1}^{Ns} ω_t^i·SY_t^i (10)
where ω_t^i is the weight of the i-th particle after normalization, and TX_t^i, TY_t^i, θ_t^i, SX_t^i, SY_t^i are the state parameters of the i-th particle.
At this point one tracking cycle ends; tracking at the next time instant restarts from the system state transition.
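Equation (10) is simply a weighted average of the particle states. In vector form (a sketch; the helper name is illustrative):

```python
import numpy as np

def posterior_estimate(particles, weights):
    """Eq. (10): the estimated target parameters are the weighted sum
    of the particle states, using normalized weights."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    return w @ np.asarray(particles, float)
```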
(5) Particle resampling
The basic problem of the sequential importance sampling algorithm is particle degeneracy: after a few iterations of the recursion, the weights of many particles become very small, only a few particles have large weights, and a large amount of computation is wasted on particles with small weights.
Particle resampling derives new particles from particles with large weights to replace particles whose weights have become too small. It suffices to define a threshold: once the weight of a particle falls below this lower bound, the resampling process is carried out, and the weights of the "offspring" particles are reset to 1. The resampling process is independent of the other processes (system state transition, system observation, target description), and is not explained further in the following sections.
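The threshold-triggered resampling described above can be sketched like this. The threshold value and helper name are illustrative assumptions (many implementations instead trigger resampling on the effective sample size):

```python
import numpy as np

def resample(particles, weights, w_min, rng=None):
    """Replace particles whose weight fell below w_min with 'offspring'
    drawn from the high-weight particles, and reset the offspring
    weights to 1, as described in the text."""
    rng = rng or np.random.default_rng()
    low = weights < w_min
    if low.any():
        p = weights / weights.sum()          # draw proportionally to weight
        idx = rng.choice(len(particles), size=int(low.sum()), p=p)
        particles = particles.copy()
        weights = weights.copy()
        particles[low] = particles[idx]      # offspring of heavy particles
        weights[low] = 1.0
    return particles, weights
```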
(6) Flow of the particle filter tracking algorithm based on gray-level template region matching
The concrete function of each module of the particle filter tracking algorithm based on gray-level template region matching has been described in detail above. The present invention realizes this flow with VC++ 6.0; the main structural block diagram of the program is shown in Figure 3.
First the number of particles is determined and the motion model is selected. The choice of the number of particles is related to the actual tracking requirements: in general, the more particles, the more stable the tracking and the higher the accuracy, but also the larger the computation. In practice a compromise can be chosen, or the number can be adjusted dynamically. For the motion model we select either a translation model with two parameters or an affine model with five parameters, comprising the horizontal and vertical displacements, the scales and the rotation angle. Once the motion model is selected, the particles are consistent with it and have parameters of the same dimension.
Next it is judged whether a target has been selected. The target can be selected manually or automatically: manual selection means choosing a region on the screen with the mouse as the tracking target, while automatic selection uses the image differencing method to obtain the target to be tracked. After the target region is determined, the target template is established, the particle parameters are initialized, and all particle weights are set to 1 (i.e. all particles are equally important).
From the frame after particle initialization, the iterative process of the particle filter algorithm begins. In each frame, each particle undergoes the system state transition and system observation, the particle weights are computed, and all particles are weighted to output the estimate of the target state. Finally the particle resampling process is carried out, and the algorithm moves to the next iteration.
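The per-frame iteration just described can be condensed, for the translation-only two-parameter model T = (TX, TY), into a single runnable sketch. All names and constants are illustrative, and the MAD matching here omits the rotation correction needed in the affine case:

```python
import numpy as np

def track_frame(frame, template, particles, weights, radius=3.0, sigma=10.0, rng=None):
    """One iteration for the translation model: state transition, MAD
    observation (eqs. 8-9) and weighted-sum estimate (eq. 10).
    frame/template are 2-D gray-level arrays; particles[:, 0] is TX
    (column of the patch's top-left corner), particles[:, 1] is TY (row)."""
    rng = rng or np.random.default_rng()
    th, tw = template.shape
    fh, fw = frame.shape
    # state transition: disturb each particle within the propagation radius
    particles = particles + rng.uniform(-radius, radius, size=particles.shape)
    particles[:, 0] = np.clip(particles[:, 0], 0, fw - tw)
    particles[:, 1] = np.clip(particles[:, 1], 0, fh - th)
    # observation: Gaussian-modulated MAD matching value per particle
    for i, (x, y) in enumerate(particles.astype(int)):
        patch = frame[y:y + th, x:x + tw].astype(float)
        m = np.mean(np.abs(patch - template))
        weights[i] *= np.exp(-m / (2.0 * sigma ** 2))
    weights /= weights.sum()
    return particles, weights, weights @ particles  # eq. (10) estimate
```

On a synthetic frame with a bright square, repeated calls keep the weighted estimate near the square's true top-left corner, which is the behavior the flow above describes.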
The flow chart of the particle filter tracking algorithm based on gray-level template region matching proposed by the present invention is shown in accompanying Figure 3.
2. Particle filter tracking algorithm based on a color probability distribution
The image histogram is a key image analysis tool in image processing. It describes the gray-level content of an image; the histogram of any image contains rich information and is mainly used in processes such as image segmentation and gray-level transformation. Mathematically, the image histogram is a function of the statistical properties of each gray value of the image: it counts the number of times, or the probability, that each gray level appears in the image. Graphically, it is a two-dimensional plot whose abscissa is the gray level of the pixels in the image and whose ordinate is the number of times, or the probability, that pixels of each gray level appear.
(1) Statistics of the color histogram
RGB and HSV are the two most commonly used color space models. Most digital images are expressed in the RGB color space, but the structure of RGB space does not match people's subjective judgment of color similarity, whereas the HSV color space is closer to people's subjective understanding of color: its three components represent hue (H), saturation (S) and value (V, brightness).
The present invention first transforms the pixels of the target region from RGB space into HSV space, then computes the one-dimensional histogram of the H component and normalizes it (i.e. the H component values are mapped into the range 0-1). Quantization adopts the simplest and most common method: the H component is divided evenly into several small intervals, each interval becoming one bin of the histogram. The number of bins is related to the performance and efficiency requirements of the specific application: the more bins, the stronger the histogram's ability to resolve colors, but a very large number of bins increases the computational burden. Considering the real-time requirement of visual tracking and the range of the H component, the number of H-component bins is set to 48 in the system.
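Under the stated conventions (H normalized into [0, 1], 48 uniform bins), the histogram statistics can be sketched as follows; the function name and the assumption that the RGB-to-HSV conversion has already been done are illustrative:

```python
import numpy as np

def h_histogram(h_region, n_bins=48):
    """Normalized 1-D histogram of the H component of the target
    region; H values are assumed already mapped into [0, 1] and are
    quantized uniformly into n_bins bins (48 in the system)."""
    hist, _ = np.histogram(h_region, bins=n_bins, range=(0.0, 1.0))
    total = hist.sum()
    return hist / total if total > 0 else hist.astype(float)
```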
(2) Calculation of the color probability distribution map
The purpose of computing the color histogram is to obtain the color probability distribution image; on this basis, the raw video image must also be transformed into a color probability distribution image through the color histogram. Each pixel of the raw video image is replaced by the statistic of the corresponding pixel in the color histogram, and the result is re-quantized to obtain the color probability distribution image.
A pixel in the raw video image describes light intensity, whereas a pixel in the color probability distribution map measures a "possibility": the probability that the moving target appears at that pixel position. The subsequent tracking process acts entirely on this color probability distribution image.
The present invention computes the color probability distribution map as follows: each subsequently captured frame is first transformed from RGB space into HSV space; then, for the H component of each pixel position, using the normalized color histogram of the target, the pixel value at the corresponding position of the color probability distribution image is obtained by:
B(i, j) = H(i, j) × Hist(h) (11)
where B(i, j) is the value of the pixel in the color probability distribution map, H(i, j) is the H component value of the pixel in the captured image, and Hist(h) is the histogram value of the bin of H(i, j) in the color histogram. After all pixels have been computed, a color probability distribution image is obtained.
When the S component is very small, the computed H component is unreliable and can cause large errors, so the present invention sets a threshold: when the S component is less than this threshold, the corresponding H component is set to 0. Through experiment, the threshold is set to 30. In the revised color probability distribution map, the value of each pixel is then inverted, and finally every zero-valued pixel of the color probability distribution image is replaced by a very small positive value, so that the particle filter adapts well to scale changes when computing the weights. In this way an image usable for particle filter tracking is obtained.
(3) Particle filter tracking algorithm based on the color probability distribution
From the analysis of the color probability distribution graph, pixels whose color is close to the target color have larger probability values, and the pixels at the target location have the largest probability values. Thus, once the target prior is obtained, a cluster of particles is spread around the initial position and each particle is observed: for each particle, the sum of the pixel values within the particle's state region is computed, modulated with an exponential function and normalized, yielding the particle's weight. Clearly, the closer a particle is to the target state, the larger its weight. Finally the posterior probability is computed as output, completing one particle filter tracking iteration. In scenes without heavy background interference, real-time tracking is possible, and variations of the affine model can also be tracked well. The present invention studies and implements this algorithm.
Adopting the color probability distribution model is equivalent to mapping the original image into a probability gray-level image. Therefore, as long as the target color does not change abruptly, the model is insensitive to edge occlusion, target rotation, deformation and background motion, which makes it well suited to tracking in affine space.
The present invention adopts an affine model for tracking. The goal of tracking is then to solve for the target motion state T = (TX, TY, θ, SX, SY, SXY), where TX and TY are the target center coordinates in the x and y directions respectively, θ is the target rotation angle, and SX, SY and SXY are the target scales in the x direction, the y direction and the diagonal direction (for simplicity of calculation, SXY is not considered here).
Particle filter tracking based on the color probability distribution uses the Bayesian recursive filtering method with Monte Carlo simulation: the posterior probability of the target state (T) is represented by a weighted set of particles, where each particle represents one possible motion state (T) of the target. From a particle's motion parameters, the corresponding deformation of the target template can be obtained; by computing the proportion of probability mass that this deformed template covers in the real image, a corresponding weight is assigned to the particle, and the weighted particles together represent the posterior probability of the target state.
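The weighting step described above can be sketched as follows. The rectangular region type, the modulation constant lambda, and the upright (non-rotated) rectangle are simplifying assumptions; the patent only states that the region sum is modulated with an exponential function and normalized:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One particle: rectangle center, size, and its filter weight.
struct Particle { int cx, cy, w, h; double weight; };

// Sum of probability-image values inside the particle's rectangle.
double regionSum(const std::vector<std::vector<double>>& prob, const Particle& p) {
    double s = 0.0;
    for (int y = p.cy - p.h / 2; y < p.cy + p.h / 2; ++y)
        for (int x = p.cx - p.w / 2; x < p.cx + p.w / 2; ++x)
            if (y >= 0 && y < (int)prob.size() && x >= 0 && x < (int)prob[0].size())
                s += prob[y][x];
    return s;
}

// Exponential modulation and normalization of all particle weights.
void updateWeights(const std::vector<std::vector<double>>& prob,
                   std::vector<Particle>& particles, double lambda = 0.05) {
    double total = 0.0;
    for (auto& p : particles) {
        p.weight = std::exp(lambda * regionSum(prob, p));
        total += p.weight;
    }
    for (auto& p : particles) p.weight /= total;
}
```

Particles closer to the target state cover more probability mass and so receive larger normalized weights, as the text requires.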
Owing to space limitations, the target prior, state transition, system observation, posterior probability calculation and particle resampling processes of the particle filter tracking system based on the color probability distribution are not described one by one; see the analysis and description in the section "Particle filter tracking algorithm based on gray-level template region matching" of this specification.
The flow chart of the particle filter tracking algorithm based on the color probability distribution is shown in Figure 4.
In this algorithm, the number of particles and the motion model must first be determined. In the present invention, we select a five-parameter affine motion model, comprising the horizontal and vertical displacements, the scales and the rotation angle. Once the motion model is selected, the particles are consistent with it and have parameters of the same dimension.
Next, it is judged whether a target has been selected; the target can be selected either manually or automatically. Once the target region is determined, the color histogram of the target template is established, the particle parameters are initialized, and all particle weights are set to 1 (i.e., all particles are equally important).
The frames after particle initialization enter the iterative process of the particle filter algorithm. In each frame, the image is first converted from RGB space to HSV space and the color probability distribution graph is computed. Each particle then undergoes system state transition and system observation on the color probability distribution graph, the particle weights are computed, and all particles are weighted to output the estimate of the target state. Finally the particle resampling process is carried out, and the algorithm proceeds to the next iteration.
3. Particle filter tracking algorithm based on SIFT feature matching
SIFT (Scale Invariant Feature Transform) feature matching is currently one of the more successful methods in the feature-matching research field at home and abroad. It has strong matching capability and can extract stable features; it can handle matching between two images under translation, rotation, affine transformation, viewpoint change and illumination change, and to some extent even provides fairly stable feature matching for images taken from arbitrary angles, thereby enabling feature matching between two images that differ greatly.
SIFT feature matching technology is seldom used in tracking, mainly because the computational complexity of SIFT features is very high, which limits its application in tracking, where real-time performance is demanded. The present invention proposes a particle filter tracking method based on SIFT feature matching. Unlike traditional SIFT feature matching, which processes the entire image, the inventive method uses the predictive capability of the particle filter to compute SIFT features only within the target neighborhood. This eliminates much unnecessary computation and thus reduces system running time.
(1) Computation process of SIFT feature matching
The SIFT feature is a local feature of the image. It is invariant to rotation, scale and brightness changes, and also maintains a certain degree of stability under viewpoint change, affine transformation and noise.
The principal characteristics of the SIFT algorithm are:
a) The SIFT feature is a local feature of the image; it is invariant to rotation, scale and brightness changes, and maintains a certain degree of stability under viewpoint change, affine transformation and noise.
b) Good distinctiveness and rich information content, suitable for fast and accurate matching in massive feature databases.
c) Large quantity: even a few objects can produce a large number of SIFT feature vectors.
d) Extensibility: it can easily be combined with other forms of feature vectors.
1) Generation of SIFT feature vectors
Before generating SIFT feature vectors, the image is first normalized, then enlarged to twice its original size and pre-filtered to remove noise, yielding the bottom of the Gaussian pyramid, i.e., layer 1 of octave 1. The algorithm for generating the SIFT feature vectors of an image comprises the following 4 steps:
a) Scale-space extremum detection
Scale-space theory first appeared in the field of computer vision, where its original purpose was to simulate the multi-scale characteristics of image data. Koenderink later used the diffusion equation to describe scale-space filtering and thereby proved that the Gaussian kernel is the unique transformation kernel for realizing scale changes. Lindeberg, Babaud and others further proved, through different derivations, that the Gaussian kernel is the unique linear kernel.
The two-dimensional Gaussian kernel is defined as in Equation (12), where σ represents the scale (standard deviation) of the Gaussian normal distribution:
G(x, y, σ) = (1 / (2πσ²)) e^(−(x² + y²)/(2σ²))    (12)
For a two-dimensional image I(x, y), its scale-space representation L(x, y, σ) at different scales can be obtained by convolving the image I(x, y) with the Gaussian kernel G(x, y, σ), as in Equation (13):
L(x, y, σ) = G(x, y, σ) * I(x, y)    (13)
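A minimal sketch of Equation (12) evaluated over a discrete window (the window radius and the choice of σ are illustrative; for a window a few σ wide the discrete kernel should sum to approximately 1):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Discrete 2D Gaussian kernel per Equation (12):
// G(x, y, sigma) = exp(-(x^2 + y^2) / (2 sigma^2)) / (2 pi sigma^2),
// sampled on the integer grid [-radius, radius] x [-radius, radius].
std::vector<std::vector<double>> gaussianKernel(int radius, double sigma) {
    const double PI = 3.14159265358979323846;
    const double norm = 1.0 / (2.0 * PI * sigma * sigma);
    std::vector<std::vector<double>> k(2 * radius + 1,
                                       std::vector<double>(2 * radius + 1));
    for (int y = -radius; y <= radius; ++y)
        for (int x = -radius; x <= radius; ++x)
            k[y + radius][x + radius] =
                norm * std::exp(-(x * x + y * y) / (2.0 * sigma * sigma));
    return k;
}
```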
To obtain feature points that are invariant across different scale spaces, the image is convolved with Gaussian kernels of different scale factors to build the Gaussian pyramid. The Gaussian pyramid has o octaves (generally 4 are selected), and each octave has s layers of scale images (generally 5). Layer 1 of octave 1 is the original image magnified by a factor of 2, in order to obtain more feature points. Within one octave, the scale factors of adjacent layers differ by a factor k; if the scale factor of layer 1 of octave 1 is σ, then layer 2 has scale factor kσ, and the other layers follow by analogy. Layer 1 of octave 2 is obtained by sub-sampling a middle-layer scale image of octave 1, and its scale factor is k²σ; layer 2 of octave 2 is then k times layer 1, i.e., k³σ. Layer 1 of octave 3 is obtained by sub-sampling a middle-layer scale image of octave 2, and the other octaves are formed by analogy.
After the Gaussian pyramid is built, the DOG (Difference of Gaussian) pyramid is established. DOG is the difference of two adjacent scale-space functions, i.e., adjacent scale-space functions in the Gaussian pyramid are subtracted; it is denoted D(x, y, σ), as in Equation (14):
D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y)
           = L(x, y, kσ) − L(x, y, σ)    (14)
In the DOG scale-space pyramid built above, to detect the maxima and minima of the DOG space, each pixel in the middle layers (excluding the bottom and top layers) must be compared with 26 neighboring pixels in total: its 8 neighbors in the same layer plus the 9 corresponding pixels in each of the layer above and the layer below. This guarantees that local extrema are detected in both scale space and the two-dimensional image space. If the pixel's DOG value is larger than all 26 neighboring pixels, or smaller than all of them, it is taken as a local extremum point, and its position and corresponding scale are recorded.
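The 26-neighbor test described above can be sketched as follows (a small helper over three stacked DOG layers; bounds handling at image borders is omitted for brevity):

```cpp
#include <cassert>
#include <vector>

using Layer = std::vector<std::vector<double>>;

// A DOG sample is a local extremum when it is strictly larger (or strictly
// smaller) than its 8 neighbors in the same layer and the 9 corresponding
// samples in the layer above and the layer below: 26 comparisons in total.
bool isLocalExtremum(const Layer& below, const Layer& mid, const Layer& above,
                     int y, int x) {
    double v = mid[y][x];
    bool isMax = true, isMin = true;
    const Layer* layers[3] = { &below, &mid, &above };
    for (const Layer* L : layers)
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx) {
                if (L == &mid && dy == 0 && dx == 0) continue; // skip center
                double n = (*L)[y + dy][x + dx];
                if (v <= n) isMax = false;
                if (v >= n) isMin = false;
            }
    return isMax || isMin;
}
```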
b) Accurate localization of feature point positions
Because the DOG values are sensitive to noise and edges, the local extremum points detected in the DOG scale space above must pass further checks before they can be accurately located as feature points. A three-dimensional quadratic function is fitted to each local extremum point to accurately determine its position and scale. The Taylor expansion of the scale-space function D(x, y, σ) at a local extremum point X₀ = (x₀, y₀, σ₀) is given by Equation (15):
D(X) = D(X₀) + (∂D^T/∂X) X + (1/2) X^T (∂²D/∂X²) X    (15)
The first and second derivatives in Equation (15) are approximated by differences over the neighboring region. Differentiating Equation (15) and setting the result to 0 yields the accurate extremum location X_max, as in Equation (16):
X_max = −(∂²D/∂X²)⁻¹ (∂D/∂X)    (16)
Among the accurately determined feature points, low-contrast feature points and unstable edge-response points are removed at the same time, in order to enhance matching stability and improve the anti-noise capability.
Removing low-contrast feature points: substituting Equation (16) into Equation (15) and keeping only the first two terms gives Equation (17):
D(X_max) = D + (1/2) (∂D^T/∂X) X_max    (17)
D(X_max) is computed from Equation (17); if |D(X_max)| ≥ 0.03, the feature point is retained, otherwise it is discarded.
Removing unstable edge-response points: the Hessian matrix is given by Equation (18), where the partial derivatives are taken at the feature points determined above and are likewise approximately estimated from differences over the neighboring region.
H = [ D_xx  D_xy ; D_xy  D_yy ]    (18)
The principal curvatures are computed from the 2 × 2 Hessian matrix H. Because the principal curvatures of D are proportional to the eigenvalues of the H matrix, the eigenvalues themselves are not computed; only their ratio is needed. Let α be the larger eigenvalue and β the smaller, with α = rβ.
Then ratio is given by Equation (19):
Tr(H) = D_xx + D_yy = α + β
Det(H) = D_xx D_yy − (D_xy)² = αβ    (19)
ratio = Tr(H)² / Det(H) = (α + β)² / (αβ) = (r + 1)² / r
ratio is obtained from Equation (19) with the constant r = 10; if Tr(H)²/Det(H) < (r + 1)²/r, the feature point is retained, otherwise it is discarded.
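The two rejection tests of Equations (17)–(19) can be sketched as follows (the rejection of non-positive determinants, which indicates curvatures of opposite sign, is a standard detail the text does not spell out):

```cpp
#include <cassert>

// Low-contrast test: keep the point only when |D(Xmax)| >= 0.03.
bool passesContrast(double dAtExtremum) {
    return dAtExtremum >= 0.03 || dAtExtremum <= -0.03;
}

// Edge-response test per Equation (19): keep the point only when
// Tr(H)^2 / Det(H) < (r + 1)^2 / r, with r = 10 as in the text.
bool passesEdgeTest(double dxx, double dyy, double dxy, double r = 10.0) {
    double tr = dxx + dyy;               // Tr(H) = alpha + beta
    double det = dxx * dyy - dxy * dxy;  // Det(H) = alpha * beta
    if (det <= 0.0) return false;        // opposite-sign curvatures: edge
    return tr * tr / det < (r + 1.0) * (r + 1.0) / r;
}
```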
c) Determining the principal direction of feature points
The gradient direction distribution of the pixels in a feature point's neighborhood is used to assign a direction parameter to each feature point, so that the operator possesses rotation invariance; here m(x, y) and θ(x, y) are the gradient magnitude and direction, respectively:
m(x, y) = sqrt( (L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))² )
θ(x, y) = tan⁻¹( (L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)) )    (20)
Sampling is carried out in a neighborhood window centered on the feature point, and the gradient directions of the neighborhood pixels are collected in a gradient orientation histogram. The histogram covers the range 0° to 360°, with one bin per 10°, i.e., 36 bins in total. The peak of the gradient orientation histogram represents the principal direction of the neighborhood gradients at that feature point and is taken as the direction of the feature point. If the histogram contains another peak with at least 80% of the energy of the main peak, its direction is regarded as an auxiliary direction of the feature point. A feature point may thus be assigned multiple directions (one principal direction and more than one auxiliary direction), which enhances the robustness of matching.
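The 36-bin orientation histogram and the 80% auxiliary-direction rule can be sketched as follows (magnitude-weighted voting is assumed, which is the usual convention; the text does not state the weighting explicitly):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Accumulate gradient directions (degrees in [0, 360)) into 36 bins of 10
// degrees each; return the peak bin plus every bin reaching 80% of the peak.
std::vector<int> dominantBins(const std::vector<double>& angles,
                              const std::vector<double>& magnitudes) {
    double hist[36] = {0.0};
    for (size_t i = 0; i < angles.size(); ++i) {
        int bin = (int)(angles[i] / 10.0) % 36;
        hist[bin] += magnitudes[i];          // magnitude-weighted vote
    }
    double peak = 0.0;
    for (double h : hist) peak = std::max(peak, h);
    std::vector<int> bins;                   // principal + auxiliary directions
    for (int b = 0; b < 36; ++b)
        if (hist[b] > 0.0 && hist[b] >= 0.8 * peak) bins.push_back(b);
    return bins;
}
```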
Through the 3 steps above, the detection of the image feature points is complete, and each feature point carries 3 pieces of information: position, corresponding scale and direction.
d) Generating the SIFT feature vector
First the coordinate axes are rotated to the direction of the feature point, to guarantee rotation invariance.
Next, an 8 × 8 window centered on the feature point is taken (the row and column of the feature point itself are not taken). On each 4 × 4 image sub-block, a gradient orientation histogram of 8 directions is computed and the accumulated value of each gradient direction is obtained, forming one seed point. In this way a feature point is composed of 2 × 2 = 4 seed points, each carrying 8 direction-vector values, producing 2 × 2 × 8 = 32 values and forming a 32-dimensional SIFT feature vector as the feature point descriptor; the required image data block is 8 × 8. This idea of jointly using neighborhood directional information strengthens the anti-noise capability of the algorithm, and also provides good fault tolerance for feature matching that contains localization errors.
In the actual computation process, in order to strengthen matching robustness, each feature point is described using 4 × 4 = 16 seed points, each with 8 direction-vector values, so that each feature point produces 4 × 4 × 8 = 128 values, finally forming a 128-dimensional SIFT feature vector; the required image data block is 16 × 16. At this point the SIFT feature vector is free of the influence of geometric deformation factors such as scale change and rotation; normalizing the length of the feature vector then further removes the influence of illumination change.
2) Matching of SIFT feature vectors
a) Similarity decision metric
After the feature vectors of the two images have been generated, the Euclidean distance between keypoint feature vectors is adopted as the similarity decision metric for keypoints in the two images. The formula is as follows:
d_L = |L_i − L_l| = sqrt( Σ_{k=1..m} (L_{i,k} − L_{l,k})² )    (21)
where m is the dimension of the vectors and d_L is the Euclidean distance.
b) Matching process
To reduce mismatches caused by one feature point having multiple similar candidate matches, the present invention uses the ratio of the distances to the nearest and second-nearest feature points to reduce mismatching. If the ratio of the nearest distance to the second-nearest distance is less than a certain threshold Td, the pair is considered a matching pair; otherwise it is discarded. Lowering the threshold reduces the number of SIFT match points but makes them more stable. How to find the nearest and second-nearest neighbors is the key issue of this algorithm. Exhaustive search is the most effective method, but when the number of feature points is especially large, the amount of computation increases drastically.
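The exhaustive-search baseline with the ratio test described above can be sketched as follows (Td = 0.49 follows the looser threshold mentioned later in the text; descriptor dimensions are arbitrary for illustration):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Euclidean distance between two descriptors, per Equation (21).
double euclid(const std::vector<double>& a, const std::vector<double>& b) {
    double s = 0.0;
    for (size_t k = 0; k < a.size(); ++k) s += (a[k] - b[k]) * (a[k] - b[k]);
    return std::sqrt(s);
}

// Exhaustive nearest/second-nearest search with the ratio test: returns the
// index of the match in `candidates`, or -1 when d1/d2 >= Td (ambiguous).
int ratioTestMatch(const std::vector<double>& query,
                   const std::vector<std::vector<double>>& candidates,
                   double Td = 0.49) {
    int best = -1;
    double d1 = 1e300, d2 = 1e300;   // nearest and second-nearest distances
    for (size_t j = 0; j < candidates.size(); ++j) {
        double d = euclid(query, candidates[j]);
        if (d < d1) { d2 = d1; d1 = d; best = (int)j; }
        else if (d < d2) { d2 = d; }
    }
    return (d2 > 0.0 && d1 / d2 < Td) ? best : -1;
}
```

A clearly closer nearest neighbor passes the test; two nearly equidistant candidates are rejected as ambiguous, which is exactly the mismatch case the ratio test is meant to remove.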
In view of the problems of the matching algorithm, BBF (Best Bin First) can be adopted to find the nearest and second-nearest neighbors; it is an improvement on the k-d tree search algorithm. In fact, the k-d tree search algorithm spends most of its time checking nodes, while only some of the nodes satisfy the nearest-neighbor condition. Therefore, the present invention adopts an approximate nearest-neighbor algorithm, shortening the search time by limiting the number of leaf nodes examined in the k-d tree.
The present invention uses the SIFT feature matching algorithm for the generation and matching of SIFT feature vectors, which is invariant to rotation, scale and brightness changes and also maintains a certain degree of stability under viewpoint change, affine transformation and noise.
In order to find matching pairs quickly, the present invention reduces the matching threshold to 0.20; fewer point pairs are matched this way, but they are more stable.
Increasing the matching threshold appropriately increases the number of matched points but reduces stability; when the matching threshold is 0.49, the number of matched points increases significantly, but some points are matched incorrectly. In practical applications an appropriate threshold should be chosen to achieve stable matching.
(2) Particle filter tracking algorithm based on SIFT feature matching
The present invention proposes a particle filter tracking technique based on SIFT feature matching, which makes use of the good "multi-modal" predictive capability of the particle filter to estimate the activity range of the target, then computes and matches SIFT features within this range, and finally outputs a weighted estimate.
Here the present invention likewise uses a rectangular frame to represent the target pose. The initial state of the target at time k₀, (TX_init, TY_init, θ_init, SX_init, SY_init), is the same as described above and is not repeated here.
Owing to space limitations, the target prior, state transition, system observation, posterior probability calculation and particle resampling processes of the particle filter tracking system based on SIFT feature matching are not described one by one; see the analysis and description in the section "Particle filter tracking algorithm based on gray-level template region matching" of this specification.
The flow chart of the particle filter tracking algorithm based on SIFT feature matching is shown in Figure 5.
During tracking, the number of particles and the motion model must first be determined. In the present invention, we select a five-parameter affine motion model, comprising the horizontal and vertical displacements, the scales and the rotation angle. Once the motion model is selected, the particles are consistent with it and have parameters of the same dimension.
Next, it is judged whether a target has been selected; the target can be selected either manually or automatically. Once the target region is determined, the SIFT feature vectors of the target template are established, the particle parameters are initialized, and all particle weights are set to 1 (i.e., all particles are equally important).
The frames after particle initialization enter the iterative process of the particle filter algorithm. In each frame, particles are first spread, i.e., prediction is performed. Within the predicted range, the SIFT features of the image in that region are computed and matched against the SIFT features of the previously obtained template; the matching point pairs are retained, and a match-point map is produced according to the system observation state. Each particle then undergoes system observation on this match-point map, the particle weights are computed, and all particles are weighted to output the estimate of the target state. Finally the particle resampling process is carried out, and the algorithm proceeds to the next iteration.
(2) Hardware and software design and implementation of the target tracking system
The present invention uses a BL-E854CB network audio/video camera, a DR68-series pan/tilt head and a HighEasy video acquisition card to build a vision tracking system. On a PC, with VC++6.0, particle filter tracking based on gray-level template region matching, based on the color probability distribution, and based on SIFT feature matching is realized, achieving automatic tracking of moving targets.
1. Hardware design and composition of the system
The system is divided into a video acquisition module, a tracking algorithm computation module and an output control module. The video acquisition module consists of the BL-E854CB network audio/video camera and the HighEasy video acquisition card; the tracking computation module consists of the four tracking algorithms; the output control module consists of the DR68-series pan/tilt head, with control commands sent through the PC serial port.
(1) BL-E854CB network audio/video camera
The day/night network audio/video camera BL-E854CB uses a 1/3″ SONY SUPER HAD CCD and DSP digital signal processing, and can provide high-quality images and good performance. Its resolution reaches 540 TVL (600 TVL in black-and-white mode), and its minimum illumination reaches 0.1 Lux/F1.2 (0.001 Lux/F1.2 in black-and-white mode).
(2) HighEasy video acquisition card
The HighEasy-series encoding card adopts high-performance audio/video encoding/decoding technology and realizes real-time encoding and precise synchronization of video and audio entirely in hardware. It supports dynamic bit-rate control, constant bit rate, mixed rate control, frame-rate control, selectable frame modes, dynamic image-quality control, video-loss alarms, multi-channel analog video output, multiple alarm-signal output configurations and other functions. The HighEasy-series codec products provide an integrated system SDK, network SDK and decoding SDK for use in later application system development.
(3) DR68-series pan/tilt head
The DR68-series pan/tilt head has a maximum horizontal rotation range of 0° to 350°, moves 90° downward and 60° upward from the horizontal plane, and rotates at 6°/s horizontally and 3.5°/s vertically. It supports both the Pelco-D and Pelco-P protocols, is equipped with an RS-485 serial port, and supports baud rates of 2400 bps and 9600 bps, settable by DIP switches. This system uses 2400 bps and the Pelco-P protocol.
A photograph of the system hardware is shown in Figure 6, and the hardware connection block diagram is shown in Figure 1.
2. Software implementation of the system
(1) Image acquisition
Image acquisition is realized using the SDK provided with the HighEasy acquisition card. First, some initialization work is done on the acquisition card, accomplished with the following functions.
Board initialization functions:
1. MP4Sys_SetDisplayMode(FALSE); sets the display mode to YUV.
2. MP4Sys_InitDSPs(); initializes the board.
3. MP4Sys_ChannelOpen(0); opens the acquisition channel; channel 0 is used.
4. MP4Sys_EncSetOriginalImageSize(hChannelHandle, 352, 288); sets the picture size; this system sets the picture to 352 × 288.
Image acquisition functions:
1. MP4Sys_GetOriginalImageEx(hChannelHandle, ImageBuf, &Size, &dwWidth, &dwHeight); acquires the original image, which is in YUV format.
2. MP4Sys_SaveYUVToBmp(rgb, ImageBuf, &rgbsize, dwWidth, dwHeight); converts YUV format to BMP format.
(2) Target extraction
This system adopts two target extraction modes: automatic extraction and manual extraction. Automatic extraction refers to moving objects in the field of view: when there is a moving object in the captured field of view, that moving object is taken as the extracted target and then tracked. Manual extraction uses the mouse to select a region, which is then tracked. They are introduced separately below.
1) Automatic target extraction
Automatic target extraction means that when a moving object enters the field of view, the moving object is extracted as the template for subsequent tracking.
First it must be judged whether there is a moving object in the field of view; this can be obtained using the motion detection functions provided by the HighEasy video acquisition card. The card provides the following functions, which can be called to detect whether there is a moving target in the field of view.
1. int MP4Sys_SetupMotionDetection(HANDLE hChannelHandle, RECT* rectList, int numberOfAreas): sets the detection area. To extract the moving target completely, the detection area in this system is not set to the entire image size, but to a rectangular area centered at the image center with a length of 300 and a width of 220. This guarantees that objects are not extracted on the image boundary, so that the extracted target is complete.
2. int MP4Sys_AdjustMotionDetectPrecision(HANDLE hChannelHandle, int iGrade, int iFastMotionDetectFps, int iSlowMotionDetectFps): adjusts the analysis sensitivity. To remove misjudgments caused by interference, this system sets the sensitivity somewhat low, so that only large-area moving regions trigger a response.
3. int MP4Sys_StartMotionDetection(HANDLE hChannelHandle): starts motion analysis.
4. int MP4Sys_MotionAnalyzer(HANDLE hChannelHandle, char* MotionData, int iThreshold, int* iResult): motion analysis; iResult is its return value. When iResult is greater than zero, motion analysis has found a moving target in the area.
5. int MP4Sys_StopMotionDetection(HANDLE hChannelHandle): stops motion analysis. When a moving target is found in the motion analysis area, motion analysis is stopped and target extraction is carried out.
This system uses a fairly mature background-modeling algorithm, namely Gaussian background modeling. The Gaussian background modeling module provided in the OpenCV toolkit is used to obtain a binary foreground image containing the target region with little noise. Contour extraction is then performed, and the contour with the largest area is taken as the detected target. It is then located, i.e., the rectangular region containing the target is obtained and the parameters of this region are computed, giving the initialization of the selected rectangular region: R = (Cx, Cy, theta, Sx, Sy), where Cx, Cy are the center of the rectangular region, Sx, Sy are its width and height, and theta is the initial angle of the rectangular region, set to 90°.
2) Manual target extraction
Manual target extraction selects a rectangular region in the image according to the target to be tracked. The parameters of this region are computed in the same way, giving the initialization of the selected rectangular region: R = (Cx, Cy, theta, Sx, Sy), where Cx, Cy are the center of the rectangular region, Sx, Sy are its width and height, and theta is the initial angle of the rectangular region, set to 90°.
(3) Target tracking
For target tracking, the three particle filter tracking algorithms proposed by the present invention are adopted.
In the particle filter tracking method, there are two main adjustable parameters: the particle propagation radius R and the number of particles N. The particle propagation radius is related to the movement speed of the target: if the target moves fast, a larger particle propagation radius is needed, together with more particles to achieve stable tracking; if the target moves slowly, the particle propagation radius can be set a little smaller and the number of particles can be reduced, in order to lower the computational load. Three situations exist between the particle propagation radius R and the target movement speed S:
(1) R < S: the propagation speed of the particles is clearly smaller than the movement speed of the target in the image; all particles lag behind the target, and tracking fails.
(2) R >> S: the particle propagation radius is very large; the target is easily included in the search region formed by all particles, but the increased area covered by the particles also means fewer particles per unit area, and the search resolution decreases.
(3) R > S but not R >> S: choosing R < S or R >> S is unreasonable. R should be larger than S, but not too large. R denotes the particle propagation radius, expressed here in pixels, and S denotes the target movement speed, which can also be expressed in pixels; here the two are considered approximately proportional. With Rx = 2.1Sx and Ry = 2.2Sy, tracking succeeds in the ideal case (no occlusion interference). Considering occlusion, R = 2.5S is chosen.
Regarding the relation between the number of particles and the tracking effect: too few particles give insufficient tracking precision, while too many cause redundant waste. When the number of particles N = 0.3–0.5 Ns (where Ns = 2Sx × 2Sy), the average tracking error is within 0.1 pixel.
According to the processing speed of the CPU and the general distance between the target and the camera, this system chooses a particle propagation radius of 20 pixels, which is fully suitable for tracking medium-speed moving targets, and a particle number N = 0.5 × 40 × 40 = 800. Experiments show that the target can be tracked completely.
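The parameter rules stated above can be sketched as two small helpers (the rounding convention is an assumption; the 40 × 40 case reproduces the system's N = 800 example with Sx = Sy = 20):

```cpp
#include <cassert>

// R = 2.5 S, with the target speed S measured in pixels per frame.
int propagationRadius(double speedPixels) {
    return (int)(2.5 * speedPixels + 0.5);   // rounded to the nearest pixel
}

// N = 0.5 * Ns with Ns = 2Sx * 2Sy, per the relation quoted in the text.
int particleCount(int sx, int sy) {
    return (int)(0.5 * (2 * sx) * (2 * sy));
}
```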
For the extracted target region, three kinds of templates are obtained according to the tracking mode: the target gray-level template, the target color histogram and the target SIFT features; these constitute the tracking template of the target. Particle filter tracking is then carried out in the corresponding tracking mode for each image frame, and the posterior probability output of the particle filter is used for tracking control.
(4) Tracking control
Tracking control uses the center of the tracked target position as the control command sent to the pan/tilt head. The control commands are output in the PELCO-P protocol format with the baud rate set to 2400 bps, using the MSComm control provided in VC++6.0 to send serial-port control commands; the pan/tilt head can move in eight directions (up, down, left, right, upper-left, lower-left, upper-right, lower-right). The control law is as follows: according to the actual tracked target position, it is judged whether the target lies within a small central region (in this system a rectangular frame whose length and width are both 30 pixels, centered at the image center). If the target is not inside this rectangular frame, a corresponding control command is sent according to the region in which the particle posterior probability output lies, striving to move the target to the central region of the image, so that the camera follows the motion of the object. The division of the regions is shown in Figure 7:
Central region: the region the target is driven toward; in the system it is defined as a rectangular region of 30 × 30 pixels.
Region 1: the upper-left region. When the target is in this region, it should be moved toward the central region, so the camera should move toward the upper left, i.e., an upper-left movement command is sent.
Region 2: the upper region. When the target is in this region, it should be moved toward the central region, so the camera moves upward, i.e., an upward movement command is sent.
Regions 3 to 8 are described and controlled analogously: whichever region the target is in, the control command for the corresponding direction is sent.
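The nine-region control law above can be sketched as follows (the string command names are illustrative stand-ins for the actual Pelco-P serial commands; the 352 × 288 frame and 30 × 30 central rectangle follow the system parameters stated in the text):

```cpp
#include <cassert>
#include <string>

// Map the tracked target center (tx, ty) to a pan/tilt direction: "stop"
// inside the central rectangle, otherwise one of the eight directions that
// drives the target back toward the image center (y grows downward).
std::string panTiltCommand(int tx, int ty, int imgW = 352, int imgH = 288,
                           int boxW = 30, int boxH = 30) {
    int cx = imgW / 2, cy = imgH / 2;
    bool left  = tx < cx - boxW / 2, right = tx > cx + boxW / 2;
    bool up    = ty < cy - boxH / 2, down  = ty > cy + boxH / 2;
    if (!left && !right && !up && !down) return "stop"; // central region
    std::string cmd;
    if (up) cmd += "up"; else if (down) cmd += "down";
    if (left) cmd += cmd.empty() ? "left" : "-left";
    else if (right) cmd += cmd.empty() ? "right" : "-right";
    return cmd;
}
```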
(5) Overall system block diagram
The system implements two target selection modes (automatic and manual) and two tracking modes (particle filter and Camshift). Camshift is implemented with the library functions provided by OpenCV, while the particle filter has the three variants described above. A manual pan/tilt control interface is also provided, and network control has been implemented as well.
The software is written in VC++6.0 with one main thread and three worker threads. The main thread handles user interaction; worker thread 1 runs the tracking algorithm and outputs the control quantity; worker thread 2 transmits data to the remote machine; worker thread 3 receives requests from the remote machine. Remote data transmission and reception are implemented with socket programming over the UDP protocol.
Here, the PC with the video capture card acts as the server and another machine acts as the client. The workflow is as follows: when the server is ready to transmit and the client requests a connection, the connection is established and each end initializes its own socket. The server then transmits data and responds to client requests in real time, while the client displays the data received from the server and forwards pan/tilt control requests to it.
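The client/server exchange described above can be sketched in miniature with UDP sockets. The original uses SOCKET programming under VC++6.0; the loopback addresses, port choice, and message strings below are illustrative assumptions.

```python
import socket

# Server side: binds a UDP socket and answers pan/tilt requests.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # ephemeral port stands in for the real one
server.settimeout(2.0)
server_addr = server.getsockname()

# Client side: sends a pan/tilt control request and waits for the reply.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2.0)
client.sendto(b"PTZ up-left", server_addr)

request, client_addr = server.recvfrom(1024)   # server receives the request...
server.sendto(b"ACK " + request, client_addr)  # ...and acknowledges it

reply, _ = client.recvfrom(1024)
server.close(); client.close()
```

Because UDP is connectionless, the "connection establishment" in the text amounts to the client's first request reaching the server, after which both ends simply exchange datagrams.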
The initial software interface is shown in Figure 8.
By default, automatic target extraction is disabled, the tracking mode is particle filter tracking based on gray-level template region matching, and remote transmission is disabled.
The server mainly performs the target tracking task and responds to client requests; its four threads are described below.
The main thread handles user interaction and target selection; its flowchart is shown in Figure 9.
Worker thread 1 runs the four tracking algorithms and outputs the control quantity; its program flowchart is shown in Figure 10.
Worker thread 2 transmits remote data, and worker thread 3 receives requests from the remote machine and performs the corresponding control according to the request value; their program flows are shown in Figures 11(a) and 11(b), respectively.
The client likewise needs two threads, one for sending and one for receiving. The only difference is that the client must initiate the connection request; once the connection is established, both ends work in the same way, so the client flowcharts are not given here.

Claims (5)

1. A particle filter tracking algorithm based on gray-level template region matching, characterized by:
A. Using the gray-level template of the traditional region matching tracking method as the target description, specifically:
A1. The prior knowledge of the target includes building a gray-level template of the target;
A2. In the initial frame, an initial description of the target is obtained by automatic target detection (e.g. the frame difference method) or by human-computer interaction;
A3. A rectangular frame is used to represent the pose of the target;
A4. The pose of a rectangular frame comprises five parameters: the horizontal and vertical coordinates of its center, its height and width, and the angle between its height direction and the horizontal axis;
B. Combining the region matching algorithm with the particle filter tracking method, specifically:
B1. The estimate of the target parameters is represented as a weighted sum over the particles, with each particle's weight proportional to its matching value;
B2. The number of particles depends on the actual tracking requirements: more particles give more stable and more accurate tracking but also a larger computational load, so the number is a compromise that is adjusted dynamically according to the tracking situation;
B3. In each frame, a system state transition and a system observation are performed for each particle, the particle weights are computed, all particles are weighted to output the estimate of the target state, and then particle resampling is performed;
C. Searching for motion parameters in an affine space, specifically:
C1. The motion model is either a translation model with two parameters or an affine model with five parameters: horizontal and vertical displacement, horizontal and vertical scale, and rotation angle;
C2. The particles match the motion model and have the same dimensionality.
2. A particle filter tracking algorithm based on color probability distributions, characterized by:
A. Computing the color histogram, including:
A1. The pixels of the target region are converted from RGB space to HSV space, and the one-dimensional histogram of the H component is computed and normalized;
A2. The H component is divided uniformly into a number of small intervals, each interval forming one bin of the histogram;
A3. Considering the real-time requirements of visual tracking and the range of the H component, the number of H bins is set to 48 in this system;
B. Computing the color probability distribution map, including:
B1. A threshold is set; when the S component in HSV space is below this threshold, the corresponding H component is set to 0 (the invention sets the threshold to 30);
B2. In the corrected color probability distribution map, the value of every pixel is inverted, and finally pixels whose value is zero are replaced by a small positive value, so that the particle filter adapts well to scale changes when computing weights;
C. Combining the color probability distribution with the particle filter tracking method, specifically:
C1. From the analysis of the color probability distribution map, pixels whose color is close to the target color have larger probability values, and pixels on the target have the largest values. Given the prior knowledge of the target, a cluster of particles is spread around the initial position and then observed: for each particle, the sum of pixel values within the particle's state range is computed, modulated with an exponential function, and normalized to obtain the particle's weight. The closer a particle is to the target state, the larger its weight;
C2. Using the color probability distribution model amounts to mapping the original image into a probability gray-level image; therefore, as long as the target color does not change abruptly, the method is insensitive to partial occlusion, target rotation, deformation, and background motion;
C3. An affine model is used for tracking; the aim of tracking is to solve for the motion state of the target, including the target center in the x and y directions, the rotation angle, and the target scale in the x, y, and diagonal directions;
C4. The motion model is either a translation model with two parameters or an affine model with five parameters: horizontal and vertical displacement and scale, and rotation angle;
C5. The particles match the motion model and have the same dimensionality.
3. A particle filter tracking algorithm based on SIFT feature matching, characterized by:
A. Computing the SIFT feature matching, including:
A1. Generating the SIFT feature vectors: the image is first normalized, then enlarged to twice its original size and pre-filtered to remove noise, giving the bottom level of the Gaussian pyramid (level 1 of octave 1); this is followed by four computation steps: scale-space extremum detection, accurate localization of the feature points, determination of each feature point's dominant orientation, and generation of the SIFT feature vectors.
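The color model of claim 2 (a 48-bin H-component histogram with low-saturation pixels forced to H = 0 using an S threshold of 30) can be sketched as follows. The bin count and threshold come from the claim; the 8-bit scaling of H to [0, 180) and S to [0, 255] follows the common OpenCV convention and is an assumption here, as are the function names.

```python
import colorsys

H_BINS, S_THRESH = 48, 30          # values taken from the claim

def h_component(r, g, b):
    """8-bit RGB -> (H scaled to [0, 180), S scaled to [0, 255]),
    mimicking the common OpenCV 8-bit HSV convention (an assumption)."""
    h, s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 180.0, s * 255.0

def h_histogram(pixels):
    """Normalized 48-bin histogram of the H component; pixels whose
    S component falls below the threshold contribute H = 0 (claim 2, B1)."""
    hist = [0.0] * H_BINS
    for r, g, b in pixels:
        h, s = h_component(r, g, b)
        if s < S_THRESH:
            h = 0.0                # unreliable hue: force into bin 0
        hist[min(int(h / 180.0 * H_BINS), H_BINS - 1)] += 1.0
    total = sum(hist)
    return [v / total for v in hist] if total else hist
```

Back-projecting this histogram onto each frame would then give the probability gray-level image that claim 2 describes, on which the particle weights are computed.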
A2. Matching the SIFT feature vectors: to reduce the mismatches caused by one feature point having several similar candidate matches, the invention uses the ratio of the distances to the nearest and second-nearest neighbor feature points to reject false matches;
A3. A particle filter tracking technique based on SIFT feature matching is proposed. During tracking, the number of particles and the motion model are determined first; in this invention the motion model is an affine model with five parameters: horizontal and vertical displacement and scale, and rotation angle. Once the motion model is chosen, the particles match it and have parameters of the same dimensionality. It is then checked whether a target has been selected; the target may be selected manually or automatically. Once the target region is determined, the SIFT feature vectors of the target template are built, the particle parameters are initialized, and all particle weights are set to 1 (i.e. all particles are equally important). After particle initialization, subsequent frames enter the iterative particle filter process. In each frame, the particles are first spread (i.e. a prediction is made), the SIFT features of the image are computed within the predicted range, and these are matched against the SIFT features of the template. The matched point pairs are saved and used to generate a matched-point-pair map according to the system observation state; a system observation is performed for each particle on this map, the particle weights are computed, and all particles are weighted to output the estimate of the target state. Finally, particle resampling is performed and the next iteration begins.
4. Hardware design and implementation of the target tracking system, characterized by:
The tracking system is divided into a video acquisition module, a tracking-algorithm module, and an output control module. The invention uses a BL-E854CB network audio/video camera, a DR68 series pan/tilt head, a Numonyx video capture card, and a PC to form the visual tracking hardware system.
A. The day/night BL-E854CB network audio/video camera provides high-quality images and good performance.
B. The Numonyx HighEasy series video capture/encoding card adopts high-performance audio and video codec technology, performing real-time, precisely synchronized video and audio encoding entirely in hardware, and supports dynamic bit-rate control, constant bit-rate control, mixed bit-rate control, controllable frame rate, selectable frame mode, dynamic image-quality control, video-loss alarm, multi-channel analog video output, multi-channel alarm signal output, and other functions.
C. The DR68 series pan/tilt head supports a maximum horizontal rotation range of 0° to 350°, with 90° downward and 60° upward tilt; the rotation speed is 6°/s horizontally and 3.5°/s vertically. The invention uses a baud rate of 2400 bps and the Pelco-P protocol.
5. Software design and implementation of the target tracking system, characterized by:
A. Acquisition is performed after initializing the Numonyx capture card.
B. The system uses two target extraction modes, automatic and manual, specifically:
B1. When a moving object enters the field of view, automatic target extraction extracts it as the template for subsequent tracking. The system uses Gaussian background modeling to obtain a binary foreground image containing the target region with very little noise, then performs contour extraction, takes the contour with the largest area as the detected target, and locates it to obtain a rectangular region containing the target;
B2. Manual target extraction selects a rectangular region in the image around the target to be tracked and computes the parameters of this region, i.e. initializes the selected rectangle;
C. Target tracking is performed with the three particle filter tracking algorithms proposed by the invention, specifically:
C1. In the particle filter tracking method, two main parameters are adjusted: the particle spread radius R and the number of particles N. The spread radius is related to the target's speed: a fast-moving target needs a larger spread radius and more particles to achieve stable tracking, while for a slow-moving target the radius can be made smaller and the number of particles reduced to cut the computational load. Based on the CPU's processing speed and the typical distance between target and camera, the system uses a spread radius of 20 pixels, which is well suited to tracking targets moving at moderate speed, and N = 0.5 × 40 × 40 = 800 particles, which is sufficient to keep lock on the target;
C2. For the extracted target region, depending on the selected tracking mode, one of three templates is built: the target's gray-level template, color histogram, or SIFT features; this yields the tracking template;
D. The center of the tracked target position is used as the control command sent to the pan/tilt head to implement tracking control. The commands are output in the PELCO-P protocol at a baud rate of 2400 bps, and the pan/tilt can be driven in eight directions (up, down, left, right, upper-left, lower-left, upper-right, lower-right);
E. Overall software design, including:
E1. The system implements two target selection modes (automatic and manual) and two tracking modes (particle filter and Camshift), where Camshift is implemented with the library functions provided by OpenCV and the particle filter has the three variants described above; network control is also implemented;
E2. The software is written in VC++6.0 with one main thread and three worker threads: the main thread handles user interaction, worker thread 1 runs the tracking algorithm and outputs the control quantity, worker thread 2 sends remote data, and worker thread 3 receives requests from the remote machine. Remote data transmission is implemented with socket programming over the UDP protocol;
E3. The PC with the video capture card acts as the server and another machine as the client, implementing network control.
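The particle filter loop shared by the three claimed algorithms (state transition, observation, weighting, weighted estimate, resampling) can be illustrated with a minimal one-dimensional toy version. The particle count N = 800 and the spread radius of 20 pixels come from claim 5; the Gaussian transition noise and the bell-shaped observation function are illustrative assumptions standing in for the claimed template, color, or SIFT matching values.

```python
import random

def particle_filter_step(particles, weights, observe, spread=20.0):
    """One iteration: state transition (random spread), observation
    (weight each particle), weighted estimate, then resampling."""
    # 1. State transition: spread each particle around its old state.
    particles = [p + random.gauss(0.0, spread) for p in particles]
    # 2. Observation: each weight is proportional to the matching value.
    weights = [observe(p) for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # 3. Output: the weighted sum of the particles estimates the state.
    estimate = sum(p * w for p, w in zip(particles, weights))
    # 4. Resampling: draw particles in proportion to their weights.
    particles = random.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles), estimate

# Toy 1-D target at position 100; the "matching value" peaks at the target.
random.seed(0)
target = 100.0
observe = lambda p: 1.0 / (1.0 + (p - target) ** 2)
particles = [random.uniform(0.0, 200.0) for _ in range(800)]  # N = 800 as in claim 5
weights = [1.0 / 800] * 800
for _ in range(10):
    particles, weights, estimate = particle_filter_step(particles, weights, observe)
```

In the claimed algorithms the state is five-dimensional (affine parameters) rather than scalar, but each of the four steps maps one-to-one onto the loop above.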
CN2011101189182A 2011-05-10 2011-05-10 Automatic target tracking method and system by combining multi-characteristic matching and particle filtering Pending CN102184551A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011101189182A CN102184551A (en) 2011-05-10 2011-05-10 Automatic target tracking method and system by combining multi-characteristic matching and particle filtering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2011101189182A CN102184551A (en) 2011-05-10 2011-05-10 Automatic target tracking method and system by combining multi-characteristic matching and particle filtering

Publications (1)

Publication Number Publication Date
CN102184551A true CN102184551A (en) 2011-09-14

Family

ID=44570720

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011101189182A Pending CN102184551A (en) 2011-05-10 2011-05-10 Automatic target tracking method and system by combining multi-characteristic matching and particle filtering

Country Status (1)

Country Link
CN (1) CN102184551A (en)

Cited By (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509309A (en) * 2011-11-04 2012-06-20 大连海事大学 Image-matching-based object-point positioning system
CN102592290A (en) * 2012-02-16 2012-07-18 浙江大学 Method for detecting moving target region aiming at underwater microscopic video
CN102592135A (en) * 2011-12-16 2012-07-18 温州大学 Visual tracking method of subspace fusing target space distribution and time sequence distribution characteristics
CN102663419A (en) * 2012-03-21 2012-09-12 江苏视软智能系统有限公司 Pan-tilt tracking method based on representation model and classification model
CN102819263A (en) * 2012-07-30 2012-12-12 中国航天科工集团第三研究院第八三五七研究所 Multi-camera visual perception system for UGV (Unmanned Ground Vehicle)
CN102881012A (en) * 2012-09-04 2013-01-16 上海交通大学 Vision target tracking method aiming at target scale change
CN103136762A (en) * 2011-11-29 2013-06-05 南京理工大学常熟研究院有限公司 Dynamic image target tracking method
CN103646407A (en) * 2013-12-26 2014-03-19 中国科学院自动化研究所 Video target tracking method based on ingredient and distance relational graph
CN103870815A (en) * 2014-03-24 2014-06-18 公安部第三研究所 Mancar structural description method and system for dome camera video monitoring
CN103926927A (en) * 2014-05-05 2014-07-16 重庆大学 Binocular vision positioning and three-dimensional mapping method for indoor mobile robot
CN103985141A (en) * 2014-05-28 2014-08-13 西安电子科技大学 Target tracking method based on HSV color covariance characteristics
CN104200226A (en) * 2014-09-01 2014-12-10 西安电子科技大学 Particle filtering target tracking method based on machine learning
CN104200495A (en) * 2014-09-25 2014-12-10 重庆信科设计有限公司 Multi-target tracking method in video surveillance
CN104766054A (en) * 2015-03-26 2015-07-08 济南大学 Vision-attention-model-based gesture tracking method in human-computer interaction interface
CN105592315A (en) * 2015-12-16 2016-05-18 深圳大学 Video characteristic redundant information compression method and system based on video space-time attribute
CN105989615A (en) * 2015-03-04 2016-10-05 江苏慧眼数据科技股份有限公司 Pedestrian tracking method based on multi-feature fusion
CN106097388A (en) * 2016-06-07 2016-11-09 大连理工大学 In video frequency object tracking, target prodiction, searching scope adaptive adjust and the method for Dual Matching fusion
CN106203449A (en) * 2016-07-08 2016-12-07 大连大学 The approximation space clustering system of mobile cloud environment
CN106530331A (en) * 2016-11-23 2017-03-22 北京锐安科技有限公司 Video monitoring system and method
CN106709456A (en) * 2016-12-27 2017-05-24 成都通甲优博科技有限责任公司 Computer vision-based unmanned aerial vehicle target tracking box initialization method
CN107121893A (en) * 2017-06-12 2017-09-01 中国科学院上海光学精密机械研究所 Photoetching projection objective lens thermal aberration on-line prediction method
CN108198199A (en) * 2017-12-29 2018-06-22 北京地平线信息技术有限公司 Moving body track method, moving body track device and electronic equipment
CN108460786A (en) * 2018-01-30 2018-08-28 中国航天电子技术研究院 A kind of high speed tracking of unmanned plane spot
CN108563220A (en) * 2018-01-29 2018-09-21 南京邮电大学 The motion planning of apery Soccer robot
CN108833919A (en) * 2018-06-29 2018-11-16 东北大学 Color single-pixel imaging method and system based on random circulant matrix
CN109323697A (en) * 2018-11-13 2019-02-12 大连理工大学 A Method for Rapid Particle Convergence When Indoor Robot Starts at Any Point
CN109600710A (en) * 2018-12-10 2019-04-09 浙江工业大学 Multi-movement target monitoring method based on difference algorithm in a kind of video sensor network
CN109801279A (en) * 2019-01-21 2019-05-24 京东方科技集团股份有限公司 Object detection method and device, electronic equipment, storage medium in image
CN109872343A (en) * 2019-02-01 2019-06-11 视辰信息科技(上海)有限公司 Weak texture gestures of object tracking, system and device
CN109881604A (en) * 2019-02-19 2019-06-14 福州市极化律网络科技有限公司 Mixed reality guardrail for road shows adjustment system
CN109903281A (en) * 2019-02-28 2019-06-18 中科创达软件股份有限公司 It is a kind of based on multiple dimensioned object detection method and device
CN109949340A (en) * 2019-03-04 2019-06-28 湖北三江航天万峰科技发展有限公司 Target scale adaptive tracking method based on OpenCV
CN110135577A (en) * 2018-02-09 2019-08-16 宏达国际电子股份有限公司 Device and method for training fully connected neural network
CN110298330A (en) * 2019-07-05 2019-10-01 东北大学 A kind of detection of transmission line polling robot monocular and localization method
CN110503665A (en) * 2019-08-22 2019-11-26 湖南科技学院 An Improved Camshift Target Tracking Algorithm
CN111050059A (en) * 2018-10-12 2020-04-21 黑快马股份有限公司 Follow-up shooting system with image stabilization function and follow-up shooting method with image stabilization function
CN111427381A (en) * 2019-12-31 2020-07-17 天嘉智能装备制造江苏股份有限公司 Control method for following work of small-sized sweeping machine based on dressing identification of operator
CN111526335A (en) * 2020-05-03 2020-08-11 杭州晶一智能科技有限公司 Target tracking algorithm for suspended track type omnidirectional pan-tilt camera
CN111524163A (en) * 2020-04-16 2020-08-11 南京卓宇智能科技有限公司 Target tracking method based on continuous extended Kalman filtering
CN112200829A (en) * 2020-09-07 2021-01-08 慧视江山科技(北京)有限公司 Target tracking method and device based on correlation filtering method
CN112243082A (en) * 2019-07-17 2021-01-19 百度时代网络技术(北京)有限公司 Tracking shooting method and device, electronic equipment and storage medium
CN112364865A (en) * 2020-11-12 2021-02-12 郑州大学 Method for detecting small moving target in complex scene
CN113465620A (en) * 2021-06-02 2021-10-01 上海追势科技有限公司 Parking lot particle filter positioning method based on semantic information
CN114330501A (en) * 2021-12-01 2022-04-12 南京航空航天大学 Track pattern recognition method and equipment based on dynamic time warping
CN115082441A (en) * 2022-07-22 2022-09-20 山东微山湖酒业有限公司 Retort material tiling method in wine brewing distillation process based on computer vision
CN115903904A (en) * 2022-12-02 2023-04-04 亿航智能设备(广州)有限公司 Method, device and equipment for automatically tracking target by unmanned aerial vehicle cradle head
CN117746076A (en) * 2024-02-19 2024-03-22 成都航空职业技术学院 Equipment image matching method based on machine vision
CN118840400A (en) * 2024-09-24 2024-10-25 南通凝聚元界信息科技有限公司 Matching positioning method based on reconstructed image visual characteristics

Cited By (71)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509309B (en) * 2011-11-04 2013-12-18 大连海事大学 Image-matching-based object-point positioning system
CN102509309A (en) * 2011-11-04 2012-06-20 大连海事大学 Image-matching-based object-point positioning system
CN103136762A (en) * 2011-11-29 2013-06-05 南京理工大学常熟研究院有限公司 Dynamic image target tracking method
CN102592135A (en) * 2011-12-16 2012-07-18 温州大学 Visual tracking method of subspace fusing target space distribution and time sequence distribution characteristics
CN102592135B (en) * 2011-12-16 2013-12-18 温州大学 Visual tracking method of subspace fusing target space distribution and time sequence distribution characteristics
CN102592290A (en) * 2012-02-16 2012-07-18 浙江大学 Method for detecting moving target region aiming at underwater microscopic video
CN102663419A (en) * 2012-03-21 2012-09-12 江苏视软智能系统有限公司 Pan-tilt tracking method based on representation model and classification model
CN102819263B (en) * 2012-07-30 2014-11-05 中国航天科工集团第三研究院第八三五七研究所 Multi-camera visual perception system for UGV (Unmanned Ground Vehicle)
CN102819263A (en) * 2012-07-30 2012-12-12 中国航天科工集团第三研究院第八三五七研究所 Multi-camera visual perception system for UGV (Unmanned Ground Vehicle)
CN102881012A (en) * 2012-09-04 2013-01-16 上海交通大学 Vision target tracking method aiming at target scale change
CN102881012B (en) * 2012-09-04 2016-07-06 上海交通大学 Visual target tracking method for target scale change
CN103646407B (en) * 2013-12-26 2016-06-22 中国科学院自动化研究所 A kind of video target tracking method based on composition distance relation figure
CN103646407A (en) * 2013-12-26 2014-03-19 中国科学院自动化研究所 Video target tracking method based on ingredient and distance relational graph
CN103870815A (en) * 2014-03-24 2014-06-18 公安部第三研究所 Mancar structural description method and system for dome camera video monitoring
CN103926927A (en) * 2014-05-05 2014-07-16 重庆大学 Binocular vision positioning and three-dimensional mapping method for indoor mobile robot
CN103985141A (en) * 2014-05-28 2014-08-13 西安电子科技大学 Target tracking method based on HSV color covariance characteristics
CN104200226A (en) * 2014-09-01 2014-12-10 西安电子科技大学 Particle filtering target tracking method based on machine learning
CN104200226B (en) * 2014-09-01 2017-08-25 西安电子科技大学 Particle filter method for tracking target based on machine learning
CN104200495A (en) * 2014-09-25 2014-12-10 重庆信科设计有限公司 Multi-target tracking method in video surveillance
CN104200495B (en) * 2014-09-25 2017-03-29 重庆信科设计有限公司 A kind of multi-object tracking method in video monitoring
CN105989615A (en) * 2015-03-04 2016-10-05 江苏慧眼数据科技股份有限公司 Pedestrian tracking method based on multi-feature fusion
CN104766054A (en) * 2015-03-26 2015-07-08 济南大学 Vision-attention-model-based gesture tracking method in human-computer interaction interface
CN105592315A (en) * 2015-12-16 2016-05-18 深圳大学 Video characteristic redundant information compression method and system based on video space-time attribute
CN106097388A (en) * 2016-06-07 2016-11-09 大连理工大学 In video frequency object tracking, target prodiction, searching scope adaptive adjust and the method for Dual Matching fusion
CN106097388B (en) * 2016-06-07 2018-12-18 大连理工大学 The method that target prodiction, searching scope adaptive adjustment and Dual Matching merge in video frequency object tracking
CN106203449A (en) * 2016-07-08 2016-12-07 大连大学 The approximation space clustering system of mobile cloud environment
CN106530331A (en) * 2016-11-23 2017-03-22 北京锐安科技有限公司 Video monitoring system and method
CN106709456A (en) * 2016-12-27 2017-05-24 成都通甲优博科技有限责任公司 Computer vision-based unmanned aerial vehicle target tracking box initialization method
CN106709456B (en) * 2016-12-27 2020-03-31 成都通甲优博科技有限责任公司 Unmanned aerial vehicle target tracking frame initialization method based on computer vision
CN107121893A (en) * 2017-06-12 2017-09-01 中国科学院上海光学精密机械研究所 Photoetching projection objective lens thermal aberration on-line prediction method
CN107121893B (en) * 2017-06-12 2018-05-25 中国科学院上海光学精密机械研究所 Photoetching projection objective lens thermal aberration on-line prediction method
CN108198199A (en) * 2017-12-29 2018-06-22 北京地平线信息技术有限公司 Moving body track method, moving body track device and electronic equipment
CN108563220A (en) * 2018-01-29 2018-09-21 南京邮电大学 The motion planning of apery Soccer robot
CN108460786A (en) * 2018-01-30 2018-08-28 中国航天电子技术研究院 A kind of high speed tracking of unmanned plane spot
CN110135577A (en) * 2018-02-09 2019-08-16 宏达国际电子股份有限公司 Device and method for training fully connected neural network
CN108833919A (en) * 2018-06-29 2018-11-16 东北大学 Color single-pixel imaging method and system based on random circulant matrix
CN108833919B (en) * 2018-06-29 2020-02-14 东北大学 Color single-pixel imaging method and system based on random circulant matrix
CN111050059A (en) * 2018-10-12 2020-04-21 黑快马股份有限公司 Follow-up shooting system with image stabilization function and follow-up shooting method with image stabilization function
CN109323697A (en) * 2018-11-13 2019-02-12 大连理工大学 A Method for Rapid Particle Convergence When Indoor Robot Starts at Any Point
CN109323697B (en) * 2018-11-13 2022-02-15 大连理工大学 A Method for Rapid Particle Convergence When Indoor Robot Starts at Any Point
CN109600710A (en) * 2018-12-10 2019-04-09 浙江工业大学 Multi-movement target monitoring method based on difference algorithm in a kind of video sensor network
CN109600710B (en) * 2018-12-10 2020-10-30 浙江工业大学 Multi-moving-target monitoring method based on difference algorithm in video sensor network
CN109801279B (en) * 2019-01-21 2021-02-02 京东方科技集团股份有限公司 Method and device for detecting target in image, electronic equipment and storage medium
CN109801279A (en) * 2019-01-21 2019-05-24 京东方科技集团股份有限公司 Object detection method and device, electronic equipment, storage medium in image
CN109872343A (en) * 2019-02-01 2019-06-11 视辰信息科技(上海)有限公司 Weak texture gestures of object tracking, system and device
CN109881604B (en) * 2019-02-19 2022-08-09 福州市极化律网络科技有限公司 Mixed reality road isolated column display adjustment system
CN109881604A (en) * 2019-02-19 2019-06-14 福州市极化律网络科技有限公司 Mixed reality guardrail for road shows adjustment system
CN109903281A (en) * 2019-02-28 2019-06-18 中科创达软件股份有限公司 It is a kind of based on multiple dimensioned object detection method and device
CN109949340A (en) * 2019-03-04 2019-06-28 湖北三江航天万峰科技发展有限公司 Target scale adaptive tracking method based on OpenCV
CN110298330A (en) * 2019-07-05 2019-10-01 东北大学 Monocular detection and positioning method for a power transmission line inspection robot
CN110298330B (en) * 2019-07-05 2023-07-18 东北大学 A monocular detection and positioning method for a power transmission line inspection robot
CN112243082B (en) * 2019-07-17 2022-09-06 百度时代网络技术(北京)有限公司 Tracking shooting method and device, electronic equipment and storage medium
CN112243082A (en) * 2019-07-17 2021-01-19 百度时代网络技术(北京)有限公司 Tracking shooting method and device, electronic equipment and storage medium
CN110503665A (en) * 2019-08-22 2019-11-26 湖南科技学院 An Improved Camshift Target Tracking Algorithm
CN111427381A (en) * 2019-12-31 2020-07-17 天嘉智能装备制造江苏股份有限公司 Control method for follow-up operation of a small sweeping machine based on operator clothing recognition
CN111524163A (en) * 2020-04-16 2020-08-11 南京卓宇智能科技有限公司 Target tracking method based on continuous extended Kalman filtering
CN111526335B (en) * 2020-05-03 2021-08-27 金华精研机电股份有限公司 Target tracking method for suspended track type omnidirectional pan-tilt camera
CN111526335A (en) * 2020-05-03 2020-08-11 杭州晶一智能科技有限公司 Target tracking algorithm for suspended track type omnidirectional pan-tilt camera
CN112200829A (en) * 2020-09-07 2021-01-08 慧视江山科技(北京)有限公司 Target tracking method and device based on correlation filtering method
CN112364865A (en) * 2020-11-12 2021-02-12 郑州大学 Method for detecting small moving target in complex scene
CN113465620A (en) * 2021-06-02 2021-10-01 上海追势科技有限公司 Parking lot particle filter positioning method based on semantic information
CN114330501A (en) * 2021-12-01 2022-04-12 南京航空航天大学 Track pattern recognition method and equipment based on dynamic time warping
CN114330501B (en) * 2021-12-01 2022-08-05 南京航空航天大学 Track pattern recognition method and equipment based on dynamic time warping
CN115082441B (en) * 2022-07-22 2022-11-11 山东微山湖酒业有限公司 Retort material tiling method in wine brewing distillation process based on computer vision
CN115082441A (en) * 2022-07-22 2022-09-20 山东微山湖酒业有限公司 Retort material tiling method in wine brewing distillation process based on computer vision
CN115903904A (en) * 2022-12-02 2023-04-04 亿航智能设备(广州)有限公司 Method, device and equipment for automatic target tracking by an unmanned aerial vehicle gimbal
WO2024114376A1 (en) * 2022-12-02 2024-06-06 亿航智能设备(广州)有限公司 Method and apparatus for automatically tracking target by unmanned aerial vehicle gimbal, device, and storage medium
CN117746076A (en) * 2024-02-19 2024-03-22 成都航空职业技术学院 Equipment image matching method based on machine vision
CN117746076B (en) * 2024-02-19 2024-04-26 成都航空职业技术学院 Equipment image matching method based on machine vision
CN118840400A (en) * 2024-09-24 2024-10-25 南通凝聚元界信息科技有限公司 Matching positioning method based on reconstructed image visual characteristics
CN118840400B (en) * 2024-09-24 2024-11-29 南通凝聚元界信息科技有限公司 A matching and positioning method based on reconstructed image visual features

Similar Documents

Publication Publication Date Title
CN102184551A (en) Automatic target tracking method and system by combining multi-characteristic matching and particle filtering
CN113963445B (en) Pedestrian fall action recognition method and equipment based on pose estimation
US10198823B1 (en) Segmentation of object image data from background image data
CN103268480B (en) Visual tracking system and method
CN103473542B (en) Multi-clue fused target tracking method
CN107330371A (en) Method, device and storage device for acquiring facial expressions of a 3D face model
KR101414670B1 (en) Object tracking method in thermal image using online random forest and particle filter
Bešić et al. Dynamic object removal and spatio-temporal RGB-D inpainting via geometry-aware adversarial learning
CN107798313A (en) Human posture recognition method, device, terminal and storage medium
CN103778645B (en) Circular target real-time tracking method based on images
CN103971386A (en) Method for foreground detection in dynamic background scenario
CN106056053A (en) Human posture recognition method based on skeleton feature point extraction
CN102982341A (en) Self-intended crowd density estimation method for camera capable of straddling
CN106780560B (en) A visual tracking method of bionic robotic fish based on feature fusion particle filter
CN119091234B (en) Intelligent decision-making and response method and system based on data analysis
CN103581614A (en) Method and system for tracking targets in video based on PTZ
CN108510520B (en) Image processing method, device and AR equipment
CN102034247A (en) Motion capture method for binocular vision image based on background modeling
Tao et al. Indoor 3D semantic robot VSLAM based on mask regional convolutional neural network
CN106815855A (en) Human body motion tracking method combining generative and discriminative models
Gryn et al. Detecting motion patterns via direction maps with application to surveillance
Monteleone et al. Pedestrian tracking in 360 video by virtual PTZ cameras
Kwolek et al. Swarm intelligence based searching schemes for articulated 3D body motion tracking
CN110111368B (en) Human body posture recognition-based similar moving target detection and tracking method
Li Research on camera-based human body tracking using improved cam-shift algorithm

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20110914