Embodiment
The main features of the present invention are: 1) the principal axis of each person is extracted as the matching feature; because the principal axis is the symmetry axis of the human region, the errors of points distributed symmetrically on its two sides cancel each other out, which makes the principal axis more robust, and the principal axis is less affected by the results of motion detection and segmentation; 2) principal-axis detection methods are proposed for three different situations, namely detection of a single person's principal axis, detection of the principal axes of a group, and detection of a person's principal axis under occlusion; 3) based on the geometric constraints between principal axes under different views, a principal-axis matching likelihood function is defined to measure the similarity between pairs of principal axes from different views; 4) multi-view information is fused according to the matching results to optimize and update the tracking results.
The general framework of the scheme is shown in accompanying drawing 7. First, motion detection is performed on the image sequence of each single camera, the principal-axis feature of each person is extracted, and tracking under the single camera is carried out; then the principal axes under different views are matched under the homography constraint; finally, multi-view information is fused according to the matching results to update the tracking results.
Each technical problem involved in the present invention is explained in detail below.
(1) Moving object segmentation
Moving object segmentation is the first step of motion tracking. The algorithm adopts background subtraction with a single Gaussian model to detect the moving regions. In order to reduce the influence of illumination variation and shadows, the normalized color model rgs is adopted, where r=R/(R+G+B), g=G/(R+G+B) and s=R+G+B. A background image sequence is first filtered to obtain the single-Gaussian background model; the Gaussian parameters of each pixel are the mean u_i and the variance σ_i (i = r, g, s) of the background model in each channel. The current image and the background image are then differenced, and the difference image is thresholded to obtain the binary image M_xy, in which foreground pixels have value 1 and background pixels have value 0:

    M_xy = 1 if |I_i(x, y) − u_i(x, y)| > α_i σ_i(x, y) for some i ∈ {r, g, s}, and M_xy = 0 otherwise,

where I_i(x, y) (i = r, g, s) is the current measured value of pixel (x, y), and α_i is a threshold parameter that can be determined empirically. After the binary image is obtained, the morphological operations of erosion and dilation are applied to further filter out noise. Finally, connected-component analysis of the binary image is used to extract simply connected foreground regions as the segmented moving regions.
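As an illustration of the segmentation step above, the following Python sketch implements single-Gaussian background subtraction in the rgs color model; the threshold values α_i, the morphological kernel size, and the minimum region area are illustrative assumptions rather than values fixed by the invention.

```python
import cv2
import numpy as np

def to_rgs(img_bgr):
    """Convert a BGR image to the normalized rgs color model (r, g, s)."""
    b, g, r = cv2.split(img_bgr.astype(np.float64))
    s = b + g + r + 1e-6                 # brightness; small epsilon avoids /0
    return np.dstack([r / s, g / s, s])  # channels: r, g, s

def build_background(bg_frames):
    """Per-pixel mean and spread of each rgs channel over a background sequence."""
    stack = np.stack([to_rgs(f) for f in bg_frames])   # (T, H, W, 3)
    return stack.mean(axis=0), stack.std(axis=0) + 1e-6

def segment(frame, mean, sigma, alpha=(2.5, 2.5, 2.5), min_area=200):
    """Threshold |I_i - u_i| > alpha_i * sigma_i, then clean up morphologically."""
    diff = np.abs(to_rgs(frame) - mean)
    mask = np.any(diff > np.array(alpha) * sigma, axis=2).astype(np.uint8) * 255
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # erosion + dilation
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    # keep only reasonably large connected components as moving regions
    regions = [labels == i for i in range(1, n)
               if stats[i, cv2.CC_STAT_AREA] > min_area]
    return mask, regions
```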
(2) Detection of the person's principal axis
For simplicity, it is assumed that people walk upright, so that the principal axis of a person exists. Based on this assumption, principal-axis detection methods for three different situations are discussed: detection of a single person's principal axis, detection of the principal axes of a group, and detection of principal axes under occlusion. Which of the three situations applies can be judged from the correspondence relations maintained during tracking.
Detection of a single person's principal axis. For a single person, the Least Median of Squares (LMedS) method is adopted to detect the principal axis. Let D(X_i, l) denote the perpendicular distance from the i-th foreground pixel X_i to a candidate straight line l. According to the Least Median of Squares criterion, the principal axis is the line for which the median of the squared perpendicular distances of all foreground pixels is minimum:

    l* = arg min_l med_i D(X_i, l)^2.
Accompanying drawing 1 gives an example of single-person principal-axis detection.
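The Least Median of Squares search can be approximated by random sampling of candidate lines. The following Python sketch, with an assumed number of random trials, returns the line that minimizes the median squared perpendicular distance of the foreground pixels; it is one possible realization of the detection step, not a prescribed implementation.

```python
import numpy as np

def principal_axis_lmeds(points, n_trials=500, rng=np.random.default_rng(0)):
    """points: (N, 2) array of foreground pixel coordinates (x, y).
    Returns (point_on_line, unit_direction) of the estimated principal axis."""
    best_med, best_line = np.inf, None
    for _ in range(n_trials):
        p1, p2 = points[rng.choice(len(points), size=2, replace=False)]
        d = p2 - p1
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        # perpendicular distance of every pixel to the candidate line (p1, d)
        rel = points - p1
        dist = np.abs(rel[:, 0] * d[1] - rel[:, 1] * d[0])
        med = np.median(dist ** 2)          # LMedS criterion
        if med < best_med:
            best_med, best_line = med, (p1, d)
    return best_line
```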
Detection of the principal axes of a group. Group principal-axis detection consists of two parts: segmentation of the individuals and detection of their principal axes.
For the segmentation of individuals, a vertical projection histogram is introduced; each individual in the group corresponds to a peak region of the vertical projection histogram. Only peak regions satisfying certain conditions correspond to single individuals; these are called significant peak regions. A significant peak region must satisfy two conditions:
a) the maximum of the peak region must be greater than a specific threshold, the peak threshold P_T;
b) the trough values bounding the peak region must be less than a specific threshold, the trough threshold C_T.
Suppose that in a peak region P_1, P_2, ..., P_n are its local maxima, and C_l, C_r are respectively the left and right trough values of the region. The above two conditions can then be expressed mathematically as:

    max(P_1, P_2, ..., P_n) > P_T    (3)

    C_l < C_T, C_r < C_T    (4)
The trough threshold C_T is chosen as the mean of the whole histogram, and the peak threshold P_T is chosen as 80 percent of the person's height in image coordinates. In the second step, principal-axis detection, the single-person detection method described above can be applied to each segmented individual.
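A possible realization of the group segmentation step is sketched below in Python: the vertical projection histogram is computed from the group's foreground mask, local minima below the trough threshold C_T delimit candidate regions, and only regions whose peak exceeds P_T are kept. Treating the blob borders as troughs is a simplifying assumption of this sketch.

```python
import numpy as np

def split_group(mask, person_height):
    """mask: binary foreground image of a group blob (H, W).
    Returns (left, right) column ranges, one per significant peak region."""
    hist = (mask > 0).sum(axis=0).astype(float)     # vertical projection histogram
    p_t = 0.8 * person_height                       # peak threshold P_T
    c_t = hist.mean()                               # trough threshold C_T
    # troughs: local minima below C_T; blob borders are treated as troughs
    troughs = [0] + [x for x in range(1, len(hist) - 1)
                     if hist[x] <= hist[x - 1] and hist[x] <= hist[x + 1]
                     and hist[x] < c_t] + [len(hist) - 1]
    regions = []
    for left, right in zip(troughs[:-1], troughs[1:]):
        if hist[left:right + 1].max() > p_t:        # condition (3): peak exceeds P_T
            regions.append((left, right))           # bounding troughs satisfy (4)
    return regions
```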
Fig. 2 gives an example of principal-axis detection for several people: (b) is the detected foreground region, (c) is its vertical projection histogram, and (d) is the result after segmentation. In the vertical projection histogram (c) there are three obvious peak regions, so the region can be segmented into three parts corresponding to three individuals. Finally, their principal axes are shown in (e).
Detection of the principal axis of a person under occlusion. The persons are first separated, the pixels of each foreground region are found, and the principal axis is then detected by applying the Least Median of Squares method to the segmented foreground pixels. Segmentation under occlusion is realized with a color-template-based method; the template comprises a color model and an additional probability mask. When occlusion occurs, the segmentation of the moving objects can be expressed as a classification problem, namely deciding to which model each foreground pixel belongs. This problem can be solved with the Bayes rule: a foreground pixel X belongs to the k-th moving object (model) if the posterior probability of that model given X is the largest among all models.
Principal-axis detection under occlusion is illustrated in Fig. 3.
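A sketch of the Bayes-rule assignment of foreground pixels under occlusion is given below; the Gaussian color model and the form of the probability-mask prior are assumptions standing in for whichever concrete color template an implementation uses.

```python
import numpy as np

class ColorTemplate:
    """Per-object appearance model: a Gaussian color model plus a probability mask
    giving the prior that the pixel at (x, y) belongs to this object."""
    def __init__(self, mean_color, cov, prob_mask):
        self.mean = np.asarray(mean_color, float)
        self.inv_cov = np.linalg.inv(cov)
        self.log_det = np.log(np.linalg.det(cov))
        self.prob_mask = prob_mask                    # (H, W) prior probabilities

    def log_posterior(self, color, x, y):
        d = np.asarray(color, float) - self.mean
        log_lik = -0.5 * (d @ self.inv_cov @ d + self.log_det)
        return log_lik + np.log(self.prob_mask[y, x] + 1e-12)

def assign_pixel(color, x, y, templates):
    """Pixel X belongs to the object k whose posterior is largest (Bayes rule)."""
    scores = [t.log_posterior(color, x, y) for t in templates]
    return int(np.argmax(scores))
```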
(3) Tracking under a single camera
A Kalman filter is adopted to realize the tracking.
The state vector X comprises the person's position (x, y) in the image and the velocity (v_x, v_y); the observation vector Z is the person's position (x, y). The measured value of the person's position is taken as the person's "land-point" in the image, i.e., the intersection of the person's principal axis with the lower edge of the bounding box. Then:

    Z_t = [x_t, y_t]^T,  X_t = [x_t, v_{x,t}, y_t, v_{y,t}]^T    (6)

The person's walking speed can be regarded as approximately constant, so the state-transition matrix A and the observation matrix H take the standard constant-velocity form

    A = [1 Δt 0 0; 0 1 0 0; 0 0 1 Δt; 0 0 0 1],  H = [1 0 0 0; 0 0 1 0],

where Δt is the time interval between frames. The position of the object in the current frame is predicted from its position in the previous frame and compared with the measured value of the current frame (the intersection of the person's principal axis with the lower edge of the bounding box). If the distance between the measured value and the predicted value is small, the measurement is considered reliable and represents the object's position in the current frame; if the distance exceeds a certain threshold (for example, when the lower half of the person is occluded), the measurement is unreliable, and it is replaced by the intersection of the principal axis with another line, namely the perpendicular from the predicted point to the principal axis. The whole single-camera tracking framework is shown in accompanying drawing 4.
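A minimal constant-velocity Kalman tracker for the land-point, consistent with the state and observation vectors of equation (6), might look as follows; the noise covariances and the gating threshold are illustrative assumptions.

```python
import numpy as np

class LandPointKalman:
    def __init__(self, x0, y0, dt=1.0, gate=20.0):
        self.x = np.array([x0, 0.0, y0, 0.0])            # state [x, vx, y, vy]
        self.P = np.eye(4) * 100.0
        self.A = np.array([[1, dt, 0, 0], [0, 1, 0, 0],
                           [0, 0, 1, dt], [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0], [0, 0, 1, 0]], float)
        self.Q, self.R = np.eye(4) * 0.1, np.eye(2) * 4.0
        self.gate = gate                                  # distance threshold

    def step(self, z, axis_point, axis_dir):
        # predict
        self.x = self.A @ self.x
        self.P = self.A @ self.P @ self.A.T + self.Q
        pred = self.H @ self.x
        # if the measured land-point is far from the prediction (e.g. lower half
        # occluded), replace it by the foot of the perpendicular from the predicted
        # point onto the detected principal axis
        if np.linalg.norm(z - pred) > self.gate:
            d = axis_dir / np.linalg.norm(axis_dir)
            z = axis_point + d * np.dot(pred - axis_point, d)
        # update
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - pred)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[0], self.x[2]
```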
(4) Finding all best-matching principal-axis pairs
First, the homography matrices between the different image planes need to be computed. The homography between two images describes the one-to-one correspondence of points lying on a common plane. Given corresponding points, the parameters of the matrix can be solved from the resulting simultaneous equations. In this method the corresponding points are obtained manually: marker points are placed in the scene in advance, or some special corresponding points in the scene are used.
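As an example of solving for the homography from manually marked ground-plane correspondences, the following sketch uses OpenCV's findHomography; the point coordinates shown are placeholders, not calibration data from the invention.

```python
import cv2
import numpy as np

# manually marked ground-plane points, in pixel coordinates of camera i and camera j
pts_i = np.array([[102, 311], [498, 305], [520, 472], [88, 480]], np.float32)
pts_j = np.array([[240, 290], [610, 300], [590, 455], [205, 440]], np.float32)

H_ij, _ = cv2.findHomography(pts_i, pts_j)   # 3x3 homography, image plane i -> j

def warp_point(H, p):
    """Apply a homography to a single 2D point."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```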
The principal-axis matching likelihood function is then defined.
The principal-axis matching likelihood function is used to measure the degree of matching between principal axes observed from different views. Before the concrete definition is given, the geometric relation between principal axes under different views is illustrated in accompanying drawing 5. As shown in the figure, suppose there are two cameras i and j. Suppose camera i observes person s with principal axis L_i^s, which corresponds to a projection onto the ground plane; correspondingly, camera j observes person k with principal axis L_j^k. Let H_ij denote the homography from the image plane of camera i to the image plane of camera j. When L_i^s is projected by the homography H_ij into the image-plane coordinate system of camera j, the resulting line intersects L_j^k at a point X_j^{sk}. According to the properties of the homography, if person s observed by camera i and person k observed by camera j correspond to the same individual in three-dimensional space, then X_j^{sk} corresponds to that person's "land-point", i.e., the intersection of the person's principal axis with the ground plane. Therefore, the distance between the measured "land-point" and the intersection point X_j^{sk} can be used to measure the degree of matching between the principal axes: the smaller the distance, the better the two axes match.
According to the geometric relation between principal axes under different views, the matching likelihood function between the principal axes of person s and person k is defined as

    L(s, k) = p(X_j^{sk} | Y_j^k) · p(X_i^{ks} | Y_i^s),

where Y_i^s is the "land-point" of person s observed by camera i, Y_j^k is the "land-point" of person k observed by camera j, X_j^{sk} is the intersection point obtained by transforming principal axis s from view i into view j and intersecting it with principal axis k, and X_i^{ks} is obtained symmetrically by transforming axis k from view j into view i. Without loss of generality, the above two probability density functions are assumed to be Gaussian; then

    L(s, k) ∝ exp(−d(X_j^{sk}, Y_j^k)^2 / (2σ^2)) · exp(−d(X_i^{ks}, Y_i^s)^2 / (2σ^2)),

where d(·, ·) denotes the distance in the image plane and σ is the standard deviation of the Gaussian.
Finally, all best-matching pairs are found by a matching algorithm. The multi-camera correspondence problem can in fact be modeled as a maximum-likelihood problem: among all candidate pairs of principal axes, the pairs that actually correspond to one another have the largest matching likelihood. To simplify the problem, a principal-axis matching distance is defined, and the maximum-likelihood problem is converted into a minimum-matching-distance problem. The matching distance d(s, k) is derived from the matching likelihood as its negative logarithm, which under the Gaussian assumption gives

    d(s, k) ∝ d(X_j^{sk}, Y_j^k)^2 + d(X_i^{ks}, Y_i^s)^2.

The smaller the matching distance, the better the two principal axes match each other.
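Under the notation introduced above, the matching distance between one axis from camera i and one from camera j can be computed as sketched below; representing each axis by a point and a direction, and the quadratic form of the distance, follow from the Gaussian interpretation given here and are not a verbatim formula from the patent.

```python
import numpy as np

def to_homog_line(point, direction):
    """Homogeneous coefficients of the line through `point` along `direction`."""
    return np.cross(np.append(point, 1.0), np.append(point + direction, 1.0))

def warp_line(H, point, direction):
    """Transform an image line (point, direction) with homography H."""
    p1 = H @ np.append(point, 1.0)
    p2 = H @ np.append(point + direction, 1.0)
    return np.cross(p1, p2)

def intersect(l1, l2):
    """Intersection of two homogeneous lines (parallel lines are not handled)."""
    p = np.cross(l1, l2)
    return p[:2] / p[2]

def matching_distance(axis_i, axis_j, land_i, land_j, H_ij, H_ji):
    """axis_*: (point, direction) of the principal axis in each image;
    land_*: measured land-points Y_i^s and Y_j^k."""
    X_j = intersect(warp_line(H_ij, *axis_i), to_homog_line(*axis_j))  # X_j^{sk}
    X_i = intersect(warp_line(H_ji, *axis_j), to_homog_line(*axis_i))  # X_i^{ks}
    return np.sum((X_j - land_j) ** 2) + np.sum((X_i - land_i) ** 2)
```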
The purpose of the principal-axis matching algorithm is to find the globally best-matching pairs, i.e., those whose total matching distance is minimum. The matching algorithm between two cameras is described as follows.

Suppose M principal axes are detected under camera i and N principal axes are detected under camera j.

Step 1: Combine the principal axes detected under the two views pairwise to form all possibly matching axis pairs. Without loss of generality, suppose M ≤ N; then selecting, in order, M axes from the N axes of camera j to be paired with the M axes of camera i yields N!/(N − M)! combinations in total. Each combination Θ_k has the following form:

    Θ_k = {(1, n_1), (2, n_2), ..., (M, n_M)},

where the pair (m, n_m) consists of axis m from camera i and axis n_m from camera j.
Step 2: For each pair (m, n) in each combination, compute its matching distance d(m, n) and require

    d(m, n) < d_T,

where d_T is a threshold determined empirically, used to judge whether the pair (m, n) matches. If this constraint is not satisfied, the pair (m, n) is deleted from Θ_k.

Step 3: Choose the combinations Θ_k with the maximum number l of matched pairs; all such Θ_k form the set Θ.

Step 4: Within the set Θ, find the globally best-matching combination λ such that the sum of all its matching distances reaches the minimum, i.e.,

    λ = arg min_k Σ_{(m, n) ∈ Θ_k} d(m, n),  Θ_k ∈ Θ.

Step 5: The resulting Θ_λ is the globally best-matching combination, and each pair in it is a matched pair of principal axes.
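A brute-force Python sketch of Steps 1 to 5 for two cameras is given below: it enumerates the ordered selections of N axes taken M at a time, prunes pairs whose distance exceeds d_T, keeps the combinations with the most surviving pairs, and returns the one with the minimum total distance. Exhaustive enumeration is only practical for small M and N.

```python
from itertools import permutations

def best_match(dist, d_t):
    """dist[m][n] is the matching distance between axis m of camera i (M axes)
    and axis n of camera j (N axes), with M <= N.  Returns the matched pairs
    (m, n) of the globally best combination."""
    M, N = len(dist), len(dist[0])
    candidates = []
    for combo in permutations(range(N), M):          # Step 1: N!/(N-M)! combinations
        pairs = [(m, n) for m, n in enumerate(combo)
                 if dist[m][n] < d_t]                # Step 2: prune by threshold d_T
        candidates.append(pairs)
    l = max(len(p) for p in candidates)              # Step 3: max number of pairs
    theta = [p for p in candidates if len(p) == l]
    # Step 4: minimum total matching distance among the retained combinations
    best = min(theta, key=lambda p: sum(dist[m][n] for m, n in p))
    return best                                      # Step 5: globally best pairs
```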
The above algorithm is easily extended to the case of more than two cameras. The cameras are first combined pairwise; for each pair of cameras that share a common ground-plane region, the matching relation is established with the algorithm introduced above. If the matches obtained between different camera pairs contradict each other, only the principal-axis pair with the minimum matching distance is retained.
(5) Fusing multi-view information to update the tracking results
After all matched principal-axis pairs have been found, the match information can be used to update the tracking results under each single camera. For the case of two cameras, this update step is effective only when the tracked person is within the common ground-plane region of the two views.
Suppose the matching algorithm described above finds, for the same person, the corresponding principal axes under the two views, one in view i and one in view j. The principal axis under view j is transformed into the image plane of view i through the homography between the two image planes; the intersection of the original principal axis under view i with the transformed line is then the final position of the person in image plane i, and it is used to update the tracking result under the original single view i. The same holds for view j.
As shown in accompanying drawing 5, if the principal axes L_i^s and L_j^k under the two views correspond to the same person, then transforming L_i^s from view i into view j yields a straight line that intersects L_j^k at a point X_j^{sk}. This intersection point is exactly the person's "land-point" under view j, i.e., the intersection of the person's principal axis with the ground plane.
For the case of more than two cameras, a person has two or more such intersection points; the mean of these intersection points is then taken as the person's final "land-point" position.
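The fusion step can be sketched as follows: each matched axis from another view is warped into the current view, its intersection with the local axis gives one land-point estimate, and with more than two cameras the estimates are averaged, as described above. The point-and-direction line representation is the same hypothetical one used in the earlier matching sketch.

```python
import numpy as np

def homog_line(point, direction):
    return np.cross(np.append(point, 1.0), np.append(point + direction, 1.0))

def fused_land_point(axis_i, matched_axes):
    """axis_i: (point, direction) of the person's axis in the current view i.
    matched_axes: list of (H_ji, axis_j) for every other view j matched to this
    person, where H_ji maps view j's image plane into view i's."""
    li = homog_line(*axis_i)
    estimates = []
    for H_ji, (pj, dj) in matched_axes:
        p1, p2 = H_ji @ np.append(pj, 1.0), H_ji @ np.append(pj + dj, 1.0)
        lj_in_i = np.cross(p1, p2)              # matched axis warped into view i
        q = np.cross(li, lj_in_i)               # intersection = land-point estimate
        estimates.append(q[:2] / q[2])
    return np.mean(estimates, axis=0)           # average over all matched views
```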
For the situation in which the lower half of the person is occluded, the algorithm can still match robustly and accurately estimate the person's position ("land-point") in the image, based on the predicted position and the detected principal axis.
In order to verify the concrete scheme of the present invention, a large number of experiments were carried out on two databases, realizing principal-axis-matching-based pedestrian tracking under multiple cameras. The experimental results further verify the validity and robustness of the method.
The experimental results are shown in Fig. 6. Each tracked person is represented by a rectangular box; the number below the box indicates the person's identity; the center line of the box represents the detected principal axis; and the intersection of the principal axis with the lower edge of the box represents the estimated position of the person in the image.
The numerals 1, 2, 3, 4 in Fig. 6(a) mark the result of a multi-person tracking experiment under two cameras of the NLPR database.
The numerals 1, 2, 3, 4 in Fig. 6(b) mark the result of a multi-person tracking experiment under three cameras of the NLPR database.
The numerals 1, 2 in Fig. 6(c) mark the result of a tracking experiment on a single person under occlusion under two cameras of the PETS2001 database.
The numerals 1, 2, 3, 4 in Fig. 6(d) mark the result of a tracking experiment on a group under occlusion under two cameras of the PETS2001 database.