
CN104424651A - Method and system for tracking object - Google Patents

Method and system for tracking object

Info

Publication number
CN104424651A
CN104424651A (application CN201310376491.5A)
Authority
CN
China
Prior art keywords
candidate feature
value
feature
image
candidate
Prior art date
Legal status
Pending
Application number
CN201310376491.5A
Other languages
Chinese (zh)
Inventor
刘童
游赣梅
师忠超
鲁耀杰
刘媛
Current Assignee
Ricoh Co Ltd
Original Assignee
Ricoh Co Ltd
Priority date
Filing date
Publication date
Application filed by Ricoh Co Ltd filed Critical Ricoh Co Ltd
Priority to CN201310376491.5A
Publication of CN104424651A

Landscapes

  • Image Analysis (AREA)

Abstract

The present disclosure provides a method and system for tracking an object. The method comprises: obtaining, in a current image frame, a foreground image containing the object and a background image outside the foreground image; for each of at least two candidate features, calculating a discriminative power parameter representing the discriminative power of that candidate feature, according to the difference between the value of the candidate feature in the foreground image and its value in the background image; and performing object tracking by combining the values of the candidate features with their discriminative power parameters. Object tracking can thereby be performed accurately even in complex environments and changing scenes.

Description

Method and system for tracking an object
Technical field
The present invention relates to the field of image processing, and more particularly to a technique for improving object-tracking performance in images by calculating the discriminative power of candidate features in a foreground image and a background image.
Background art
In fields such as computer vision and video surveillance, techniques for detecting and tracking moving objects in video sequences play an important role in many applications. Moving-object detection and tracking is one of the main subjects studied in digital image processing, pattern recognition and computer vision, with applications in video surveillance systems, robot navigation, intelligent visual monitoring, medical image analysis, industrial inspection and video image analysis. The task of object tracking is to follow the trajectory of an object in a given scene and to estimate the position and motion state of the object at the current time. However, object-tracking systems are often strongly affected by factors such as illumination conditions, shadows and occlusion, and it is particularly difficult to track an object accurately in a complex environment or a strongly changing scene. How to perform object tracking accurately based on the features of the video images has therefore become a research focus.
U.S. Patent Application Publication No. US2010/0310127A1 to Ito, entitled "Subject Tracking Device and Camera" and published on December 9, 2010, discloses an object tracking device that addresses shape changes of the object during tracking. It calculates two similarities: the similarity between the original template image and the target image, and the similarity between an updated template image and the target image. Based on at least one of these two similarities it determines whether to update the template image, and it generates a newly updated template image as a linear combination of the original template image and the target image. This technique only decides whether to update the template according to the two similarities between the original/updated template images and the target image, and then weights the original template image and the matching-region information to obtain the updated template image, so as to adapt to shape changes of the object. However, it uses only shape as the image feature for deciding whether to update the template, and does not consider different feature types and their influence on object tracking.
In addition, U.S. Patent No. US8131014B2 to Abe, entitled "Object-tracking Computer Program Product, Object-tracking Device and Camera" and granted on March 6, 2012, discloses an object-tracking method that creates and maintains multiple object templates. During tracking, the template with the highest similarity is selected, and a condition for template updating based on new image data is defined. This technique focuses on the initialization, updating and application of multiple templates during tracking; the template update considers only feature values and does not consider different feature types and their influence on object tracking.
A technique is therefore needed that can track objects accurately even in complex environments and changing scenes.
Summary of the invention
The object-tracking technique disclosed herein can track objects more accurately in complex or changing environments. Conventional tracking uses a fixed set of image features to track an object; when environmental changes alter the features of the image, such a strategy cannot guarantee tracking performance. Embodiments of the present disclosure adaptively adjust the selection and/or weights of the candidate features used in object tracking according to the discriminative power of those features in the current image scene, selecting the more effective features among at least two candidate features, or distributing the roles of these candidate features in object tracking more reasonably, so that a more effective and reasonable feature combination is always used to track the object in the current scene more accurately.
According to one aspect of the present invention, a method of tracking an object is provided, comprising: obtaining a foreground image containing the object in a current image frame and a background image outside the foreground image; for each of at least two candidate features, calculating a discriminative power parameter representing the discriminative power of the candidate feature, according to the difference between the value of the candidate feature in the foreground image and its value in the background image; and performing object tracking by combining the value of each candidate feature with the discriminative power parameter of each candidate feature.
According to another aspect of the present invention, a system for tracking an object is provided, comprising: an obtaining means configured to obtain a foreground image containing the object in a current image frame and a background image outside the foreground image; a calculating means configured to calculate, for each of at least two candidate features, a discriminative power parameter representing the discriminative power of the candidate feature, according to the difference between the value of the candidate feature in the foreground image and its value in the background image; and an object-tracking means configured to perform object tracking by combining the value of each candidate feature with the discriminative power parameter of each candidate feature.
Brief description of the drawings
Fig. 1 is a schematic diagram of an example application environment of embodiments of the present invention;
Fig. 2 is a flowchart schematically showing a method of tracking an object according to an embodiment of the present invention;
Fig. 3A is a schematic diagram of the foreground image and the background image obtained in the method of Fig. 2; Fig. 3B is a schematic diagram of the differences between the values of three candidate features, such as color, gray level and edge, in the foreground image and the background image of Fig. 3A; Fig. 3C is a schematic diagram showing, for cars of different colors in their respective scenes, which of the three candidate features (color, gray level, edge) has the best discriminative power;
Fig. 4 is a flowchart schematically showing a method of tracking an object according to another embodiment of the present invention;
Fig. 5 is a block diagram schematically showing a system for tracking an object according to another embodiment of the present invention.
Detailed description of embodiments
Reference will now be made in detail to specific embodiments of the invention, examples of which are illustrated in the accompanying drawings. Although the invention will be described in conjunction with specific embodiments, it should be understood that this is not intended to limit the invention to those embodiments. On the contrary, the intention is to cover the changes, modifications and equivalents falling within the spirit and scope of the invention as defined by the appended claims. It should be noted that the method steps described herein can be realized by any functional block or functional arrangement, and any functional block or functional arrangement can be implemented as a physical entity, a logical entity, or a combination of both.
To enable those skilled in the art to better understand the present invention, it is described in further detail below with reference to the drawings and specific embodiments.
Fig. 1 schematically shows the schematic diagram of application according to the example application environment of each embodiment of the present invention.
Suppose that the object-tracking technique according to embodiments of the present invention is applied to vehicle tracking. Referring to Fig. 1, an application environment 100 comprises, for example, an on-board camera 101, which continuously captures a scene (for example, ahead of the vehicle) containing objects such as vehicles, thereby obtaining a sequence of image frames containing the object, and an image processor 102, which processes the captured image frames to carry out subsequent processing such as object detection and object tracking. The on-board camera 101 and the image processor 102 may both be mounted on one vehicle. The detected and tracked objects may be, for example, objects relevant to driving safety, such as pedestrians, vehicles and traffic signs. Of course, the application scenario shown in Fig. 1 is only one example application of the embodiments of the present invention and does not limit the invention; other application scenarios different from that of Fig. 1 may exist in practice.
Fig. 2 is the process flow diagram of the method schematically showing tracing object according to an embodiment of the invention.
As shown in Fig. 2, the method 200 of tracking an object according to an embodiment of the present invention comprises: step S201, obtaining a foreground image containing the object in the current image frame and a background image outside the foreground image; step S202, for each of at least two candidate features, calculating a discriminative power parameter representing the discriminative power of the candidate feature according to the difference between the value of the candidate feature in the foreground image and its value in the background image; and step S203, performing object tracking by combining the value of each candidate feature with the discriminative power parameter of each candidate feature.
In this way, the selection and/or weights of the candidate features used in object tracking can be adjusted adaptively according to the discriminative power of the candidate features in the current image scene: among at least two candidate features, the more effective features are selected, or the roles of these candidate features in object tracking are distributed more reasonably, so that a more effective and reasonable feature combination is always used to track the object in the current scene, and object tracking can be performed more accurately in complex or changing environments.
In step S201, the foreground image containing the object in the current image frame and the background image outside the foreground image can be obtained. Step S201 may obtain the foreground and background images by performing object detection with various existing image-processing and pattern-recognition methods, for example the popular object-detection framework based on an AdaBoost classifier and Haar features. Typically, the object-detection result for each frame is a rectangle representing the object position. The image inside this rectangle (i.e., the image containing the detected object) can be regarded as the foreground image, and the image outside this rectangle can be regarded as the background image. For brevity, the specific methods and steps of such conventional object-detection techniques are not described in detail here.
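As an illustration only, the following minimal sketch shows how such a detection rectangle might be obtained with an off-the-shelf Haar-cascade detector; the use of OpenCV, the detector parameters, and the cascade file (a face cascade shipped with OpenCV, standing in for a vehicle detector) are assumptions, not taken from the patent:

```python
# Minimal sketch (assumption: OpenCV with a pretrained Haar cascade is available).
import cv2

def detect_foreground_rect(frame_bgr, cascade_file="haarcascade_frontalface_default.xml"):
    """Return one detection rectangle (x, y, w, h), or None if nothing is found."""
    detector = cv2.CascadeClassifier(cv2.data.haarcascades + cascade_file)
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    rects = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    if len(rects) == 0:
        return None
    # Keep the largest detection as the tracked object's foreground rectangle.
    x, y, w, h = max(rects, key=lambda r: r[2] * r[3])
    return int(x), int(y), int(w), int(h)
```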
In step S202, for each of at least two candidate features, a discriminative power parameter representing the discriminative power of the candidate feature can be calculated according to the difference between the value of the candidate feature in the foreground image and its value in the background image. In one embodiment, the at least two candidate features may comprise two or more of the following: a color feature related to the values of one or more colors; a gray feature related to gray level; an edge feature related to edges; a texture feature related to texture; a corner feature related to corner points; a shape feature related to shape; a contour feature related to contours; and two or more of the values of the individual bins of a feature-value-versus-pixel-count histogram of any of the aforementioned features.
A candidate feature can be any feature that represents an image and can help distinguish different images. Traditionally, image feature extraction is a concept in computer vision and image processing: it refers to extracting image information and deciding whether each image point belongs to an image feature, and an ideal image feature is repeatable, discriminative, compact and efficient. The candidate features in the present disclosure include such traditional image features, for example the values of red (R), green (G) and blue (B), other color features obtained by operating on the R, G, B values, and gray, edge, texture, corner, shape and contour features, together with their values in the image (for example, mean value, mean square deviation, etc.). Beyond these, the present disclosure also innovatively introduces, as candidate features, the values of the individual bins of the feature-value-versus-pixel-count histogram of any of the aforementioned features. This is because the value of a given bin of such a histogram (for example, the number of pixels whose gray level falls into a certain interval) may also differ between images, so the value of that bin can serve as a candidate feature that helps distinguish images; a sketch of such bin features follows this paragraph.
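As a hedged illustration of the histogram-bin candidate features described above (the function name, bin count and normalization are assumptions):

```python
import numpy as np

def histogram_bin_features(gray_region, num_bins=8):
    """Each bin of the gray-level histogram (pixel count per gray interval)
    is treated as one candidate feature value, normalized by region size."""
    counts, _ = np.histogram(gray_region, bins=num_bins, range=(0, 256))
    return counts / max(gray_region.size, 1)
```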
In one embodiment, the color feature related to the values of one or more colors can be obtained by operating on the values of one or more colors (for example, the three colors R, G, B); such a color feature may also simply comprise the values of one or more colors. For example, and without limitation: compute the respective means of the three color components R, G, B of an image (foreground image or background image), find the component with the largest mean (say B) and the component with the smallest mean (say R), and then compute the color feature as $f_{color} = \bar{B}/\bar{R}$, where $\bar{B}$ is the mean of the B component and $\bar{R}$ is the mean of the R component. Of course, the color feature may also use only the mean of any one of R, G, B. Other color features related to color can also be conceived from the various colors; they are not enumerated here.
In one embodiment, the gray feature related to gray level can be the mean or the sum of the gray values of an image (foreground image or background image). Other gray features related to gray level can also be conceived from the gray values; they are not enumerated here.
In one embodiment, the edge feature related to edges can be found by image-processing algorithms such as edge extraction. For example, the edge feature can be the ratio of the area of the extracted edge pixels of the object in the foreground image (or of the background in the background image) to the total area of the foreground image (or background image). Of course, other edge features related to edges can be conceived from the extracted edges; they are not enumerated here. Many edge-extraction methods exist in the prior art, and their details are not discussed here.
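A minimal sketch of the three example features above (the ratio-of-means color feature, the gray mean, and the edge-pixel ratio); the Canny thresholds and the use of OpenCV/NumPy are assumptions:

```python
import cv2
import numpy as np

def color_feature(region_bgr):
    """Ratio of the largest channel mean to the smallest channel mean."""
    means = region_bgr.reshape(-1, 3).mean(axis=0)
    return float(means.max() / max(means.min(), 1e-6))

def gray_feature(region_bgr):
    """Mean gray level of the region."""
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY)
    return float(gray.mean())

def edge_feature(region_bgr, lo=100, hi=200):
    """Fraction of pixels lying on a Canny edge."""
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, lo, hi)
    return float((edges > 0).sum() / edges.size)

def feature_vector(region_bgr):
    """Values of the example candidate features for one image region."""
    return np.array([color_feature(region_bgr),
                     gray_feature(region_bgr),
                     edge_feature(region_bgr)])
```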
For example, the texture feature related to texture can be the kinds and number of textures, the area of the textured part (as a ratio of the total area), etc. Of course, other texture features related to texture can be conceived; they are not enumerated here.
For example, the corner feature related to corner points can be the number of corner points, the positions of corner points, the pixel count, etc. (for corner detection see, e.g., http://en.wikipedia.org/wiki/Corner_detection). Of course, other corner features related to corner points can be conceived; they are not enumerated here.
For example, the shape feature related to shape can be the kind, number, size, pixel count, etc. of shapes. Of course, other shape features related to shape can be conceived; they are not enumerated here.
For example, the contour feature related to contours can be the kind, number, size, pixel count, etc. of contours. Of course, other contour features related to contours can be conceived; they are not enumerated here.
In addition, in other embodiments, for the gray feature related to gray level, the texture feature related to texture, the corner feature related to corner points, the shape feature related to shape, the contour feature related to contours, and so on, it is also possible to conceive further respective features from the gray levels, textures, corner points, shapes and contours obtained by various existing image-processing techniques; they are likewise not enumerated here.
In addition, as mentioned above, the present disclosure also innovatively introduces the values of the individual bins of the feature-value-versus-pixel-count histogram of any of the aforementioned features as candidate features, because the value of a given bin of such a histogram (for example, the number of pixels whose gray level falls into a certain interval) may differ between images, so the value of that bin can serve as a candidate feature that helps distinguish images.
Note that the "value" in "the difference between the value of each of at least two candidate features in the foreground image and its value in the background image" in step S202 can be the mean value of the candidate feature over the pixels of the image, or a total value, or a value obtained from some arithmetic expression (such as the color feature $f_{color}$ exemplified above), etc. The value is not limited to the specific value at a certain pixel, or the total or mean over all pixels, or some function of all pixel values.
In step S202, a discriminative power parameter representing the discriminative power of each candidate feature can be calculated according to the difference between the value of the candidate feature in the foreground image and its value in the background image. The discriminative power parameter of a candidate feature is obtained from this difference: it can be the magnitude of the difference itself, or the result of some operation on that magnitude, and so on. In short, the discriminative power parameter of a candidate feature is related to the difference between its values in the foreground image and the background image, and represents the discriminative power of the candidate feature. The discriminative power of a feature means the ability of that feature to distinguish the foreground image from the background image. In general, the larger the difference between the values of a candidate feature in the foreground and background images, the greater its discriminative power and the larger its discriminative power parameter; that is, during object tracking, the detected value of this candidate feature makes it easier to distinguish the foreground image from the background image, and hence to track the object in the foreground image.
The discriminative power parameter can be computed from this difference in many ways. Several ways are exemplified below, but they are not limiting; those skilled in the art can conceive other ways of computing the discriminative power parameter based on the relation between the foreground/background difference of a candidate feature's values and its discriminative power.
In one embodiment, step S202 of calculating, for each candidate feature, the discriminative power parameter representing its discriminative power can comprise: calculating the discriminative power parameter $DP_i$ of the $i$-th candidate feature by the formula $DP_i = \frac{|F_i - B_i|}{F_i + B_i}$, where $i$ is a positive integer, $F_i$ is the value of the $i$-th candidate feature in the foreground image, and $B_i$ is its value in the background image.
In another embodiment, the discriminative power parameter of the $i$-th candidate feature can be calculated as $DP_i = \frac{|F_i - B_i|}{M_i}$, where $M_i$ is the maximum possible value of $|F_i - B_i|$ (which can be estimated from the value ranges of $F_i$ and $B_i$). Alternatively, the discriminative power parameter can simply be $DP_i = |F_i - B_i|$. Likewise, besides the formulas exemplified above, those skilled in the art can construct other formulas for the discriminative power parameter based on the relation between the foreground/background difference of a candidate feature's values and its discriminative power.
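A short sketch of this computation for the first variant above (the function name and the epsilon guard are assumptions):

```python
import numpy as np

def discriminative_power(fg_values, bg_values, eps=1e-6):
    """DP_i = |F_i - B_i| / (F_i + B_i): large when a feature separates
    foreground from background well, near zero when it does not."""
    fg = np.asarray(fg_values, dtype=float)
    bg = np.asarray(bg_values, dtype=float)
    return np.abs(fg - bg) / np.maximum(fg + bg, eps)
```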
In one embodiment, step S203 of performing object tracking by combining the value of each candidate feature with its discriminative power parameter can comprise: during object tracking, using the discriminative power parameters of the candidate features to decide how much influence the value of each candidate feature has in the candidate regions of the next image frame. As mentioned above, in general, the larger the difference between a candidate feature's values in the foreground and background images, the greater the feature's ability to distinguish the foreground from the background, the larger its discriminative power parameter, and the easier it is, during tracking, to distinguish the foreground image from the background image by the detected value of this feature, and hence to track the object in the foreground image. Therefore, through the discriminative power parameter, the detection of this candidate feature and the detected values can be made more important for the tracking result, i.e., exert a larger influence.
In one embodiment, step S203 can comprise: using the discriminative power parameter of each candidate feature as the weight of that candidate feature's value in the candidate regions of the next image frame when performing object tracking.
In one embodiment, step S203 can comprise: calculating the similarity $s$ between the values of the $N$ candidate features in a candidate region of the next image frame and the foreground image by the formula
$s = \sum_{i=1}^{N} w_i \cdot \bigl| M - |C_i - F_i| \bigr|,$
where $w_i$ is a weight related to the discriminative power parameter $DP_i$ of the $i$-th candidate feature, $C_i$ is the value of the $i$-th candidate feature in the candidate region of the next image frame, $F_i$ is the value of the $i$-th candidate feature in the foreground image of the current image frame, $N$ is the number of candidate features, and $M$ is the maximum of $|C_i - F_i|$ over all $i$ ($M$ is introduced only so that the similarity $s$ increases with resemblance: the smaller $|C_i - F_i|$, the larger $|M - |C_i - F_i||$, hence the larger $s$ and the more similar the candidate region is to the current foreground image).
By comparing the similarities $s$ of the candidate regions of the next image frame, the candidate region with the maximum similarity $s$ is taken as the foreground image of the next image frame. The weight $w_i$ can take continuous values (i.e., any floating-point number) or discrete values. Usually $w_i$ can be set to the discriminative power parameter $DP_i$ itself. However, the way of deriving $w_i$ from $DP_i$ is not limited to this; the weight can also be related to the discriminative power parameter in other ways, for example by discretizing $DP_i$: set $K$ possible weight values ($K$ a positive integer), divide the possible range of $DP_i$ into $K$ intervals, and when the value of $DP_i$ falls into a certain interval, let $w_i$ take the value corresponding to that interval. In any case, the larger the discriminative power parameter $DP_i$, the larger the weight $w_i$, and obviously the greater the influence of this candidate feature during object tracking.
Of course, the above formula is not the only algorithm for the similarity $s$ between the candidate-region values and the foreground image; those skilled in the art can conceive various other similarity formulas from the relation between the similarity and the differences of the candidate-feature values in the candidate region and the foreground image. For example, $\{C_i\}$ and $\{F_i\}$ can be regarded as two points in an $N$-dimensional space, a distance $D$ between them (e.g., the Euclidean distance) can be computed, and the reciprocal $1/D$ can be used as the similarity (if the distance is 0, the similarity is set to a large positive value), and so on; these variants are not enumerated here. In the present disclosure, the larger the similarity, the more similar the candidate region of the next image frame is to the foreground image, the more likely this candidate region is the position of the current foreground image in the next frame, and hence the more likely the object contained in the foreground image has been tracked. Meanwhile, during object tracking, a threshold can be set, or other means used, to judge whether the similarity $s$ is large enough to decide that this candidate region of the next frame is the position of the current foreground image in the next frame.
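A sketch of this weighted similarity, including the optional discretization of $DP_i$ into $K$ weight levels (the helper names and level spacing are assumptions):

```python
import numpy as np

def discretize_weights(dp, k=4, dp_max=1.0):
    """Map each DP_i into one of K levels: larger DP_i -> larger weight."""
    levels = np.linspace(1.0 / k, 1.0, k)
    idx = np.minimum((dp / dp_max * k).astype(int), k - 1)
    return levels[idx]

def similarity(candidate_values, foreground_values, weights):
    """s = sum_i w_i * |M - |C_i - F_i||, with M = max_i |C_i - F_i|."""
    diff = np.abs(np.asarray(candidate_values) - np.asarray(foreground_values))
    m = diff.max()
    return float(np.sum(np.asarray(weights) * np.abs(m - diff)))
```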
In one embodiment, besides computing the similarity over all candidate features (e.g., by the above formula), step S203 can also comprise: selecting the one or more candidate features with the largest discriminative power parameters $DP_i$ as the candidate features to be computed for the candidate regions of the next image frame during tracking. By selecting only the one or more candidate features with stronger discriminative power among all candidate features, a good and fairly accurate tracking result can be maintained while reducing the amount of computation spent on each candidate feature.
In fact, the respective discriminative power parameters of the candidate features are used to strengthen the role of features with strong discriminative power, or weaken the role of features with weak discriminative power, in various adaptive object-tracking algorithms. There are many such algorithms: besides the above-described approach of searching for the candidate region with the largest similarity to the current foreground image, various existing object-tracking techniques can also be used, such as the particle filter algorithm (see, e.g., Hu Shiqiang and Jing Zhongliang, "A survey of particle filter algorithms", Control and Decision, Vol. 20, No. 4, April 2005) and the mean-shift algorithm (see, e.g., http://blog.csdn.net/dadaadao/article/details/6029583); they are not repeated here. Any algorithm that uses image features or other features to perform object tracking can be applied in embodiments of the present invention.
In one embodiment, the method 200 can further comprise: step S204 (not shown), using the foreground image of the next image frame obtained as the tracking result, and the background image outside that foreground image, to update the discriminative power parameter of each candidate feature according to the difference between the values of the at least two candidate features in the foreground image and the background image. That is, in the method 200, the current tracking result obtained by the method 200 can also serve as the basis for the foreground and background images of the next image frame, so that tracking of the next frame is carried out automatically.
As described above, according to embodiments of the present invention, the selection and/or weights of the candidate features used in object tracking can be adjusted adaptively according to the discriminative power of the candidate features in the current image scene: among at least two candidate features, the more effective features are selected, or the roles of these candidate features in object tracking are distributed more reasonably, so that a more effective and reasonable feature combination is always used to track the object in the current scene, and object tracking can be performed more accurately in complex or changing environments.
To describe the method of Fig. 2 more intuitively, Figs. 3A-3C schematically show how the discriminative power parameter representing the discriminative power of each candidate feature is calculated from the difference between the values of at least two candidate features in the foreground image and the background image.
Fig. 3A is a schematic diagram of the foreground image and the background image obtained in the method of Fig. 2. The left side of Fig. 3A shows the foreground image, which contains the detected and tracked object; the right side shows the background image. The background image can consist of all or part of the remaining image in the current image frame outside the foreground image, as long as it expresses the image surrounding the foreground image. Specifically, for example, after the position of the detected object in the current image frame has been obtained, say (x, y, w, h), where (x, y) are the coordinates of the upper-left corner of the object in the image and w and h are its width and height in the image, the foreground image can be set to the rectangle (x, y, w, h), and the background image can be set to the rectangle (x - w/2, y - h/2, 2*w, 2*h) with the foreground part removed (i.e., not the whole current image frame minus the foreground image, but a partial region surrounding the foreground image). A background image of this position and size reduces the amount of computation on the background: each candidate feature need not be computed over the whole frame minus the foreground, only over a partial region around the foreground, and the difference between a candidate feature's values in the background around the foreground and in the foreground itself can still be obtained. Of course, the size and position of the background image are not limited to those shown in Fig. 3A and can be varied; the background image can also be the whole current image frame minus the foreground image, only with a possibly larger amount of computation.
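A minimal sketch of this foreground/background split, suitable for features computed from pixel statistics (the clipping to the frame border and the flat background-pixel output are added assumptions):

```python
import numpy as np

def split_foreground_background(frame, rect):
    """Given rect=(x, y, w, h), return the foreground patch and the pixels of
    the surrounding ring taken from the rectangle (x-w/2, y-h/2, 2w, 2h)."""
    x, y, w, h = rect
    fh, fw = frame.shape[:2]
    x0, y0 = max(x - w // 2, 0), max(y - h // 2, 0)
    x1, y1 = min(x + w + w // 2, fw), min(y + h + h // 2, fh)
    foreground = frame[y:y + h, x:x + w]
    ring = frame[y0:y1, x0:x1]
    # Mask out the foreground part so only the surrounding region remains.
    mask = np.ones(ring.shape[:2], dtype=bool)
    mask[y - y0:y - y0 + h, x - x0:x - x0 + w] = False
    background_pixels = ring[mask]  # flat array of background pixels
    return foreground, background_pixels
```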
Fig. 3B schematically shows the differences between the values of three candidate features, such as color, gray level and edge, in the foreground image and the background image of Fig. 3A. As shown in Fig. 3B, suppose there are three candidate features: color, gray level and edge. In the embodiment of Fig. 3B, the color feature can be defined, for example and without limitation, as follows: first compute the respective means of the three color components R, G, B of an image (foreground or background image), find the component with the largest mean (say B) and the component with the smallest mean (say R), and compute the color feature $f_{color} = \bar{B}/\bar{R}$, where $\bar{B}$ is the mean of the component with the largest mean and $\bar{R}$ is the mean of the component with the smallest mean. The edge feature can be defined, for example and without limitation, as the ratio of the area of the extracted edge pixels of the object in the foreground image (or of the background in the background image) to the total area of the foreground image (or background image). The gray feature can usually be defined as the mean of the gray values of an image (foreground or background image), or another gray-related feature.
Of course, besides the three color, gray and edge candidate features exemplified in Fig. 3B, there can be other candidate features as mentioned before, such as the texture feature related to texture, the corner feature related to corner points, the shape feature related to shape and the contour feature related to contours, as well as two or more of the bin values of the feature-value-versus-pixel-count histogram of any of the aforementioned features.
Referring to Fig. 3B, it can be seen that the value of the color feature in the foreground image differs greatly from its value in the background image, the difference for the gray feature is small, and the difference for the edge feature is intermediate. Thus, for the object in this foreground image, the color feature, which differs the most, has good discriminative power: using the color feature, the foreground image can be distinguished well from the background image. It follows that, in the tracking process, the most effective feature for distinguishing the foreground image and thus tracking the object differs for different objects and different scenes, because objects may have different appearances, colors, gray levels, edges, shapes, etc., and these also differ from those of their backgrounds.
Accordingly, Fig. 3C schematically shows, for cars of different colors in their respective scenes, which of the three candidate features (color, gray level, edge) has the best discriminative power. As shown in Fig. 3C, for a blue car, the most effective feature with the greatest discriminative power may be the edge; for a red car it may be the color; for a white car it may be the gray level (or brightness). In an extreme case, when tracking a red car, only the color feature may be used: the value of the color feature of a candidate region in the next frame is obtained and compared with the value of the color feature of the current foreground image to confirm where the red car is located in the next frame. Since, for a red car, the discriminative power of the color feature is the greatest, the color feature alone can distinguish the foreground image well from the background in the next frame's image. In this way, a good and fairly accurate tracking result can be maintained while reducing the computation spent on candidate features.
Therefore, from the differences between the candidate features' values in the foreground and background images according to Figs. 3A-3C, the magnitudes of their discriminative power can be obtained, and these magnitudes can be used in the subsequent tracking process to adjust the degree of influence of the candidate features, thereby performing object tracking more effectively and more accurately.
Fig. 4 is a flowchart schematically showing a method of tracking an object according to another embodiment of the present invention. As shown in Fig. 4, in the method 400: in step S401, the object in the current image frame is detected by an existing object-detection technique. As mentioned before, object detection is a known technique; for example, based on an AdaBoost classifier and Haar features, an object-detection result such as a rectangle representing the object position can be obtained.
In step S402, the foreground image and the background image are obtained from the object-detection result, for example from the detected rectangle representing the object position, as in the foreground and background images shown in Fig. 3A.
In step S403, the differences between the values of at least two candidate features in the foreground image and the background image are calculated to obtain their discriminative power. For example, suppose the value of the $i$-th candidate feature (e.g., the gray feature) in the foreground image (e.g., the mean gray level of the foreground image) is $F_i$, and its value in the background image (e.g., the mean gray level of the background image) is $B_i$. The discriminative power parameter $DP_i$ of the $i$-th candidate feature is then calculated using, for example and without limitation, one of the formulas given above, such as $DP_i = |F_i - B_i| / M_i$ (where $M_i$ is the maximum possible value of $|F_i - B_i|$) or $DP_i = |F_i - B_i|$. In this way the discriminative power, e.g., the discriminative power parameter $DP_i$, is obtained.
In step S404, the weight $w_i$ of each candidate feature is set according to the discriminative power (e.g., the discriminative power parameter $DP_i$). The weight $w_i$ can take continuous values (any floating-point number) or discrete values. Usually $w_i$ can be set to $DP_i$ itself, but the way of deriving $w_i$ from $DP_i$ is not limited to this; the weight can be related to the discriminative power parameter in other ways, for example by discretizing $DP_i$: set $K$ possible weight values ($K$ a positive integer), divide the possible range of $DP_i$ into $K$ intervals, and when $DP_i$ falls into a certain interval, let $w_i$ take the value corresponding to that interval. In short, the larger $DP_i$, the larger $w_i$.
In step S405, an object template, for example $\{w_i, F_i\}$, is set using the configured weights together with the candidate feature values themselves. Setting different templates changes the template against which the tracked and detected object is compared, and thus the accuracy and effectiveness of the tracking result. In the embodiments of the present disclosure, besides the value of each candidate feature, the discriminative power of the candidate feature with respect to the foreground and background images (e.g., the magnitude of $DP_i$ and the corresponding weight $w_i$) is also used to assign the degree of influence of the candidate feature, making the tracking result more accurate and effective.
In step S406, object tracking of the next image frame is performed using the configured object template (e.g., $\{w_i, F_i\}$). In one tracking example, the object template $\{w_i, F_i\}$ can be taken into account by means of a particle filter. For example, first the template is initialized, i.e., the object template $\{w_i, F_i\}$ is computed as above for the detected object position. Then particles are generated: the initial particle positions are rectangles around the initial position of the detected object, and the weights of all particles are set to the same value. Resampling: based on the particle weights, new particles are selected for tracking the next frame. Propagation: the new positions of the chosen particles in the next frame are estimated; this estimate is based on the motion state of the particles, including position and velocity, plus a random perturbation to simulate system noise. Observation: the similarity between the candidate region corresponding to each particle and the object template of the foreground image is computed (where $w_i$ is the weight related to the discriminative power parameter $DP_i$ of the $i$-th candidate feature, $C_i$ is the value of the $i$-th candidate feature in the candidate region of the next image frame, $F_i$ is the value of the $i$-th candidate feature in the foreground image of the current image frame, $N$ is the number of candidate features, and $M$ is the maximum of $|C_i - F_i|$ over all $i$), and this similarity is used as the weight of the corresponding particle. Estimation: using the obtained weights, the weighted mean of all particles is computed, giving the position of the object in the new frame; the process then returns to resampling, and so on. The resampling is based on the similarity between a particle's region and the object foreground image: the more similar a particle, the greater the probability that it is selected. In this way, the weight $w_i$ related to the discriminative power parameter $DP_i$ of the $i$-th candidate feature participates in the computation of the similarity $s$ and helps choose more effective particles, so that a particle region more similar to the foreground image of the existing object is found as the foreground image of the next frame. Of course, the particle-filter approach is only one example of a way to take the discriminative power and/or weights of the candidate features into account; many other existing object-tracking approaches can also do so, and they are not repeated here.
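A compact sketch of one such particle-filter iteration built on the weighted similarity above (the particle count, noise scale and fallback behavior are assumptions; `feature_vector` and `similarity` are the hypothetical helpers sketched earlier):

```python
import numpy as np

rng = np.random.default_rng(0)

def track_next_frame(frame, template_F, weights, rect, n_particles=100, noise=8.0):
    """One particle-filter iteration: propagate particles around the previous
    object rectangle, score each candidate region by the weighted similarity
    to the template {w_i, F_i}, and return the weighted-mean position."""
    x, y, w, h = rect
    fh, fw = frame.shape[:2]
    # Propagate: previous position plus random perturbation (system noise).
    xs = np.clip(x + rng.normal(0, noise, n_particles), 0, fw - w).astype(int)
    ys = np.clip(y + rng.normal(0, noise, n_particles), 0, fh - h).astype(int)
    # Observe: particle weight = similarity of its region to the template.
    scores = np.empty(n_particles)
    for k in range(n_particles):
        region = frame[ys[k]:ys[k] + h, xs[k]:xs[k] + w]
        scores[k] = similarity(feature_vector(region), template_F, weights)
    total = scores.sum()
    probs = scores / total if total > 0 else np.full(n_particles, 1.0 / n_particles)
    # Estimate: weighted mean of the particles gives the new object position.
    return int(np.sum(probs * xs)), int(np.sum(probs * ys)), w, h
```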
Thus, in step S407, the tracking result is obtained, and the foreground image and the background image of the next image frame are updated with this result. In this way, the foreground and background images of the next frame can be updated automatically, without re-running the object-detection step.
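Putting steps S401-S407 together, a hedged end-to-end loop might look as follows (all helper functions are the hypothetical sketches above; the video path is an assumption, and the background here is simplified to the surrounding rectangle without masking out the foreground):

```python
import cv2

cap = cv2.VideoCapture("traffic.mp4")      # assumed input video
ok, frame = cap.read()
rect = detect_foreground_rect(frame)       # S401: detect once

while ok and rect is not None:
    x, y, w, h = rect
    fg = frame[y:y + h, x:x + w]           # S402: foreground patch
    # Simplified background: the surrounding rectangle (strictly, the
    # foreground part should be masked out, as in Fig. 3A).
    bg = frame[max(y - h // 2, 0):y + h + h // 2,
               max(x - w // 2, 0):x + w + w // 2]
    dp = discriminative_power(feature_vector(fg), feature_vector(bg))  # S403
    weights = dp                            # S404: w_i = DP_i itself
    template = feature_vector(fg)           # S405: template {w_i, F_i}
    ok, frame = cap.read()
    if not ok:
        break
    rect = track_next_frame(frame, template, weights, rect)  # S406
    # S407: the loop repeats, re-deriving foreground/background from rect.

cap.release()
```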
Steps S402-S407 are then repeated for successive image frames.
Of course, the above is only one example embodiment of the present invention; the content of the steps, their details and their order do not limit the invention.
As described above, according to embodiments of the present invention, the selection and/or weights of the candidate features used in object tracking can be adjusted adaptively according to the discriminative power of the candidate features in the current image scene: among at least two candidate features, the more effective features are selected, or the roles of these candidate features in object tracking are distributed more reasonably, so that a more effective and reasonable feature combination is always used to track the object in the current scene, and object tracking can be performed more accurately in complex or changing environments.
Fig. 5 is a block diagram schematically showing a system for tracking an object according to another embodiment of the present invention.
The system 500 of tracking an object shown in Fig. 5 comprises: an obtaining means 501 configured to obtain a foreground image containing the object in the current image frame and a background image outside the foreground image; a calculating means 502 configured to calculate, for each of at least two candidate features, a discriminative power parameter representing the discriminative power of the candidate feature according to the difference between the value of the candidate feature in the foreground image and its value in the background image; and an object-tracking means 503 configured to perform object tracking by combining the value of each candidate feature with the discriminative power parameter of each candidate feature. In one embodiment, the tracking result output by the object-tracking means 503 can be fed back to the input of the obtaining means 501 as the new foreground and background images, so that object tracking is carried out continuously in a loop.
In this way, the selection and/or weights of the candidate features used in object tracking can be adjusted adaptively according to the discriminative power of the candidate features in the current image scene: among at least two candidate features, the more effective features are selected, or the roles of these candidate features in object tracking are distributed more reasonably, so that a more effective and reasonable feature combination is always used to track the object in the current scene, and object tracking can be performed more accurately in complex or changing environments.
The obtaining means 501 can obtain the foreground and background images by performing object detection with various existing image-processing and pattern-recognition methods, for example the popular object-detection framework based on an AdaBoost classifier and Haar features, in which the detection result for each frame is typically a rectangle representing the object position: the image inside the rectangle is regarded as the foreground image and the image outside it as the background image, as described above for step S201.
Accordingly, in one embodiment, the at least two candidate features can comprise two or more of the following: a color feature related to the values of one or more colors; a gray feature related to gray level; an edge feature related to edges; a texture feature related to texture; a corner feature related to corner points; a shape feature related to shape; a contour feature related to contours; and two or more of the bin values of the feature-value-versus-pixel-count histogram of any of the aforementioned features. These candidate features, including the traditional image features, their values in the image (e.g., mean value, mean square deviation) and the innovatively introduced histogram-bin values, are defined and exemplified in the same way as described above for the method embodiments (see the description of step S202 and of the color, gray, edge, texture, corner, shape and contour features), and those descriptions are not repeated here.
Here, as described above, the discriminative power parameter of each candidate feature is obtained from the difference between the values of that candidate feature in the foreground image and the background image, and represents the ability of the feature to distinguish the foreground image from the background image: the larger the difference, the greater the discriminative power and the larger the parameter, and the easier it is to track the object in the foreground image. In one embodiment, the calculating means 502 can calculate the discriminative power parameter $DP_i$ of the $i$-th candidate feature by one of the formulas given above, e.g., $DP_i = |F_i - B_i| / (F_i + B_i)$, $DP_i = |F_i - B_i| / M_i$ (with $M_i$ the maximum possible value of $|F_i - B_i|$), or $DP_i = |F_i - B_i|$, where $F_i$ and $B_i$ are the values of the $i$-th candidate feature in the foreground and background images, respectively. Likewise, those skilled in the art can construct other formulas for the discriminative power parameter based on the relation between this foreground/background difference and the discriminative power.
In one embodiment, when performing object tracking, the object tracking device 503 may use the discriminative-power parameter of each candidate feature to decide the degree of influence of that feature's value in a candidate region of the next image frame. As mentioned above, in general, the larger the difference between a candidate feature's values in the foreground image and the background image, the better that feature distinguishes the foreground image from the background image, and the larger its discriminative-power parameter; that is, during tracking, the detected value of the feature separates foreground from background more easily, so the object in the foreground image is traced more easily. Accordingly, through the discriminative-power parameter of the candidate feature, the detected value of that feature can be made to exert a larger influence on the tracking result.
In an alternative embodiment, the object tracking device 503 may perform object tracking using the discriminative-power parameter of each candidate feature as the weight of that feature's value in the candidate regions of the next image frame.
In an alternative embodiment, the object tracking device 503 may compute the similarity s between the values of the N candidate features in a candidate region of the next image frame and their values in the foreground image by the following formula:

s = Σ_{i=1}^{N} w_i · |M - |C_i - F_i||,

where w_i is a weight related to the discriminative-power parameter DP_i of the i-th candidate feature, C_i is the value of the i-th candidate feature in the candidate region of the next image frame, F_i is the value of the i-th candidate feature in the foreground image of the current image frame, N is the number of candidate features, and M is the maximum value of |C_i - F_i| over all i. By comparing the similarity s of each candidate region of the next image frame, the candidate region with the maximum similarity s is taken as the foreground image of the next frame. The weight w_i may take continuous values (that is, any floating-point number) or discrete values. Usually w_i can be set to the discriminative-power parameter DP_i itself; however, deriving w_i from DP_i is not limited to this, and other ways of relating the weight to the discriminative power may be conceived. In any case, the larger DP_i is, the larger w_i is, and clearly the greater the influence of that candidate feature during object tracking.
Of course, the formula above is not the only algorithm for the similarity s between the candidate-feature values in a candidate region of the next image frame and those in the foreground image; those skilled in the art can conceive various other similarity formulas based on the relationship between the similarity and the differences between the candidate-feature values in the candidate region and in the foreground image. In this disclosure, a larger similarity means that the candidate region of the next image frame is more similar to the foreground image, so that region is more likely to be the position of the current foreground image in the next frame; that is, the object contained in the foreground image has more likely been traced. In addition, when the object tracking device 503 performs tracking, a threshold may be set, or other means used, to judge whether the similarity s is large enough to determine that a candidate region of the next frame is the position of the current foreground image in that frame.
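A possible sketch of this search over candidate regions, using the similarity formula above, is given below. The reading of M as a per-region maximum and the optional threshold are assumptions, since the disclosure leaves both open.

```python
import numpy as np

def best_candidate_region(candidate_values, foreground_values, weights,
                          threshold=None):
    """Score candidate regions of the next frame against the current
    foreground and return the index of the most similar one.

    candidate_values  -- shape (R, N): value C_i of each of N features
                         in each of R candidate regions
    foreground_values -- shape (N,): foreground values F_i
    weights           -- shape (N,): weights w_i, e.g. the DP_i values

    Implements s = sum_i w_i * |M - |C_i - F_i||; we read M as the
    maximum |C_i - F_i| over all i within each region (an assumption).
    """
    C = np.asarray(candidate_values, dtype=float)
    F = np.asarray(foreground_values, dtype=float)
    w = np.asarray(weights, dtype=float)
    diffs = np.abs(C - F)                     # |C_i - F_i| per region, per feature
    M = diffs.max(axis=1, keepdims=True)      # M: max over i, per region
    s = (w * np.abs(M - diffs)).sum(axis=1)   # weighted similarity per region
    best = int(np.argmax(s))
    if threshold is not None and s[best] < threshold:
        return None                           # no region is similar enough
    return best
```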
In one embodiment, besides computing the similarity over all candidate features (for example, by the formula above), the object tracking device 503 may also select the one or more candidate features with the largest discriminative-power parameters DP_i as the features to be evaluated on the candidate regions of the next image frame during tracking. Selecting only the candidate features with the strongest discriminative power among all candidate features in this way reduces the amount of computation spent on each candidate feature while still maintaining good and reasonably accurate tracking.
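A minimal sketch of this selection step, assuming the DP_i values have already been computed; k = 3 is a hypothetical choice, since the disclosure only says "one or more":

```python
def select_strongest_features(dp_values, k=3):
    """Return the indices of the k candidate features with the
    largest discriminative-power parameters DP_i."""
    order = sorted(range(len(dp_values)), key=lambda i: dp_values[i],
                   reverse=True)
    return order[:k]
```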
In fact, the respective discriminative-power parameters of the candidate features can be used, when performing various adaptive object tracking algorithms, to strengthen the role of candidate features with strong discriminative power and weaken the role of those with weak discriminative power. Such tracking algorithms can take many forms, including the approach described above of searching for the candidate region with a large similarity to (that is, most resembling) the current foreground image, as well as various existing object tracking techniques, such as particle filter algorithms and the mean shift algorithm, which are not repeated here. Any algorithm that performs object tracking using image features or other features can be applied in the embodiments of the present invention.
In one embodiment, the object tracking system 500 may further comprise an updating device 504 (not shown) that uses the foreground image of the next image frame obtained as the tracking result, together with the background image excluding that foreground image, to update the discriminative-power parameter of each candidate feature according to the difference between the values of each of the at least two candidate features in the foreground image and the background image. That is, in this embodiment, the result of the current object tracking can serve as the foreground-image and background-image reference for the next image frame, so that object tracking of subsequent frames is carried out automatically.
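Putting the pieces together, a per-frame update loop of the kind the updating device 504 performs might look as follows. It reuses the earlier sketches, and the two callables (feature extractor and candidate proposer) as well as the boolean-mask convention are assumptions introduced for illustration.

```python
def track_sequence(frames, initial_mask, extract_features, propose_candidates):
    """Frame-by-frame tracking in which the DP parameters are recomputed
    from each tracking result, as the updating device 504 does above.

    extract_features(image, mask) -> length-N array of feature values;
    propose_candidates(image)     -> (list of masks, (R, N) value array).
    Both callables are assumptions, not part of the disclosure.
    """
    mask = initial_mask
    for current, nxt in zip(frames, frames[1:]):
        F = extract_features(current, mask)       # foreground values F_i
        B = extract_features(current, ~mask)      # background values B_i
        dp = [discriminative_power(f, b) for f, b in zip(F, B)]
        masks, values = propose_candidates(nxt)   # candidate regions of next frame
        best = best_candidate_region(values, F, weights=dp)
        mask = masks[best]                        # tracking result becomes the new
        yield mask                                # foreground reference
```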
As described above, according to the embodiments of the present invention, the selection and/or weights of the candidate features used for object tracking can be adjusted adaptively according to the discriminative power of those features under the current image scenario: more effective features can be selected from among the at least two candidate features, or the roles of those features in tracking can be allocated more reasonably, so that a more effective and reasonable feature combination is always used to track the object in the current scene image accurately, and more accurate object tracking can be carried out in complex environments or when the environment changes.
Note that the advantages, benefits, effects, and the like mentioned in this disclosure are merely examples and are not limiting; they should not be regarded as prerequisites of each embodiment of the present invention.
The block diagrams of the devices, apparatuses, equipment, and systems involved in this disclosure are merely illustrative examples and are not intended to require or imply that they must be connected, arranged, or configured in the manner shown in the block diagrams. As those skilled in the art will recognize, these devices, apparatuses, equipment, and systems may be connected, arranged, and configured in any manner. Words such as "comprise", "include", and "have" are open-ended terms that mean "including but not limited to" and may be used interchangeably therewith. The words "or" and "and" as used herein mean "and/or" and may be used interchangeably therewith, unless the context clearly indicates otherwise. The phrase "such as" used herein means "such as, but not limited to" and may be used interchangeably therewith.
The flowcharts of steps and the above method descriptions in this disclosure are merely illustrative examples and are not intended to require or imply that the steps of each embodiment must be carried out in the order given. As those skilled in the art will recognize, the steps in the above embodiments may be carried out in any order. Words such as "thereafter", "then", and "next" are not intended to limit the order of the steps; they merely guide the reader through the description of these methods. Furthermore, any reference to an element in the singular, for example using the articles "a", "an", or "the", is not to be construed as limiting that element to the singular.
The above description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these aspects will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other aspects without departing from the scope of the invention. Thus, the present invention is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been given for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the invention to the forms disclosed herein. Although multiple example aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, changes, additions, and sub-combinations thereof.
Each operation of the methods described above may be carried out by any suitable means capable of performing the corresponding function. Such means may include various hardware and/or software components and/or modules, including but not limited to a circuit, an application-specific integrated circuit (ASIC), or a processor.
The various illustrative logical blocks, modules, and circuits described may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an ASIC, a field-programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, for example a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or algorithm described in connection with this disclosure may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in any form of tangible storage medium. Some examples of usable storage media include random access memory (RAM), read-only memory (ROM), flash memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, and so on. A storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. A software module may be a single instruction or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media.
The methods disclosed herein comprise one or more actions for achieving the described method. The methods and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of actions is specified, the order and/or use of specific actions may be modified without departing from the scope of the claims.
The functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions on a tangible computer-readable medium. A storage medium may be any available tangible medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disc storage, magnetic disk storage or other magnetic storage devices, or any other tangible medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. As used herein, disk and disc include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
Thus, a computer program product may perform the operations given herein. For example, such a computer program product may be a computer-readable tangible medium having instructions tangibly stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein. The computer program product may include packaging material.
Software or instructions may also be transmitted over a transmission medium. For example, software may be transmitted from a website, server, or other remote source using a transmission medium such as coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, or microwave.
Further, modules and/or other appropriate means for carrying out the methods and techniques described herein may be downloaded and/or otherwise obtained by a user terminal and/or base station as applicable. For example, such a device can be coupled to a server to facilitate the transfer of means for carrying out the methods described herein. Alternatively, the various methods described herein can be provided via storage means (for example RAM, ROM, or a physical storage medium such as a CD or floppy disk) so that a user terminal and/or base station can obtain the various methods upon coupling or providing the storage means to the device. Moreover, any other suitable technique for providing the methods and techniques described herein to a device can be utilized.
Other examples and implementations are within the scope and spirit of the disclosure and the appended claims. For example, due to the nature of software, the functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or any combination of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Also, as used herein, including in the claims, "or" as used in a list of items prefaced by "at least one of" indicates a disjunctive list, so that, for example, a list of "at least one of A, B, or C" means A or B or C, or AB or AC or BC, or ABC (that is, A and B and C). In addition, the word "example" does not mean that the described example is preferred or better than other examples.
Various changes, substitutions, and alterations to the techniques described herein may be made without departing from the techniques taught as defined by the appended claims. Moreover, the scope of the disclosure and the claims is not limited to the particular aspects of the processes, machines, manufacture, compositions of matter, means, methods, and actions described above. Processes, machines, manufacture, compositions of matter, means, methods, or actions presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized. Accordingly, the appended claims include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or actions.

Claims (10)

1. A method of tracking an object, comprising:
obtaining a foreground image containing the object in a current image frame and a background image excluding the foreground image;
calculating, for each of at least two candidate features, a discriminative-power parameter representing the discriminative power of the candidate feature, according to the difference between the value of each candidate feature in the foreground image and its value in the background image; and
performing object tracking in combination with the value of each candidate feature and the discriminative-power parameter of each candidate feature.
2. The method according to claim 1, wherein the step of performing object tracking in combination with the value of each candidate feature and the discriminative-power parameter of each candidate feature comprises:
when performing object tracking, using the discriminative-power parameter of each candidate feature to decide the degree of influence of the value of that candidate feature in a candidate region of the next image frame.
3. The method according to claim 1, wherein the step of performing object tracking in combination with the value of each candidate feature and the discriminative-power parameter of each candidate feature comprises:
performing object tracking using the discriminative-power parameter of each candidate feature as the weight of the value of that candidate feature in a candidate region of the next image frame.
4. The method according to claim 1, wherein the step of calculating, for each of at least two candidate features, a discriminative-power parameter representing the discriminative power of the candidate feature according to the difference between the value of each candidate feature in the foreground image and its value in the background image comprises:
calculating the discriminative-power parameter DP_i of the i-th candidate feature using the following formula:
DP_i = |F_i - B_i| / |F_i + B_i|,
where i is a positive integer, F_i denotes the value of the i-th candidate feature in the foreground image, and B_i denotes the value of the i-th candidate feature in the background image.
5. The method according to claim 1, wherein the at least two candidate features comprise two or more of the following:
a color feature related to the values of one or more colors;
a gray feature related to gray level;
an edge feature related to edges;
a texture feature related to texture;
a corner feature related to corner points;
a shape feature related to shape;
a contour feature related to contours; and
two or more of the values of the individual bins of the feature-value-versus-pixel-count histogram of any of the aforementioned features.
6. The method according to any one of claims 1 to 5, wherein the step of performing object tracking in combination with the value of each candidate feature and the discriminative-power parameter of each candidate feature comprises:
calculating the similarity s between the values of N candidate features in a candidate region of the next image frame and their values in the foreground image by the following formula:
s = Σ_{i=1}^{N} w_i · |M - |C_i - F_i||,
where w_i is a weight related to the discriminative-power parameter DP_i of the i-th candidate feature, C_i is the value of the i-th candidate feature in the candidate region of the next image frame, F_i is the value of the i-th candidate feature in the foreground image of the current image frame, N is the number of candidate features, and M is the maximum value of |C_i - F_i| over all i; and
by comparing the similarity s of each candidate region of the next image frame, taking the candidate region having the maximum similarity s as the foreground image of the next image frame.
7. The method according to claim 1, wherein the step of performing object tracking in combination with the value of each candidate feature and the discriminative-power parameter of each candidate feature comprises:
selecting the one or more candidate features with the largest discriminative-power parameters as the candidate features to be evaluated on the candidate regions of the next image frame when performing object tracking.
8. The method according to claim 1, wherein the step of performing object tracking in combination with the value of each candidate feature and the discriminative-power parameter of each candidate feature comprises:
performing object tracking using a particle filter algorithm.
9. The method according to claim 1, further comprising:
using the foreground image of the next image frame obtained as the result of object tracking, and the background image excluding that foreground image, to update the discriminative-power parameter of each candidate feature according to the difference between the value of each of the at least two candidate features in the foreground image and its value in the background image.
10. A system for tracking an object, comprising:
an obtaining device configured to obtain a foreground image containing the object in a current image frame and a background image excluding the foreground image;
a calculation device configured to calculate, for each of at least two candidate features, a discriminative-power parameter representing the discriminative power of the candidate feature according to the difference between the value of each candidate feature in the foreground image and its value in the background image; and
an object tracking device configured to perform object tracking in combination with the value of each candidate feature and the discriminative-power parameter of each candidate feature.
CN201310376491.5A 2013-08-26 2013-08-26 Method and system for tracking object Pending CN104424651A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310376491.5A CN104424651A (en) 2013-08-26 2013-08-26 Method and system for tracking object


Publications (1)

Publication Number Publication Date
CN104424651A true CN104424651A (en) 2015-03-18

Family

ID=52973534


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101676953A (en) * 2008-08-22 2010-03-24 奥多比公司 Automatic video image segmentation
CN103262121A (en) * 2010-12-20 2013-08-21 国际商业机器公司 Detection and tracking of moving objects
CN102663773A (en) * 2012-03-26 2012-09-12 上海交通大学 Dual-core type adaptive fusion tracking method of video object

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111433815A (en) * 2018-11-30 2020-07-17 深圳市大疆创新科技有限公司 Image feature point evaluation method and movable platform
CN110009004A (en) * 2019-03-14 2019-07-12 努比亚技术有限公司 Image processing method, computer equipment and storage medium
CN110009004B (en) * 2019-03-14 2023-09-01 努比亚技术有限公司 Image data processing method, computer device, and storage medium


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20150318