
CN106875425A - Multi-target tracking system and implementation method based on deep learning - Google Patents

Multi-target tracking system and implementation method based on deep learning

Info

Publication number
CN106875425A
CN106875425A (application CN201710053918.6A)
Authority
CN
China
Prior art keywords
target
frame
tracking
trail
followed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710053918.6A
Other languages
Chinese (zh)
Inventor
何志群
白洪亮
董远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Faceall Co
Original Assignee
Beijing Faceall Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Faceall Co filed Critical Beijing Faceall Co
Priority to CN201710053918.6A priority Critical patent/CN106875425A/en
Publication of CN106875425A publication Critical patent/CN106875425A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a multi-target tracking system and implementation method based on deep learning. The method includes: obtaining the target locations in the first frame by object detection and adding the multiple targets to be tracked to a tracking queue; inputting the next frame and traversing the tracking queue to obtain each target's position in that frame; after obtaining a target's position in the next frame, judging by threshold whether the target has left the frame; if not, invoking object detection once every fixed interval of frames and computing the intersection-over-union (IoU) between the detection results and the tracked results. If IoU < 0.1, a new target is considered to have entered the frame and is added to the tracking queue; if IoU > 0.5, the tracked box is replaced by the detected box as a position correction; tracking then continues. Through a carefully designed network structure and an improved training method, the present invention maintains high tracking precision while significantly improving tracking speed, reducing network redundancy, and shrinking the model size.

Description

Multi-target tracking system and implementation method based on deep learning
Technical field
The present invention relates to the field of image processing, and in particular to a multi-target tracking system and implementation method based on deep learning, addressing the problem of accurate and fast tracking of multiple targets.
Background art
Moving-target tracking means finding the moving targets of interest (for example, vehicles, pedestrians, or animals) in each frame of a continuous video sequence. Tracking can be roughly divided into the following steps:
1) Effective description of the target. Like target detection, the tracking process requires an effective description of the target, that is, target features must be extracted so that the target can be expressed. In general, a target can be described by image edges, contours, shape, texture, regions, histograms, moment features, transform coefficients, and so on.
2) Similarity measurement. Common measures include Euclidean distance, Mahalanobis distance, chessboard (Chebyshev) distance, weighted distance, similarity coefficient, correlation coefficient, etc.
3) Search and matching of the target region. If feature extraction and similarity computation were performed for every target appearing in the scene, the computational cost of running the system would be very large. Therefore, current systems usually predict the region where a moving target may appear, reducing redundancy and accelerating target tracking. Common prediction algorithms include Kalman filtering, particle filtering, and mean shift.
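Of the prediction algorithms listed above, Kalman filtering is the simplest to sketch. The following scalar (random-walk) filter illustrates the predict/update cycle used to narrow the search region; the noise parameters are illustrative assumptions of this sketch, not values from this patent, and a real tracker would use a state vector (position plus velocity) with the matrix forms of the same two steps.

```python
class Kalman1D:
    """Minimal scalar Kalman filter with a random-walk motion model."""

    def __init__(self, x0, p=1.0, q=0.01, r=0.1):
        self.x = x0  # position estimate
        self.p = p   # estimate variance
        self.q = q   # process noise (how far the target may drift per frame)
        self.r = r   # measurement noise (how noisy the detector is)

    def predict(self):
        # Random-walk model: predicted mean is unchanged, uncertainty grows.
        self.p += self.q
        return self.x

    def update(self, z):
        # Blend the prediction with the measurement z via the Kalman gain.
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= 1.0 - k
        return self.x
```

With these defaults the first update pulls the estimate about 91% of the way toward the measurement, since the initial uncertainty dominates the measurement noise.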
Accordingly, moving-target tracking algorithms generally comprise tracking based on active contours, tracking based on features, tracking based on regions, and tracking based on models. The precision and robustness of a tracking algorithm largely depend on how the moving target is represented and how similarity is defined, while its real-time performance depends on the matching search strategy and the filtering-prediction algorithm.
Tracking based on active contours defines a deformable curve in the image domain, the Snake curve. By minimizing its energy function, the dynamic contour progressively adjusts its own shape until it is consistent with the target contour. The Snake technique can handle arbitrary deformation of arbitrarily shaped targets: the object boundary obtained by segmentation is first used as the initial template for tracking, then an objective function characterizing the true object boundary is defined, and by reducing the objective value the initial contour is gradually moved toward the true boundary. The advantage of active-contour tracking is that it considers not only the gray-level information of the image but also the geometric information of the overall contour, enhancing the reliability of tracking. However, because the tracking process is in effect a search for a solution, the computational cost is relatively large, and owing to the blindness of the Snake model, the tracking effect is not ideal for fast-moving or strongly deforming objects.
Tracking based on features does not consider the overall appearance of the moving target; it tracks using only some salient features of the target image. It assumes that the moving target can be expressed by a unique feature set, so that finding that feature set is considered equivalent to tracking the target. It mainly involves two aspects, feature extraction and feature matching: the purpose of feature extraction is to match target features between frames and track the target by the best match. Common feature-based matching algorithms include tracking based on binarized target-image matching, tracking based on edge-feature or corner-feature matching, tracking based on gray-level feature matching, tracking based on target-color feature matching, and so on. Feature-based tracking algorithms are relatively sensitive to image blur, noise, etc.; the quality of feature extraction also depends on the choice of extraction operators and their parameters. In addition, feature correspondence between consecutive frames is difficult to determine, especially when the number of features per frame is inconsistent due to missed detections or features appearing and disappearing.
Tracking based on regions obtains a template containing the target, either by image segmentation or by manual specification in advance; the template is usually a rectangle slightly larger than the target, or an irregular shape. In the image sequence, a correlation algorithm is used to track the target. Its first shortcoming is that it is time-consuming, especially when the search region is large; second, the algorithm requires that the target deform little and not be heavily occluded, otherwise correlation precision drops and the target may be lost. In recent years, attention for region-based tracking has focused on how to handle template changes caused by changes in the moving object's pose; if the target's pose change can be correctly predicted, stable tracking can be achieved.
Tracking based on models builds a model of the tracked target from certain prior knowledge, then tracks the target by matching while updating the model in real time. For rigid objects, whose motion is mainly translation, rotation, and the like, this method can realize target tracking. In practice, however, the objects tracked are not only rigid bodies but mostly non-rigid bodies, whose exact geometric models are not readily available. This method is little affected by the observation viewpoint and has strong robustness; model-matching tracking is precise, suits the varied motion of maneuvering targets, and has strong anti-interference ability, but its computation and analysis are complex, its speed is slow, model updating is complicated, and real-time performance is poor. Accurately building the motion model is the key to whether model matching can succeed.
Existing target-tracking networks often suffer from redundancy, slow speed, and large model size; they are difficult to put into practice and cannot track in real time. Deep learning, a concept originating from research on artificial neural networks, combines low-level features to form more abstract high-level representations of attribute categories or features, discovering distributed feature representations of data.
Summary of the invention
The technical problem to be solved by the present invention is to obtain, through object detection, an accurate location of each target object in a reference frame, and to track each target object over a long period on the basis of its initial position.
To solve the above technical problem, the invention provides a multi-target tracking method based on deep learning, including the following steps:
The target locations in the first frame are obtained by object detection, and the multiple targets to be tracked are added to a tracking queue,
the next frame is input and the tracking queue is traversed to obtain each target's position in that frame,
specifically, when predicting a target's position in the next frame, the system saves the previous frame and the target's position in it, and the convolutional neural network predicts the target's position in the next frame by contrasting the difference between the previous frame and the next frame;
after obtaining the target's position in the next frame, whether the target has left the frame is judged by threshold,
if not, object detection is invoked once every fixed interval of frames, and the intersection-over-union (IoU) between the detection results and the tracked results is computed,
if the IoU between a detection result and all tracked targets is < 0.1, a new target is considered to have entered the frame and is added to the tracking queue;
if IoU > 0.5, the tracked box is replaced by the detected box as a position correction;
tracking of the target then continues.
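The IoU bookkeeping in the steps above can be sketched as follows. This is a minimal illustration under assumed conventions (boxes as (x1, y1, x2, y2) tuples, a plain list as the tracking queue), not the patent's actual implementation; detections with intermediate overlap (0.1 to 0.5) are left alone, which corresponds to the anomaly branch of the system flow.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def reconcile(detections, tracked):
    """Apply the claimed IoU rules: a detection with IoU < 0.1 against every
    tracked box is a new target entering the frame; a detection with
    IoU > 0.5 against some tracked box replaces it (position correction)."""
    for det in detections:
        overlaps = [iou(det, t) for t in tracked]
        if not overlaps or max(overlaps) < 0.1:
            tracked.append(det)                           # new target
        elif max(overlaps) > 0.5:
            tracked[overlaps.index(max(overlaps))] = det  # correct drift
    return tracked
```

A detection nearly coincident with a tracked box overwrites it, while a detection far from every tracked box starts a new track.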
Further, the method also includes the following pre-training process:
the two pictures are scaled to the same scale, yielding two pictures resembling adjacent video frames as training pictures, with which the network is pre-trained.
Further, pictures from the ILSVRC object-detection (DET) task are used as the above training pictures; the image database is that of the ImageNet object-detection (DET) task. ImageNet comprises five tasks, each corresponding to a different image database.
Further, the method also includes the following training process:
first, after the two pictures are pre-processed, picture features are extracted by a twin (Siamese) network with shared parameters;
second, the twin network extracts picture features through dense -> sparse -> dense convolutional neural networks;
then, the two features are subtracted to form the fused feature, which is then regressed through a fully connected layer to obtain the position of the target box.
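The fusion-and-regression step can be sketched in miniature. Here a toy linear map stands in for the convolutional branch (the actual network is a CNN), so the shapes and weights are placeholders of this sketch; the point is the structure: one shared extractor applied to both crops, subtraction as fusion, and a final linear ("fully connected") regression to four box coordinates.

```python
def extract(pixels, weights):
    # Stand-in for the shared CNN branch: one linear map. Both crops pass
    # through this SAME function with the SAME weights (parameter sharing).
    return [sum(w * p for w, p in zip(row, pixels)) for row in weights]

def predict_box(prev_crop, curr_crop, shared_w, fc_w):
    f_prev = extract(prev_crop, shared_w)
    f_curr = extract(curr_crop, shared_w)
    fused = [a - b for a, b in zip(f_curr, f_prev)]  # fusion by subtraction
    # "Fully connected layer" regressing 4 box coordinates from the fused feature.
    return [sum(w * f for w, f in zip(row, fused)) for row in fc_w]
```

Because both crops share one extractor, whatever is common to the two frames cancels in the subtraction, leaving the regression head to explain only the change between frames.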
Further, concatenated rectified linear units (CReLU) are used in the feature-extraction process of the convolutional neural networks.
Further, the target locations in the first frame are detected using an object-detection technique based on the faster-rcnn framework.
Further, the detection interval is 10 frames.
Further, the threshold condition for judging whether the target has left the frame is any of:
h/w > threshold1, w/h > threshold2, |x1|/W < threshold3, |W-x2|/W < threshold3, |y1|/H < threshold4, |H-y1|/H < threshold4,
where threshold denotes a threshold value, h and w are the height and width of the object, H and W are the height and width of the frame, (x1, y1) is the point coordinate of the target's top-left corner, and (x2, y2) is the point coordinate of the target's bottom-right corner.
Further, if the multi-target tracking is face tracking, threshold1 = threshold2 = 2 and threshold3 = threshold4 = 0.02 are set.
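The exit test can be written as one boolean expression over the claimed conditions. Boxes are assumed to be (x1, y1, x2, y2) tuples; the defaults are the face-tracking values given above. The last condition follows the text's |H-y1|/H literally (the symmetric reading would use y2, the bottom edge).

```python
def has_left_frame(box, frame_w, frame_h, t1=2.0, t2=2.0, t3=0.02, t4=0.02):
    """True if any of the claimed threshold conditions holds."""
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    return (h / w > t1 or w / h > t2             # box squashed to a sliver
            or abs(x1) / frame_w < t3            # hugging the left edge
            or abs(frame_w - x2) / frame_w < t3  # hugging the right edge
            or abs(y1) / frame_h < t4            # hugging the top edge
            or abs(frame_h - y1) / frame_h < t4) # top edge near the frame bottom
```

A centred, well-proportioned box passes; a box touching a screen edge is flagged for removal from the tracking queue.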
Based on the above, the invention also provides a multi-target tracking system based on deep learning, including:
a training unit, used to obtain the target locations in the first frame by object detection and add the multiple targets to be tracked to a tracking queue;
a detection unit, used to traverse the tracking queue for each input next frame, obtain each target's position in that frame, and, when a target has not left the frame, invoke object detection once every fixed interval of frames;
a tracking unit, used, after a target's position in the next frame is obtained, to judge by threshold whether the target has left the frame, where leaving the frame means that the target object no longer appears in the picture;
a threshold unit, used to compute the intersection-over-union (IoU) between the detection results and the tracked results: if the IoU between a detection result and all tracked targets is < 0.1, a new target is considered to have entered the frame and is added to the tracking queue; if IoU > 0.5, the tracked box is replaced by the detected box as a position correction.
Beneficial effects of the present invention:
In the present invention, through a carefully designed network structure and an improved training method, high tracking precision is maintained while the speed of the tracking algorithm is significantly improved, network redundancy is reduced, the model size is shrunk, and the object-tracking algorithm becomes practical.
In addition, the present invention has the following advantages: 1) High speed: a single-target tracking speed of 120 fps is reached on an i5 CPU, whereas mainstream video frame rates are 25 fps, so the method can track target objects in video in real time. 2) Small model: in simulation experiments the model size is 5.5 MB, more than 50 times smaller than the models of existing mainstream deep-learning tracking algorithms. 3) Higher accuracy: the invention has been validated on the object dataset ALOV and the face dataset 300VW. 4) Multi-target tracking: current mainstream tracking models have difficulty tracking multiple targets, whereas the tracking system of the invention can track multiple targets in real time.
Brief description of the drawings
Fig. 1 is a schematic flow diagram of the method of the invention;
Fig. 2 is a schematic structural diagram of the system of the invention;
Fig. 3 is a schematic diagram of the training process;
Fig. 4 is a schematic flow diagram of one embodiment of the invention;
Fig. 5(a)-Fig. 5(c) are schematic diagrams of simulated tracking results of the invention.
Specific embodiment
The principles of the disclosure are now described with reference to some example embodiments. It should be understood that these embodiments are described only to help those skilled in the art understand and practice the disclosure, not to suggest any limitation on its scope. The content of the disclosure described here can be implemented in various ways beyond those described below.
As used herein, the term "including" and its variants are to be read as open-ended terms meaning "including but not limited to". The term "based on" is to be read as "based at least in part on". The term "one embodiment" is to be read as "at least one embodiment", and the term "another embodiment" as "at least one other embodiment".
The following concepts are defined in this application:
CReLU refers to the concatenated rectified linear unit.
Parameter sharing refers to an algorithm for feature-similarity learning.
Feature fusion includes, but is not limited to, merging feature matrices so that multiple features become one more effective fused feature.
IoU refers to intersection-over-union, i.e., the intersection of two sets divided by their union.
Offline training includes, but is not limited to, training the model offline with massive data; after training, the model's parameters are not updated during testing. Offline training is the counterpart concept of online training.
The twin network refers to two completely identical network structures.
Fig. 1 is a schematic flow diagram of the method of the invention, a multi-target tracking method based on deep learning, comprising the following steps:
Step S100: obtain the target locations in the first frame by object detection and add the multiple targets to be tracked to the tracking queue;
Step S101: input the next frame and traverse the tracking queue to obtain each target's position in that frame;
Step S102: after obtaining the target's position in the next frame, judge by threshold whether the target has left the frame;
Step S103: if not, invoke object detection once every fixed interval of frames and compute the intersection-over-union (IoU) between the detection results and the tracked results;
Step S104: if the IoU between a detection result and all tracked targets is < 0.1, consider that a new target has entered the frame and add it to the tracking queue;
Step S105: if IoU > 0.5, replace the tracked box with the detected box as a position correction;
after the above steps, continue tracking the target.
Preferably, the method of this embodiment also includes the following pre-training process: the two pictures are scaled to the same scale, yielding two pictures resembling adjacent video frames as training pictures, with which the network is pre-trained. Pictures from the ILSVRC object-detection (DET) task are used as the training pictures.
Preferably, the method also includes the following training process:
First, after the two pictures are pre-processed, picture features are extracted by a twin (Siamese) network with shared parameters; the lengths in the middle of the figure represent the numbers of convolution output channels.
Second, the twin network extracts picture features through dense -> sparse -> dense convolutional neural networks. In terms of channels, the feature-extraction network is a sparse-dense-sparse structure: features are first extracted with convolutions of few channels, then densified with convolutions of many channels, and finally the densified features are sparsified again with convolutions of few channels; this structure reduces redundancy.
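The few -> many -> few channel pattern described here shrinks the weight count relative to uniformly wide layers, which is one way such a design reduces model size. A back-of-the-envelope count, with channel widths chosen purely for illustration (the patent does not give its layer sizes):

```python
def conv_params(c_in, c_out, k=3):
    # Weight count of a k x k convolution layer, biases ignored.
    return k * k * c_in * c_out

# Narrow -> wide -> narrow stack ("sparse-dense-sparse" in the text):
bottleneck = (conv_params(64, 16)     # few channels: cheap extraction
              + conv_params(16, 128)  # many channels: densify the features
              + conv_params(128, 16)) # few channels again: re-sparsify

# Two uniformly wide layers covering the same 64-in / 16-out path:
direct = conv_params(64, 128) + conv_params(128, 16)
```

With these toy widths the three-layer bottleneck carries half the weights of the two wide layers, the kind of redundancy reduction the text attributes to this structure.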
Then, the two features are subtracted to form the fused feature, which is then regressed through a fully connected layer to obtain the position of the target box.
Preferably, concatenated rectified linear units (CReLU) are used in the feature-extraction process of the convolutional neural networks.
Preferably, the target locations in the first frame are detected using an object-detection technique based on the faster-rcnn framework.
Preferably, the detection interval is 10 frames.
Preferably, the threshold condition for judging whether the target has left the frame is any of:
h/w > threshold1, w/h > threshold2, |x1|/W < threshold3, |W-x2|/W < threshold3, |y1|/H < threshold4, |H-y1|/H < threshold4,
where threshold denotes a threshold value, h and w are the height and width of the object, H and W are the height and width of the frame, (x1, y1) is the point coordinate of the target's top-left corner, and (x2, y2) is the point coordinate of the target's bottom-right corner.
Preferably, if the multi-target tracking is face tracking, threshold1 = threshold2 = 2 and threshold3 = threshold4 = 0.02 are set.
Through a carefully designed network structure and an improved training method, the method of this embodiment maintains high tracking precision while significantly improving the speed of the tracking algorithm, reducing network redundancy, shrinking the model size, and making the object-tracking algorithm practical.
Fig. 2 is a schematic structural diagram of the system of the invention. The multi-target tracking system 100 based on deep learning in this embodiment includes:
a training unit 1, used to obtain the target locations in the first frame by object detection and add the multiple targets to be tracked to a tracking queue;
a detection unit 2, used to traverse the tracking queue for each input next frame, obtain each target's position in that frame, and, when a target has not left the frame, invoke object detection once every fixed interval of frames;
a tracking unit 3, used, after a target's position in the next frame is obtained, to judge by threshold whether the target has left the frame, where leaving the frame means that the target object no longer appears in the picture;
a threshold unit 4, used to compute the intersection-over-union (IoU) between the detection results and the tracked results: if the IoU between a detection result and all tracked targets is < 0.1, a new target is considered to have entered the frame and is added to the tracking queue; if IoU > 0.5, the tracked box is replaced by the detected box as a position correction.
From the above it can be seen that, in this embodiment, on the basis of two adjacent frames, picture features are extracted by a convolutional neural network (CNN) and the features of the two adjacent frames are fused. Together with the dense -> sparse -> dense design of the feature network and the use of concatenated rectified linear units (CReLU) in the feature-extraction process of the convolutional neural networks, the model size and capacity are reduced while high accuracy is maintained, so that the model can be used on embedded devices.
Specifically, in a practical scene, an accurate location of each target object in a reference frame is obtained by object detection; on the basis of these initial target positions, the tracking system of this embodiment can track each target object over a long period.
The principles of the invention are illustrated as follows:
1. Pre-training
During training, video data is relatively difficult to collect, and a randomly initialized network converges slowly, so finding a good pre-training method is critical for an object-tracking algorithm.
It may be conjectured that the position coordinates of an object between two adjacent frames obey a certain distribution law.
Let (c'x, c'y) be the coordinates of the target's centre point in the current frame and (cx, cy) the coordinates of the target's centre point in the previous frame; let w and h be the width and height of the target's rectangular box in the previous frame, and w' and h' the width and height of the box in the current frame; Δx and Δy are the change factors of the centre point in the x and y directions. Then:
c'x = cx + w·Δx (1)
c'y = cy + h·Δy (2)
w' = w·γw (3)
h' = h·γh (4)
Research shows that Δx, Δy and the scale factors γw and γh follow a Laplacian distribution:
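The density itself (equation (5)) does not appear in this text. For reference, the standard Laplace form, which the omitted equation presumably instantiates for the motion and scale factors with empirically fitted parameters, is:

```latex
% Standard Laplace density with location \mu and scale b.
f(x \mid \mu, b) = \frac{1}{2b}\,\exp\!\left(-\frac{\lvert x-\mu\rvert}{b}\right)
```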
Using this law, for a still picture, a region around the target's position can be cropped and deformed (including zooming, affine transformation, etc.) to obtain a deformed picture; the two pictures are then scaled to the same scale. This yields two pictures resembling adjacent video frames, which serve as training pictures.
The network is pre-trained with pictures from the ILSVRC object-detection (DET) task as training pictures.
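The sampling procedure above can be sketched as follows: draw Laplacian-distributed shift and scale factors and apply equations (1)-(4) to a still-image box to synthesize an "adjacent frame" box. The Laplace scale parameters here are illustrative guesses of this sketch, not fitted values from the patent.

```python
import math
import random

def sample_laplace(mu, b):
    # Inverse-CDF sampling of the Laplace distribution with location mu, scale b.
    u = random.random() - 0.5
    return mu - b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def jitter_box(box, b_shift=0.06, b_scale=0.05):
    """Synthesize a plausible 'next frame' box via equations (1)-(4):
    the centre shifts in proportion to the box size, and width and
    height are rescaled by factors drawn near 1."""
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    cx = x1 + w / 2 + w * sample_laplace(0.0, b_shift)  # eq (1)
    cy = y1 + h / 2 + h * sample_laplace(0.0, b_shift)  # eq (2)
    nw = w * sample_laplace(1.0, b_scale)               # eq (3)
    nh = h * sample_laplace(1.0, b_scale)               # eq (4)
    return (cx - nw / 2, cy - nh / 2, cx + nw / 2, cy + nh / 2)
```

The jittered crop and the original crop, rescaled to a common size, form one training pair of pseudo-adjacent frames.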
2. Training process
Fig. 3 is a schematic diagram of the training process; the specific flow is as follows:
First, after the two pictures are pre-processed, picture features are extracted by a twin (Siamese) network with shared parameters; the lengths in the middle of the figure represent the numbers of convolution output channels. As can be seen, the feature-extraction network is a sparse-dense-sparse structure: features are first extracted with convolutions of few channels, then densified with convolutions of many channels, and finally the densified features are sparsified with convolutions of few channels, so that the most important features are retained and redundancy is reduced. After feature extraction, the two features are subtracted to form the fused feature, which is regressed through a fully connected layer to obtain the position of the target box.
3. System flow (as shown in Fig. 4):
Step S1: object detection obtains the target locations in the first frame.
Step S2: the targets to be tracked are put into the queue.
Step S3: multi-target tracking.
Step S4: has the target left the frame? If yes, go to step S5; if not, go to step S6.
Step S5: remove the target from the tracking queue.
Step S6: every 10 frames, invoke object detection once.
Step S7: is IoU < 0.1? If yes, go to step S10; if not, go to step S8.
Step S8: is IoU > 0.5? If yes, go to step S9; if not, go to step S12.
Step S9: correct the tracked result with the object-detection result.
Step S10: a new target is detected.
Step S11: add it to the target queue.
Step S12: a tracking or detection anomaly has occurred.
Step S13: continue tracking.
First, the target locations in the first frame are detected using an object-detection technique based on the faster-rcnn framework, and the multiple targets to be tracked are added to the tracking queue. The next frame is input, and the tracking queue is traversed: for each tracked target, the tracking algorithm is called to obtain the target's position in the next frame. After the target's position in the next frame is obtained, a threshold judgment determines whether the target has left the frame. If the target has left the frame, it is removed from the tracked-object queue.
Every 10 frames, object detection is invoked once and the IoU between the detection results and the tracked results is computed. If the IoU between some detection result and all tracked targets is < 0.1, a new target is considered to have entered the frame and is added to the tracking queue. If IoU > 0.5, the tracked box is replaced by the detected box as a position correction.
The condition for judging whether the target has left the frame is (any one being satisfied):
h/w > threshold1 (6)
w/h > threshold2 (7)
|x1|/W < threshold3 (8)
|W-x2|/W < threshold3 (9)
|y1|/H < threshold4 (10)
|H-y1|/H < threshold4 (11)
where h and w are the height and width of the object, H and W are the height and width of the frame, (x1, y1) is the point coordinate of the target's top-left corner, and (x2, y2) is the point coordinate of the target's bottom-right corner. In tracking systems for different objects, threshold can take different values; in a face-tracking system, threshold1 = threshold2 = 2 and threshold3 = threshold4 = 0.02.
Fig. 5(a)-Fig. 5(c) are schematic diagrams of the simulated tracking results of the invention. A segment of pedestrian video is input and first cut into a frame sequence at 25 fps, with frames named by frame number. Detection is first performed on the picture of the first frame; as shown in Fig. 5(a)-Fig. 5(c), four faces are detected in the first frame. These four faces are taken as targets to be tracked and added to the tracking queue. The next frame is input, and the tracking algorithm tracks the four targets in real time. Every 10 frames, a face detection is invoked to check for new targets; if there is a new target, it is added to the tracking queue; if not, the tracked boxes are corrected according to the detected boxes.
It should be appreciated that each several part of the invention can be realized with hardware, software, firmware or combinations thereof.Above-mentioned In implementation method, the software that multiple steps or method can in memory and by suitable instruction execution system be performed with storage Or firmware is realized.If for example, realized with hardware, and in another embodiment, can be with well known in the art Any one of row technology or their combination are realized:With the logic gates for realizing logic function to data-signal Discrete logic, the application specific integrated circuit with suitable combinational logic gate circuit, programmable gate array (PGA), scene Programmable gate array (FPGA) etc..
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic references to the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
In general, the various embodiments of the disclosure can be implemented in hardware or special-purpose circuits, software, logic, or any combination thereof. Some aspects can be implemented in hardware, while other aspects can be implemented in firmware or software that may be executed by a controller, microprocessor, or other computing device. Although various aspects of the disclosure are shown and described as block diagrams, flowcharts, or other representations, it is understood that the blocks, devices, systems, techniques, or methods described herein can be implemented, in a non-limiting manner, in hardware, software, firmware, special-purpose circuits or logic, general-purpose hardware or controllers or other computing devices, or some combination thereof.
In addition, although operations are described in a particular order, this should not be understood as requiring that such operations be performed in the order shown or in sequential order, or that all illustrated operations be performed, in order to achieve the desired results. In some cases, multitasking or parallel processing may be advantageous. Similarly, although details of some specific implementations are contained in the discussion above, these should not be construed as limiting the scope of the disclosure in any way; rather, the descriptions of features apply only to specific embodiments. Certain features described in separate embodiments can also be performed in combination in a single embodiment. Conversely, various features described in a single embodiment can also be implemented separately in multiple embodiments, or in any suitable sub-combination.

Claims (10)

1. A multi-target tracking method based on deep learning, characterized by comprising the following steps:
obtaining target positions in the first frame by target detection, and adding the multiple targets to be tracked to a tracking queue;
inputting the next frame picture and traversing the tracking queue to obtain each target's position in the next frame;
after obtaining the position of the above target in the next frame, judging by thresholds whether the target has left the frame;
if not, calling target detection once every fixed frame interval, and computing the intersection-over-union (IOU) of the target detection results and the tracked results;
if the IOU of a target detection result with all tracked targets is < 0.1, considering that a new target has entered the frame, and adding that target to the tracking queue;
if the IOU is > 0.5, substituting the detected box for the tracked box to perform position correction;
continuing to track the targets.
2. The multi-target tracking method according to claim 1, characterized by further comprising the following pre-training process:
scaling two pictures to the same scale to obtain pairs of pictures resembling adjacent video frames as training pictures, and pre-training the network with them.
3. The multi-target tracking method according to claim 2, characterized in that pictures from the ILSVRC object detection (DET) competition are used as the above training pictures.
4. The multi-target tracking method according to claim 2, characterized by further comprising the following training process:
first, after pre-processing the two pictures, extracting picture features through twin networks with identical parameters;
secondly, the twin networks extract picture features through dense->sparse->dense convolutional neural networks;
then, subtracting the two features to obtain a fused feature, and regressing the position of the target box from this feature through a fully connected layer.
5. The multi-target tracking method according to claim 1, characterized in that CReLU (concatenated rectified linear units) are used in the feature extraction process of the convolutional neural network.
6. The multi-target tracking method according to claim 1, characterized in that the target positions in the first frame are detected using a target detection technique based on the faster-rcnn framework.
7. The multi-target tracking method according to claim 1, characterized in that the fixed frame interval is 10 frames.
8. The multi-target tracking method according to claim 1, characterized in that the threshold conditions for judging whether the target has left the frame are:
h/w > threshold1, w/h > threshold2, |x1|/W < threshold3, |W-x2|/W < threshold3, |y1|/H < threshold4, |H-y1|/H < threshold4, any one of which being satisfied;
wherein threshold denotes a threshold value, h and w are respectively the height and width of the object, H and W are respectively the height and width of the frame, (x1, y1) is the coordinate of the target's upper-left corner, and (x2, y2) is the coordinate of the target's lower-right corner.
9. The multi-target tracking method according to claim 7, characterized in that, if the multi-target tracking is face tracking, threshold1=threshold2=2 and threshold3=threshold4=0.02.
10. A multi-target tracking system based on deep learning, characterized by comprising:
a training unit, used to perform target detection to obtain target positions in the first frame, and to add the multiple targets to be tracked to a tracking queue;
a detection unit, used to input the next frame picture and traverse the tracking queue to obtain each target's position in the next frame, and, when a target has not left the frame, to call target detection once every fixed frame interval;
a tracking unit, used to judge by thresholds, after obtaining the position of the above target in the next frame, whether the target has left the frame, where leaving the frame means that the target object no longer appears in the picture;
a threshold unit, used to compute the intersection-over-union (IOU) of the target detection results and the tracked results: if the IOU of a target detection result with all tracked targets is < 0.1, a new target is considered to have entered the frame and is added to the tracking queue; if the IOU is > 0.5, the detected box is substituted for the tracked box to perform position correction.
CN201710053918.6A 2017-01-22 2017-01-22 A kind of multi-target tracking system and implementation method based on deep learning Pending CN106875425A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710053918.6A CN106875425A (en) 2017-01-22 2017-01-22 A kind of multi-target tracking system and implementation method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710053918.6A CN106875425A (en) 2017-01-22 2017-01-22 A kind of multi-target tracking system and implementation method based on deep learning

Publications (1)

Publication Number Publication Date
CN106875425A true CN106875425A (en) 2017-06-20

Family

ID=59158713

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710053918.6A Pending CN106875425A (en) 2017-01-22 2017-01-22 A kind of multi-target tracking system and implementation method based on deep learning

Country Status (1)

Country Link
CN (1) CN106875425A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104036250A (en) * 2014-06-16 2014-09-10 上海大学 Video pedestrian detecting and tracking method
CN105160400A (en) * 2015-09-08 2015-12-16 西安交通大学 L21 norm based method for improving convolutional neural network generalization capability
EP3032462A1 (en) * 2014-12-09 2016-06-15 Ricoh Company, Ltd. Method and apparatus for tracking object, and non-transitory computer-readable recording medium
CN105898107A (en) * 2016-04-21 2016-08-24 北京格灵深瞳信息技术有限公司 Target object snapping method and system
CN106022220A (en) * 2016-05-09 2016-10-12 西安北升信息科技有限公司 Method for performing multi-face tracking on participating athletes in sports video


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MAO Ning et al.: "Adaptive Target Tracking Based on Hierarchical Convolutional Features", Laser & Optoelectronics Progress *

Cited By (68)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107274433A (en) * 2017-06-21 2017-10-20 吉林大学 Method for tracking target, device and storage medium based on deep learning
CN107274433B (en) * 2017-06-21 2020-04-03 吉林大学 Target tracking method, device and storage medium based on deep learning
CN107330920A (en) * 2017-06-28 2017-11-07 华中科技大学 A kind of monitor video multi-target tracking method based on deep learning
CN107330920B (en) * 2017-06-28 2020-01-03 华中科技大学 Monitoring video multi-target tracking method based on deep learning
CN107993250A (en) * 2017-09-12 2018-05-04 北京飞搜科技有限公司 A kind of fast multi-target pedestrian tracking and analysis method and its intelligent apparatus
CN107730536A (en) * 2017-09-15 2018-02-23 北京飞搜科技有限公司 A kind of high speed correlation filtering object tracking method based on depth characteristic
CN107730536B (en) * 2017-09-15 2020-05-12 苏州飞搜科技有限公司 High-speed correlation filtering object tracking method based on depth features
CN107730538A (en) * 2017-10-11 2018-02-23 恩泊泰(天津)科技有限公司 A kind of method and device of the multiple target tracking based on image
CN107845105A (en) * 2017-10-24 2018-03-27 深圳市圆周率软件科技有限责任公司 A kind of monitoring method, smart machine and storage medium based on the linkage of panorama rifle ball
CN107845105B (en) * 2017-10-24 2021-09-10 深圳市圆周率软件科技有限责任公司 Monitoring method based on panoramic gun-ball linkage, intelligent device and storage medium
CN107886120A (en) * 2017-11-03 2018-04-06 北京清瑞维航技术发展有限公司 Method and apparatus for target detection tracking
CN107992826A (en) * 2017-12-01 2018-05-04 广州优亿信息科技有限公司 A kind of people stream detecting method based on the twin network of depth
CN108196680A (en) * 2018-01-25 2018-06-22 盛视科技股份有限公司 Robot vision following method based on human body feature extraction and retrieval
CN108196680B (en) * 2018-01-25 2021-10-08 盛视科技股份有限公司 Robot vision following method based on human body feature extraction and retrieval
CN108346154B (en) * 2018-01-30 2021-09-07 浙江大学 Establishment method of lung nodule segmentation device based on Mask-RCNN neural network
CN108346154A (en) * 2018-01-30 2018-07-31 浙江大学 The method for building up of Lung neoplasm segmenting device based on Mask-RCNN neural networks
CN108090918A (en) * 2018-02-12 2018-05-29 天津天地伟业信息系统集成有限公司 A kind of Real-time Human Face Tracking based on the twin network of the full convolution of depth
CN108416780A (en) * 2018-03-27 2018-08-17 福州大学 An object detection and matching method based on twin-region-of-interest pooling model
CN108416780B (en) * 2018-03-27 2021-08-31 福州大学 An Object Detection and Matching Method Based on Siamese-Region of Interest Pooling Model
CN108520218A (en) * 2018-03-29 2018-09-11 深圳市芯汉感知技术有限公司 A kind of naval vessel sample collection method based on target tracking algorism
CN110443824A (en) * 2018-05-02 2019-11-12 北京京东尚科信息技术有限公司 Method and apparatus for generating information
CN108846358B (en) * 2018-06-13 2021-10-26 浙江工业大学 Target tracking method for feature fusion based on twin network
CN108846358A (en) * 2018-06-13 2018-11-20 浙江工业大学 Target tracking method for feature fusion based on twin network
CN110634148A (en) * 2018-06-21 2019-12-31 北京京东尚科信息技术有限公司 Method and device for extracting target in continuous frame image
CN110634155A (en) * 2018-06-21 2019-12-31 北京京东尚科信息技术有限公司 Target detection method and device based on deep learning
CN109063574B (en) * 2018-07-05 2021-04-23 顺丰科技有限公司 Method, system and equipment for predicting envelope frame based on deep neural network detection
CN109063574A (en) * 2018-07-05 2018-12-21 顺丰科技有限公司 A kind of prediction technique, system and the equipment of the envelope frame based on deep neural network detection
CN109190635A (en) * 2018-07-25 2019-01-11 北京飞搜科技有限公司 Target tracking method, device and electronic equipment based on classification CNN
CN109166136B (en) * 2018-08-27 2022-05-03 中国科学院自动化研究所 Target object following method for mobile robot based on monocular vision sensor
CN109166136A (en) * 2018-08-27 2019-01-08 中国科学院自动化研究所 Target object follower method of the mobile robot based on monocular vision sensor
CN109271927A (en) * 2018-09-14 2019-01-25 北京航空航天大学 A kind of collaboration that space base is multi-platform monitoring method
CN109271927B (en) * 2018-09-14 2020-03-27 北京航空航天大学 A Collaborative Monitoring Method for Space-Based Multi-Platforms
CN109215058A (en) * 2018-09-17 2019-01-15 北京云测信息技术有限公司 A kind of mask method for image recognition face tracking
CN109543559B (en) * 2018-10-31 2021-12-28 东南大学 Target tracking method and system based on twin network and action selection mechanism
CN109543559A (en) * 2018-10-31 2019-03-29 东南大学 Method for tracking target and system based on twin network and movement selection mechanism
CN110111363A (en) * 2019-04-28 2019-08-09 深兰科技(上海)有限公司 A kind of tracking and equipment based on target detection
CN110322475A (en) * 2019-05-23 2019-10-11 北京中科晶上科技股份有限公司 A kind of sparse detection method of video
CN112085762A (en) * 2019-06-14 2020-12-15 福建天晴数码有限公司 Target position prediction method based on curvature radius and storage medium
CN112085762B (en) * 2019-06-14 2023-07-07 福建天晴数码有限公司 Target position prediction method based on curvature radius and storage medium
CN110378938A (en) * 2019-06-24 2019-10-25 杭州电子科技大学 A kind of monotrack method based on residual error Recurrent networks
CN110334635A (en) * 2019-06-28 2019-10-15 Oppo广东移动通信有限公司 Subject tracking method, apparatus, electronic device and computer-readable storage medium
CN110334635B (en) * 2019-06-28 2021-08-31 Oppo广东移动通信有限公司 Subject tracking method, apparatus, electronic device, and computer-readable storage medium
CN110399823A (en) * 2019-07-18 2019-11-01 Oppo广东移动通信有限公司 Subject tracking method and apparatus, electronic device, and computer-readable storage medium
CN110490084B (en) * 2019-07-24 2022-07-08 顺丰科技有限公司 Target object detection method and device, network equipment and storage medium
CN110490084A (en) * 2019-07-24 2019-11-22 顺丰科技有限公司 Detection method, device, the network equipment and the storage medium of target object
WO2021022643A1 (en) * 2019-08-08 2021-02-11 初速度(苏州)科技有限公司 Method and apparatus for detecting and tracking target in videos
CN110472594A (en) * 2019-08-20 2019-11-19 腾讯科技(深圳)有限公司 Method for tracking target, information insertion method and equipment
CN110472594B (en) * 2019-08-20 2022-12-06 腾讯科技(深圳)有限公司 Target tracking method, information insertion method and equipment
CN110647818A (en) * 2019-08-27 2020-01-03 北京易华录信息技术股份有限公司 Identification method and device for shielding target object
CN110503095B (en) * 2019-08-27 2022-06-03 中国人民公安大学 Positioning quality evaluation method, positioning method and device of target detection model
CN110503095A (en) * 2019-08-27 2019-11-26 中国人民公安大学 Alignment quality evaluation method, localization method and the equipment of target detection model
CN110677585A (en) * 2019-09-30 2020-01-10 Oppo广东移动通信有限公司 Target detection frame output method and device, terminal and storage medium
CN112750145A (en) * 2019-10-30 2021-05-04 中国电信股份有限公司 Target detection and tracking method, device and system
CN111462174A (en) * 2020-03-06 2020-07-28 北京百度网讯科技有限公司 Multi-target tracking method and device and electronic equipment
CN111462174B (en) * 2020-03-06 2023-10-31 北京百度网讯科技有限公司 Multi-target tracking method and device and electronic equipment
CN111462240B (en) * 2020-04-08 2023-05-30 北京理工大学 A target localization method based on multi-monocular vision fusion
CN111462240A (en) * 2020-04-08 2020-07-28 北京理工大学 A target localization method based on multi-monocular vision fusion
WO2021208251A1 (en) * 2020-04-15 2021-10-21 上海摩象网络科技有限公司 Face tracking method and face tracking device
CN111508006A (en) * 2020-04-23 2020-08-07 南开大学 Moving target synchronous detection, identification and tracking method based on deep learning
CN113689462A (en) * 2020-05-19 2021-11-23 深圳绿米联创科技有限公司 Target processing method and device and electronic equipment
CN112597795A (en) * 2020-10-28 2021-04-02 丰颂教育科技(江苏)有限公司 Visual tracking and positioning method for motion-blurred object in real-time video stream
CN112183675A (en) * 2020-11-10 2021-01-05 武汉工程大学 Twin network-based tracking method for low-resolution target
CN112183675B (en) * 2020-11-10 2023-09-26 武汉工程大学 A tracking method for low-resolution targets based on Siamese network
CN112560651A (en) * 2020-12-09 2021-03-26 燕山大学 Target tracking method and device based on combination of depth network and target segmentation
CN114387296A (en) * 2022-01-07 2022-04-22 深圳市金溢科技股份有限公司 Target track tracking method and device, computer equipment and storage medium
CN114612512A (en) * 2022-03-10 2022-06-10 豪威科技(上海)有限公司 KCF-based target tracking algorithm
CN116883915A (en) * 2023-09-06 2023-10-13 常州星宇车灯股份有限公司 Target detection method and system based on front and rear frame image association
CN116883915B (en) * 2023-09-06 2023-11-21 常州星宇车灯股份有限公司 Target detection method and system based on front and rear frame image association

Similar Documents

Publication Publication Date Title
CN106875425A (en) A kind of multi-target tracking system and implementation method based on deep learning
Zheng et al. Deep learning for event-based vision: A comprehensive survey and benchmarks
CN109387204B (en) Synchronous positioning and composition method of mobile robot for indoor dynamic environment
Zhou et al. Efficient road detection and tracking for unmanned aerial vehicle
CN106462976B (en) Method for tracking shape in scene observed by asynchronous sensor
EP2858008B1 (en) Target detecting method and system
CN102542289B (en) Pedestrian volume statistical method based on plurality of Gaussian counting models
CN104200494B (en) Real-time visual target tracking method based on light streams
CN109934065B (en) Method and device for gesture recognition
CN110533687B (en) Multi-target three-dimensional track tracking method and device
CN107958479A (en) A kind of mobile terminal 3D faces augmented reality implementation method
CN105574894B (en) A kind of screening technique and system of moving object feature point tracking result
WO2018152214A1 (en) Event-based feature tracking
CN105913028A (en) Face tracking method and face tracking device based on face++ platform
CN106780564A (en) A kind of anti-interference contour tracing method based on Model Prior
CN106157329A (en) A kind of adaptive target tracking method and device
Min et al. Coeb-slam: A robust vslam in dynamic environments combined object detection, epipolar geometry constraint, and blur filtering
CN113689459A (en) Real-time tracking and mapping method based on GMM combined with YOLO in dynamic environment
CN106529441A (en) Fuzzy boundary fragmentation-based depth motion map human body action recognition method
CN113781523B (en) A football detection and tracking method and device, electronic equipment, and storage medium
Madasu et al. Estimation of vehicle speed by motion tracking on image sequences
Pundlik et al. Time to collision and collision risk estimation from local scale and motion
CN111179281A (en) Human body image extraction method and human action video extraction method
CN113780181A (en) Offside judgment method, device and electronic equipment in football match based on drone
Martin et al. An evaluation of different methods for 3d-driver-body-pose estimation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20170620