CN107066990A - Target tracking method and mobile device - Google Patents
Target tracking method and mobile device
- Publication number
- CN107066990A CN107066990A CN201710309346.3A CN201710309346A CN107066990A CN 107066990 A CN107066990 A CN 107066990A CN 201710309346 A CN201710309346 A CN 201710309346A CN 107066990 A CN107066990 A CN 107066990A
- Authority
- CN
- China
- Prior art keywords
- target
- frame
- image frame
- target location
- tracker
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/48—Matching video sequences
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a target tracking method performed in a mobile device with a camera function, including: determining a target location in an initial frame according to user input, where the target location is expressed as a target box surrounding the target center; training a tracker and a detector based on the target location in the initial frame, where the tracker is adapted to track the target in the captured video frame by frame and the detector is adapted to detect the target in the captured video frame by frame; for each subsequent image frame in the captured video: tracking the target location of the image frame with the tracker and outputting a tracking response value; judging whether the tracking response value is greater than or equal to a threshold, and if so, continuing with target tracking of the next image frame; otherwise starting the detector and outputting the target location of the corresponding image frame with the detector; and, after the detector has run continuously for a predetermined number of frames, switching back to the tracker to continue target tracking. The invention also discloses a corresponding mobile device.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a target tracking method and a mobile device.
Background art
When shooting video with a mobile device such as a mobile phone, the photographer usually wants the subject to remain sharp at all times. This requires the camera focus of the mobile device to stay on the subject throughout the shoot. In practice, erratic target motion and occlusions between objects make it very difficult to judge the target location, so in many cases the subject becomes blurred because it is out of focus.
In film production, focus is pulled manually by an experienced camera operator; such manual adjustment is clearly unsuitable for a mobile device, where simple operation is expected. Some existing mobile devices use face detection to recognize faces in the video and move the focus to the corresponding position. However, this approach has a very limited range of application: it only works for specific objects such as faces, it lacks temporal continuity (focus changes are not smooth and may jitter), and when several objects of the same type appear in the video it is hard to decide which one the user is actually interested in.
Tracking algorithms offer a feasible approach to autofocus, but existing methods still fall short in accuracy and real-time performance. For example, correlation-filter trackers easily lose the target when it is occluded; re-detection-based trackers perform poorly when the target deforms; and methods based on spatial constraints or deep learning struggle to meet the efficiency and portability requirements.
Summary of the invention
The present invention therefore provides a target tracking scheme that seeks to solve, or at least alleviate, at least one of the problems above.
According to one aspect of the invention, a target tracking method is provided that is performed in a mobile device with a camera function and includes the steps of: determining a target location in an initial frame according to user input, where the target location is expressed as a target box surrounding the target center; training a tracker and a detector based on the target location in the initial frame, where the tracker is adapted to track the target in the captured video and the detector is adapted to detect the target in the captured video; for each subsequent image frame in the captured video: tracking the target location of the image frame with the tracker and outputting a tracking response value; judging whether the tracking response value is greater than or equal to a threshold, and if so, continuing with target tracking of the next image frame; otherwise starting the detector and outputting the target location of the corresponding image frame with the detector; and, after the detector has run continuously for a predetermined number of frames, switching back to the tracker to continue target tracking.
Optionally, in the target tracking method according to the present invention, the step of determining the target location in the initial frame according to user input includes: outputting multiple candidate target boxes for the current image frame with an RPN network model, based on a region of interest input by the user; performing recognition and position regression with a Fast R-CNN network model to output a confidence value for each candidate target box; and, after non-maximum suppression, selecting the candidate target box with the highest confidence as the target box characterizing the target location in the initial frame.
Optionally, in the target tracking method according to the present invention, the step of training the tracker based on the target location in the initial image frame includes: collecting samples with a circulant matrix over the region around the target box of the initial image frame; and outputting the initial tracking template of the tracker with a least-squares optimization method.
Optionally, in the target tracking method according to the present invention, the step of training the detector based on the target location in the initial image frame includes: outputting multiple sample boxes from the target box of the initial image frame with a sliding window of predetermined scales, and generating a sample queue.
Optionally, in the target tracking method according to the present invention, the sliding window of predetermined scales is defined as follows: the initial scale of the sliding window is 10% of the original image, the search scale step is a first predetermined multiple or a second predetermined multiple of the adjacent scale, and the scale interval is [0.1 times the initial scale, 10 times the initial scale].
Optionally, in the target tracking method according to the present invention, the step of tracking the target location with the tracker and outputting the tracking response value includes: generating a tracking template from the target location of the previous image frame with the tracker; generating the search region of the current image frame from the target location of the previous image frame; performing a convolution operation between the tracking template and the neighborhood of each pixel in the search region to obtain the response value of each pixel; selecting the pixel with the maximum response value as the target center of the current image frame and outputting the maximum response value as the tracking response value; and determining the target location of the current image frame from this target center and the size of the target box of the previous image frame.
Optionally, in the target tracking method according to the present invention, the step of generating the search region of the current image frame from the target location of the previous image frame includes: taking the center of the target box of the previous image frame as the search center and twice each dimension of the target box as the search range, and using this as the search region of the current image frame.
Optionally, in the target tracking method according to the present invention, the step of generating the search region further includes: scaling the current image frame by predetermined zoom factors to obtain multiple scaled image frames; and, with the center of the target box of the previous image frame as the search center and twice each dimension of the target box as the search range, generating a search region in each of the scaled image frames.
Optionally, in the target tracking method according to the present invention, the step of performing the convolution operation includes: performing the convolution operation between the tracking template and the neighborhood of each pixel in the search regions of the multiple scaled image frames, to obtain the response values under the different zoom factors.
Optionally, in the target tracking method according to the present invention, the step of determining the target location of the current image frame further includes: scaling the target box of the previous image frame by the zoom factor of the image frame containing the pixel with the maximum response value, and using the result as the target box size of the current image frame; and determining the target location of the current image frame from the calculated target center and this target box size.
Optionally, in the target tracking method according to the present invention, the step of outputting the target location of the corresponding image frame with the detector includes: generating multiple candidate samples of the target in the image frame from the multiple sample boxes in the sample queue; and filtering the multiple candidate samples with a three-stage cascade classifier to output the target location of the image frame.
Optionally, the target tracking method according to the present invention further includes a step of updating the tracker: after the target box of each image frame is obtained, calculating the tracking template of that image frame from the frame content; and computing a weighted sum of the tracking template of that image frame and the tracking template of the previous image frame to obtain the updated tracking template.
Optionally, in the target tracking method according to the present invention, the weight coefficients of the current image frame and the previous image frame are 0.015 and 0.985 respectively.
Optionally, the target tracking method according to the present invention further includes a step of updating the detector: calculating the IoU values of the multiple candidate samples generated by the detector; and screening the sample queue according to the IoU values.
According to another aspect of the invention, a mobile device is provided, including: a camera subsystem adapted to capture video images; one or more processors; a memory; and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors, and include instructions for performing any of the methods described above.
According to yet another aspect of the invention, a computer-readable storage medium storing one or more programs is provided, where the one or more programs include instructions that, when executed by a computing device, cause the computing device to perform any of the methods described above.
Compared with existing autofocus methods, the target tracking scheme of the present invention provides a user-friendly interaction: the user only needs to tap or trace on the touch screen, and the scheme automatically determines the user's region of interest and generates a relatively accurate, fine-grained target location, ensuring accurate subsequent tracking.
Further, considering factors such as the real-time performance and accuracy of target tracking, the tracker tracks the target in each subsequent image frame of the captured video, and when tracking fails or the tracked target disappears, the standby detector is started to detect the target, which guarantees the robustness of long-video tracking.
Brief description of the drawings
In order to realize the above and related purposes, some illustrative aspects are described herein in conjunction with the following description and the accompanying drawings. These aspects indicate various ways in which the principles disclosed herein can be practiced, and all aspects and their equivalents are intended to fall within the scope of the claimed subject matter. The above and other purposes, features and advantages of the disclosure will become apparent from the following detailed description read in conjunction with the drawings. Throughout the disclosure, identical reference numerals generally refer to identical parts or elements.
Fig. 1 shows a schematic structural diagram of a mobile device 100 according to an embodiment of the invention;
Fig. 2 shows a flow chart of a target tracking method 200 according to an embodiment of the invention; and
Fig. 3 shows a flow chart of obtaining the target location of an image frame by tracking with the tracker, according to an embodiment of the invention.
Detailed description of embodiments
Exemplary embodiments of the disclosure are described in more detail below with reference to the drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be realized in various forms and should not be limited by the embodiments set forth here. On the contrary, these embodiments are provided to facilitate a more thorough understanding of the disclosure and to convey its scope completely to those skilled in the art.
Fig. 1 shows a schematic structural diagram of a mobile device 100 according to an embodiment of the invention. Referring to Fig. 1, the mobile device 100 includes: a memory interface 102; one or more data processors, image processors and/or central processing units 104; and a peripheral interface 106. The memory interface 102, the one or more processors 104 and/or the peripheral interface 106 can be discrete elements or can be integrated in one or more integrated circuits. In the mobile device 100, the various elements can be coupled by one or more communication buses or signal lines. Sensors, devices and subsystems can be coupled to the peripheral interface 106 to help realize a variety of functions. For example, a motion sensor 110, a light sensor 112 and a distance sensor 114 can be coupled to the peripheral interface 106 to facilitate functions such as orientation, illumination and ranging. Other sensors 116 can likewise be connected to the peripheral interface 106, such as a positioning system (e.g. GPS), an angular-rate sensor, a temperature sensor, a biometric sensor or other sensor devices, thereby helping to implement related functions.
A camera subsystem 120 and an optical sensor 122 can be used to facilitate camera functions such as recording photographs and video clips, where the optical sensor can be, for example, a charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) optical sensor.
Communication functions can be realized with the help of one or more wireless communication subsystems 124, which can include radio-frequency receivers and transmitters and/or optical (e.g. infrared) receivers and transmitters. The particular design and embodiment of the wireless communication subsystem 124 can depend on the one or more communication networks supported by the mobile device 100. For example, the mobile device 100 can include a communication subsystem 124 designed to support GSM networks, GPRS networks, EDGE networks, Wi-Fi or WiMax networks, and Bluetooth(TM) networks. An audio subsystem 126 can be coupled with a loudspeaker 128 and a microphone 130 to help implement voice-enabled functions such as speech recognition, speech reproduction, digital recording and telephony.
An I/O subsystem 140 can include a touch screen controller 142 and/or one or more other input controllers 144. The touch screen controller 142 can be coupled to a touch screen 146. For example, the touch screen 146 and the touch screen controller 142 can detect contact and movement or pauses made on the screen using any of a variety of touch-sensing technologies, including but not limited to capacitive, resistive, infrared and surface acoustic wave technologies. The one or more other input controllers 144 can be coupled to other input/control devices 148, such as one or more buttons, rocker switches, thumb wheels, infrared ports, USB ports, and/or pointer devices such as a stylus. The one or more buttons (not shown) can include an up/down button for controlling the volume of the loudspeaker 128 and/or the microphone 130.
The memory interface 102 can be coupled with a memory 150. The memory 150 can include high-speed random access memory and/or non-volatile memory, such as one or more magnetic disk storage devices, one or more optical storage devices, and/or flash memory (e.g. NAND, NOR). The memory 150 can store an operating system 152, such as Android, iOS or Windows Phone. The operating system 152 can include instructions for handling basic system services and performing hardware-dependent tasks. The memory 150 can also store applications 154. While running, these applications can be loaded from the memory 150 onto the processor 104 and run on top of the operating system run by the processor 104, using the interfaces provided by the operating system and the underlying hardware to realize various user-desired functions, such as instant messaging, web browsing and picture management. The applications can be independent of the operating system or can be provided with it. In some implementations, the applications 154 can be one or more programs.
According to an implementation of the present invention, the target tracking function during video capture by the camera subsystem 120, i.e. the method 200 described below, is realized by storing the corresponding one or more programs in the memory 150 of the mobile device 100. It should be noted that the mobile device 100 referred to in the present invention can be a mobile phone, tablet, camera or the like with the above structure.
Fig. 2 shows a flow chart of a target tracking method 200 according to an embodiment of the invention. As shown in Fig. 2, the method 200 starts at step S210. When the camera subsystem 120 is opened for video capture, the user can input a region of interest or a target of interest, for example by tapping or tracing on the touch screen; the user input is then refined to obtain the target location in the initial (image) frame, where the target location is expressed as a target box surrounding the target center.
According to one embodiment, the region of interest input by the user is fed to a deep learning model trained offline, which outputs the target location. Specifically, multiple candidate target boxes of the current image frame are first output with an RPN (Region Proposal Network) model; recognition and position regression are then performed with a Fast R-CNN network model, which outputs the confidence of each candidate target box; finally, after non-maximum suppression, the candidate target box with the highest confidence is selected as the target box characterizing the target location in the initial image frame. For an introduction to these network models, refer to the following paper, which is not elaborated here: Ren, Shaoqing, et al. "Faster R-CNN: Towards real-time object detection with region proposal networks." Advances in Neural Information Processing Systems. 2015.
Then, in step S220, a tracker and a detector are trained based on the target location in the initial image frame.
The tracker is adapted to track the target in the captured video. Optionally, this embodiment uses discriminative tracking to distinguish the target from its surroundings. In tracking, training a good classifier requires a large number of samples, which normally means a large time cost. According to one embodiment of the present invention, training samples are generated from the target box of the initial image frame and its surrounding region with a circulant matrix, i.e. image samples based on cyclic shifts; the benefit of doing so is that evaluation over the sample set can be completed with more efficient frequency-domain methods. Then, the initial tracking template of the tracker is output with a least-squares optimization method.
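The least-squares step can be illustrated with a MOSSE-style closed-form correlation filter, one member of the family that exploits the circulant trick: cyclic shifts of a patch diagonalize under the DFT, so regressing over all shifted samples reduces to element-wise division in the frequency domain. The Gaussian label shape and the regularization value are assumptions for illustration, not figures from the description.

```python
import numpy as np

def gaussian_label(h, w, sigma=2.0):
    # Desired response map: a Gaussian peak at the patch centre.
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))

def train_template(patch, lam=1e-2):
    """Closed-form least-squares filter in the Fourier domain.

    Cyclic shifts of `patch` are the implicit training samples; the DFT
    diagonalizes the circulant data matrix, so the normal equations become
    element-wise: H* = G . conj(F) / (F . conj(F) + lambda).
    """
    F = np.fft.fft2(patch)
    G = np.fft.fft2(gaussian_label(*patch.shape))
    return (G * np.conj(F)) / (F * np.conj(F) + lam)
```

Applying the learned filter to the training patch itself should reproduce the Gaussian label, with its peak at the patch centre.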
The detector is adapted to detect the target in the captured video. According to embodiments of the present invention, the training of the detector is based on sliding-window sampling: from the target box of the initial image frame, multiple sample boxes are output with a sliding window of predetermined scales, and a sample queue is generated. Optionally, the initial scale of the sliding window is 10% of the original image size; the search scale step is a first predetermined multiple (e.g. 1.2 times) or a second predetermined multiple (e.g. 0.8 times) of the adjacent scale; the scale interval is [0.1 times the initial scale, 10 times the initial scale]; and, in particular, windows with an area smaller than 20 pixels are rejected. According to the size of the overlap between a sample box and the target box, the output sample boxes are divided into positive and negative classes: sample boxes with an overlap ratio greater than 50% are stored in the positive sample queue, and sample boxes with an overlap ratio less than 20% are stored in the negative sample queue.
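The scale pyramid and the positive/negative split can be sketched as follows. The concrete numbers (1.2x/0.8x steps, the [0.1x, 10x] interval, the 50%/20% overlap thresholds) come from the description above; everything else is illustrative scaffolding.

```python
def scale_levels(base, low_mult=0.1, high_mult=10.0, up=1.2, down=0.8):
    """Enumerate window scales: base, base*1.2, base*1.44, ... going up,
    and base*0.8, base*0.64, ... going down, clipped to [0.1*base, 10*base]."""
    scales = [base]
    s = base * up
    while s <= base * high_mult:
        scales.append(s)
        s *= up
    s = base * down
    while s >= base * low_mult:
        scales.append(s)
        s *= down
    return sorted(scales)

def label_sample(overlap):
    """Route a sample box by its overlap ratio with the target box."""
    if overlap > 0.5:
        return "positive"
    if overlap < 0.2:
        return "negative"
    return None  # ambiguous overlap: not queued
```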
As can be seen from the above description, the detector is more computationally intensive than the tracker. Considering factors such as the real-time performance and accuracy of target tracking, for each subsequent image frame in the captured video the target is tracked with the tracker, and only when tracking fails or the tracked target disappears is the detector started to detect the target.
The detailed process is described as follows.
In step S230, the target location of the image frame (for example, the 2nd frame) is obtained by tracking with the tracker, and a tracking response value is output.
Fig. 3 shows a flow chart of obtaining the target location of an image frame by tracking with the tracker, according to an embodiment of the invention.
In step S2302, a tracking template is generated from the target location of the previous image frame with the tracker. That is, each time tracking yields the target of an image frame, the tracker generates the tracking template of that frame for use in tracking the next image frame.
Then, in step S2304, the search region of the current image frame is generated from the target location of the previous image frame. Optionally, the center of the target box of the previous image frame is taken as the search center, and twice each dimension of the target box (i.e. its width and height) as the search range, giving the search region of the current image frame. For example, if the target location of the previous image frame is expressed as a 100 × 100 target box with pixel (200, 500) as the target center, then the search region generated for the current image frame is a 200 × 200 search box centered on pixel (200, 500).
Then, in step S2306, a convolution operation is performed between the tracking template and the neighborhood of each pixel in the search region (equivalently, the tracking template and the search region are transformed into the frequency domain and a dot product is taken), obtaining the response value of each pixel. The response value represents the probability that the pixel is the final target center point.
Then, in step S2308, the pixel with the maximum response value is selected as the target center of the current image frame, and the maximum response value is output as the tracking response value.
Then, in step S2310, the target location of the current image frame is determined from this target center and the size of the target box of the previous image frame. That is, the target location of the current image frame is expressed as a target box with the pixel corresponding to the tracking response value as its center and the target box size of the previous image frame as its size.
In practice, changes in shooting focal length or the motion of the target object may cause the target to change in scale. Therefore, according to an embodiment of the present invention, steps S2302 to S2310 above are performed separately at several different scales.
That is, before step S2304 is performed, the current image frame is scaled by predetermined zoom factors to obtain multiple scaled image frames; then, following step S2304, the center of the target box of the previous image frame is taken as the search center and twice each dimension of the target box as the search range, giving a search region in each of the scaled image frames. According to an embodiment of the present invention, the predetermined zoom factors include one or more of the following array: {0.82, 0.88, 0.94, 1.06, 1.12, 1.2}.
In step S2306, the convolution operation is performed between the tracking template and the neighborhood of each pixel in the search regions of the multiple scaled image frames, obtaining the response values under the different zoom factors.
In the subsequent steps S2308 and S2310, the target box of the previous image frame is scaled by the zoom factor of the image frame containing the pixel with the maximum response value, giving the target box size of the current image frame; the target location of the current image frame is then determined from the calculated target center and this target box size.
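The multi-scale search then amounts to repeating the single-scale step once per zoom factor and keeping the scale whose peak response is strongest. Including the unscaled frame (factor 1.0) alongside the listed factors is an assumption; the description lists only the six non-unit factors.

```python
SCALE_FACTORS = (0.82, 0.88, 0.94, 1.0, 1.06, 1.12, 1.2)  # 1.0 = no scaling (assumed)

def best_scale(peak_by_scale):
    """peak_by_scale maps zoom factor -> peak response of that scaled search;
    the winning factor is the one with the strongest correlation peak."""
    return max(peak_by_scale, key=peak_by_scale.get)

def rescale_box(size, factor):
    """Scale the previous frame's box size by the winning zoom factor."""
    h, w = size
    return (round(h * factor), round(w * factor))
```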
Then, in step S240, whether the tracking response value is greater than or equal to a threshold is judged; optionally, the threshold is set to 0.27. If the tracking response value is >= 0.27, the method returns to step S230 and continues with target tracking of the next image frame.
If the tracking response value is less than the threshold, the tracking result is considered inaccurate, and step S250 is performed: the detector is started, and the target location of the corresponding image frame is output with the detector. According to embodiments of the present invention, when the tracked target location gets too close to the image border, the target is considered possibly to have disappeared; in this case the detector is also started and step S250 is performed.
Specifically, with the detector trained and generated in step S220, the multiple sample boxes in the sample queue generate multiple candidate samples of the target in the image frame. Because the number of candidate samples is large, direct nearest-neighbor matching is inefficient; a three-stage cascade classification is therefore used to filter the multiple candidate samples and output the target location of the image frame. According to one embodiment, the first stage filters candidate samples by a variance constraint, the second stage further filters candidates by random-fern classification, and the final third stage performs nearest-neighbor matching, with the highest-scoring candidate sample taken as the output of the detector.
In many cases, a target that disappears or is occluded does not reappear immediately, and a detector run only briefly cannot correctly find the target. Therefore, in step S260, after the detector has run continuously for a predetermined number of frames, the method switches to the tracker and continues target tracking with the tracker, i.e. returns to step S230 and continues the target tracking flow. Optionally, the predetermined number of frames is set to 50.
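The control flow of steps S230 to S260 reduces to a small state machine: stay in tracking mode while the response clears the 0.27 threshold, fall back to the detector otherwise, and hand control back to the tracker after 50 detector frames. A sketch, with the two modes modeled as strings:

```python
TRACK_THRESH = 0.27   # response threshold from step S240
DETECT_FRAMES = 50    # frames the detector stays active once triggered

def step_mode(mode, detect_left, response):
    """One per-frame transition of the tracker/detector state machine.
    Returns (next_mode, frames_left_in_detector_mode)."""
    if mode == "track":
        if response >= TRACK_THRESH:
            return "track", 0          # tracking is reliable; keep tracking
        return "detect", DETECT_FRAMES  # response too low; start the detector
    # detector mode: count down, then hand control back to the tracker
    detect_left -= 1
    if detect_left == 0:
        return "track", 0
    return "detect", detect_left
```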
According to an embodiment of the present invention, after the target location has been determined in each image frame, the system updates the tracker and the detector according to the content of that frame.
Specifically, the method of updating the tracker is: after the target box of each image frame is obtained, the tracking template of that image frame is calculated from the frame content; a weighted sum (i.e. linear superposition) of the tracking template of that image frame and the tracking template of the previous image frame is then computed, giving the updated tracking template. Optionally, the weight coefficients of the current image frame and the previous image frame are 0.015 and 0.985 respectively.
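With the stated 0.015/0.985 weights, the tracker update is a running linear blend of templates, written here on NumPy arrays for illustration:

```python
import numpy as np

def update_template(old, new, lr=0.015):
    """Weighted sum of the previous template (weight 0.985) and the template
    computed from the current frame (weight 0.015)."""
    return (1 - lr) * old + lr * new
```

The small learning rate means the template adapts slowly, which damps drift from any single noisy frame.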
Likewise, the detector is updated as follows: the IoU (intersection-over-union) scores of the multiple candidate samples generated by the detector are computed, and the sample queues are screened according to these scores. In other words, the samples that the detector judges likely to be the target in a new frame are classified by their IoU score: if a sample's IoU exceeds 0.65, it is considered to overlap strongly with the tracking result and is added to the positive-sample queue; if its IoU is below 0.2, it is considered to overlap weakly with the tracking result and is added to the negative-sample queue. To prevent the sample queues from growing without bound, part of the samples are forgotten at random, keeping the total number of samples stable.
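A sketch of this update, assuming axis-aligned boxes in (x, y, w, h) form; the queue bound of 500 is an arbitrary assumption, since the patent does not fix the queue length:

```python
import random

def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def update_queues(samples, result_box, pos_queue, neg_queue,
                  pos_thresh=0.65, neg_thresh=0.2, max_len=500, rng=random):
    """Route candidate samples into the positive/negative queues by their IoU
    with the tracking result (thresholds 0.65 / 0.2 from the description),
    then randomly forget surplus samples so the queues stay bounded."""
    for box in samples:
        score = iou(box, result_box)
        if score > pos_thresh:
            pos_queue.append(box)
        elif score < neg_thresh:
            neg_queue.append(box)
    for queue in (pos_queue, neg_queue):
        while len(queue) > max_len:
            queue.pop(rng.randrange(len(queue)))
    return pos_queue, neg_queue
```

Samples with an intermediate IoU (between 0.2 and 0.65) are ambiguous and are added to neither queue.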
In summary, compared with existing auto-focus methods, the target-tracking scheme of the present invention first provides a user-friendly interaction: the user only needs to tap or sketch on the touchscreen, and the region of interest is determined automatically and a relatively accurate, fine target position is generated, ensuring that subsequent tracking is accurate. Secondly, the tracker of this scheme uses cyclically shifted image samples, which better discriminate confusing situations such as target deformation, motion blur, and background clutter, and the tracking algorithm runs in real time, quickly and accurately determining the position and corresponding scale of the target object in each image frame. Finally, when the target temporarily disappears or is occluded, this scheme provides a standby detector that maintains a long-term memory of the target's appearance and breaks through spatial constraints, so that the target's position can be determined again after it reappears, ensuring robust tracking over long videos.
The various techniques described herein may be implemented in hardware or software, or a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e. instructions) embodied in tangible media such as floppy disks, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine such as a computer, the machine becomes an apparatus for practicing the invention.
Where the program code is executed on programmable computers, the mobile device generally includes a processor, a processor-readable storage medium (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device (as shown in Fig. 1). The memory is configured to store the program code; the processor is configured to execute the target-tracking method of the present invention according to the instructions in the program code stored in the memory.
By way of example and not limitation, computer-readable media include computer storage media and communication media. Computer storage media store information such as computer-readable instructions, data structures, program modules, or other data. Communication media generally embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information-delivery media. Combinations of any of the above are also included within the scope of computer-readable media.
It should be appreciated that, in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of one or more of the various inventive aspects. However, this method of disclosure is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art should understand that the modules, units, or components of the devices in the examples disclosed herein may be arranged in a device as described in the embodiments, or alternatively may be located in one or more devices different from the devices in the examples. The modules in the foregoing examples may be combined into one module or further divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the devices of an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units, or components of an embodiment may be combined into one module, unit, or component, and may furthermore be divided into multiple sub-modules, sub-units, or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by alternative features serving the same, equivalent, or similar purpose.
The present invention further discloses:
A9. The method of A8, wherein the step of convolving the tracking template with the neighbourhood of each pixel in the search region comprises: convolving the tracking template with the neighbourhood of each pixel in the search regions of the plurality of scaled image frames, obtaining the response values under the different zoom factors.
A10. The method of A9, wherein the step of determining the target position of the image frame from the target centre and the size of the target box of the previous image frame further comprises: scaling the target box of the previous image frame by the zoom factor of the image frame to which the maximum-response pixel belongs, as the target-box size of the current image frame; and determining the target position of the image frame from the computed target centre and that target-box size.
A11. The method of any one of A4-A10, wherein the step of outputting the target position of the corresponding image frame using the detector comprises: generating a plurality of candidate samples of the target in the image frame from the plurality of sample boxes in the sample queue; and filtering the plurality of candidate samples by three-stage cascade classification, outputting the target position of the image frame.
A12. The method of any one of A1-A11, further comprising the step of updating the tracker: after the target box of each image frame is obtained, computing the tracking template of that frame from the frame content; and weighting the tracking template of the frame with the tracking template of the previous frame, obtaining the updated tracking template.
A13. The method of A12, wherein the weight coefficients of the current frame and the previous frame are 0.015 and 0.985, respectively.
A14. The method of any one of A1-A13, further comprising the step of updating the detector: computing the IoU scores of the plurality of candidate samples generated by the detector; and screening the sample queue according to the IoU scores.
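The convolution-with-scaling scheme of A9/A10 can be illustrated with a naive spatial-domain correlation. This is a sketch only: a practical correlation-filter tracker evaluates the response in the Fourier domain, and the zoom factors and nearest-neighbour rescaling below are illustrative assumptions:

```python
import numpy as np

def response_map(template, search):
    """Slide the template over the search region and record, at each position,
    its inner product with the corresponding neighbourhood (plain
    cross-correlation; real trackers do this in the Fourier domain)."""
    th, tw = template.shape
    sh, sw = search.shape
    out = np.empty((sh - th + 1, sw - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(template * search[y:y + th, x:x + tw])
    return out

def best_scale_position(template, search, scales=(0.95, 1.0, 1.05)):
    """Evaluate the response map at several zoom factors of the search region
    and return (scale, position, response) of the global maximum, as in
    A9/A10: the winning zoom factor then rescales the previous target box."""
    best = (None, None, -np.inf)
    for s in scales:
        h = max(template.shape[0], int(round(search.shape[0] * s)))
        w = max(template.shape[1], int(round(search.shape[1] * s)))
        # Nearest-neighbour rescale of the search region (illustrative only).
        ys = np.arange(h) * search.shape[0] // h
        xs = np.arange(w) * search.shape[1] // w
        scaled = search[np.ix_(ys, xs)]
        r = response_map(template, scaled)
        y, x = np.unravel_index(np.argmax(r), r.shape)
        if r[y, x] > best[2]:
            best = (s, (y, x), float(r[y, x]))
    return best
```

The position of the global maximum gives the target centre, and its zoom factor gives the scale change to apply to the previous frame's target box.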
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include certain features included in other embodiments and not others, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
In addition, some of the embodiments are described herein as methods, or combinations of method elements, that can be implemented by a processor of a computer system or by other means of carrying out the function. Thus, a processor with the instructions necessary for carrying out such a method or method element forms a means for carrying out the method or method element. Furthermore, an element of an apparatus embodiment described herein is an example of a means for carrying out the function performed by that element for the purpose of carrying out the invention.
As used herein, unless otherwise specified, the use of the ordinal adjectives "first", "second", "third", etc. to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, whether temporally, spatially, in ranking, or in any other manner.
Although the invention has been described with respect to a limited number of embodiments, those skilled in the art, having the benefit of the above description, will appreciate that other embodiments can be devised within the scope of the invention thus described. It should also be noted that the language used in this specification has been principally selected for readability and instructional purposes, and not to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. With respect to the scope of the invention, the disclosure made herein is illustrative and not restrictive, the scope of the invention being defined by the appended claims.
Claims (10)
1. A target-tracking method, the method being executed in a mobile device having a camera function, comprising the steps of:
determining a target position in an initial frame according to user input, wherein the target position is expressed as a target box surrounding the target centre;
training and generating a tracker and a detector based on the target position in the initial frame, wherein the tracker is adapted to track a target in the captured video and the detector is adapted to detect the target in the captured video;
for each subsequent image frame in the captured video:
tracking with the tracker to obtain the target position of the image frame, and outputting a tracking response value;
judging whether the tracking response value is greater than or equal to a threshold, and if so, continuing the target tracking of the next image frame;
otherwise, starting the detector, and outputting the target position of the corresponding image frame using the detector; and
after the detector has run continuously for a predetermined number of frames, switching to the tracker to continue target tracking.
2. The method of claim 1, wherein the step of determining the target position in the initial frame according to user input comprises:
outputting a plurality of candidate target boxes of the current image frame using an RPN network model, based on the region of interest input by the user;
performing recognition and position regression with a Fast R-CNN network model, and outputting the confidence of each candidate target box; and
after non-maximum suppression, selecting the candidate target box with the highest confidence as the target box characterising the target position of the initial image frame.
3. The method of claim 1 or 2, wherein the step of training and generating the tracker based on the target position of the initial image frame comprises:
collecting samples using a circulant matrix of the region around the target box of the initial image frame; and
outputting the initial tracking template of the tracker using a least-squares optimisation method.
4. The method of any one of claims 1-3, wherein the step of training and generating the detector based on the target position in the initial frame comprises:
according to the target box of the initial image frame, outputting a plurality of sample boxes with sliding windows of predetermined scales, generating a sample queue.
5. The method of claim 4, wherein the sliding windows of predetermined scales are such that:
the initial scale of the sliding window is 10% of the original image; the step between adjacent search scales is a first predetermined multiple or a second predetermined multiple; and the scale interval is [0.1 times the initial scale, 10 times the initial scale].
6. The method of any one of claims 1-5, wherein the step of tracking with the tracker to obtain the target position and outputting the tracking response value comprises:
generating a tracking template with the tracker from the target position of the previous image frame;
generating the search region of the image frame according to the target position of the previous image frame;
convolving the tracking template with the neighbourhood of each pixel in the search region, obtaining the response value of each pixel;
selecting the pixel with the maximum response value as the target centre of the image frame, and outputting the maximum response value as the tracking response value; and
determining the target position of the image frame from the target centre and the size of the target box of the previous image frame.
7. The method of claim 6, wherein the step of generating the search region of the image frame according to the target position of the previous image frame comprises:
taking the centre of the target box of the previous image frame as the search centre, and twice each dimension of the target box as the search range, as the search region of the image frame.
8. The method of claim 6, wherein the step of generating the search region of the image frame according to the target position of the previous image frame further comprises:
scaling the image frame by predetermined zoom factors, obtaining a plurality of scaled image frames; and
taking the centre of the target box of the previous image frame as the search centre, and twice each dimension of the target box as the search range, as the search regions of the plurality of scaled image frames.
9. A mobile device, comprising:
a camera subsystem, adapted to capture video images;
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods according to claims 1-8.
10. A computer-readable storage medium storing one or more programs, the one or more programs comprising instructions which, when executed by a computing device, cause the computing device to perform any of the methods according to claims 1-8.
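As an illustration of the sliding-window scales in claim 5, the following sketch assumes a scale step of 1.2 for the "predetermined multiple", a value the claim leaves unspecified:

```python
def window_scales(image_size, step=1.2, lo_factor=0.1, hi_factor=10.0):
    """Generate the sliding-window sizes of claim 5: the initial scale is 10%
    of the original image; successive scales differ by a fixed multiplicative
    step (assumed 1.2 here); and all scales lie within [0.1x, 10x] of the
    initial scale."""
    init = 0.1 * image_size
    scales, s = [], init * lo_factor
    while s <= init * hi_factor:
        scales.append(round(s, 4))
        s *= step
    return scales
```

For a 1000-pixel image this yields window sizes from 10 up to about 954 pixels; stepping down by the reciprocal multiple (1/1.2) would give the "second predetermined multiple" direction.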
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710309346.3A CN107066990B (en) | 2017-05-04 | 2017-05-04 | A kind of method for tracking target and mobile device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107066990A true CN107066990A (en) | 2017-08-18 |
CN107066990B CN107066990B (en) | 2019-10-11 |
Family
ID=59597052
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710309346.3A Active CN107066990B (en) | 2017-05-04 | 2017-05-04 | A kind of method for tracking target and mobile device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107066990B (en) |
Cited By (52)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107564039A (en) * | 2017-08-31 | 2018-01-09 | 成都观界创宇科技有限公司 | Multi-object tracking method and panorama camera applied to panoramic video |
CN107578368A (en) * | 2017-08-31 | 2018-01-12 | 成都观界创宇科技有限公司 | Multi-object tracking method and panorama camera applied to panoramic video |
CN107832683A (en) * | 2017-10-24 | 2018-03-23 | 亮风台(上海)信息科技有限公司 | A kind of method for tracking target and system |
CN108197605A (en) * | 2018-01-31 | 2018-06-22 | 电子科技大学 | Yak personal identification method based on deep learning |
CN108229360A (en) * | 2017-12-26 | 2018-06-29 | 美的集团股份有限公司 | A kind of method of image procossing, equipment and storage medium |
CN108230359A (en) * | 2017-11-12 | 2018-06-29 | 北京市商汤科技开发有限公司 | Object detection method and device, training method, electronic equipment, program and medium |
CN108280843A (en) * | 2018-01-24 | 2018-07-13 | 新华智云科技有限公司 | A kind of video object detecting and tracking method and apparatus |
CN108470332A (en) * | 2018-01-24 | 2018-08-31 | 博云视觉(北京)科技有限公司 | A kind of multi-object tracking method and device |
CN108776822A (en) * | 2018-06-22 | 2018-11-09 | 腾讯科技(深圳)有限公司 | Target area detection method, device, terminal and storage medium |
CN108830219A (en) * | 2018-06-15 | 2018-11-16 | 北京小米移动软件有限公司 | Method for tracking target, device and storage medium based on human-computer interaction |
CN108960206A (en) * | 2018-08-07 | 2018-12-07 | 北京字节跳动网络技术有限公司 | Video frame treating method and apparatus |
CN108961312A (en) * | 2018-04-03 | 2018-12-07 | 奥瞳系统科技有限公司 | High-performance visual object tracking and system for embedded vision system |
CN108986138A (en) * | 2018-05-24 | 2018-12-11 | 北京飞搜科技有限公司 | Method for tracking target and equipment |
CN109034136A (en) * | 2018-09-06 | 2018-12-18 | 湖北亿咖通科技有限公司 | Image processing method, device, picture pick-up device and storage medium |
WO2019041519A1 (en) * | 2017-08-29 | 2019-03-07 | 平安科技(深圳)有限公司 | Target tracking device and method, and computer-readable storage medium |
CN109543534A (en) * | 2018-10-22 | 2019-03-29 | 中国科学院自动化研究所南京人工智能芯片创新研究院 | Target loses the method and device examined again in a kind of target following |
CN109671103A (en) * | 2018-12-12 | 2019-04-23 | 易视腾科技股份有限公司 | Method for tracking target and device |
CN109697499A (en) * | 2017-10-24 | 2019-04-30 | 北京京东尚科信息技术有限公司 | Pedestrian's flow funnel generation method and device, storage medium, electronic equipment |
CN109697727A (en) * | 2018-11-27 | 2019-04-30 | 哈尔滨工业大学(深圳) | Target tracking method, system and storage medium based on correlation filtering and metric learning |
CN109697441A (en) * | 2017-10-23 | 2019-04-30 | 杭州海康威视数字技术股份有限公司 | A kind of object detection method, device and computer equipment |
CN109815773A (en) * | 2017-11-21 | 2019-05-28 | 北京航空航天大学 | A vision-based detection method for low-slow and small aircraft |
CN110084777A (en) * | 2018-11-05 | 2019-08-02 | 哈尔滨理工大学 | A kind of micro parts positioning and tracing method based on deep learning |
CN110084835A (en) * | 2019-06-06 | 2019-08-02 | 北京字节跳动网络技术有限公司 | Method and apparatus for handling video |
CN110176027A (en) * | 2019-05-27 | 2019-08-27 | 腾讯科技(深圳)有限公司 | Video target tracking method, device, equipment and storage medium |
CN110211153A (en) * | 2019-05-28 | 2019-09-06 | 浙江大华技术股份有限公司 | Method for tracking target, target tracker and computer storage medium |
CN110334635A (en) * | 2019-06-28 | 2019-10-15 | Oppo广东移动通信有限公司 | Subject tracking method, apparatus, electronic device and computer-readable storage medium |
CN110363790A (en) * | 2018-04-11 | 2019-10-22 | 北京京东尚科信息技术有限公司 | Target tracking method, device and computer readable storage medium |
CN110458861A (en) * | 2018-05-04 | 2019-11-15 | 佳能株式会社 | Object detection and tracking and equipment |
CN110472594A (en) * | 2019-08-20 | 2019-11-19 | 腾讯科技(深圳)有限公司 | Method for tracking target, information insertion method and equipment |
CN110634151A (en) * | 2019-08-01 | 2019-12-31 | 西安电子科技大学 | Single-target tracking method |
CN110634153A (en) * | 2019-09-19 | 2019-12-31 | 上海眼控科技股份有限公司 | Target tracking template updating method and device, computer equipment and storage medium |
CN110647836A (en) * | 2019-09-18 | 2020-01-03 | 中国科学院光电技术研究所 | A Robust Deep Learning-Based Single Target Tracking Method |
CN110661977A (en) * | 2019-10-29 | 2020-01-07 | Oppo广东移动通信有限公司 | Subject detection method and apparatus, electronic device, computer-readable storage medium |
CN111052753A (en) * | 2017-08-30 | 2020-04-21 | Vid拓展公司 | Tracking video scaling |
CN111145215A (en) * | 2019-12-25 | 2020-05-12 | 北京迈格威科技有限公司 | Target tracking method and device |
CN111242981A (en) * | 2020-01-21 | 2020-06-05 | 北京捷通华声科技股份有限公司 | Tracking method and device for fixed object and security equipment |
CN111311639A (en) * | 2019-12-31 | 2020-06-19 | 山东工商学院 | A fast-moving adaptive update interval tracking method for multiple search spaces |
CN111448588A (en) * | 2017-12-07 | 2020-07-24 | 华为技术有限公司 | Activity detection by joint detection and tracking of people and objects |
CN111489284A (en) * | 2019-01-29 | 2020-08-04 | 北京搜狗科技发展有限公司 | Image processing method and device for image processing |
CN111626263A (en) * | 2020-06-05 | 2020-09-04 | 北京百度网讯科技有限公司 | Video interesting area detection method, device, equipment and medium |
WO2020187095A1 (en) * | 2019-03-20 | 2020-09-24 | 深圳市道通智能航空技术有限公司 | Target tracking method and apparatus, and unmanned aerial vehicle |
CN111754541A (en) * | 2020-07-29 | 2020-10-09 | 腾讯科技(深圳)有限公司 | Target tracking method, device, equipment and readable storage medium |
CN111836102A (en) * | 2019-04-23 | 2020-10-27 | 杭州海康威视数字技术股份有限公司 | Video frame analysis method and device |
CN111986229A (en) * | 2019-05-22 | 2020-11-24 | 阿里巴巴集团控股有限公司 | Video target detection method, device and computer system |
CN112446241A (en) * | 2019-08-29 | 2021-03-05 | 阿里巴巴集团控股有限公司 | Method and device for obtaining characteristic information of target object and electronic equipment |
CN112466121A (en) * | 2020-12-12 | 2021-03-09 | 江西洪都航空工业集团有限责任公司 | Speed measuring method based on video |
CN112639405A (en) * | 2020-05-07 | 2021-04-09 | 深圳市大疆创新科技有限公司 | State information determination method, device, system, movable platform and storage medium |
EP3754608A4 (en) * | 2018-08-01 | 2021-08-18 | Tencent Technology (Shenzhen) Company Limited | Target tracking method, computer device, and storage medium |
CN113793365A (en) * | 2021-11-17 | 2021-12-14 | 第六镜科技(成都)有限公司 | Target tracking method and device, computer equipment and readable storage medium |
CN113869163A (en) * | 2021-09-18 | 2021-12-31 | 北京远度互联科技有限公司 | Target tracking method and device, electronic equipment and storage medium |
CN113989696A (en) * | 2021-09-18 | 2022-01-28 | 北京远度互联科技有限公司 | Target tracking method and device, electronic equipment and storage medium |
CN119169058A (en) * | 2024-11-25 | 2024-12-20 | 思翼科技(深圳)有限公司 | Target tracking method, device, electronic device and storage medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101916368A (en) * | 2010-08-20 | 2010-12-15 | 中国科学院软件研究所 | Object Tracking Method Based on Multi-window |
CN102982340A (en) * | 2012-10-31 | 2013-03-20 | 中国科学院长春光学精密机械与物理研究所 | Target tracking method based on semi-supervised learning and random fern classifier |
CN104008371A (en) * | 2014-05-22 | 2014-08-27 | 南京邮电大学 | Regional suspicious target tracking and recognizing method based on multiple cameras |
US20140369555A1 (en) * | 2013-06-14 | 2014-12-18 | Qualcomm Incorporated | Tracker assisted image capture |
CN105279773A (en) * | 2015-10-27 | 2016-01-27 | 杭州电子科技大学 | TLD framework based modified video tracking optimization method |
CN105512640A (en) * | 2015-12-30 | 2016-04-20 | 重庆邮电大学 | Method for acquiring people flow on the basis of video sequence |
GB2533360A (en) * | 2014-12-18 | 2016-06-22 | Nokia Technologies Oy | Method, apparatus and computer program product for processing multi-camera media content |
CN106408591A (en) * | 2016-09-09 | 2017-02-15 | 南京航空航天大学 | Anti-blocking target tracking method |
Non-Patent Citations (1)
Title |
---|
SHAOQING REN ET AL.: "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks", arXiv:1506.01497v3 [cs.CV] * |
Cited By (80)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019041519A1 (en) * | 2017-08-29 | 2019-03-07 | 平安科技(深圳)有限公司 | Target tracking device and method, and computer-readable storage medium |
CN111052753A (en) * | 2017-08-30 | 2020-04-21 | Vid拓展公司 | Tracking video scaling |
CN107578368A (en) * | 2017-08-31 | 2018-01-12 | 成都观界创宇科技有限公司 | Multi-object tracking method and panorama camera applied to panoramic video |
CN107564039A (en) * | 2017-08-31 | 2018-01-09 | 成都观界创宇科技有限公司 | Multi-object tracking method and panorama camera applied to panoramic video |
CN109697441A (en) * | 2017-10-23 | 2019-04-30 | 杭州海康威视数字技术股份有限公司 | A kind of object detection method, device and computer equipment |
US11288548B2 (en) | 2017-10-23 | 2022-03-29 | Hangzhou Hikvision Digital Technology Co., Ltd. | Target detection method and apparatus, and computer device |
US11210795B2 (en) | 2017-10-24 | 2021-12-28 | Beijing Jingdong Shangke Information Technology Co., Ltd. | Pedestrian flow funnel generation method and apparatus, storage medium and electronic device |
CN107832683A (en) * | 2017-10-24 | 2018-03-23 | 亮风台(上海)信息科技有限公司 | A kind of method for tracking target and system |
CN109697499A (en) * | 2017-10-24 | 2019-04-30 | 北京京东尚科信息技术有限公司 | Pedestrian's flow funnel generation method and device, storage medium, electronic equipment |
CN109697499B (en) * | 2017-10-24 | 2021-09-07 | 北京京东尚科信息技术有限公司 | Pedestrian flow funnel generation method and device, storage medium and electronic equipment |
CN108230359B (en) * | 2017-11-12 | 2021-01-26 | 北京市商汤科技开发有限公司 | Object detection method and apparatus, training method, electronic device, program, and medium |
CN108230359A (en) * | 2017-11-12 | 2018-06-29 | 北京市商汤科技开发有限公司 | Object detection method and device, training method, electronic equipment, program and medium |
CN109815773A (en) * | 2017-11-21 | 2019-05-28 | 北京航空航天大学 | A vision-based detection method for low-slow and small aircraft |
CN111448588B (en) * | 2017-12-07 | 2023-09-12 | 华为技术有限公司 | Activity detection method and computer equipment |
CN111448588A (en) * | 2017-12-07 | 2020-07-24 | 华为技术有限公司 | Activity detection by joint detection and tracking of people and objects |
CN108229360B (en) * | 2017-12-26 | 2021-03-19 | 美的集团股份有限公司 | An image processing method, device and storage medium |
CN108229360A (en) * | 2017-12-26 | 2018-06-29 | 美的集团股份有限公司 | A kind of method of image procossing, equipment and storage medium |
CN108470332A (en) * | 2018-01-24 | 2018-08-31 | 博云视觉(北京)科技有限公司 | A kind of multi-object tracking method and device |
CN108280843A (en) * | 2018-01-24 | 2018-07-13 | 新华智云科技有限公司 | A kind of video object detecting and tracking method and apparatus |
CN108470332B (en) * | 2018-01-24 | 2023-07-07 | 博云视觉(北京)科技有限公司 | Multi-target tracking method and device |
CN108197605A (en) * | 2018-01-31 | 2018-06-22 | 电子科技大学 | Yak personal identification method based on deep learning |
CN108961312B (en) * | 2018-04-03 | 2022-02-25 | 奥瞳系统科技有限公司 | High-performance visual object tracking method and system for embedded visual system |
CN108961312A (en) * | 2018-04-03 | 2018-12-07 | 奥瞳系统科技有限公司 | High-performance visual object tracking and system for embedded vision system |
CN110363790B (en) * | 2018-04-11 | 2024-06-14 | 北京京东尚科信息技术有限公司 | Target tracking method, apparatus and computer readable storage medium |
CN110363790A (en) * | 2018-04-11 | 2019-10-22 | 北京京东尚科信息技术有限公司 | Target tracking method, device and computer readable storage medium |
CN110458861A (en) * | 2018-05-04 | 2019-11-15 | 佳能株式会社 | Object detection and tracking and equipment |
CN110458861B (en) * | 2018-05-04 | 2024-01-26 | 佳能株式会社 | Object detection and tracking method and device |
CN108986138A (en) * | 2018-05-24 | 2018-12-11 | 北京飞搜科技有限公司 | Method for tracking target and equipment |
CN108830219A (en) * | 2018-06-15 | 2018-11-16 | 北京小米移动软件有限公司 | Method for tracking target, device and storage medium based on human-computer interaction |
CN108830219B (en) * | 2018-06-15 | 2022-03-18 | 北京小米移动软件有限公司 | Target tracking method and device based on man-machine interaction and storage medium |
CN108776822A (en) * | 2018-06-22 | 2018-11-09 | 腾讯科技(深圳)有限公司 | Target area detection method, device, terminal and storage medium |
US11961242B2 (en) | 2018-08-01 | 2024-04-16 | Tencent Technology (Shenzhen) Company Limited | Target tracking method, computer device, and storage medium |
EP3754608A4 (en) * | 2018-08-01 | 2021-08-18 | Tencent Technology (Shenzhen) Company Limited | Target tracking method, computer device, and storage medium |
CN108960206A (en) * | 2018-08-07 | 2018-12-07 | 北京字节跳动网络技术有限公司 | Video frame treating method and apparatus |
CN109034136B (en) * | 2018-09-06 | 2021-07-20 | 湖北亿咖通科技有限公司 | Image processing method, image processing apparatus, image capturing device, and storage medium |
CN109034136A (en) * | 2018-09-06 | 2018-12-18 | 湖北亿咖通科技有限公司 | Image processing method, device, picture pick-up device and storage medium |
CN109543534A (en) * | 2018-10-22 | 2019-03-29 | 中国科学院自动化研究所南京人工智能芯片创新研究院 | Target loses the method and device examined again in a kind of target following |
CN109543534B (en) * | 2018-10-22 | 2020-09-01 | 中国科学院自动化研究所南京人工智能芯片创新研究院 | Method and device for re-detecting lost target in target tracking |
CN110084777A (en) * | 2018-11-05 | 2019-08-02 | 哈尔滨理工大学 | A kind of micro parts positioning and tracing method based on deep learning |
CN109697727A (en) * | 2018-11-27 | 2019-04-30 | 哈尔滨工业大学(深圳) | Target tracking method, system and storage medium based on correlation filtering and metric learning |
CN109671103A (en) * | 2018-12-12 | 2019-04-23 | 易视腾科技股份有限公司 | Method for tracking target and device |
CN111489284A (en) * | 2019-01-29 | 2020-08-04 | 北京搜狗科技发展有限公司 | Image processing method and device for image processing |
CN111489284B (en) * | 2019-01-29 | 2024-02-06 | 北京搜狗科技发展有限公司 | Image processing method and device for image processing |
WO2020187095A1 (en) * | 2019-03-20 | 2020-09-24 | 深圳市道通智能航空技术有限公司 | Target tracking method and apparatus, and unmanned aerial vehicle |
CN111836102A (en) * | 2019-04-23 | 2020-10-27 | 杭州海康威视数字技术股份有限公司 | Video frame analysis method and device |
CN111836102B (en) * | 2019-04-23 | 2023-03-24 | 杭州海康威视数字技术股份有限公司 | Video frame analysis method and device |
CN111986229A (en) * | 2019-05-22 | 2020-11-24 | 阿里巴巴集团控股有限公司 | Video target detection method, device and computer system |
CN110176027B (en) * | 2019-05-27 | 2023-03-14 | 腾讯科技(深圳)有限公司 | Video target tracking method, device, equipment and storage medium |
CN110176027A (en) * | 2019-05-27 | 2019-08-27 | 腾讯科技(深圳)有限公司 | Video target tracking method, device, equipment and storage medium |
CN110211153A (en) * | 2019-05-28 | 2019-09-06 | 浙江大华技术股份有限公司 | Target tracking method, target tracking device and computer storage medium |
CN110084835B (en) * | 2019-06-06 | 2020-08-21 | 北京字节跳动网络技术有限公司 | Method and apparatus for processing video |
CN110084835A (en) * | 2019-06-06 | 2019-08-02 | 北京字节跳动网络技术有限公司 | Method and apparatus for processing video |
CN110334635A (en) * | 2019-06-28 | 2019-10-15 | Oppo广东移动通信有限公司 | Subject tracking method, apparatus, electronic device and computer-readable storage medium |
CN110634151A (en) * | 2019-08-01 | 2019-12-31 | 西安电子科技大学 | Single-target tracking method |
CN110634151B (en) * | 2019-08-01 | 2022-03-15 | 西安电子科技大学 | Single-target tracking method |
CN110472594A (en) * | 2019-08-20 | 2019-11-19 | 腾讯科技(深圳)有限公司 | Target tracking method, information insertion method and device |
CN110472594B (en) * | 2019-08-20 | 2022-12-06 | 腾讯科技(深圳)有限公司 | Target tracking method, information insertion method and equipment |
CN112446241A (en) * | 2019-08-29 | 2021-03-05 | 阿里巴巴集团控股有限公司 | Method and device for obtaining characteristic information of target object and electronic equipment |
CN110647836A (en) * | 2019-09-18 | 2020-01-03 | 中国科学院光电技术研究所 | A Robust Deep Learning-Based Single Target Tracking Method |
CN110634153A (en) * | 2019-09-19 | 2019-12-31 | 上海眼控科技股份有限公司 | Target tracking template updating method and device, computer equipment and storage medium |
CN110661977B (en) * | 2019-10-29 | 2021-08-03 | Oppo广东移动通信有限公司 | Subject detection method and apparatus, electronic device, computer-readable storage medium |
WO2021082883A1 (en) * | 2019-10-29 | 2021-05-06 | Oppo广东移动通信有限公司 | Main body detection method and apparatus, and electronic device and computer readable storage medium |
CN110661977A (en) * | 2019-10-29 | 2020-01-07 | Oppo广东移动通信有限公司 | Subject detection method and apparatus, electronic device, computer-readable storage medium |
CN111145215B (en) * | 2019-12-25 | 2023-09-05 | 北京迈格威科技有限公司 | Target tracking method and device |
CN111145215A (en) * | 2019-12-25 | 2020-05-12 | 北京迈格威科技有限公司 | Target tracking method and device |
CN111311639B (en) * | 2019-12-31 | 2022-08-26 | 山东工商学院 | Multi-search-space fast-motion adaptive update interval tracking method |
CN111311639A (en) * | 2019-12-31 | 2020-06-19 | 山东工商学院 | Multi-search-space fast-motion adaptive update interval tracking method |
CN111242981A (en) * | 2020-01-21 | 2020-06-05 | 北京捷通华声科技股份有限公司 | Tracking method and device for fixed object and security equipment |
CN112639405A (en) * | 2020-05-07 | 2021-04-09 | 深圳市大疆创新科技有限公司 | State information determination method, device, system, movable platform and storage medium |
WO2021223166A1 (en) * | 2020-05-07 | 2021-11-11 | 深圳市大疆创新科技有限公司 | State information determination method, apparatus and system, and movable platform and storage medium |
CN111626263A (en) * | 2020-06-05 | 2020-09-04 | 北京百度网讯科技有限公司 | Video interesting area detection method, device, equipment and medium |
CN111626263B (en) * | 2020-06-05 | 2023-09-05 | 北京百度网讯科技有限公司 | Video region of interest detection method, device, equipment and medium |
CN111754541A (en) * | 2020-07-29 | 2020-10-09 | 腾讯科技(深圳)有限公司 | Target tracking method, device, equipment and readable storage medium |
CN111754541B (en) * | 2020-07-29 | 2023-09-19 | 腾讯科技(深圳)有限公司 | Target tracking method, device, equipment and readable storage medium |
CN112466121A (en) * | 2020-12-12 | 2021-03-09 | 江西洪都航空工业集团有限责任公司 | Speed measuring method based on video |
CN113869163A (en) * | 2021-09-18 | 2021-12-31 | 北京远度互联科技有限公司 | Target tracking method and device, electronic equipment and storage medium |
CN113989696A (en) * | 2021-09-18 | 2022-01-28 | 北京远度互联科技有限公司 | Target tracking method and device, electronic equipment and storage medium |
CN113793365A (en) * | 2021-11-17 | 2021-12-14 | 第六镜科技(成都)有限公司 | Target tracking method and device, computer equipment and readable storage medium |
CN119169058A (en) * | 2024-11-25 | 2024-12-20 | 思翼科技(深圳)有限公司 | Target tracking method, device, electronic device and storage medium |
CN119169058B (en) * | 2024-11-25 | 2025-03-07 | 思翼科技(深圳)有限公司 | Target tracking method, device, electronic device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN107066990B (en) | 2019-10-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107066990B (en) | Target tracking method and mobile device | |
CN105512685B (en) | Object identification method and device | |
CN110532984B (en) | Key point detection method, gesture recognition method, device and system | |
CN113095124B (en) | A face life detection method, device and electronic equipment | |
JP2020518078A (en) | METHOD AND APPARATUS FOR OBTAINING VEHICLE LOSS EVALUATION IMAGE, SERVER, AND TERMINAL DEVICE | |
US9998651B2 (en) | Image processing apparatus and image processing method | |
KR20210102180A (en) | Image processing method and apparatus, electronic device and storage medium | |
US20200053279A1 (en) | Camera operable using natural language commands | |
CN103729120A (en) | Method for generating thumbnail image and electronic device thereof | |
CN107787463B (en) | Optimized capture of focus stacks | |
CN107871001B (en) | Audio playback method, device, storage medium and electronic device | |
CN106326853A (en) | Human face tracking method and device | |
CN109978891A (en) | Image processing method and device, electronic equipment and storage medium | |
CN107948510A (en) | Focus adjustment method, apparatus and storage medium | |
CN108781252A (en) | Image capturing method and device | |
US11679301B2 (en) | Step counting method and apparatus for treadmill | |
CN113873166A (en) | Video shooting method, apparatus, electronic device and readable storage medium | |
CN111523402B (en) | Video processing method, mobile terminal and readable storage medium | |
CN104205031A (en) | Image zoom method and equipment | |
CN108139564A (en) | Focusing control apparatus, photographic device, focusing control method and focusing control program | |
CN111091034B (en) | Question searching method based on multi-finger recognition and home teaching equipment | |
CN107003730A (en) | Electronic device, photographing method and photographing system | |
WO2018045565A1 (en) | Control display method and device for flexible display device | |
CN112884040B (en) | Training sample data optimization method, system, storage medium and electronic equipment | |
CN111077992A (en) | Point reading method, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||