
CN115047903B - A method and device for automatically guiding, identifying and tracking a target - Google Patents


Info

Publication number
CN115047903B
CN115047903B
Authority
CN
China
Prior art keywords
target
tracking
hit
algorithm
striking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210490085.0A
Other languages
Chinese (zh)
Other versions
CN115047903A (en)
Inventor
刘蝉
米颖
彭延云
侯师
贾彦翔
邱旭阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Machinery Equipment Research Institute
Original Assignee
Beijing Machinery Equipment Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Machinery Equipment Research Institute
Priority to CN202210490085.0A
Publication of CN115047903A
Application granted
Publication of CN115047903B
Legal status: Active

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10 Simultaneous control of position or course in three dimensions
    • G05D1/101 Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Aiming, Guidance, Guns With A Light Source, Armor, Camouflage, And Targets (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The disclosure relates to a method, an apparatus, an electronic device, and a storage medium for automatically guiding the identification and tracking of a target. For target detection, the method constructs a lightweight feature extraction base network on a YOLOv5 recognition model, improving target detection speed on the airborne end. For the multi-target problem, it combines the DeepSORT algorithm for inter-frame multi-target association. For stable single-target tracking, it optimizes scale feature extraction on the basis of the ECO correlation filter tracking algorithm and combines Kalman-filter prediction of the motion trajectory to reduce the search range and improve tracking speed. For the problems of agile target maneuvering and target loss, it designs a target occlusion and tracking judgment strategy, and trains the filter on targets already identified in the target detection stage, improving tracking precision.

Description

Method and device for automatically guiding, identifying and tracking target
Technical Field
The present disclosure relates to the field of object recognition and tracking, and in particular, to a method, an apparatus, an electronic device, and a computer readable storage medium for automatically guiding recognition and tracking of an object.
Background
Television guidance technology was first applied to the terminal guidance of missiles; it was adopted in the United States by the Walleye I/II and GBU-15(V)1/B guided bombs, among others. In recent years, as unmanned flying platforms have developed toward attack-capable, miniaturized, low-cost designs and image processing technology has matured, countries around the world have competed to develop remotely guided or direct-attack flying platforms that use television guidance technology.
Low-altitude small unmanned aerial vehicles have wingspans of 15 cm to 30 cm, and most similar products currently perform ground-attack tasks. Their guidance systems are mainly radar-guided attack systems and television guidance systems. Radar guidance was applied to expendable small unmanned aerial vehicles very early, but the vehicle emits a large amount of electromagnetic radiation and is easily subjected to electronic jamming, which greatly limits its combat capability. Television guidance can acquire rich target information and facilitates searching for targets against complex backgrounds, so applying television guidance technology to small unmanned aerial vehicles has become a development trend.
In television guidance, a camera mounted at the front end of the flying platform serves as the sensor for detecting the target; it measures the angular coordinates of the target in space, locks onto and tracks the target, and then controls the platform to guide itself toward and attack the target. Television guidance is divided into a semi-automatic "man-in-the-loop" mode and a fully automatic "fire-and-forget" mode. In the semi-automatic mode, the entire process from target locking to striking requires manual control; efficiency is low, the labor input is large, and the system is paralyzed as soon as the information transmission link is interrupted. In the fully automatic mode, after the shooter aims at and locks the target, the missile completes the target tracking and striking tasks automatically. This mode is highly efficient and, particularly in anti-tank missile applications in individual-soldier combat environments, can effectively protect the life of the shooter, but it places high design demands on the target tracking algorithm.
Existing semi-automatic television guidance technology is mature and is mainly applied to strike tasks of large missiles against ground targets. The platform's flight distance and guidance time are long, leaving ample time for judgment, decision-making and correction, and the camera can be zoomed manually in time during guidance to observe the target clearly. The seeker typically fitted has high performance and a high price, and the targets are large in imaged size (e.g. pedestrians, vehicles, building structures), move slowly relative to the platform, and are easy to lock. Existing fully automatic television guidance technology handles comparatively simple targets and scenes; it uses target detection methods such as statistical pattern recognition, template matching, and support-vector-machine-trained classification, which are low in intelligence, reliability and sensitivity, and is not suitable for target locking against complex backgrounds.
However, for small flying platforms with small size and limited payload weight, and especially considering cost constraints, existing television guidance algorithms have the following disadvantages:
1) Even though existing fully automatic television guidance still relies mainly on a shooter to aim, the detection methods used are mostly traditional image processing methods such as background segmentation and moving-target detection; detection accuracy is low, the target type cannot be identified accurately, and the situation of multiple targets in the field of view cannot be handled;
2) A small flying platform may be used against air and ground targets at short range (no more than 4 km). Unlike typical targets such as vehicles or fixed buildings, the targets here, "low-slow-small" targets such as rotor unmanned aerial vehicles in the air and targets such as pedestrians on the ground, maneuver flexibly and do not follow a linear motion law, so simple linear trajectory prediction has difficulty locking onto them stably;
3) Shooting conditions are blurry, the target's scale changes greatly, and continuous tracking is difficult: only a short time (2-3 seconds) remains after the flying platform locks the target, the camera's zoom cannot react in time, and for low cost a fixed-focus camera is used, so a clear image of the target cannot be maintained throughout guidance; the range of target scale change is large and continuous tracking is very difficult.
In the prior art, an aerospace-background multi-target detection and tracking method (publication number: CN107993245B) addresses aerial and space targets; although it realizes multi-target detection and tracking, it detects targets with a simple binarization segmentation based on an image color channel, so its reliability is low and it is not suitable for target detection in complex ground environments. A YOLO-and-Bernoulli-filtering video multi-target detection and tracking method (publication number: CN110084831) realizes multi-target detection and tracking based on YOLO and Bernoulli filtering, but the detection algorithm is computationally heavy and unsuitable for an airborne computing end, and its tracking algorithm mainly considers the association between detected targets, so it is not suitable for continuous single-target tracking with large scale changes when no detection assistance is available.
Accordingly, there is a need for one or more approaches to address the above-described problems.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
It is an object of the present disclosure to provide a method, apparatus, electronic device, and computer-readable storage medium for automatically guiding recognition of a tracking target, which overcome, at least in part, one or more of the problems due to the limitations and disadvantages of the related art.
According to one aspect of the present disclosure, there is provided a method of automatically guiding recognition of a tracking target, including:
a target searching step, enabling a servo system of a photoelectric pod of a flight platform to control a camera of the flight platform to rotate based on preset scanning control logic, and realizing full-range searching of a target;
a multi-target detection step, in which, after the camera of the flying platform detects a target, the multiple targets are identified based on the YOLOv5s algorithm and the DeepSORT algorithm, and the camera of the flying platform is controlled through the servo system of the photoelectric pod of the flying platform to rotate with the target movement, so as to track the multiple targets;
And a single target tracking and striking step, wherein after one target in the multiple targets is determined as a striking target, a servo system of a photoelectric pod of a flying platform is enabled to control a camera to acquire an image of the striking target, tracking of the striking target is realized based on a scale estimation algorithm and a background suppression regularization algorithm, and the distance between the flying platform and the striking target is calculated based on a laser range finder so as to realize tracking and striking of the striking target.
In one exemplary embodiment of the present disclosure, the multi-target detection step of the method includes:
after the camera of the flying platform detects the targets and identifies the multiple targets, the camera of the flying platform is controlled through the servo system of the photoelectric pod of the flying platform to rotate with the movement of the targets, so as to track the multiple targets;
and tracking filter training is started to complete learning of the target features.
In an exemplary embodiment of the present disclosure, the method further comprises:
after the camera of the flying platform detects the targets and identifies the multiple targets, feature extraction is performed on the images of the multiple targets based on a YOLOv5s rapid detection network;
attention to the targets is enhanced on the extracted features based on a channel attention module and a spatial attention module;
and tracking and identifying the multiple targets based on DeepSORT algorithm.
In one exemplary embodiment of the present disclosure, the single target tracking striking step of the method further includes:
and when the hit target is tracked, predicting and adjusting a search area of the tracking algorithm based on the target scale change rate of the image of the hit target.
In an exemplary embodiment of the present disclosure, the method further comprises:
and when the hit target is tracked, adjusting a target characteristic calculation size limit value based on the distance between the flight platform and the hit target calculated by the laser range finder.
In an exemplary embodiment of the present disclosure, the method further comprises:
based on a background suppression regularization algorithm, according to a tracking response result of the image of the hit target, a regular coefficient matrix of the background suppression regularization algorithm is adaptively adjusted, so that background suppression of the hit target is realized.
In an exemplary embodiment of the present disclosure, the method further comprises:
performing loss/occlusion judgment on the hit target based on the peak sidelobe ratio calculated by the ECO tracking algorithm and on the hit target's historical position and track prediction;
when the target is judged to be lost, changing to a global search for the target;
and when occlusion is judged, continuing to search for the hit target according to the hit target's historical position and track prediction.
In one exemplary embodiment of the present disclosure, the single target tracking striking step of the method further includes:
After one target in the multiple targets is determined to be a hit target, a servo system of a photoelectric pod of a flying platform is enabled to control a camera to acquire an image of the hit target, and tracking of the hit target is achieved based on a scale estimation algorithm and a background suppression regularization algorithm;
And calculating the distance between the flight platform and the hit target based on a laser range finder, and calculating the three-dimensional coordinate of the hit target based on the geodetic coordinate so as to realize tracking hit of the hit target.
In one aspect of the present disclosure, there is provided an apparatus for automatically guiding an identification tracking target, including:
The target searching module is used for enabling a servo system of the photoelectric pod of the flight platform to control the camera of the flight platform to rotate based on preset scanning control logic so as to realize full-range searching of targets;
The multi-target detection module is used for identifying the multiple targets based on the YOLOv5s algorithm and the DeepSORT algorithm after the targets are detected by the camera of the flying platform, and for controlling the camera of the flying platform through the servo system of the photoelectric pod of the flying platform to rotate with the target movement so as to track the multiple targets;
And the single-target tracking and striking module is used for enabling a servo system of a photoelectric pod of the flying platform to control a camera to acquire an image of the striking target after one target in the multiple targets is determined as the striking target, realizing the tracking of the striking target based on a scale estimation algorithm and a background suppression regularization algorithm, and calculating the distance between the flying platform and the striking target based on a laser range finder so as to realize the tracking and striking of the striking target.
In one aspect of the present disclosure, there is provided an electronic device comprising:
Processor, and
A memory having stored thereon computer readable instructions which, when executed by the processor, implement a method according to any of the above.
In one aspect of the present disclosure, a computer readable storage medium is provided, on which a computer program is stored, which when executed by a processor, implements a method according to any of the above.
The method for automatically guiding the identification and tracking of a target in the exemplary embodiments of the disclosure provides a target detection and tracking guidance algorithm that: for target detection, constructs a lightweight feature extraction base network on a YOLOv5 recognition model, improving target detection speed on the airborne end; for the multi-target problem, performs inter-frame multi-target association in combination with the DeepSORT algorithm; for stable single-target tracking, optimizes scale feature extraction on the basis of the ECO correlation filter tracking algorithm and combines Kalman-filter prediction of the motion trajectory to reduce the search range and improve tracking speed; and, for the problems of agile target maneuvering and target loss, designs a target occlusion and tracking judgment strategy and trains the filter on targets already identified in the target detection stage, improving tracking precision.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The above and other features and advantages of the present disclosure will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings.
FIG. 1 illustrates a flowchart of a method of automatically guiding recognition of a tracking target in accordance with an exemplary embodiment of the present disclosure;
FIGS. 2A-2B illustrate a target detection and tracking guidance algorithm flowchart of a method of automatically guiding identification of a tracking target according to an exemplary embodiment of the present disclosure;
FIG. 3 illustrates a guidance algorithm hardware system and functional diagram of a method of automatically guiding identification of a tracking target according to an exemplary embodiment of the present disclosure;
FIG. 4 illustrates a schematic block diagram of an apparatus for automatically guiding recognition of a tracking target according to an exemplary embodiment of the present disclosure;
FIG. 5 schematically illustrates a block diagram of an electronic device in accordance with an exemplary embodiment of the present disclosure, and
Fig. 6 schematically illustrates a schematic diagram of a computer-readable storage medium according to an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments can be embodied in many different forms and should not be construed as limited to the embodiments set forth herein, but rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the exemplary embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the disclosed aspects may be practiced without one or more of the specific details, or with other methods, components, materials, devices, steps, etc. In other instances, well-known structures, methods, devices, implementations, materials, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams depicted in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, these functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
In the present exemplary embodiment, there is first provided a method of automatically guiding and identifying a tracking target, referring to fig. 1, the method of automatically guiding and identifying a tracking target may include the steps of:
step S110 of searching the target, enabling a servo system of a photoelectric pod of the flying platform to control a camera of the flying platform to rotate based on preset scanning control logic, and realizing full-range searching of the target;
Step S120 of multi-target detection: after the camera of the flying platform detects a target, the multiple targets are identified based on the YOLOv5s algorithm and the DeepSORT algorithm, and the camera of the flying platform is controlled through the servo system of the photoelectric pod of the flying platform to rotate with the target movement, so as to track the multiple targets;
And a single-target tracking and striking step S130, wherein after one target in the multiple targets is determined to be a striking target, a servo system of a photoelectric pod of the flying platform is enabled to control a camera to acquire an image of the striking target, tracking of the striking target is realized based on a scale estimation algorithm and a background suppression regularization algorithm, and the distance between the flying platform and the striking target is calculated based on a laser range finder so as to realize tracking and striking of the striking target.
The method for automatically guiding the identification and tracking of a target in the exemplary embodiments of the disclosure provides a target detection and tracking guidance algorithm that: for target detection, constructs a lightweight feature extraction base network on a YOLOv5 recognition model, improving target detection speed on the airborne end; for the multi-target problem, performs inter-frame multi-target association in combination with the DeepSORT algorithm; for stable single-target tracking, optimizes scale feature extraction on the basis of the ECO correlation filter tracking algorithm and combines Kalman-filter prediction of the motion trajectory to reduce the search range and improve tracking speed; and, for the problems of agile target maneuvering and target loss, designs a target occlusion and tracking judgment strategy and trains the filter on targets already identified in the target detection stage, improving tracking precision.
Next, a method of automatically guiding recognition of a tracking target in the present exemplary embodiment will be further described.
In the present exemplary embodiment, a low-altitude fast flying platform serves as the carrier, with a flight speed of 30 m/s to 50 m/s, a maximum flight distance of not less than 5 km, and a maximum height above ground of not less than 500 m; a "visible light + laser ranging" pod serves as the detection sensor. An automatic target detection and tracking guidance algorithm is designed as follows: for target detection, a lightweight feature extraction base network is built on a YOLOv5 recognition model, improving target detection speed on the airborne end; for the multi-target problem, the DeepSORT algorithm is combined to perform inter-frame multi-target association; for stable single-target tracking, scale feature extraction is optimized on the basis of the ECO correlation filter tracking algorithm, and Kalman-filter prediction of the motion trajectory is combined to reduce the search range and improve tracking speed; for the problems of agile target maneuvering and target loss, a target occlusion and tracking judgment strategy is designed, and the filter is trained on targets already identified in the target detection stage, improving tracking precision.
In the present exemplary embodiment, the guidance algorithm based on target detection and tracking works as follows: in the early stage, the ground photoelectric or radar system detects the target's GPS position and guides the unmanned flying platform to launch toward the target; after the unmanned flying platform visually finds the target, the servo system controls the visible light camera to rotate, the target is searched for and identified intelligently, and the television guidance stage is entered.
In the present exemplary embodiment, as shown in fig. 2A, which is the target detection and tracking guidance flowchart, the process from approaching the target to hitting it after the unmanned flying platform is launched can be divided into three stages:
1) The target searching stage: only the target detection function is running; the target has not yet entered the camera's field of view, and the servo system controls the camera to rotate and search for the target over a wide area according to the preset scanning control logic;
2) The target discovery and stable detection stage: once a target is detected, the multi-target detection and tracking function is started, the servo system rotates according to the concentrated position of the multiple targets at that moment to ensure that the targets appear in the camera's field of view, and training of the tracking filter is started at the same time so as to learn the target features in advance;
3) The target locking and stable tracking stage: after the long-duration single-target stable tracking function is started, the servo system controls the camera to stay locked on the target according to the target's movement, the laser range finder measures the target distance, and the three-dimensional coordinates of the target are calculated in real time after a series of coordinate conversions (photoelectric pod, flight platform, geodetic coordinates), so as to track and hit the target; a simplified sketch of this conversion is given below.
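As an illustration of the last conversion step only, the following minimal Python sketch turns a pod line-of-sight angle pair and the laser slant range into a target offset in a local East-North-Up frame. It deliberately ignores platform attitude compensation and pod mounting offsets, which the full chain of conversions described above would include, and all function and parameter names are hypothetical.

```python
import math

def target_offset_enu(azimuth_deg: float, elevation_deg: float, slant_range_m: float):
    """Project the laser slant range along the line of sight into a local
    East-North-Up offset (simplified: angles assumed already level-compensated)."""
    az = math.radians(azimuth_deg)      # measured clockwise from north
    el = math.radians(elevation_deg)    # negative when looking down at the target
    horizontal = slant_range_m * math.cos(el)
    east = horizontal * math.sin(az)
    north = horizontal * math.cos(az)
    up = slant_range_m * math.sin(el)
    return east, north, up

# example: target 1.2 km away, 30 degrees right of north, 15 degrees below the horizon
print(target_offset_enu(30.0, -15.0, 1200.0))
```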
There are two ways to cut in to single-target stable tracking: 1. intelligent switching, in which, under single-target or multi-target detection and tracking, the optimal attack target is judged and selected according to a preset strategy once the number of frames with stable target detections meets a threshold requirement; 2. manual switching, in which the ground shooter manually selects the target from the returned target detection images to switch into the target tracking stage.
In the target searching step S110, a servo system of a photoelectric pod of the flying platform may be controlled to rotate a camera of the flying platform based on a preset scanning control logic, so as to realize full-range searching of the target.
In the multi-target detection step S120, after a target is detected by the camera of the flying platform, the multiple targets can be identified based on the YOLOv5s algorithm and the DeepSORT algorithm, and the camera of the flying platform is controlled by the servo system of the photoelectric pod of the flying platform to rotate with the target movement, so as to track the multiple targets.
In an embodiment of the present example, the multi-target detection step of the method comprises:
after the camera of the flying platform detects the targets and identifies the multiple targets, the camera of the flying platform is controlled through the servo system of the photoelectric pod of the flying platform to rotate with the movement of the targets, so as to track the multiple targets;
and tracking filter training is started to complete learning of the target features.
In an embodiment of the present example, the method further comprises:
after the camera of the flying platform detects the targets and identifies the multiple targets, feature extraction is performed on the images of the multiple targets based on a YOLOv5s rapid detection network;
attention to the targets is enhanced on the extracted features based on a channel attention module and a spatial attention module;
and tracking and identifying the multiple targets based on DeepSORT algorithm.
In the present exemplary embodiment, for multi-target detection, a lightweight detection model is designed according to the limited performance of the airborne computing unit and the small targets of the application scene. The YOLOv5s model is taken as the basic framework, the feature extraction part is given a lightweight design, an attention mechanism is introduced to optimize detection precision for small targets, and the model is combined with DeepSORT to realize inter-frame association of the same target's track, i.e., multi-target tracking.
In the present exemplary embodiment, the lightweight design of the target feature extraction base network is an optimization of the end-to-end YOLOv5s rapid detection network in which the feature extraction base network is replaced by a redesigned classical lightweight network, MobileNet. Specifically: for the large computational cost of convolution, channels are compressed by an added convolution before the depthwise separable convolution operation, and grouped convolution is used on the pointwise convolution, which accounts for the largest share of the depthwise separable convolution, to improve computation speed; for the collapse of ReLU on low-dimensional data, the Mish activation function replaces part of the ReLUs, allowing a small gradient to flow at negative values so that information penetrates the network better, improving the accuracy and generalization ability of the network; and for the problem of features not being reused, a residual structure is added that skips one or more weight layers so that gradients can reach the shallower layers unimpeded.
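The following is a minimal PyTorch sketch of one such lightweight block, combining channel compression, a depthwise convolution, a grouped pointwise convolution, Mish activation and a residual shortcut. The layer sizes, group count and compression ratio are illustrative assumptions, not the exact network used in this disclosure.

```python
import torch
import torch.nn as nn

class LightweightBlock(nn.Module):
    """Depthwise-separable block with channel compression, grouped pointwise
    convolution, Mish activation and a residual shortcut (illustrative only)."""

    def __init__(self, channels: int, compress_ratio: int = 2, groups: int = 4):
        super().__init__()
        squeezed = channels // compress_ratio
        # channels and squeezed must be divisible by `groups`
        # 1x1 convolution compresses channels before the depthwise stage
        self.compress = nn.Conv2d(channels, squeezed, kernel_size=1, bias=False)
        # depthwise 3x3 convolution: one filter per channel
        self.depthwise = nn.Conv2d(squeezed, squeezed, kernel_size=3,
                                   padding=1, groups=squeezed, bias=False)
        # grouped pointwise convolution replaces the full 1x1 projection
        self.pointwise = nn.Conv2d(squeezed, channels, kernel_size=1,
                                   groups=groups, bias=False)
        self.bn = nn.BatchNorm2d(channels)
        # Mish keeps a small negative gradient flowing, unlike ReLU
        self.act = nn.Mish()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.compress(x)
        out = self.depthwise(out)
        out = self.pointwise(out)
        out = self.bn(out)
        # residual shortcut lets gradients bypass the weight layers
        return self.act(out + x)
```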
In the present exemplary embodiment, the YOLOv5s attention mechanism design works as follows. On the basis of replacing the YOLOv5s feature extraction network with the redesigned MobileNet network, and aiming at insufficient multi-scale detection and missed detection of small targets, an attention mechanism is introduced by embedding a CBAM module; its effect is to let the network know which parts to focus on, so that important features are expressed saliently and less salient features are suppressed. The feature map obtained from the network is taken as the input to the CBAM module. Following CBAM's division into a channel attention module and a spatial attention module, the input feature map is first convolved and then sent to the channel attention module within the CBAM module, then to the spatial attention module, and finally is combined with the input feature map to obtain the output of the whole module. The channel attention module of the first part performs global average pooling and global max pooling on the input feature map, obtains outputs through two fully connected layers, sums the two features, obtains a scaling factor through a sigmoid activation function, and multiplies it with the input feature map to obtain the channel attention module's output. The spatial attention module of the second part performs average pooling and max pooling on the output of the channel attention module, concatenates the two pooled results along the channel dimension, obtains the spatial attention scaling factor through a convolution and a sigmoid activation function, and multiplies this scaling factor with the channel attention module's output to obtain the spatial attention module's output; finally, the output of the two modules in series is added to the input of the CBAM module to obtain the new feature output by the whole CBAM. Experimental comparison shows that applying the channel attention module followed by the spatial attention module works better than processing the input with the two modules in parallel or applying them in the reverse order.
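A compact PyTorch sketch of the CBAM arrangement described above is given below; the reduction ratio and the 7x7 spatial-attention kernel follow common CBAM defaults and are assumptions rather than values stated in this disclosure.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # shared MLP (two fully connected layers) applied to pooled descriptors
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))                 # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))                  # global max pooling
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)   # channel scaling factor
        return x * scale

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)                  # average over channels
        mx, _ = x.max(dim=1, keepdim=True)                 # max over channels
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale

class CBAM(nn.Module):
    """Channel attention followed by spatial attention, with a residual add."""
    def __init__(self, channels: int):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        out = self.sa(self.ca(x))
        return out + x   # the module output is added back to the CBAM input
```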
In the present exemplary embodiment, YOLOv5s + DeepSORT multi-target tracking uses the DeepSORT tracking algorithm. Compared with the multi-target online tracking algorithm SORT (Simple Online and Realtime Tracking), DeepSORT combines the target's motion information and appearance information as the association metric, which alleviates tracking failures caused by a target reappearing after disappearing. DeepSORT initializes trackers from the detector's results; each tracker keeps a counter that is incremented after each Kalman-filter prediction and is reset to 0 when the predicted and detected results are matched successfully. A tracker is deleted if it fails to match a suitable detection result for a period of time. DeepSORT assigns a tracker to each new detection result in every frame; when the tracker's predictions can be matched with detections for 3 consecutive frames, a new trajectory is confirmed, otherwise the tracker is deleted.
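The tracker lifecycle just described can be summarized in a short Python sketch; the max_age value and the helper names are hypothetical, and a real DeepSORT tracker additionally carries Kalman state and appearance embeddings.

```python
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    hits: int = 0          # consecutive frames with a matched detection
    misses: int = 0        # frames since the last successful match
    confirmed: bool = False

class TrackManager:
    """Lifecycle only: confirm a new track after 3 consecutive matched frames,
    delete a track after max_age unmatched frames (illustrative sketch)."""

    def __init__(self, max_age: int = 30, n_init: int = 3):
        self.max_age = max_age
        self.n_init = n_init
        self.tracks: list[Track] = []
        self._next_id = 1

    def update(self, matched_ids: set[int], num_unmatched_detections: int):
        for t in self.tracks:
            if t.track_id in matched_ids:
                t.hits += 1
                t.misses = 0
                if t.hits >= self.n_init:
                    t.confirmed = True    # a new trajectory is confirmed
            else:
                t.hits = 0                # the consecutive-match streak is broken
                t.misses += 1
        # delete trackers that have gone unmatched for too long
        self.tracks = [t for t in self.tracks if t.misses <= self.max_age]
        # initialise a tracker for every unmatched detection
        for _ in range(num_unmatched_detections):
            self.tracks.append(Track(self._next_id))
            self._next_id += 1
```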
In the present exemplary embodiment, multi-target tracking is introduced to associate otherwise unrelated detections across frames, which makes the method suitable for locking tasks involving multiple targets and multiple targets of the same type, in particular the recently prominent problem of unmanned aerial vehicle swarms. Introducing multi-target tracking improves the strike decision of a single flying platform based on multi-target tracking, and lays a capability foundation for multiple flying platforms to cooperatively strike an unmanned aerial vehicle swarm.
In the single-target tracking and hitting step S130, after one target of the multiple targets is determined as the hit target, the servo system of the photoelectric pod of the flying platform controls the camera to acquire an image of the hit target; stable tracking of the hit target is achieved by optimizing, on the basis of the ECO correlation filter tracking method, with a scale estimation algorithm and a background suppression regularization algorithm; and the distance between the flying platform and the hit target is calculated based on a laser range finder, so as to track and hit the hit target.
In an embodiment of the present example, the single target tracking striking step of the method further comprises:
and when the hit target is tracked, predicting and adjusting a search area of the tracking algorithm based on the target scale change rate of the image of the hit target.
In an embodiment of the present example, the method further comprises:
and when the hit target is tracked, adjusting a target characteristic calculation size limit value based on the distance between the flight platform and the hit target calculated by the laser range finder.
In the present exemplary embodiment, the scale estimation and calculation simplification design addresses the following: with a fixed-focus camera, as the unmanned flying platform approaches the target, the target's scale changes greatly and the image definition decreases. In this scenario the target search range during tracking needs to be enlarged, but extracting and computing features over a large, blurred image yields little information while increasing the amount of computation.
In the present exemplary embodiment, the search area is adjusted based on a prediction of the target scale change, and a target scale change rate λ(t) is introduced: the target scale is described by a function r(t) that is continuously differentiable to any order with respect to time t, so that the target scale at time t0 is r(t0), and
λ(t0) = (∂r(t)/∂t)|t=t0 / r(t0),
where ∂r(t)/∂t is the partial derivative of the target scale function r(t) with respect to time t. An exact analytic form of the target scale function cannot be obtained in practical applications; considering the inertia of the target's motion, the target scale of the next frame is estimated by fitting a cubic polynomial to the target scales of the previous four frames, and the target search range is adjusted accordingly.
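A minimal sketch of this prediction step is shown below, assuming the scale history is available as a simple list; the padding factor in the search-window adjustment is a hypothetical choice, since the exact adjustment rule is not given here.

```python
import numpy as np

def predict_next_scale(scales: list[float]) -> float:
    """Fit a cubic polynomial to the target scales of the previous four
    frames and extrapolate one frame ahead (illustrative sketch)."""
    assert len(scales) >= 4
    t = np.arange(4, dtype=float)             # frame indices 0..3
    coeffs = np.polyfit(t, np.asarray(scales[-4:], dtype=float), deg=3)
    return float(np.polyval(coeffs, 4.0))     # extrapolate to frame 4

def adjust_search_size(base_size: float, prev_scale: float, next_scale: float,
                       padding: float = 2.0) -> float:
    """Grow the tracker's search window with the predicted scale change rate
    (a hypothetical adjustment rule, for illustration only)."""
    rate = (next_scale - prev_scale) / max(prev_scale, 1e-6)
    return base_size * padding * (1.0 + max(rate, 0.0))
```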
In the present exemplary embodiment, the size limitation on blurred-target feature calculation is as follows. In the target detection stage, the smallest target imaging that can be reliably identified is about 50×25 pixels (the image size is 1920×1080). During locked tracking of the target, especially when the flying platform is less than 10 meters from the target, the target's imaged size can grow roughly 20-fold; continuing to extract and analyze features for every pixel in this case greatly reduces the computation speed and cannot meet the real-time requirement. Although the imaged size of the target grows while approaching it, the shooting definition under the fixed-focus camera drops noticeably, so not much additional feature information is actually gained. Therefore, for this size-growing process, the search-area image is reduced in size before feature extraction, matching and tracking.
The specific adjustment logic is: a maximum feature-calculation threshold size is set; when the search area grows and the convolution-sized window exceeds the threshold, the image region used for calculation is downsampled to the threshold size, i.e., for tracking-matching calculations that exceed the threshold, feature analysis is performed after resizing to the threshold size.
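The downsampling rule can be sketched as follows; the particular threshold value is a placeholder, since no exact value is stated here.

```python
import cv2
import numpy as np

MAX_FEATURE_SIZE = (250, 250)   # hypothetical maximum feature-calculation size (h, w)

def limit_search_patch(patch: np.ndarray) -> np.ndarray:
    """Downsample the search-region image to the threshold size before
    feature extraction once it grows beyond the threshold (sketch only)."""
    h, w = patch.shape[:2]
    max_h, max_w = MAX_FEATURE_SIZE
    if h <= max_h and w <= max_w:
        return patch                          # small enough: use as-is
    scale = min(max_h / h, max_w / w)         # keep aspect ratio
    new_size = (max(1, int(w * scale)), max(1, int(h * scale)))  # (width, height)
    return cv2.resize(patch, new_size, interpolation=cv2.INTER_AREA)
```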
In an embodiment of the present example, the method further comprises:
based on a background suppression regularization algorithm, according to a tracking response result of the image of the hit target, a regular coefficient matrix of the background suppression regularization algorithm is adaptively adjusted, so that background suppression of the hit target is realized.
In the present exemplary embodiment, the background suppression regularization is designed as follows. In both the air and ground application scenarios of the unmanned flying platform the background is complex. Although the ECO algorithm uses spatial regularization, the regularization coefficient matrix ω used in that operation is a fixed, preset matrix; it cannot be adapted to the target's background. When similar targets exist in the background, the parameters of those regions should be penalized more heavily, while for regions without similar targets the regularization coefficients should be appropriately reduced.
In the present exemplary embodiment, the adaptive background suppression regularization method adaptively adjusts the regularization coefficient matrix ω of the spatial regularization term according to the filter response of the tracker, so that the penalty is concentrated on regions with similar background. Specifically, an adaptive background suppression regularization matrix ω_LA is introduced and linearly superimposed on ω to obtain the final regularization coefficient matrix, from which the background-suppressed objective function E(f) is obtained.
The background suppression regularization matrix ω_LA is updated adaptively according to the filter response:
ω_LA = k · R_t,
where k is the regularization gain coefficient and R_t is the filter response at time t.
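A small NumPy sketch of this superposition is given below; normalizing the response map and the particular gain value are illustrative assumptions rather than the exact formulation above.

```python
import numpy as np

def adaptive_regularization(omega: np.ndarray, response: np.ndarray,
                            gain: float = 0.5) -> np.ndarray:
    """Linearly superimpose a response-driven term onto the fixed spatial
    regularization matrix so that background regions producing strong filter
    responses (likely distractors) are penalized more heavily (sketch only)."""
    response_norm = np.abs(response) / (np.abs(response).max() + 1e-12)
    omega_la = gain * response_norm          # adaptive background-suppression term
    return omega + omega_la                  # final regularization coefficient matrix
```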
In an embodiment of the present example, the method further comprises:
performing loss/occlusion judgment on the hit target based on the peak sidelobe ratio calculated by the ECO tracking algorithm and on the hit target's historical position and track prediction;
when the target is judged to be lost, changing to a global search for the target;
and when occlusion is judged, continuing to search for the hit target according to the hit target's historical position and track prediction.
In the present exemplary embodiment, the flying platform is fast, a typical airborne unmanned-aerial-vehicle target maneuvers flexibly and easily leaves the camera's field of view, and ground application scenes are even more complex: typical moving targets such as pedestrians and vehicles not only leave the field of view easily but are also often occluded. To address this, the peak sidelobe ratio (PSLR) calculated by the ECO tracking algorithm is used to judge target occlusion and loss, and the tracking search strategy is adjusted on the basis of the target trajectory predicted by Kalman filtering. The interpretation and decision logic is shown in fig. 2B. When the PSLR calculated from a new frame is below the threshold, the response to the target template is weak and the target has not been found in that frame. The target's historical position and predicted trajectory are then combined: if the historical position lies in the boundary region of the field of view and the predicted trajectory leaves the field of view, the target is judged lost; if the historical position and predicted trajectory lie in the interior of the field of view, the target is judged occluded. The search range for the next frame is then set according to this judgment and the trajectory prediction: when the target is judged lost, the next frame searches globally; when the target is judged occluded, the search range is kept unchanged and only the search area's position is moved according to the target's predicted trajectory. If no target is found for 30 frames, tracking has failed and the target detection search stage is re-entered.
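The decision rule can be sketched as follows; the PSLR threshold, the exclusion window around the peak and the boolean boundary tests are simplified placeholders for the checks described above.

```python
import numpy as np

PSLR_THRESHOLD = 5.0        # hypothetical response-quality threshold
MAX_LOST_FRAMES = 30        # re-enter the detection search stage after 30 frames

def peak_sidelobe_ratio(response: np.ndarray, exclude: int = 5) -> float:
    """PSLR of a 2-D correlation response map: peak against the mean/std of the
    sidelobe region, with a small window around the peak excluded."""
    peak_idx = np.unravel_index(np.argmax(response), response.shape)
    peak = response[peak_idx]
    mask = np.ones_like(response, dtype=bool)
    y, x = peak_idx
    mask[max(0, y - exclude):y + exclude + 1, max(0, x - exclude):x + exclude + 1] = False
    sidelobe = response[mask]
    return float((peak - sidelobe.mean()) / (sidelobe.std() + 1e-12))

def decide(pslr: float, last_pos_in_border: bool, predicted_pos_in_view: bool) -> str:
    """Loss/occlusion decision following the strategy described above
    (boundary tests reduced to booleans for illustration)."""
    if pslr >= PSLR_THRESHOLD:
        return "tracking"                       # target found in this frame
    if last_pos_in_border and not predicted_pos_in_view:
        return "lost"        # search globally in the next frame
    return "occluded"        # keep the search size, move it along the predicted track
```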
In an embodiment of the present example, the single target tracking striking step of the method further comprises:
after one target in the multiple targets is determined to be a hit target, a servo system of a photoelectric pod of a flying platform is enabled to control a camera to acquire an image of the hit target, and tracking of the hit target is achieved based on an optimized ECO tracking algorithm;
And calculating the distance between the flight platform and the hit target based on a laser range finder, and calculating the three-dimensional coordinate of the hit target based on the geodetic coordinate so as to realize tracking hit of the hit target.
It should be noted that although the steps of the methods of the present disclosure are illustrated in a particular order in the figures, this does not require or imply that the steps must be performed in that particular order or that all of the illustrated steps must be performed in order to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform, etc.
In addition, in the present exemplary embodiment, an apparatus for automatically guiding recognition of a tracking target is also provided.
In the present exemplary embodiment, the guidance algorithm is implemented by the photoelectric pod and the information processor of the flying platform; the hardware system composition and functions are shown in fig. 3. The photoelectric pod comprises a visible light camera, a laser range finder and a servo system. The visible light camera consists of a fixed-focus optical lens and an imaging core, with a low-cost camera chosen for visible-light imaging of the target; the laser range finder measures the slant range to the target; the servo system uses a three-axis stabilized platform, which enlarges the camera's target search range, controls the camera during the tracking stage after the target is locked, and outputs the angle information when the target is tracked. The information processor performs parameter setting of the visible light measurement camera, acquisition and storage of images, target detection and tracking, data acquisition and timing control of the positioning and time-service module, control of the servo system, and so on.
Referring to fig. 4, the apparatus 400 for automatically guiding recognition of a tracking target may include a target search module 410, a multi-target detection module 420, and a single target tracking striking module 430.
Wherein:
the target searching module 410 is used for enabling a servo system of an optoelectronic pod of the flying platform to control a camera of the flying platform to rotate based on preset scanning control logic, so that full-range searching of a target is realized;
The multi-target detection module 420 is used for identifying the multiple targets based on the YOLOv5s algorithm and the DeepSORT algorithm after the targets are detected by the camera of the flying platform, and for controlling the camera of the flying platform through the servo system of the photoelectric pod of the flying platform to rotate with the target movement so as to track the multiple targets;
And the single-target tracking and hitting module 430 is configured to, after determining one of the multiple targets as a hit target, enable a servo system of a photoelectric pod of a flying platform to control a camera to acquire an image of the hit target, and implement tracking of the hit target based on a scale estimation algorithm and a background suppression regularization algorithm, and calculate a distance between the flying platform and the hit target based on a laser range finder, so as to implement tracking and hitting of the hit target.
The specific details of each of the above device modules for automatically guiding and identifying the tracking target are described in detail in a corresponding method for automatically guiding and identifying the tracking target, and thus will not be described herein.
It should be noted that although several modules or units of an apparatus 400 for automatically guiding an identification tracking target are mentioned in the above detailed description, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
In addition, in an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
Those skilled in the art will appreciate that the various aspects of the invention may be implemented as a system, method, or program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.) or an embodiment combining hardware and software aspects that may be referred to herein collectively as a "circuit," module "or" system.
An electronic device 500 according to such an embodiment of the invention is described below with reference to fig. 5. The electronic device 500 shown in fig. 5 is merely an example, and should not be construed as limiting the functionality and scope of use of embodiments of the present invention.
As shown in fig. 5, the electronic device 500 is embodied in the form of a general purpose computing device. The components of the electronic device 500 may include, but are not limited to, the at least one processing unit 510 described above, the at least one memory unit 520 described above, a bus 530 connecting the different system components (including the memory unit 520 and the processing unit 510), and a display unit 540.
Wherein the storage unit stores program code that is executable by the processing unit 510 such that the processing unit 510 performs steps according to various exemplary embodiments of the present invention described in the above-mentioned "exemplary methods" section of the present specification. For example, the processing unit 510 may perform steps S110 to S130 as shown in fig. 1.
The storage unit 520 may include readable media in the form of volatile storage units, such as Random Access Memory (RAM) 5201 and/or cache memory unit 5202, and may further include Read Only Memory (ROM) 5203.
The storage unit 520 may also include a program/utility 5204 having a set (at least one) of program modules 5205, such program modules 5205 including, but not limited to, an operating system, one or more application programs, other program modules, and program data; each or some combination of these examples may include an implementation of a network environment.
Bus 530 may be one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 500 may also communicate with one or more external devices 570 (e.g., keyboard, pointing device, Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 500, and/or with any device (e.g., router, modem, etc.) that enables the electronic device 500 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 550. Also, the electronic device 500 may communicate with one or more networks, such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet, through the network adapter 560. As shown, the network adapter 560 communicates with the other modules of the electronic device 500 over the bus 530. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with the electronic device 500, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, and includes several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium having stored thereon a program product capable of implementing the method described above in the present specification is also provided. In some possible embodiments, the various aspects of the invention may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the invention as described in the "exemplary methods" section of this specification, when said program product is run on the terminal device.
Referring to fig. 6, a program product 600 for implementing the above-described method according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of a readable storage medium include an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
Furthermore, the above-described drawings are only schematic illustrations of processes included in the method according to the exemplary embodiment of the present invention, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles, including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (7)

1. A method for automatically guiding the identification and tracking of a target, the method comprising:
a target searching step: causing a servo system of a photoelectric pod of a flight platform to control a camera of the flight platform to rotate according to preset scanning control logic, thereby realizing a full-range search for a target;
a multi-target detection, tracking-filter training and target feature learning step: after the camera of the flight platform captures targets, extracting features from an image of the multiple targets based on a YOLOv s rapid detection network; enhancing the target attention of the features based on a channel attention module and a spatial attention module; tracking and identifying the multiple targets based on the DeepSORT algorithm; controlling, through the servo system of the photoelectric pod, the camera of the flight platform to rotate with the motion of the targets, thereby realizing tracking of the multiple targets; and training the tracking filter to complete learning of the target features;
a single-target tracking and striking step: after one of the multiple targets is determined as the hit target, causing the servo system of the photoelectric pod of the flight platform to control the camera to acquire an image of the hit target; on the basis of an ECO correlation-filter tracking method, optimizing with a scale estimation algorithm and a background-suppression regularization algorithm to realize stable tracking of the hit target; calculating the distance between the flight platform and the hit target based on a laser range finder; and calculating the three-dimensional coordinates of the hit target based on geodetic coordinates, thereby realizing tracking and striking of the hit target; wherein optimizing with the scale estimation algorithm and the background-suppression regularization algorithm on the basis of the ECO correlation-filter tracking method to realize stable tracking of the hit target comprises:
performing loss/occlusion judgment on the hit target based on the peak sidelobe ratio calculated by the ECO tracking algorithm and on the historical position and predicted trajectory of the hit target;
when the hit target is judged to be lost, switching to a global search for the target; and
when occlusion is judged, continuing to search for the hit target according to its historical position and predicted trajectory.
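A minimal Python sketch of the loss/occlusion decision described in claim 1, assuming the tracker already supplies a 2-D correlation response map and a Kalman-predicted position; the PSR thresholds and gating distance are illustrative placeholders, not values from the patent:

```python
# Illustrative sketch: peak-sidelobe-ratio (PSR) based loss/occlusion decision.
import numpy as np

def peak_sidelobe_ratio(response, exclude=5):
    """PSR = (peak - mean of sidelobe region) / std of sidelobe region."""
    peak_idx = np.unravel_index(np.argmax(response), response.shape)
    peak = response[peak_idx]
    mask = np.ones_like(response, dtype=bool)
    r0, c0 = peak_idx
    mask[max(r0 - exclude, 0):r0 + exclude + 1,
         max(c0 - exclude, 0):c0 + exclude + 1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-12), peak_idx

def track_state(response, predicted_xy, psr_occluded=6.0, psr_lost=3.0, gate=30.0):
    """Classify the current frame as 'tracking', 'occluded' or 'lost'."""
    psr, (row, col) = peak_sidelobe_ratio(response)
    drift = np.hypot(col - predicted_xy[0], row - predicted_xy[1])
    if psr >= psr_occluded and drift <= gate:
        return "tracking"   # confident response near the predicted position
    if psr >= psr_lost:
        return "occluded"   # weak response: keep searching near the prediction
    return "lost"           # no usable response: fall back to global search
```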
2. The method of claim 1, wherein the single-target tracking and striking step further comprises:
when tracking the hit target, predicting and adjusting the search area of the tracking algorithm based on the scale change rate of the hit target in the image.
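A hedged sketch of the search-area adjustment in claim 2, assuming the scale change rate is measured from the bounding-box size in two consecutive frames; the clamping limits are assumptions for illustration:

```python
# Illustrative sketch: widen or shrink the tracker's search window according to
# the target's scale change rate between frames.
def adjust_search_area(prev_size, curr_size, base_search, min_scale=1.5, max_scale=4.0):
    """prev_size / curr_size: (w, h) of the target box in the two latest frames.
    base_search: nominal search-window size as a multiple of the target box."""
    rate_w = curr_size[0] / max(prev_size[0], 1e-6)
    rate_h = curr_size[1] / max(prev_size[1], 1e-6)
    scale_rate = 0.5 * (rate_w + rate_h)
    # A fast-growing (approaching) target needs a larger search window next frame;
    # a shrinking one can be searched more tightly.
    search = base_search * scale_rate
    return min(max(search, min_scale), max_scale)
```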
3. The method of claim 1, wherein the method further comprises:
when tracking the hit target, adjusting the size limit used for target feature calculation based on the distance between the flight platform and the hit target measured by the laser range finder.
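An illustrative sketch of the distance-dependent feature-size limit in claim 3; the range breakpoints and patch sizes below are made-up values for demonstration only:

```python
# Illustrative sketch: cap the feature-extraction patch size as a function of the
# laser-measured range, so distant (small) targets use a cheaper feature resolution.
def feature_size_limit(range_m, near=(64, 200.0), mid=(48, 800.0), far=32):
    """Return the maximum feature patch side (in pixels) for a given range in metres."""
    if range_m <= near[1]:
        return near[0]
    if range_m <= mid[1]:
        return mid[0]
    return far
```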
4. The method of claim 1, wherein the method further comprises:
optimizing the ECO tracking algorithm with a background-suppression regularization algorithm, and adaptively adjusting the regularization coefficient matrix of the background-suppression regularization algorithm according to the tracking response for the image of the hit target, so as to realize background suppression around the hit target.
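A simplified spatial-domain sketch of the adaptive regularization matrix in claim 4 (the actual ECO formulation operates in the Fourier domain); the penalty values and the use of a normalized response quality are assumptions:

```python
# Illustrative sketch: build a spatial regularization weight map that penalizes
# filter energy outside the target, with its strength adapted from the quality of
# the previous tracking response.
import numpy as np

def regularization_matrix(shape, target_box, base=1.0, edge=50.0, response_quality=1.0):
    """shape: (H, W) of the filter; target_box: (x, y, w, h) in filter coordinates.
    response_quality: e.g. a normalized PSR; lower quality -> stronger suppression."""
    H, W = shape
    reg = np.full((H, W), edge / max(response_quality, 1e-3))
    x, y, w, h = target_box
    reg[y:y + h, x:x + w] = base   # low penalty on the target region itself
    return reg
```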
5. An apparatus for automatically guiding the identification and tracking of a target, the apparatus comprising:
a target searching module, configured to cause a servo system of a photoelectric pod of a flight platform to control a camera of the flight platform to rotate according to preset scanning control logic, thereby realizing a full-range search for targets;
a multi-target detection module, configured to: after targets are captured by the camera of the flight platform, extract features from an image of the multiple targets based on a YOLOv s rapid detection network; enhance the target attention of the features based on a channel attention module and a spatial attention module; track and identify the multiple targets based on the DeepSORT algorithm; control the camera of the flight platform, through the servo system of the photoelectric pod, to rotate with the motion of the targets, thereby realizing tracking of the multiple targets; and train the tracking filter to complete learning of the target features;
a single-target tracking and striking module, configured to: after one of the multiple targets is determined as the hit target, cause the servo system of the photoelectric pod of the flight platform to control the camera to acquire an image of the hit target; realize tracking of the hit target based on an optimized ECO tracking algorithm; calculate the distance between the flight platform and the hit target based on a laser range finder; and calculate the three-dimensional coordinates of the hit target based on geodetic coordinates, thereby realizing tracking and striking of the hit target; wherein realizing tracking of the hit target based on the optimized ECO tracking algorithm comprises:
performing loss/occlusion judgment on the hit target based on the peak sidelobe ratio calculated by the ECO tracking algorithm and on the historical position and predicted trajectory of the hit target;
when the hit target is judged to be lost, switching to a global search for the target; and
when occlusion is judged, continuing to search for the hit target according to its historical position and predicted trajectory.
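A PyTorch sketch, in the spirit of CBAM, of the channel and spatial attention modules referenced in claims 1 and 5; the layer sizes and reduction ratio are assumptions, since the patent does not publish the exact module design:

```python
# Illustrative channel + spatial attention (CBAM-style) applied to a feature map.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels))
    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3)))          # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))           # global max pooling branch
        w = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * w

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w

class CBAMBlock(nn.Module):
    """Channel attention followed by spatial attention on a (B, C, H, W) feature map."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()
    def forward(self, x):
        return self.sa(self.ca(x))
```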
6. An electronic device, comprising:
a processor; and
a memory having stored thereon computer-readable instructions which, when executed by the processor, implement the method according to any one of claims 1 to 4.
7. A computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method according to any of claims 1 to 4.
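Both independent claims convert the laser-measured range into three-dimensional coordinates of the hit target. A hedged sketch of that geometry in a local East-North-Up frame, assuming the pod reports azimuth from north and elevation from the horizon; the final conversion to geodetic coordinates would additionally use the platform's own position, which is omitted here:

```python
# Illustrative geometry: project the laser-measured range along the pod's line of
# sight to obtain the target's offset from the platform in an ENU frame.
import math

def target_enu_offset(range_m, azimuth_deg, elevation_deg):
    """Return (east, north, up) offset of the target from the platform, in metres."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    horizontal = range_m * math.cos(el)
    return (horizontal * math.sin(az),   # east
            horizontal * math.cos(az),   # north
            range_m * math.sin(el))      # up

# Example: a target 1200 m away, 30 degrees east of north, 10 degrees below the horizon.
east, north, up = target_enu_offset(1200.0, 30.0, -10.0)
```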
CN202210490085.0A 2022-05-07 2022-05-07 A method and device for automatically guiding, identifying and tracking a target Active CN115047903B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210490085.0A CN115047903B (en) 2022-05-07 2022-05-07 A method and device for automatically guiding, identifying and tracking a target

Publications (2)

Publication Number Publication Date
CN115047903A (en) 2022-09-13
CN115047903B (en) 2024-11-29

Family

ID=83157195

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210490085.0A Active CN115047903B (en) 2022-05-07 2022-05-07 A method and device for automatically guiding, identifying and tracking a target

Country Status (1)

Country Link
CN (1) CN115047903B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115639819A (en) * 2022-10-17 2023-01-24 郑州大学 An automatic following robot with vision and depth information fusion
CN116929149B (en) * 2023-09-14 2024-01-19 中国电子科技集团公司第五十八研究所 Target identification and guidance method based on image guidance
CN117765243B (en) * 2023-12-22 2024-07-05 北京中科航星科技有限公司 AI guiding system based on high-performance computing architecture
CN119128811B (en) * 2024-11-12 2025-02-11 环宇佳诚科技(北京)有限公司 Impact target intelligent identification method and system integrating multi-source information

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109816698A (en) * 2019-02-25 2019-05-28 南京航空航天大学 A UAV Visual Target Tracking Method Based on Scale Adaptive Kernel Correlation Filtering
CN111932588A (en) * 2020-08-07 2020-11-13 浙江大学 Tracking method of airborne unmanned aerial vehicle multi-target tracking system based on deep learning

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113158909B (en) * 2021-04-25 2023-06-27 中国科学院自动化研究所 Behavior recognition light-weight method, system and equipment based on multi-target tracking
CN113313738B (en) * 2021-07-15 2021-10-01 武汉卓目科技有限公司 Unmanned aerial vehicle target tracking method and device based on ECO and servo linkage
CN114419343A (en) * 2021-12-09 2022-04-29 华中光电技术研究所(中国船舶重工集团公司第七一七研究所) A multi-target identification and tracking method and identification and tracking system
CN114049383B (en) * 2022-01-13 2022-04-22 苏州浪潮智能科技有限公司 Multi-target tracking method and device and readable storage medium

Also Published As

Publication number Publication date
CN115047903A (en) 2022-09-13

Similar Documents

Publication Publication Date Title
CN115047903B (en) A method and device for automatically guiding, identifying and tracking a target
CN113269098B (en) Multi-target tracking positioning and motion state estimation method based on unmanned aerial vehicle
CN108496129B (en) Aircraft-based facility detection method and control equipment
Lim et al. Monocular localization of a moving person onboard a quadrotor mav
CN107491742B (en) Long-term stable target tracking method for unmanned aerial vehicle
CN112378397B (en) Unmanned aerial vehicle target tracking method and device and unmanned aerial vehicle
CN112380933B (en) Unmanned aerial vehicle target recognition method and device and unmanned aerial vehicle
US11483484B2 (en) Systems and methods for imaging of moving objects using multiple cameras
CN111679695A (en) Unmanned aerial vehicle cruising and tracking system and method based on deep learning technology
CN116929149B (en) Target identification and guidance method based on image guidance
KR20230078675A (en) Simultaneous localization and mapping using cameras capturing multiple light spectra
CN115909173B (en) Object tracking method, tracking model training method, device, equipment and medium
CN114782484A (en) Multi-target tracking method and system for detection loss and association failure
CN116977902B (en) A target tracking method and system for a border and coastal defense vehicle-mounted photoelectric stabilization platform
CN118759517A (en) A method and device for cooperative detection of unmanned aerial vehicles using multi-source heterogeneous sensors
He et al. Intelligent vehicle pedestrian tracking based on YOLOv3 and DASiamRPN
CN117367212A (en) Artificial intelligent low-light-level aiming system and method
CN113438399B (en) Target guidance system, method for unmanned aerial vehicle, and storage medium
CN115690622A (en) Low-delay camouflage cluster target autonomous tracking method and system based on optical flow
CN112601021B (en) Method and system for processing monitoring video of network camera
WO2022040940A1 (en) Calibration method and device, movable platform, and storage medium
CN114967715B (en) Target recognition system and method for image/television guided aircraft with stable attitude and image
Ogorzalek Computer Vision Tracking of sUAS from a Pan/Tilt Platform
CN115988321B (en) An active tracking method for PTZ cameras based on yolov5 and deepsort
Matuszewski et al. Tracking of moving objects based on video image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant