CN106707296B - UAV detection and recognition method based on a dual-aperture photoelectric imaging system - Google Patents
UAV detection and recognition method based on a dual-aperture photoelectric imaging system
- Publication number
- CN106707296B (Application CN201710014967.9A)
- Authority
- CN
- China
- Prior art keywords
- target
- algorithm
- frame
- suspected
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Links
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Electromagnetism (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an automatic UAV detection and recognition system based on a dual-aperture photoelectric imaging system. The system uses a wide-angle camera as the UAV detection device and a telephoto camera as the recognition device. A monitoring image sequence is acquired, suspected UAV targets are detected with a target detection algorithm, and the suspected targets are then identified with a pattern recognition algorithm; after a target is confirmed, it is tracked, jammed and controlled. The invention uses photoelectric sensors as the UAV detection and recognition devices, which offers the advantages of high reliability and low cost. The wide-angle imaging system searches a wide sky area for suspected targets, while the telephoto imaging system (mounted on a two-axis turntable) confirms and tracks them, satisfying the requirements of a high detection rate and high accuracy at the same time, substantially improving the reliability of the system, and offering outstanding economic benefit and practical value.
Description
Technical field
The invention belongs to the technical field of image processing and pattern recognition, and more particularly relates to an automatic UAV detection and recognition method based on a dual-aperture photoelectric imaging system.
Background technique
Existing UAV detection and recognition methods use optical imaging sensors to patrol the sky automatically, acquire image sequences of the area under surveillance, and detect UAVs and other low-altitude aircraft by exploiting the motion characteristics of targets across the image sequence and the differences between target and background within a single image. Such methods are vulnerable to environmental interference and have difficulty distinguishing real UAV targets from false alarms caused by background clutter; moreover, once a target is detected, it cannot be further identified from the available information. In addition, the prior art that monitors UAV targets by radar still cannot identify the target type, and radar equipment is costly and susceptible to interference from weather and other environmental factors.
Summary of the invention
Aiming at the above defects or improvement requirements of the prior art, the present invention provides a UAV detection and recognition method based on a dual-aperture photoelectric imaging system. Its objective is to detect and identify UAVs automatically, further improve UAV detection accuracy, and solve the technical problems of the prior art: susceptibility to environmental interference, a high false alarm rate, and the inability to identify the target type.
To achieve this objective, the present invention provides an automatic UAV detection and recognition method based on a dual-aperture photoelectric imaging system, comprising the following steps:
(1) A dual-aperture optical imaging system is used, in which the wide-angle imaging system searches a wide sky area for suspected targets and the telephoto imaging system (mounted on a two-axis turntable) confirms and tracks the suspected targets, satisfying the requirements of a high detection rate and high accuracy at the same time.
(2) Guided by prior knowledge of the image background, a target detection algorithm detects suspected targets in real time in the image sequence acquired by the wide-angle camera. Based on this prior knowledge, monitoring strategies can be configured by manually selecting sky background regions, complex background regions and exclusion regions.
(3) After a suspected target is detected, the telephoto camera is steered toward it and captures a high-resolution image sequence of the target;
(4) The target images acquired by the telephoto camera are identified with a pattern recognition algorithm. If a UAV target is confirmed, its position coordinates are output as the initial coordinates for target tracking and the method proceeds to step (5); otherwise it returns to step (2);
(5) The telephoto camera tracks the UAV target, with the pan-tilt unit driven so that the target always stays at the center of the telephoto camera's field of view. The UAV coordinates produced by the tracking algorithm can be output to a UAV jamming system for directional jamming. Candidate tracking algorithms include Meanshift-based target tracking, particle-filter-based target tracking, the KCF algorithm and optical flow.
Further, the image background prior knowledge in step (2) comprises the sky background regions, complex background regions and exclusion regions; in step (5), after the UAV target coordinates are obtained, jamming and control measures are also applied to the target.
Further, in step (2), the target detection algorithm comprises the following sub-steps:
(2.1) Images are acquired continuously and inter-frame target detection is performed: the foreground image Dn of the current frame is obtained with a background difference algorithm;
(2.2) The target set Track is initialized once as empty. Track is the set of target trajectories, and each trajectory in Track represents one suspected target (initialization happens only the first time; a trajectory is a time-ordered sequence of target points). In-frame target detection is then applied to the current n-th frame image In from (2.1) in order to correct the foreground image Dn:
If no target was detected in the (n-1)-th (previous) frame, no correction is applied and the method goes to step (2.3);
If a target was detected in the previous frame, i.e. the set Track(n-1) is non-empty, but the foreground image shows no response in the neighbourhood of the previous target position (i.e. Dn(i, j) = 0 for every pixel (i, j) whose Euclidean distance to the previous target position is less than d), then in-frame detection is performed: the image block of the same scale centered on the previous target position is convolved with the matrix HP, and any target detected in-frame is added to the output Dn of step (2.1), thereby correcting the change detection result Dn.
Here Distance denotes Euclidean distance, and d is a constant taken between 3 and 10; its choice depends on the video sampling rate: the higher the frame rate, the smaller d. The convolution response is compared with the threshold thres, taken between 10 and 50; the larger thres, the higher the miss rate, and the smaller thres, the higher the false alarm rate. HP is the convolution kernel, whose scale matches the target scale: for a target of size m rows × n columns, HP is the 3m × 3n block matrix
HP = [ A B C
       D E F
       G H K ]
where A, B, C, D, F, G, H and K are each the matrix (1/9) × II, E is (-8/9) × II, and II is an m × n matrix whose entries are all 1. The entries of HP therefore sum to zero, so the kernel responds to a compact bright (or dark) target against a locally uniform background;
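As an illustrative sketch (not code from the patent), the block kernel HP described above can be built with NumPy; the target size 4 × 5 used below is an arbitrary example:

```python
import numpy as np

def make_hp_kernel(m, n):
    """Build the 3m x 3n center-surround kernel HP described above:
    eight surrounding m x n blocks equal to (1/9)*II and a center
    block E equal to (-8/9)*II, where II is the all-ones m x n matrix."""
    II = np.ones((m, n))
    hp = np.tile(II / 9.0, (3, 3))        # all nine blocks start at 1/9
    hp[m:2*m, n:2*n] = -8.0 / 9.0 * II    # overwrite the center block E
    return hp

kernel = make_hp_kernel(4, 5)
print(kernel.shape)             # (12, 15)
print(round(kernel.sum(), 6))   # 0.0 -- the kernel has zero DC response
```

Because the entries sum to zero, convolving a uniform background with HP gives zero response, while a compact target the size of the center block produces a strong peak.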
(2.3) Connected components are labeled in the n-th foreground image Dn and clustered with the DP (density peaks) clustering algorithm, yielding the suspected target set On. The t suspected target sets from frame n-t+1 up to the current frame, {O(n-t+1) ... On}, are kept and written into the linked list list. t can be taken between 5 and 15: a larger t gives more accurate but more delayed results, while a smaller t gives better real-time behaviour but less stable results. (For each frame, Dn is processed by connected component labeling and clustering to obtain that frame's target set On; the per-frame results On of the last t frames are all retained in list for subsequent processing.)
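A simplified stand-in for step (2.3) is sketched below: it labels 4-connected foreground components with a plain BFS flood fill and uses component centroids as the suspected targets, in place of the patent's connected-component labeling algorithms and DP clustering:

```python
import numpy as np
from collections import deque

def frame_targets(foreground):
    """Label 4-connected components in a binary foreground image and return
    one (row, col) centroid per component -- a simplified stand-in for the
    connected-component labeling + DP clustering of step (2.3)."""
    h, w = foreground.shape
    seen = np.zeros((h, w), bool)
    targets = []
    for r in range(h):
        for c in range(w):
            if foreground[r, c] and not seen[r, c]:
                queue, pixels = deque([(r, c)]), []
                seen[r, c] = True
                while queue:                      # BFS flood fill
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and foreground[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                ys, xs = zip(*pixels)
                targets.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return targets

fg = np.zeros((10, 10), np.uint8)
fg[1:3, 1:3] = 1     # first blob
fg[6:9, 6:8] = 1     # second blob
print(frame_targets(fg))   # [(1.5, 1.5), (7.0, 6.5)]
```

Running this per frame and keeping the last t result lists reproduces the buffer {O(n-t+1) ... On} used by the detection clustering step.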
(2.4) The linked list list obtained in step (2.3) is processed with the detection clustering algorithm to generate a number of trajectories; each trajectory serves as one suspected target, and together the trajectories constitute the target set Track.
Further, in step (3) or step (5), after a suspected target or UAV target is detected, its coordinates are computed and sent to the pan-tilt unit, which steers the telephoto camera to track the suspected target or UAV target. The coordinates are computed from the target's pixel position and the camera's field of view, where std_rows is the image height, targety the target coordinate in the vertical direction, and yvision the field of view in the vertical direction; std_cols is the image width, targetx the target coordinate in the horizontal direction, and xvision the field of view in the horizontal direction. targetx and targety are obtained from the detected target; the other parameters are intrinsic parameters of the wide-angle camera.
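The patent's exact coordinate formula is not reproduced in this text. The sketch below is a hypothetical reconstruction consistent with the parameters listed above: the target's pixel offset from the image center is scaled linearly by the field of view to obtain pan and tilt angle offsets. The function name and the linear mapping are assumptions, not the patent's verbatim formula:

```python
def pixel_to_angles(targetx, targety, std_cols, std_rows, xvision, yvision):
    """Hypothetical reconstruction (an assumption, not the patent's exact
    formula): map the target's pixel offset from the image center to
    pan/tilt angle offsets by linear scaling with the field of view."""
    pan = (targetx - std_cols / 2.0) / std_cols * xvision
    tilt = (targety - std_rows / 2.0) / std_rows * yvision
    return pan, tilt

# A target already at the image center needs no pan-tilt correction:
print(pixel_to_angles(960, 540, 1920, 1080, 58.3, 43.6))  # (0.0, 0.0)
```

The FOV values 58.3 and 43.6 are the xvision/yvision values quoted later in the embodiment for the DS-2df7320iw camera.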
Further, in step (2.4), the sub-steps of the detection clustering algorithm are as follows:
(2.4.1) The linked list list generated in step (2.3) is traversed. When the first target is found, it is taken as a suspected target, a new trajectory is created, and the target is added to that trajectory. (If a target appears in the sky, every frame detects it, so t frames yield t detections; arranged in the order of their appearance, this set of t points is exactly one trajectory.)
(2.4.2) Subsequent suspected targets in list are examined in turn. Each one is compared against every existing suspected target trajectory; if the following two criteria are met, the suspected target is appended to that trajectory and the method goes to step (2.4.3); otherwise a new trajectory is created, the suspected target is added to it, and the method goes to step (2.4.3). The criteria are as follows:
(a) Spatial coherence: let (xk, yk) be the newest point of an existing trajectory (the image coordinates of its target) and (x, y) the suspected target to be added. Spatial coherence is measured by the Euclidean distance between the two points; if Distance((xk, yk), (x, y)) < Lthres, the suspected target point satisfies this criterion. Lthres is the distance threshold, taken between 3 and 20; its value depends on the video sampling rate: the higher the sampling rate, the smaller the value;
(b) Temporal coherence: if the time difference between the candidate point and the newest point of the current trajectory is less than the preset time threshold Tthres, the suspected target point satisfies this criterion. Tthres is the time threshold, taken between 1 and 3;
(2.4.3) The number of breakpoints of each trajectory is counted; if it exceeds the maximum allowed number of breakpoints Bthres, taken between 3 and 5, the trajectory is deleted. (A trajectory spans t frames and should contain one target point in each frame; whenever a frame contributes no point satisfying the two criteria above, the breakpoint count is incremented by 1.)
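Criteria (a) and (b) of step (2.4.2) can be sketched as follows; the threshold values and the point/track representations are illustrative choices within the ranges stated above:

```python
import math

L_THRES = 10.0   # distance threshold Lthres (patent range: 3-20, frame-rate dependent)
T_THRES = 2      # time threshold Tthres in frames (patent range: 1-3)

def belongs_to_track(track, point):
    """Criteria (a) and (b) of step (2.4.2): a detection joins a track if it
    is spatially close to the track's newest point and temporally adjacent.
    A point is (frame_index, x, y); a track is a list of such points."""
    t_last, x_last, y_last = track[-1]
    t, x, y = point
    close = math.hypot(x - x_last, y - y_last) < L_THRES    # criterion (a)
    recent = t > t_last and (t - t_last) <= T_THRES         # criterion (b)
    return close and recent

track = [(0, 50.0, 50.0), (1, 52.0, 51.0)]
print(belongs_to_track(track, (2, 53.0, 52.5)))   # True: near and next frame
print(belongs_to_track(track, (2, 90.0, 90.0)))   # False: too far away
```

A detection that matches no existing track would start a new one, and tracks with too many missed frames (breakpoints) would be dropped, per step (2.4.3).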
Further, the pattern recognition algorithm mentioned in step (4) is a deep learning algorithm, preferably the Faster-RCNN algorithm, applied to the high-resolution image sequence collected by the telephoto camera. The recognition steps are as follows:
(4.1) UAV training samples are prepared first and the UAV positions are labeled manually; the Faster-RCNN algorithm is trained on the prepared training samples to obtain the network model parameters and fix the specific network model;
(4.2) Recognition is performed with this network model: the current frame collected by the telephoto camera is fed to the network, which outputs the target coordinates targetx and targety together with the confidence scores of each target class. Any target whose UAV score exceeds the threshold Tthres is taken as a UAV target and its coordinates are output; this confidence threshold Tthres is generally taken between 0.5 and 0.9.
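The score thresholding of step (4.2) can be sketched as a simple filter over detector outputs; the detection tuple layout and the class name "uav" are assumptions for illustration:

```python
def select_uav_targets(detections, conf_thres=0.8):
    """Step (4.2) post-processing sketch: keep detections whose UAV class
    score exceeds the confidence threshold (patent range 0.5-0.9; 0.8 is
    the value used in the embodiment). Each detection is assumed to be
    (targetx, targety, class_name, score)."""
    return [(x, y) for (x, y, cls, score) in detections
            if cls == "uav" and score > conf_thres]

dets = [(100, 120, "uav", 0.92), (300, 80, "bird", 0.95), (40, 60, "uav", 0.55)]
print(select_uav_targets(dets))   # [(100, 120)]
```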
Further, the target tracking in step (5) proceeds as follows: the target position coordinates output in step (4) are used as the initial coordinates for tracking, a particle-filter-based target tracking algorithm performs the tracking, and the pan-tilt unit is driven so that the target remains at the center of the field of view.
Further, the background difference algorithm is a GPU-accelerated spatially constrained Gaussian mixture model algorithm, with the following sub-steps:
(a) The Gaussian mixture model parameters and the fixed background frame are initialized from the image background prior knowledge of step (2); here, the first input image is used as the fixed background frame;
(b) The current n-th frame image In and the fixed background frame are differenced to obtain the spatial constraint matrix M, computed as follows:
M(i, j) = 1 if |In(i, j) - Zn(i, j)| > Thres, and M(i, j) = 0 otherwise,
where In is the n-th frame image, Zn is the fixed background frame, Thres is a threshold taken between 20 and 50, and i and j are the row and column indices of a pixel. The lower the threshold, the worse the real-time performance of the algorithm; the higher the threshold, the more pseudo-background points remain in the background, and the lower the accuracy of the algorithm;
(c) Foreground is detected with the Gaussian mixture model only at the positions selected by the spatial constraint matrix M, yielding the foreground image Dn, while the mixture model is updated at the same time. The detection formula is:
Dn(i, j) = GMM(In(i, j)) if M(i, j) = 1, and Dn(i, j) = 0 otherwise,
where GMM denotes the Gaussian mixture model;
(d) Every num frames, a fixed background frame is extracted from the Gaussian mixture model: the mean μ(i, j) of the highest-priority Gaussian distribution of each pixel (priority is one of the mixture model parameters) is copied to the corresponding position of the fixed background frame, thereby updating it. num can take any integer greater than 1. The larger num is, the more computation accumulates as the background gradually changes, which reduces the real-time performance of the algorithm; the smaller num is, the more frequently the fixed background frame is extracted, which also reduces real-time performance.
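Sub-steps (b) and (c) can be sketched as follows. A stand-in per-pixel test replaces the real Gaussian mixture model and there is no GPU acceleration, so this only illustrates how the spatial constraint matrix limits where the model is evaluated:

```python
import numpy as np

THRES = 30   # patent range: 20-50

def spatial_constraint_mask(frame, background, thres=THRES):
    """Sub-step (b): M(i,j) = 1 where the current frame differs from the
    fixed background frame by more than the threshold, else 0."""
    return (np.abs(frame.astype(int) - background.astype(int)) > thres).astype(np.uint8)

def constrained_foreground(frame, background, gmm_fn):
    """Sub-step (c) sketch: apply a per-pixel foreground test gmm_fn only at
    positions selected by M. gmm_fn stands in for the Gaussian mixture model,
    which is the expensive part the constraint is meant to limit."""
    M = spatial_constraint_mask(frame, background)
    fg = np.zeros_like(M)
    fg[M == 1] = gmm_fn(frame[M == 1])
    return fg

bg = np.full((6, 6), 100, np.uint8)
fr = bg.copy()
fr[2:4, 2:4] = 200                     # a bright intruding object
fg = constrained_foreground(fr, bg, lambda px: (px > 150).astype(np.uint8))
print(int(fg.sum()))                   # 4 -- only the object pixels fire
```

Because unchanged pixels never reach the mixture model, most of the per-pixel work is skipped on a mostly static scene, which is the source of the speed-up the patent attributes to this algorithm.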
Further, in step (2.3), methods for labeling connected components include region growing, run-based connected component labeling, contour-tracing-based connected component labeling, run-length-code-based connected component labeling, and connected component labeling oriented to target feature extraction.
In general, compared with the prior art, the above technical scheme conceived by the present invention achieves the following beneficial effects:
(1) Configuring monitoring strategies for different environments allows the system to adapt to those environments, improving the robustness and reliability of the system.
(2) Suspected targets are detected with the GPU-accelerated spatially constrained Gaussian mixture model algorithm used herein, which is more than 50 times faster than the traditional Gaussian mixture model algorithm and meets the real-time requirement. Combined with the detection clustering algorithm and in-frame detection, it exploits both inter-frame and intra-frame information of the video sequence, so that while the detection rate is maintained, a large number of false alarms are eliminated, missed targets are reduced, and continuity of detection is guaranteed.
(3) The strategy of performing target detection and recognition with a dual-aperture photoelectric imaging system allows detected targets to be further identified and confirmed accurately, excluding false alarms and confirming targets, which ensures the robustness of the system.
(4) Target recognition is performed with a deep learning algorithm, so the recognition process runs fully automatically. The convolutional neural network used in Faster-RCNN extracts a large number of features from the samples and is robust to changes in angle, illumination and scale, so the recognition accuracy of the algorithm is high. Sharing the convolutional layer weights between the RPN network and the Faster R-CNN network greatly increases the speed of the algorithm, which can meet the requirement of real-time processing.
(5) The target is tracked with a particle filter algorithm, which adapts well to illumination changes and target occlusion compared with other tracking algorithms, and has good real-time performance because only a limited number of particles are computed. After a target is detected it is tracked and continuously jammed, achieving the purpose of controlling the target.
In summary, the automatic UAV detection and recognition system based on a dual-aperture photoelectric imaging system provided by the present invention uses a wide-angle camera and a telephoto camera as the UAV detection and recognition devices respectively. A monitoring image sequence is acquired, suspected UAV targets are detected, the suspected targets are identified, and after a target is confirmed its coordinates are computed and it is tracked. Using photoelectric sensors as the UAV detection and recognition devices offers the advantages of high reliability and low cost. Guided by the monitoring data provided by the wide-angle camera, monitoring images are acquired with the telephoto camera and identified by means of pattern recognition, so the monitored target can be confirmed accurately, which substantially improves the reliability of the system and offers outstanding economic benefit and practical value.
Detailed description of the invention
Fig. 1 is a flowchart of the automatic UAV detection and recognition method based on a dual-aperture photoelectric imaging system provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of the UAV detection and recognition device in an embodiment of the present invention.
Specific embodiment
In order to make the objectives, technical solutions and advantages of the present invention clearer, the invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here merely illustrate the invention and do not limit it. Moreover, the technical features involved in the various embodiments described below can be combined with each other as long as they do not conflict.
The flow of the automatic UAV detection and recognition method based on a dual-aperture photoelectric imaging system provided by this embodiment of the present invention is shown in Fig. 1.
In this example, a Hikvision DS-2df7320iw high-speed intelligent dome camera serves as the wide-angle camera of the dual-aperture photoelectric sensor system, and a Hikvision 3007 camera mounted on an FY-SP2515F standard intelligent variable-speed pan-tilt serves as the telephoto camera of the dual-aperture photoelectric sensor system.
First, actual UAV monitoring video is collected. In each frame of the monitoring video that contains a UAV target, the UAV position is labeled manually, and the labeled frames are used as training samples for the Faster-RCNN network model. The training steps are as follows:
(a) The Region Proposal Network (RPN) is initialized with a model pre-trained on ImageNet and fine-tuned;
(b) The Faster-RCNN network is initialized with a model pre-trained on ImageNet. The RPN network obtained in (a) performs region proposal extraction on our training samples, and its output serves as the input of Faster-RCNN; the Faster-RCNN network is then trained on our training samples;
(c) With the convolutional layers fixed, the RPN network is re-initialized from the Faster R-CNN network obtained in (b) and trained on our training samples;
(d) With the convolutional layers fixed, the Faster-RCNN network is trained using the RPN network obtained in (c) as input;
(e) Steps (a)-(d) are repeated, training the RPN and Faster-RCNN networks alternately, until the network output error falls within the required range.
At run time, the monitoring strategies are configured: sky background regions, complex background regions and exclusion regions are selected manually and serve as prior knowledge for subsequent processing. The target detection algorithm of step (2) applies a different strategy to each type of region; if nothing is configured, all regions default to complex background. A threshold table σ is generated in which the position of each value corresponds to a pixel of the image sequence; for pixels in sky background regions, complex background regions and exclusion regions, the corresponding entries of the threshold table take the values σ1, σ2 and σ3 respectively. Preferably, σ1, σ2 and σ3 take the values 0.9, 2.5 and 0.0.
The video sequence is acquired with the Hikvision DS-2df7320iw high-speed intelligent dome camera; the first 30 frames are used to initialize the Gaussian mixture model parameters and to extract the fixed background frame. Suspected targets in the video sequence are then detected in real time with the target detection algorithm described in step (2) of the summary of the invention.
After a suspected target is detected, the position of the target is computed with the coordinate calculation method described in the summary of the invention; here, per the parameters of the DS-2df7320iw camera, xvision is 58.3 and yvision is 43.6. A control command is sent to the FY-SP2515F pan-tilt to point it at the corresponding position; the pan-tilt control commands use the PELCO-D protocol. Once the pan-tilt reaches the designated position, the 3007 camera captures a high-definition picture, and the trained Faster-RCNN network model determines whether a UAV target is present in the field of view. If a UAV target exists, i.e. the Faster-RCNN network outputs a target with UAV confidence greater than 0.8, the radio jamming module is switched on to jam the UAV; the UAV target coordinates output by the Faster-RCNN network serve as the initial position for the particle-filter-based target tracking algorithm, which tracks the UAV target while the pan-tilt stays pointed at the target and directional jamming continues.
The present invention performs target detection and recognition by combining the respective imaging characteristics of the wide-angle camera and the telephoto camera. It can monitor a large field of view and distant regions in real time and accurately identify the target species. Applying different monitoring strategies to different background regions enhances the robustness and applicability of the system. Target recognition with a deep learning algorithm gives high recognition accuracy and strong robustness to illumination changes, occlusion and rotation, and essentially meets the requirement of real-time target recognition. The GPU-accelerated spatially constrained Gaussian mixture model algorithm is more than 50 times faster than the original GMM algorithm, meeting the requirement of real-time detection on high-definition monitoring images.
As those skilled in the art will readily appreciate, the above are merely preferred embodiments of the present invention and do not limit it; any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.
Claims (9)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710014967.9A CN106707296B (en) | 2017-01-09 | 2017-01-09 | UAV detection and recognition method based on a dual-aperture photoelectric imaging system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710014967.9A CN106707296B (en) | 2017-01-09 | 2017-01-09 | UAV detection and recognition method based on a dual-aperture photoelectric imaging system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106707296A CN106707296A (en) | 2017-05-24 |
CN106707296B true CN106707296B (en) | 2019-03-05 |
Family
ID=58907117
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710014967.9A Expired - Fee Related CN106707296B (en) | 2017-01-09 | 2017-01-09 | UAV detection and recognition method based on a dual-aperture photoelectric imaging system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106707296B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220172380A1 (en) * | 2019-04-08 | 2022-06-02 | Shenzhen Vision Power Technology Co., Ltd. | Three-dimensional light field technology-based optical unmanned aerial vehicle monitoring system |
Families Citing this family (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107239077B (en) * | 2017-06-28 | 2020-05-08 | 歌尔科技有限公司 | Unmanned aerial vehicle moving distance calculation system and method |
CN107886120A (en) * | 2017-11-03 | 2018-04-06 | 北京清瑞维航技术发展有限公司 | Method and apparatus for target detection tracking |
CN107909600B (en) * | 2017-11-04 | 2021-05-11 | 南京奇蛙智能科技有限公司 | Unmanned aerial vehicle real-time moving target classification and detection method based on vision |
CN108038415B (en) * | 2017-11-06 | 2021-12-28 | 湖南华诺星空电子技术有限公司 | Unmanned aerial vehicle automatic detection and tracking method based on machine vision |
CN109815773A (en) * | 2017-11-21 | 2019-05-28 | 北京航空航天大学 | A vision-based detection method for low-slow and small aircraft |
CN108170160A (en) * | 2017-12-21 | 2018-06-15 | Autonomous grasping method for a rotor unmanned aerial vehicle using monocular vision and onboard sensors |
CN109993767B (en) * | 2017-12-28 | 2021-10-12 | 北京京东尚科信息技术有限公司 | Image processing method and system |
CN108388879B (en) * | 2018-03-15 | 2022-04-15 | 斑马网络技术有限公司 | Target detection method, device and storage medium |
CN108614896A (en) * | 2018-05-10 | 2018-10-02 | 济南浪潮高新科技投资发展有限公司 | Bank Hall client's moving-wire track describing system based on deep learning and method |
CN108985193A (en) * | 2018-06-28 | 2018-12-11 | Unmanned aerial vehicle portrait alignment method based on image detection |
CN109360224A (en) * | 2018-09-29 | 2019-02-19 | 吉林大学 | An Anti-Occlusion Target Tracking Method Fusion KCF and Particle Filter |
CN109543553A (en) * | 2018-10-30 | 2019-03-29 | Photoelectric recognition and tracking method for low-slow-small targets based on machine learning |
CN109785562B (en) * | 2018-12-29 | 2023-08-15 | 华中光电技术研究所(中国船舶重工集团有限公司第七一七研究所) | Vertical photoelectric ground threat alert system and suspicious target identification method |
CN109872483B (en) * | 2019-02-22 | 2020-09-29 | 华中光电技术研究所(中国船舶重工集团有限公司第七一七研究所) | Intrusion alert photoelectric monitoring system and method |
CN109753903B (en) * | 2019-02-27 | 2020-09-15 | 北航(四川)西部国际创新港科技有限公司 | Unmanned aerial vehicle detection method based on deep learning |
CN110062205A (en) * | 2019-03-15 | 2019-07-26 | Motion estimation and tracking device and method |
CN109946703B (en) * | 2019-04-10 | 2021-09-28 | 北京小马智行科技有限公司 | Sensor attitude adjusting method and device |
CN110458866A (en) * | 2019-08-13 | 2019-11-15 | 北京积加科技有限公司 | Target tracking method and system |
CN110443247A (en) * | 2019-08-22 | 2019-11-12 | Real-time detection system and method for moving small unmanned aerial vehicle targets |
CN110347183A (en) * | 2019-08-26 | 2019-10-18 | Method and system for an unmanned aerial vehicle to strike moving ground targets |
CN110705524B (en) * | 2019-10-24 | 2023-12-29 | 佛山科学技术学院 | Visual-based monitoring method and device for unmanned aerial vehicle in specific area |
CN111161305A (en) * | 2019-12-18 | 2020-05-15 | 任子行网络技术股份有限公司 | Intelligent unmanned aerial vehicle identification tracking method and system |
CN111145217A (en) * | 2019-12-27 | 2020-05-12 | 湖南华诺星空电子技术有限公司 | KCF-based unmanned aerial vehicle tracking method |
CN111652067B (en) * | 2020-04-30 | 2023-06-30 | 南京理工大学 | A UAV identification method based on image detection |
CN111683204A (en) * | 2020-06-18 | 2020-09-18 | 南方电网数字电网研究院有限公司 | Unmanned aerial vehicle shooting method and device, computer equipment and storage medium |
CN112288986A (en) * | 2020-10-28 | 2021-01-29 | 金娇荣 | An electric vehicle charging safety monitoring and early warning system |
CN112669280B (en) * | 2020-12-28 | 2023-08-08 | 莆田市山海测绘技术有限公司 | Unmanned aerial vehicle inclination aerial photography right-angle image control point target detection method based on LSD algorithm |
CN113111715B (en) * | 2021-03-13 | 2023-07-25 | 浙江御穹电子科技有限公司 | Unmanned aerial vehicle target tracking and information acquisition system and method |
CN113326752B (en) * | 2021-05-20 | 2024-04-30 | 淮阴工学院 | Unmanned aerial vehicle-based photovoltaic power station identification method and system |
CN113438399B (en) * | 2021-06-25 | 2022-04-08 | 北京冠林威航科技有限公司 | Target guidance system, method for unmanned aerial vehicle, and storage medium |
CN114219838B (en) * | 2021-12-23 | 2024-12-10 | 中国民用航空总局第二研究所 | A high-mobility small target detection method and system based on event signal |
CN114119676B (en) * | 2022-01-24 | 2022-08-09 | 西安羚控电子科技有限公司 | Target detection tracking identification method and system based on multi-feature information fusion |
CN116109956A (en) * | 2023-04-12 | 2023-05-12 | 安徽省空安信息技术有限公司 | Unmanned aerial vehicle self-adaptive zooming high-precision target detection intelligent inspection method |
CN116188534B (en) * | 2023-05-04 | 2023-08-08 | 广东工业大学 | Indoor real-time human body tracking method, storage medium and equipment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1062525B1 (en) * | 1998-03-10 | 2003-05-14 | Riegl Laser Measurement Systems Gmbh | Method for monitoring objects or an object area |
CN102291569A (en) * | 2011-07-27 | 2011-12-21 | 上海交通大学 | Double-camera automatic coordination multi-target eagle eye observation system and observation method thereof |
CN104008371A (en) * | 2014-05-22 | 2014-08-27 | 南京邮电大学 | Regional suspicious target tracking and recognizing method based on multiple cameras |
CN104197928A (en) * | 2014-08-29 | 2014-12-10 | 西北工业大学 | Multi-camera collaboration-based method for detecting, positioning and tracking unmanned aerial vehicle |
CN105898107A (en) * | 2016-04-21 | 2016-08-24 | 北京格灵深瞳信息技术有限公司 | Target object snapping method and system |
2017-01-09 CN CN201710014967.9A patent/CN106707296B/en not_active Expired - Fee Related
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220172380A1 (en) * | 2019-04-08 | 2022-06-02 | Shenzhen Vision Power Technology Co., Ltd. | Three-dimensional light field technology-based optical unmanned aerial vehicle monitoring system |
Also Published As
Publication number | Publication date |
---|---|
CN106707296A (en) | 2017-05-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106707296B (en) | Unmanned aerial vehicle detection and recognition method based on a dual-aperture photoelectric imaging system | |
CN103824070B (en) | Rapid pedestrian detection method based on computer vision | |
CN106447680B (en) | Object detection and tracking method fusing radar and vision in dynamic background environments | |
CN103778645B (en) | Circular target real-time tracking method based on images | |
CN109785363A (en) | Real-time detection and tracking of small moving targets in unmanned aerial vehicle video | |
CN110264493B (en) | A method and device for tracking multi-target objects in motion state | |
CN108447091A (en) | Object localization method, device, electronic equipment and storage medium | |
CN108038415B (en) | Unmanned aerial vehicle automatic detection and tracking method based on machine vision | |
CN108776974B (en) | Real-time modeling method suitable for public transport scenes | |
CN109145803B (en) | Gesture recognition method and device, electronic equipment and computer readable storage medium | |
CN102214309B (en) | Special human body recognition method based on head and shoulder model | |
CN109102522A (en) | Target tracking method and device | |
CN108731587A (en) | Vision-based dynamic target tracking and localization method for unmanned aerial vehicles | |
US8831284B2 (en) | Object identification from image data captured from a mobile aerial platforms | |
JP6789876B2 (en) | Devices, programs and methods for tracking objects using pixel change processed images | |
CN109828267A (en) | Obstacle detection and ranging method for intelligent mobile robots based on instance segmentation and a depth camera | |
CN110287907A (en) | Object detection method and device | |
CN110245566B (en) | A long-distance tracking method for infrared targets based on background features | |
CN104376575A (en) | Pedestrian counting method and device based on monitoring of multiple cameras | |
CN110991297A (en) | Target positioning method and system based on scene monitoring | |
CN110503647A (en) | Wheat plant real-time counting method based on deep learning image segmentation | |
CN113781526A (en) | A livestock counting and identification system | |
CN116109950A (en) | Low-airspace anti-unmanned aerial vehicle visual detection, identification and tracking method | |
CN202010257U (en) | Ward round robot system based on Bayesian theory | |
CN117671529A (en) | Unmanned aerial vehicle scanning measurement-based farmland water level observation device and observation method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20190305 ||