
CN106707296B - A UAV detection and recognition method based on a dual-aperture photoelectric imaging system - Google Patents

A UAV detection and recognition method based on a dual-aperture photoelectric imaging system

Info

Publication number
CN106707296B
CN106707296B (application CN201710014967.9A)
Authority
CN
China
Prior art keywords
target
algorithm
frame
suspected
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710014967.9A
Other languages
Chinese (zh)
Other versions
CN106707296A (en)
Inventor
马杰
刘阳
岳子涵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN201710014967.9A priority Critical patent/CN106707296B/en
Publication of CN106707296A publication Critical patent/CN106707296A/en
Application granted granted Critical
Publication of CN106707296B publication Critical patent/CN106707296B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an automatic UAV detection and identification system based on a dual-aperture photoelectric imaging system. The system uses a wide-angle camera and a telephoto camera as the UAV detection device and identification device, respectively, to carry out monitoring: a surveillance image sequence is acquired, suspected UAV targets are detected with a target detection algorithm, the suspected targets are identified with a pattern recognition algorithm, and once a target is confirmed it is tracked, jammed and controlled. Using photoelectric sensors as the UAV detection and identification devices offers advantages such as high reliability and low cost. The wide-angle imaging system searches a wide area of sky for suspected targets, while the telephoto imaging system (fitted with a two-axis pan-tilt) confirms and tracks them, simultaneously meeting the requirements of a high detection rate and high accuracy, greatly improving the reliability of the system, and offering outstanding economic benefit and practical value.

Description

A UAV detection and recognition method based on a dual-aperture photoelectric imaging system
Technical field
The invention belongs to the technical field of image processing and pattern recognition, and more particularly relates to an automatic UAV detection and recognition method based on a dual-aperture photoelectric imaging system.
Background technique
Existing UAV detection and recognition methods use optical imaging sensors to patrol the sky automatically, acquire an image sequence of the area to be monitored, and detect UAVs and other low-altitude slow-flying aircraft from the motion characteristics of the target between successive frames and the difference between target and background within a single image. This approach is vulnerable to environmental interference, and it is difficult to distinguish UAV targets from false alarms caused by background clutter. Moreover, after a target is detected, it cannot be further identified from the available information. In addition, the prior art that monitors UAV targets with radar still cannot identify the target type, while radar equipment is expensive and susceptible to interference from weather and other environmental factors.
Summary of the invention
In view of the above defects or improvement needs of the prior art, the present invention provides a UAV detection and recognition method based on a dual-aperture photoelectric imaging system. Its purpose is to detect and identify UAVs automatically and to further improve UAV detection accuracy, thereby solving the technical problems of the prior art: strong susceptibility to environmental interference, a high false-alarm rate, and the inability to identify the target type.
To achieve the above purpose, the present invention provides an automatic UAV detection and recognition method based on a dual-aperture photoelectric imaging system, comprising the following steps:
(1) A dual-aperture optical imaging system is used, in which the wide-angle imaging system searches a wide area of sky for suspected targets and the telephoto imaging system (fitted with a two-axis pan-tilt) confirms and tracks the suspected targets, simultaneously meeting the requirements of a high detection rate and high accuracy.
(2) For the image sequence captured by the wide-angle camera, suspected targets are detected in real time with a target detection algorithm according to prior knowledge of the image background; based on this prior knowledge, monitoring strategies can be set by manually marking out the sky background region, the complex background region and the exclusion region.
(3) After a suspected target is detected, the telephoto camera is controlled to point at the suspected target and shoot, obtaining a high-resolution image sequence of the target;
(4) The target images captured by the telephoto camera are identified with a pattern recognition algorithm; if the target is judged to be a UAV target, the target position coordinates are output as the initial position coordinates for target tracking and the method goes to step (5), otherwise it goes to step (2);
(5) The telephoto camera is controlled to track the UAV target, and the pan-tilt motion is controlled so that the target always stays at the center of the telephoto camera's field of view; the UAV coordinates obtained by the tracking algorithm can be output to a UAV jamming system for directional jamming. The tracking algorithm includes the MeanShift-based target tracking algorithm, the particle-filter-based target tracking algorithm, the KCF algorithm and the optical flow method.
Further, the image background prior knowledge in step (2) includes the sky background region, the complex background region and the exclusion region; in step (5), after the UAV target coordinates are obtained, jamming and control measures are also taken against the target.
Further, in step (2), the target detection algorithm includes the following sub-steps:
(2.1) Images are acquired continuously for inter-frame target detection, and a background difference algorithm is used to obtain the foreground image D_n of the current frame;
(2.2) The target set Track is initialized once to empty; Track is the set of target trajectories, and each trajectory in Track represents a suspected target (initialization happens only the first time, and a trajectory is a sequence of target points). Intra-frame target detection is performed on the current n-th frame image I_n input from (2.1) in order to correct the foreground image D_n:
If no target was detected in frame n-1, i.e. the previous frame, no correction is performed and the algorithm goes to step (2.3);
If a target was detected in frame n-1, i.e. the previous frame, so that the set Track_{n-1} is non-empty, but the foreground image has no response within a certain range of a trajectory's latest point (no foreground pixel of D_n lies within Euclidean distance d of that point), intra-frame detection is performed: the image block of I_n centered on the trajectory's latest point is convolved with the kernel HP of the same scale, and locations whose response exceeds the threshold thres are taken as the target detected within the frame and added to the output D_n of step (2.1), thereby correcting the change-detection result D_n.
Here Distance denotes the Euclidean distance and ∀ means "for all"; d is a constant taking 3-10, chosen according to the video sampling rate: the higher the sampling frame rate, the smaller d; the lower the sampling frame rate, the larger d. thres is a threshold taking 10-50: the larger thres, the higher the missed-detection rate; the smaller thres, the higher the false-alarm rate. HP is the convolution kernel, and its scale depends on the target scale; when the target scale is m rows × n columns, HP is the block matrix
HP = [A B C; D E F; G H K],
where A, B, C, D, F, G, H and K are each the matrix 1/9 × II, E is -8/9 × II, and II is an m × n matrix whose entries are all 1 (so HP is a center-surround kernel of size 3m × 3n);
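As an illustration only, the following Python sketch builds the block kernel HP for an assumed target scale of m rows × n columns and applies the intra-frame correction described above. The block layout [A B C; D E F; G H K] and the thresholding of the convolution response against thres are reconstructions from the parameter definitions given here, not a verbatim reproduction of the patent's formula; the helper names are made up for the example.

    import numpy as np
    from scipy.signal import convolve2d

    def build_hp(m, n):
        # Center-surround block kernel HP for a target of m rows x n columns:
        # eight surrounding blocks of 1/9 * II and a central block of -8/9 * II,
        # II being an all-ones m x n matrix (assumed 3 x 3 block layout).
        ii = np.ones((m, n))
        s, c = ii / 9.0, -8.0 / 9.0 * ii
        return np.block([[s, s, s], [s, c, s], [s, s, s]])

    def intra_frame_correction(frame, last_point, d, thres, hp):
        # Hypothetical correction: when no foreground responded within distance d
        # of the trajectory's latest point, convolve with HP and mark strong
        # responses near that point as foreground (binary mask returned).
        response = np.abs(convolve2d(frame.astype(float), hp, mode="same"))
        mask = np.zeros(frame.shape, dtype=np.uint8)
        y, x = last_point
        y0, y1 = max(0, y - d), min(frame.shape[0], y + d + 1)
        x0, x1 = max(0, x - d), min(frame.shape[1], x + d + 1)
        mask[y0:y1, x0:x1] = (response[y0:y1, x0:x1] > thres).astype(np.uint8)
        return mask

    hp = build_hp(5, 5)                                   # assumed 5 x 5 target scale
    frame = np.random.randint(0, 255, (120, 160), dtype=np.uint8)
    print(intra_frame_correction(frame, (60, 80), d=5, thres=30, hp=hp).sum())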
(2.3) Connected domains of the n-th frame foreground image D_n are labeled and clustered with the DP clustering algorithm, giving the suspected target set O_n. The t suspected target sets {O_{n-t+1} ... O_n} from frame n-t+1 to the current frame are saved, and {O_{n-t+1} ... O_n} is output into a linked list list; t can take 5-15: the larger t, the more accurate the result but the longer the lag; the smaller t, the better the real-time performance but the less stable the result. (For each frame, D_n is processed by connected-domain labeling and clustering to obtain that frame's target set O_n; the t per-frame results O_n are all kept in list for subsequent processing.)
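For illustration, a minimal sketch of step (2.3) under the assumption that each connected component's centroid is taken as one candidate point; OpenCV's connectedComponentsWithStats stands in for a specific labeling algorithm, and the DP clustering of components is simplified to centroid extraction.

    import collections
    import cv2
    import numpy as np

    t = 10                                  # number of frames kept (5-15 per the text)
    history = collections.deque(maxlen=t)   # the linked list "list" holding {O_n}

    def suspected_targets(foreground_dn, min_area=4):
        # Label connected domains of D_n and return one candidate (centroid) per component.
        num, labels, stats, centroids = cv2.connectedComponentsWithStats(foreground_dn, connectivity=8)
        return [tuple(centroids[k]) for k in range(1, num)          # label 0 is background
                if stats[k, cv2.CC_STAT_AREA] >= min_area]

    dn = np.zeros((120, 160), dtype=np.uint8)                        # example foreground mask D_n
    dn[40:44, 60:64] = 255
    history.append(suspected_targets(dn))                            # keeps {O_{n-t+1} ... O_n}
    print(history[-1])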
(2.4) The linked list list obtained in step (2.3) is processed with a detection clustering algorithm to generate multiple trajectories; each trajectory serves as a suspected target, and the trajectories constitute the target set Track.
Further, in step (3) or step (5), after a suspected target or UAV target is detected, the coordinates of the suspected target or UAV target are calculated and sent to the pan-tilt, for controlling the telephoto camera to track the suspected target or UAV target. The pointing coordinates are computed from the target's pixel position and the wide-angle camera's field of view, where std_rows is the height of the image, targety is the coordinate of the target in the vertical direction, and yvision is the field of view in the vertical direction; std_cols is the width of the image, targetx is the coordinate of the target in the horizontal direction, and xvision is the field of view in the horizontal direction. targety and targetx are obtained from the detected target; the other parameters are parameters of the wide-angle camera itself.
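As a plausible reading of this coordinate calculation, the sketch below maps the pixel offset from the image center linearly to pan and tilt angles through the wide-angle camera's fields of view xvision and yvision; the linear mapping is an assumption, not necessarily the patent's exact formula.

    def target_to_pan_tilt(targetx, targety, std_cols, std_rows, xvision, yvision):
        # Assumed model: angle = (pixel offset from center) / image size * field of view.
        pan = (targetx - std_cols / 2.0) / std_cols * xvision
        tilt = (targety - std_rows / 2.0) / std_rows * yvision
        return pan, tilt

    # example with the embodiment's wide-angle field of view (xvision=58.3, yvision=43.6)
    print(target_to_pan_tilt(targetx=1600, targety=300, std_cols=1920, std_rows=1080,
                             xvision=58.3, yvision=43.6))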
Further, in step (2.4), the sub-steps of the detection clustering algorithm are as follows:
(2.4.1) The linked list list generated in step (2.3) is traversed; when the first target is found, it is taken as a suspected target, a new trajectory is created, and the suspected target is added to that trajectory. (If a target appears in the sky, every frame will detect it, so t frames yield t detections of the target; these t detections, arranged in the order of their appearance, form one trajectory.)
(2.4.2) Subsequent suspected targets are searched for in list, and each subsequent suspected target is compared with every existing suspected-target trajectory. When the following two criteria are met, the suspected target is added to that trajectory and the algorithm goes to step (2.4.3); otherwise a new trajectory is created, the suspected target is added to it, and the algorithm goes to step (2.4.3). The criteria are as follows:
(a) If the Euclidean distance (Distance) between the latest trajectory point of an existing trajectory, i.e. the coordinates of that target in the image, and the suspected target currently to be added to the trajectory is smaller than L_thres, the two trajectory points are considered spatially correlated and the suspected target point satisfies this criterion. L_thres is the spatial threshold, taking 3-20; its value depends on the video sampling rate: the higher the sampling rate, the smaller the value; the lower the sampling rate, the larger the value;
(b) If the time difference between the suspected target and the adjacent (latest) trajectory point of the current trajectory is less than the preset time difference T_thres, the suspected target point satisfies this criterion; T_thres is the time threshold, taking 1-3;
(2.4.3) The number of breakpoints of every trajectory is counted; if the number of breakpoints exceeds the maximum allowed number of breakpoints B_thres, taking 3-5, the trajectory is deleted. (A trajectory is counted over t frames, and each frame should contribute one target point to it; if a frame contributes none, i.e. no suspected target satisfies the two criteria above, the breakpoint count is incremented by 1.)
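A minimal sketch of the detection clustering of steps (2.4.1)-(2.4.3), assuming each candidate carries its (x, y) position and that the frame index serves as the timestamp; L_thres, T_thres and B_thres are the thresholds named above, and the greedy first-match assignment of candidates to trajectories is an illustrative choice.

    import math

    L_THRES, T_THRES, B_THRES = 10.0, 2, 4   # spatial, temporal and breakpoint thresholds

    def cluster_detections(frames):
        # frames: list of per-frame candidate lists [(x, y), ...], oldest first.
        # Returns trajectories as lists of (frame_index, x, y).
        tracks = []
        for n, candidates in enumerate(frames):
            matched = set()
            for (x, y) in candidates:
                chosen = None
                for tr in tracks:
                    fn, tx, ty = tr["points"][-1]
                    if (math.hypot(x - tx, y - ty) < L_THRES      # criterion (a)
                            and (n - fn) < T_THRES                # criterion (b)
                            and id(tr) not in matched):
                        chosen = tr
                        break
                if chosen is None:                                # start a new trajectory
                    chosen = {"points": [], "breaks": 0}
                    tracks.append(chosen)
                chosen["points"].append((n, x, y))
                matched.add(id(chosen))
            for tr in tracks:                                     # frames with no new point: breakpoint
                if id(tr) not in matched:
                    tr["breaks"] += 1
        return [tr["points"] for tr in tracks if tr["breaks"] <= B_THRES]

    # example: one target drifting right over six frames plus an isolated false alarm
    frames = [[(10 + 2 * k, 20)] for k in range(6)]
    frames[3].append((100, 100))
    print(cluster_detections(frames))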
Further, the pattern recognition algorithm mentioned in step (4) is a deep learning algorithm, preferably the Faster-RCNN algorithm, used to recognize the high-definition image sequence captured by the telephoto camera. The recognition steps are as follows:
(4.1) UAV training samples are prepared first and the position of the UAV is labeled manually; the prepared training samples are trained with the Faster-RCNN algorithm to obtain the network model parameters and determine the specific network model;
(4.2) Recognition is performed with the network model: the current frame image captured by the telephoto camera is input into the network model, yielding the target coordinates targetx and targety together with the confidence scores of each target class; targets whose UAV score exceeds T_thres are taken as UAV targets and the coordinates of the UAV target are output, T_thres being a threshold that generally takes 0.5-0.9.
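For illustration, a sketch of the score thresholding in step (4.2), assuming the detector returns (class_name, score, targetx, targety) tuples; the detector output shown is hypothetical and the call to the network itself is omitted.

    T_THRES = 0.8   # UAV confidence threshold (0.5-0.9 per the text)

    def pick_uav_target(detections, t_thres=T_THRES):
        # detections: iterable of (class_name, score, targetx, targety).
        # Returns the coordinates of the highest-scoring UAV detection above the
        # threshold, or None if no UAV target is confirmed.
        uavs = [(s, x, y) for (cls, s, x, y) in detections if cls == "uav" and s > t_thres]
        if not uavs:
            return None
        _, targetx, targety = max(uavs)
        return targetx, targety

    # hypothetical detector output for one telephoto frame
    print(pick_uav_target([("bird", 0.92, 400, 220), ("uav", 0.86, 512, 300)]))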
Further, the target tracking algorithm mentioned in step (5) is specifically as follows: the target position coordinates output in step (4) are used as the initial position coordinates for target tracking, target tracking is performed with the particle-filter-based target tracking algorithm, and the pan-tilt is controlled so that the target always stays at the center of the field of view.
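A compact sketch of a particle-filter tracker of the kind named here, with a random-walk motion model and a caller-supplied observation likelihood (a simple Gaussian around a simulated measurement in the example); it is a generic illustration, not the patent's specific tracker.

    import numpy as np

    class ParticleTracker:
        # Minimal bootstrap particle filter over a 2-D target position.
        def __init__(self, init_xy, n_particles=200, motion_std=5.0, seed=0):
            self.rng = np.random.default_rng(seed)
            self.particles = np.tile(np.asarray(init_xy, float), (n_particles, 1))
            self.motion_std = motion_std

        def step(self, likelihood):
            # likelihood(points) -> non-negative weight per particle.
            self.particles += self.rng.normal(0.0, self.motion_std, self.particles.shape)  # predict
            w = np.asarray(likelihood(self.particles), float) + 1e-12                       # update
            w /= w.sum()
            idx = self.rng.choice(len(w), size=len(w), p=w)                                 # resample
            self.particles = self.particles[idx]
            return self.particles.mean(axis=0)                                              # estimate

    # toy usage: track a point moving right, observed with noise
    rng = np.random.default_rng(1)
    truth = np.array([100.0, 100.0])
    tracker = ParticleTracker(init_xy=truth)
    for _ in range(20):
        truth = truth + np.array([3.0, 0.0])
        obs = truth + rng.normal(0, 2, 2)
        est = tracker.step(lambda p: np.exp(-np.sum((p - obs) ** 2, axis=1) / 50.0))
    print(np.round(est, 1))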
Further, the background difference algorithm is a GPU-accelerated spatially constrained mixture-of-Gaussians algorithm, with the following sub-steps:
(a) The mixture-of-Gaussians model parameters and the fixed background frame are initialized from the image background prior knowledge in step (2); here, the first input frame is used as the fixed background frame;
(b) The fixed background frame is subtracted from the incoming current n-th frame image I_n to obtain the spatial constraint matrix M: a pixel (i, j) is set in M when |I_n(i, j) - Z_n(i, j)| exceeds the threshold Thres, and cleared otherwise. Here I_n is the n-th frame image, Z_n is the fixed background frame, and Thres is a threshold taking 20-50: the lower Thres, the lower the real-time performance of the algorithm; the higher Thres, the more pseudo-background points appear in the background part and the lower the accuracy of the algorithm. i and j are the row and column indices of the pixel in the image;
(c) The foreground is detected with the mixture-of-Gaussians model at the pixels selected by the spatial constraint matrix M, giving the foreground image D_n: D_n(i, j) = GMM(I_n(i, j)) where M(i, j) is set, and 0 elsewhere. At the same time, the mixture-of-Gaussians model is updated. Here GMM denotes the mixture-of-Gaussians model;
(d) Every num frames, a fixed background frame is extracted from the mixture-of-Gaussians model, i.e. for each pixel the mean μ_{i,j} of the Gaussian distribution with the highest priority (the model parameters of the mixture-of-Gaussians model include a priority) is copied to the corresponding position of the fixed background frame, thereby updating the fixed background frame. num can take any integer greater than 1: the larger num, the larger the amount of computation as the background gradually changes, which reduces the real-time performance of the algorithm; the smaller num, the higher the frequency of extracting the fixed background frame, which also reduces the real-time performance of the algorithm.
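A CPU-only sketch of the spatially constrained background subtraction of steps (a)-(d), using OpenCV's MOG2 as the mixture-of-Gaussians model on grayscale frames. Running MOG2 on the full frame and masking its output by M afterwards is a simplification: the patent evaluates the mixture model only where M is set and maintains its own per-pixel priorities, and the GPU acceleration is not reproduced here.

    import cv2
    import numpy as np

    THRES, NUM = 30, 50          # difference threshold (20-50) and background refresh period num

    class SpatiallyConstrainedBG:
        def __init__(self, first_frame):
            self.gmm = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
            self.fixed_bg = first_frame.astype(np.int16)          # step (a): Z_n := first frame
            self.gmm.apply(first_frame)
            self.count = 0

        def apply(self, frame):
            m = np.abs(frame.astype(np.int16) - self.fixed_bg) > THRES   # step (b): constraint M
            fg = self.gmm.apply(frame)                                   # step (c): GMM foreground
            dn = np.where(m, fg, 0).astype(np.uint8)
            self.count += 1
            if self.count % NUM == 0:                                    # step (d): refresh Z_n
                self.fixed_bg = self.gmm.getBackgroundImage().astype(np.int16)
            return dn

    # usage on a synthetic grayscale sequence
    bg = SpatiallyConstrainedBG(np.zeros((120, 160), dtype=np.uint8))
    frame = np.zeros((120, 160), dtype=np.uint8)
    frame[50:60, 70:80] = 255
    print(int(bg.apply(frame).sum() > 0))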
Further, in step (2.3), the methods for labeling connected domains include the region growing method, the run-based connected-component labeling algorithm, the connected-component labeling algorithm based on contour tracing, the connected-component labeling algorithm based on run-length codes, and the connected-component labeling algorithm oriented to target feature extraction.
In general, compared with the prior art, the technical solution conceived above by the present invention can achieve the following beneficial effects:
(1) Setting corresponding monitoring strategies for different environments allows the system to adapt to different environments and improves the robustness and reliability of the system.
(2) Detecting suspected targets with the GPU-accelerated spatially constrained mixture-of-Gaussians algorithm used herein is more than 50 times faster than the traditional mixture-of-Gaussians algorithm, meeting the real-time requirement. Combined with the detection clustering algorithm and intra-frame detection, the method makes comprehensive use of both inter-frame and intra-frame information of the video sequence, eliminating a large number of false alarms and reducing missed targets while guaranteeing the detection rate, which ensures continuity of detection.
(3) The strategy of performing target detection and recognition with a dual-aperture photoelectric imaging system allows a detected target to be further accurately identified and confirmed, so that false alarms are excluded and true targets are confirmed, which ensures the robustness of the system.
(4) Target recognition is performed with a deep learning algorithm, so the recognition process is completed automatically. The convolutional neural network used in Faster-RCNN extracts a large number of features from the samples and is robust to changes in angle, illumination, scale and the like, so the recognition accuracy of the algorithm is high. Sharing the convolutional-layer weights between the RPN and the Faster R-CNN network greatly increases the speed of the algorithm, which can reach the requirement of real-time processing.
(5) The target is tracked with a particle filter algorithm, which adapts well to illumination changes and target occlusion compared with other target tracking algorithms; the algorithm generates only a limited number of particles for computation, so its real-time performance is good. After the target is detected, it is tracked and continuously jammed, which achieves the purpose of controlling the target.
In summary, the automatic UAV detection and identification system based on a dual-aperture photoelectric imaging system provided by the present invention uses a wide-angle camera and a telephoto camera as the UAV detection device and identification device, respectively. It acquires the surveillance image sequence, detects suspected UAV targets, identifies the suspected targets, computes their coordinates once a target is confirmed, and tracks the target. Using photoelectric sensors as the UAV detection and identification devices offers advantages such as high reliability and low cost. Guided by the monitoring data provided by the wide-angle camera, surveillance images are acquired with the telephoto camera and identified by means of pattern recognition, so the monitored target can be confirmed accurately, which greatly improves the reliability of the system and provides outstanding economic benefit and practical value.
Brief description of the drawings
Fig. 1 is a flowchart of the automatic UAV detection and recognition method based on a dual-aperture photoelectric imaging system provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of the UAV detection and identification device in an embodiment of the present invention.
Detailed description of the embodiments
In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention and are not intended to limit it. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with one another.
The flow of the automatic UAV detection and recognition method based on a dual-aperture photoelectric imaging system provided by this embodiment of the present invention is shown in Fig. 1.
In this embodiment, a Hikvision DS-2DF7320IW high-speed intelligent dome camera is used as the wide-angle camera part of the dual-aperture photoelectric sensor system, and an FY-SP2515F standard intelligent variable-speed pan-tilt carrying a Hikvision 3007 camera is used as the telephoto camera part of the dual-aperture photoelectric sensor system.
First, actual UAV surveillance video is acquired, and in every frame of the surveillance video that contains a UAV target the position of the UAV is labeled manually; the labeled frames are used as training samples for training the Faster-RCNN network model. The training steps are as follows:
(a) Initialize the Region Proposal Network (RPN) with model parameters pre-trained on ImageNet and fine-tune the RPN;
(b) Initialize the Faster-RCNN network with a model pre-trained on ImageNet, use the RPN obtained in (a) to perform region proposal extraction on our training samples as the input of Faster-RCNN, and train the Faster-RCNN network with our training samples;
(c) Fix the convolutional layers, initialize the RPN with the Faster R-CNN network obtained in (b), and train the RPN with our training samples;
(d) Fix the convolutional layers and train the Faster-RCNN network with the RPN obtained in (c) as input;
(e) Repeat steps (a)-(d), alternately training the RPN and the Faster-RCNN network, until the network output error falls within the required range.
During operation, the monitoring strategy is set: the sky background region, the complex background region and the exclusion region are selected manually and serve as prior knowledge for the subsequent processing. The target detection algorithm used in step (2) adopts different strategies for the different types of region; if nothing is set, all regions default to the complex background region. A threshold table is generated in which the position of each value corresponds to the position of each pixel of the image sequence; for pixels in the sky background region, the complex background region and the exclusion region, the value at the corresponding position in the threshold table σ is σ1, σ2 and σ3, respectively. Preferably, σ1, σ2 and σ3 take 0.9, 2.5 and 0.0, respectively.
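Purely as an illustration of the threshold table σ, the sketch below fills a per-pixel map from manually drawn region masks with σ1 = 0.9, σ2 = 2.5 and σ3 = 0.0; the mask shapes are invented for the example.

    import numpy as np

    SIGMA_SKY, SIGMA_COMPLEX, SIGMA_EXCLUDE = 0.9, 2.5, 0.0

    def build_threshold_table(shape, sky_mask, exclude_mask):
        # Per-pixel threshold table: complex background by default, overridden by the
        # manually selected sky background region and exclusion region.
        sigma = np.full(shape, SIGMA_COMPLEX, dtype=np.float32)
        sigma[sky_mask] = SIGMA_SKY
        sigma[exclude_mask] = SIGMA_EXCLUDE
        return sigma

    h, w = 1080, 1920
    sky = np.zeros((h, w), bool); sky[:400, :] = True          # upper part of the scene
    excl = np.zeros((h, w), bool); excl[900:, :300] = True     # e.g. a corner to ignore
    sigma = build_threshold_table((h, w), sky, excl)
    print(sigma[0, 0], sigma[600, 600], sigma[1000, 100])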
A video sequence is acquired with the Hikvision DS-2DF7320IW high-speed intelligent dome camera, and the first 30 frames are used to initialize the mixture-of-Gaussians model parameters of the device and to extract the fixed background frame. Suspected targets in the video sequence are then detected in real time according to the target detection algorithm described in step (2) of the summary of the invention.
After a suspected target is detected, the position of the target is calculated according to the calculation method described in step (4) of the summary of the invention; here, according to the parameters of the DS-2DF7320IW camera, xvision takes 58.3 and yvision takes 43.6. A control command is sent to the FY-SP2515F pan-tilt to make it point at the corresponding position; here the pan-tilt control command uses the PELCO-D protocol. After the pan-tilt reaches the specified position, the 3007 camera captures a high-definition picture, and the trained Faster-RCNN network model identifies whether a UAV target is present in the field of view. If there is a UAV target, i.e. the Faster-RCNN network outputs a target whose UAV confidence is greater than 0.8, the radio jamming module is switched on to jam the UAV; the UAV target coordinates output by the Faster-RCNN network are used as the initial position, the particle-filter-based target tracking algorithm tracks the UAV target, and the pan-tilt is controlled so that it always points at the target while directional jamming continues.
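As background to the pan-tilt control mentioned above, a sketch of a PELCO-D command frame: 7 bytes consisting of a sync byte 0xFF, the camera address, two command bytes, two data bytes, and a checksum equal to the low byte of the sum of bytes 2-6. The serial-port wiring and the particular opcodes accepted by the FY-SP2515F are not specified in the patent, so the pan-right example below is only illustrative.

    def pelco_d_frame(address, cmd1, cmd2, data1, data2):
        # Build a 7-byte PELCO-D frame; checksum is the low byte of the sum of bytes 2-6.
        body = [address & 0xFF, cmd1 & 0xFF, cmd2 & 0xFF, data1 & 0xFF, data2 & 0xFF]
        return bytes([0xFF] + body + [sum(body) & 0xFF])

    # illustrative "pan right at medium speed" command for camera address 1
    print(pelco_d_frame(address=0x01, cmd1=0x00, cmd2=0x02, data1=0x20, data2=0x00).hex(" "))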
The present invention performs target detection and recognition by combining the respective imaging characteristics of the wide-angle camera and the telephoto camera. A large field of view and distant regions can be monitored in real time and the target type identified accurately. Using different monitoring strategies for different background regions enhances the robustness and applicability of the system. Target recognition with a deep learning algorithm gives high recognition accuracy and strong robustness to illumination changes, occlusion, rotation and the like, and essentially meets the requirement of real-time target recognition. The GPU-accelerated spatially constrained mixture-of-Gaussians algorithm is more than 50 times faster than the original GMM algorithm, realizing real-time detection on high-definition surveillance images.
As will readily be understood by those skilled in the art, the above is only a description of preferred embodiments of the present invention and is not intended to limit the present invention; any modification, equivalent substitution or improvement made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (9)

1. An automatic UAV detection and identification method based on a dual-aperture photoelectric imaging system, characterized by comprising the following steps:
(1) using a dual-aperture optical imaging system, in which the wide-angle imaging system searches a wide area of sky for suspected targets and the telephoto imaging system confirms and tracks the suspected targets;
(2) for the image sequence captured by the wide-angle camera, detecting suspected targets in real time with a target detection algorithm according to prior knowledge of the image background;
(3) after a suspected target is detected, controlling the telephoto camera to point at the suspected target and shoot, obtaining a high-definition image sequence of the target;
(4) identifying the target images captured by the telephoto camera with a pattern recognition algorithm; if the target is judged to be a UAV target, outputting the target position coordinates as the initial position coordinates for target tracking and going to step (5); otherwise going to step (2);
(5) controlling the telephoto camera to track the UAV target and controlling the pan-tilt motion so that the target always stays at the center of the telephoto camera's field of view; the UAV coordinates obtained by the tracking algorithm can be output to a UAV jamming system for directional jamming; the tracking algorithm includes the MeanShift-based target tracking algorithm, the particle-filter-based target tracking algorithm, the KCF algorithm and the optical flow method;
wherein the pattern recognition algorithm mentioned in step (4) uses the Faster-RCNN deep learning algorithm to recognize the high-definition image sequence captured by the telephoto camera, with the following recognition steps:
(4.1) first preparing UAV training samples and manually labeling the position of the UAV; training the prepared training samples with the Faster-RCNN algorithm to obtain the network model parameters and determine the specific network model;
(4.2) performing recognition with the network model: inputting the current frame image captured by the telephoto camera into the network model to obtain the target coordinates targetx and targety together with the confidence scores of each target class, taking targets whose UAV score exceeds T_thres as UAV targets, and outputting the coordinates of the UAV target, where T_thres is a threshold taking 0.5-0.9.
2. The automatic UAV detection and identification method according to claim 1, characterized in that the dual-aperture optical imaging system consists of a wide-angle imaging system and a telephoto imaging system.
3. The automatic UAV detection and identification method according to claim 1, characterized in that the image background prior knowledge in step (2) includes a sky background region, a complex background region and an exclusion region; and in step (5), after the UAV target coordinates are obtained, jamming and control measures are also taken against the target.
4. The automatic UAV detection and identification method according to claim 1, characterized in that in step (2) the target detection algorithm comprises the following sub-steps:
(2.1) continuously acquiring images for inter-frame target detection and using a background difference algorithm to obtain the foreground image D_n of the current frame;
(2.2) initializing, once only, the target set Track to empty, where Track is the set of target trajectories and each trajectory in Track represents a suspected target; performing intra-frame target detection on the current n-th frame image I_n input from (2.1) to correct the foreground image D_n:
if no target was detected in frame n-1, i.e. the previous frame, performing no correction and going to step (2.3);
if a target was detected in frame n-1, i.e. the set Track_{n-1} is non-empty, but the foreground image has no response within a certain range, performing intra-frame detection and using the target detected within the frame to correct the inter-frame detection result D_n, the target being added to the output D_n of step (2.1);
where Distance denotes the Euclidean distance and ∀ means "for all"; d is a constant taking 3-10 whose choice depends on the video sampling rate: the higher the sampling frame rate, the smaller d, and the lower the sampling frame rate, the larger d; thres is a threshold taking 10-50: the larger thres, the higher the missed-detection rate, and the smaller thres, the higher the false-alarm rate; the convolution term denotes convolving the image block centered on the trajectory point with the matrix HP of the same scale; HP is the convolution kernel, whose scale depends on the target scale; when the target scale is m rows × n columns, HP = [A B C; D E F; G H K],
where A, B, C, D, F, G, H and K are each the matrix 1/9 × II, E is -8/9 × II, and II is an m × n matrix whose entries are all 1;
(2.3) labeling connected domains of the n-th frame foreground image D_n and clustering the connected domains with the DP clustering algorithm to obtain the suspected target set O_n; saving the t suspected target sets {O_{n-t+1} ... O_n} from frame n-t+1 to the current frame and outputting {O_{n-t+1} ... O_n} into a linked list list, where t takes 5-15: the larger t, the more accurate the result but the longer the lag; the smaller t, the better the real-time performance but the less stable the result;
(2.4) processing the linked list list obtained in step (2.3) with a detection clustering algorithm to generate multiple trajectories, each trajectory serving as a suspected target, the trajectories constituting the target set Track.
5. The automatic UAV detection and identification method according to claim 1, characterized in that in step (3) or step (5), after a suspected target or UAV target is detected, the coordinates of the suspected target or UAV target are calculated and sent to the pan-tilt for controlling the telephoto camera to track the suspected target or UAV target; in the coordinate calculation, std_rows is the height of the image, targety is the coordinate of the target in the vertical direction, and yvision is the field of view in the vertical direction; std_cols is the width of the image, targetx is the coordinate of the target in the horizontal direction, and xvision is the field of view in the horizontal direction; targety and targetx are obtained from the detected target, and the other parameters are parameters of the wide-angle camera itself.
6. The automatic UAV detection and identification method according to claim 1, characterized in that in step (2.4) the sub-steps of the detection clustering algorithm are as follows:
(2.4.1) traversing the linked list list generated in step (2.3); when the first target is found, taking it as a suspected target, creating a new trajectory and adding the suspected target to the trajectory;
(2.4.2) searching for subsequent suspected targets in list and comparing each subsequent suspected target with every existing suspected-target trajectory; when the following two criteria are met, adding the suspected target to that trajectory and going to step (2.4.3); otherwise creating a new trajectory, adding the suspected target to it, and going to step (2.4.3); the criteria are as follows:
(a) if the Euclidean distance (Distance) between the latest trajectory point of an existing trajectory and the trajectory point currently to be added is less than L_thres, the suspected target point is considered to satisfy this criterion; L_thres is the spatial threshold, taking 3-20, and its value depends on the video sampling rate: the higher the sampling rate, the smaller the value; the lower the sampling rate, the larger the value;
(b) if the time difference between the suspected target and the adjacent trajectory point of the current trajectory is less than the preset time difference T_thres, the suspected target point is considered to satisfy this criterion; T_thres is the time threshold, taking 1-3;
(2.4.3) counting the number of breakpoints of each trajectory and, if the number of breakpoints exceeds the maximum allowed number of breakpoints B_thres, taking 3-5, deleting the trajectory.
7. The automatic UAV detection and identification method according to claim 1, characterized in that the target tracking algorithm mentioned in step (5) is specifically as follows: target tracking is performed with the particle-filter-based target tracking algorithm;
specifically, the target position coordinates output in step (4) are used as the initial position coordinates for target tracking, target tracking is performed with the particle-filter-based target tracking algorithm, and the pan-tilt is controlled so that the target always stays at the center of the field of view.
8. The automatic UAV detection and identification method according to claim 4, characterized in that the background difference algorithm is a GPU-accelerated spatially constrained mixture-of-Gaussians algorithm, with the following sub-steps:
(a) initializing the mixture-of-Gaussians model parameters and the fixed background frame from the image background prior knowledge in step (2), the first input frame being used here as the fixed background frame;
(b) subtracting the fixed background frame from the incoming current n-th frame image I_n to obtain the spatial constraint matrix M, where I_n is the n-th frame image, Z_n is the fixed background frame, and Thres is a threshold taking 20-50: the lower Thres, the lower the real-time performance of the algorithm; the higher Thres, the more pseudo-background points appear in the background part and the lower the accuracy of the algorithm; i and j are the row and column indices of the pixel in the image;
(c) detecting the foreground with the mixture-of-Gaussians model under the spatial constraint matrix M to obtain the foreground image D_n, while the mixture-of-Gaussians model is updated, where GMM denotes the mixture-of-Gaussians model;
(d) extracting a fixed background frame from the mixture-of-Gaussians model every num frames, i.e. extracting, for each pixel, the mean μ_{i,j} of the Gaussian distribution with the highest priority to the corresponding position of the fixed background frame, thereby updating the fixed background frame; num is an integer greater than 1: the larger num, the greater the amount of calculation as the background gradually changes, which reduces the real-time performance of the algorithm; the smaller num, the higher the frequency of extracting the fixed background frame, which also reduces the real-time performance of the algorithm.
9. The automatic UAV detection and identification method according to claim 4, characterized in that in step (2.3) the methods for labeling connected domains include the region growing method, the run-based connected-component labeling algorithm, the connected-component labeling algorithm based on contour tracing, the connected-component labeling algorithm based on run-length codes, and the connected-component labeling algorithm oriented to target feature extraction.
CN201710014967.9A 2017-01-09 2017-01-09 A UAV detection and recognition method based on a dual-aperture photoelectric imaging system Expired - Fee Related CN106707296B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710014967.9A CN106707296B (en) 2017-01-09 2017-01-09 A UAV detection and recognition method based on a dual-aperture photoelectric imaging system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710014967.9A CN106707296B (en) 2017-01-09 2017-01-09 A UAV detection and recognition method based on a dual-aperture photoelectric imaging system

Publications (2)

Publication Number Publication Date
CN106707296A CN106707296A (en) 2017-05-24
CN106707296B true CN106707296B (en) 2019-03-05

Family

ID=58907117

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710014967.9A Expired - Fee Related CN106707296B (en) A UAV detection and recognition method based on a dual-aperture photoelectric imaging system

Country Status (1)

Country Link
CN (1) CN106707296B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220172380A1 (en) * 2019-04-08 2022-06-02 Shenzhen Vision Power Technology Co., Ltd. Three-dimensional light field technology-based optical unmanned aerial vehicle monitoring system

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107239077B (en) * 2017-06-28 2020-05-08 歌尔科技有限公司 Unmanned aerial vehicle moving distance calculation system and method
CN107886120A (en) * 2017-11-03 2018-04-06 北京清瑞维航技术发展有限公司 Method and apparatus for target detection tracking
CN107909600B (en) * 2017-11-04 2021-05-11 南京奇蛙智能科技有限公司 Unmanned aerial vehicle real-time moving target classification and detection method based on vision
CN108038415B (en) * 2017-11-06 2021-12-28 湖南华诺星空电子技术有限公司 Unmanned aerial vehicle automatic detection and tracking method based on machine vision
CN109815773A (en) * 2017-11-21 2019-05-28 北京航空航天大学 A vision-based detection method for low-slow and small aircraft
CN108170160A (en) * 2017-12-21 2018-06-15 中山大学 It is a kind of to utilize monocular vision and the autonomous grasping means of airborne sensor rotor wing unmanned aerial vehicle
CN109993767B (en) * 2017-12-28 2021-10-12 北京京东尚科信息技术有限公司 Image processing method and system
CN108388879B (en) * 2018-03-15 2022-04-15 斑马网络技术有限公司 Target detection method, device and storage medium
CN108614896A (en) * 2018-05-10 2018-10-02 济南浪潮高新科技投资发展有限公司 Bank Hall client's moving-wire track describing system based on deep learning and method
CN108985193A (en) * 2018-06-28 2018-12-11 电子科技大学 A kind of unmanned plane portrait alignment methods based on image detection
CN109360224A (en) * 2018-09-29 2019-02-19 吉林大学 An Anti-Occlusion Target Tracking Method Fusion KCF and Particle Filter
CN109543553A (en) * 2018-10-30 2019-03-29 中国舰船研究设计中心 The photoelectricity recognition and tracking method of low small slow target based on machine learning
CN109785562B (en) * 2018-12-29 2023-08-15 华中光电技术研究所(中国船舶重工集团有限公司第七一七研究所) Vertical photoelectric ground threat alert system and suspicious target identification method
CN109872483B (en) * 2019-02-22 2020-09-29 华中光电技术研究所(中国船舶重工集团有限公司第七一七研究所) Intrusion alert photoelectric monitoring system and method
CN109753903B (en) * 2019-02-27 2020-09-15 北航(四川)西部国际创新港科技有限公司 Unmanned aerial vehicle detection method based on deep learning
CN110062205A (en) * 2019-03-15 2019-07-26 四川汇源光通信有限公司 Motion estimate, tracking device and method
CN109946703B (en) * 2019-04-10 2021-09-28 北京小马智行科技有限公司 Sensor attitude adjusting method and device
CN110458866A (en) * 2019-08-13 2019-11-15 北京积加科技有限公司 Target tracking method and system
CN110443247A (en) * 2019-08-22 2019-11-12 中国科学院国家空间科学中心 A kind of unmanned aerial vehicle moving small target real-time detecting system and method
CN110347183A (en) * 2019-08-26 2019-10-18 中国航空工业集团公司沈阳飞机设计研究所 A kind of unmanned plane moves target striking method and system over the ground
CN110705524B (en) * 2019-10-24 2023-12-29 佛山科学技术学院 Visual-based monitoring method and device for unmanned aerial vehicle in specific area
CN111161305A (en) * 2019-12-18 2020-05-15 任子行网络技术股份有限公司 Intelligent unmanned aerial vehicle identification tracking method and system
CN111145217A (en) * 2019-12-27 2020-05-12 湖南华诺星空电子技术有限公司 KCF-based unmanned aerial vehicle tracking method
CN111652067B (en) * 2020-04-30 2023-06-30 南京理工大学 A UAV identification method based on image detection
CN111683204A (en) * 2020-06-18 2020-09-18 南方电网数字电网研究院有限公司 Unmanned aerial vehicle shooting method and device, computer equipment and storage medium
CN112288986A (en) * 2020-10-28 2021-01-29 金娇荣 An electric vehicle charging safety monitoring and early warning system
CN112669280B (en) * 2020-12-28 2023-08-08 莆田市山海测绘技术有限公司 Unmanned aerial vehicle inclination aerial photography right-angle image control point target detection method based on LSD algorithm
CN113111715B (en) * 2021-03-13 2023-07-25 浙江御穹电子科技有限公司 Unmanned aerial vehicle target tracking and information acquisition system and method
CN113326752B (en) * 2021-05-20 2024-04-30 淮阴工学院 Unmanned aerial vehicle-based photovoltaic power station identification method and system
CN113438399B (en) * 2021-06-25 2022-04-08 北京冠林威航科技有限公司 Target guidance system, method for unmanned aerial vehicle, and storage medium
CN114219838B (en) * 2021-12-23 2024-12-10 中国民用航空总局第二研究所 A high-mobility small target detection method and system based on event signal
CN114119676B (en) * 2022-01-24 2022-08-09 西安羚控电子科技有限公司 Target detection tracking identification method and system based on multi-feature information fusion
CN116109956A (en) * 2023-04-12 2023-05-12 安徽省空安信息技术有限公司 Unmanned aerial vehicle self-adaptive zooming high-precision target detection intelligent inspection method
CN116188534B (en) * 2023-05-04 2023-08-08 广东工业大学 Indoor real-time human body tracking method, storage medium and equipment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1062525B1 (en) * 1998-03-10 2003-05-14 Riegl Laser Measurement Systems Gmbh Method for monitoring objects or an object area
CN102291569A (en) * 2011-07-27 2011-12-21 上海交通大学 Double-camera automatic coordination multi-target eagle eye observation system and observation method thereof
CN104008371A (en) * 2014-05-22 2014-08-27 南京邮电大学 Regional suspicious target tracking and recognizing method based on multiple cameras
CN104197928A (en) * 2014-08-29 2014-12-10 西北工业大学 Multi-camera collaboration-based method for detecting, positioning and tracking unmanned aerial vehicle
CN105898107A (en) * 2016-04-21 2016-08-24 北京格灵深瞳信息技术有限公司 Target object snapping method and system


Also Published As

Publication number Publication date
CN106707296A (en) 2017-05-24

Similar Documents

Publication Publication Date Title
CN106707296B (en) A UAV detection and recognition method based on a dual-aperture photoelectric imaging system
CN103824070B (en) A kind of rapid pedestrian detection method based on computer vision
CN106447680B (en) The object detecting and tracking method that radar is merged with vision under dynamic background environment
CN103778645B (en) Circular target real-time tracking method based on images
CN109785363A (en) A kind of unmanned plane video motion Small object real-time detection and tracking
CN110264493B (en) A method and device for tracking multi-target objects in motion state
CN108447091A (en) Object localization method, device, electronic equipment and storage medium
CN108038415B (en) Unmanned aerial vehicle automatic detection and tracking method based on machine vision
CN108776974B (en) A kind of real-time modeling method method suitable for public transport scene
CN109145803B (en) Gesture recognition method and device, electronic equipment and computer readable storage medium
CN102214309B (en) Special human body recognition method based on head and shoulder model
CN109102522A (en) A kind of method for tracking target and device
CN108731587A (en) A kind of the unmanned plane dynamic target tracking and localization method of view-based access control model
US8831284B2 (en) Object identification from image data captured from a mobile aerial platforms
JP6789876B2 (en) Devices, programs and methods for tracking objects using pixel change processed images
CN109828267A (en) The Intelligent Mobile Robot detection of obstacles and distance measuring method of Case-based Reasoning segmentation and depth camera
CN110287907A (en) A kind of method for checking object and device
CN110245566B (en) A long-distance tracking method for infrared targets based on background features
CN104376575A (en) Pedestrian counting method and device based on monitoring of multiple cameras
CN110991297A (en) Target positioning method and system based on scene monitoring
CN110503647A (en) Wheat plant real-time counting method based on deep learning image segmentation
CN113781526A (en) A livestock counting and identification system
CN116109950A (en) Low-airspace anti-unmanned aerial vehicle visual detection, identification and tracking method
CN202010257U (en) Ward round robot system based on Bayesian theory
CN117671529A (en) Unmanned aerial vehicle scanning measurement-based farmland water level observation device and observation method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190305

CF01 Termination of patent right due to non-payment of annual fee