CN106981073A - UAV-based method and system for real-time tracking of ground moving targets - Google Patents
UAV-based method and system for real-time tracking of ground moving targets
- Publication number
- CN106981073A CN106981073A CN201710206676.XA CN201710206676A CN106981073A CN 106981073 A CN106981073 A CN 106981073A CN 201710206676 A CN201710206676 A CN 201710206676A CN 106981073 A CN106981073 A CN 106981073A
- Authority
- CN
- China
- Prior art keywords
- target
- image
- pixel
- moving object
- tracking
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30181—Earth observation
- G06T2207/30232—Surveillance
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a UAV-based method and system for real-time tracking of ground moving targets. The target detection and recognition module of the ground control station is started to process the image sequence returned by the camera, yielding the size and center coordinates of the target's rectangular box on the ground station display. The target tracking module is then started and tracks the target with an algorithm-fusion strategy: if tracking is valid, the target localization result is output to the tracking-command generation module; if the target cannot be located, the target search module is started to search for the target and output its localization result to the tracking-command generation module. Because the target image must be kept at the center of the ground station display, the tracking-command generation module generates position and attitude adjustment commands for the UAV and uploads them through the wireless transmission device to the UAV flight control system, which adjusts the pose in real time. The invention offers high matching efficiency, is easy to implement, recognizes targets effectively, and avoids the influence of background noise.
Description
Technical Field
The invention belongs to the fields of UAV navigation and computer vision, and in particular relates to a method for automatically detecting and tracking targets with a UAV.
Background Art
UAVs offer high maneuverability, high resolution, good concealment, and flexible operation. They therefore have great advantages in target reconnaissance and tracking, with a larger surveillance range than traditional fixed cameras, and are mainly used in day-and-night aerial reconnaissance, traffic surveillance, military surveying and mapping, and similar fields. Using the video sensor carried by a UAV to track and analyze ground moving targets has major practical significance in both civil and military applications.
First, most video surveillance systems monitor an area of special interest with a stationary camera. The background is static while the moving foreground target moves; in that case target detection only requires background subtraction to achieve good results. In many situations, however, such as detection and tracking with a camera carried by a UAV, the background of the captured image sequence changes constantly, and detecting and tracking the target of interest becomes extremely difficult.
Second, tracking a single target does not mean there is only one moving object in the UAV's field of view; the scene may contain several moving objects, which interfere with detecting and tracking the true target of interest and prevent effective target identification. Background noise also exists: shadows or illumination, for example, can leave the extracted target incomplete or with holes at its center. In these situations, target detection and recognition become considerably more difficult.
The terms used in the present invention are explained as follows:
UAV: an unmanned aircraft operated by radio remote-control equipment and an onboard program-control device, including unmanned fixed-wing aircraft, unmanned helicopters, and multi-rotor UAVs.
Wireless transmission device: a communication device using the MAVLink protocol; the communication band is generally 2.4 GHz.
Shi-Tomasi corner: an image feature point detection method that represents local features of the image and is robust to changes in brightness, blur, rotation, and viewing angle.
FRI: the neighborhood image centered on a corner point; in the present invention a 30×30 square region.
Bhattacharyya coefficient: a value measuring the similarity between the target model and a candidate model region; the larger the value, the greater the similarity between the regions.
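As a minimal illustration of the last term above, the following Python sketch computes the Bhattacharyya coefficient of two histograms (function name and normalization details are illustrative, not from the patent):

```python
import numpy as np

def bhattacharyya_coefficient(p, q):
    """Similarity of two histograms; ranges over [0, 1], larger means more similar."""
    p = np.asarray(p, dtype=np.float64) / np.sum(p)  # normalize to unit mass
    q = np.asarray(q, dtype=np.float64) / np.sum(q)
    return float(np.sum(np.sqrt(p * q)))
```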
Summary of the Invention
The present invention aims to provide a UAV-based method and system for real-time tracking of ground moving targets, solving the problem of difficult target detection and recognition in the prior art.
To solve the above technical problem, the technical solution adopted by the present invention is a UAV-based method for real-time tracking of ground moving targets, comprising the following steps:
1) The UAV patrols along a predetermined flight trajectory and transmits the captured image sequence to the ground control station, where the target of interest is detected in the UAV's field of view;
2) The size and center position of the above target's rectangular box in the two-dimensional image are extracted;
3) Using the rectangular box size and center position, the output data of the mean shift algorithm and the Kalman filter algorithm are fused, and the final target localization result is output as a data-weighted combination.
After step 3), the UAV flight mode is adjusted according to the target localization result so that the moving target lies in the central area of the ground station display.
The specific implementation of step 2) includes:
1) Extract the Shi-Tomasi corner sets of two adjacent frames of the image sequence captured by the UAV;
2) Construct synthetic base descriptors for the Shi-Tomasi corner sets of the two frames;
3) Perform feature matching on the Shi-Tomasi corner sets carrying synthetic base descriptors to obtain the corner matching pairs of the two adjacent frames;
4) From the corner matching pairs obtained in step 3), estimate the background motion transformation matrix with the RANSAC method and perform image background motion compensation;
5) Apply a frame-difference operation to the motion-compensated adjacent frames to obtain a frame-difference image, and binarize it;
6) Apply morphological filtering to the frame-difference image, separate and extract the target information, and obtain the size and center position of the target's rectangular box.
The synthetic base descriptors of all corners in two adjacent frames are generated as follows (a Python sketch follows this description):
1) Binarize the neighborhood image FRI of every feature point in the two adjacent frames: compute the average gray value of the FRI; a pixel of the FRI whose value is greater than the average gray value is set to 1, otherwise it is set to 0;
2) Divide every 30×30 FRI of the two adjacent frames into 6×6 sub-regions of size 5×5. A synthetic base image is a square of 5×5 black and white elements; the number of black pixels in a synthetic base image is half the pixels of an FRI sub-region, and the number of synthetic base images is M = ⌈K·ln(N/K)⌉, where N is the number of pixels of an FRI sub-region and K is the number of black pixels in a synthetic base image;
3) For any FRI of step 2), compare all of its sub-regions with the synthetic base image set in left-to-right, top-to-bottom order. Each sub-region generates a 9-dimensional vector; combining the 9-dimensional vectors of the 36 sub-regions finally forms a 324-dimensional synthetic base descriptor.
The 9-dimensional vector of an FRI sub-region is generated as follows: the comparison value of a sub-region against one synthetic base image of the set is the number of pixel positions at which both have the same black pixel. The synthetic base images are compared in left-to-right, top-to-bottom order; a sub-region, following the above comparison rule and order, is compared one by one with all synthetic base images in the set, giving 9 integer values that form the 9-dimensional vector.
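The following Python sketch implements the comparison just described, assuming 1 encodes a black pixel in both the binarized FRI and the base images (function and variable names are illustrative):

```python
import numpy as np

def synthetic_base_descriptor(fri_binary, base_images):
    """fri_binary: 30x30 array of 0/1; base_images: nine 5x5 arrays of 0/1,
    each containing 13 pseudo-randomly placed black (1) pixels.
    Returns the 324-dimensional descriptor (36 sub-regions x 9 comparisons)."""
    descriptor = []
    for r in range(0, 30, 5):              # sub-regions top to bottom ...
        for c in range(0, 30, 5):          # ... and left to right within a row
            sub = fri_binary[r:r + 5, c:c + 5]
            for base in base_images:       # compare with every base image in order
                # count positions where both images have a black pixel
                descriptor.append(int(np.sum((sub == 1) & (base == 1))))
    return np.array(descriptor, dtype=np.int32)
```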
The specific steps of target information separation and extraction include (a sketch follows below):
a) Traverse each filtered frame-difference image, top to bottom and left to right;
b) If a pixel satisfies both conditions, binarized value 1 and no number yet, assign the pixel a new number;
c) Traverse the eight-neighborhood of the newly numbered pixel and, under the condition of step b), give the qualifying pixels in the eight-neighborhood the same number as the newly numbered pixel; for pixels in the eight-neighborhood that do not satisfy the condition, return to step b);
d) The operation ends once every pixel with value 1 in the frame-difference image has been traversed and numbered.
The rectangular box is determined as follows: after each filtered frame-difference image has been scanned, every pixel with value 1 carries a number; pixels with the same number belong to the same object, and connected together they form a moving object. Suppose there are m moving objects. For the first moving object, the rectangular box is obtained as follows: traverse from the first marked pixel to the last marked pixel, save the minimum and maximum of the x and y coordinates of the marked pixels, recorded as xmin, ymin, xmax, ymax, and draw the rectangular box with (xmin, ymin) and (xmax, ymax) as its diagonal corners.
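A minimal Python sketch of this box extraction from a label image (the labeling itself is sketched later in the detailed description; names are illustrative):

```python
import numpy as np

def bounding_box(labels, k):
    """Diagonal corners (xmin, ymin), (xmax, ymax) of the pixels labeled k."""
    ys, xs = np.nonzero(labels == k)       # coordinates of object k's pixels
    return (int(xs.min()), int(ys.min())), (int(xs.max()), int(ys.max()))
```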
The present invention also provides a system for real-time tracking of ground moving targets, comprising:
a UAV, used to patrol along a predetermined flight trajectory and transmit the captured image sequence to the ground control station;
a wireless transmission device, providing a communication channel for data transmission between the UAV and the ground control station;
a ground control station, used to detect the target of interest in the UAV's field of view, extract the size and center position of the target's rectangular box in the two-dimensional image, fuse the output data of the mean shift algorithm and the Kalman filter algorithm using that information, and output the final target localization result as a data-weighted combination.
Correspondingly, the system further comprises a tracking-command generation module, used to adjust the UAV flight mode according to the target localization result so that the moving target lies in the central area of the ground station display.
The ground control station comprises:
a detection and recognition module, used to detect the target of interest in the UAV's field of view and extract the size and center position of the target's rectangular box in the two-dimensional image;
a target tracking module, which uses the rectangular box size and center position to fuse the output data of the mean shift algorithm and the Kalman filter algorithm and outputs the final target localization result as a data-weighted combination;
a target search module, which relocates the target with a sequential search method when the tracked target is lost;
a tracking-command generation module, which generates tracking commands according to the imaging region of the tracked target on the ground station display so that the target is brought to the center of the display.
Compared with the prior art, the present invention has the following beneficial effects: the target detection and tracking process requires no manual participation; the synthetic base descriptor used for feature point matching is robust to scale, rotation, illumination, and blur changes and matches efficiently; and its generation involves no floating-point operations, so it is friendly to image-processing hardware platforms, easy to implement, recognizes targets effectively, and avoids the influence of background noise.
Brief Description of the Drawings
Figure 1 is a structural diagram of the UAV system;
Figure 2 is a flowchart of the UAV system's background motion parameter estimation method based on synthetic base descriptors;
Figure 3 is a diagram of target information separation and extraction;
Figure 4(a) is the synthetic base image set; Figure 4(b) is a binarized FRI; Figure 4(c) is the comparison value of the first FRI sub-region with the first synthetic base image; Figure 4(d) is the comparison value of the first FRI sub-region with the second synthetic base image;
Figure 5 is a flowchart of moving target separation and information extraction;
Figure 6 is a flowchart of the UAV system's algorithm fusion and search strategy;
Figure 7 is a flowchart of the UAV system's hierarchical search-sequence strategy;
Figure 8 is a schematic diagram of the regions of the UAV system's ground station display;
Figure 9 shows two arbitrary adjacent frames of a UAV video sequence;
Figure 10 is a corner matching image based on synthetic base descriptors;
Figure 11 is a frame-difference detection result image;
Figure 12 is a target detection image after morphological filtering;
Figure 13 is a target separation and information extraction image.
Detailed Description
Figure 1 shows the composition of the UAV system, which includes the UAV, a camera, a wireless transmission device, and a ground control station. The UAV serves as the carrier of the camera and extends the camera's shooting range. The wireless transmission device provides the communication channel for downloading the image sequence collected by the UAV and for uploading flight control commands from the ground station. The ground control station contains four modules: the target detection and recognition module, the target tracking module, the target search module, and the tracking-command generation module.
The UAV tracking system is implemented as follows:
1. The UAV patrols the user-specified flight area along a pre-planned flight path. The camera sends the captured image sequence through the wireless transmission device down to the target detection and recognition module of the ground control station for processing, yielding the target's imaging position and rectangular box size on the ground station display. Two arbitrary adjacent frames of the image sequence captured by the UAV are shown in Figure 9.
2. Start the UAV's target detection and recognition module, detect the target of interest in the UAV's field of view, and extract the size and center position of the target's rectangular box on the display. The target detection and recognition module runs in two stages: background motion parameter estimation based on synthetic base descriptors, followed by target information separation and extraction. The first stage is implemented as follows; Figure 2 is a flowchart of the background motion parameter estimation method based on synthetic base descriptors:
1) Extract the feature points of the starting frame; Shi-Tomasi corners are used because of their efficiency. Let the starting frame be X and define an autocorrelation function F at pixel s as:
F(s, δs) = Σ_{x∈W} [X(x + δs) − X(x)]²  (1)
where δs denotes the displacement and W denotes a window centered on s.
Applying a first-order Taylor expansion to X(x + δs), the above formula can be rewritten as:
F(s, δs) ≈ δsᵀ Λ δs,  Λ = Σ_{x∈W} ∇X(x) ∇X(x)ᵀ  (2)
where ∇X is the first-order derivative of the image and Λ is the precision matrix. The feature point extraction criterion is that the minimum eigenvalue of the precision matrix exceeds a constant, i.e.:
Q(s) = min{λ1, λ2} > K  (3)
where K is an empirical threshold, generally between 0.05 and 0.5.
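In OpenCV the same corners can be obtained with goodFeaturesToTrack, sketched below; note that its qualityLevel is a threshold relative to the strongest corner rather than the absolute threshold K of Eq. (3), and all parameter values here are illustrative:

```python
import cv2

gray = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2GRAY)
# Shi-Tomasi corners: keeps points whose smaller eigenvalue is large enough
corners = cv2.goodFeaturesToTrack(gray, maxCorners=500,
                                  qualityLevel=0.05, minDistance=10)
```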
2) Binarize the corner neighborhoods. A 30×30 square neighborhood of the feature point is generally a reasonable choice, balancing complexity and accuracy. To generate the descriptor, the FRI is binarized, which requires the average gray value of the feature point neighborhood; the average gray value g of the FRI is computed as:
g = (1/p) Σ_{(x,y)∈FRI} I(x, y)  (4)
where p is the number of pixels of the FRI, here 900, and I(x, y) is the gray value of a pixel of the FRI.
Then, a pixel in the feature point neighborhood whose value is greater than g is set to 1, and a pixel whose value is smaller than g is set to 0. This yields the binarized FRI, which preserves the structural information in the neighborhood of the key point and lays the foundation for generating the feature point descriptor in the next step.
3) Construct the corner descriptor. First divide the 30×30 FRI into 6×6 sub-regions of 5×5. So that an FRI sub-region can be compared element by element with a synthetic base image, each synthetic base image has the same size as an FRI sub-region. A synthetic base image is a square region composed of black and white elements; the number of synthetic base images is determined by the synthesis basis function:
M = ⌈K · ln(N/K)⌉  (5)
where N is the number of pixels of a sub-region, K is the number of black pixels in a synthetic base image, and M is the number of synthetic base images (SBIs), which can uniquely characterize a feature point.
To improve the real-time performance of the algorithm, the number of synthetic base images should naturally be as small as possible; the function attains its minimum when K is half of N, and a fractional K is rounded up. For example, when a 30×30 FRI is divided into 6×6 sub-regions of 5×5, N is 25 and K is 13, and the number of synthetic base images is 13·ln(25/13), i.e. 9; when a 30×30 FRI is divided into 2×2 sub-regions of 15×15, N is 225 and K is 113, and the number is 113·ln(225/113), i.e. 78. The algorithm is illustrated with the 5×5 sub-region example of Figures 4(a) to 4(d):
Figure 4(a) is the synthetic base image set, composed of 9 synthetic base images. In each synthetic base image, 13 pixels are black and the rest are white; the 13 black pixels are distributed pseudo-randomly over the 5×5 region, with the constraint that the distribution pattern of every synthetic base image is distinct. Figure 4(b) is the binarized FRI, divided into 36 sub-regions of 5×5. In left-to-right, top-to-bottom order, the first sub-region is compared with each synthetic base image; the comparison rule is to count the positions at which the two have the same black pixels. Each sub-region thus generates a 9-dimensional vector, which is the sub-region's descriptor, and each component lies between 0 and 13.
Following the comparison order above, the descriptions of the remaining 35 sub-regions are obtained. Finally the descriptors of the 36 sub-regions are combined, forming a 324-dimensional descriptor. Figure 4(c) shows the value obtained by comparing the first sub-region with the first synthetic base image, namely 6; Figure 4(d) shows the comparison of the first sub-region with the second synthetic base image, with value 7.
4) Corner matching based on synthetic base descriptors. A successful match of two feature points means their "distance" is the shortest. The most common measures of this distance are the Euclidean distance, the Mahalanobis distance, and so on, but their computational complexity is unacceptable for high-dimensional vectors. For this reason the L1 norm is used to measure the "distance" between feature points. To describe the matching of the feature point sets, suppose the current frame of the video sequence has m feature points and the next frame has n; the L1 norm measuring the distance between feature points of the two frames is:
d(xi, yj) = Σ_{k=1}^{w} |xi(k) − yj(k)|  (6)
where xi denotes the i-th synthetic base descriptor of the current frame, yj the j-th synthetic base descriptor of the next frame, and w the dimension of the descriptor, with 324 components.
The computation of synthetic base descriptors is illustrated in Figure 5: each row represents the descriptor of one corner, and the L1 norm gives the distance; in Figure 5 the distance between corner 1 and corner 2 is 3. The previous step gives the distance between any pair of feature points in the two images. To reduce the probability of false matches, a cross-matching method is used: compute the L1 norm distance d between the i-th corner of the current frame and all corners of the next frame, obtaining n distance values, and select the minimum as the candidate match, recorded as yj; then, following the same method, compute the distances between the j-th corner of the next frame and all corners of the previous frame, obtaining m distance values, and mark the index of the minimum as t. If t = i, then xi and yj can be judged a correctly matched pair of feature points; otherwise the match is considered wrong. Figure 10 shows the corner matching of aerial images obtained with the cross-matching method.
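A sketch of this cross-matching in Python over descriptor matrices (names are illustrative, assuming the 324-dimensional integer descriptors above):

```python
import numpy as np

def cross_match(desc_cur, desc_next):
    """desc_cur: (m, 324) descriptors of the current frame; desc_next: (n, 324)
    of the next frame. Returns (i, j) pairs that are mutual nearest neighbors
    under the L1 norm, i.e. matches that survive the cross check."""
    d = np.abs(desc_cur[:, None, :].astype(np.int64)
               - desc_next[None, :, :].astype(np.int64)).sum(axis=2)  # (m, n) L1 distances
    fwd = d.argmin(axis=1)   # best candidate j for every corner i
    bwd = d.argmin(axis=0)   # best candidate i for every corner j
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]
```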
5) Use the RANSAC algorithm to exclude corners on moving objects (outliers), then estimate the background transformation matrix. When estimating the motion parameters of the background, the corner matching pairs should come from the background corner group as far as possible; for the matching pairs of the previous step, the RANSAC algorithm must be used to remove the error interference of moving-target matching pairs, so that the computed background motion compensation parameters are more accurate. Since the image transformation takes the form of an eight-parameter projective model, at least four matching pairs are needed to solve for the background transformation matrix. The eight-parameter projective model is:
x′ = (a1·x + a2·y + a3) / (a7·x + a8·y + 1)
y′ = (a4·x + a5·y + a6) / (a7·x + a8·y + 1)  (7)
The RANSAC algorithm computes the background motion compensation matrix as follows:
a) First define all matching point pairs of the two images as the overall sample D, randomly select four matching pairs as one sample set Ji, and compute the background parameter model H(Ji) from the sample data.
b) From the instance H(Ji) computed in the previous step, determine the set of matching points in D whose geometric distance to H(Ji) is smaller than a threshold d; record it as S(H(Ji)) and call it the consensus set of the instance H(Ji).
c) Compute another consensus set S(H(Jk)) by steps a) and b). If S(H(Ji)) contains more matches than S(H(Jk)), keep the consensus set S(H(Ji)); otherwise keep S(H(Jk)).
d) After k rounds of random sampling, take the matching pairs of the largest consensus set as the correct matches, i.e. the background corner group.
e) From the determined background corner group, compute the background motion transformation matrix H by least squares.
The distance d used in step b) and the sampling count k are determined by Eqs. (8) and (9) respectively:
d = ‖xi′ − H·xi‖  (8)
k = ln(1 − z) / ln(1 − w⁴)  (9)
where (xi, xi′) is a matching pair from the overall sample, w is the probability of a good sample (inlier), and z is the required confidence that at least one of the k samples contains only inliers.
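In practice this step maps directly onto OpenCV's RANSAC homography estimation; a sketch follows (the point values and threshold are illustrative; ransacReprojThreshold plays the role of d in Eq. (8)):

```python
import cv2
import numpy as np

# Matched corner coordinates from the cross-matching step (dummy values here),
# as float32 arrays of shape (n, 1, 2).
pts_prev = np.float32([[10, 12], [200, 40], [85, 300], [150, 150], [40, 220]]).reshape(-1, 1, 2)
pts_next = np.float32([[12, 13], [203, 41], [88, 301], [153, 151], [160, 90]]).reshape(-1, 1, 2)

# RANSAC rejects corners on moving objects (outliers); H is the eight-parameter
# background transformation matrix.
H, inlier_mask = cv2.findHomography(pts_prev, pts_next,
                                    cv2.RANSAC, ransacReprojThreshold=3.0)
```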
The second stage of target detection and recognition, target information separation and extraction, follows the flow shown in Figure 3 and is implemented as follows:
1) Compute the frame-difference image. Because multiple moving objects exist in the UAV's field of view, a before-and-after frame-difference method is used to detect all moving objects. It is computed as:
E_{t−1} = |X_{t−1} − H1·X_{t−2}| ∧ |X_{t−1} − H2·X_t|  (10)
where X_{t−2}, X_{t−1}, X_t are any three consecutive frames of the video sequence, H1 and H2 are the background transformation matrices warping X_{t−2} and X_t into the coordinate frame of X_{t−1}, and E_{t−1} is the frame-difference image. The UAV's aerial image after this step is shown in Figure 11.
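A sketch of Eq. (10) with OpenCV, taking the pixelwise minimum of the two absolute differences as the intersection before binarization (this AND form and the names are assumptions consistent with the description):

```python
import cv2

def frame_difference(x_prev, x_cur, x_next, H1, H2):
    """Three-frame difference of Eq. (10): warp the neighboring frames into
    x_cur's coordinates with the estimated background homographies, then keep
    only pixels that differ from both, i.e. genuinely moving pixels."""
    h, w = x_cur.shape[:2]
    d1 = cv2.absdiff(x_cur, cv2.warpPerspective(x_prev, H1, (w, h)))
    d2 = cv2.absdiff(x_cur, cv2.warpPerspective(x_next, H2, (w, h)))
    return cv2.min(d1, d2)   # pixelwise minimum acts as the intersection
```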
2) Binarize the frame-difference image: binarize the image obtained in step 1) with a suitable threshold.
3) Morphological filtering: filter the binarized image of step 2) with morphological operations, which makes the segmentation of the individual moving objects more distinct. The morphological process is as follows:
a) Apply image erosion to remove isolated noise points.
b) Then apply image dilation, which expands the target's edges, fills in missing pits, and smooths the contour.
After the mathematical morphology processing, the detection result is fuller and the target regions are more distinct, which benefits the segmentation and information extraction of the individual moving objects. Figure 12 shows the aerial image after morphological filtering.
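Steps 2) and 3) in OpenCV, as a sketch (the input file, threshold, and kernel size are illustrative):

```python
import cv2

diff = cv2.imread("frame_difference.png", cv2.IMREAD_GRAYSCALE)  # E_{t-1} from Eq. (10)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
_, binary = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)  # step 2): binarize
eroded = cv2.erode(binary, kernel)       # step 3a): remove isolated noise points
filtered = cv2.dilate(eroded, kernel)    # step 3b): expand edges, fill pits, smooth contours
```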
4) Separate and extract the target information. To separate the multiple moving objects in each frame, each moving object must first be connected and associated: the moving objects of each frame are marked with different numbers, and finally the identical regions are selected out. To achieve this, the sequential labeling method is commonly used; it can mark and separate the moving objects, usually scanning the pixels of each frame top to bottom and left to right. The pixel template used in this method is of size 3×3. The specific steps are:
a) Traverse the pixels of each frame, top to bottom and left to right.
b) If a pixel satisfies two conditions, binarized value 1 and no number yet, assign the pixel a new number.
c) Traverse the eight-neighborhood of the pixel found in b), repeat the condition of b), and give the same number.
d) When the condition in c) is not satisfied, repeat the operation of b).
e) When all pixels with value 1 in the image have been traversed and numbered, the operation ends.
After each frame has been scanned, every pixel with value 1 carries a number; pixels with the same number belong to one object, and connected together they form a moving object. Suppose there are m objects; taking the first moving object as an example, its rectangular box is obtained as follows: traverse from the first marked pixel to the last marked pixel, save the minimum and maximum of the x and y coordinates of the marked pixels as xmin, ymin, xmax, ymax, and then draw the rectangular box, usually taking (xmin, ymin) and (xmax, ymax) as its diagonal corners. The rectangular boxes of the other moving objects are obtained in the same way. Figure 13 shows the effect of this step on two arbitrary adjacent frames of the UAV image sequence.
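OpenCV bundles this sequential labeling, the bounding boxes, and the centroids into one call; a sketch of the equivalent (assuming `filtered` is the binary image from the previous step):

```python
import cv2

num, labels, stats, centroids = cv2.connectedComponentsWithStats(filtered, connectivity=8)
for k in range(1, num):                      # label 0 is the background
    x, y, w, h, area = stats[k]
    top_left, bottom_right = (x, y), (x + w, y + h)   # diagonal corners of object k's box
```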
3. Start the target tracking module. The position and size of the tracked target's rectangular box obtained in the previous step are input to the two tracking algorithms of the tracking module. This step runs as follows:
1) First assume the target motion obeys a constant-velocity model; the Kalman filter outputs a localization result, recorded as the first target value ykf.
The Kalman filter predicts the current state from the previously estimated state with the transition model and updates it with the current measurement:
b(t) = A·b(t−1) + ωt,  z(t) = M·b(t) + εt  (11)
The Kalman filter gain K is then used to compute the current state estimate b(t):
b(t) = A·b(t−1) + K·[z(t) − M·A·b(t−1)]  (12)
Assuming the current moving target moves at constant velocity, A and M are set according to that model, where A is the state transition matrix, ωt controls the transition-model error, M is the measurement matrix, and εt denotes the measurement error; Vω and Vε are the covariances of ωt and εt respectively. In our application, the size and position of the detected object's bounding box are assigned to the state variable b(t) to initialize the Kalman filter.
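A minimal OpenCV sketch of this constant-velocity Kalman filter over the box center (the patent's state also carries the box size; the noise covariances and the measured center are illustrative):

```python
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)                     # state [cx, cy, vx, vy], measurement [cx, cy]
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)    # A: constant velocity
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)   # M
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2       # covariance of omega_t
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1   # covariance of epsilon_t

prediction = kf.predict()                       # predict from the transition model
cx, cy = 320.0, 240.0                           # detected box center (illustrative)
y_kf = kf.correct(np.array([[cx], [cy]], np.float32))  # update: first target value
```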
2) Use the mean shift tracking algorithm; the position of its target template has already been given by the target detection and recognition module, so it can output a localization result, recorded as the second target value yms. The specific procedure of the mean shift algorithm is well established and is not repeated here.
3) Use the weighted-sum data fusion method to output the localization result while the target is not lost. If the target is lost, the search module is enabled to relocate the target.
From the first target value ykf output by the first step and the second target value yms output by the second step, the data are fused by weighting with the following strategy. The Bhattacharyya coefficient is used to measure the similarity between the target model and the candidate region (the second target value). When the similarity is greater than 0.8, the second target value is considered fully trustworthy; when the similarity is greater than 0.5 and less than 0.8, the second target value is not fully trusted and a weighted data fusion is performed; when the similarity is less than 0.5, the target is considered occluded or its state changed, the target can be considered lost, and the target search module must be started to relocate the target. The data fusion in the three cases is decided by Eqs. (13), (14), and (15) respectively:
ρ ≥ 0.8: y = yms  (13)
0.5 ≤ ρ < 0.8: y = α·yms + (1 − α)·ykf  (14)
ρ < 0.5: y = NULL  (15)
where ρ is the similarity, α is the fusion weight, and yms and ykf are the target values of the mean shift algorithm and the Kalman filter algorithm respectively; the thresholds 0.8 and 0.5 are empirical.
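A sketch of this decision rule in Python; the middle-band weight is an assumption (here tied to the similarity itself), since the patent only states that a weighted fusion is performed:

```python
def fuse(y_ms, y_kf, rho):
    """Eqs. (13)-(15): trust mean shift at high similarity, blend the two
    target values in the middle band, and report a lost target otherwise."""
    if rho >= 0.8:
        return y_ms
    if rho >= 0.5:
        alpha = rho                      # assumed choice of fusion weight
        return alpha * y_ms + (1.0 - alpha) * y_kf
    return None                          # target lost: switch to the search module
```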
As seen above, when the output value is NULL, the fusion strategy considers the target lost due to occlusion or similar causes; the UAV system automatically switches from the tracking module to the target search module to relocate the region of the ground station display in which the target lies.
4) Figure 7 is the flowchart of the search sequence. When the tracked target is lost, the target search module is started. The module uses a sequential search method with two layers, which targets the cause of the loss more specifically and searches more efficiently.
The first layer is an equidistant search based on the frame-to-frame difference: yk+1 = yk + Δy, where Δy = yk − yk−1 (see the sketch after this list).
a) Suppose the currently processed frame of the image sequence is the K-th; yk is the target's center position at time K, and the target centers of the tracked image sequence are y0, y1, …, yk−1, yk, yk+1, ….
b) Using the equidistant frame-difference formula, compute the center position in frame K+1 from the position in frame K; take a candidate target at that position with the same size as the rectangular box output by the detection and recognition module, compute the candidate's color histogram, and then compute its similarity to the target template. If the similarity is greater than the set threshold of 0.75, the candidate template is trusted and the target is found; otherwise it is not trusted and the second-layer search strategy is entered.
The second layer is a local/global search strategy: search locally first, i.e. in the partition where the target was lost in the previous frame, re-search with the particle filter method. Concretely, if the target was lost in region 6 of the camera's imaging field, first scatter N particles uniformly in that region to relocate the target; if the target is still not found within K frames, use the region-wise particle filter method: run the particle filter tracking method in each of regions 1 to 9, each region filtering out one tracking result, then fuse the results of the regions by weighting, finally recovering the target's position.
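A minimal sketch of the first search layer; the acceptance test reuses the bhattacharyya_coefficient shown earlier, and histogram_at is an assumed helper that crops the candidate box at a position and computes its color histogram:

```python
def layer_one_search(y_k, y_k_minus_1, histogram_at, template_histogram):
    """Extrapolate the center with y_{k+1} = y_k + (y_k - y_{k-1}), take a
    candidate box of the detected size there, and accept it only if its color
    histogram is similar enough to the target template."""
    y_next = 2 * y_k - y_k_minus_1        # equidistant frame-difference step
    rho = bhattacharyya_coefficient(histogram_at(y_next), template_histogram)
    return (y_next, rho > 0.75)           # False means: fall through to layer two
```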
4. According to the target localization result output by the previous step, enable the tracking-command generation module and adjust the UAV flight mode so that the moving target lies in the central region of the image. Figure 8 shows the numbering of the image regions. Using this partition, the tracking-command generation module is enabled and sends commands through the wireless transmission module to the UAV's flight control system, adjusting the flight mode so that the target's imaging region at the current moment moves toward the central region (region 5). Specifically, the tracking-command generation module adjusts as follows (a sketch of the mapping follows this list):
Region 5: the central region of the image. If the target center lies in this region, keep the UAV's flight attitude unchanged and generate no tracking command.
Region 1: if the target center lies in this region, the tracking-command module generates a forward-left flight mode and controls the UAV's flight attitude so that the target image center lies in the central region.
Region 2: if the target center lies in this region, the tracking-command module generates a forward flight mode and controls the UAV's flight attitude so that the target image center lies in the central region.
Region 3: if the target center lies in this region, the tracking-command module generates a forward-right flight mode and controls the UAV's flight attitude so that the target image center lies in the central region.
Region 4: if the target center lies in this region, the tracking-command module generates a leftward flight mode and controls the UAV's flight attitude so that the target image center lies in the central region.
Region 6: if the target center lies in this region, the tracking-command module generates a rightward flight mode and controls the UAV's flight attitude so that the target image center lies in the central region.
Region 7: if the target center lies in this region, the tracking-command module generates a rear-left flight mode and controls the UAV's flight attitude so that the target image center lies in the central region.
Region 8: if the target center lies in this region, the tracking-command module generates a rearward flight mode and controls the UAV's flight attitude so that the target image center lies in the central region.
Region 9: if the target center lies in this region, the tracking-command module generates a rear-right flight mode and controls the UAV's flight attitude so that the target image center lies in the central region.
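A compact sketch of this region-to-command mapping (the 3×3 grid indexing and the command labels are illustrative):

```python
# Flight command per display region; region 5 is the image center.
COMMANDS = {1: "forward-left", 2: "forward",  3: "forward-right",
            4: "left",         5: "hold",     6: "right",
            7: "rear-left",    8: "rearward", 9: "rear-right"}

def region_of(cx, cy, width, height):
    """Index (1-9) of the 3x3 display-grid cell containing the target center."""
    col = min(int(3 * cx / width), 2)
    row = min(int(3 * cy / height), 2)
    return 3 * row + col + 1

def tracking_command(cx, cy, width, height):
    return COMMANDS[region_of(cx, cy, width, height)]
```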
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710206676.XA CN106981073B (en) | 2017-03-31 | 2017-03-31 | A method and system for real-time tracking of ground moving targets based on UAV |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710206676.XA CN106981073B (en) | 2017-03-31 | 2017-03-31 | A method and system for real-time tracking of ground moving targets based on UAV |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106981073A | 2017-07-25 |
CN106981073B (en) | 2019-08-06 |
Family
ID=59339192
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710206676.XA Expired - Fee Related CN106981073B (en) | 2017-03-31 | 2017-03-31 | A method and system for real-time tracking of ground moving targets based on UAV |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106981073B (en) |
Cited By (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107909600A (en) * | 2017-11-04 | 2018-04-13 | 南京奇蛙智能科技有限公司 | The unmanned plane real time kinematics target classification and detection method of a kind of view-based access control model |
CN108108697A (en) * | 2017-12-25 | 2018-06-01 | 中国电子科技集团公司第五十四研究所 | A kind of real-time UAV Video object detecting and tracking method |
CN108286959A (en) * | 2017-12-14 | 2018-07-17 | 彩虹无人机科技有限公司 | A kind of O-E Payload for UAV is detectd to be calculated and display methods according to region |
CN108446634A (en) * | 2018-03-20 | 2018-08-24 | 北京天睿空间科技股份有限公司 | The aircraft combined based on video analysis and location information continues tracking |
CN108534797A (en) * | 2018-04-13 | 2018-09-14 | 北京航空航天大学 | A kind of real-time high-precision visual odometry method |
CN108573498A (en) * | 2018-03-08 | 2018-09-25 | 李绪臣 | The instant tracking system of driving vehicle based on unmanned plane |
CN109032166A (en) * | 2018-03-08 | 2018-12-18 | 李绪臣 | Track the method for driving vehicle immediately based on unmanned plane |
CN109376660A (en) * | 2018-10-26 | 2019-02-22 | 天宇经纬(北京)科技有限公司 | A kind of target monitoring method, apparatus and system |
WO2019041569A1 (en) * | 2017-09-01 | 2019-03-07 | 歌尔科技有限公司 | Method and apparatus for marking moving target, and unmanned aerial vehicle |
WO2019041534A1 (en) * | 2017-08-29 | 2019-03-07 | 深圳市道通智能航空技术有限公司 | Target tracking method, unmanned aerial vehicle and computer-readable storage medium |
CN109446901A (en) * | 2018-09-21 | 2019-03-08 | 北京晶品特装科技有限责任公司 | A kind of real-time humanoid Motion parameters algorithm of embedded type transplanted |
CN109765939A (en) * | 2018-12-21 | 2019-05-17 | 中国科学院自动化研究所南京人工智能芯片创新研究院 | Cloud platform control method, device and the storage medium of unmanned plane |
2017-03-31: CN application CN201710206676.XA granted as patent CN106981073B; current status: not active (Expired - Fee Related)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101324956A (en) * | 2008-07-10 | 2008-12-17 | 上海交通大学 | Anti-Occlusion Moving Target Tracking Method Based on Mean Shift |
US20140232893A1 (en) * | 2012-11-26 | 2014-08-21 | Pixart Imaging, Inc. | Image sensor and operating method thereof |
CN103455797A (en) * | 2013-09-07 | 2013-12-18 | 西安电子科技大学 | Detection and tracking method of moving small target in aerial shot video |
CN106023257A (en) * | 2016-05-26 | 2016-10-12 | 南京航空航天大学 | Target tracking method based on rotor UAV platform |
Cited By (54)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113762252A (en) * | 2017-08-18 | 2021-12-07 | 深圳市道通智能航空技术股份有限公司 | Unmanned aerial vehicle intelligent following target determination method, unmanned aerial vehicle and remote controller |
CN113762252B (en) * | 2017-08-18 | 2023-10-24 | 深圳市道通智能航空技术股份有限公司 | Unmanned aerial vehicle intelligent following target determining method, unmanned aerial vehicle and remote controller |
US10719087B2 (en) | 2017-08-29 | 2020-07-21 | Autel Robotics Co., Ltd. | Target tracking method, unmanned aerial vehicle, and computer readable storage medium |
WO2019041534A1 (en) * | 2017-08-29 | 2019-03-07 | 深圳市道通智能航空技术有限公司 | Target tracking method, unmanned aerial vehicle and computer-readable storage medium |
WO2019041569A1 (en) * | 2017-09-01 | 2019-03-07 | 歌尔科技有限公司 | Method and apparatus for marking moving target, and unmanned aerial vehicle |
CN107909600A (en) * | 2017-11-04 | 2018-04-13 | 南京奇蛙智能科技有限公司 | A kind of vision-based real-time moving target classification and detection method for unmanned aerial vehicles |
CN107909600B (en) * | 2017-11-04 | 2021-05-11 | 南京奇蛙智能科技有限公司 | Unmanned aerial vehicle real-time moving target classification and detection method based on vision |
CN108286959A (en) * | 2017-12-14 | 2018-07-17 | 彩虹无人机科技有限公司 | A kind of method for calculating and displaying the reconnaissance imaging region of a UAV opto-electronic payload |
CN108108697A (en) * | 2017-12-25 | 2018-06-01 | 中国电子科技集团公司第五十四研究所 | A kind of real-time UAV Video object detecting and tracking method |
CN108108697B (en) * | 2017-12-25 | 2020-05-19 | 中国电子科技集团公司第五十四研究所 | Real-time unmanned aerial vehicle video target detection and tracking method |
WO2019127306A1 (en) * | 2017-12-29 | 2019-07-04 | Beijing Airlango Technology Co., Ltd. | Template-based image acquisition using a robot |
CN108573498A (en) * | 2018-03-08 | 2018-09-25 | 李绪臣 | A kind of UAV-based instant tracking system for moving vehicles |
CN108573498B (en) * | 2018-03-08 | 2019-04-26 | 上海申雪供应链管理有限公司 | UAV-based instant tracking system for moving vehicles |
CN109032166A (en) * | 2018-03-08 | 2018-12-18 | 李绪臣 | A kind of method for immediately tracking a running vehicle based on unmanned aerial vehicle |
CN109032166B (en) * | 2018-03-08 | 2020-01-21 | 深圳中琛源科技股份有限公司 | Method for immediately tracking running vehicle based on unmanned aerial vehicle |
CN109902591B (en) * | 2018-03-13 | 2023-10-27 | 北京影谱科技股份有限公司 | Automobile searching system |
CN109902591A (en) * | 2018-03-13 | 2019-06-18 | 北京影谱科技股份有限公司 | A kind of automobile search system |
CN108446634A (en) * | 2018-03-20 | 2018-08-24 | 北京天睿空间科技股份有限公司 | Aircraft continuous tracking method based on combination of video analysis and positioning information |
CN108534797A (en) * | 2018-04-13 | 2018-09-14 | 北京航空航天大学 | A kind of real-time high-precision visual odometry method |
CN109446901A (en) * | 2018-09-21 | 2019-03-08 | 北京晶品特装科技有限责任公司 | A kind of embedded-portable real-time humanoid motion parameter algorithm |
CN110471442A (en) * | 2018-09-24 | 2019-11-19 | 深圳市道通智能航空技术有限公司 | A kind of target observations method, relevant device and system |
CN109376660A (en) * | 2018-10-26 | 2019-02-22 | 天宇经纬(北京)科技有限公司 | A kind of target monitoring method, apparatus and system |
CN109765939A (en) * | 2018-12-21 | 2019-05-17 | 中国科学院自动化研究所南京人工智能芯片创新研究院 | Gimbal control method and device for unmanned aerial vehicle, and storage medium |
CN109828488A (en) * | 2018-12-27 | 2019-05-31 | 北京航天福道高技术股份有限公司 | A kind of acquisition-and-transmission-integrated dual-optical detection and tracking system |
CN109933087B (en) * | 2019-03-18 | 2021-12-10 | 西安爱生技术集团公司 | Unmanned aerial vehicle and ground maneuvering target virtual formation battle position keeping control method |
CN109933087A (en) * | 2019-03-18 | 2019-06-25 | 西安爱生技术集团公司 | A virtual formation battle position keeping control method between UAV and ground maneuvering target |
CN110189297A (en) * | 2019-04-18 | 2019-08-30 | 杭州电子科技大学 | A method for detecting appearance defects of magnetic materials based on gray level co-occurrence matrix |
CN110189297B (en) * | 2019-04-18 | 2021-02-19 | 杭州电子科技大学 | Magnetic material appearance defect detection method based on gray level co-occurrence matrix |
CN110097586B (en) * | 2019-04-30 | 2023-05-30 | 青岛海信网络科技股份有限公司 | Face detection tracking method and device |
CN110097586A (en) * | 2019-04-30 | 2019-08-06 | 青岛海信网络科技股份有限公司 | A kind of face detection and tracking method and device |
CN110120077A (en) * | 2019-05-06 | 2019-08-13 | 航天东方红卫星有限公司 | A kind of in-orbit relative radiometric calibration method for area-array cameras based on satellite attitude adjustment |
CN110473229A (en) * | 2019-08-21 | 2019-11-19 | 上海无线电设备研究所 | A kind of moving target detecting method based on self-movement feature clustering |
CN110473229B (en) * | 2019-08-21 | 2022-03-29 | 上海无线电设备研究所 | Moving object detection method based on independent motion characteristic clustering |
CN110930455B (en) * | 2019-11-29 | 2023-12-29 | 深圳市优必选科技股份有限公司 | Positioning method, positioning device, terminal equipment and storage medium |
CN110930455A (en) * | 2019-11-29 | 2020-03-27 | 深圳市优必选科技股份有限公司 | Positioning method, positioning device, terminal equipment and storage medium |
CN111160304B (en) * | 2019-12-31 | 2022-03-29 | 华中科技大学 | Local frame difference and multi-frame fusion ground moving target detection and tracking method |
CN111160304A (en) * | 2019-12-31 | 2020-05-15 | 华中科技大学 | A ground moving target detection and tracking method based on local frame difference and multi-frame fusion |
CN113496136A (en) * | 2020-03-18 | 2021-10-12 | 中强光电股份有限公司 | Unmanned aerial vehicle and image identification method thereof |
CN111476116A (en) * | 2020-03-24 | 2020-07-31 | 南京新一代人工智能研究院有限公司 | Rotor unmanned aerial vehicle system for vehicle detection and tracking and detection and tracking method |
CN111898434B (en) * | 2020-06-28 | 2021-03-19 | 江苏柏勋科技发展有限公司 | Video detection and analysis system |
CN111898434A (en) * | 2020-06-28 | 2020-11-06 | 江苏柏勋科技发展有限公司 | Screen detection and analysis system |
CN111798434A (en) * | 2020-07-08 | 2020-10-20 | 哈尔滨体育学院 | Martial arts competition area detection method based on Ranpac model |
WO2022027596A1 (en) * | 2020-08-07 | 2022-02-10 | 深圳市大疆创新科技有限公司 | Control method and device for mobile platform, and computer readable storage medium |
CN112766103A (en) * | 2021-01-07 | 2021-05-07 | 国网福建省电力有限公司泉州供电公司 | Machine room inspection method and device |
CN112766103B (en) * | 2021-01-07 | 2023-05-16 | 国网福建省电力有限公司泉州供电公司 | Machine room inspection method and device |
CN112927264B (en) * | 2021-02-25 | 2022-12-16 | 华南理工大学 | A UAV tracking and shooting system and its RGBD tracking method |
CN112927264A (en) * | 2021-02-25 | 2021-06-08 | 华南理工大学 | Unmanned aerial vehicle tracking shooting system and RGBD tracking method thereof |
CN113034547A (en) * | 2021-04-07 | 2021-06-25 | 中国科学院半导体研究所 | Target tracking method, digital integrated circuit chip, electronic device, and storage medium |
CN113034547B (en) * | 2021-04-07 | 2024-02-06 | 中国科学院半导体研究所 | Target tracking method, digital integrated circuit chip, electronic device, and storage medium |
CN113298788A (en) * | 2021-05-27 | 2021-08-24 | 南京航空航天大学 | Vision-based marine mobile platform tracking and identifying method |
CN115170995A (en) * | 2022-03-28 | 2022-10-11 | 上海赫千电子科技有限公司 | Target tracking method and system applied to unmanned aerial vehicle |
CN114815877A (en) * | 2022-03-28 | 2022-07-29 | 上海赫千电子科技有限公司 | Unmanned aerial vehicle target tracking method based on unmanned aerial vehicle mother vehicle |
CN115984335A (en) * | 2023-03-20 | 2023-04-18 | 华南农业大学 | A Method of Obtaining the Feature Parameters of Fog Droplets Based on Image Processing |
CN116415910A (en) * | 2023-03-27 | 2023-07-11 | 国网山东省电力公司建设公司 | Unmanned aerial vehicle-based power transmission line environment-friendly intelligent checking method |
Also Published As
Publication number | Publication date |
---|---|
CN106981073B (en) | 2019-08-06 |
Similar Documents
Publication | Title |
---|---|
CN106981073B (en) | A method and system for real-time tracking of ground moving targets based on UAV |
Han et al. | Deepvio: Self-supervised deep learning of monocular visual inertial odometry using 3d geometric constraints | |
Zhang et al. | Eye in the sky: Drone-based object tracking and 3d localization | |
US11270148B2 (en) | Visual SLAM method and apparatus based on point and line features | |
Ye et al. | Deep learning for moving object detection and tracking from a single camera in unmanned aerial vehicles (UAVs) | |
Chen et al. | Esvio: Event-based stereo visual inertial odometry | |
CN107491742B (en) | Long-term stable target tracking method for unmanned aerial vehicle | |
CN108446634B (en) | Aircraft continuous tracking method based on combination of video analysis and positioning information | |
Qian et al. | Robust visual-lidar simultaneous localization and mapping system for UAV | |
CN111382613B (en) | Image processing method, device, equipment and medium | |
Rozantsev et al. | Flight dynamics-based recovery of a UAV trajectory using ground cameras | |
Sanfourche et al. | Perception for UAV: Vision-Based Navigation and Environment Modeling. | |
CN108830286A (en) | A kind of automatic moving-target detection and tracking method for reconnaissance UAV
CN113223045A (en) | Vision and IMU sensor fusion positioning system based on dynamic object semantic segmentation | |
Zhou et al. | Tracking of multiple pixel targets using multiple cameras | |
Xiang et al. | UAV based target tracking and recognition | |
Zhao et al. | Vision based ground target tracking for rotor UAV | |
Guo et al. | Global-local MAV detection under challenging conditions based on appearance and motion | |
Zhou et al. | Information-efficient 3-D visual SLAM for unstructured domains | |
Rostum et al. | A review of using visual odometery methods in autonomous UAV Navigation in GPS-Denied Environment | |
Zhang et al. | A stereo SLAM system with dense mapping | |
Eendebak et al. | Landing spot selection for UAV emergency landing | |
CN117523461B (en) | Moving target tracking and positioning method based on airborne monocular camera | |
Li-Chee-Ming et al. | Determination of UAS trajectory in a known environment from FPV video | |
Li-Chee-Ming et al. | Augmenting visp’s 3d model-based tracker with rgb-d slam for 3d pose estimation in indoor environments |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20190806 |