
CN111369589B - A UAV tracking method based on multi-strategy fusion - Google Patents


Info

Publication number
CN111369589B
Authority
CN
China
Prior art keywords
aerial vehicle
unmanned aerial
image
uav
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010120410.5A
Other languages
Chinese (zh)
Other versions
CN111369589A (en)
Inventor
纪元法
何传骥
孙希延
付文涛
严素清
符强
王守华
黄建华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Electronic Technology filed Critical Guilin University of Electronic Technology
Priority to CN202010120410.5A priority Critical patent/CN111369589B/en
Publication of CN111369589A publication Critical patent/CN111369589A/en
Application granted granted Critical
Publication of CN111369589B publication Critical patent/CN111369589B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/12 Target-seeking control
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a UAV tracking method based on multi-strategy fusion. Taking the CenterNet deep-learning network as the structural backbone and combining it with a spectrum detection module and a servo control module, the method provides a new joint vision-spectrum evaluation algorithm that effectively computes the UAV's position in the video image and rotates the camera servo through the center point of that position. It accurately tracks a flying UAV within a 3 km range, displays the flying UAV in a more intuitive visual-tracking form, and solves the problem that UAVs are difficult to track in flight.

Description

A UAV tracking method based on multi-strategy fusion

Technical Field

The invention relates to the field of image processing, and in particular to a UAV tracking method based on multi-strategy fusion.

Background

A UAV generally refers to a powered, controllable, reusable unmanned aircraft capable of carrying out a variety of tasks. Compared with manned aircraft, UAVs offer light weight, a small radar cross-section, low operating cost, high flexibility, and no crew-safety concerns, so they can be widely used in military tasks such as reconnaissance and attack; on the civilian side they serve meteorological detection, disaster monitoring, geological exploration, map surveying and mapping, and many other fields. They have therefore drawn attention from more and more countries and are developing rapidly.

UAVs fly fast and generally have distinctive geometries that lack complete structural information, which makes them difficult to track in flight.

Summary of the Invention

The purpose of the present invention is to provide a UAV tracking method based on multi-strategy fusion, aiming to solve the problem that a UAV in flight is difficult to track.

To achieve the above purpose, the present invention provides a UAV tracking method based on multi-strategy fusion, comprising:

training UAV image samples with a CenterNet network to generate feature maps;

acquiring a UAV signal, and analyzing and processing it to obtain the UAV's direction parameters;

outputting a control signal based on the UAV's direction parameters to rotate the camera lens, capturing video images of the UAV, and obtaining an estimated position of the UAV;

applying image-block weighting to the captured video images, and obtaining the UAV's specific position in the video image with a joint vision-spectrum evaluation algorithm;

obtaining the UAV's center coordinates with an OpenCV function, and tracking the UAV in real time (the overall loop is sketched below).
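The following minimal sketch shows how these steps chain into one loop; spectrum.get_direction(), gimbal.point_at(), tracker.locate(), and gimbal.center_on() are hypothetical placeholder interfaces standing in for the spectrum detection, servo control, and CenterNet vision modules, not part of the invention:

```python
# A minimal sketch of the multi-strategy tracking loop, under the assumed
# placeholder interfaces named in the lead-in above.

import cv2

def track_uav(spectrum, gimbal, tracker, camera_id=0):
    cap = cv2.VideoCapture(camera_id)
    while cap.isOpened():
        # Coarse stage: spectrum detection yields a bearing within ~3 km,
        # and the servo slews the camera toward it.
        gimbal.point_at(spectrum.get_direction())
        ok, frame = cap.read()
        if not ok:
            break
        # Fine stage: the joint vision-spectrum evaluation finds the UAV in
        # the frame; the box center (x, y) then drives the servo directly.
        pos = tracker.locate(frame)      # None, or the UAV's (x, y)
        if pos is not None:
            gimbal.center_on(*pos)
        cv2.imshow("uav tracking", frame)
        if cv2.waitKey(1) == 27:         # Esc to quit
            break
    cap.release()
```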

Specifically, training the UAV image samples with the CenterNet network to generate feature maps comprises:

inputting the three RGB channels of the UAV image and outputting predicted values via convolutional-neural-network processing;

extracting features via network forward propagation to obtain the feature maps.

Specifically, acquiring the UAV signal and analyzing and processing it to obtain the UAV's direction parameters comprises:

detecting and acquiring UAV signals within a 3 km range;

performing data fusion based on the operating environment;

extracting UAV signal parameters through down-conversion, A/D sampling, digital channelization, and array signal processing;

performing amplitude-phase integrated direction finding with the parameters of different antennas, and comparing against a database to obtain the UAV model;

positioning with a multi-station direction-finding cross-location scheme to obtain the UAV's direction parameters.

Specifically, outputting a control signal based on the UAV's direction parameters to rotate the camera lens, capturing video images of the UAV, and obtaining the UAV's estimated position comprises:

reading the output UAV direction parameters;

computing the difference between the direction parameters and the image center;

computing the gimbal rotation through a mathematical model;

controlling the gimbal rotation through a position PD algorithm, whose model is as follows:

S(n) = Kp·e(n) + Kp·Td·[e(n) - e(n-1)]

where S(n) is the control output, Kp is the proportional control parameter, Td is the differential control parameter, e(n) is the difference between the current state value and the target value, and n is the control step; denoting Kp·Td by Kd gives:

S(n) = Kp·e(n) + Kd·[e(n) - e(n-1)].

Specifically, applying image-block weighting to the captured video image and obtaining the UAV's specific position in the video image with the joint vision-spectrum evaluation algorithm comprises:

dividing the image into three image blocks and assigning weighting parameters;

extracting the keypoints of each class of the feature map with the CenterNet network;

judging the confidence of the keypoints to obtain the specific position.

Specifically, judging the confidence of the keypoints to obtain the specific position comprises:

performing a first confidence judgment on the original camera image with the original CenterNet algorithm; if the keypoint response in the feature map is below threshold A, zooming the camera in by a fixed factor;

if the keypoint response in the feature map exceeds threshold A, performing a second confidence judgment with the CenterNet algorithm based on the weighting parameters and the feature-response calculation;

if the feature peak of the second confidence judgment exceeds threshold B, drawing and displaying a frame with an OpenCV function and returning the UAV's center coordinates (x, y) in the image.

Specifically, the feature response value is calculated by the following formula:

y(t) = (ω + B(n))·x(t),  t ≥ 0, n ≥ 0

where y(t) is the feature response value, t is the feature-point index, x(t) is the response value of each feature point computed by the original CenterNet algorithm, ω is the image-block weight, B(n) is the accuracy gained each time the camera zooms in by the fixed factor, and n is the number of zoom steps;

B(n) = (1.1)^n · β,  n ≥ 0

where β is an initial constant.

In the multi-strategy-fusion UAV tracking method of the present invention, spectrum detection and visual tracking are integrated: spectrum detection first locates the UAV coarsely, visual tracking then pinpoints its position, and the UAV's position is marked in the video in a more intuitive form, so that UAV users and monitors can observe the UAV's position more clearly.

Description of the Drawings

To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.

Figure 1 is a flowchart of the multi-strategy-fusion UAV tracking method of the present invention;

Figure 2 is an operational block diagram of the multi-strategy-fusion UAV tracking method of the present invention;

Figure 3 is the image-partition diagram of the multi-strategy-fusion UAV tracking method of the present invention.

Detailed Description

The embodiments of the present invention are described in detail below; examples of the embodiments are illustrated in the accompanying drawings, where the same or similar reference numerals throughout denote the same or similar elements or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary, intended to explain the present invention, and should not be construed as limiting it.

In the description of the present invention, it should be understood that terms indicating orientation or positional relationship such as "length", "width", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", and "outer" are based on the orientations or positional relationships shown in the drawings, are used only to facilitate and simplify the description of the present invention, and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation; they are therefore not to be construed as limiting the invention. In addition, in the description of the present invention, "plurality" means two or more, unless otherwise expressly and specifically defined.

Embodiment

Referring to Figure 1, the multi-strategy-fusion UAV tracking method of the present invention includes:

S101. Train UAV image samples with the CenterNet network to generate feature maps.

Input the three RGB channels of the UAV image and output predicted values via convolutional-neural-network processing.

A 24-bit RGB image, also called a true-color image, has three channels: R (red), G (green), and B (blue); the Halcon program and Halcon's bundled images can be used to get a feel for RGB images and gray values. The gray value of a three-channel image is the combination of the gray values of three single channels. Each channel ranges from 0 to 255: the larger the value, the brighter the image looks; the smaller, the darker. The deeper a given color appears in a region of the three-channel image, the larger that color component is there, and the brighter that region appears in the corresponding single channel.

Extract features via network forward propagation to obtain the feature map.

In forward propagation, each neuron has multiple inputs and one output, and a neuron's inputs may be the outputs of other neurons or the inputs of the whole network. A feature map is, in effect, the image formed after the convolutional neural network extracts features from the original image. From a probability standpoint, every point of each class in the feature map has its own probability, i.e. a response value; together these form a heatmap whose color depth reflects the response magnitude, and the point with the largest response is the point where the tracked target lies.

S102. Acquire the UAV signal, and analyze and process it to obtain the UAV's direction parameters.

Turn on the camera and the spectrum detection module; at this point the spectrum detection module owns the servo control module. UAV signals within a 3 km range are detected and captured, and the signal data are fused according to the operating environment. The remote-controller signal emitted from the ground and the image-transmission signal downlinked by the UAV are intercepted, and the signal parameters are extracted through down-conversion, A/D sampling, digital channelization, and array signal processing. Amplitude-phase integrated direction finding is performed with the parameters of the different antennas, the measured parameters are compared against a database to obtain the UAV model, and positioning based on a multi-station direction-finding cross-location scheme yields the UAV's direction parameters.
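The text names the multi-station direction-finding cross-location scheme without spelling out the math; in the common two-station simplification, the position fix is the intersection of the two measured bearing lines. A minimal sketch with illustrative station coordinates:

```python
# A minimal two-station cross-location sketch, under the simplifying
# assumption of planar geometry; station positions and bearings below
# are illustrative, not values from the invention.

import math

def cross_locate(p1, theta1, p2, theta2):
    """Intersect two bearing rays from stations p1 and p2 (angles in
    radians, counterclockwise from the +x axis); returns the (x, y) fix."""
    (x1, y1), (x2, y2) = p1, p2
    c1, s1 = math.cos(theta1), math.sin(theta1)
    c2, s2 = math.cos(theta2), math.sin(theta2)
    d = c2 * s1 - c1 * s2                 # equals sin(theta1 - theta2)
    if abs(d) < 1e-9:
        raise ValueError("bearings are parallel; no unique fix")
    t1 = (-s2 * (x2 - x1) + c2 * (y2 - y1)) / d
    return (x1 + t1 * c1, y1 + t1 * s1)

# Example: stations at (0, 0) and (10, 0); the bearing lines at 45 and
# 135 degrees intersect at the fix (5, 5).
print(cross_locate((0, 0), math.radians(45), (10, 0), math.radians(135)))
```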

S103. Output a control signal based on the UAV's direction parameters to rotate the camera lens, capture video images of the UAV, and obtain the UAV's estimated position.

Read the output UAV direction parameters, such as the UAV's direction and angle relative to the camera; compute the difference between the direction parameters and the image center; compute the gimbal rotation through a mathematical model; and control the gimbal rotation with the position PD algorithm, modeled as follows:

S(n) = Kp·e(n) + Kp·Td·[e(n) - e(n-1)]

Denoting Kp·Td by Kd gives:

S(n) = Kp·e(n) + Kd·[e(n) - e(n-1)]

where Kp is the proportional control parameter, Td is the differential control parameter, and e(n) is the difference between the current state value and the target value.

The center of the camera image is taken as the UAV's estimated position.
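A minimal sketch of this position PD law in Python; the gains and the pixel error below are illustrative values, not ones given by the invention:

```python
# A minimal sketch of S(n) = Kp*e(n) + Kd*[e(n) - e(n-1)], driving the
# gimbal so the target moves toward the image center.

class PositionPD:
    def __init__(self, kp, kd):
        self.kp, self.kd = kp, kd
        self.prev_error = 0.0

    def step(self, error):
        # error is e(n): the offset between the target and the image center.
        out = self.kp * error + self.kd * (error - self.prev_error)
        self.prev_error = error
        return out

pan = PositionPD(kp=0.02, kd=0.005)   # illustrative gains (deg per pixel)
error_x = 960 - 700                    # target at x=700 in a 1920-wide frame
print(pan.step(error_x))               # pan command for this control step
```

In practice one such controller would be instantiated per gimbal axis, with the derivative term Kd damping overshoot when the UAV crosses the frame quickly.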

S104. Apply image-block weighting to the captured video image, and obtain the UAV's specific position in the video image with the joint vision-spectrum evaluation algorithm.

The captured 1920×1080 video image takes (960, 540) as its center point. As shown in Figure 3, the whole image is divided into three blocks: image_inside, image_middle, and image_outside. image_inside is the central part of the image and should be the most likely to contain the UAV target, while image_outside, far from the image center, should be the least likely. Let the weighting parameters of image_inside, image_middle, and image_outside be ω1, ω2, and ω3 respectively; then ω2 = 0.8ω1 and ω3 = 0.4ω1.
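A minimal sketch of the three-block weighting as a per-pixel weight map, assuming the blocks of Figure 3 are nested rectangles centered on (960, 540); the exact block borders are not specified in the text:

```python
# A minimal sketch of the block-weight map: image_inside gets w1,
# image_middle gets 0.8*w1, image_outside gets 0.4*w1. The rectangle
# sizes below are assumptions for illustration.

import numpy as np

def block_weight_map(h=1080, w=1920, w1=1.0):
    wm = np.full((h, w), 0.4 * w1)                              # image_outside
    wm[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4] = 0.8 * w1     # image_middle
    wm[3 * h // 8 : 5 * h // 8, 3 * w // 8 : 5 * w // 8] = w1   # image_inside
    return wm

weights = block_weight_map()
print(weights[540, 960], weights[540, 100])   # center weight vs. edge weight
```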

The CenterNet detector compares every feature point on the feature map with its 8 neighboring points; a point is kept if its response value is greater than or equal to those of all eight neighbors, and the top 100 qualifying keypoints are retained.

Let Pc = {(xi, yi)}, i = 1…n, be the set of n keypoints of class c detected by the above method, each keypoint given as integer coordinates (xi, yi). Each keypoint generates the following detection box:

(xi + δxi - wi/2,  yi + δyi - hi/2,  xi + δxi + wi/2,  yi + δyi + hi/2)

where (δxi, δyi) is the offset prediction result and (wi, hi) is the scale prediction result.

All the detection boxes together form a rough candidate set. A threshold is then set: boxes whose keypoint response is below the threshold are discarded, and the boxes whose response exceeds the threshold form the final UAV bounding box.

Referring to Figure 2, the CenterNet algorithm performs a first confidence judgment on the original image captured by the camera. If the keypoint response in the feature map is below threshold A, the camera zooms in by the fixed factor, n is incremented by 1, and B(n) is updated. If the keypoint response exceeds threshold A, a second confidence judgment is performed: the image-block weights are introduced into the captured video, the feature-response calculation below is applied, and the CenterNet algorithm judges confidence again. Finally, if the feature peak after the second judgment exceeds threshold B, the tracked target is confirmed, drawn and displayed with an OpenCV function, and the UAV's center coordinates (x, y) in the image are returned; at this point the visual tracking module regains servo control and steers the servo via (x, y).

On the basis of the image-block weights, this algorithm designs a new way of computing feature response values, as follows:

y(t) = (ω + B(n))·x(t),  t ≥ 0, n ≥ 0

where t is the feature-point index, x(t) is the response value of each feature point computed by the original CenterNet algorithm, and y(t) is the final response value of each feature point. ω is the image-block weight, taking three values ω1, ω2, and ω3 for the three regions image_inside, image_middle, and image_outside, with ω2 = 0.8ω1 and ω3 = 0.4ω1. B(n) is a monotonically increasing function expressing that each time the camera zooms in by the fixed factor, the image resolution rises and the CenterNet tracking algorithm becomes more accurate; n is the number of zoom steps:

B(n) = (1.1)^n · β,  n ≥ 0

where β is an initial constant.
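A minimal sketch of this re-weighting; β, n, and the raw responses below are illustrative values:

```python
# A minimal sketch of y(t) = (w + B(n)) * x(t) with B(n) = (1.1)**n * beta.

import numpy as np

def reweight(x, w, n, beta=0.1):
    """x: raw CenterNet responses; w: per-point block weights (same shape)."""
    b = (1.1 ** n) * beta          # grows with each fixed-factor zoom step
    return (w + b) * x

x = np.array([0.35, 0.50, 0.20])   # raw responses of three candidate points
w = np.array([1.0, 0.8, 0.4])      # image_inside, image_middle, image_outside
print(reweight(x, w, n=2))         # responses after two zoom steps
```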

S105. Obtain the UAV's center coordinates with the OpenCV function, and track the UAV in real time.

After the UAV's position in the video image is determined, the OpenCV function draws the rectangle enclosing the UAV, and the rectangle's center coordinates (x, y) are computed and returned. Servo control ownership is handed back to the visual tracking module, which uses the center coordinates to steer the servo and track the UAV in real time.
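A minimal sketch of this final drawing-and-feedback step with OpenCV; the detection-box coordinates below are illustrative:

```python
# A minimal sketch: draw the UAV's box, compute the center (x, y) that is
# fed back to the servo, and mark it on the frame.

import cv2
import numpy as np

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # stand-in for a camera frame
x1, y1, x2, y2 = 900, 480, 1020, 600               # illustrative final UAV box

cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
cx, cy = (x1 + x2) // 2, (y1 + y2) // 2            # center returned to the servo
cv2.circle(frame, (cx, cy), 4, (0, 0, 255), -1)
print("UAV center:", (cx, cy))
```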

The above discloses only a preferred embodiment of the present invention and certainly cannot limit the scope of its rights. Those of ordinary skill in the art will understand that all or part of the flow of the above embodiment may be implemented, and equivalent changes made according to the claims of the present invention still fall within the scope covered by the invention.

Claims (5)

1. An unmanned aerial vehicle tracking method based on multi-strategy fusion, characterized in that the method comprises the following steps:
training an unmanned aerial vehicle image sample based on a CenterNet network to generate a feature map;
acquiring an unmanned aerial vehicle signal, and analyzing and processing the unmanned aerial vehicle signal to obtain a direction parameter of the unmanned aerial vehicle;
outputting a control signal based on the direction parameter of the unmanned aerial vehicle to control the camera lens to rotate, and acquiring a video image of the unmanned aerial vehicle to obtain an estimated position of the unmanned aerial vehicle;
carrying out image block weighting processing on the acquired video image, and obtaining the specific position of the unmanned aerial vehicle in the video image based on a vision and frequency spectrum joint evaluation algorithm, wherein the method comprises the following specific steps: dividing the image into three image blocks and giving weighting parameters; extracting key points of each category of the feature map based on the CenterNet network; performing confidence judgment on the key points to obtain the specific position, specifically performing a first confidence judgment on an original image acquired by the camera based on the CenterNet algorithm, and if the key points in the feature map are smaller than a threshold A, amplifying the camera by a fixed multiple; if the key points in the feature map are larger than the threshold A, performing a secondary confidence judgment by using the CenterNet algorithm based on the weighting parameters and a feature response value calculation mode; and if the secondary confidence judgment shows that the characteristic peak value is larger than a threshold B, drawing and displaying a frame by using an OpenCV function, and returning the central coordinates (x, y) of the unmanned aerial vehicle in the image;
and acquiring the central coordinates of the unmanned aerial vehicle based on the OpenCV function, and tracking the unmanned aerial vehicle in real time.
2. The unmanned aerial vehicle tracking method based on multi-strategy fusion as claimed in claim 1, wherein the specific steps of training the unmanned aerial vehicle image sample based on the CenterNet network and generating the feature map are as follows:
acquiring three channels of an unmanned aerial vehicle image input RGB, and outputting a predicted value based on convolutional neural network processing;
and extracting features based on network forward propagation to obtain a feature map.
3. The unmanned aerial vehicle tracking method based on multi-strategy fusion of claim 1, wherein the specific steps of obtaining the unmanned aerial vehicle signal, analyzing and processing the unmanned aerial vehicle signal to obtain the direction parameter of the unmanned aerial vehicle are as follows:
detecting and acquiring unmanned aerial vehicle signals within a range of 3 kilometers;
performing data fusion based on the use environment;
extracting unmanned aerial vehicle signal parameters through down-conversion, A/D sampling, digital channelization and array signal processing;
carrying out amplitude-phase integrated direction finding processing based on parameters of different antennas, and comparing the processed direction finding processing with a database to obtain the model of the unmanned aerial vehicle;
and positioning based on a multi-station direction-finding cross positioning system to obtain the direction parameters of the unmanned aerial vehicle.
4. The unmanned aerial vehicle tracking method based on multi-strategy fusion as claimed in claim 1, wherein the specific steps of outputting a control signal based on the directional parameter of the unmanned aerial vehicle to control the camera lens to rotate, acquiring the video image of the unmanned aerial vehicle, and obtaining the estimated position of the unmanned aerial vehicle are as follows:
reading the direction parameters of the unmanned aerial vehicle;
calculating the difference between the direction parameter and the image center;
calculating the rotation quantity of the gimbal through a mathematical model;
controlling the rotation of the gimbal through a position PD algorithm, wherein the position PD algorithm model is as follows:
S(n) = Kp·e(n) + Kp·Td·[e(n) - e(n-1)]
wherein S(n) is the control output, Kp is the proportional control parameter, Td is the derivative control parameter, e(n) is the difference between the current state value and the target value, and n is the control number; denoting Kp·Td by Kd gives:
S(n) = Kp·e(n) + Kd·[e(n) - e(n-1)].
5. The unmanned aerial vehicle tracking method based on multi-strategy fusion as claimed in claim 1, wherein the calculation formula of the characteristic response value calculation mode is
y(t) = (ω + B(n))·x(t),  t ≥ 0, n ≥ 0
wherein y(t) represents the characteristic response value, t represents the ranking number of the characteristic point, x(t) represents the response value of each characteristic point calculated by the original CenterNet algorithm, ω represents the image-block weight, B(n) represents the accuracy gained with each camera zoom step, and n represents the number of zoom steps;
B(n) = (1.1)^n · β,  n ≥ 0
where β is an initial constant.
CN202010120410.5A 2020-02-26 2020-02-26 A UAV tracking method based on multi-strategy fusion Active CN111369589B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010120410.5A CN111369589B (en) 2020-02-26 2020-02-26 A UAV tracking method based on multi-strategy fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010120410.5A CN111369589B (en) 2020-02-26 2020-02-26 A UAV tracking method based on multi-strategy fusion

Publications (2)

Publication Number Publication Date
CN111369589A CN111369589A (en) 2020-07-03
CN111369589B (en) 2022-04-22

Family

ID=71211009

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010120410.5A Active CN111369589B (en) 2020-02-26 2020-02-26 A UAV tracking method based on multi-strategy fusion

Country Status (1)

Country Link
CN (1) CN111369589B (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016141100A2 (en) * 2015-03-03 2016-09-09 Prenav Inc. Scanning environments and tracking unmanned aerial vehicles
CN109283491A (en) * 2018-08-02 2019-01-29 哈尔滨工程大学 A UAV Positioning System Based on Vector Detection Unit
CN109099779A (en) * 2018-08-31 2018-12-28 江苏域盾成鹫科技装备制造有限公司 A kind of detecting of unmanned plane and intelligent intercept system
CN109816695A (en) * 2019-01-31 2019-05-28 中国人民解放军国防科技大学 A detection and tracking method of infrared small UAV under complex background
CN110133573A (en) * 2019-04-23 2019-08-16 四川九洲电器集团有限责任公司 A kind of autonomous low latitude unmanned plane system of defense based on the fusion of multielement bar information
CN110262529A (en) * 2019-06-13 2019-09-20 桂林电子科技大学 A kind of monitoring unmanned method and system based on convolutional neural networks
CN110398720A (en) * 2019-08-21 2019-11-01 深圳耐杰电子技术有限公司 A kind of anti-unmanned plane detection tracking interference system and photoelectric follow-up working method
CN110647931A (en) * 2019-09-20 2020-01-03 深圳市网心科技有限公司 Object detection method, electronic device, system, and medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Jingyu et al., "Research on low-altitude weak and small UAV target detection based on deep neural networks," Journal of Northwestern Polytechnical University, No. 02, 2018-04-15, full text *

Also Published As

Publication number Publication date
CN111369589A (en) 2020-07-03

Similar Documents

Publication Publication Date Title
US11915502B2 (en) Systems and methods for depth map sampling
CN110988912B (en) Road target and distance detection method, system and device for automatic driving vehicle
CN110675431B (en) Three-dimensional multi-target tracking method fusing image and laser point cloud
CN112567201B (en) Distance measuring method and device
CN108416361A (en) A kind of information fusion system and method based on sea survaillance
CN115439424A (en) Intelligent detection method for aerial video image of unmanned aerial vehicle
US9165383B1 (en) Point cloud visualization using bi-modal color schemes based on 4D lidar datasets
Häselich et al. Probabilistic terrain classification in unstructured environments
US20110200249A1 (en) Surface detection in images based on spatial data
Hammer et al. UAV detection, tracking, and classification by sensor fusion of a 360 lidar system and an alignable classification sensor
CN110276321A (en) A remote sensing video target tracking method and system
Schumann et al. An image processing pipeline for long range UAV detection
Yuan et al. MMAUD: A comprehensive multi-modal anti-UAV dataset for modern miniature drone threats
CN118279770A (en) Unmanned aerial vehicle follow-up shooting method based on SLAM algorithm
Stann et al. Integration and demonstration of MEMS-scanned LADAR for robotic navigation
CN110910379B (en) Incomplete detection method and device
Omrani et al. Dynamic and static object detection and tracking in an autonomous surface vehicle
CN111369589B (en) A UAV tracking method based on multi-strategy fusion
Semenyuk et al. DEVELOPING THE GOOGLENET NEURAL NETWORK FOR THE DETECTION AND RECOGNITION OF UNMANNED AERIAL VEHICLES IN THE DATA FUSION SYSTEM.
US11244470B2 (en) Methods and systems for sensing obstacles in an indoor environment
Sulaj et al. Examples of real-time UAV data processing with cloud computing
Haavardsholm et al. Multimodal Multispectral Imaging System for Small UAVs
Amigo et al. Automatic context learning based on 360 imageries triangulation and 3D LiDAR validation
Patil et al. Drone detection using YOLO
GB2582419A (en) Improvements in and relating to range-finding

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20200703

Assignee: Guangxi Huantai Aerospace Technology Co.,Ltd.

Assignor: GUILIN University OF ELECTRONIC TECHNOLOGY

Contract record no.: X2022450000392

Denomination of invention: A tracking method of uav based on multi strategy fusion

Granted publication date: 20220422

License type: Common License

Record date: 20221226