
CN110221625A - Autonomous landing guidance method for the precise position of an unmanned aerial vehicle - Google Patents

Autonomous landing guidance method for the precise position of an unmanned aerial vehicle

Info

Publication number
CN110221625A
CN110221625A (Application CN201910446706.3A)
Authority
CN
China
Prior art keywords
semantic
target
coordinate system
landing target
UAV
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910446706.3A
Other languages
Chinese (zh)
Other versions
CN110221625B (en)
Inventor
李晓峰
杨晗
管岭
贾利民
Current Assignee
Beijing Jiaotong University
Original Assignee
Beijing Jiaotong University
Priority date
Filing date
Publication date
Application filed by Beijing Jiaotong University filed Critical Beijing Jiaotong University
Priority to CN201910446706.3A priority Critical patent/CN110221625B/en
Publication of CN110221625A publication Critical patent/CN110221625A/en
Application granted granted Critical
Publication of CN110221625B publication Critical patent/CN110221625B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G05D1/2446: Determining position or orientation using passive navigation aids external to the vehicle, the aids having encoded information, e.g. QR codes or ground control points
    • G05B11/42: Automatic controllers, electric, with provision for obtaining a characteristic which is both proportional and time-dependent, e.g. P.I., P.I.D.
    • G05D1/0094: Control of position, course, altitude or attitude involving pointing a payload, e.g. camera, weapon, sensor, towards a fixed or moving target
    • G05D1/101: Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G05D1/12: Target-seeking control
    • G05D1/485: Control of rate of change of altitude or depth
    • G05D1/654: Landing
    • G05D1/689: Pointing payloads towards fixed or moving targets
    • G05D2109/254: Flying platforms, e.g. multicopters
    • G05D2111/10: Optical signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Navigation (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an autonomous landing guidance method for landing an unmanned aerial vehicle (UAV) at a precise position. The method includes: guiding the UAV by a satellite navigation system to within a set distance of a landing target on the ground; acquiring video images of the ground with an onboard camera, recognizing the semantic icons on the landing target contained in the video images through graphic detection rules, and computing the center position of the landing target from the semantic icons; computing the position and dynamic characteristics of the landing target in the geodetic coordinate system from the target's center position, using the attitude and relative pose of the onboard camera with respect to the UAV; and continuously computing the relative position and relative velocity of the UAV and the landing target in the geodetic coordinate system, controlling the UAV to land at the center of the landing target through a triple PID control algorithm. By recognizing the semantic icons on the landing target, the method locates and tracks the target and achieves precise autonomous landing of the UAV on a moving target.

Description

Autonomous Landing Guidance Method for the Precise Position of a UAV

Technical Field

The invention relates to the technical field of unmanned aerial vehicle (UAV) control, and in particular to an autonomous landing guidance method for the precise position of a UAV.

Background

Rotorcraft UAVs offer convenient operation, high maneuverability, low operating cost, and high flight accuracy. They are in great demand in practical applications and are widely used in reconnaissance, rescue, surveying and mapping, crop protection, and other fields. Autonomous take-off and landing have been research hotspots in the UAV field for many years.

At present, autonomous UAV landing mostly relies on GNSS (Global Navigation Satellite System) positioning combined with altitude data for fixed-point landing. Altitude data are usually measured by GNSS, a barometer, ultrasound, or ground-looking radar. However, GNSS signals are easily blocked by buildings and affected by weather, suffer from serious data drift, and have very limited accuracy in the vertical direction; ranging sensors based on ultrasound, microwave, or laser cannot distinguish a landing platform from the ground and therefore cannot be used directly to land a UAV on a moving platform.

At present, for moving landing platforms, landing in the prior art usually relies on manual guidance and control, which places high demands on GNSS accuracy and operator proficiency and cannot achieve true autonomy. Under complex conditions, such as taking off from and landing on a platform at sea or a bumpy moving ground platform, flight control remains a severe challenge for this type of UAV and its operators, restricting the use of UAVs in wider fields.

Summary of the Invention

Embodiments of the present invention provide an autonomous landing guidance method for the precise position of a UAV, so as to overcome the problems of the prior art.

To achieve the above object, the present invention adopts the following technical solutions.

An autonomous landing guidance method for the precise position of a UAV, comprising:

guiding the UAV by a satellite navigation system to within a set distance of a landing target on the ground;

acquiring video images of the ground with an onboard camera, recognizing the semantic icons on the landing target contained in the video images through graphic detection rules, and computing the center position of the landing target from the semantic icons on the landing target;

computing the position and dynamic characteristics of the landing target in the geodetic coordinate system from the center position of the landing target, using the attitude and relative pose of the onboard camera with respect to the UAV;

continuously computing, based on the position and dynamic characteristics of the landing target, the relative position and relative velocity of the UAV and the landing target in the geodetic coordinate system, and controlling the UAV to land at the center of the landing target through a triple PID control algorithm.

Preferably, the interior of the landing target contains a plurality of non-overlapping semantic icons. The size, position, and corresponding semantics of each icon are known, and the icons are arranged in a pattern such that the onboard camera can always see at least one semantic icon during the landing process.

Preferably, the body of each semantic icon is a black rectangle; white rectangles of varying position and number are drawn inside it according to semantic rules, and the semantic information of each icon is stored in a semantic icon database.

Preferably, acquiring video images of the ground with the onboard camera, recognizing the semantic icons on the landing target contained in the video images through graphic detection rules, and computing the center position of the landing target from the semantic icons, comprises:

capturing video images of the ground with the UAV's onboard camera and converting them to grayscale; computing an adaptive threshold for each pixel from its neighborhood in the grayscale image; and comparing each pixel's gray value with its adaptive threshold, setting the pixel to white when its gray value exceeds the threshold and to black otherwise;

extracting the black rectangles from the thresholded image and checking, according to the semantic rules, whether each black rectangle is a valid semantic icon. Since the UAV is always above the target and its attitude changes little, contours that are too small or too distorted in shape can be filtered out to reduce computation.

The semantic rule is: divide each side of the black rectangle into six equal parts and connect the corresponding division points on opposite sides, thereby partitioning the rectangle into a 6×6 grid of small cells. The outermost ring of cells is entirely black. Each interior cell is recorded as 0 if it is mostly black and 1 if it is mostly white. Concatenating the cell values of each row from top to bottom yields a bit string, which is taken as the semantic information carried by the icon. This bit string is compared with the semantic information of each icon stored in the semantic icon database; if a match is found, the black rectangle is confirmed to be a valid semantic icon;
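A minimal sketch of this decoding rule (a hypothetical helper; it assumes the 6×6 grid has already been sampled from the thresholded image as 0/1 cell values, and that the concatenated bit string covers only the 4×4 interior cells):

```python
def decode_icon(cells):
    """Decode a 6x6 grid of cell values (0 = mostly black, 1 = mostly white)
    into the icon's semantic bit string, per the rule above.

    Returns None if the mandatory all-black outer ring is violated."""
    assert len(cells) == 6 and all(len(row) == 6 for row in cells)
    # The outermost ring of cells must be entirely black (0).
    for i in range(6):
        if cells[0][i] or cells[5][i] or cells[i][0] or cells[i][5]:
            return None
    # Concatenate the 4x4 interior cells row by row, top to bottom.
    return "".join(str(cells[r][c]) for r in range(1, 5) for c in range(1, 5))
```

The returned string is then looked up in the semantic icon database to accept or reject the candidate rectangle.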

The center position of the landing target is then computed from the semantic information of all the semantic icons recognized on it.

Preferably, computing the position and dynamic characteristics of the landing target in the geodetic coordinate system from the center position of the landing target, using the attitude and relative pose of the onboard camera with respect to the UAV, comprises:

establishing a target coordinate system from the center position of the landing target, and obtaining the coordinates of each icon's vertices and center in the target coordinate system from the icon's semantic information;

obtaining the rotation and translation from the target plane to the camera imaging plane through the one-to-one correspondence between each icon's target-frame coordinates and its image pixel coordinates; converting the icons' coordinates from the target coordinate system to the camera coordinate system with this rotation and translation; and converting the icons' camera-frame coordinates to the geodetic coordinate system according to the camera-to-geodetic transformation formula;

Taking the east and north geodetic coordinates of the semantic icons as input, the position and velocity of the landing target in the geodetic coordinate system are estimated by Kalman filtering. The state vector X = [x, y, v_x, v_y]^T consists of the target's east and north coordinates and velocities, and the output vector is Y = [x, y]^T. The state equation of the landing target in the geodetic coordinate system is given by formula (1):

$$X_{k+1} = \begin{bmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} X_k + W, \qquad Y_k = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix} X_k + V \qquad (1)$$

where Δt is the sampling time interval; W is zero-mean system noise, a Gaussian variable with covariance Q; and V is zero-mean measurement noise, a Gaussian variable with covariance R.
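The constant-velocity filter of formula (1) can be sketched as follows (NumPy; the covariance values for Q and R are placeholders, not values from the patent):

```python
import numpy as np

def make_cv_kalman(dt, q=0.01, r=0.25):
    """Constant-velocity Kalman model for the target's east/north motion.
    State X = [x, y, vx, vy]^T, measurement Y = [x, y]^T, as in formula (1)."""
    A = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    C = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    Q = q * np.eye(4)   # system-noise covariance (placeholder value)
    R = r * np.eye(2)   # measurement-noise covariance (placeholder value)
    return A, C, Q, R

def kalman_step(X, P, Y, A, C, Q, R):
    """One predict/update cycle; returns the new state estimate and covariance."""
    Xp = A @ X                       # predict state
    Pp = A @ P @ A.T + Q             # predict covariance
    S = C @ Pp @ C.T + R             # innovation covariance
    K = Pp @ C.T @ np.linalg.inv(S)  # Kalman gain
    X = Xp + K @ (Y - C @ Xp)        # update with the east/north measurement
    P = (np.eye(4) - K @ C) @ Pp
    return X, P
```

Feeding consecutive east/north fixes of the target through `kalman_step`, the components X[2:4] estimate the target velocity used later by the tracking controller.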

The rotation and translation from the target plane to the camera imaging plane satisfy formula (2):

$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} x \\ y \\ 0 \\ 1 \end{bmatrix} \qquad (2)$$

where (u, v) are image pixel coordinates, (x, y) are the coordinates of the semantic icon in the target coordinate system (the target plane is taken as z = 0), M is the camera intrinsic matrix, R is a 3×3 rotation matrix, T is a 3×1 translation vector, and s is a scale factor.
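Formula (2) can be exercised numerically; a sketch with assumed example intrinsics and pose (camera 3 m above the target, looking straight down, no rotation between the frames):

```python
import numpy as np

def project_target_point(xy, M, R, T):
    """Project a point (x, y) on the target plane (z = 0) to pixel
    coordinates (u, v) via s*[u, v, 1]^T = M [R | T] [x, y, 0, 1]^T."""
    p_target = np.array([xy[0], xy[1], 0.0, 1.0])
    RT = np.hstack([R, T.reshape(3, 1)])   # 3x4 extrinsic matrix [R | T]
    uvw = M @ RT @ p_target
    return uvw[:2] / uvw[2]                # divide out the scale factor s

# Assumed example values: pinhole intrinsics with focal length 800 px and
# principal point (320, 240); camera 3 m directly above the target.
M = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
T = np.array([0.0, 0.0, 3.0])
uv = project_target_point((0.3, -0.15), M, R, T)  # uv == (400.0, 200.0)
```

In practice R and T are recovered from several such icon-corner correspondences (a plane-to-image pose problem), not assumed as here.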

The transformation from the camera coordinate system to the geodetic coordinate system is given by formula (3):

$$X_g = R_p X_p + X_{g0} = R_p R_c X_c + X_{g0} \qquad (3)$$

where X_g, X_p, and X_c are the coordinates of the landing target in the geodetic, UAV, and camera coordinate systems, respectively; X_{g0} is the UAV's current position in the geodetic coordinate system, obtained from the GNSS fix; and R_p and R_c are the rotation matrices from the UAV frame to the geodetic frame and from the camera frame to the UAV frame, respectively, computed by formula (4), where α is the roll angle, β is the pitch angle, and γ is the yaw angle:

$$R = R_z(\gamma)\,R_y(\beta)\,R_x(\alpha) = \begin{bmatrix} c_\gamma c_\beta & c_\gamma s_\beta s_\alpha - s_\gamma c_\alpha & c_\gamma s_\beta c_\alpha + s_\gamma s_\alpha \\ s_\gamma c_\beta & s_\gamma s_\beta s_\alpha + c_\gamma c_\alpha & s_\gamma s_\beta c_\alpha - c_\gamma s_\alpha \\ -s_\beta & c_\beta s_\alpha & c_\beta c_\alpha \end{bmatrix} \qquad (4)$$

in which c and s denote the cosine and sine of the subscripted angle.
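A sketch of formulas (3) and (4) (the ZYX rotation order is an assumption, as are the example angles and offsets):

```python
import numpy as np

def euler_zyx(alpha, beta, gamma):
    """Rotation matrix from roll (alpha), pitch (beta), yaw (gamma),
    composed as R = Rz(gamma) @ Ry(beta) @ Rx(alpha), as in formula (4).
    The ZYX order is an assumed convention."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def camera_to_geodetic(Xc, Rp, Rc, Xg0):
    """Formula (3): Xg = Rp @ Rc @ Xc + Xg0, the target position in the
    geodetic frame from its camera-frame coordinates and the GNSS fix Xg0."""
    return Rp @ Rc @ Xc + Xg0
```

Here Rp would come from the autopilot's attitude estimate and Rc from the camera mount calibration.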

Preferably, continuously computing, based on the position and dynamic characteristics of the landing target, the relative position and relative velocity of the UAV and the landing target in the geodetic coordinate system, and controlling the UAV to land at the center of the landing target through a triple PID control algorithm, comprises:

continuously computing the relative position and relative velocity of the UAV and the landing target in the geodetic coordinate system based on the position and dynamic characteristics of the landing target;

taking the relative position of the UAV and the landing target as input, controlling the UAV to move toward the target through a PID control algorithm;

taking the relative velocity of the UAV and the landing target as input, superimposed on the velocity command produced by the position controller, tracking the moving landing target through a PID control algorithm;

taking the relative height of the UAV above the landing target as input, controlling the UAV to descend onto the target through a PID control algorithm.
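The three loops can be sketched per axis as follows (gains are placeholders; real flight code would also need output saturation, integral limits, and failsafes):

```python
class PID:
    """Minimal PID controller (rectangular integration, one axis)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

DT = 0.05
pos_pid = PID(0.8, 0.01, 0.1, DT)  # relative position -> velocity toward target
vel_pid = PID(0.5, 0.0, 0.0, DT)   # target-velocity term, superimposed on the above
alt_pid = PID(0.6, 0.0, 0.2, DT)   # relative height -> descent rate

def landing_command(rel_pos, rel_vel, rel_height):
    """Triple PID (one horizontal axis shown): the position error drives the
    UAV toward the target, the relative-velocity term tracks a moving target,
    and the height loop commands the descent."""
    v_horizontal = pos_pid.step(rel_pos) + vel_pid.step(rel_vel)
    v_descent = alt_pid.step(rel_height)
    return v_horizontal, v_descent
```

A second instance of the horizontal pair would run on the other axis; the outputs feed the autopilot's velocity setpoints.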

As can be seen from the technical solutions provided by the above embodiments, the autonomous landing guidance method of the embodiments of the present invention locates and tracks the landing target by recognizing the semantic icons on it and estimates the target's velocity, compensating for the large target-positioning error caused by insufficient GNSS accuracy and achieving precise autonomous landing of the UAV on a moving target. By using image recognition, the semantic icons on the landing target can be recognized automatically, so the relative position and relative velocity of the UAV and the target can be computed quickly.

Additional aspects and advantages of the invention will be set forth in part in the description that follows; they will become apparent from the description or may be learned by practice of the invention.

Brief Description of the Drawings

To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.

Fig. 1 is an overall flowchart of autonomous precise-position landing guidance for a UAV provided by an embodiment of the present invention.

Fig. 2 is a schematic diagram of a landing target pattern provided by an embodiment of the present invention.

Fig. 3 is a schematic diagram of the recognition process for the semantic icons in a target provided by an embodiment of the present invention.

Fig. 4 is a schematic diagram illustrating an example of a semantic icon provided by an embodiment of the present invention.

Fig. 5 is a schematic diagram of the relative relationship between the camera coordinate system, the UAV coordinate system, and the geodetic coordinate system provided by an embodiment of the present invention.

Detailed Description of the Embodiments

Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the drawings, in which the same or similar reference numerals denote the same or similar elements, or elements having the same or similar functions, throughout. The embodiments described below with reference to the drawings are exemplary and serve only to explain the present invention; they should not be construed as limiting it.

Those skilled in the art will understand that, unless expressly stated otherwise, the singular forms "a", "an", "said", and "the" used herein may also include the plural forms. It should be further understood that the word "comprising" used in the description of the present invention refers to the presence of the stated features, integers, steps, operations, elements, and/or components, but does not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It should be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. In addition, "connected" or "coupled" as used herein may include wireless connection or coupling. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.

Those skilled in the art will understand that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meanings as commonly understood by those of ordinary skill in the art to which this invention belongs. It should also be understood that terms such as those defined in common dictionaries should be understood to have meanings consistent with their meanings in the context of the prior art and, unless defined as herein, are not to be interpreted in an idealized or overly formal sense.

To facilitate understanding of the embodiments of the present invention, several specific embodiments are further explained below with reference to the drawings; these embodiments do not limit the embodiments of the present invention.

Embodiment 1

To solve the problem in the prior art that insufficient GNSS positioning accuracy prevents a UAV from landing precisely and autonomously on a moving platform, and to improve the autonomy of the UAV, an embodiment of the present invention provides an autonomous landing guidance method for the precise position of a UAV. The processing flow of the method, shown in Fig. 1, includes the following steps.

Step S1: Guide the UAV over the landing target by GNSS satellite navigation. Current GNSS positioning accuracy is usually within 10 meters, which ensures that when the UAV reaches the position indicated by GNSS, the onboard downward-looking camera can see the landing target.

The UAV here refers to multi-rotor UAVs and unmanned helicopters. The landing target used in the embodiment of the present invention is shown in Fig. 2; it can be installed on the ground or on a mobile vehicle to mark the UAV's landing area.

Step S2: Identify the landing target with the onboard camera, adjust the attitude and heading of the UAV, and lower the UAV's altitude while keeping the landing target within the field of view.

In the embodiment of the present invention, the field of view of the UAV's downward-looking camera is 90°. According to experiments, the semantic target can be clearly identified when the UAV descends to a height of 3 meters.

Step S3: Identify the semantic icons in the target through graphic detection, judge whether each recognized icon belongs to the landing target, filter out falsely detected icons, and compute the center position of the landing target.

The interior of the landing target contains a plurality of non-overlapping semantic icons. The size, position, and corresponding semantics of each icon are known, and the icons are arranged in a pattern such that the onboard camera can always see at least one icon during the landing process. The body of each semantic icon is a black rectangle; white rectangles of varying position and number are drawn inside it according to the semantic rules. When the target is designed, the semantic information of each icon is stored in a semantic icon database. When a semantic icon is detected, its semantic information is compared with the database; a successful match confirms a valid semantic icon.

The recognition process for the semantic icons in the target provided by the embodiment of the present invention is shown in Fig. 3. The specific steps are as follows.

First, the onboard camera captures video images of the ground, and the images are converted to grayscale. The grayscale image is then binarized with an adaptive threshold. Adaptive thresholding is a local thresholding method: an adaptive threshold is computed for each pixel from its neighborhood, each pixel's gray value is compared with its threshold, and the pixel is set to white or black according to the result. This avoids thresholding errors caused by uneven illumination. For each pixel in the image, the adaptive threshold is computed by formula (5):

$$T_{i,j} = \frac{1}{N} \sum_{(m,n) \in \Omega_{i,j}} P_{m,n} - C \qquad (5)$$

where P_{i,j} is the gray value of the pixel in row i and column j, Ω_{i,j} is the window centered on that pixel, N is the total number of pixels in the window, and C is an offset. After the threshold is obtained, the pixel's gray value is compared with it: when the gray value exceeds the threshold, the pixel is set to 255 (white); otherwise it is set to 0 (black), as in formula (6):

$$P'_{i,j} = \begin{cases} 255, & P_{i,j} > T_{i,j} \\ 0, & P_{i,j} \le T_{i,j} \end{cases} \qquad (6)$$
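A straightforward, unoptimized sketch of formulas (5) and (6), assuming a square averaging window with edge padding:

```python
import numpy as np

def adaptive_threshold(img, window=15, C=5.0):
    """Binarize a grayscale image: each pixel is compared with the mean of
    its (window x window) neighborhood minus offset C, per formulas (5)-(6)."""
    h, w = img.shape
    r = window // 2
    # Pad with edge values so border pixels also get a full window.
    padded = np.pad(img.astype(float), r, mode="edge")
    out = np.zeros_like(img, dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            win = padded[i:i + window, j:j + window]
            T = win.mean() - C                         # formula (5)
            out[i, j] = 255 if img[i, j] > T else 0    # formula (6)
    return out
```

OpenCV's `cv2.adaptiveThreshold` with `ADAPTIVE_THRESH_MEAN_C` implements the same mean-minus-offset rule efficiently.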

Extract the black rectangles from the binarized image. Since the UAV is always above the target and its attitude changes little, black rectangles that are too small or too distorted in shape can be filtered out first to reduce computation. Each remaining black rectangle is then checked against the semantic rules to determine whether it is a valid semantic icon.

所述的语义规则为：将黑色矩形的每个边6等分，将对边的对应等分点连接，由此将黑色矩形分为6×6个小矩形；除去最外围的一层全为黑色外，内部的每个小矩形中，黑色占大多数的矩形记为0，白色占大多数的矩形记为1。The semantic rule is as follows: each side of the black rectangle is divided into six equal parts and the corresponding division points on opposite sides are connected, dividing the black rectangle into 6×6 small rectangles. The outermost ring of small rectangles is entirely black; for each small rectangle inside it, a rectangle that is mostly black is recorded as 0 and a rectangle that is mostly white is recorded as 1.

图4a和图4b为本发明实施例提供的一种语义图标的实例说明示意图。如图4a所示，由上至下将每一行的数据串联起来，得到串联数据，该串联数据即为所述语义图标包含的语义信息。将上述串联数据所代表的语义信息与语义图标数据库中存储的各个语义图标的语义信息进行对比，当有对比结果为一致时，则确定上述串联数据对应的黑色矩形为正确的语义图标。如图4b所示，同一个黑色矩形旋转90°、180°、270°所产生的串联数据(语义信息)被认为含有相同的信息，保证语义图标与语义信息的唯一对应关系；同时，为了避免混淆，不使用中心对称和轴对称的语义图标。Fig. 4a and Fig. 4b are schematic diagrams of an example semantic icon provided by an embodiment of the present invention. As shown in Fig. 4a, the data of each row are concatenated from top to bottom to obtain the concatenated data, which is the semantic information contained in the semantic icon. The semantic information represented by the concatenated data is compared with the semantic information of each semantic icon stored in the semantic icon database; when a comparison result matches, the black rectangle corresponding to the concatenated data is confirmed as a valid semantic icon. As shown in Fig. 4b, the concatenated data (semantic information) produced by rotating the same black rectangle by 90°, 180° or 270° are considered to contain the same information, which guarantees a unique correspondence between a semantic icon and its semantic information; at the same time, to avoid confusion, centrally symmetric and axially symmetric semantic icons are not used.
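The row-by-row decoding and rotation-invariant matching described above can be sketched as follows, assuming the icon has already been rectified into a 6×6 bit grid (1 for a mostly white cell, 0 for a mostly black cell). The grid contents and the icon name `icon_7` are hypothetical examples, not patterns from the patent's database.

```python
def rotate(grid):
    """Rotate a square bit grid 90 degrees clockwise."""
    n = len(grid)
    return [[grid[n - 1 - c][r] for c in range(n)] for r in range(n)]

def decode(grid):
    """Concatenate the inner 4x4 bits row by row, top to bottom (the
    outermost ring is the all-black border and carries no data)."""
    return ''.join(str(b) for row in grid[1:-1] for b in row[1:-1])

def match_icon(grid, database):
    """Try all four 90-degree rotations against the stored codes, since
    rotated versions of one icon carry the same semantic information."""
    g = grid
    for _ in range(4):
        code = decode(g)
        for icon_id, stored in database.items():
            if code == stored:
                return icon_id
        g = rotate(g)
    return None

# Hypothetical icon: all-black border around an asymmetric inner pattern
# (asymmetric, because symmetric icons are excluded to avoid confusion).
icon = [[0, 0, 0, 0, 0, 0],
        [0, 1, 0, 0, 0, 0],
        [0, 0, 1, 1, 0, 0],
        [0, 0, 0, 1, 0, 0],
        [0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0]]
database = {'icon_7': decode(icon)}
```

Because `match_icon` tries every rotation of the observed grid, the icon is recognized no matter how the target is oriented under the camera.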

然后，根据降落标靶上的所有语义图标所蕴含的语义信息计算出降落标靶的中心位置信息，建立标靶坐标系，根据语义信息还可获取语义图标在标靶上的位置、大小信息，即获取语义图标在标靶坐标系上的坐标信息，其中包括语义图标的角点和中心点的像素坐标。Then, the center position of the landing target is calculated from the semantic information carried by all the semantic icons on the target, and the target coordinate system is established. The position and size of each semantic icon on the target can also be obtained from its semantic information, i.e. the coordinate information of the semantic icon in the target coordinate system, including the pixel coordinates of its corner points and center point.

步骤S4:通过机载相机与无人机的姿态和相对位置关系,计算大地坐标系下降落标靶的位置,并通过卡尔曼滤波计算降落标靶的动态特性。Step S4: Calculate the position of the landing target in the earth coordinate system through the attitude and relative position relationship between the airborne camera and the UAV, and calculate the dynamic characteristics of the landing target through Kalman filtering.

本发明实施例中使用的语义图标如图2所示，根据各语义图标所蕴含的语义信息得到语义图标的顶点及中心在标靶坐标系下的位置。通过语义图标在标靶坐标系和图像像素坐标的一一对应关系，计算标靶平面到相机成像平面的旋转和平移矩阵，如式(7)所示。The semantic icons used in the embodiment of the present invention are shown in Fig. 2. The positions of the vertices and center of each semantic icon in the target coordinate system are obtained from the semantic information it carries. From the one-to-one correspondence between the target-coordinate-system coordinates and the image pixel coordinates of the semantic icons, the rotation and translation matrices from the target plane to the camera imaging plane are calculated, as shown in formula (7).

其中，(u,v)为图像像素坐标，(x,y)为语义图标在标靶坐标系下的坐标，为内参矩阵，R为3×3的旋转矩阵，T为3×1的平移向量。Here, (u, v) are the image pixel coordinates, (x, y) are the coordinates of the semantic icon in the target coordinate system, is the camera intrinsic matrix, R is a 3×3 rotation matrix, and T is a 3×1 translation vector.

利用上述式(7)所示的旋转和平移矩阵通过最小二乘法即可快速求解标靶平面在相机坐标系下的空间坐标。Using the rotation and translation matrices shown in formula (7) above, the spatial coordinates of the target plane in the camera coordinate system can be solved quickly by the least squares method.
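One common way to realize the pose computation of formula (7) for a planar target is to estimate a homography by the direct linear transform and decompose it with the camera intrinsics. The sketch below does this on synthetic data; the intrinsic values, the test pose, and the use of a homography (rather than a library PnP solver) are illustrative assumptions, not the patent's exact algorithm.

```python
import numpy as np

def find_homography(obj_pts, img_pts):
    """Estimate the 3x3 homography mapping target-plane points (x, y) to
    image pixels (u, v) via the direct linear transform (least squares)."""
    A = []
    for (x, y), (u, v) in zip(obj_pts, img_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def pose_from_homography(H, K):
    """Recover R (3x3) and T (3x1) as in formula (7): K^-1 H ~ [r1 r2 T]."""
    M = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(M[:, 0])          # fix the projective scale
    r1, r2, t = lam * M[:, 0], lam * M[:, 1], lam * M[:, 2]
    if t[2] < 0:                                  # target lies in front of camera
        r1, r2, t = -r1, -r2, -t
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    return R, t

# Synthetic check: project four target corners with a known pose, then recover it.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # assumed intrinsics
c, s = np.cos(0.1), np.sin(0.1)
R_true = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])
t_true = np.array([0.1, -0.2, 2.0])
H_true = K @ np.column_stack([R_true[:, 0], R_true[:, 1], t_true])
obj = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
img = []
for x, y in obj:
    p = H_true @ np.array([x, y, 1.0])
    img.append((p[0] / p[2], p[1] / p[2]))
R_est, t_est = pose_from_homography(find_homography(obj, img), K)
```

With noise-free correspondences the recovered translation matches the true camera-frame position of the target plane to machine precision; with real detections, the corner and center points of several icons would all enter the least-squares system.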

所述的大地坐标系为以无人机起飞点为原点的北东地坐标系；所述降落标靶的动态特性包括标靶的大地坐标、标靶朝向、东向速度、北向速度、旋转角速度等。The geodetic coordinate system is a North-East-Down coordinate system whose origin is the UAV take-off point. The dynamic characteristics of the landing target include the target's geodetic coordinates, heading, eastward velocity, northward velocity, rotational angular velocity, and so on.

本发明实施例提供的一种相机坐标系、无人机坐标系、大地坐标系的相对关系如图5所示，通过无人机机载的惯导模块可以获取无人机相对于地面的三轴姿态(翻滚角、俯仰角、航向角)，通过相机云台或标定可以获取相机相对于无人机的三轴姿态。降落标靶由相机坐标系转换到大地坐标系的公式如式(8)所示。The relative relationship among the camera coordinate system, the UAV coordinate system and the geodetic coordinate system in an embodiment of the present invention is shown in Fig. 5. The three-axis attitude of the UAV relative to the ground (roll, pitch and heading angles) is obtained from the UAV's onboard inertial navigation module, and the three-axis attitude of the camera relative to the UAV is obtained from the camera gimbal or by calibration. The formula for transforming the landing target from the camera coordinate system to the geodetic coordinate system is shown in formula (8).

Xg=RpXp+Xg0=RpRcXc+Xg0 (8)X g =R p X p +X g0 =R p R c X c +X g0 (8)

其中Xg、Xp、Xc分别为降落标靶在大地坐标系、无人机坐标系、相机坐标系下的坐标；Xg0为无人机当前在大地坐标系下的坐标，由GNSS定位坐标转换得出；Rp、Rc分别为无人机系到大地坐标系、相机到无人机坐标系的旋转矩阵，计算公式如式(9)所示。其中，α为翻滚角，β为俯仰角，γ为偏航角。Here Xg, Xp and Xc are the coordinates of the landing target in the geodetic, UAV and camera coordinate systems, respectively; Xg0 is the current coordinate of the UAV in the geodetic coordinate system, obtained by converting the GNSS positioning coordinates; Rp and Rc are the rotation matrices from the UAV coordinate system to the geodetic coordinate system and from the camera coordinate system to the UAV coordinate system, respectively, calculated as shown in formula (9), where α is the roll angle, β is the pitch angle and γ is the yaw angle.
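The chain of transforms in formula (8), with Euler-angle rotation matrices in the spirit of formula (9), can be sketched as follows. The Z-Y-X (yaw, pitch, roll) composition order used here is one common convention and is an assumption, since formula (9) itself is not reproduced in this text.

```python
import numpy as np

def rot_zyx(roll, pitch, yaw):
    """Body-to-reference rotation matrix from roll (alpha), pitch (beta)
    and yaw (gamma), composed in Z-Y-X order (an assumed convention)."""
    ca, sa = np.cos(roll), np.sin(roll)
    cb, sb = np.cos(pitch), np.sin(pitch)
    cg, sg = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1.0]])
    Ry = np.array([[cb, 0, sb], [0, 1.0, 0], [-sb, 0, cb]])
    Rx = np.array([[1.0, 0, 0], [0, ca, -sa], [0, sa, ca]])
    return Rz @ Ry @ Rx

def target_in_geodetic(Xc, Rp, Rc, Xg0):
    """Formula (8): Xg = Rp @ Rc @ Xc + Xg0."""
    return Rp @ Rc @ Xc + Xg0

# Level flight with the camera aligned to the airframe: both rotations are
# identity, so the target offset in the camera frame adds directly to the
# UAV's geodetic position (North-East-Down, origin at the take-off point).
Rp = rot_zyx(0.0, 0.0, 0.0)
Rc = rot_zyx(0.0, 0.0, 0.0)
Xg = target_in_geodetic(np.array([1.0, 2.0, 10.0]), Rp, Rc,
                        np.array([100.0, 200.0, -50.0]))
```

In the general case `Rp` is built from the inertial-navigation attitude of the UAV and `Rc` from the gimbal angles or the extrinsic calibration of the camera.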

以降落标靶的东、北方向的大地坐标为输入，通过卡尔曼滤波预测降落标靶的位置和速度。由降落标靶东、北方向坐标和速度组成状态向量X=[x,y,vx,vy]T和输出向量Y=[x,y]T，系统的状态方程如式(10)所示。Taking the geodetic coordinates of the landing target in the east and north directions as input, the position and velocity of the landing target are predicted by Kalman filtering. The state vector X=[x, y, vx, vy]T and the output vector Y=[x, y]T are composed of the east and north coordinates and velocities of the landing target; the state equation of the system is shown in formula (10).

其中，Δt为采样时间间隔；W代表均值为零的系统噪声，是协方差为Q的高斯变量；V代表均值为零的量测噪声，是协方差为R的高斯变量。Here, Δt is the sampling time interval; W represents the system noise with zero mean, a Gaussian variable with covariance Q; V represents the measurement noise with zero mean, a Gaussian variable with covariance R.
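A minimal constant-velocity Kalman filter consistent with the state vector X=[x, y, vx, vy]T and output Y=[x, y]T above is sketched below. The noise covariances q and r and the noiseless synthetic track are illustrative assumptions; formula (10)'s exact matrices are not reproduced in this text.

```python
import numpy as np

def kalman_cv(measurements, dt=0.1, q=1e-3, r=1e-2):
    """Constant-velocity Kalman filter over X = [x, y, vx, vy]^T:
    predict with the transition matrix F (positions advance by v*dt),
    then update with east/north position measurements z = (x, y)."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)
    Hm = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 0]], float)
    Q = q * np.eye(4)          # system-noise covariance (assumed)
    R = r * np.eye(2)          # measurement-noise covariance (assumed)
    X = np.zeros(4)
    P = np.eye(4)
    for z in measurements:
        # predict
        X = F @ X
        P = F @ P @ F.T + Q
        # update
        S = Hm @ P @ Hm.T + R
        Kk = P @ Hm.T @ np.linalg.inv(S)
        X = X + Kk @ (np.asarray(z, float) - Hm @ X)
        P = (np.eye(4) - Kk @ Hm) @ P
    return X

# Target moving east at 1 m/s and north at 0.5 m/s, sampled every 0.1 s:
zs = [(1.0 * k * 0.1, 0.5 * k * 0.1) for k in range(100)]
X = kalman_cv(zs)
```

Even though only positions are measured, the filter infers the east and north velocities, which is what allows the UAV to anticipate the motion of a dynamic landing target.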

基于降落标靶的位置和动态特性持续计算出大地坐标系下无人机与降落标靶的相对位置和相对速度,上述相对位置包括相对距离和相对高度。Based on the position and dynamic characteristics of the landing target, the relative position and relative speed between the UAV and the landing target in the geodetic coordinate system are continuously calculated, and the relative position includes the relative distance and relative height.

步骤S5：根据大地坐标系下无人机与降落标靶的相对位置和相对速度，通过三重PID(比例、积分、微分)控制算法使无人机精确地降落在标靶中心位置。Step S5: according to the relative position and relative velocity of the UAV and the landing target in the geodetic coordinate system, a triple PID (proportional, integral, derivative) control algorithm guides the UAV to land accurately at the center of the target.

所述的三重PID控制算法分为：The triple PID control algorithm consists of:

(1)位置控制,以无人机与降落标靶的相对位置为输入,通过PID控制算法控制无人机向标靶移动;(1) Position control, taking the relative position of the UAV and the landing target as input, and controlling the movement of the UAV to the target through the PID control algorithm;

(2)水平速度控制,以无人机与降落标靶的相对速度为输入,叠加在位置控制得出的速度上,通过PID控制算法追踪动态降落标靶;(2) Horizontal speed control, using the relative speed of the UAV and the landing target as input, superimposed on the speed obtained by the position control, and tracking the dynamic landing target through the PID control algorithm;

(3)降落速度控制,以无人机与降落标靶的相对高度为输入,通过PID控制算法控制无人机降落在标靶上。(3) Landing speed control, taking the relative height of the UAV and the landing target as input, and controlling the UAV to land on the target through the PID control algorithm.
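The three loops above can be sketched in one horizontal dimension plus height as follows. The gains, the time step, and the target motion are illustrative assumptions, not the patent's tuned values.

```python
class PID:
    """Minimal PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev = None

    def step(self, error):
        self.integral += error * self.dt
        deriv = 0.0 if self.prev is None else (error - self.prev) / self.dt
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

dt = 0.1
pos_pid = PID(1.0, 0.0, 0.1, dt)      # (1) position loop -> velocity command
descent_pid = PID(0.5, 0.0, 0.0, dt)  # (3) descent loop -> sink-rate command

x_uav, x_target, v_target = 0.0, 5.0, 0.5   # east positions (m), target speed (m/s)
height = 10.0                               # relative height (m)
for _ in range(300):
    # (2) the target's own velocity is superimposed on the position-loop output
    v_cmd = pos_pid.step(x_target - x_uav) + v_target
    x_uav += v_cmd * dt
    x_target += v_target * dt
    height -= descent_pid.step(height) * dt

error = x_target - x_uav
```

With the feed-forward term of loop (2), the position error decays even though the target keeps moving; without it, a pure position loop would settle with a steady-state lag behind a moving target.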

本发明实施例的无人机降落方法不只适用于无人机在固定标靶上降落,还适用于在行驶中的车辆和船舶上降落。The UAV landing method in the embodiment of the present invention is not only suitable for UAV landing on a fixed target, but also suitable for landing on a moving vehicle or ship.

综上所述，本发明实施例的无人机精确位置的自主降落导引方法通过识别降落标靶中的语义图标，实现对降落标靶的定位和跟踪，并估算降落标靶的移动速度，弥补GNSS定位精度不足导致的降落标靶定位误差大的不足，实现无人机在动态标靶上精准自主降落。In summary, the autonomous landing guidance method for the precise position of the UAV in the embodiment of the present invention locates and tracks the landing target by recognizing the semantic icons on it, estimates the moving speed of the landing target, compensates for the large target-positioning error caused by insufficient GNSS positioning accuracy, and achieves precise autonomous landing of the UAV on a dynamic target.

通过采用图像识别的方法,能够自动识别降落标靶中的语义图标,从而快速计算无人机与标靶的相对位置和相对速度;By adopting the method of image recognition, the semantic icon in the landing target can be automatically recognized, so as to quickly calculate the relative position and relative speed of the UAV and the target;

降落标靶中的语义图标可以在满足相机视场的情况下灵活配置大小、位置和姿态，组成不同的降落标靶图形；The size, position and orientation of the semantic icons in the landing target can be configured flexibly, subject to the camera's field of view, to compose different landing target patterns;

从无人机到达可识别距离到降落在标靶上的全过程中,每时每刻都至少有一个语义图标被完整识别到,从而保证对降落标靶的定位精确度;During the whole process from the UAV reaching the recognizable distance to landing on the target, at least one semantic icon is fully recognized at every moment, thus ensuring the positioning accuracy of the landing target;

使用三重PID算法控制无人机对动态标靶进行跟踪和自主降落,相比同类算法更加高速、稳定。The triple PID algorithm is used to control the UAV to track the dynamic target and land autonomously, which is faster and more stable than similar algorithms.

本领域普通技术人员可以理解:附图只是一个实施例的示意图,附图中的模块或流程并不一定是实施本发明所必须的。Those skilled in the art can understand that the accompanying drawing is only a schematic diagram of an embodiment, and the modules or processes in the accompanying drawing are not necessarily necessary for implementing the present invention.

通过以上的实施方式的描述可知,本领域的技术人员可以清楚地了解到本发明可借助软件加必需的通用硬件平台的方式来实现。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品可以存储在存储介质中,如ROM/RAM、磁碟、光盘等,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本发明各个实施例或者实施例的某些部分所述的方法。It can be seen from the above description of the implementation manners that those skilled in the art can clearly understand that the present invention can be implemented by means of software plus a necessary general hardware platform. Based on this understanding, the essence of the technical solution of the present invention or the part that contributes to the prior art can be embodied in the form of software products, and the computer software products can be stored in storage media, such as ROM/RAM, disk , CD, etc., including several instructions to make a computer device (which may be a personal computer, server, or network device, etc.) execute the methods described in various embodiments or some parts of the embodiments of the present invention.

本说明书中的各个实施例均采用递进的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于装置或系统实施例而言,由于其基本相似于方法实施例,所以描述得比较简单,相关之处参见方法实施例的部分说明即可。以上所描述的装置及系统实施例仅仅是示意性的,其中所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。本领域普通技术人员在不付出创造性劳动的情况下,即可以理解并实施。Each embodiment in this specification is described in a progressive manner, the same and similar parts of each embodiment can be referred to each other, and each embodiment focuses on the differences from other embodiments. In particular, for the device or system embodiments, since they are basically similar to the method embodiments, the description is relatively simple, and for relevant parts, refer to part of the description of the method embodiments. The device and system embodiments described above are only illustrative, and the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, It can be located in one place, or it can be distributed to multiple network elements. Part or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment. It can be understood and implemented by those skilled in the art without creative effort.

以上所述，仅为本发明较佳的具体实施方式，但本发明的保护范围并不局限于此，任何熟悉本技术领域的技术人员在本发明揭露的技术范围内，可轻易想到的变化或替换，都应涵盖在本发明的保护范围之内。因此，本发明的保护范围应该以权利要求的保护范围为准。The above is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that a person skilled in the art could readily conceive within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be determined by the protection scope of the claims.

Claims (6)

1.一种无人机精确位置的自主降落导引方法,其特征在于,包括:1. A self-guiding method for the precise position of an unmanned aerial vehicle, characterized in that it comprises: 通过卫星导航系统引导无人机飞行到地面上的降落标靶的设定距离范围内;Guide the UAV to fly to the set distance range of the landing target on the ground through the satellite navigation system; 通过机载相机获取地面的视频图像,通过图形检测规则识别出所述视频图像中包含的降落标靶上的语义图标,根据所述降落标靶上的语义图标计算出降落标靶的中心位置信息;The video image of the ground is acquired by the airborne camera, the semantic icon on the landing target contained in the video image is recognized through the graphic detection rule, and the center position information of the landing target is calculated according to the semantic icon on the landing target ; 根据所述降落标靶的中心位置信息通过机载相机与无人机的姿态和相对位置关系,计算大地坐标系下所述降落标靶的位置和动态特性;Calculate the position and dynamic characteristics of the landing target in the geodetic coordinate system through the attitude and relative position relationship between the airborne camera and the UAV according to the center position information of the landing target; 基于所述降落标靶的位置和动态特性持续计算出大地坐标系下无人机与降落标靶的相对位置和相对速度,通过三重PID控制算法控制无人机降落在所述降落标靶的中心位置。Based on the position and dynamic characteristics of the landing target, the relative position and relative speed of the UAV and the landing target in the geodetic coordinate system are continuously calculated, and the UAV is controlled to land at the center of the landing target through a triple PID control algorithm Location. 2.根据权利要求1所述的方法,其特征在于,所述降落标靶的内部包含多个互不重叠的语义图标,每个语义图标的大小、位置以及对应的语义都是已知的,多个语义图标按照一定规律排布,使得无人机降落过程中机载相机始终至少能看到1个语义图标。2. The method according to claim 1, wherein the inside of the landing target contains a plurality of non-overlapping semantic icons, and the size, position and corresponding semantics of each semantic icon are known, Multiple semantic icons are arranged according to certain rules, so that the airborne camera can always see at least one semantic icon during the landing process of the drone. 
3.根据权利要求2所述的方法,其特征在于,所述语义图标的主体为黑色矩形,所述语义图标的内部按照语义规则画有不同位置和数量的白色矩形,将各个语义图标包含的语义信息保存在语义图标数据库中。3. The method according to claim 2, wherein the main body of the semantic icon is a black rectangle, and the inside of the semantic icon is drawn with white rectangles of different positions and quantities according to the semantic rules, and each semantic icon contains Semantic information is stored in a semantic icon database. 4.根据权利要求3所述的方法,其特征在于,所述的通过机载相机获取地面的视频图像,通过图形检测规则识别出所述视频图像中包含的降落标靶上的语义图标,根据所述降落标靶上的语义图标计算出降落标靶的中心位置信息,包括:4. method according to claim 3, is characterized in that, described obtains the video image of ground by airborne camera, recognizes the semantic icon on the landing target that comprises in described video image by graphic detection rule, according to The semantic icon on the landing target calculates the center position information of the landing target, including: 通过无人机的机载相机拍摄地面的视频图像,将所述视频图像转换为灰度图像,根据所述灰度图像中每个像素点的邻域计算出每个像素点对应的自适应阈值,将每个像素点的灰度值与对应的自适应阈值进行比较,当某个像素点的灰度值大于自适应阈值时,则将所述某个像素点设为白色,当某个像素点的灰度值不大于自适应阈值时,则将所述某个像素点设为反之设置为黑色;The ground video image is taken by the drone's onboard camera, the video image is converted into a grayscale image, and the adaptive threshold corresponding to each pixel is calculated according to the neighborhood of each pixel in the grayscale image , compare the gray value of each pixel with the corresponding adaptive threshold, when the gray value of a certain pixel is greater than the adaptive threshold, set the certain pixel to white, when a certain pixel When the gray value of the point is not greater than the adaptive threshold, the certain pixel is set to black otherwise; 提取重新设置像素点后的灰度图像中的黑色矩形,并按照语义规则检测该黑色矩形是否为正确的语义图标;Extract the black rectangle in the grayscale image after resetting the pixels, and detect whether the black rectangle is a correct semantic icon according to the semantic rules; 
所述的语义规则为:将黑色矩形的每个边6等分,将对边的对应等分点连接,由此将黑色矩形分为6×6个小矩形,除去最外围的一层全为黑色外,内部的每个小矩形中,黑色占大多数的矩形记为0,白色占大多数的矩形记为1,由上至下将黑色矩形中的每一行的数据串联起来,得到串联数据,将所述串联数据作为所述语义图标所蕴含的语义信息,将所述串联数据所代表的语义信息与语义图标数据库中存储的各个语义图标的语义信息进行对比,当有对比结果为一致时,则确定上述串联数据对应的黑色矩形为正确的语义图标;The semantic rule described is: divide each side of the black rectangle into 6 equal parts, connect the corresponding equal parts of the opposite sides, thus divide the black rectangle into 6×6 small rectangles, except for the outermost layer, all are In each of the small rectangles outside the black and inside, the rectangle with the majority of black is recorded as 0, and the rectangle with the majority of white is recorded as 1, and the data of each row in the black rectangle is concatenated from top to bottom to obtain concatenated data , using the series data as the semantic information contained in the semantic icon, comparing the semantic information represented by the series data with the semantic information of each semantic icon stored in the semantic icon database, and when the comparison results are consistent , then determine that the black rectangle corresponding to the above concatenated data is the correct semantic icon; 根据降落标靶上的所有语义图标所蕴含的语义信息计算出降落标靶的中心位置信息。The center position information of the landing target is calculated according to the semantic information contained in all the semantic icons on the landing target. 5.根据权利要求4所述的方法,其特征在于,所述的根据所述降落标靶的中心位置信息通过机载相机与无人机的姿态和相对位置关系,计算大地坐标系下所述降落标靶的位置和动态特性,包括:5. The method according to claim 4, characterized in that, according to the center position information of the landing target, the attitude and relative positional relationship between the airborne camera and the unmanned aerial vehicle are used to calculate the ground coordinate system. 
The location and dynamics of the landing target, including: 根据降落标靶的中心位置信息建立标靶坐标系,根据各语义图标所蕴含的语义信息得到语义图标的顶点及中心在标靶坐标系下的坐标位置;The target coordinate system is established according to the center position information of the landing target, and the coordinate position of the vertex and center of the semantic icon in the target coordinate system is obtained according to the semantic information contained in each semantic icon; 通过语义图标在标靶坐标系和图像像素坐标的一一对应关系获取标靶平面到相机成像平面的旋转和平移矩阵,根据标靶平面到相机成像平面的旋转和平移矩阵将语义图标在标靶坐标系下的坐标转换为语义图标在相机坐标系下的坐标,根据相机坐标系到大地坐标系的转换公式,将语义图标在相机坐标系下的空间坐标转换为语义图标在大地坐标系下的坐标;The rotation and translation matrix from the target plane to the camera imaging plane is obtained through the one-to-one correspondence between the semantic icon in the target coordinate system and the image pixel coordinates, and the semantic icon is placed on the target according to the rotation and translation matrix from the target plane to the camera imaging plane. The coordinates in the coordinate system are transformed into the coordinates of the semantic icon in the camera coordinate system. According to the conversion formula from the camera coordinate system to the earth coordinate system, the spatial coordinates of the semantic icon in the camera coordinate system are transformed into the coordinates of the semantic icon in the earth coordinate system. 
coordinate; 以语义图标的东、北方向的大地坐标为输入,通过卡尔曼滤波计算大地坐标系下降落标靶的位置和速度,由降落标靶的东、北方向坐标和速度组成状态向量X=[x,y,vx,vy]T和输出向量Y=[x,y]T,大地坐标系下所述降落标靶的状态方程如式(1)所示:Taking the geodetic coordinates of the east and north directions of the semantic icon as input, the position and velocity of the landing target in the geodetic coordinate system are calculated by Kalman filtering, and the state vector X=[x , y, v x , v y ] T and the output vector Y=[x, y] T , the state equation of the landing target under the geodetic coordinate system is shown in formula (1): 其中,Δt为采样时间间隔;W代表均值为零的系统噪声,是协方差为Q的高斯变量;V代表均值为零的量测噪声,是协方差为R的高斯变量;in, Δt is the sampling time interval; W represents system noise with zero mean, which is a Gaussian variable with covariance Q; V represents measurement noise with zero mean, which is a Gaussian variable with covariance R; 所述标靶平面到相机成像平面的旋转和平移矩阵如式(2)所示::The rotation and translation matrix from the target plane to the camera imaging plane is shown in formula (2): 其中,(u,v)为图像像素坐标,(x,y)为语义图标在标靶坐标系下的坐标,为内参矩阵,R为3×3的旋转矩阵,T为3×1的平移向量;Among them, (u, v) is the image pixel coordinates, (x, y) is the coordinates of the semantic icon in the target coordinate system, is the internal reference matrix, R is a 3×3 rotation matrix, and T is a 3×1 translation vector; 所述相机坐标系到大地坐标系的转换公式如式(3)所示:The conversion formula of the camera coordinate system to the earth coordinate system is as shown in formula (3): Xg=RpXp+Xg0=RpRcXc+Xg0 (3)X g =R p X p +X g0 =R p R c X c +X g0 (3) 其中Xg、Xp、Xc分别为降落标靶在大地坐标系、无人机坐标系、相机坐标系下的坐标;Xg0为无人机当前在大地坐标系下的坐标,由GNSS定位坐标转换得出;Rp、Rc分别为无人机系到大地坐标系、相机到无人机坐标系的旋转矩阵,计算公式如式(4)所示,其中,α为翻滚角,β为俯仰角,γ为偏航角。Among them, X g , X p , and X c are the coordinates of the landing target in the earth coordinate system, the UAV coordinate system, and the camera coordinate system; X g0 is the current coordinate of the UAV in the earth coordinate system, which is positioned by GNSS The coordinate transformation is obtained; R p and R c are the rotation matrices from the UAV system to the earth coordinate 
system, and from the camera to the UAV coordinate system, respectively. The calculation formula is shown in formula (4), where α is the roll angle and β is the pitch angle, and γ is the yaw angle. 6.根据权利要求5所述的方法,其特征在于,所述的基于所述降落标靶的位置和动态特性持续计算出大地坐标系下无人机与降落标靶的相对位置和相对速度,通过三重PID控制算法控制无人机降落在所述降落标靶的中心位置,包括:6. The method according to claim 5, wherein the relative position and relative velocity of the unmanned aerial vehicle and the landing target under the geodetic coordinate system are continuously calculated based on the position and dynamic characteristics of the landing target, The drone is controlled to land at the center of the landing target through a triple PID control algorithm, including: 基于所述降落标靶的位置和动态特性持续计算出大地坐标系下无人机与降落标靶的相对位置和相对速度;Continuously calculate the relative position and relative velocity of the UAV and the landing target in the geodetic coordinate system based on the position and dynamic characteristics of the landing target; 以无人机与降落标靶的相对位置为输入,通过PID控制算法控制无人机向标靶移动;Taking the relative position of the UAV and the landing target as input, the UAV is controlled to move to the target through the PID control algorithm; 以无人机与降落标靶的相对速度为输入,叠加在位置控制得出的速度上,通过PID控制算法追踪动态降落标靶;The relative speed of the UAV and the landing target is used as input, superimposed on the speed obtained by the position control, and the dynamic landing target is tracked through the PID control algorithm; 以无人机与降落标靶的相对高度为输入,通过PID控制算法控制无人机降落在标靶上。Taking the relative height of the UAV and the landing target as input, the UAV is controlled to land on the target through the PID control algorithm.
CN201910446706.3A 2019-05-27 2019-05-27 Autonomous Landing Guidance Method for Precise Position of UAV Active CN110221625B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910446706.3A CN110221625B (en) 2019-05-27 2019-05-27 Autonomous Landing Guidance Method for Precise Position of UAV

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910446706.3A CN110221625B (en) 2019-05-27 2019-05-27 Autonomous Landing Guidance Method for Precise Position of UAV

Publications (2)

Publication Number Publication Date
CN110221625A true CN110221625A (en) 2019-09-10
CN110221625B CN110221625B (en) 2021-08-03

Family

ID=67818488

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910446706.3A Active CN110221625B (en) 2019-05-27 2019-05-27 Autonomous Landing Guidance Method for Precise Position of UAV

Country Status (1)

Country Link
CN (1) CN110221625B (en)

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103226356A (en) * 2013-02-27 2013-07-31 广东工业大学 Image-processing-based unmanned plane accurate position landing method
CN104316060A (en) * 2014-06-06 2015-01-28 清华大学深圳研究生院 Rendezvous docking method and device of space non-cooperative target
CN104536453A (en) * 2014-11-28 2015-04-22 深圳一电科技有限公司 Aircraft control method and device
CN105197252A (en) * 2015-09-17 2015-12-30 武汉理工大学 Small-size unmanned aerial vehicle landing method and system
CN106127201A (en) * 2016-06-21 2016-11-16 西安因诺航空科技有限公司 A kind of unmanned plane landing method of view-based access control model positioning landing end
CN106054929A (en) * 2016-06-27 2016-10-26 西北工业大学 Unmanned plane automatic landing guiding method based on optical flow
CN109643129A (en) * 2016-08-26 2019-04-16 深圳市大疆创新科技有限公司 The method and system of independent landing
CN106774386A (en) * 2016-12-06 2017-05-31 杭州灵目科技有限公司 Unmanned plane vision guided navigation landing system based on multiple dimensioned marker
WO2018111075A1 (en) * 2016-12-16 2018-06-21 Rodarte Leyva Eduardo Automatic landing system with high-speed descent for drones
CN107244423A (en) * 2017-06-27 2017-10-13 歌尔科技有限公司 A kind of landing platform and its recognition methods
CN107240063A (en) * 2017-07-04 2017-10-10 武汉大学 A kind of autonomous landing method of rotor wing unmanned aerial vehicle towards mobile platform
CN108549397A (en) * 2018-04-19 2018-09-18 武汉大学 The unmanned plane Autonomous landing method and system assisted based on Quick Response Code and inertial navigation
CN108305264A (en) * 2018-06-14 2018-07-20 江苏中科院智能科学技术应用研究院 A kind of unmanned plane precision landing method based on image procossing
CN108828500A (en) * 2018-06-22 2018-11-16 深圳草莓创新技术有限公司 Unmanned plane accurately lands bootstrap technique and Related product
CN108873917A (en) * 2018-07-05 2018-11-23 太原理工大学 A kind of unmanned plane independent landing control system and method towards mobile platform
CN108919830A (en) * 2018-07-20 2018-11-30 南京奇蛙智能科技有限公司 A kind of flight control method that unmanned plane precisely lands
CN109298723A (en) * 2018-11-30 2019-02-01 山东大学 Method and system for precise landing of vehicle-mounted UAV

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
贾配洋: "无人机高速移动降落技术研究", 《中国优秀硕士学位论文全文数据库(电子期刊)工程科技Ⅱ辑》 *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110920916A (en) * 2019-12-11 2020-03-27 Jimei University Landing equipment for civil airliners
CN111413717A (en) * 2019-12-18 2020-07-14 China University of Geosciences (Wuhan) Aircraft digital carrier landing system based on satellite navigation
CN111413717B (en) * 2019-12-18 2023-08-11 China University of Geosciences (Wuhan) Digital aircraft landing system based on satellite navigation
CN113066040B (en) * 2019-12-26 2022-09-09 Nanjing Zhenshi Intelligent Technology Co., Ltd. Face recognition equipment arrangement method based on UAV 3D modeling
CN113066040A (en) * 2019-12-26 2021-07-02 Nanjing Zhenshi Intelligent Technology Co., Ltd. Face recognition equipment layout method based on UAV 3D modeling
CN111813148A (en) * 2020-07-22 2020-10-23 Guangdong University of Technology UAV landing method, system, device and storage medium
CN111813148B (en) * 2020-07-22 2024-01-26 Guangdong University of Technology UAV landing method, system, equipment and storage medium
WO2022135427A1 (en) * 2020-12-23 2022-06-30 Method and system for determining complete icon
CN112904895A (en) * 2021-01-20 2021-06-04 Beijing Civil Aircraft Technology Research Center, Commercial Aircraft Corporation of China, Ltd. Image-based aircraft guidance method and device
CN113050667A (en) * 2021-02-05 2021-06-29 Guangdong Guodi Planning Technology Co., Ltd. UAV sampling control method, controller and system
CN114935938A (en) * 2021-11-29 2022-08-23 Jiangsu University of Science and Technology Autonomous landing system and method for rotor UAV based on mobile platform
CN114200948A (en) * 2021-12-09 2022-03-18 National University of Defense Technology Autonomous UAV landing method based on visual aids
CN114200948B (en) * 2021-12-09 2023-12-29 National University of Defense Technology Autonomous UAV landing method based on visual assistance
WO2023109589A1 (en) * 2021-12-13 2023-06-22 Shenzhen Institute of Advanced Technology Smart car-UAV cooperative sensing system and method
CN115258181A (en) * 2022-08-18 2022-11-01 Shenzhen Damoda Intelligent Control Technology Co., Ltd. UAV and take-off/landing method thereof, computer equipment and storage medium

Also Published As

Publication number Publication date
CN110221625B (en) 2021-08-03

Similar Documents

Publication Publication Date Title
CN110221625B (en) Autonomous Landing Guidance Method for Precise Position of UAV
CN110222612B (en) Dynamic target recognition and tracking method for autonomous landing of UAV
AU2022291653B2 (en) A backup navigation system for unmanned aerial vehicles
Lee et al. Vision-based UAV landing on the moving vehicle
EP3158412B1 (en) Sensor fusion using inertial and image sensors
EP3158417B1 (en) Sensor fusion using inertial and image sensors
EP3158411B1 (en) Sensor fusion using inertial and image sensors
WO2016187757A1 (en) Sensor fusion using inertial and image sensors
US12315244B2 (en) Target state estimation method and apparatus, and unmanned aerial vehicle
CN111615677B (en) Unmanned aerial vehicle safety landing method and device, unmanned aerial vehicle and medium
US20080077284A1 (en) System for position and velocity sense of an aircraft
JP2015006874A (en) Systems and methods for autonomous landing using three dimensional evidence grid
CN101109640A (en) Vision-based autonomous landing navigation system for unmanned aircraft
Kim et al. Landing control on a mobile platform for multi-copters using an omnidirectional image sensor
CN109612333B (en) A vision-aided guidance system for vertical recovery of reusable rockets
Morais et al. Trajectory and Guidance Mode for autonomously landing an UAV on a naval platform using a vision approach
CN115272458A (en) Visual positioning method for fixed wing unmanned aerial vehicle in landing stage
CN113568430A (en) A method for correcting control of UAV wing execution data
CN113156450B (en) Active rotation laser radar system on unmanned aerial vehicle and control method thereof
US10330769B1 (en) Method and apparatus for geolocating emitters in a multi-emitter environment
CN114384932A (en) A method of unmanned aerial vehicle navigation and docking based on distance measurement
Budiyanto et al. Autonomous quadcopter landing system using monocular camera and fractal marker
Llerena et al. Error reduction in autonomous multirotor vision-based landing system with helipad context
US20240420371A1 (en) A method, software product and system for gnss-denied navigation
Keller et al. A Robust and Accurate Landing Methodology for Drones on Moving Targets. Drones 2022, 6, 98

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20190910

Assignee: GUANGZHOU HI-TARGET SURVEYING INSTRUMENT Co.,Ltd.

Assignor: Beijing Jiaotong University

Contract record no.: X2021990000807

Denomination of invention: Autonomous landing guidance method for precise position of UAV

Granted publication date: 20210803

License type: Exclusive License

Record date: 20211222
