
CN102538782B - Helicopter landing guide device and method based on computer vision

Info

Publication number: CN102538782B
Authority: CN (China)
Prior art keywords: helicopter, ground, module, tracking, sensor system
Legal status: Expired - Fee Related
Application number: CN201210000593.2A
Other languages: Chinese (zh)
Other versions: CN102538782A (en)
Inventor
郑翰
李平
郑晓平
Current Assignee: Zhejiang University (ZJU)
Original Assignee: Zhejiang University (ZJU)
Priority date / Filing date: 2012-01-04
Publication date: 2014-08-27
Application filed by Zhejiang University (ZJU)

Landscapes

  • Image Analysis (AREA)
  • Navigation (AREA)

Abstract

The invention discloses a computer-vision-based helicopter landing guidance device and method. The device comprises a ground three-camera visual sensor system, a foreground search module, a tracking and matching module, a pose calculation module and a wireless data transmission module, connected in sequence; the ground three-camera visual sensor system is also connected directly to the tracking and matching module. Compared with previous vision-guided landing schemes, the image background is the monotonous sky, which effectively reduces background interference and noise; in addition, the image processing is performed by a high-performance ground computer, which improves real-time performance and computational accuracy compared with an onboard computer.

Description

A Helicopter Landing Guidance Method Based on Computer Vision

Technical Field

The invention relates to the technical field of aircraft navigation, and in particular to a visual guidance method for unmanned helicopter landing and a device for implementing the method.

Background Art

The autonomous landing of a micro unmanned helicopter is a very demanding flight phase: at low speed and low altitude, accurate self-motion information such as altitude, velocity and heading must be obtained and fed back to the onboard flight control system, which controls the position and attitude of the aircraft so that it descends to the designated spot.

The traditional UAV landing guidance method fuses data from the onboard Global Positioning System (GPS), Inertial Navigation System (INS) and electronic compass to obtain the position and attitude of the helicopter. However, limited by the accuracy of the onboard GPS receiver and affected by noise interference and occlusion during the landing process, the GPS system cannot provide sufficiently accurate pose information, while the INS and the electronic compass accumulate large integration errors and cannot provide accurate positioning on their own.

To address this problem, landing guidance schemes have appeared in recent years in which a visual sensor photographs ground markers to obtain the helicopter's pose. The vision approach, however, requires complex processing of the captured images to extract this pose information, and because the size and payload of the helicopter limit the performance of the onboard computer, both real-time performance and computational accuracy suffer. In addition, an onboard visual sensor capturing ground images is affected to varying degrees by image jitter and smoke occlusion, so a new and more complete helicopter landing scheme is still needed.

Summary of the Invention

To improve the real-time performance and accuracy of aircraft pose measurement, the present invention provides a device and method that use the plastic buffer balls at the ends of the helicopter landing gear as feature corner points, track these points in real time with visual sensors, and obtain the helicopter's pose through image processing algorithms, so as to guide the helicopter to an autonomous fixed-point landing.

The object of the present invention is achieved through the following technical solution: a helicopter landing guidance device comprising a ground three-camera visual sensor system, a foreground search module, a tracking and matching module, a pose calculation module and a wireless data transmission module, connected in sequence, with the ground three-camera visual sensor system also connected to the tracking and matching module.

A visual guidance method for unmanned helicopter landing, comprising the following steps:

(1) the ground three-camera visual sensor system collects images;

(2) the images collected in step (1) are searched for the image foreground, namely the four airborne artificial landmarks colored red, yellow, blue and green;

(3) the artificial landmarks are tracked in real time with a particle filter algorithm;

(4) the pose calculation module computes the position and attitude of the helicopter relative to the ground coordinate system;

(5) the ground station computer transmits the pose information obtained in step (4) through the wireless data transmission module to a receiver connected to the flight control computer, from which it is passed to the flight control computer over an RS-232 interface;

(6) the flight control computer guides the helicopter to land at the designated position according to the above pose information.

The beneficial effects of the present invention are as follows. Because artificial landmarks are mounted on the helicopter body and photographed from the ground by a multi-camera visual sensor system, the image background is the monotonous sky, which, compared with schemes in which an onboard visual sensor photographs ground markers, effectively reduces background interference and noise. At the same time, the image processing is performed by a high-performance ground computer, which improves real-time performance and computational accuracy compared with an onboard computer. In addition, the multi-camera vision system collects and fuses image information from three viewpoints, which effectively resolves occlusion between landmarks and enlarges the measurement volume of the system. The invention adds only four artificial landmarks and three cameras to the original hardware and makes full use of the existing ground station computer and wireless data transmission module, so the cost is well controlled.

Brief Description of the Drawings

The present invention is further described below with reference to the accompanying drawings and embodiments:

Fig. 1 is a functional block diagram of the helicopter landing guidance device of the present invention;

Fig. 2 is a structural diagram of the ground three-camera visual sensor system;

Fig. 3 is a flow chart of the foreground search module software;

Fig. 4 is a flow chart of the tracking and matching module software;

Fig. 5 is a flow chart of the pose calculation module software;

Fig. 6 is a schematic flow chart of the helicopter landing method of the present invention.

Detailed Description of the Embodiments

Fig. 1 shows an example functional block diagram of the helicopter landing guidance device according to the present invention.

As shown in Fig. 1, the helicopter landing guidance device of the present invention comprises a ground three-camera visual sensor system 1, a foreground search module 2, a tracking and matching module 3, a pose calculation module 4 and a wireless data transmission module 5, connected in sequence; the ground three-camera visual sensor system 1 is also connected to the tracking and matching module 3.

As shown in Fig. 2, the ground three-camera visual sensor system consists of three optical cameras connected as illustrated in the figure. The optical cameras can be, for example, Logitech C160 webcams, but are not limited to this model.
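As a minimal sketch of how such a rig might be polled from the ground station, assuming the three cameras enumerate as video devices 0-2 (device indices and the demo loop length are illustrative, not from the patent):

```cpp
#include <opencv2/opencv.hpp>

// Illustrative only: open the three ground cameras and grab one frame from each
// per loop iteration, handing the frames to the downstream processing modules.
int main()
{
    cv::VideoCapture cams[3];
    for (int i = 0; i < 3; ++i)
        if (!cams[i].open(i)) return 1;              // abort if a camera is missing

    cv::Mat frames[3];
    for (int n = 0; n < 1000; ++n) {                 // arbitrary demo length
        for (int i = 0; i < 3; ++i)
            if (!cams[i].read(frames[i])) return 1;  // one frame per camera
        // frames[0..2] would be handed to the foreground search module here
    }
    return 0;
}
```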

As shown in Fig. 3, the foreground search module works as follows: the image output by the visual sensors is converted from the RGB color space to the HSV color space; the color regions of interest, namely the red, yellow, blue and green regions corresponding to the four artificial landmarks, are extracted by thresholding; each color region is then inverted to its complementary color (green, blue, yellow and red, respectively); the two images are converted to grayscale to suppress noise and subtracted from each other to remove the background; the Canny operator is applied to find the edges of the spheres; finally, the Hough transform is used to determine the radii of the four circles and the coordinates of their centers in the image. This module can be implemented in C++ with the OpenCV library, but is not limited to such an implementation.
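For illustration, a simplified C++/OpenCV sketch of the thresholding and circle-detection stages for a single color. The HSV range for the red ball is an assumed value, the inverse-color difference step is omitted, and cv::HoughCircles runs the Canny edge detector internally (param1 is its upper threshold), standing in for the explicit Canny-then-Hough sequence described above.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Detect the projected circle of one colored buffer ball in a BGR frame.
// Returns (cx, cy, r) triples found by the Hough transform.
std::vector<cv::Vec3f> findLandmarkCircles(const cv::Mat& bgrFrame)
{
    cv::Mat hsv, mask, blurred;
    cv::cvtColor(bgrFrame, hsv, cv::COLOR_BGR2HSV);        // RGB domain -> HSV domain
    // Assumed HSV range for the red ball; each of the four colors would use
    // its own experimentally tuned range.
    cv::inRange(hsv, cv::Scalar(0, 120, 70), cv::Scalar(10, 255, 255), mask);
    cv::GaussianBlur(mask, blurred, cv::Size(9, 9), 2.0);  // suppress pixel noise
    std::vector<cv::Vec3f> circles;
    cv::HoughCircles(blurred, circles, cv::HOUGH_GRADIENT,
                     1,                 // accumulator resolution = image resolution
                     blurred.rows / 8,  // minimum distance between circle centers
                     150, 20,           // upper Canny threshold, accumulator threshold
                     3, 80);            // assumed min/max radius in pixels
    return circles;
}
```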

The artificial landmarks are four hard plastic buffer balls fixed at the ends of the helicopter landing gear; their positions relative to the helicopter's center of mass have been calibrated in advance. The landing gear supplied with the T-REX 600 Nitro Super Pro helicopter from Futaba of Japan can be used, but the invention is not limited to it.

As shown in Fig. 4, the tracking and matching module works as follows. The number of particles is set and the motion model is chosen as a first-order adaptive model. The four landmarks detected by the foreground search module in step (2) are set as tracking targets, and a weighted color histogram is used to build the target model: for each artificial landmark, let y be the center of the target (the projected circle), m the number of histogram quantization levels, and {x_i}, i = 1, 2, ..., N the pixel coordinates within the tracking region; the weighted color histogram target model p = {p_u(y)}, u = 1, 2, ..., m, is

    p_u(y) = f \sum_{i=1}^{N} k\left( \left\| \frac{y - x_i}{h} \right\|^2 \right) \delta\left( b(x_i) - u \right),

    f = 1 \Big/ \sum_{i=1}^{N} k\left( \left\| \frac{y - x_i}{h} \right\|^2 \right),

where h is the kernel bandwidth, k(||x||^2) is the profile function of the kernel, the function b assigns a pixel to a histogram bin according to its color value, \delta is the Kronecker delta and u is the histogram quantization level.

An initial particle set of N particles is then created,

    \{ x_0^i,\ 1/N \}_{i=1}^{N},

where x_0^i holds the center coordinates and radius of the landmark's projected circle for the i-th particle at frame 0, and every particle has an initial weight of 1/N. In subsequent image frames the system state is propagated to spread the particles randomly and the particle weights are computed; the coordinates of each target in the image plane are then estimated from the weighted particles and output; finally the particles are resampled to redistribute their positions. This module can be implemented in C++, but is not limited to such an implementation.
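A minimal, self-contained C++ sketch of the weighted color-histogram model above and of one way to weight a particle against it. The Epanechnikov kernel profile, the per-pixel bin assignment and the Bhattacharyya-based likelihood are assumptions used for illustration; the patent specifies the histogram model but leaves the kernel and the weighting rule open.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Pixel { double x, y; int bin; };   // image coordinates and histogram bin b(x_i)

// p_u(y): kernel-weighted color histogram around center (cx, cy), bandwidth h, m bins.
std::vector<double> weightedColorHistogram(const std::vector<Pixel>& region,
                                           double cx, double cy, double h, int m)
{
    std::vector<double> p(m, 0.0);
    double norm = 0.0;
    for (const Pixel& px : region) {
        double dx = (cx - px.x) / h, dy = (cy - px.y) / h;
        double r2 = dx * dx + dy * dy;                 // ||(y - x_i)/h||^2
        double k  = r2 < 1.0 ? 1.0 - r2 : 0.0;         // assumed Epanechnikov profile k(.)
        p[px.bin] += k;                                // delta(b(x_i) - u) selects bin u
        norm += k;
    }
    if (norm > 0.0)
        for (double& v : p) v /= norm;                 // normalizing factor f
    return p;
}

// Particle weight: similarity between the candidate histogram at a particle's state
// and the target model via the Bhattacharyya coefficient (an assumed choice).
double particleWeight(const std::vector<double>& candidate,
                      const std::vector<double>& target, double lambda = 20.0)
{
    double rho = 0.0;
    for (std::size_t u = 0; u < target.size(); ++u)
        rho += std::sqrt(candidate[u] * target[u]);
    return std::exp(-lambda * (1.0 - rho));            // closer histograms -> larger weight
}
```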

As shown in Fig. 5, the pose calculation module works as follows. The transformation between the helicopter coordinate system and the camera coordinate system is

    \begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = R \begin{bmatrix} x_h \\ y_h \\ z_h \end{bmatrix} + t,

where (x_c, y_c, z_c) and (x_h, y_h, z_h) denote the coordinates of a landmark center in the camera coordinate system and in the helicopter coordinate system, respectively. An iterative least-squares algorithm is used to solve for the rotation matrix R and the translation vector t of the helicopter coordinate system relative to the origin of the camera coordinate system. The yaw, roll and pitch angles of the helicopter are obtained from the rotation matrix, and its altitude and its distance from the landing point are obtained from the translation vector.
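A hedged C++/OpenCV sketch of this step. OpenCV's iterative PnP solver (a Levenberg-Marquardt minimization of reprojection error) is used here as a stand-in for the iterative least-squares solution of R and t described in the text, and the yaw/pitch/roll extraction assumes a ZYX Euler convention, which the patent does not specify.

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Solve the helicopter pose from the four tracked circle centers.
// landmarksHeli: calibrated ball positions in the helicopter frame.
// centersImage : circle centers reported by the tracking module (same order).
// K, distCoeffs: camera intrinsics from a standard calibration.
void solveHelicopterPose(const std::vector<cv::Point3f>& landmarksHeli,
                         const std::vector<cv::Point2f>& centersImage,
                         const cv::Mat& K, const cv::Mat& distCoeffs,
                         double& yaw, double& pitch, double& roll, cv::Mat& tvec)
{
    cv::Mat rvec;
    cv::solvePnP(landmarksHeli, centersImage, K, distCoeffs, rvec, tvec,
                 false, cv::SOLVEPNP_ITERATIVE);  // iterative reprojection-error fit
    cv::Mat R;
    cv::Rodrigues(rvec, R);                       // rotation vector -> 3x3 matrix R
    // Assumed ZYX (yaw-pitch-roll) decomposition of R.
    yaw   = std::atan2(R.at<double>(1, 0), R.at<double>(0, 0));
    pitch = std::asin(-R.at<double>(2, 0));
    roll  = std::atan2(R.at<double>(2, 1), R.at<double>(2, 2));
    // tvec is the translation t; height and the horizontal offset from the landing
    // point follow from it once the camera pose relative to the pad is surveyed.
}
```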

The wireless data transmission module can use, for example, the APC200A-43 module produced by 安美通科技有限公司 (Anmeitong Technology Co., Ltd.), but is not limited to it.

As shown in Fig. 6, the visual guidance method for unmanned helicopter landing of the present invention comprises the following steps:

At the start it is assumed that, guided by the global positioning system, the helicopter has already flown into the effective field of view of the ground visual sensor system. The flight control computer then switches over to the helicopter landing guidance device of the present invention, which provides the high-precision pose information required for landing.

1. The ground three-camera visual sensor system collects images.

The ground three-camera visual sensor system captures images in real time and passes them to the foreground search module.

2. Search the images collected in step 1 for the image foreground, namely the four airborne artificial landmarks colored red, yellow, blue and green.

Specifically, the foreground search module converts the image output by the ground three-camera visual sensor system from the RGB color space to the HSV color space; extracts the color regions of interest, namely the red, yellow, blue and green regions corresponding to the four artificial landmarks; inverts each color region to its complementary color (green, blue, yellow and red, respectively); converts the two images to grayscale to suppress noise and subtracts them to remove the background; applies the Canny operator to find the edges of the spheres; and finally applies the Hough transform to determine the radii of the four circles and the coordinates of their centers in the image.

3. Track the artificial landmarks in real time with a particle filter algorithm.

Specifically, in the tracking and matching module, the number of particles is set and the motion model is chosen as a first-order adaptive model. The four landmarks detected in step 2 are set as tracking targets, and a weighted color histogram is used to build the target model: for each artificial landmark, let y be the center of the target (the projected circle), m the number of histogram quantization levels, and {x_i}, i = 1, 2, ..., N the pixel coordinates within the tracking region; the weighted color histogram target model p = {p_u(y)}, u = 1, 2, ..., m, is

    p_u(y) = f \sum_{i=1}^{N} k\left( \left\| \frac{y - x_i}{h} \right\|^2 \right) \delta\left( b(x_i) - u \right),

    f = 1 \Big/ \sum_{i=1}^{N} k\left( \left\| \frac{y - x_i}{h} \right\|^2 \right),

where h is the kernel bandwidth, k(||x||^2) is the profile function of the kernel, the function b assigns a pixel to a histogram bin according to its color value, \delta is the Kronecker delta and u is the histogram quantization level.

An initial particle set of N particles is then created,

    \{ x_0^i,\ 1/N \}_{i=1}^{N},

where x_0^i holds the center coordinates and radius of the landmark's projected circle for the i-th particle at frame 0, and every particle has an initial weight of 1/N. In subsequent image frames the system state is propagated to spread the particles randomly and the particle weights are computed; the coordinates of each target in the image plane are then estimated from the weighted particles and output; finally the particles are resampled to redistribute their positions.

4. Compute the position and attitude of the helicopter relative to the ground coordinate system with the pose calculation module.

Specifically, the transformation between the helicopter coordinate system and the camera coordinate system is

    \begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = R \begin{bmatrix} x_h \\ y_h \\ z_h \end{bmatrix} + t,

where (x_c, y_c, z_c) and (x_h, y_h, z_h) denote the coordinates of a landmark center in the camera coordinate system and in the helicopter coordinate system, respectively. An iterative least-squares algorithm is used to solve for the rotation matrix R and the translation vector t of the helicopter coordinate system relative to the origin of the camera coordinate system; the yaw, roll and pitch angles of the helicopter are obtained from the rotation matrix, and its altitude and its distance from the landing point are obtained from the translation vector.

5. The ground station computer transmits the above pose information through the wireless data transmission module to a receiver connected to the flight control computer, from which it is passed to the flight control computer over an RS-232 interface.
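For illustration, a hedged POSIX/C++ sketch of how the ground station might frame the pose and push it to a serial-attached radio modem. The device path, baud rate and frame layout are assumptions; the patent only states that the link terminates in an RS-232 connection to the flight control computer.

```cpp
#include <cstdint>
#include <cstring>
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

struct PosePacket {                  // 6-DOF pose, fixed little-endian layout (assumed)
    float x, y, z;                   // position relative to the landing point [m]
    float yaw, pitch, roll;          // attitude [rad]
};

int openRadioPort(const char* dev)   // e.g. "/dev/ttyUSB0" for the radio modem (assumed)
{
    int fd = open(dev, O_RDWR | O_NOCTTY);
    if (fd < 0) return -1;
    termios tio{};
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                 // raw 8-bit bytes, no line processing
    cfsetispeed(&tio, B9600);        // assumed baud rate
    cfsetospeed(&tio, B9600);
    tio.c_cflag |= CLOCAL | CREAD;
    tcsetattr(fd, TCSANOW, &tio);
    return fd;
}

bool sendPose(int fd, const PosePacket& p)
{
    uint8_t frame[2 + sizeof(PosePacket)] = {0xAA, 0x55};   // simple sync header
    std::memcpy(frame + 2, &p, sizeof p);
    return write(fd, frame, sizeof frame) == static_cast<ssize_t>(sizeof frame);
}
```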

6. The flight control computer guides the helicopter to land at the designated position according to the above pose information.

By following the above steps, the unmanned helicopter can achieve a safe fixed-point landing.

Claims (1)

1. A computer-vision-based helicopter landing guidance method using a helicopter landing guidance device, the device comprising a ground three-camera visual sensor system, a foreground search module, a tracking and matching module, a pose calculation module and a wireless data transmission module, connected in sequence, the ground three-camera visual sensor system also being connected to the tracking and matching module, characterized in that the method comprises the following steps:

(1) the ground three-camera visual sensor system collects images;

(2) the images collected in step (1) are searched for the image foreground, namely the four airborne artificial landmarks colored red, yellow, blue and green;

(3) the artificial landmarks are tracked in real time with a particle filter algorithm;

(4) the pose calculation module computes the position and attitude of the helicopter relative to the ground coordinate system;

(5) the ground station computer transmits the pose information obtained in step (4) through the wireless data transmission module to a receiver connected to the flight control computer, from which it is passed to the flight control computer over an RS-232 interface;

(6) the flight control computer guides the helicopter to land at the designated position according to the pose information;

wherein step (2) comprises: converting the image output by the ground three-camera visual sensor system from the RGB color space to the HSV color space; extracting the color regions of interest, namely the red, yellow, blue and green regions corresponding to the four artificial landmarks; inverting each color region to its complementary color (green, blue, yellow and red, respectively); converting the two images to grayscale to suppress noise and subtracting them to remove the background; applying the Canny operator to find the edges of the spheres; and finally applying the Hough transform to determine the radii of the four circles and the coordinates of their centers in the image;

wherein step (3) comprises: setting the number of particles and choosing a first-order adaptive motion model; setting the four landmarks detected in step (2) as tracking targets and building the target model with a weighted color histogram: for each artificial landmark, let y be the center of the target (the projected circle), m the number of histogram quantization levels and {x_i}, i = 1, 2, ..., N the pixel coordinates within the tracking region; the weighted color histogram target model p = {p_u(y)}, u = 1, 2, ..., m, is

    p_u(y) = f \sum_{i=1}^{N} k\left( \left\| \frac{y - x_i}{h} \right\|^2 \right) \delta\left( b(x_i) - u \right),

    f = 1 \Big/ \sum_{i=1}^{N} k\left( \left\| \frac{y - x_i}{h} \right\|^2 \right),

where h is the kernel bandwidth, k(||x||^2) is the profile function of the kernel, the function b assigns a pixel to a histogram bin according to its color value, \delta is the Kronecker delta and u is the histogram quantization level; an initial particle set of N particles is created,

    \{ x_0^i,\ 1/N \}_{i=1}^{N},

where x_0^i holds the center coordinates and radius of the landmark's projected circle for the i-th particle at frame 0 and every particle has an initial weight of 1/N; in subsequent image frames the system state is propagated to spread the particles randomly and the particle weights are computed; the coordinates of each target in the image plane are estimated from the weighted particles and output; and the particles are then resampled to redistribute their positions;

wherein step (4) comprises: given the transformation between the helicopter coordinate system and the camera coordinate system,

    \begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = R \begin{bmatrix} x_h \\ y_h \\ z_h \end{bmatrix} + t,

where (x_c, y_c, z_c) and (x_h, y_h, z_h) denote the coordinates of a landmark center in the camera coordinate system and in the helicopter coordinate system respectively, solving with an iterative least-squares algorithm for the rotation matrix R and the translation vector t of the helicopter coordinate system relative to the origin of the camera coordinate system; the yaw, roll and pitch angles of the helicopter are obtained from the rotation matrix, and its altitude and its distance from the landing point are obtained from the translation vector.
CN201210000593.2A 2012-01-04 2012-01-04 Helicopter landing guide device and method based on computer vision Expired - Fee Related CN102538782B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210000593.2A CN102538782B (en) 2012-01-04 2012-01-04 Helicopter landing guide device and method based on computer vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210000593.2A CN102538782B (en) 2012-01-04 2012-01-04 Helicopter landing guide device and method based on computer vision

Publications (2)

Publication Number Publication Date
CN102538782A CN102538782A (en) 2012-07-04
CN102538782B true CN102538782B (en) 2014-08-27

Family

ID=46346273

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210000593.2A Expired - Fee Related CN102538782B (en) 2012-01-04 2012-01-04 Helicopter landing guide device and method based on computer vision

Country Status (1)

Country Link
CN (1) CN102538782B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103226356A (en) * 2013-02-27 2013-07-31 广东工业大学 Image-processing-based unmanned plane accurate position landing method
CN103413466A (en) * 2013-07-08 2013-11-27 中国航空无线电电子研究所 Airborne visible ground guide and warning device and guide and warning method thereof
CN105676875A (en) * 2015-03-10 2016-06-15 张超 Automatic landing system of unmanned aerial vehicle
CN105182995B (en) * 2015-03-10 2016-09-07 海安索菲亚生态环境新材料科技有限公司 Autonomous Landing of UAV system
CN105182994B (en) * 2015-08-10 2018-02-06 普宙飞行器科技(深圳)有限公司 A kind of method of unmanned plane pinpoint landing
CN105068548B (en) * 2015-08-12 2019-06-28 北京贯中精仪科技有限公司 UAV Landing guides system
CN105979119B (en) * 2016-06-02 2019-07-16 深圳迪乐普数码科技有限公司 A kind of filtering method and terminal of infrared rocker arm pursuit movement data
CN107544550B (en) * 2016-06-24 2021-01-15 西安电子科技大学 Unmanned aerial vehicle automatic landing method based on visual guidance
US10152059B2 (en) * 2016-10-10 2018-12-11 Qualcomm Incorporated Systems and methods for landing a drone on a moving base
CN109521791A (en) * 2018-09-28 2019-03-26 易瓦特科技股份公司 Identification method and device based on earth station
CN113154220A (en) * 2021-03-26 2021-07-23 苏州略润娇贸易有限公司 Easily-built computer for construction site and use method thereof
CN114812513A (en) * 2022-05-10 2022-07-29 北京理工大学 A UAV positioning system and method based on infrared beacon

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101109640A (en) * 2006-07-19 2008-01-23 北京航空航天大学 Vision-based autonomous landing navigation system for unmanned aircraft
CN101000243A (en) * 2007-01-16 2007-07-18 北京航空航天大学 Pilotless plane landing navigation method and its device
CN101833104A (en) * 2010-04-27 2010-09-15 北京航空航天大学 Three-dimensional visual navigation method based on multi-sensor information fusion

Also Published As

Publication number Publication date
CN102538782A (en) 2012-07-04

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140827

Termination date: 20150104

EXPY Termination of patent right or utility model