
CN114879729B - Unmanned aerial vehicle autonomous obstacle avoidance method based on obstacle contour detection algorithm - Google Patents


Info

Publication number
CN114879729B
Authority
CN
China
Prior art keywords
image
obstacle
drone
coordinates
moment
Prior art date
Legal status: Active
Application number
CN202210531544.5A
Other languages
Chinese (zh)
Other versions
CN114879729A (en)
Inventor
符小卫 (Fu Xiaowei)
李环宇 (Li Huanyu)
谢国燕 (Xie Guoyan)
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University
Priority to CN202210531544.5A
Publication of CN114879729A
Application granted
Publication of CN114879729B
Current legal status: Active


Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10: Simultaneous control of position or course in three dimensions
    • G05D1/101: Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G05D1/106: Change initiated in response to external conditions, e.g. avoidance of elevated terrain or of no-fly zones
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T90/00: Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an unmanned aerial vehicle autonomous obstacle avoidance method based on an obstacle contour detection algorithm. First, the acquired image is filtered and converted to a suitable color space. Then, thresholding and morphological processing are performed on the converted image. Edge detection and contour detection are performed on the processed image, the detected result is combined with camera calibration data, and the barycenter coordinates of the obstacle in the world coordinate system are calculated, yielding the position information and contour information of the obstacle. Finally, the obstacle information is passed to the D* obstacle avoidance algorithm for real-time path solving until the autonomous obstacle avoidance function of the unmanned aerial vehicle is completed. The disclosed method offers high real-time performance and computational efficiency, and can be extended to autonomous obstacle avoidance of unmanned aerial vehicles among dynamic obstacles and in real three-dimensional scenes.

Description

An autonomous obstacle avoidance method for a UAV based on an obstacle contour detection algorithm

Technical Field

The invention belongs to the technical field of unmanned aerial vehicles, and in particular relates to an autonomous obstacle avoidance method for unmanned aerial vehicles.

Background Art

With the rapid development of the drone industry, drone safety has received widespread attention; in unknown environments especially, autonomous obstacle avoidance is particularly important. Lacking prior information about an unknown environment, a drone must sense and avoid obstacles, and obstacle detection is a key part of this task. Current obstacle detection methods mainly include ultrasonic-based, infrared-based, laser-based, and machine-vision-based methods. A machine-vision-based method uses a camera to acquire images and processes them with image processing algorithms to obtain the contour, position, depth, and other information of obstacles. Unlike the first three methods, machine-vision-based detection yields much richer information.

Machine-vision-based obstacle detection algorithms also differ significantly in their applicability. The inter-frame difference method applies only to dynamic obstacles and cannot meet real-time detection requirements; optical flow estimation requires the detection region to be predicted in advance and cannot recover the complete obstacle; traditional obstacle contour detection algorithms suit detection of characteristic obstacles but are strongly affected by noise and adapt poorly. Designing an obstacle detection algorithm that adapts well, is computationally light, meets real-time requirements, and supports the autonomous obstacle avoidance function of a drone is therefore a technical problem that researchers in this field need to solve.

Summary of the Invention

To overcome the shortcomings of the prior art, the present invention provides an autonomous obstacle avoidance method for a drone based on an obstacle contour detection algorithm. First, the acquired image is filtered and converted to another color space. Then, the converted image undergoes thresholding and morphological processing. Edge detection and contour detection are performed on the processed image, the detection results are combined with camera calibration data, and the center-of-mass coordinates of the obstacle in the world coordinate system are calculated, thereby obtaining the position information and contour information of the obstacle. Finally, the obstacle information is passed to the D* obstacle avoidance algorithm for real-time path solving until the autonomous obstacle avoidance function of the drone is achieved. The method has high real-time performance and computational efficiency, and can be extended to autonomous obstacle avoidance among dynamic obstacles and in real three-dimensional scenes.

The technical solution adopted by the present invention to solve its technical problem includes the following steps:

Step 1: Build the drone model;

Step 1-1: The drone model has an "X"-type layout, i.e. the angle between the forward direction and each adjacent arm is 45°. Assume the drone model is a rigid body and that the drone's actuators generate a force F and a torque τ. Let the force and torque of the drone at the i-th moment be F_i and τ_i, computed from the rotor-based thrust coefficient C_T and power coefficient C_pow, the air density ρ, the rotor diameter D, the maximum rotational angular velocity ω_max, and the motor speed u_i at the i-th moment;

Step 1-2: Calculate the next motion state of the drone;

Let the drone's velocity at the (m-1)-th moment be v_{m-1}, its position p_{m-1}, its acceleration a_{m-1}, and let the time step be dt. The position p_m and velocity v_m at the m-th moment are calculated as:

p_m = p_{m-1} + v_{m-1}·dt + (1/2)·a_{m-1}·dt²  (3)

v_m = v_{m-1} + a_{m-1}·dt  (4)

where p_m is the position of the drone at the m-th moment and v_m is its velocity at the m-th moment;

Step 2: In the world coordinate system, determine the starting point and target point of the drone;

Step 3: Determine the drone path search algorithm;

Select the D* path search algorithm and set its heuristic function to:

f(s) = h(s) + g(s)  (5)

where h(s) is the cost from the current node to the target point and g(s) is the cost from the current node to the starting point;

Assume the current node has position coordinates (x_s, y_s), the starting point has coordinates (x_start, y_start), and the target point has coordinates (x_goal, y_goal). Then h(s) and g(s) are expressed as:

h(s) = sqrt((x_s - x_goal)² + (y_s - y_goal)²)  (6)

g(s) = sqrt((x_s - x_start)² + (y_s - y_start)²)  (7)

A path from the current node to the target node is generated by the D* path search algorithm;

Step 4: Establish an obstacle contour detection algorithm based on color information;

Step 4-1: Acquire an environment image with the drone's onboard camera;

Step 4-2: Smooth the acquired environment image with a combination of Gaussian filtering and median filtering. The Gaussian filter is computed as:

G(Δx, Δy) = (1/(2πσ²))·exp(-(Δx² + Δy²)/(2σ²))  (8)

where G(·) is the two-dimensional Gaussian function, (Δx² + Δy²) is the squared distance between a neighborhood pixel and the center pixel, σ is the standard deviation of the two-dimensional normal distribution, and (Δx, Δy) indexes the neighborhood;

Median filtering is then applied to the smoothed image for further filtering;

Step 4-3: Convert the environment image from RGB space to HSV color space. With the R, G, B components normalized, max = max(R, G, B), and min = min(R, G, B), the standard conversion is:

H = 60·(G - B)/(max - min) if max = R; H = 120 + 60·(B - R)/(max - min) if max = G; H = 240 + 60·(R - G)/(max - min) if max = B  (9)

S = (max - min)/max, with S = 0 when max = 0  (10)

V = max(R, G, B)  (11)

where R, G, B are the values of the three color components in RGB space and H, S, V are the hue, saturation, and value (brightness) in HSV space;

Step 4-4: Binarize the image, using Otsu's algorithm for thresholding; the calculation proceeds as follows:

Step 4-4-1: Compute the zero-order cumulative moment of the gray-level histogram:

zeroCuMo(q) = Σ_{k=0}^{q} histogram_I(k)  (12)

where histogram_I is the normalized gray-level histogram of the image and histogram_I(k) is the fraction of pixels in the image whose gray value equals k;

Step 4-4-2: Compute the first-order cumulative moment of the gray-level histogram:

oneCuMo(q) = Σ_{k=0}^{q} k·histogram_I(k)  (13)

Step 4-4-3: Compute the overall mean gray value of the image:

mean = oneCuMo(255)  (14)

Step 4-4-4: Divide the image into foreground and background according to gray-level characteristics and compute the threshold q that maximizes the between-class variance of foreground and background, measured by:

σ²(q) = (mean·zeroCuMo(q) - oneCuMo(q))² / (zeroCuMo(q)·(1 - zeroCuMo(q)))  (15)

Step 4-5: Apply morphological processing to the image, dilation followed by erosion;

Step 4-6: Perform edge detection with the Canny operator, followed by contour detection:

Step 4-6-1: Smooth the noise in non-edge regions of the image;

Step 4-6-2: Compute the magnitude and direction of the image gradient with the Sobel operator;

Step 4-6-3: Traverse the pixels one by one and test whether the current pixel is a local maximum of gradient magnitude along the gradient direction; if it is, keep it, otherwise set it to zero;

Step 4-6-4: Apply double-threshold processing to obtain the edge points;

Step 4-6-5: Fit the edge detection result to the foreground information of the image to approximate the image contour;

Step 4-7: Determine the intrinsic parameter matrix M1 and extrinsic parameter matrix M2 of the onboard camera with Zhang Zhengyou's camera calibration method;

Step 4-8: Solve for the center-of-mass coordinates of the obstacle in the world coordinate system;

Step 4-8-1: The (i+j)-order image moment of the environment image is:

M_ij = Σ_x Σ_y x^i·y^j·I(x, y)  (16)

where x and y are the pixel's horizontal and vertical coordinates and I(x, y) is the intensity of the pixel at (x, y);

Step 4-8-2: Compute the centroid coordinates in the pixel coordinate system from the zero-order image moment M00 and the first-order image moments (M01, M10):

x_c = M10/M00,  y_c = M01/M00  (17)

Step 4-8-3: Perform the coordinate transformation that converts the centroid coordinates into the world coordinate system:

Z_C·[u, v, 1]^T = M1·M2·[X_W, Y_W, Z_W, 1]^T, where M1 = [[f_x, 0, u_0], [0, f_y, v_0], [0, 0, 1]] and M2 = [R | t]  (18)

where u and v are coordinates in the pixel coordinate system, (X_C, Y_C, Z_C) are coordinates in the camera coordinate system, f_x and f_y are the focal length expressed in pixel units along the x and y axes, u_0 and v_0 are the pixel offsets of the image center from the image origin in the x and y directions, and f is the camera focal length; R is the 3×3 rotation matrix that rotates the coordinate axes in the conversion from the pixel coordinate system to the world coordinate system, t is the translation vector, and (X_W, Y_W, Z_W) are coordinates in the world coordinate system;

Step 5: The drone moves from the starting point along the initial path generated in step 3 while the onboard camera performs real-time detection with the obstacle contour detection algorithm of step 4. If an unknown obstacle appears on the path, its position and contour information are used to judge whether it affects the drone's flight; if it does, the path search algorithm of step 3 generates a new autonomous obstacle avoidance path from the current point to the target point;

This process repeats until the drone reaches the target point, completing the autonomous obstacle avoidance process.

The beneficial effects of the present invention are as follows:

Without requiring the environment to be fully known, the drone can use the obstacle detection algorithm to obtain in real time the position, contour, and related information of unknown obstacles; combined with the D* path search algorithm, this achieves autonomous obstacle avoidance in unknown environments. The method has high real-time performance and computational efficiency, and can be extended to autonomous obstacle avoidance among dynamic obstacles and in real three-dimensional scenes.

Brief Description of the Drawings

FIG. 1 is the overall flow chart of the autonomous obstacle avoidance method based on the obstacle contour detection algorithm of the present invention.

FIG. 2 shows the drone simulation model used in AirSim in an embodiment of the present invention.

FIG. 3 is a top view of the simulation environment built in AirSim in an embodiment of the present invention.

FIG. 4 shows the result of the obstacle contour detection algorithm at one moment in an embodiment of the present invention.

FIG. 5 shows the path produced by the autonomous obstacle avoidance of the drone in an embodiment of the present invention.

Detailed Description of Embodiments

The present invention is further described below with reference to the accompanying drawings and embodiments.

The present invention provides an autonomous obstacle avoidance method for a drone based on an improved obstacle contour detection algorithm. First, the acquired image is filtered and converted to another color space. Then, the converted image undergoes thresholding and morphological processing. Edge detection and contour detection are performed on the processed image, the detection results are combined with camera calibration data, and the center-of-mass coordinates of the obstacle in the world coordinate system are calculated, thereby obtaining the position information and contour information of the obstacle. Finally, the obstacle information is passed to the D* obstacle avoidance algorithm for real-time path solving until the autonomous obstacle avoidance function of the drone is achieved.

The simulation environment is the Windows 10 operating system with the AirSim simulation platform.

The present invention considers a three-dimensional map model with a planar coordinate system. Assume one drone equipped with an onboard vision camera, as shown in FIG. 2. The simulation map measures 100 m × 100 m; gray obstacles represent obstacles with known position and contour, while orange and white obstacles represent unknown obstacles, as shown in FIG. 3. Assume the drone's position coordinates are (70, 20) and the target point's coordinates are (70, 80).
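
The patent does not list code for the embodiment; as a minimal sketch, connecting to AirSim and grabbing one camera frame with the public AirSim Python API could look like this (the camera name "0" and the image settings are assumptions, not values from the patent):

```python
import airsim

# Connect to the AirSim simulator running in Unreal Engine
client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)

# Take off and hover before starting the avoidance loop
client.takeoffAsync().join()

# Grab one RGB frame from the onboard camera (camera "0" assumed)
responses = client.simGetImages([
    airsim.ImageRequest("0", airsim.ImageType.Scene, False, False)
])
frame = responses[0]  # raw bytes are in frame.image_data_uint8
```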

As shown in FIG. 1, the specific steps of the present invention in the AirSim environment are as follows:

Step 1: Build the drone model;

Step 1-1: The drone model has an "X"-type layout, i.e. the angle between the forward direction and each adjacent arm is 45°. Assume the drone model is a rigid body and that a force F and a torque τ can be produced by any number of actuators. Let the force and torque of the drone at the i-th moment be F_i and τ_i, computed from the rotor-based thrust coefficient C_T and power coefficient C_pow, the air density ρ, the rotor diameter D, the maximum rotational angular velocity ω_max, and the motor speed u_i at the i-th moment;

Step 1-2: Calculate the next motion state of the drone;

Let the drone's velocity at the (m-1)-th moment be v_{m-1}, its position p_{m-1}, its acceleration a_{m-1}, and let the time step be dt. The position p_m and velocity v_m at the m-th moment are calculated as:

p_m = p_{m-1} + v_{m-1}·dt + (1/2)·a_{m-1}·dt²  (3)

v_m = v_{m-1} + a_{m-1}·dt  (4)

where p_m is the position of the drone at the m-th moment and v_m is its velocity at the m-th moment;

Step 2: In the world coordinate system, determine the starting point and target point of the drone;

Step 3: Determine the drone path search algorithm;

Select the D* path search algorithm and set its heuristic function to:

f(s) = h(s) + g(s)  (5)

where h(s) is the cost from the current node to the target point and g(s) is the cost from the current node to the starting point;

Assume the current node has position coordinates (x_s, y_s), the starting point has coordinates (x_start, y_start), and the target point has coordinates (x_goal, y_goal). Then h(s) and g(s) are expressed as:

h(s) = sqrt((x_s - x_goal)² + (y_s - y_goal)²)  (6)

g(s) = sqrt((x_s - x_start)² + (y_s - y_start)²)  (7)

A path from the current node to the target node is generated by the D* path search algorithm;
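
Both cost terms are plain Euclidean distances, so a sketch of the evaluation function is short (the function names are illustrative, not from the patent):

```python
import math

def h_cost(node, goal):
    # Euclidean cost from the current node to the target point, formula (6)
    return math.hypot(node[0] - goal[0], node[1] - goal[1])

def g_cost(node, start):
    # Euclidean cost from the current node to the starting point, formula (7)
    return math.hypot(node[0] - start[0], node[1] - start[1])

def f_cost(node, start, goal):
    # Evaluation function f(s) = h(s) + g(s), formula (5)
    return h_cost(node, goal) + g_cost(node, start)

print(f_cost((70, 40), start=(70, 20), goal=(70, 80)))  # 60.0 on the straight line
```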

Step 4: Establish an obstacle contour detection algorithm based on color information;

Step 4-1: Acquire an environment image with the drone's onboard camera;

Step 4-2: Smooth the acquired environment image with a combination of Gaussian filtering and median filtering. The Gaussian filter is computed as:

G(Δx, Δy) = (1/(2πσ²))·exp(-(Δx² + Δy²)/(2σ²))  (8)

where G(·) is the two-dimensional Gaussian function, (Δx² + Δy²) is the squared distance between a neighborhood pixel and the center pixel, σ is the standard deviation of the two-dimensional normal distribution, and (Δx, Δy) indexes the neighborhood;

Median filtering is then applied to the smoothed image for further filtering; the aim is to remove noise while preserving the image's contour information as much as possible;
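
A sketch of this two-stage smoothing with OpenCV; the kernel sizes here are assumptions rather than values given in the patent:

```python
import cv2

img = cv2.imread("frame.png")             # environment image from the onboard camera
blur = cv2.GaussianBlur(img, (5, 5), 0)   # Gaussian smoothing; sigma derived from the kernel
blur = cv2.medianBlur(blur, 5)            # median pass removes speckle while keeping contours
```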

Step 4-3: Convert the environment image from RGB space to HSV color space. With the R, G, B components normalized, max = max(R, G, B), and min = min(R, G, B), the standard conversion is:

H = 60·(G - B)/(max - min) if max = R; H = 120 + 60·(B - R)/(max - min) if max = G; H = 240 + 60·(R - G)/(max - min) if max = B  (9)

S = (max - min)/max, with S = 0 when max = 0  (10)

V = max(R, G, B)  (11)

where R, G, B are the values of the three color components in RGB space and H, S, V are the hue, saturation, and value (brightness) in HSV space;
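
Continuing the sketch: since the detection is color-based, the HSV conversion is naturally followed by a color mask for the obstacle hue (the bounds below are illustrative placeholders; the patent does not list them):

```python
import cv2
import numpy as np

hsv = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)   # OpenCV camera frames are BGR, hence BGR2HSV
lower = np.array([10, 100, 100])              # placeholder lower bound for an orange obstacle
upper = np.array([25, 255, 255])              # placeholder upper bound
mask = cv2.inRange(hsv, lower, upper)         # 255 where the pixel falls in the color range
```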

Step 4-4: Binarize the image, using Otsu's algorithm for thresholding; the calculation proceeds as follows:

Step 4-4-1: Compute the zero-order cumulative moment of the gray-level histogram:

zeroCuMo(q) = Σ_{k=0}^{q} histogram_I(k)  (12)

where histogram_I is the normalized gray-level histogram of the image and histogram_I(k) is the fraction of pixels in the image whose gray value equals k;

Step 4-4-2: Compute the first-order cumulative moment of the gray-level histogram:

oneCuMo(q) = Σ_{k=0}^{q} k·histogram_I(k)  (13)

Step 4-4-3: Compute the overall mean gray value of the image:

mean = oneCuMo(255)  (14)

Step 4-4-4: Divide the image into foreground and background according to gray-level characteristics and compute the threshold q that maximizes the between-class variance of foreground and background, measured by:

σ²(q) = (mean·zeroCuMo(q) - oneCuMo(q))² / (zeroCuMo(q)·(1 - zeroCuMo(q)))  (15)
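
A sketch of this moment-based threshold search, mirroring formulas (12) through (15); the zeroCuMo/oneCuMo names follow the patent's notation, and a uint8 grayscale image plus NumPy are assumed:

```python
import numpy as np

def otsu_threshold(gray):
    # Normalized gray-level histogram, the formula's histogram_I
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    hist /= hist.sum()
    zero_cumo = np.cumsum(hist)                   # zero-order cumulative moments, (12)
    one_cumo = np.cumsum(np.arange(256) * hist)   # first-order cumulative moments, (13)
    mean = one_cumo[-1]                           # overall mean gray value, (14)
    # Between-class variance for every candidate threshold q, formula (15)
    denom = zero_cumo * (1.0 - zero_cumo)
    denom[denom == 0] = np.nan                    # avoid division by zero at the extremes
    sigma2 = (mean * zero_cumo - one_cumo) ** 2 / denom
    return int(np.nanargmax(sigma2))              # the q maximizing the variance
```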

Step 4-5: Apply morphological processing to the image, dilation followed by erosion;

Step 4-6: Perform edge detection with the Canny operator, followed by contour detection:

Step 4-6-1: Smooth the noise in non-edge regions of the image;

Step 4-6-2: Compute the magnitude and direction of the image gradient with the Sobel operator;

Step 4-6-3: Traverse the pixels one by one and test whether the current pixel is a local maximum of gradient magnitude along the gradient direction; if it is, keep it, otherwise set it to zero;

Step 4-6-4: Apply double-threshold processing to obtain the edge points;

Step 4-6-5: Fit the edge detection result to the foreground information of the image to approximate the image contour;
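
Steps 4-5 and 4-6 map onto standard OpenCV primitives; a sketch, with the kernel size and Canny thresholds as assumptions (OpenCV's Canny performs the Sobel gradients, non-maximum suppression, and double thresholding of steps 4-6-2 to 4-6-4 internally):

```python
import cv2
import numpy as np

kernel = np.ones((5, 5), np.uint8)                    # structuring element (size assumed)
closed = cv2.erode(cv2.dilate(mask, kernel), kernel)  # dilation then erosion, i.e. closing

edges = cv2.Canny(closed, 50, 150)                    # double thresholds (values assumed)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
if contours:
    largest = max(contours, key=cv2.contourArea)      # dominant obstacle contour
```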

Step 4-7: Determine the intrinsic parameter matrix M1 and extrinsic parameter matrix M2 of the onboard camera with Zhang Zhengyou's camera calibration method;

Step 4-8: Solve for the center-of-mass coordinates of the obstacle in the world coordinate system;

Step 4-8-1: The (i+j)-order image moment of the environment image is:

M_ij = Σ_x Σ_y x^i·y^j·I(x, y)  (16)

where x and y are the pixel's horizontal and vertical coordinates and I(x, y) is the intensity of the pixel at (x, y);

Step 4-8-2: Compute the centroid coordinates in the pixel coordinate system from the zero-order image moment M00 and the first-order image moments (M01, M10):

x_c = M10/M00,  y_c = M01/M00  (17)
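
OpenCV computes the image moments of formula (16) directly, from which the pixel-space centroid of formula (17) follows; continuing the sketch with the contour found above:

```python
import cv2

m = cv2.moments(largest)           # spatial moments of the contour: m00, m10, m01, ...
if m["m00"] > 0:
    cx = m["m10"] / m["m00"]       # x_c = M10 / M00, formula (17)
    cy = m["m01"] / m["m00"]       # y_c = M01 / M00
```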

Step 4-8-3: Perform the coordinate transformation that converts the centroid coordinates into the world coordinate system:

Z_C·[u, v, 1]^T = M1·M2·[X_W, Y_W, Z_W, 1]^T, where M1 = [[f_x, 0, u_0], [0, f_y, v_0], [0, 0, 1]] and M2 = [R | t]  (18)

where u and v are coordinates in the pixel coordinate system, (X_C, Y_C, Z_C) are coordinates in the camera coordinate system, f_x and f_y are the focal length expressed in pixel units along the x and y axes, u_0 and v_0 are the pixel offsets of the image center from the image origin in the x and y directions, and f is the camera focal length; R is the 3×3 rotation matrix that rotates the coordinate axes in the conversion from the pixel coordinate system to the world coordinate system, t is the translation vector, and (X_W, Y_W, Z_W) are coordinates in the world coordinate system; the matrix M1 is the camera's intrinsic parameter matrix and M2 is its extrinsic parameter matrix, both measured through camera calibration;
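
A sketch of the pixel-to-world conversion behind formula (18). It assumes the point's camera-frame depth Z_C is available (for example, inferred from the known flight altitude), which the patent does not spell out, and the intrinsics below are placeholders:

```python
import numpy as np

def pixel_to_world(u, v, z_c, K, R, t):
    # Back-project pixel (u, v) at depth z_c into the camera frame: (X_C, Y_C, Z_C)
    p_cam = z_c * np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Invert the extrinsics [R | t] to obtain world coordinates (X_W, Y_W, Z_W)
    return R.T @ (p_cam - t)

K = np.array([[320.0,   0.0, 320.0],   # placeholder intrinsic matrix M1
              [  0.0, 320.0, 240.0],
              [  0.0,   0.0,   1.0]])
p_world = pixel_to_world(400, 260, z_c=8.0, K=K, R=np.eye(3), t=np.zeros(3))
```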

The drone can thereby obtain the position information and contour information of unknown obstacles;

Step 5: The drone moves from the starting point along the initial path generated in step 3 while the onboard camera performs real-time detection with the obstacle contour detection algorithm of step 4. If an unknown obstacle appears on the path, its position and contour information are used to judge whether it affects the drone's flight; if it does, the path search algorithm of step 3 generates a new autonomous obstacle avoidance path from the current point to the target point;

This process repeats until the drone reaches the target point, completing the autonomous obstacle avoidance process.
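
Putting the pieces together, the detect-replan loop of step 5 can be sketched as follows; every name here is an illustrative stand-in for the routines described above, injected as callables, not an API defined by the patent:

```python
def avoid_obstacles(drone, d_star_search, detect_obstacle_contours, blocks, goal):
    """Detect-replan loop of step 5; all collaborators are injected callables."""
    path = d_star_search(drone.position(), goal)       # initial path, step 3
    while not drone.at(goal):
        frame = drone.capture_image()                  # step 4-1
        obstacles = detect_obstacle_contours(frame)    # steps 4-2 through 4-8
        if any(blocks(path, ob) for ob in obstacles):  # does the obstacle affect flight?
            path = d_star_search(drone.position(), goal)   # replan from the current point
        drone.follow_one_step(path)
```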

In summary, the present invention uses the obstacle contour detection algorithm to determine the contour information and position information of unknown obstacles. FIG. 4 shows the contour result at one moment during obstacle detection; the algorithm quickly and accurately supplies the unknown-obstacle information required by the drone's autonomous obstacle avoidance algorithm. When the drone encounters an unknown obstacle while searching and detecting simultaneously, it can still generate a path from the current node to the target node; FIG. 5 shows a feasible path generated by the autonomous obstacle avoidance algorithm, verifying the real-time performance and feasibility of the algorithm. For autonomous obstacle avoidance of drones, the method is comparatively simple, with good real-time performance and robustness, and realizes autonomous obstacle avoidance of the drone.

Claims (1)

1. An autonomous obstacle avoidance method for a drone based on an obstacle contour detection algorithm, characterized by comprising the following steps:

Step 1: Build the drone model;

Step 1-1: The drone model has an "X"-type layout, i.e. the angle between the forward direction and each adjacent arm is 45°; assume the drone model is a rigid body and that the drone's actuators generate a force F and a torque τ; let the force and torque of the drone at the i-th moment be F_i and τ_i, computed from the rotor-based thrust coefficient C_T and power coefficient C_pow, the air density ρ, the rotor diameter D, the maximum rotational angular velocity ω_max, and the motor speed u_i at the i-th moment;

Step 1-2: Calculate the next motion state of the drone;

Let the drone's velocity at the (m-1)-th moment be v_{m-1}, its position p_{m-1}, its acceleration a_{m-1}, and let the time step be dt; the position p_m and velocity v_m at the m-th moment are calculated as:

p_m = p_{m-1} + v_{m-1}·dt + (1/2)·a_{m-1}·dt²  (3)

v_m = v_{m-1} + a_{m-1}·dt  (4)

where p_m is the position of the drone at the m-th moment and v_m is its velocity at the m-th moment;

Step 2: In the world coordinate system, determine the starting point and target point of the drone;

Step 3: Determine the drone path search algorithm;

Select the D* path search algorithm and set its heuristic function to:

f(s) = h(s) + g(s)  (5)

where h(s) is the cost from the current node to the target point and g(s) is the cost from the current node to the starting point;

Assume the current node has position coordinates (x_s, y_s), the starting point has coordinates (x_start, y_start), and the target point has coordinates (x_goal, y_goal); then h(s) and g(s) are expressed as:

h(s) = sqrt((x_s - x_goal)² + (y_s - y_goal)²)  (6)

g(s) = sqrt((x_s - x_start)² + (y_s - y_start)²)  (7)

A path from the current node to the target node is generated by the D* path search algorithm;

Step 4: Establish an obstacle contour detection algorithm based on color information;

Step 4-1: Acquire an environment image with the drone's onboard camera;

Step 4-2: Smooth the acquired environment image with a combination of Gaussian filtering and median filtering, the Gaussian filter being computed as:

G(Δx, Δy) = (1/(2πσ²))·exp(-(Δx² + Δy²)/(2σ²))  (8)

where G(·) is the two-dimensional Gaussian function, (Δx² + Δy²) is the squared distance between a neighborhood pixel and the center pixel, σ is the standard deviation of the two-dimensional normal distribution, and (Δx, Δy) indexes the neighborhood;

Median filtering is then applied to the smoothed image for further filtering;

Step 4-3: Convert the environment image from RGB space to HSV color space, where

V = max(R, G, B)  (11)

R, G, B are the values of the three color components in RGB space, and H, S, V are the hue, saturation, and value (brightness) in HSV space;

Step 4-4: Binarize the image, using Otsu's algorithm for thresholding; the calculation proceeds as follows:

Step 4-4-1: Compute the zero-order cumulative moment of the gray-level histogram:

zeroCuMo(q) = Σ_{k=0}^{q} histogram_I(k)  (12)

where histogram_I is the normalized gray-level histogram of the image and histogram_I(k) is the fraction of pixels in the image whose gray value equals k;

Step 4-4-2: Compute the first-order cumulative moment of the gray-level histogram:

oneCuMo(q) = Σ_{k=0}^{q} k·histogram_I(k)  (13)

Step 4-4-3: Compute the overall mean gray value of the image:

mean = oneCuMo(255)  (14)

Step 4-4-4: Divide the image into foreground and background according to gray-level characteristics and compute the threshold q that maximizes the between-class variance of foreground and background, measured by:

σ²(q) = (mean·zeroCuMo(q) - oneCuMo(q))² / (zeroCuMo(q)·(1 - zeroCuMo(q)))  (15)

Step 4-5: Apply morphological processing to the image, dilation followed by erosion;

Step 4-6: Perform edge detection with the Canny operator, followed by contour detection:

Step 4-6-1: Smooth the noise in non-edge regions of the image;

Step 4-6-2: Compute the magnitude and direction of the image gradient with the Sobel operator;

Step 4-6-3: Traverse the pixels one by one and test whether the current pixel is a local maximum of gradient magnitude along the gradient direction; if it is, keep it, otherwise set it to zero;

Step 4-6-4: Apply double-threshold processing to obtain the edge points;

Step 4-6-5: Fit the edge detection result to the foreground information of the image to approximate the image contour;

Step 4-7: Determine the intrinsic parameter matrix M1 and extrinsic parameter matrix M2 of the onboard camera with Zhang Zhengyou's camera calibration method;

Step 4-8: Solve for the center-of-mass coordinates of the obstacle in the world coordinate system;

Step 4-8-1: The (i+j)-order image moment of the environment image is:

M_ij = Σ_x Σ_y x^i·y^j·I(x, y)  (16)

where x and y are the pixel's horizontal and vertical coordinates and I(x, y) is the intensity of the pixel at (x, y);

Step 4-8-2: Compute the centroid coordinates in the pixel coordinate system from the zero-order image moment M00 and the first-order image moments (M01, M10):

x_c = M10/M00,  y_c = M01/M00  (17)

Step 4-8-3: Perform the coordinate transformation that converts the centroid coordinates into the world coordinate system:

Z_C·[u, v, 1]^T = M1·M2·[X_W, Y_W, Z_W, 1]^T, where M1 = [[f_x, 0, u_0], [0, f_y, v_0], [0, 0, 1]] and M2 = [R | t]  (18)

where u and v are coordinates in the pixel coordinate system, (X_C, Y_C, Z_C) are coordinates in the camera coordinate system, f_x and f_y are the focal length expressed in pixel units along the x and y axes, u_0 and v_0 are the pixel offsets of the image center from the image origin in the x and y directions, and f is the camera focal length; R is the 3×3 rotation matrix that rotates the coordinate axes in the conversion from the pixel coordinate system to the world coordinate system, t is the translation vector, and (X_W, Y_W, Z_W) are coordinates in the world coordinate system;

Step 5: The drone moves from the starting point along the initial path generated in step 3 while the onboard camera performs real-time detection with the obstacle contour detection algorithm of step 4; if an unknown obstacle appears on the path, its position and contour information are used to judge whether it affects the drone's flight, and if so, the path search algorithm of step 3 generates a new autonomous obstacle avoidance path from the current point to the target point;

This process repeats until the drone reaches the target point, completing the autonomous obstacle avoidance process.
CN202210531544.5A 2022-05-16 2022-05-16 Unmanned aerial vehicle autonomous obstacle avoidance method based on obstacle contour detection algorithm Active CN114879729B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210531544.5A CN114879729B (en) 2022-05-16 2022-05-16 Unmanned aerial vehicle autonomous obstacle avoidance method based on obstacle contour detection algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210531544.5A CN114879729B (en) 2022-05-16 2022-05-16 Unmanned aerial vehicle autonomous obstacle avoidance method based on obstacle contour detection algorithm

Publications (2)

Publication Number Publication Date
CN114879729A CN114879729A (en) 2022-08-09
CN114879729B (en) 2024-06-18

Family

ID=82674949

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210531544.5A Active CN114879729B (en) 2022-05-16 2022-05-16 Unmanned aerial vehicle autonomous obstacle avoidance method based on obstacle contour detection algorithm

Country Status (1)

Country Link
CN (1) CN114879729B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106681353A (en) * 2016-11-29 2017-05-17 南京航空航天大学 Unmanned aerial vehicle (UAV) obstacle avoidance method and system based on binocular vision and optical flow fusion

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106708084B (en) * 2016-11-24 2019-08-02 中国科学院自动化研究所 The automatic detection of obstacles of unmanned plane and barrier-avoiding method under complex environment
CN107329490B (en) * 2017-07-21 2020-10-09 歌尔科技有限公司 Unmanned aerial vehicle obstacle avoidance method and unmanned aerial vehicle
CN110032211A (en) * 2019-04-24 2019-07-19 西南交通大学 Multi-rotor unmanned aerial vehicle automatic obstacle-avoiding method

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106681353A (en) * 2016-11-29 2017-05-17 南京航空航天大学 Unmanned aerial vehicle (UAV) obstacle avoidance method and system based on binocular vision and optical flow fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on the visual positioning and obstacle avoidance subsystem of UAVs (无人机视觉定位与避障子系统研究); Lin Tao (林涛); Mechanical Engineer (机械工程师); 2020-03-10 (No. 03); full text *

Also Published As

Publication number Publication date
CN114879729A (en) 2022-08-09

Similar Documents

Publication Publication Date Title
CN109784333B (en) Three-dimensional target detection method and system based on point cloud weighted channel characteristics
CN110569838B (en) An autonomous landing method of quadrotor UAV based on visual positioning
CN106708084B (en) The automatic detection of obstacles of unmanned plane and barrier-avoiding method under complex environment
WO2020135446A1 (en) Target positioning method and device and unmanned aerial vehicle
US10719727B2 (en) Method and system for determining at least one property related to at least part of a real environment
CN112825192B (en) Object identification system and method based on machine learning
CN102313536B (en) Obstacle Perception Method Based on Airborne Binocular Vision
CN113359782B (en) A method for autonomous location and landing of unmanned aerial vehicles integrating LIDAR point cloud and image data
JP5822322B2 (en) Network capture and 3D display of localized and segmented images
CN111326023A (en) Unmanned aerial vehicle route early warning method, device, equipment and storage medium
US20210103299A1 (en) Obstacle avoidance method and device and movable platform
CN105550692B (en) The homing vector landing concept of unmanned plane based on marker color and contour detecting
Tang et al. Camera self-calibration from tracking of moving persons
JP6817742B2 (en) Information processing device and its control method
CN112733678A (en) Ranging method, ranging device, computer equipment and storage medium
CN113971801B (en) A multi-dimensional target detection method based on four types of multimodal data fusion
CN112683228A (en) Monocular camera ranging method and device
US20230376106A1 (en) Depth information based pose determination for mobile platforms, and associated systems and methods
Rosero et al. Calibration and multi-sensor fusion for on-road obstacle detection
CN117968640A (en) Target pose estimation method based on monocular vision and inertial navigation fusion
Byrne et al. Expansion segmentation for visual collision detection and estimation
CN111260709A (en) A ground-aided visual odometry method for dynamic environments
CN114879729B (en) Unmanned aerial vehicle autonomous obstacle avoidance method based on obstacle contour detection algorithm
Wang et al. Research on UAV obstacle detection based on data fusion of millimeter wave radar and monocular camera
Lin et al. Robust ground plane region detection using multiple visual cues for obstacle avoidance of a mobile robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant