
CN106681353A - Unmanned aerial vehicle (UAV) obstacle avoidance method and system based on binocular vision and optical flow fusion - Google Patents


Info

Publication number
CN106681353A
Authority
CN
China
Prior art keywords
obstacle
depth
uav
information
optical flow
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611069481.7A
Other languages
Chinese (zh)
Other versions
CN106681353B (en)
Inventor
张天翼
杨忠
胡国雄
韩家明
张翔
沈杨杨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CN201611069481.7A
Publication of CN106681353A
Application granted
Publication of CN106681353B
Expired - Fee Related
Anticipated expiration

Links

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10 Simultaneous control of position or course in three dimensions
    • G05D1/101 Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/0094 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots, involving pointing a payload, e.g. camera, weapon, sensor, towards a fixed or moving target
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a UAV obstacle avoidance method and system based on the fusion of binocular vision and optical flow. The method acquires image information in real time from an airborne binocular camera; obtains image depth information with a graphics processor (GPU); uses the obtained depth information to extract the geometric contour of the most threatening obstacle and computes its threat distance with a threat depth model; fits a rectangle to the obstacle's geometric contour to obtain an obstacle tracking window and computes the optical flow field of the obstacle region to obtain the obstacle's velocity relative to the UAV; and the flight control computer issues evasive flight commands to avoid the obstacle based on the computed obstacle distance, geometric contour and relative velocity information. By effectively fusing the obstacle's depth information with the optical flow vector, the invention obtains the obstacle's motion relative to the UAV in real time and improves the UAV's capability for fast visual obstacle avoidance, with better real-time performance and accuracy than traditional algorithms.

Description

UAV obstacle avoidance method and system based on binocular vision and optical flow fusion

Technical Field

The invention relates to an obstacle avoidance method for unmanned aerial vehicles (UAVs), in particular to a UAV obstacle avoidance method and system based on the fusion of binocular vision and optical flow, and belongs to the technical field of UAV obstacle avoidance.

Background Art

With the development of UAV technology and its application market, UAVs increasingly face special tasks that differ from those of the past, and these tasks place higher demands on their ability to quickly identify and avoid obstacles along the flight path. Vision-based obstacle avoidance systems, which usually operate passively, are characterized by simple equipment, low cost, good economy and a wide range of applications.

Compared with obstacle avoidance systems based on active sensors such as ultrasonic or lidar, visual obstacle avoidance systems respond faster, achieve higher precision and can provide richer information such as color, texture and geometric shape, and have therefore attracted increasing attention.

Compared with monocular vision, binocular vision can obtain distance information along the direction perpendicular to the image plane, can determine the relative position of an obstacle and the UAV more effectively, and also helps to segment obstacles quickly and accurately from a complex background. Binocular vision has already been widely used in fields such as robot navigation and target tracking.

The optical flow method predicts the motion of pixels by exploiting the correlation of pixel intensities across an image sequence, that is, it studies the temporal variation of image brightness to establish the motion field of a set of target pixels. In general, optical flow can measure camera motion, the motion of objects in the scene, or their combined motion effectively and precisely, and the predicted quantity represents the instantaneous velocity of the target's motion.

Optical flow fields are usually computed by either sparse or dense optical flow methods. Sparse optical flow selects a number of feature points in the image and fits the velocity of the whole motion field by measuring the velocities of these feature points. Dense optical flow computes the motion field over an entire region and thereby obtains the velocity of the target region relative to the camera. Sparse optical flow is fast, but its estimates carry larger errors; dense optical flow is more accurate, but without a precise segmentation of the target region its computation time grows greatly, so it is usually combined with a fast and accurate image segmentation algorithm.
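To make the sparse/dense distinction concrete, here is a minimal sketch (not part of the patent) of both approaches using standard OpenCV calls; the inputs prev_gray and next_gray are assumed to be consecutive grayscale frames.

```python
import cv2
import numpy as np

def sparse_flow(prev_gray, next_gray, max_corners=200):
    """Lucas-Kanade flow on a handful of corner features: fast, but sparse."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.empty((0, 1, 2), np.float32)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    ok = status.flatten() == 1
    return nxt[ok] - pts[ok]            # per-feature displacement, in pixels

def dense_flow(prev_gray, next_gray):
    """Farneback dense flow over every pixel: accurate, but much slower."""
    return cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)          # H x W x 2 field (u, v)
```

Because the dense variant returns a per-pixel field, restricting it to a well-segmented obstacle region, as the method described below does, is what keeps it tractable in real time.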

Application No. CN201410565278.3, "Vehicle motion information detection method based on binocular stereo vision and optical flow fusion", mainly marks points of interest on the ground with binocular vision, computes the optical flow of those points, and finally estimates the three-dimensional translational and rotational velocities of the ground vehicle by least-squares fitting. That method substitutes the velocity of feature points for the velocity of the vehicle; although the computation is faster, the estimation accuracy is hard to guarantee. Moreover, it only estimates the motion of the vehicle itself and cannot identify or avoid obstacles along the way, so it is difficult to apply in practice.

Application No. CN201110412394.8, "Autonomous obstacle avoidance planning method for a rover based on binocular stereo vision", mainly computes the three-dimensional coordinates of every pixel in the camera image by binocular stereo vision, forms a three-dimensional map of the detected points, and selects an optimal obstacle-avoiding path from that map. This method must compute the three-dimensional coordinates of all pixels in the field of view, which places high demands on processor performance and memory capacity, and is therefore unsuitable for small embedded airborne devices.

Application No. CN201510688485.2, "Binocular-vision-based autonomous obstacle detection system and method for UAVs", focuses on the hardware architecture for obstacle detection with a binocular camera and mainly uses an FPGA as the core processor for the binocular images. Although FPGAs are compact and fast, they are expensive and must be programmed in special development languages, which hinders integration with other modules. Furthermore, that patent only describes the hardware architecture for obstacle detection with binocular vision and does not explain the specific detection algorithm.

Therefore, although there has been considerable research at home and abroad on obstacle avoidance with binocular vision, most methods cannot quickly obtain the position and velocity of obstacles relative to the UAV and thus cannot avoid them quickly and accurately, so most of them are difficult to apply to real-time UAV obstacle avoidance.

Summary of the Invention

The technical problem to be solved by the invention is to provide a UAV obstacle avoidance method and system based on the fusion of binocular vision and optical flow, which effectively fuses the depth information of obstacles with the optical flow vector, obtains the motion of obstacles relative to the UAV in real time, and realizes real-time obstacle avoidance for the UAV.

To solve the above technical problem, the invention adopts the following technical solution:

A UAV obstacle avoidance method based on binocular vision and optical flow fusion, comprising the following steps:

Step 1: acquire images in the UAV's direction of travel with a binocular camera and convert them to grayscale.

Step 2: compute the feature information of each pixel of the grayscale images and perform stereo matching to obtain a depth map in the UAV's direction of travel.

Step 3: divide the depth values in the depth map into two classes, those belonging to obstacles and those belonging to the background, thereby splitting the depth map into an obstacle region and a background region; take the contour with the largest closed area in the obstacle region as the obstacle contour and fit it with a rectangle to obtain the obstacle tracking window as the geometric information of the obstacle.

Step 4: compute the velocity vector of the sliding obstacle tracking window with the dense optical flow method to obtain the window's velocity in the x and y directions, predict the position of the obstacle tracking window in the next frame from this velocity, and compare the predicted position with the window position actually computed in the next frame; if the difference is smaller than a threshold, proceed to step 5, otherwise return to step 1 and recompute.

Step 5: compute the threat depth value of the obstacle to the UAV with the threat depth model.

Step 6: convert the obstacle geometric information from step 3 and the obstacle velocity information from step 4 from pixel coordinates to world coordinates, and correct them with the UAV's motion parameters.

Step 7: send the obstacle position information together with the geometric and velocity information from step 6 to the UAV's flight control computer, which controls the UAV to perform real-time evasive maneuvers based on this information.

As a preferred solution of the method, the specific process of acquiring images in the UAV's direction of travel with the binocular camera in step 1 is: mount the binocular camera on the nose of the UAV, obtain the intrinsic and extrinsic parameter matrices and distortion parameters of the binocular camera by a calibration procedure, acquire images in the UAV's direction of travel with the binocular camera, and rectify the images according to the intrinsic/extrinsic parameter matrices and distortion parameters to obtain two undistorted, row-aligned images.
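As an illustration of this step only, the sketch below builds row-aligning rectification maps with OpenCV. The calibration quantities K1, D1, K2, D2, R, T and image_size are assumed to come from a prior chessboard calibration (e.g. cv2.stereoCalibrate); none of these values are given in the patent.

```python
import cv2

def build_rectify_maps(K1, D1, K2, D2, R, T, image_size):
    """Precompute per-camera undistort/rectify maps so the two views are row-aligned."""
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, image_size, R, T)
    map_l = cv2.initUndistortRectifyMap(K1, D1, R1, P1, image_size, cv2.CV_32FC1)
    map_r = cv2.initUndistortRectifyMap(K2, D2, R2, P2, image_size, cv2.CV_32FC1)
    return map_l, map_r, Q

def rectify_pair(left_bgr, right_bgr, maps):
    """Return the undistorted, row-aligned grayscale pair used by the later steps."""
    (l1, l2), (r1, r2), _ = maps
    left_r = cv2.remap(left_bgr, l1, l2, cv2.INTER_LINEAR)
    right_r = cv2.remap(right_bgr, r1, r2, cv2.INTER_LINEAR)
    return (cv2.cvtColor(left_r, cv2.COLOR_BGR2GRAY),
            cv2.cvtColor(right_r, cv2.COLOR_BGR2GRAY))
```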

As a preferred solution of the method, the specific process of step 2 is: compute the energy functions of each pixel of the grayscale images along eight directions (up, down, left, right, upper-left, lower-left, upper-right, lower-right) and accumulate them, find the disparity that minimizes the accumulated energy function, and determine the depth value of each pixel from this disparity.
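OpenCV's semi-global matcher aggregates a per-pixel matching cost along several scan directions, which is close in spirit to the eight-direction energy accumulation described here, although the patent's own GPU implementation is not reproduced. A rough sketch, using the 12 cm baseline mentioned later in the embodiment and otherwise assumed parameter values:

```python
import cv2
import numpy as np

def depth_from_stereo(left_gray, right_gray, fx_pixels, baseline_m=0.12,
                      num_disp=96, block=5):
    """Semi-global matching followed by the triangulation Z = f * B / d."""
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=num_disp,
                                 blockSize=block,
                                 P1=8 * block * block, P2=32 * block * block,
                                 uniquenessRatio=10, speckleWindowSize=100,
                                 speckleRange=2)
    disp = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.zeros_like(disp)
    valid = disp > 0
    depth[valid] = fx_pixels * baseline_m / disp[valid]
    return depth, disp
```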

As a preferred solution of the method, before taking the contour with the largest closed area in the obstacle region as the obstacle contour in step 3, the depth map in which the obstacle region and the background region have been separated is filtered with a speckle filter to remove noise.

As a preferred solution of the method, the specific process in step 3 of dividing the depth values in the depth map into values belonging to obstacles and values belonging to the background is as follows:

Set a segmentation threshold Dh; classify depth values greater than or equal to Dh as belonging to obstacles and depth values smaller than Dh as belonging to the background. Dh is solved by maximizing the variance between the obstacle and background classes, the variance σd² being computed as

σd² = ω0(μ0 − μ1)² + ω1(μ0 − μ1)²,

where ω0 and ω1 are the probabilities that a depth value is assigned by Dh to the obstacle class and the background class respectively, and μ0 and μ1 are the means of the depth values belonging to the obstacle and the background:

ω0 = Σ_{i=1}^{h} ( Σ_{j=1}^{i} Kj / t ),   μ0 = ( Σ_{i=1}^{h} Di Ki / t ) / ω0,
ω1 = Σ_{i=h+1}^{t} ( Σ_{j=1}^{i} Kj / t ),   μ1 = ( Σ_{i=h+1}^{t} Di Ki / t − Σ_{i=1}^{h} Di Ki / t ) / (1 − ω0),

where Di (i = 1, …, t) are the discrete credible depth layers, t is the number of depth layers, D1, …, Dh are the depth layers belonging to obstacles, Dh+1, …, Dt are the depth layers belonging to the background, and Kj (j = 1, …, t) is the number of depth values in each depth layer.
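A possible realization of this threshold search is sketched below: it histograms the valid depth values into discrete layers and picks the split that maximizes the variance between the two classes. Since the patent's exact weighting is only partially recoverable from this text, the standard between-class variance w0·w1·(μ0 − μ1)² is used as a stand-in, and the number of layers is an assumed parameter.

```python
import numpy as np

def split_obstacle_background(depth_map, num_layers=64):
    """Return the threshold D_h and the obstacle mask (values >= D_h, per the text above)."""
    d = depth_map[depth_map > 0]                      # valid depth values only
    if d.size == 0:
        return 0.0, np.zeros_like(depth_map, dtype=bool)
    hist, edges = np.histogram(d, bins=num_layers)    # discrete depth layers
    p = hist / max(hist.sum(), 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_h, best_var = 1, -1.0
    for h in range(1, num_layers):
        w0, w1 = p[:h].sum(), p[h:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:h] * centers[:h]).sum() / w0
        mu1 = (p[h:] * centers[h:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2              # between-class variance
        if var > best_var:
            best_var, best_h = var, h
    Dh = edges[best_h]
    obstacle_mask = (depth_map > 0) & (depth_map >= Dh)
    return Dh, obstacle_mask
```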

As a preferred solution of the method, the specific process of step 5 is:

Let the set of depth values belonging to the obstacle be DK = {d1, d2, …, dK}, where d1, d2, …, dK are depth values and K is their number; D1, …, Dt are the discrete credible depth layers, t is the number of depth layers, and K1, …, Kt are the numbers of depth values in each depth layer. If at least one Kj (1 ≤ j ≤ t) exceeds the threshold, the threat depth value of the obstacle to the UAV is given by the first formula, where Dmin is the smallest depth layer among those whose Kj exceeds the threshold and Kmin is the number of depth values in Dmin; if all of K1, …, Kt are smaller than the threshold, the threat depth value of the obstacle to the UAV is given by the second formula.

A UAV obstacle avoidance system based on binocular vision and optical flow fusion comprises an image acquisition module, an image processing module, an inertial measurement module and a GNSS module mounted on the UAV, the UAV itself including a flight control computer. The image acquisition module acquires images in the UAV's direction of travel and transmits them synchronously to the image processing module; the image processing module comprises a CPU module and a GPU module, which compute the geometric, velocity and position information of obstacles; the GNSS module and the inertial measurement module perform real-time positioning and attitude measurement of the UAV respectively, and the inertial measurement module also corrects the velocity information computed by the image processing module; the flight control computer fuses the information sent by the image processing module and controls the UAV to perform evasive maneuvers that avoid forward obstacles.

As a preferred solution of the system, it further comprises an ultrasonic module with four ultrasonic sensors installed on the front, rear, left and right of the UAV to detect obstacles in these four directions.

Compared with the prior art, the above technical solution of the invention has the following technical effects:

1. The invention segments obstacles effectively with binocular vision and computes optical flow only inside the obstacle tracking window, which avoids the long computation time of dense optical flow over the whole image and further improves the real-time performance of the obstacle avoidance algorithm.

2. The invention proposes a new threat depth model that simplifies the obstacle avoidance process without introducing an overly complex path planning algorithm, and therefore has high practical value.

3. The invention performs computation on the GPU and the CPU simultaneously, using the GPU for stereo matching and optical flow and the CPU for the geometric size, position and velocity of obstacles, which increases the computation speed of the algorithm.

Brief Description of the Drawings

Fig. 1 is the chessboard pattern used to calibrate the cameras in the invention.

Fig. 2 is the hardware structure diagram of the UAV obstacle avoidance system based on binocular vision and optical flow fusion of the invention.

Fig. 3 is the algorithm flow chart of the UAV obstacle avoidance method based on binocular vision and optical flow fusion of the invention.

Fig. 4 is a schematic diagram of the obstacle avoidance strategy for depth-continuous obstacles in an embodiment of the invention.

Detailed Description

Embodiments of the invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings. The embodiments described with reference to the drawings are exemplary, serve only to explain the invention, and are not to be construed as limiting it.

As shown in Fig. 2, the hardware of the UAV obstacle avoidance system based on binocular vision and optical flow fusion comprises an image acquisition module, an embedded image processing module, a flight control computer, an ultrasonic module, a GNSS (Global Navigation Satellite System) module and an inertial measurement module. The embedded image processing module comprises a CPU module and a GPU module; the ultrasonic module comprises four ultrasonic sensors mounted on the front, rear, left and right of the UAV as auxiliary obstacle avoidance devices. When an ultrasonic sensor detects an obstacle in another direction, the flight control computer adjusts the avoidance maneuver according to the obstacle distance information.

The image acquisition module transmits the left and right images synchronously to the embedded image processing module; the CPU module and the GPU module compute the UAV's obstacle avoidance parameters; the GNSS module and the inertial measurement module handle real-time positioning and attitude measurement of the UAV, and the inertial measurement module additionally sends its measurements over a serial port to the embedded image processing module to correct the computed velocity information; the flight control computer fuses the information sent by the embedded image processing module and the ultrasonic module and controls the UAV to perform evasive maneuvers that avoid forward obstacles.

In this embodiment the image acquisition module is a binocular camera with a resolution of 640*480 or 800*600, a usable frame rate of 20 to 30 fps and a baseline of 12 cm; both the baseline and the focal length are adjustable, and the two synchronized video streams can be fed directly into the embedded image processing module over USB or another high-speed interface.

As shown in Fig. 3, the algorithm flow of the UAV obstacle avoidance method based on binocular vision and optical flow fusion is as follows. First, the binocular camera is calibrated with the chessboard of Fig. 1 to obtain the intrinsic parameter matrices and distortion parameters of the two cameras as well as the extrinsic parameters between them (rotation matrix and translation vector), which are stored in the memory of the embedded image processing module.

The synchronized video data sent by the binocular camera is read in, and the two images are rectified with the intrinsic matrices, distortion parameters and extrinsic parameters so that they are undistorted and row-aligned. The two images are converted to grayscale, and the GPU module computes and accumulates, for every pixel in parallel, the energy functions along the eight directions (up, down, left, right, upper-left, lower-left, upper-right, lower-right); the disparity that minimizes the accumulated energy function at each pixel determines its depth, yielding the depth map. The CPU module derives the obstacle tracking window from the depth information computed by the GPU module and determines the obstacle's geometric and position information; the GPU module then computes, in parallel, the optical flow of all pixels inside the selected obstacle tracking window to obtain the relative velocity. Finally, the CPU module corrects the velocity information and sends it, together with the position information and geometric size, to the flight control computer for processing.

The depth values in the depth map are divided into two classes: depth values belonging to obstacles, DK = {d1, d2, …, dK}, and depth values belonging to the background, Db. The segmentation threshold Dh is found by maximizing the variance between DK and Db, and depth values smaller than Dh are set to zero. The contours in the processed depth map are then computed, and the largest contour is fitted with a rectangle to obtain the rectangular obstacle tracking window.

The variance between the two classes of depth values can be expressed as

σd² = ω0(μ0 − μ1)² + ω1(μ0 − μ1)²,

where ω0 and ω1 are the probabilities of the two classes of depth values separated by Dh, and μ0 and μ1 are the means of the two classes:

ω0 = Σ_{i=1}^{h} ( Σ_{j=1}^{i} Kj / t ),   μ0 = ( Σ_{i=1}^{h} Di Ki / t ) / ω0,
ω1 = Σ_{i=h+1}^{t} ( Σ_{j=1}^{i} Kj / t ),   μ1 = ( Σ_{i=h+1}^{t} Di Ki / t − Σ_{i=1}^{h} Di Ki / t ) / (1 − ω0),

where {D1, …, Dt} = Dr are the discrete credible depth layers, Kj is the number of depth values in each depth layer, and t is the number of depth layers.

The depth map in which obstacle and background have been separated is extracted and its contours are computed. Before contour detection, the depth map is filtered with a speckle filter to remove small block-like depth regions. The detected contour with the largest closed area is taken as the obstacle contour and fitted with a rectangle, and the fitted rectangle is used as the obstacle tracking window.
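A minimal sketch of this step, assuming the depth map and the threshold Dh from the previous step are in the same units. The patent filters the depth map with a speckle filter; here a morphological opening on the binary obstacle mask plays the same small-blob-removal role, which is a deliberate simplification rather than the patent's exact filter. OpenCV 4.x return signatures are assumed.

```python
import cv2
import numpy as np

def obstacle_window(depth_map, Dh, open_kernel=5):
    """Largest closed obstacle contour, fitted with a rectangle -> tracking window."""
    mask = ((depth_map > 0) & (depth_map >= Dh)).astype(np.uint8) * 255
    kernel = np.ones((open_kernel, open_kernel), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove small block-like regions
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None                                         # no obstacle region found
    largest = max(contours, key=cv2.contourArea)            # contour with the largest closed area
    return cv2.boundingRect(largest)                        # (x, y, w, h)
```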

The threat depth D0 of the obstacle to the UAV is then computed. Assume DK = {d1, d2, …, dK} is a set of depth values belonging to the obstacle, d1, d2, …, dK are those depth values and K is their number; Dr is the credible depth interval, D1, D2, …, Dt are the discrete credible depth layers over this interval with {D1, D2, …, Dt} = Dr, and K1, …, Kt are the numbers of depth values in the layers D1, D2, …, Dt. If there exists some 1 ≤ j ≤ t for which Kj exceeds the threshold, the threat depth D0 of the obstacle tracking window is given by the first formula,

where Dmin is the smallest depth layer among those whose Kj exceeds the threshold, and Kmin is the number of depth values in Dmin;

if all of K1, …, Kt are smaller than the threshold, the threat depth D0 of the obstacle tracking window is given by the second formula.
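The occupancy threshold and the fallback expression in the two formulas above are embedded as images in the source text and are not recoverable here, so the sketch below only mirrors the structure of the model: it bins the obstacle depth values into t layers, returns the smallest sufficiently occupied layer when one exists, and otherwise falls back to a count-weighted mean. Both min_count (defaulting to K/t) and that fallback are assumptions, not the patent's exact expressions.

```python
import numpy as np

def threat_depth(obstacle_depths, num_layers=16, min_count=None):
    """Structural sketch of the threat-depth model; thresholds are assumptions."""
    d = np.asarray(obstacle_depths, dtype=np.float32)
    K = d.size
    if K == 0:
        return None
    counts, edges = np.histogram(d, bins=num_layers)     # K_1 .. K_t per layer
    centers = 0.5 * (edges[:-1] + edges[1:])             # D_1 .. D_t
    if min_count is None:
        min_count = K / num_layers                       # assumed default, not from the patent
    dense = counts >= min_count
    if dense.any():
        j = int(np.argmax(dense))                        # smallest sufficiently occupied layer
        return float(centers[j])                         # plays the role of D_min
    return float((counts * centers).sum() / K)           # assumed fallback: weighted mean
```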

The Horn-Schunck dense optical flow method is used to compute the optical flow field inside the obstacle tracking window, giving the window's velocity in the x and y directions, from which the position of the tracking window in the next frame is predicted. The predicted position is compared with the tracking window position actually computed in the next frame: if the difference is smaller than 10 pixels the computation is judged accurate; if it is larger than 10 pixels the computation is judged erroneous and the procedure returns to the first step to recompute.
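A sketch of this tracking and consistency check. OpenCV's Farneback dense flow is used here as a readily available stand-in for the Horn-Schunck method named in the text; the window format (x, y, w, h) and the 10-pixel tolerance follow the description, everything else is an assumption.

```python
import cv2
import numpy as np

def track_window(prev_gray, next_gray, window, next_window, max_err=10):
    """Mean dense flow inside the window, predicted next position, 10-pixel check."""
    x, y, w, h = window
    flow = cv2.calcOpticalFlowFarneback(prev_gray[y:y+h, x:x+w],
                                        next_gray[y:y+h, x:x+w], None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    u = float(np.mean(flow[..., 0]))                   # window velocity, px/frame (x)
    v = float(np.mean(flow[..., 1]))                   # window velocity, px/frame (y)
    predicted = (x + u, y + v)                         # predicted window origin in next frame
    err = np.hypot(predicted[0] - next_window[0], predicted[1] - next_window[1])
    return (u, v), predicted, err < max_err            # accept only if within the tolerance
```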

The obstacle geometric information represented by the computed obstacle tracking window and the obstacle velocity information are converted from pixel coordinates to world coordinates using the intrinsic and extrinsic parameter matrices, and the velocity of the obstacle relative to the UAV is corrected with the following formula:

where vx and vy are the corrected velocities in the x and y directions, fx and fy are the focal lengths in the x and y directions, u and v are the x- and y-direction optical flow vectors of the obstacle computed in step 4, D0 is the threat depth of the obstacle tracking window, θ and the accompanying attitude term are the pitch and yaw angles of the UAV, and Δt is the time between two frames.
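The exact correction formula, which also uses the pitch and yaw angles from the inertial measurement, is an image that is not reproduced in this text. The sketch below therefore shows only the basic pinhole-projection step that turns the flow displacement and the threat depth into a metric velocity, leaving the attitude correction out; it is a simplified stand-in, not the patent's formula.

```python
def flow_to_velocity(u_px, v_px, D0_m, fx_px, fy_px, dt_s):
    """Pinhole relation: lateral velocity ~ pixel displacement * depth / (focal length * dt)."""
    vx = u_px * D0_m / (fx_px * dt_s)    # m/s along the image x direction
    vy = v_px * D0_m / (fy_px * dt_s)    # m/s along the image y direction
    return vx, vy
```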

The computed velocity, geometric and position information together with the threat depth information are sent to the UAV flight control system, which controls the UAV to perform real-time evasive maneuvers to avoid obstacles on the flight path; the position information is the offset of the center of the obstacle tracking window relative to the center of the image.

As shown in Fig. 4, when avoiding an obstacle whose depth is continuous, the obstacle is decomposed into several parts, and the threat depth of each part is computed in different frames for obstacle avoidance.

The invention obtains the depth map through a GPU-accelerated stereo matching algorithm and then, through obstacle segmentation, threat depth computation and optical flow computation, obtains the geometric size of the obstacle and its position and velocity relative to the UAV, which are sent to the flight control computer to generate the avoidance maneuver. By computing on the GPU and the CPU simultaneously, the algorithm's computation speed is increased; by segmenting obstacles effectively with binocular vision and computing optical flow only inside the obstacle tracking window, the long computation time of dense optical flow over the whole image is avoided and the real-time performance of the obstacle avoidance algorithm is further improved. The threat depth model proposed by the invention simplifies the obstacle avoidance process without introducing an overly complex path planning algorithm and has high practical value.
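Putting the pieces together, the following compact sketch chains the hypothetical helpers from the earlier snippets (rectify_pair, depth_from_stereo, split_obstacle_background, obstacle_window, track_window, threat_depth, flow_to_velocity). It mirrors the flow of Fig. 3 but runs entirely on the CPU, whereas the patent offloads stereo matching and optical flow to the GPU.

```python
def avoid_step(left_bgr, right_bgr, maps, fx, fy, dt, prev=None):
    """One frame of the pipeline; returns data for the flight controller (or None)."""
    left_g, right_g = rectify_pair(left_bgr, right_bgr, maps)      # step 1: rectify + grayscale
    depth, _ = depth_from_stereo(left_g, right_g, fx)              # step 2: depth map
    Dh, _ = split_obstacle_background(depth)                       # step 3a: obstacle / background
    window = obstacle_window(depth, Dh)                            # step 3b: tracking window
    if prev is None or window is None:
        return left_g, window, None                                # nothing to track yet
    prev_gray, prev_window = prev
    (u, v), _, ok = track_window(prev_gray, left_g, prev_window, window)  # step 4
    if not ok:
        return left_g, window, None                                # consistency check failed
    x, y, w, h = window
    roi = depth[y:y+h, x:x+w]
    d0 = threat_depth(roi[roi > 0])                                # step 5: threat depth
    if d0 is None:
        return left_g, window, None
    vx, vy = flow_to_velocity(u, v, d0, fx, fy, dt)                # step 6: metric velocity
    return left_g, window, (d0, (x, y, w, h), (vx, vy))            # step 7: hand to flight control
```

The first two return values are kept so the caller can pass (left_g, window) as prev on the next frame.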

The above embodiments merely illustrate the technical idea of the invention and do not limit its scope of protection; any modification made on the basis of the technical solution in accordance with the technical idea proposed by the invention falls within the scope of protection of the invention.

Claims (8)

1. A UAV obstacle avoidance method based on binocular vision and optical flow fusion, characterized by comprising the following steps:
Step 1: acquire images in the UAV's direction of travel with a binocular camera and convert them to grayscale;
Step 2: compute the feature information of each pixel of the grayscale images and perform stereo matching to obtain a depth map in the UAV's direction of travel;
Step 3: divide the depth values in the depth map into two classes, those belonging to obstacles and those belonging to the background, thereby splitting the depth map into an obstacle region and a background region; take the contour with the largest closed area in the obstacle region as the obstacle contour and fit it with a rectangle to obtain the obstacle tracking window as the geometric information of the obstacle;
Step 4: compute the velocity vector of the sliding obstacle tracking window with the dense optical flow method to obtain the window's velocity in the x and y directions, predict the position of the obstacle tracking window in the next frame from this velocity, and compare the predicted position with the window position actually computed in the next frame; if the difference is smaller than a threshold, proceed to step 5, otherwise return to step 1 and recompute;
Step 5: compute the threat depth value of the obstacle to the UAV with the threat depth model;
Step 6: convert the obstacle geometric information from step 3 and the obstacle velocity information from step 4 from pixel coordinates to world coordinates, and correct them with the UAV's motion parameters;
Step 7: send the obstacle position information together with the geometric and velocity information from step 6 to the UAV's flight control computer, which controls the UAV to perform real-time evasive maneuvers based on this information.

2. The UAV obstacle avoidance method based on binocular vision and optical flow fusion according to claim 1, characterized in that the specific process of acquiring images in the UAV's direction of travel with the binocular camera in step 1 is: mount the binocular camera on the nose of the UAV, obtain the intrinsic and extrinsic parameter matrices and distortion parameters of the binocular camera by a calibration procedure, acquire images in the UAV's direction of travel with the binocular camera, and rectify the images according to the intrinsic/extrinsic parameter matrices and distortion parameters to obtain two undistorted, row-aligned images.

3. The UAV obstacle avoidance method based on binocular vision and optical flow fusion according to claim 1, characterized in that the specific process of step 2 is: compute the energy functions of each pixel of the grayscale images along the eight directions up, down, left, right, upper-left, lower-left, upper-right and lower-right and accumulate them, find the disparity that minimizes the accumulated energy function, and determine the depth value of each pixel from this disparity.

4. The UAV obstacle avoidance method based on binocular vision and optical flow fusion according to claim 1, characterized in that, before taking the contour with the largest closed area in the obstacle region as the obstacle contour in step 3, the depth map in which the obstacle region and the background region have been separated is filtered with a speckle filter to remove noise.

5. The UAV obstacle avoidance method based on binocular vision and optical flow fusion according to claim 1, characterized in that the specific process in step 3 of dividing the depth values in the depth map into values belonging to obstacles and values belonging to the background is:
set a segmentation threshold Dh, classify depth values greater than or equal to Dh as belonging to obstacles and depth values smaller than Dh as belonging to the background, and solve for Dh by maximizing the variance between the obstacle and background classes, the variance σd² being computed as
σd² = ω0(μ0 − μ1)² + ω1(μ0 − μ1)²,
where ω0 and ω1 are the probabilities that a depth value is assigned by Dh to the obstacle class and the background class respectively, and μ0 and μ1 are the means of the depth values belonging to the obstacle and the background:
ω0 = Σ_{i=1}^{h} ( Σ_{j=1}^{i} Kj / t ),   μ0 = ( Σ_{i=1}^{h} Di Ki / t ) / ω0,
ω1 = Σ_{i=h+1}^{t} ( Σ_{j=1}^{i} Kj / t ),   μ1 = ( Σ_{i=h+1}^{t} Di Ki / t − Σ_{i=1}^{h} Di Ki / t ) / (1 − ω0),
where Di (i = 1, …, t) are the discrete credible depth layers, t is the number of depth layers, D1, …, Dh are the depth layers belonging to obstacles, Dh+1, …, Dt are the depth layers belonging to the background, and Kj (j = 1, …, t) is the number of depth values in each depth layer.

6. The UAV obstacle avoidance method based on binocular vision and optical flow fusion according to claim 1, characterized in that the specific process of step 5 is:
let the set of depth values belonging to the obstacle be DK = {d1, d2, …, dK}, where d1, d2, …, dK are depth values and K is their number; D1, …, Dt are the discrete credible depth layers, t is the number of depth layers, and K1, …, Kt are the numbers of depth values in each depth layer; if at least one Kj (1 ≤ j ≤ t) exceeds the threshold, the threat depth value of the obstacle to the UAV is given by the first formula, where Dmin is the smallest depth layer among those whose Kj exceeds the threshold and Kmin is the number of depth values in Dmin; if all of K1, …, Kt are smaller than the threshold, the threat depth value of the obstacle to the UAV is given by the second formula.

7. A UAV obstacle avoidance system based on binocular vision and optical flow fusion, characterized by comprising an image acquisition module, an image processing module, an inertial measurement module and a GNSS module mounted on the UAV, the UAV including a flight control computer; the image acquisition module acquires images in the UAV's direction of travel and transmits them synchronously to the image processing module; the image processing module comprises a CPU module and a GPU module, which compute the geometric, velocity and position information of obstacles; the GNSS module and the inertial measurement module perform real-time positioning and attitude measurement of the UAV respectively, and the inertial measurement module also corrects the velocity information computed by the image processing module; the flight control computer fuses the information sent by the image processing module and controls the UAV to perform evasive maneuvers that avoid forward obstacles.

8. The UAV obstacle avoidance system based on binocular vision and optical flow fusion according to claim 1, characterized in that the system further comprises an ultrasonic module with four ultrasonic sensors installed on the front, rear, left and right of the UAV to detect obstacles in these four directions.
CN201611069481.7A 2016-11-29 2016-11-29 Obstacle avoidance method and system for UAV based on binocular vision and optical flow fusion Expired - Fee Related CN106681353B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611069481.7A CN106681353B (en) 2016-11-29 2016-11-29 Obstacle avoidance method and system for UAV based on binocular vision and optical flow fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611069481.7A CN106681353B (en) 2016-11-29 2016-11-29 Obstacle avoidance method and system for UAV based on binocular vision and optical flow fusion

Publications (2)

Publication Number Publication Date
CN106681353A 2017-05-17
CN106681353B (en) 2019-10-25

Family

ID=58866816

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611069481.7A Expired - Fee Related CN106681353B (en) 2016-11-29 2016-11-29 Obstacle avoidance method and system for UAV based on binocular vision and optical flow fusion

Country Status (1)

Country Link
CN (1) CN106681353B (en)

Cited By (52)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107091643A (en) * 2017-06-07 2017-08-25 旗瀚科技有限公司 A kind of indoor navigation method based on many 3D structure lights camera splicings
CN107388967A (en) * 2017-08-14 2017-11-24 上海汽车集团股份有限公司 A kind of outer parameter compensation method of vehicle-mounted three-dimensional laser sensor and device
CN107497621A (en) * 2017-09-20 2017-12-22 王晓东 Extended pattern is atomized regulating system and method
CN107689063A (en) * 2017-07-27 2018-02-13 南京理工大学北方研究院 A kind of robot indoor orientation method based on ceiling image
CN107909614A (en) * 2017-11-13 2018-04-13 中国矿业大学 Crusing robot localization method under a kind of GPS failures environment
CN107908195A (en) * 2017-11-06 2018-04-13 深圳市道通智能航空技术有限公司 Target tracking method, device, tracker and computer-readable recording medium
CN108007474A (en) * 2017-08-31 2018-05-08 哈尔滨工业大学 A kind of unmanned vehicle independent positioning and pose alignment technique based on land marking
CN108053691A (en) * 2017-12-19 2018-05-18 广东省航空航天装备技术研究所 A kind of unmanned plane of unmanned plane anticollision automatic testing method and application this method
CN108058838A (en) * 2017-12-03 2018-05-22 中国直升机设计研究所 A kind of helicopter collision avoidance system based on binocular distance measurement
CN108082506A (en) * 2018-01-26 2018-05-29 锐合防务技术(北京)有限公司 Unmanned vehicle
CN108106623A (en) * 2017-09-08 2018-06-01 同济大学 A kind of unmanned vehicle paths planning method based on flow field
CN108230403A (en) * 2018-01-23 2018-06-29 北京易智能科技有限公司 A kind of obstacle detection method based on space segmentation
CN108280401A (en) * 2017-12-27 2018-07-13 达闼科技(北京)有限公司 A kind of pavement detection method, apparatus, cloud server and computer program product
CN108445905A (en) * 2018-03-30 2018-08-24 合肥赛为智能有限公司 A kind of UAV Intelligent avoidance regulator control system
CN108536171A (en) * 2018-03-21 2018-09-14 电子科技大学 The paths planning method of multiple no-manned plane collaboration tracking under a kind of multiple constraint
CN108873931A (en) * 2018-06-05 2018-11-23 北京理工雷科电子信息技术有限公司 A kind of unmanned plane vision avoiding collision combined based on subjectiveness and objectiveness
CN109214984A (en) * 2017-07-03 2019-01-15 北京臻迪科技股份有限公司 A kind of image acquiring method and device, calculate equipment at automatic positioning navigation system
CN109520497A (en) * 2018-10-19 2019-03-26 天津大学 The unmanned plane autonomic positioning method of view-based access control model and imu
CN109753081A (en) * 2018-12-14 2019-05-14 中国矿业大学 A roadway inspection drone system and navigation method based on machine vision
CN109947093A (en) * 2019-01-24 2019-06-28 广东工业大学 An Intelligent Obstacle Avoidance Algorithm Based on Binocular Vision
CN110007313A (en) * 2019-03-08 2019-07-12 中国科学院深圳先进技术研究院 Obstacle detection method and device based on unmanned plane
CN110209184A (en) * 2019-06-21 2019-09-06 太原理工大学 A kind of unmanned plane barrier-avoiding method based on binocular vision system
CN110222581A (en) * 2019-05-13 2019-09-10 电子科技大学 A kind of quadrotor drone visual target tracking method based on binocular camera
CN110299030A (en) * 2019-06-28 2019-10-01 汉王科技股份有限公司 Handheld terminal, aircraft and its airspace measurement method, control method
CN110488805A (en) * 2018-05-15 2019-11-22 武汉小狮科技有限公司 A kind of unmanned vehicle obstacle avoidance system and method based on 3D stereoscopic vision
CN110543186A (en) * 2019-08-02 2019-12-06 佛山科学技术学院 UAV-based forest fire monitoring system, method and storage medium
CN110568861A (en) * 2019-09-19 2019-12-13 中国电子科技集团公司电子科学研究院 Man-machine movement obstacle monitoring method, readable storage medium and unmanned machine
CN110647156A (en) * 2019-09-17 2020-01-03 中国科学院自动化研究所 Target object docking ring-based docking equipment pose adjusting method and system
CN111113404A (en) * 2018-11-01 2020-05-08 阿里巴巴集团控股有限公司 Method for robot to obtain position service and robot
CN111247557A (en) * 2019-04-23 2020-06-05 深圳市大疆创新科技有限公司 Method and system for detecting moving target object and movable platform
CN111354027A (en) * 2018-12-21 2020-06-30 沈阳新松机器人自动化股份有限公司 Visual obstacle avoidance method for mobile robot
CN111428651A (en) * 2020-03-26 2020-07-17 广州小鹏汽车科技有限公司 Vehicle obstacle information acquisition method and system and vehicle
CN111619556A (en) * 2020-05-22 2020-09-04 奇瑞汽车股份有限公司 Obstacle avoidance control method and device for automobile and storage medium
CN111736631A (en) * 2020-07-09 2020-10-02 史全霞 Path planning method and system of pesticide spraying robot
CN111950502A (en) * 2020-08-21 2020-11-17 东软睿驰汽车技术(沈阳)有限公司 Obstacle object-based detection method and device and computer equipment
CN112148033A (en) * 2020-10-22 2020-12-29 广州极飞科技有限公司 Method, device and equipment for determining unmanned aerial vehicle air route and storage medium
CN112180943A (en) * 2020-10-19 2021-01-05 山东交通学院 Underwater robot navigation obstacle avoidance method based on visual image and laser radar
WO2021088684A1 (en) * 2019-11-07 2021-05-14 深圳市道通智能航空技术股份有限公司 Omnidirectional obstacle avoidance method and unmanned aerial vehicle
CN112907629A (en) * 2021-02-08 2021-06-04 浙江商汤科技开发有限公司 Image feature tracking method and device, computer equipment and storage medium
CN112912811A (en) * 2018-11-21 2021-06-04 深圳市道通智能航空技术股份有限公司 Unmanned aerial vehicle path planning method and device and unmanned aerial vehicle
CN112906479A (en) * 2021-01-22 2021-06-04 成都纵横自动化技术股份有限公司 Unmanned aerial vehicle auxiliary landing method and system
CN113031648A (en) * 2021-02-26 2021-06-25 华南理工大学 Method for avoiding obstacles of rotor unmanned aerial vehicle based on sensory depth camera
CN114046796A (en) * 2021-11-04 2022-02-15 南京理工大学 Intelligent wheelchair autonomous walking algorithm, device and medium
WO2022121024A1 (en) * 2020-12-10 2022-06-16 中国科学院深圳先进技术研究院 Unmanned aerial vehicle positioning method and system based on screen optical communication
CN114690802A (en) * 2022-04-02 2022-07-01 深圳慧源创新科技有限公司 Unmanned aerial vehicle binocular light stream obstacle avoidance method and device, unmanned aerial vehicle and storage medium
CN114879729A (en) * 2022-05-16 2022-08-09 西北工业大学 Unmanned aerial vehicle autonomous obstacle avoidance method based on obstacle contour detection algorithm
CN114905512A (en) * 2022-05-16 2022-08-16 安徽元古纪智能科技有限公司 Panoramic tracking and obstacle avoidance method and system for intelligent inspection robot
CN114924595A (en) * 2022-05-06 2022-08-19 华南师范大学 UAV swarm obstacle crossing method and control system, electronic equipment, storage medium
CN115576357A (en) * 2022-12-01 2023-01-06 浙江大有实业有限公司杭州科技发展分公司 Full-automatic unmanned aerial vehicle inspection intelligent path planning method under RTK signal-free scene
CN116820132A (en) * 2023-07-06 2023-09-29 杭州牧星科技有限公司 Flight obstacle avoidance early warning prompting method and system based on remote vision sensor
CN117170411A (en) * 2023-11-02 2023-12-05 山东环维游乐设备有限公司 Vision assistance-based auxiliary obstacle avoidance method for racing unmanned aerial vehicle
CN118466575A (en) * 2024-05-27 2024-08-09 深圳市星辰智途科技有限公司 UAV collision prediction and obstacle avoidance method and related equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110128379A1 (en) * 2009-11-30 2011-06-02 Dah-Jye Lee Real-time optical flow sensor design and its application to obstacle detection
CN103196443A (en) * 2013-04-09 2013-07-10 王宁羽 Flight body posture measuring method and system based on light stream and additional information
CN103365297A (en) * 2013-06-29 2013-10-23 天津大学 Optical flow-based four-rotor unmanned aerial vehicle flight control method
CN105222760A (en) * 2015-10-22 2016-01-06 一飞智控(天津)科技有限公司 The autonomous obstacle detection system of a kind of unmanned plane based on binocular vision and method
US20160114887A1 (en) * 2002-10-01 2016-04-28 Dylan T X Zhou Amphibious vertical takeoff and landing unmanned system and flying car with multiple aerial and aquatic flight modes for capturing panoramic virtual reality views, interactive video and transportation with mobile and wearable application
CN105787447A (en) * 2016-02-26 2016-07-20 深圳市道通智能航空技术有限公司 Method and system of unmanned plane omnibearing obstacle avoidance based on binocular vision
CN105959627A (en) * 2016-05-11 2016-09-21 徐洪恩 Automatic wireless charging type artificial intelligence unmanned aerial vehicle


Cited By (74)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107091643A (en) * 2017-06-07 2017-08-25 旗瀚科技有限公司 A kind of indoor navigation method based on many 3D structure lights camera splicings
CN109214984B (en) * 2017-07-03 2023-03-14 臻迪科技股份有限公司 Image acquisition method and device, autonomous positioning navigation system and computing equipment
CN109214984A (en) * 2017-07-03 2019-01-15 北京臻迪科技股份有限公司 A kind of image acquiring method and device, calculate equipment at automatic positioning navigation system
CN107689063A (en) * 2017-07-27 2018-02-13 南京理工大学北方研究院 A kind of robot indoor orientation method based on ceiling image
CN107388967A (en) * 2017-08-14 2017-11-24 上海汽车集团股份有限公司 A kind of outer parameter compensation method of vehicle-mounted three-dimensional laser sensor and device
CN108007474A (en) * 2017-08-31 2018-05-08 哈尔滨工业大学 A kind of unmanned vehicle independent positioning and pose alignment technique based on land marking
CN108106623A (en) * 2017-09-08 2018-06-01 同济大学 A kind of unmanned vehicle paths planning method based on flow field
CN107497621A (en) * 2017-09-20 2017-12-22 王晓东 Extended pattern is atomized regulating system and method
CN107497621B (en) * 2017-09-20 2018-04-06 安徽灵感科技有限公司 Extended pattern is atomized regulating system and method
CN107908195A (en) * 2017-11-06 2018-04-13 深圳市道通智能航空技术有限公司 Target tracking method, device, tracker and computer-readable recording medium
CN107909614B (en) * 2017-11-13 2021-02-26 中国矿业大学 A positioning method of inspection robot under GPS failure environment
CN107909614A (en) * 2017-11-13 2018-04-13 中国矿业大学 Positioning method for inspection robot in GPS-failure environment
CN108058838A (en) * 2017-12-03 2018-05-22 中国直升机设计研究所 Helicopter collision avoidance system based on binocular distance measurement
CN108053691A (en) * 2017-12-19 2018-05-18 广东省航空航天装备技术研究所 Automatic anti-collision detection method for unmanned aerial vehicle and unmanned aerial vehicle applying the method
CN108280401A (en) * 2017-12-27 2018-07-13 达闼科技(北京)有限公司 Pavement detection method, apparatus, cloud server and computer program product
CN108230403A (en) * 2018-01-23 2018-06-29 北京易智能科技有限公司 Obstacle detection method based on space segmentation
CN108082506A (en) * 2018-01-26 2018-05-29 锐合防务技术(北京)有限公司 Unmanned vehicle
CN108536171A (en) * 2018-03-21 2018-09-14 电子科技大学 Path planning method for cooperative tracking of multiple UAVs under multiple constraints
CN108536171B (en) * 2018-03-21 2020-12-29 电子科技大学 A Path Planning Method for Cooperative Tracking of Multiple UAVs under Multiple Constraints
CN108445905A (en) * 2018-03-30 2018-08-24 合肥赛为智能有限公司 UAV intelligent obstacle avoidance regulation and control system
CN110488805A (en) * 2018-05-15 2019-11-22 武汉小狮科技有限公司 Unmanned vehicle obstacle avoidance system and method based on 3D stereoscopic vision
CN108873931A (en) * 2018-06-05 2018-11-23 北京理工雷科电子信息技术有限公司 Unmanned aerial vehicle visual collision avoidance method based on a combination of subjective and objective approaches
CN109520497B (en) * 2018-10-19 2022-09-30 天津大学 Unmanned aerial vehicle autonomous positioning method based on vision and IMU
CN109520497A (en) * 2018-10-19 2019-03-26 天津大学 Unmanned aerial vehicle autonomous positioning method based on vision and IMU
CN111113404A (en) * 2018-11-01 2020-05-08 阿里巴巴集团控股有限公司 Method for robot to obtain position service and robot
CN112912811B (en) * 2018-11-21 2024-03-29 深圳市道通智能航空技术股份有限公司 UAV path planning method, device and UAV
US12025996B2 (en) 2018-11-21 2024-07-02 Autel Robotics Co., Ltd. Unmanned aerial vehicle path planning method and apparatus and unmanned aerial vehicle
CN112912811A (en) * 2018-11-21 2021-06-04 深圳市道通智能航空技术股份有限公司 Unmanned aerial vehicle path planning method and device and unmanned aerial vehicle
CN109753081A (en) * 2018-12-14 2019-05-14 中国矿业大学 A roadway inspection drone system and navigation method based on machine vision
CN111354027A (en) * 2018-12-21 2020-06-30 沈阳新松机器人自动化股份有限公司 Visual obstacle avoidance method for mobile robot
CN109947093A (en) * 2019-01-24 2019-06-28 广东工业大学 An Intelligent Obstacle Avoidance Algorithm Based on Binocular Vision
CN110007313A (en) * 2019-03-08 2019-07-12 中国科学院深圳先进技术研究院 Obstacle detection method and device based on unmanned aerial vehicle
WO2020215194A1 (en) * 2019-04-23 2020-10-29 深圳市大疆创新科技有限公司 Method and system for detecting moving target object, and movable platform
CN111247557A (en) * 2019-04-23 2020-06-05 深圳市大疆创新科技有限公司 Method and system for detecting moving target object and movable platform
CN110222581B (en) * 2019-05-13 2022-04-19 电子科技大学 Binocular camera-based quad-rotor unmanned aerial vehicle visual target tracking method
CN110222581A (en) * 2019-05-13 2019-09-10 电子科技大学 Quad-rotor unmanned aerial vehicle visual target tracking method based on binocular camera
CN110209184A (en) * 2019-06-21 2019-09-06 太原理工大学 Unmanned aerial vehicle obstacle avoidance method based on binocular vision system
CN110299030A (en) * 2019-06-28 2019-10-01 汉王科技股份有限公司 Handheld terminal, aircraft and its airspace measurement method, control method
CN110543186A (en) * 2019-08-02 2019-12-06 佛山科学技术学院 UAV-based forest fire monitoring system, method and storage medium
CN110647156B (en) * 2019-09-17 2021-05-11 中国科学院自动化研究所 Method and system for adjusting the pose of a docking device based on a target docking ring
CN110647156A (en) * 2019-09-17 2020-01-03 中国科学院自动化研究所 Target object docking ring-based docking equipment pose adjusting method and system
CN110568861A (en) * 2019-09-19 2019-12-13 中国电子科技集团公司电子科学研究院 Man-machine movement obstacle monitoring method, readable storage medium and unmanned aerial vehicle
CN110568861B (en) * 2019-09-19 2022-09-16 中国电子科技集团公司电子科学研究院 Man-machine movement obstacle monitoring method, readable storage medium and unmanned aerial vehicle
WO2021088684A1 (en) * 2019-11-07 2021-05-14 深圳市道通智能航空技术股份有限公司 Omnidirectional obstacle avoidance method and unmanned aerial vehicle
CN111428651A (en) * 2020-03-26 2020-07-17 广州小鹏汽车科技有限公司 Vehicle obstacle information acquisition method and system and vehicle
CN111428651B (en) * 2020-03-26 2023-05-16 广州小鹏汽车科技有限公司 Obstacle information acquisition method and system for vehicle and vehicle
CN111619556A (en) * 2020-05-22 2020-09-04 奇瑞汽车股份有限公司 Obstacle avoidance control method and device for automobile and storage medium
CN111619556B (en) * 2020-05-22 2022-05-03 奇瑞汽车股份有限公司 Obstacle avoidance control method and device for automobile and storage medium
CN111736631A (en) * 2020-07-09 2020-10-02 史全霞 Path planning method and system of pesticide spraying robot
CN111950502A (en) * 2020-08-21 2020-11-17 东软睿驰汽车技术(沈阳)有限公司 Obstacle object-based detection method and device and computer equipment
CN111950502B (en) * 2020-08-21 2024-04-16 东软睿驰汽车技术(沈阳)有限公司 Obstacle object-based detection method and device and computer equipment
CN112180943A (en) * 2020-10-19 2021-01-05 山东交通学院 Underwater robot navigation obstacle avoidance method based on visual image and laser radar
CN112148033A (en) * 2020-10-22 2020-12-29 广州极飞科技有限公司 Method, device and equipment for determining unmanned aerial vehicle air route and storage medium
WO2022121024A1 (en) * 2020-12-10 2022-06-16 中国科学院深圳先进技术研究院 Unmanned aerial vehicle positioning method and system based on screen optical communication
CN112906479B (en) * 2021-01-22 2024-01-26 成都纵横自动化技术股份有限公司 Unmanned aerial vehicle auxiliary landing method and system thereof
CN112906479A (en) * 2021-01-22 2021-06-04 成都纵横自动化技术股份有限公司 Unmanned aerial vehicle auxiliary landing method and system
CN112907629A (en) * 2021-02-08 2021-06-04 浙江商汤科技开发有限公司 Image feature tracking method and device, computer equipment and storage medium
CN113031648A (en) * 2021-02-26 2021-06-25 华南理工大学 Method for avoiding obstacles of rotor unmanned aerial vehicle based on sensory depth camera
CN114046796A (en) * 2021-11-04 2022-02-15 南京理工大学 Intelligent wheelchair autonomous walking algorithm, device and medium
CN114690802A (en) * 2022-04-02 2022-07-01 深圳慧源创新科技有限公司 Unmanned aerial vehicle binocular optical flow obstacle avoidance method and device, unmanned aerial vehicle and storage medium
CN114924595A (en) * 2022-05-06 2022-08-19 华南师范大学 UAV swarm obstacle crossing method and control system, electronic equipment, storage medium
CN114924595B (en) * 2022-05-06 2024-11-22 华南师范大学 Unmanned aerial vehicle swarm obstacle crossing method and control system, electronic equipment, and storage medium
CN114905512A (en) * 2022-05-16 2022-08-16 安徽元古纪智能科技有限公司 Panoramic tracking and obstacle avoidance method and system for intelligent inspection robot
CN114905512B (en) * 2022-05-16 2024-05-14 安徽元古纪智能科技有限公司 Panoramic tracking and obstacle avoidance method and system for intelligent inspection robot
CN114879729B (en) * 2022-05-16 2024-06-18 西北工业大学 Unmanned aerial vehicle autonomous obstacle avoidance method based on obstacle contour detection algorithm
CN114879729A (en) * 2022-05-16 2022-08-09 西北工业大学 Unmanned aerial vehicle autonomous obstacle avoidance method based on obstacle contour detection algorithm
CN115576357B (en) * 2022-12-01 2023-07-07 浙江大有实业有限公司杭州科技发展分公司 Intelligent path planning method for fully automatic unmanned aerial vehicle inspection in RTK-signal-free scenarios
CN115576357A (en) * 2022-12-01 2023-01-06 浙江大有实业有限公司杭州科技发展分公司 Intelligent path planning method for fully automatic unmanned aerial vehicle inspection in RTK-signal-free scenarios
CN116820132B (en) * 2023-07-06 2024-01-09 杭州牧星科技有限公司 Flight obstacle avoidance early warning prompting method and system based on remote vision sensor
CN116820132A (en) * 2023-07-06 2023-09-29 杭州牧星科技有限公司 Flight obstacle avoidance early warning prompting method and system based on remote vision sensor
CN117170411A (en) * 2023-11-02 2023-12-05 山东环维游乐设备有限公司 Vision assistance-based auxiliary obstacle avoidance method for racing unmanned aerial vehicle
CN117170411B (en) * 2023-11-02 2024-02-02 山东环维游乐设备有限公司 Vision assistance-based auxiliary obstacle avoidance method for racing unmanned aerial vehicle
CN118466575A (en) * 2024-05-27 2024-08-09 深圳市星辰智途科技有限公司 UAV collision prediction and obstacle avoidance method and related equipment
CN118466575B (en) * 2024-05-27 2024-11-15 深圳市星辰智途科技有限公司 UAV collision prediction and obstacle avoidance method and related equipment

Also Published As

Publication number Publication date
CN106681353B (en) 2019-10-25

Similar Documents

Publication Publication Date Title
CN106681353B (en) Obstacle avoidance method and system for UAV based on binocular vision and optical flow fusion
CN113359810B (en) A multi-sensor based UAV landing area identification method
CN111326023B (en) Unmanned aerial vehicle route early warning method, device, equipment and storage medium
CN110222581B (en) Binocular camera-based quad-rotor unmanned aerial vehicle visual target tracking method
CN110221603B (en) Remote obstacle detection method based on laser radar multi-frame point cloud fusion
US11064178B2 (en) Deep virtual stereo odometry
CN104197928B (en) Multi-camera collaboration-based method for detecting, positioning and tracking unmanned aerial vehicle
WO2021052403A1 (en) Obstacle information sensing method and device for mobile robot
EP2209091B1 (en) System and method for object motion detection based on multiple 3D warping and vehicle equipped with such system
CN110246159A (en) 3D target motion analysis method based on vision and radar information fusion
CN113568435B (en) Unmanned aerial vehicle autonomous flight situational awareness trend analysis method and system
CN112567201A (en) Distance measuring method and apparatus
CN106384382A (en) Three-dimensional reconstruction system and method based on binocular stereoscopic vision
CN106444837A (en) Obstacle avoiding method and obstacle avoiding system for unmanned aerial vehicle
CN110745140A (en) A Vehicle Lane Change Early Warning Method Based on Constrained Pose Estimation of Continuous Images
CN106548173A (en) Improved unmanned aerial vehicle three-dimensional information acquisition method based on classification matching strategy
CN107193011A (en) Method for quickly calculating vehicle speed in the region of interest of an autonomous vehicle
Eynard et al. Real time UAV altitude, attitude and motion estimation from hybrid stereovision
WO2019127518A1 (en) Obstacle avoidance method and device and movable platform
CN105844692A (en) Binocular stereoscopic vision based 3D reconstruction device, method, system and UAV
Zhou et al. Fast, accurate thin-structure obstacle detection for autonomous mobile robots
CN117274036A (en) Parking scene detection method based on multi-view and time sequence fusion
CN108225273B (en) Real-time runway detection method based on sensor prior knowledge
CN108645408A (en) Unmanned aerial vehicle autonomous recovery target prediction method based on navigation information
US20210233307A1 (en) Landmark location reconstruction in autonomous machine applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20191025
Termination date: 20211129