CN110209184A - A kind of unmanned plane barrier-avoiding method based on binocular vision system
- Publication number
- CN110209184A CN110209184A CN201910543933.8A CN201910543933A CN110209184A CN 110209184 A CN110209184 A CN 110209184A CN 201910543933 A CN201910543933 A CN 201910543933A CN 110209184 A CN110209184 A CN 110209184A
- Authority
- CN
- China
- Prior art keywords
- image
- uav
- obstacle avoidance
- depth map
- distance
- Prior art date
- Legal status
- Pending
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/08—Control of attitude, i.e. control of roll, pitch, or yaw
- G05D1/0808—Control of attitude, i.e. control of roll, pitch, or yaw specially adapted for aircraft
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/10—Simultaneous control of position or course in three dimensions
- G05D1/101—Simultaneous control of position or course in three dimensions specially adapted for aircraft
Landscapes
- Engineering & Computer Science (AREA)
- Aviation & Aerospace Engineering (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
The invention discloses a UAV obstacle avoidance method based on a binocular vision system. The method acquires image information through the binocular vision system and computes depth map information from it; the depth map is denoised using a morphological method combined with an exponentially weighted moving average algorithm; a region-growing threshold segmentation algorithm extracts the obstacle contour, which is fitted with a rectangular bounding box to obtain the disparity value; the obstacle distance is then derived from the similar-triangle theorem; the resulting distance is passed to the flight controller and compared with a preset threshold distance, after which different commands are issued so that the UAV completes obstacle avoidance at the right moment. By combining the morphological method with the exponentially weighted moving average algorithm, the invention improves the degree of depth map denoising, the accuracy of the computed distance between the UAV and the obstacle, and the UAV's ability to avoid obstacles quickly by vision; its accuracy is considerably higher than that of existing algorithms.
Description
Technical field
The invention relates to the field of UAV vision and digital image processing, and in particular to a UAV obstacle avoidance method based on a binocular vision system.
Background art
With the continuous advance of science and technology, unmanned machines have become a trend and the market keeps expanding. UAVs are often required to perform special tasks, which places higher demands on how quickly they can recognise obstacles and how flexibly they can avoid them. Vision-based control systems are characterised by a simple structure, low cost and high cost-effectiveness. Compared with a monocular vision system, a binocular vision system obtains more information; moreover, because the binocular system can rectify the acquired images onto a common plane, yielding distortion-free, row-aligned images, its measurement accuracy is higher than that of a monocular system, which benefits accurate obstacle avoidance by the UAV. Binocular vision systems are currently widely used in fields such as robot navigation and target tracking.
The widely used noise reduction approach generally applies morphological opening and closing operations to the depth map. Each opening operation is an erosion followed by a dilation, whose purpose is to eliminate isolated outliers in the image that are brighter than their neighbourhood; the closing operation is a dilation followed by an erosion, whose purpose is to eliminate isolated outliers that are darker than their neighbourhood. However, the opening and closing operations are independent of each other, and the result after these operations is not ideal: pixels in noisy regions whose grey values differ only slightly from those of non-noisy regions are not completely removed, so there is still room for improvement.
Analysed from the viewpoint of its transfer function, the exponentially weighted moving average algorithm is robust and exhibits good low-pass filtering behaviour; applied to image pixels, it can be regarded as a further erosion of the image. Used on its own, however, it performs well only when the grey-value difference between pixels in the noisy and non-noisy regions of the depth image lies within a certain range.
Therefore, the current noise reduction of depth images is unsatisfactory, which makes the UAV's ranging results inaccurate; this in turn leads to errors in the UAV's judgement of obstacles and hence to control mistakes.
Summary of the invention
The present invention overcomes the deficiencies of the prior art and, on the basis of depth image processing, proposes a UAV obstacle avoidance method based on a binocular vision system, with the purpose of solving the problem that unsatisfactory depth map denoising makes ranging results imprecise and UAV control inaccurate.
The present invention is achieved through the following technical solution.
A UAV obstacle avoidance method based on a binocular vision system, specifically comprising the following steps:
1) Use the binocular camera to acquire images directly in front of the UAV, obtaining a left image and a right image.
2) Convert the images to greyscale to obtain feature information for every pixel, and perform stereo matching on this information to obtain depth map information.
3) Obtain the depth map and denoise it using, in turn, a morphological method and an exponentially weighted moving average algorithm. The morphological method converts the depth map into a binary depth map, applies the opening operation to the binary depth map three times, and applies the closing operation to the processed image three times. The exponentially weighted moving average algorithm is computed as follows:
EWMA(t) = aY(t) + (1 - a)EWMA(t - 1), t = 1, 2, ..., n
where EWMA(t) is the estimate at time t, Y(t) is the measurement at time t, n is the total observation time, and a (0 < a < 1) is the weight coefficient of the historical measurements; several values of a are tried, and the value corresponding to the maximum signal-to-noise ratio is selected from the signal-to-noise-ratio curve.
4) Extract the obstacle contour with a region-growing threshold segmentation algorithm and fit it with a rectangular bounding box (a sketch of this step follows the list below).
5) Obtain the disparity value and from it the distance between the obstacle and the UAV.
6) Set the UAV obstacle avoidance threshold distance, compare the obtained distance with the threshold, and pass this information to the flight controller, which makes a judgement based on it and takes the corresponding action.
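For illustration only, the following Python sketch shows one way steps 4) and 5) could be realised. The patent does not name a library; OpenCV (cv2) and NumPy are used here as assumptions, and cv2.connectedComponentsWithStats serves as a simplified stand-in for the region-growing threshold segmentation; all function names, the percentile seed criterion and the minimum-area value are assumptions rather than part of the patent.

```python
import cv2
import numpy as np

def extract_obstacle_disparity(disparity, min_area=200):
    """Rough stand-in for steps 4)-5): segment the densest disparity region,
    fit a rectangular box and return a representative disparity inside it."""
    valid = disparity[disparity > 0]
    if valid.size == 0:
        return None, None
    thresh = np.percentile(valid, 75)                 # crude seed criterion for the region
    mask = (disparity >= thresh).astype(np.uint8) * 255

    # Connected components approximate the grown region; keep the largest one.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    if n < 2:
        return None, None
    areas = stats[1:, cv2.CC_STAT_AREA]
    best = 1 + int(np.argmax(areas))
    if stats[best, cv2.CC_STAT_AREA] < min_area:
        return None, None

    # Rectangular box fitted to the extracted region (step 4).
    x, y, w, h = (stats[best, cv2.CC_STAT_LEFT], stats[best, cv2.CC_STAT_TOP],
                  stats[best, cv2.CC_STAT_WIDTH], stats[best, cv2.CC_STAT_HEIGHT])

    # Representative disparity of the obstacle (step 5).
    region = disparity[labels == best]
    d = float(np.median(region[region > 0]))
    return (x, y, w, h), d
```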
Further preferably, acquiring the UAV's images with the binocular camera in step 1 is specifically: mount the binocular camera directly in front of the UAV and calibrate it to obtain the intrinsic matrix, the extrinsic matrix and the distortion parameters; then rectify the images acquired by the camera using the obtained intrinsic and extrinsic matrices and distortion parameters, obtaining distortion-free, row-aligned images.
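As a minimal sketch of the calibration and rectification described above, assuming chessboard corner correspondences have already been collected and that OpenCV is the implementation library (the patent names neither), one could proceed as follows; every function and variable name is an assumption.

```python
import cv2

def rectify_stereo_pair(obj_pts, left_pts, right_pts, img_size, left_img, right_img):
    """Calibrate both cameras, then produce distortion-free, row-aligned images."""
    # Intrinsics and distortion of each lens from the collected calibration points.
    _, K1, D1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, img_size, None, None)
    _, K2, D2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, img_size, None, None)

    # Extrinsics (rotation R, translation T) between the two cameras.
    _, K1, D1, K2, D2, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, left_pts, right_pts, K1, D1, K2, D2, img_size,
        flags=cv2.CALIB_FIX_INTRINSIC)

    # Rectification transforms; Q is reused later for reprojection to 3-D.
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, img_size, R, T)

    map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, img_size, cv2.CV_32FC1)
    map2x, map2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, img_size, cv2.CV_32FC1)
    left_rect = cv2.remap(left_img, map1x, map1y, cv2.INTER_LINEAR)
    right_rect = cv2.remap(right_img, map2x, map2y, cv2.INTER_LINEAR)
    return left_rect, right_rect, Q
```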
Further, step 2 is: convert the obtained distortion-free, row-aligned images to greyscale; the greyscale images are filtered with a Gaussian filter and sharpened with the Laplacian operator to enhance image edges, yielding the pixel information.
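A small sketch of this pre-processing chain (greyscale conversion, Gaussian filtering, Laplacian sharpening), again assuming OpenCV and NumPy; the kernel sizes and the clipping strategy are illustrative assumptions, not values taken from the patent.

```python
import cv2
import numpy as np

def preprocess(img):
    """Greyscale -> Gaussian smoothing -> Laplacian sharpening (g = f - laplacian(f))."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)        # suppress sensor noise
    lap = cv2.Laplacian(blurred, cv2.CV_64F, ksize=3)  # second-derivative edge response
    sharp = np.clip(blurred.astype(np.float64) - lap, 0, 255).astype(np.uint8)
    return sharp
```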
Further preferably, the stereo matching in step 2 is: perform stereo matching on the processed images with a block matching algorithm to obtain the depth map. In the block matching algorithm an SAD window is used to compute the sum of grey-level differences between pixels of the left and right images, specifically:

SAD(x, y, d) = Σ_{(i, j) ∈ W} | I_L(x + i, y + j) - I_R(x + i - d, y + j) |

where I_L is the grey level of a left-image pixel, I_R is the grey level of a right-image pixel, W is the neighbourhood window, and d is the disparity value.
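The SAD-based block matching described above corresponds to what OpenCV's StereoBM matcher implements; the following sketch is one assumed realisation, with the disparity search range and block size chosen arbitrarily for illustration and the inputs assumed to be rectified, row-aligned greyscale images.

```python
import cv2

def compute_disparity(left_gray, right_gray, num_disp=64, block=15):
    """Block matching with an SAD window over rectified greyscale images."""
    # StereoBM minimises the SAD cost over the num_disp candidate disparities.
    matcher = cv2.StereoBM_create(numDisparities=num_disp, blockSize=block)
    raw = matcher.compute(left_gray, right_gray)   # fixed-point result: 16 * disparity
    return raw.astype('float32') / 16.0
```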
Further preferably, the specific process of obtaining the depth map in step 3 is: using the disparity map and the rectification parameters, reproject the image points of the left and right image planes into three-dimensional space. The reprojection matrix Q is

Q = [ 1   0   0        -c_x
      0   1   0        -c_y
      0   0   0          f
      0   0  -1/T_x   (c_x - c_x')/T_x ]

where (c_x, c_y) is the principal point of the left image, f is the focal length of the left lens, T_x is the baseline between the two cameras, and c_x' is the position on the right image corresponding to c_x on the left image.

The conversion from two-dimensional to three-dimensional coordinates is

[X  Y  Z  W]^T = Q · [x  y  d  1]^T

where d is the disparity value.

Therefore the depth information z can be obtained with the reprojection matrix Q, specifically:

z = Z / W.
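For illustration, a sketch of this reprojection step using the Q matrix returned by stereo rectification; assuming OpenCV, cv2.reprojectImageTo3D performs exactly the [X Y Z W]^T = Q[x y d 1]^T mapping followed by normalisation by W, and the helper name below is hypothetical.

```python
import cv2
import numpy as np

def disparity_to_depth(disparity, Q):
    """Reproject each pixel (x, y, d) through Q and keep the normalised depth z = Z / W."""
    points_3d = cv2.reprojectImageTo3D(disparity, Q)   # shape (H, W, 3): X/W, Y/W, Z/W
    depth = points_3d[:, :, 2]
    depth[disparity <= 0] = np.inf                     # invalid disparities carry no depth
    return depth
```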
Further preferably, setting the UAV obstacle avoidance threshold distance in step 6 and comparing the obtained distance with the threshold means deriving the theoretical distance from the similar-triangle theorem and comparing it with the preset threshold distance, which yields different flight modes.
Furthermore, the flight modes include a natural flight mode, an alarm mode and an obstacle avoidance mode. In the natural flight mode, when the theoretical distance is greater than the threshold distance, the UAV transmits the information to the flight controller and the flight controller issues a command to keep flying forward; in the alarm mode, when the theoretical distance equals the threshold distance, the UAV transmits the information to the flight controller and the flight controller issues a command to flash the LED red; in the obstacle avoidance mode, when the theoretical distance is smaller than the threshold distance, the UAV transmits the information to the flight controller and the flight controller issues an obstacle avoidance command.
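The following sketch illustrates the threshold comparison and the three flight modes; the distance comes from the similar-triangle relation z = f·B/d (focal length in pixels times baseline over disparity). The mode strings, the small tolerance band used for the equality case and the numbers in the usage comment are hypothetical placeholders, not values from the patent (only the 5 cm baseline appears in the embodiment).

```python
def obstacle_distance(focal_px, baseline_m, disparity_px):
    """Similar triangles: depth = focal length (pixels) * baseline / disparity."""
    return focal_px * baseline_m / disparity_px if disparity_px > 0 else float('inf')

def select_flight_mode(distance_m, threshold_m, eps=0.05):
    """Map the measured distance to one of the three modes described above."""
    if distance_m > threshold_m + eps:
        return "NATURAL_FLIGHT"      # keep flying forward
    if abs(distance_m - threshold_m) <= eps:
        return "ALARM"               # flash the LED red
    return "OBSTACLE_AVOIDANCE"      # execute the avoidance manoeuvre

# Example usage (all numbers assumed):
# d = obstacle_distance(focal_px=700.0, baseline_m=0.05, disparity_px=12.5)
# mode = select_flight_mode(d, threshold_m=2.0)
```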
During flight, the UAV acquires the actual images with the binocular vision sensor, obtains the calibration parameters, obtains the depth map through stereo matching, pre-processes the depth map to obtain the disparity value and from it the measured distance, and sets the threshold distance. The flight state is divided into the natural flight mode, the alarm mode and the obstacle avoidance mode and transmitted to the flight controller; the flight controller issues different commands according to the mode, and the UAV acts accordingly.
Compared with the prior art, the beneficial effects of the present invention are as follows.
The present invention combines the morphological opening and closing operations with the exponentially weighted moving average algorithm for noise reduction. After the depth image has been processed by the morphological opening and closing operations, the exponentially weighted moving average algorithm, thanks to its good low-pass filtering behaviour, continues the denoising of the depth image and thereby refines the morphological operations, further removing isolated outliers within the neighbourhoods of the depth image; the combination of the two can therefore achieve a better denoising effect, so that the UAV's ranging results are more precise and the UAV is controlled more accurately.
The present invention uses the morphological opening and closing operations combined with the exponentially weighted moving average algorithm during depth image processing, which improves the accuracy of UAV control. Compared with the existing approach of processing depth images with the morphological method alone, the disparity values measured by the present invention are more precise and the depth image is denoised more thoroughly, which improves the accuracy of UAV control; to some extent this can be seen as increasing the UAV's agility and lowering its crash rate, thereby improving the efficiency with which the UAV performs its tasks.
Brief description of the drawings
Fig. 1 is the calibration chart used to calibrate the cameras in the present invention.
Fig. 2 is a hardware structure diagram of the UAV control method based on a binocular vision system according to the present invention.
Fig. 3 is an algorithm flow chart of the UAV control method based on a binocular vision system according to the present invention.
Fig. 4 is the signal-to-noise-ratio curve of images processed by the exponentially weighted moving average algorithm in the present invention.
Fig. 5 is the original left image of the obstacle in an embodiment of the present invention.
Fig. 6 is the original right image of the obstacle in an embodiment of the present invention.
Fig. 7 shows the denoising effect of the existing morphological opening and closing operations on the obstacle depth map in the embodiment.
Fig. 8 shows the denoising effect on the obstacle depth map of the morphological opening and closing operations fused with the exponentially weighted moving average algorithm according to the present invention.
Detailed description of the embodiments
In order to make the technical problems to be solved, the technical solution and the beneficial effects of the present invention clearer, the present invention is described in further detail below with reference to the embodiments and the accompanying drawings. It should be understood that the specific embodiments described here are only intended to explain the present invention and not to limit it. The technical solution of the present invention is described in detail below in combination with the embodiments and drawings, but the scope of protection is not limited thereto.
As shown in Fig. 3, the present invention provides a control system for UAV obstacle avoidance, comprising a positioning module, an inertial measurement module, an image acquisition module and an embedded image processing module.
The image acquisition module contains the binocular vision system, which is fixed at the front of the UAV and mounted horizontally.
The image acquisition module transmits the left and right images synchronously to the embedded image processing module: the CPU module is responsible for speed correction, and the GPU module is responsible for computing the camera calibration parameters. The positioning module and the inertial measurement module are responsible for measuring the UAV's actual position and attitude; in addition, these measurement modules transmit their data to the embedded image processing module so that the computed speed information can be adjusted. The flight controller receives the information produced by the embedded image processing module, determines the current mode and issues the corresponding command, and the UAV performs the corresponding action.
In this example the image acquisition module uses a binocular camera with a resolution of 680*240 and a baseline of 5 cm; both the baseline and the lens focal length are adjustable, and the image information is transmitted to the embedded image processing module through a USB interface.
As shown in Fig. 3, the specific algorithm flow of the UAV obstacle avoidance control method based on the binocular vision system of the present invention is as follows: first, calibrate the binocular camera with the calibration chart shown in Fig. 1 to obtain the intrinsic matrix and distortion parameter matrix of each lens and the extrinsic matrix between the two cameras, and store the obtained data in the embedded image processing module.
Photograph the object with the binocular camera, convert the acquired images to greyscale and segment them, obtaining the greyscaled original left image shown in Fig. 5 and the greyscaled original right image shown in Fig. 6; rectify the left and right images with the previously obtained intrinsic, extrinsic and distortion parameter matrices to generate two distortion-free, row-aligned images; convert these two images to greyscale, filter the greyscale images with a Gaussian filter and sharpen them with the Laplacian operator to enhance image edges, obtaining the pixel information.
Perform stereo matching on the processed images with the block stereo matching algorithm to obtain the depth map. In the block matching algorithm an SAD window is used to compute the sum of grey-level differences between pixels of the left and right images, specifically:

SAD(x, y, d) = Σ_{(i, j) ∈ W} | I_L(x + i, y + j) - I_R(x + i - d, y + j) |

where I_L is the grey level of a left-image pixel, I_R is the grey level of a right-image pixel, W is the neighbourhood window, and d is the disparity value.
In the process of obtaining the depth map, the disparity map and the rectification parameters are used to reproject the image points of the left and right image planes into three-dimensional space. The reprojection matrix Q is

Q = [ 1   0   0        -c_x
      0   1   0        -c_y
      0   0   0          f
      0   0  -1/T_x   (c_x - c_x')/T_x ]

where (c_x, c_y) is the principal point of the left image, f is the focal length of the left lens, T_x is the baseline between the two cameras, and c_x' is the position on the right image corresponding to c_x on the left image.

The conversion from two-dimensional to three-dimensional coordinates is

[X  Y  Z  W]^T = Q · [x  y  d  1]^T

where d is the disparity value.

Therefore the depth image z can be obtained with the reprojection matrix Q, specifically:

z = Z / W.
Denoise the obtained depth map: convert the depth map into a binary depth map, apply the morphological opening operation to the binary depth map three times and the closing operation to the processed image three times; the depth map after the morphological opening and closing operations is shown in Fig. 7. The exponentially weighted moving average algorithm is then fused in for further denoising; the exponentially weighted moving average (EWMA) algorithm is computed as:
EWMA(t) = aY(t) + (1 - a)EWMA(t - 1), t = 1, 2, ..., n
where EWMA(t) is the estimate at time t, Y(t) is the measurement at time t, n is the total observation time, and a (0 < a < 1) is the weight coefficient of the historical measurements.
In this embodiment, 0.75 is assigned to the weight coefficient of the historical measurements; the signal-to-noise-ratio curve in Fig. 4 shows that the signal-to-noise ratio is largest at this value, so the denoising effect is best. The denoising result is shown in Fig. 8.
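A minimal sketch of the fused denoising described above, assuming OpenCV and NumPy: three opening passes, three closing passes, then the EWMA recursion with a = 0.75 swept along each image row as the "time" axis. The patent does not state the structuring-element size or the axis along which the EWMA is applied, so the kernel size, the row-wise sweep and the function name are assumptions.

```python
import cv2
import numpy as np

def denoise_depth(depth, a=0.75, kernel_size=3):
    """Morphological open/close (3 passes each) followed by a row-wise EWMA filter."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))

    # Binarise, then open three times and close three times (step 3 of the method).
    _, binary = cv2.threshold(depth.astype(np.uint8), 0, 255, cv2.THRESH_BINARY)
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel, iterations=3)
    closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel, iterations=3)
    masked = np.where(closed > 0, depth, 0).astype(np.float64)

    # EWMA(t) = a*Y(t) + (1 - a)*EWMA(t - 1), applied along each row.
    out = np.empty_like(masked)
    out[:, 0] = masked[:, 0]
    for t in range(1, masked.shape[1]):
        out[:, t] = a * masked[:, t] + (1.0 - a) * out[:, t - 1]
    return out
```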
Set the threshold distance, derive the theoretical distance from the similar-triangle theorem and compare it with the preset threshold distance, dividing the flight into three modes: when the theoretical distance is greater than the threshold distance, the UAV transmits the information to the flight controller and the flight controller issues a command to keep flying forward, which is called the natural flight mode; when the theoretical distance equals the threshold distance, the UAV transmits the information to the flight controller and the flight controller issues a command to flash the LED red, which is called the alarm mode; when the theoretical distance is smaller than the threshold distance, the UAV transmits the information to the flight controller and the flight controller issues an obstacle avoidance command, which is called the obstacle avoidance mode.
Different from the prior art, the present invention provides a UAV control method based on a binocular vision system. While performing its tasks, a UAV has extremely high requirements on the accuracy of airframe control; using the morphological method combined with the exponentially weighted moving average algorithm can effectively improve the denoising of the depth map and lead to a more accurate obstacle distance, so that the UAV is controlled more accurately. The UAV control method based on a binocular vision system of the present invention improves the UAV's working efficiency during operation and reduces its risk of crashing.
The above is a further detailed description of the present invention in combination with specific preferred embodiments, and it cannot be concluded that the specific embodiments of the present invention are limited thereto. For a person of ordinary skill in the technical field to which the present invention belongs, a number of simple deductions or substitutions can also be made without departing from the premise of the present invention, all of which shall be regarded as falling within the scope of patent protection of the present invention as determined by the submitted claims.
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910543933.8A CN110209184A (en) | 2019-06-21 | 2019-06-21 | A kind of unmanned plane barrier-avoiding method based on binocular vision system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910543933.8A CN110209184A (en) | 2019-06-21 | 2019-06-21 | A kind of unmanned plane barrier-avoiding method based on binocular vision system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110209184A true CN110209184A (en) | 2019-09-06 |
Family
ID=67794072
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910543933.8A Pending CN110209184A (en) | 2019-06-21 | 2019-06-21 | A kind of unmanned plane barrier-avoiding method based on binocular vision system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110209184A (en) |
- 2019-06-21: application CN201910543933.8A filed, published as CN110209184A/en, status active, Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130162768A1 (en) * | 2011-12-22 | 2013-06-27 | Wen-Nung Lie | System for converting 2d video into 3d video |
CN106681353A (en) * | 2016-11-29 | 2017-05-17 | 南京航空航天大学 | Unmanned aerial vehicle (UAV) obstacle avoidance method and system based on binocular vision and optical flow fusion |
CN108805906A (en) * | 2018-05-25 | 2018-11-13 | 哈尔滨工业大学 | A kind of moving obstacle detection and localization method based on depth map |
CN109740727A (en) * | 2018-12-05 | 2019-05-10 | 中国软件与技术服务股份有限公司 | A kind of hydraulic turbine shaft state monitoring method neural network based and system |
Non-Patent Citations (2)
Title |
---|
吕朝辉 等: "一种用于立体匹配的边缘检测方法", 《上海大学学报(自然科学版)》 * |
王帅 等: "基于立体视觉技术的实时测距系统研究", 《电子科技》 * |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112861887A (en) * | 2019-11-08 | 2021-05-28 | 中国科学院长春光学精密机械与物理研究所 | Method and system for rapidly detecting obstacle and terminal equipment |
CN111476762A (en) * | 2020-03-26 | 2020-07-31 | 南方电网科学研究院有限责任公司 | Obstacle detection method and device of inspection equipment and inspection equipment |
CN111476762B (en) * | 2020-03-26 | 2023-11-03 | 南方电网科学研究院有限责任公司 | Obstacle detection method and device of inspection equipment and inspection equipment |
CN111890358A (en) * | 2020-07-01 | 2020-11-06 | 浙江大华技术股份有限公司 | Binocular obstacle avoidance method and device, storage medium and electronic device |
CN111890358B (en) * | 2020-07-01 | 2022-06-14 | 浙江大华技术股份有限公司 | Binocular obstacle avoidance method and device, storage medium and electronic device |
CN111880576A (en) * | 2020-08-20 | 2020-11-03 | 西安联飞智能装备研究院有限责任公司 | Unmanned aerial vehicle flight control method and device based on vision |
CN111880576B (en) * | 2020-08-20 | 2024-02-02 | 西安联飞智能装备研究院有限责任公司 | Unmanned aerial vehicle flight control method and device based on vision |
CN116685840A (en) * | 2020-12-16 | 2023-09-01 | 卡特彼勒公司 | Systems and methods for processing data from particle monitoring sensors |
CN112598705A (en) * | 2020-12-17 | 2021-04-02 | 太原理工大学 | Vehicle body posture detection method based on binocular vision |
CN112598705B (en) * | 2020-12-17 | 2024-05-03 | 太原理工大学 | Binocular vision-based vehicle body posture detection method |
CN112906479A (en) * | 2021-01-22 | 2021-06-04 | 成都纵横自动化技术股份有限公司 | Unmanned aerial vehicle auxiliary landing method and system |
CN112906479B (en) * | 2021-01-22 | 2024-01-26 | 成都纵横自动化技术股份有限公司 | Unmanned aerial vehicle auxiliary landing method and system thereof |
CN115158372B (en) * | 2022-07-18 | 2023-05-19 | 内蒙古工业大学 | Acoustic wave-based large shuttle car obstacle avoidance early warning method |
CN115158372A (en) * | 2022-07-18 | 2022-10-11 | 内蒙古工业大学 | Large shuttle vehicle obstacle avoidance early warning method based on sound waves |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110209184A (en) | A kind of unmanned plane barrier-avoiding method based on binocular vision system | |
CN111462135B (en) | Semantic mapping method based on visual SLAM and two-dimensional semantic segmentation | |
CN106681353B (en) | Obstacle avoidance method and system for UAV based on binocular vision and optical flow fusion | |
CN111897349B (en) | A method for autonomous obstacle avoidance of underwater robot based on binocular vision | |
CN106708084B (en) | The automatic detection of obstacles of unmanned plane and barrier-avoiding method under complex environment | |
CN108682026B (en) | Binocular vision stereo matching method based on multi-matching element fusion | |
CN106960454B (en) | Depth of field obstacle avoidance method and equipment and unmanned aerial vehicle | |
WO2018119744A1 (en) | False alarm obstacle detection method and device | |
CN108171787A (en) | A kind of three-dimensional rebuilding method based on the detection of ORB features | |
US20170293796A1 (en) | Flight device and flight control method | |
WO2017080108A1 (en) | Flying device, flying control system and method | |
CN107677274B (en) | A real-time solution method for UAV autonomous landing navigation information based on binocular vision | |
CN110807809A (en) | Light-weight monocular vision positioning method based on point-line characteristics and depth filter | |
CN109961417B (en) | Image processing method, image processing apparatus, and mobile apparatus control method | |
CN106384382A (en) | Three-dimensional reconstruction system and method based on binocular stereoscopic vision | |
CN106650701B (en) | Binocular vision-based obstacle detection method and device in indoor shadow environment | |
CN105844692B (en) | Three-dimensional reconstruction apparatus, method, system and unmanned plane based on binocular stereo vision | |
CN110349249B (en) | Real-time dense reconstruction method and system based on RGB-D data | |
KR102129738B1 (en) | Autonomous tractor having crop hight sensing algorithm and hight correcting algorithm | |
CN106931962A (en) | A kind of real-time binocular visual positioning method based on GPU SIFT | |
CN113781562A (en) | Lane line virtual and real registration and self-vehicle positioning method based on road model | |
CN111553862A (en) | A method for dehazing and binocular stereo vision positioning for sea and sky background images | |
CN116429098A (en) | Visual navigation positioning method and system for low-speed unmanned aerial vehicle | |
CN111047636B (en) | Obstacle avoidance system and obstacle avoidance method based on active infrared binocular vision | |
CN110032211A (en) | Multi-rotor unmanned aerial vehicle automatic obstacle-avoiding method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190906 |