
CN106097304A - A real-time online map generation method for unmanned aerial vehicles - Google Patents

Info

Publication number
CN106097304A
CN106097304A (application CN201610373105.0A)
Authority
CN
China
Prior art keywords: frame, map, image, key frame, camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610373105.0A
Other languages
Chinese (zh)
Other versions
CN106097304B (en)
Inventor
布树辉
杨君
赵勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201610373105.0A priority Critical patent/CN106097304B/en
Publication of CN106097304A publication Critical patent/CN106097304A/en
Application granted granted Critical
Publication of CN106097304B publication Critical patent/CN106097304B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05: Geographic models
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30181: Earth observation

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The present invention proposes a real-time online map generation method for unmanned aerial vehicles (UAVs). The map is reconstructed online in real time on board the UAV, and the reconstructed map is then transmitted back to the ground over a high-bandwidth data link, allowing the UAV to obtain a three-dimensional map of its surroundings in real time and to localize itself. With the present invention, a UAV can fly in unknown airspace and, without certain knowledge of its own position, localize itself and build a three-dimensional map by sensing and comparing environmental information and by extracting and matching environmental features. The invention can be applied to battlefield needs (such as autonomous UAV flight and attack), emergency rescue after disasters such as fires, earthquakes, and floods, and UAV-based public-security monitoring. Compared with existing methods, the proposed method ranks in the upper-middle range for position drift, angle drift, and absolute error; it captures more environmental information, and compared with fully dense methods its experimental accuracy still meets practical standards while running directly on a CPU with good real-time performance.

Description

A real-time online map generation method for unmanned aerial vehicles

Technical field

The present invention relates to the fields of computer image processing and map surveying, and specifically to a real-time online map generation method for UAVs. By changing what is transmitted between the UAV and the ground station, and how, real-time online map generation on the UAV is realized.

Background

Traditional surveying and mapping technology measures ground feature points and boundaries (for example with remote sensing, laser, or ultrasonic instruments) to obtain graphics and position information reflecting the current state of the ground. It places high demands on the sensors: although its accuracy is high, the high cost and the long time from data collection to results restrict its use in some scenarios (for example those with strict timeliness requirements).

Computer-vision-based SLAM (Simultaneous Localization and Mapping) has long been a research hotspot because of its important theoretical and practical value. In the current approach to 3D reconstruction, however, images captured by the UAV's onboard camera are sent to a ground computer over a high-definition video link, and the ground computer performs the 3D reconstruction of the captured images. This approach has serious drawbacks:

1. The transmitted images are highly repetitive and redundant;

2. The required data rate is high, which limits the transmission range and makes it difficult to get the images back to the ground station;

3. The transmission signal is susceptible to interference;

4. The aircraft cannot make decisions or fly autonomously.

Summary of the invention

To solve the problems of the prior art, the present invention proposes a real-time online map generation method for UAVs: the map is reconstructed online in real time on board the UAV, and the reconstructed map is then transmitted back to the ground over a high-bandwidth data link.

The hardware implementation of the present invention consists of two main parts. The first part is the airborne end: a micro-computer is mounted on the UAV, and the UAV directly carries a USB camera. During flight, the USB camera collects image data and passes it directly to the micro-computer, which reconstructs the map from the collected images; during flight, the flight controller supplies GPS data to the micro-computer's image-processing software and passes it on to the DDL data link. The second part is the ground end; information is exchanged between the airborne end and the ground end mainly through the DDL link.

Throughout the process, the micro-computer and the flight control system exchange information over a serial port; a USB-to-UART adapter is used mainly to pass the flight controller's GPS information to the micro-computer. Information from the flight control system and the micro-computer is relayed in real time through the airborne DDL link to the DDL receiver on the ground.

On the airborne end, the UAV processes the collected data to obtain its own attitude and a map of the three-dimensional environment in real time, mainly through the following steps:

Step 1: Acquire images

The UAV's onboard camera captures a sequence of images and passes them to the onboard micro-computer.

Step 2: The onboard micro-computer processes the first frame captured by the camera to obtain an initial map:

Step 2.1: Undistort the first frame to obtain the undistorted first frame.

Step 2.2: Initialize the depth of the undistorted first frame: using a preset intensity-gradient threshold, select the pixels of the undistorted first frame whose intensity gradient exceeds the threshold, and assign each selected pixel a random depth value.

Step 2.3: Using the onboard camera's parameters, back-project the pixels assigned depth values in step 2.2 into the three-dimensional environment to obtain the initial map.
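Steps 2.2 and 2.3 can be sketched as follows. This is a minimal illustration assuming a pinhole camera model; the function names and the random depth range are hypothetical, not taken from the patent.

```python
# Illustrative sketch of steps 2.2-2.3: gradient-thresholded pixel selection,
# random depth initialization, and pinhole back-projection.
import random

def gradient_magnitude(img, x, y):
    """Central-difference intensity gradient magnitude at pixel (x, y)."""
    gx = (img[y][x + 1] - img[y][x - 1]) / 2.0
    gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
    return (gx * gx + gy * gy) ** 0.5

def init_depth_map(img, grad_thresh, rng=random.Random(0)):
    """Keep only high-gradient pixels and assign each a random depth."""
    h, w = len(img), len(img[0])
    depths = {}
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if gradient_magnitude(img, x, y) > grad_thresh:
                depths[(x, y)] = rng.uniform(0.5, 2.0)  # arbitrary prior range
    return depths

def back_project(x, y, depth, fx, fy, cx, cy):
    """Pinhole back-projection of pixel (x, y) at the given depth to a 3D point."""
    return ((x - cx) * depth / fx, (y - cy) * depth / fy, depth)
```

As the embodiment later notes, the random initial depths are harmless: after a few dozen frames the depth map converges toward an accurate model.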

Step 2.4: Set the undistorted first frame as the key frame.

Step 3: Process the i-th frame (i = 2, 3, 4, ...) captured in real time by the onboard camera as follows:

Step 3.1: Undistort the i-th frame to obtain the undistorted i-th frame.

Step 3.2: Using the current key frame as the reference, align the undistorted i-th frame with the reference image to obtain the pose change from the i-th frame to the current key frame.

Step 3.3: From the camera pose of the current key frame and the pose change obtained in step 3.2, compute the camera pose of the i-th frame and the camera position in local coordinates.

Step 3.4: Using the preset intensity-gradient threshold, select the pixels of the undistorted i-th frame whose intensity gradient exceeds the threshold; using the camera parameters and the camera pose of the i-th frame obtained in step 3.3, back-project the selected pixels into the three-dimensional environment to obtain their depth values, and add the selected pixels with depth values to the map.

Step 3.5: If the pose change from the i-th frame to the current key frame obtained in step 3.2 exceeds a preset pose-change threshold, replace the current key frame with the i-th frame as the new key frame.
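Steps 3.3 and 3.5 can be sketched with 4x4 homogeneous SE(3) matrices. The magnitude measure (translation norm of the pose change) and the threshold are illustrative assumptions; the patent does not fix how the pose-change size is measured.

```python
# Illustrative sketch of steps 3.3 and 3.5: pose composition and key-frame
# promotion. Poses and pose changes are 4x4 homogeneous SE(3) matrices.

def mat_mul(a, b):
    """4x4 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def compose_pose(keyframe_pose, delta):
    """Step 3.3: pose of frame i = key-frame pose composed with the change."""
    return mat_mul(keyframe_pose, delta)

def translation_magnitude(delta):
    """Euclidean norm of the translation part of an SE(3) pose change."""
    tx, ty, tz = delta[0][3], delta[1][3], delta[2][3]
    return (tx * tx + ty * ty + tz * tz) ** 0.5

def should_replace_keyframe(delta, threshold):
    """Step 3.5: promote frame i when the pose change exceeds the threshold."""
    return translation_magnitude(delta) > threshold
```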

Step 4: Once the images captured in real time by the onboard camera for a preset number of frames have been processed, the onboard micro-computer transmits the generated map through the airborne DDL link to the ground end for display.

In a further preferred scheme, the UAV real-time online map generation method is characterized in that:

the feature points of every key frame image are extracted and stored;

if the image alignment in step 3.2 fails, failure recovery (relocalization) is performed:

extract the feature points of the undistorted current frame and match them against the stored feature points of every key frame, looking for the key frame with the largest number of successfully matched feature points. If the successfully matched feature points account for no more than 40% of that key frame's total feature points, take the current frame as the first frame and return to step 2; otherwise take that key frame as the reference, align the undistorted current frame with the reference image, and obtain the pose change from the current frame to the current key frame;
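The relocalization test above can be sketched as follows. Feature "descriptors" are plain hashable values here purely for illustration; a real system would match the fast-corner features the patent describes.

```python
# Illustrative sketch of relocalization: find the key frame with the most
# feature matches and apply the 40% ratio threshold from the text.

def best_keyframe(current_feats, keyframes):
    """Return (index, matched, ratio) for the key frame with the most matches.

    keyframes: list of per-key-frame feature sets.
    ratio: matched features / total features of that key frame.
    """
    best = (-1, 0, 0.0)
    cur = set(current_feats)
    for idx, kf in enumerate(keyframes):
        matched = len(cur & set(kf))
        if matched > best[1]:
            best = (idx, matched, matched / len(kf))
    return best

def relocalize_or_restart(current_feats, keyframes, min_ratio=0.40):
    """True: use the matched key frame as the reference.
    False: restart from step 2 with the current frame as the first frame."""
    _, _, ratio = best_keyframe(current_feats, keyframes)
    return ratio > min_ratio
```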

from the camera pose of the reference and the pose change from the current frame to the current key frame, compute the camera pose of the current frame;

using the preset intensity-gradient threshold, select the pixels of the undistorted current frame whose intensity gradient exceeds the threshold; using the camera parameters and the camera pose of the current frame, back-project the selected pixels into the three-dimensional environment to obtain their depth values; add the selected pixels with depth values to the map; then continue from step 3.

In a further preferred scheme, the method is characterized in that feature points are extracted with a fast corner detection method.

In a further preferred scheme, the method is characterized in that, in step 3.4, while the selected pixels with depth values are being added to the map, if the 3D point corresponding to a back-projected pixel already has existing map points within its neighborhood, both that newly back-projected 3D point and the existing map points in its neighborhood are removed, and the weighted average of the new 3D point and those existing neighborhood points is added to the map instead.
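The neighborhood merge can be sketched as follows. The neighborhood radius and the equal weighting are illustrative assumptions; the patent does not specify either.

```python
# Illustrative sketch of the step 3.4 merge: when a newly back-projected 3D
# point falls within a radius of existing map points, replace it and them
# with their weighted average.

def add_point(map_points, new_pt, radius=0.05, weights=None):
    """map_points: list of (x, y, z) tuples. Merges new_pt with close neighbors."""
    r2 = radius * radius
    close = [p for p in map_points
             if sum((a - b) ** 2 for a, b in zip(p, new_pt)) <= r2]
    if not close:
        map_points.append(new_pt)
        return
    pts = close + [new_pt]
    if weights is None:
        weights = [1.0] * len(pts)   # equal weights as a placeholder
    total = sum(weights)
    merged = tuple(sum(w * p[i] for w, p in zip(weights, pts)) / total
                   for i in range(3))
    for p in close:                  # remove the merged-away neighbors
        map_points.remove(p)
    map_points.append(merged)
```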

In a further preferred scheme, the method is characterized in that, in step 3.5, the i-th frame replaces the current key frame as the new key frame only if the pose change from the i-th frame to the current key frame obtained in step 3.2 exceeds the preset pose-change threshold and the i-th frame is at least 15 frames after the current key frame.

In a further preferred scheme, the method is characterized in that the image alignment in step 3.2 proceeds as follows:

first set an initial value for the pose change from the i-th frame to the current key frame; according to this pose change, back-project the pixels of the current key frame whose intensity gradient exceeds the threshold into the three-dimensional environment, then project them from the three-dimensional environment onto the undistorted i-th frame to obtain the projected points; find, on the undistorted i-th frame, the points corresponding to those selected key-frame pixels; compute the sum of photometric residuals between the projected points and the corresponding points; and iteratively update the pose change from the i-th frame to the current key frame so as to minimize that residual sum.
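The idea of minimizing a photometric residual over candidate poses can be shown with a deliberately simplified toy: the "pose" is reduced to a horizontal pixel shift of a 1D image row, and the residual sum is minimized by search. The real method iterates over a full SE(3) pose with projection through 3D; everything below is illustrative only.

```python
# Toy 1D sketch of direct image alignment: minimize the photometric residual
# sum over candidate shifts (standing in for candidate pose changes).

def photometric_residual(key_row, cur_row, selected, shift):
    """Sum of squared intensity differences for the selected key-frame pixels
    'projected' into the current image under the candidate shift."""
    total = 0.0
    for x in selected:
        xp = x + shift                 # projection under this candidate pose
        if 0 <= xp < len(cur_row):
            d = key_row[x] - cur_row[xp]
            total += d * d
        else:
            total += 1e9               # heavily penalize leaving the image
    return total

def align(key_row, cur_row, selected, max_shift=5):
    """Return the shift minimizing the photometric residual sum."""
    return min(range(-max_shift, max_shift + 1),
               key=lambda s: photometric_residual(key_row, cur_row, selected, s))
```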

In a further preferred scheme, the method is characterized in that the pose change from the (i-1)-th frame to the current key frame is used as the initial value of the pose change from the i-th frame to the current key frame.

In a further preferred scheme, the method is characterized in that, in step 3.4, after the depth values of the selected pixels are obtained, a graph optimization method is used to optimize the position of the camera of the i-th frame in local coordinates and the positions of the selected pixels with depth values, and the optimized pixels with depth values are added to the map.

In a further preferred scheme, the method is characterized in that the reconstructed map is converted to the world coordinate system:

during real-time map reconstruction, the UAV's trajectory X_n in the world coordinate system at each frame time is obtained from satellite positioning signals, where n denotes the n-th frame; the camera position x_n in the local coordinate system for each frame is also obtained during reconstruction; through the optimization function

argmin_δ Σ_{n=1}^{N} ( T(x_n, δ) - X_n )²

the transformation matrix δ minimizing the optimization function is obtained, where N is the total number of frames during real-time map reconstruction, T(x_n, δ) is the projection transform from the local coordinate system to the world coordinate system, and δ is the transformation matrix from the local coordinate system to the world coordinate system; using the projection transform corresponding to the obtained δ, the reconstructed map is converted to the world coordinate system.
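The trajectory-alignment objective argmin_δ Σ (T(x_n, δ) - X_n)² can be illustrated with δ reduced to a 1D scale s and offset t, so T(x, δ) = s·x + t, solved in closed form by ordinary least squares. The patent's δ is a full local-to-world transformation matrix; this 1D fit keeps only the idea.

```python
# Illustrative 1D least-squares fit of the local-to-world transform:
# minimize sum over n of (s * x_n + t - X_n)^2 in closed form.

def fit_scale_offset(local, world):
    """Return (s, t) minimizing the sum of (s*x_n + t - X_n)^2."""
    n = len(local)
    mx = sum(local) / n
    mX = sum(world) / n
    sxx = sum((x - mx) ** 2 for x in local)
    sxX = sum((x - mx) * (X - mX) for x, X in zip(local, world))
    s = sxX / sxx
    t = mX - s * mx
    return s, t
```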

In a further preferred scheme, the method is characterized in that, when the satellite positioning signal rate is lower than the frame rate, the UAV's trajectory X_n in the world coordinate system is obtained at each satellite positioning sampling time t_n, and the camera position x_n in the local coordinate system at time t_n is obtained by interpolating the camera positions of the frames immediately before and after t_n; through the optimization function

argmin_δ Σ_{n=1}^{N} ( T(x_n, δ) - X_n )²

the transformation matrix δ minimizing the optimization function is obtained, where N is the total number of satellite positioning samples, T(x_n, δ) is the projection transform from the local coordinate system to the world coordinate system, and δ is the transformation matrix from the local coordinate system to the world coordinate system; using the projection transform corresponding to the obtained δ, the reconstructed map is converted to the world coordinate system.
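The interpolation used when GPS arrives more slowly than the frame rate can be sketched as linear interpolation between the bracketing frames. The timestamps and positions below are illustrative.

```python
# Illustrative sketch: camera position at GPS time t, linearly interpolated
# from the camera positions of the frames immediately before and after t.

def interpolate_position(t, frame_times, frame_positions):
    """Linear interpolation of a 3D camera position at time t."""
    for k in range(len(frame_times) - 1):
        t0, t1 = frame_times[k], frame_times[k + 1]
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            p0, p1 = frame_positions[k], frame_positions[k + 1]
            return tuple((1 - a) * u + a * v for u, v in zip(p0, p1))
    raise ValueError("t outside the recorded frame interval")
```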

Beneficial effects

Compared with many existing methods, the proposed method ranks in the upper-middle range for position drift, angle drift, and absolute error. It captures more environmental information, and compared with fully dense methods its experimental accuracy still meets practical standards; moreover, it runs directly on a CPU, without requiring a GPU, with good real-time performance.

Furthermore, by generating the map online on the UAV in real time, the present invention makes image information transfer easier and reduces both the amount of data stored and the amount of information transmitted. Specifically: 1. no large volume of repetitive, redundant images is sent back; 2. the small amount of transmitted information allows a low data rate and a long transmission range; 3. reconstructing the map in real time during flight lets the aircraft make decisions and fly autonomously, which can be used for obstacle avoidance, path planning, and the like.

Additional aspects and advantages of the invention will be set forth in part in the description that follows, and in part will be obvious from the description or may be learned by practice of the invention.

Description of drawings

The above and/or additional aspects and advantages of the present invention will become apparent and comprehensible from the description of the embodiments in conjunction with the following drawings, in which:

Figure 1: schematic of determining pixel depth by triangulated projection;

Figure 2: schematic of the direct image alignment method;

Figure 3: schematic of failure recovery (relocalization);

Figure 4: schematic of the graph optimization method;

Figure 5: schematic of fitting satellite navigation data;

Figure 6: a frame of raw image captured by the camera;

Figure 7: a 3D point cloud reconstructed in real time;

Figure 8: a map reconstructed online in real time using the UAV;

Figure 9: the stitched map;

Figure 10: schematic of the airborne end of the UAV online real-time map reconstruction system;

Figure 11: schematic of the ground end of the UAV online real-time map reconstruction system.

Detailed description

Embodiments of the present invention are described in detail below; the embodiments are exemplary and intended to explain the present invention, and should not be construed as limiting it.

The hardware implementation of this embodiment consists of two main parts: the first operates at the airborne end, as shown in Figure 10, and the second at the ground end, as shown in Figure 11.

Throughout the process, the onboard micro-computer and the flight control system exchange information over USB-UART, which is used mainly to pass the flight controller's GPS information to the onboard micro-computer. Information from the flight control system and the onboard micro-computer is transmitted in real time through the airborne DDL link to the ground end for display.

The specific steps of this embodiment are as follows.

The task at the airborne end is to reconstruct the map in real time during flight:

Step 1: Acquire images

The UAV's onboard camera captures a sequence of images and passes them to the onboard micro-computer over USB 2.0 or USB 3.0; this is much faster than remote transmission.

Step 2: The onboard micro-computer processes the first frame captured by the camera to obtain an initial map:

Step 2.1: Undistort the first frame using previously acquired calibration data, obtaining the undistorted first frame for subsequent processing.

Step 2.2: Initialize the depth of the undistorted first frame: using the preset intensity-gradient threshold, select the pixels of the undistorted first frame whose intensity gradient exceeds the threshold and assign each selected pixel a random depth value. This random initialization does not harm the later reconstruction accuracy, because after a few dozen frames of processing the depth map gradually converges to an accurate model.

Step 2.3: Using the camera parameters, back-project the pixels assigned depth values in step 2.2 into the three-dimensional environment to obtain the initial map.

Step 2.4: Set the undistorted first frame as the key frame; extract and store its feature points. Feature detection methods include (1) SIFT, (2) SURF, and (3) fast corner detection (Gaussian filtering followed by corner detection). Since the present invention is intended to generate a three-dimensional map of the environment in real time, the program emphasizes real-time performance; the first two methods, although accurate and effective, take too long and are unsuitable for real-time operation, so fast corner detection is used in this step and all subsequent steps.

Step 3: Process the i-th frame (i = 2, 3, 4, ...) captured in real time by the camera as follows:

Step 3.1: Undistort the i-th frame to obtain the undistorted i-th frame.

Step 3.2: Once the i-th frame's image data has been loaded, the system uses the current key frame as the reference (i.e., the tracking basis) and aligns the undistorted i-th frame with the reference image to obtain the pose change from the i-th frame to the current key frame.

Because the frame rate is high, the difference (in time and space) between adjacent frames is small. We therefore assume that the photometric value of a given pixel does not change over the short interval between two frames (the change is negligible). The pose change between the two frames is obtained directly by comparing their photometric values and minimizing the photometric error, which completes the image alignment:

first set an initial value for the pose change from the i-th frame to the current key frame;

according to this pose change, back-project the pixels of the current key frame whose intensity gradient exceeds the threshold into the three-dimensional environment, then project them from the three-dimensional environment onto the undistorted i-th frame to obtain the projected points; find, on the undistorted i-th frame, the points corresponding to those selected key-frame pixels; compute the sum of photometric residuals between the projected points and the corresponding points; and iteratively update the pose change from the i-th frame to the current key frame so as to minimize that residual sum.

Because the frame rate is high, the pose change between consecutive frames is approximately constant over a small region, so the pose change from the (i-1)-th frame to the current key frame can be used as the initial value of the pose change from the i-th frame to the current key frame.

The pixels are back-projected into the three-dimensional environment using the camera's parameter matrix so that, through triangulation, the pose change between the current frame (the i-th frame) and the current key frame can be computed, namely the SE(3) transform. SE(3) is a 4x4 matrix representing the change in position and attitude (in the camera projection equation it is also called the extrinsic matrix).

This matrix has two main parts: the entries a_00 through a_22 form SO(3), representing the change of attitude (angles) in three-dimensional space, while T_0 through T_2 represent the change of position, i.e., the change in (x, y, z). SIM(3) consists of SE(3) plus a scale parameter: multiplying SO(3) by s turns SE(3) into SIM(3), where s is the scale parameter used for the affine transformation.
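The matrices referred to above did not survive extraction; reconstructed from the surrounding description (the a_ij rotation block and the T translation entries), the SE(3) transform and its SIM(3) extension plausibly have the standard form:

```latex
T_{SE(3)} =
\begin{pmatrix}
a_{00} & a_{01} & a_{02} & T_0 \\
a_{10} & a_{11} & a_{12} & T_1 \\
a_{20} & a_{21} & a_{22} & T_2 \\
0 & 0 & 0 & 1
\end{pmatrix},
\qquad
T_{SIM(3)} =
\begin{pmatrix}
s\,a_{00} & s\,a_{01} & s\,a_{02} & T_0 \\
s\,a_{10} & s\,a_{11} & s\,a_{12} & T_1 \\
s\,a_{20} & s\,a_{21} & s\,a_{22} & T_2 \\
0 & 0 & 0 & 1
\end{pmatrix}
```

where the 3x3 block (a_ij) is the SO(3) rotation, (T_0, T_1, T_2) is the translation, and s is the scale parameter.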

Step 3.3: From the camera pose of the current key frame and the pose change from frame i to the current key frame obtained in step 3.2, obtain the camera pose of frame i and the camera position in local coordinates.

Step 3.4: According to the set gray-gradient threshold, select the pixels of the undistorted frame-i image whose gray gradient exceeds the threshold, and back-project the selected pixels into the 3-D environment using the camera parameters and the frame-i camera pose from step 3.3 to obtain their depth values; then add the selected pixels with depth values to the map.

Because each stage of the whole system depends strongly on the preceding ones, graph optimization is applied here to correct the pose at every step and reduce the influence of sensor errors on the final result: the frame-i camera position in local coordinates and the positions of the selected pixels with depth values are optimized, and the optimized pixels with depth values are added to the map. The well-known graph-optimization objective is:

argmin_{x_k} Σ_k e_k(x_k, z_k)^T Ω_k e_k(x_k, z_k)

x_k denotes the nodes defined above (which can be understood as states), z_k the edges (constraints), e_k the degree to which the nodes satisfy the constraints (without noise, e_k = 0), and Ω_k the information matrix, i.e. the confidence of each constraint: a constraint with a large error receives a small confidence. Once the variables are defined, the goal is to minimize the whole error function so as to reach the global optimum. In this method the 3-D positions of the map points and the poses of the UAV are defined as nodes; the projection relations from images to map points and the SE(3) changes between adjacent frames are defined as edges; the information matrix combines two factors: the number of key frames that have observed a map point and the gray-level gradient of the image point.
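The objective above can be evaluated numerically as in the generic sketch below; names are illustrative, and this is only the weighted error sum, not a solver (a real system would minimize it with a graph optimizer such as g2o):

```python
import numpy as np

def graph_error(edges):
    """Evaluate sum_k e_k^T * Omega_k * e_k.

    edges: list of (residual e_k as a 1-D array, information matrix Omega_k).
    A high-confidence constraint has a large Omega_k and thus a large weight.
    """
    return float(sum(e @ (omega @ e) for e, omega in edges))
```

For example, an edge with residual (1, 0) and unit information contributes 1.0, while an edge with residual (0, 2) but half confidence contributes 0.5·4 = 2.0.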

Further, in step 3.4, while adding the selected pixels with depth values to the map: if, after back-projection, map 3-D points already exist within a (preset small) neighborhood of the pixel's corresponding 3-D point, then that corresponding 3-D point and the existing map 3-D points in its neighborhood are all removed, and the weighted average of the back-projected point and those existing neighborhood points is added to the map instead.
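The merge rule can be sketched as below. This is an assumption-laden toy version: the names are invented and the "weighted average" is taken with uniform weights, since the text does not specify the weighting.

```python
import numpy as np

def merge_point(map_pts, new_pt, radius):
    """Add new_pt to the map, fusing it with existing points within radius."""
    map_pts = [np.asarray(p, float) for p in map_pts]
    new_pt = np.asarray(new_pt, float)
    near = [p for p in map_pts if np.linalg.norm(p - new_pt) < radius]
    if not near:
        return map_pts + [new_pt]               # no neighbor: just insert
    keep = [p for p in map_pts if np.linalg.norm(p - new_pt) >= radius]
    # remove the neighbors and the new point; insert their average instead
    merged = np.mean(near + [new_pt], axis=0)   # uniform weights (illustrative)
    return keep + [merged]
```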

Step 3.5: If the pose change from frame i to the current key frame obtained in step 3.2 exceeds the set pose-change threshold, replace the current key frame with frame i as the new key frame. In this embodiment, to raise the processing rate and reduce the amount of stored data, the replacement is performed only if, in addition, frame i is at least 15 frames after the current key frame.
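The key-frame decision of step 3.5, including the embodiment's 15-frame minimum gap, can be written as a small predicate (names and the threshold argument are illustrative):

```python
def should_promote(pose_change, frame_idx, keyframe_idx, pose_threshold):
    """True if frame_idx should replace keyframe_idx as the new key frame.

    Requires both a pose change above the threshold and a frame-number
    difference of at least 15, per the embodiment.
    """
    return pose_change > pose_threshold and (frame_idx - keyframe_idx) >= 15
```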

A key frame is established because its pose differs considerably from that of the previous key frame, so the 3-D environment information it observes also differs considerably from that of the previous key frame; it therefore serves as a reference used to extend the global map and to detect whether subsequent frames undergo large pose changes.

During tracking, "frame loss" may occur (a possible cause: the camera moves too fast, so the "gap" between the current frame and the current key frame becomes too large to track against the current key frame; if this is left unhandled, the maps created before and after the loss have no tight "connection" and all previous work becomes meaningless). Therefore, if the image alignment of step 3.2 cannot be carried out, failure recovery is performed:

Extract the feature points of the undistorted current-frame image and match them against the stored feature points of every key-frame image to find the key frame with the largest number of successful matches. If the number of successfully matched feature points is not more than 40% of the total number of feature points in that key frame, take the current frame as the first frame and return to step 2; otherwise, taking that key frame as reference, align the undistorted current-frame image with the reference image to obtain the pose change from the current frame to the current key frame;

From the camera pose of the reference and the pose change from the current frame to the current key frame, obtain the camera pose of the current frame;

According to the set gray-gradient threshold, select the pixels of the undistorted current-frame image whose gray gradient exceeds the threshold, and back-project them into the 3-D environment using the camera parameters and the current-frame camera pose to obtain their depth values; add the selected pixels with depth values to the map; then continue with step 3.
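The recovery decision just described, with its 40% matching criterion, can be sketched as follows. Feature points are modeled abstractly as sets of hashable ids, which is an illustrative simplification of descriptor matching; the names are invented.

```python
def relocalize_target(frame_features, keyframes):
    """Pick the key frame to relocalize against, or None to restart mapping.

    keyframes: list of stored key-frame feature sets. Returns the index of
    the best-matching key frame if more than 40% of its features matched,
    otherwise None (meaning: treat the current frame as frame 1, go to step 2).
    """
    best, best_matches = None, -1
    for idx, kf_feats in enumerate(keyframes):
        n = len(frame_features & kf_feats)   # count of matched feature points
        if n > best_matches:
            best, best_matches = idx, n
    if best_matches <= 0.4 * len(keyframes[best]):
        return None
    return best
```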

Because the 3-D environment reconstructed above is expressed in a local coordinate system and does not match the real 3-D environment at a unified scale, a fit based on satellite positioning signals is performed next, matching the reconstructed 3-D environment to the real one at a unified scale so that the reconstructed map can be applied more effectively.

During real-time map reconstruction, the trajectory X_n of the UAV in the world coordinate system at each frame time is obtained from the satellite positioning signal, where n denotes frame n; the camera position x_n in the local coordinate system is also obtained for every frame during reconstruction. Through the optimization function

argmin_δ Σ_{n=1}^{N} (T(x_n, δ) - X_n)²

the transformation matrix δ that minimizes the optimization function is obtained, where N is the total number of frames in the real-time reconstruction, T(x_n, δ) is the projection transform from the local coordinate system to the world coordinate system, and δ is the local-to-world transformation matrix. The reconstructed map is then transformed into the world coordinate system using the projection transform corresponding to the obtained δ.

In addition, the satellite positioning signal rate is often lower than the frame rate, and the satellite acquisition times are not strictly aligned with the image acquisition times. In that case, for each satellite acquisition time t_n the UAV trajectory X_n in the world coordinate system is obtained, and the camera position x_n in the local coordinate system at t_n is obtained by interpolating the camera positions of the frames immediately before and after t_n. Through the optimization function

argmin_δ Σ_{n=1}^{N} (T(x_n, δ) - X_n)²

the transformation matrix δ that minimizes the optimization function is obtained, where N is the total number of satellite acquisition points, T(x_n, δ) is the projection transform from the local coordinate system to the world coordinate system, and δ is the local-to-world transformation matrix. The reconstructed map is then transformed into the world coordinate system using the projection transform corresponding to the obtained δ.
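The trajectory fit can be illustrated with a deliberately simplified sketch. Here, purely for illustration, T(x, δ) is restricted to a pure translation, for which the least-squares minimizer of Σ_n ||x_n + δ − X_n||² has the closed form δ = mean(X_n − x_n); the interpolation helper mirrors the handling of satellite samples that fall between image frames. The general δ in the text is a full local-to-world transform, so this is only a toy instance with invented names.

```python
import numpy as np

def interp_position(t, frame_times, frame_positions):
    """Linearly interpolate the camera track (Nx3) at satellite timestamp t."""
    return np.array([np.interp(t, frame_times, frame_positions[:, k])
                     for k in range(frame_positions.shape[1])])

def fit_translation(local_xyz, world_xyz):
    """Closed-form least-squares translation mapping local points onto world points."""
    return np.mean(world_xyz - local_xyz, axis=0)
```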

Step 4: After the preset number of frames obtained in real time by the UAV's onboard camera has been processed, the onboard micro-computer transmits the generated map over the onboard DDL video link to the ground station for display. For example, the onboard micro-computer can be set to transmit the map generated so far after processing each preset frame, e.g. the 10th frame, the 20th frame, and so on.

The reconstruction results are given below, including tests on real-time and offline data, and compared with several current mainstream visual SLAM methods to evaluate system performance. All of the following tests were run on Mint 17 (based on Ubuntu 14.04, 64-bit) with 8 GB RAM and a 3.50 GHz CPU, without a GPU.

Table 1: Comparison of RGB-D benchmark results

Table 2: Comparison of absolute tracking error (unit: cm)

              This patented method   [3]      [4]      [5]     [6]
    fr2-xyz   1.46                   3.78     24.29    1.21    2.7
    Sim       0.36                   2.31     —        0.14    —
    Sim-desk  0.05                   1.54     —        0.26    —
    fr2-desk  4.53                   13.51    X        1.78    9.6

Note: method [1] is Dense RGB-D Odometry; [2] is PTAM (keypoint-based); [3] is semi-dense mono-VO; [4] is keypoint-based mono-SLAM; [5] is Direct RGB-D SLAM; [6] is keypoint-based RGB-D SLAM. "—" means no data were found for that method; "X" means tracking lost. The comparison data come from the Computer Vision Group. The comparison shows that the patented method is at an upper-middle level in position drift, angle drift, and absolute error. Compared with the other methods, our method captures more information about the environment; compared with fully dense methods, its accuracy still meets the usage standard, and it runs directly on a CPU with no GPU required.

In addition, real-time map construction with this method was verified on a UAV, as shown in Figures 8 and 9. In this system the UAV carries a high-definition camera and a micro-computer; running the method on the micro-computer computes a sparse point cloud of the ground in real time, and the captured images are stitched into a two-dimensional map in real time using the solved aircraft poses, the point cloud, and the feature points. This experiment demonstrates that the method of the invention can reconstruct a map in real time.

Although embodiments of the invention have been shown and described above, it should be understood that these embodiments are exemplary and are not to be construed as limiting the invention; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the invention without departing from its principles and spirit.

Claims (10)

1. A real-time online map generation method for a UAV, characterized by comprising the following steps:
Step 1: image acquisition. The UAV's onboard camera captures a series of images and passes them to the onboard micro-computer.
Step 2: the onboard micro-computer processes the first frame captured by the camera to obtain an initialized map:
Step 2.1: undistort the first frame to obtain the undistorted first frame;
Step 2.2: depth-initialize the undistorted first frame: according to a set gray-gradient threshold, select the pixels of the undistorted first frame whose gray gradient exceeds the threshold, and assign each selected pixel a random depth value;
Step 2.3: back-project the pixels assigned depth values in step 2.2 into the 3-D environment according to the onboard camera parameters, obtaining the initialized map;
Step 2.4: set the undistorted first frame as the key frame.
Step 3: process each frame i (i = 2, 3, 4, ...) obtained in real time by the onboard camera as follows:
Step 3.1: undistort frame i to obtain the undistorted frame-i image;
Step 3.2: taking the current key frame as reference, align the undistorted frame-i image with the reference image to obtain the pose change from frame i to the current key frame;
Step 3.3: from the camera pose of the current key frame and the pose change obtained in step 3.2, obtain the camera pose of frame i and the camera position in local coordinates;
Step 3.4: according to the set gray-gradient threshold, select the pixels of the undistorted frame-i image whose gray gradient exceeds the threshold, back-project them into the 3-D environment using the camera parameters and the frame-i camera pose from step 3.3 to obtain their depth values, and add the selected pixels with depth values to the map;
Step 3.5: if the pose change obtained in step 3.2 exceeds the set pose-change threshold, replace the current key frame with frame i as the new key frame.
Step 4: after the preset number of real-time camera frames has been processed, the onboard micro-computer transmits the generated map over the onboard DDL video link to the ground station for display.
2. The method of claim 1, characterized in that: feature points of every key-frame image are extracted and stored; if the image alignment of step 3.2 cannot be carried out, failure recovery is performed: extract the feature points of the undistorted current-frame image and match them against the stored feature points of every key frame to find the key frame with the most successful matches; if the number of successfully matched feature points is not more than 40% of that key frame's total feature points, take the current frame as the first frame and return to step 2; otherwise, taking that key frame as reference, align the undistorted current-frame image with the reference image to obtain the pose change from the current frame to the current key frame; from the reference camera pose and this pose change, obtain the camera pose of the current frame; according to the set gray-gradient threshold, select the pixels of the undistorted current-frame image whose gray gradient exceeds the threshold, back-project them into the 3-D environment using the camera parameters and the current-frame camera pose to obtain their depth values, and add the selected pixels with depth values to the map; then continue with step 3.
3. The method of claim 2, characterized in that feature points are extracted with a fast corner detection method.
4. The method of claim 1, characterized in that in step 3.4, while adding the selected pixels with depth values to the map, if map 3-D points already exist within a neighborhood of the 3-D point corresponding to a back-projected pixel, that corresponding 3-D point and the existing map 3-D points in its neighborhood are removed, and the weighted average of the back-projected point and those existing neighborhood points is added to the map instead.
5. The method of claim 1, characterized in that in step 3.5, the current key frame is replaced by frame i as the new key frame only if the pose change obtained in step 3.2 exceeds the set pose-change threshold and the frame-number difference between frame i and the current key frame is not less than 15.
6. The method of claim 1, characterized in that the image alignment of step 3.2 proceeds as follows: set an initial value for the pose change from frame i to the current key frame; according to this pose change, back-project the selected high-gradient pixels of the current key frame into the 3-D environment and project them onto the undistorted frame-i image, obtaining projection points; locate the corresponding points of those key-frame pixels in the undistorted frame-i image; compute the sum of photometric residuals between the projection points and the corresponding points; iteratively vary the pose change from frame i to the current key frame to minimize this residual sum.
7. The method of claim 6, characterized in that the pose change from frame i-1 to the current key frame is used as the initial value of the pose change from frame i to the current key frame.
8. The method of claim 1, characterized in that in step 3.4, after the depth values of the selected pixels are obtained, a graph-optimization method is used to optimize the frame-i camera position in local coordinates and the positions of the selected pixels with depth values, and the optimized pixels with depth values are added to the map.
9. The method of claim 1 or 8, characterized in that the reconstructed map is transformed into the world coordinate system: during real-time reconstruction, the UAV trajectory X_n in the world coordinate system at each frame time is obtained from the satellite positioning signal, n denoting frame n, and the camera position x_n in the local coordinate system is obtained for every frame; through the optimization function argmin_δ Σ_{n=1}^{N} (T(x_n, δ) - X_n)², the transformation matrix δ minimizing the function is obtained, where N is the total number of frames in the real-time reconstruction, T(x_n, δ) is the projection transform from the local coordinate system to the world coordinate system, and δ is the local-to-world transformation matrix; the reconstructed map is then transformed into the world coordinate system using the projection transform corresponding to the obtained δ.
10. The method of claim 1 or 8, characterized in that when the satellite positioning signal rate is lower than the frame rate, for each satellite acquisition time t_n the UAV trajectory X_n in the world coordinate system is obtained, and the camera position x_n in the local coordinate system at t_n is obtained by interpolating the camera positions of the frames immediately before and after t_n; through the optimization function argmin_δ Σ_{n=1}^{N} (T(x_n, δ) - X_n)², the transformation matrix δ minimizing the function is obtained, where N is the total number of satellite acquisition points, T(x_n, δ) is the projection transform from the local coordinate system to the world coordinate system, and δ is the local-to-world transformation matrix; the reconstructed map is then transformed into the world coordinate system using the projection transform corresponding to the obtained δ.
CN201610373105.0A 2016-05-31 2016-05-31 A real-time online map generation method for unmanned aerial vehicles Active CN106097304B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610373105.0A CN106097304B (en) 2016-05-31 2016-05-31 A real-time online map generation method for unmanned aerial vehicles

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610373105.0A CN106097304B (en) 2016-05-31 2016-05-31 A real-time online map generation method for unmanned aerial vehicles

Publications (2)

Publication Number Publication Date
CN106097304A true CN106097304A (en) 2016-11-09
CN106097304B CN106097304B (en) 2019-04-23

Family

ID=57229505

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610373105.0A Active CN106097304B (en) 2016-05-31 2016-05-31 A real-time online map generation method for unmanned aerial vehicles

Country Status (1)

Country Link
CN (1) CN106097304B (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108389477A (en) * 2018-03-05 2018-08-10 广州星唯信息科技有限公司 A kind of correction guidance method for driving training field training
CN108496134A (en) * 2017-05-31 2018-09-04 深圳市大疆创新科技有限公司 UAV return path planning method and device
CN108513648A (en) * 2017-06-19 2018-09-07 深圳市大疆创新科技有限公司 Map construction method, map construction system, unmanned aerial vehicle and control terminal
CN108648270A (en) * 2018-05-12 2018-10-12 西北工业大学 Unmanned plane real-time three-dimensional scene reconstruction method based on EG-SLAM
CN108812599A (en) * 2018-06-13 2018-11-16 仲恺农业工程学院 A kind of drone plant protection monitoring system and method for manual control
CN108846084A (en) * 2018-06-11 2018-11-20 成都山河空间信息技术有限公司 A kind of generation system and method for live-action map
WO2019100214A1 (en) * 2017-11-21 2019-05-31 深圳市大疆创新科技有限公司 Method, device, and unmanned aerial vehicle for generating output image
CN110047142A (en) * 2019-03-19 2019-07-23 中国科学院深圳先进技术研究院 No-manned plane three-dimensional map constructing method, device, computer equipment and storage medium
CN110285822A (en) * 2019-07-01 2019-09-27 东莞理工学院 Fusion application system and method of unmanned aerial vehicle mapping algorithm and unmanned vehicle navigation algorithm
CN110648398A (en) * 2019-08-07 2020-01-03 武汉九州位讯科技有限公司 Real-time ortho image generation method and system based on unmanned aerial vehicle aerial data
CN110779496A (en) * 2018-07-30 2020-02-11 阿里巴巴集团控股有限公司 Three-dimensional map construction system, method, device and storage medium
CN110858414A (en) * 2018-08-13 2020-03-03 北京嘀嘀无限科技发展有限公司 Image processing method and device, readable storage medium and augmented reality system
WO2020198963A1 (en) * 2019-03-29 2020-10-08 深圳市大疆创新科技有限公司 Data processing method and apparatus related to photographing device, and image processing device
CN111798373A (en) * 2020-06-11 2020-10-20 西安视野慧图智能科技有限公司 A Fast UAV Image Stitching Method Based on Local Plane Assumption and 6DOF Pose Optimization
CN113190012A (en) * 2021-05-10 2021-07-30 山东大学 Robot task autonomous planning method and system
CN115147683A (en) * 2022-07-08 2022-10-04 南京人工智能高等研究院有限公司 Training method, pose estimation method and device for pose estimation network model
CN116309821A (en) * 2023-01-05 2023-06-23 电子科技大学 A UAV localization method based on heterogeneous image registration
CN117478667A (en) * 2023-10-31 2024-01-30 重庆市规划和自然资源信息中心 Working method for providing three-dimensional map sharing data on line based on GIS positioning
CN119309550A (en) * 2024-12-16 2025-01-14 中国地质大学(北京) A surveying and mapping method based on low-altitude UAV remote sensing technology

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103247075A (en) * 2013-05-13 2013-08-14 北京工业大学 Variational mechanism-based indoor scene three-dimensional reconstruction method
CN103822635A (en) * 2014-03-05 2014-05-28 北京航空航天大学 Visual information based real-time calculation method of spatial position of flying unmanned aircraft
US20140225985A1 (en) * 2012-10-17 2014-08-14 DotProduct LLC Handheld portable optical scanner and method of using
CN104200523A (en) * 2014-09-11 2014-12-10 中国科学院自动化研究所 Large-scale scene three-dimensional reconstruction method for fusion of additional information
CN104537709A (en) * 2014-12-15 2015-04-22 西北工业大学 Real-time three-dimensional reconstruction key frame determination method based on position and orientation changes
CN105571588A (en) * 2016-03-10 2016-05-11 赛度科技(北京)有限责任公司 Method for building three-dimensional aerial airway map of unmanned aerial vehicle and displaying airway of three-dimensional aerial airway map

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
DOMINIK NEUMANN et al.: "Real-time RGB-D Mapping and 3-D Modeling on the GPU using the Random Ball Cover Data Structure", 《2011 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS》 *
FELIX ENDRES et al.: "3-D Mapping With an RGB-D Camera", 《IEEE TRANSACTIONS ON ROBOTICS》 *
XUEPING ZHU et al.: "Model of Collaborative UAV Swarm Toward Coordination and Control Mechanisms Study", 《ICCS 2015 INTERNATIONAL CONFERENCE ON COMPUTATIONAL SCIENCE》 *
ZHANG ZHENWEI et al.: "A Computer-Vision-Based Real-Time 3D Reconstruction Method for UAVs", 《MACHINERY & ELECTRONICS》 *
HUANG JINXIN et al.: "An Improved Real-Time 3D Map Creation Method for UAVs in Unknown Environments", 《MACHINERY & ELECTRONICS》 *

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108496134A (en) * 2017-05-31 2018-09-04 深圳市大疆创新科技有限公司 UAV return path planning method and device
CN108513648A (en) * 2017-06-19 2018-09-07 深圳市大疆创新科技有限公司 Map construction method, map construction system, unmanned aerial vehicle and control terminal
WO2018232576A1 (en) * 2017-06-19 2018-12-27 深圳市大疆创新科技有限公司 Map construction method, map construction system, unmanned aerial vehicle and control terminal
WO2019100214A1 (en) * 2017-11-21 2019-05-31 深圳市大疆创新科技有限公司 Method, device, and unmanned aerial vehicle for generating output image
CN110073403A (en) * 2017-11-21 2019-07-30 深圳市大疆创新科技有限公司 Output image generation method, device, and unmanned aerial vehicle
CN108389477A (en) * 2018-03-05 2018-08-10 广州星唯信息科技有限公司 Correction guidance method for driving-training field exercises
CN108648270B (en) * 2018-05-12 2022-04-19 西北工业大学 Unmanned aerial vehicle real-time three-dimensional scene reconstruction method capable of realizing real-time synchronous positioning and map construction
CN108648270A (en) * 2018-05-12 2018-10-12 西北工业大学 Unmanned plane real-time three-dimensional scene reconstruction method based on EG-SLAM
CN108846084A (en) * 2018-06-11 2018-11-20 成都山河空间信息技术有限公司 Live-action map generation system and method
CN108812599A (en) * 2018-06-13 2018-11-16 仲恺农业工程学院 Unmanned aerial vehicle plant protection monitoring system and method for manual control
CN108812599B (en) * 2018-06-13 2021-08-27 仲恺农业工程学院 Unmanned aerial vehicle plant protection monitoring system and method for manual control
CN110779496A (en) * 2018-07-30 2020-02-11 阿里巴巴集团控股有限公司 Three-dimensional map construction system, method, device and storage medium
CN110779496B (en) * 2018-07-30 2022-03-22 斑马智行网络(香港)有限公司 Three-dimensional map construction system, method, device and storage medium
CN110858414A (en) * 2018-08-13 2020-03-03 北京嘀嘀无限科技发展有限公司 Image processing method and device, readable storage medium and augmented reality system
CN110047142A (en) * 2019-03-19 2019-07-23 中国科学院深圳先进技术研究院 Unmanned aerial vehicle three-dimensional map construction method and apparatus, computer equipment, and storage medium
WO2020198963A1 (en) * 2019-03-29 2020-10-08 深圳市大疆创新科技有限公司 Data processing method and apparatus related to photographing device, and image processing device
CN110285822A (en) * 2019-07-01 2019-09-27 东莞理工学院 System and method for fused application of a UAV mapping algorithm and an unmanned-vehicle navigation algorithm
CN110648398B (en) * 2019-08-07 2020-09-11 武汉九州位讯科技有限公司 Real-time ortho image generation method and system based on unmanned aerial vehicle aerial data
CN110648398A (en) * 2019-08-07 2020-01-03 武汉九州位讯科技有限公司 Real-time ortho image generation method and system based on unmanned aerial vehicle aerial data
CN111798373A (en) * 2020-06-11 2020-10-20 西安视野慧图智能科技有限公司 A Fast UAV Image Stitching Method Based on Local Plane Assumption and 6DOF Pose Optimization
CN113190012A (en) * 2021-05-10 2021-07-30 山东大学 Robot task autonomous planning method and system
CN113190012B (en) * 2021-05-10 2022-08-12 山东大学 A method and system for autonomous planning of robot tasks
CN115147683A (en) * 2022-07-08 2022-10-04 南京人工智能高等研究院有限公司 Training method, pose estimation method and device for pose estimation network model
CN116309821A (en) * 2023-01-05 2023-06-23 电子科技大学 A UAV localization method based on heterogeneous image registration
CN117478667A (en) * 2023-10-31 2024-01-30 重庆市规划和自然资源信息中心 Working method for providing three-dimensional map sharing data on line based on GIS positioning
CN117478667B (en) * 2023-10-31 2024-05-28 重庆市规划和自然资源信息中心 Working method for providing three-dimensional map sharing data on line based on GIS positioning
CN119309550A (en) * 2024-12-16 2025-01-14 中国地质大学(北京) A surveying and mapping method based on low-altitude UAV remote sensing technology
CN119309550B (en) * 2024-12-16 2025-02-25 中国地质大学(北京) Mapping method based on low-altitude unmanned aerial vehicle remote sensing technology

Also Published As

Publication number Publication date
CN106097304B (en) 2019-04-23

Similar Documents

Publication Publication Date Title
CN105678754B (en) Unmanned aerial vehicle real-time map reconstruction method
CN106097304A (en) Unmanned aerial vehicle real-time online map generation method
CN105865454B (en) Unmanned aerial vehicle navigation method based on real-time online map generation
Lin et al. Autonomous aerial navigation using monocular visual‐inertial fusion
CN101598556B (en) Unmanned aerial vehicle vision/inertia integrated navigation method in unknown environment
Alonso et al. Accurate global localization using visual odometry and digital maps on urban environments
KR102130687B1 (en) System for information fusion among multiple sensor platforms
CN110515110B (en) Method, device, equipment and computer readable storage medium for data evaluation
CN114325634A (en) Method for extracting passable area in high-robustness field environment based on laser radar
CN113689499B (en) A visual rapid positioning method, device and system based on point-surface feature fusion
Luo et al. Fast terrain mapping from low altitude digital imagery
Lyu et al. Spins: A structure priors aided inertial navigation system
CN111936946A (en) A positioning system and method
Florea et al. TanDepth: Leveraging global DEMs for metric monocular depth estimation in UAVs
Wang et al. Micro aerial vehicle navigation with visual-inertial integration aided by structured light
CN105389819A (en) Robust semi-calibrating down-looking image epipolar rectification method and system
CN110160503A (en) Unmanned aerial vehicle scene-matching localization method taking elevation into account
Kupervasser et al. Robust positioning of drones for land use monitoring in strong terrain relief using vision-based navigation
CN119002538A (en) Laser radar line-imitating flight method and device based on ICP
Roh et al. Aerial image based heading correction for large scale SLAM in an urban canyon
CN115239899B (en) Pose map generation method, high-precision map generation method and device
Li et al. 3D semantic map construction based on point cloud and image fusion
CN119594980B (en) A method, apparatus, device, and storage medium for estimating the motion of an aircraft
Loeper et al. Visual localization in urban environments employing 3D city models
US20260017820A1 (en) Colored Point Cloud Based Refinement for Pose Estimation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant