
CN114659514B - A LiDAR-IMU-GNSS fusion positioning method based on voxel-based precise registration - Google Patents


Info

Publication number: CN114659514B
Application number: CN202210278286.4A
Other versions: CN114659514A (Chinese)
Inventors: 潘树国, 何璇, 高旺, 刘宏, 陆新宇
Assignee: Southeast University (application filed by Southeast University)
Legal status: Active (granted)

Classifications

    • G01C21/005: Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/165: Dead reckoning by integrating acceleration or speed (inertial navigation) combined with non-inertial navigation instruments
    • G01C21/1652: Inertial navigation combined with ranging devices, e.g. LIDAR or RADAR
    • G01S19/485: Combining a satellite-system position solution with a further system that is an optical or imaging system
    • G01S19/49: Combining a satellite-system position solution with a further system that is an inertial position system, e.g. loosely-coupled
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration


Abstract

The present invention discloses a LiDAR-IMU-GNSS fusion positioning method based on voxelized precise registration. First, a voxelized point cloud downsampling method based on curvature segmentation is proposed: points are coarsely classified by a curvature threshold, and a hash mapping function performs the voxel downsampling, preserving the spatial feature distribution of the source point cloud to a greater extent. Second, a point cloud registration model based on the nearest neighborhood between a point and a neighborhood point set is constructed, with an iteration termination threshold that reduces the probability of converging to a local optimum; this cuts the registration time of a single point cloud frame by an order of magnitude. Finally, an optimization-based LiDAR-IMU-GNSS fusion positioning model is constructed in which confidence-weighted GNSS observations globally correct the local pose estimates, achieving more continuous and more accurate pose estimation and map reconstruction in complex urban environments than comparable state-of-the-art methods.

Description

A LiDAR-IMU-GNSS fusion positioning method based on voxelized precise registration

Technical Field

The present invention belongs to the field of multi-sensor simultaneous localization and mapping, and specifically relates to a LiDAR-IMU-GNSS fusion positioning method based on voxelized precise registration.

Background

Accurate and robust navigation and positioning of the mobile carrier is one of the most fundamental technologies for any autonomous robotic system. In recent years, multi-source fusion positioning based on SLAM (Simultaneous Localization and Mapping) has received widespread attention from industry and researchers as a solution to the positioning problem of mobile observation platforms. SLAM-based multi-source fusion positioning schemes can be divided, according to the primary sensor, into vision-based SLAM and LiDAR-based SLAM. Owing to the strengths of the sensor, LiDAR-based SLAM can capture spatial fingerprint information at higher rates and with higher accuracy, making it better suited to the real-time, high-precision positioning requirements of vehicle-mounted platforms. LiDAR-based SLAM schemes, represented by LiDAR-inertial odometry (LIO), are therefore widely used to acquire three-dimensional geographic information of complex environments for carrier positioning and map reconstruction.

LiDAR point cloud registration is the key step in mobile-carrier pose estimation and directly determines the pose estimation and map reconstruction quality of the positioning method. Common point cloud registration methods include the Normal Distributions Transform (NDT), the Iterative Closest Point (ICP) algorithm, and their many variants. The core of NDT is to take the probability density functions of the source and target point clouds as the objective and to minimize the mismatch between them by nonlinear optimization. Although its real-time performance is good, it requires many points to build each covariance matrix and is therefore not robust in sparse regions of the point cloud. ICP, the other mainstream approach, achieves higher positioning accuracy, but it must re-search nearest neighbors and re-estimate the transformation matrix in every iteration; this high computational cost is difficult to accommodate on vehicle-mounted platforms with limited computing resources. In summary, a high-precision, real-time point cloud registration method with reduced computational cost, suitable for vehicle-mounted platforms, remains to be developed.

In addition, LIO relies only on local sensors, so a cumulative offset builds up between the local map used for current-frame pose estimation and the global map, which limits the positioning accuracy of LIO positioning-and-mapping schemes in large outdoor environments. Global observations from a Global Navigation Satellite System (GNSS) can provide a trustworthy global constraint to correct the LIO system. Traditional fusion methods introduce three-dimensional GNSS measurement factors into the optimization framework to assist LIO, but the measurements attached to a single keyframe are largely redundant, and GNSS factors added while driving through GNSS multipath areas are unreliable. To reduce data redundancy, existing methods screen GNSS corner-point factors to constrain local poses, but they do not consider straight road sections where corner points are scarce; measurement utilization is low, and applicability in large, complex outdoor environments is limited.

From the above analysis, an LIO-GNSS fusion scheme suitable for vehicle-mounted platforms is urgently needed. Its main technical points are: (1) real-time, high-precision point cloud registration at reduced computational cost; (2) global correction of the accumulated LIO error using GNSS while making full use of the GNSS measurements.

Summary of the Invention

To solve the above problems, the present invention discloses a LiDAR-IMU-GNSS fusion positioning method based on voxelized precise registration, which achieves higher positioning accuracy than other comparable methods in real urban environments and can provide accurate and continuous pose estimates for a vehicle-mounted platform driving through complex urban environments.

To achieve the above object, the technical solution of the present invention is as follows:

A LiDAR-IMU-GNSS fusion positioning method based on voxelized precise registration, comprising the following steps:

(1) Voxelized downsampling based on curvature segmentation: given a raw point cloud sequence collected by the LiDAR, traverse all points in the sequence and perform coarse clustering with a breadth-first method;

Specifically, a geometric angle threshold based on Euclidean distance is used to finely segment point cloud clusters of similar depth. Let O be the LiDAR scanning center, let p_a and p_b be two nearby edge points in a cluster with depths d_a and d_b (d_a > d_b), and let M be the number of points in the cluster containing point p_i. The roughness of p_i is then

c_i = || Σ_{j=1, j≠i}^{M} (d_j - d_i) || / (M · ||d_i||)   (1)

where d_i and d_j denote the range vectors of p_i and p_j.

Set a roughness threshold c_th. Traversing the M points, points with c_i > c_th are assigned to the edge feature point set and points with c_i ≤ c_th to the planar feature point set, and each set is downsampled separately. A hash-function mapping then replaces random sampling to rapidly downsample the coarsely classified clusters. Let p(x, y, z) be the coordinates of a feature point in voxel space and r the voxel grid size. The grid dimension along x is D_x = (x_max - x_min)/r, the x index of p within the grid is h_x = (x - x_min)/r, and the y and z directions are analogous.
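The edge/planar split described above can be sketched as follows. The LOAM-style roughness expression, the threshold value, and the helper names are illustrative assumptions for this sketch, not code from the patent:

```python
import math

def roughness(points, i):
    """Roughness (curvature proxy) of point i within its cluster.

    points: list of (x, y, z) range vectors in the sensor frame. The
    formula (norm of summed difference vectors, normalized by cluster
    size M and range) is a common LOAM-style reconstruction.
    """
    M = len(points)
    pi = points[i]
    # Sum of difference vectors between every other cluster point and p_i.
    sx = sum(p[0] - pi[0] for j, p in enumerate(points) if j != i)
    sy = sum(p[1] - pi[1] for j, p in enumerate(points) if j != i)
    sz = sum(p[2] - pi[2] for j, p in enumerate(points) if j != i)
    norm_pi = math.sqrt(pi[0] ** 2 + pi[1] ** 2 + pi[2] ** 2)
    return math.sqrt(sx * sx + sy * sy + sz * sz) / (M * norm_pi)

def classify(points, c_th):
    """Split a cluster into edge (high roughness) and planar (low) sets."""
    edges, planes = [], []
    for i, p in enumerate(points):
        (edges if roughness(points, i) > c_th else planes).append(p)
    return edges, planes
```

A point in the middle of a smooth run has near-zero roughness (the difference vectors cancel) and lands in the planar set, while boundary points accumulate a large residual sum.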

After obtaining the three-dimensional index of each feature point in voxel space, a hash function sorts the indices quickly and maps them into N containers (N = 80); the hash mapping function is:

hash(h_x, h_y, h_z) = (h_x + h_y·D_x + h_z·D_x·D_y) % N   (2)

To avoid hash collisions, the collision detection condition is set as:

hash(h_x, h_y, h_z) = hash(h'_x, h'_y, h'_z), where h_x ≠ h'_x | h_y ≠ h'_y | h_z ≠ h'_z   (3)

Once a hash collision is detected, the index value held in the current container is output, the container is cleared, and the new index value is placed in the container.
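A minimal sketch of this hash-mapped voxel downsampling, implementing Eq. (2) and the flush-on-collision rule of Eq. (3). The choice of the first point per voxel as its representative and the function name are assumptions made for illustration:

```python
def voxel_hash_downsample(points, r, n_bins=80):
    """Voxel downsampling via the hash mapping of Eqs. (2)-(3).

    Each point gets a 3-D voxel index (hx, hy, hz); indices are funneled
    through hash = (hx + hy*Dx + hz*Dx*Dy) % N. On a collision between
    two *different* voxel indices, the container is flushed: its current
    representative is emitted and the container is reused for the new voxel.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    zs = [p[2] for p in points]
    xmin, ymin, zmin = min(xs), min(ys), min(zs)
    Dx = int((max(xs) - xmin) / r) + 1  # grid dimension along x
    Dy = int((max(ys) - ymin) / r) + 1
    bins = {}   # hash value -> (voxel index, representative point)
    out = []
    for p in points:
        hx = int((p[0] - xmin) / r)
        hy = int((p[1] - ymin) / r)
        hz = int((p[2] - zmin) / r)
        key = (hx + hy * Dx + hz * Dx * Dy) % n_bins   # Eq. (2)
        idx = (hx, hy, hz)
        if key in bins and bins[key][0] != idx:        # collision, Eq. (3)
            out.append(bins[key][1])                   # flush old voxel
            bins[key] = (idx, p)
        elif key not in bins:
            bins[key] = (idx, p)
        # Same voxel: keep the first point as the voxel's representative.
    out.extend(rep for _, rep in bins.values())
    return out
```

Points falling in the same voxel collapse to a single representative, so the output density is bounded by the grid size r.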

(2) Point cloud registration based on smoothed voxelization

The transformation T is solved from the distribution of feature points within voxels, converting the problem of building a single-point-pair nearest-neighbor model with a tree structure into building a nearest-neighbor model between a point and a neighborhood point set. The two point cloud sequences A = {a_i} and B = {b_i} are first approximated as Gaussian distributions, a_i ~ N(â_i, C_i^A) and b_i ~ N(b̂_i, C_i^B), i ∈ (1, n), where C_i^A and C_i^B are the covariance matrices of the two sequences. Let the neighborhood point set of a_i be B_i = {b_j : ||b_j - T·a_i|| < λ}, with λ the neighborhood judgment threshold and N_i = |B_i|. The rigid-body transformation error ê_i is then computed as:

ê_i = (1/N_i) Σ_{b_j ∈ B_i} (b_j - T·a_i)   (4)

This step smooths over all neighboring points within the neighborhood of a_i. The maximum-likelihood estimate of the inter-frame pose transformation matrix T follows as

T* = argmin_T Σ_{i=1}^{n} N_i · ê_i^T (C_i^B + T·C_i^A·T^T)^{-1} ê_i   (5)

where N_i is the number of points in the neighborhood of a_i.

Beyond smoothing all neighboring points within the neighborhood of a_i, an iteration termination threshold ε is set to avoid becoming trapped in a local optimum after many iterations:

|RMSE_{k+1} - RMSE_k| > ε   (6)

where RMSE_{k+1} and RMSE_k are the root-mean-square errors after k+1 and k iterations, respectively. Iteration continues while condition (6) holds and terminates once |RMSE_{k+1} - RMSE_k| ≤ ε or the maximum number of iterations is reached.
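The termination rule of Eq. (6) can be illustrated with a deliberately simplified, translation-only registration loop. The patent's model estimates a full rigid transform with neighborhood smoothing; the mean-residual update and all names below are simplifying assumptions that only demonstrate the stopping logic:

```python
import math

def register_translation(src, tgt, eps=1e-6, max_iter=50):
    """Iterative alignment with the |RMSE_{k+1} - RMSE_k| <= eps stop rule."""
    t = [0.0, 0.0, 0.0]
    prev_rmse = float("inf")
    rmse = prev_rmse
    for _ in range(max_iter):
        # Nearest-neighbor correspondences under the current translation.
        resid = []
        for s in src:
            sp = tuple(s[d] + t[d] for d in range(3))
            q = min(tgt, key=lambda p: sum((p[d] - sp[d]) ** 2 for d in range(3)))
            resid.append(tuple(q[d] - sp[d] for d in range(3)))
        rmse = math.sqrt(
            sum(sum(r[d] ** 2 for d in range(3)) for r in resid) / len(resid)
        )
        if abs(prev_rmse - rmse) <= eps:  # Eq. (6): RMSE change below threshold
            break
        prev_rmse = rmse
        # Update the translation by the mean residual.
        for d in range(3):
            t[d] += sum(r[d] for r in resid) / len(resid)
    return tuple(t), rmse
```

Because the loop exits on the *change* in RMSE rather than on RMSE itself, it also terminates cleanly when the solution has stalled short of a perfect fit, which is the local-optimum safeguard the section describes.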

(3) Unification of the LiDAR-IMU-GNSS multi-sensor pose spaces;

Let T_I^G be the extrinsic transformation matrix from the IMU to the GNSS and T_L^I the extrinsic transformation matrix from the LiDAR to the IMU; since the hardware is rigidly mounted on the mobile carrier, both are constant after calibration. A spatial transformation parameter T_W (comprising a translation parameter t_W and a rotation parameter R_W) is introduced to associate the two pose spaces. The multi-source pose-space unification problem at time t is expressed as:

T_t^G = T_W · T_I^G · T_L^I · T_t^L   (7)

where T_t^L is the pose estimated in the local frame and T_t^G the corresponding pose in the global frame.

In the formula, the initial value of the spatial transformation parameter is set to the identity matrix; each time a GNSS factor is added and the global optimum is solved, its value for the next epoch is updated, correcting the accumulated offset between the local and global coordinate systems;
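The pose-space chaining above can be sketched with plain 4x4 homogeneous matrices. The extrinsic values (identity rotations, small translations) and all variable names are illustrative assumptions, not calibration values from the patent:

```python
def mat4_mul(A, B):
    """4x4 homogeneous-matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def make_T(R, t):
    """Build a 4x4 pose from a 3x3 rotation R and a translation t."""
    return [[R[0][0], R[0][1], R[0][2], t[0]],
            [R[1][0], R[1][1], R[1][2], t[1]],
            [R[2][0], R[2][1], R[2][2], t[2]],
            [0.0, 0.0, 0.0, 1.0]]

I3 = [[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]]

# Fixed extrinsics (calibrated once): LiDAR->IMU and IMU->GNSS.
T_L_I = make_T(I3, [0.1, 0.0, 0.0])
T_I_G = make_T(I3, [0.0, 0.2, 0.0])

# Local->global correction (identity at start-up, refreshed after each
# GNSS-factor optimization) chains a local LiDAR pose into the global frame.
T_W = make_T(I3, [0.0, 0.0, 0.0])
T_lidar_local = make_T(I3, [1.0, 0.0, 0.0])
T_global = mat4_mul(T_W, mat4_mul(T_I_G, mat4_mul(T_L_I, T_lidar_local)))
```

Because the extrinsics are constant after calibration, only the correction matrix at the head of the chain changes between epochs, which is exactly the update the section describes.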

(4) Global map construction

Using a graph optimization approach, global map construction is treated as a maximum a posteriori estimation problem, and the state vector within a sliding window is optimized nonlinearly. The global optimization function is constructed as:

X* = argmin_X { Σ_t ||r_LIO(X_t)||² + ρ · Σ_t ||r_GNSS(X_t)||² + Σ ||r_loop||² }   (8)

where ρ is the GNSS confidence, and the loop term involves the pose transformation matrix between the current global point cloud and the local point cloud derived after inter-frame local matching. The meaning of each sensor cost function in the formula is as follows.

① LiDAR-inertial odometry factor: from the local pose estimate of the LiDAR-inertial odometry, the carrier position and rotation in the local coordinate system at time t are obtained, from which the local residual factor r_LIO is constructed between consecutive states.

② GNSS factor: let the time interval between two GNSS observation epochs be Δt; the observations are time-aligned with the LIO pose estimates by interpolation. Given the GNSS measurement p̃_t^G in the ENU coordinate system and the LIO pose observation p_t^G in the global coordinate system, the GNSS residual factor is expressed as r_GNSS,t = p̃_t^G - p_t^G.

When the carrier moves into an area with good GNSS signal, the GNSS factor is added with the GNSS confidence as its weight so that GNSS observations are used fully and reliably; the GNSS confidence is determined by the number of valid visible GNSS satellites;
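The confidence weighting can be illustrated with a toy one-dimensional fusion. The linear satellite-count ramp and the closed-form blend are assumptions made for this sketch; the patent states only that the confidence is derived from the number of valid visible satellites and used as the GNSS factor weight:

```python
def gnss_confidence(n_sats, n_min=4, n_max=20):
    """Map a visible-satellite count to a [0, 1] confidence weight rho.

    Below n_min satellites no reliable fix is assumed (weight 0); the
    linear ramp up to n_max is an illustrative choice.
    """
    if n_sats <= n_min:
        return 0.0
    return min(1.0, (n_sats - n_min) / (n_max - n_min))

def fuse_position(p_lio, p_gnss, rho):
    """One-dimensional stand-in for the weighted optimization:
    minimize |p - p_lio|^2 + rho * |p - p_gnss|^2, solved in closed form."""
    return (p_lio + rho * p_gnss) / (1.0 + rho)
```

With zero confidence the fused estimate reduces to the pure LIO pose, and with full confidence the GNSS observation pulls the estimate halfway, mirroring how a low-weight GNSS factor barely perturbs the graph in multipath areas.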

③ Voxelized loop-closure factor: considering that the trajectory of the mobile carrier may revisit previous areas, a loop detection step is added to establish loop constraints between non-adjacent frames; the loop-closure factor is constructed from the registration model of Eq. (5) between the current frame and the revisited frame.

Beneficial effects of the present invention:

The present invention proposes a LiDAR-IMU-GNSS fusion positioning method based on voxelized precise registration. First, a voxelized point cloud downsampling method based on curvature segmentation is proposed, which performs coarse classification with a curvature threshold and voxel downsampling with a hash mapping function, preserving the spatial feature distribution of the source point cloud to a greater extent. Second, a point cloud registration model based on the nearest neighborhood between a point and a neighborhood point set is constructed, with an iteration termination threshold that reduces the probability of converging to a local optimum; the registration time of a single point cloud frame is reduced by an order of magnitude. Finally, a fusion positioning model based on graph optimization is constructed, in which confidence-weighted GNSS observations globally correct the local LIO poses. Experimental results show that in large real-world outdoor environments the root-mean-square of the absolute trajectory error of this method averages 2.317 m, an improvement in accuracy over comparable methods. The results fully demonstrate that the method of the present invention achieves more continuous and more accurate pose estimation and map reconstruction in complex urban environments than comparable state-of-the-art methods.

Brief Description of the Drawings

Figure 1 is a schematic diagram of multi-sensor pose-space unification;

Figure 2 is a box plot comparing the absolute trajectory error of the present system against similar methods;

Figure 3 shows the mapping results of the present system.

Detailed Description of the Embodiments

The present invention is further explained below with reference to the accompanying drawings and specific embodiments. It should be understood that the following embodiments are intended only to illustrate the invention and not to limit its scope.

As shown in the figures, the LiDAR-IMU-GNSS fusion positioning method based on voxelized precise registration of the present invention proceeds as follows:

(1)基于曲率分割的体素化降采样(1) Voxel-based downsampling based on curvature segmentation

本发明提出了一种基于曲率分割的体素化降采样方法。给定一组激光雷达采集的原始点云序列,遍历原始点云序列中的所有点,利用广度优先方法进行粗聚类。进一步地,利用基于欧氏距离的几何角度阈值对深度相近的点云簇进行细分割。设激光雷达扫描中心为O,点云簇中两个相近边缘点pa、pb,深度分别为da、db(da>db),设点pi所在点云簇中点云数为M,则pi点云粗糙度为:The present invention proposes a voxelized downsampling method based on curvature segmentation. Given a set of original point cloud sequences collected by a laser radar, all points in the original point cloud sequence are traversed and coarse clustering is performed using the breadth-first method. Furthermore, a geometric angle threshold based on Euclidean distance is used to finely segment point cloud clusters with similar depths. Assume that the laser radar scanning center is O, and the two close edge points p a and p b in the point cloud cluster have depths d a and d b (d a > d b ), respectively. Assume that the number of point clouds in the point cloud cluster where point p i is located is M, then the roughness of point cloud p i is:

设粗糙度阈值为遍历M,将/>的点归为边缘特征点集,将/>的点归为平面特征点集,分别进行降采样操作。本发明利用哈希函数映射值代替随机取样对曲率粗分类后的点云簇进行快速降采样。设一组点云序列中某特征点在体素空间中的坐标为p(x,y,z)。设体素网格尺寸为r,则体素网格在x方向的维数为Dx=(xmax-xmin)/r,p在体素网格内的x方向索引为hx=(x-xmin)/r,y、z方向同理。Set the roughness threshold to Traverse M and put/> The points are classified as edge feature points, and the /> The points are classified as plane feature point sets and down-sampled respectively. The present invention uses hash function mapping values instead of random sampling to quickly downsample the point cloud clusters after rough curvature classification. Suppose the coordinates of a feature point in a set of point cloud sequences in the voxel space are p(x, y, z). Suppose the voxel grid size is r, then the dimension of the voxel grid in the x direction is D x =(x max -x min )/r, and the x-direction index of p in the voxel grid is h x =(xx min )/r, and the same applies to the y and z directions.

得到特征点在体素空间中的三维索引后,采用哈希函数对特征点索引快速排序,将其映射到N个容器中(N=80),哈希映射函数为:After obtaining the three-dimensional index of the feature point in the voxel space, the hash function is used to quickly sort the feature point index and map it to N containers (N=80). The hash mapping function is:

hash(hx,hy,hz)=(hx+hy·Dx+hz·Dx·Dy)%N (2)hash( hx , hy , hz )=( hx + hy · Dx + hz · Dx · Dy )%N (2)

为避免哈希冲突,设冲突检测条件为:To avoid hash conflicts, the conflict detection condition is set as:

hash(hx,hy,hz)=hash(h'x,h'y,h'z)(hx≠h'x|hy≠h'y|hz≠h'z) (3)hash(h x ,hy y ,h z )=hash(h' x ,h' y ,h' z )(h x ≠h' x |hy ≠h' y |h z h' z ) (3)

一旦检测到哈希冲突,则将当前容器中的索引值输出并清空容器,将新索引值置入容器中。Once a hash conflict is detected, the index value in the current container is output and the container is cleared, and the new index value is placed in the container.

(2)基于平滑体素化处理的点云配准(2) Point cloud registration based on smooth voxelization

本发明提出了一种基于体素内特征点分布求解T的方法,将利用树状图构建单点对最近邻模型问题转化为构建点与邻域点集的最近邻模型。首先将两组点云序列近似为高斯分布,即其中i∈(1,n),/>和/>分别为两组点云序列协方差阵。设ai的邻域点集为/>其中,λ为邻域判断阈值,由此计算刚体变换误差为:This paper proposes a method to solve T based on the distribution of feature points within voxels, which transforms the problem of constructing a nearest neighbor model for a single point pair using a dendrogram into constructing a nearest neighbor model for a point and a set of neighboring points. First, the two sets of point cloud sequences are approximated as Gaussian distributions, that is, where i∈(1,n),/> and/> are the covariance matrices of two sets of point cloud sequences. Let the neighborhood point set of a i be/> Among them, λ is the neighborhood judgment threshold, which is used to calculate the rigid body transformation error for:

该步骤实现了对ai邻域内所有邻近点云的平滑处理。由此可推估帧间位姿变换矩阵T的最大似然估计如下,其中Ni为ai邻域内点云数量。This step realizes the smoothing of all neighboring point clouds in the neighborhood of ai . From this, the maximum likelihood estimate of the inter-frame pose transformation matrix T can be inferred as follows, where Ni is the number of point clouds in the neighborhood of ai .

除了对ai邻域内所有邻近点云的平滑处理外,为避免多次迭代后陷入局部最优的盲区,进一步地,设立迭代终止阈值ε如下:In addition to smoothing all neighboring point clouds in the neighborhood of ai , in order to avoid falling into the local optimal blind area after multiple iterations, the iteration termination threshold ε is further set as follows:

|RMSEk+1-RMSEk|>ε (6)|RMSE k+1 -RMSE k |>ε (6)

其中,RMSEk+1和RMSEk分别是前k+1次迭代和前k次迭代的均方根误差。当均方根误差的变化量的绝对值|RMSEk+1-RMSEk|≤ε或达到最大迭代次数时,迭代完成。Where RMSE k+1 and RMSE k are the root mean square errors of the first k+1 iterations and the first k iterations, respectively. The iteration is completed when the absolute value of the change in the root mean square error |RMSE k+1 -RMSE k |≤ε or the maximum number of iterations is reached.

(3)多传感器位姿空间统一(3) Multi-sensor pose space unification

本发明提出了一种适用于激光雷达-IMU-GNSS多传感器位姿空间统一的方法。设为IMU到GNSS的外参转换矩阵,/>为激光雷达到IMU的外参转换矩阵。由于硬件固连于移动载体,故二者标定后均为定值。引入空间转换参数/>(包括平移参数/>和旋转参数)关联两位姿空间。t时刻的多源位姿空间统一问题可表示为:The present invention proposes a method suitable for unifying the position and posture space of multiple sensors such as LiDAR, IMU and GNSS. is the external parameter conversion matrix from IMU to GNSS,/> is the external parameter conversion matrix from the laser radar to the IMU. Since the hardware is fixed to the mobile carrier, both are constant after calibration. Introducing the spatial conversion parameter/> (including translation parameters/> and rotation parameters ) associates two pose spaces. The problem of unifying multi-source pose spaces at time t can be expressed as:

In the formula, the initial value of the spatial transformation parameter is set to the identity matrix; each time a GNSS factor is added and the global optimal solution is computed, its value for the next epoch is updated, correcting the accumulated offset between the local and global coordinate systems.

(4) Global map construction

The present invention uses a graph optimization method, treating global map construction as a maximum a posteriori estimation problem and performing nonlinear optimization on the state vector within a sliding window. The global optimization function is constructed as follows:

where ρ is the GNSS confidence and the remaining term is the pose transformation matrix between the current global point cloud and the local point cloud derived from inter-frame local matching. The specific meaning of each sensor cost function in the formula is described below.
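The structure of the weighted sliding-window cost can be sketched as below. This is a schematic under assumptions: the residual callables, the squared-norm form, and the satellite-count-to-weight mapping are illustrative stand-ins (the patent only states that ρ is determined by the number of visible valid satellites, not the exact mapping).

```python
import numpy as np

def global_cost(states, lio_residuals, gnss_residuals, loop_residuals, rho):
    """Weighted sum of squared factor residuals over the sliding window:
    LIO factor + rho-weighted GNSS factor + loop-closure factor."""
    cost = sum(float(np.dot(r, r)) for r in lio_residuals(states))
    cost += rho * sum(float(np.dot(r, r)) for r in gnss_residuals(states))
    cost += sum(float(np.dot(r, r)) for r in loop_residuals(states))
    return cost

def satellite_confidence(n_sats, n_min=4, n_full=12):
    """Map the visible valid satellite count to a [0, 1] weight rho
    (assumed linear scheme; the patent does not specify the mapping)."""
    return float(np.clip((n_sats - n_min) / (n_full - n_min), 0.0, 1.0))
```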

① LiDAR-inertial odometry factor: from the LiDAR-inertial odometry local pose estimate, the carrier position and rotation in the local coordinate system at time t are obtained, from which the local residual factor is constructed as follows:

② GNSS factor: let the time interval between two frames of GNSS observations be Δt; time alignment with the LIO pose estimates is achieved by interpolation. Given the GNSS measurement in the ENU coordinate system and the LIO pose observation in the global coordinate system, the GNSS residual factor is expressed as follows:
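The interpolation-based time alignment mentioned above can be sketched as linear interpolation of position plus spherical linear interpolation (slerp) of orientation. The quaternion convention (w, x, y, z) and the function names are illustrative choices; the patent specifies only that alignment is done by interpolation.

```python
import numpy as np

def slerp(q0, q1, u):
    """Spherical linear interpolation between unit quaternions (w, x, y, z)."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = float(np.dot(q0, q1))
    if dot < 0.0:                # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:             # nearly parallel: fall back to lerp
        q = q0 + u * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - u) * theta) * q0 + np.sin(u * theta) * q1) / np.sin(theta)

def interpolate_pose(t, t0, t1, p0, p1, q0, q1):
    """Align an LIO pose stream to a GNSS timestamp t in [t0, t1]:
    linear interpolation of position, slerp of orientation."""
    u = (t - t0) / (t1 - t0)
    pos = (1 - u) * np.asarray(p0, float) + u * np.asarray(p1, float)
    return pos, slerp(q0, q1, u)
```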

When the carrier moves into an area with good GNSS signal, the GNSS factor is added with the GNSS confidence as its weight so that GNSS observations are used fully and reliably; the GNSS confidence is determined by the number of visible valid GNSS satellites.

③ Voxelized loop-closure factor: considering that the travel area of the mobile carrier may overlap itself, a loop-closure detection step is added to establish loop constraints between non-adjacent frames. From equation (5), the loop-closure factor can be constructed as follows:

The positioning accuracy of the proposed technical solution is quantitatively evaluated on public dataset experiments. The KITTI_RAW dataset is used to evaluate the positioning accuracy of the fusion method and to compare it with other state-of-the-art methods of the same type. Four different outdoor scenes are selected to verify the performance of the fusion method: urban environment, open area, highway, and forest road. The voxel grid size of the fusion method is set to 0.3×0.2×0.3, the maximum iteration threshold to 30, and the iteration termination tolerance to 1e-8, so as to meet real-time requirements while keeping the number of feature points participating in matching stable in sparse outdoor point cloud regions. The evaluation strategy compares the ground-truth trajectory with the pose estimates output by each method and computes the root-mean-square error of the absolute trajectory error (APE_RMSE).
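The APE_RMSE metric used in the evaluation can be sketched as follows. This is a simplified sketch: a full evaluation (as in common trajectory-evaluation tools) would first align the estimated trajectory to the ground truth, e.g. with a Umeyama fit, which is omitted here.

```python
import numpy as np

def ape_rmse(gt_xyz, est_xyz):
    """Root-mean-square of per-pose absolute translation errors between a
    ground-truth trajectory and an estimated trajectory (same length)."""
    errs = np.linalg.norm(np.asarray(gt_xyz, float) - np.asarray(est_xyz, float),
                          axis=1)
    return float(np.sqrt(np.mean(errs ** 2)))
```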

The dataset test results are shown in the table below:

Table 1. Absolute trajectory error (ATE_RMSE (m)) of each method on the KITTI dataset

As the table shows, thanks to precise point cloud registration and the multi-sensor weighted nonlinear optimization framework with GNSS global constraints, the average root-mean-square error of the absolute trajectory error of the proposed method is about 2.317 m, a clear reduction over the positioning errors of the three comparable state-of-the-art methods, all of which it outperforms in positioning accuracy. The experimental results show that the global optimization stage of the proposed method effectively suppresses local accumulated drift, driving the pose estimate toward the global optimum. Moreover, because local sensors are inherently immune to signal reflection and refraction, the method compensates for the positioning outliers caused by multipath effects in traditional GNSS positioning in urban environments, ensuring the integrity and reliability of the fusion system's positioning.

The test results show that the proposed LiDAR-IMU-GNSS fusion positioning method based on voxelized fine registration achieves higher positioning accuracy than comparable methods in real urban environments and can provide accurate and continuous pose estimates for vehicle-mounted platforms driving in complex urban environments.

Claims (5)

1. A LiDAR-IMU-GNSS fusion positioning method based on voxel precise registration comprises the following steps:
(1) A curvature-segmentation-based voxel downsampling method is adopted: given an original point cloud sequence acquired by a LiDAR, all points in the sequence are traversed and coarsely clustered with a breadth-first method;
Specifically, point cloud clusters of similar depth are finely segmented using a geometric angle threshold based on Euclidean distance; let the LiDAR scan center be O, let p_a and p_b be two adjacent edge points in a point cloud cluster with depths d_a and d_b, d_a > d_b, and let M be the number of points in the point cloud cluster containing a point p_i; the roughness of the point cloud at p_i is then:
A roughness threshold is set; traversing the M points, those whose roughness exceeds the threshold are classified into the edge feature point set and the remaining points into the planar feature point set, and each set is downsampled separately; a hash-function mapping value then replaces random sampling to rapidly downsample the point cloud clusters after coarse curvature classification; let the coordinates of a feature point of a point cloud sequence in voxel space be p(x, y, z) and the voxel grid size be r; the dimension of the voxel grid in the x direction is D_x = (x_max − x_min)/r, the x-direction index of p in the voxel grid is h_x = (x − x_min)/r, and the y and z directions are handled likewise;
after the three-dimensional indices of the feature points in voxel space are obtained, a hash function is adopted to rapidly sort the feature point indices, mapping them into N containers, N = 80; the hash mapping function is:
hash(h_x, h_y, h_z) = (h_x + h_y·D_x + h_z·D_x·D_y) % N (2)
to avoid hash collisions, the collision detection condition is set as:
hash(h_x, h_y, h_z) = hash(h'_x, h'_y, h'_z), (h_x ≠ h'_x | h_y ≠ h'_y | h_z ≠ h'_z) (3)
once a hash collision is detected, the index values in the current container are output and the container is emptied, and the new index value is placed into the container;
(2) Point cloud registration based on smoothing voxel processing
A method of solving T from the distribution of feature points within voxels is adopted, converting the problem of building a nearest-neighbor model for single point pairs with a tree structure into building a nearest-neighbor model between a point and its neighborhood point set; first, the two point cloud sequences are approximated as Gaussian distributions, where i ∈ (1, n) and the covariance matrices of the two point cloud sequences are given; the neighborhood point set of a_i is defined by a neighborhood decision threshold λ, from which the rigid-body transformation error is calculated as:
this step smooths all neighboring points in the neighborhood of a_i; the maximum likelihood estimate of the inter-frame pose transformation matrix T is derived as follows, where N_i is the number of points in the neighborhood of a_i;
in addition to smoothing all neighboring points in the neighborhood of a_i, to avoid becoming trapped in a local optimum after multiple iterations, an iteration termination threshold ε is set as follows:
|RMSE_{k+1} − RMSE_k| > ε (6)
wherein RMSE_{k+1} and RMSE_k are the root-mean-square errors of the first k+1 and the first k iterations, respectively; the iteration is complete when |RMSE_{k+1} − RMSE_k| ≤ ε or the maximum number of iterations is reached;
(3) A method for unifying the pose spaces of the LiDAR-IMU-GNSS multi-sensor system is adopted;
the extrinsic transformation matrix from the IMU to the GNSS and the extrinsic transformation matrix from the LiDAR to the IMU are given; since the hardware is rigidly mounted on the mobile carrier, both are constant after calibration; a spatial transformation parameter, comprising a translation parameter and a rotation parameter, is introduced to associate the two pose spaces; the multi-source pose space unification problem at time t is expressed as:
wherein the initial value of the spatial transformation parameter is set to the identity matrix, and each time a GNSS factor is added and the global optimal solution is computed, the value for the next epoch is updated, correcting the accumulated offset between the local and global coordinate systems;
(4) Global map construction
A graph optimization method is utilized, global map construction is treated as a maximum a posteriori estimation problem, and nonlinear optimization is performed on the state vectors within a sliding window; the global optimization function is constructed as follows:
wherein ρ is the GNSS confidence and the remaining term is the pose transformation matrix between the current global point cloud and the local point cloud derived from inter-frame local matching; the specific meaning of each sensor cost function in the formula is as follows;
① LiDAR-inertial odometry factor: from the LiDAR-inertial odometry local pose estimate, the carrier position and rotation in the local coordinate system at time t are obtained, from which the local residual factor is constructed as follows:
② GNSS factor: the time interval between two frames of GNSS observations is set to Δt, and time alignment with the LIO pose estimates is achieved by interpolation; given the GNSS measurement in the ENU coordinate system and the LIO pose observation in the global coordinate system, the GNSS residual factor is expressed as follows:
when the carrier moves into an area with good GNSS signal, the GNSS factor is added with the GNSS confidence as its weight so that GNSS observations are used fully and reliably, the GNSS confidence being determined by the number of visible valid GNSS satellites;
③ voxelized loop-closure factor: considering that the travel area of the mobile carrier may overlap itself, a loop-closure detection step is added to establish loop constraints between non-adjacent frames; the loop-closure factor is constructed from equation (5) as follows:
2. The LiDAR-IMU-GNSS fusion positioning method based on voxel fine registration according to claim 1, wherein the curvature-segmentation-based point cloud voxel downsampling method of step (1) first performs coarse classification with a curvature threshold and then performs point cloud voxel downsampling with a hash mapping function in place of random sampling, preserving the spatial feature distribution of the source point cloud to a greater extent.
3. The LiDAR-IMU-GNSS fusion positioning method based on voxel fine registration according to claim 1, wherein the point cloud registration based on smoothed voxel processing of step (2) smooths the point cloud sequence with respect to the one-to-many distributions within it, constructs a point cloud registration model based on the nearest neighbors of a point and its neighborhood point set, and sets an iteration termination threshold to reduce the probability of converging to a local optimum.
4. The LiDAR-IMU-GNSS fusion positioning method based on voxel fine registration according to claim 1, wherein in the multi-sensor pose space unification method of step (3), the pose estimation results of the LiDAR and IMU in the local map and the GNSS measurements in the global map are spatially unified by a lever-arm compensation method, and the pose transformation matrix for the next epoch is updated after each addition of a GNSS factor and solution of the global optimum, correcting the accumulated offset between the local and global coordinate systems.
5. The LiDAR-IMU-GNSS fusion positioning method based on voxel fine registration according to claim 1, wherein step (4) adopts a graph-optimization-based global map construction framework, with weighted GNSS global position observations and voxelized loop-closure factors as global constraints to suppress the accumulated drift of the local sensors; the historical trajectory is optimized and corrected through registration between the prior local map point cloud and the current global point cloud using the smoothed voxel point cloud registration method, so that the pose estimate converges to the globally optimal result.
CN202210278286.4A 2022-03-21 2022-03-21 A LiDAR-IMU-GNSS fusion positioning method based on voxel-based precise registration Active CN114659514B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210278286.4A CN114659514B (en) 2022-03-21 2022-03-21 A LiDAR-IMU-GNSS fusion positioning method based on voxel-based precise registration


Publications (2)

Publication Number Publication Date
CN114659514A CN114659514A (en) 2022-06-24
CN114659514B true CN114659514B (en) 2024-05-31

Family

ID=82032309



Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115407357B (en) * 2022-07-05 2024-05-31 东南大学 Low-beam LiDAR-IMU-RTK positioning and mapping algorithm based on large scenes
CN115236708B (en) * 2022-07-25 2024-10-11 重庆长安汽车股份有限公司 Vehicle position and posture state estimation method, device, equipment and storage medium
CN115508844B (en) * 2022-11-23 2023-03-21 江苏新宁供应链管理有限公司 Intelligent detection method for deviation of logistics conveyor based on laser radar
AU2022487083A1 (en) * 2022-11-23 2025-06-26 Dconstruct Technologies Pte. Ltd. Device and method for performing localization of a robot device
CN115540875B (en) * 2022-11-24 2023-03-07 成都运达科技股份有限公司 Method and system for high-precision detection and positioning of train vehicles in station track
CN115631314B (en) * 2022-12-19 2023-06-09 中汽研(天津)汽车工程研究院有限公司 A point cloud map construction method based on multi-features and adaptive keyframes
CN116007634A (en) * 2022-12-28 2023-04-25 上海航天控制技术研究所 Star table inspection device autonomous navigation method for cross-time domain feature search
CN116168171B (en) * 2023-03-02 2025-05-30 西北工业大学 Real-time dense reconstruction method for clustered unmanned aerial vehicle
CN118379526B (en) * 2024-06-26 2024-10-22 集美大学 Multi-feature point cloud matching method based on probability distribution sampling constraint
CN118887100B (en) * 2024-07-09 2025-03-28 重庆第二师范学院 A 3D point cloud fusion method based on manifold embedding

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107709928A * 2015-04-10 2018-02-16 European Atomic Energy Community, represented by the European Commission Method and apparatus for real-time mapping and localization
CN111710027A (en) * 2020-05-25 2020-09-25 南京林业大学 A 3D geometric reconstruction method of tunnel considering data-driven segment segmentation and model-driven segment assembly
CN111949943A (en) * 2020-07-24 2020-11-17 北京航空航天大学 A Vehicle Fusion Localization Method for V2X and Laser Point Cloud Registration for Advanced Autonomous Driving
CN114140761A (en) * 2020-08-13 2022-03-04 长沙智能驾驶研究院有限公司 Point cloud registration method and device, computer equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108230379B * 2017-12-29 2020-12-04 Baidu Online Network Technology (Beijing) Co., Ltd. Method and device for fusing point cloud data


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A review of LiDAR-based SLAM and fusion positioning methods; Dai Kai et al.; Qiche Wenzhai (Automotive Digest); 2021-12-31; pp. 1-8 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant