
CN114862932B - BIM global positioning-based pose correction method and motion distortion correction method - Google Patents


Info

Publication number
CN114862932B
Authority
CN
China
Prior art keywords
bim
pose
point cloud
global
correction method
Prior art date
Legal status
Active
Application number
CN202210694055.1A
Other languages
Chinese (zh)
Other versions
CN114862932A (en)
Inventor
刘伟
蔡晨星
高婷
毛如轩
陈雪辉
李昊
黄磊
景甜甜
Current Assignee
Anhui Jianzhu University
Original Assignee
Anhui Jianzhu University
Priority date
Filing date
Publication date
Application filed by Anhui Jianzhu University
Priority to CN202210694055.1A
Publication of CN114862932A
Application granted
Publication of CN114862932B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Optics & Photonics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention relates to a pose correction method and a motion distortion correction method based on BIM global positioning. The pose correction method based on BIM global positioning comprises: spatially dividing a BIM global point cloud map into different three-dimensional grids; counting the number of points in each three-dimensional grid and calculating the grid's multi-dimensional normal distribution parameters; transforming the original coordinates of the real-time lidar data into the BIM global point cloud map coordinate system through a transformation matrix T; calculating the probability density of the transformed lidar points from the normal distribution parameters and establishing an NDT registration function from the probability density; solving the registration function; publishing the real-time positioning pose of the robot under the BIM global point cloud map; and performing ESKF fusion of the lidar predicted pose and the real-time positioning pose to obtain the corrected pose. The invention registers the extracted BIM global point cloud map against the point cloud data of the real-time lidar frame, eliminating the pose error estimated by local sensors.

Description

Pose Correction Method and Motion Distortion Correction Method Based on BIM Global Positioning

Technical Field

The invention relates to the technical field of indoor robots, and in particular to a pose correction method based on BIM global positioning and a motion distortion correction method based on BIM global positioning.

Background

An existing indoor robot can infer its pose transformation from the data collected by an IMU sensor and use the computed pose transformation to correct the radar data, achieving a distortion-correction effect. Integrating the IMU measurements yields a dead-reckoned pose, but wheel slippage, IMU drift, and similar phenomena during actual operation make the dead-reckoned pose inaccurate. If laser motion distortion correction for an indoor robot relies only on local sensors such as the IMU, it lacks global observation data and cannot eliminate the errors caused by wheel slippage, IMU drift, and the like.
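
As a minimal illustration of this dead reckoning (a Python sketch; the planar state layout and the function name are assumptions, not taken from the patent), integrating gyroscope and accelerometer samples step by step shows how any sensor bias accumulates into unbounded pose drift:

```python
import numpy as np

def imu_step(x, y, yaw, vx, vy, gyro_z, acc_body, dt):
    """One planar dead-reckoning step: integrate the yaw rate, rotate the
    body-frame acceleration into the world frame, then integrate twice.
    A constant bias in gyro_z or acc_body grows into unbounded pose drift."""
    yaw += gyro_z * dt
    c, s = np.cos(yaw), np.sin(yaw)
    ax = c * acc_body[0] - s * acc_body[1]
    ay = s * acc_body[0] + c * acc_body[1]
    vx += ax * dt
    vy += ay * dt
    x += vx * dt
    y += vy * dt
    return x, y, yaw, vx, vy
```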

Summary of the Invention

Accordingly, to address the inaccurate motion distortion correction caused by errors from wheel slippage, IMU drift, and similar phenomena, the invention provides a pose correction method based on BIM global positioning and a motion distortion correction method based on BIM global positioning.

To achieve the above object, the invention adopts the following technical solutions.

A pose correction method based on BIM global positioning, comprising the following steps:

acquiring a BIM global point cloud map, and spatially dividing the BIM global point cloud map into different three-dimensional grids;

counting the number of points in each of the three-dimensional grids, and calculating the multi-dimensional normal distribution parameters of each grid, namely the point cloud mean and the covariance matrix:

$$q = \frac{1}{n}\sum_{k=1}^{n} x_k, \qquad \Sigma = \frac{1}{n}\sum_{k=1}^{n} \left(x_k - q\right)\left(x_k - q\right)^{T}$$

where n denotes the number of divided three-dimensional grids, k denotes the serial number running over n, x_k denotes the positions of the scan points in the grid, the superscript T in the formula denotes transposition, and T below denotes the transformation matrix from the lidar coordinate system to the BIM global point cloud map coordinate system;

transforming the original coordinates of the real-time lidar data into the BIM global point cloud map coordinate system through the transformation matrix T to obtain the coordinates in the BIM global point cloud map; the specific transformation is

$$[x'_i,\, y'_i,\, 1]^{T} = T\,[x_i,\, y_i,\, 1]^{T}, \qquad T = \begin{bmatrix} \cos\varphi & -\sin\varphi & t_x \\ \sin\varphi & \cos\varphi & t_y \\ 0 & 0 & 1 \end{bmatrix}$$

where [x_i, y_i] are the coordinates of the i-th real-time radar point in the original coordinate system, [x'_i, y'_i] are its coordinates after transfer by the transformation matrix into the BIM global point cloud map coordinate system, φ denotes the rotation angle, t_x denotes the distance from the origin of the lidar coordinate system along the x direction of the BIM global point cloud map coordinate system, and t_y the distance along the y direction;

calculating the probability density of each transformed lidar point from the multi-dimensional normal distribution parameters of its three-dimensional grid,

$$p(x'_i) \sim \exp\left(-\frac{(x'_i - q)^{T}\,\Sigma^{-1}\,(x'_i - q)}{2}\right),$$

and establishing the NDT registration function from the probability densities,

$$S(P) = \sum_{i} p(x'_i),$$

i.e. the accumulation of the probability densities of the transformed points;

solving the registration function by searching for a set of transformation parameters that maximizes it, setting an iteration threshold and a maximum number of iterations, and ending the iteration when the convergence condition is reached;

publishing the real-time positioning pose of the robot under the BIM global point cloud map;

and acquiring the lidar predicted pose, and performing ESKF fusion of the lidar predicted pose and the real-time positioning pose to obtain the corrected pose of the robot.

Further, an IMU sensor kinematic model is established in advance, and the lidar predicted pose is inferred from the kinematic model and the data collected by the IMU sensor in real time.

Further, an accurate building information model (BIM) is acquired, feature extraction is performed on the BIM to obtain three-dimensional space feature points, and the three-dimensional space feature points are converted into the BIM global point cloud map.

Further, the features are component information, including size, spatial position, and shape.

Further, the registration function is solved with an optimization algorithm, the optimization algorithm including a Newton iteration algorithm.

The invention also provides a motion distortion correction method based on BIM global positioning, comprising the following steps:

acquiring the pose of the robot within a single lidar scan frame, performing linear approximation and interpolation on the pose, and obtaining a compensation transformation matrix;

correcting the coordinates of all laser points within the single frame, expressed in the reference coordinate system, through the compensation transformation matrix to obtain the corrected laser point coordinates, and packaging the point cloud data of the laser point coordinates to complete the correction;

characterized in that,

before the compensation transformation matrix is obtained, the pose is corrected, the pose correction adopting the aforementioned pose correction method based on BIM global positioning.

Further, the relative time of each point with respect to the lidar coordinate system at the initial moment of the single frame is calculated from the horizontal angle of each point in the lidar ordered point cloud; the linearly approximated pose transformation is interpolated at that relative time, and the compensation transformation matrix of each laser point in the lidar coordinate system is calculated, giving the compensation transformation matrix.

Further, obtaining the linearly approximated pose transformation comprises the following steps:

acquiring the corrected pose data, aligning the pose data with the lidar point cloud in time, and performing linear approximation on the pose transformation within the single-frame point cloud time to obtain the linearly approximated pose transformation.

Further, a reference coordinate system O_0XY is established with the lidar position at the initial moment of the single frame as its origin, and the coordinate system established at any other moment of the single frame is the lidar coordinate system O_tXY.

Further, lidar point cloud data are acquired and converted into an ordered point cloud according to radar beam and scanning time, giving the lidar ordered point cloud.

The technical solution provided by the invention has the following beneficial effects:

The prior map required by the robot can be obtained from the building information model. By registering the prior map against the real-time lidar scan data, global positioning information for the robot is established, eliminating the pose error estimated by local sensors and correcting the pose error inferred by the IMU, thereby correcting lidar motion distortion with higher precision.

Description of the Drawings

Fig. 1 is a flow chart of the pose correction method based on BIM global positioning of the invention;

Fig. 2 is a flow chart of extracting the global point cloud map from the BIM model in Fig. 1;

Fig. 3 is a schematic diagram of the lidar pose registration in Fig. 1;

Fig. 4 is a schematic diagram of the BIM positioning data and IMU sensor frequencies in Fig. 1;

Fig. 5 is a schematic diagram of the global positioning pose correction in Fig. 1;

Fig. 6 is a flow chart of the motion distortion correction method based on BIM global positioning, building on Fig. 1;

Fig. 7 is a schematic diagram of the reference coordinate system in Fig. 6;

Fig. 8 is an overall flow chart of the motion distortion correction method based on BIM global positioning, building on Fig. 6.

Detailed Description

The technical solutions in the embodiments of the invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by persons of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of the invention.

As shown in Fig. 1, this embodiment provides a pose correction method based on BIM global positioning, comprising the following steps:

acquiring the BIM global point cloud map, and spatially dividing the BIM global point cloud map into different three-dimensional grids;

counting the number of points in each three-dimensional grid, and calculating the grid's multi-dimensional normal distribution parameters, namely the point cloud mean and covariance matrix:

$$q = \frac{1}{n}\sum_{k=1}^{n} x_k, \qquad \Sigma = \frac{1}{n}\sum_{k=1}^{n} \left(x_k - q\right)\left(x_k - q\right)^{T}$$

where n denotes the number of divided three-dimensional grids, k denotes the serial number running over n, x_k denotes the positions of the scan points in the grid, the superscript T in the formula denotes transposition, and T below denotes the transformation matrix from the lidar coordinate system to the BIM global point cloud map coordinate system;

transforming the original coordinates of the real-time lidar data into the BIM global point cloud map coordinate system through the transformation matrix T to obtain the coordinates in the BIM global point cloud map; the specific transformation is

$$[x'_i,\, y'_i,\, 1]^{T} = T\,[x_i,\, y_i,\, 1]^{T}, \qquad T = \begin{bmatrix} \cos\varphi & -\sin\varphi & t_x \\ \sin\varphi & \cos\varphi & t_y \\ 0 & 0 & 1 \end{bmatrix}$$

where [x_i, y_i] are the coordinates of the i-th real-time radar point in the original coordinate system, [x'_i, y'_i] are its coordinates after transfer into the BIM global point cloud map coordinate system, φ denotes the rotation angle, t_x denotes the distance from the origin of the lidar coordinate system along the x direction of the BIM global point cloud map coordinate system, and t_y the distance along the y direction;

calculating the probability density of each transformed lidar point from the multi-dimensional normal distribution parameters of its three-dimensional grid,

$$p(x'_i) \sim \exp\left(-\frac{(x'_i - q)^{T}\,\Sigma^{-1}\,(x'_i - q)}{2}\right),$$

and establishing the NDT registration function from the probability density,

$$S(P) = \sum_{i} p(x'_i);$$

solving the registration function by searching for a set of transformation parameters that maximizes it, setting an iteration threshold and a maximum number of iterations, and ending the iteration when the convergence condition is reached;

publishing the real-time positioning pose of the robot under the BIM global point cloud map;

acquiring the lidar predicted pose, and performing ESKF fusion of the lidar predicted pose and the real-time positioning pose to obtain the corrected pose of the robot.

The above pose correction method is divided into three main modules: a BIM map extraction module, a BIM-based global positioning module, and a module that corrects the dead-reckoned pose with global positioning data.

For the BIM map extraction module, the specific operation steps are as follows.

S1: Obtain an accurate building information model (BIM). The BIM contains various kinds of building information; the robot's global positioning module mainly needs the geometric model in the BIM information to construct the global positioning map.

S2: Export the BIM file as an IFC file.

S3: Perform feature extraction on the exported IFC file according to the component information in the file, namely size, spatial position, and shape, extracting the feature points in three-dimensional space of entities such as walls, doors and windows, and floor slabs.

S4: Process the extracted three-dimensional space feature points and convert them into the required BIM global point cloud map.

Compared with a two-dimensional grid map, the three-dimensional BIM global point cloud map extracted by the above steps provides more information for the robot's real-time pose transformation. The BIM global point cloud map workflow is shown in Fig. 2.
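
A minimal sketch of steps S2-S4, assuming the ifcopenshell library is used to read the exported IFC file; the chosen element types and the use of triangulated mesh vertices as the three-dimensional feature points are illustrative assumptions, not prescribed by the patent:

```python
import numpy as np
import ifcopenshell
import ifcopenshell.geom

def bim_to_point_cloud(ifc_path, types=("IfcWall", "IfcDoor", "IfcWindow", "IfcSlab")):
    """Extract the mesh vertices of structural components (walls, doors,
    windows, slabs) from an IFC file and stack them into one global cloud."""
    model = ifcopenshell.open(ifc_path)
    settings = ifcopenshell.geom.settings()
    settings.set(settings.USE_WORLD_COORDS, True)  # vertices in the global BIM frame
    points = []
    for ifc_type in types:
        for element in model.by_type(ifc_type):
            try:
                shape = ifcopenshell.geom.create_shape(settings, element)
            except RuntimeError:
                continue  # element without usable geometry
            points.append(np.array(shape.geometry.verts).reshape(-1, 3))
    return np.vstack(points)
```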

For the BIM-based global positioning module, the principle is to register the extracted BIM global point cloud map against the point cloud data of the real-time lidar frame. Specifically, the BIM global point cloud is converted into normal distributions of multi-dimensional variables; the real-time lidar point cloud is transformed into the BIM global point cloud map according to the initial transformation parameters; the probability density of the transformed radar points in the reference frame is calculated; an optimizer solves for the transformation parameters of maximum probability density, with an iteration threshold and a maximum number of iterations set; and the real-time pose of the robot under the BIM global point cloud map is output. The specific flow is as follows.

S1: Spatially divide the BIM global point cloud map generated by the above module into different three-dimensional grids.

S2: Count the number of points in each grid and calculate the grid's multi-dimensional normal distribution parameters, namely the point cloud mean and covariance matrix:

$$q = \frac{1}{n}\sum_{k=1}^{n} x_k, \qquad \Sigma = \frac{1}{n}\sum_{k=1}^{n} \left(x_k - q\right)\left(x_k - q\right)^{T}$$

where n is the number of divided three-dimensional grids, k is the serial number running over n, x_k is the position of a scan point in the grid, the superscript T denotes transposition, and T below denotes the transformation matrix from the lidar coordinate system to the BIM global point cloud map coordinate system.
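
A NumPy sketch of S1-S2 (the cell size, the minimum point count, and the function name are illustrative assumptions):

```python
import numpy as np

def ndt_grid_params(points, cell_size=1.0, min_pts=5):
    """Divide a point cloud (N x D array) into cubic grids of edge
    cell_size and compute each grid's mean q and covariance Sigma."""
    cells = {}
    for key, p in zip(map(tuple, np.floor(points / cell_size).astype(int)), points):
        cells.setdefault(key, []).append(p)
    params = {}
    for key, pts in cells.items():
        if len(pts) < min_pts:       # too few points for a stable covariance
            continue
        pts = np.asarray(pts)
        q = pts.mean(axis=0)                   # q = (1/n) sum_k x_k
        d = pts - q
        params[key] = (q, d.T @ d / len(pts))  # Sigma = (1/n) sum_k (x_k - q)(x_k - q)^T
    return params
```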

S3: Transform the original coordinates of the real-time radar data into the BIM global point cloud map coordinate system through the transformation matrix T; the specific transformation is

$$[x'_i,\, y'_i,\, 1]^{T} = T\,[x_i,\, y_i,\, 1]^{T}, \qquad T = \begin{bmatrix} \cos\varphi & -\sin\varphi & t_x \\ \sin\varphi & \cos\varphi & t_y \\ 0 & 0 & 1 \end{bmatrix}$$

where [x_i, y_i] are the coordinates of the i-th real-time radar point in the original coordinate system, [x'_i, y'_i] are its coordinates after transfer into the BIM global point cloud map coordinate system, φ is the rotation angle, t_x is the distance from the origin of the lidar coordinate system along the x direction of the BIM global point cloud map coordinate system, and t_y the distance along the y direction.
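
S3 follows directly from the formula; a small sketch (names are illustrative):

```python
import numpy as np

def make_T(phi, tx, ty):
    """Planar homogeneous transform from the lidar frame to the BIM map frame."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0., 0., 1.]])

def transform_scan(T, scan_xy):
    """Apply [x', y', 1]^T = T [x, y, 1]^T to an (N, 2) array of scan points."""
    homo = np.hstack([scan_xy, np.ones((len(scan_xy), 1))])
    return (homo @ T.T)[:, :2]
```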

S4: Calculate the probability density of each transformed radar point from the normal distribution parameters,

$$p(x'_i) \sim \exp\left(-\frac{(x'_i - q)^{T}\,\Sigma^{-1}\,(x'_i - q)}{2}\right),$$

and establish the NDT registration function from the probability density function,

$$S(P) = \sum_{i} p(x'_i),$$

i.e. the accumulation of the probability densities of the transformed points, where [x_i, y_i] are the coordinates of the real-time radar data in the original coordinate system and [x'_i, y'_i] are the coordinates after the radar data are transferred by the transformation matrix into the BIM point cloud map coordinate system.
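
A sketch of S4 that reuses the per-grid parameters from the S2 snippet above; the density is left unnormalized, which does not change where the maximum lies (the planar example assumes the grid was built over the map's x-y projection):

```python
import numpy as np

def ndt_score(params, transformed_pts, cell_size=1.0):
    """Sum the per-point probability densities under the per-grid normal
    distributions: the NDT registration function S(P)."""
    score = 0.0
    for p in transformed_pts:
        key = tuple(np.floor(p / cell_size).astype(int))
        if key not in params:
            continue                   # point fell in an empty grid
        q, sigma = params[key]
        d = p - q
        score += np.exp(-0.5 * d @ np.linalg.solve(sigma, d))
    return score
```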

S5: Solve the registration function with the Newton iteration algorithm, searching for the set of transformation parameters that maximizes it; set an iteration threshold and a maximum number of iterations, and end the iteration when the convergence condition is reached.
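
A sketch of S5 with finite-difference derivatives standing in for the analytic NDT gradient and Hessian, which the patent does not spell out; score_fn is assumed to wrap the S4 score as a function of the parameter vector p = (φ, t_x, t_y), and tol and max_iter play the role of the iteration threshold and the maximum iteration count:

```python
import numpy as np

def refine_pose(score_fn, p0, eps=1e-4, tol=1e-6, max_iter=30):
    """Newton iteration maximizing the NDT score: step toward a stationary
    point of score_fn, stopping at the threshold or the iteration cap."""
    p = np.asarray(p0, dtype=float)
    I = np.eye(len(p))
    for _ in range(max_iter):
        g = np.array([(score_fn(p + eps * e) - score_fn(p - eps * e)) / (2 * eps)
                      for e in I])
        H = np.array([[(score_fn(p + eps * (ei + ej)) - score_fn(p + eps * (ei - ej))
                        - score_fn(p - eps * (ei - ej)) + score_fn(p - eps * (ei + ej)))
                       / (4 * eps ** 2) for ej in I] for ei in I])
        step = np.linalg.solve(H, g)    # Newton step
        p = p - step
        if np.linalg.norm(step) < tol:  # convergence threshold reached
            break
    return p
```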

S6: Publish the real-time positioning pose of the robot under the global map at a fixed frequency.

Fig. 3 is a schematic diagram of the lidar registering within the global map: the pose of A is the lidar pose, the dotted line is the real-time lidar scan data, and the illustrated moment is during the registration process.

For the module that corrects the dead-reckoned pose with global positioning data: when the BIM positioning information is used to correct the IMU dead-reckoned pose in practice, the BIM positioning information updates at a far lower frequency than the IMU, as shown in Fig. 4. Considering this frequency mismatch, an error-state Kalman filter (ESKF) is selected to correct the dead-reckoned pose, as shown in Fig. 5. The specific flow is as follows.

S1: Establish the IMU sensor kinematic model.

S2: Infer the lidar predicted pose from the kinematic model and the data collected by the IMU in real time.

S3: Perform ESKF fusion with the robot's BIM positioning information (the real-time positioning pose) as the observation and the predicted pose as the prediction.

S4: Correct the predicted pose.
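
A minimal planar ESKF sketch of S1-S4; the three-dimensional state (x, y, yaw), the noise matrices, and the direct full-pose observation model are illustrative assumptions, since the patent does not fix them:

```python
import numpy as np

class PlanarESKF:
    """Error-state Kalman filter for a planar pose. The nominal state is
    propagated at IMU rate; the low-rate NDT pose in the BIM map is the
    observation that resets the accumulated error."""

    def __init__(self):
        self.x = np.zeros(3)                   # nominal pose (x, y, yaw)
        self.P = np.eye(3) * 1e-3              # error-state covariance
        self.Q = np.diag([0.01, 0.01, 0.005])  # process noise (placeholder values)
        self.R = np.diag([0.05, 0.05, 0.02])   # observation noise (placeholder values)

    def predict(self, dpose_imu):
        """High-rate IMU step: apply the pose increment, grow the uncertainty."""
        self.x += dpose_imu
        self.P += self.Q

    def correct(self, z_bim):
        """Low-rate BIM/NDT step: Kalman-update the error state and inject it."""
        y = z_bim - self.x                     # innovation (wrap yaw in practice)
        S = self.P + self.R                    # H = I: the full pose is observed
        K = self.P @ np.linalg.inv(S)
        self.x += K @ y                        # inject the error into the nominal state
        self.P = (np.eye(3) - K) @ self.P
        return self.x
```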

This pose correction method corrects and optimizes the pose predicted by dead reckoning from the IMU sensor, which enables better correction of lidar motion distortion.

As shown in Fig. 6, this embodiment also provides a motion distortion correction method based on BIM global positioning, comprising the following steps:

acquiring the pose of the robot within a single lidar scan frame, performing linear approximation and interpolation on the pose, and obtaining the compensation transformation matrix;

correcting the coordinates of all laser points within the single frame, expressed in the reference coordinate system, through the compensation transformation matrix to obtain the corrected laser point coordinates, and packaging the point cloud data to complete the correction;

before the compensation transformation matrix is obtained, the pose is corrected using the aforementioned pose correction method based on BIM global positioning.

The motion distortion correction method based on BIM global positioning builds on the above pose correction method and further includes a module that corrects point cloud distortion with the dead-reckoned pose. The points under the same laser beam are collected by the lidar at different moments, but the raw lidar data assume by default that all laser points were collected at the same moment, so the raw data must be corrected. Taking Fig. 7 as an example, O_0XY is the reference coordinate system established from the radar pose at the initial moment of the single frame, and radar coordinate systems are established from the radar poses at the remaining moments within the frame; O_tXY is one such radar coordinate system. O_tA is the data scanned by the lidar at the O_tXY pose, but because the raw point cloud data ignore the acquisition cycle time, the O_tA data are treated as if collected at the O_0XY pose, which produces motion distortion. The correction method transforms all laser points within the single-frame point cloud time into the reference coordinate system and then repackages the point cloud data. The overall flow chart is shown in Fig. 8; the specific flow is as follows.

S1: Read the lidar point cloud data and convert the radar point cloud into an ordered point cloud according to radar beam and scanning time.

S2: Establish the reference coordinate system O_0XY with the lidar position at the initial moment of each frame as the origin; the coordinate system established at any other lidar position within the frame is called the lidar coordinate system O_tXY.

S3: Read the corrected pose data, align the pose data with the radar point cloud in time, and perform linear approximation on the pose transformation within the single-frame point cloud time.

S4: Calculate each point's relative time with respect to the lidar coordinate system at the initial moment of the single frame from its horizontal angle in the ordered point cloud; interpolate the linearly approximated pose transformation at that time, and calculate the compensation transformation matrix of each laser point in the lidar coordinate system.

S5: For each laser point, multiply the laser point coordinates in the reference coordinate system by the compensation transformation matrix to obtain the corrected laser point coordinates.
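
A sketch of S4-S5 for one frame in the planar case, assuming one full revolution per frame, a monotonically increasing azimuth, and a frame pose increment (dx, dy, dyaw) taken from the corrected pose data; all names are illustrative:

```python
import numpy as np

def deskew_frame(points_xy, azimuths, dx, dy, dyaw):
    """Undistort one lidar frame: each point's relative time comes from its
    horizontal angle; the frame's pose change is linearly interpolated at
    that time and used to move the point back into the frame-start system O_0XY."""
    rel = (azimuths - azimuths[0]) % (2 * np.pi) / (2 * np.pi)  # scan fraction in [0, 1)
    out = np.empty_like(points_xy)
    for i, (p, s) in enumerate(zip(points_xy, rel)):
        phi, tx, ty = s * dyaw, s * dx, s * dy  # interpolated pose at this point's time
        c, sn = np.cos(phi), np.sin(phi)
        # compensation transform: coordinates in O_tXY expressed in O_0XY
        out[i] = (c * p[0] - sn * p[1] + tx,
                  sn * p[0] + c * p[1] + ty)
    return out
```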

This motion distortion correction method corrects the pose error inferred by the IMU and corrects point cloud distortion more accurately.

The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.

The above embodiments express only several implementations of the invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that persons of ordinary skill in the art can make several modifications and improvements without departing from the concept of the invention, and these all fall within the protection scope of the invention. Therefore, the protection scope of the patent should be determined by the appended claims.

Claims (10)

1. A BIM global positioning-based pose correction method is used for correcting the predicted pose of an IMU sensor and is characterized by comprising the following steps:
acquiring a BIM global point cloud map, and spatially dividing the BIM global point cloud map into different three-dimensional grids; counting the number of point clouds in each three-dimensional grid, and calculating the multi-dimensional normal distribution parameters of the three-dimensional grids, namely a point cloud mean value and a covariance matrix, wherein the point cloud mean value is

$$q = \frac{1}{n}\sum_{k=1}^{n} x_k$$

and the covariance matrix is

$$\Sigma = \frac{1}{n}\sum_{k=1}^{n} \left(x_k - q\right)\left(x_k - q\right)^{T}$$

n represents the number of divided three-dimensional grids, k represents the serial number of n, x represents the positions of all scanning points in the grids, the superscript T represents transposition, and T_1 represents a transformation matrix from a laser radar coordinate system to a BIM global point cloud map coordinate system;
the original coordinates of the real-time data of the laser radar are transformed by the transformation matrix T_1 into the coordinate system of the BIM global point cloud map to obtain the coordinates in the BIM global point cloud map, the specific transformation being

$$[x'_i,\, y'_i,\, 1]^{T} = T_1\,[x_i,\, y_i,\, 1]^{T}, \qquad T_1 = \begin{bmatrix} \cos\varphi & -\sin\varphi & t_x \\ \sin\varphi & \cos\varphi & t_y \\ 0 & 0 & 1 \end{bmatrix}$$

[x_i, y_i] is the coordinate of the i-th radar real-time data point in the original coordinate system, [x'_i, y'_i] is the coordinate after transfer by the transformation matrix T_1 to the coordinate system of the BIM global point cloud map, φ represents a rotation angle, t_x represents the distance from the origin of the laser radar coordinate system along the x direction of the abscissa of the BIM global point cloud map coordinate system, and t_y represents the distance in the y direction of the respective ordinate;
calculating the probability density of the conversion points of the laser radar according to the three-dimensional grid multi-dimensional normal distribution parameters, the probability density being

$$P(W_i) \sim \exp\left(-\frac{(W_i - q)^{T}\,\Sigma^{-1}\,(W_i - q)}{2}\right)$$

and establishing an NDT registration function according to the probability density, the registration function being

$$S(P) = \sum_{i} P(W_i)$$

wherein W_i denotes the i-th conversion point;
solving the registration function, searching a group of transformation parameters to maximize the registration function, setting an iteration threshold and the maximum iteration times, and ending the iteration when a convergence condition is reached;
issuing a real-time positioning pose of the robot under the BIM global point cloud map;
and acquiring the predicted pose of the laser radar, and fusing the predicted pose of the laser radar and the real-time positioning pose through an error state Kalman filter to obtain the corrected pose of the robot.
2. The BIM global positioning-based pose correction method according to claim 1, wherein an IMU sensor kinematic model is established in advance, and the lidar predicted pose is inferred from the kinematic model and the data collected in real time by the IMU sensor.
3. The BIM global positioning-based pose correction method according to claim 2, characterized in that an accurate building information model BIM is obtained, features of the BIM are extracted to obtain three-dimensional space feature points, and the three-dimensional space feature points are converted into the BIM global point cloud map.
4. The BIM global positioning-based pose correction method according to claim 3, wherein the features are component information comprising size, spatial position and shape.
5. The BIM global positioning-based pose correction method according to claim 4, wherein the registration function is solved by an optimization algorithm, wherein the optimization algorithm comprises a Newton iteration algorithm.
6. The BIM global positioning-based motion distortion correction method comprises the following steps:
acquiring the pose of the robot within the single-frame scanning time of the laser radar, performing linear approximation and interpolation processing on the pose, and acquiring a compensation transformation matrix;
correcting coordinates under all laser point reference coordinate systems within a single frame time through the compensation transformation matrix to obtain corrected laser point coordinates, and packaging point cloud data of the laser point coordinates to finish correction;
characterized in that,
before the compensation transformation matrix is obtained, correcting the pose, wherein the pose correction method adopts the pose correction method based on BIM global positioning as claimed in any one of claims 1-5.
7. The BIM global positioning-based motion distortion correction method according to claim 6, wherein the relative time of each point cloud with respect to the lidar coordinates at the single-frame initial time is calculated according to the horizontal angle of each point cloud in the lidar ordered point cloud, the linearly approximated pose transformation is interpolated according to the relative time, and the compensation transformation matrix of each laser point under the lidar coordinates is calculated to obtain the compensation transformation matrix.
8. The BIM global positioning-based motion distortion correction method according to claim 7, wherein the step of obtaining the pose transformation through linear approximate transformation comprises the following steps:
and acquiring the corrected pose data, aligning the pose data with the time of the laser radar point cloud, and performing linear approximation processing on pose transformation in the time of single-frame point cloud to obtain the pose transformation subjected to linear approximation transformation.
9. The BIM global positioning-based motion distortion correction method according to claim 8, wherein a reference coordinate system O_0XY is established with the lidar position at the single-frame initial time as the origin, and the coordinate system established from the positions at the other moments of the single frame is the lidar coordinate system O_tXY.
10. The BIM global positioning-based motion distortion correction method according to claim 9, wherein laser radar point cloud data is obtained, and the laser radar point cloud data is converted into ordered point cloud according to radar beam and scanning time, so as to obtain the laser radar ordered point cloud.
CN202210694055.1A 2022-06-20 2022-06-20 BIM global positioning-based pose correction method and motion distortion correction method Active CN114862932B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210694055.1A CN114862932B (en) 2022-06-20 2022-06-20 BIM global positioning-based pose correction method and motion distortion correction method


Publications (2)

Publication Number Publication Date
CN114862932A CN114862932A (en) 2022-08-05
CN114862932B (en) 2022-12-30

Family

ID=82627003

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210694055.1A Active CN114862932B (en) 2022-06-20 2022-06-20 BIM global positioning-based pose correction method and motion distortion correction method

Country Status (1)

Country Link
CN (1) CN114862932B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116229119A (en) * 2022-08-30 2023-06-06 智瞰深鉴(北京)科技有限公司 Substation inspection robot and repositioning method thereof
CN115512054B (en) * 2022-09-06 2024-07-12 武汉大学 Method for constructing three-dimensional space-time continuous point cloud map
CN115290097B (en) * 2022-09-30 2022-12-30 安徽建筑大学 BIM-based real-time accurate map construction method, terminal and storage medium
CN115780322B (en) * 2023-01-30 2023-04-28 安徽建筑大学 Synchronous control method and system of single-motor type sorting equipment and sorting equipment
CN116628827B (en) * 2023-07-21 2023-11-10 中国铁路设计集团有限公司 BIM component linkage method based on linear positioning system
CN116878504B (en) * 2023-09-07 2023-12-08 兰笺(苏州)科技有限公司 Accurate positioning method for building outer wall operation unmanned aerial vehicle based on multi-sensor fusion
CN118376290B (en) * 2024-06-21 2024-09-27 浙江省白马湖实验室有限公司 Gaussian prediction coal pile volume calculation method based on point cloud registration of push harrow machine
CN118711170B (en) * 2024-08-29 2024-11-29 杭州华橙软件技术有限公司 Sensor calibration method, computer device and storage medium


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10151839B2 (en) * 2012-06-01 2018-12-11 Agerpoint, Inc. Systems and methods for determining crop yields with high resolution geo-referenced sensors

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108917759A (en) * 2018-04-19 2018-11-30 电子科技大学 Mobile robot pose correct algorithm based on multi-level map match
CN111077549A (en) * 2019-12-31 2020-04-28 深圳一清创新科技有限公司 Position data correction method, apparatus and computer readable storage medium
CN111521204A (en) * 2020-03-19 2020-08-11 安徽建筑大学 Angular displacement visual measurement method based on absolute position rotary encoder
CN112859051A (en) * 2021-01-11 2021-05-28 桂林电子科技大学 Method for correcting laser radar point cloud motion distortion
CN114459471A (en) * 2022-01-30 2022-05-10 中国第一汽车股份有限公司 Positioning information determination method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
UAV indoor positioning method based on improved MSCKF; Wang Sipeng et al.; Journal of Zhejiang University (Engineering Science); April 2022; pp. 711-717 *

Also Published As

Publication number Publication date
CN114862932A (en) 2022-08-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant