
CN118707542A - Self-positioning method, system, electronic device and storage medium for mobile robot - Google Patents


Info

Publication number
CN118707542A
Authority
CN
China
Prior art keywords
point cloud
data
cloud data
inertial measurement
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411099026.6A
Other languages
Chinese (zh)
Inventor
朱建新
袁大方
张翼
谌鸿强
刘浩淼
胡小成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sunward Intelligent Equipment Co Ltd
Central South University
Original Assignee
Sunward Intelligent Equipment Co Ltd
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sunward Intelligent Equipment Co Ltd, Central South University filed Critical Sunward Intelligent Equipment Co Ltd
Priority to CN202411099026.6A
Publication of CN118707542A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 - Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 - Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 - Determining position
    • G01S19/48 - Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497 - Means for monitoring or calibrating
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The present application discloses a self-positioning method, system, electronic device and storage medium for a mobile robot, in the technical field of robot automation. The self-positioning method includes: initializing an inertial measurement unit and recording inertial measurement data; interpolating the inertial measurement data to obtain a first pose transformation matrix and using it to transform the coordinates of the point cloud data; performing inter-frame matching on the point cloud data to obtain a point-cloud feature matching result; pre-integrating the inertial measurement data between adjacent key frames to obtain a pre-integration result; converting the measurements of the GPS positioning module into ENU coordinates to obtain ENU coordinate data; and constructing a factor graph optimization model with which the pose of the mobile robot is globally optimized, so as to obtain the robot's position in the world coordinate system. The present application improves the self-positioning accuracy of mobile robots.

Description

Self-Positioning Method, System, Electronic Device and Storage Medium for a Mobile Robot

Technical Field

The present application relates to the technical field of robot automation, and in particular to a self-positioning method, system, electronic device and storage medium for a mobile robot.

Background Art

A mobile robot performing tasks in an unfamiliar environment must first localize itself, and the accuracy of that localization directly affects both the quality of the map it builds and the success of its tasks. The sensors most commonly used for localization and mapping are cameras and LiDAR. Camera-based approaches can build visual odometry from image information, but the resulting maps rarely reflect the structure of the environment in an intuitive way. LiDAR-based approaches localize through point cloud registration and simultaneously build a dense point cloud map that faithfully represents the surroundings. Using LiDAR alone, a simple LiDAR odometry can be constructed from frame-to-frame point cloud matching; however, the sensor itself is noisy, its errors accumulate over time, and the odometry often fails in settings where feature points are hard to extract, such as long corridors.

Therefore, how to improve the self-positioning accuracy of mobile robots is a technical problem that those skilled in the art currently need to solve.

Summary of the Invention

The purpose of the present application is to provide a self-positioning method, system, electronic device and storage medium for a mobile robot that improve the robot's self-positioning accuracy.

To solve the above technical problems, the present application provides a self-positioning method for a mobile robot, where the mobile robot is equipped with a LiDAR, an inertial measurement unit and a GPS positioning module. The self-positioning method comprises:

initializing the inertial measurement unit and recording its measurements to obtain inertial measurement data;

receiving the point cloud data collected by the LiDAR, interpolating the inertial measurement data according to the timestamps of the point cloud data to obtain a first pose transformation matrix, and using the first pose transformation matrix to transform the coordinates of the point cloud data; the first pose transformation matrix describes the pose change at the moment the point cloud data was collected relative to the starting moment;

determining feature points in the point cloud data, and performing inter-frame matching on the point cloud data according to the feature points to obtain a point-cloud feature matching result;

selecting key frames from the inertial measurement data, and pre-integrating the inertial measurement data between adjacent key frames to obtain a pre-integration result; a key frame is a data frame at which the point cloud data and the inertial measurement data were collected simultaneously;

converting the measurements of the GPS positioning module into ENU coordinates to obtain ENU coordinate data;

constructing a factor graph optimization model from the point-cloud feature matching result, the pre-integration result and the ENU coordinate data;

if a loop closure in the robot's motion is detected, using the factor graph optimization model to globally optimize the pose of the mobile robot, so as to obtain its position in the world coordinate system.
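As a toy illustration of how such a factor graph fuses relative and absolute measurements, the sketch below solves a one-dimensional pose chain with `scipy.optimize.least_squares`. This is not the patent's actual model: the state is 1-D, and all weights and measurements are invented for the example.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(x, odom, gps, w_odom=1.0, w_gps=0.5):
    """Stack all factor residuals: odometry factors constrain the
    difference between consecutive poses, GPS factors anchor a pose
    to an absolute measurement."""
    r = [w_odom * ((x[i + 1] - x[i]) - d) for i, d in enumerate(odom)]
    r += [w_gps * (x[i] - z) for i, z in sorted(gps.items())]
    return np.array(r)

# Hypothetical data: three unit odometry steps, plus GPS fixes saying the
# first pose is at 0.0 and the last at 3.3 (the odometry has drifted).
odom = [1.0, 1.0, 1.0]
gps = {0: 0.0, 3: 3.3}
sol = least_squares(residuals, np.zeros(4), args=(odom, gps))
# Global optimization spreads the 0.3 m disagreement over the whole chain
# instead of dumping it on the last pose.
```

Because all factors are quadratic here, the optimizer stretches each odometry step slightly above 1.0 so that the chain meets both GPS fixes in a least-squares sense.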

Optionally, interpolating the inertial measurement data according to the timestamps of the point cloud data to obtain the first pose transformation matrix includes:

selecting the n-th frame of point cloud data;

determining the timestamp of the n-th frame of point cloud data;

taking the two frames of inertial measurement data whose timestamps are closest to that of the n-th frame of point cloud data as reference inertial measurement data;

generating the second pose transformation matrices corresponding to the reference inertial measurement data, where a second pose transformation matrix describes the pose at the collection time of the reference inertial measurement data relative to the starting moment;

interpolating between the second pose transformation matrices to obtain the first pose transformation matrix of the n-th frame of point cloud data;

determining whether all frames of point cloud data have been processed; if not, incrementing n by 1 and returning to the step of selecting the n-th frame of point cloud data.

Optionally, determining the feature points in the point cloud data includes:

calculating the curvature of each point in the point cloud data;

selecting edge points and planar points from the point cloud data as the feature points according to curvature, where the edge points are the P1 points with the largest curvature and the planar points are the P2 points with the smallest curvature.
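The curvature criterion above can be sketched as follows, in the style of LOAM-type feature extraction: for each point, sum the differences to its neighbors along the scan line; a large norm marks a sharp (edge) point, a small norm a flat (planar) point. The window size and the counts P1, P2 below are placeholder values, not ones specified by the patent.

```python
import numpy as np

def select_features(scan, window=5, n_edge=2, n_plane=4):
    """Pick edge and planar feature points from one ordered scan line.

    scan: (N, 3) array of points ordered along the scan line.
    Returns (edge indices, planar indices, per-point curvature)."""
    n = len(scan)
    curv = np.full(n, np.nan)
    for i in range(window, n - window):
        # Difference between 2*window copies of p_i and its neighbors:
        # zero on a straight segment, large at a corner.
        diff = (2 * window) * scan[i] \
            - scan[i - window:i].sum(axis=0) \
            - scan[i + 1:i + window + 1].sum(axis=0)
        curv[i] = np.dot(diff, diff)
    valid = np.arange(window, n - window)
    order = valid[np.argsort(curv[valid])]
    plane_idx = order[:n_plane]   # the P2 smallest-curvature points
    edge_idx = order[-n_edge:]    # the P1 largest-curvature points
    return edge_idx, plane_idx, curv
```

On an L-shaped scan line, the corner point receives the largest curvature and is selected as an edge point, while interior points of the straight segments have zero curvature and qualify as planar points.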

Optionally, performing inter-frame matching on the point cloud data according to the feature points to obtain the point-cloud feature matching result includes:

performing point-to-line feature matching using the coordinates of the edge points to obtain a first distance, where the first distance is the distance from an edge point in the (k+1)-th frame of point cloud data to a target line, the target line passes through two edge points in the k-th frame, and the first distance is greater than 0;

performing point-to-plane feature matching using the coordinates of the planar points to obtain a second distance, where the second distance is the distance from a planar point in the (k+1)-th frame of point cloud data to a target plane, the target plane is the plane through three non-collinear planar points in the k-th frame, and the second distance is greater than 0;

determining the coordinate transformation matrix between the point clouds of adjacent frames;

generating an edge-point constraint equation from the adjacent-frame transformation matrix and the first distance;

generating a planar-point constraint equation from the adjacent-frame transformation matrix and the second distance;

solving the edge-point and planar-point constraint equations with the minimization of the first distance as the optimization objective, thereby obtaining the point-cloud feature matching result.
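The two residual distances can be written directly from their geometric definitions; a minimal sketch (the function names are illustrative, not from the patent):

```python
import numpy as np

def point_to_line_dist(p, a, b):
    """Distance from point p to the line through edge points a and b:
    |cross(b - a, p - a)| / |b - a|."""
    return np.linalg.norm(np.cross(b - a, p - a)) / np.linalg.norm(b - a)

def point_to_plane_dist(p, a, b, c):
    """Distance from point p to the plane through non-collinear points
    a, b, c: project p - a onto the unit plane normal."""
    n = np.cross(b - a, c - a)
    return abs(np.dot(p - a, n)) / np.linalg.norm(n)
```

In an actual scan-matching loop, these distances would be expressed as functions of the adjacent-frame transformation and minimized over its parameters.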

Optionally, converting the measurements of the GPS positioning module into ENU coordinates to obtain ENU coordinate data includes:

converting the measurements of the GPS positioning module into the ECEF coordinate system to obtain ECEF coordinate data;

converting the ECEF coordinate data into the ENU coordinate system to obtain the ENU coordinate data.

Further, the method also includes:

determining comparison information between the current frame of the point cloud data collected by the LiDAR and the key frame;

determining whether the comparison information meets a preset condition;

if so, determining that a loop closure exists;

if not, determining that no loop closure exists.

Further, determining the comparison information between the current frame of the LiDAR point cloud data and the key frame includes:

calculating the time difference between the current frame and the key frame to obtain first comparison information;

and/or calculating the length of the motion trajectory between the current frame and the key frame to obtain second comparison information;

and/or calculating the straight-line distance between the current frame and the key frame to obtain third comparison information;

and/or calculating the point cloud registration score between the current frame and the key frame to obtain fourth comparison information.
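One possible way to combine the four pieces of comparison information into a preset condition is sketched below. All threshold values are illustrative placeholders, not values given by the patent, and the registration score is assumed to be "lower is better" (e.g. an ICP fitness residual).

```python
def is_loop_candidate(time_diff, path_length, euclid_dist, reg_score,
                      min_time=30.0, min_path=20.0,
                      max_dist=5.0, max_score=0.3):
    """Heuristic loop-closure test: a revisit is plausible when enough
    time has passed and enough trajectory has been travelled, yet the
    robot is geometrically close to the key frame and the two scans
    register well."""
    return (time_diff > min_time
            and path_length > min_path
            and euclid_dist < max_dist
            and reg_score < max_score)
```

Requiring a minimum elapsed time and path length filters out trivial matches against the immediately preceding frames, while the distance and registration checks confirm a genuine revisit.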

The present application also provides a self-positioning system for a mobile robot, where the mobile robot is equipped with a LiDAR, an inertial measurement unit and a GPS positioning module. The self-positioning system comprises:

a first data acquisition module, configured to initialize the inertial measurement unit and record its measurements to obtain inertial measurement data;

a second data acquisition module, configured to receive the point cloud data collected by the LiDAR, interpolate the inertial measurement data according to the timestamps of the point cloud data to obtain a first pose transformation matrix, and use the first pose transformation matrix to transform the coordinates of the point cloud data; the first pose transformation matrix describes the pose change at the moment the point cloud data was collected relative to the starting moment;

an inter-frame matching module, configured to determine feature points in the point cloud data and perform inter-frame matching on the point cloud data according to the feature points to obtain a point-cloud feature matching result;

a pre-integration module, configured to select key frames from the inertial measurement data and pre-integrate the inertial measurement data between adjacent key frames to obtain a pre-integration result; a key frame is a data frame at which the point cloud data and the inertial measurement data were collected simultaneously;

a third data acquisition module, configured to convert the measurements of the GPS positioning module into ENU coordinates to obtain ENU coordinate data;

a model building module, configured to construct a factor graph optimization model from the point-cloud feature matching result, the pre-integration result and the ENU coordinate data;

an optimization module, configured to, when a loop closure is detected, use the factor graph optimization model to globally optimize the pose of the mobile robot so as to obtain its position in the world coordinate system.

The present application also provides a storage medium on which a computer program is stored; when executed, the computer program implements the steps of the above self-positioning method for a mobile robot.

The present application also provides an electronic device comprising a memory and a processor, where a computer program is stored in the memory and the processor, when calling the computer program, implements the steps of the above self-positioning method for a mobile robot.

The present application provides a self-positioning method for a mobile robot equipped with a LiDAR, an inertial measurement unit and a GPS positioning module. After initializing the inertial measurement unit, the method records the inertial measurement data it collects and uses that data to interpolate and transform the coordinates of the point cloud data collected by the LiDAR, thereby correcting the point cloud distortion caused by the robot's motion during a LiDAR sweep. The method also performs inter-frame matching on the point cloud data to obtain a point-cloud feature matching result, which accurately estimates the relative pose change of the robot. Key frames collected simultaneously are determined from the point cloud data and the inertial measurement data, and pre-integration is then performed between key frames, reducing the amount of inertial-data computation during self-positioning while maintaining continuous tracking of the robot's motion state. The measurements of the GPS positioning module are converted into ENU coordinates so that they can be matched with the robot's local coordinate system, providing a global reference for positioning. A factor graph optimization model is constructed from the point-cloud feature matching result, the pre-integration result and the ENU coordinate data, integrating the data of the different sensors; when a loop closure is detected, global optimization is performed to eliminate accumulated error and improve the accuracy and stability of positioning. It can therefore be seen that, by fusing the data of the LiDAR, the inertial measurement unit and the GPS positioning module, the present application improves the self-positioning accuracy of mobile robots. The present application also provides a self-positioning system for a mobile robot, a storage medium and an electronic device with the same beneficial effects, which are not repeated here.

Brief Description of the Drawings

In order to explain the embodiments of the present application more clearly, the drawings required by the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those of ordinary skill in the art can derive other drawings from them without creative effort.

FIG. 1 is a flowchart of a self-positioning method for a mobile robot provided by an embodiment of the present application;

FIG. 2 is a flowchart of a self-positioning method based on multi-sensor information fusion provided by an embodiment of the present application;

FIG. 3 is a schematic diagram of a single LiDAR scan provided by an embodiment of the present application;

FIG. 4 is a schematic diagram of solving a transformation matrix by IMU interpolation provided by an embodiment of the present application;

FIG. 5 is a factor graph of SLAM positioning provided by an embodiment of the present application;

FIG. 6 is a schematic diagram of a factor graph model provided by an embodiment of the present application;

FIG. 7 is a flowchart of loop closure detection provided by an embodiment of the present application.

Detailed Description

To make the purpose, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.

Please refer to FIG. 1, which is a flowchart of a self-positioning method for a mobile robot provided by an embodiment of the present application.

The specific steps may include:

S101: Initialize the inertial measurement unit and record its measurements to obtain inertial measurement data.

This embodiment may run on the processor of a mobile robot equipped with a LiDAR, an inertial measurement unit (IMU) and a GPS positioning module; it fuses the positioning information collected by these three sensors to realize the robot's self-positioning. During execution, the LiDAR, the IMU and the GPS positioning module may continuously collect their respective positioning information.

The inertial measurement unit may first be initialized, after which it measures the motion state of the mobile robot to obtain inertial measurement data (including acceleration data and angular velocity data).

After initializing the inertial measurement unit, this embodiment may also fit a cubic B-spline curve to the IMU measurements to obtain the inertial measurement data, capturing their smoothly varying component while suppressing the influence of noise and outliers.
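As a sketch of this smoothing step, the snippet below fits a cubic smoothing B-spline to one IMU axis with SciPy. The smoothing factor, sample rate and signal are illustrative choices for the example, not values from the patent.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def smooth_imu_axis(t, y, smoothing):
    """Fit a cubic (k=3) smoothing B-spline to one IMU axis.

    `smoothing` bounds the sum of squared residuals: larger values give
    a smoother curve, 0 reproduces the samples exactly.  Returns a
    callable that can be evaluated at arbitrary timestamps."""
    return UnivariateSpline(t, y, k=3, s=smoothing)

# Hypothetical usage: denoise one gyroscope axis sampled at 50 Hz.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
true = np.sin(2.0 * np.pi * t)
noisy = true + 0.01 * rng.normal(size=t.size)
spline = smooth_imu_axis(t, noisy, smoothing=0.01)
```

For a three-axis accelerometer or gyroscope, the same fit would be applied per axis; the resulting splines can then be evaluated at arbitrary point-cloud timestamps.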

S102: Receive the point cloud data collected by the LiDAR, interpolate the inertial measurement data according to the timestamps of the point cloud data to obtain a first pose transformation matrix, and use the first pose transformation matrix to transform the coordinates of the point cloud data.

The LiDAR periodically emits laser beams to probe the surroundings, producing point cloud data frame by frame. Point cloud data is a set of points in three-dimensional space, each carrying its spatial coordinates (X, Y, Z).

Each point cloud frame carries a timestamp indicating when the data was collected; all points in the same frame share the same timestamp. Because the IMU samples at a higher frequency than the LiDAR, there may be several frames of inertial measurement data between two consecutive point cloud frames.

From the inertial measurement data, the pose transformation of the current moment relative to the starting moment can be determined. According to the timestamp of a point cloud frame, the inertial measurement data immediately before and after its collection time can be looked up and interpolated to obtain the first pose transformation matrix, which describes the pose change of the mobile robot at the collection time of the point cloud data relative to the starting moment. The starting moment is the moment the mobile robot starts the LiDAR and the IMU to collect positioning information.

Once the first pose transformation matrix is obtained, it can be used to transform the coordinates of the point cloud data, eliminating the point cloud distortion caused by the robot's motion during a LiDAR sweep. The point cloud data used in the subsequent operations of this embodiment is the data after the coordinate transformation of S102.
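A common way to realize this timestamp interpolation (a sketch of one conventional choice, not necessarily the patent's exact scheme) is to interpolate translation linearly and rotation by spherical linear interpolation (SLERP) between the two bracketing IMU poses:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_pose(t_query, t0, t1, T0, T1):
    """Interpolate a 4x4 pose for a point-cloud timestamp t_query that
    falls between two IMU poses T0 (at time t0) and T1 (at time t1).
    Translation is interpolated linearly, rotation by SLERP."""
    alpha = (t_query - t0) / (t1 - t0)
    rots = Rotation.from_matrix(np.stack([T0[:3, :3], T1[:3, :3]]))
    slerp = Slerp([t0, t1], rots)
    T = np.eye(4)
    T[:3, :3] = slerp([t_query]).as_matrix()[0]
    T[:3, 3] = (1.0 - alpha) * T0[:3, 3] + alpha * T1[:3, 3]
    return T
```

Applying the interpolated pose of each point's emission time to that point de-skews the sweep, which is exactly the distortion-correction role of the first pose transformation matrix described above.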

S103: Determine the feature points in the point cloud data, and perform inter-frame matching on the point cloud data according to the feature points to obtain a point-cloud feature matching result.

This embodiment extracts points with distinctive characteristics from the point cloud data as feature points and matches the point cloud data between frames according to these feature points; the resulting point-cloud feature matching result describes the transformation between two adjacent point cloud frames.

As a feasible implementation, the feature points may be determined according to the curvature of each point. Inter-frame matching refers to aligning point cloud data captured at two different times, which this embodiment achieves by comparing the positions of the feature points in two adjacent frames.

S104:从所述惯性测量数据中选取关键帧,对相邻的所述关键帧之间的惯性测量数据进行预积分,得到预积分结果;S104: selecting key frames from the inertial measurement data, and pre-integrating the inertial measurement data between adjacent key frames to obtain a pre-integration result;

其中,激光雷达和惯性测量单元可以按照各自的周期采集数据,本实施例将点云数据与所述惯性测量数据中同时采集的数据帧(即,具有相同的时间戳的点云数据和惯性测量数据)作为关键帧。本实施例中从惯性测量数据中选取的关键帧具体指:与任一帧点云数据同时采集的一帧惯性测量数据。Among them, the laser radar and the inertial measurement unit can collect data according to their respective cycles. In this embodiment, the data frame collected simultaneously with the point cloud data and the inertial measurement data (that is, the point cloud data and the inertial measurement data with the same timestamp) is used as a key frame. In this embodiment, the key frame selected from the inertial measurement data specifically refers to: a frame of inertial measurement data collected simultaneously with any frame of point cloud data.

惯性测量数据以高频率(如100Hz到1000Hz)持续产生,而点云数据通常频率较低(如10Hz)。预积分的目的是在两个关键帧之间对惯性测量数据进行积分,以估算这两个关键帧之间的相对位姿变化。预积分结果即相邻的两个关键帧之间的位姿变化信息。Inertial measurement data is continuously generated at a high frequency (e.g., 100 Hz to 1000 Hz), while point cloud data is usually generated at a lower frequency (e.g., 10 Hz). The purpose of pre-integration is to integrate the inertial measurement data between two keyframes to estimate the relative pose change between the two keyframes. The pre-integration result is the pose change information between two adjacent keyframes.
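A minimal sketch of the pre-integration idea described above, assuming constant-rate accelerometer samples and ignoring rotation, gravity, and bias terms (all names here are illustrative, not taken from the application):

```python
import numpy as np

def preintegrate(imu_samples, dt):
    """Simplified pre-integration of IMU data between two adjacent keyframes:
    accumulate velocity and position increments from acceleration samples only
    (rotation and bias handling omitted for clarity).

    imu_samples: iterable of (ax, ay, az) accelerations; dt: sample period.
    Returns the relative position and velocity change between the keyframes.
    """
    dv = np.zeros(3)
    dp = np.zeros(3)
    for a in imu_samples:
        a = np.asarray(a, dtype=float)
        dp = dp + dv * dt + 0.5 * a * dt**2  # position increment this step
        dv = dv + a * dt                     # velocity increment this step
    return dp, dv
```

Because the result depends only on the IMU samples between the two keyframes, it can be computed once and reused during optimization, which is the point made in the paragraph above.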

通过将惯性测量数据在关键帧之间进行预积分,可以避免在优化过程中重复进行积分计算。预积分结果对惯性测量数据的噪声和偏差有一定的鲁棒性,以便提高定位精度。By pre-integrating the inertial measurement data between key frames, repeated integration calculations can be avoided during the optimization process. The pre-integration results are robust to noise and bias in the inertial measurement data to improve positioning accuracy.

S105:对所述GPS定位模块的测量值进行ENU坐标转换,得到ENU坐标数据;S105: performing ENU coordinate conversion on the measurement value of the GPS positioning module to obtain ENU coordinate data;

其中,GPS定位模块的测量值可以包括WGS84坐标系的经纬度及高度信息。WGS84坐标系以地心为原点,使用经度、纬度和大地高度来描述地球上任意一点的位置。ENU坐标系则是以移动机器人所在位置为坐标原点,X轴指向东边,Y轴指向北边,Z轴指向天顶的局部坐标系。The measurement value of the GPS positioning module can include the longitude, latitude and altitude information of the WGS84 coordinate system. The WGS84 coordinate system takes the center of the earth as the origin and uses longitude, latitude and geodetic height to describe the position of any point on the earth. The ENU coordinate system is a local coordinate system with the location of the mobile robot as the origin, the X axis pointing to the east, the Y axis pointing to the north, and the Z axis pointing to the zenith.

本实施例可以确定WGS84坐标系与ENU坐标系的坐标系转换矩阵,进而利用上述坐标系转换矩阵对GPS定位模块的测量值进行ENU坐标转换,得到ENU坐标数据。ENU坐标系为东北天坐标系。This embodiment can determine the coordinate system conversion matrix between the WGS84 coordinate system and the ENU coordinate system, and then use the coordinate system conversion matrix to perform ENU coordinate conversion on the measurement value of the GPS positioning module to obtain ENU coordinate data. The ENU coordinate system is the northeast celestial coordinate system.

S106:根据所述点云特征匹配结果、所述预积分结果和所述ENU坐标数据构建因子图优化模型;S106: constructing a factor graph optimization model according to the point cloud feature matching result, the pre-integration result and the ENU coordinate data;

因子图是一个概率图模型，用于表示和优化包含多个变量的复杂函数。因子图包括点云特征匹配结果对应的点云特征匹配因子（即里程计因子）、预积分结果对应的预积分因子（即IMU因子），以及ENU坐标数据对应的GPS因子。The factor graph is a probabilistic graph model used to represent and optimize complex functions containing multiple variables. The factor graph includes the point cloud feature matching factor (i.e., odometry factor) corresponding to the point cloud feature matching result, the pre-integration factor (i.e., IMU factor) corresponding to the pre-integration result, and the GPS factor corresponding to the ENU coordinate data.

点云特征匹配结果用于描述相邻两帧点云数据之间的转换关系(即,机器人相邻帧之间的相对位姿变化)。点云特征匹配结果可以作为因子图中的边,连接表示不同时刻机器人位姿的节点。预积分结果即相邻的两个关键帧之间的位姿变化信息,提供了基于惯性测量数据的机器人位姿估计,预积分结果也可以作为因子图中的边。ENU坐标数据提供了机器人在全球坐标系中的绝对位置,ENU坐标数据可以作为因子图中的约束。The point cloud feature matching result is used to describe the transformation relationship between two adjacent frames of point cloud data (i.e., the relative pose change between adjacent frames of the robot). The point cloud feature matching result can be used as an edge in the factor graph to connect the nodes representing the robot pose at different times. The pre-integration result is the pose change information between two adjacent key frames, which provides the robot pose estimation based on inertial measurement data. The pre-integration result can also be used as an edge in the factor graph. The ENU coordinate data provides the absolute position of the robot in the global coordinate system. The ENU coordinate data can be used as a constraint in the factor graph.

因子图优化模型是利用因子图结构进行状态估计的模型，可以整合来自不同传感器的信息。因子图优化模型的目标是找到一组变量值使得所有因子（即点云特征匹配因子、预积分因子和GPS因子）的联合概率最大或代价函数最小。当检测到运动闭环时，可以添加对应的闭环约束到因子图中。这些因子连接了表示不同时间点的相同位置的节点，并提供了消除累积误差的机会。本实施例根据点云特征匹配结果、预积分结果和ENU坐标数据构建因子图优化模型，能够将不同传感器数据融合到统一概率框架。The factor graph optimization model is a model that uses a factor graph structure for state estimation and can integrate information from different sensors. The goal of the factor graph optimization model is to find a set of variable values that maximizes the joint probability of all factors (i.e., point cloud feature matching factors, pre-integration factors, and GPS factors) or minimizes the cost function. When a motion loop is detected, the corresponding loop-closure constraints can be added to the factor graph. These factors connect nodes that represent the same position at different time points and provide an opportunity to eliminate cumulative errors. This embodiment constructs a factor graph optimization model based on point cloud feature matching results, pre-integration results, and ENU coordinate data, which can fuse different sensor data into a unified probability framework.

S107:若检测到运动闭环,则利用所述因子图优化模型对所述移动机器人的位姿信息进行全局优化,以便得到所述移动机器人在世界坐标系的定位信息。S107: If a motion closed loop is detected, the factor graph optimization model is used to perform global optimization on the posture information of the mobile robot so as to obtain the positioning information of the mobile robot in the world coordinate system.

其中，在检测到运动闭环之后，可以将闭环约束对应的闭环因子以及因子图优化模型中的点云特征匹配因子、预积分因子和GPS因子转化为一个优化问题，例如非线性最小二乘问题。本实施例可以使用优化算法来最小化所有因子的总误差，调整节点的状态，直到找到使所有观测误差最小化的最优位姿估计。在移动机器人的位姿信息全局优化完成后，可以更新所有位姿节点的估计值，更新后的位姿信息即为机器人在世界坐标系的定位信息。通过上述方案，可以使得各种传感器的累积误差被校正，同时增强了地图一致性。Among them, after the motion closed loop is detected, the closed-loop factor corresponding to the closed-loop constraint and the point cloud feature matching factor, pre-integration factor and GPS factor in the factor graph optimization model can be converted into an optimization problem, such as a nonlinear least squares problem. In this embodiment, an optimization algorithm can be used to minimize the total error of all factors, and the state of the node is adjusted until the optimal pose estimate that minimizes all observation errors is found. After the global optimization of the pose information of the mobile robot is completed, the estimated values of all pose nodes can be updated, and the updated pose information is the positioning information of the robot in the world coordinate system. Through the above scheme, the cumulative errors of various sensors can be corrected, and the map consistency is enhanced at the same time.

本实施例应用于设置有激光雷达、惯性测量单元和GPS定位模块的移动机器人。本实施例对惯性测量单元进行初始化后记录惯性测量单元采集的惯性测量数据，并利用该惯性测量数据对激光雷达采集的点云数据进行插值处理和坐标变换，以便消除激光雷达在移动机器人运动过程中引起的点云畸变。本实施例还对点云数据进行帧间匹配得到点云特征匹配结果，以便准确地估计移动机器人的相对位置变化。本实施例根据点云数据和惯性测量数据确定同时采集的关键帧，进而基于关键帧进行预积分计算，以便减少自定位过程中对惯性测量数据的计算量，并保持对机器人运动状态的持续跟踪。本实施例对GPS定位模块的测量值通过ENU坐标转换得到ENU坐标数据，使得GPS定位模块的测量值可以与机器人的本地坐标系进行匹配，为定位提供全局参考。本实施例根据点云特征匹配结果、预积分结果和ENU坐标数据构建因子图优化模型，整合不同传感器数据，并在检测到运动闭环时进行全局优化，以便消除累积误差并提高定位的准确性和稳定性。由此可见，本实施例通过融合激光雷达、惯性测量单元和GPS定位模块的数据进行定位，能够提高移动机器人的自定位精度。This embodiment is applied to a mobile robot provided with a laser radar, an inertial measurement unit and a GPS positioning module. This embodiment records the inertial measurement data collected by the inertial measurement unit after initializing the inertial measurement unit, and uses the inertial measurement data to perform interpolation processing and coordinate transformation on the point cloud data collected by the laser radar so as to eliminate the point cloud distortion caused by the laser radar during the movement of the mobile robot. This embodiment also performs inter-frame matching on the point cloud data to obtain point cloud feature matching results so as to accurately estimate the relative position change of the mobile robot. This embodiment determines the key frames collected simultaneously based on the point cloud data and the inertial measurement data, and then performs pre-integration calculation based on the key frames so as to reduce the amount of calculation of the inertial measurement data during the self-positioning process and maintain continuous tracking of the robot's motion state. This embodiment obtains ENU coordinate data by converting the measurement value of the GPS positioning module through ENU coordinates, so that the measurement value of the GPS positioning module can be matched with the local coordinate system of the robot, providing a global reference for positioning.
This embodiment constructs a factor graph optimization model based on the point cloud feature matching results, pre-integration results and ENU coordinate data, integrates different sensor data, and performs global optimization when a motion closed loop is detected to eliminate cumulative errors and improve positioning accuracy and stability. It can be seen that this embodiment can improve the self-positioning accuracy of the mobile robot by fusing the data of the laser radar, inertial measurement unit and GPS positioning module for positioning.

作为对于图1对应实施例的进一步介绍,本实施例可以确定每一帧点云数据的第一位姿转换矩阵,以便利用第一位姿转换矩阵对相应的点云数据进行坐标变换,以消除点云畸变。As a further introduction to the embodiment corresponding to FIG. 1 , this embodiment can determine the first pose transformation matrix of each frame of point cloud data, so as to utilize the first pose transformation matrix to perform coordinate transformation on the corresponding point cloud data to eliminate point cloud distortion.

具体的,本实施例可以通过以下方式确定每一帧点云数据的第一位姿转换矩阵:Specifically, this embodiment can determine the first pose transformation matrix of each frame of point cloud data in the following manner:

步骤A1:选取第n帧点云数据。Step A1: Select the nth frame of point cloud data.

其中,本实施例首次执行步骤A1时,n的值为1。When step A1 is executed for the first time in this embodiment, the value of n is 1.

步骤A2:确定第n帧点云数据的时间戳。Step A2: Determine the timestamp of the nth frame of point cloud data.

步骤A3:将距离所述第n帧点云数据的时间戳最近的2帧惯性测量数据设置为参考惯性测量数据。Step A3: setting the two frames of inertial measurement data closest to the timestamp of the n-th frame of point cloud data as reference inertial measurement data.

其中，第n帧点云数据的时间戳为tn，本步骤选取的参考惯性测量数据包括时间戳早于tn的惯性测量数据中距离tn最近的1帧惯性测量数据，以及时间戳晚于tn的惯性测量数据中距离tn最近的1帧惯性测量数据。The timestamp of the nth frame of point cloud data is tn. The reference inertial measurement data selected in this step includes the frame of inertial measurement data closest to tn among those with timestamps earlier than tn, and the frame of inertial measurement data closest to tn among those with timestamps later than tn.

步骤A4:生成所述参考惯性测量数据对应的第二位姿转换矩阵;Step A4: generating a second posture conversion matrix corresponding to the reference inertial measurement data;

其中,所述第二位姿转换矩阵为所述参考惯性测量数据的采集时刻相对于起始时刻的位姿转换矩阵。The second posture conversion matrix is a posture conversion matrix of the collection time of the reference inertial measurement data relative to the starting time.

步骤A5:对所述第二位姿转换矩阵进行插值处理,得到所述第n帧点云数据的第一位姿转换矩阵。Step A5: interpolate the second pose conversion matrix to obtain the first pose conversion matrix of the nth frame point cloud data.

其中,通过步骤A4可以得到两个第二位姿转换矩阵,本实施例可以根据参考惯性测量数据的时间戳对第二位姿转换矩阵进行插值处理,得到tn时刻的位姿转换矩阵,即所述第n帧点云数据的第一位姿转换矩阵。Among them, two second posture transformation matrices can be obtained through step A4. In this embodiment, the second posture transformation matrix can be interpolated according to the timestamp of the reference inertial measurement data to obtain the posture transformation matrix at time tn , that is, the first posture transformation matrix of the nth frame point cloud data.

步骤A6:判断所述点云数据是否选取完毕;若否,则将n的值加1,并进入步骤A1。Step A6: Determine whether the point cloud data has been selected; if not, add 1 to the value of n and go to step A1.

本实施例还可以确定点云数据的总帧数n_total，在得到第n_total帧点云数据的第一位姿转换矩阵后，可以退出步骤A1~步骤A6的循环。上述实施例使用惯性测量数据补偿点云数据在采集过程中因机器人运动造成的畸变，能够提高点云数据的准确性和一致性。This embodiment can also determine the total number of frames n_total of the point cloud data, and after obtaining the first pose transformation matrix of the n_total-th frame of point cloud data, the loop of step A1 to step A6 can be exited. The above embodiment uses inertial measurement data to compensate for the distortion of point cloud data caused by robot movement during the acquisition process, which can improve the accuracy and consistency of point cloud data.
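The interpolation in step A5 can be sketched as follows, under the simplifying assumption that translation is interpolated linearly between the two reference poses and rotation is taken from the nearer frame (a real implementation would use spherical interpolation for rotation; all names are illustrative):

```python
import numpy as np

def interpolate_pose(T_k, T_k1, t_k, t_k1, t_n):
    """Interpolate the pose at time t_n from the two reference IMU poses
    (the second pose transformation matrices) bracketing the point cloud
    timestamp. Translation is interpolated linearly; rotation is taken from
    the nearer reference frame as a simplification.
    """
    assert t_k <= t_n <= t_k1 and t_k < t_k1
    w = (t_n - t_k) / (t_k1 - t_k)  # interpolation weight
    T = np.array(T_k1 if w > 0.5 else T_k, dtype=float, copy=True)  # nearer rotation
    T[:3, 3] = (1 - w) * T_k[:3, 3] + w * T_k1[:3, 3]               # linear translation
    return T
```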

作为对于图1对应实施例的进一步介绍,可以通过以下方式确定所述点云数据中的特征点:计算所述点云数据中每一点的曲率;根据曲率从所述点云数据中选取边缘点和平面点作为所述特征点;其中,所述边缘点为所述点云数据中曲率最大的P1个点,所述平面点为所述点云数据中曲率最小的P2个点。As a further introduction to the embodiment corresponding to Figure 1, the feature points in the point cloud data can be determined in the following manner: calculate the curvature of each point in the point cloud data; select edge points and plane points from the point cloud data as the feature points based on the curvature; wherein the edge points are the P1 points with the largest curvature in the point cloud data, and the plane points are the P2 points with the smallest curvature in the point cloud data.

在特征点包括边缘点和平面点的基础上,可以通过以下方式进行帧间匹配得到点云特征匹配结果:On the basis that the feature points include edge points and plane points, the point cloud feature matching results can be obtained by performing inter-frame matching in the following ways:

步骤B1:根据所述边缘点的坐标进行点到线的特征匹配,得到第一距离;Step B1: performing point-to-line feature matching according to the coordinates of the edge points to obtain a first distance;

其中,所述第一距离为第k+1帧点云数据中的边缘点到目标直线的距离,所述目标直线为经过第k帧点云数据中2个边缘点的直线,所述第一距离大于0;The first distance is the distance from the edge point in the k+1th frame of point cloud data to the target straight line, the target straight line is a straight line passing through two edge points in the kth frame of point cloud data, and the first distance is greater than 0;

步骤B2:根据所述平面点的坐标进行点到面的特征匹配,得到第二距离;Step B2: performing point-to-plane feature matching according to the coordinates of the plane point to obtain a second distance;

其中,所述第二距离为第k+1帧点云数据中的平面点到目标平面的距离,所述目标平面为第k帧点云数据中3个不共线的平面点所在的平面,所述第二距离大于0;The second distance is the distance from the plane point in the k+1th frame of point cloud data to the target plane, the target plane is the plane where the three non-collinear plane points in the kth frame of point cloud data are located, and the second distance is greater than 0;

步骤B3:确定相邻帧点云坐标变换矩阵;Step B3: Determine the coordinate transformation matrix of the point cloud of adjacent frames;

步骤B4:根据所述相邻帧点云坐标变换矩阵和所述第一距离生成边缘点约束方程;Step B4: generating an edge point constraint equation according to the adjacent frame point cloud coordinate transformation matrix and the first distance;

步骤B5:根据所述相邻帧点云坐标变换矩阵和所述第二距离生成平面点约束方程;Step B5: generating a plane point constraint equation according to the adjacent frame point cloud coordinate transformation matrix and the second distance;

步骤B6：以所述第一距离和所述第二距离的值最小为优化目标，对所述边缘点约束方程和所述平面点约束方程进行求解，得到点云特征匹配结果。Step B6: Taking the minimum values of the first distance and the second distance as the optimization goal, solve the edge point constraint equation and the plane point constraint equation to obtain a point cloud feature matching result.

作为对于图1对应实施例的进一步介绍,本实施例可以对GPS定位模块的测量值进行以下操作得到ENU坐标数据:将所述GPS定位模块的测量值转换至ECEF坐标系,得到ECEF坐标数据;将所述ECEF坐标数据转换至ENU坐标系,得到所述ENU坐标数据。ECEF坐标系为地心地固坐标系。As a further introduction to the embodiment corresponding to FIG. 1 , the embodiment can perform the following operations on the measurement values of the GPS positioning module to obtain ENU coordinate data: convert the measurement values of the GPS positioning module to the ECEF coordinate system to obtain ECEF coordinate data; convert the ECEF coordinate data to the ENU coordinate system to obtain the ENU coordinate data. The ECEF coordinate system is an earth-centered earth-fixed coordinate system.
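A sketch of the two-stage conversion described above (WGS84 to ECEF, then ECEF to ENU), using the standard WGS84 ellipsoid constants; function names are illustrative:

```python
import math

A = 6378137.0          # WGS84 semi-major axis (m)
E2 = 0.00669437999014  # WGS84 first eccentricity squared

def wgs84_to_ecef(lat_deg, lon_deg, h):
    """Latitude/longitude/height (WGS84) -> earth-centered earth-fixed (ECEF)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)  # prime-vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h) * math.sin(lat)
    return x, y, z

def ecef_to_enu(xyz, ref_lla):
    """ECEF -> local east-north-up (ENU) coordinates about a reference point."""
    lat, lon = math.radians(ref_lla[0]), math.radians(ref_lla[1])
    x0, y0, z0 = wgs84_to_ecef(*ref_lla)
    dx, dy, dz = xyz[0] - x0, xyz[1] - y0, xyz[2] - z0
    e = -math.sin(lon) * dx + math.cos(lon) * dy
    n = (-math.sin(lat) * math.cos(lon) * dx
         - math.sin(lat) * math.sin(lon) * dy
         + math.cos(lat) * dz)
    u = (math.cos(lat) * math.cos(lon) * dx
         + math.cos(lat) * math.sin(lon) * dy
         + math.sin(lat) * dz)
    return e, n, u
```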

作为对于图1对应实施例的进一步介绍,可以通过以下方式构建因子图优化模型,包括:生成所述点云特征匹配结果对应的第一误差函数;生成所述预积分结果对应的第二误差函数;生成所述ENU坐标数据对应的第三误差函数;构建包含所述第一误差函数、所述第二误差函数和所述第三误差函数的因子图优化模型。As a further introduction to the embodiment corresponding to Figure 1, a factor graph optimization model can be constructed in the following manner, including: generating a first error function corresponding to the point cloud feature matching result; generating a second error function corresponding to the pre-integration result; generating a third error function corresponding to the ENU coordinate data; and constructing a factor graph optimization model including the first error function, the second error function and the third error function.

上述过程构建了三个误差函数的因子图优化模型,以便将点云特征匹配、预积分结果和ENU坐标数据的约束条件统一在一个框架下进行优化,通过求解这个优化问题,可以得到一个综合考虑了多种传感器信息的最优位姿估计。The above process constructs a factor graph optimization model of three error functions so that the constraints of point cloud feature matching, pre-integration results and ENU coordinate data can be optimized in a unified framework. By solving this optimization problem, an optimal pose estimation that comprehensively considers information from multiple sensors can be obtained.
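As a toy illustration of how odometry-like and GPS-like error functions can be minimized jointly in one framework, the following reduces the problem to one dimension, where the weighted least-squares problem is linear and has a closed-form solution. This is a didactic simplification, not the pose optimization of the application:

```python
import numpy as np

def optimize_poses_1d(odom, gps, w_odom=1.0, w_gps=1.0):
    """1-D least-squares analogue of factor graph optimization: odometry
    factors constrain differences between adjacent poses, GPS factors
    constrain absolute positions.

    odom: list of (i, j, dz) relative measurements from pose i to pose j;
    gps: list of (i, z) absolute measurements for pose i.
    Returns the pose estimates minimizing the weighted squared residuals.
    """
    idx = [i for i, _, _ in odom] + [j for _, j, _ in odom] + [i for i, _ in gps]
    n = max(idx) + 1
    rows, b = [], []
    for i, j, dz in odom:            # residual w * (x_j - x_i - dz)
        r = np.zeros(n)
        r[j], r[i] = w_odom, -w_odom
        rows.append(r)
        b.append(w_odom * dz)
    for i, z in gps:                 # residual w * (x_i - z)
        r = np.zeros(n)
        r[i] = w_gps
        rows.append(r)
        b.append(w_gps * z)
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(b), rcond=None)
    return x
```

In the real problem the residuals are nonlinear in the pose variables, so an iterative solver (e.g., Gauss-Newton) replaces the closed-form step.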

作为对于图1对应实施例的进一步介绍,可以通过以下方式判断是否存在运动闭环:确定所述激光雷达采集的点云数据中当前帧与所述关键帧的比对信息;判断所述比对信息是否符合预设条件;若是,则判定存在运动闭环;若否,则判定不存在运动闭环。As a further introduction to the embodiment corresponding to Figure 1, whether a motion loop exists can be determined in the following manner: determine the comparison information between the current frame and the key frame in the point cloud data collected by the laser radar; determine whether the comparison information meets the preset conditions; if so, determine that a motion loop exists; if not, determine that no motion loop exists.

上述比对信息是通过将点云数据中的当前帧与点云数据中的关键帧进行比对得到的结果，点云数据中的当前帧为最近一次采集的一帧点云数据，上述点云数据中的关键帧为点云数据中与所述惯性测量数据同时采集的数据帧。也就是说，点云数据中的关键帧指：与任一帧惯性测量数据同时采集的一帧点云数据。The above comparison information is the result obtained by comparing the current frame in the point cloud data with the key frame in the point cloud data, where the current frame in the point cloud data is the most recently acquired frame of point cloud data, and the key frame in the point cloud data is the frame in the point cloud data acquired simultaneously with the inertial measurement data. In other words, the key frame in the point cloud data refers to: a frame of point cloud data acquired simultaneously with any frame of inertial measurement data.

上述比对信息包括：所述当前帧与所述关键帧的时间差、所述当前帧与所述关键帧之间的运动轨迹长度、所述当前帧与所述关键帧之间的直线距离、以及所述当前帧与所述关键帧的点云配准分数中的任一项或任几项的组合；The comparison information includes any one of, or a combination of several of: the time difference between the current frame and the key frame, the length of the motion trajectory between the current frame and the key frame, the straight-line distance between the current frame and the key frame, and the point cloud registration score between the current frame and the key frame;

若当前帧与关键帧的时间差大于第一阈值，则判定比对信息符合预设条件；若当前帧与关键帧之间的运动轨迹长度大于第二阈值，则判定比对信息符合预设条件；若当前帧与关键帧之间的直线距离小于第三阈值，则判定比对信息符合预设条件；若当前帧与所述关键帧的点云配准分数小于第四阈值，则判定比对信息符合预设条件。If the time difference between the current frame and the key frame is greater than the first threshold, the comparison information is determined to meet the preset conditions; if the length of the motion trajectory between the current frame and the key frame is greater than the second threshold, the comparison information is determined to meet the preset conditions; if the straight-line distance between the current frame and the key frame is less than the third threshold, the comparison information is determined to meet the preset conditions; if the point cloud registration score between the current frame and the key frame is less than the fourth threshold, the comparison information is determined to meet the preset conditions.

作为一种可行的实施方式,本实施例可以计算所述激光雷达采集的点云数据中当前帧与关键帧的时间差,得到第一比对信息;还可以计算所述激光雷达采集的点云数据中当前帧与关键帧之间的运动轨迹长度,得到第二比对信息;还可以计算所述激光雷达采集的当前帧与所述关键帧之间的直线距离,得到第三比对信息;还可以计算所述激光雷达采集的点云数据中当前帧与关键帧的点云配准分数,得到第四比对信息;将第一比对信息、第二比对信息、第三比对信息和第四比对信息中的任一项或任几项的组合作为检测运动闭环的比对信息。As a feasible implementation method, this embodiment can calculate the time difference between the current frame and the key frame in the point cloud data collected by the laser radar to obtain the first comparison information; it can also calculate the length of the motion trajectory between the current frame and the key frame in the point cloud data collected by the laser radar to obtain the second comparison information; it can also calculate the straight-line distance between the current frame collected by the laser radar and the key frame to obtain the third comparison information; it can also calculate the point cloud alignment score of the current frame and the key frame in the point cloud data collected by the laser radar to obtain the fourth comparison information; any one or a combination of any several of the first comparison information, the second comparison information, the third comparison information and the fourth comparison information is used as the comparison information for detecting the motion closed loop.
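A sketch of the comparison check; the threshold values are placeholders, and this variant requires all four conditions to hold simultaneously, whereas the embodiment allows any single condition or combination to be used:

```python
def is_loop_closure(dt, traj_len, dist, reg_score,
                    t_min=30.0, traj_min=50.0, d_max=5.0, s_max=0.3):
    """Decide whether the comparison information indicates a motion loop.

    Thresholds t_min/traj_min/d_max/s_max are illustrative values, not
    parameters given by the application. This sketch combines all four
    criteria; an implementation may use any subset.
    """
    return (dt > t_min              # time difference exceeds the first threshold
            and traj_len > traj_min  # trajectory length exceeds the second threshold
            and dist < d_max         # straight-line distance below the third threshold
            and reg_score < s_max)   # registration score below the fourth threshold
```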

下面通过在实际应用中的实施例说明上述实施例描述的流程。The process described in the above embodiment is explained below through an embodiment in actual application.

为了提高位姿估计的精度以及多场景下的适应性，多传感器融合的方案被越来越多地采用。在未知环境中，移动机器人的任务执行依赖于准确的自身定位。定位精度直接关系到地图构建的准确性以及任务的执行结果。为了实现定位和地图构建，通常会使用多种传感器，如激光雷达、相机和IMU等。相机可以利用图像信息生成里程计，但所生成的地图往往难以直接反映环境结构。相比之下，激光雷达可以通过点云配准来实现定位，并同时创建密集的点云地图，真实地反馈周围环境特征。使用单一的激光雷达进行定位通常依赖点云图之间的特征匹配，这可以构建出激光雷达里程计。然而，由于传感器本身存在误差，这些误差会随着时间的推移而累积，尤其在长走廊等缺乏明显特征点的情况下，激光雷达里程计常常会失效。因此，为了确保在多种不同场景下移动机器人自定位的适应性，越来越多的研究采用了多传感器融合的方法。这种多传感器融合的方法能够有效地克服单一传感器存在的限制和误差，从而提高移动机器人的自主定位和导航能力。In order to improve the accuracy of pose estimation and adaptability in multiple scenarios, multi-sensor fusion solutions are increasingly being adopted. In unknown environments, the task execution of mobile robots depends on accurate self-positioning. The positioning accuracy is directly related to the accuracy of map construction and the execution results of tasks. In order to achieve positioning and map construction, multiple sensors such as lidar, camera, and IMU are usually used. Cameras can generate odometers using image information, but the generated maps are often difficult to directly reflect the structure of the environment. In contrast, lidar can achieve positioning through point cloud registration and simultaneously create dense point cloud maps to truly reflect the characteristics of the surrounding environment. Using a single lidar for positioning usually relies on feature matching between point cloud maps, which can construct a lidar odometer. However, due to the errors of the sensor itself, these errors will accumulate over time, especially in long corridors and other situations where there are no obvious feature points, the lidar odometer often fails. Therefore, in order to ensure the adaptability of mobile robot self-positioning in a variety of different scenarios, more and more studies have adopted multi-sensor fusion methods. This multi-sensor fusion method can effectively overcome the limitations and errors of a single sensor, thereby improving the autonomous positioning and navigation capabilities of mobile robots.

本实施例提出了一种基于多传感器信息融合的自定位方法,采用激光雷达、GPS和IMU为主要传感器,实现多传感器信息融合的移动机器人自定位与地图构建。在处理IMU数据时,引入了离散积分模型,以推导出预积分模型,从而简化了IMU数据的积分计算过程。通过激光雷达里程计和IMU预积分模型结合,构建因子图优化框架,以提高定位和地图构建的精度和可靠性。此外,引入GPS因子和回环检测因子,以进一步增强多传感器信息融合的精确度及可靠性。本实施例还引入回环检测环节,对累计误差进行优化,减小闭环误差;通过优化残差方程,得到高精度的机器人位姿估计。请参见图2,图2为本申请实施例所提供的一种基于多传感器信息融合的自定位方法流程图,过程如下:对GPS测量值(即,GPS定位模块的测量值)进行ENU坐标转换,对IMU测量值(即,惯性测量单元的测量值)进行IMU初始化、IMU预积分,对原始点云(即,点云数据)进行点云预处理、特征点提取、特征匹配、检测关键帧、回环检测,根据GPS测量值的ENU坐标转换结果、IMU预积分结果、回环检测结果和特征匹配结果进行因子图优化,根据因子图优化结果进行全局位姿优化。上述方案还可以根据特征匹配结果构建激光雷达里程计。This embodiment proposes a self-positioning method based on multi-sensor information fusion, using laser radar, GPS and IMU as the main sensors to achieve self-positioning and map construction of mobile robots with multi-sensor information fusion. When processing IMU data, a discrete integral model is introduced to derive a pre-integration model, thereby simplifying the integral calculation process of IMU data. By combining the laser radar odometer and the IMU pre-integration model, a factor graph optimization framework is constructed to improve the accuracy and reliability of positioning and map construction. In addition, GPS factors and loop detection factors are introduced to further enhance the accuracy and reliability of multi-sensor information fusion. This embodiment also introduces a loop detection link to optimize the accumulated error and reduce the closed-loop error; by optimizing the residual equation, a high-precision robot pose estimation is obtained. Please refer to Figure 2, which is a flow chart of a self-positioning method based on multi-sensor information fusion provided by an embodiment of the present application. 
The process is as follows: perform ENU coordinate conversion on the GPS measurement value (i.e., the measurement value of the GPS positioning module), perform IMU initialization and IMU pre-integration on the IMU measurement value (i.e., the measurement value of the inertial measurement unit), perform point cloud preprocessing, feature point extraction, feature matching, key frame detection, loop detection on the original point cloud (i.e., point cloud data), perform factor graph optimization based on the ENU coordinate conversion results of the GPS measurement value, the IMU pre-integration results, the loop detection results, and the feature matching results, and perform global pose optimization based on the factor graph optimization results. The above scheme can also construct a lidar odometer based on the feature matching results.

本实施例提供的基于多传感器信息融合的自定位方法包括以下步骤:The self-positioning method based on multi-sensor information fusion provided in this embodiment includes the following steps:

步骤C1:对IMU进行初始化,并将IMU的测量值存储至向量中。Step C1: Initialize the IMU and store the IMU measurement values into a vector.

其中,在对IMU进行初始化后,可以通过三次B样条对IMU反馈的角速度进行拟合,得到IMU角度随时间的变化关系。Among them, after the IMU is initialized, the angular velocity fed back by the IMU can be fitted by cubic B-spline to obtain the relationship between the change of the IMU angle and time.

步骤C2：接收到激光雷达的点云数据后，根据点的时间戳由IMU信息插值得到位姿变换矩阵，并通过位姿变换矩阵去除移动机器人运动引起的点云畸变。Step C2: After receiving the point cloud data from the lidar, the pose transformation matrix is interpolated from the IMU information according to the timestamp of the point, and the point cloud distortion caused by the movement of the mobile robot is removed by the pose transformation matrix.

请参见图3,图3为本申请实施例所提供的一种激光单帧扫描示意图,图中示出了扫描起始时刻和当前扫描时刻,x为横坐标,y为纵坐标。每次扫描过程中,将开始时刻记为ts,在世界坐标系下,扫描起点与y轴的夹角记为θs;在tn时刻,扫描到的点与y轴的夹角记为θn,扫描结束时记为te,可推算出时间与角度的关系如下:Please refer to Figure 3, which is a schematic diagram of a laser single-frame scanning provided by an embodiment of the present application, showing the scanning start time and the current scanning time, with x as the horizontal coordinate and y as the vertical coordinate. During each scanning process, the start time is recorded as ts , and in the world coordinate system, the angle between the scanning start point and the y-axis is recorded as θs ; at time tn , the angle between the scanned point and the y-axis is recorded as θn , and the end of the scan is recorded as te . The relationship between time and angle can be calculated as follows:

$$t_n = t_s + \frac{\theta_n - \theta_s}{2\pi}\,(t_e - t_s)$$ 式(1); Formula (1);

请参见图4，图4为本申请实施例所提供的一种IMU插值法求解变换矩阵示意图，t表示时间，IMU分别在tk-1、tk、tk+1、tk+2时刻接收位姿信息，通过换算可得到各个时刻位姿相对于起始时刻位姿的转换矩阵。根据式(1)得到当前扫描时刻tn后（即点云数据的时间戳），根据时间戳在IMU数据中匹配到tn最近的两个时刻（tk和tk+1）对应的转换矩阵（即第二位姿转换矩阵）$T_{t_k}$和$T_{t_{k+1}}$，通过插值法计算tn时刻的转换矩阵（即第一位姿转换矩阵），记为$T_{t_n}$：Please refer to Figure 4, which is a schematic diagram of solving the transformation matrix using an IMU interpolation method provided in an embodiment of the present application. t represents time. The IMU receives posture information at time t k-1 , t k , t k+1 , and t k+2 respectively. By conversion, the transformation matrix of the posture at each time relative to the posture at the starting time can be obtained. After the current scanning time tn (i.e., the timestamp of the point cloud data) is obtained according to formula (1), the transformation matrices (i.e., the second pose transformation matrices) $T_{t_k}$ and $T_{t_{k+1}}$ corresponding to the two times ( tk and tk+1 ) closest to tn are matched in the IMU data according to the timestamp, and the transformation matrix at time tn (i.e., the first pose transformation matrix), denoted $T_{t_n}$, is calculated by interpolation:

$$T_{t_n} = \frac{b_b}{b_f + b_b}\,T_{t_k} + \frac{b_f}{b_f + b_b}\,T_{t_{k+1}}$$ 式(2); Formula (2);

图4中bf表示tn与tk之间的时间差,bb表示tn与tk+1之间的时间差。In Figure 4, bf represents the time difference between tn and tk , and bb represents the time difference between tn and tk+1 .

得到转换矩阵后,按式(2)对每个采样时刻的点云进行坐标变换,消除激光雷达在移动机器人运动过程中引起的点云畸变。After obtaining the transformation matrix, the coordinate transformation of the point cloud at each sampling moment is performed according to formula (2) to eliminate the point cloud distortion caused by the laser radar during the movement of the mobile robot.

步骤C3:根据曲率提取点云中的特征点,通过特征点的帧间匹配得到激光雷达里程计。Step C3: Extract feature points in the point cloud according to the curvature, and obtain the laser radar odometer by inter-frame matching of the feature points.

其中,本步骤可以计算点云中各点的曲率,根据曲率提取特征点,通过每帧点云图中的特征点进行帧间匹配,获取激光里程计。在本步骤之前还可以存在对点云数据进行去噪、下采样的操作。激光里程计本质上是一个点云配准过程,配准后得到的转换矩阵即当前帧相对上一帧的位姿变化。Among them, this step can calculate the curvature of each point in the point cloud, extract feature points based on the curvature, and perform frame matching through the feature points in each frame of the point cloud image to obtain the laser odometer. Before this step, there can also be operations of denoising and downsampling the point cloud data. Laser odometer is essentially a point cloud registration process. The transformation matrix obtained after registration is the change in posture of the current frame relative to the previous frame.

在计算曲率时，选择目标点周边n个点组建点集S，目标点曲率c的计算如式(3)所示，考虑数据处理时间及精度，通常选取n=5。When calculating the curvature, n points around the target point are selected to form a point set S. The curvature c of the target point is calculated as shown in formula (3). Considering the data processing time and accuracy, n=5 is usually selected.

$$c = \frac{1}{|S| \cdot \|p_i\|}\,\Bigg\|\sum_{j \in S,\, j \neq i}\big(p_i - p_j\big)\Bigg\|$$ 式(3); Formula (3);

其中,pi为目标点的坐标值,pj为周边点的坐标值。Among them, pi is the coordinate value of the target point, and pj is the coordinate value of the surrounding points.

若计算到的c值较大,则将该目标点记为边缘点;若c值较小,则记为平面点。在各个区域中挑选出任意数量个(如2个)c值最大的边缘点和任意数量个(如4个)c值最小的平面点作为特征点。If the calculated c value is large, the target point is recorded as an edge point; if the c value is small, it is recorded as a plane point. In each area, any number (such as 2) of edge points with the largest c value and any number (such as 4) of plane points with the smallest c value are selected as feature points.
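A sketch of curvature computation and feature selection in the spirit of formula (3), assuming the points are ordered by scan sequence and taking `n_neighbors` points on each side of the target point (parameter names and counts are illustrative):

```python
import numpy as np

def extract_features(points, n_neighbors=5, n_edge=2, n_planar=4):
    """Compute per-point curvature over a symmetric scan-order window and
    select the highest-curvature points as edge points and the
    lowest-curvature points as planar points (a simplified sketch)."""
    n = len(points)
    curvature = np.full(n, np.nan)
    for i in range(n_neighbors, n - n_neighbors):
        # sum of (p_i - p_j) over the window; the j == i term contributes zero
        diff = np.sum(points[i] - points[i - n_neighbors:i + n_neighbors + 1], axis=0)
        curvature[i] = np.linalg.norm(diff) / ((2 * n_neighbors) * np.linalg.norm(points[i]))
    valid = np.where(~np.isnan(curvature))[0]
    order = valid[np.argsort(curvature[valid])]
    planar = order[:n_planar]   # smallest curvature -> planar points
    edge = order[-n_edge:]      # largest curvature -> edge points
    return edge, planar, curvature
```

Points on a straight, evenly spaced segment get curvature near zero, while points at a geometric corner score high, which matches the edge/planar split described above.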

假定在第k帧点云中拟合到的边缘点集为εk，平面点集为Hk，将第k+1帧点云中的特征点投影到扫描开始的时刻，获得边缘点集$\tilde{\varepsilon}_{k+1}$，平面点集$\tilde{H}_{k+1}$。Assume that the edge point set fitted in the k-th frame point cloud is ε k and the plane point set is H k , project the feature points in the k+1-th frame point cloud to the moment when the scan starts, and obtain the edge point set $\tilde{\varepsilon}_{k+1}$ and the plane point set $\tilde{H}_{k+1}$.

处理边缘点时通过点到线的策略进行匹配:在第k+1帧点云的边缘点集εk+1中选择一点i,通过ICP算法,寻找εk中的最近邻点j,以及相邻扫描线上点j的最近邻点l,保证三点不会共线。接着根据式(4)计算点i到直线(即点j和点l的连线,又称目标直线)距离d(即第一距离)。When processing edge points, the point-to-line strategy is used for matching: select a point i from the edge point set ε k+1 of the k+1 frame point cloud, and use the ICP algorithm to find the nearest neighbor point j in ε k and the nearest neighbor point l of point j on the adjacent scan line to ensure that the three points are not collinear. Then, the distance d (i.e., the first distance) from point i to the straight line (i.e., the line connecting point j and point l, also known as the target straight line) is calculated according to formula (4).

$$d = \frac{\Big|\big(\tilde{X}_{(k+1,i)} - X_{(k,j)}\big) \times \big(\tilde{X}_{(k+1,i)} - X_{(k,l)}\big)\Big|}{\big|X_{(k,j)} - X_{(k,l)}\big|}$$ 式(4); Formula (4);

where X̃_{(k+1,i)} is the coordinate of point i in the (k+1)-th frame mapped to the initial moment of the current frame, X_{(k,j)} is the coordinate of point j in the k-th frame, and X_{(k,l)} is the coordinate of point l in the k-th frame.

Planar points are matched with a point-to-plane strategy: a point i is found in the planar point set H_{k+1} of the (k+1)-th frame, two points j and l adjacent to point i are selected in H_k, and a point m adjacent to point j is then matched on an adjacent scan line. The plane formed by the three points j, l and m (the target plane) is not coplanar with point i. The distance d_h (i.e., the second distance) from point i to this plane is calculated according to Formula (5).

d_h = | (X̃_{(k+1,i)} − X_{(k,j)}) · ((X_{(k,j)} − X_{(k,l)}) × (X_{(k,j)} − X_{(k,m)})) | / | (X_{(k,j)} − X_{(k,l)}) × (X_{(k,j)} − X_{(k,m)}) |    Formula (5);

where X_{(k,m)} is the coordinate of point m in the k-th frame.
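The two residuals of Formulas (4) and (5) reduce to standard point-to-line and point-to-plane distances. A minimal sketch (illustrative only, not from the patent):

```python
import numpy as np

def point_to_line(p, a, b):
    """Formula (4): distance from p to the line through a and b,
    computed as the parallelogram area over the base length."""
    return np.linalg.norm(np.cross(p - a, p - b)) / np.linalg.norm(a - b)

def point_to_plane(p, a, b, c):
    """Formula (5): distance from p to the plane spanned by a, b and c,
    via the projection of (p - a) onto the plane normal."""
    n = np.cross(b - a, c - a)
    return abs(np.dot(p - a, n)) / np.linalg.norm(n)
```

During optimization these distances are evaluated with the deskewed (k+1)-th frame point X̃_{(k+1,i)} as p and the k-th frame neighbors as a, b, c.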

The pose of the mobile robot in space consists of a rotation matrix R and a translation vector t, so the coordinate transformation of corresponding feature points between two point cloud frames can be expressed as follows:

X_{k+1} = R · X_k + t    Formula (6);

where X_{k+1} denotes the feature point coordinates in the (k+1)-th frame and X_k denotes the feature point coordinates in the k-th frame; Formula (6) is the point cloud coordinate transformation between adjacent frames.

The pose transformation of point i is therefore given by Formula (7).

T_{(k+1,i)} = ((t_i − t_k) / (t_{k+1} − t_k)) · T_{k+1}    Formula (7);

where T_{(k+1,i)} denotes the pose matrix of point i at the start of the (k+1)-th frame, t_i is the time corresponding to point i, t_k and t_{k+1} denote the times at which the point cloud scans of the k-th and (k+1)-th frames end, and T_{k+1} denotes the transformation matrix of the (k+1)-th frame point cloud relative to the k-th frame point cloud.

The transformation of point i in the (k+1)-th frame to the start time of the current frame is:

X̃_{(k+1,i)} = R_{(k+1,i)} · X_{(k+1,i)} + p_{(k+1,i)}    Formula (8);

where X̃_{(k+1,i)} denotes the coordinate of point i in the (k+1)-th frame transformed to the start time of the current frame, X_{(k+1,i)} denotes the coordinate of point i in the (k+1)-th frame, and R_{(k+1,i)} and p_{(k+1,i)} are the rotation and translation components of T_{(k+1,i)}.

In this embodiment, the adjacent-frame point cloud coordinate transformation can be converted into Formula (8), and the edge point constraint equation f_ε can be obtained from Formulas (4) and (8):

f_ε(X_{(k+1,i)}, T_{k+1}) = d    Formula (9).

Similarly, the adjacent-frame point cloud coordinate transformation can be converted into Formula (8), and the planar point constraint equation f_H can be obtained from Formulas (5) and (8):

f_H(X_{(k+1,i)}, T_{k+1}) = d_h    Formula (10).

The constraint equations are solved with the Levenberg-Marquardt method. Taking the minimization of the distance d obtained in Formula (4) as the optimization objective, the transformation between point cloud frames (i.e., the point cloud feature matching result) is obtained:

T_{k+1} ← T_{k+1} − (JᵀJ + λ · diag(JᵀJ))⁻¹ Jᵀ d    Formula (11).

where J = ∂f/∂T_{k+1} denotes the Jacobian matrix, λ denotes a coefficient factor, ᵀ denotes the matrix transpose, diag is the function used to construct a diagonal matrix, and f denotes the constraint equations.

Step C4: Pre-integrate the IMU data according to the times corresponding to the lidar keyframes.

Using its internal accelerometer and gyroscope, the IMU measures the acceleration and angular velocity of the mobile robot, and integration yields the pose transformation at each moment. Assume that, in the world coordinate system, the robot motion parameters measured by the IMU are the velocity v, angular velocity w and acceleration a, and the pose consists of the rotation matrix R and translation vector p. The motion equations are:

Ṙ = R · w^∧,  ṗ = v,  v̇ = a    Formula (12).

where Ṙ denotes the first derivative of R, ṗ the first derivative of p, v̇ the first derivative of v, and w^∧ the skew-symmetric matrix corresponding to the angular velocity w. After a time ΔT, the pose transformation is obtained by integrating Formula (12):

R(t + Δt) = R(t) · Exp(w(t) Δt),  v(t + Δt) = v(t) + a(t) Δt,  p(t + Δt) = p(t) + v(t) Δt + ½ a(t) Δt²    Formula (13).

where t denotes a given moment and Exp denotes the exponential map. Formula (13) yields the rotation matrix integration result R(t + Δt), the velocity integration result v(t + Δt) and the translation vector integration result p(t + Δt); R(t), w(t), v(t), a(t) and p(t) denote the rotation matrix, angular velocity, velocity, acceleration and translation vector at time t, and Δt denotes the time increment.
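One discrete propagation step of Formula (13) can be sketched as below. This is an illustrative sketch, not code from the patent; the Rodrigues formula is used for the exponential map Exp, and the acceleration is assumed to be already expressed in the world frame (gravity handling is deferred to the error model of Formula (14)).

```python
import numpy as np

def so3_exp(phi):
    """Exponential map Exp: rotation vector phi -> rotation matrix (Rodrigues)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    k = phi / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])  # skew-symmetric matrix k^
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def integrate_step(R, v, p, w, a, dt):
    """One step of Formula (13): propagate rotation, velocity and position
    over dt with body angular velocity w and world-frame acceleration a."""
    R_next = R @ so3_exp(w * dt)
    v_next = v + a * dt
    p_next = p + v * dt + 0.5 * a * dt * dt
    return R_next, v_next, p_next
```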

Since the data measured by the IMU contain certain random errors, which are assumed to follow a Gaussian distribution, the IMU measurement error model is expressed as:

w̃_t = w_t + b^g_t + η^g_t,  ã_t = R_tᵀ (a_t − g) + b^a_t + η^a_t    Formula (14).

where w̃_t is the angular velocity in the IMU measurement, ã_t is the acceleration in the IMU measurement, b^g_t and η^g_t denote the gyroscope bias and Gaussian noise, b^a_t and η^a_t denote the accelerometer bias and Gaussian noise, w_t is the actual angular velocity, a_t is the actual acceleration, and g is the gravitational acceleration.

The relationship between the state quantities and the measured values can be deduced from Formulas (13) and (14):

R(t + Δt) = R(t) · Exp((w̃ − b^g − η^{gd}) Δt),  v(t + Δt) = v(t) + g Δt + R(t)(ã − b^a − η^{ad}) Δt,  p(t + Δt) = p(t) + v(t) Δt + ½ g Δt² + ½ R(t)(ã − b^a − η^{ad}) Δt²    Formula (15);

where η^{gd} and η^{ad} denote discrete random noise, w̃ denotes the angular velocity measurement, and ã denotes the acceleration measurement.

Cov(η^{gd}(t)) = (1/Δt) · Cov(η^g),  Cov(η^{ad}(t)) = (1/Δt) · Cov(η^a)    Formula (16);

where Cov denotes the covariance function.

In practice, the measurement frequency of the IMU is much higher than that of the lidar. Therefore, according to the lidar measurement frequency, the point cloud data fed back at the same moment as the IMU is taken as a keyframe, and the IMU data at the times corresponding to the keyframes is integrated together:

R_j = R_i · Π_{k=i}^{j−1} Exp((w̃_k − b^g_k − η^{gd}_k) Δt),
v_j = v_i + g Δt_ij + Σ_{k=i}^{j−1} R_k (ã_k − b^a_k − η^{ad}_k) Δt,
p_j = p_i + Σ_{k=i}^{j−1} [ v_k Δt + ½ g Δt² + ½ R_k (ã_k − b^a_k − η^{ad}_k) Δt² ]    Formula (17);

where Δt_ij denotes the time interval between times i and j, i and j denote different times, and k denotes the time corresponding to a keyframe; R_j, v_j and p_j denote the rotation matrix, velocity and translation vector at time j; R_i, v_i and p_i denote the rotation matrix, velocity and translation vector at time i; v_k and R_k denote the velocity and rotation matrix at time k; η^{gd}_k and η^{ad}_k denote the random noise at time k; b^a_k denotes the accelerometer bias at time k; b^g_k denotes the gyroscope bias at time k; ã_k denotes the acceleration measurement at time k; w̃_k denotes the angular velocity measurement at time k; and Δt denotes the time increment.

If the pose at time i is to be optimized later, all subsequent integration results would need to be updated. To avoid this situation, the rotation increment ΔR_ij, velocity increment Δv_ij and translation increment Δp_ij are defined as shown in Formula (18). After the transformation, none of the calculations in Formula (18) depend on R_i. Therefore, optimizing the pose does not affect the integrated values, and there is no need to update subsequent integration results.

ΔR_ij = R_iᵀ R_j = Π_{k=i}^{j−1} Exp((w̃_k − b^g_k − η^{gd}_k) Δt),
Δv_ij = R_iᵀ (v_j − v_i − g Δt_ij) = Σ_{k=i}^{j−1} ΔR_ik (ã_k − b^a_k − η^{ad}_k) Δt,
Δp_ij = R_iᵀ (p_j − p_i − v_i Δt_ij − ½ g Δt_ij²) = Σ_{k=i}^{j−1} [ Δv_ik Δt + ½ ΔR_ik (ã_k − b^a_k − η^{ad}_k) Δt² ]    Formula (18);

where Δv_ik denotes the velocity change between times i and k.

Applying a Taylor expansion to the bias terms and discarding the higher-order terms simplifies them to linear functions, so the pre-integrated values can be corrected when the biases change. Formula (18) can then be converted to:

ΔR̃_ij = ΔR_ij · Exp(δφ_ij),  Δṽ_ij = Δv_ij + δv_ij,  Δp̃_ij = Δp_ij + δp_ij    Formula (19);

where ΔR̃_ij, Δṽ_ij and Δp̃_ij denote the rotation matrix, velocity and translation vector measurements of the pre-integration model, δφ_ij denotes the rotation noise, δv_ij the velocity noise, and δp_ij the displacement noise.

Step C4 improves on conventional IMU integration: when processing the IMU data, a discrete integration model is introduced to derive a pre-integration model, thereby simplifying the integration of the IMU data.
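The relative increments of Formula (18) can be accumulated directly from the raw measurements, without reference to the absolute pose R_i. The sketch below is illustrative only (not the patent's implementation): biases are held constant over the interval, noise terms are dropped, and gravity is deliberately not compensated inside the deltas, since it re-enters when the increments are applied to an absolute state.

```python
import numpy as np

def so3_exp(phi):
    """Rodrigues formula for the SO(3) exponential map."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    k = phi / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def preintegrate(measurements, bg, ba, dt):
    """Accumulate (dR, dv, dp) of Formula (18) from a list of
    (angular velocity, acceleration) samples between two keyframes.
    bg/ba are the gyroscope and accelerometer biases."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in measurements:
        a_c = a - ba                                  # bias-corrected accel
        dp = dp + dv * dt + 0.5 * (dR @ a_c) * dt * dt
        dv = dv + (dR @ a_c) * dt
        dR = dR @ so3_exp((w - bg) * dt)
    return dR, dv, dp
```

With constant acceleration and zero rotation the deltas reduce to the familiar v = a·t and p = ½·a·t², which is a useful sanity check.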

Step C5: Receive the current GPS measurement, perform ENU coordinate conversion, and use these data to jointly construct a factor graph optimization model to globally optimize the pose.

The position information obtained from GPS is expressed as latitude, longitude and altitude in the WGS-84 (World Geodetic System 1984) reference coordinate system. To convert it to the global coordinate system, it must first be converted to the ENU (East-North-Up) coordinate system. According to Formula (20), a point (La, Lo, H) in the WGS-84 coordinate system can be converted to the ECEF coordinate system, denoted (X_E, Y_E, Z_E).

X_E = (N + H) cos(La) cos(Lo),  Y_E = (N + H) cos(La) sin(Lo),  Z_E = (N (1 − e²) + H) sin(La)    Formula (20);

where e denotes the eccentricity, N denotes the prime-vertical radius of curvature, La denotes the latitude, Lo denotes the longitude, and H denotes the altitude.

The result is then converted to the ENU coordinate system:

[E, N, U]ᵀ = M · ([X_E, Y_E, Z_E]ᵀ − [X_N, Y_N, Z_N]ᵀ)    Formula (21);

where (E, N, U) are the point coordinates in the ENU coordinate system, (X_N, Y_N, Z_N) denotes the coordinate system origin offset (the ECEF coordinates of the ENU origin), and M denotes the transformation matrix, expressed as:

M = [ −sin(Lo₀), cos(Lo₀), 0 ;  −sin(La₀)cos(Lo₀), −sin(La₀)sin(Lo₀), cos(La₀) ;  cos(La₀)cos(Lo₀), cos(La₀)sin(Lo₀), sin(La₀) ]    Formula (22);

where (La₀, Lo₀) denotes the latitude and longitude of the ENU origin.

From Formulas (20) and (21), the transformation from WGS-84 to ENU can be deduced:

[E, N, U]ᵀ = M · (P_ECEF(La, Lo, H) − P_ECEF(La₀, Lo₀, H₀))    Formula (23);
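The WGS-84 → ECEF → ENU chain of Formulas (20)–(23) can be sketched as follows. This is an illustrative sketch, not the patent's code; the WGS-84 ellipsoid constants are the standard published values, and angles are taken in radians.

```python
import numpy as np

# Standard WGS-84 ellipsoid constants
A = 6378137.0              # semi-major axis [m]
E2 = 6.69437999014e-3      # first eccentricity squared e^2

def geodetic_to_ecef(la, lo, h):
    """Formula (20): latitude la, longitude lo (radians), altitude h -> ECEF."""
    N = A / np.sqrt(1.0 - E2 * np.sin(la) ** 2)   # prime-vertical radius
    x = (N + h) * np.cos(la) * np.cos(lo)
    y = (N + h) * np.cos(la) * np.sin(lo)
    z = (N * (1.0 - E2) + h) * np.sin(la)
    return np.array([x, y, z])

def ecef_to_enu(p, la0, lo0, h0):
    """Formulas (21)-(23): ECEF point p -> ENU at origin (la0, lo0, h0)."""
    M = np.array([
        [-np.sin(lo0),                np.cos(lo0),                 0.0],
        [-np.sin(la0) * np.cos(lo0), -np.sin(la0) * np.sin(lo0),  np.cos(la0)],
        [ np.cos(la0) * np.cos(lo0),  np.cos(la0) * np.sin(lo0),  np.sin(la0)],
    ])
    return M @ (p - geodetic_to_ecef(la0, lo0, h0))
```

A point due east of the origin (same latitude, slightly larger longitude) should map to a positive E component with N and U near zero, which provides a quick sanity check on the rotation matrix M.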

After the ENU coordinate information is obtained, it is combined with the IMU pre-integration data, the point cloud feature matching result and the keyframes to construct a factor graph optimization model. In essence, a factor graph model finds the maximum a posteriori probability of the state values given the observations, and reduces the optimization function to a least-squares problem. Given the robot state quantities x_i and landmark state variables l_i, the observations z_i can be inferred:

P(z | x) = Π_i P(z_i | x_i)    Formula (24);

where P(z | x) is the conditional probability of the observed variables given the robot's current state, and P(z_i | x_i) denotes the conditional probability of the i-th observed variable z_i given the state quantity x_i.

The SLAM positioning problem is to solve for the robot's state quantities x_i from the observations z_i measured by the sensors, which is a posterior probability problem:

P(x | z) = P(z | x) · P(x) / P(z)    Formula (25);

where P(x) is the initial estimate (prior) of the robot's position and P(z) is the prior probability distribution of the sensor measurements.

This is converted to a maximum a posteriori solution:

x* = argmax_x P(x | z) = argmax_x P(z | x) P(x)    Formula (26);

where x* denotes the optimal robot state quantity and x denotes the robot state.

The prior probability and the likelihood in Formula (26) are decomposed and treated as factors.

P(z | x) P(x) ∝ ϕ(x₀) · Π_i ϕ_i(x_i)    Formula (27);

where ϕ_i denotes a factor and ϕ(x_i) denotes the prior probability distribution over the robot state quantity x_i. The optimization function becomes:

x* = argmax_x ϕ(x₀) · Π_i ϕ_i(x_i)    Formula (28);

where ϕ(x₀) denotes the initial estimate of the robot state x₀ in the absence of observation data.

The SLAM model is converted into a factor graph, with the state variables recorded as nodes and the observed variables as edges, as shown in Figure 5. Figure 5 is a factor graph for SLAM positioning provided by an embodiment of the present application, in which x1, x2, x3 denote robot state nodes, l1, l2 denote landmark state variables, f1 denotes the prior factor, f2 denotes the landmark observation factor, and f3 denotes the odometry factor.

In reality, sensor observations fluctuate to some extent and usually follow a Gaussian distribution, so a factor is generally expressed exponentially:

ϕ_i(x_i) ∝ exp(−½ ‖f_i(x_i)‖²_{Σ_i})    Formula (29);

where f_i denotes the error function and Σ_i denotes the covariance matrix.

Substituting Formula (29) into Formula (28) and taking the negative logarithm of the right-hand side transforms the optimization function into a least-squares problem.

x* = argmin_x Σ_i ‖f_i(x_i)‖²_{Σ_i}    Formula (30);

The optimization function of the multi-sensor positioning system is finally expressed as:

x* = argmin_x Σ ( ‖f₁(x)‖²_{Σ₁} + ‖f₂(x)‖²_{Σ₂} + ‖f₃(x)‖²_{Σ₃} )    Formula (31);

where f₃(x₁) denotes the error function of the GPS factor, f₂(x₁) denotes the error function of the loop closure factor, and f₁(x₁) denotes the error function of the odometry factor, uniformly expressed as:

f(T_i, T_j) = ‖ Log( ΔT_ij⁻¹ · T_i⁻¹ · T_j ) ‖²    Formula (32);

where T_i and T_j denote the poses of the i-th and j-th frames in the world coordinate system, and ΔT_ij is the transformation matrix between the two nodes. Please refer to Figure 6, a schematic diagram of a factor graph model provided by an embodiment of the present application, in which f2 denotes the IMU pre-integration factor, f4 denotes the loop closure factor, f1 denotes the lidar odometry factor, x1, x2, x3, x4, x5 denote robot state nodes, f3 denotes the GPS factor, and GPS denotes the GPS measurement node.

This step introduces the GPS factor and loop closure factor to further enhance the accuracy and reliability of multi-sensor information fusion. The factor graph optimization model constructed in this step can model and handle the uncertainty of the measurement data, enabling the robot to estimate its own position more accurately.

Step C6: When a motion loop closure of the mobile robot is detected, perform a global optimization of the poses of the entire motion process and the constructed map to eliminate the loop closure error and complete loop closure detection.

When the robot works in a confined space, it may return to its starting point, i.e., a motion loop closure exists. Based on this closed-loop motion information, special constraints are constructed and a global optimization of the pose information of the robot's motion process is performed to reduce the loop closure error.

Please refer to Figure 7, a flowchart of loop closure detection provided by an embodiment of the present application. The specific process is as follows: construct keyframes, store keyframe poses, process the latest keyframe, and judge whether the loop closure conditions are met. If the conditions are not met, return to the keyframe construction step. If they are met, build a local map, perform ICP (iterative closest point) registration, and judge whether the score meets the requirement. If the score does not meet the loop closure condition, return to the keyframe construction step. If the score meets the condition, perform pose correction.

When constructing the map, checking every frame for a motion loop closure would be computationally expensive and time-consuming, so keyframe-based detection is adopted. The criteria for judging whether a motion loop closure exists are:

(1) The time difference between the current frame and the historical keyframe exceeds a threshold;

(2) The trajectory length between the current frame and the historical keyframe exceeds a threshold;

(3) The straight-line distance between the current frame and the historical keyframe is below a threshold;

(4) The point cloud registration score between the current frame and the historical keyframe is below a threshold.
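The four criteria above can be sketched as a simple gating function. This is illustrative only: the patent states that thresholds exist but not their values, so the numbers below are assumptions, as are the dictionary field names; the ICP fitness score of criterion (4) is assumed to be "lower is better".

```python
import numpy as np

# Assumed thresholds -- the patent does not specify values.
MIN_TIME_GAP = 30.0    # s, criterion (1)
MIN_TRAJ_LEN = 50.0    # m, criterion (2)
MAX_DISTANCE = 5.0     # m, criterion (3)
MAX_ICP_SCORE = 0.3    # ICP fitness, criterion (4)

def is_loop_candidate(cur, hist):
    """Apply criteria (1)-(3) to the current frame and one historical
    keyframe, each given as a dict with keys t, traj_len, pos."""
    time_ok = cur["t"] - hist["t"] > MIN_TIME_GAP
    traj_ok = cur["traj_len"] - hist["traj_len"] > MIN_TRAJ_LEN
    dist_ok = np.linalg.norm(np.asarray(cur["pos"]) - np.asarray(hist["pos"])) < MAX_DISTANCE
    return time_ok and traj_ok and dist_ok

def accept_loop(icp_score):
    """Criterion (4): accept the loop only if ICP registration of the
    current frame against the candidate's local map scores below threshold."""
    return icp_score < MAX_ICP_SCORE
```

Criteria (1) and (2) reject trivially adjacent frames, criterion (3) requires the robot to have physically returned near a previous place, and criterion (4) confirms the geometric match before the pose correction of Figure 7 is applied.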

A prerequisite for autonomous task execution by a mobile robot is ensuring that the robot can accurately determine its own position and build a map of the environment for navigation and decision-making. In unstructured environments, however, robots face a series of challenges, one of which is the weakening or interruption of positioning signals due to occlusion, which can seriously degrade positioning accuracy. Other sensors are therefore needed for robot self-positioning. In addition, in unstructured environments, the presence of dynamic objects also affects the robot's subsequent path planning and decision-making. This embodiment therefore proposes a self-positioning method based on multi-sensor information fusion, aiming to solve the problem that the accumulated error of a single sensor cannot be completely eliminated. First, a lidar odometry is constructed using point cloud feature extraction and registration. A back-end optimization model is then constructed by combining the lidar odometry factor, the inertial sensor pre-integration factor and the loop closure factor. A GPS factor is introduced to obtain global pose information, and by optimizing the residual equations, a pose estimate with high accuracy and robustness is obtained. This comprehensive multi-sensor fusion algorithm provides key support for autonomous task execution of mobile robots in unstructured environments, enabling them to localize and navigate more reliably.

An embodiment of the present application provides a self-positioning system for a mobile robot, wherein the mobile robot is a robot provided with a lidar, an inertial measurement unit and a GPS positioning module. The self-positioning system of the mobile robot comprises:

a first data acquisition module, configured to initialize the inertial measurement unit and record the measurement values of the inertial measurement unit to obtain inertial measurement data;

a second data acquisition module, configured to receive the point cloud data collected by the lidar, interpolate the inertial measurement data according to the timestamps of the point cloud data to obtain a first pose transformation matrix, and perform a coordinate transformation of the point cloud data using the first pose transformation matrix, wherein the first pose transformation matrix describes the pose change of the point cloud data's acquisition moment relative to the starting moment;

an inter-frame matching module, configured to determine feature points in the point cloud data, and perform inter-frame matching of the point cloud data according to the feature points to obtain a point cloud feature matching result;

a pre-integration module, configured to select keyframes from the inertial measurement data and pre-integrate the inertial measurement data between adjacent keyframes to obtain a pre-integration result, wherein a keyframe is a data frame collected simultaneously in the point cloud data and the inertial measurement data;

a third data acquisition module, configured to perform ENU coordinate conversion on the measurement values of the GPS positioning module to obtain ENU coordinate data;

a model construction module, configured to construct a factor graph optimization model according to the point cloud feature matching result, the pre-integration result and the ENU coordinate data;

an optimization module, configured to, if a motion loop closure is detected, globally optimize the pose information of the mobile robot using the factor graph optimization model, so as to obtain the positioning information of the mobile robot in the world coordinate system.

This embodiment is applied to a mobile robot provided with a lidar, an inertial measurement unit and a GPS positioning module. After initializing the inertial measurement unit, this embodiment records the inertial measurement data it collects, and uses these data to interpolate and coordinate-transform the point cloud data collected by the lidar, so as to correct the point cloud distortion caused by the lidar's motion while the mobile robot moves. This embodiment also performs inter-frame matching of the point cloud data to obtain a point cloud feature matching result, so as to accurately estimate the relative position change of the mobile robot. This embodiment determines simultaneously collected keyframes from the point cloud data and the inertial measurement data, and then performs pre-integration based on the keyframes, so as to reduce the amount of computation on the inertial measurement data during self-positioning while maintaining continuous tracking of the robot's motion state. This embodiment converts the measurement values of the GPS positioning module into ENU coordinate data via ENU coordinate conversion, so that the GPS measurements can be matched with the robot's local coordinate system and provide a global reference for positioning. This embodiment constructs a factor graph optimization model from the point cloud feature matching result, the pre-integration result and the ENU coordinate data, integrates the data of the different sensors, and performs a global optimization when a motion loop closure is detected, so as to eliminate accumulated errors and improve positioning accuracy and stability. It can thus be seen that, by fusing the data of the lidar, inertial measurement unit and GPS positioning module for positioning, this embodiment can improve the self-positioning accuracy of the mobile robot.

Further, the process by which the second data acquisition module interpolates the inertial measurement data according to the timestamps of the point cloud data to obtain the first pose transformation matrix includes: selecting the n-th frame of point cloud data; determining the timestamp of the n-th frame of point cloud data; setting the 2 frames of inertial measurement data closest to the timestamp of the n-th frame of point cloud data as reference inertial measurement data; generating the second pose transformation matrices corresponding to the reference inertial measurement data, wherein a second pose transformation matrix is the pose transformation matrix of the acquisition moment of the reference inertial measurement data relative to the starting moment; interpolating the second pose transformation matrices to obtain the first pose transformation matrix of the n-th frame of point cloud data; judging whether all of the point cloud data has been selected; and if not, incrementing n by 1 and returning to the step of selecting the n-th frame of point cloud data.

Further, the process by which the inter-frame matching module determines the feature points in the point cloud data includes: calculating the curvature of each point in the point cloud data; and selecting edge points and planar points from the point cloud data as the feature points according to the curvature, wherein the edge points are the P1 points with the largest curvature in the point cloud data and the planar points are the P2 points with the smallest curvature in the point cloud data.

Further, the process by which the inter-frame matching module performs inter-frame matching of the point cloud data according to the feature points to obtain the point cloud feature matching result includes: performing point-to-line feature matching according to the coordinates of the edge points to obtain a first distance, wherein the first distance is the distance from an edge point in the (k+1)-th frame of point cloud data to a target line, the target line is a line passing through 2 edge points in the k-th frame of point cloud data, and the first distance is greater than 0; performing point-to-plane feature matching according to the coordinates of the planar points to obtain a second distance, wherein the second distance is the distance from a planar point in the (k+1)-th frame of point cloud data to a target plane, the target plane is the plane defined by 3 non-collinear planar points in the k-th frame of point cloud data, and the second distance is greater than 0; determining the adjacent-frame point cloud coordinate transformation matrix; generating an edge point constraint equation according to the adjacent-frame point cloud coordinate transformation matrix and the first distance; generating a planar point constraint equation according to the adjacent-frame point cloud coordinate transformation matrix and the second distance; and solving the edge point constraint equation and the planar point constraint equation with the minimization of the first distance as the optimization objective to obtain the point cloud feature matching result.

Further, the process by which the third data acquisition module performs ENU coordinate conversion on the measurement values of the GPS positioning module to obtain the ENU coordinate data includes: converting the measurement values of the GPS positioning module to the ECEF coordinate system to obtain ECEF coordinate data; and converting the ECEF coordinate data to the ENU coordinate system to obtain the ENU coordinate data.

Further, the system also includes:

a loop closure detection module, configured to determine comparison information between the current frame in the point cloud data collected by the lidar and the keyframes; and further configured to judge whether the comparison information meets preset conditions; if so, determine that a motion loop closure exists; if not, determine that no motion loop closure exists.

Further, the process by which the loop closure detection module determines the comparison information between the current frame in the point cloud data collected by the lidar and the keyframes includes: calculating the time difference between the current frame and a keyframe in the point cloud data collected by the lidar to obtain first comparison information; and/or calculating the motion trajectory length between the current frame and a keyframe in the point cloud data collected by the lidar to obtain second comparison information; and/or calculating the straight-line distance between the current frame collected by the lidar and the keyframe to obtain third comparison information; and/or calculating the point cloud registration score between the current frame and a keyframe in the point cloud data collected by the lidar to obtain fourth comparison information.

Since the embodiments of the system part correspond to those of the method part, reference may be made to the description of the method embodiments; details are not repeated here.

The present application further provides a storage medium storing a computer program which, when executed, implements the steps provided in the above embodiments. The storage medium may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

The present application further provides an electronic device, which may include a memory and a processor; the memory stores a computer program, and when the processor calls the computer program in the memory, the steps provided in the above embodiments are implemented. The electronic device may, of course, further include components such as network interfaces and a power supply.

The embodiments in this specification are described in a progressive manner: each embodiment focuses on its differences from the others, and identical or similar parts of the embodiments may be referred to one another. Since the system disclosed in the embodiments corresponds to the disclosed method, its description is relatively brief; for relevant details, refer to the description of the method. It should be noted that those of ordinary skill in the art may make several improvements and modifications to the present application without departing from its principles, and such improvements and modifications also fall within the protection scope of the present application.

It should also be noted that, in this specification, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between those entities or operations. Moreover, the terms "comprises", "comprising", and any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitations, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.

Claims (10)

1. A self-positioning method for a mobile robot, wherein the mobile robot is a robot provided with a laser radar, an inertial measurement unit, and a GPS positioning module, the method comprising:
initializing the inertial measurement unit and recording measurement values of the inertial measurement unit to obtain inertial measurement data;
receiving point cloud data collected by the laser radar, interpolating the inertial measurement data according to timestamps of the point cloud data to obtain a first pose transformation matrix, and performing coordinate transformation on the point cloud data using the first pose transformation matrix, wherein the first pose transformation matrix describes the pose change at the collection time of the point cloud data relative to a starting time;
determining feature points in the point cloud data, and performing inter-frame matching on the point cloud data according to the feature points to obtain a point cloud feature matching result;
selecting key frames from the inertial measurement data and pre-integrating the inertial measurement data between adjacent key frames to obtain a pre-integration result, wherein a key frame is a data frame collected simultaneously in the point cloud data and the inertial measurement data;
performing ENU coordinate conversion on measurement values of the GPS positioning module to obtain ENU coordinate data;
constructing a factor graph optimization model according to the point cloud feature matching result, the pre-integration result, and the ENU coordinate data; and
if a motion closed loop is detected, globally optimizing pose information of the mobile robot using the factor graph optimization model to obtain positioning information of the mobile robot in the world coordinate system.

2. The self-positioning method for a mobile robot according to claim 1, wherein interpolating the inertial measurement data according to the timestamps of the point cloud data to obtain the first pose transformation matrix comprises:
selecting an nth frame of point cloud data;
determining the timestamp of the nth frame of point cloud data;
setting the two frames of inertial measurement data closest to the timestamp of the nth frame of point cloud data as reference inertial measurement data;
generating a second pose transformation matrix corresponding to the reference inertial measurement data, wherein the second pose transformation matrix is the pose transformation matrix of the collection time of the reference inertial measurement data relative to the starting time;
interpolating the second pose transformation matrix to obtain the first pose transformation matrix of the nth frame of point cloud data; and
judging whether all point cloud data have been selected; if not, incrementing n by 1 and returning to the step of selecting an nth frame of point cloud data.

3. The self-positioning method for a mobile robot according to claim 1, wherein determining the feature points in the point cloud data comprises:
calculating the curvature of each point in the point cloud data; and
selecting edge points and plane points from the point cloud data as the feature points according to the curvature, wherein the edge points are the P1 points with the largest curvature in the point cloud data, and the plane points are the P2 points with the smallest curvature in the point cloud data.

4. The self-positioning method for a mobile robot according to claim 3, wherein performing inter-frame matching on the point cloud data according to the feature points to obtain the point cloud feature matching result comprises:
performing point-to-line feature matching according to coordinates of the edge points to obtain a first distance, wherein the first distance is the distance from an edge point in the (k+1)th frame of point cloud data to a target straight line, the target straight line passes through two edge points in the kth frame of point cloud data, and the first distance is greater than 0;
performing point-to-plane feature matching according to coordinates of the plane points to obtain a second distance, wherein the second distance is the distance from a plane point in the (k+1)th frame of point cloud data to a target plane, the target plane is the plane defined by three non-collinear plane points in the kth frame of point cloud data, and the second distance is greater than 0;
determining a coordinate transformation matrix between adjacent frames of point clouds;
generating an edge point constraint equation according to the adjacent-frame point cloud coordinate transformation matrix and the first distance;
generating a plane point constraint equation according to the adjacent-frame point cloud coordinate transformation matrix and the second distance; and
solving the edge point constraint equation and the plane point constraint equation with minimization of the first distance as the optimization objective, to obtain the point cloud feature matching result.

5. The self-positioning method for a mobile robot according to claim 1, wherein performing ENU coordinate conversion on the measurement values of the GPS positioning module to obtain the ENU coordinate data comprises:
converting the measurement values of the GPS positioning module to the ECEF coordinate system to obtain ECEF coordinate data; and
converting the ECEF coordinate data to the ENU coordinate system to obtain the ENU coordinate data.

6. The self-positioning method for a mobile robot according to claim 1, further comprising:
determining comparison information between the current frame of the point cloud data collected by the laser radar and the key frame;
judging whether the comparison information meets a preset condition;
if so, determining that a motion closed loop exists; and
if not, determining that no motion closed loop exists.

7. The self-positioning method for a mobile robot according to claim 6, wherein determining the comparison information between the current frame of the point cloud data collected by the laser radar and the key frame comprises:
calculating the time difference between the current frame and the key frame to obtain first comparison information;
and/or calculating the length of the motion trajectory between the current frame and the key frame to obtain second comparison information;
and/or calculating the straight-line distance between the current frame and the key frame to obtain third comparison information;
and/or calculating the point cloud registration score between the current frame and the key frame to obtain fourth comparison information.

8. A self-positioning system for a mobile robot, wherein the mobile robot is a robot provided with a laser radar, an inertial measurement unit, and a GPS positioning module, the system comprising:
a first data acquisition module, configured to initialize the inertial measurement unit and record measurement values of the inertial measurement unit to obtain inertial measurement data;
a second data acquisition module, configured to receive point cloud data collected by the laser radar, interpolate the inertial measurement data according to timestamps of the point cloud data to obtain a first pose transformation matrix, and perform coordinate transformation on the point cloud data using the first pose transformation matrix, wherein the first pose transformation matrix describes the pose change at the collection time of the point cloud data relative to a starting time;
an inter-frame matching module, configured to determine feature points in the point cloud data and perform inter-frame matching on the point cloud data according to the feature points to obtain a point cloud feature matching result;
a pre-integration module, configured to select key frames from the inertial measurement data and pre-integrate the inertial measurement data between adjacent key frames to obtain a pre-integration result, wherein a key frame is a data frame collected simultaneously in the point cloud data and the inertial measurement data;
a third data acquisition module, configured to perform ENU coordinate conversion on measurement values of the GPS positioning module to obtain ENU coordinate data;
a model construction module, configured to construct a factor graph optimization model according to the point cloud feature matching result, the pre-integration result, and the ENU coordinate data; and
an optimization module, configured to, if a motion closed loop is detected, globally optimize pose information of the mobile robot using the factor graph optimization model to obtain positioning information of the mobile robot in the world coordinate system.

9. An electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and when the processor calls the computer program in the memory, the steps of the self-positioning method for a mobile robot according to any one of claims 1 to 7 are implemented.

10. A storage medium storing computer-executable instructions which, when loaded and executed by a processor, implement the steps of the self-positioning method for a mobile robot according to any one of claims 1 to 7.
CN202411099026.6A 2024-08-12 2024-08-12 Self-positioning method, system, electronic device and storage medium for mobile robot Pending CN118707542A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411099026.6A CN118707542A (en) 2024-08-12 2024-08-12 Self-positioning method, system, electronic device and storage medium for mobile robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411099026.6A CN118707542A (en) 2024-08-12 2024-08-12 Self-positioning method, system, electronic device and storage medium for mobile robot

Publications (1)

Publication Number Publication Date
CN118707542A true CN118707542A (en) 2024-09-27

Family

ID=92816089

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411099026.6A Pending CN118707542A (en) 2024-08-12 2024-08-12 Self-positioning method, system, electronic device and storage medium for mobile robot

Country Status (1)

Country Link
CN (1) CN118707542A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119722788A (en) * 2025-02-28 2025-03-28 内蒙古缘利羊农牧业科技有限责任公司 A picking robot positioning method, device, equipment, medium and product
CN120558195A (en) * 2025-07-30 2025-08-29 上海宝信软件股份有限公司 Unmanned vehicle positioning method, device, equipment and medium


Similar Documents

Publication Publication Date Title
CN112862894B (en) Robot three-dimensional point cloud map construction and expansion method
CN110514225B (en) An external parameter calibration and precise positioning method for multi-sensor fusion in mines
US9071829B2 (en) Method and system for fusing data arising from image sensors and from motion or position sensors
CN112014857A (en) Three-dimensional laser radar positioning and navigation method for intelligent inspection and inspection robot
CN111929699A (en) Laser radar inertial navigation odometer considering dynamic obstacles and mapping method and system
CN113674412B (en) Pose fusion optimization-based indoor map construction method, system and storage medium
CN118707542A (en) Self-positioning method, system, electronic device and storage medium for mobile robot
CN114019552A (en) A Bayesian Multi-sensor Error Constraint-Based Position-Location Reliability Optimization Method
CN108519615A (en) Autonomous navigation method for mobile robot based on combined navigation and feature point matching
CN119618211B (en) Method and system for detecting faults of mining unmanned vehicle
CN117906591A (en) Multi-sensor tightly-coupled SLAM algorithm
CN114429432A (en) A kind of multi-source information layered fusion method, device and storage medium
Jia et al. A cross-correction LiDAR SLAM method for high-accuracy 2D mapping of problematic scenario
CN115291227A (en) Indoor and outdoor seamless positioning and 3D mapping method and system
WO2024001649A1 (en) Robot positioning method, apparatus and computing readable storage medium
CN116448111A (en) Pedestrian indoor navigation method, device and medium based on multi-source information fusion
CN115930977A (en) Localization method, system, electronic device and readable storage medium for feature degradation scene
CN117451035A (en) A method and system for laser slam adaptive automatic map updating
CN117451032A (en) SLAM method and system of low-calculation-force and loose-coupling laser radar and IMU
CN115930943B (en) SLAM method and system for fusing monocular vision and IMU based on graph optimization and EKF framework
CN116734838A (en) A high-speed large scene mapping method based on LiDAR-IMU-GNSS
CN118067078A (en) High-precision positioning method and system for unmanned aerial vehicles operating in complex power transmission environments
CN116299523A (en) The method of point cloud selection and registration of vehicle lidar key frame based on IMU
Cui et al. Real-time generation and automatic update of 3D point cloud maps in featureless environments based on multi-sensor fusion
CN116501813A (en) Picture construction method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination