
CN113674412A - Indoor map construction method, system and storage medium based on pose fusion optimization

Info

Publication number: CN113674412A
Application number: CN202110924699.0A
Authority: CN (China)
Prior art keywords: pose, data, robot, imu, fusion
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN113674412B
Inventors: 张锦明, 王勋, 陈书界, 包翠竹
Assignee (current and original): Zhejiang Gongshang University
Application filed by Zhejiang Gongshang University
Priority to CN202110924699.0A
Publication of CN113674412A
Application granted; publication of CN113674412B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/251 Fusion techniques of input or preprocessed data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Remote Sensing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention provides an indoor map construction method, system and storage medium based on pose fusion optimization. The method includes: solving the robot's pose data from real-time data acquired by a KINECT and an IMU; judging the robot's current motion state from the IMU's linear acceleration data and the odometer data; if the robot is stationary, fusing the pose data with an extended Kalman filter algorithm, and if the robot is moving, fusing the pose data with a dynamic weighting method; and constructing an indoor map from the fusion results. The method achieves higher pose estimation accuracy, higher two-dimensional map modeling accuracy and better modeling results, and can be applied to feature-poor, highly dynamic, poorly lit scenes.

Description

Indoor map construction method, system and storage medium based on pose fusion optimization

Technical Field

The invention relates to an indoor map construction method, and in particular to a method, system and storage medium for indoor map construction based on KINECT/IMU pose fusion optimization, for feature-poor, highly dynamic, poorly lit scenes where indoor satellite signals are unavailable.

Background Art

An accurate and efficient pose estimation and mapping system is key to a robot's motion control and path planning. Where satellite signals are unavailable indoors, Simultaneous Localization and Mapping (SLAM) is one of the effective means for a robot to perceive its environment and localize itself. SLAM refers to the process in which an agent carrying specific sensors (typically a robot) builds a model of the environment while moving and simultaneously estimates its own motion, without prior information about the environment. Depending on the sensors carried, SLAM can be divided into visual SLAM, laser SLAM and multi-sensor fusion SLAM. Visual SLAM consists mainly of sensor data reading, visual odometry, back-end optimization, loop closure detection and mapping. With the development of computer vision, image processing and artificial intelligence, the applicability of visual SLAM keeps being extended, and other sensor information is being fused to build more robust visual SLAM systems. With the emergence of RGB-D cameras, many researchers have proposed KINECT-based visual SLAM methods and applied them to 3D modeling, human recognition and interactive systems.

However, visual SLAM is subject to many limitations in feature-poor, highly dynamic, poorly lit scenes. Owing to its characteristics, the IMU (inertial measurement unit) complements visual SLAM well. For example, under high-speed motion the success rate of feature-matching pose estimation from a vision sensor is low, while the IMU, relying on its own accelerometer and gyroscope, can still maintain a fairly accurate pose estimate; conversely, the vision sensor can effectively bound the IMU's cumulative drift error during motion; and fusing camera and IMU data improves the robustness of pose estimation. This combination of camera and IMU is called Visual-Inertial SLAM (VI-SLAM). Like visual SLAM, VI-SLAM frameworks can be divided into filtering-based and optimization-based approaches. Filtering-based VI-SLAM methods treat states and parameters as part of an online computation, feed them into various filtering frameworks, update the calibration and output real-time poses. Optimization-based VI-SLAM methods, also called smoothing methods, take IMU data as prior information, extract and match image features, define a reprojection error on those features, construct a cost function and solve for the robot pose. Many researchers have studied this. The SSF (Single-Sensor Fusion) algorithm designed by Weiss et al. takes the Extended Kalman Filter (EKF) as its basic framework, using the pose solved by the IMU as the prediction and the pose computed by the PTAM algorithm as the measurement; this effectively improves performance, and when the visual pose solution fails for a short time, the IMU motion prediction can still provide accurate estimates. Liu Zhenbin et al. optimized a monocular visual-inertial SLAM scheme with an improved nonlinear method; based on the VINS-Mono system, it adds initialization of the accelerometer bias during startup and, borrowing from ORB-SLAM, uses ORB (a fast feature point extraction and description algorithm) for front-end feature detection and matching, improving robustness while achieving high-precision real-time localization and sparse map construction. Campos et al. proposed VI ORB-SLAM, a visual-inertial SLAM system supporting monocular, stereo and RGB-D cameras; it designs a hybrid map data association mechanism over the two sensors that can associate short-term, mid-term and long-term sensor data, and as a feature-based tightly coupled VIO system it relies on maximum a posteriori estimation with geometric consistency checks on keyframes, improving localization accuracy.

However, although fusing the two sensors' data can effectively improve pose estimation accuracy, it also has technical shortcomings. With loosely coupled fusion, the system still estimates the robot pose independently with the vision algorithm, and the IMU measurements are merely added to some independent pose estimation steps; this ignores the correlation between the camera's and the IMU's internal states, lacks effective constraints and compensation for the pose estimate, and can even cause the camera-IMU pose error estimation to fail, so loosely coupled methods cannot achieve optimal pose estimation accuracy. Tightly coupled methods fuse the vision algorithm and IMU data to jointly construct the motion and state equations and solve the robot's pose state, but tightly coupled methods based on global optimization or local smoothing depend heavily on the initial pose estimate, are computationally complex when processing camera and IMU data, and lack real-time performance; moreover, a monocular camera cannot observe scale directly and must rely on scale computed from the IMU, so tight coupling does not effectively improve the accuracy of scale estimation. In addition, the Markov property of filtering-based VI-SLAM must be considered: the relationship between the state at a given moment and the states at all earlier moments cannot be captured, so the algorithmic framework is limited and the room for accuracy improvement is small.

Summary of the Invention

To solve the above technical problems, the purpose of the present invention is to provide an indoor map construction method based on KINECT/IMU pose fusion optimization that offers higher pose estimation accuracy, higher two-dimensional map modeling accuracy and better modeling results, and that can be applied to feature-poor, highly dynamic, poorly lit scenes.

To this end, one aspect of the present invention provides an indoor map construction method based on pose fusion optimization, the method comprising:

solving the robot's pose data from real-time data acquired by a KINECT and an IMU;

acquiring the robot's current motion state;

if the robot is stationary, fusing the pose data with an extended Kalman filter algorithm, and if the robot is moving, fusing the pose data with a dynamic weighting method;

constructing an indoor map from the fusion results.

Preferably, the specific method for solving the robot pose data from the real-time data acquired by the KINECT is: acquiring image data; and matching corresponding feature points of two partially overlapping images to obtain the correspondences between them, which correspond to the robot pose.

Preferably, the specific method for solving the robot pose data from the real-time data acquired by the IMU is: solving the pose with a direction cosine matrix algorithm.

Preferably, the method for acquiring the robot's current motion state is:

if the IMU's linear acceleration is 0 and the odometer data has not increased, the robot is judged to be stationary;

if the IMU's linear acceleration is not 0, and/or the odometer data has increased, the robot is judged to be moving.

Preferably, the specific method of fusing the pose data with the extended Kalman filter algorithm is:

obtaining the system state, and computing the error state prediction and the system state noise covariance matrix;

computing the system gain matrix from the computed observation matrix and noise covariance matrix;

updating the system state and the system covariance matrix from the gain matrix;

normalizing the quaternion computed from the system state;

converting the quaternion into Euler angles, which are convenient for displaying the pose, to represent the robot's pose state (see the sketch after this list).
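For illustration only (not code from the patent), a minimal Python sketch of these last two steps: normalizing a (w, x, y, z) quaternion and converting it to roll/pitch/yaw with the standard formulas.

```python
import numpy as np

def quat_to_euler(q):
    """Normalize a quaternion (w, x, y, z) and convert it to roll/pitch/yaw."""
    w, x, y, z = np.asarray(q, dtype=float) / np.linalg.norm(q)
    roll = np.arctan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    pitch = np.arcsin(np.clip(2 * (w * y - z * x), -1.0, 1.0))  # clip guards rounding
    yaw = np.arctan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return roll, pitch, yaw
```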

Preferably, the method of fusing the pose data with the dynamic weighting method is:

acquiring, via the KINECT, the image corresponding to each keyframe;

setting weight thresholds according to the match success rate μ of feature points between two keyframes acquired by the KINECT: if the match success rate μ decreases, increasing the weight of the IMU pose; if μ increases, increasing the weight of the KINECT pose.

Preferably, the specific method for creating the indoor map from the fusion result is:

obtaining the relative coordinates of each laser point with respect to the lidar after solving for range and azimuth;

replacing, in advance, the pose estimate in the mapping algorithm with the fusion-optimized pose estimate, and using it as a new data source to compute the relative relationship between the laser points and the lidar, thereby updating the range and azimuth information of the laser points;

matching the laser point coordinates with a Gauss-Newton-based scan matching method to build a high-precision indoor two-dimensional map.

Another aspect of the present invention provides a system for indoor map construction using the above indoor map construction method based on pose fusion optimization, characterized by comprising:

a pose data solving unit, which solves the robot's pose data from real-time data acquired by the KINECT and the IMU;

a motion state judging unit, which judges the robot's current motion state from the IMU's linear acceleration data and the odometer data;

a pose data fusion unit, connected to the pose data solving unit and the motion state judging unit respectively, which fuses the pose data with an extended Kalman filter algorithm when the robot is stationary and with a dynamic weighting method when the robot is moving;

a two-dimensional map construction unit, connected to the pose data fusion unit, which creates the indoor map from the fusion results.

Preferably, the pose data solving unit comprises:

a KINECT data solving subunit, which acquires image data and matches corresponding feature points of two partially overlapping images to obtain the correspondences between them, which correspond to the robot pose;

an IMU data solving subunit, which solves the pose from the real-time data acquired by the IMU with a direction cosine matrix algorithm.

A further aspect of the present invention provides a storage medium storing a computer program which, when executed by a processor, implements the indoor map construction method based on pose fusion optimization described above.

Compared with the prior art, the beneficial effects of the present invention are:

The method first judges the robot's motion state; when the robot is stationary, it fuses the pose data with the extended Kalman filter algorithm, and when it is moving, it fuses the KINECT and IMU pose data with a feature-matching weighting strategy. This effectively improves the accuracy of the fused pose estimate, yielding better two-dimensional map modeling accuracy and better modeling results; the method can effectively improve pose estimation accuracy in feature-poor, highly dynamic, poorly lit scenes.

Description of the Drawings

The accompanying drawings, which form a part of the present application, provide further understanding of the application; the illustrative embodiments and their descriptions explain the application and do not limit it.

Figure 1 is a flowchart of the indoor map construction method based on pose fusion optimization in an embodiment of the present invention;

Figures 2(a)-(c) are schematic diagrams of feature point selection and matching for an aisle, a wall corner, and tables and chairs, respectively, in an embodiment of the present invention;

Figure 3 shows some natural scenes and artificially arranged scenes in an embodiment of the present invention;

Figure 4 shows the relationship between match success rate and pose error in an embodiment of the present invention;

Figure 5 shows the robot platform and sensors built in an embodiment of the present invention;

Figure 6 shows the displacement deviations in the x, y and z directions in an embodiment of the present invention;

Figure 7 shows the displacement deviations in the x and y directions in an embodiment of the present invention;

Figures 8(a)-(d) compare local mapping results in an embodiment of the present invention;

Figure 9(a) is the two-dimensional map constructed by the Cartographer algorithm in an embodiment of the present invention;

Figure 9(b) is the two-dimensional map constructed by the algorithm of the present invention in an embodiment of the present invention.

Detailed Description

The present invention is further described below with reference to the accompanying drawings and embodiments.

It should be noted that the following detailed description is exemplary and intended to provide further explanation of the application. Unless otherwise specified, all technical and scientific terms used in the embodiments have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.

It should also be noted that the terminology used here describes specific embodiments only and is not intended to limit the exemplary embodiments of the application. As used here, unless the context clearly dictates otherwise, singular forms are intended to include plural forms; furthermore, when the terms "comprising" and/or "including" are used in this specification, they indicate the presence of features, steps, operations, devices, components and/or combinations thereof.

This embodiment provides an indoor map construction method based on pose fusion optimization. As shown in Figure 1, the method includes:

solving the robot's pose data from real-time data acquired by the KINECT and the IMU;

acquiring the robot's current motion state;

if the robot is stationary, fusing the pose data with an extended Kalman filter algorithm, and if the robot is moving, fusing the pose data with a dynamic weighting method;

constructing an indoor map from the fusion results.

In a preferred embodiment, the specific method for solving the robot pose from the real-time data acquired by the KINECT is: acquiring image data, then matching corresponding feature points of two partially overlapping images to obtain the correspondences between them, which correspond to the robot pose. Specifically, solving the robot pose from KINECT image data is usually achieved by matching corresponding feature points of two partially overlapping images. First, the Oriented-FAST algorithm extracts image feature points; then the Fast Library for Approximate Nearest Neighbors (FLANN) matches the feature points. From the correspondences between feature points of the same name in the two images, the camera motion parameters are solved. For example, assume the rigid-body transformation of the camera motion is (R, T), where R is the rotation and T the translation; the relationship between the corresponding feature point coordinates and the motion is then given by equation (1):

$$P_{pm} = R P_{lm} + T \quad (1)$$

where $P_{pm}$ and $P_{lm}$ are the 3D coordinates of the m-th feature point in the current frame and in the previous frame, m = 1, 2, 3, ..., n. While matching feature points with the nearest neighbor method, mismatched pairs must be removed: at least 4 feature point pairs are randomly selected, the rigid-body transformation (R, T) is computed, the coordinates are substituted into $\lVert P_{pm} - (R P_{lm} + T) \rVert^2$, and the points below a threshold are kept and substituted into equation (1) for solving. Finally, the least-squares value of the relationship between the feature point coordinates and the motion is obtained iteratively and used to estimate the robot's current pose.
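As a rough illustration of this step (a sketch under stated assumptions, not the patent's implementation), the following Python code pairs OpenCV's ORB detector with a FLANN/LSH matcher and then solves equation (1) for (R, T) with a closed-form SVD (Kabsch) fit. The RANSAC-style threshold filtering described above is omitted, and all function names and parameter values are illustrative.

```python
import cv2
import numpy as np

def match_orb_flann(img1, img2, ratio=0.7):
    """ORB (Oriented-FAST) detection plus FLANN/LSH matching with a ratio test.
    Returns matched pixel coordinates and the fraction of matched features."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    flann = cv2.FlannBasedMatcher(
        dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1),
        dict(checks=50))  # algorithm=6 selects the LSH index for binary descriptors
    pairs = [p for p in flann.knnMatch(des1, des2, k=2) if len(p) == 2]
    good = [m for m, n in pairs if m.distance < ratio * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    return pts1, pts2, len(good) / max(len(kp1), 1)

def rigid_transform_3d(P_l, P_p):
    """Least-squares (R, T) such that P_p ~ R @ P_l + T (equation (1)),
    via the SVD/Kabsch closed form. P_l, P_p: (n, 3) matched 3D points."""
    cl, cp = P_l.mean(axis=0), P_p.mean(axis=0)
    H = (P_l - cl).T @ (P_p - cp)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # repair an improper (reflected) rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = cp - R @ cl
    return R, T
```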

In a preferred embodiment, the specific method for solving the robot pose from the real-time data acquired by the IMU is a direction cosine matrix algorithm. Specifically:

The IMU mainly comprises a three-axis accelerometer and a three-axis gyroscope. Integrating the linear acceleration measured by the accelerometer (once for velocity, twice for position) yields the IMU's velocity and position data; integrating the angular velocity measured by the gyroscope yields its attitude data. When the pitch angle reaches 90° the Euler-angle differential equations become singular, so a full attitude solution cannot be obtained. To avoid this, this embodiment uses the Direction Cosine Matrix (DCM) algorithm, which represents the attitude matrix with the direction cosines of vectors and thus avoids the singularity of the Euler-angle representation. Specifically, the IMU attitude is solved with the DCM algorithm; the DCM matrix C is defined as in equation (2):

$$C=\begin{bmatrix} i\cdot i' & i\cdot j' & i\cdot k' \\ j\cdot i' & j\cdot j' & j\cdot k' \\ k\cdot i' & k\cdot j' & k\cdot k' \end{bmatrix} \quad (2)$$

where i', j', k' are the unit vectors of the robot frame (and i, j, k those of the reference frame). Differentiating i', j', k' gives equation (3):

$$\frac{d\,i'}{dt}=\omega\times i',\qquad \frac{d\,j'}{dt}=\omega\times j',\qquad \frac{d\,k'}{dt}=\omega\times k' \quad (3)$$

Differentiating the matrix C then gives $\dot{C}$, as in equation (4):

$$\dot{C}=C\,[\omega]_{\times} \quad (4)$$

where $[\omega]_{\times}$ is the skew-symmetric matrix of the gyroscope's angular velocity. Solving equation (4) yields the relative pose transformation between the IMU and the robot.
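A minimal sketch of how equation (4) can be integrated numerically to propagate the DCM from gyroscope samples; the first-order integration step and the SVD re-orthonormalization are assumptions of this sketch, not prescribed by the patent.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix [w]x such that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def dcm_propagate(C, omega, dt):
    """One integration step of the DCM kinematics C_dot = C [omega]x (equation (4)).
    C: 3x3 direction cosine matrix; omega: body angular rate from the gyroscope."""
    C = C @ (np.eye(3) + skew(omega) * dt)  # first-order integration
    U, _, Vt = np.linalg.svd(C)             # re-orthonormalize against numerical drift
    return U @ Vt
```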

In addition, because in practice the IMU's high measurement rate consumes substantial computing resources, a re-parameterization step is usually inserted between two frames to avoid adding a new state for every IMU measurement, realizing motion constraints while avoiding repeated integration. This re-parameterization is called IMU preintegration. Specifically, let the acceleration and angular velocity measured by the IMU be $\tilde{a}(t)$ and $\tilde{\omega}(t)$; the IMU measurement model is then given by equation (5):

$$\tilde{\omega}(t)={}_{B}\omega_{WB}(t)+b_{g}(t)+\eta_{g}(t),\qquad \tilde{a}(t)=R_{WB}^{T}(t)\,\big({}_{W}a(t)-{}_{W}g\big)+b_{a}(t)+\eta_{a}(t) \quad (5)$$

where W is the world frame, B is the IMU frame, ${}_{B}\omega_{WB}(t)$ is the instantaneous angular velocity of B relative to W, $R_{WB}^{T}$ is the rotation matrix from the world frame to the IMU frame, ${}_{W}a(t)$ is the instantaneous acceleration in the world frame, $b_{g}(t)$ and $b_{a}(t)$ are the gyroscope and accelerometer biases, and $\eta_{g}(t)$, $\eta_{a}(t)$ are random noise. A kinematic integration model is introduced, comprising the rotation $R_{WB}$, velocity ${}_{W}v$ and translation ${}_{W}p$, as in equation (6):

$$\dot{R}_{WB}=R_{WB}\,[{}_{B}\omega_{WB}]_{\times},\qquad {}_{W}\dot{v}={}_{W}a,\qquad {}_{W}\dot{p}={}_{W}v \quad (6)$$

Integrating equation (6) gives the rotation $R_{WB}(t+\Delta t)$, velocity ${}_{W}v(t+\Delta t)$ and translation ${}_{W}p(t+\Delta t)$ over the interval $\Delta t$; combined with the acceleration and angular velocity of the IMU measurement model, this yields the IMU's motion state relative to the world frame over $\Delta t$, i.e., the relative relationship between the IMU data at two instants. The update rates of the IMU and the KINECT are not synchronized, and integrating the multiple frames of IMU data between two keyframes effectively constrains the keyframes. Since the rotation matrix $R_{WB}$ changes continuously over time, the relative motion increments $\Delta R_{ij}$, $\Delta v_{ij}$, $\Delta p_{ij}$ are given by equation (7):

$$\Delta R_{ij}=\prod_{k=i}^{j-1}\operatorname{Exp}\!\big((\tilde{\omega}_{k}-b_{g}^{k}-\eta_{g}^{k})\,\Delta t\big),\qquad \Delta v_{ij}=\sum_{k=i}^{j-1}\Delta R_{ik}\,(\tilde{a}_{k}-b_{a}^{k}-\eta_{a}^{k})\,\Delta t,\qquad \Delta p_{ij}=\sum_{k=i}^{j-1}\Big(\Delta v_{ik}\,\Delta t+\tfrac{1}{2}\Delta R_{ik}\,(\tilde{a}_{k}-b_{a}^{k}-\eta_{a}^{k})\,\Delta t^{2}\Big) \quad (7)$$

The left side of equation (7) is the relative motion increments and the right side is IMU data. The relative motion increments $\Delta R_{ij}$, $\Delta v_{ij}$, $\Delta p_{ij}$ of the two keyframes can further be expressed through the keyframe states, as in equation (8):

$$\Delta R_{ij}=R_{i}^{T}R_{j},\qquad \Delta v_{ij}=R_{i}^{T}\big({}_{W}v_{j}-{}_{W}v_{i}-g\,\Delta t_{ij}\big),\qquad \Delta p_{ij}=R_{i}^{T}\big({}_{W}p_{j}-{}_{W}p_{i}-{}_{W}v_{i}\,\Delta t_{ij}-\tfrac{1}{2}g\,\Delta t_{ij}^{2}\big) \quad (8)$$

In this way the relative motion increments $\Delta R_{ij}$, $\Delta v_{ij}$, $\Delta p_{ij}$ between the two frames are obtained, i.e., the result of re-parameterizing between two frames with IMU preintegration, which avoids repeatedly integrating the IMU observations and improves the reliability of the algorithm.
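The accumulation in equation (7) can be sketched as follows, assuming gyro/accel samples arrive at a fixed interval dt between two keyframes and neglecting the noise terms (both simplifications of this sketch).

```python
import numpy as np
from scipy.spatial.transform import Rotation

def preintegrate(imu_samples, b_g, b_a, dt):
    """Accumulate the relative-motion increments dR_ij, dv_ij, dp_ij of equation (7)
    over the IMU samples between two camera keyframes.
    imu_samples: iterable of (gyro w_k, accel a_k); b_g, b_a: current bias estimates."""
    dR = np.eye(3)
    dv = np.zeros(3)
    dp = np.zeros(3)
    for w_k, a_k in imu_samples:
        a = a_k - b_a
        dp += dv * dt + 0.5 * (dR @ a) * dt**2   # position increment first
        dv += (dR @ a) * dt                      # then velocity
        dR = dR @ Rotation.from_rotvec((w_k - b_g) * dt).as_matrix()  # Exp map
    return dR, dv, dp
```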

In a preferred embodiment, the method for acquiring the robot's current motion state is:

if the IMU's linear acceleration is 0 and the odometer data has not increased, the robot is judged to be stationary;

if the IMU's linear acceleration is not 0, or the odometer data has increased, the robot is judged to be moving.
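A direct transcription of this test; the tolerance acc_eps is an assumed noise threshold, not a value from the patent.

```python
import numpy as np

def is_stationary(lin_acc, odom_increment, acc_eps=1e-2):
    """Stationary only when the IMU linear acceleration is (numerically) zero
    AND the wheel odometry has not increased; otherwise the robot is moving."""
    return np.linalg.norm(lin_acc) < acc_eps and odom_increment <= 0.0
```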

In a preferred embodiment, the specific method of fusing the pose data with the extended Kalman filter algorithm is:

obtaining the system state, and computing the error state prediction and the system state noise covariance matrix;

computing the system gain matrix from the computed observation matrix and noise covariance matrix;

updating the system state and the system covariance matrix from the gain matrix;

normalizing the quaternion computed from the system state;

converting the quaternion into Euler angles, which are convenient for displaying the pose, to represent the robot's pose state.

Specifically:

To estimate the robot's pose more accurately, the extended Kalman filter can be used to fuse the IMU and KINECT poses. The EKF consists of a prediction step and an update step. For the prediction step, define the error state vector $\tilde{X}$ as in equation (9):

$$\tilde{X}=X-\hat{X}=\begin{bmatrix}\delta p^{T} & \delta\theta^{T} & \delta v^{T} & \delta b_{a}^{T} & \delta b_{g}^{T}\end{bmatrix}^{T} \quad (9)$$

where $\hat{X}$ is the expected value of the state vector. Meanwhile, the state noise is denoted n. Following the definition of the error state vector $\tilde{X}$, the acceleration and angular velocity measured by the IMU are linearized to obtain the translation, rotation, velocity, accelerometer-bias and gyro-bias terms $\delta p$, $\delta\theta$, $\delta v$, $\delta b_{a}$, $\delta b_{g}$ of the error state equation. Referring to the system's state equation, the state transition matrix $\Phi$ is solved through the Jacobian matrix, as in equation (10):

$$\Phi_{k}=\exp(F\,\Delta t)\approx I_{15}+F\,\Delta t \quad (10)$$

where F is the Jacobian of the error-state dynamics.

From the system noise matrix G and the state transition matrix $\Phi$, the system noise covariance matrix $Q_d$ can be obtained; the EKF's error state prediction $X_{k+1|k}$ and system state covariance prediction $P_{k+1|k}$ are then computed as in equation (11):

$$X_{k+1|k}=\Phi\,X_{k|k},\qquad P_{k+1|k}=\Phi\,P_{k|k}\,\Phi^{T}+Q_{d} \quad (11)$$

where $X_{k|k}$, $P_{k|k}$ are the optimal estimates at the previous instant. Over time the IMU error accumulates, so the low-frequency KINECT pose is used as the filter's observation z to correct the IMU solution. In general, the pose transformation matrices of the IMU and the KINECT relative to the robot chassis are already determined, so the rotation-translation matrix $(R_{IK}, t_{IK})$ between the IMU and the KINECT camera can be determined; the system observation equations $z_p$, $z_q$ are given by equation (12):

$$z_{p}=R_{IK}\,p+t_{IK}+n_{p},\qquad z_{q}=q_{IK}\otimes q+n_{q} \quad (12)$$

The EKF update step can be derived from the observation matrix and the observation noise covariance matrix; the system gain matrix K is computed as in equation (13):

$$K=P_{k+1|k}H^{T}\left(H\,P_{k+1|k}H^{T}+R\right)^{-1} \quad (13)$$

With the gain matrix K obtained, the system state $X_{k+1|k+1}$ is updated as in equation (14):

$$X_{k+1|k+1}=X_{k+1|k}+K\left(z-H\,X_{k+1|k}\right) \quad (14)$$

The system covariance matrix $P_{k+1|k+1}$ is updated as in equation (15):

$$P_{k+1|k+1}=\left(I_{15}-KH\right)P_{k+1|k}\left(I_{15}-KH\right)^{T}+K\,R\,K^{T} \quad (15)$$

After updating the system state $X_{k+1|k+1}$ and the system covariance matrix $P_{k+1|k+1}$, the quaternion computed from the system state is normalized and then converted into Euler angles, which are convenient for displaying the pose; these represent the robot's pose state and realize the pose output after the extended Kalman filter fuses the sensor data.
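Equations (11) and (13)-(15) map directly onto a generic EKF predict/update pair. The sketch below assumes the matrices Φ, Q_d, H and R have already been formed as described above; it is an illustration, not the patent's implementation.

```python
import numpy as np

def ekf_predict(X, P, Phi, Qd):
    """Prediction step of equation (11): propagate error state and covariance."""
    X_pred = Phi @ X
    P_pred = Phi @ P @ Phi.T + Qd
    return X_pred, P_pred

def ekf_update(X_pred, P_pred, z, H, R):
    """Update step of equations (13)-(15): gain, state, Joseph-form covariance."""
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)        # equation (13)
    X = X_pred + K @ (z - H @ X_pred)          # equation (14)
    I_KH = np.eye(P_pred.shape[0]) - K @ H
    P = I_KH @ P_pred @ I_KH.T + K @ R @ K.T   # equation (15), Joseph form
    return X, P
```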

In a preferred embodiment, the method of fusing the pose data with the dynamic weighting method is:

acquiring, via the KINECT, the image corresponding to each keyframe;

setting weight thresholds according to the match success rate μ of feature points between two keyframes acquired by the KINECT: if μ decreases, increasing the weight of the IMU pose; if μ increases, increasing the weight of the KINECT pose.

Specifically, the fusion mode depends on the robot's pose state, which is either moving or stationary; this embodiment uses the IMU data and the odometer data as the criteria for distinguishing the two. If the IMU's linear acceleration is 0 and the odometer data has not increased, the robot can be judged stationary, and the KINECT and IMU poses are fused directly with the extended Kalman filter method; if the IMU's linear acceleration is not 0 and/or the odometer data has increased, the robot can be judged to be moving. While the robot is moving, the texture features of the images acquired by the KINECT are highly uncertain, whereas the IMU error, although cumulative, is comparatively stable. Therefore, during pose fusion the weight relationship between the two should be considered appropriately: the match success rate of keyframe feature points is analyzed to determine how strongly the KINECT and IMU poses depend on feature matching, so that different weighting strategies can be adopted for different pose states when fusing the IMU and KINECT poses.

When the robot is moving, the IMU measurements include linear velocity, acceleration and angular velocity, and the KINECT acquires the image corresponding to each keyframe. Weight thresholds are set according to the success rate of feature point matching between two KINECT keyframes: if the match success rate is low, the current pose is weakly correlated with the KINECT pose and the weight of the IMU pose should be increased; if it is high, the current pose is strongly correlated with the KINECT pose and the weight of the KINECT pose should be increased.

If the ratio of successfully matched feature points between two keyframes to the total feature points is defined as the match success rate μ, the key to weighted pose fusion is to determine, from the interval in which μ falls, the weight of the KINECT and IMU poses in the fusion, i.e., the weight parameter of the weighted pose fusion.

In a preferred embodiment, the specific method for creating the indoor map from the fusion result is:

obtaining the relative coordinates of each laser point with respect to the lidar after solving for range and azimuth;

replacing, in advance, the pose estimate in the mapping algorithm with the fusion-optimized pose estimate, and using it as a new data source to compute the relative relationship between the laser points and the lidar, thereby updating the range and azimuth information of the laser points;

matching the laser point coordinates with a Gauss-Newton-based scan matching method to build a high-precision indoor two-dimensional map.

During indoor two-dimensional map construction, unstructured scenes may tilt the 2D lidar's scanning plane and degrade mapping quality. This embodiment uses laser triangulation ranging for LIDAR mapping: the measured laser points obtain coordinates relative to the LIDAR itself after the range and azimuth are solved. Before map composition, the fusion-optimized pose estimate replaces the pose estimate in the mapping algorithm and serves as a new data source for computing the relative relationship between the laser points and the lidar, updating the laser points' range and azimuth information; the Gauss-Newton-based scan matching method then matches the laser point coordinates to build a high-precision indoor two-dimensional map.

Indoor environments are mostly vertically structured, so in the experiments the tilted lidar data must be approximately projected. The laser data (ρ, θ) obtained by scanning is expressed in map coordinates as $(P_x, P_y, P_z)$, as in equation (16):

$$\begin{bmatrix}P_{x}\\P_{y}\\P_{z}\end{bmatrix}=R(\theta_{x})\,R(\theta_{y})\,R(\theta_{z})\begin{bmatrix}\rho\cos\theta\\\rho\sin\theta\\0\end{bmatrix} \quad (16)$$

where $(\theta_x, \theta_y, \theta_z)$ are the roll, pitch and yaw angles. The coordinates (X, Y, Z) of the scanned laser point are then obtained from the initial pose and the angle transformation, as in equation (17):

$$\begin{bmatrix}X\\Y\\Z\end{bmatrix}=\begin{bmatrix}P_{x}\\P_{y}\\P_{z}\end{bmatrix}+\begin{bmatrix}x_{0}\\y_{0}\\z_{0}\end{bmatrix} \quad (17)$$

where $(x_0, y_0, z_0)$ is the initial position.
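A small sketch of the projection in equations (16)-(17); the SciPy 'xyz' Euler convention is assumed to match the roll/pitch/yaw order used here (an assumption of this sketch).

```python
import numpy as np
from scipy.spatial.transform import Rotation

def project_scan(ranges, bearings, pose_xyz, rpy):
    """Project tilted 2D-lidar returns (rho, theta) into map coordinates,
    equations (16)-(17): rotate by the fused roll/pitch/yaw, then translate."""
    R = Rotation.from_euler('xyz', rpy).as_matrix()   # (theta_x, theta_y, theta_z)
    pts = np.stack([ranges * np.cos(bearings),
                    ranges * np.sin(bearings),
                    np.zeros_like(ranges)], axis=1)   # points in the lidar plane
    return pts @ R.T + np.asarray(pose_xyz)           # (X, Y, Z) per laser point
```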

After associating the optimized pose estimates with the relative coordinates of the laser points, the Gauss-Newton method matches the laser points to improve mapping accuracy and quality. First, the rigid-body transformation T minimizing the objective is sought, giving T*, as in equation (18):

$$T^{*}=\arg\min_{T}\sum_{i=1}^{n}\big[1-G\big(S_{i}(T)\big)\big]^{2} \quad (18)$$

where $S_i(T)$ is the planar coordinate of laser point i and $G(S_i(T))$ is the occupancy value at $S_i(T)$; when all laser points match perfectly, $G(S_i(T))$ equals 1. The lidar measurement error over a period of time is then optimized, as in equation (19):

$$\sum_{i=1}^{n}\big[1-G\big(S_{i}(T+\Delta T)\big)\big]^{2}\;\rightarrow\;\min \quad (19)$$

Through Taylor expansion and partial derivatives, solving for ΔT becomes a Gauss-Newton minimization, as in equation (20):

$$\Delta T=H^{-1}\sum_{i=1}^{n}\Big[\nabla G\big(S_{i}(T)\big)\,\tfrac{\partial S_{i}(T)}{\partial T}\Big]^{T}\big[1-G\big(S_{i}(T)\big)\big],\qquad H=\sum_{i=1}^{n}\Big[\nabla G\big(S_{i}(T)\big)\,\tfrac{\partial S_{i}(T)}{\partial T}\Big]^{T}\Big[\nabla G\big(S_{i}(T)\big)\,\tfrac{\partial S_{i}(T)}{\partial T}\Big] \quad (20)$$

After outputting the Gauss-Newton least-squares solution, the beam point matching result is obtained with the Gauss-Newton method; the ranges and azimuths of the points corresponding to the optimized poses are fed as data sources into the mapping process to build the indoor two-dimensional map, effectively removing the distortion caused by the 2D lidar's motion and improving the mapping result.
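The Gauss-Newton step of equations (18)-(20) can be sketched for a planar pose T = (x, y, yaw); the occupancy value G and its gradient are assumed to be supplied by map-interpolation callbacks, which the patent does not specify.

```python
import numpy as np

def gauss_newton_match(points, occ, grad, T0, iters=10):
    """Gauss-Newton scan matching in the style of equations (18)-(20).
    points: (n, 2) laser points in the scan frame; occ(S) -> (n,) occupancy values G;
    grad(S) -> (n, 2) map gradient dG/dS; T0 = (x, y, yaw) fused initial pose."""
    T = np.asarray(T0, dtype=float)
    for _ in range(iters):
        c, s = np.cos(T[2]), np.sin(T[2])
        R = np.array([[c, -s], [s, c]])
        S = points @ R.T + T[:2]                   # S_i(T), transformed points
        r = 1.0 - occ(S)                           # residuals 1 - G(S_i(T))
        dG = grad(S)                               # map gradient at each S_i
        dS_dyaw = points @ np.array([[-s, -c], [c, -s]]).T  # dS_i/dyaw
        J = np.empty((len(points), 3))             # J_i = grad(G) * dS_i/dT
        J[:, 0] = dG[:, 0]
        J[:, 1] = dG[:, 1]
        J[:, 2] = np.sum(dG * dS_dyaw, axis=1)
        H = J.T @ J                                # normal equations, equation (20)
        if abs(np.linalg.det(H)) < 1e-9:
            break                                  # degenerate geometry, stop early
        T += np.linalg.solve(H, J.T @ r)           # equation (20) update step
    return T
```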

In summary, the KINECT/IMU pose-fusion-optimized indoor map construction method selects different pose fusion strategies according to the robot's motion state: when the robot is stationary, the EKF method fuses the KINECT and IMU poses; when the robot is moving, the weighting method fuses them. For the weighted fusion method, the key is determining in what proportion the KINECT pose and the IMU pose participate in the weighting. This embodiment determines the weight relationship between the KINECT pose and the IMU pose, i.e., the weight parameter λ of the weighted pose fusion, from the ratio μ of successfully matched feature points between two keyframes to the total feature points (the match success rate μ for short).

To obtain an accurate weight parameter λ for weighted pose fusion, this embodiment also designed an experiment for determining it. The experimental area was a laboratory indoor environment, and the equipment was a KINECT V2 camera and an AH100B inertial measurement unit. Because the inertial measurement unit samples at too high a rate, the IMU preintegration step was used beforehand to synchronize the two sensors' sampling rates, unifying them to the KINECT V2 camera's 30 Hz.

The experiment mainly comprised the following steps:

(1) Acquire the raw pose data.

Twenty-six indoor scenes were selected. The KINECT V2 camera captured images, feature points were matched between images, and the pose was solved; the IMU integrated the recorded acceleration and angular velocity data to solve the pose. Commonly used pose solving methods obtain the robot pose directly with the extended Kalman filter method, so the EKF result can be used as the ground truth for the experimental comparison.

(2) Compute the root mean square error.

The root mean square errors between the poses solved by the KINECT V2 camera and by the IMU and the pose solved by the extended Kalman filter were computed separately, as in the sketch below.
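For reference, the RMSE between two pose sequences can be computed as follows; est and ref are assumed to be time-aligned arrays, which the patent does not detail.

```python
import numpy as np

def pose_rmse(est, ref):
    """Root mean square error between an estimated trajectory and the EKF
    reference used as ground truth; est, ref: (n, k) time-aligned arrays."""
    return np.sqrt(np.mean(np.sum((est - ref) ** 2, axis=1)))
```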

(3) Analyze the relationship between the match success rate μ and the pose RMSE. For different values of μ, the relationship between the RMSEs of the KINECT V2 camera and of the IMU was analyzed to determine the weight relationship between the KINECT pose and the IMU pose.

The match success rate μ is the ratio between the number of corresponding feature points successfully matched across the two overlapping images of a keyframe and the total number of feature points extracted from the overlapping images. In this application, the match success rate μ reflects how salient the environment's features are, which in turn determines the accuracy of the pose solution.

This embodiment first selected 19 different scenes, covering representative locations such as aisles, wall corners, and tables and chairs; feature points were matched with the fast approximate nearest neighbor method based on ORB feature point detection.

As shown in Figures 2(a)-2(c), the left images show aisle, wall corner, and table-and-chair scenes in the laboratory environment, and the three right images show the same scenes captured from different angles; the connecting lines mark ORB feature point pairs successfully matched between the two images. In the experiment, 533, 149 and 653 feature points were detected in the aisle, corner, and table-and-chair scenes respectively; 50 pairs were successfully matched in the aisle scene, 4 pairs in the corner scene, and 76 pairs in the table-and-chair scene. The match success rates of the aisle, corner, and table-and-chair scenes were 9.38%, 2.68% and 11.63% respectively, showing that richly textured, clearly structured, feature-strong scenes yield more detected feature points and higher match success rates; conversely, weakly textured, structurally monotonous scenes yield few feature points and low match success rates. More importantly, across these 19 scenes the maximum match success rate was 43%, never exceeding 50%. In view of this, this embodiment added 7 experimental scenes with artificially arranged features, raising the feature point match success rate to 87.37%; Figure 3 shows some of the natural and artificially arranged scenes.

Next, as shown in Figure 4, the RMSEs between the poses solved by the KINECT V2 camera and by the IMU and the pose solved by the extended Kalman filter were computed separately. The results show: (1) the lower the KINECT camera's feature match success rate, the higher the pose RMSE between the KINECT camera and the extended Kalman filter, indicating lower reliability of the KINECT pose solution; (2) the higher the match success rate, the lower that RMSE, indicating higher reliability of the KINECT pose solution; (3) however, since the higher match success rates came from adding many artificial feature points, this influence may need to be discounted where appropriate. Looking only at cases where the match success rate is below 50%, the accuracy of the KINECT pose solution gradually degrades while the accuracy of the IMU pose solution barely changes, which matches reality. Therefore, when the robot is moving, this application fuses the KINECT-solved pose and the IMU-solved pose according to the weight parameters, as the optimized fused pose:

1) For 0.6 ≤ μ ≤ 1, the weight parameter λ of the KINECT-solved pose is 0.5 and the weight of the IMU-solved pose is 0.5;

2) For 0.3 ≤ μ < 0.6, the weight parameter λ of the KINECT-solved pose is 0.33 and the weight of the IMU-solved pose is 0.67;

3) For 0 ≤ μ < 0.3, the weight parameter λ of the KINECT-solved pose is 0.2 and the weight of the IMU-solved pose is 0.8.
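Putting the three intervals together, a minimal sketch of the weighted fusion; the pose vectors are assumed to be small local increments so that component-wise blending of Euler angles is acceptable, a simplification of this sketch rather than something spelled out in the patent.

```python
import numpy as np

def fuse_weighted(pose_kinect, pose_imu, mu):
    """Blend KINECT and IMU poses with the KINECT weight lambda chosen from
    the match success rate mu, per the three intervals listed above."""
    if mu >= 0.6:
        lam = 0.5
    elif mu >= 0.3:
        lam = 0.33
    else:
        lam = 0.2
    return lam * np.asarray(pose_kinect) + (1 - lam) * np.asarray(pose_imu)
```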

4KINECT/IMU位姿融合实验:4KINECT/IMU pose fusion experiment:

KINECT/IMU位姿融合实验的实验平台选用Autolabor机器人,它搭载了KINECT V2相机、AH100B惯性测量单元和RPLIDAR A2二维激光雷达,三个传感器与Autolabor机器 人之间采用固联的联接方式,如图5所示。The experimental platform of the KINECT/IMU pose fusion experiment uses Autolabor robot, which is equipped with KINECT V2 camera, AH100B inertial measurement unit and RPLIDAR A2 two-dimensional lidar. The three sensors and the Autolabor robot are connected by solid connection, as shown in the figure 5 shown.

实验数据:Experimental data:

为了验证位姿融合加权策略的有效性、KINECT/IMU融合位姿的精度以及优化后的LIDAR建图效果,本实施例设计了两组不同实验,第一组实验选用了EuRoC公开数据集作 为数据来源,另一组实验采集某实验室环境数据作为数据来源。In order to verify the effectiveness of the pose fusion weighting strategy, the accuracy of the KINECT/IMU fusion pose, and the optimized LIDAR mapping effect, two groups of different experiments are designed in this embodiment. The first group of experiments uses the EuRoC public data set as the data source, another group of experiments collected a laboratory environmental data as a data source.

EuRoC公开数据集是苏黎世联邦理工学院基于无人飞行器在礼堂与空房间两种环境内 采集的数据,无人飞行器搭载的传感器主要包括相机与惯性测量单元,相机频率为20Hz, 惯性测量单元频率为200Hz。实验数据分为4个文件夹,每个文件夹都包含传感器相对于坐 标系的变换情况。EuRoC公开数据集主要用于位姿精度的比较分析。The EuRoC public data set is the data collected by ETH Zurich based on the unmanned aerial vehicle in the auditorium and the empty room. The sensors carried by the unmanned aerial vehicle mainly include the camera and the inertial measurement unit. The frequency of the camera is 20Hz, and the frequency of the inertial measurement unit is 200Hz. The experimental data is divided into 4 folders, each of which contains the transformation of the sensor relative to the coordinate system. The EuRoC public dataset is mainly used for comparative analysis of pose accuracy.

The second dataset is collected with the Autolabor robot in a laboratory. The laboratory scene is flat indoor ground with obstacles, covering an area of 15 m × 10 m. The sensors carried by the robot mainly comprise the KINECT V2 camera (30 Hz) and the AH100B inertial measurement unit (37 Hz). The data collected in real time are recorded as ROS bag files, which are then exported into the corresponding pose datasets according to their topics; a hypothetical export sketch is given below.
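A hypothetical sketch of the per-topic export with the ROS 1 rosbag Python API; the bag filename, the topic name and the geometry_msgs/PoseStamped message layout are assumptions, not details given in the patent:

```python
import rosbag

# Read the recorded bag and dump one pose dataset per topic of interest.
with rosbag.Bag("lab_run.bag") as bag, open("kinect_poses.txt", "w") as out:
    for topic, msg, t in bag.read_messages(topics=["/kinect/pose"]):
        p, q = msg.pose.position, msg.pose.orientation
        out.write(f"{t.to_sec()} {p.x} {p.y} {p.z} "
                  f"{q.x} {q.y} {q.z} {q.w}\n")
```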

Experimental steps:

(1) Acquisition of experimental data

The first group of experiments uses the EuRoC public dataset as its data source; the second group uses the Autolabor robot to collect the laboratory environment data.

(2) Pose calculation

Extract and match the ORB feature points of two overlapping frames, and solve the camera motion parameters from the correspondences between the matched feature-point pairs (a minimal sketch of this visual step is given below). For the IMU, integrate the recorded linear acceleration twice to obtain velocity and position, and integrate the angular velocity to obtain attitude; once velocity, position and attitude are available, use the direction cosine matrix to compute the relative pose between the IMU and the experimental platform.
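A minimal sketch of the visual half of this step using OpenCV's ORB detector and essential-matrix decomposition; the intrinsics matrix K is assumed known from calibration, and the inlier ratio is used here only as one possible proxy for the matching success rate:

```python
import cv2
import numpy as np

def camera_motion(img1, img2, K):
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Hamming-distance brute-force matching with cross-check for robustness.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Recover the rotation R and unit-scale translation t between the frames.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    match_rate = float(mask.sum()) / len(matches)  # proxy for the weighting rule
    return R, t, match_rate
```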

(3) Pose estimation optimization

Determine the motion state of the robot. If the robot is stationary, fuse the poses solved by the camera and the inertial sensor with the extended Kalman filter; if the robot is in motion, fuse the poses with the weighting strategy driven by the degree of reliance on feature-point matching, and output the optimized pose estimate. A sketch of this dispatch is given below.
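A sketch of the dispatch, following the stationarity rule stated in the claims (IMU linear acceleration zero and odometer count unchanged); ACC_EPS and the ekf_fuse / weighted_fuse helpers — the latter corresponding to the weighting sketch above — are placeholders:

```python
import numpy as np

ACC_EPS = 0.05  # m/s^2 noise floor; an assumed value, tuned per IMU

def is_stationary(lin_acc, odom_ticks, prev_odom_ticks):
    return np.linalg.norm(lin_acc) < ACC_EPS and odom_ticks == prev_odom_ticks

def optimize_pose(pose_kinect, pose_imu, lin_acc, odom, prev_odom, match_rate):
    if is_stationary(lin_acc, odom, prev_odom):
        return ekf_fuse(pose_kinect, pose_imu)               # extended Kalman filter
    return weighted_fuse(pose_kinect, pose_imu, match_rate)  # weighting strategy
```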

(4) Two-dimensional map construction

Use the optimized, updated pose to correct the range and azimuth information of the laser points acquired by the RPLIDAR A2 (sketched below), then build the two-dimensional environment map with Gauss-Newton-based scan matching.
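One way to apply the fused pose to the lidar returns before scan matching, assuming a planar (x, y, yaw) pose; the Gauss-Newton matcher itself is the standard routine and is omitted here:

```python
import numpy as np

def laser_to_map(ranges, azimuths, pose):
    """Re-project (range, azimuth) lidar returns into the map frame."""
    x, y, yaw = pose
    a = np.asarray(azimuths) + yaw        # azimuth corrected by the fused heading
    r = np.asarray(ranges)
    return np.stack([x + r * np.cos(a),   # map-frame x of each laser point
                     y + r * np.sin(a)],  # map-frame y of each laser point
                    axis=1)
```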

(5) Experimental analysis

First, based on the EuRoC public dataset, the accuracy of the ROVIO algorithm and the algorithm of this embodiment is compared in terms of pose fusion optimization. ROVIO tightly couples visual and IMU information, fuses the data through an iterated Kalman filter, and updates the filter state and pose; its framework and fusion approach are similar to those of this embodiment. Secondly, based on the data collected in the laboratory scene, the accuracy of the VIORB-SLAM algorithm is likewise compared with that of the algorithm of this embodiment. VIORB-SLAM is composed of ORB-SLAM and an IMU module; through maximum a posteriori estimation it performs geometric consistency checks on key frames to obtain an optimal pose estimate, so it provides a meaningful point of comparison for this embodiment. Finally, for two-dimensional map construction, the modeling accuracy and effect of the Cartographer algorithm and the algorithm of this embodiment are compared. Cartographer is a classic algorithm for building maps from two-dimensional lidar: it predicts the pose by filtering IMU and odometer information and converts the voxel-filtered scan points into grid coordinates to build the map; its mapping is stable and highly robust.

The first group of experiments is based on the EuRoC public dataset and compares the pose estimation accuracy of the ROVIO algorithm and the algorithm of this embodiment. The experiment lasts 149.96 s in total, and 29993 poses are selected. The experimental errors are shown in Table 5; the maximum error (Max), minimum error (Min), mean (Mean), root mean square error (RMSE) and standard deviation (Std) are chosen as the accuracy metrics (a sketch of how these statistics can be computed is given after Table 5). The offsets of the robot along the x-, y- and z-axes are analysed at the same time (Fig. 6).

Table 5 Error analysis of the algorithm of this embodiment vs. ROVIO (poses: 29993)

[Table 5 is provided as an image in the original publication (Figure BDA0003208732120000151); it lists the Max, Min, Mean, RMSE and Std values for the two algorithms.]
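The statistics of Tables 5 and 6 can be reproduced from per-pose translational errors against ground truth in a few lines; this is a generic sketch, not code from the patent:

```python
import numpy as np

def error_stats(errs):
    """Max/Min/Mean/RMSE/Std of a sequence of per-pose errors (metres)."""
    errs = np.asarray(errs, dtype=float)
    return {"Max": errs.max(), "Min": errs.min(), "Mean": errs.mean(),
            "RMSE": float(np.sqrt(np.mean(errs ** 2))), "Std": errs.std()}
```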

The experimental results show that, compared with the ROVIO algorithm, the algorithm of this embodiment reduces the root mean square error by 2.5944 and the standard deviation by 1.5223 in absolute terms; the comparison of the displacement deviations along the x, y and z directions likewise indicates that, relative to ROVIO, the algorithm of this embodiment effectively filters out superfluous drift and vibration and, by virtue of the fusion optimization mechanism, effectively improves the robot's pose estimation accuracy.

The second group of experiments is based on the laboratory scene data collected in real time by the robot and compares the pose estimation accuracy of the VIORB-SLAM algorithm and the algorithm of this embodiment. The experiment lasts 367.30 s in total, and 3648 poses are selected. The experimental errors are shown in Table 6; the maximum error (Max), minimum error (Min), mean (Mean), root mean square error (RMSE) and standard deviation (Std) are again chosen as the accuracy metrics. The offsets of the robot along the x- and y-axes are analysed at the same time (Fig. 7).

Table 6 Error analysis of the algorithm of this application vs. VIORB-SLAM (poses: 3648)

[Table 6 is provided as an image in the original publication (Figure BDA0003208732120000152); it lists the same five error statistics for the two algorithms.]

Across the 3648 pose estimates, the algorithm of this application reduces the root mean square error by 0.26 and the standard deviation by 0.27 compared with the VIORB-SLAM algorithm. Relative to the VIORB-SLAM pose solution, the fusion pose optimization of this embodiment provides a higher-precision pose estimate for the robot; the error analysis is shown in Table 6. As shown in Fig. 7, during motion the displacement deviations of the fusion-optimized robot along the x and y directions gradually converge on the same trajectory, and the positional drift over time decreases, indicating that the fusion pose optimization also effectively improves the robot's positioning accuracy in the real-world scene.

The two groups of experiments show that the KINECT/IMU-based fusion-optimized pose, on the one hand, solves the difficulty of robot pose estimation and localization indoors where satellite signals are unavailable while improving the pose estimation accuracy, and, on the other hand, addresses the KINECT camera's low output frequency and limited dependability, since the high-frequency IMU output used in the fusion pose optimization also meets the demand for high-precision pose estimation under highly dynamic conditions.

Map construction analysis:

The two-dimensional map of the indoor environment is built through the Robot Operating System. The experiments compare and analyse the modeling accuracy and effect of the indoor map construction method based on KINECT/IMU fusion pose optimization against the Cartographer algorithm.

First, the local mapping effects are compared, as shown in Fig. 8. Figures 8(a) and 8(b) show the distributions of laser points scanned by the Cartographer algorithm and by the algorithm of this embodiment, respectively, and Figures 8(c) and 8(d) show the local maps each builds from this scene. The comparison shows that, under non-repetitive scanning of the same local scene, the laser points scanned by Cartographer become locally discontinuous at highly reflective objects such as smooth sheet-metal surfaces, as shown in Fig. 8(a); as a result, the edges of the actual two-dimensional map that Cartographer builds from the laser points cannot be fitted, as marked by the box in Fig. 8(c), degrading map accuracy and construction quality. By contrast, the algorithm of this embodiment, compensated by the IMU/KINECT fusion pose, corrects the poses of the laser points scanned by the two-dimensional lidar: the range and azimuth of each laser point are optimized according to the robot's own pose, reducing the deviation of the laser points from their true values, as shown in Fig. 8(b), so that the gaps along the edges of the two-dimensional image are closed and fitted and the accuracy of scan matching improves. The two-dimensional map of the scene is then built by the map construction algorithm, improving the accuracy and visualization of local map detail, as shown in Fig. 8(d). Using the algorithm of this embodiment to build the complete indoor environment map, the robot traveled 68.96 m in total and generated 3648 marked points; the global mapping results are shown in Fig. 9(a) and Fig. 9(b).

Secondly, analysing the overall mapping effect, the map built by the algorithm of this embodiment retains edge details more completely (Fig. 9(b)): the straight lines onto which the actual walls map in the two-dimensional plan are mutually perpendicular, and the relative relationships between edges are essentially free of distortion.

Therefore, in terms of both mapping accuracy and visualization, the algorithm of this embodiment improves to some degree on the Cartographer algorithm.

Taking the indoor environment as the experimental background, and combining the pose accuracy requirements of robot indoor mapping with the characteristics of indoor environments, this embodiment proposes a KINECT/IMU fusion-based pose optimization method and applies it to two-dimensional map modeling. First, the pose of the robot is solved from KINECT and IMU data. Secondly, according to the robot's motion state, the IMU and KINECT data are fused with either the extended Kalman filter or the feature-matching weighting strategy to form a new pose estimate. Thirdly, the fusion-optimized high-precision poses are put into one-to-one correspondence with the laser points measured at the same instants, the range and azimuth of the laser points in the coordinate system are corrected, and the corrected points are substituted into the Gauss-Newton-based scan matching process to build the indoor two-dimensional map. Finally, a KINECT/IMU pose fusion experiment is designed, using the EuRoC public dataset and the measured dataset to verify the effectiveness of the algorithm. The experimental results show that the algorithm achieves higher pose estimation accuracy than the ROVIO and VIORB-SLAM algorithms, and higher two-dimensional map modeling accuracy and effect than the Cartographer algorithm.

In another aspect, the present invention provides a system for indoor map construction using the above pose-fusion-optimization-based indoor map construction method, characterized by comprising:

a pose data solving unit, which solves the pose data of the robot from the real-time data acquired by the KINECT and the IMU;

a motion state judging unit, which judges the current motion state of the robot from the pose data;

a pose data fusion processing unit, connected to the pose data solving unit and the motion state judging unit respectively, which fuses the pose data with the extended Kalman filter algorithm when the robot is stationary and with the dynamic weighting method when the robot is in motion;

a two-dimensional map construction unit, connected to the pose data fusion processing unit, which creates the indoor map from the fusion processing result.

In a preferred embodiment, the pose data solving unit comprises:

a KINECT data solving subunit, which acquires image data and matches the same-name (homonymous) feature points of two partially overlapping images to obtain the correspondence between them, the correspondence corresponding to the robot pose;

an IMU data solving subunit, which uses the direction cosine matrix algorithm to solve the pose from the real-time data acquired by the IMU (an illustrative construction of such a matrix is sketched below).
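An illustrative construction of such a direction cosine matrix, assuming a Z-Y-X (yaw-pitch-roll) Euler convention; the patent does not fix the rotation order:

```python
import numpy as np

def dcm_from_euler(roll, pitch, yaw):
    """Body-to-navigation rotation matrix under the assumed Z-Y-X convention."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx
```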

In yet another aspect, the present invention provides a storage medium in which a computer program is stored, the computer program, when executed, implementing the pose-fusion-optimization-based indoor map construction method described above.

To improve the accuracy of indoor map construction, the high-precision data obtained by multiple sensors such as the KINECT and the IMU can usually be fused. The present application judges the robot's motion state and, according to the degree of reliance on feature-point matching, uses EKF fusion and weighted fusion respectively to optimize the poses from the KINECT and IMU sensors, achieving high-precision indoor map modeling. In this process, the motion state of the robot is determined first: in the stationary state, the extended Kalman filter algorithm fuses the pose data, and in the moving state, the feature-matching weighting strategy fuses the KINECT and IMU pose data. This effectively improves the accuracy of the fused pose estimate, yielding better two-dimensional map modeling accuracy and effect; the method can effectively improve pose estimation accuracy in feature-poor, highly dynamic and poorly lit scenes.

Although embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and are not to be construed as limiting the present invention. Those of ordinary skill in the art may change, modify, replace and vary the above embodiments within the scope of the present invention without departing from its principles and spirit; any simple modifications, equivalent changes and refinements made to the above embodiments according to the technical essence of the present invention still fall within the scope of the technical solution of the present invention.

Claims (10)

1. An indoor map construction method based on pose fusion optimization is characterized by comprising the following steps:
resolving pose data of the robot according to real-time data acquired by KINECT and IMU;
acquiring the current motion state of the robot;
if the robot is in a static state, performing fusion processing on the pose data by adopting an extended Kalman filtering algorithm, and if the robot is in a motion state, performing fusion processing on the pose data by adopting a dynamic weighting method;
and constructing an indoor map according to the fusion processing result.
2. The pose fusion optimization-based indoor map construction method according to claim 1, wherein the specific method for resolving robot pose data by using the real-time data acquired by KINECT is as follows: acquiring image data; and matching the homonymous feature points of the two partially overlapped images to obtain a corresponding relation between the homonymous feature points, wherein the corresponding relation corresponds to the position and posture of the robot.
3. The pose fusion optimization-based indoor map construction method according to claim 1, wherein the specific method for resolving robot pose data by using real-time data acquired by the IMU comprises the following steps: and (4) adopting a direction cosine matrix algorithm to solve the pose.
4. The pose fusion optimization-based indoor map construction method according to claim 1, wherein the method for acquiring the current motion state of the robot is as follows:
if the linear acceleration of the IMU is 0 and the mileage count data is not increased, judging that the robot is in a static state currently;
and if the linear acceleration of the IMU is not 0 or the odometer data is increased, judging that the robot is in a motion state currently.
5. The pose fusion optimization-based indoor map construction method according to claim 1, characterized in that a specific method for performing fusion processing on the pose data by using an extended kalman filter algorithm is as follows:
obtaining system state quantity, and calculating an error state predicted value and a system state noise covariance matrix;
obtaining an observation matrix and a noise covariance matrix by calculation, and calculating a gain matrix of the system;
updating the system state quantity and the system covariance matrix by the gain matrix;
carrying out normalization processing on the quaternion obtained by calculating the system state quantity;
and converting the quaternion into an Euler angle convenient for displaying the pose, so as to represent the pose state of the robot.
6. The pose fusion optimization-based indoor map construction method according to claim 1, wherein the pose data fusion processing method by adopting a dynamic weighting method comprises the following steps:
acquiring a corresponding image according to the key frame through KINECT;
setting a weight value threshold according to the matching success rate mu of the feature points between the two key frames acquired by KINECT; if the matching success rate mu is reduced, the weight of the IMU posture is increased; and if the matching success rate mu is increased, the weight of the KINECT posture is increased.
7. The pose fusion optimization-based indoor map construction method according to claim 1, wherein a specific method for creating an indoor map according to the fusion processing result is as follows:
obtaining relative coordinates of the laser point and the radar after resolving the distance and the azimuth angle;
replacing pose estimation in a mapping algorithm with the pose estimation after fusion optimization in advance, and calculating the relative relation between the laser point and the radar by taking the pose estimation as a new data source so as to update the distance and azimuth angle information of the laser point;
and matching the laser point coordinates by using a scanning matching method based on Gauss-Newton to establish a high-precision indoor two-dimensional map.
8. A system for indoor mapping using the pose fusion optimization-based indoor mapping method according to any one of claims 1 to 7, comprising:
the pose data resolving unit is used for resolving pose data of the robot according to real-time data acquired by the KINECT and the IMU;
the motion state judgment unit is used for judging the current motion state of the robot according to the linear acceleration data and the mileage counting data of the IMU;
the pose data fusion processing unit is respectively connected with the pose data resolving unit and the motion state judging unit, when the robot is in a static state, the pose data are fused by adopting an extended Kalman filtering algorithm, and when the robot is in a motion state, the pose data are fused by adopting a dynamic weighting method;
and the two-dimensional map construction unit is connected with the pose data fusion processing unit and used for creating an indoor map according to the fusion processing result.
9. The pose-fusion-optimization-based indoor map construction system according to claim 8, wherein the pose data solving unit includes:
a KINECT data decoding subunit for acquiring image data; and matching the homonymous feature points of the two partially overlapped images to obtain a corresponding relation between the homonymous feature points, wherein the corresponding relation corresponds to the position and posture of the robot.
And the IMU data resolving subunit is used for resolving the pose of the real-time data acquired by the IMU by adopting a direction cosine matrix algorithm.
10. A storage medium having a computer program stored therein, wherein the computer program is configured to implement the pose fusion optimization-based indoor map construction method according to any one of claims 1 to 7 when executed.
CN202110924699.0A 2021-08-12 2021-08-12 Pose fusion optimization-based indoor map construction method, system and storage medium Active CN113674412B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110924699.0A CN113674412B (en) 2021-08-12 2021-08-12 Pose fusion optimization-based indoor map construction method, system and storage medium

Publications (2)

Publication Number Publication Date
CN113674412A true CN113674412A (en) 2021-11-19
CN113674412B CN113674412B (en) 2023-08-29

Family

ID=78542345

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110924699.0A Active CN113674412B (en) 2021-08-12 2021-08-12 Pose fusion optimization-based indoor map construction method, system and storage medium

Country Status (1)

Country Link
CN (1) CN113674412B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105651242A (en) * 2016-04-05 2016-06-08 清华大学深圳研究生院 Method for calculating fusion attitude angle based on complementary Kalman filtering algorithm
CN106056664A (en) * 2016-05-23 2016-10-26 武汉盈力科技有限公司 Real-time three-dimensional scene reconstruction system and method based on inertia and depth vision
CN110163909A (en) * 2018-02-12 2019-08-23 北京三星通信技术研究有限公司 For obtaining the method, apparatus and storage medium of equipment pose
CN108827301A (en) * 2018-04-16 2018-11-16 南京航空航天大学 A kind of improvement error quaternion Kalman filtering robot pose calculation method
WO2019205853A1 (en) * 2018-04-27 2019-10-31 腾讯科技(深圳)有限公司 Method, device and apparatus for repositioning in camera orientation tracking process, and storage medium
WO2020087846A1 (en) * 2018-10-31 2020-05-07 东南大学 Navigation method based on iteratively extended kalman filter fusion inertia and monocular vision
CN109631888A (en) * 2019-01-04 2019-04-16 北京卡路里信息技术有限公司 Movement locus recognition methods, device, wearable device and storage medium
CN109785428A (en) * 2019-01-21 2019-05-21 苏州大学 A Handheld 3D Reconstruction Method Based on Polymorphic Constrained Kalman Filtering
CN109993113A (en) * 2019-03-29 2019-07-09 东北大学 A Pose Estimation Method Based on RGB-D and IMU Information Fusion
EP3715785A1 (en) * 2019-03-29 2020-09-30 Trimble Inc. Slam assisted ins
WO2020253854A1 (en) * 2019-06-21 2020-12-24 台州知通科技有限公司 Mobile robot posture angle calculation method
KR20210063723A (en) * 2019-11-25 2021-06-02 한국전자기술연구원 Method for posture estimation of user wearing Head Mounted Display

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI LUMING; ZHAO LUYANG; TANG XIAOHONG; HE WEI; LI FENGRONG: "Attitude estimation algorithm based on fuzzy Kalman filtering", Instrument Technique and Sensor, no. 04
JIANG JIE; ZHU JUN; QI WEINAN: "Attitude data acquisition and processing system for a quadrotor unmanned aerial vehicle", Computer Measurement & Control, no. 06

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114219835A (en) * 2021-12-09 2022-03-22 超级视线科技有限公司 High-performance image stabilizing method and device based on video monitoring equipment
CN114442808A (en) * 2022-01-26 2022-05-06 深圳市慧鲤科技有限公司 Pose tracking module testing method, device, equipment, system and medium
CN114565734A (en) * 2022-03-02 2022-05-31 上海谦尊升网络科技有限公司 Method for positioning and mapping dynamic robot in dynamic scene
CN116258769A (en) * 2023-05-06 2023-06-13 亿咖通(湖北)技术有限公司 Positioning verification method and device, electronic equipment and storage medium
CN117554984A (en) * 2023-11-08 2024-02-13 广东科学技术职业学院 A single-line lidar indoor SLAM positioning method and system based on image understanding
CN117437290A (en) * 2023-12-20 2024-01-23 深圳市森歌数据技术有限公司 Multi-sensor fusion type three-dimensional space positioning method for unmanned aerial vehicle in natural protection area
CN117437290B (en) * 2023-12-20 2024-02-23 深圳市森歌数据技术有限公司 Multi-sensor fusion type three-dimensional space positioning method for unmanned aerial vehicle in natural protection area
CN119594989A (en) * 2025-02-10 2025-03-11 厦门致睿智控地信科技股份有限公司 An adaptive navigation system for unmanned aerial vehicles

Also Published As

Publication number Publication date
CN113674412B (en) 2023-08-29

Similar Documents

Publication Publication Date Title
CN113674412B (en) Pose fusion optimization-based indoor map construction method, system and storage medium
CN113066105B (en) Positioning and mapping method and system based on fusion of laser radar and inertial measurement unit
CN111156998B (en) A Mobile Robot Localization Method Based on RGB-D Camera and IMU Information Fusion
CN110070615B (en) Multi-camera cooperation-based panoramic vision SLAM method
CN111024066B (en) Unmanned aerial vehicle vision-inertia fusion indoor positioning method
CN105469405B (en) Positioning and map constructing method while view-based access control model ranging
CN112304307A (en) Positioning method and device based on multi-sensor fusion and storage medium
CN112577493B (en) A method and system for autonomous positioning of unmanned aerial vehicles based on remote sensing map assistance
CN111929699A (en) Laser radar inertial navigation odometer considering dynamic obstacles and mapping method and system
CN108717712A (en) A kind of vision inertial navigation SLAM methods assumed based on ground level
CN109540126A (en) A kind of inertia visual combination air navigation aid based on optical flow method
CN113551665B (en) A highly dynamic motion state perception system and perception method for motion carriers
Sun et al. Autonomous state estimation and mapping in unknown environments with onboard stereo camera for micro aerial vehicles
CN110736457A (en) An integrated navigation method based on Beidou, GPS and SINS
Wu et al. AFLI-Calib: Robust LiDAR-IMU extrinsic self-calibration based on adaptive frame length LiDAR odometry
CN113408623B (en) Multi-node fusion estimation method for flexible attachment of non-cooperative targets
Huai et al. Real-time large scale 3D reconstruction by fusing Kinect and IMU data
Hinzmann et al. Flexible stereo: constrained, non-rigid, wide-baseline stereo vision for fixed-wing aerial platforms
CN114529585A (en) Mobile equipment autonomous positioning method based on depth vision and inertial measurement
Karam et al. Integrating a low-cost mems imu into a laser-based slam for indoor mobile mapping
Xian et al. Fusing stereo camera and low-cost inertial measurement unit for autonomous navigation in a tightly-coupled approach
CN110598370B (en) Robust Attitude Estimation for Multi-rotor UAV Based on Fusion of SIP and EKF
Wang et al. Micro aerial vehicle navigation with visual-inertial integration aided by structured light
Qayyum et al. Inertial-kinect fusion for outdoor 3d navigation
CN118603077A (en) A quadruped robot inspection map construction system and method based on multi-sensor fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant