CN108665540A - Robot localization and map construction system based on binocular vision features and IMU information - Google Patents
- Publication number
- CN108665540A CN108665540A CN201810218153.1A CN201810218153A CN108665540A CN 108665540 A CN108665540 A CN 108665540A CN 201810218153 A CN201810218153 A CN 201810218153A CN 108665540 A CN108665540 A CN 108665540A
- Authority
- CN
- China
- Prior art keywords
- imu
- module
- submodule
- acceleration
- binocular
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/579—Depth or shape recovery from multiple images from motion
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Description
Technical Field
The invention belongs to the technical field of robotics, and in particular relates to a robot localization and map construction system.
Background Art
Simultaneous Localization and Mapping (SLAM) is an important problem in the field of robot navigation. The SLAM problem can be described as follows: a robot builds a global map of the environment it has explored and, at any time, can use that map to infer its own position. The robot moves freely through the environment carrying sensors, localizes itself from the collected information, and builds the map on the basis of that localization, thereby achieving simultaneous localization and mapping. Two main factors affect the solution of the SLAM problem: the characteristics of the sensor data and the association of the observation data. Improving the robustness and accuracy of data association, and making better use of the sensor data, improves the accuracy of robot localization and map construction.
At present, the most common sensors on mobile devices are cameras and IMUs. How to design algorithms that use these common sensors to achieve high-precision simultaneous localization and mapping is a current research hotspot.
The present invention innovatively fuses IMU information with binocular visual information, compensates for the problems inherent in binocular visual SLAM, and improves the robustness of the SLAM algorithm. On this basis, a map with true scale information is constructed.
Simultaneous Localization and Mapping (SLAM) is also a classic problem in robotics more broadly. In this form, the SLAM problem can be described as follows: a robot starts from an unknown position and moves through an unknown environment, estimating its own pose and building a map of the environment as it moves, thereby achieving autonomous localization and mapping. Two factors affect a SLAM system: the association of observation data and environmental noise. Correct observation of the surrounding environment depends on good data association, which in turn affects the construction of the environment map.
Summary of the Invention
To overcome the poor robustness, low accuracy, and poor adaptability of existing robot localization and map construction systems, the present invention provides a robust, accurate, and adaptable robot localization and map construction system based on binocular visual features and IMU information.
The technical solution adopted by the present invention to solve the technical problem is as follows:
A robot localization and map construction system based on binocular visual features and IMU information, the system comprising: a binocular information acquisition, feature extraction, and matching module; an improved IMU initialization and motion module; a visual SLAM algorithm initialization and tracking module; a local mapping module; and a loop closure detection and optimization module. The binocular information acquisition, feature extraction, and matching module comprises a binocular ORB feature extraction submodule, a binocular feature matching submodule, and an IMU information acquisition submodule. The improved IMU initialization and motion module comprises an IMU angular rate bias estimation submodule, a gravitational acceleration estimation submodule, an IMU acceleration bias estimation submodule, and an IMU preintegration submodule. The visual SLAM algorithm initialization and tracking module comprises an inter-frame motion tracking submodule and a keyframe generation submodule. The local mapping module comprises a new keyframe insertion submodule, a local BA optimization submodule, and a redundant keyframe culling submodule. The loop closure detection and optimization module comprises a loop closure detection submodule and a global optimization submodule.
Further, in the binocular information acquisition, feature extraction, and matching module, the binocular ORB feature extraction submodule receives the images captured by the binocular camera as the robot moves, extracts ORB features from the left and right images using the ORB feature extraction algorithm, and stores the extracted results; the binocular feature matching submodule matches the feature points extracted from the left and right images to obtain the depth of the feature points; and the IMU information acquisition submodule collects the IMU information inside the robot.
Still further, in the improved IMU initialization and motion module, the IMU angular rate bias estimation submodule uses the visual SLAM results to estimate the IMU angular rate bias by optimization. The gravitational acceleration estimation submodule uses the result of the angular rate bias estimation: for N consecutive keyframes, the relationships between them yield a linear system that is solved for the gravitational acceleration. The IMU acceleration bias estimation submodule uses the result of the gravitational acceleration estimation, removes the influence of gravity from the IMU acceleration, and solves the equations for the N consecutive keyframes again, obtaining a refined estimate of the gravitational acceleration together with the IMU acceleration bias. The IMU preintegration submodule combines all IMU data between two consecutive image frames into a single observation through the IMU preintegration model, yielding the robot's motion model and an estimate of the robot's current pose.
Further still, in the visual SLAM algorithm initialization and tracking module, the inter-frame motion tracking submodule computes the ORB feature matches between two consecutive images, computes the essential matrix, and then refines the estimate with bundle adjustment. After a successful return, the system begins tracking in a purely visual manner; the keyframe generation submodule monitors intermediate tracking results, and when the current frame satisfies the keyframe conditions it is added to the keyframe set and used as the tracking reference frame.
In the local mapping module, the new keyframe insertion submodule inserts the keyframe generated by the previous submodule into the keyframe queue; the local BA optimization submodule, after a keyframe is inserted, performs bundle adjustment on the keyframes and map point cloud of the current local view; and the redundant keyframe culling submodule uses a screening strategy to delete redundant keyframes in the window according to the inserted keyframes, keeping the map size bounded.
In the loop closure detection and optimization module, the loop closure detection submodule queries the current frame against the keyframe set in the database to detect loop closures; after a successful return, the poses of all keyframes on the loop are estimated, and the global optimization submodule performs global graph optimization on all keyframes in the loop, with error factors including visual projection factors and IMU factors.
The technical concept of the present invention is as follows. The robot SLAM system proposed by the present invention is a solution based on a SLAM system fusing binocular vision and inertial navigation information. The invention first uses purely visual pose information to estimate the IMU bias model and the direction of gravitational acceleration. For the visual pose, an efficient feature extraction algorithm extracts rich descriptors from the images, and the extracted feature points are then matched to obtain their depth. A camera motion model is established using an IMU kinematic model based on preintegration, and this motion model provides a real-time preliminary estimate of the camera position. On the basis of the preliminary estimate, the pose between two successive image frames is estimated, and visual geometry is used to estimate the robot's current position more accurately. A keyframe selection strategy achieves sparse construction of local map points. On top of visual matching fused with IMU information, a graph-optimization-based back end takes the spatial map point projection factors and the IMU factors as factors in a factor graph, estimating the map positions accurately and in real time. The invention realizes a simultaneous localization and mapping algorithm based on binocular vision and inertial navigation information, and can accurately estimate the robot's motion and the surrounding environment.
The beneficial effects of the present invention are mainly as follows: by fusing visual information and IMU information, the problems of visual SLAM can be well resolved and the robustness of the SLAM system improved; the system is highly accurate and highly adaptable.
Brief Description of the Drawings
Fig. 1 is a structural diagram of the system of the present invention.
Fig. 2 is a flow chart of the system of the present invention.
Detailed Description of the Embodiments
The present invention is further described below with reference to the accompanying drawings.
Referring to Fig. 1 and Fig. 2, a robot localization and map construction system based on binocular visual features and IMU information comprises: a binocular information acquisition, feature extraction, and matching module (1); an improved IMU initialization and motion module (2); a visual SLAM algorithm initialization and tracking module (3); a local mapping module (4); and a loop closure detection and optimization module (5). The binocular information acquisition, feature extraction, and matching module (1) comprises: a binocular ORB feature extraction submodule (1.1), a binocular feature matching submodule (1.2), and an IMU information acquisition submodule (1.3). The improved IMU initialization and motion module (2) comprises: an IMU angular rate bias estimation submodule (2.1), a gravitational acceleration estimation submodule (2.2), an IMU acceleration bias estimation submodule (2.3), and an IMU preintegration submodule (2.4). The visual SLAM algorithm initialization and tracking module (3) comprises: an inter-frame motion tracking submodule (3.1) and a keyframe generation submodule (3.2). The local mapping module (4) comprises: a new keyframe insertion submodule (4.1), a local BA optimization submodule (4.2), and a redundant keyframe culling submodule (4.3). The loop closure detection and optimization module (5) comprises: a loop closure detection submodule (5.1) and a global optimization submodule (5.2).
In the binocular information acquisition, feature extraction, and matching module (1), the binocular ORB feature extraction submodule (1.1) receives the images captured by the binocular camera as the robot moves, extracts ORB features from the left and right images using the ORB feature extraction algorithm, and stores the extracted results; the binocular feature matching submodule (1.2) matches the feature points extracted from the left and right images to obtain the depth of the feature points; and the IMU information acquisition submodule (1.3) collects the IMU information inside the robot.
In the improved IMU initialization and motion module (2), the IMU angular rate bias estimation submodule (2.1) uses the visual SLAM results to estimate the IMU angular rate bias by optimization. The gravitational acceleration estimation submodule (2.2) uses the result of submodule (2.1): for N consecutive keyframes, the relationships between them yield a linear system that is solved for the gravitational acceleration. The IMU acceleration bias estimation submodule (2.3) uses the result of submodule (2.2), removes the influence of gravity from the IMU acceleration, and solves the equations for the N consecutive keyframes again, obtaining a refined estimate of the gravitational acceleration together with the IMU acceleration bias. The IMU preintegration submodule (2.4) combines all IMU data between two consecutive image frames into a single observation through the IMU preintegration model, yielding the robot's motion model and an estimate of the robot's current pose.
In the visual SLAM algorithm initialization and tracking module (3), the inter-frame motion tracking submodule (3.1) computes the ORB feature matches between two consecutive images, computes the essential matrix, and then refines the estimate with bundle adjustment. After submodule (3.1) returns successfully, the system begins tracking in a purely visual manner; the keyframe generation submodule (3.2) monitors intermediate tracking results, and when the current frame satisfies the keyframe conditions it is added to the keyframe set and used as the tracking reference frame.
In the local mapping module (4), the new keyframe insertion submodule (4.1) inserts the keyframe generated by the previous submodule into the keyframe queue; the local BA optimization submodule (4.2), after a keyframe is inserted, performs bundle adjustment on the keyframes and map point cloud of the current local view; and the redundant keyframe culling submodule (4.3) uses a screening strategy to delete redundant keyframes in the window according to the inserted keyframes, keeping the map size bounded.
In the loop closure detection and optimization module (5), the loop closure detection submodule (5.1) queries the current frame against the keyframe set in the database to detect loop closures; after submodule (5.1) returns successfully, the poses of all keyframes on the loop are estimated, and the global optimization submodule (5.2) performs global graph optimization on all keyframes in the loop, with error factors including visual projection factors and IMU factors.
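The loop closure query step can be illustrated with a minimal sketch. The patent only says the current frame is queried against the stored keyframe set; representing each keyframe by a visual-word histogram, scoring with cosine similarity, the `similarity_threshold`, and the exclusion of the most recent keyframes are all illustrative assumptions here, not the patented method.

```python
import numpy as np

class KeyframeDatabase:
    """Minimal keyframe database queried for loop-closure candidates.
    Each keyframe is summarized by a visual-word histogram (assumption)."""

    def __init__(self, similarity_threshold=0.8):
        self.threshold = similarity_threshold
        self.histograms = []          # one histogram per stored keyframe

    def add_keyframe(self, hist):
        self.histograms.append(np.asarray(hist, dtype=float))

    def query(self, hist, exclude_last=2):
        """Return indices of stored keyframes similar enough to be loop
        candidates, skipping the most recent ones (trivially similar)."""
        q = np.asarray(hist, dtype=float)
        qn = np.linalg.norm(q)
        stop = len(self.histograms) - exclude_last
        candidates = []
        for i, h in enumerate(self.histograms[:stop]):
            score = float(q @ h) / (qn * np.linalg.norm(h) + 1e-12)
            if score >= self.threshold:
                candidates.append(i)
        return candidates
```

A detected candidate would then be geometrically verified and its pose passed on to the global optimization, as the text describes.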
The operation of each module is described in further detail below.
In the binocular information acquisition, feature extraction, and matching module (1):
The binocular ORB feature extraction submodule (1.1) extracts ORB features from the left and right images each time the robot acquires them. ORB features are chosen as the image features because they are relatively invariant to viewpoint and illumination, and because, as binary visual features, they are fast to extract and fast to match, which suits a SLAM system with real-time requirements. After extraction, the ORB features are stored.
The binocular feature matching submodule (1.2): because the left and right images have been epipolar-rectified, the search for a matching point only needs to scan along the same image row. Each time the robot acquires images and extracts feature points, the feature descriptors are matched, and the absolute scale of a feature point can be computed by binocular triangulation:
z = f · B / D
where f is the focal length of the camera, B is the distance between the optical centers of the two cameras, D is the disparity, i.e., the difference in position of the same feature point between the left and right images, and z is the depth of the feature point.
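The triangulation formula and the row-restricted matching can be sketched as follows; the function names and the brute-force Hamming matching are illustrative (a real system would also apply distance thresholds and cross-checks).

```python
import numpy as np

def stereo_depth(u_left, u_right, f, B):
    """Depth from binocular triangulation: z = f * B / D, where the disparity
    D = u_left - u_right is measured between a matched pair of feature points
    lying on the same rectified image row."""
    D = u_left - u_right
    if D <= 0:
        return None          # point at infinity or a bad match
    return f * B / D

def match_along_row(desc_left, desc_right):
    """Brute-force matching of binary descriptors taken from the same rectified
    row: for each left descriptor, pick the right descriptor with the minimum
    Hamming distance."""
    matches = []
    for i, dl in enumerate(desc_left):
        dists = [int(np.unpackbits(np.bitwise_xor(dl, dr)).sum())
                 for dr in desc_right]
        matches.append((i, int(np.argmin(dists))))
    return matches
```

For example, a feature seen at column 100 in the left image and 90 in the right image, with f = 700 px and a 12 cm baseline, lies at depth 700 · 0.12 / 10 = 8.4 m.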
The IMU information acquisition submodule (1.3) collects the IMU data inside the robot in real time and sends it to the IMU preintegration submodule (2.4) for processing. The IMU data collected by this submodule arrives at a far higher rate than the visual images, and IMU preintegration reduces the computational load on the factor-graph-based loop closure detection and optimization module (5).
In the improved IMU initialization and motion module (2), the IMU angular rate bias estimation submodule (2.1) uses the results of binocular visual SLAM, adds the IMU angular rate bias variable to the robot state vector, and estimates it by solving the optimization
b_g* = argmin_{b_g} Σ_{i=1}^{N-1} || Log( (ΔR_{i,i+1} · Exp(J^g_{ΔR} b_g))^T · (R^i_{WB})^T · R^{i+1}_{WB} ) ||²
where N is the current number of keyframes; R_WB is obtained by multiplying the camera-center rotation R_WC computed by the purely visual SLAM algorithm with the calibration matrix R_CB between the camera and the IMU center (denoted B); ΔR_{i,i+1} is the rotation matrix between keyframes i and i+1 obtained from the IMU preintegration submodule (2.4); and J^g_{ΔR} is the first-order approximation of how ΔR_{i,i+1} varies with the angular rate bias. Solving this optimization yields the IMU angular rate bias b_g.
The gravitational acceleration estimation submodule (2.2) uses the result of the IMU angular rate bias estimation submodule (2.1). For each pair of connected keyframes i and i+1, the preintegrated measurements Δv and Δp give the relations
v^{i+1}_{WB} = v^i_{WB} + g_W Δt_{i,i+1} + R^i_{WB} Δv_{i,i+1}
s · p^{i+1}_{WC} = s · p^i_{WC} + v^i_{WB} Δt_{i,i+1} + ½ g_W Δt²_{i,i+1} + R^i_{WB} Δp_{i,i+1} + (R^i_{WC} − R^{i+1}_{WC}) p_CB
where R_WC denotes the rotation matrix of the camera center C relative to the world frame W, R_WB the rotation matrix of the IMU center B relative to the world frame, p_CB the position of the IMU center B in camera coordinates, Δt the time difference between frames, v the camera velocity between frames, and s the absolute scale of the feature points. Solving the stacked relations yields the gravitational acceleration g_W.
The IMU acceleration bias estimation submodule (2.3) uses the result of the gravitational acceleration estimation submodule (2.2) and exploits the linear relationship between three consecutive keyframes. Denoting three consecutive keyframes i, i+1, i+2 by 1, 2, 3, each such triple yields a linear constraint on the gravity direction correction and the acceleration bias. In those constraints, [·]_{(:,1:2)} denotes the first two columns of a matrix; R_WC is the rotation matrix of the camera center C relative to the world frame W; R_WB is the rotation matrix of the IMU center B relative to the world frame; R_WI is the rotation of the current inertial frame relative to the world frame; p_B is the position of the IMU center B in world coordinates; I is the identity matrix; Δt is the time difference between frames; Δv is the preintegrated velocity between frames; δθ_xy is the gravity direction angle to be estimated; and b_a is the acceleration bias. Stacking the equations for all triples of consecutive keyframes gives a single linear system, written as:
A_{3(N−2)×5} · X_{5×1} = B_{3(N−2)×1}
Solving this linear system by singular value decomposition yields the gravity direction δθ_xy and the IMU acceleration bias b_a.
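The stacked system above is an ordinary linear least-squares problem. The sketch below solves a synthetic instance with numpy's SVD-based solver; the random matrix A merely stands in for the real constraint rows, which come from the keyframe-triple equations, and the unknown vector X stacks the two gravity-direction angles and the three acceleration bias components.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10                                        # number of keyframes (assumed)
A = rng.standard_normal((3 * (N - 2), 5))     # stand-in for stacked constraint rows
x_true = np.array([0.01, -0.02, 0.05, -0.03, 0.08])   # [delta_theta_xy ; b_a]
B = A @ x_true

# np.linalg.lstsq solves the least-squares problem via SVD, as the text describes
x, residuals, rank, singular_values = np.linalg.lstsq(A, B, rcond=None)
delta_theta_xy, b_a = x[:2], x[2:]
```

With noise-free synthetic data the solution recovers the stacked unknowns exactly; in the real system the least-squares residual absorbs measurement noise.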
The IMU preintegration submodule (2.4) combines all IMU data between two consecutive image frames into a single observation through the IMU preintegration model, yielding the motion model of the robot and the current pose estimate. Denote the IMU by B and its acceleration and angular rate outputs by α_B and ω_B; between two IMU outputs i and i+1 the motion equations of the robot are
R^{i+1}_{WB} = R^i_{WB} · Exp((ω_B − b_g) Δt)
v^{i+1}_B = v^i_B + g_W Δt + R^i_{WB} (α_B − b_a) Δt
p^{i+1}_B = p^i_B + v^i_B Δt + ½ g_W Δt² + ½ R^i_{WB} (α_B − b_a) Δt²
Writing the preintegration results between two keyframes k and k+1 as ΔR, Δv, Δp, the motion model between the two keyframes is
R^{k+1}_{WB} = R^k_{WB} · ΔR_{k,k+1} · Exp(J^g_{ΔR} b_g)
v^{k+1}_B = v^k_B + g_W Δt_{k,k+1} + R^k_{WB} (Δv_{k,k+1} + J^g_{Δv} b_g + J^a_{Δv} b_a)
p^{k+1}_B = p^k_B + v^k_B Δt_{k,k+1} + ½ g_W Δt²_{k,k+1} + R^k_{WB} (Δp_{k,k+1} + J^g_{Δp} b_g + J^a_{Δp} b_a)
In these expressions, R_WB denotes the rotation of the IMU center B relative to the world frame, v_B the velocity of the IMU, p_B the position of the IMU center B in world coordinates, and b_g, b_a the angular rate and acceleration biases, respectively. The Jacobians J^g_{(·)} and J^a_{(·)} are first-order approximations of how the preintegrated IMU measurements vary with the biases.
In the visual SLAM algorithm initialization and tracking module (3), the inter-frame motion tracking submodule (3.1), after features have been extracted by the ORB feature extraction submodule (1.1), matches the features of the two consecutive initialization images Fc and Fr and solves for the relative pose between them with the PnP algorithm. Thereafter, SLAM tracking continuously estimates the robot position. Bundle adjustment is used to refine the estimate because it corrects the pose matrix so that it better fits the ORB feature matching results.
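The bundle-adjustment refinement of an initial pose can be sketched as a Gauss–Newton minimization of reprojection error. This is an illustrative sketch, not the patent's implementation: it assumes a simple pinhole model with focal length f and no distortion, and uses forward-difference Jacobians for brevity.

```python
import numpy as np

def hat(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    """Rodrigues formula: rotation vector -> rotation matrix."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3) + hat(w)
    K = hat(w / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

def project(X, R, t, f):
    """Pinhole projection of Nx3 world points under pose (R, t), focal f."""
    Xc = (R @ X.T).T + t
    return f * Xc[:, :2] / Xc[:, 2:3]

def refine_pose(X, x_obs, R, t, f, iters=15):
    """Gauss-Newton refinement of (R, t) minimizing reprojection error."""
    for _ in range(iters):
        def residual(dw, dt_):
            return (project(X, exp_so3(dw) @ R, t + dt_, f) - x_obs).ravel()
        r0 = residual(np.zeros(3), np.zeros(3))
        J = np.zeros((r0.size, 6))
        eps = 1e-6
        for k in range(6):          # numerical Jacobian, column by column
            d = np.zeros(6)
            d[k] = eps
            J[:, k] = (residual(d[:3], d[3:]) - r0) / eps
        delta = np.linalg.lstsq(J, -r0, rcond=None)[0]
        R = exp_so3(delta[:3]) @ R  # apply the update on the rotation manifold
        t = t + delta[3:]
    return R, t
```

Starting from a rough pose (here, the identity), a few Gauss–Newton steps drive the reprojection residual of the observed features toward zero, which is exactly the correction role the text assigns to bundle adjustment.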
The keyframe generation submodule (3.2) decides, according to the following rules, whether to select the current frame as a keyframe:
1) at least 20 frames have passed since the last relocalization;
2) local mapping is idle, or at least 20 frames have passed since the last keyframe insertion;
3) the current frame contains at least 50 feature points.
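The three rules above can be transcribed directly into a decision function; the parameter names are illustrative.

```python
def is_keyframe(frames_since_relocalization, frames_since_last_keyframe,
                local_mapping_idle, num_feature_points):
    """Keyframe selection rules:
    1) at least 20 frames since the last relocalization;
    2) local mapping idle, or at least 20 frames since the last keyframe;
    3) at least 50 feature points in the current frame."""
    return (frames_since_relocalization >= 20
            and (local_mapping_idle or frames_since_last_keyframe >= 20)
            and num_feature_points >= 50)
```

All three conditions must hold; for instance, a well-textured frame arriving 25 frames after relocalization while local mapping is idle is accepted, while a frame with only 40 features is rejected regardless of timing.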
In the local mapping module (4), the new keyframe insertion submodule (4.1) inserts the keyframe generated by submodule (3.2) into the queue, updates the covisibility view and the edges created with keyframes that share map points with it, and updates the links between the node and other keyframes. Then, map points that fail the local-view checks are culled and new map points are generated.
The local BA optimization submodule (4.2) performs local bundle adjustment on the current keyframe, all keyframes connected to it in the covisibility graph, and all map points observed by those keyframes. All other keyframes that observe these map points but are not connected to the current keyframe are included in the optimization with their poses held fixed. The projection error is:

E_proj(i) = ρ((x - π(X_C))^T Σ_I (x - π(X_C)))
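A sketch of this robustified projection error, assuming the standard Huber cost ρ(s) = s for s ≤ δ² and 2δ√s - δ² otherwise (the default δ below corresponds to the common 95% chi-square threshold for 2 degrees of freedom; names are ours):

```python
import numpy as np

def huber(s, delta=2.447):
    """Huber robust cost rho applied to a squared error s."""
    d2 = delta * delta
    return s if s <= d2 else 2.0 * delta * np.sqrt(s) - d2

def e_proj(x_obs, x_proj, info):
    """rho((x - pi(X_C))^T Sigma_I (x - pi(X_C))), with info the 2x2
    information matrix of the keypoint measurement."""
    r = np.asarray(x_obs, float) - np.asarray(x_proj, float)
    return huber(float(r @ info @ r))
```

The Huber cost grows only with the square root of large squared errors, so gross outlier matches cannot dominate the optimization.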
Here π(X_C) is the projection function and x is the projected image coordinate of the space point in the current frame. The transformation from world coordinates X_W to camera coordinates X_C is:

X_C = R_CB (R_BW (X_W - p_WB)) + p_CB

In this expression, R_CB is the rotation matrix from the IMU body frame B to the current camera frame C, and R_BW is the rotation matrix from the world frame W to the IMU body frame B. p_CB is the translation of the IMU center B expressed in the current camera frame C, and p_WB is the translation of the IMU center B expressed in the world frame W.
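The world-to-camera chain (rotate into the IMU body frame, then into the camera frame) can be sketched as follows, under the assumption that the transform composes as X_C = R_CB (R_BW (X_W - p_WB)) + p_CB; names are ours:

```python
import numpy as np

def world_to_camera(X_w, R_CB, p_CB, R_BW, p_WB):
    """Compose the world->body and body->camera transforms from the text."""
    X_b = R_BW @ (np.asarray(X_w, float) - p_WB)  # world -> IMU body frame
    return R_CB @ X_b + p_CB                      # IMU body -> camera frame
```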
The IMU factor can be expressed as:

E_IMU(i,j) = ρ([e_R^T e_v^T e_p^T] Σ_I [e_R^T e_v^T e_p^T]^T) + ρ(e_b^T Σ_R e_b)

where the individual error terms are:

e_R = Log((ΔR_ij Exp(J_ΔR^g b_g))^T R_BW^i R_WB^j)
e_v = R_BW^i (v_B^j - v_B^i - g_W Δt_ij) - (Δv_ij + J_Δv^g b_g + J_Δv^a b_a)
e_p = R_BW^i (p_B^j - p_B^i - v_B^i Δt_ij - (1/2) g_W Δt_ij²) - (Δp_ij + J_Δp^g b_g + J_Δp^a b_a)
e_b = b^j - b^i

In the above, J_ΔR^g is the first-order approximation of the change of the inter-frame pre-integrated rotation ΔR with respect to the gyroscope bias; J_Δv^g and J_Δv^a are the first-order approximations of the change of the inter-frame pre-integrated velocity Δv with respect to the gyroscope and accelerometer biases; and J_Δp^g and J_Δp^a are the corresponding approximations for the pre-integrated position Δp. R_BW is the rotation matrix from the world frame W to the IMU frame B, and R_WB is the rotation matrix from the IMU frame B to the world frame W. b_g and b_a are the gyroscope (angular-rate) and accelerometer biases, and e_R, e_p, e_v, e_b are the error terms due to rotation, translation, velocity and IMU bias, respectively. All of them are treated as Gaussian: Σ_I is the information matrix of the pre-integration, Σ_R is the information matrix of the bias random walk, and ρ is the Huber robust cost function.
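A minimal NumPy sketch of the velocity error term e_v above (the bias-correction terms are omitted for brevity; function and variable names are ours):

```python
import numpy as np

def e_v(R_BW_i, v_i, v_j, g_w, dt, delta_v):
    """Velocity error term: R_BW^i (v_j - v_i - g_W dt) - Delta v_ij.
    Vanishes when the pre-integrated measurement is consistent with the states."""
    return R_BW_i @ (v_j - v_i - g_w * dt) - delta_v
```

If the states and the pre-integrated measurement are generated from the same trajectory, the residual is zero, which is what the optimizer drives it towards.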
The redundant-keyframe culling submodule (4.3): because IMU measurements are only reliable over short time spans, the tracking thread is allowed to discard redundant keyframes as long as no two consecutive keyframes in the local BA window differ by more than 0.5 seconds. However, to keep a global optimization possible, for refining the map after loop closure or at any other time, no two consecutive keyframes may differ by more than 3 seconds. If full BA with IMU constraints were disabled, only the temporal offset between keyframes in the local window would need to be constrained.
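One way to read this rule is as a predicate deciding whether dropping a keyframe keeps the gap between its temporal neighbours within the stated limits (0.5 s in the local window, 3 s globally); this is our illustration, not the patent's code:

```python
def may_drop_keyframe(t_prev, t_cur, t_next, local_window=True):
    """Dropping the keyframe at t_cur merges its neighbours into one interval;
    allow the drop only if the resulting gap respects the IMU validity limit."""
    gap = t_next - t_prev
    limit = 0.5 if local_window else 3.0
    return gap <= limit
```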
In the loop detection and optimization module (5), the loop detection submodule (5.1) computes the bag-of-words vector of the current keyframe, computes its similarity to the neighboring keyframes in the covisibility graph, and keeps the lowest score. The image recognition database is then queried with the current keyframe, and candidate keyframes whose score falls below this lowest score are discarded.
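A toy sketch of the minimum-score gating described above, using cosine similarity between bag-of-words vectors (real systems score with a vocabulary tree such as DBoW2; names are ours):

```python
import numpy as np

def bow_similarity(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def loop_candidates(current, neighbours, database):
    """Keep database keyframes scoring at least the minimum score obtained
    against the covisible neighbours of the current keyframe."""
    s_min = min(bow_similarity(current, n) for n in neighbours)
    return [key for key, vec in database.items()
            if bow_similarity(current, vec) >= s_min]
```

Keyframes that look less similar to the current frame than its own covisible neighbours cannot plausibly close a loop, so they are filtered out early.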
The global optimization submodule (5.2) estimates the poses of all keyframes on the loop. At this stage only the visual projection factor is considered; its formula is the one given in the local BA optimization submodule (4.2).
Taking a robot equipped with a binocular camera and an IMU as an example, the execution of this embodiment on such a device is described in detail below.
First, in the binocular and IMU information acquisition and depth information acquisition module (1), feature extraction and feature matching are performed on the binocular visual information, and the IMU data are collected; both parts pass the acquired sensor information on to the subsequent modules. Because the robot moves through space, both parts continuously collect and process newly arriving sensor data. Images captured by the binocular camera are sent to the feature extraction algorithm, and the positions and descriptors of the extracted features are stored. The feature points extracted from the left and right images are then matched to compute and store the depth of each feature point. Meanwhile, the IMU signals are sampled and passed to the IMU pre-integration algorithm for processing.
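The depth computation for a matched left-right feature pair follows the standard rectified-stereo relation z = f·b/d; a minimal sketch (names are ours):

```python
def stereo_depth(u_left, u_right, focal_px, baseline_m):
    """Depth of a rectified stereo match: z = f * b / disparity.
    Returns None for non-positive disparity (bad match or point at infinity)."""
    disparity = u_left - u_right
    if disparity <= 0:
        return None
    return focal_px * baseline_m / disparity
```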
After the binocular visual SLAM has run for a period of time, the improved IMU initialization and motion module (2) starts. First, the IMU angular-rate (gyroscope) bias is estimated. Next, a preliminary estimate of the gravitational acceleration is made; using the acceleration bias estimate together with the scale, the gravitational acceleration can be obtained, which in turn allows a more accurate estimate of the IMU accelerometer bias. The accelerometer bias estimation and the gravity re-estimation then refine both quantities simultaneously. Using the IMU data obtained by the binocular and IMU information acquisition and depth information acquisition module (1), a motion model based on IMU pre-integration is established.
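A deliberately simplified, small-angle sketch of the gyroscope bias estimation idea: over short intervals the IMU-integrated rotation exceeds the vision-estimated rotation by roughly bias times dt, so the bias can be read off from the average discrepancy. (The actual method solves a nonlinear least-squares problem on SO(3); this toy version and its names are ours.)

```python
import numpy as np

def estimate_gyro_bias(imu_rotvecs, visual_rotvecs, dts):
    """Small-angle gyro bias estimate: average (theta_imu - theta_vis) / dt
    over the keyframe intervals of the initialization window."""
    diffs = [(ti - tv) / dt
             for ti, tv, dt in zip(imu_rotvecs, visual_rotvecs, dts)]
    return np.mean(diffs, axis=0)
```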
Using the extracted feature point information, the visual SLAM algorithm initialization and tracking module (3) initializes the visual SLAM part: the tracking module, the keyframe generation module and the map points are initialized in turn. The initialized system stores the initial map points and the initial pose. After visual initialization, the system runs the tracking algorithm to estimate the inter-frame pose and, at the same time, generates keyframes according to the keyframe selection strategy. Using the IMU kinematic model together with the binocular visual pose information, the velocity of each keyframe is estimated, completing the initialization of the IMU-fusion algorithm. This makes the binocular visual SLAM result more robust and reduces tracking failures.
The tracking algorithm solves the data association problem, after which the local map is built. Based on the keyframes produced by the previous module, local BA is performed in the local view on the map points seen by the current keyframe and its covisible keyframes; the optimization jointly refines the IMU states and the visual poses. After optimization, keyframes within certain time intervals can be removed from the queue to keep the map from growing within the region.
The loop detection and optimization module (5), based on graph optimization, globally optimizes the constructed map to obtain more accurate estimates of the robot's position and the map points. The module works in two stages. First, the loop detection submodule (5.1) computes the bag-of-words vector of the current keyframe, computes its similarity to the neighboring keyframes in the covisibility graph, and keeps the lowest score; the image recognition database is then queried with the current keyframe, keyframes scoring below this minimum are discarded, and when a loop is detected the poses of all keyframes on the loop are estimated. The global pose optimization submodule (5.2) runs whenever a keyframe is inserted, further refining the positions of the keyframes and map points and the poses of all current keyframes; this step uses the loop detection result and eliminates the effect of accumulated error.
Claims (6)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810218153.1A CN108665540A (en) | 2018-03-16 | 2018-03-16 | Robot localization based on binocular vision feature and IMU information and map structuring system |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN108665540A true CN108665540A (en) | 2018-10-16 |
Family
ID=63785213
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201810218153.1A Pending CN108665540A (en) | 2018-03-16 | 2018-03-16 | Robot localization based on binocular vision feature and IMU information and map structuring system |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN108665540A (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106446815A (en) * | 2016-09-14 | 2017-02-22 | 浙江大学 | Simultaneous positioning and map building method |
| CN107193279A (en) * | 2017-05-09 | 2017-09-22 | 复旦大学 | Robot localization and map structuring system based on monocular vision and IMU information |
- 2018-03-16 CN CN201810218153.1A patent/CN108665540A/en active Pending
Non-Patent Citations (3)
| Title |
|---|
| RAÚL MUR-ARTAL et al.: "Visual-Inertial Monocular SLAM with Map Reuse", IEEE Robotics and Automation Letters * |
| XUE-BO ZHANG et al.: "An Extended Kalman Filter-Based Robot Pose Estimation Approach with Vision and Odometry", Wearable Sensors and Robots * |
| XIONG Minjun et al.: "UAV pose estimation based on fusion of monocular vision and inertial navigation", Journal of Computer Applications * |
Cited By (77)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111089579A (en) * | 2018-10-22 | 2020-05-01 | 北京地平线机器人技术研发有限公司 | Heterogeneous binocular SLAM method and device and electronic equipment |
| CN109579840A (en) * | 2018-10-25 | 2019-04-05 | 中国科学院上海微系统与信息技术研究所 | A kind of close coupling binocular vision inertia SLAM method of dotted line Fusion Features |
| CN111145251B (en) * | 2018-11-02 | 2024-01-02 | 深圳市优必选科技有限公司 | Robot and synchronous positioning and mapping method thereof and computer storage device |
| CN111145251A (en) * | 2018-11-02 | 2020-05-12 | 深圳市优必选科技有限公司 | A robot and its simultaneous positioning and mapping method and computer storage device |
| CN109461208A (en) * | 2018-11-15 | 2019-03-12 | 网易(杭州)网络有限公司 | Three-dimensional map processing method, device, medium and calculating equipment |
| CN109658507A (en) * | 2018-11-27 | 2019-04-19 | 联想(北京)有限公司 | Information processing method and device, electronic equipment |
| WO2020108285A1 (en) * | 2018-11-30 | 2020-06-04 | 华为技术有限公司 | Map building method, apparatus and system, and storage medium |
| US12266177B2 (en) | 2018-11-30 | 2025-04-01 | Shenzhen Yinwang Intelligent Technologies Co., Ltd. | Map building method, apparatus, and system, and storage medium |
| CN109712170A (en) * | 2018-12-27 | 2019-05-03 | 广东省智能制造研究所 | Environmental objects method for tracing, device, computer equipment and storage medium |
| CN109767470B (en) * | 2019-01-07 | 2021-03-02 | 浙江商汤科技开发有限公司 | Tracking system initialization method and terminal equipment |
| CN109767470A (en) * | 2019-01-07 | 2019-05-17 | 浙江商汤科技开发有限公司 | A kind of tracking system initial method and terminal device |
| CN109781092B (en) * | 2019-01-19 | 2021-01-19 | 北京化工大学 | Mobile robot positioning and mapping method in dangerous chemical engineering accident |
| CN109781092A (en) * | 2019-01-19 | 2019-05-21 | 北京化工大学 | A mobile robot positioning and mapping method in dangerous chemical accidents |
| CN109871803A (en) * | 2019-02-18 | 2019-06-11 | 清华大学 | Robot winding detection method and device |
| CN109871803B (en) * | 2019-02-18 | 2020-12-08 | 清华大学 | Robot loopback detection method and device |
| CN110125928A (en) * | 2019-03-27 | 2019-08-16 | 浙江工业大学 | A kind of binocular inertial navigation SLAM system carrying out characteristic matching based on before and after frames |
| CN110125928B (en) * | 2019-03-27 | 2021-04-06 | 浙江工业大学 | A Binocular Inertial Navigation SLAM System for Feature Matching Based on Front and Back Frames |
| CN109993113B (en) * | 2019-03-29 | 2023-05-02 | 东北大学 | A Pose Estimation Method Based on RGB-D and IMU Information Fusion |
| CN109993113A (en) * | 2019-03-29 | 2019-07-09 | 东北大学 | A kind of position and orientation estimation method based on the fusion of RGB-D and IMU information |
| CN111798566A (en) * | 2019-04-08 | 2020-10-20 | 上海未来伙伴机器人有限公司 | A visual SLAM method |
| CN110084853A (en) * | 2019-04-22 | 2019-08-02 | 北京易达图灵科技有限公司 | A kind of vision positioning method and system |
| CN110111389B (en) * | 2019-05-14 | 2023-06-02 | 南京信息工程大学 | Mobile augmented reality tracking registration method and system based on SLAM |
| CN110111389A (en) * | 2019-05-14 | 2019-08-09 | 南京信息工程大学 | A kind of mobile augmented reality Tracing Registration method and system based on SLAM |
| CN110118554A (en) * | 2019-05-16 | 2019-08-13 | 深圳前海达闼云端智能科技有限公司 | SLAM method, apparatus, storage medium and device based on visual inertia |
| CN110458863A (en) * | 2019-06-25 | 2019-11-15 | 广东工业大学 | A Dynamic SLAM System Based on Fusion of RGBD and Encoder |
| CN110458863B (en) * | 2019-06-25 | 2023-12-01 | 广东工业大学 | A dynamic SLAM system based on the fusion of RGBD and encoder |
| CN110487274B (en) * | 2019-07-30 | 2021-01-29 | 中国科学院空间应用工程与技术中心 | SLAM method and system for weak texture scene, navigation vehicle and storage medium |
| CN110487274A (en) * | 2019-07-30 | 2019-11-22 | 中国科学院空间应用工程与技术中心 | SLAM method, system, navigation vehicle and storage medium for weak texture scene |
| CN110514199A (en) * | 2019-08-28 | 2019-11-29 | 爱笔(北京)智能科技有限公司 | Loop detection method and device of SLAM system |
| CN110514199B (en) * | 2019-08-28 | 2021-10-22 | 爱笔(北京)智能科技有限公司 | A kind of loopback detection method and device of SLAM system |
| CN110471422A (en) * | 2019-08-29 | 2019-11-19 | 南京理工大学 | The detection of obstacles and automatic obstacle avoiding method of intelligent wheel chair |
| WO2021035703A1 (en) * | 2019-08-30 | 2021-03-04 | 深圳市大疆创新科技有限公司 | Tracking method and movable platform |
| CN110648370B (en) * | 2019-09-29 | 2022-06-03 | 阿波罗智联(北京)科技有限公司 | Calibration data screening method and device and electronic equipment |
| CN110648370A (en) * | 2019-09-29 | 2020-01-03 | 百度在线网络技术(北京)有限公司 | Calibration data screening method and device and electronic equipment |
| CN112581514A (en) * | 2019-09-30 | 2021-03-30 | 浙江商汤科技开发有限公司 | Map construction method and device and storage medium |
| CN110986968A (en) * | 2019-10-12 | 2020-04-10 | 清华大学 | Method and device for real-time global optimization and error loop judgment in three-dimensional reconstruction |
| CN110986968B (en) * | 2019-10-12 | 2022-05-24 | 清华大学 | Method and device for real-time global optimization and error loop judgment in three-dimensional reconstruction |
| US12233561B2 (en) | 2020-01-20 | 2025-02-25 | Shenzhen Pudu Technology Co., Ltd. | Multi-sensor fusion SLAM system, multi-sensor fusion method, robot, and medium |
| CN111258313A (en) * | 2020-01-20 | 2020-06-09 | 深圳市普渡科技有限公司 | Multi-sensor fusion SLAM system and robot |
| CN111307146A (en) * | 2020-03-02 | 2020-06-19 | 北京航空航天大学青岛研究院 | Virtual reality wears display device positioning system based on binocular camera and IMU |
| CN111220155A (en) * | 2020-03-04 | 2020-06-02 | 广东博智林机器人有限公司 | Method, device and processor for estimating pose based on binocular vision inertial odometer |
| CN111323009A (en) * | 2020-03-09 | 2020-06-23 | 西南交通大学 | Magnetic suspension train positioning method and system |
| CN111311684A (en) * | 2020-04-01 | 2020-06-19 | 亮风台(上海)信息科技有限公司 | A method and device for SLAM initialization |
| CN111311684B (en) * | 2020-04-01 | 2021-02-05 | 亮风台(上海)信息科技有限公司 | Method and equipment for initializing SLAM |
| CN111561923A (en) * | 2020-05-19 | 2020-08-21 | 北京数字绿土科技有限公司 | SLAM (simultaneous localization and mapping) mapping method and system based on multi-sensor fusion |
| CN111561923B (en) * | 2020-05-19 | 2022-04-15 | 北京数字绿土科技股份有限公司 | SLAM (simultaneous localization and mapping) mapping method and system based on multi-sensor fusion |
| CN111784835B (en) * | 2020-06-28 | 2024-04-12 | 北京百度网讯科技有限公司 | Drawing method, drawing device, electronic equipment and readable storage medium |
| CN111784835A (en) * | 2020-06-28 | 2020-10-16 | 北京百度网讯科技有限公司 | Drawing method, apparatus, electronic device and readable storage medium |
| CN111899276A (en) * | 2020-07-07 | 2020-11-06 | 武汉大学 | SLAM method and system based on binocular event camera |
| CN112082545A (en) * | 2020-07-29 | 2020-12-15 | 武汉威图传视科技有限公司 | Map generation method, device and system based on IMU and laser radar |
| CN112082545B (en) * | 2020-07-29 | 2022-06-21 | 武汉威图传视科技有限公司 | Map generation method, device and system based on IMU and laser radar |
| CN111737278B (en) * | 2020-08-05 | 2020-12-04 | 鹏城实验室 | Method, system, equipment and storage medium for simultaneous positioning and mapping |
| CN111737278A (en) * | 2020-08-05 | 2020-10-02 | 鹏城实验室 | Simultaneous positioning and mapping method, system, device and storage medium |
| CN112233177A (en) * | 2020-10-10 | 2021-01-15 | 中国安全生产科学研究院 | Unmanned aerial vehicle pose estimation method and system |
| CN112197770A (en) * | 2020-12-02 | 2021-01-08 | 北京欣奕华数字科技有限公司 | Robot positioning method and positioning device thereof |
| CN112509006A (en) * | 2020-12-11 | 2021-03-16 | 北京华捷艾米科技有限公司 | Sub-map recovery fusion method and device |
| CN112747749B (en) * | 2020-12-23 | 2022-12-06 | 浙江同筑科技有限公司 | Positioning navigation system based on binocular vision and laser fusion |
| CN112747749A (en) * | 2020-12-23 | 2021-05-04 | 浙江同筑科技有限公司 | Positioning navigation system based on binocular vision and laser fusion |
| CN112684430A (en) * | 2020-12-23 | 2021-04-20 | 哈尔滨工业大学(威海) | Indoor old person walking health detection method and system, storage medium and terminal |
| CN112767546A (en) * | 2021-01-22 | 2021-05-07 | 湖南大学 | Binocular image-based visual map generation method for mobile robot |
| CN112767546B (en) * | 2021-01-22 | 2022-08-02 | 湖南大学 | Binocular image-based visual map generation method for mobile robot |
| CN113076988B (en) * | 2021-03-25 | 2022-06-03 | 重庆邮电大学 | A Neural Network-Based Adaptive Screening Method for Mobile Robot Vision SLAM Key Frames |
| CN113076988A (en) * | 2021-03-25 | 2021-07-06 | 重庆邮电大学 | Mobile robot vision SLAM key frame self-adaptive screening method based on neural network |
| CN113674340A (en) * | 2021-07-05 | 2021-11-19 | 北京物资学院 | A method and device for binocular vision navigation based on landmarks |
| CN113516714A (en) * | 2021-07-15 | 2021-10-19 | 北京理工大学 | Visual SLAM method based on IMU pre-integration information acceleration feature matching |
| CN113674408A (en) * | 2021-07-19 | 2021-11-19 | 之江实验室 | End-edge-cooperated visual synchronous mapping and positioning system and method |
| CN113720323A (en) * | 2021-07-30 | 2021-11-30 | 安徽大学 | Monocular vision through-guidance SLAM method and device based on dotted line feature fusion |
| CN113720323B (en) * | 2021-07-30 | 2024-01-23 | 安徽大学 | Monocular visual inertial navigation SLAM method and device based on point-line feature fusion |
| CN114612499A (en) * | 2021-08-31 | 2022-06-10 | 北京石头创新科技有限公司 | Robot and sensing method thereof |
| CN113781582B (en) * | 2021-09-18 | 2023-09-19 | 四川大学 | Synchronous positioning and map creation method based on joint calibration of lidar and inertial navigation |
| CN113781582A (en) * | 2021-09-18 | 2021-12-10 | 四川大学 | Synchronous positioning and map creating method based on laser radar and inertial navigation combined calibration |
| CN114111818A (en) * | 2021-12-08 | 2022-03-01 | 太原供水设计研究院有限公司 | A general visual SLAM method |
| CN114485620B (en) * | 2022-01-29 | 2023-07-28 | 中国科学院国家空间科学中心 | Autonomous visual positioning system and method for asteroid probes integrated with orbital dynamics |
| CN114485620A (en) * | 2022-01-29 | 2022-05-13 | 中国科学院国家空间科学中心 | Orbital dynamics fused asteroid detector autonomous visual positioning system and method |
| CN116678398A (en) * | 2023-06-15 | 2023-09-01 | 太原理工大学 | A VSLAM back-end optimization method based on multi-convex combination maximum cross-correlation entropy |
| CN120726129A (en) * | 2025-08-18 | 2025-09-30 | 慧芯加(苏州)智能科技有限公司 | A pipe network detection robot positioning method, device, equipment, medium and product |
| CN120726129B (en) * | 2025-08-18 | 2025-11-04 | 慧芯加(苏州)智能科技有限公司 | Pipe network detection robot positioning method, device, equipment, medium and product |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN108665540A (en) | Robot localization based on binocular vision feature and IMU information and map structuring system | |
| CN110125928B (en) | A Binocular Inertial Navigation SLAM System for Feature Matching Based on Front and Back Frames | |
| CN107193279A (en) | Robot localization and map structuring system based on monocular vision and IMU information | |
| CN112634451A (en) | Outdoor large-scene three-dimensional mapping method integrating multiple sensors | |
| CN110044354B (en) | A binocular vision indoor positioning and mapping method and device | |
| CN112649016B (en) | A visual inertial odometry method based on point-line initialization | |
| CN109029433B (en) | A method for calibrating external parameters and time sequence based on vision and inertial navigation fusion SLAM on mobile platforms | |
| CN114019552B (en) | A method for optimizing location reliability based on Bayesian multi-sensor error constraints | |
| CN115240047A (en) | A laser SLAM method and system integrating visual loop closure detection | |
| US8380384B2 (en) | Apparatus and method for localizing mobile robot | |
| CN110570453B (en) | A visual odometry method for closed-loop feature tracking based on binocular vision | |
| WO2019062291A1 (en) | Binocular vision positioning method, device, and system | |
| CN110223348A (en) | Robot scene adaptive bit orientation estimation method based on RGB-D camera | |
| CN109166149A (en) | A kind of positioning and three-dimensional wire-frame method for reconstructing and system of fusion binocular camera and IMU | |
| CN115420276B (en) | A multi-robot collaborative localization and mapping method for outdoor scenes | |
| CN107967457A (en) | A kind of place identification for adapting to visual signature change and relative positioning method and system | |
| CN111274847B (en) | Positioning method | |
| CN106940186A (en) | A kind of robot autonomous localization and air navigation aid and system | |
| CN110717927A (en) | Motion estimation method for indoor robot based on deep learning and visual-inertial fusion | |
| CN107886129A (en) | A kind of mobile robot map closed loop detection method of view-based access control model bag of words | |
| CN112595322B (en) | A laser SLAM method integrating ORB closed-loop detection | |
| CN116380079B (en) | An underwater SLAM method integrating forward-looking sonar and ORB-SLAM3 | |
| CN116563340A (en) | Visual SLAM method based on deep learning under dynamic environment | |
| CN113570716B (en) | Cloud 3D map construction method, system and device | |
| CN115218906A (en) | Indoor SLAM-oriented visual inertial fusion positioning method and system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination |