CN112862768B - Adaptive monocular VIO (visual-inertial odometry) initialization method based on point-line features - Google Patents
- Publication number: CN112862768B (application CN202110119124.1A)
- Authority: CN (China)
- Prior art keywords: point, line, IMU, initialization, constraint
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06T7/0002—Image analysis; inspection of images, e.g. flaw detection
- G01C25/005—Initial alignment, calibration or starting-up of inertial devices
- G06T2207/10016—Video; image sequence
- G06T2207/20112—Image segmentation details
- G06T2207/20164—Salient point detection; corner detection
Abstract
The invention relates to an adaptive monocular VIO (visual-inertial odometry) initialization method based on point and line features, belonging to the technical field of robot visual positioning and navigation. The method comprises: S1: input image frames and detect point features and line features separately; input the data acquired by the IMU and perform pre-integration between image frames; S2: estimate the initial camera pose; S3: construct a maximum a posteriori estimation problem and optimize the inertial parameters, obtaining the scale factor, the velocities, the gravity direction, and the gyroscope and accelerometer biases of the IMU; S4: perform visual-inertial alignment and scale recovery, and transform the initial camera poses into the world coordinate frame; S5: the initial values converge. The invention completes stable and accurate initialization in different complex environments and from different initial states, resolves the sensor uncertainty and inertial-parameter inconsistency of the VIO initialization process, and achieves higher performance.
Description
Technical Field
The invention belongs to the technical field of robot visual positioning and navigation, and relates to an adaptive monocular VIO initialization method based on point and line features.
Background Art
With the development of computer technology, research in the field of mobile robots has advanced rapidly. To achieve autonomous robot motion in an unknown environment, the first two problems to solve are real-time estimation of the robot pose and construction of a map from that pose, which in turn supports subsequent tasks such as autonomous localization, path planning and obstacle avoidance. In practical applications, robots usually carry sensors with different functions. A SLAM system equipped with a camera and an IMU is called visual-inertial SLAM, and its odometry is called visual-inertial odometry (VIO). With its small size, low cost and strong scene-recognition ability, VIO has received wide attention in this field.
For VIO, the initialization module is particularly important. The determination of the initial parameters, such as the gravity direction, the velocity and the IMU biases, decides the accuracy of the system. In particular, scale is not directly observable in monocular VIO, making it difficult to fuse vision and inertia and hence difficult to initialize. For IMU initialization, since the IMU accelerometer measurements are affected by gravity, the estimation of the gravity direction is also a decisive factor in pose estimation. If the initialization contains errors, the accuracy of the whole system degrades accordingly, and optimization-based methods may fall into local optima. Current initialization methods divide mainly into tightly coupled and loosely coupled approaches, and different solutions have been proposed for the above problems. The paper "Martinelli et al., Closed-form solution of visual-inertial structure from motion. International Journal of Computer Vision, 2014" provides a closed-form scheme for jointly obtaining scale, gravity, biases, initial velocity and other parameters, built on the premise that the camera pose can be roughly estimated from IMU data.
The papers "Mur-Artal et al., Visual-inertial monocular SLAM with map reuse. IEEE Robotics and Automation Letters, 2017" and "T. Qin et al., VINS-Mono: A robust and versatile monocular visual-inertial state estimator. IEEE Transactions on Robotics, vol. 34, 2018" assume that a monocular camera can accurately estimate a scale-free camera trajectory; the inertial parameters are estimated from this trajectory and refined by bundle adjustment (BA), being solved by least squares from the linear equations provided by the visual information. However, both initialization schemes ignore sensor uncertainty, and the inertial parameters are solved separately in different steps, ignoring their correlation.
In summary, the current problems in the VIO field are: 1) Over-reliance on scene features. Existing VIO initialization algorithms generally use point features for the pure-visual estimation, but in weak-texture environments such as corridors and walls it is difficult to extract a sufficient number of feature points, causing initialization to fail and degrading the localization accuracy of the system. 2) Sensor uncertainty and the correlation among the inertial parameters are not considered, and the IMU accelerometer bias is usually ignored, resulting in low accuracy of the estimates. 3) The requirements on the initial state are high: the camera must provide enough rotation and translation during the initialization stage, so these methods apply only in specific situations.
Summary of the Invention
In view of this, the object of the present invention is to address the difficulty of estimating the initial camera pose when point features are insufficient in weak-texture environments, the resulting poor localization accuracy, the neglect of sensor uncertainty and parameter correlation during inertial-parameter estimation, and the limited applicability of existing initialization schemes. Line features are introduced into the pure-visual SFM as an option, used when the scene texture is insufficient for a reliable estimate, providing robustness; at the same time, a maximum a posteriori (MAP) estimation problem is constructed to solve the inertial parameters, guaranteeing their consistency and making the method applicable to any initialization situation. The invention thus provides an adaptive monocular VIO initialization method based on point and line features.
To achieve the above object, the present invention provides the following technical solution:

An adaptive monocular VIO initialization method based on point and line features, comprising the following steps:
S1: input image frames and detect point features and line features separately; input the data acquired by the IMU and perform IMU pre-integration between consecutive image frames;

S2: estimate the initial camera pose: first judge whether the point features satisfy the parallax condition and the quantity requirement; if so, solve the essential matrix by the eight-point method and estimate the initial camera pose; otherwise introduce line features, compute their weak-constraint matching scores, select the line features usable for initialization, and estimate the initial camera pose from the point-line distance constraint;

S3: construct a maximum a posteriori estimation problem and optimize the inertial parameters, obtaining the scale factor, the velocities, the gravity direction, and the gyroscope and accelerometer biases of the IMU;

S4: perform visual-inertial alignment and scale recovery, and transform the initial camera poses into the world coordinate frame;

S5: the initial values converge and the initialization is complete.
Further, step S1 specifically includes: point features are detected with the Shi-Tomasi corner algorithm, which detects corners from gradient changes and is an improvement of the Harris corner detector; line features are detected with the LSD line-segment algorithm, whose core idea is to merge pixels with similar gradient directions so as to detect straight line segments in the image quickly; IMU pre-integration integrates all IMU measurements between image frame k and frame k+1 to obtain the PVQ values (position, velocity and rotation) at frame k+1, which provide initial values for vision and serve as constraint terms in the back-end optimization.
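The Shi-Tomasi criterion described above can be sketched in a few lines (a hypothetical numpy illustration, not the patent's code): the corner response is the smaller eigenvalue of the local gradient structure tensor, so a patch counts as a point feature only when the gradients are strong in two independent directions.

```python
import numpy as np

def shi_tomasi_response(patch):
    """Smaller eigenvalue of the gradient structure tensor of an image patch.

    Shi-Tomasi keeps a patch as a corner when this response exceeds a
    threshold; Harris instead scores det(M) - k*trace(M)^2.
    """
    patch = patch.astype(float)
    iy, ix = np.gradient(patch)                    # image gradients
    sxx, syy, sxy = (ix * ix).sum(), (iy * iy).sum(), (ix * iy).sum()
    # Eigenvalues of the 2x2 structure tensor M = [[sxx, sxy], [sxy, syy]].
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = max(tr * tr / 4.0 - det, 0.0)
    return tr / 2.0 - np.sqrt(disc)                # minimum eigenvalue

# A flat patch has no gradient in any direction, so its response is ~0;
# a step corner has strong gradients along both axes.
flat = np.ones((8, 8))
corner = np.zeros((8, 8)); corner[4:, 4:] = 1.0
```

In a weak-texture scene (corridor, wall), few patches pass this test, which is exactly the situation in which the method falls back to line features.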
Further, in step S2, the initial camera pose is estimated in two cases according to whether the point features satisfy the initialization conditions:
Case 1: the point features satisfy the parallax condition and the quantity requirement. From the epipolar geometry, corresponding points are related by:

x2^T t^ R x1 = 0

where x1 = (u1, v1, 1)^T and x2 = (u2, v2, 1)^T are the coordinates of the corresponding pixels on the normalized plane, and R and t are the camera motion between the two frames, representing the rotation and translation respectively. The middle part is denoted the essential matrix E, written E = t^R, a 3x3 matrix with 5 degrees of freedom.
The essential matrix is solved by the eight-point method. From the epipolar geometry:

x2^T E x1 = 0

Solving for E requires eight pairs of matched points forming eight equations; the essential matrix E is solved by singular value decomposition (SVD), and the solution with positive depths is taken as the final estimate.
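The eight-point solve can be sketched as follows (a hypothetical numpy illustration under the stated noiseless-synthetic assumptions; the patent publishes no code): each correspondence contributes one row kron(x2, x1) to a homogeneous system A·vec(E) = 0, the null-space vector is found by SVD, and the essential-matrix singular-value constraint (two equal singular values, one zero) is then enforced.

```python
import numpy as np

def eight_point_essential(x1, x2):
    """Estimate E from >= 8 normalized correspondences with x2^T E x1 = 0."""
    A = np.stack([np.kron(b, a) for a, b in zip(x1, x2)])   # n x 9 system
    _, _, vt = np.linalg.svd(A)
    E = vt[-1].reshape(3, 3)                    # null-space vector -> 3x3
    u, s, vt = np.linalg.svd(E)
    return u @ np.diag([1.0, 1.0, 0.0]) @ vt    # enforce rank-2 constraint

def skew(t):
    return np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0.0]])

# Synthetic two-view setup: small rotation about z plus translation t.
rng = np.random.default_rng(0)
c, s_ = np.cos(0.1), np.sin(0.1)
R = np.array([[c, -s_, 0], [s_, c, 0], [0, 0, 1.0]])
t = np.array([0.5, 0.2, 1.0])
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(12, 3))  # 3D points, positive depth
x1 = X / X[:, 2:3]                                      # normalized image coords
X2 = X @ R.T + t
x2 = X2 / X2[:, 2:3]
E = eight_point_essential(x1, x2)
residuals = np.abs(np.einsum('ij,jk,ik->i', x2, E, x1))
```

With exact data the epipolar residuals x2^T E x1 vanish up to numerical noise; in practice the decomposition of E into R and t still needs the positive-depth check mentioned above.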
Case 2: the point features do not satisfy the initialization requirements. Line features are introduced, matching pairs are screened by computing weak-constraint scores, and the initial camera pose is solved from the point-line distance constraint. The weak constraints comprise a descriptor constraint and an epipolar constraint, for which the scores s_d and s_e are computed respectively, as follows:
LSD line segments use the LBD descriptor, which accumulates pixel-gradient statistics and takes the mean vector and standard deviation of those statistics as the descriptor. The descriptor constraint mainly serves to reject mismatches with large appearance differences: the Hamming distance between the reference-frame descriptor desc1 and the current-frame descriptor desc2 is computed; if it is below the threshold tau_desc, the descriptor score s_d is set to 1, and if it exceeds the threshold, s_d is set to 0:
For the epipolar constraint: line features obey no strict epipolar constraint, so it is used as a weak constraint term to enhance reliability. First, the epipolar lines of the two endpoints of the reference-frame line feature are computed; the line containing the corresponding line feature AB in the current frame intersects these epipolar lines at points C and D. The constraint score s_e is defined in terms of d_min and d_max, where d_min denotes the minimum and d_max the maximum Euclidean distance among the four collinear points.
Finally, for each pair of matched lines the score s = s_d · s_e is computed; if s exceeds a threshold, the matching pair is considered usable for initialization and the closed-form solution is carried out.
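The score combination can be sketched as follows (hypothetical illustration: the threshold values are made up, and the ratio form of s_e is an assumed reading of the patent's score, whose exact formula is not reproduced in this text). s_d gates on the LBD Hamming distance and s_e rewards a consistent epipolar intersection, so a line match survives only when both agree.

```python
import numpy as np

def descriptor_score(desc1, desc2, tau_desc=50):
    """s_d: binary gate on the Hamming distance of two LBD-style binary
    descriptors (tau_desc is an illustrative, made-up threshold)."""
    hamming = int(np.count_nonzero(desc1 != desc2))
    return 1.0 if hamming < tau_desc else 0.0

def epipolar_score(d_min, d_max):
    """s_e from the min/max Euclidean distances among the four collinear
    points A, B, C, D (the ratio form is an assumption, not the patent's
    published formula)."""
    return d_min / d_max if d_max > 0 else 0.0

def keep_match(desc1, desc2, d_min, d_max, tau_s=0.1):
    """A match is usable for initialization when s = s_d * s_e > tau_s."""
    return descriptor_score(desc1, desc2) * epipolar_score(d_min, d_max) > tau_s
```

The multiplicative combination means a single failed weak constraint vetoes the match, which is the conservative behaviour one wants when the pose estimate hinges on few features.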
The closed-form solution proceeds as follows: the projections of the endpoints of a 3D line feature should in theory fall on the line observed by the camera, giving the coefficients of the normalized line feature. Denoting the inverse depths of the line-feature endpoints by rho_ks and rho_ke, the reprojection of the 3D line endpoints is expressed in normalized form as:
where pi(·) is the reprojection function, pi(x, y, z)^T = pi(x/z, y/z, 1)^T, and R_i is the rotation matrix under the small-rotation assumption, i.e. the rotation between consecutive image frames is assumed small. Denoting the camera rotation vector and translation vector by r = (r1, r2, r3)^T and t = (t1, t2, t3)^T respectively, the rotation matrix is approximated by its first-order Taylor expansion:

R_i ≈ I + [r]×
Since the projection point lies on the observation line, the distance between the two is zero. Taking the starting point as an example, under the small-rotation assumption the term rho_ks·t1 is negligible, so the constraint simplifies to:

A·r1 + B·r2 + C·r3 + D = 0
where A, B, C and D are coefficient terms determined by the line observation and the endpoint inverse depth. In addition, the other endpoint satisfies the same constraint, so each pair of matched lines yields two equations; if there are multiple pairs of matched lines, the resulting linear system is solved in closed form, with the unique solution obtained by SVD.
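With two equations A·r1 + B·r2 + C·r3 + D = 0 per matched line, stacking several pairs gives a small linear system in the rotation vector r. A hypothetical numpy sketch of the closed-form solve (here using numpy's SVD-based least-squares routine on synthetic coefficients, since the patent's explicit A, B, C, D expressions are not reproduced in this text):

```python
import numpy as np

def solve_rotation(coeffs):
    """coeffs: (n, 4) rows [A, B, C, D] of A*r1 + B*r2 + C*r3 + D = 0.
    Returns the least-squares rotation vector r = (r1, r2, r3)."""
    M, d = coeffs[:, :3], -coeffs[:, 3]
    r, *_ = np.linalg.lstsq(M, d, rcond=None)   # SVD-based solve of M r = -D
    return r

# Synthetic check: build coefficients from a known small rotation vector.
rng = np.random.default_rng(1)
r_true = np.array([0.01, -0.02, 0.015])
M = rng.normal(size=(6, 3))                     # three line pairs -> six equations
coeffs = np.hstack([M, -(M @ r_true)[:, None]])
r_est = solve_rotation(coeffs)
```

At least two matched line pairs (four equations) are needed for the 3-DoF rotation to be over-determined; extra pairs are absorbed in the least-squares sense.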
Further, in step S3, a maximum a posteriori estimation problem is constructed and the IMU-related parameters are optimized, obtaining the scale factor, the velocities, the gravity direction, and the gyroscope and accelerometer biases of the IMU.
First, the inertial parameters to be estimated are:

X_k = { s, R_wg, b, v_0:k }

where s is the scale factor, R_wg is the gravity direction, the bias vector b comprises the IMU accelerometer bias b_a and the gyroscope bias b_g, and v_0:k are the un-scaled velocities from frame 0 to frame k. From the IMU pre-integration theory, a MAP problem with a prior is established:
p(X_k | I_0:k) ∝ p(I_0:k | X_k) p(X_k)

where p(I_0:k | X_k) is the likelihood, p(X_k) is the prior, and I_0:k denotes the set of IMU pre-integrations between consecutive keyframes within the initialization window. Assuming that the individual IMU measurements are independent, the MAP problem is described as:

X_k* = argmax p(X_k) · prod_{i=1..k} p(I_{i-1,i} | s, R_wg, b, v_{i-1}, v_i)
Assuming that the errors of the IMU pre-integration and of the prior distribution are Gaussian, the final optimization problem is obtained:

X_k* = argmin ( ||r_p||^2 + sum_{i=1..k} ||r_{I_{i-1,i}}||^2 )

where r_p is the prior error and r_{I_{i-1,i}} is the IMU pre-integration error, each weighted by its covariance. During the optimization, the gravity direction and the scale factor are updated as:

R_wg_new = R_wg_old · Exp(delta_alpha, delta_beta, 0)
s_new = s_old · exp(delta_s)
This method accounts for the uncertainty of the IMU, formulates the estimation of the inertial parameters as an optimal estimation problem without assuming that the accelerometer bias can be ignored, and injects the known information into the MAP problem as a prior. All inertial parameters are estimated in one pass, avoiding data-inconsistency problems.
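The update rules above keep the estimates on their manifolds: the gravity direction is perturbed by only two angles (no rotation about gravity itself, which is unobservable) and the scale is updated multiplicatively so it stays positive. A hypothetical numpy sketch, assuming the standard SO(3) exponential map:

```python
import numpy as np

def exp_so3(w):
    """Rodrigues formula: SO(3) exponential of a rotation vector w."""
    theta = np.linalg.norm(w)
    K = np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0.0]])
    if theta < 1e-12:
        return np.eye(3) + K
    return (np.eye(3) + np.sin(theta) / theta * K
            + (1 - np.cos(theta)) / theta**2 * (K @ K))

def update_gravity_scale(R_wg, s, d_alpha, d_beta, d_s):
    """Retraction-style updates from the MAP optimization:
    R_wg_new = R_wg * Exp(d_alpha, d_beta, 0),  s_new = s * exp(d_s)."""
    return R_wg @ exp_so3(np.array([d_alpha, d_beta, 0.0])), s * np.exp(d_s)

R_new, s_new = update_gravity_scale(np.eye(3), 1.0, 0.01, -0.02, 0.1)
```

Because the retraction always returns a proper rotation and a positive scale, no explicit re-normalization step is needed between optimizer iterations.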
Further, in step S4, after the inertial-parameter optimization is finished, the estimated scale required by monocular vision is available. The camera poses, velocities and 3D map points are scaled accordingly and aligned with the gravity direction, the poses are transformed into the world coordinate frame, and the IMU pre-integration is recomputed and updated. At this point the visual and inertial parameters have each been estimated; finally, BA optimization is performed to obtain the optimal solution.
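The alignment step can be sketched as follows (a hypothetical illustration; the frame convention for R_wg is an assumption stated in the docstring): every up-to-scale position and velocity is multiplied by the recovered scale s, and the whole trajectory is rotated so the estimated gravity becomes the world z-axis.

```python
import numpy as np

def align_to_world(positions, velocities, s, R_wg):
    """Scale the up-to-scale monocular trajectory by s and rotate it into
    the gravity-aligned world frame.

    Convention assumed here: R_wg rotates world-frame vectors into the
    reference (first-camera) frame, so R_wg^T takes reference-frame
    quantities back to the world frame. Rows of `positions` and
    `velocities` are individual vectors."""
    R_gw = R_wg.T
    p_w = s * positions @ R_gw.T    # (R_gw @ p)^T applied to every row p^T
    v_w = s * velocities @ R_gw.T   # un-scaled velocities get the same s
    return p_w, v_w

# Toy example: 90-degree rotation about x, scale factor 2.
R_wg = np.array([[1.0, 0.0, 0.0],
                 [0.0, 0.0, -1.0],
                 [0.0, 1.0, 0.0]])
p_w, v_w = align_to_world(np.array([[0.0, 0.0, 1.0]]),
                          np.array([[0.0, 0.0, 1.0]]), 2.0, R_wg)
```

After this transformation the poses are metric and gravity-aligned, which is what allows the IMU pre-integration to be recomputed consistently before the final BA.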
The beneficial effects of the present invention are: 1) it remedies the low accuracy, poor robustness and limited applicability of traditional methods, completing stable and accurate initialization in different complex environments and from different initial states; 2) the proposed adaptive pure-visual SFM estimation with line features adapts well to weak-texture environments and provides structural information, improving reliability; 3) the proposed inertial optimization based on maximum a posteriori estimation resolves the sensor uncertainty and inertial-parameter inconsistency of the VIO initialization process. Simulation results show that the invention achieves higher performance than existing VIO algorithms.
Other advantages, objects and features of the present invention will be set forth to some extent in the description that follows and, to some extent, will be apparent to those skilled in the art from study of what follows, or may be learned from the practice of the present invention. The objects and other advantages of the present invention may be realized and attained through the following description.
Brief Description of the Drawings
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in detail below with reference to the accompanying drawings, in which:
Fig. 1 is a flowchart of the adaptive monocular VIO initialization algorithm based on point-line features provided by an embodiment of the present invention;

Fig. 2 is a flowchart of the line-feature processing provided by an embodiment of the present invention;

Fig. 3 is a schematic diagram of the epipolar constraint on line features provided by an embodiment of the present invention;

Fig. 4 is a schematic diagram of feature extraction in a weak-texture environment provided by an embodiment of the present invention;

Fig. 5 compares the trajectories obtained by the method of the present invention and by a traditional VIO method against the ground-truth trajectory;

Fig. 6 compares the root mean square error (RMSE) obtained by the VIO method of the present invention and by a traditional VIO method.
Detailed Description
The embodiments of the present invention are described below through specific examples, and those skilled in the art can easily understand other advantages and effects of the present invention from the contents disclosed in this specification. The present invention may also be implemented or applied through other different specific embodiments, and the details in this specification may be modified or changed in various ways based on different viewpoints and applications without departing from the spirit of the present invention. It should be noted that the drawings provided with the following embodiments only illustrate the basic concept of the present invention schematically and, where there is no conflict, the following embodiments and the features in them may be combined with one another.
The drawings are for exemplary illustration only; they are schematic rather than physical and are not to be understood as limiting the invention. To better illustrate the embodiments, some parts of the drawings may be omitted, enlarged or reduced and do not represent the size of an actual product; those skilled in the art will understand that certain well-known structures and their descriptions may be omitted from the drawings. Identical or similar reference numbers in the drawings of the embodiments correspond to identical or similar components. In the description of the present invention, it should be understood that orientation terms such as "upper", "lower", "left", "right", "front" and "rear" indicate orientations or positional relationships based on the drawings; they serve only to facilitate and simplify the description and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation. Such terms are therefore illustrative only and are not to be understood as limiting the invention; their specific meaning can be understood by those of ordinary skill in the art according to the specific situation.
Referring to Figs. 1 to 6, an adaptive monocular VIO initialization method based on point-line features is described.
Fig. 1 is a flowchart of the adaptive monocular VIO initialization algorithm based on point-line features provided by an embodiment of the present invention. As shown in the figure, the algorithm provided by the embodiment includes the following.
First, point features are detected by the Shi-Tomasi corner algorithm, which detects corners from gradient changes and is an improvement of the Harris corner detector. Line features use the LSD line detection algorithm, whose core idea is to merge pixels with similar gradient directions to quickly detect straight line segments in the image. IMU pre-integration integrates all IMU measurements between frame k and frame k+1 of the image to obtain the PVQ values (position, velocity and rotation) at frame k+1, providing initial values for vision and serving as constraint terms in the back-end optimization.
Then, according to whether the point features satisfy the initialization conditions, the initial camera pose is estimated in two cases:
Case 1: the point features satisfy the parallax condition and the quantity requirement. From the epipolar geometry, the relation between corresponding points is:

x2^T t^ R x1 = 0

where x1 = (u1, v1, 1)^T and x2 = (u2, v2, 1)^T are the coordinates of the corresponding pixels on the normalized plane, and R and t are the camera motion between the two frames, representing rotation and translation respectively. The middle part is denoted the essential matrix E, written E = t^R, a 3x3 matrix with 5 degrees of freedom.
The essential matrix is solved by the eight-point method. From the epipolar geometry:

x2^T E x1 = 0

To solve for E, eight pairs of matched points are required to form eight equations; the essential matrix E is solved by singular value decomposition (SVD), and the solution with positive depths is taken as the final estimate.
Case 2: the point features do not satisfy the initialization conditions. Line features are introduced, matching pairs are screened by computing weak-constraint scores, and the initial camera pose is solved from the point-line distance constraint. The weak constraints comprise a descriptor constraint and an epipolar constraint, with scores s_d and s_e computed respectively; the procedure is shown in Fig. 2 and runs as follows:
LSD line segments use the LBD descriptor: pixel-gradient statistics are accumulated and the mean vector and standard deviation of the statistics are taken as the descriptor. For the descriptor constraint, mainly to reject mismatches with large appearance differences, the Hamming distance between the reference-frame descriptor desc1 and the current-frame descriptor desc2 is computed; if it is below the threshold tau_desc, the descriptor score s_d is set to 1, otherwise to 0:
For the epipolar constraint: line features obey no strict epipolar constraint, so it is used as a weak constraint term to enhance reliability. As shown in Fig. 3, which depicts the epipolar constraint on the two endpoints of a segment, the epipolar lines l1 and l2 of the two endpoints of the reference-frame line feature are first computed; the line containing the corresponding line feature AB in the current frame intersects these epipolar lines at point C and point D. The constraint score s_e is defined in terms of d_min and d_max, where d_min denotes the minimum and d_max the maximum Euclidean distance among the four collinear points.
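The construction of C and D in Fig. 3 can be sketched with homogeneous coordinates (a hypothetical illustration with hand-picked lines): the line through two image points is their cross product, and the intersection of two lines is again their cross product, so C and D follow directly from l1, l2 and the observed segment AB.

```python
import numpy as np

def hom(p):
    """Lift a 2D point to homogeneous coordinates."""
    return np.array([p[0], p[1], 1.0])

def intersect(l1, l2):
    """Intersection point of two homogeneous 2D lines."""
    x = np.cross(l1, l2)
    return x[:2] / x[2]

# Observed segment AB in the current frame and the epipolar lines l1, l2
# of the reference endpoints (chosen by hand here for illustration).
A, B = np.array([0.0, 0.0]), np.array([4.0, 0.0])
line_AB = np.cross(hom(A), hom(B))        # the line y = 0
l1 = np.array([1.0, 0.0, -1.0])           # the line x = 1
l2 = np.array([1.0, 0.0, -3.0])           # the line x = 3
C, D = intersect(line_AB, l1), intersect(line_AB, l2)
# A, B, C, D are collinear on line_AB; their pairwise spread supplies the
# d_min and d_max that feed the weak epipolar score.
pts = np.stack([A, B, C, D])
dists = [np.linalg.norm(p - q) for i, p in enumerate(pts) for q in pts[i + 1:]]
d_min, d_max = min(dists), max(dists)
```

When a matched segment is consistent with the epipolar geometry, C and D fall near the segment endpoints and the four-point spread stays tight; a bad match scatters them along the line.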
Finally, for each pair of matched lines the score s = s_d · s_e is computed; if s exceeds a threshold, the matching pair is considered usable for initialization and the closed-form solution is carried out. The closed-form solution proceeds as follows: the projections of the endpoints of a 3D line feature should in theory fall on the line observed by the camera, so the coefficients of the normalized line feature can be obtained:
Denoting the inverse depths of the line-feature endpoints by rho_ks and rho_ke, the reprojection of the 3D line endpoints can be expressed in normalized form as:
where π(·) is the reprojection function, π(x, y, z)T = (x/z, y/z, 1)T, and Ri is the rotation matrix under the small-rotation assumption, i.e. the rotation between consecutive image frames is assumed to be small. Writing the camera rotation and translation vectors as r = (r1, r2, r3)T and t = (t1, t2, t3)T, the rotation matrix can be approximated by its first-order Taylor expansion:
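The small-rotation approximation above can be illustrated numerically. This sketch (names are illustrative) compares the first-order expansion R ≈ I + [r]× against the exact Rodrigues rotation for a typical inter-frame rotation:

```python
import numpy as np

def skew(r):
    """Skew-symmetric matrix [r]_x such that [r]_x @ v = r x v."""
    return np.array([[0.0, -r[2], r[1]],
                     [r[2], 0.0, -r[0]],
                     [-r[1], r[0], 0.0]])

def rot_first_order(r):
    """First-order Taylor approximation R ~= I + [r]_x, valid under the
    small-rotation assumption between consecutive frames."""
    return np.eye(3) + skew(r)

def rot_exact(r):
    """Exact rotation via Rodrigues' formula, for comparison."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    K = skew(r / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

r = np.array([0.01, -0.02, 0.015])   # roughly 1.5 degrees of inter-frame rotation
err = np.abs(rot_first_order(r) - rot_exact(r)).max()
print(err)   # second-order in theta: roughly 3e-4 here
```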
Since the projected point lies on the observed line, the distance between the two is zero. Taking the start point as an example, the constraint can be expressed as:
Under the small-rotation assumption, the term ρks·t1 is negligible, so the expression simplifies to:
Ar1 + Br2 + Cr3 + D = 0
where:
The other endpoint yields the same form of constraint, so each pair of matched lines provides two equations. Given multiple matched pairs, the following linear system is solved in closed form, the unique solution being obtained by SVD:
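The closed-form SVD solve described above can be sketched as follows; the synthetic coefficients are illustrative, standing in for the A, B, C, D terms derived from real line matches:

```python
import numpy as np

def solve_rotation(coeffs):
    """Closed-form solve of the stacked constraints A*r1 + B*r2 + C*r3 + D = 0.
    coeffs is an (N, 4) array with one row [A, B, C, D] per endpoint equation
    (two rows per matched line pair). The null vector of the stacked system is
    taken from the SVD and de-homogenized to recover r = (r1, r2, r3)."""
    M = np.asarray(coeffs, dtype=float)
    _, _, Vt = np.linalg.svd(M)
    x = Vt[-1]            # right-singular vector of the smallest singular value
    return x[:3] / x[3]   # scale so the last component is 1

# Synthetic check: generate constraints from a known small rotation vector.
rng = np.random.default_rng(1)
r_true = np.array([0.01, -0.02, 0.005])
ABC = rng.standard_normal((8, 3))
D = -(ABC @ r_true)                        # ensures A*r1 + B*r2 + C*r3 + D = 0
r_est = solve_rotation(np.hstack([ABC, D[:, None]]))
print(np.allclose(r_est, r_true))          # True
```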
For the inertial estimation, a maximum a posteriori (MAP) estimation problem is constructed and the IMU-related parameters are optimized, yielding the scale factor, velocities, gravity direction, and the IMU gyroscope and accelerometer biases. The inertial parameters to be estimated are:
where s is the scale factor, Rwg is the gravity direction, the vector b comprises the IMU accelerometer bias ba and gyroscope bias bg, and the remaining variables are the unscaled velocities of frames 0 through k. From IMU pre-integration theory, a MAP problem with a prior can be established:
where the first factor is the likelihood, the second is the prior, and the remaining term denotes the set of IMU pre-integrations between consecutive keyframes within the initialization window. Assuming the IMU measurements are mutually independent, the MAP problem can be written as:
Assuming the IMU pre-integration errors and the prior distribution are Gaussian, the final optimization problem is obtained:
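Under the Gaussian assumption, maximizing the posterior is equivalent to minimizing a sum of covariance-weighted squared residuals. A minimal sketch of that scalar objective (function names are illustrative):

```python
import numpy as np

def mahalanobis_sq(r, cov):
    """Squared Mahalanobis norm r^T * cov^-1 * r."""
    r = np.atleast_1d(np.asarray(r, dtype=float))
    return float(r @ np.linalg.solve(np.atleast_2d(cov), r))

def map_cost(r_prior, cov_prior, residuals, covs):
    """Negative log-posterior (up to a constant) under the Gaussian assumption:
    the prior residual plus all IMU pre-integration residuals, each weighted by
    its covariance. This is the scalar objective the inertial MAP step minimizes."""
    total = mahalanobis_sq(r_prior, cov_prior)
    for r, cov in zip(residuals, covs):
        total += mahalanobis_sq(r, cov)
    return 0.5 * total

# Tiny numeric check: prior residual 2 with variance 4 contributes 1.0;
# one pre-integration residual (1, 1) with unit covariance contributes 2.0.
cost = map_cost([2.0], [[4.0]], [np.array([1.0, 1.0])], [np.eye(2)])
print(cost)   # 1.5
```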
where rp is the prior residual and the remaining terms are the IMU pre-integration residuals. During the optimization, the gravity direction and the scale factor are updated according to:
snew = sold·exp(δs)
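A small sketch of the multiplicative scale update above: the exponential form guarantees the scale stays strictly positive no matter what step δs the optimizer takes, unlike an additive update.

```python
import math

def update_scale(s_old: float, delta_s: float) -> float:
    """Multiplicative scale update s_new = s_old * exp(delta_s). Because exp is
    always positive, the scale can never be driven to zero or below by an
    optimization step, unlike the additive update s_old + delta_s."""
    return s_old * math.exp(delta_s)

s = 1.0
for step in (-0.5, -0.5, -0.5, -0.5):   # repeated large negative steps
    s = update_scale(s, step)
print(s > 0.0)   # True -- still positive after aggressive shrinking
```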
This approach accounts for the uncertainty of the IMU by casting the inertial-parameter estimation as an optimal estimation problem; it does not require the accelerometer bias to be neglected, and it injects the known information into the MAP problem as a prior. All inertial parameters can be estimated in a single pass, avoiding data-inconsistency problems.
Once the inertial parameters have been optimized, the scale estimate required by monocular vision is available. The camera poses, velocities, and 3D map points are rescaled accordingly and aligned with the gravity direction, the poses are transformed into the world coordinate frame, and the IMU pre-integrations are recomputed and updated. At this point the visual and inertial parameters have each been estimated; a final BA (bundle adjustment) then refines them to the optimal solution.
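The rescaling-and-alignment step can be sketched as follows, assuming the estimated Rwg rotates from the gravity-aligned frame to the world frame; the exact convention is an assumption here and varies between implementations:

```python
import numpy as np

def align_to_gravity(s, R_wg, cam_positions, velocities, map_points):
    """Rescale the monocular quantities by the estimated scale s and rotate them
    so that gravity aligns with the world z-axis. Using R_wg transposed (i.e.
    which direction the estimated rotation maps) is an assumed convention."""
    R = np.asarray(R_wg).T
    rescale = lambda X: (s * np.asarray(X, dtype=float)) @ R.T  # rotate each row vector
    return rescale(cam_positions), rescale(velocities), rescale(map_points)

pos, vel, pts = align_to_gravity(2.0, np.eye(3),
                                 [[1., 2., 3.]], [[0., 0., 1.]], [[1., 0., 0.]])
print(pos)   # [[2. 4. 6.]]
```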
FIG. 4 is a schematic diagram of feature extraction in a weakly textured environment according to an embodiment of the invention. As the figure shows, when the environmental texture is indistinct, point features are hard to extract; introducing line features as structural information solves this problem and strengthens the robustness of the VIO.
Experiments were conducted on the mainstream EuRoC dataset, which was collected by a micro aerial vehicle (MAV) recording image and IMU data in industrial environments. It contains 11 sequences, graded as easy, medium, or difficult according to illumination, texture, and motion speed, and is well suited to evaluating the performance of the invention.
FIG. 5 compares the trajectories produced by the VIO method of the invention and by a traditional VIO method against the ground-truth trajectory: (a) the V2_01_easy sequence, which has insufficient parallax and little translation in its initial stage; (b) the MH_05_difficult sequence, which is almost stationary at the start and spends a long period in an unlit, low-texture environment. The trajectory of the proposed method lies closer to the ground truth, confirming its superior accuracy.
FIG. 6 compares the root-mean-square error (RMSE) of the proposed and traditional VIO methods: (a) the RMSE over the V2_01_easy sequence; (b) the RMSE over the MH_05_difficult sequence. The RMSE of the proposed method is consistently lower and fluctuates less, confirming its better stability.
Table 1 reports the translation and rotation errors, both as RMSE, of the proposed VIO method and the traditional VIO algorithm on the EuRoC dataset. The data in Table 1 show that the proposed method achieves better results.
The monocular VIO initialization algorithm based on point and line features proposed by the invention effectively addresses the low accuracy, poor robustness, and limited applicability of traditional methods, achieving stable and accurate initialization in varied complex environments and from varied initial states. The introduction of line features lets the initialization adapt its pure-visual estimation to environmental changes, improving reliability, and the sensor uncertainty and inertial-parameter inconsistency of the VIO initialization process are handled well. Simulation results show that the invention improves the real-time performance, accuracy, and stability of initialization. Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solution of the invention. Although the invention has been described in detail with reference to preferred embodiments, those of ordinary skill in the art should understand that modifications or equivalent substitutions may be made to the technical solution without departing from its spirit and scope, and all such changes fall within the scope of the claims of the invention.
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN112862768A | 2021-05-28 |
| CN112862768B | 2022-08-02 |