
CN103247075A - Variational mechanism-based indoor scene three-dimensional reconstruction method


Info

Publication number: CN103247075A
Application number: CN201310173608.XA
Authority: CN (China)
Prior art keywords: camera, current, formula, point, model
Legal status: Granted; Expired - Fee Related
Other languages: Chinese (zh)
Other versions: CN103247075B (en)
Inventors: 贾松敏, 王可, 李雨晨, 李秀智
Current Assignee: Beijing University of Technology
Original Assignee: Beijing University of Technology
Application filed by Beijing University of Technology; priority to CN201310173608.XA (CN103247075B/en)
Publication of CN103247075A; application granted; publication of CN103247075B

Landscapes

  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The invention lies at the intersection of computer vision and intelligent robotics and discloses a method for reconstructing large-scale indoor scenes based on a variational mechanism, comprising: step 1, obtaining the calibration parameters of the camera and establishing a distortion correction model; step 2, establishing the camera pose description and the camera projection model; step 3, estimating the camera pose with an SFM-based monocular SLAM algorithm; step 4, establishing a depth-map estimation model based on a variational mechanism and solving it; step 5, establishing a keyframe selection mechanism to update the three-dimensional scene. The invention acquires environment data with an RGB camera and, building on a high-precision monocular localization algorithm, proposes a depth-map generation method based on a variational mechanism, achieving fast, large-scale indoor 3D scene reconstruction and effectively addressing the cost and real-time constraints of 3D reconstruction algorithms.

Description

3D Reconstruction Method for Indoor Environments Based on a Variational Mechanism

Technical Field

The invention lies at the intersection of computer vision and intelligent robotics. It relates to three-dimensional reconstruction of indoor environments, and in particular to a method for reconstructing large-scale indoor scenes based on a variational mechanism.

Background

As research on Simultaneous Localization and Mapping (SLAM) has deepened, three-dimensional environment modeling has gradually become a research hotspot in the field, attracting the attention of many scholars. In 2007, G. Klein et al. first proposed Parallel Tracking and Mapping (PTAM) in the augmented reality (AR) field to address real-time environment modeling. PTAM splits camera tracking and map building into two independent threads: while updating detected feature points with the FAST corner method, it applies optimal local and global bundle adjustment (BA) to continually refine the camera pose and the 3D feature-point map. The method builds a 3D map of the environment from a sparse point cloud, but such a map lacks an intuitive three-dimensional description of the environment. Pollefeys et al. achieved 3D reconstruction of large outdoor scenes through multi-sensor fusion, but their method suffers from high computational complexity and sensitivity to noise. Some tentative progress has also been made in real-time tracking and dense environment-model reconstruction, but it remains limited to simple objects and achieves high accuracy only under specific constraints. Richard A. Newcombe et al. used an SFM (Structure from Motion)-based SLAM algorithm to obtain a sparse spatial feature point cloud, applied multi-scale radial-basis interpolation and the implicit-surface polygonization method from computer graphics to construct an initial 3D mesh map, and updated the mesh vertex coordinates by combining scene-flow constraints with a high-precision TV-L1 optical-flow algorithm to approximate the real scene. That algorithm yields a high-precision environment model, but its complexity is such that processing a single frame still takes several seconds even with two graphics processors (GPUs).

Summary of the Invention

To address the above problems in the prior art, the invention provides a fast 3D reconstruction method based on a variational mechanism to achieve 3D modeling in complex indoor environments. The method reduces the amount of data to be processed while preserving environment information, enables fast large-scale indoor 3D scene reconstruction, effectively addresses the cost and real-time constraints of 3D reconstruction algorithms, and improves reconstruction accuracy.

The technical scheme adopted by the invention is as follows:

The PTAM algorithm serves as the camera pose estimator. At each keyframe an appropriate image sequence is selected to construct a variational depth-map estimation energy function, which is optimized with a primal-dual algorithm to obtain the environment depth map at the current keyframe. Because the algorithm constructs the energy function from neighboring-frame information and effectively exploits both the relationships between view-specific coordinate systems and the camera's perspective-projection transform, the data term embeds multi-view imaging constraints, lowering the computational complexity of solving the model. Under a unified computing architecture, the invention parallelizes the algorithm on graphics-acceleration hardware, effectively improving its real-time performance.

A method for three-dimensional reconstruction of an indoor environment based on a variational mechanism, characterized by comprising the following steps:

Step 1: obtain the calibration parameters of the camera and establish a distortion correction model.

In computer vision applications, the geometric model of camera imaging establishes the mapping between pixels in the image and three-dimensional points in space. The geometric parameters of the camera model must be obtained through experiment and computation; the process of solving for them is called camera calibration. Calibration is a critical step in the invention, as the accuracy of the calibration parameters directly affects the accuracy of the final 3D map.

The specific camera-calibration procedure is:

(1) Print a chessboard template. The invention uses a sheet of A4 paper with a chessboard spacing of 0.25 cm.

(2) Photograph the chessboard from multiple angles. When shooting, the chessboard should fill the screen as much as possible, every corner of the chessboard should remain within the frame, and six template pictures are taken in total.

(3) Detect the feature points in the images, i.e., every black crossing point of the chessboard.

(4) Solve for the camera's intrinsic parameters as follows:

The RGB camera calibration parameters are mainly the camera intrinsics. The intrinsic matrix K of the camera is:

$$K = \begin{bmatrix} f_u & 0 & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

where u and v are the camera image-plane coordinate axes, (u_0, v_0) is the center of the camera image plane, and (f_u, f_v) are the camera focal lengths.

According to the calibration parameters, the mapping between a point in the RGB image and a point in 3D space is as follows: the coordinates P_3D = (x, y, z) of image point p = (u, v) in the camera coordinate system are:

$$\begin{cases} x = (u - u_0)\, z / f_u \\ y = (v - v_0)\, z / f_v \\ z = d \end{cases}$$

where d is the depth value of point p in the depth image.

The camera coordinate system used in the invention is shown in Fig. 2: the positive y-axis points down, the positive z-axis points forward, and the positive x-axis points right. The starting position of the camera is set as the origin of the world coordinate system, whose X, Y, and Z directions coincide with the camera's.
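To make the mapping concrete, the following minimal numpy sketch (not part of the patent; the intrinsic values are hypothetical, typical of a 640 x 480 camera) implements the pixel-to-3D back-projection above and its inverse:

```python
import numpy as np

def backproject(u, v, d, fu, fv, u0, v0):
    """Map image point (u, v) with depth d to camera-frame coordinates:
    x = (u - u0) z / fu, y = (v - v0) z / fv, z = d."""
    z = d
    return np.array([(u - u0) * z / fu, (v - v0) * z / fv, z])

def project(p, fu, fv, u0, v0):
    """Inverse mapping: camera-frame point p = (x, y, z) back to pixel (u, v)."""
    x, y, z = p
    return np.array([fu * x / z + u0, fv * y / z + v0])

# Hypothetical intrinsics for a 640 x 480 camera.
fu, fv, u0, v0 = 525.0, 525.0, 319.5, 239.5
p3d = backproject(400.0, 300.0, 2.0, fu, fv, u0, v0)
assert np.allclose(project(p3d, fu, fv, u0, v0), [400.0, 300.0])
```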

The FOV (Field of View) camera correction model is:

$$u_d = \begin{bmatrix} u_0 \\ v_0 \end{bmatrix} + \begin{bmatrix} f_u & 0 \\ 0 & f_v \end{bmatrix} \frac{r_d}{r_u}\, x_u$$

$$r_d = \frac{1}{\omega}\arctan\!\left(2\, r_u \tan\frac{\omega}{2}\right)$$

$$r_u = \frac{\tan(r_d\, \omega)}{2\tan\frac{\omega}{2}}$$

where x_u is the pixel coordinate on the z = 1 plane, u_d is the pixel coordinate in the original image, and ω is the FOV camera distortion coefficient.
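A minimal sketch of the FOV model follows, assuming a hypothetical distortion coefficient ω; it maps a normalized point on the z = 1 plane to raw pixel coordinates and checks that the two radius formulas invert each other:

```python
import numpy as np

def distort_radius(r_u, omega):
    """r_d = (1 / omega) * arctan(2 r_u tan(omega / 2))."""
    return np.arctan(2.0 * r_u * np.tan(omega / 2.0)) / omega

def undistort_radius(r_d, omega):
    """r_u = tan(r_d omega) / (2 tan(omega / 2))."""
    return np.tan(r_d * omega) / (2.0 * np.tan(omega / 2.0))

def distort_point(x_u, omega, fu, fv, u0, v0):
    """Map a point x_u on the z = 1 plane to raw pixel coordinates u_d."""
    r_u = float(np.linalg.norm(x_u))
    scale = distort_radius(r_u, omega) / r_u if r_u > 1e-12 else 1.0
    return np.array([u0, v0]) + np.array([fu, fv]) * scale * np.asarray(x_u)

omega = 0.9  # hypothetical FOV distortion coefficient
r = 0.5
assert abs(undistort_radius(distort_radius(r, omega), omega) - r) < 1e-9
```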

Step 2: establish the camera pose description and the camera projection model.

In the established world coordinate system, the camera pose can be expressed as the following matrix:

$$T_{cw} = \begin{bmatrix} R_{cw} & t_{cw} \\ 0 & 1 \end{bmatrix}$$

where the subscript "cw" denotes the transform from the world coordinate system to the current camera coordinate system, T_cw ∈ SE(3), the rotation-translation transform space of a rigid body. T_cw can be represented by the six-tuple μ = (μ_1, μ_2, μ_3, μ_4, μ_5, μ_6), namely:

$$T_{cw} = \exp(\hat{\mu})$$

$$\hat{\mu} = \begin{bmatrix} 0 & \mu_6 & -\mu_5 & \mu_1 \\ -\mu_6 & 0 & \mu_4 & \mu_2 \\ \mu_5 & -\mu_4 & 0 & \mu_3 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

where μ_1, μ_2, μ_3 are the translation components of the camera in the global coordinate system, and μ_4, μ_5, μ_6 are the rotation components about the axes of the local coordinate system.

The camera pose T_cw establishes the transform between the world coordinates p_w of a spatial point-cloud point and its coordinates p_c in the current coordinate system, namely:

$$p_c = T_{cw}\, p_w$$

In the current coordinate system, the projection of a 3D point onto the z = 1 plane is defined as:

$$\pi(p) = (x/z,\; y/z)^T$$

where p ∈ R³ is a point in 3D space and x, y, z are its coordinates. Given the depth value d at the current image coordinate, back-projection determines the 3D point p in the current space; the relation can be expressed as:

$$\pi^{-1}(u, d) = d\, K^{-1} u$$
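The pose parameterization and the two projection operators can be sketched as follows, assuming exp(μ̂) is evaluated with a general matrix exponential (scipy.linalg.expm); the intrinsic values are again hypothetical:

```python
import numpy as np
from scipy.linalg import expm

def hat(mu):
    """4x4 twist matrix of mu = (mu1..mu6): translation first, rotation second."""
    t, w = mu[:3], mu[3:]
    return np.array([[0.0,   w[2], -w[1], t[0]],
                     [-w[2], 0.0,   w[0], t[1]],
                     [w[1], -w[0],  0.0,  t[2]],
                     [0.0,   0.0,   0.0,  0.0]])

def pose_from_twist(mu):
    """T_cw = exp(mu_hat)."""
    return expm(hat(mu))

def pi(p):
    """Projection of a 3D point onto the z = 1 plane: (x/z, y/z)."""
    return p[:2] / p[2]

def pi_inv(u_hom, d, K):
    """Back-projection: pi^-1(u, d) = d K^-1 u, with u homogeneous."""
    return d * np.linalg.inv(K) @ u_hom

K = np.array([[525.0, 0.0, 319.5],
              [0.0, 525.0, 239.5],
              [0.0,   0.0,   1.0]])          # hypothetical intrinsics
T_cw = pose_from_twist(np.array([0.1, 0.0, 0.0, 0.0, 0.02, 0.0]))
p_w = np.array([0.5, 0.2, 2.0, 1.0])          # homogeneous world point
p_c = T_cw @ p_w                              # p_c = T_cw p_w
```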

Step 3: estimate the camera pose with the SFM-based monocular SLAM algorithm.

Current monocular visual SLAM algorithms are mainly either filtering-based or SFM (Structure from Motion)-based. The invention adopts the PTAM algorithm to localize the camera. PTAM is an SFM-based monocular visual SLAM method that splits the system into two independent threads, camera tracking and map building. In the camera-tracking thread, the system acquires the current environment texture with the camera, builds a four-level Gaussian image pyramid, extracts feature information from the current image with the FAST-10 corner detection algorithm, and establishes data association between corner features by block matching. On this basis, a pose-estimation model built from the current projection error localizes the camera precisely, and the current 3D point-cloud map is generated by combining the feature matches with a triangulation algorithm. The specific camera pose estimation process is:

(1) Sparse-map initialization

The PTAM algorithm builds the initial map of the current environment with a standard stereo-camera algorithm model and, on this basis, keeps updating the 3D map as new keyframes are added. During map initialization, two independent keyframes are selected manually; using the FAST corner matches between them, the essential matrix F between the keyframes is estimated with a five-point algorithm based on Random Sample Consensus (RANSAC), and the 3D coordinates of the current feature points are computed. At the same time, the RANSAC algorithm selects suitable spatial points to establish the current consistent plane, which fixes the global world coordinate system and completes map initialization.

(2) Camera pose estimation

The system acquires the current environment texture with the camera, builds a four-level Gaussian image pyramid, extracts feature information from the current image with the FAST-10 corner detection algorithm, and establishes data association between corner features by block matching. On this basis, a pose-estimation model is built from the current projection error; its mathematical description is:

$$\xi = \arg\min_{\xi} \sum_j \mathrm{Obj}\!\left(\frac{|e_j|}{\sigma_j},\, \sigma_T\right)$$

$$e_j = \begin{bmatrix} u_i \\ v_i \end{bmatrix} - K\,\pi\!\left(\exp(\hat{\xi})\, p\right)$$

where e_j is the projection error, ΣObj(·, σ_T) is the Tukey biweight objective function, σ_T is the unbiased estimate of the standard deviation of the feature-point matches, ξ is the six-tuple representation of the current pose, and ξ̂ is the antisymmetric matrix formed from ξ.

According to the above pose-estimation model, 50 matched feature points at the top level of the image pyramid are selected to obtain an initial pose estimate of the camera. The algorithm then combines the camera's initial pose with an epipolar search to establish sub-pixel-accurate corner matches across the image pyramid, and feeds these matched pairs back into the pose-estimation model to relocalize the camera precisely.
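A sketch of this robust pose refinement is given below. It assumes the common Tukey biweight form for the objective's down-weighting and uses finite-difference Jacobians for brevity; the patent's exact weighting and the pyramid-based point selection are not reproduced:

```python
import numpy as np
from scipy.linalg import expm

def hat(xi):
    """4x4 twist matrix of the six-tuple xi (translation first, rotation second)."""
    t, w = xi[:3], xi[3:]
    return np.array([[0.0,   w[2], -w[1], t[0]],
                     [-w[2], 0.0,   w[0], t[1]],
                     [w[1], -w[0],  0.0,  t[2]],
                     [0.0,   0.0,   0.0,  0.0]])

def tukey_weight(r, sigma_t):
    """Tukey biweight: down-weights large residuals, zero beyond sigma_t."""
    a = abs(r) / sigma_t
    return (1.0 - a ** 2) ** 2 if a < 1.0 else 0.0

def reproj_error(xi, p_w, u_obs, K):
    """e_j = u_j - K pi(exp(xi_hat) p_j), with pi the z = 1 projection."""
    q = expm(hat(xi)) @ np.append(p_w, 1.0)
    proj = K @ (q[:3] / q[2])
    return u_obs - proj[:2]

def refine_pose(xi, points, pixels, K, sigma_t, iters=10, h=1e-6):
    """Iteratively reweighted Gauss-Newton on the robust reprojection error."""
    for _ in range(iters):
        rows, rhs = [], []
        for p, u in zip(points, pixels):
            e = reproj_error(xi, p, u, K)
            J = np.stack([(reproj_error(xi + h * np.eye(6)[k], p, u, K) - e) / h
                          for k in range(6)], axis=1)
            w = tukey_weight(np.linalg.norm(e), sigma_t)
            rows.append(w * J)
            rhs.append(-w * e)
        delta, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
        xi = xi + delta
    return xi
```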

(3) Camera pose optimization

After the system is initialized, the map-building thread waits for new keyframes to enter. When the number of image frames between the camera and the current keyframe exceeds the threshold condition and the camera tracking quality is at its best, the keyframe-adding process runs automatically. The system then performs Shi-Tomasi evaluation of all FAST corners in the newly added keyframe to obtain the corners with the most salient features, selects the nearest keyframe and establishes feature-point correspondences with epipolar search and block matching, relocalizes the camera precisely with the pose-estimation model, and projects the matched points into space to generate the current global 3D environment map.

To maintain the global map while the map-building thread waits for new keyframes, the system runs local and global Levenberg-Marquardt bundle adjustment to enforce the consistency of the current map. The bundle adjustment is described mathematically as:

$$\left\{\{\xi_1 \ldots \xi_N\},\, \{p_1 \ldots p_M\}\right\} = \arg\min_{\{\{\xi\},\{p\}\}} \sum_{i=1}^{N} \sum_{j \in S_i} \mathrm{Obj}\!\left(\frac{|e_{ji}|}{\sigma_{ji}},\, \sigma_T\right)$$

where σ_ji is the unbiased estimate of the matching standard deviation of FAST feature point j in the i-th keyframe, ξ_i is the six-tuple representation of the pose of the i-th keyframe, and p_i are the points of the global map.

Step 4: establish the depth-map estimation model based on the variational mechanism and solve it.

Given PTAM's accurate pose estimates, the invention builds a depth-solving model with a variational mechanism, based on multi-view reconstruction. Under the assumptions of brightness constancy and depth-map smoothness, the method establishes an L1 data penalty term and a variational regularization term: the data penalty term is built on the brightness-constancy assumption, while the regularization term guarantees the smoothness of the current depth map. The mathematical model is:

$$E_d = \int_{\Omega} \left(E_{data} + \lambda\, E_{reg}\right) dx$$

where λ is the weight coefficient between the data penalty term E_data and the variational regularization term E_reg, and Ω is the domain of the depth map.

The current keyframe is chosen as the reference frame I_r of the depth-map estimation algorithm; using its neighboring image sequence I = {I_1, I_2, ..., I_n} together with the projection model, the data penalty term E_data is established. Its mathematical description is:

$$E_{data} = \frac{1}{|I(r)|} \sum_{I_i \in I} \left| I_r(x) - I_i(x') \right|$$

where |I(r)| is the number of frames in the current image sequence that share overlapping information with the reference frame, and x' is the projection into I_i of the reference-frame pixel x at the current depth d, namely:

$$x' = \pi^{-1}\!\left(K\, T_r^i\, \pi(x, d)\right)$$
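The data term can be sketched per pixel as follows, assuming a caller-supplied warp function implementing the projection x' above and nearest-neighbor image sampling (the patent's GPU implementation and interpolation details are omitted):

```python
import numpy as np

def photometric_cost(ref_img, imgs, warps, x, depth):
    """E_data at pixel x and candidate depth: mean absolute intensity difference
    between the reference frame I_r and each overlapping frame I_i, sampled at
    the warped coordinate x' = warp(x, depth)."""
    costs = []
    for img, warp in zip(imgs, warps):
        xp = warp(x, depth)  # x' in frame I_i for this depth hypothesis
        u, v = int(round(xp[0])), int(round(xp[1]))
        if 0 <= u < img.shape[1] and 0 <= v < img.shape[0]:
            costs.append(abs(float(ref_img[x[1], x[0]]) - float(img[v, u])))
    # |I(r)| is the number of frames with overlap; no overlap means no evidence
    return float(np.mean(costs)) if costs else float("inf")
```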

Under the depth-map smoothness assumption, and in order to preserve discontinuities at boundaries in the image, a weighted Huber operator is introduced to construct the variational regularization term; its mathematical description is:

$$E_{reg} = g(u)\, \lVert \nabla d(u) \rVert_{\alpha}$$

where ∇d is the gradient of the depth map and g(u) is the pixel gradient weight coefficient, with g(u) = exp(-a ||∇I_r(u)||).

The mathematical description of the Huber operator ||x||_α is:

$$\lVert x \rVert_{\alpha} = \begin{cases} \dfrac{\lVert x \rVert^2}{2\alpha}, & \lVert x \rVert \le \alpha \\[4pt] \lVert x \rVert - \dfrac{\alpha}{2}, & \text{otherwise} \end{cases}$$

where α is a constant.

According to the Legendre-Fenchel transform, the energy function can be expressed as:

$$g\, \lVert \nabla d \rVert_{\alpha} = \langle g\, \nabla d,\, q \rangle - \delta(q) - \frac{\alpha}{2} \lVert q \rVert^2$$

where

$$\delta(q) = \begin{cases} \dfrac{\alpha}{2}, & \lVert q \rVert \le 1 \\ \infty, & \text{otherwise} \end{cases}$$

The introduction of the Huber operator provides a smoothness guarantee for the 3D reconstruction process while also preserving discontinuous boundaries in the depth map, improving the quality of 3D map creation.

Because the above mathematical model is complex and expensive to solve directly, an auxiliary variable is introduced to build a convex optimization model, which is optimized by an alternating descent method. The specific process is as follows:

(1) With h fixed, solve:

$$\arg\max_{q} \left\{ \arg\min_{d} E_{d,q} \right\}$$

$$E_{d,q} = \int_{\Omega} \left( \langle g\, \nabla d,\, q \rangle + \frac{1}{2\theta}(d - h)^2 - \delta(q) - \frac{\alpha}{2} \lVert q \rVert^2 \right) dx$$

where θ is the constant coefficient of the quadratic term and g is the gradient weight coefficient from the variational regularization term.

According to the Lagrangian extremum method, the conditions for the above energy function to reach an extremum are:

$$\frac{\partial E_{d,q}}{\partial q} = g\, \nabla d - \alpha\, q = 0$$

$$\frac{\partial E_{d,q}}{\partial d} = g\, \mathrm{div}\, q + \frac{1}{\theta}(d - h) = 0$$

where div q is the divergence of q.

Discretizing the partial derivatives, the above extremum conditions can be expressed as:

$$\frac{q^{n+1} - q^{n}}{\varepsilon_q} = g\, \nabla d - \alpha\, q^{n+1}$$

$$\frac{d^{n+1} - d^{n}}{\varepsilon_d} = g\, \mathrm{div}\, q + \frac{1}{\theta}\left(d^{n+1} - h\right)$$

The primal-dual algorithm can then realize the iterative optimization of the energy function, namely:

$$q^{n+1} = \frac{\left(q^{n} + \varepsilon_q\, g\, \nabla d^{n}\right) / \left(1 + \varepsilon_q\, \alpha\right)}{\max\!\left(1,\, \left\lVert \left(q^{n} + \varepsilon_q\, g\, \nabla d^{n}\right) / \left(1 + \varepsilon_q\, \alpha\right) \right\rVert\right)}$$

$$d^{n+1} = \frac{d^{n} + \varepsilon_d \left(g\, \mathrm{div}\, q^{n+1} + h^{n} / \theta\right)}{1 + \varepsilon_d / \theta}$$

where ε_q and ε_d are constants, the step coefficients of the maximization and minimization gradient updates respectively.
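A minimal numpy sketch of one primal-dual iteration follows, assuming forward-difference gradients with the matching backward-difference divergence; in the patent the equivalent updates run in parallel on the GPU:

```python
import numpy as np

def gradient(d):
    """Forward-difference gradient of the depth map d (zero at the far border)."""
    gx = np.zeros_like(d)
    gy = np.zeros_like(d)
    gx[:, :-1] = d[:, 1:] - d[:, :-1]
    gy[:-1, :] = d[1:, :] - d[:-1, :]
    return gx, gy

def divergence(qx, qy):
    """Backward-difference divergence paired with the gradient above."""
    div = np.zeros_like(qx)
    div[:, 0] = qx[:, 0]
    div[:, 1:] += qx[:, 1:] - qx[:, :-1]
    div[0, :] += qy[0, :]
    div[1:, :] += qy[1:, :] - qy[:-1, :]
    return div

def primal_dual_step(d, qx, qy, h, g, alpha, theta, eps_q, eps_d):
    """One dual ascent step in q and one primal descent step in d."""
    gx, gy = gradient(d)
    qx = (qx + eps_q * g * gx) / (1.0 + eps_q * alpha)
    qy = (qy + eps_q * g * gy) / (1.0 + eps_q * alpha)
    norm = np.maximum(1.0, np.sqrt(qx ** 2 + qy ** 2))  # reproject onto ||q|| <= 1
    qx, qy = qx / norm, qy / norm
    d = (d + eps_d * (g * divergence(qx, qy) + h / theta)) / (1.0 + eps_d / theta)
    return d, qx, qy
```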

(2) With d fixed, solve:

$$\arg\min_{h} E_h$$

$$E_h = \int_{\Omega} \left( \frac{\theta}{2}(d - h)^2 + \frac{\lambda}{|I(r)|} \sum_{i=0}^{n} \left| I_i(x) - I_{ref}(x, h) \right| \right) dx$$

In solving the above energy function, in order to effectively reduce the algorithm's complexity while preserving some of the detail in the reconstruction, the invention divides the depth range [d_min, d_max] into S sampling planes and obtains the optimal solution of the current energy function by exhaustive search. The step size is chosen as:

$$d_{inc}^{k} = \frac{S\, d_{min}\, d_{max}}{(S - k)\, d_{min} + d_{max}}$$

where d_inc^k is the interval between sampling planes k and k-1.
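The sampling planes and the pointwise exhaustive search over them can be sketched as follows (the step formula is used as reconstructed above; `data_cost` stands for the photometric term at the candidate depth and is an assumption of this sketch):

```python
import numpy as np

def depth_samples(d_min, d_max, S):
    """Sampling planes built from the step formula
    d_inc^k = S*d_min*d_max / ((S - k)*d_min + d_max)."""
    planes, d = [d_min], d_min
    for k in range(1, S + 1):
        d += S * d_min * d_max / ((S - k) * d_min + d_max)
        planes.append(d)
    return np.array(planes)

def solve_h(d, theta, lam, data_cost, planes):
    """Pointwise exhaustive minimization of E_h over the sampling planes:
    h* = argmin_h  theta/2 * (d - h)^2 + lam * data_cost(h)."""
    energies = [0.5 * theta * (d - h) ** 2 + lam * data_cost(h) for h in planes]
    return float(planes[int(np.argmin(energies))])
```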

Step 5: establish the keyframe selection mechanism and update the 3D scene.

To eliminate redundant information in the system, improve the clarity and real-time performance of the reconstruction results, and reduce the computational burden, the invention estimates the 3D scene only at keyframes, and updates and maintains the generated 3D scene. When a new frame of KeyFrame data is added, the newly added KeyFrame data are transformed into the world coordinate system according to the pose relation p_w = T_cw^{-1} p_c, completing the update of the scene data.

Using the data penalty term of the depth model, an evaluation function of the degree of information overlap between the current frame and the keyframe is established, namely:

$$N = \sum_{x \in \mathbb{R}^2} c(x)$$

$$c(x) = \begin{cases} 1, & \left| I_r(x) - I_i(x') \right| < \zeta \\ 0, & \text{otherwise} \end{cases}$$

where ζ is a constant.

If N is less than 0.7 times the image size, the current frame is determined to be a new keyframe.
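The keyframe test reduces to a pixel count, sketched below; `cur_img_warped` stands for I_i resampled at the warped coordinates x', with the warping itself omitted:

```python
import numpy as np

def is_new_keyframe(ref_img, cur_img_warped, zeta, ratio=0.7):
    """Count pixels whose intensity difference to the keyframe is below zeta;
    declare a new keyframe when the overlap N drops under ratio * image size."""
    c = np.abs(ref_img.astype(float) - cur_img_warped.astype(float)) < zeta
    n = int(c.sum())
    return n < ratio * ref_img.size
```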

The beneficial effects of the invention are as follows: the invention acquires environment data with an RGB camera and, building on a high-precision monocular localization algorithm, proposes a depth-map generation method based on a variational mechanism, achieving fast, large-scale indoor 3D scene reconstruction and effectively addressing the cost and real-time constraints of 3D reconstruction algorithms.

Brief Description of the Drawings

Fig. 1 is the flowchart of the indoor 3D scene reconstruction method based on the variational model;

Fig. 2 is a schematic diagram of the camera coordinate system;

Fig. 3 shows the 3D reconstruction results of an application example of the invention.

Detailed Description

Fig. 1 is the flowchart of the indoor 3D scene reconstruction method based on the variational model, comprising the following steps:

Step 1: obtain the calibration parameters of the camera and establish a distortion correction model.

Step 2: establish the camera pose description and the camera projection model.

Step 3: estimate the camera pose with the SFM-based monocular SLAM algorithm.

Step 4: establish the depth-map estimation model based on the variational mechanism and solve it.

Step 5: establish the keyframe selection mechanism and update the 3D scene.

An application example of the invention is given below.

The RGB camera used in this example is a Point Grey Flea2 with a resolution of 640 x 480, a maximum frame rate of 30 fps, a horizontal field of view of 65°, and a focal length of about 3.5 mm. The PC used is equipped with a GTS450 GPU and an i5 quad-core CPU.

During the experiment, environment information is acquired with the color camera, and the camera pose estimation algorithm provides precise self-localization. When a keyframe is entered, the 20 frames of images around the keyframe are selected as the input of the depth-estimation algorithm. During execution of the depth-estimation algorithm, let d_0 = h_0 and q_0 = 0 to obtain the initialization input of the current depth map, and iteratively optimize E_{d,q} and E_h until convergence. Meanwhile, the value of θ is decreased continually over the iterations, increasing the weight of the quadratic function during execution and effectively speeding up convergence. The final experimental results are shown in Fig. 3; the experiments show that the method can effectively achieve dense 3D reconstruction of the environment, further validating its feasibility.

Claims (3)

1. A method for three-dimensional reconstruction of an indoor environment based on a variational mechanism, characterized by comprising the following steps:

Step 1: obtain the calibration parameters of the camera and establish a distortion correction model.

The specific camera-calibration procedure is:

(1) print a chessboard template;

(2) photograph the chessboard from multiple angles, letting the chessboard fill the screen as much as possible and keeping every corner of the chessboard within the frame; six template pictures are taken in total;

(3) detect the feature points in the images, i.e., every black crossing point of the chessboard;

(4) solve for the intrinsic parameters as follows:

the RGB camera calibration parameters are mainly the camera intrinsics, and the intrinsic matrix K of the camera is

$$K = \begin{bmatrix} f_u & 0 & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

where u, v are the camera image-plane coordinate axes, (u_0, v_0) is the center of the camera image plane, and (f_u, f_v) are the focal lengths of the camera;

according to the calibration parameters, the mapping between a point in the RGB image and a point in 3D space is as follows: the coordinates P_3D = (x, y, z) of image point p = (u, v) in the camera coordinate system are

$$\begin{cases} x = (u - u_0)\, z / f_u \\ y = (v - v_0)\, z / f_v \\ z = d \end{cases}$$

where d is the depth value of point p in the depth image;

the camera coordinate system has the positive y-axis pointing down, the positive z-axis pointing forward, and the positive x-axis pointing right; the starting position of the camera is set as the origin of the world coordinate system, whose X, Y, Z directions are identical to the camera's;

the FOV camera correction model is

$$u_d = \begin{bmatrix} u_0 \\ v_0 \end{bmatrix} + \begin{bmatrix} f_u & 0 \\ 0 & f_v \end{bmatrix} \frac{r_d}{r_u}\, x_u$$

$$r_d = \frac{1}{\omega}\arctan\!\left(2\, r_u \tan\frac{\omega}{2}\right)$$

$$r_u = \frac{\tan(r_d\, \omega)}{2\tan\frac{\omega}{2}}$$

where x_u is the pixel coordinate on the z = 1 plane, u_d is the pixel coordinate in the original image, and ω is the FOV camera distortion coefficient.

Step 2: establish the camera pose description and the camera projection model as follows:

in the established world coordinate system, the camera pose can be expressed as the matrix

$$T_{cw} = \begin{bmatrix} R_{cw} & t_{cw} \\ 0 & 1 \end{bmatrix}$$

where cw denotes the transform from the world coordinate system to the current camera coordinate system, T_cw ∈ SE(3), the rotation-translation transform space of a rigid body; T_cw can be represented by the six-tuple μ = (μ_1, μ_2, μ_3, μ_4, μ_5, μ_6), namely

$$T_{cw} = \exp(\hat{\mu})$$

$$\hat{\mu} = \begin{bmatrix} 0 & \mu_6 & -\mu_5 & \mu_1 \\ -\mu_6 & 0 & \mu_4 & \mu_2 \\ \mu_5 & -\mu_4 & 0 & \mu_3 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

where μ_1, μ_2, μ_3 are the translation components in the global coordinate system and μ_4, μ_5, μ_6 the rotation components about the axes of the local coordinate system;

the camera pose T_cw establishes the transform between the world coordinates p_w of a spatial point and its coordinates p_c in the current coordinate system, namely

$$p_c = T_{cw}\, p_w$$

in the current coordinate system, the projection of a 3D point onto the z = 1 plane is defined as

$$\pi(p) = (x/z,\; y/z)^T$$

where p ∈ R³ is a 3D point and x, y, z are its coordinates; given the depth value d at the current coordinate, back-projection determines the 3D point p in the current space:

$$\pi^{-1}(u, d) = d\, K^{-1} u$$

Step 3: estimate the camera pose with the SFM-based monocular SLAM algorithm.

Step 4: establish the depth-map estimation model based on the variational mechanism and solve it.

Step 5: establish the keyframe selection mechanism and update the 3D scene as follows:

estimate the 3D scene at keyframes, and update and maintain the generated 3D scene; when a new frame of KeyFrame data is added, transform the newly added KeyFrame data into the world coordinate system to complete the update of the scene data;

using the data penalty term of the depth model, establish the evaluation function of the degree of information overlap between the current frame and the keyframe, namely

$$N = \sum_{x \in \mathbb{R}^2} c(x)$$

$$c(x) = \begin{cases} 1, & \left| I_r(x) - I_i(x') \right| < \zeta \\ 0, & \text{otherwise} \end{cases}$$

where ζ is a constant;

if N is less than 0.7 times the image size, the current frame is determined to be a new keyframe.
2. the method for a kind of indoor environment three-dimensional reconstruction based on variation mechanism according to claim 1 is characterized in that, the step 3 utilization realizes that based on the monocular SLAM algorithm of SFM camera pose estimation approach is further comprising the steps of:
(1) initialization of sparse map
The PTAM algorithm utilizes standard stereoscopic camera algorithm model to set up current environment initialization map, and brings in constant renewal in three-dimensional map in conjunction with increasing key frame newly on this basis; In the initialization procedure of map, by two independent key frames of artificial selection, utilize FAST corners Matching relation in the image, employing realizes the estimation of the important matrix F between above-mentioned key frame based on the conforming five-spot of stochastic sampling, and calculate the three-dimensional coordinate at current unique point place, simultaneously, set up current consistance plane in conjunction with the suitable spatial point of RANSAC algorithm picks, to determine overall world coordinate system, realize the initialization of map;
(2) the camera pose is estimated
System utilizes camera to obtain the current environment texture information, and makes up this image pyramid of four floor heights, uses the FAST-10 Corner Detection Algorithm to extract characteristic information in the present image, and the mode of employing piece coupling is set up the data association between the angle point feature; On this basis, according to current projection error, set up the pose estimation model, its mathematical description is as follows:
&xi; = arg min &xi; &Sigma;Obj ( | e j | &sigma; j , &sigma; T )
e j = u i v i - K&pi; ( exp ( &xi; ) p )
In the formula, e jBe projection error, Σ Obj (, σ T) be the two power of Tukey objective function function, σ TBe the unbiased estimator of the match-on criterion difference of unique point, ξ is current pose 6 element group representations,
Figure FDA00003178675900033
Be the antisymmetric matrix of being formed by ξ;
According to above-mentioned pose estimation model, choose 50 characteristic matching points that are positioned at the image pyramid top layer, realize the initialization pose of camera is estimated; Further, the initial pose of this algorithm combining camera adopts polar curve to receive the mode of rope, sets up angle point feature sub-pixel precision matching relationship in the image pyramid, and with above-mentioned coupling to bringing the pose estimation model into, realize the accurate reorientation of camera;
(3) the camera pose is optimized
System is after initialization, and the key frame that the map building thread waits is new enters; If number of image frames exceeds threshold condition between camera and current key frame, and the camera tracking effect will automatically perform the key frame process of adding when best; At this moment, system will carry out the Shi-Tomas assessment to all FAST angle points in the key frame that increases newly, to obtain current angle point characteristic information with notable feature, and choose nearest with it key frame and utilize polar curve receipts rope and block matching method to set up the unique point mapping relations, realize the accurate reorientation of camera in conjunction with the pose estimation model, simultaneously match point is projected to the space, generate current global context three-dimensional map;
In order to realize the maintenance of global map, in the process that the new key frame of map building thread waits enters, the local Levenberg-Marquardt boundling adjustment algorithm with the overall situation of system's utilization realizes the global coherency optimization of current map; The mathematical description of this boundling adjustment algorithm is:
{ { &xi; 2 . . &xi; N } , { p 1 . . p M } } = arg min { { &mu; } , { p } } &Sigma; i = 1 N &Sigma; j &Element; S i Obj ( | e ji | &sigma; ji , &sigma; T )
In the formula, σ JiFor in i key frame, the nothing of the match-on criterion difference of FAST unique point is estimated ξ partially i6 element group representations of representing i key frame pose, p iBe the point in the global map.
3. the method for a kind of indoor environment three-dimensional reconstruction based on variation mechanism according to claim 1 is characterized in that, step 4 is set up and find the solution based on the method for the depth map estimation model of variation mechanism as follows:
Based on the depth map estimation model of variation mechanism, under the prerequisite of illumination unchangeability hypothesis, to set up the data penalty term, and utilize the data penalty term to guarantee the flatness of current depth map, its mathematical model is as follows:
E d=∫ Ω(E data+λE reg)dx
In the formula, λ is data penalty term E DataWith variation regularization term E RegBetween weight coefficient,
Figure FDA00003178675900045
Be the depth map span;
By choosing the reference frame I that current key frame is the depth map algorithm for estimating r, utilize its adjacent picture sequence I={I 1, I 2..., I n, set up data penalty term E in conjunction with projection model Data, its mathematical description is:
E data = 1 | I ( r ) | &Sigma; I i &Element; I | I r ( x ) - I i ( x &prime; ) |
In the formula, | I (r) | for have the image frames numbers of the information of coincidence in the present image sequence with reference frame, x ' is for being in I at reference frame x under depth d iThe projection coordinate at place, that is:
x &prime; = &pi; - 1 ( KT r i &pi; ( x , d ) )
Under depth map smoothness assumption prerequisite, in order to ensure the uncontinuity of boundary in image, to introduce Weighted H uber operator and make up the variation regularization term, its mathematical description is:
E reg=g(u)||▽d(u)|| α
In the formula, ▽ d is the gradient of depth map, and g (u) is the pixel gradient weight coefficient, g (u)=exp (a|| ▽ I r(u) ||)
The Huber operator || x|| αMathematical description be:
| | x | | &alpha; = | | x | | 2 2 &alpha; , | | x | | &le; &alpha; | | x | | - &alpha; 2 , others
In the formula, α is constant;
According to the Legendre-Fenchel conversion, energy function is transformed to:
g | | &dtri; d | | &alpha; = < g &dtri; d , q > - &delta; ( q ) - &alpha; 2 | | q | | 2
In the formula, &delta; ( q ) = &alpha; 2 &alpha; < | | q | | &le; 1 &infin; other
In view of above-mentioned mathematical model is found the solution the complexity height, calculated amount is big, introduce auxiliary variable and set up protruding optimization model, adopting alternately, descent method realizes that to above-mentioned Model Optimization detailed process is as follows:
(1) fixing h, find the solution:
arg max q { arg min d E d , q }
E d , q = &Integral; &Omega; ( < g &dtri; d , q > + 1 2 &theta; ( d - h ) 2 - &delta; ( q ) - &alpha; 2 | | q | | 2 ) dx
In the formula, g is gradient weight coefficient in the variation regularization term, and θ is the quadratic term constant coefficient;
According to Lagrangian extremum method, the condition that above-mentioned energy function reaches extreme value is:
&PartialD; E d , q &PartialD; q = g &dtri; d - &alpha;q = 0
&PartialD; E d , q &PartialD; d = g div q + 1 &theta; ( d - h ) = 0
In the formula, divq is the divergence of q;
Describe in conjunction with the partial derivative discretize, above-mentioned extremum conditions can be expressed as:
q n + 1 - q n &epsiv; q = g &dtri; d - &alpha;q n + 1
d n + 1 - d n &epsiv; d = g div p + 1 &theta; ( d n + 1 - h )
Adopt primal dual algorithm to realize the iteration optimization of energy function, that is:
p n + 1 = ( p n + &epsiv; q g &dtri; d n ) / ( 1 + &epsiv; q &alpha; ) max ( 1 , ( p n + &epsiv; q g &dtri; d n ) / ( 1 + &epsiv; q &alpha; ) )
d n + 1 = d n + &epsiv; d ( g div q n + 1 + h n / &theta; ) ( 1 + &epsiv; d / &theta; )
In the formula, ε q, ε dBe constant, expression maximizes and minimizes gradient and describes coefficient respectively;
(2) fixing d, find the solution:
arg min E h h
E h = &Integral; &Omega; ( &theta; 2 ( d - h ) 2 + &lambda; | I ( r ) | &Sigma; i = 0 n | I i ( x ) - I ref ( x , h ) | ) dx
In above-mentioned energy function solution procedure, in order effectively to reduce the complexity of algorithm, guarantee the part detailed information in the process of reconstruction simultaneously, with degree of depth span [d Min, d Max] be divided into S sample plane, adopt exhaustive mode to obtain the optimum solution of current energy function; Being chosen as of step-length wherein:
d inc k = Sd min d max ( S - k ) d min + d max
In the formula,
Figure FDA00003178675900062
Be k and k-1 sample plane interval.
CN201310173608.XA (priority and filing date 2013-05-13): Indoor-environment three-dimensional reconstruction method based on a variational mechanism. Status: Expired - Fee Related. Granted as CN103247075B (en).

Priority Applications (1)

CN201310173608.XA (priority/filing date 2013-05-13): CN103247075B (en), Indoor-environment three-dimensional reconstruction method based on a variational mechanism

Publications (2)

CN103247075A, published 2013-08-14
CN103247075B, published 2015-08-19

Family

ID: 48926580

Family Applications (1)

CN201310173608.XA (filed 2013-05-13): granted as CN103247075B (en), Expired - Fee Related

Country Status (1)

CN: CN103247075B (en)

Cited By (75)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103901891A (en) * 2014-04-12 2014-07-02 复旦大学 Dynamic particle tree SLAM algorithm based on hierarchical structure
CN103914874A (en) * 2014-04-08 2014-07-09 中山大学 Compact SFM three-dimensional reconstruction method without feature extraction
CN103942832A (en) * 2014-04-11 2014-07-23 浙江大学 Real-time indoor scene reconstruction method based on on-line structure analysis
CN104427230A (en) * 2013-08-28 2015-03-18 北京大学 Reality enhancement method and reality enhancement system
CN104463962A (en) * 2014-12-09 2015-03-25 合肥工业大学 Three-dimensional scene reconstruction method based on GPS information video
CN104537709A (en) * 2014-12-15 2015-04-22 西北工业大学 Real-time three-dimensional reconstruction key frame determination method based on position and orientation changes
CN104881029A (en) * 2015-05-15 2015-09-02 重庆邮电大学 Mobile robot navigation method based on one point RANSAC and FAST algorithm
WO2015134832A1 (en) * 2014-03-06 2015-09-11 Nec Laboratories America, Inc. High accuracy monocular moving object localization
CN105513083A (en) * 2015-12-31 2016-04-20 新浪网技术(中国)有限公司 PTAM camera tracking method and device
CN105654492A (en) * 2015-12-30 2016-06-08 哈尔滨工业大学 Robust real-time three-dimensional (3D) reconstruction method based on consumer camera
CN105678842A (en) * 2016-01-11 2016-06-15 湖南拓视觉信息技术有限公司 Manufacturing method and device for three-dimensional map of indoor environment
CN105678754A (en) * 2015-12-31 2016-06-15 西北工业大学 Unmanned aerial vehicle real-time map reconstruction method
CN105686936A (en) * 2016-01-12 2016-06-22 浙江大学 Sound coding interaction system based on RGB-IR camera
CN105825520A (en) * 2015-01-08 2016-08-03 北京雷动云合智能技术有限公司 Monocular SLAM (Simultaneous Localization and Mapping) method capable of creating large-scale map
CN105869136A (en) * 2015-01-22 2016-08-17 北京雷动云合智能技术有限公司 Collaborative visual SLAM method based on multiple cameras
CN105856230A (en) * 2016-05-06 2016-08-17 简燕梅 ORB key frame closed-loop detection SLAM method capable of improving consistency of position and pose of robot
CN105928505A (en) * 2016-04-19 2016-09-07 深圳市神州云海智能科技有限公司 Determination method and apparatus for position and orientation of mobile robot
CN105955273A (en) * 2016-05-25 2016-09-21 速感科技(北京)有限公司 Indoor robot navigation system and method
CN106052674A (en) * 2016-05-20 2016-10-26 青岛克路德机器人有限公司 Indoor robot SLAM method and system
CN106097304A (en) * 2016-05-31 2016-11-09 西北工业大学 A kind of unmanned plane real-time online ground drawing generating method
CN106127739A (en) * 2016-06-16 2016-11-16 华东交通大学 A kind of RGB D SLAM method of combination monocular vision
CN106289099A (en) * 2016-07-28 2017-01-04 汕头大学 A kind of single camera vision system and three-dimensional dimension method for fast measuring based on this system
CN106485744A (en) * 2016-10-10 2017-03-08 成都奥德蒙科技有限公司 A kind of synchronous superposition method
CN106529838A (en) * 2016-12-16 2017-03-22 湖南拓视觉信息技术有限公司 Virtual assembling method and device
CN106595601A (en) * 2016-12-12 2017-04-26 天津大学 Camera six-degree-of-freedom pose accurate repositioning method without hand eye calibration
CN106780588A (en) * 2016-12-09 2017-05-31 浙江大学 A kind of image depth estimation method based on sparse laser observations
CN106780576A (en) * 2016-11-23 2017-05-31 北京航空航天大学 A kind of camera position and orientation estimation method towards RGBD data flows
CN106803275A (en) * 2017-02-20 2017-06-06 苏州中科广视文化科技有限公司 Estimated based on camera pose and the 2D panoramic videos of spatial sampling are generated
CN106875437A (en) * 2016-12-27 2017-06-20 北京航空航天大学 A kind of extraction method of key frame towards RGBD three-dimensional reconstructions
CN106875446A (en) * 2017-02-20 2017-06-20 清华大学 Camera method for relocating and device
CN106940186A (en) * 2017-02-16 2017-07-11 华中科技大学 A kind of robot autonomous localization and air navigation aid and system
CN107004275A (en) * 2014-11-21 2017-08-01 Metaio有限公司 For determining that at least one of 3D in absolute space ratio of material object reconstructs the method and system of the space coordinate of part
CN106997614A (en) * 2017-03-17 2017-08-01 杭州光珀智能科技有限公司 A kind of large scale scene 3D modeling method and its device based on depth camera
CN107160395A (en) * 2017-06-07 2017-09-15 中国人民解放军装甲兵工程学院 Map constructing method and robot control system
CN107292949A (en) * 2017-05-25 2017-10-24 深圳先进技术研究院 Three-dimensional rebuilding method, device and the terminal device of scene
CN107481279A (en) * 2017-05-18 2017-12-15 华中科技大学 A kind of monocular video depth map computational methods
CN107506040A (en) * 2017-08-29 2017-12-22 上海爱优威软件开发有限公司 A kind of space path method and system for planning
CN107657640A (en) * 2017-09-30 2018-02-02 南京大典科技有限公司 Intelligent patrol inspection management method based on ORB SLAM
CN107818592A (en) * 2017-11-24 2018-03-20 北京华捷艾米科技有限公司 Method, system and the interactive system of collaborative synchronous superposition
CN107833245A (en) * 2017-11-28 2018-03-23 北京搜狐新媒体信息技术有限公司 SLAM method and system based on monocular vision Feature Points Matching
CN107862720A (en) * 2017-11-24 2018-03-30 北京华捷艾米科技有限公司 Pose optimization method and pose optimization system based on the fusion of more maps
CN107909643A (en) * 2017-11-06 2018-04-13 清华大学 Mixing scene reconstruction method and device based on model segmentation
CN108062537A (en) * 2017-12-29 2018-05-22 幻视信息科技(深圳)有限公司 A kind of 3d space localization method, device and computer readable storage medium
CN108122263A (en) * 2017-04-28 2018-06-05 上海联影医疗科技有限公司 Image re-construction system and method
CN108154531A (en) * 2018-01-03 2018-06-12 深圳北航新兴产业技术研究院 A kind of method and apparatus for calculating body-surface rauma region area
CN108171787A (en) * 2017-12-18 2018-06-15 桂林电子科技大学 A kind of three-dimensional rebuilding method based on the detection of ORB features
CN108242079A (en) * 2017-12-30 2018-07-03 北京工业大学 A VSLAM method based on multi-feature visual odometry and graph optimization model
CN108447116A (en) * 2018-02-13 2018-08-24 中国传媒大学 The method for reconstructing three-dimensional scene and device of view-based access control model SLAM
CN108629843A (en) * 2017-03-24 2018-10-09 成都理想境界科技有限公司 A kind of method and apparatus for realizing augmented reality
CN108898669A (en) * 2018-07-17 2018-11-27 网易(杭州)网络有限公司 Data processing method, device, medium and calculating equipment
WO2018214086A1 (en) * 2017-05-25 2018-11-29 深圳先进技术研究院 Method and apparatus for three-dimensional reconstruction of scene, and terminal device
CN109191526A (en) * 2018-09-10 2019-01-11 杭州艾米机器人有限公司 Three-dimensional environment method for reconstructing and system based on RGBD camera and optical encoder
CN109254579A (en) * 2017-07-14 2019-01-22 上海汽车集团股份有限公司 A kind of binocular vision camera hardware system, 3 D scene rebuilding system and method
CN109697753A (en) * 2018-12-10 2019-04-30 智灵飞(北京)科技有限公司 A kind of no-manned plane three-dimensional method for reconstructing, unmanned plane based on RGB-D SLAM
CN109739079A (en) * 2018-12-25 2019-05-10 广东工业大学 A Method to Improve the Accuracy of VSLAM System
CN109870118A (en) * 2018-11-07 2019-06-11 南京林业大学 A point cloud collection method for green plant time series model
CN110059651A (en) * 2019-04-24 2019-07-26 北京计算机技术及应用研究所 A kind of camera real-time tracking register method
CN110555883A (en) * 2018-04-27 2019-12-10 腾讯科技(深圳)有限公司 repositioning method and device for camera attitude tracking process and storage medium
CN110751640A (en) * 2019-10-17 2020-02-04 南京鑫和汇通电子科技有限公司 Quadrangle detection method of depth image based on angular point pairing
CN110966917A (en) * 2018-09-29 2020-04-07 深圳市掌网科技股份有限公司 Indoor three-dimensional scanning system and method for mobile terminal
CN111145238A (en) * 2019-12-12 2020-05-12 中国科学院深圳先进技术研究院 Three-dimensional reconstruction method and device of monocular endoscope image and terminal equipment
CN111340864A (en) * 2020-02-26 2020-06-26 浙江大华技术股份有限公司 Monocular estimation-based three-dimensional scene fusion method and device
CN111652901A (en) * 2020-06-02 2020-09-11 山东大学 A Textureless 3D Object Tracking Method Based on Confidence and Feature Fusion
CN112221132A (en) * 2020-10-14 2021-01-15 王军力 Method and system for applying three-dimensional weiqi to online game
CN112348868A (en) * 2020-11-06 2021-02-09 养哇(南京)科技有限公司 Method and system for recovering monocular SLAM scale through detection and calibration
CN112348869A (en) * 2020-11-17 2021-02-09 的卢技术有限公司 Method for recovering monocular SLAM scale through detection and calibration
CN112597334A (en) * 2021-01-15 2021-04-02 天津帕克耐科技有限公司 Data processing method of communication data center
CN112634371A (en) * 2019-09-24 2021-04-09 北京百度网讯科技有限公司 Method and device for outputting information and calibrating camera
CN113034606A (en) * 2021-02-26 2021-06-25 嘉兴丰鸟科技有限公司 Motion recovery structure calculation method
CN113534786A (en) * 2020-04-20 2021-10-22 深圳市奇虎智能科技有限公司 SLAM method-based environment reconstruction method and system and mobile robot
CN113902847A (en) * 2021-10-11 2022-01-07 岱悟智能科技(上海)有限公司 Monocular depth image pose optimization method based on three-dimensional feature constraint
US11348260B2 (en) * 2017-06-22 2022-05-31 Interdigital Vc Holdings, Inc. Methods and devices for encoding and reconstructing a point cloud
WO2022142049A1 (en) * 2020-12-29 2022-07-07 浙江商汤科技开发有限公司 Map construction method and apparatus, device, storage medium, and computer program product
CN114943773A (en) * 2022-04-06 2022-08-26 阿里巴巴(中国)有限公司 Camera calibration method, device, equipment and storage medium
CN117214860A (en) * 2023-08-14 2023-12-12 北京科技大学顺德创新学院 Laser radar odometer method based on twin feature pyramid and ground segmentation

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105701811B (en) * 2016-01-12 2018-05-22 浙江大学 A kind of acoustic coding exchange method based on RGB-IR cameras
CN108645398A (en) * 2018-02-09 2018-10-12 深圳积木易搭科技技术有限公司 A kind of instant positioning and map constructing method and system based on structured environment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07182541A (en) * 1993-12-21 1995-07-21 Nec Corp Preparation method for a three-dimensional model
CN101369348A (en) * 2008-11-07 2009-02-18 上海大学 A New Viewpoint Reconstruction Method in Multi-viewpoint Acquisition/Display System of Convergent Cameras
CN102800127A (en) * 2012-07-18 2012-11-28 清华大学 Light stream optimization based three-dimensional reconstruction method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TAGUCHI, Y., et al.: "SLAM using both points and planes for hand-held 3D sensors", Mixed and Augmented Reality (ISMAR), 2012 IEEE International Symposium on *
LIU Xin, et al.: "Fast object reconstruction based on GPU and Kinect", Acta Automatica Sinica *

Cited By (126)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104427230B (en) * 2013-08-28 2017-08-25 北京大学 Augmented reality method and augmented reality system
CN104427230A (en) * 2013-08-28 2015-03-18 北京大学 Reality enhancement method and reality enhancement system
WO2015134832A1 (en) * 2014-03-06 2015-09-11 Nec Laboratories America, Inc. High accuracy monocular moving object localization
US9367922B2 (en) 2014-03-06 2016-06-14 Nec Corporation High accuracy monocular moving object localization
CN103914874B (en) * 2014-04-08 2017-02-01 中山大学 Dense SFM three-dimensional reconstruction method without feature extraction
US9686527B2 (en) 2014-04-08 2017-06-20 Sun Yat-Sen University Non-feature extraction-based dense SFM three-dimensional reconstruction method
CN103914874A (en) * 2014-04-08 2014-07-09 中山大学 Dense SFM three-dimensional reconstruction method without feature extraction
WO2015154601A1 (en) * 2014-04-08 2015-10-15 中山大学 Non-feature extraction-based dense sfm three-dimensional reconstruction method
CN103942832A (en) * 2014-04-11 2014-07-23 浙江大学 Real-time indoor scene reconstruction method based on on-line structure analysis
CN103942832B (en) * 2014-04-11 2016-07-06 浙江大学 An indoor scene real-time reconstruction method based on online structural analysis
CN103901891A (en) * 2014-04-12 2014-07-02 复旦大学 Dynamic particle tree SLAM algorithm based on hierarchical structure
CN107004275A (en) * 2014-11-21 2017-08-01 Metaio有限公司 Method and system for determining spatial coordinates of a 3D reconstruction of at least part of a real object at absolute spatial scale
US11741624B2 (en) 2014-11-21 2023-08-29 Apple Inc. Method and system for determining spatial coordinates of a 3D reconstruction of at least part of a real object at absolute spatial scale
CN107004275B (en) * 2014-11-21 2020-09-29 苹果公司 Method and system for determining spatial coordinates of a 3D reconstruction of at least a portion of an object
US10846871B2 (en) 2014-11-21 2020-11-24 Apple Inc. Method and system for determining spatial coordinates of a 3D reconstruction of at least part of a real object at absolute spatial scale
CN104463962B (en) * 2014-12-09 2017-02-22 合肥工业大学 Three-dimensional scene reconstruction method based on video with GPS information
CN104463962A (en) * 2014-12-09 2015-03-25 合肥工业大学 Three-dimensional scene reconstruction method based on video with GPS information
CN104537709A (en) * 2014-12-15 2015-04-22 西北工业大学 Real-time three-dimensional reconstruction key frame determination method based on position and orientation changes
CN104537709B (en) * 2014-12-15 2017-09-29 西北工业大学 A real-time three-dimensional reconstruction key frame determination method based on pose changes
CN105825520A (en) * 2015-01-08 2016-08-03 北京雷动云合智能技术有限公司 Monocular SLAM (Simultaneous Localization and Mapping) method capable of creating large-scale map
CN105869136A (en) * 2015-01-22 2016-08-17 北京雷动云合智能技术有限公司 Collaborative visual SLAM method based on multiple cameras
CN104881029B (en) * 2015-05-15 2018-01-30 重庆邮电大学 Mobile robot navigation method based on one-point RANSAC and FAST algorithms
CN104881029A (en) * 2015-05-15 2015-09-02 重庆邮电大学 Mobile robot navigation method based on one point RANSAC and FAST algorithm
CN105654492B (en) * 2015-12-30 2018-09-07 哈尔滨工业大学 Robust real-time three-dimensional reconstruction method based on consumer-level cameras
CN105654492A (en) * 2015-12-30 2016-06-08 哈尔滨工业大学 Robust real-time three-dimensional (3D) reconstruction method based on consumer camera
CN105678754A (en) * 2015-12-31 2016-06-15 西北工业大学 Unmanned aerial vehicle real-time map reconstruction method
CN105513083A (en) * 2015-12-31 2016-04-20 新浪网技术(中国)有限公司 PTAM camera tracking method and device
CN105513083B (en) * 2015-12-31 2019-02-22 新浪网技术(中国)有限公司 A PTAM camera tracking method and device
CN105678842A (en) * 2016-01-11 2016-06-15 湖南拓视觉信息技术有限公司 Construction method and device for a three-dimensional map of an indoor environment
CN105686936B (en) * 2016-01-12 2017-12-29 浙江大学 An acoustic coding interaction system based on RGB-IR cameras
CN105686936A (en) * 2016-01-12 2016-06-22 浙江大学 Sound coding interaction system based on RGB-IR camera
CN105928505B (en) * 2016-04-19 2019-01-29 深圳市神州云海智能科技有限公司 Method and apparatus for determining the pose of a mobile robot
CN105928505A (en) * 2016-04-19 2016-09-07 深圳市神州云海智能科技有限公司 Determination method and apparatus for position and orientation of mobile robot
CN105856230B (en) * 2016-05-06 2017-11-24 简燕梅 An ORB key frame closed-loop detection SLAM method for improving robot pose consistency
CN105856230A (en) * 2016-05-06 2016-08-17 简燕梅 ORB key frame closed-loop detection SLAM method capable of improving consistency of position and pose of robot
CN106052674A (en) * 2016-05-20 2016-10-26 青岛克路德机器人有限公司 Indoor robot SLAM method and system
CN106052674B (en) * 2016-05-20 2019-07-26 青岛克路德机器人有限公司 A SLAM method and system for indoor robots
CN105955273A (en) * 2016-05-25 2016-09-21 速感科技(北京)有限公司 Indoor robot navigation system and method
CN106097304A (en) * 2016-05-31 2016-11-09 西北工业大学 An unmanned aerial vehicle real-time online map generation method
CN106097304B (en) * 2016-05-31 2019-04-23 西北工业大学 A real-time online map generation method for unmanned aerial vehicles
CN106127739B (en) * 2016-06-16 2021-04-27 华东交通大学 An RGB-D SLAM Method Combined with Monocular Vision
CN106127739A (en) * 2016-06-16 2016-11-16 华东交通大学 An RGB-D SLAM method combined with monocular vision
CN106289099A (en) * 2016-07-28 2017-01-04 汕头大学 A single-camera vision system and a fast three-dimensional dimension measurement method based on this system
CN106289099B (en) * 2016-07-28 2018-11-20 汕头大学 A single-camera vision system and a fast three-dimensional dimension measurement method based on the system
CN106485744A (en) * 2016-10-10 2017-03-08 成都奥德蒙科技有限公司 A simultaneous localization and mapping method
CN106485744B (en) * 2016-10-10 2019-08-20 成都弥知科技有限公司 A simultaneous localization and mapping method
CN106780576B (en) * 2016-11-23 2020-03-17 北京航空航天大学 RGBD data stream-oriented camera pose estimation method
CN106780576A (en) * 2016-11-23 2017-05-31 北京航空航天大学 A camera pose estimation method for RGBD data streams
CN106780588A (en) * 2016-12-09 2017-05-31 浙江大学 An image depth estimation method based on sparse laser observations
CN106595601B (en) * 2016-12-12 2020-01-07 天津大学 An accurate relocation method of camera 6-DOF pose without hand-eye calibration
CN106595601A (en) * 2016-12-12 2017-04-26 天津大学 Accurate camera six-degree-of-freedom pose repositioning method without hand-eye calibration
CN106529838A (en) * 2016-12-16 2017-03-22 湖南拓视觉信息技术有限公司 Virtual assembling method and device
CN106875437A (en) * 2016-12-27 2017-06-20 北京航空航天大学 A key frame extraction method for RGBD three-dimensional reconstruction
CN106940186A (en) * 2017-02-16 2017-07-11 华中科技大学 An autonomous robot localization and navigation method and system
CN106940186B (en) * 2017-02-16 2019-09-24 华中科技大学 An autonomous robot localization and navigation method and system
CN106803275A (en) * 2017-02-20 2017-06-06 苏州中科广视文化科技有限公司 2D panoramic video generation based on camera pose estimation and spatial sampling
CN106875446A (en) * 2017-02-20 2017-06-20 清华大学 Camera repositioning method and device
CN106875446B (en) * 2017-02-20 2019-09-20 清华大学 Camera repositioning method and device
CN106997614A (en) * 2017-03-17 2017-08-01 杭州光珀智能科技有限公司 A large-scale scene 3D modeling method and device based on a depth camera
CN108629843A (en) * 2017-03-24 2018-10-09 成都理想境界科技有限公司 A method and apparatus for realizing augmented reality
CN108629843B (en) * 2017-03-24 2021-07-13 成都理想境界科技有限公司 Method and equipment for realizing augmented reality
US11455756B2 (en) 2017-04-28 2022-09-27 Shanghai United Imaging Healthcare Co., Ltd. System and method for image reconstruction
CN108122263B (en) * 2017-04-28 2021-06-25 上海联影医疗科技股份有限公司 Image reconstruction system and method
US11062487B2 (en) 2017-04-28 2021-07-13 Shanghai United Imaging Healthcare Co., Ltd. System and method for image reconstruction
CN108122263A (en) * 2017-04-28 2018-06-05 上海联影医疗科技有限公司 Image reconstruction system and method
CN107481279A (en) * 2017-05-18 2017-12-15 华中科技大学 A monocular video depth map computation method
WO2018214086A1 (en) * 2017-05-25 2018-11-29 深圳先进技术研究院 Method and apparatus for three-dimensional reconstruction of scene, and terminal device
CN107292949A (en) * 2017-05-25 2017-10-24 深圳先进技术研究院 Three-dimensional scene reconstruction method, device and terminal equipment
CN107292949B (en) * 2017-05-25 2020-06-16 深圳先进技术研究院 Three-dimensional reconstruction method and device of scene and terminal equipment
CN107160395A (en) * 2017-06-07 2017-09-15 中国人民解放军装甲兵工程学院 Map construction method and robot control system
US11348260B2 (en) * 2017-06-22 2022-05-31 Interdigital Vc Holdings, Inc. Methods and devices for encoding and reconstructing a point cloud
CN109254579A (en) * 2017-07-14 2019-01-22 上海汽车集团股份有限公司 A binocular vision camera hardware system, 3D scene reconstruction system and method
CN107506040A (en) * 2017-08-29 2017-12-22 上海爱优威软件开发有限公司 A spatial path planning method and system
CN107657640A (en) * 2017-09-30 2018-02-02 南京大典科技有限公司 Intelligent patrol inspection management method based on ORB SLAM
CN107909643A (en) * 2017-11-06 2018-04-13 清华大学 Mixed scene reconstruction method and device based on model segmentation
CN107909643B (en) * 2017-11-06 2020-04-24 清华大学 Mixed scene reconstruction method and device based on model segmentation
CN107862720B (en) * 2017-11-24 2020-05-22 北京华捷艾米科技有限公司 Pose optimization method and pose optimization system based on multi-map fusion
CN107862720A (en) * 2017-11-24 2018-03-30 北京华捷艾米科技有限公司 Pose optimization method and pose optimization system based on multi-map fusion
CN107818592A (en) * 2017-11-24 2018-03-20 北京华捷艾米科技有限公司 Method, system and interactive system for collaborative simultaneous localization and mapping
CN107833245A (en) * 2017-11-28 2018-03-23 北京搜狐新媒体信息技术有限公司 SLAM method and system based on monocular visual feature point matching
CN107833245B (en) * 2017-11-28 2020-02-07 北京搜狐新媒体信息技术有限公司 Monocular visual feature point matching-based SLAM method and system
CN108171787A (en) * 2017-12-18 2018-06-15 桂林电子科技大学 A three-dimensional reconstruction method based on ORB feature detection
CN108062537A (en) * 2017-12-29 2018-05-22 幻视信息科技(深圳)有限公司 A 3D space localization method, device and computer-readable storage medium
CN108242079B (en) * 2017-12-30 2021-06-25 北京工业大学 A VSLAM method based on multi-feature visual odometry and graph optimization model
CN108242079A (en) * 2017-12-30 2018-07-03 北京工业大学 A VSLAM method based on multi-feature visual odometry and graph optimization model
CN108154531B (en) * 2018-01-03 2021-10-08 深圳北航新兴产业技术研究院 Method and device for calculating area of body surface damage region
CN108154531A (en) * 2018-01-03 2018-06-12 深圳北航新兴产业技术研究院 A method and apparatus for calculating the area of a body-surface trauma region
CN108447116A (en) * 2018-02-13 2018-08-24 中国传媒大学 Three-dimensional scene reconstruction method and device based on visual SLAM
CN110555883A (en) * 2018-04-27 2019-12-10 腾讯科技(深圳)有限公司 Repositioning method and device for camera attitude tracking process, and storage medium
CN110555883B (en) * 2018-04-27 2022-07-22 腾讯科技(深圳)有限公司 Repositioning method and device for camera attitude tracking process and storage medium
CN108898669A (en) * 2018-07-17 2018-11-27 网易(杭州)网络有限公司 Data processing method, device, medium and computing equipment
CN109191526B (en) * 2018-09-10 2020-07-07 杭州艾米机器人有限公司 Three-dimensional environment reconstruction method and system based on RGBD camera and optical encoder
CN109191526A (en) * 2018-09-10 2019-01-11 杭州艾米机器人有限公司 Three-dimensional environment reconstruction method and system based on RGBD camera and optical encoder
CN110966917A (en) * 2018-09-29 2020-04-07 深圳市掌网科技股份有限公司 Indoor three-dimensional scanning system and method for mobile terminal
CN109870118B (en) * 2018-11-07 2020-09-11 南京林业大学 A point cloud collection method for green plant time series model
CN109870118A (en) * 2018-11-07 2019-06-11 南京林业大学 A point cloud collection method for green plant time series model
CN109697753A (en) * 2018-12-10 2019-04-30 智灵飞(北京)科技有限公司 An unmanned aerial vehicle three-dimensional reconstruction method based on RGB-D SLAM, and unmanned aerial vehicle
CN109697753B (en) * 2018-12-10 2023-10-03 智灵飞(北京)科技有限公司 Unmanned aerial vehicle three-dimensional reconstruction method based on RGB-D SLAM and unmanned aerial vehicle
CN109739079B (en) * 2018-12-25 2022-05-10 九天创新(广东)智能科技有限公司 Method for improving VSLAM system precision
CN109739079A (en) * 2018-12-25 2019-05-10 广东工业大学 A Method to Improve the Accuracy of VSLAM System
CN110059651B (en) * 2019-04-24 2021-07-02 北京计算机技术及应用研究所 Real-time tracking and registering method for camera
CN110059651A (en) * 2019-04-24 2019-07-26 北京计算机技术及应用研究所 A camera real-time tracking and registration method
CN112634371B (en) * 2019-09-24 2023-12-15 阿波罗智联(北京)科技有限公司 Method and device for outputting information and calibrating camera
CN112634371A (en) * 2019-09-24 2021-04-09 北京百度网讯科技有限公司 Method and device for outputting information and calibrating camera
CN110751640A (en) * 2019-10-17 2020-02-04 南京鑫和汇通电子科技有限公司 Quadrilateral detection method for depth images based on corner pairing
CN110751640B (en) * 2019-10-17 2024-07-16 南京鑫和汇通电子科技有限公司 Quadrilateral detection method for depth image based on corner pairing
CN111145238A (en) * 2019-12-12 2020-05-12 中国科学院深圳先进技术研究院 Three-dimensional reconstruction method and device of monocular endoscope image and terminal equipment
WO2021115071A1 (en) * 2019-12-12 2021-06-17 中国科学院深圳先进技术研究院 Three-dimensional reconstruction method and apparatus for monocular endoscope image, and terminal device
CN111145238B (en) * 2019-12-12 2023-09-22 中国科学院深圳先进技术研究院 Three-dimensional reconstruction method, device and terminal equipment of monocular endoscopic images
CN111340864A (en) * 2020-02-26 2020-06-26 浙江大华技术股份有限公司 Monocular estimation-based three-dimensional scene fusion method and device
CN111340864B (en) * 2020-02-26 2023-12-12 浙江大华技术股份有限公司 Three-dimensional scene fusion method and device based on monocular estimation
CN113534786A (en) * 2020-04-20 2021-10-22 深圳市奇虎智能科技有限公司 SLAM method-based environment reconstruction method and system and mobile robot
CN111652901B (en) * 2020-06-02 2021-03-26 山东大学 A Textureless 3D Object Tracking Method Based on Confidence and Feature Fusion
CN111652901A (en) * 2020-06-02 2020-09-11 山东大学 A Textureless 3D Object Tracking Method Based on Confidence and Feature Fusion
CN112221132A (en) * 2020-10-14 2021-01-15 王军力 Method and system for applying three-dimensional weiqi to online game
CN112348868A (en) * 2020-11-06 2021-02-09 养哇(南京)科技有限公司 Method and system for recovering monocular SLAM scale through detection and calibration
CN112348869A (en) * 2020-11-17 2021-02-09 的卢技术有限公司 Method for recovering monocular SLAM scale through detection and calibration
CN112348869B (en) * 2020-11-17 2024-08-16 的卢技术有限公司 Method for recovering monocular SLAM scale through detection and calibration
WO2022142049A1 (en) * 2020-12-29 2022-07-07 浙江商汤科技开发有限公司 Map construction method and apparatus, device, storage medium, and computer program product
CN112597334A (en) * 2021-01-15 2021-04-02 天津帕克耐科技有限公司 Data processing method of communication data center
CN113034606A (en) * 2021-02-26 2021-06-25 嘉兴丰鸟科技有限公司 Structure-from-motion computation method
CN113902847B (en) * 2021-10-11 2024-04-16 岱悟智能科技(上海)有限公司 Monocular depth image pose optimization method based on three-dimensional feature constraint
CN113902847A (en) * 2021-10-11 2022-01-07 岱悟智能科技(上海)有限公司 Monocular depth image pose optimization method based on three-dimensional feature constraint
CN114943773A (en) * 2022-04-06 2022-08-26 阿里巴巴(中国)有限公司 Camera calibration method, device, equipment and storage medium
CN117214860A (en) * 2023-08-14 2023-12-12 北京科技大学顺德创新学院 Laser radar odometer method based on twin feature pyramid and ground segmentation
CN117214860B (en) * 2023-08-14 2024-04-19 北京科技大学顺德创新学院 LiDAR odometry method based on twin feature pyramid and ground segmentation

Also Published As

Publication number Publication date
CN103247075B (en) 2015-08-19

Similar Documents

Publication Publication Date Title
CN103247075B (en) Indoor environment three-dimensional reconstruction method based on variational mechanism
CN109461180B (en) Three-dimensional scene reconstruction method based on deep learning
CN109974707B (en) Indoor mobile robot visual navigation method based on improved point cloud matching algorithm
CN108416840B (en) A 3D scene dense reconstruction method based on monocular camera
CN109166149B (en) Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU
CN107292965B (en) Virtual and real shielding processing method based on depth image data stream
CN109003325B (en) Three-dimensional reconstruction method, medium, device and computing equipment
CN110189399B (en) Indoor three-dimensional layout reconstruction method and system
CN108564616B (en) Fast robust RGB-D indoor three-dimensional scene reconstruction method
US9613420B2 (en) Method for locating a camera and for 3D reconstruction in a partially known environment
CN111462135A (en) Semantic Mapping Method Based on Visual SLAM and 2D Semantic Segmentation
CN108776989B (en) Low-texture planar scene reconstruction method based on sparse SLAM framework
CN109035388A (en) Three-dimensional face model reconstruction method and device
CN107240129A (en) Object and small indoor scene recovery and modeling method based on RGB-D camera data
CN103106688A (en) Indoor three-dimensional scene reconstruction method based on double-layer rectification
CN111062966A (en) Method for optimizing camera tracking based on L-M algorithm and polynomial interpolation
Pathak et al. Dense 3D reconstruction from two spherical images via optical flow-based equirectangular epipolar rectification
CN114494150A (en) A Design Method of Monocular Visual Odometry Based on Semi-direct Method
CN115205463A (en) New visual angle image generation method, device and equipment based on multi-spherical scene expression
Liu et al. Dense stereo matching strategy for oblique images that considers the plane directions in urban areas
Qu et al. Visual slam with 3d gaussian primitives and depth priors enabling novel view synthesis
CN112767481B (en) High-precision positioning and mapping method based on visual edge features
CN114935316A (en) Standard depth image generation method based on optical tracking and monocular vision
CN119206118B (en) NeRF-based visual dominant multi-mode SLAM method for indoor office environment
CN113963030B (en) A method to improve the stability of monocular vision initialization

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 2015-08-19

Termination date: 2020-05-13