
CN107240129A - Object and indoor small scene recovery and modeling method based on RGB-D camera data - Google Patents

Object and indoor small scene recovery and modeling method based on RGB-D camera data

Info

Publication number
CN107240129A
CN107240129A CN201710324369.1A CN201710324369A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710324369.1A
Other languages
Chinese (zh)
Inventor
叶勤
姚亚会
林怡
宇洁
叶张林
姜雪芹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University filed Critical Tongji University
Priority to CN201710324369.1A priority Critical patent/CN107240129A/en
Publication of CN107240129A publication Critical patent/CN107240129A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The object of the present invention is to provide an object and indoor small scene recovery and modeling method based on RGB-D camera data. In summary: during registration modeling of RGB-D depth data (point clouds), the method combines point-to-plane and point-to-projection constraints and applies them to the precise registration of the sequential depth data (point clouds) acquired by an RGB-D camera, finally producing a point cloud model of the object or small scene (.ply format) that can be used for measurement and further CAD modeling. The registration method takes advantage of both the speed of the point-to-projection algorithm and the accuracy of the point-to-tangent-plane algorithm, overcoming the slow speed and low scene accuracy of traditional point cloud registration methods; it quickly and accurately finds, for each point of the source point cloud, its corresponding point in the target point cloud, enabling multi-view point cloud stitching.

Description

Object and indoor small scene recovery and modeling method based on RGB-D camera data

Technical Field

The invention belongs to the fields of photogrammetry and computer vision, and in particular relates to a method for recovering and modeling objects and small indoor scenes based on RGB-D camera data.

Background

In recent years, 3D reconstruction has become a research hotspot in computer vision and is widely used in industrial measurement, protection of cultural relics and historic sites, reverse engineering, e-commerce, computer animation, medical anatomy, photomicrography, virtual reality, and many other fields. Common object and indoor modeling methods either use a LIDAR laser scanner to acquire point cloud data, or use cameras to acquire overlapping images and model via binocular stereo vision. The former is expensive and the instruments are cumbersome to operate; the latter requires complex, time-consuming data processing, depends on lighting conditions, and needs special camera calibration. Portable RGB-D sensors (such as the Kinect 2.0) are inexpensive, structurally simple, and easy to operate; they acquire the depth and color data of a scene simultaneously, and the depth acquisition is not disturbed by ambient visible light. These advantages greatly reduce the difficulty and cost of 3D modeling, allowing ordinary users to digitize everyday objects and connect the virtual and real worlds.

In point cloud modeling of objects and small indoor scenes from RGB-D camera data, depth data (point cloud) registration is the key technology for 3D object reconstruction and indoor scene recovery; it involves determining an initial registration position and then performing precise registration on that basis. For the coarse registration position, target spheres are the most common aid; without targets, the coarse position is usually marked manually, and fine registration is then performed on top of the coarse result. In 1992, Besl and McKay proposed a registration algorithm for free-form surfaces, the Iterative Closest Point (ICP) algorithm, a point-set-to-point-set registration method that requires no surface segmentation or feature extraction; however, its convergence is slow and it easily converges to a local extremum, causing registration to fail. Existing point cloud registration algorithms are basically improvements of ICP aimed at increasing the correctness, efficiency, and robustness of point cloud matching. Microsoft developed KinectFusion for the Kinect, the representative RGB-D camera; on GPU-accelerated machines it can build real-time 3D geometric models of objects and quickly create sufficiently smooth reconstructed scenes. However, it performs no loop-closure detection or optimization, so the reconstructed model accumulates large errors and deforms, and tracking easily fails when the scene becomes flat. Since RGB-D cameras acquire optical and depth data simultaneously, researchers have introduced epipolar constraints from the optical images into ICP registration of RGB-D depth data and applied this to Kinect camera data; the method is fairly robust and can basically recover the overall model of the scene, even when KinectFusion tracking fails, but model details are sometimes distorted.

The existing methods for recovering and modeling objects and small indoor scenes from RGB-D camera data suffer from low accuracy in estimating the pose of the moving camera, systematic error accumulation in the reconstructed model, rough model surfaces, and poor real-time performance. Improving the ICP algorithm with various constraints to obtain a point cloud registration algorithm of high accuracy and robustness, so that objects and small scenes can be modeled quickly from RGB-D camera data without expensive and bulky laser scanners, is therefore one of the current research goals in computer vision and close-range photogrammetry.

Summary of the Invention

The object of the present invention is to provide an object and indoor small scene recovery and modeling method based on RGB-D camera data.

The technical solution to be protected by the present invention is characterized as follows:

An object and indoor small scene recovery and modeling method based on RGB-D camera data, characterized, in summary, as follows: during registration modeling of RGB-D depth data (point clouds), the method combines point-to-plane and point-to-projection constraints and applies them to the precise registration of the sequential depth data (point clouds) acquired by an RGB-D camera, finally producing a point cloud model of the object or small scene (.ply format) that can be used for measurement and further CAD modeling. The registration method takes advantage of both the speed of the point-to-projection algorithm and the accuracy of the point-to-tangent-plane algorithm, overcoming the slow speed and low scene accuracy of traditional point cloud registration methods; it quickly and accurately finds, for each point of the source point cloud, its corresponding point in the target point cloud, enabling multi-view point cloud stitching.

The technical solution of the present invention is described in detail as follows:

Let S_P and S_Q be two adjacent frames of point cloud data acquired by the RGB-D camera from different viewpoints P and Q. Because the sampling frequency during RGB-D point cloud acquisition is high and the camera moves slowly, the translation of the registration between adjacent frames is close to the zero vector and the rotation matrix is close to the identity matrix E, so 0 and E can be used as the initial values for point cloud registration. S_P and S_Q are local surfaces parameterized by the coarsely registered point clouds; p_0 is a point on S_P with normal vector \vec{p}; q_s is the intersection of the normal line through p_0 with S_Q; the tangent plane of S_Q is taken through q_s, and q'_s is the projection of p_0 onto that tangent plane. The pair p_0 and q'_s is used as a registration point pair for solving the registration parameters. The key to the point-to-plane registration method is finding the intersection q_s. The usual approach is to find the triangle on the target point cloud surface and interpolate the intersection from its three vertices, but this is computationally heavy and time-consuming. To avoid this, the present invention first finds the intersection by the point-to-projection method, then obtains the tangent plane through the intersection and determines the candidate registration point pairs, then uses the rigid motion consistency constraint to cull erroneous pairs from the candidates, and finally performs ICP fine registration of the two point clouds S_P and S_Q with the remaining pairs to obtain accurate registration parameters. After the fine registration parameters between consecutive frames are obtained in this way, they are used to register and model the consecutive frame point clouds, yielding the point cloud model of the object or small scene (.ply format).

An embodiment of the technical solution is given below; the process is as follows:

First, RGB-D data are acquired (in the following, Microsoft's Kinect 2.0 is used as the depth data acquisition device for illustration). Before acquisition, the software and hardware platform for RGB-D data collection is configured to meet the acquisition requirements. During acquisition, for a fixed target or scene and within the depth recognition range of the RGB-D camera, the hand-held camera is moved smoothly and slowly around the object or scene through a full circle, forming a closed loop; large shakes should be avoided during acquisition to prevent large deviations in frame-to-frame registration.

After the sequence of frames is obtained, the object or scene is modeled as follows:

1. Depth image filtering

During actual measurement the sensor is affected by strong light and by the surface material of the object, so the depth data contain a great deal of noise, and invalid data appear especially at object edges. Therefore, before the depth data are converted into a 3D point cloud, bilateral filtering is applied to denoise and smooth the depth image while preserving the edge information of the object in the depth image; it takes into account both the spatial distance between pixels and their similarity in value.

$$w_s = w_g \times w_h$$
$$w_g(i,j) = \exp\!\left(-\frac{(i-x)^2 + (j-y)^2}{2\delta_g^2}\right)$$
$$w_h(i,j) = \exp\!\left(-\frac{\bigl(I(i,j)-I(x,y)\bigr)^2}{2\delta_h^2}\right) \qquad (1)$$

Equation (1) is the bilateral filter. When the spatial-domain standard deviation δ_g is fixed and the range (gray-level) standard deviation δ_h gradually increases, the weights w_h for different gray values all become large, the edge-preserving effect of the depth (gray-level) variation weakens, and the bilateral filter degenerates into a Gaussian filter. With δ_h fixed and δ_g increasing, the image becomes increasingly blurred. After repeated experiments on objects and small scenes, the bilateral filter template size in the present invention is 3×3, and the standard deviations of the spatial-domain and range-domain Gaussian functions are 10 and 15, respectively.
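For illustration, below is a minimal NumPy sketch of the bilateral filter of equation (1) with the stated 3×3 window and δ_g = 10, δ_h = 15; the function name and the small test patch are illustrative only and not part of the patent.

```python
import numpy as np

def bilateral_filter_depth(depth, radius=1, sigma_g=10.0, sigma_h=15.0):
    """Bilateral filtering of a depth image per equation (1):
    w_s = w_g * w_h, spatial Gaussian w_g times range Gaussian w_h."""
    h, w = depth.shape
    out = np.zeros((h, w), dtype=np.float64)
    for x in range(h):
        for y in range(w):
            i0, i1 = max(x - radius, 0), min(x + radius + 1, h)
            j0, j1 = max(y - radius, 0), min(y + radius + 1, w)
            window = depth[i0:i1, j0:j1].astype(np.float64)
            ii, jj = np.mgrid[i0:i1, j0:j1]
            w_g = np.exp(-((ii - x) ** 2 + (jj - y) ** 2) / (2 * sigma_g ** 2))
            w_h = np.exp(-((window - depth[x, y]) ** 2) / (2 * sigma_h ** 2))
            w_s = w_g * w_h
            out[x, y] = np.sum(w_s * window) / np.sum(w_s)
    return out

# A small illustrative depth patch (arbitrary values).
patch = np.array([[177, 177, 177],
                  [177, 176, 178],
                  [175, 179, 180]], dtype=np.float64)
print(bilateral_filter_depth(patch).round(1))
```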

2. Color image feature point detection and image matching

To generate a 3D model consistent with the scene and to reconstruct texture alongside geometry, a feature detector is used to find feature points on the color images and to match adjacent color frames. Because the data volume is large, the ORB (Oriented BRIEF) algorithm is used to extract the color image feature points: compared with the SIFT (Scale-Invariant Feature Transform) and SURF (Speeded-Up Robust Features) operators, ORB runs faster and matches with high accuracy, and describing the extracted features with binary codes greatly saves memory. After color image matching, the color images are aligned with the depth data according to the calibrated relative pose between the color lens and the depth lens, generating colored point clouds of the scene from different viewpoints.
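As a sketch of this step, the OpenCV snippet below detects ORB features on two adjacent color frames and matches them by brute-force Hamming distance; the file names and parameter values are placeholders and not values specified by the patent.

```python
import cv2

# Load two adjacent color frames (file names are placeholders).
img_n  = cv2.imread("frame_041_color.png", cv2.IMREAD_GRAYSCALE)
img_n1 = cv2.imread("frame_042_color.png", cv2.IMREAD_GRAYSCALE)

# Detect ORB keypoints and compute binary descriptors.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img_n, None)
kp2, des2 = orb.detectAndCompute(img_n1, None)

# Brute-force matching with Hamming distance (suited to binary descriptors),
# cross-check enabled so only mutually best matches are kept.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Visualize the 50 best matches.
vis = cv2.drawMatches(img_n, kp1, img_n1, kp2, matches[:50], None)
cv2.imwrite("orb_matches.png", vis)
```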

3. Improved ICP fine registration method

A. Using the back-projection method, a point p_0 on the n-th frame to be registered is projected along the viewpoint direction of the next frame (frame n+1) to obtain its pixel coordinates p_q; the calculation formula is

$$p_q = M_Q T_Q^{-1} p_0 \qquad (2)$$

In equation (2), M_Q is the central projection matrix (the intrinsic matrix obtained from camera calibration) and T_Q is the transformation matrix that brings the point cloud acquired from viewpoint Q into the common world coordinate system. Since the result of equation (2) is not necessarily integer, it is rounded to obtain the value p_q on the depth image grid.
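A minimal sketch of step A (equation (2)), under the assumption that M_Q is a 3×4 central projection (intrinsic) matrix and T_Q a 4×4 homogeneous transform from the Q camera frame to the world frame; these shapes are assumptions, since the patent only names the matrices.

```python
import numpy as np

def back_project(p0, M_Q, T_Q):
    """Equation (2): project a world-frame point p0 into the pixel grid of
    viewpoint Q. M_Q is the 3x4 central projection (intrinsic) matrix and
    T_Q is the 4x4 transform from camera frame Q to the world frame."""
    p0_h = np.append(np.asarray(p0, float), 1.0)   # homogeneous world point
    cam = np.linalg.inv(T_Q) @ p0_h                # into camera frame Q
    uvw = M_Q @ cam                                # central projection
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    return int(round(u)), int(round(v))            # snap to the depth-image grid
```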

B. Using the forward-projection method, the coordinates p_q are substituted into the right-hand side of equation (3) to obtain the new 3D point q_{p_0}; the calculation formula is:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = Z_c \begin{bmatrix} f_x & 0 & u_0 & 0 \\ 0 & f_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} \qquad (3)$$

In equation (3), the matrix is the intrinsic parameter matrix obtained from camera calibration (f_x and f_y are the principal distances of the depth lens in the x and y directions and (u_0, v_0) is the principal point), (u, v, 1) is the homogeneous pixel coordinate p_q, and Z_c is the depth value of p_q on the (n+1)-th depth image; the 3D coordinates of q_{p_0} are then obtained from equation (3).
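A sketch of step B: for a pinhole intrinsic matrix, equation (3) reduces to the closed form below, with f_x, f_y, u_0, v_0 taken from the calibrated depth intrinsics; the function name is illustrative.

```python
import numpy as np

def forward_project(u, v, Z_c, fx, fy, u0, v0):
    """Equation (3): lift pixel (u, v) with depth Z_c, read from the (n+1)-th
    depth image, back to a 3D point q_{p0} in the depth camera frame."""
    X_c = (u - u0) * Z_c / fx
    Y_c = (v - v0) * Z_c / fy
    return np.array([X_c, Y_c, Z_c])
```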

C. Equation (4) projects q_{p_0} onto the normal vector \vec{p} at p_0, giving the coordinates of point p_1:

$$\alpha = (q_{p_0} - p_0) \cdot \vec{p}, \qquad p_1 = p_0 + \alpha\,\vec{p} \qquad (4)$$

In equation (4), \vec{p} is the unit normal vector at p_0.

D. After p_1 is obtained, it is taken as the new p_0 and the three steps A, B, and C above are repeated to project again, recursively yielding p_2. The iteration continues, and the point q_{p_i} obtained at the i-th projection converges to the intersection q_s; as i tends to infinity, p_i → q_s. In practice, limited by the number of projections, the termination condition of the iteration is:

$$\| p_i - q_{p_i} \| < \varepsilon_c \qquad (5)$$

where ε_c is a given threshold of magnitude 10⁻⁶. After the converged intersection q_s is obtained, the tangent plane through q_s is determined and p_0 is projected onto it, giving the projected point q'_s.
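Putting steps A–D together, the sketch below iterates the point-to-projection search until convergence and then projects p_0 onto the tangent plane at q_s. It reuses the back_project and forward_project sketches above; normal_at (a local surface-normal estimate on S_Q, e.g. by PCA of neighboring points) is an assumed helper that the patent does not define.

```python
import numpy as np

def find_correspondence(p0, n0, depth_q, M_Q, T_Q, intr, eps_c=1e-6, max_iter=20):
    """Steps A-D: iterate along the normal line through p0 until the intersection
    q_s with the target surface S_Q converges (eq. 5), then project p0 onto the
    tangent plane at q_s to obtain the registration point q'_s.
    intr = (fx, fy, u0, v0) are the calibrated depth intrinsics."""
    fx, fy, u0, v0 = intr
    p_i = np.asarray(p0, dtype=float)
    n0 = np.asarray(n0, dtype=float)
    for _ in range(max_iter):
        u, v = back_project(p_i, M_Q, T_Q)                           # step A (eq. 2)
        q_pi = forward_project(u, v, depth_q[v, u], fx, fy, u0, v0)  # step B (eq. 3)
        if np.linalg.norm(p_i - q_pi) < eps_c:                       # termination (eq. 5)
            break
        alpha = np.dot(q_pi - p_i, n0)                               # step C (eq. 4)
        p_i = p_i + alpha * n0                                       # step D: next iterate
    q_s = q_pi
    n_q = normal_at(depth_q, q_s)              # assumed helper: normal of S_Q at q_s
    q_s_prime = p0 - np.dot(p0 - q_s, n_q) * n_q   # project p0 onto the tangent plane
    return q_s_prime
```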

E. Steps A–D above are performed for every point of the depth image preprocessed in step 1, giving q'_s for each point and finally forming the matched point pair sets {p_0} and {q'_s}.

F. Affected by the iteration's initial value and by the smoothness of the point cloud surface, the matched pairs {p_0} and {q'_s} established above still contain erroneous correspondences, especially in the early iterations; therefore the present invention uses a rigid motion consistency constraint to cull erroneous pairs. Assuming (p_1, q_1) and (p_2, q_2) are candidate corresponding pairs, the rigid motion consistency constraint is:

$$-t \le \frac{\|p_1 - p_2\| - \|q_1 - q_2\|}{\max(\|p_1 - p_2\|,\ \|q_1 - q_2\|)} \le t \qquad (6)$$

In equation (6), t is a predefined threshold; all candidate pairs (p_i, q_i), i = 1, 2, …, n, are checked for mutual compatibility, and incompatible points are culled.
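A sketch of the rigid motion consistency culling of equation (6). The patent states that mutually incompatible pairs are rejected but not how the pairwise checks are aggregated; keeping a pair only when it is compatible with a majority of the other pairs is an assumption made here for concreteness.

```python
import numpy as np

def cull_by_rigidity(P, Q, t=0.1):
    """Keep the candidate pair (P[i], Q[i]) only if the distances it forms with
    most other pairs are preserved within relative tolerance t (eq. 6)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    n = len(P)
    keep = np.zeros(n, dtype=bool)
    for i in range(n):
        votes = 0
        for j in range(n):
            if i == j:
                continue
            dp = np.linalg.norm(P[i] - P[j])
            dq = np.linalg.norm(Q[i] - Q[j])
            denom = max(dp, dq)
            if denom > 0 and abs(dp - dq) / denom <= t:
                votes += 1
        keep[i] = votes >= (n - 1) / 2   # compatible with a majority of the other pairs
    return P[keep], Q[keep]
```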

G. The new corresponding point sequences {p_0} and {q'_s} obtained after culling erroneous candidate pairs are taken as precise matches; the singular value decomposition method is used to find the unit eigenvector corresponding to the largest eigenvalue of the matrix, and the rotation matrix R and translation vector T between the point clouds of frame n and the adjacent frame n+1 are computed, achieving fine registration between adjacent frames.
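A sketch of step G under the common SVD (Kabsch) formulation. The patent phrases this as taking the unit eigenvector of the largest eigenvalue, which points to the quaternion variant of the same least-squares problem; the SVD form below yields the equivalent rigid transform R, T.

```python
import numpy as np

def estimate_rigid_transform(P, Q):
    """Least-squares rigid transform (R, T) with Q ≈ R @ P + T, estimated by
    SVD of the cross-covariance. P, Q are (n, 3) arrays of matched points
    {p0} and {q'_s} after outlier culling."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)          # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    T = q_mean - R @ p_mean
    return R, T
```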

4. The fine registration results between adjacent point cloud frames obtained in step 3 all take the coordinate system of the first frame's point cloud as the reference coordinate system. Steps 1–3 above are applied to all other pairs of adjacent frames to obtain the rotation matrix R and translation vector T between each pair, and finally equation (7) is used to unify all frame point clouds into the reference coordinate system.

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}_{ref} = R \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + T \qquad (7)$$

This generates the point cloud model of the object or scene in a unified coordinate system, in .ply format. The file can be displayed and further processed in common 3D geometry processing systems such as Meshlab or Geomagic Studio.
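The following sketch shows how the pairwise results could be chained into per-frame transforms relative to the first frame (equation (7)) and how the merged cloud could be written as an ASCII .ply file; the assumption that each pairwise (R, T) maps frame k+1 into frame k, and all function names, are illustrative.

```python
import numpy as np

def accumulate_to_reference(pairwise):
    """Compose pairwise (R, T) results into per-frame transforms mapping each
    frame into the first frame's coordinate system (equation (7)).
    pairwise[k] is assumed to map points of frame k+1 into frame k."""
    R_ref, T_ref = np.eye(3), np.zeros(3)
    chain = [(R_ref.copy(), T_ref.copy())]
    for R, T in pairwise:
        R_ref, T_ref = R_ref @ R, R_ref @ T + T_ref
        chain.append((R_ref.copy(), T_ref.copy()))
    return chain

def write_ply(path, points):
    """Write an (n, 3) float array as an ASCII .ply point cloud
    (readable in Meshlab or Geomagic Studio)."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n"
                f"element vertex {len(points)}\n"
                "property float x\nproperty float y\nproperty float z\n"
                "end_header\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

# Usage sketch: `frames` is a list of (n_i, 3) point clouds, `pairwise` the
# per-pair (R, T) from the registration step.
# chain = accumulate_to_reference(pairwise)
# merged = np.vstack([pts @ R.T + T for pts, (R, T) in zip(frames, chain)])
# write_ply("scene.ply", merged)
```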

Compared with existing methods, the fine registration method that combines point-to-plane and point-to-projection constraints with the rigid motion consistency constraint retains the speed and accuracy advantages of existing algorithms: the execution time is shorter, the point cloud registration accuracy is higher, the structural features of the resulting model are clearer, and the algorithm is robust and reliable, satisfying the requirements for recovering and modeling objects and small indoor scenes.

The invention can be applied to the recovery and modeling of objects and small indoor scenes based on RGB-D camera data, and is useful for the quantitative recovery and modeling of target objects and small scenes.

Brief Description of the Drawings

Figure 1 is the overall flowchart of the algorithm of the present invention.

Figure 2 is a detailed flowchart of the improved ICP algorithm of the present invention.

Figure 3 is a schematic diagram of the point cloud registration method combining point-to-plane and point-to-projection constraints proposed by the present invention.

Figure 4 shows the 41st frame of the conference room scene acquired by the RGB-D sensor (a Kinect camera): Figure 4(a) is the corresponding color image (shown in grayscale), Figure 4(b) the infrared image, and Figure 4(c) the depth data.

Figure 5 shows the data of the next adjacent frame (frame 42): Figure 5(a) is the color image (shown in grayscale), Figure 5(b) the infrared image, and Figure 5(c) the depth data.

Figures 6(a) and 6(b) show the results of bilateral-filter denoising and smoothing of the depth data of the conference room scene in Figures 4 and 5, respectively.

Figure 7 shows the matching result of the two adjacent color images of Figures 4 and 5 (shown in grayscale).

Figure 8 shows the colored point cloud (shown in grayscale).

Figure 9 shows the scene model built after registration by the method of the present invention; the gray triangular cones in the right image indicate the position and viewing direction of the Kinect camera when each point cloud was acquired.

Detailed Description

The specific implementation is described below with an example: a representative RGB-D camera, Microsoft's Kinect 2.0, is used to acquire sequential color and depth data of an object or small indoor scene, and 3D model data are generated in .ply format, which is supported by common 3D geometry processing systems such as Meshlab and Geomagic Studio.

Before data acquisition, the software and hardware platform for Kinect data collection is configured: a laptop running Windows 8 (64-bit) with a USB 3.0 interface is used, Microsoft's own SDK development kit is pre-installed, and the open-source cross-platform computer vision library OpenCV 3.0 is installed and configured. During acquisition, for a fixed target or scene and within the depth recognition range of the Kinect camera, the hand-held camera is moved smoothly and slowly around the object or scene through a full circle, forming a closed loop; large shakes should be avoided during acquisition to prevent large deviations in frame-to-frame registration.

Example 1:

For the conference room scene of this example, 151 frames of data were collected in total, with a sampling interval of 3 s.

The experimental data acquired by the Kinect 2.0 include the color images, infrared images, and depth data of the conference room scene; the color image resolution is 1920×1080 and the depth and infrared resolution is 512×424. Below are some of the gray values (depth values) of the original depth image of frame 41, covering rows 209–224 and columns 192–220.

Table 1

The relative pose between the color lens and the depth lens obtained by calibration consists of a rotation matrix and a translation vector. Following the flow shown in Figure 1, the entire implementation process is as follows:

1. Depth image filtering

Before converting the image-form depth data acquired by the Kinect camera into a 3D point cloud of (X, Y, Z) coordinates, note that during actual acquisition the depth sensor is affected by ambient lighting and by the surface material of the object, so the depth data contain a great deal of noise, with invalid data especially at object edges. Therefore, before the conversion, equation (1) is used to denoise and smooth the depth image while preserving the object's edge information.

The bilateral filter template size in the present invention is 3×3, and the standard deviations of the spatial-domain and range-domain Gaussian functions are 10 and 15, respectively. The spatial-domain kernel is:

0.9900  0.9950  0.9900
0.9950  1.0000  0.9950
0.9900  0.9950  0.9900

The range-domain kernel is:

Because the depth data volume is large, only the original gray values of a 3×3 region of the frame-41 depth data are listed here:

177  177  177
177  176  178
175  179  180

After bilateral filtering of this frame, the values in the same 3×3 region are:

175  176  175
176  176  176
173  175  172

Figure 6(a) shows the result of bilateral-filter denoising and smoothing of the depth data of Figure 4 (frame 41), and Figure 6(b) the result for Figure 5 (the adjacent next frame, frame 42).
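As a quick check of the kernel values listed above, the spatial-domain weights for a 3×3 window with δ_g = 10 can be reproduced directly from equation (1):

```python
import numpy as np

delta_g = 10.0
offsets = np.arange(-1, 2)                       # 3x3 window around the center pixel
ii, jj = np.meshgrid(offsets, offsets, indexing="ij")
w_g = np.exp(-(ii**2 + jj**2) / (2 * delta_g**2))
print(w_g.round(4))   # corners ~0.9900, edges ~0.9950, center 1.0000, matching the kernel above
```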

2. Color image feature point detection and image matching

To generate a 3D model consistent with the scene, texture reconstruction is performed alongside geometric reconstruction: a feature detector finds feature points on the color images and matches the two adjacent color frames. Because the data volume is large, the ORB algorithm is used to extract the color image feature points; compared with the SIFT and SURF operators it runs faster and matches with high accuracy, and its binary descriptors save memory. The color images of frames 41 and 42 are matched; the result is shown in Figure 7. According to the calibrated relative pose between the color lens and the depth lens, the color images are aligned with the depth data to generate colored point clouds of the object and scene from different viewpoints; the colored point clouds generated for frames 41 and 42 are shown in Figures 8(a) and 8(b), respectively.

3. Improved ICP fine registration method

First, the depth data of frame 41 after depth filtering are converted into point cloud data containing 8475 points; the information of the first 72 points of this point cloud is:

Table 2

The point cloud registration algorithm combining point-to-plane and point-to-projection constraints is used to estimate and optimize the position and attitude of the object or small indoor scene at each exposure time. The coordinate system of the first frame's point cloud is taken as the reference coordinate system, and all other frame point clouds are transformed into it.

A. Using the back-projection method, a point p_0 on the frame-41 point cloud, with coordinates (-0.432926, -0.383050, 1.971088), is projected along the viewpoint direction of the frame-42 point cloud according to equation (2), giving its pixel coordinates p_q = (348, 270).

B. Using the forward-projection method, the coordinates p_q are substituted into the right-hand side of equation (3), giving the new 3D point q_{p_0} with coordinates (0.261255, 0.179193, 1.000000).

C. Using equation (4), q_{p_0} is projected onto the normal vector at p_0 of the frame-41 point cloud, giving point p_1 with coordinates (-0.308091, -0.357967, 2.012104).

D. After p_1 is obtained, equation (5) is evaluated; the threshold ε_c is taken as 10⁻⁶. While the value exceeds the threshold ε_c, the coordinates of p_1 are taken as the new p_0, the process returns to step A, and steps A, B, and C are repeated to project again, recursively yielding p_2; once the value falls below ε_c, q_s is obtained. After the converged intersection q_s is obtained, the tangent plane through the point q_s on frame 42 is determined and p_0 is projected onto it, giving the projected point q'_s with coordinates (-0.432665, -0.387157, 1.975511).

E. Steps A–D above are performed for every point of the point cloud generated from this frame's depth image after the preprocessing of step 1, giving q'_s for each point and finally forming the initial matched point pair sets {p_0} and {q'_s}. Because the number of matched points between the two frames is large, Table 3 shows only some of the corresponding matched points in the initial matched pair set for frame 41 and the adjacent frame 42:

Table 3

F. Affected by the iteration's initial value and by the smoothness of the point cloud surface, the candidate matched pairs {p_0} and {q'_s} established above still contain erroneous correspondences; the present invention uses the rigid motion consistency constraint to cull erroneous pairs. Assuming (p_1, q_1) and (p_2, q_2) are candidate matched pairs, the rigid motion consistency constraint of equation (6) is applied to all matched points of frames 41 and 42 determined above, where t is a predefined threshold (10% in this method); all candidate pairs (p_i, q_i), i = 1, 2, …, n, are checked for mutual compatibility and incompatible points are culled. Below is the result of the compatibility computation for the first two pairs of points in Table 3 above:

The result shows that these two pairs are incompatible and should be culled. All matched pairs obtained in step E above are processed in this way to remove erroneous pairs, yielding more accurate matched pairs. Table 4 lists some of the corresponding matched points in the matched pair set obtained after this processing:

Table 4

p0   -3.5780711   0.54807687    1.6893054      q's0   -3.5738416   0.547196265   1.6887584
p1   -3.6656244   0.51411259    1.7566197      q's1   -3.6676449   0.51509092    1.7566626
p2   -3.7146368   0.50028324    1.8223131      q's2   -3.7141747   0.50061625    1.8230906
p3   -2.2115169   0.99832821    0.29833382     q's3   -2.2117355   0.9957425     0.29692292
p4   -2.2430336   1.0023514     0.41302541     q's4   -2.2431977   1.0026156     0.41257041
p5   -2.2954671   0.98881006    0.49076268     q's5   -2.2955692   0.9881003     0.50018574
p6   -2.4128008   0.94495845    0.58901        q's6   -2.4125545   0.94472801    0.57922614
p7   -3.0111077   0.59995174    0.41161895     q's7   -3.0111504   0.599762393   0.4117605561
p8   -2.7442737   0.80849338    0.79625237     q's8   -2.7482291   0.79723013    0.80870435
p9   -2.8855073   0.77564269    1.0258969      q's9   -2.8863597   0.77563919    1.0290474
p10  -3.0237136   0.73981613    1.2300546      q's10  -3.0245329   0.73907477    1.2307017
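As an illustrative check of equation (6), the snippet below evaluates the compatibility ratio for the first two retained pairs of Table 4 with t = 10%; the numbers are taken directly from the table.

```python
import numpy as np

p0, qs0 = np.array([-3.5780711, 0.54807687, 1.6893054]), np.array([-3.5738416, 0.547196265, 1.6887584])
p1, qs1 = np.array([-3.6656244, 0.51411259, 1.7566197]), np.array([-3.6676449, 0.51509092, 1.7566626])

dp = np.linalg.norm(p0 - p1)      # distance between the two source points
dq = np.linalg.norm(qs0 - qs1)    # distance between the two matched points
ratio = abs(dp - dq) / max(dp, dq)
print(ratio)                      # about 0.04, i.e. within the 10% tolerance of eq. (6)
```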

G. The new corresponding point sequences {p_0} and {q'_s} obtained after culling erroneous candidate pairs are taken as matched points; the singular value decomposition method is used to find the unit eigenvector corresponding to the largest eigenvalue of the matrix, from which the six degrees of freedom of the 3D rigid transformation between the frame-41 and adjacent frame-42 point clouds are computed and expressed as a rotation matrix R and translation vector T.

For all other pairs of adjacent frame point clouds, steps A–G above are applied in turn to obtain the inter-frame transformation parameters, i.e. the rotation matrix R and translation vector T.

4. Generation of the object and scene model

Using the fine registration results R and T between adjacent frames obtained in step 3, the 151 frames of point cloud data are transformed via equation (7) into a unified coordinate system (that of the depth camera at the first frame) and merged, yielding the 3D scene point cloud in .ply format (displayable in Meshlab, Geomagic Studio, and similar software). The generated model of the object or scene is shown in Figure 9; part of the data in the generated .ply file is shown in Table 5:

Table 5

Claims (8)

1. An object and indoor small scene recovery and modeling method based on RGB-D camera data, characterized in that continuous frame depth data (point clouds) of the scene or object to be modeled are acquired with an RGB-D camera along a closed loop, and the acquired data are processed by the following steps: ① bilateral filtering of the depth data (point clouds); ② detection of color image feature points and image matching; ③ in the RGB-D depth data (point cloud) registration modeling, integrating point-to-plane and point-to-projection constraints and determining the accurate registration parameters R, T between adjacent depth data (point clouds) with the rigid motion consistency constraint; ④ applying the fine registration parameters between adjacent depth data (point clouds) to the accurate registration of the sequential depth data (point clouds) obtained by the RGB-D camera to obtain a point cloud model of the object or small scene (.ply format), which can be used for measuring the object and for further CAD modeling.
2. The method of claim 1, wherein step ① preprocesses the depth data (point cloud): bilateral filtering is used to denoise and smooth the depth image while preserving the edge information of the object's depth image, taking into account both the spatial distance between pixels and the similarity between pixel values, with equation (1) as the bilateral filter:
$$w_s = w_g \times w_h$$
$$w_g(i,j) = \exp\!\left(-\frac{(i-x)^2 + (j-y)^2}{2\delta_g^2}\right)$$
$$w_h(i,j) = \exp\!\left(-\frac{\bigl(I(i,j)-I(x,y)\bigr)^2}{2\delta_h^2}\right) \qquad (1)$$
In equation (1), w_g is the spatial-domain weight determined by the second expression in (1), w_h is the range-domain weight determined by the third expression in (1), (x, y) is the index on the depth image to be filtered, (i, j) is an index within the neighborhood of (x, y), δ_g is the standard deviation of the spatial domain, δ_h is the standard deviation of the gray-level (range) domain, and w_s is the bilateral filter weight.
When the spatial-domain standard deviation δ_g is fixed and the gray-level (range) standard deviation δ_h gradually increases, the weights w_h for different gray values all become large, the edge-preserving effect of the depth (gray-level) variation becomes weaker and weaker, and the bilateral filter degenerates into a Gaussian filter; with δ_h fixed and δ_g gradually increasing, the blurring of the image becomes more and more obvious.
3. The method of claim 2, wherein the size of the bilateral filter template is 3×3 and the standard deviations of the spatial-domain and range-domain Gaussian functions are 10 and 15, respectively.
4. The method of claim 1, wherein step ② extracts color image feature points using the ORB (Oriented BRIEF) algorithm and describes the extracted features with binary codes. After the color images are matched, the color images are aligned with the depth data according to the calibrated relative pose between the color lens and the depth lens, generating colored point clouds of the scene from different viewing angles.
5. The method of claim 1, wherein in step ③, for the precise registration parameters between adjacent depth data (point clouds), let S_P and S_Q be two adjacent frames of point cloud data acquired by the RGB-D camera from different viewpoints P and Q. The RGB-D camera moves slowly during point cloud acquisition, so the translation of the registration between adjacent frames is close to the zero vector and the rotation matrix is close to the identity matrix E; 0 and E can therefore be used as the initial values for point cloud registration. S_P and S_Q are local surfaces parameterized by the coarsely registered point clouds, p_0 is a point on S_P, \vec{p} is the normal vector at p_0, q_s is the intersection of the normal line through p_0 with S_Q, the tangent plane of S_Q is taken through q_s, and q'_s is the projection of p_0 onto that tangent plane. The pair p_0 and q'_s is used as the registration point pair for solving the registration parameters. The key to the point-to-plane registration method is obtaining the intersection q_s.

First the intersection is obtained by the point-to-projection method; then, based on the obtained intersection, the tangent plane through it is determined and the registration point is further defined. For all points on S_P the corresponding registration points on S_Q are obtained by this method, giving the candidate matched point pairs; the rigid motion consistency constraint is then used to cull erroneous pairs from the previously established candidates, and finally the registration pairs remaining after culling are used for the improved ICP fine registration of the two point clouds S_P and S_Q, yielding the fine registration parameters.
The process is as follows:
A. Using the back-projection method, a point p_0 on the n-th frame S_P to be registered is projected along the viewpoint direction of the next frame, the (n+1)-th frame S_Q, to obtain its pixel coordinates p_q; the calculation formula is
$$p_q = M_Q T_Q^{-1} p_0 \qquad (2)$$
In equation (2), M_Q is the central projection matrix (the intrinsic matrix obtained after camera calibration) and T_Q is the transformation matrix that brings the point cloud acquired from viewpoint Q into the common world coordinate system. Since the result of equation (2) is not necessarily integer, it is rounded to obtain the value p_q on the depth image grid.
B. Using the forward-projection method, the p_q obtained in step A is substituted into the right-hand side of equation (3) to obtain the new 3D point coordinate q_{p_0}; the calculation formula is:
$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = Z_c \begin{bmatrix} f_x & 0 & u_0 & 0 \\ 0 & f_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} \qquad (3)$$
In equation (3), the matrix is the intrinsic parameter matrix obtained after camera calibration, in which f_x and f_y are the principal distances of the depth camera lens in the x and y directions and (u_0, v_0) are the principal point coordinates; (u, v, 1) is the homogeneous pixel-coordinate representation p_q of the point p_0, and (X_c, Y_c, Z_c, 1) are homogeneous coordinates in the depth camera coordinate system. Z_c is the depth value of p_q on the (n+1)-th frame S_Q depth image; the three-dimensional coordinates of q_{p_0} are obtained by evaluating equation (3).
C. Equation (4) projects q_{p_0} onto the normal vector \vec{p} at p_0 to obtain the coordinates of the point p_1:
$$\alpha = (q_{p_0} - p_0) \cdot \vec{p}, \qquad p_1 = p_0 + \alpha\,\vec{p} \qquad (4)$$
In equation (4), \vec{p} is the unit normal vector at p_0, and α is the length of the projection of the vector (q_{p_0} - p_0) onto the unit normal vector \vec{p} at p_0.
D. After p_1 is obtained, it is taken as the new p_0 and the steps A, B, and C are repeated to project again, recursively yielding p_2. The iteration continues, the point q_{p_i} obtained at the i-th projection converges to the intersection q_s, and as i tends to infinity, p_i → q_s. The termination condition of the iteration is:
$$\varepsilon_c = \| p_i - q_{p_i} \|, \qquad \varepsilon_c < T \qquad (5)$$
where ε_c is less than the given threshold T. After the converged intersection q_s is obtained, the tangent plane through q_s is determined and p_0 is projected onto it, giving the projected point q'_s.
E. Steps A–D above are performed for every point of the depth image preprocessed in step ①, giving q'_s for each point and finally forming the candidate matched point pair sets {p_0} and {q'_s}.
F. Affected by the iteration's initial value and by the smoothness of the point cloud surface, the matched pairs {p_0} and {q'_s} established by this method still contain erroneous correspondences, which is especially evident at the beginning of the iteration; therefore the invention uses the rigid motion consistency constraint to cull erroneous pairs. Assuming (p_1, q_1) and (p_2, q_2) are candidate corresponding point pairs, the rigid motion consistency constraint is:
$$-t \le \frac{\|p_1 - p_2\| - \|q_1 - q_2\|}{\max(\|p_1 - p_2\|,\ \|q_1 - q_2\|)} \le t \qquad (6)$$
In equation (6), t is a predefined threshold; for all candidate points (p_i, q_i), i = 1, 2, …, n, mutual compatibility is judged and incompatible points are culled.
G. The new corresponding point sequences {p_0} and {q'_s} obtained after culling erroneous candidate pairs are taken as precise matched points; the singular value decomposition method is used to find the unit eigenvector corresponding to the largest eigenvalue of the matrix, and the rotation matrix R and translation vector T between the point clouds of the n-th frame S_P and the adjacent (n+1)-th frame S_Q are computed, achieving accurate registration between adjacent frames.
6. The method of claim 5, wherein the given threshold T = 10⁻⁶.
7. The method of claim 5, wherein the given threshold t in step F of step ③ is 10%.
8. The method of claim 1, wherein in step ④, after the sequential depth data (point clouds) obtained in step ③ are each finely registered to obtain the fine registration parameters between consecutive frame point clouds, the fine registration parameters are used to register and model the consecutive frame point clouds to obtain the point cloud model of the object or small scene (.ply format).
The process is as follows:
The fine registration results for each pair of adjacent point cloud frames obtained in step ③ are all expressed in the reference coordinate system, namely the coordinate system of the first frame's point cloud; all frame point clouds are unified into the reference coordinate system using equation (7).
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}_{ref} = R \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + T \qquad (7)$$
The object and scene point cloud model is thus generated in a unified coordinate system.
CN201710324369.1A 2017-05-10 2017-05-10 Object and indoor small scene based on RGB D camera datas recover and modeling method Pending CN107240129A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710324369.1A CN107240129A (en) 2017-05-10 2017-05-10 Object and indoor small scene based on RGB D camera datas recover and modeling method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710324369.1A CN107240129A (en) 2017-05-10 2017-05-10 Object and indoor small scene based on RGB D camera datas recover and modeling method

Publications (1)

Publication Number Publication Date
CN107240129A true CN107240129A (en) 2017-10-10

Family

ID=59984189

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710324369.1A Pending CN107240129A (en) 2017-05-10 2017-05-10 Object and indoor small scene based on RGB D camera datas recover and modeling method

Country Status (1)

Country Link
CN (1) CN107240129A (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127739A (en) * 2016-06-16 2016-11-16 华东交通大学 A kind of RGB D SLAM method of combination monocular vision

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
SOON-YONG PARK ET AL: "An accurate and fast point-to-plane registration technique", PATTERN RECOGNITION LETTERS *
YU Shengwei et al. (eds.): "MATLAB Image Filtering and Denoising Analysis and Its Applications", 30 June 2015, Beijing: Beihang University Press *
YANG Yingbao et al.: "A point cloud registration algorithm based on an improved projection method", Application Research of Computers *
XIE Zexiao et al.: "A review of ICP and its improved algorithms for 3D point cloud data registration", Periodical of Ocean University of China *
CHEN Xiaoming et al.: "Research on real-time 3D reconstruction and filtering algorithms based on Kinect depth information", Application Research of Computers *
WEI Yumian: "Research on 3D reconstruction based on Kinect depth images", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (51)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107680039A (en) * 2017-09-22 2018-02-09 武汉中观自动化科技有限公司 A kind of point cloud method and system based on white light scanning instrument
CN107845134A (en) * 2017-11-10 2018-03-27 浙江大学 A kind of three-dimensional rebuilding method of the single body based on color depth camera
CN107845134B (en) * 2017-11-10 2020-12-29 浙江大学 A 3D reconstruction method of a single object based on a color depth camera
CN107886101A (en) * 2017-12-08 2018-04-06 北京信息科技大学 A kind of scene three-dimensional feature point highly effective extraction method based on RGB D
CN110049323B (en) * 2018-01-17 2021-09-07 华为技术有限公司 Encoding method, decoding method and apparatus
US11388446B2 (en) 2018-01-17 2022-07-12 Huawei Technologies Co., Ltd. Encoding method, decoding method, and apparatus
CN110278719B (en) * 2018-01-17 2021-11-19 华为技术有限公司 Encoding method, decoding method and device
CN110278719A (en) * 2018-01-17 2019-09-24 华为技术有限公司 Coding method, coding/decoding method and device
WO2019140973A1 (en) * 2018-01-17 2019-07-25 华为技术有限公司 Encoding method, decoding method, and device
CN110049323A (en) * 2018-01-17 2019-07-23 华为技术有限公司 Coding method, coding/decoding method and device
CN108596947B (en) * 2018-03-27 2021-09-17 南京邮电大学 Rapid target tracking method suitable for RGB-D camera
CN108596947A (en) * 2018-03-27 2018-09-28 南京邮电大学 A kind of fast-moving target tracking method suitable for RGB-D cameras
CN108596974B (en) * 2018-04-04 2020-08-04 清华大学 Dynamic scene robot positioning and mapping system and method
CN108596974A (en) * 2018-04-04 2018-09-28 清华大学 Dynamic scene robot localization builds drawing system and method
CN108742159A (en) * 2018-04-08 2018-11-06 浙江安精智能科技有限公司 Intelligent control device of water dispenser based on RGB-D cameras and its control method
CN108648224A (en) * 2018-05-18 2018-10-12 杭州电子科技大学 A method of the real-time scene layout identification based on artificial neural network and reconstruction
CN108648224B (en) * 2018-05-18 2021-07-13 杭州电子科技大学 A method for real-time scene layout recognition and reconstruction based on artificial neural network
CN108734772A (en) * 2018-05-18 2018-11-02 宁波古德软件技术有限公司 High accuracy depth image acquisition methods based on Kinect fusion
CN108986161B (en) * 2018-06-19 2020-11-10 亮风台(上海)信息科技有限公司 Three-dimensional space coordinate estimation method, device, terminal and storage medium
CN108986161A (en) * 2018-06-19 2018-12-11 亮风台(上海)信息科技有限公司 A kind of three dimensional space coordinate estimation method, device, terminal and storage medium
CN110944187A (en) * 2018-09-19 2020-03-31 华为技术有限公司 Point cloud encoding method and encoder
WO2020057338A1 (en) * 2018-09-19 2020-03-26 华为技术有限公司 Point cloud coding method and encoder
CN110944187B (en) * 2018-09-19 2022-05-10 华为技术有限公司 Point cloud encoding method and encoder
US11875538B2 (en) 2018-09-19 2024-01-16 Huawei Technologies Co., Ltd. Point cloud encoding method and encoder
CN109360267A (en) * 2018-09-29 2019-02-19 杭州蓝芯科技有限公司 A kind of thin objects quick three-dimensional reconstructing method
CN109544677B (en) * 2018-10-30 2020-12-25 山东大学 Indoor scene main structure reconstruction method and system based on depth image key frame
CN109544677A (en) * 2018-10-30 2019-03-29 山东大学 Indoor scene main structure method for reconstructing and system based on depth image key frame
CN109785381A (en) * 2018-12-06 2019-05-21 苏州炫感信息科技有限公司 A kind of optical inertial fusion space-location method, positioning device and positioning system
CN109697753A (en) * 2018-12-10 2019-04-30 智灵飞(北京)科技有限公司 A kind of no-manned plane three-dimensional method for reconstructing, unmanned plane based on RGB-D SLAM
CN109697753B (en) * 2018-12-10 2023-10-03 智灵飞(北京)科技有限公司 Unmanned aerial vehicle three-dimensional reconstruction method based on RGB-D SLAM and unmanned aerial vehicle
CN109887030A (en) * 2019-01-23 2019-06-14 浙江大学 Image pose detection method of textureless metal parts based on CAD sparse template
CN109895094A (en) * 2019-02-11 2019-06-18 苏州瀚华智造智能技术有限公司 A kind of industrial robot measurement track analysis of Positioning Error method and system
CN112116655A (en) * 2019-06-20 2020-12-22 北京地平线机器人技术研发有限公司 Method and device for determining position information of image of target object
CN112116655B (en) * 2019-06-20 2024-04-05 北京地平线机器人技术研发有限公司 Target object position determining method and device
CN110415342A (en) * 2019-08-02 2019-11-05 深圳市唯特视科技有限公司 A kind of three-dimensional point cloud reconstructing device and method based on more merge sensors
CN110415342B (en) * 2019-08-02 2023-04-18 深圳市唯特视科技有限公司 Three-dimensional point cloud reconstruction device and method based on multi-fusion sensor
CN110455815A (en) * 2019-09-05 2019-11-15 西安多维机器视觉检测技术有限公司 A kind of method and system of electronic component open defect detection
CN110880185A (en) * 2019-11-08 2020-03-13 南京理工大学 High-precision dynamic real-time 360-degree all-dimensional point cloud acquisition method based on fringe projection
CN111009005A (en) * 2019-11-27 2020-04-14 天津大学 Scene classification point cloud rough registration method combining geometric information and photometric information
CN111178138A (en) * 2019-12-04 2020-05-19 国电南瑞科技股份有限公司 Distribution network wire operating point detection method and device based on laser point cloud and binocular vision
CN110889243A (en) * 2019-12-20 2020-03-17 南京航空航天大学 Aircraft fuel tank three-dimensional reconstruction method and detection method based on depth camera
CN112116703A (en) * 2020-09-08 2020-12-22 苏州小优智能科技有限公司 A 3D camera and an infrared light scanning algorithm for alignment of point clouds and color textures
CN112734830B (en) * 2020-12-30 2022-05-03 无锡祥生医疗科技股份有限公司 Point cloud data-based pose control method, device, equipment and storage medium
CN114693881A (en) * 2020-12-30 2022-07-01 合肥美亚光电技术股份有限公司 Point cloud data processing method, registration device and intraoral scanning device
CN112734830A (en) * 2020-12-30 2021-04-30 无锡祥生医疗科技股份有限公司 Point cloud data-based pose control method, device, equipment and storage medium
CN113379815A (en) * 2021-06-25 2021-09-10 中德(珠海)人工智能研究院有限公司 Three-dimensional reconstruction method and device based on RGB camera and laser sensor and server
CN113920191A (en) * 2021-07-30 2022-01-11 北京工商大学 A Depth Camera-Based 6D Dataset Construction Method
CN113920191B (en) * 2021-07-30 2024-06-04 北京工商大学 6D data set construction method based on depth camera
CN114155256A (en) * 2021-10-21 2022-03-08 北京航空航天大学 A method and system for tracking the deformation of flexible objects using RGBD cameras
CN114155256B (en) * 2021-10-21 2024-05-24 北京航空航天大学 A method and system for tracking deformation of flexible objects using RGBD cameras
CN117557601A (en) * 2023-09-26 2024-02-13 北京长木谷医疗科技股份有限公司 Skeleton registration method and device based on digital twinning

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
WD01: Invention patent application deemed withdrawn after publication (application publication date: 2017-10-10)