
CN106683173A - A Method of Improving the Density of 3D Reconstruction Point Cloud Based on Neighborhood Block Matching - Google Patents

A Method of Improving the Density of 3D Reconstruction Point Cloud Based on Neighborhood Block Matching

Info

Publication number
CN106683173A
CN106683173A, CN201611201364.1A, CN201611201364A
Authority
CN
China
Prior art keywords
point
point cloud
image
prime
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611201364.1A
Other languages
Chinese (zh)
Other versions
CN106683173B (en)
Inventor
宋锐
田野
李星霓
贾媛
李云松
王养利
许全优
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaanxi Xiandian Tongyuan Information Technology Co ltd
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201611201364.1A priority Critical patent/CN106683173B/en
Publication of CN106683173A publication Critical patent/CN106683173A/en
Application granted granted Critical
Publication of CN106683173B publication Critical patent/CN106683173B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for increasing the density of a 3D reconstruction point cloud based on neighborhood block matching. The method comprises: using an image-sequence-based 3D reconstruction algorithm to obtain a rough, sparse point cloud of the object together with the transformation matrix of the camera in 3D space for each captured frame; processing the original images again and performing dense feature matching with a neighborhood-based block matching algorithm; checking the validity of the resulting dense feature point pairs against the recovered camera positions and mapping the qualifying feature points to their corresponding positions in the 3D point cloud; and filtering outliers from the resulting point cloud once with an outlier removal algorithm based on the object contour, followed by one pass of color remapping, yielding a dense point cloud of far higher quality than the original. The invention obtains point clouds whose quality and density are far higher than those of traditional algorithms, greatly improves the results of the original algorithm and the reconstruction quality, and is highly general and robust.

Description

A Method of Improving the Density of 3D Reconstruction Point Cloud Based on Neighborhood Block Matching

Technical Field

The invention belongs to the technical field of computer vision, and in particular relates to a method for increasing the density of a three-dimensional (3D) reconstruction point cloud based on neighborhood block matching.

Background

3D reconstruction from image sequences touches many disciplines and is the inverse of the camera imaging process; its scope covers object recognition, feature detection, feature matching, and related fields. Since its inception, 3D reconstruction has been one of the hot and difficult topics in computer vision. Its only input is ordinary color images, and this generality makes it unmatched in convenience for modeling real-world objects. However, 3D reconstruction involves a large number of quantities that must be computed or measured, so obtaining a high-quality reconstructed model is hard. The core step is usually obtaining a point cloud of high enough quality to express the object model. 3D reconstruction methods generally fall into two categories. (1) Reconstruction from image sequences. The input images are captured by moving the camera around the object. From the features detected in the images and the matches between images, the camera transformation between two frames is recovered. By minimizing the camera-position error over all images, the rotation and translation of the camera relative to the world coordinate system at the moment each frame was captured are determined; from this information the true spatial position of each feature point can be computed, yielding a sparse point cloud of the object, whose density is then increased by point cloud growing algorithms. (2) Reconstruction with a depth camera. A depth camera captures images and depth maps in real time, so for every frame the spatial position of every pixel is available. As the camera moves, the point cloud of each new frame is registered against that of the previous frame, the camera motion between frames is estimated, and the per-frame point clouds are fused into a dense point cloud of the object. Among these approaches, reconstruction from image sequences has the advantages of readily available input data, low requirements on the capture environment, and strong robustness. But because images only provide two-dimensional information, solving back for 3D coordinates introduces many errors and much noise; for example, it is hard to obtain precisely matched feature point pairs, so the resulting point cloud is often not of high enough quality and the final model is unsatisfactory. Depth-camera-based methods can treat every pixel of the captured images as a feature point and therefore obtain sufficiently dense points, but they require a high-quality depth camera and struggle with distant subjects, and current consumer depth cameras have a maximum range of only about 10 m, so the demands on the capture environment and equipment are too high. Reconstruction from image sequences is not subject to this restriction, as long as the object exhibits enough features in the images.

In summary, the point cloud obtained by image-sequence-based 3D reconstruction is not of high enough quality and contains large errors, while depth-camera-based methods place excessive demands on the capture environment, require a high-quality depth camera, and struggle with distant shots.

Summary of the Invention

The purpose of the present invention is to provide a method for increasing the density of a 3D reconstruction point cloud based on neighborhood block matching, aiming to solve the problems that the point cloud obtained by image-sequence-based 3D reconstruction is not of high enough quality, and that depth-camera-based methods place excessive demands on the capture environment, require a high-quality depth camera, and struggle with distant shots.

The present invention is realized as follows. A method for increasing the density of a 3D reconstruction point cloud based on neighborhood block matching comprises the following steps:

Step 1: use an image-sequence-based 3D reconstruction algorithm to obtain a rough, sparse point cloud of the object, and obtain the transformation matrix of the camera in 3D space at the moment each frame was captured;

Step 2: process the original images again and perform dense feature matching between images using a neighborhood-based block matching algorithm;

Step 3: using the recovered camera positions in space, check the validity of the resulting dense feature point pairs, and map the qualifying feature points to their corresponding positions in the 3D point cloud;

Step 4: filter outliers from the resulting point cloud once with an outlier removal algorithm based on the object contour, and perform one pass of color remapping, obtaining a dense point cloud whose final quality is far better than that of the original point cloud.

Further, the method for increasing the density of a 3D reconstruction point cloud based on neighborhood block matching specifically comprises the following steps:

First, perform traditional 3D reconstruction using the orbit image sequence;

Second, select adjacent images for the neighborhood-based block-matching dense feature generation algorithm and determine the neighborhood step size n, i.e., each image computes dense features with the n images on each side of it (two on each side in the example used here);

Third, after the step size n for matching adjacent images is determined, traverse the entire image sequence, taking each image as the reference image and running the neighborhood-based block matching algorithm against the n frames that follow it; in the original image, a point is taken every so many pixels along the row and column directions as a feature point to be matched, for block-matching-based feature computation;

Fourth, once the sampling rate r is determined, for each pair of images (P1, P2) whose features are to be matched, the set Pt of matching points in P1 to be computed is obtained; traverse all feature points in Pt, and centered on each feature point take an image block of size X*Y around it and enlarge it 8 times along both rows and columns by interpolation, denoted Isrc; in P2, centered on the same coordinates, take an image block of size M*N and likewise enlarge it 8 times by interpolation, denoted Idist;

Fifth, when the Idist image block is large enough, the image block Imatch corresponding to Isrc can be found within it; the position of the center of Imatch in P2 is then the matching point on P2 of the feature point used to construct Isrc;

Sixth, after feature matching is complete, the required dense feature pairs between image pairs are obtained; for each frame in the image sequence, denote by P2 an image against which it has been feature-matched, and denote by C1 and C2 the camera optical centers at which P1 and P2 were captured; traverse the positions of all feature points in the feature point set Pt of P1, find the position Pt1' of each on the image plane in 3D space, and likewise find the position Pt2' on the image plane in 3D space of the point on P2 that matches Pt1; cast the rays C1Pt1' and C2Pt2' in 3D space, obtaining rays L1 and L2;

Seventh, for the resulting rays L1 and L2, if the matching pair Pt1 and Pt2 is a correct match, then L1 and L2 should intersect in space at the corresponding point of the object model; a threshold T is set, and when the distance between the closest points of these two skew lines is less than T, the pair is considered a valid matching pair;

Eighth, for each valid matching pair, if the rays L1 and L2 intersect at a point, add the intersection to the initial point cloud as a new feature point; if L1 and L2 are skew lines whose closest-point distance is less than the threshold T, take the midpoint of the closest point pair of L1 and L2 as a new 3D point and add it to the initial point cloud;

Ninth, perform noise filtering based on the object contour: for each frame, extract the object contour and, using the recovered camera transformation matrices, back-project the point cloud onto the image plane in 3D space; points inside the contour region are kept, while any point that falls outside the object contour after back-projection into any frame is deleted from the point cloud;

Tenth, back-project the point cloud onto each frame's image plane in space, record for every pixel on the image plane the point of the point cloud closest to it under back-projection, and assign that pixel's value to this nearest point, thereby completing the remapping of the point cloud colors.

Further, the camera transformation matrix comprises the rotation and translation of the camera optical center relative to the origin of the world coordinate system, i.e., [R|T], where R is the rotation matrix and T is the translation vector.

Further, in step 2 the images are captured with a 5° rotation between every two frames, and a step size of 2 is used.

Further, in step 3 a sampling rate of one pixel every 2 rows and 2 columns is used; the number of selected pixels is proportional to the running time of the algorithm.

Further, in step 4 the values of M and N must be greater than X and Y respectively; the parameter setting X = Y = 40 and M = N = 80 is used.

Further, in step 5, the formula for the correlation coefficient R at position (x, y) is:

$$R(x,y)=\sum_{x',y'}\bigl(T'(x',y')\cdot I'(x+x',y+y')\bigr)$$

where:

$$T'(x',y')=T(x',y')-\frac{1}{w\cdot h}\sum_{x'',y''}T(x'',y'')$$

$$I'(x+x',y+y')=I(x+x',y+y')-\frac{1}{w\cdot h}\sum_{x'',y''}I(x+x'',y+y'')$$

Here T(x, y) is the pixel value at coordinate (x, y) in image Isrc, I(x, y) is the pixel value at coordinate (x, y) in image Idist, and w and h are the width and height of the image, respectively.

Further, in step 6, when locating the feature points on the image plane of each frame, the camera transformation matrix [R|T] is needed. Let the focal length be f; for a frame F in which a feature point lies at (x, y) on the image and the rotation-translation matrix is [R|T], its 3D coordinates (X, Y, Z) in the world coordinate system are computed as:

$$\begin{pmatrix}X\\Y\\Z\end{pmatrix}=R^{-1}\cdot\left(\begin{pmatrix}x\\y\\f\end{pmatrix}-T\right)$$

Further, in step 7, the threshold for considering the two rays L1 and L2 valid is set to 1e-2;

in step 8, the parameter k is 5;

in step 9, back-projecting the point cloud onto the image plane of the i-th frame requires the camera transformation matrix [R|T]; for a point (X, Y, Z) in the point cloud, its projected coordinates (u, v) in the i-th frame are computed as:

$$\begin{pmatrix}u\\v\\1\end{pmatrix}=\left(R\cdot\begin{pmatrix}X\\Y\\Z\end{pmatrix}+T\right)\Big/f$$

where f is the focal length of the i-th frame;

in step 10, if several image planes map one of their pixels to the same point Pt in the point cloud, the RGB of Pt is taken as the mean of the pixel values of the corresponding points in these planes.

Another object of the present invention is to provide a computer vision system applying the above method for increasing the density of a 3D reconstruction point cloud based on neighborhood block matching.

The method provided by the present invention places no special requirements on the image sequence and does not depend heavily on parameter tuning, and it can greatly improve the quality and density of the point cloud. Building on image-sequence-based 3D reconstruction, the invention further expands the resulting point cloud to increase its completeness and density; on top of existing algorithms that obtain sparse point clouds, it greatly increases the number of point cloud features and thereby improves the final point cloud result.

Compared with 3D reconstruction algorithms based on SIFT features, the point cloud obtained by the invention is far better in density, completeness, and color fidelity than that of common algorithms. SIFT also has difficulty extracting correct feature points in regions with smooth appearance, whereas the block-matching-based approach of the invention makes maximal use of the information in each pixel's neighborhood and adds as many correct feature points as possible. At the same time, by relying on the already computed camera transformation matrices, it removes the imprecision of traditional block matching and can accurately supplement missing or overly sparse regions of the point cloud with large numbers of good-quality points.

Compared with depth-image-based methods: those methods require a depth image for every frame and only small displacements between adjacent frames, so they have little practical value in scenes where depth maps cannot be obtained; current consumer high-resolution depth cameras typically sense depths of only about 10 meters, while laser scanning reaches far enough but lacks sufficient accuracy. The present invention achieves high-quality model reconstruction from plain images alone, and the displacement between two frames need not be as precise as depth-image-based methods require, which increases generality and ease of use. The procedure is: perform traditional SIFT-based 3D reconstruction on the orbit image sequence; determine the neighborhood matching step size n; determine the sampling step, i.e., the feature points to be matched; traverse the image sequence and perform feature extraction based on neighborhood block matching on the image pairs to be computed; filter those features using the camera matrices obtained in the first step; and, based on the image contours, filter the point cloud in space once more and correct its colors. The invention obtains point clouds whose quality and density are far higher than those of traditional algorithms, greatly improves the results of the original algorithm and the reconstruction quality, and is a highly general and robust way to improve the final point cloud quality of 3D reconstruction.

Brief Description of the Drawings

Fig. 1 is a flowchart of the method for increasing the density of a 3D reconstruction point cloud based on neighborhood block matching provided by an embodiment of the present invention.

Fig. 2 is a flowchart of Embodiment 1 of the present invention.

Detailed Description

To make the purpose, technical solution, and advantages of the present invention clearer, the invention is further described in detail below with reference to the embodiments. It should be understood that the specific embodiments described here are only intended to explain the invention, not to limit it.

The application principle of the present invention is described in detail below with reference to the drawings.

As shown in Fig. 1, the method for increasing the density of a 3D reconstruction point cloud based on neighborhood block matching provided by an embodiment of the present invention comprises the following steps:

S101: use a traditional image-sequence-based 3D reconstruction algorithm to obtain a rough, sparse point cloud of the object, and obtain the transformation matrix of the camera in 3D space at the moment each frame was captured;

S102: process the original images again and perform dense feature matching using a neighborhood-based block matching algorithm; this matching method yields feature pairs that are sufficiently dense but of lower precision;

S103: using the recovered camera positions in space, check the validity of the resulting dense feature point pairs, and map the qualifying feature points to their corresponding positions in the 3D point cloud;

S104: filter outliers from the resulting point cloud once with an outlier removal algorithm based on the object contour, and perform one pass of color remapping, obtaining a dense point cloud whose final quality is far better than that of the original point cloud.

The application principle of the present invention is further described below in combination with specific embodiments.

Embodiment 1:

The method for increasing the density of a 3D reconstruction point cloud based on neighborhood block matching provided by this embodiment comprises the following steps:

Step 1: capture the target object from all around, obtaining images from various angles.

Step 2: reconstruct with a traditional 3D reconstruction method based on SIFT feature matching, obtaining the camera rotation matrix for each captured frame.

Step 3: determine the step size for image matching, and extract dense matching feature points with the feature matching algorithm based on neighborhood block matching; the feature extraction formula is described in detail under S5.

Step 4: after obtaining the dense matching feature points, check their validity in 3D space; delete from the point set the feature points that do not satisfy the threshold condition, and compute the true spatial positions of those that do.

Step 5: add all the valid points from step 4 to the original point cloud to increase its density and completeness.

Step 6: run the contour-based noise filtering algorithm and the color correction algorithm to remove the small number of remaining wrong points and correct the colors of the expanded point cloud, completing the whole pipeline.

Following the above steps, a denser, more complete, high-quality point cloud is finally obtained.

The method provided by this embodiment of the present invention specifically comprises the following steps:

S1: perform traditional 3D reconstruction using the orbit image sequence. Because traditional 3D reconstruction is based on the SIFT feature algorithm, which is highly accurate and supports sub-pixel feature extraction, accurate transformation matrices for the camera position in 3D space can be obtained.
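As an illustration of what this traditional SIFT-based step might look like, the following is a minimal sketch of recovering the relative pose between two neighbouring frames with OpenCV. It is not the patent's implementation: the intrinsic matrix K, the Lowe ratio of 0.75, and the RANSAC threshold are assumptions, and a real pipeline would chain such pairwise poses and refine them globally (for example with bundle adjustment) to obtain each frame's [R|T].

```python
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Estimate the relative camera pose between two adjacent frames (sketch)."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    # Lowe's ratio test keeps only distinctive matches
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    pts1 = np.float32([k1[m.queryIdx].pt for m in good])
    pts2 = np.float32([k2[m.trainIdx].pt for m in good])
    # essential matrix + cheirality check give R and t (t only up to scale)
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t
```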

S2: select adjacent images for the neighborhood-based block-matching dense feature generation algorithm. First, determine the neighborhood step size n, i.e., how many images on each side of a given image take part in the dense feature computation.

S3: after the step size n for matching adjacent images is determined, traverse the entire image sequence, taking each image as the reference image and running the neighborhood-based block matching algorithm against the n frames that follow it. The algorithm first requires a sampling rate r, i.e., how many pixels apart, along the row and column directions of the original image, a point is taken as a feature point to be matched for block-matching-based feature computation.

S4: once the sampling rate r is determined, for each pair of images (P1, P2) whose features are to be matched, the set Pt of matching points in P1 to be computed is obtained. Traverse all feature points in Pt; centered on each feature point, take an image block of size X*Y around it and enlarge it 8 times along both rows and columns by interpolation, denoted Isrc. In P2, centered on the same coordinates, take an image block of size M*N and likewise enlarge it 8 times by interpolation, denoted Idist.

S5: this step rests on the assumption that the camera moves only slightly between two adjacent frames of the orbit sequence. Therefore, in S4, as long as the Idist image block is large enough, the image block Imatch corresponding to Isrc can be found inside it. The position of the center of Imatch in P2 is then the matching point on P2 of the feature point used to construct Isrc; thanks to the earlier interpolation, the final block matching reaches a precision of 1/8 pixel.
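A minimal sketch of the S4/S5 patch matching for one feature point, using OpenCV. The patch half-sizes, grayscale input, cubic interpolation, and the omission of border handling are illustrative assumptions; cv2.TM_CCOEFF corresponds to the mean-subtracted correlation coefficient given later in the text.

```python
import cv2
import numpy as np

def match_point(img1, img2, pt, half_src=20, half_dst=40, scale=8):
    """Match feature point `pt` = (x, y) of img1 inside img2 at 1/8-pixel precision (sketch)."""
    x, y = pt
    src = img1[y - half_src:y + half_src, x - half_src:x + half_src]   # X*Y block (Isrc)
    dst = img2[y - half_dst:y + half_dst, x - half_dst:x + half_dst]   # M*N block (Idist)
    # enlarge both blocks 8x by interpolation
    src_up = cv2.resize(src, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
    dst_up = cv2.resize(dst, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
    # correlation-coefficient block matching; the best response marks Imatch
    res = cv2.matchTemplate(dst_up, src_up, cv2.TM_CCOEFF)
    _, _, _, max_loc = cv2.minMaxLoc(res)
    # centre of Imatch, expressed back in img2 pixel coordinates (1/8-pixel steps)
    mx = (x - half_dst) + (max_loc[0] + src_up.shape[1] / 2.0) / scale
    my = (y - half_dst) + (max_loc[1] + src_up.shape[0] / 2.0) / scale
    return mx, my
```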

S6: after the feature matching of S5 is complete, the required dense feature pairs between image pairs are obtained. However, because the viewing angle of the object changes between the two images, the best one can do is find the image block in Idist that is as close as possible to Isrc, so the extracted feature matches carry considerable noise and need further processing. S1 gave the camera position in 3D space for each captured frame, and S5 gave the matched feature point pairs between image pairs. For each frame in the sequence, taking P1 as an example, denote by P2 an image against which it has been feature-matched, and denote by C1 and C2 the camera optical centers at which P1 and P2 were captured. Traverse the positions of all feature points in the feature point set Pt of P1; taking point Pt1 as an example, find its position Pt1' on the image plane in 3D space, and likewise find the position Pt2' on the image plane in 3D space of the point on P2 that matches Pt1. Cast the rays C1Pt1' and C2Pt2' in 3D space, obtaining rays L1 and L2.

S7: for the rays L1 and L2 obtained in S6, if the matching pair Pt1 and Pt2 is a correct match, then L1 and L2 should intersect in space at the corresponding point of the object model. However, because of the various errors introduced in the computation and the inherent error of floating-point arithmetic, an exact intersection at a single point is rarely found. In most cases L1 and L2 are skew lines, so the algorithm sets a threshold T: when the distance between the closest points of the two skew lines is less than T, the pair is considered a valid matching pair.

S8: for a valid matching pair from S7, if the rays L1 and L2 intersect at a point, add the intersection to the initial point cloud as a new feature point; if L1 and L2 are skew lines whose closest-point distance is less than the threshold T, take the midpoint of the closest point pair of L1 and L2 as a new 3D point and add it to the initial point cloud. A point newly added to the 3D point cloud takes as its color the mean of the colors of the k nearest points around it; the colors obtained this way contain errors, and the correction is described in S10.
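The S6-S8 validity test and triangulation can be sketched with NumPy as follows: build the two rays from the camera centers through the matched points on their image planes, compute the closest pair of points between the (generally skew) rays, accept the match if the gap is below T, and return the midpoint as the new 3D point. The function name and the handling of parallel rays are assumptions; the default T = 1e-2 follows the value given later in the text.

```python
import numpy as np

def triangulate_pair(c1, p1, c2, p2, T=1e-2):
    """Closest-point test between rays C1->Pt1' and C2->Pt2'; returns the new 3D point or None (sketch)."""
    d1 = (p1 - c1) / np.linalg.norm(p1 - c1)          # direction of L1
    d2 = (p2 - c2) / np.linalg.norm(p2 - c2)          # direction of L2
    w0 = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if denom < 1e-12:                                  # (near-)parallel rays: reject
        return None
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1, q2 = c1 + t * d1, c2 + s * d2                  # closest points on L1 and L2
    if np.linalg.norm(q1 - q2) >= T:                   # gap too large: invalid match
        return None
    return (q1 + q2) / 2.0                             # midpoint joins the point cloud
```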

S9: although the result of S8 has already filtered out the great majority of wrong matches, a small fraction of matching points still contain errors, so noise filtering based on the object contour is applied. For each frame, the object contour is extracted. Using the camera transformation matrices obtained in S1, the point cloud can be back-projected onto the image plane in 3D space; after that, points inside the contour region are kept, while any point that falls outside the object contour after back-projection into any frame is deleted from the point cloud.
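A sketch of the S9 contour test, assuming the object silhouette of each frame is already available as a binary mask (how the contour is extracted is not prescribed here) and using the projection formula (R·X + T)/f given later in the text:

```python
import numpy as np

def filter_by_contours(points, masks, cams, f):
    """Keep only points that project inside the object silhouette in every frame (sketch).

    points: (N, 3) array; masks: list of HxW binary silhouette images;
    cams: list of (R, T) per frame; f: focal length.
    """
    keep = np.ones(len(points), dtype=bool)
    for mask, (R, T) in zip(masks, cams):
        h, w = mask.shape
        for i, X in enumerate(points):
            if not keep[i]:
                continue
            q = (R @ X + T) / f                       # back-project into this frame
            u, v = int(round(q[0])), int(round(q[1]))
            if not (0 <= u < w and 0 <= v < h) or mask[v, u] == 0:
                keep[i] = False                       # outside the contour in some frame
    return points[keep]
```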

S10: the colors of the newly added points still contain errors, so the point cloud is back-projected once more, with the method of S9, onto each frame's image plane in space. For every pixel on the image plane, the point of the point cloud closest to it under back-projection is recorded, and the pixel value of that pixel on the image is assigned to this nearest point, completing the remapping of the point cloud colors.
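A sketch of the S10 color remapping under one reasonable reading of "closest point": for every pixel of a frame, the point whose projection lands on that pixel with the smallest depth is treated as the nearest point and receives that pixel's color, and a point hit from several frames gets the mean color, as step 10 of the summary prescribes. The dictionary-based bookkeeping is an implementation assumption.

```python
import numpy as np

def remap_colors(points, images, cams, f):
    """Reassign point colors from the frames they project into (sketch)."""
    colors = np.zeros((len(points), 3), dtype=float)
    counts = np.zeros(len(points), dtype=int)
    for img, (R, T) in zip(images, cams):
        h, w = img.shape[:2]
        nearest = {}                                   # pixel -> (depth, point index)
        for i, X in enumerate(points):
            cam_pt = R @ X + T
            u, v = int(round(cam_pt[0] / f)), int(round(cam_pt[1] / f))
            if 0 <= u < w and 0 <= v < h:
                z = cam_pt[2]
                if (u, v) not in nearest or z < nearest[(u, v)][0]:
                    nearest[(u, v)] = (z, i)           # keep the closest point per pixel
        for (u, v), (_, i) in nearest.items():
            colors[i] += img[v, u]
            counts[i] += 1
    hit = counts > 0
    colors[hit] /= counts[hit][:, None]                # average over all frames that hit the point
    return colors
```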

The camera transformation matrix obtained in step S1 comprises the rotation and translation of the camera optical center relative to the origin of the world coordinate system, i.e., [R|T], where R is the rotation matrix and T is the translation vector.

The maximum matching step size chosen in step S2 depends on how much the camera moves during capture. In this algorithm, the images are captured with a 5° rotation between every two frames and a step size of 2 is used; the larger the change between two frames, the smaller the step size should be, and vice versa.

The sampling rate in step S3 affects the number of matching points finally generated. In this method, a sampling rate of one pixel every 2 rows and 2 columns is used; the number of selected pixels is proportional to the running time of the algorithm.

The values of M and N in step S4 must be greater than X and Y respectively. In the experiments, the parameter setting X = Y = 40 and M = N = 80 was used.

In step S5, searching for the image block Isrc within Idist uses an image matching algorithm based on the correlation coefficient: Isrc is slid over Idist from left to right and top to bottom, the correlation coefficient R between Isrc and the region of Idist it covers is computed at every position, and the region covered by Isrc at the position with the largest correlation coefficient is taken as the image block Imatch. While sliding Isrc, the coordinates (x, y) of its top-left corner in Idist are recorded as the coordinates of the current position. The formula for the correlation coefficient R at position (x, y) is:

$$R(x,y)=\sum_{x',y'}\bigl(T'(x',y')\cdot I'(x+x',y+y')\bigr)$$

where:

$$T'(x',y')=T(x',y')-\frac{1}{w\cdot h}\sum_{x'',y''}T(x'',y'')$$

$$I'(x+x',y+y')=I(x+x',y+y')-\frac{1}{w\cdot h}\sum_{x'',y''}I(x+x'',y+y'')$$

Here T(x, y) is the pixel value at coordinate (x, y) in image Isrc, I(x, y) is the pixel value at coordinate (x, y) in image Idist, and w and h are the width and height of the image, respectively.
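For reference, a direct NumPy transcription of this correlation coefficient (a brute-force sketch; the function name is an assumption). It computes the same quantity as OpenCV's TM_CCOEFF template-matching mode: both the template and the window under it are mean-subtracted before the products are summed.

```python
import numpy as np

def ccoeff_map(templ, search):
    """R(x, y) for every placement of `templ` inside `search` (sketch, single-channel)."""
    th, tw = templ.shape
    sh, sw = search.shape
    t = templ - templ.mean()                            # T'(x', y')
    out = np.empty((sh - th + 1, sw - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            win = search[y:y + th, x:x + tw]
            out[y, x] = np.sum(t * (win - win.mean()))  # sum of T' * I'
    return out
```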

In step S6, when locating the feature points on the image plane of each frame, the camera transformation matrix [R|T] is needed. The concrete formula is as follows: let the focal length be f; in a frame F where a feature point lies at (x, y) on the image and the rotation-translation matrix is [R|T], its 3D coordinates in the world coordinate system are computed as:

$$\begin{pmatrix}X\\Y\\Z\end{pmatrix}=R^{-1}\cdot\left(\begin{pmatrix}x\\y\\f\end{pmatrix}-T\right)$$

In step S7, the threshold used in the experiments for considering the two rays L1 and L2 valid is 1e-2.

In step S8, the value 5 was used for the parameter k in the experiments.

In step S9, back-projecting the point cloud onto the image plane of the i-th frame requires the camera transformation matrix [R|T]. For a point (X, Y, Z) in the point cloud, its projected coordinates (u, v) in the i-th frame are computed as:

$$\begin{pmatrix}u\\v\\1\end{pmatrix}=\left(R\cdot\begin{pmatrix}X\\Y\\Z\end{pmatrix}+T\right)\Big/f$$

where f is the focal length of the i-th frame.
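The two coordinate conversions of steps S6 and S9 can be written down directly; the sketch below assumes, as the formulas do, a simplified pinhole model with the principal point at the image origin and no lens distortion, and the function names are illustrative.

```python
import numpy as np

def pixel_to_world_plane(x, y, f, R, T):
    """Step S6: lift image point (x, y) of a frame onto its image plane in world coordinates."""
    # (X, Y, Z)^T = R^{-1} ((x, y, f)^T - T)
    return np.linalg.inv(R) @ (np.array([x, y, f], dtype=float) - T)

def world_to_pixel(X, R, T, f):
    """Step S9: back-project world point X = (X, Y, Z) into the frame with pose [R|T]."""
    # (u, v, 1)^T = (R X + T) / f
    q = (R @ np.asarray(X, dtype=float) + T) / f
    return q[0], q[1]
```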

In step S10, if several image planes map one of their pixels to the same point Pt in the point cloud, the RGB of Pt is taken as the mean of the pixel values of the corresponding points in these planes.

The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (10)

1. A method for increasing the density of a 3D reconstruction point cloud based on neighborhood block matching, characterized in that the method comprises the following steps:

Step 1: use an image-sequence-based 3D reconstruction algorithm to obtain a rough, sparse point cloud of the object, and obtain the transformation matrix of the camera in 3D space at the moment each frame was captured;

Step 2: process the original images again and perform dense feature matching between images using a neighborhood-based block matching algorithm;

Step 3: using the recovered camera positions in space, check the validity of the resulting dense feature point pairs, and map the qualifying feature points to their corresponding positions in the 3D point cloud;

Step 4: filter outliers from the resulting point cloud once with an outlier removal algorithm based on the object contour, and perform one pass of color remapping, obtaining a dense point cloud whose final quality is far better than that of the original point cloud.

2. The method of claim 1, characterized in that it specifically comprises the following steps:

First, perform traditional 3D reconstruction using the orbit image sequence;

Second, select adjacent images for the neighborhood-based block-matching dense feature generation algorithm and determine the neighborhood step size n, i.e., how many images on each side of a given image take part in the dense feature computation;

Third, after the step size n for matching adjacent images is determined, traverse the entire image sequence, taking each image as the reference image and running the neighborhood-based block matching algorithm against the n frames that follow it; in the original image, take one point every so many pixels along the row and column directions as a feature point to be matched, for block-matching-based feature computation;

Fourth, once the sampling rate r is determined, for each pair of images (P1, P2) whose features are to be matched, the set Pt of matching points in P1 to be computed is obtained; traverse all feature points in Pt, and centered on each feature point take an image block of size X*Y around it and enlarge it 8 times along both rows and columns by interpolation, denoted Isrc; in P2, centered on the same coordinates, take an image block of size M*N and likewise enlarge it 8 times by interpolation, denoted Idist;

Fifth, when the Idist image block is large enough, the image block Imatch corresponding to Isrc can be found within it; the position of the center of Imatch in P2 is then the matching point on P2 of the feature point used to construct Isrc;

Sixth, after feature matching is complete, the required dense feature pairs between image pairs are obtained; for each frame in the image sequence, denote by P2 an image against which it has been feature-matched, and denote by C1 and C2 the camera optical centers at which P1 and P2 were captured; traverse the positions of all feature points in the feature point set Pt of P1, find the position Pt1' of each on the image plane in 3D space, and likewise the position Pt2' of the point on P2 that matches Pt1; cast the rays C1Pt1' and C2Pt2' in 3D space, obtaining rays L1 and L2;

Seventh, for the resulting rays L1 and L2, if the matching pair Pt1 and Pt2 is a correct match, then L1 and L2 should intersect in space at the corresponding point of the object model; a threshold T is set, and when the distance between the closest points of these two skew lines is less than T, the pair is considered a valid matching pair;

Eighth, for each valid matching pair, if the rays L1 and L2 intersect at a point, add the intersection to the initial point cloud as a new feature point; if L1 and L2 are skew lines whose closest-point distance is less than the threshold T, take the midpoint of the closest point pair of L1 and L2 as a new 3D point and add it to the initial point cloud;

Ninth, perform noise filtering based on the object contour: for each frame, extract the object contour and, using the recovered camera transformation matrices, back-project the point cloud onto the image plane in 3D space; points inside the contour region are kept, while any point that falls outside the object contour after back-projection into any frame is deleted from the point cloud;

Tenth, back-project the point cloud onto each frame's image plane in space, record for every pixel on the image plane the point of the point cloud closest to it under back-projection, and assign that pixel's value to this nearest point, thereby completing the remapping of the point cloud colors.

3. The method of claim 2, characterized in that the camera transformation matrix comprises the rotation and translation of the camera optical center relative to the origin of the world coordinate system, i.e., [R|T], where R is the rotation matrix and T is the translation vector.

4. The method of claim 2, characterized in that in step 2 the images are captured with a 5° rotation between every two frames and a step size of 2 is used.

5. The method of claim 2, characterized in that in step 3 a sampling rate of one pixel every 2 rows and 2 columns is used; the number of selected pixels is proportional to the running time of the algorithm.

6. The method of claim 2, characterized in that in step 4 the values of M and N must be greater than X and Y respectively; the parameter setting X = Y = 40 and M = N = 80 is used.

7. The method of claim 2, characterized in that in step 5 the formula for the correlation coefficient R at position (x, y) is:

$$R(x,y)=\sum_{x',y'}\bigl(T'(x',y')\cdot I'(x+x',y+y')\bigr)$$

where:

$$T'(x',y')=T(x',y')-\frac{1}{w\cdot h}\sum_{x'',y''}T(x'',y'')$$

$$I'(x+x',y+y')=I(x+x',y+y')-\frac{1}{w\cdot h}\sum_{x'',y''}I(x+x'',y+y'')$$

in which T denotes the image Isrc and I denotes the image Idist.

8. The method of claim 2, characterized in that in step 6, when locating the feature points on the image plane of each frame, the camera transformation matrix [R|T] is used; with focal length f, for a frame F in which a feature point lies at (x, y) on the image and the rotation-translation matrix is [R|T], its 3D coordinates in the world coordinate system are computed as:

$$\begin{pmatrix}X\\Y\\Z\end{pmatrix}=R^{-1}\cdot\left(\begin{pmatrix}x\\y\\f\end{pmatrix}-T\right)$$

9. The method of claim 2, characterized in that in step 7 the threshold for considering the two rays L1 and L2 valid is set to 1e-2; in step 8 the parameter k is 5; in step 9 back-projecting the point cloud onto the image plane of the i-th frame uses the camera transformation matrix [R|T], and for a point (X, Y, Z) in the point cloud its projected coordinates (u, v) in the i-th frame are computed as:

$$\begin{pmatrix}u\\v\\1\end{pmatrix}=\left(R\cdot\begin{pmatrix}X\\Y\\Z\end{pmatrix}+T\right)\Big/f$$

where f is the focal length of the i-th frame; and in step 10, if several image planes map one of their pixels to the same point Pt in the point cloud, the RGB of Pt is taken as the mean of the pixel values of the corresponding points in these planes.

10. A computer vision system applying the method of any one of claims 1 to 9 for increasing the density of a 3D reconstruction point cloud based on neighborhood block matching.
CN201611201364.1A 2016-12-22 2016-12-22 A Method of Improving the Density of 3D Reconstruction Point Cloud Based on Neighborhood Block Matching Active CN106683173B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611201364.1A CN106683173B (en) 2016-12-22 2016-12-22 A Method of Improving the Density of 3D Reconstruction Point Cloud Based on Neighborhood Block Matching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611201364.1A CN106683173B (en) 2016-12-22 2016-12-22 A Method of Improving the Density of 3D Reconstruction Point Cloud Based on Neighborhood Block Matching

Publications (2)

Publication Number Publication Date
CN106683173A true CN106683173A (en) 2017-05-17
CN106683173B CN106683173B (en) 2019-09-13

Family

ID=58870280

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611201364.1A Active CN106683173B (en) 2016-12-22 2016-12-22 A Method of Improving the Density of 3D Reconstruction Point Cloud Based on Neighborhood Block Matching

Country Status (1)

Country Link
CN (1) CN106683173B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103021017A (en) * 2012-12-04 2013-04-03 上海交通大学 Three-dimensional scene rebuilding method based on GPU acceleration
CN103247045A (en) * 2013-04-18 2013-08-14 上海交通大学 Method of obtaining artificial scene main directions and image edges from multiple views
US20160150217A1 (en) * 2014-11-20 2016-05-26 Cappasity Inc. Systems and methods for 3d capturing of objects and motion sequences using multiple range and rgb cameras
CN104867183A (en) * 2015-06-11 2015-08-26 华中科技大学 Three-dimensional point cloud reconstruction method based on region growing
CN104915986A (en) * 2015-06-26 2015-09-16 北京航空航天大学 Physical three-dimensional model automatic modeling method
CN105979203A (en) * 2016-04-29 2016-09-28 中国石油大学(北京) Multi-camera cooperative monitoring method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TIAN, YE et al.: "Accurate vehicle detection and counting algorithm for traffic data collection", 2015 International Conference on Connected Vehicles and Expo (ICCVE) *
车翔玖 et al.: "Discrete point cloud simplification algorithm and implementation based on k-neighborhood density", Journal of Jilin University (Science Edition) *

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109087338A (en) * 2017-06-13 2018-12-25 北京图森未来科技有限公司 Method and device for extracting sparse optical flow from an image
CN107784666B (en) * 2017-10-12 2021-01-08 武汉市工程科学技术研究院 Three-dimensional change detection and updating method for terrain and ground features based on three-dimensional images
CN107784666A (en) * 2017-10-12 2018-03-09 武汉市工程科学技术研究院 Three-dimensional change detection and updating method for terrain and ground features based on three-dimensional images
CN107862745A (en) * 2017-10-25 2018-03-30 武汉楚锐视觉检测科技有限公司 Reflective sculpture surface reconstruction stamp method and device
CN107862742A (en) * 2017-12-21 2018-03-30 华中科技大学 Dense three-dimensional reconstruction method based on multi-hypothesis joint view selection
CN112106105B (en) * 2017-12-22 2024-04-05 兹威达公司 Method and system for generating three-dimensional image of object
CN112106105A (en) * 2017-12-22 2020-12-18 兹威达公司 Method and system for generating three-dimensional image of object
CN109345557B (en) * 2018-09-19 2021-07-09 东南大学 A foreground and background separation method based on 3D reconstruction results
CN109345557A (en) * 2018-09-19 2019-02-15 东南大学 A foreground and background separation method based on 3D reconstruction results
CN109448111B (en) * 2018-10-25 2023-05-30 山东鲁软数字科技有限公司 Image three-dimensional curved surface model optimization construction method and device
CN109448111A (en) * 2018-10-25 2019-03-08 山东鲁能软件技术有限公司 A kind of image three-dimensional surface model optimization construction method and device
CN111508012B (en) * 2019-01-31 2024-04-19 先临三维科技股份有限公司 Method and device for line stripe mismatching detection and three-dimensional reconstruction
CN111508012A (en) * 2019-01-31 2020-08-07 先临三维科技股份有限公司 Method and device for line stripe mismatching detection and three-dimensional reconstruction
WO2020155159A1 (en) * 2019-02-02 2020-08-06 深圳市大疆创新科技有限公司 Method for increasing point cloud sampling density, point cloud scanning system, and readable storage medium
CN110288701A (en) * 2019-06-26 2019-09-27 纳米视觉(成都)科技有限公司 A 3D reconstruction method and terminal based on depth focusing
CN110288701B (en) * 2019-06-26 2023-01-24 图码思(成都)科技有限公司 Three-dimensional reconstruction method and terminal based on depth focusing
CN110301981A (en) * 2019-06-28 2019-10-08 华中科技大学同济医学院附属协和医院 Scanner and control method for intelligent inspection of surgical instruments
CN113129329A (en) * 2019-12-31 2021-07-16 中移智行网络科技有限公司 Method and device for constructing dense point cloud based on base station target segmentation
CN111242990B (en) * 2020-01-06 2024-01-30 西南电子技术研究所(中国电子科技集团公司第十研究所) 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching
CN111242990A (en) * 2020-01-06 2020-06-05 西南电子技术研究所(中国电子科技集团公司第十研究所) 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching
CN113538552B (en) * 2020-02-17 2024-03-22 天目爱视(北京)科技有限公司 3D information synthetic image matching method based on image sorting
CN113538552A (en) * 2020-02-17 2021-10-22 天目爱视(北京)科技有限公司 3D information synthesis image matching method based on image sorting
CN111325780B (en) * 2020-02-17 2021-07-27 天目爱视(北京)科技有限公司 3D model rapid construction method based on image screening
CN111325780A (en) * 2020-02-17 2020-06-23 天目爱视(北京)科技有限公司 3D model rapid construction method based on image screening
CN111508063A (en) * 2020-04-13 2020-08-07 南通理工学院 An image-based three-dimensional reconstruction method and system
CN111695486B (en) * 2020-06-08 2022-07-01 武汉中海庭数据技术有限公司 High-precision direction signboard target extraction method based on point cloud
CN111695486A (en) * 2020-06-08 2020-09-22 武汉中海庭数据技术有限公司 High-precision direction signboard target extraction method based on point cloud
CN111754560B (en) * 2020-06-10 2023-06-02 北京瓦特曼科技有限公司 High-temperature smelting container erosion early warning method and system based on dense three-dimensional reconstruction
CN111754560A (en) * 2020-06-10 2020-10-09 北京瓦特曼科技有限公司 High-temperature smelting container erosion early warning method and system based on dense three-dimensional reconstruction
CN111899345B (en) * 2020-08-03 2023-09-01 成都圭目机器人有限公司 Three-dimensional reconstruction method based on 2D visual image
CN111899345A (en) * 2020-08-03 2020-11-06 成都圭目机器人有限公司 Three-dimensional reconstruction method based on 2D visual image
CN112418250B (en) * 2020-12-01 2024-05-10 怀化学院 Optimized matching method for complex 3D point cloud
CN112418250A (en) * 2020-12-01 2021-02-26 怀化学院 An Optimal Matching Method for Complex 3D Point Clouds
CN113470049B (en) * 2021-07-06 2022-05-20 吉林省田车科技有限公司 Complete target extraction method based on structured color point cloud segmentation
CN113470049A (en) * 2021-07-06 2021-10-01 吉林省田车科技有限公司 Complete target extraction method based on structured color point cloud segmentation
CN114399631A (en) * 2022-01-12 2022-04-26 中国矿业大学 A 3D reconstruction and sludge identification method for the interior of a large crude oil tank
WO2023160301A1 (en) * 2022-02-23 2023-08-31 杭州萤石软件有限公司 Object information determination method, mobile robot system, and electronic device
CN114677484A (en) * 2022-03-15 2022-06-28 南京邮电大学 A method for constructing semi-dense point cloud map based on monocular camera
CN114677484B (en) * 2022-03-15 2024-11-08 南京邮电大学 A method for constructing semi-dense point cloud maps based on monocular cameras
CN114913552A (en) * 2022-07-13 2022-08-16 南京理工大学 Three-dimensional human body density corresponding estimation method based on single-view-point cloud sequence
CN118212348A (en) * 2023-12-06 2024-06-18 中交第二公路勘察设计研究院有限公司 A motion recovery structure reconstruction and repair method for tunnels
CN118212348B (en) * 2023-12-06 2025-02-14 中交第二公路勘察设计研究院有限公司 A motion recovery structure reconstruction and repair method for tunnels

Also Published As

Publication number Publication date
CN106683173B (en) 2019-09-13

Similar Documents

Publication Publication Date Title
CN106683173B (en) A Method of Improving the Density of 3D Reconstruction Point Cloud Based on Neighborhood Block Matching
CN110363858B (en) Three-dimensional face reconstruction method and system
CN106023303B A method for improving the density of a three-dimensional reconstruction point cloud based on contour validity
CN103744086B A high-accuracy registration method for ground-based laser radar and close-range photogrammetry data
CN103106688B Indoor three-dimensional scene reconstruction method based on two-layer registration
CN104484648B (en) Robot variable viewing angle obstacle detection method based on contour recognition
CN101887589B (en) A Real-Shot Low-Texture Image Reconstruction Method Based on Stereo Vision
CN109754459B (en) Method and system for constructing human body three-dimensional model
CN107977997A A camera self-calibration method combining laser radar three-dimensional point cloud data
CN106485690A Automatic registration and fusion method for point cloud data and optical images based on point features
CN106780590A Depth map acquisition method and system
CN101625768A (en) Three-dimensional human face reconstruction method based on stereoscopic vision
CN106023230B A dense matching method suitable for deformed images
CN103456038A (en) Method for rebuilding three-dimensional scene of downhole environment
CN109523595A A vision-based measuring method for straight-line corner-angle spacing in architectural engineering
CN109215118B Incremental structure-from-motion optimization method based on image sequences
CA3233222A1 (en) Method, apparatus and device for photogrammetry, and storage medium
CN111126418A (en) An Oblique Image Matching Method Based on Plane Perspective Projection
CN108564620A (en) A Scene Depth Estimation Method for Light Field Array Camera
CN117726747A Three-dimensional reconstruction method, device, storage medium and equipment for weak-texture scene completion
CN111325828A Three-dimensional face acquisition method and device based on a trinocular camera
CN119180908A Gaussian splatting-based laser-enhanced visual three-dimensional reconstruction method and system
JP6285686B2 (en) Parallax image generation device
CN106980601A High-precision solution method for the fundamental matrix based on trinocular epipolar constraints
CN113850293A (en) Localization method based on joint optimization of multi-source data and direction priors

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220420

Address after: Room 025, F2001, 20/F, Block 4-A, Xixian Financial Port, Fengdong New Town Energy Finance and Trade Zone, Xixian New Area, Xi'an City, Shaanxi Province, 710000

Patentee after: Shaanxi Xiandian Tongyuan Information Technology Co.,Ltd.

Address before: Xidian University, No. 2 Taibai South Road, Xi'an, Shaanxi Province, 710071

Patentee before: XIDIAN University
