
CN107358645A - Product method for reconstructing three-dimensional model and its system - Google Patents

Product method for reconstructing three-dimensional model and its system

Info

Publication number
CN107358645A
CN107358645A
Authority
CN
China
Prior art keywords
product
depth information
three-dimensional model
reconstructing
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710425967.8A
Other languages
Chinese (zh)
Other versions
CN107358645B (en)
Inventor
路丽菲
蔡鸿明
孙秉义
孙晏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Ruiwei Yingzhi Information Technology Service Co.,Ltd.
Original Assignee
Shanghai Jiao Tong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiao Tong University filed Critical Shanghai Jiao Tong University
Priority to CN201710425967.8A priority Critical patent/CN107358645B/en
Publication of CN107358645A publication Critical patent/CN107358645A/en
Application granted granted Critical
Publication of CN107358645B publication Critical patent/CN107358645B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract


A method and system for reconstructing a three-dimensional model of a product. A product image is collected and the homography matrix of the plane at infinity corresponding to the image is calculated; the mapping matrix between the pixels of the product image and the coordinate points of the actual scene is then obtained by the Cholesky decomposition method. Next, the network flow corresponding to the product is established and the optimal value of an energy function is computed to obtain the depth information of the product surface. Finally, a cube is established according to the depth information and the mapping matrix and divided into voxels; the TSDF of each voxel is updated, and rendering and projection realize the reconstruction of the three-dimensional model. The invention requires no calibration object, is highly robust, and reduces the influence of imaging distortion, so the three-dimensional model is reliable and robust.

Description

Product 3D Model Reconstruction Method and System

Technical Field

The present invention relates to a technology in the field of three-dimensional modeling, and in particular to a method and system for reconstructing a three-dimensional product model from two-dimensional images.

Background Art

3D reconstruction refers to building a three-dimensional model of an object so that a product can be displayed more realistically and intuitively. Image-based 3D reconstruction can break through the real-time bottleneck and has therefore developed well. However, existing 3D reconstruction demands high-precision equipment during camera calibration and imposes many restrictions; the modeling process is complicated and slow, and the reconstruction accuracy does not meet requirements, so it cannot be applied to the 3D reconstruction of actual product display models.

Summary of the Invention

The prior art generally performs no optimization of the depth information and does not handle the smoothness and occlusion terms, which leads to poor modeling results. To address this defect, the present invention proposes a method and system for reconstructing a three-dimensional product model that requires no calibration object, is highly robust, and reduces the influence of imaging distortion, so the resulting three-dimensional model is reliable and robust.

The present invention is achieved through the following technical solution:

The present invention relates to a method for reconstructing a three-dimensional product model: a product image is collected and the homography matrix of the plane at infinity corresponding to the image is calculated; the mapping matrix between the pixels of the product image and the coordinate points of the actual scene is then obtained by the Cholesky decomposition method; next, the network flow corresponding to the product is established and the optimal value of an energy function is computed to obtain the depth information of the product surface; finally, a cube is established according to the depth information and the mapping matrix and divided into voxels, and the TSDF (truncated signed distance function) of each voxel is updated, rendered and projected to realize the reconstruction of the three-dimensional model.

The mapping matrix is obtained as follows:

1) Establish the homography matrix H of the plane at infinity and solve for H;

2) According to H = K R_t K^-1, solve for the mapping matrix K between the pixels of the product image and the coordinate points of the actual scene by the Cholesky decomposition method.
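
As an illustrative sketch (not part of the patent), one common route to K is through the image of the absolute conic ω = (K Kᵀ)⁻¹, which the infinite-homography constraint H = K R_t K⁻¹ determines; the values of K_true below are invented, and ω is formed directly from them to stand in for that estimation step:

```python
import numpy as np

# Hypothetical intrinsic matrix (values invented for illustration):
# focal lengths, skew and principal point, upper triangular.
K_true = np.array([[800.0,   0.5, 320.0],
                   [  0.0, 790.0, 240.0],
                   [  0.0,   0.0,   1.0]])

# Image of the absolute conic omega = (K K^T)^-1, here formed from
# K_true in place of estimating it from the homography constraints.
omega = np.linalg.inv(K_true @ K_true.T)

# Recover K from omega^-1 = K K^T with K upper triangular.
# np.linalg.cholesky returns a lower-triangular factor, so conjugate
# with the exchange matrix J: if B = J A J and B = L L^T, then
# A = (J L J)(J L J)^T with J L J upper triangular.
A = np.linalg.inv(omega)
J = np.eye(3)[::-1]
L = np.linalg.cholesky(J @ A @ J)
K = J @ L @ J
K /= K[2, 2]                      # normalise the homogeneous scale

print(np.round(K, 3))             # recovers K_true
```

The exchange-matrix trick is only needed because numpy exposes the lower-triangular Cholesky factor while K is conventionally upper triangular.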

The depth information of the product surface is obtained as follows:

a) Establish a virtual network for the product, and build an energy function from the pixel coordinates of the product image;

b) Assign values to the grid of the virtual network using a similarity cost and a smoothness cost to form a network flow;

c) Optimize the energy function with an algorithm for the maximum-flow/minimum-cut problem to obtain the depth information of the product surface.

The energy function is:

E(f) = Σ_{p∈P} [I_l(Tran_l(x_p, y_p, f_p)) - I_r(Tran_r(x_p, y_p, f_p))]^2 + Σ_{(p,q)∈N} u_{p,q} |f_p - f_q|, where: I_l and I_r are the pixel matrices of the product images, (x_p, y_p) are the grid-point coordinates on the base plane, Tran is a coordinate-system transformation function, f is the mapping from pixels to labels, P is the set of all pixels of the image, and p, q are individual pixels in the set P.
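
A minimal numeric sketch of this energy (not from the patent), assuming a rectified pair where Tran_l is the identity and Tran_r shifts a pixel left by its label; the images and labelings are toy values:

```python
import numpy as np

def energy(f, Il, Ir, u=1.0):
    """E(f) with data and smoothness terms, for a rectified pair where
    Tranl(x, y, fp) = (x, y) and Tranr(x, y, fp) = (x - fp, y), an
    illustrative simplification of the transforms in the text."""
    h, w = Il.shape
    data = 0.0
    for y in range(h):
        for x in range(w):
            xr = max(0, x - int(f[y, x]))        # clamp at the image border
            data += (Il[y, x] - Ir[y, xr]) ** 2
    # smoothness: |fp - fq| summed over 4-connected neighbour pairs
    smooth = np.abs(np.diff(f, axis=0)).sum() + np.abs(np.diff(f, axis=1)).sum()
    return data + u * smooth

# Toy pair: the right image is the left image shifted by one pixel,
# so the constant labeling f = 1 should score lower than f = 0.
Il = np.array([[10, 20, 30], [10, 20, 30]], dtype=float)
Ir = np.array([[20, 30, 0], [20, 30, 0]], dtype=float)
print(energy(np.zeros((2, 3), int), Il, Ir))   # 2200.0
print(energy(np.ones((2, 3), int), Il, Ir))    # 200.0
```

The correct labeling produces consistent pixel information and therefore a lower energy, which is exactly what the minimum cut in the next steps searches for.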

The reconstruction specifically includes the following steps:

i) Apply bilateral filtering to the depth information;

ii) Obtain a depth map from the depth information, and back-project the depth map to obtain a vertex map and the normal vector of each vertex;

iii) Transform the product image pixels into the world coordinate system according to the mapping matrix K;

iv) Establish a cube, divide it into voxels, and update the TSDF of each voxel;

v) Render and project according to the TSDF to generate the three-dimensional product model.

The truncated signed distance function gives the signed distance from a voxel to the nearest surface of the model being built, the sign expressing the front/back relation with respect to the surface. Since the reconstruction space is treated as a cube, it is divided into voxels; a voxel (short for volume element) is the smallest unit into which digital data partitions three-dimensional space, analogous to a pixel in a two-dimensional image. A voxel column denotes the series of voxels that share fixed (x, y) coordinates, with only the z coordinate varying. A negative TSDF value indicates that the voxel lies outside the reconstructed object, zero indicates that it lies on the object surface, and a positive value indicates that it lies inside the object.
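
A one-column sketch of this sign convention (the truncation distance and surface depth below are invented values):

```python
def tsdf(voxel_depth, surface_depth, trunc=0.05):
    """Truncated signed distance for one voxel along a viewing ray,
    using the sign convention of the text: negative outside the
    object, zero on the surface, positive inside."""
    d = voxel_depth - surface_depth      # positive when deeper than the surface
    return max(-trunc, min(trunc, d))    # truncate to [-trunc, trunc]

# A column of voxels at fixed (x, y), z sampled every 2 cm; the
# surface sits 0.50 m from the camera.
column = [round(tsdf(z / 100.0, 0.50), 3) for z in range(44, 58, 2)]
print(column)   # negative before the surface, 0.0 at it, clipped at ±0.05
```

Truncation keeps distant voxels from dominating the volume: everything further than `trunc` from the surface saturates at ±trunc.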

The present invention relates to a product three-dimensional model reconstruction system, comprising a camera calibration module, a depth information acquisition module and a model building module, wherein: the camera calibration module collects the product image and obtains the mapping matrix between the image pixels and the coordinate points of the actual scene; the depth information acquisition module obtains the depth information of the product; and the model building module receives the mapping matrix and the depth information and obtains the three-dimensional model of the product through rendering and projection.

Brief Description of the Drawings

Fig. 1 is a schematic flow chart of the present invention.

Detailed Description

This embodiment relates to a product three-dimensional model reconstruction system implementing the above method, comprising a camera calibration module, a depth information acquisition module and a model building module, wherein: the camera calibration module collects the product image and obtains the mapping matrix between the image pixels and the coordinate points of the actual scene; the depth information acquisition module obtains the depth information of the product; and the model building module receives the mapping matrix and the depth information and obtains the three-dimensional model of the product through rendering and projection.

As shown in Fig. 1, the product three-dimensional model reconstruction method for the above system includes the following steps:

Implementation setup: for a display car, the camera is controlled to perform one translational motion and two arbitrary motions, and four photographs are taken (12 megapixels each);

Software and hardware requirements: Intel(R) Core(TM) i5-3210M CPU @ 2.5 GHz, GTX 970 graphics card.

1) Solve the homography matrix H of the plane at infinity corresponding to the product image, and obtain the mapping matrix K between the pixels of the product image and the coordinate points of the actual scene by the Cholesky decomposition method. The camera that collects the product images takes photographs under one translational motion and several arbitrary motions.

1.1) Establish the homography matrix H of the plane at infinity.

1.2) Solve the homography matrix H from the system of equations, where: e_1 and e_2 are the epipoles of the product images after motion, H_1 and H_2 are homography matrices of a space plane, a_1 and a_2 are scalars, and X_1 and X_2 are column vectors.

1.3) From the different homography matrices H thus obtained, solve for the mapping matrix K by the Cholesky decomposition method according to the formula H = K R_t K^-1.

2) Establish the network flow corresponding to the product and compute the optimal value of the energy function to obtain the depth information of the product surface.

2.1) Establish a virtual network for the product. Based on the coordinate position of the product in the world coordinate system, a three-dimensional virtual network is built that completely encloses the product to be reconstructed, with the frontmost cutting plane serving as the base plane of the network. For the position of each grid point on the base plane, the cross-section on which the corresponding object point on the product surface falls is determined; each cross-section corresponds to a label. In this way the depth-acquisition problem is transformed into the problem of assigning a depth label to every grid point on the base plane of the virtual three-dimensional network.

2.2) Build an energy function from the pixel coordinates of the product image. The energy function expresses the property information of the image and consists of three parts: a data constraint term, a smoothness constraint term and an occlusion term. The data constraint requires that, when a grid point on the base plane is not correctly labeled, the image points projected into the pixel coordinate system from the candidate object point at the incorrect depth label carry inconsistent pixel information; only when the assigned depth label matches the true depth do the image points reflect the pixel information of the same object point, and only then is the cost of that depth label minimal. The smoothness constraint expresses the relationship between adjacent pixel pairs: a large difference between neighbouring labels increases the smoothness term and hence the energy, so this term reflects the piecewise smoothness of the labeling. The occlusion constraint states that when the data-term costs of all depth labels exceed a certain threshold, the corresponding point on the object surface is treated as occluded, and the parameter of its smoothness constraint is increased so that the depth of this point is smoothed from the depths of the surrounding object points.
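
The occlusion rule above can be sketched as follows; the helper function, the `boost` factor and the threshold value are all illustrative, not from the patent:

```python
def occlusion_weights(data_costs, u, threshold, boost=4.0):
    """Per-pixel smoothness weight following the occlusion rule in the
    text: if every candidate depth label's data cost exceeds the
    threshold, the pixel is treated as occluded and its smoothness
    weight is increased so that its depth is smoothed from its
    neighbours. `boost` is an invented parameter for illustration.

    data_costs: data-term costs, one per candidate depth label.
    Returns (smoothness_weight, occluded_flag)."""
    occluded = min(data_costs) > threshold
    return (u * boost if occluded else u), occluded

w, occ = occlusion_weights([120.0, 95.0, 140.0], u=1.0, threshold=80.0)
print(w, occ)    # 4.0 True: every label costs more than the threshold
w, occ = occlusion_weights([120.0, 40.0, 140.0], u=1.0, threshold=80.0)
print(w, occ)    # 1.0 False: at least one label fits the data
```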

2.3) Assign values to the grid of the virtual network using a similarity cost and a smoothness cost to form a network flow. The similarity cost is obtained by SAD (sum of absolute differences) local matching.
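
A minimal SAD cost as a sketch; the window radius and the toy images are invented (the patent does not give a window size), and border handling is omitted:

```python
import numpy as np

def sad_cost(Il, Ir, x, y, d, r=1):
    """Similarity cost via SAD local matching over a (2r+1)x(2r+1)
    window: the window around (x, y) in the left image is compared
    with the window around (x - d, y) in the right image; lower
    means more similar."""
    wl = Il[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    wr = Ir[y - r:y + r + 1, x - d - r:x - d + r + 1].astype(float)
    return np.abs(wl - wr).sum()

# Toy pair: Ir is Il shifted left by one column, so the true label is d = 1.
Il = np.arange(25).reshape(5, 5)
Ir = np.roll(Il, -1, axis=1)
print(sad_cost(Il, Ir, x=2, y=2, d=1))   # 0.0  (correct label)
print(sad_cost(Il, Ir, x=2, y=2, d=0))   # 9.0  (wrong label costs more)
```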

2.4) Optimize the energy function with an algorithm for the maximum-flow/minimum-cut problem to obtain the depth information of the product surface.

The algorithms for solving the maximum-flow/minimum-cut problem include, but are not limited to, the Push-Relabel method and the Ford-Fulkerson method.
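
As a sketch of one of the named algorithms, here is a minimal Edmonds-Karp variant of Ford-Fulkerson (BFS-based augmenting paths) on a toy graph; by the max-flow/min-cut theorem, the value it returns equals the minimum cut used to optimise the energy function:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow.

    cap: dict-of-dicts of residual capacities, cap[u][v] >= 0.
    Returns the value of the maximum s-t flow."""
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:              # BFS for an augmenting path
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow                           # no augmenting path left
        path, v = [], t                           # walk back from t to s
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] for u, v in path)    # bottleneck capacity
        for u, v in path:                         # push flow, update residuals
            cap[u][v] -= push
            cap.setdefault(v, {}).setdefault(u, 0)
            cap[v][u] += push
        flow += push

cap = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}}
print(max_flow(cap, 's', 't'))   # 5
```

In the depth-labeling graph, s and t are the source/sink terminals and edge capacities encode the data, smoothness and occlusion costs.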

3) Establish a cube according to the depth information and the mapping matrix K, divide the cube into voxels, and then render and project to obtain the three-dimensional product model.

3.1) Apply bilateral filtering to the depth information to reduce noise.
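
A direct (unoptimised) sketch of step 3.1; the kernel widths are illustrative values, not from the patent:

```python
import numpy as np

def bilateral_filter(depth, r=2, sigma_s=2.0, sigma_r=0.05):
    """Bilateral filtering of a depth map: weights combine spatial
    proximity (sigma_s, in pixels) and depth similarity (sigma_r, in
    metres), reducing noise while preserving depth discontinuities
    at object edges."""
    h, w = depth.shape
    out = np.empty_like(depth, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            win = depth[y0:y1, x0:x1]
            yy, xx = np.mgrid[y0:y1, x0:x1]
            w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            w_r = np.exp(-((win - depth[y, x]) ** 2) / (2 * sigma_r ** 2))
            wgt = w_s * w_r
            out[y, x] = (wgt * win).sum() / wgt.sum()
    return out

d = np.full((5, 5), 0.5)
d[2, 2] = 0.52                  # a small noise spike in an otherwise flat map
f = bilateral_filter(d)
print(round(f[2, 2], 4))        # pulled back toward 0.5
```

Unlike a plain Gaussian blur, the range term w_r keeps pixels across a large depth jump from mixing, which is why this filter suits depth maps.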

3.2) Obtain a depth map from the depth information, and back-project the depth map to obtain a vertex map and the normal vector of each vertex.
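
A sketch of step 3.2 under an assumed mapping matrix K (values invented), with normals taken as cross products of neighbouring vertex differences; this is one common construction, not necessarily the patent's exact one:

```python
import numpy as np

def backproject(depth, K):
    """Back-project a depth map into a vertex map and per-vertex
    normals: pixel (u, v) with depth d maps to d * K^-1 [u, v, 1]^T;
    the normal is the normalised cross product of the vectors to the
    lower and right neighbours (border row/column left as zeros)."""
    h, w = depth.shape
    Kinv = np.linalg.inv(K)
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(float)
    verts = depth[..., None] * (pix @ Kinv.T)          # (h, w, 3)
    normals = np.zeros_like(verts)
    n = np.cross(verts[1:, :-1] - verts[:-1, :-1],     # downward edge
                 verts[:-1, 1:] - verts[:-1, :-1])     # rightward edge
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    normals[:-1, :-1] = n
    return verts, normals

K = np.array([[500.0, 0.0, 2.0], [0.0, 500.0, 2.0], [0.0, 0.0, 1.0]])
verts, normals = backproject(np.full((4, 4), 1.0), K)
print(normals[1, 1])   # flat depth -> normals along the viewing axis
```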

3.3) Transform the product image pixels into the world coordinate system according to the mapping matrix K.

3.4) Establish a cube, divide it into voxels, and update the TSDF of each voxel. For each frame of the product image, every voxel is transformed into the camera coordinate system and projected to a coordinate point of the product image; if it falls within the projection range, its TSDF is updated.
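
A single-voxel sketch of this projective update; the weighted running average follows common KinectFusion-style pipelines, which the patent does not spell out, and K, the pose and the depth map are toy values:

```python
import numpy as np

def update_tsdf(tsdf, weight, voxel_world, cam_pose, K, depth, trunc=0.05):
    """Projective TSDF update for one voxel and one frame.

    cam_pose is the 4x4 camera-to-world transform. The voxel centre
    is moved into the camera frame, projected with K, and compared
    with the measured depth at that pixel; the truncated signed
    distance is folded into a weighted running average. Returns the
    updated (tsdf, weight), or the inputs unchanged when the voxel
    falls outside the projection range."""
    p = np.linalg.inv(cam_pose) @ np.append(voxel_world, 1.0)   # world -> camera
    if p[2] <= 0:
        return tsdf, weight                   # behind the camera
    u, v, _ = (K @ p[:3]) / p[2]              # project with the mapping matrix
    ui, vi = int(round(u)), int(round(v))
    h, w = depth.shape
    if not (0 <= ui < w and 0 <= vi < h):     # outside the projection range
        return tsdf, weight
    # sign convention from the text: negative outside, positive inside
    sdf = p[2] - depth[vi, ui]
    d = max(-trunc, min(trunc, sdf)) / trunc  # truncate and normalise
    new_weight = weight + 1.0
    return (tsdf * weight + d) / new_weight, new_weight

K = np.array([[500.0, 0.0, 2.0], [0.0, 500.0, 2.0], [0.0, 0.0, 1.0]])
depth = np.full((4, 4), 1.0)                  # flat surface 1 m from the camera
t, w = update_tsdf(0.0, 0.0, np.array([0.0, 0.0, 0.9]), np.eye(4), K, depth)
print(t, w)   # -1.0 1.0: the voxel is 10 cm in front of the surface (outside)
```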

3.5) Render and project according to the TSDF to generate the three-dimensional product model.

Compared with the prior art, the present invention requires no calibration object, is highly robust, reduces the influence of imaging distortion, and produces a reliable and robust three-dimensional model; the only computing resource required is an ordinary commercial GPU, and the modeling speed is improved by 7.3%.

Those skilled in the art can make local adjustments to the above embodiment in different ways without departing from the principle and spirit of the present invention. The scope of protection of the present invention is defined by the claims and is not limited by the above embodiment; every implementation within that scope is bound by the present invention.

Claims (7)

1. A method for reconstructing a three-dimensional product model, characterized in that: a product image is collected and the homography matrix of the plane at infinity corresponding to the product image is calculated; the mapping matrix between the pixels of the product image and the coordinate points of the actual scene is then obtained by the Cholesky decomposition method; next, the network flow corresponding to the product is established and the optimal value of an energy function is computed to obtain the depth information of the product surface; finally, a cube is established according to the depth information and the mapping matrix and divided into voxels, and the TSDF of each voxel is updated, rendered and projected to realize the reconstruction of the three-dimensional model.
2. The method for reconstructing a three-dimensional product model according to claim 1, characterized in that the mapping matrix is obtained as follows:
1.1) establishing the homography matrix H of the plane at infinity and solving for H;
1.2) according to H = K R_t K^-1, solving for the mapping matrix K between the pixels of the product image and the coordinate points of the actual scene by the Cholesky decomposition method.
3. The method for reconstructing a three-dimensional product model according to claim 1, characterized in that the depth information of the product surface is obtained as follows:
2.1) establishing a virtual network for the product, and building an energy function from the pixel coordinates of the product image;
2.2) assigning values to the grid of the virtual network using a similarity cost and a smoothness cost to form a network flow;
2.3) optimizing the energy function with an algorithm for the maximum-flow/minimum-cut problem to obtain the depth information of the product surface.
4. The method for reconstructing a three-dimensional product model according to claim 3, characterized in that the energy function is E(f) = Σ_{p∈P} [I_l(Tran_l(x_p, y_p, f_p)) - I_r(Tran_r(x_p, y_p, f_p))]^2 + Σ_{(p,q)∈N} u_{p,q} |f_p - f_q|, where: I_l and I_r are the pixel matrices of the product images, (x_p, y_p) are the grid-point coordinates on the base plane, Tran is a coordinate-system transformation function, f is the mapping from pixels to labels, P is the set of all pixels of the image, and p, q are individual pixels in the set P.
5. The method for reconstructing a three-dimensional product model according to claim 4, characterized in that the algorithms for solving the maximum-flow/minimum-cut problem include the Push-Relabel method and the Ford-Fulkerson method.
6. The method for reconstructing a three-dimensional product model according to claim 3, characterized in that the reconstruction specifically includes the following steps:
3.1) performing bilateral filtering on the depth information;
3.2) obtaining a depth map from the depth information, and back-projecting the depth map to obtain a vertex map and the normal vector of each vertex;
3.3) transforming the product image pixels into the world coordinate system according to the mapping matrix K;
3.4) establishing a cube, dividing it into voxels, and updating the TSDF of each voxel;
3.5) rendering and projecting according to the TSDF to generate the three-dimensional product model.
7. A product three-dimensional model reconstruction system realizing the above method, characterized by comprising a camera calibration module, a depth information acquisition module and a model building module, wherein: the camera calibration module collects the product image and obtains the mapping matrix between the image pixels and the coordinate points of the actual scene; the depth information acquisition module obtains the depth information of the product; and the model building module receives the mapping matrix and the depth information and obtains the three-dimensional model of the product through rendering and projection.
CN201710425967.8A 2017-06-08 2017-06-08 Product 3D model reconstruction method and system Active CN107358645B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710425967.8A CN107358645B (en) 2017-06-08 2017-06-08 Product 3D model reconstruction method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710425967.8A CN107358645B (en) 2017-06-08 2017-06-08 Product 3D model reconstruction method and system

Publications (2)

Publication Number Publication Date
CN107358645A true CN107358645A (en) 2017-11-17
CN107358645B CN107358645B (en) 2020-08-11

Family

ID=60273557

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710425967.8A Active CN107358645B (en) 2017-06-08 2017-06-08 Product 3D model reconstruction method and system

Country Status (1)

Country Link
CN (1) CN107358645B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108062788A (en) * 2017-12-18 2018-05-22 北京锐安科技有限公司 A kind of three-dimensional rebuilding method, device, equipment and medium
WO2019219012A1 (en) * 2018-05-15 2019-11-21 清华大学 Three-dimensional reconstruction method and device uniting rigid motion and non-rigid deformation
CN110489834A (en) * 2019-08-02 2019-11-22 广州彩构网络有限公司 A kind of designing system for actual products threedimensional model
CN111696145A (en) * 2019-03-11 2020-09-22 北京地平线机器人技术研发有限公司 Depth information determination method, depth information determination device and electronic equipment
CN111788610A (en) * 2017-12-22 2020-10-16 奇跃公司 Viewpoint-dependent brick selection for fast volume reconstruction
CN114241029A (en) * 2021-12-20 2022-03-25 贝壳技术有限公司 Image three-dimensional reconstruction method and device
WO2022227875A1 (en) * 2021-04-29 2022-11-03 中兴通讯股份有限公司 Three-dimensional imaging method, apparatus, and device, and storage medium
US12299828B2 (en) 2017-12-22 2025-05-13 Magic Leap, Inc. Viewpoint dependent brick selection for fast volumetric reconstruction

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1395222A (en) * 2001-06-29 2003-02-05 三星电子株式会社 Representation and diawing method of three-D target and method for imaging movable three-D target
US7212201B1 (en) * 1999-09-23 2007-05-01 New York University Method and apparatus for segmenting an image in order to locate a part thereof
CN101262619A (en) * 2008-03-30 2008-09-10 深圳华为通信技术有限公司 Parallax acquisition method and device
CN101751697A (en) * 2010-01-21 2010-06-23 西北工业大学 Three-dimensional scene reconstruction method based on statistical model
CN101833786A (en) * 2010-04-06 2010-09-15 清华大学 Method and system for capturing and reconstructing three-dimensional models
CN101998136A (en) * 2009-08-18 2011-03-30 华为技术有限公司 Homography matrix acquisition method as well as image pickup equipment calibrating method and device
CN102682467A (en) * 2011-03-15 2012-09-19 云南大学 Plane- and straight-based three-dimensional reconstruction method
CN102800081A (en) * 2012-06-06 2012-11-28 天津大学 Expansion algorithm of high-noise resistance speckle-coated phase diagram based on image cutting
CN103198523A (en) * 2013-04-26 2013-07-10 清华大学 Three-dimensional non-rigid body reconstruction method and system based on multiple depth maps
CN104599314A (en) * 2014-06-12 2015-05-06 深圳奥比中光科技有限公司 Three-dimensional model reconstruction method and system
CN104899883A (en) * 2015-05-29 2015-09-09 北京航空航天大学 Indoor object cube detection method for depth image scene
CN105046743A (en) * 2015-07-01 2015-11-11 浙江大学 Super-high-resolution three dimensional reconstruction method based on global variation technology
CN106355621A (en) * 2016-09-23 2017-01-25 邹建成 Method for acquiring depth information on basis of array images
CN106373153A (en) * 2016-09-23 2017-02-01 邹建成 Array lens-based 3D image replacement technology
CN106651926A (en) * 2016-12-28 2017-05-10 华东师范大学 Regional registration-based depth point cloud three-dimensional reconstruction method
CN106709948A (en) * 2016-12-21 2017-05-24 浙江大学 Quick binocular stereo matching method based on superpixel segmentation
CN106803267A (en) * 2017-01-10 2017-06-06 西安电子科技大学 Indoor scene three-dimensional rebuilding method based on Kinect


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
NHAT QUANG DO ET AL: "Leap Studio -- A Virtual Interactive 3D Modeling Application Based on WebGL", 2014 5th International Conference on Digital Home *
WEI-MING CHEN ET AL: "Improving Graph Cuts algorithm to transform sequence of stereo image to depth map", Journal of Systems and Software *
FENG Shubiao (冯树彪): "Image-Based Three-Dimensional Reconstruction", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108062788A (en) * 2017-12-18 2018-05-22 北京锐安科技有限公司 A kind of three-dimensional rebuilding method, device, equipment and medium
CN114332332A (en) * 2017-12-22 2022-04-12 奇跃公司 Method and apparatus for generating a three-dimensional reconstruction of a surface in a scene
US11398081B2 (en) 2017-12-22 2022-07-26 Magic Leap, Inc. Method of occlusion rendering using raycast and live depth
US12299828B2 (en) 2017-12-22 2025-05-13 Magic Leap, Inc. Viewpoint dependent brick selection for fast volumetric reconstruction
CN111788610A (en) * 2017-12-22 2020-10-16 奇跃公司 Viewpoint-dependent brick selection for fast volume reconstruction
CN111788610B (en) * 2017-12-22 2021-11-26 奇跃公司 Method and apparatus for generating a three-dimensional reconstruction of a surface in a scene
US11263820B2 (en) 2017-12-22 2022-03-01 Magic Leap, Inc. Multi-stage block mesh simplification
CN114332332B (en) * 2017-12-22 2023-08-18 奇跃公司 Method and device for generating a three-dimensional reconstruction of a surface in a scene
US11580705B2 (en) 2017-12-22 2023-02-14 Magic Leap, Inc. Viewpoint dependent brick selection for fast volumetric reconstruction
US11321924B2 (en) 2017-12-22 2022-05-03 Magic Leap, Inc. Caching and updating of dense 3D reconstruction data
WO2019219012A1 (en) * 2018-05-15 2019-11-21 清华大学 Three-dimensional reconstruction method and device uniting rigid motion and non-rigid deformation
CN111696145B (en) * 2019-03-11 2023-11-03 北京地平线机器人技术研发有限公司 Depth information determining method, depth information determining device and electronic equipment
CN111696145A (en) * 2019-03-11 2020-09-22 北京地平线机器人技术研发有限公司 Depth information determination method, depth information determination device and electronic equipment
CN110489834A (en) * 2019-08-02 2019-11-22 广州彩构网络有限公司 A kind of designing system for actual products threedimensional model
WO2022227875A1 (en) * 2021-04-29 2022-11-03 中兴通讯股份有限公司 Three-dimensional imaging method, apparatus, and device, and storage medium
CN114241029A (en) * 2021-12-20 2022-03-25 贝壳技术有限公司 Image three-dimensional reconstruction method and device

Also Published As

Publication number Publication date
CN107358645B (en) 2020-08-11

Similar Documents

Publication Publication Date Title
CN107358645A (en) Product method for reconstructing three-dimensional model and its system
CN111598998B (en) Three-dimensional virtual model reconstruction method, three-dimensional virtual model reconstruction device, computer equipment and storage medium
US11928778B2 (en) Method for human body model reconstruction and reconstruction system
CN108961410B (en) Three-dimensional wire frame modeling method and device based on image
Li et al. Markerless shape and motion capture from multiview video sequences
US9747668B2 (en) Reconstruction of articulated objects from a moving camera
CN111986307A (en) 3D object reconstruction using photometric grid representation
CN102262783B (en) Method and system for restructuring motion of three-dimensional gesture
CN116071278A (en) UAV aerial image synthesis method, system, computer equipment and storage medium
CN110852182A (en) A deep video human behavior recognition method based on 3D spatial time series modeling
CN112734890B (en) Face replacement method and device based on three-dimensional reconstruction
CN111402403B (en) High-precision three-dimensional face reconstruction method
CN110827295A (en) 3D Semantic Segmentation Method Based on Coupling of Voxel Model and Color Information
CN110298916A (en) A kind of 3 D human body method for reconstructing based on synthesis depth data
CN111951368A (en) A deep learning method for point cloud, voxel and multi-view fusion
CN118736092A (en) A method and system for rendering virtual human at any viewing angle based on three-dimensional Gaussian splashing
CN111640172A (en) Attitude migration method based on generation of countermeasure network
CN108734773A (en) A kind of three-dimensional rebuilding method and system for mixing picture
CN112562083A (en) Depth camera-based static portrait three-dimensional reconstruction and dynamic face fusion method
CN118505878A (en) Three-dimensional reconstruction method and system for single-view repetitive object scene
CN114730480A (en) Machine Learning Based on Volume Capture and Mesh Tracking
CN115239912A (en) Three-dimensional inside reconstruction method based on video image
Sun et al. EasyMesh: An efficient method to reconstruct 3D mesh from a single image
CN117635838A (en) Three-dimensional face reconstruction method, equipment, storage medium and device
CN117911609A (en) A three-dimensional hand modeling method based on neural radiation field

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240401

Address after: 201203 Pudong New Area, Shanghai, China (Shanghai) free trade trial area, No. 3, 1 1, Fang Chun road.

Patentee after: SHANGHAI HANYU BIOLOGICAL SCIENCE & TECHNOLOGY Co.,Ltd.

Country or region after: China

Address before: 200240 No. 800, Dongchuan Road, Shanghai, Minhang District

Patentee before: SHANGHAI JIAO TONG University

Country or region before: China

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20250418

Address after: 3rd Floor, Building 1, No. 400 Fangchun Road, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai, 200135

Patentee after: Shanghai Ruiwei Yingzhi Information Technology Service Co.,Ltd.

Country or region after: China

Address before: 201203 Pudong New Area, Shanghai, China (Shanghai) free trade trial area, No. 3, 1 1, Fang Chun road.

Patentee before: SHANGHAI HANYU BIOLOGICAL SCIENCE & TECHNOLOGY Co.,Ltd.

Country or region before: China

TR01 Transfer of patent right