CN106910222A - Face three-dimensional rebuilding method based on binocular stereo vision - Google Patents
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Abstract
The invention provides a three-dimensional face reconstruction method based on binocular stereo vision, comprising: constructing a binocular stereo vision system, wherein the binocular stereo vision system includes a left camera and a right camera; capturing face images with the binocular stereo vision system to obtain a left image and a right image, and performing stereo rectification on both images; detecting the face regions in the left and right images; locating and matching facial key points within the face regions of the left and right images; initializing a dense facial disparity map from the matched facial key points to obtain the initialized disparity; smoothing the initialized disparity with a stereo matching algorithm; and performing three-dimensional face reconstruction from the smoothed initialized disparity. The invention uses ordinary cameras as image capture devices, which reduces equipment cost while improving the accuracy of the three-dimensional face reconstruction result.
Description
Technical Field

The invention relates to the technical field of binocular stereo vision, and in particular to a three-dimensional face reconstruction method based on binocular stereo vision.

Background

Three-dimensional facial structure information is widely used in face image processing, for example in face recognition, face tracking, and face alignment. In recent years, researchers at home and abroad have proposed many 3D face reconstruction methods. One class acquires the 3D facial structure with dedicated hardware, such as 3D laser scanners or structured-light scanners. These methods obtain highly accurate 3D facial structure data, but the expensive hardware makes them costly, inflexible, and complex, and unsuitable for ordinary settings. Another class reconstructs the 3D face from video or from multi-view photographs; such methods are low-cost and flexible, and can be applied in daily life.
Three-dimensional face reconstruction based on binocular stereo vision belongs to the second class. How to reconstruct the 3D structure of a face from binocular stereo images remains a challenging problem: only one pair of images, from the left and right cameras of a binocular rig, is used to recover the 3D information of the face. Many binocular matching methods exist, including global and local stereo matching algorithms such as the BM, SGM, and MeshStereo algorithms, but the low texture of the face region is the main problem that 3D face reconstruction must solve. Binocular stereo matching methods designed specifically for facial structure have therefore been proposed, such as block matching based on face priors and seed-point growing. These either use high-resolution capture devices (on the order of 1380 pixels across the face region) to obtain fairly accurate results, or use ordinary-resolution (around 640 pixel) cameras at the cost of noticeably poorer facial accuracy. Since the face is a curved surface, stereo matching algorithms based on disparity planes can recover such surfaces well: combining the initial facial disparity with a stereo matching algorithm yields the three-dimensional structure of the face.
Summary of the Invention

In view of the above technical problems, the present invention provides a three-dimensional face reconstruction method based on binocular stereo vision.

According to one aspect of the present invention, a three-dimensional face reconstruction method based on binocular stereo vision is provided. The method comprises:

Step A: constructing a binocular stereo vision system, wherein the binocular stereo vision system includes a left camera and a right camera;

Step B: capturing face images with the binocular stereo vision system, obtaining the left image from the left camera and the right image from the right camera; performing stereo rectification on the left and right images; and detecting the face regions in the left and right images;

Step C: locating and matching facial key points within the face regions of the left and right images;

Step D: initializing a dense facial disparity map from the matched facial key points to obtain the initialized disparity;

Step E: smoothing the initialized disparity with a stereo matching algorithm; and

Step F: performing three-dimensional face reconstruction from the smoothed initialized disparity.
Preferably, in the three-dimensional face reconstruction method of the present invention, the left camera and the right camera are cameras of the same model, and step A includes:

Sub-step A1: calibrating the left camera and the right camera to obtain the intrinsic parameters, distortion parameters, and extrinsic parameters for corresponding 3D points of each; from the extrinsic parameters of the left and right cameras for corresponding 3D points, obtaining the rotation matrix R and translation matrix T of the binocular stereo vision system;

Sub-step A2: from the intrinsic and distortion parameters of the left camera, the intrinsic and distortion parameters of the right camera, and the rotation matrix R and translation matrix T obtained for the binocular stereo vision system, computing a left rectification matrix and a right rectification matrix;

wherein the left rectification matrix is used to stereo-rectify the left image and the right rectification matrix is used to stereo-rectify the right image, so that a point in the left image processed by the left rectification matrix and its matching point in the right image processed by the right rectification matrix lie on the same scan line;

and the stereo rectification of the left and right images in step B includes rectifying the left image with the left rectification matrix and the right image with the right rectification matrix.
Preferably, in the three-dimensional face reconstruction method of the present invention, sub-step A1 includes:

Sub-step A1a: acquiring 10 to 20 sets of planar chessboard images taken at different angles and orientations;

Sub-step A1b: performing chessboard detection on the acquired planar chessboard images and locating the chessboard corners corresponding to the 3D points; then, according to Zhang Zhengyou's calibration method and Brown's algorithm, obtaining the intrinsic parameters and distortion parameters of the left and right cameras and the extrinsic parameters corresponding to the chessboard corners;

wherein the extrinsic parameters corresponding to the chessboard corners include the rotation matrix Rl and translation matrix Tl of the left camera, and the rotation matrix Rr and translation matrix Tr of the right camera;

Sub-step A1c: expressing a corner point Q in the camera coordinate systems of the left and right cameras, with Ql and Qr the corresponding coordinate points in the left and right views, the following relations hold:
Ql=RlQ+Tl Q l =R l Q+T l
Qr=RrQ+Tr Q r =R r Q+T r
Ql=RT(Qr-T)Q l =R T (Q r -T)
where Q is the 3D coordinate of the corner point Q in the world coordinate system, the left view is the image obtained by the left camera, and the right view is the image obtained by the right camera; from these the following relations are derived:

R=Rr(Rl)^T

T=Tr-R·Tl

Given multiple joint views of the chessboard corners and the extrinsic matrix corresponding to each corner, the rotation matrix R and translation matrix T are solved. Because of image noise and rounding error, each chessboard pair yields slightly different values of R and T; the medians of the R and T parameters are therefore taken as the initial values of the true result, and the Levenberg-Marquardt iterative algorithm is used to find the minimum reprojection error of the chessboard corners in the two camera views, returning the rotation matrix R and translation matrix T.
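The relations above can be checked numerically. The sketch below is a minimal illustration with made-up board poses (not the patent's implementation): it forms R and T from each camera's extrinsics and verifies that Ql = R^T(Qr - T) holds for a sample corner point.

```python
import numpy as np

def stereo_extrinsics(Rl, Tl, Rr, Tr):
    # Stereo pose from per-view extrinsics: R = Rr * Rl^T, T = Tr - R * Tl
    R = Rr @ Rl.T
    T = Tr - R @ Tl
    return R, T

def rot_z(a):
    # Rotation about the z axis by angle a (radians).
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Made-up chessboard poses for the two cameras (illustration only).
Rl, Tl = rot_z(0.10), np.array([0.00, 0.00, 1.0])
Rr, Tr = rot_z(0.25), np.array([-0.06, 0.00, 1.0])
R, T = stereo_extrinsics(Rl, Tl, Rr, Tr)

Q = np.array([0.5, -0.2, 2.0])   # a chessboard corner in world coordinates
Ql = Rl @ Q + Tl                 # Ql = Rl*Q + Tl
Qr = Rr @ Q + Tr                 # Qr = Rr*Q + Tr
assert np.allclose(Ql, R.T @ (Qr - T))   # the relation Ql = R^T (Qr - T) holds
```

In practice each chessboard pair gives one such (R, T) estimate; as the text describes, the medians over all pairs seed a Levenberg-Marquardt refinement.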
Preferably, in the three-dimensional face reconstruction method of the present invention, sub-step A2 uses the Bouguet algorithm: from the intrinsic and distortion parameters of the left camera, the intrinsic and distortion parameters of the right camera, and the computed rotation matrix R and translation matrix T of the binocular stereo vision system, the left rectification matrix and right rectification matrix are obtained.

Preferably, in the three-dimensional face reconstruction method of the present invention, a Haar-Adaboost classifier is used in step B to detect the face regions in the left and right images.
Preferably, in the three-dimensional face reconstruction method of the present invention, step C includes:

Sub-step C1: locating the facial key points in the face regions of the left and right images;

Sub-step C2: matching the corresponding facial key points between the left and right images, obtaining the sparse topological face prior: the left face shape SL and the right face shape SR, where SL contains the left-face key point coordinates {(lxi, lyi), i∈[1, n]}, SR contains the right-face key point coordinates {(rxi, ryi), i∈[1, n]}, and n is the total number of key points.
Preferably, in the three-dimensional face reconstruction method of the present invention, sub-step C1 locates the facial key points of the left and right images with the Ensemble of Regression Trees (ERT) algorithm.
Preferably, in the three-dimensional face reconstruction method of the present invention, step D includes:

Sub-step D1: computing the disparity of the matched facial key points between the left and right images;

Sub-step D2: using the disparities of the matched facial key points in the left and right images, computing the disparities of the points other than the matched key points, densifying the facial disparity to obtain the initialized disparity.
Preferably, in the three-dimensional face reconstruction method of the present invention, sub-step D1 computes the disparity of the matched facial key points in the left and right images as

D(pi)=|lxi-rxi|

where lxi is the column of the i-th facial key point in the left image, rxi is the column of its matching key point in the right image, and the disparity D(pi) is the absolute value of the column-coordinate difference of the corresponding key points, with i=1, 2, 3, ..., n and n the number of matched key points.
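As a concrete illustration with made-up column coordinates: after rectification a matched pair shares the same row, so each key point's disparity is simply the absolute column difference.

```python
# Hypothetical column coordinates lx_i / rx_i of four matched key points
# in the rectified left and right images (values are made up).
left_cols = [412, 455, 430, 398]
right_cols = [380, 427, 401, 372]

# D(p_i) = |lx_i - rx_i|
disparity = [abs(lx - rx) for lx, rx in zip(left_cols, right_cols)]
print(disparity)  # [32, 28, 29, 26]
```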
Preferably, in the three-dimensional face reconstruction method of the present invention, sub-step D2 includes:

Sub-step D2a: performing a Delaunay triangulation of the face using the located facial key points, partitioning the face into n triangles;

Sub-step D2b: for each triangle, obtaining the disparities of the points inside the triangle from the disparities of its three vertices, densifying the facial disparity to obtain the initialized disparity.
Preferably, in the three-dimensional face reconstruction method of the present invention, obtaining the disparity of a point inside a triangle from the disparities of its three vertices in sub-step D2b proceeds as follows.

Let the three vertices of the triangle be p1, p2, p3. For every point p inside the triangle there exist u and v such that p relates to p1, p2, p3 by

px=(1-u-v)·p1x+u·p2x+v·p3x

py=(1-u-v)·p1y+u·p2y+v·p3y

Substituting the coordinates (px, py) of point p and the coordinates (p1x, p1y), (p2x, p2y), (p3x, p3y) of points p1, p2, p3 into these equations solves for the parameters u and v; the disparity D(p) of point p is then obtained by interpolation:

D(p)=(1-u-v)·D(p1)+u·D(p2)+v·D(p3).
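A minimal sketch of this barycentric interpolation, using toy triangle coordinates and made-up vertex disparities (not data from the patent):

```python
def barycentric(p, p1, p2, p3):
    """Solve p = (1-u-v)*p1 + u*p2 + v*p3 for (u, v) as a 2x2 linear system."""
    (px, py), (x1, y1), (x2, y2), (x3, y3) = p, p1, p2, p3
    det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    u = ((px - x1) * (y3 - y1) - (py - y1) * (x3 - x1)) / det
    v = ((py - y1) * (x2 - x1) - (px - x1) * (y2 - y1)) / det
    return u, v

def interpolate_disparity(p, tri, d):
    """D(p) = (1-u-v)*D(p1) + u*D(p2) + v*D(p3) for p inside triangle tri."""
    u, v = barycentric(p, *tri)
    return (1 - u - v) * d[0] + u * d[1] + v * d[2]

tri = ((0.0, 0.0), (10.0, 0.0), (0.0, 10.0))  # vertices p1, p2, p3
d = (30.0, 34.0, 38.0)                        # made-up vertex disparities
# At the centroid u = v = 1/3, so the interpolated disparity is the average.
assert abs(interpolate_disparity((10 / 3, 10 / 3), tri, d) - 34.0) < 1e-9
```

Running this over every pixel of every Delaunay triangle densifies the sparse key-point disparities into the initialized disparity map.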
Preferably, in the three-dimensional face reconstruction method of the present invention, step E includes:

Sub-step E1: computing a matching cost from the similarity of corresponding points in the left and right images;

Sub-step E2: performing cost aggregation, combining the matching costs of the corresponding points to obtain the aggregated cost over the points surrounding each corresponding point;

Sub-step E3: selecting the plane with the minimum aggregated matching cost as the optimal plane, and recovering the pixel disparity from its parameters.
Preferably, in the three-dimensional face reconstruction method of the present invention, the matching cost in sub-step E1 is computed as follows:

Let G be the centre point with intensity value I(G), and let N(G) be the square neighbourhood of radius d centred on G; I(G') is the intensity value of a pixel G' within the neighbourhood N(G). If the intensity of a point G' in the neighbourhood is less than that of G, the value at the position of G' is recorded as 1, and otherwise as 0; this defines ε(G, G').

Concatenating the values of the points in the neighbourhood completes the transform, whose value Rτ(G) is the bit string

Rτ(G)=⊗_{G'∈N(G)} ε(G, G')

where ⊗ denotes concatenation. After applying this transform to every point, the point-to-point similarity between the two images is the Hamming distance H between the transform values of corresponding points; the smaller the distance, the higher the similarity. The matching cost between pixels J and K is therefore

ρ(J, K)=H(R(J), R(K));
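A compact sketch of this census-plus-Hamming cost on toy 3x3 intensity patches (radius d = 1); it illustrates the transform described above and is not the patent's code:

```python
def census(img, x, y, d):
    """Census bit string over the (2d+1)x(2d+1) neighbourhood of (x, y):
    bit = 1 where the neighbour is darker than the centre pixel."""
    centre = img[y][x]
    bits = 0
    for dy in range(-d, d + 1):
        for dx in range(-d, d + 1):
            if dx == 0 and dy == 0:
                continue
            bits = (bits << 1) | (1 if img[y + dy][x + dx] < centre else 0)
    return bits

def matching_cost(a, b):
    """rho(J, K) = Hamming distance between two census signatures."""
    return bin(a ^ b).count("1")

left = [[10, 20, 30],
        [40, 50, 60],
        [70, 80, 90]]
right = [[10, 20, 30],
         [40, 65, 61],
         [70, 80, 90]]
# One neighbour flips its darker-than-centre relation, so cost is 1.
print(matching_cost(census(left, 1, 1, 1), census(right, 1, 1, 1)))  # 1
```

Because census compares only intensity ordering, the cost is robust to the brightness differences between the two cameras.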
In sub-step E2, the cost aggregation formula is

C(K, f)=Σ_{J∈WK} W(J, K)·ρ(J, J')

where K is a pixel, J is a point in WK, WK denotes the square window centred on pixel K, J' is the point in the other image displaced from J by the disparity d=af·Jx+bf·Jy+cf given by the disparity plane f at J, and W(J, K) is the weight function.

To handle edges inside the square window, the weight function W(J, K) is defined from the colour similarity of the two points: similar colours receive a high weight and dissimilar colours a low weight, as expressed by

W(J, K)=exp(-||IJ-IK||/γ)

where γ is a defined parameter and ||IJ-IK|| is the L1 norm between J and K in RGB colour space.
In sub-step E3, the disparity computation includes computing the optimal plane:

fJ=argmin_{f∈F} C(J, f)

where F denotes the set of all disparity planes and fJ is the optimal plane at point J; the disparity of the pixel is then recovered from the parameters of the optimal plane.
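The aggregation and plane-selection steps can be sketched as follows. This is a toy illustration in the spirit of PatchMatch-style disparity planes: the window, colours, and candidate planes are made up, and `raw_cost` stands in for the census matching cost.

```python
import math

def weight(cj, ck, gamma=10.0):
    """W(J, K) = exp(-||I_J - I_K||_1 / gamma): similar colours get high weight."""
    return math.exp(-sum(abs(a - b) for a, b in zip(cj, ck)) / gamma)

def plane_disparity(f, x, y):
    """d = a_f*x + b_f*y + c_f for disparity plane f = (a, b, c)."""
    a, b, c = f
    return a * x + b * y + c

def aggregated_cost(f, window, colors, centre, raw_cost):
    """Weighted sum of point costs over the square window W_K for plane f."""
    total = 0.0
    for (x, y) in window:
        w = weight(colors[(x, y)], colors[centre])
        total += w * raw_cost(x, y, plane_disparity(f, x, y))
    return total

def best_plane(planes, window, colors, centre, raw_cost):
    """Optimal plane f_J = argmin over f in F of the aggregated cost C(J, f)."""
    return min(planes,
               key=lambda f: aggregated_cost(f, window, colors, centre, raw_cost))

# Toy setup: the true disparity follows the plane (0.1, 0.0, 5.0).
true_plane = (0.1, 0.0, 5.0)
window = [(x, y) for x in range(3) for y in range(3)]
colors = {p: (128, 128, 128) for p in window}          # uniform patch
raw_cost = lambda x, y, d: abs(d - plane_disparity(true_plane, x, y))
print(best_plane([true_plane, (0.0, 0.0, 4.0)], window, colors, (1, 1), raw_cost))
# -> (0.1, 0.0, 5.0)
```

Because a plane assigns a sub-pixel disparity to every window point, slanted facial surfaces are aggregated without the fronto-parallel bias of a single fixed disparity per window.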
Preferably, in the three-dimensional face reconstruction method of the present invention, step F includes:

selecting any two-dimensional point S with coordinates (x, y) and associated disparity d, and projecting this point into three dimensions; the three-dimensional coordinates are (X/W, Y/W, Z/W), obtained from the matrix equation

[X, Y, Z, W]^T = O·[x, y, d, 1]^T

where O is the projection matrix

O = [ 1   0   0        -cx
      0   1   0        -cy
      0   0   0         f
      0   0   -1/Tx    (cx-cx')/Tx ]

In O, cx denotes the x coordinate of the principal point in the left image, cy the y coordinate of the principal point in the left image, Tx the baseline length between the two cameras, f the focal length of the cameras, and cx' the x coordinate of the principal point in the right image.

The above formulas give the three-dimensional coordinates corresponding to each pixel, and thus the three-dimensional structure of the face.
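This reprojection can be sketched with the standard 4x4 reprojection matrix for a rectified pair. The intrinsics below are made up for illustration, and the sign of the recovered Z depends on the sign convention adopted for Tx.

```python
import numpy as np

def reprojection_matrix(f, cx, cy, cx_r, Tx):
    """O such that [X Y Z W]^T = O [x y d 1]^T; the 3-D point is (X/W, Y/W, Z/W)."""
    return np.array([
        [1.0, 0.0, 0.0, -cx],
        [0.0, 1.0, 0.0, -cy],
        [0.0, 0.0, 0.0, f],
        [0.0, 0.0, -1.0 / Tx, (cx - cx_r) / Tx],
    ])

def to_3d(O, x, y, d):
    """Map a pixel (x, y) with disparity d to 3-D coordinates."""
    X, Y, Z, W = O @ np.array([x, y, d, 1.0])
    return X / W, Y / W, Z / W

# Made-up intrinsics: f = 700 px, aligned principal points, 6 cm baseline.
O = reprojection_matrix(f=700.0, cx=320.0, cy=240.0, cx_r=320.0, Tx=0.06)
X, Y, Z = to_3d(O, x=320.0, y=240.0, d=35.0)
# Depth magnitude follows |Z| = f * Tx / d = 700 * 0.06 / 35 = 1.2
assert abs(abs(Z) - 1.2) < 1e-9
```

Applying this mapping to every facial pixel of the smoothed disparity map produces the dense 3-D point cloud of the face.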
As can be seen from the above technical solutions, the three-dimensional face reconstruction method based on binocular stereo vision of the present invention has at least one of the following beneficial effects:

(1) for the curved-surface structure of the human face, the invention uses a disparity-plane stereo matching algorithm, recovering a smoother three-dimensional facial structure;

(2) the invention uses a facial prior structure to initialize the disparity planes, improving the accuracy of the three-dimensional face reconstruction result;

(3) the invention uses ordinary cameras as image capture devices, reducing equipment cost.
Brief Description of the Drawings

Fig. 1 is a flow chart of the three-dimensional face reconstruction method based on binocular stereo vision according to an embodiment of the present invention.

Fig. 2 is a flow chart of sub-step A1 of the three-dimensional face reconstruction method based on binocular stereo vision according to an embodiment of the present invention.

Fig. 3 is a flow chart of sub-step D2 of the three-dimensional face reconstruction method based on binocular stereo vision according to an embodiment of the present invention.

Fig. 4 is the geometric model of stereo matching vision.
Detailed Description

The invention provides a three-dimensional face reconstruction method based on binocular stereo vision. The method uses only one pair of images, from the left camera and the right camera of a binocular rig, to recover the three-dimensional information of the face.

To make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to specific embodiments and the accompanying drawings.

Embodiments of the present invention are described with reference to Figs. 1 to 4.
As shown in Fig. 1, two cameras are first used to build a parallel binocular stereo vision system; the intrinsic and extrinsic parameters of both cameras are calibrated, and the pair is rectified so that matching points lie on the same image row. One image of the face is then captured by each camera simultaneously. The Viola-Jones face detection method is applied, and the Ensemble of Regression Trees (ERT) algorithm locates the facial key points in the left and right images; the left and right key points are then matched to recover a sparse disparity estimate of the face. A linear interpolation algorithm gives a preliminary estimate of the dense facial disparity, which is smoothed with the disparity-plane-based PatchMatch stereo matching algorithm to reconstruct the dense facial disparity. Finally, from the calibrated parameters and the disparities of the corresponding points, the dense 3-D point cloud information of the face is recovered.
The specific implementation process is described step by step below:
Step A: construct a binocular stereo vision system, wherein the binocular stereo vision system includes a left camera and a right camera.

Two cameras of the same model are placed in parallel to build the binocular stereo vision system, and the parameters of the two cameras are calibrated and rectified.

To reconstruct the three-dimensional structure of the face with a stereo matching algorithm, the intrinsic parameters of the cameras and the relative baseline between the two cameras must first be obtained; only then can the corresponding three-dimensional coordinates be recovered from pixel disparities according to the triangulation principle. This requires calibrating the cameras first.
Sub-step A1:

Calibrate the left camera and the right camera to obtain the intrinsic parameters, distortion parameters, and extrinsic parameters of corresponding 3D points for each. From the extrinsic matrices of the left and right cameras corresponding to each 3D point, the rotation matrix R, the translation matrix T, and the relative baseline of the binocular stereo vision system are obtained.

The invention uses a camera calibration method based on planar chessboard markers. The two cameras capture the left and right input chessboard images. According to Zhang Zhengyou's calibration method, 10 to 20 sets of planar chessboard images at different angles and orientations are acquired. Camera calibration then proceeds by first performing chessboard detection on the input and then locating the chessboard corners corresponding to the 3D points.

According to Zhang Zhengyou's calibration algorithm and Brown's algorithm, the intrinsic parameters and distortion parameters of the left and right cameras, and the extrinsic parameters corresponding to every corner on each chessboard, are obtained, completing the calibration and correction of the left and right cameras.

This yields the correspondence between a point in space and the same point in the image. For a previously recorded corner point Q on the chessboard, the corresponding extrinsic matrices are computed, including the rotation matrix Rl and translation matrix Tl of the left camera and the rotation matrix Rr and translation matrix Tr of the right camera.
Expressing the point Q in the camera coordinate systems of the left and right cameras, with Ql and Qr the corresponding coordinate points in the left and right views, the following relations hold:
Ql=RlQ+Tl Q l =R l Q+T l
Qr=RrQ+Tr Q r =R r Q+T r
Ql=RT(Qr-T)Q l =R T (Q r -T)
where R and T are the relative rotation matrix and translation matrix of the binocular pair; the following simple relations are derived:

R=Rr(Rl)^T
T=Tr-RTl T= Tr - RTl
Given multiple joint views of the chessboard corners and the extrinsic matrix corresponding to each corner, the rotation matrix R and the translation matrix T can be solved. Because of image noise and rounding error, each pair of chessboards yields slightly different results for R and T. The medians of the R and T parameters are taken as the initial values of the true result; the Levenberg-Marquardt iterative algorithm is then used to find the minimum reprojection error of the chessboard corners on the two camera views, and the results for R and T are returned, giving the rotation matrix R, translation matrix T, and relative baseline of the binocular stereo vision system.
Sub-step A2:
Based on the rotation matrix R and translation vector T of the binocular stereo vision system, together with the intrinsic parameter and distortion parameter matrices of the left and right cameras, the left and right rectification matrices are obtained.
Since the two cameras cannot be mounted with perfectly coplanar, row-aligned image planes, the image planes of the two cameras are reprojected so that they lie exactly in one plane, with the image rows aligned in a fronto-parallel configuration. Such rectification simplifies the search range: after rectification, a point in the left image and its matching point in the right image lie on the same scan line. Without epipolar rectification, for a given point in one image the corresponding epipolar line in the other image must be computed and searched for the matching point; this line is in general neither horizontal nor vertical, which increases both the computational complexity and the search range. Here the Bouguet algorithm is used for stereo rectification: from the intrinsic parameter matrices and distortion parameter matrices of the left and right cameras obtained above, and the rotation and translation between the binocular cameras, the left and right rectification matrices are solved so as to minimise the reprojection distortion of each of the two images while maximising the common viewing area.
Step B:
For the face regions in the left and right images, locate and match facial key points.
The left and right cameras of the constructed binocular stereo vision system capture face images, and a face detection algorithm and a key-point localisation algorithm are applied to both images.
Sub-step B1:
A Haar-Adaboost classifier detects the face regions in the left and right images, and the ensemble-of-regression-trees (ERT) algorithm locates the facial key points in each image.
Sub-step B2:
The facial key points of the left and right images are matched, yielding prior sparse topological information of the face: the left face shape SL and the right face shape SR. SL contains the left-image key-point coordinates {(lxi, lyi), i ∈ [1, n]} and SR contains the right-image key-point coordinates {(rxi, ryi), i ∈ [1, n]}, where n is the total number of key points.
Step C:
Use the matched facial key points to initialise a dense facial disparity map.
Sub-step C1:
Compute the disparity of the matched facial key points in the left and right images:
Since the left and right images are stereo-rectified, matched facial key points lie on the same row. The disparity D(pi) of facial key point pi is computed as
D(pi) = |lxi − rxi|
where lxi is the column of the key point in the left image, rxi is the column of its matching key point in the right image, and the disparity D(pi) is the absolute difference of the column coordinates of the corresponding key points.
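As a minimal illustration of the formula above (the key-point columns below are made-up values, not real detections):

```python
import numpy as np

# Columns of matched key points in the rectified left and right images.
lx = np.array([120.0, 150.0, 180.0])
rx = np.array([110.0, 138.0, 171.0])

# D(pi) = |lxi - rxi|
D = np.abs(lx - rx)
assert D.tolist() == [10.0, 12.0, 9.0]
```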
For the structure of a face, however, such a disparity map is too sparse to describe the facial surface well.
Sub-step C2:
Using the disparities of the matched facial key points, compute the disparities of all other points of the face in the left and right images, densifying the facial disparity to obtain the initial disparity map.
Sub-step C2a:
Perform a Delaunay triangulation of the face using the located key points, dividing the face into n triangles.
Sub-step C2b:
For each triangle, obtain the disparity of the points inside the triangle from the disparities of its three vertices.
It is assumed here that the disparity of points inside a triangle is a linear combination of the disparities of the three vertices, so that the vertex disparities determine the interior disparities. Let the three vertices of the triangle be p1, p2, p3. For any point p inside the triangle there exist u and v such that
p = (1 − u − v)·p1 + u·p2 + v·p3
With p = (px, py), p1 = (p1x, p1y), p2 = (p2x, p2y) and p3 = (p3x, p3y), this gives two scalar equations from which u and v can be solved:
px = (1 − u − v)·p1x + u·p2x + v·p3x
py = (1 − u − v)·p1y + u·p2y + v·p3y
The disparity D(p) of point p is then obtained by interpolation:
D(p) = (1 − u − v)·D(p1) + u·D(p2) + v·D(p3)
This yields a dense initial disparity map over the whole face.
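Sub-step C2b can be sketched as follows, for one hypothetical triangle with made-up vertex disparities; a real implementation would loop over all Delaunay triangles of the detected key points.

```python
import numpy as np

def interpolate_disparity(p, p1, p2, p3, D1, D2, D3):
    """Solve p = (1-u-v)*p1 + u*p2 + v*p3 for (u, v), then return
    D(p) = (1-u-v)*D(p1) + u*D(p2) + v*D(p3)."""
    A = np.column_stack([p2 - p1, p3 - p1])   # 2x2 linear system in (u, v)
    u, v = np.linalg.solve(A, p - p1)
    return (1.0 - u - v) * D1 + u * D2 + v * D3

p1, p2, p3 = np.array([0.0, 0.0]), np.array([10.0, 0.0]), np.array([0.0, 10.0])
D1, D2, D3 = 5.0, 9.0, 7.0

# The centroid has u = v = 1/3, so its disparity is the vertex average: 7.0
centroid = (p1 + p2 + p3) / 3.0
assert np.isclose(interpolate_disparity(centroid, p1, p2, p3, D1, D2, D3), 7.0)
```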
Step D:
Smooth the initial disparity map with a stereo matching algorithm.
Traditional local stereo algorithms use integer disparities with a support window and assume that all pixels within the same support window share the same disparity. This assumption fails for curved or slanted surfaces, biasing the result toward fronto-parallel surfaces. Disparity-plane methods such as PatchMatch instead assign each pixel (x0, y0) with disparity d a disparity plane f, where the parameters af, bf, cf describe the plane f and a unit vector represents the plane's normal. As shown in Figure 2, the stereo matching algorithm comprises four steps: cost computation, cost aggregation, disparity computation, and post-processing.
Sub-step D1:
Cost computation measures the similarity of corresponding points in the left and right images, yielding the matching cost of the corresponding points.
The matching cost computation directly affects the accuracy and efficiency of the algorithm. Here the census transform is chosen to compute the similarity of corresponding points. The transform is invariant to monotonic intensity changes: the code depends only on the ordering of pixel intensities, not on their absolute values, which gives it a degree of robustness to noise. The rule is to compare each element of a neighbourhood with the centre element, as follows:
Take G as the centre point and compare its intensity I(G) with the intensity I(G′) of each pixel G′ in its neighbourhood N(G), a square neighbourhood of radius d centred on G. If the intensity of a neighbouring point G′ is less than that of G, the value at the position of G′ is recorded as 1, and otherwise as 0; this defines ε(G, G′). Concatenating these values over the neighbourhood completes the transform, giving the code Rτ(G). After every point has been transformed, the similarity between a point in the left image and a point in the right image is the Hamming distance between their census codes; the smaller the distance, the higher the similarity. The matching cost between pixels J and K is ρ(J, K) = H(R(J), R(K)).
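A compact sketch of the census code and its Hamming cost on a grayscale patch. Border handling is simplified here: edge pixels wrap around via np.roll, which a production implementation would treat explicitly.

```python
import numpy as np

def census(img, r=1):
    # For each pixel, build a bit string: 1 where a neighbour in the
    # (2r+1)x(2r+1) window is darker than the centre, 0 otherwise.
    code = np.zeros(img.shape, dtype=np.int64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            code = (code << 1) | (neighbour < img).astype(np.int64)
    return code

def cost(a, b):
    # Matching cost rho(J, K) = Hamming distance of the census codes.
    return bin(int(a) ^ int(b)).count("1")

patch = np.array([[10, 20, 30],
                  [40, 50, 60],
                  [70, 80, 90]], dtype=float)
c = census(patch)
assert cost(c[1, 1], c[1, 1]) == 0    # identical codes -> zero cost
assert cost(c[1, 1], c[0, 0]) > 0     # different local intensity ordering
```

Because the code records only intensity orderings, adding a constant brightness offset to one image leaves the census codes, and therefore the matching cost, unchanged.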
Sub-step D2:
Cost aggregation combines the matching costs of corresponding points over the surrounding window, yielding the aggregated cost of the points around the corresponding point.
For a pixel J under disparity plane f, the aggregated disparity cost m(J, f) over the points K in its window is
m(J, f) = Σ_{K ∈ WJ} w(J, K)·ρ(K, K′)
where WJ is the square window centred on pixel J, K′ is the point in the other image displaced from K by the disparity dK = af·Kx + bf·Ky + cf of pixel K under plane f. The plane parameters af, bf, cf can be converted to and from the plane's unit normal vector.
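The plane-to-disparity evaluation and the normal-to-coefficients conversion mentioned above can be sketched as follows; the normals and point values are made-up examples.

```python
import numpy as np

def plane_from_normal(n, x0, y0, d0):
    # Convert a normal n = (nx, ny, nz) plus one point (x0, y0) with
    # disparity d0 into coefficients (af, bf, cf) of d = af*x + bf*y + cf.
    nx, ny, nz = n
    return -nx / nz, -ny / nz, (nx * x0 + ny * y0 + nz * d0) / nz

def disparity_on_plane(f, x, y):
    af, bf, cf = f
    return af * x + bf * y + cf

# A fronto-parallel plane (normal along the disparity axis) is constant:
f = plane_from_normal((0.0, 0.0, 1.0), x0=5.0, y0=7.0, d0=12.0)
assert disparity_on_plane(f, 100.0, 200.0) == 12.0

# A slanted plane reproduces the disparity of the point it was built from:
n = np.array([0.1, 0.2, 0.9])
n = n / np.linalg.norm(n)        # unit normal
g = plane_from_normal(n, x0=5.0, y0=7.0, d0=12.0)
assert np.isclose(disparity_on_plane(g, 5.0, 7.0), 12.0)
```

This is the conversion PatchMatch-style methods use when perturbing a plane's normal during refinement: the normal is modified, then mapped back to (af, bf, cf) through the pixel's current disparity.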
The weight function w(J, K) accounts for edges inside the square window by defining the weight from the colour similarity of the two points: similar colours receive a high weight and dissimilar colours a low weight, as
w(J, K) = exp(−||IJ − IK|| / γ)
where γ is a user-defined parameter and ||IJ − IK|| is the L1 norm of the difference between J and K in RGB colour space.
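A minimal sketch of the adaptive weight; the γ value here is a made-up choice, not one specified by the method.

```python
import numpy as np

def weight(I_J, I_K, gamma=10.0):
    # w(J, K) = exp(-||I_J - I_K||_1 / gamma): similar colours give a
    # weight near 1, dissimilar colours a weight near 0.
    diff = np.abs(np.asarray(I_J, dtype=float) - np.asarray(I_K, dtype=float))
    return float(np.exp(-diff.sum() / gamma))

assert np.isclose(weight([100, 100, 100], [100, 100, 100]), 1.0)  # identical
assert weight([0, 0, 0], [255, 255, 255]) < 1e-30                 # exp(-76.5)
```

The effect is that pixels on the far side of a colour edge contribute almost nothing to the aggregated cost, which keeps the support window from smearing disparities across object boundaries.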
Sub-step D3:
Select the plane with the minimum aggregated matching cost as the optimal plane, and recover the pixel disparity from it.
For a pixel J, the plane fJ with the smallest aggregated matching cost m is selected as the optimal plane of the current point:
fJ = argmin_{f ∈ F} m(J, f)
Here F denotes the set of all disparity planes, of which there are infinitely many, so a good plane is found as follows. First the plane and disparity parameters are initialised: the dense facial disparity obtained above is used to initialise the parameter d, constraining the facial structure in the disparity, and the remaining parameters are initialised randomly. The parameter planes are then optimised by iterating three steps: spatial propagation, view propagation, and plane refinement.
Finally, the optimal parameter plane is found, and the disparity of each pixel is recovered from it.
Step E:
Perform 3D face reconstruction from the smoothed initial disparity.
Combining the obtained camera intrinsic parameters, the relative baseline of the cameras, and the smoothed initial disparity at each pixel, the rectified binocular camera pair applies the triangulation principle, as shown in Figure 4. Any two-dimensional point S with coordinates (x, y) and associated disparity d is projected into three dimensions, with 3D coordinates (X/W, Y/W, Z/W), by
[X Y Z W]^T = O·[x y d 1]^T
where O is the reprojection matrix
O = [1, 0, 0, −cx; 0, 1, 0, −cy; 0, 0, 0, L; 0, 0, −1/Tx, (cx − cx′)/Tx]
In O, cx and cy are the x and y coordinates of the principal point in the left image, Tx is the baseline length between the two cameras, L is the focal length of the camera, and cx′ is the x coordinate of the principal point in the right image; the principal point is the point where the principal ray intersects the image plane.
Through the formula above, the 3D coordinates corresponding to each pixel are obtained, and thereby the 3D structure of the face.
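The reprojection can be sketched as follows. The sign convention here follows OpenCV's reprojectImageTo3D, where Tx is the signed translation of the right camera (negative along x); the focal length, baseline, and principal points below are made-up values.

```python
import numpy as np

def reproject(x, y, d, cx, cy, L, Tx, cx_r):
    # O maps [x, y, d, 1]^T to homogeneous [X, Y, Z, W]^T; the 3D point
    # is then (X/W, Y/W, Z/W).
    O = np.array([[1.0, 0.0, 0.0,       -cx],
                  [0.0, 1.0, 0.0,       -cy],
                  [0.0, 0.0, 0.0,         L],
                  [0.0, 0.0, -1.0 / Tx, (cx - cx_r) / Tx]])
    X, Y, Z, W = O @ np.array([x, y, d, 1.0])
    return X / W, Y / W, Z / W

# f = 800 px, baseline 60 mm (Tx = -60 under this sign convention),
# identical principal points in both images.
X, Y, Z = reproject(x=400.0, y=300.0, d=16.0,
                    cx=320.0, cy=240.0, L=800.0, Tx=-60.0, cx_r=320.0)
assert np.isclose(Z, 800.0 * 60.0 / 16.0)            # Z = f * b / d = 3000 mm
assert np.isclose(X, (400.0 - 320.0) * 60.0 / 16.0)  # X scales with offset from cx
```

Note how depth is inversely proportional to disparity: halving d doubles Z, which is why the disparity smoothing of step D directly controls the smoothness of the reconstructed facial surface.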
Addressing the curved surface structure of the human face, the present invention adopts a disparity-plane stereo matching algorithm to recover a smoother 3D facial structure, and uses the prior structure of the face to initialise the disparity planes, improving the accuracy of the 3D face reconstruction result.
This concludes the description of the embodiment of the present invention.
The embodiment has now been described in detail with reference to the accompanying drawings. From the above description, those skilled in the art should have a clear understanding of the binocular-stereo-vision-based 3D face reconstruction method of the present invention.
It should be noted that implementations not shown or described in the drawings or the text of the specification are in forms known to those of ordinary skill in the art and are not described in detail. Moreover, the above definitions of the methods are not limited to the specific forms mentioned in the embodiments; those of ordinary skill in the art may simply modify or replace them.
It should also be noted that examples of parameters with specific values may be provided herein, but these parameters need not exactly equal the corresponding values and may approximate them within acceptable error tolerances or design constraints. Directional terms mentioned in the embodiments, such as "up", "down", "front", "back", "left", and "right", refer only to the directions of the drawings and are not intended to limit the scope of protection of the present invention. Furthermore, unless specifically described or necessarily sequential, the order of the above steps is not limited to that listed and may be changed or rearranged according to the desired design. The above embodiments may also, based on design and reliability considerations, be mixed and matched with one another or with other embodiments; that is, the technical features of different embodiments may be freely combined to form further embodiments.
The specific embodiments described above further detail the objectives, technical solutions, and beneficial effects of the present invention. It should be understood that the above are merely specific embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included within its scope of protection.
Claims (14)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710082476.8A CN106910222A (en) | 2017-02-15 | 2017-02-15 | Face three-dimensional rebuilding method based on binocular stereo vision |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106910222A true CN106910222A (en) | 2017-06-30 |
Non-Patent Citations (6)
- BLEYER M., RHEMANN C., ROTHER C., "PatchMatch Stereo - Stereo Matching with Slanted Support Windows", Proceedings of the British Machine Vision Conference, Dundee: Warwick Print.
- KAZEMI V., SULLIVAN J., "One Millisecond Face Alignment with an Ensemble of Regression Trees", Proceedings of Computer Vision and Pattern Recognition.
- ZABIH R., WOODFILL J., "Non-parametric Local Transforms for Computing Visual Correspondence", Computer Vision - ECCV '94, Springer Berlin Heidelberg.
- LI Jiao et al., "An Efficient Stereo Matching Method Based on Bayesian Theory", Laser & Optoelectronics Progress.
- CHEN Qiang, "Three-dimensional Reconstruction Based on Binocular Stereo Vision", Graphics and Image.
- HAN Huiyan, "Research on 3D Model Reconstruction Methods Based on Binocular Stereo Vision", China Doctoral Dissertations Full-text Database, Information Science and Technology.
Cited By (52)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107730462A (en) * | 2017-09-30 | 2018-02-23 | 努比亚技术有限公司 | A kind of image processing method, terminal and computer-readable recording medium |
CN107680059A (en) * | 2017-09-30 | 2018-02-09 | 努比亚技术有限公司 | A kind of determination methods of image rectification, terminal and computer-readable recording medium |
CN111465818B (en) * | 2017-12-12 | 2022-04-12 | 索尼公司 | Image processing apparatus, image processing method, program, and information processing system |
CN111465818A (en) * | 2017-12-12 | 2020-07-28 | 索尼公司 | Image processing apparatus, image processing method, program, and information processing system |
CN108122280A (en) * | 2017-12-20 | 2018-06-05 | 北京搜狐新媒体信息技术有限公司 | The method for reconstructing and device of a kind of three-dimensional point cloud |
CN108182707B (en) * | 2017-12-21 | 2021-08-10 | 上海汇像信息技术有限公司 | Chessboard grid calibration template under incomplete collection condition and automatic identification method thereof |
CN108182707A (en) * | 2017-12-21 | 2018-06-19 | 上海汇像信息技术有限公司 | Acquire it is imperfect under the conditions of gridiron pattern calibrating template and its automatic identifying method |
CN108174176A (en) * | 2017-12-22 | 2018-06-15 | 洛阳中科众创空间科技有限公司 | A GPU-based high-precision disparity calculation acceleration method |
CN108174176B (en) * | 2017-12-22 | 2020-09-15 | 洛阳中科众创空间科技有限公司 | High-precision parallax calculation acceleration method based on GPU |
CN108305281B (en) * | 2018-02-09 | 2020-08-11 | 深圳市商汤科技有限公司 | Image calibration method, device, storage medium, program product and electronic equipment |
CN108305281A (en) * | 2018-02-09 | 2018-07-20 | 深圳市商汤科技有限公司 | Calibration method, device, storage medium, program product and the electronic equipment of image |
CN108765484B (en) * | 2018-05-18 | 2021-03-05 | 北京航空航天大学 | Living insect motion acquisition and data reconstruction method based on two high-speed cameras |
CN108765484A (en) * | 2018-05-18 | 2018-11-06 | 北京航空航天大学 | Living insects motion pick and data reconstruction method based on two high-speed cameras |
CN108830905A (en) * | 2018-05-22 | 2018-11-16 | 苏州敏行医学信息技术有限公司 | The binocular calibration localization method and virtual emulation of simulating medical instrument cure teaching system |
CN108734776B (en) * | 2018-05-23 | 2022-03-25 | 四川川大智胜软件股份有限公司 | Speckle-based three-dimensional face reconstruction method and equipment |
CN108734776A (en) * | 2018-05-23 | 2018-11-02 | 四川川大智胜软件股份有限公司 | A kind of three-dimensional facial reconstruction method and equipment based on speckle |
CN109146934A (en) * | 2018-06-04 | 2019-01-04 | 成都通甲优博科技有限责任公司 | A kind of face three-dimensional rebuilding method and system based on binocular solid and photometric stereo |
CN109087386A (en) * | 2018-06-04 | 2018-12-25 | 成都通甲优博科技有限责任公司 | A kind of face three-dimensional rebuilding method and system comprising dimensional information |
CN109117726A (en) * | 2018-07-10 | 2019-01-01 | 深圳超多维科技有限公司 | A kind of identification authentication method, device, system and storage medium |
CN109191509A (en) * | 2018-07-25 | 2019-01-11 | 广东工业大学 | A kind of virtual binocular three-dimensional reconstruction method based on structure light |
CN111047678B (en) * | 2018-10-12 | 2024-01-23 | 杭州海康威视数字技术股份有限公司 | Three-dimensional face acquisition device and method |
CN111047678A (en) * | 2018-10-12 | 2020-04-21 | 杭州海康威视数字技术股份有限公司 | Three-dimensional face acquisition device and method |
CN111127524A (en) * | 2018-10-31 | 2020-05-08 | 华为技术有限公司 | Method, system and device for tracking trajectory and reconstructing three-dimensional image |
WO2020155908A1 (en) * | 2019-01-31 | 2020-08-06 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating information |
CN109829915A (en) * | 2019-02-28 | 2019-05-31 | 南京医科大学 | The dedicated smile aesthetics evaluating method of correction and system based on three-dimensional space face type |
CN109829915B (en) * | 2019-02-28 | 2023-05-02 | 南京医科大学 | Orthodontic smile aesthetics evaluation method and system based on three-dimensional space face |
CN110136248A (en) * | 2019-05-20 | 2019-08-16 | 湘潭大学 | A device and method for three-dimensional reconstruction of transmission housing based on binocular stereo vision |
CN110276110A (en) * | 2019-06-04 | 2019-09-24 | 华东师范大学 | A software-hardware collaborative design method for binocular stereo vision system |
CN110544301A (en) * | 2019-09-06 | 2019-12-06 | 广东工业大学 | A three-dimensional human motion reconstruction system, method and motion training system |
CN111028295A (en) * | 2019-10-23 | 2020-04-17 | 武汉纺织大学 | A 3D imaging method based on encoded structured light and binocular |
CN110909634A (en) * | 2019-11-07 | 2020-03-24 | 深圳市凯迈生物识别技术有限公司 | Visible light and double infrared combined rapid in vivo detection method |
CN110889873A (en) * | 2019-11-26 | 2020-03-17 | 中国科学院光电研究院 | A target positioning method, device, electronic device and storage medium |
CN111161397A (en) * | 2019-12-02 | 2020-05-15 | 支付宝(杭州)信息技术有限公司 | Face three-dimensional reconstruction method and device, electronic equipment and readable storage medium |
CN111325828B (en) * | 2020-01-21 | 2024-03-22 | 中国电子科技集团公司第五十二研究所 | Three-dimensional face acquisition method and device based on three-dimensional camera |
CN111325828A (en) * | 2020-01-21 | 2020-06-23 | 中国电子科技集团公司第五十二研究所 | Three-dimensional face acquisition method and device based on three-eye camera |
CN111292297A (en) * | 2020-01-21 | 2020-06-16 | 湖北文理学院 | Welding seam detection method, device and equipment based on binocular stereo vision and storage medium |
US11640680B2 (en) | 2020-01-24 | 2023-05-02 | Axis Ab | Imaging system and a method of calibrating an image system |
CN111354077A (en) * | 2020-03-02 | 2020-06-30 | 东南大学 | A 3D face reconstruction method based on binocular vision |
CN111354077B (en) * | 2020-03-02 | 2022-11-18 | 东南大学 | Binocular vision-based three-dimensional face reconstruction method |
CN111508013B (en) * | 2020-04-21 | 2022-09-06 | 中国科学技术大学 | Stereo matching method |
CN111508013A (en) * | 2020-04-21 | 2020-08-07 | 中国科学技术大学 | Stereo matching method |
CN111611914A (en) * | 2020-05-20 | 2020-09-01 | 北京海月水母科技有限公司 | Human-shaped positioning technology of binocular face recognition probe |
CN111784753B (en) * | 2020-07-03 | 2023-12-05 | 江苏科技大学 | Stereo matching method for 3D reconstruction of the foreground field of view in autonomous underwater robot recovery and docking |
CN111784753A (en) * | 2020-07-03 | 2020-10-16 | 江苏科技大学 | Stereo matching method for 3D reconstruction of the foreground field of view in autonomous underwater robot recovery and docking |
CN112184887A (en) * | 2020-09-29 | 2021-01-05 | 南京鼎毅信息科技有限公司 | Human face three-dimensional reconstruction optimization method based on binocular vision |
CN112412242A (en) * | 2020-11-20 | 2021-02-26 | 福建师范大学 | Automatic door control and anti-pinch system based on binocular stereoscopic vision and method thereof |
CN113450460A (en) * | 2021-07-22 | 2021-09-28 | 四川川大智胜软件股份有限公司 | Phase-expansion-free three-dimensional face reconstruction method and system based on face shape space distribution |
WO2023179459A1 (en) * | 2022-03-24 | 2023-09-28 | 张国流 | Three-dimensional reconstruction method and device based on bionic stereo vision, and storage medium |
CN115049822A (en) * | 2022-05-26 | 2022-09-13 | 中国科学院半导体研究所 | Three-dimensional imaging method and device |
CN116129037B (en) * | 2022-12-13 | 2023-10-31 | 珠海视熙科技有限公司 | Visual-tactile sensor and its three-dimensional reconstruction method, system, device and storage medium |
CN116129037A (en) * | 2022-12-13 | 2023-05-16 | 珠海视熙科技有限公司 | Visual-tactile sensor and its three-dimensional reconstruction method, system, device and storage medium |
CN119445002A (en) * | 2025-01-09 | 2025-02-14 | 杭州定川信息技术有限公司 | A method and device for three-dimensional reconstruction of tidal bore based on binocular camera |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106910222A (en) | Face three-dimensional rebuilding method based on binocular stereo vision | |
US11568516B2 (en) | Depth-based image stitching for handling parallax | |
Concha et al. | Using superpixels in monocular SLAM | |
Hornacek et al. | SphereFlow: 6 DoF scene flow from RGB-D pairs | |
CN104661010B (en) | Method and device for establishing three-dimensional model | |
CN110288712B (en) | Sparse multi-view 3D reconstruction method for indoor scenes | |
US9883163B2 (en) | Method and system for determining camera parameters from a long range gradient based on alignment differences in non-point image landmarks | |
CN103065289B (en) | Front-face reconstruction method using a four-lens camera based on binocular stereo vision | |
KR100755450B1 (en) | 3D reconstruction apparatus and method using planar homography | |
CN105184857B (en) | Scale factor determination method for monocular vision reconstruction based on structured-light ranging | |
CN103299343B (en) | Range image pixel matching method | |
Wong et al. | A stratified approach for camera calibration using spheres | |
CN110853151A (en) | Three-dimensional point set recovery method based on video | |
CN103456038A (en) | Method for reconstructing a three-dimensional scene of a downhole environment | |
CN106485690A (en) | Automatic registration and fusion method of point cloud data and optical images based on point features | |
CN103106688A (en) | Indoor three-dimensional scene rebuilding method based on double-layer rectification method | |
CN101908230A (en) | A 3D Reconstruction Method Based on Region Depth Edge Detection and Binocular Stereo Matching | |
WO2018032841A1 (en) | Method, device and system for drawing three-dimensional image | |
CN106920276A (en) | Three-dimensional reconstruction method and system | |
CN111415375B (en) | SLAM method based on multi-fisheye camera and double-pinhole projection model | |
Yuan et al. | 3D reconstruction of background and objects moving on ground plane viewed from a moving camera | |
Nagy et al. | Development of an omnidirectional stereo vision system | |
CN105574875A (en) | Dense stereo algorithm for fisheye images based on epipolar geometry | |
Banno et al. | Omnidirectional texturing based on robust 3D registration through Euclidean reconstruction from two spherical images | |
Huang et al. | A novel, efficient and accurate method for lidar camera calibration |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20170630 |