
CN112991517B - A 3D reconstruction method for automatic matching of texture image encoding and decoding - Google Patents


Info

Publication number
CN112991517B
CN112991517B (application CN202110250681.7A)
Authority
CN
China
Prior art keywords
image
color
point
matrix
coding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110250681.7A
Other languages
Chinese (zh)
Other versions
CN112991517A (en)
Inventor
胡庆武
陈雨婷
艾明耀
赵鹏程
李加元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University (WHU)
Priority to CN202110250681.7A
Publication of CN112991517A
Application granted
Publication of CN112991517B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/13: Edge detection
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/90: Determination of colour characteristics
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20048: Transform domain processing
    • G06T2207/20061: Hough transform
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30244: Camera pose
    • G06T2219/00: Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20: Indexing scheme for editing of 3D models
    • G06T2219/2012: Colour editing, changing, or manipulating; Use of colour codes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a three-dimensional reconstruction method based on texture-coded images, comprising: step 1, projecting a color-coded image generated by M-array coding onto the object to be measured and photographing the measured area with cameras to obtain left and right images; step 2, extracting the target region based on Hough circle detection and the perspective transformation principle, so that environmental clutter does not affect the projection region; step 3, enhancing the images by color-transfer-based preprocessing, performing color recognition, and converting the color information of each image into code values; step 4, decoding the M-array code directly to obtain matching point pairs; step 5, using the obtained corresponding points to perform three-dimensional reconstruction based on the principle of stereo vision, yielding the three-dimensional coordinates of the spatial points corresponding to the two-dimensional points. The invention combines spatial encoding/decoding with binocular structured-light three-dimensional reconstruction to improve the speed and accuracy of reconstruction, offering a new approach to efficient three-dimensional reconstruction while maintaining high fidelity.

Description

A 3D Reconstruction Method for Automatic Matching of Texture Image Encoding and Decoding

Technical Field

The invention belongs to the field of image encoding/decoding and structured-light three-dimensional reconstruction. Texture-enhanced left and right images are obtained by photographing an object onto which a coded pattern has been projected, and corresponding points in the two images are located directly by a decoding algorithm, enabling fast, high-precision three-dimensional reconstruction.

Background Art

With the continuing advance of science and technology, computer vision has progressed steadily, and three-dimensional reconstruction in particular has developed rapidly, matured gradually, and found wide and effective application in many fields. Traditional three-dimensional reconstruction methods, however, face many limitations in production practice; how to guarantee robust matching during reconstruction, so that the result is trustworthy, while also improving the efficiency of the process, is currently a focus of research in related fields. Three-dimensional reconstruction based on coded structured light uses a projector to map an encoded texture pattern onto the surface of the object under test and then captures images of the object with a camera as sensor. In the captured images, the coded pattern on the object's surface is geometrically deformed by the object's three-dimensional depth; decoding the images with digital image processing locates points directly and yields matching point pairs. The encoding/decoding process thus replaces feature extraction and feature matching, which are error-prone, computationally complex, and time-consuming, so the reconstruction process is fast and its results are highly accurate. The encoding/decoding algorithm is clearly the key to this method and determines the efficiency of reconstruction. Moreover, in recent years many fields have placed higher demands on three-dimensional reconstruction, hoping to reconstruct moving objects while maintaining high precision; this permits only a single image for projection enhancement, so encoding/decoding algorithms for a single image have become an inevitable research trend and remain the focus and difficulty of research and application.

Summary of the Invention

Aimed at structured-light three-dimensional reconstruction using a single M-array coded image, the present invention proposes target-region extraction based on perspective transformation, image preprocessing based on color transfer, a decoding algorithm based on M-array coding, and three-dimensional reconstruction based on stereo vision. The method combines spatial encoding/decoding with stereo-vision three-dimensional reconstruction, avoids feature-point matching, the difficult and error-prone step of traditional reconstruction methods, and improves both the speed of the reconstruction process and the accuracy of its results, offering a new approach to efficient three-dimensional reconstruction while maintaining high fidelity.

To achieve the above purpose, the three-dimensional reconstruction method based on texture-image coding designed by the present invention comprises the following steps:

Step 1): project the color-coded image generated from an M-array coding matrix onto the object to be measured, and photograph the measured area with cameras to obtain a left image and a right image.

Step 2): extract the projected target regions of the left and right images based on Hough circle detection and the perspective transformation principle, so that environmental clutter does not affect the projection region;

Step 3): based on color-transfer technology, enhance the left and right images after target-region extraction, perform color recognition, and convert the color information of each image into code values, obtaining a color-recognition map for the left image and one for the right image;

Step 4): decode the color-recognition maps obtained in step 3) directly according to the M-array coding method to obtain matching point pairs;

Step 5): using the obtained matching point pairs, perform three-dimensional reconstruction based on the principle of stereo vision to obtain the three-dimensional coordinates of the spatial points corresponding to the two-dimensional points.

Further, step 1) is implemented as follows:

First, the designed single color-coded image is projected onto the surface of the object under test by a projector; a camera then photographs the object from different angles to obtain a left image and a right image. The color-coded image is generated from an M-array coding matrix.

Further, step 2) is implemented as follows:

First, gradient edge detection is performed on each image to obtain a binary edge map. Next, a Hough-transform circle-detection function is used; with suitable values for its three input parameters (minimum distance, maximum radius, and minimum radius), the center coordinates of the four circles in the captured image, i.e., the coordinates of the four corner points of the projection region, are detected. Finally, the four detected corner coordinates are taken as the source coordinates and the four corner coordinates of the designed color-coded image as the coordinates after perspective transformation; substituting these four pairs of two-dimensional mapped points into the perspective-transformation equations and solving for the eight unknowns yields the perspective transformation matrices used to extract the target regions of the left and right images.
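The eight-unknown solve described above can be sketched in pure NumPy (a minimal illustration; the circle-center coordinates below are hypothetical, and in practice OpenCV's getPerspectiveTransform performs the same computation):

```python
import numpy as np

def perspective_matrix(src, dst):
    """Solve the eight unknowns of a 3x3 perspective (homography)
    matrix from four 2-D point correspondences (h33 fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, p):
    """Apply the perspective matrix to one point."""
    x, y, w = H @ np.array([p[0], p[1], 1.0])
    return np.array([x / w, y / w])

# Hypothetical detected circle centers (source) mapped to the corners
# of a 600 x 600 coded image (destination).
src = [(102.0, 95.0), (515.0, 110.0), (530.0, 480.0), (90.0, 470.0)]
dst = [(0.0, 0.0), (600.0, 0.0), (600.0, 600.0), (0.0, 600.0)]
H = perspective_matrix(src, dst)
```

Applying `H` to every pixel of the extracted region rectifies it to the geometry of the designed coded image.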

Further, step 3) is implemented as follows:

First, the color-coded image is taken as the target image of the color transfer and the image after target-region extraction as the source image. Both images are converted from the RGB color space to the lαβ color space, which has an approximately orthogonal luminance component l and two chrominance components α and β;

Next, the per-channel means and standard deviations of both images in lαβ space are computed. The source-image data in each lαβ channel first have the source image's channel mean subtracted; the resulting values are then scaled by the ratio of the target image's standard deviation to the source image's standard deviation in that channel; finally the target image's channel mean is added, yielding a color-transferred source image whose channel means and variances match those of the target image;
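The mean/standard-deviation matching in this step can be sketched as follows (a per-channel illustration only; the RGB-to-lαβ conversion performed before and after this step is omitted, and array shapes are assumed H x W x 3):

```python
import numpy as np

def transfer_channel_stats(source, target):
    """Reinhard-style statistics transfer: remap each channel of
    `source` so its mean and standard deviation match `target`.
    Both inputs are H x W x 3 arrays holding the l, alpha, beta planes."""
    out = np.empty(source.shape, dtype=float)
    for c in range(source.shape[2]):
        s = source[..., c].astype(float)
        t = target[..., c].astype(float)
        scale = t.std() / max(s.std(), 1e-12)  # guard against flat channels
        out[..., c] = (s - s.mean()) * scale + t.mean()
    return out
```

After this remapping the projected colors in the captured image cluster around the designed pattern's colors, which is what makes the simple threshold-based recognition of the next step reliable.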

Finally, color recognition is performed on the color-transferred image. The image is converted from the lαβ color space back to RGB, the RGB value of each pixel is extracted, and the pixels are classified by simple decision rules: if the gray values of all three channels exceed a threshold t1, the pixel is recognized as white; if the red channel's gray value is the largest and its differences from the other two channels both exceed t2, the pixel is recognized as red; green and blue are recognized on the same principle. Pixels of the three recognized colors are denoted by the values "0", "1", and "2" used when designing the coding pattern, and white by "3", giving an image-sized matrix, the color-recognition map.
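The decision rules above can be sketched as follows (the thresholds t1 = 50 and t2 = 60 are the example values given in the embodiment later in the description; the helper names are illustrative):

```python
import numpy as np

def classify_pixel(r, g, b, t1=50, t2=60):
    """Map one RGB pixel to a code value: 0=red, 1=green, 2=blue,
    3=white border, -1=unclassified."""
    if r > t1 and g > t1 and b > t1:
        return 3                    # all channels bright -> white border
    if r > g + t2 and r > b + t2:
        return 0                    # red dominates -> code 0
    if g > r + t2 and g > b + t2:
        return 1                    # green dominates -> code 1
    if b > r + t2 and b > g + t2:
        return 2                    # blue dominates -> code 2
    return -1                       # ambiguous pixel

def color_recognition_map(img):
    """Apply classify_pixel over an H x W x 3 uint8 image array."""
    return np.array([[classify_pixel(*map(int, px)) for px in row]
                     for row in img])
```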

Further, step 4) is implemented as follows:

From the left image's color-recognition map, a right-neighborhood map and a lower-neighborhood map are constructed. To construct the right-neighborhood map, for each pixel the map is searched to the right until the first primitive pixel beyond the white border is found, and that pixel's code value and coordinates are recorded in the right-neighborhood map; the lower-neighborhood map is constructed in the same way, searching downward. This procedure assumes the pixel lies inside a single color block; if the pixel lies on the white border, the search proceeds rightward or downward from it until the first pixel with code value "0", "1", or "2" is found, which is taken as the starting point;
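The rightward search can be sketched as follows (a minimal illustration over a code-value matrix; the lower-neighborhood search is identical with the roles of rows and columns swapped):

```python
import numpy as np

def right_neighbor(code_map, row, col):
    """From (row, col) in a code map (values 0/1/2 = color primitives,
    3 = white border), scan right past the next white border and return
    (code, column) of the first primitive pixel found, or None at the
    image edge. If (row, col) is already on the border, the first loop
    is skipped and the scan simply crosses the border."""
    w = code_map.shape[1]
    c = col
    while c < w and code_map[row, c] != 3:   # leave the current block
        c += 1
    while c < w and code_map[row, c] == 3:   # cross the white border
        c += 1
    return (int(code_map[row, c]), c) if c < w else None
```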

Next, for an undetermined point in the left image, the color-recognition map, right-neighborhood map, and lower-neighborhood map are used: the right-neighborhood map gives the primitive immediately to the right of the current primitive and the lower-neighborhood map the primitive immediately below it; a 3×3 window around the current primitive is constructed and matched as a template against the M-array coding matrix corresponding to the color-coded image, which localizes the window and the primitive. A primitive is a single color block in the color-coded image; the values of the coding matrix correspond one-to-one with the primitives of the color-coded image, with "0", "1", and "2" corresponding to the red, green, and blue primitives of the designed coded image, respectively;
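The window lookup can be sketched as a brute-force template match (a minimal illustration; the test matrix below is arbitrary and stands in for the M-array, whose window property guarantees that each 3×3 sub-window occurs at most once, so a match uniquely localizes the primitive):

```python
import numpy as np

def locate_window(m_array, window):
    """Return the (row, col) at which the 3x3 code window occurs in the
    coding matrix, or None if it does not occur."""
    rows, cols = m_array.shape
    for r in range(rows - 2):
        for c in range(cols - 2):
            if np.array_equal(m_array[r:r + 3, c:c + 3], window):
                return r, c
    return None
```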

Then, finer localization within the primitive is performed from the positional relationship between the undetermined point's pixel coordinates and the white border surrounding the primitive. If the point lies inside the primitive, searches are run upward, downward, leftward, and rightward from it, each stopping when the white border is hit, i.e., at the first point with code value "3"; counting the pixels traversed in the four directions, a simple calculation yields the position. If the point is not inside a primitive, the search proceeds rightward or downward until the first point inside a primitive is found within a threshold range, and that point is taken as the starting point before precise localization;
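The four-direction pixel counting can be sketched as follows (a minimal illustration that assumes the point lies inside a primitive and that a white border fully encloses the block; the fractional-position output is one plausible form of the "simple calculation"):

```python
import numpy as np

def relative_position(code_map, row, col):
    """For a point inside a primitive (code 0/1/2), count pixels to the
    white border (code 3) in the four directions and return the point's
    fractional (x, y) position within the block, each in [0, 1]."""
    def steps(dr, dc):
        r, c, n = row + dr, col + dc, 0
        while code_map[r, c] != 3:      # assumes a border is always hit
            r, c, n = r + dr, c + dc, n + 1
        return n
    up, down = steps(-1, 0), steps(1, 0)
    left, right = steps(0, -1), steps(0, 1)
    width, height = left + right + 1, up + down + 1
    return (left + 0.5) / width, (up + 0.5) / height
```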

Finally, the matching point of the undetermined point is searched for in the right image, by steps similar to primitive localization and precise in-primitive localization: the primitive is first localized from the position of the point in the left image, then precisely localized within the primitive. During primitive localization, the geometric constraint of the perspective transformation first fixes the starting position of the search, which is confined to this limited range; if no matching primitive is found within the range, matching of that point is abandoned.

Further, step 5) is implemented as follows:

First, the camera intrinsic matrix is calibrated with the MATLAB camera calibration toolbox: a checkerboard calibration image is produced as the calibration board, the board is photographed with the camera from different angles to obtain as many pictures as possible, the actual side length of the checkerboard squares is measured with a ruler in millimeters, and all the data are entered into the calibration toolbox to obtain the calibrated camera intrinsics;

Next, the fundamental matrix is computed with the RANSAC eight-point algorithm: eight point pairs are selected at random from the corresponding pairs, and the fundamental matrix is computed by solving a system of linear equations; all original point pairs are then examined for pairs that support the computed matrix. If enough pairs support it, the result is considered credible, and all supporting pairs are used in a least-squares fit to obtain the final fundamental matrix; otherwise, if only a few matching pairs satisfy the initially computed matrix, the steps above are repeated until the optimal solution is found. Once the optimal fundamental matrix is obtained, the essential matrix is computed from the fundamental matrix and the camera intrinsic matrix;
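The linear eight-point estimate and the fundamental-to-essential conversion can be sketched as follows (a minimal illustration without the RANSAC loop or Hartley normalization; pts_l and pts_r are matched coordinates in the left and right images):

```python
import numpy as np

def eight_point_fundamental(pts_l, pts_r):
    """Linear eight-point estimate of the fundamental matrix F with
    x_r^T F x_l = 0, followed by projection to rank 2. Inputs are two
    (N >= 8, 2) arrays of matched coordinates; Hartley normalization,
    omitted here, improves conditioning on real pixel data."""
    A = np.array([[xr * xl, xr * yl, xr, yr * xl, yr * yl, yr, xl, yl, 1.0]
                  for (xl, yl), (xr, yr) in zip(pts_l, pts_r)])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)   # null vector of A
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0                                  # enforce rank 2
    return U @ np.diag(S) @ Vt

def essential_from_fundamental(F, K):
    """E = K^T F K for a shared camera intrinsic matrix K."""
    return K.T @ F @ K
```

In the full method this estimate sits inside the RANSAC loop: random eight-pair samples are drawn, inliers are counted against the resulting F, and the final F is refit on all inliers.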

Finally, the essential matrix depends only on the relative position and attitude of the cameras at the two exposures. SVD decomposition of the essential matrix yields the transformation matrix [R|t] of the right image relative to the left; the computation produces four candidate results, of which only one places the points computed by triangulation in front of both cameras, so the unique result is obtained by triangulating a point and verifying.
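The SVD decomposition into four candidates and the points-in-front check can be sketched as follows (a minimal illustration in normalized camera coordinates; t is recovered only up to scale):

```python
import numpy as np

def decompose_essential(E):
    """SVD of the essential matrix yields four candidate (R, t) pairs;
    only one passes the cheirality (points-in-front) test."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    t = U[:, 2]
    return [(U @ W @ Vt, t), (U @ W @ Vt, -t),
            (U @ W.T @ Vt, t), (U @ W.T @ Vt, -t)]

def triangulate(xl, xr, R, t):
    """Linear (DLT) triangulation of one normalized-coordinate match,
    with camera 1 = [I|0] and camera 2 = [R|t]."""
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([R, t.reshape(3, 1)])
    A = np.array([xl[0] * P1[2] - P1[0], xl[1] * P1[2] - P1[1],
                  xr[0] * P2[2] - P2[0], xr[1] * P2[2] - P2[1]])
    Xh = np.linalg.svd(A)[2][-1]
    return Xh[:3] / Xh[3]

def pick_pose(E, xl, xr):
    """Return the candidate whose triangulated point has positive depth
    in both cameras."""
    for R, t in decompose_essential(E):
        X = triangulate(xl, xr, R, t)
        if X[2] > 0 and (R @ X + t)[2] > 0:
            return R, t
    return None
```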

The present invention has the following positive effects:

1) The invention proposes a decoding method based on a single M-array coding scheme: the coordinates within the coded pattern of points in the left and right images are computed separately, and points with identical coordinates form a matching pair. This process replaces the feature-point matching algorithm of traditional three-dimensional reconstruction, which is highly complex, time-consuming, and prone to mismatches, improving the accuracy and efficiency of reconstruction; since only one coded image is needed for localization, the method also applies to moving scenes.

2) Before decoding the coded images, the invention first extracts the target with a perspective transformation, which prevents environmental clutter from affecting the projection region and also simplifies the subsequent matching of corresponding points between the decoded left and right images.

3) The invention proposes preprocessing with color-transfer technology, which solves the problem of low decoding accuracy caused by the object's own surface texture and ambient illumination; after color transfer the image's color information is rich, and simple decision rules suffice for color recognition.

With the invention, three-dimensional reconstruction of the object under test is achieved more accurately and robustly; it combines spatial encoding/decoding with binocular structured-light three-dimensional reconstruction, fusing the advantages of both to improve the speed of the reconstruction process and the accuracy of its results.

Description of the Drawings

Figure 1 is the color-coded image used for projection in the present invention.

Figure 2 is the flow chart of the present invention.

Figure 3 is the flow chart for constructing the neighborhood maps in the present invention.

Figure 4 is the flow chart of window template matching for primitive localization in the present invention.

Figure 5 is a schematic diagram of precise localization within a primitive in the present invention.

Detailed Description of Embodiments

To make the objectives, technical solutions, and effects of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings.

Embodiment 1: a three-dimensional reconstruction method based on texture-image coding, comprising the following steps:

Step 1): this step captures the projection-enhanced texture-coded images of the object under test for subsequent processing. First, the designed single color-coded image is projected onto the surface of the object by a projector; a camera then photographs the object from different angles to obtain a left image and a right image. The color-coded image is generated from an M-array coding matrix.

Step 2): extract the projected target regions of the left and right images using the perspective transformation principle, so that environmental clutter does not affect the projection region.

First, gradient edge detection is performed on each image to obtain a binary edge map.

Next, the Hough-transform circle-detection function packaged in OpenCV is used; with suitable values for its three input parameters (minimum distance, maximum radius, and minimum radius), the center coordinates of the four circles in the captured image, i.e., the coordinates of the four corner points of the projection region, are detected.

Finally, the four detected corner coordinates are taken as the source coordinates and the four corner coordinates of the designed color-coded image as the coordinates after perspective transformation; substituting these four pairs of two-dimensional mapped points into the perspective-transformation equations and solving for the eight unknowns yields the perspective transformation matrices for extracting the target regions of the left and right images.

Step 3): enhance the left and right images after target-region extraction using color-transfer technology, perform color recognition, and convert the color information of each image into code values to obtain the color-recognition maps.

First, the color-coded image is taken as the target image of the color transfer and the image after target-region extraction as the source image; both images are converted from the RGB color space to the lαβ color space (an approximately orthogonal luminance component l and two chrominance components α and β).

Next, the per-channel means and standard deviations of both images in lαβ space are computed. The source-image data in each lαβ channel first have the source image's channel mean subtracted; the resulting values are then scaled by the ratio of the target image's standard deviation to the source image's standard deviation in that channel; finally the target image's channel mean is added, yielding a color-transferred source image whose channel means and variances match those of the target image.

Finally, color recognition is performed on the color-transferred image. The image is converted from the lαβ color space back to RGB, the RGB value of each pixel is extracted, and the pixels are classified by simple decision rules. In this example, if the gray values of all three channels exceed 50, the pixel is recognized as white; if the red channel's gray value is the largest and its differences from the other two channels both exceed 60, the pixel is recognized as red; green and blue are recognized on the same principle as red. Pixels of the three recognized colors are denoted by the values "0", "1", and "2" used when designing the coding pattern, and white by "3", giving an image-sized matrix, the color-recognition map.

Step 4): decode the color-recognition maps obtained in step 3) directly according to the M-array coding method to obtain matching point pairs.

First, as shown in Figure 3, a right-neighborhood map and a lower-neighborhood map are constructed from the left image's color-recognition map. To construct the right-neighborhood map, for each pixel the map is searched to the right until the first primitive pixel beyond the white border (code value "3") is found, and that pixel's code value and coordinates are recorded in the right-neighborhood map; the lower-neighborhood map is constructed likewise, searching downward until the first primitive pixel beyond the white border (code value "3") is found and recording its code value and coordinates. This procedure assumes the pixel lies inside a single color block; if the pixel lies on the white border, the search proceeds rightward or downward from it until the first pixel with code value "0", "1", or "2" is found, which is taken as the starting point.

Next, as shown in Figure 4, for a point to be located in the left image, the color identification map, the right-neighborhood map, and the lower-neighborhood map are used to find the right primitive nearest the current primitive and the lower primitive nearest the current primitive. From these, the 3×3 window containing the current primitive is constructed and matched as a template against the M-array coding matrix corresponding to the color-coded image, which realizes both window positioning and primitive positioning. A primitive is a single color block in the color-coded image; the values of the coding matrix correspond one-to-one to the primitives of the color-coded image, with "0", "1", and "2" corresponding to the red, green, and blue primitives of the designed coding image.
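Template matching of the 3×3 code window against the M-array can be sketched as below. The window property of an M-array guarantees that each 3×3 sub-window occurs at most once, so a single hit fixes the primitive's position in the projected pattern (a minimal sketch; the function name is assumed).

```python
import numpy as np

def locate_window(m_array, window):
    """Slide a 3x3 code window over the M-array coding matrix and
    return the (row, col) of the unique match, or None if absent."""
    rows, cols = m_array.shape
    for i in range(rows - 2):
        for j in range(cols - 2):
            if np.array_equal(m_array[i:i + 3, j:j + 3], window):
                return i, j
    return None
```

In practice a hash table keyed by the flattened window would replace the brute-force scan, but the exhaustive search makes the uniqueness argument explicit.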

Then, as shown in Figure 5, the point is located precisely within the primitive using the positional relationship between its pixel coordinates and the white border surrounding the primitive. If the point lies inside the primitive, the scan proceeds up, down, left, and right from it until the white border is reached in each direction, that is, until the first pixel with code value "3" is found; counting the pixels traversed in the four directions then gives the position by simple arithmetic. If the point is not inside a primitive, the scan proceeds right or down until the first point inside a primitive is found within a threshold range, and that point is used as the starting point for precise positioning.
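Counting pixels to the white border in the four directions, as described, might look like this (an illustrative sketch; the sub-primitive offset then follows from simple ratios of these counts, e.g. left / (left + right) horizontally).

```python
import numpy as np

def border_distances(code_map, y, x):
    """Count pixels from (y, x) to the first white-border pixel
    (code 3) in each of the four directions.
    Returns (up, down, left, right)."""
    h, w = code_map.shape

    def walk(dy, dx):
        n, yy, xx = 0, y + dy, x + dx
        while 0 <= yy < h and 0 <= xx < w and code_map[yy, xx] != 3:
            n += 1
            yy += dy
            xx += dx
        return n

    return walk(-1, 0), walk(1, 0), walk(0, -1), walk(0, 1)
```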

Finally, the matching point is searched for in the right image, following steps similar to primitive positioning and precise positioning within the primitive: the primitive is first located from the position of the point in the left image, and the position is then refined within the primitive. During primitive positioning, the geometric constraint of the perspective transformation is used to determine the starting position of the search, and the search is restricted to a limited range; if no matching primitive is found within that range, the match for that point is abandoned.

Step 5): using the obtained matching point pairs, perform three-dimensional reconstruction based on the principle of stereo vision to obtain the three-dimensional coordinates of the space points corresponding to the two-dimensional points.

First, the camera intrinsic matrix is calibrated with the MATLAB camera calibration toolbox. A checkerboard image is made as the calibration board and photographed with the camera from as many different angles as possible, and the actual side length of the checkerboard squares, in millimeters, is measured with a ruler. Feeding these data into the calibration toolbox yields the calibration result for the camera's intrinsic parameters.
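The intrinsic parameters recovered by the checkerboard calibration enter the later steps as the standard pinhole matrix K. As a minimal sketch (the focal lengths and principal point below are made-up values, not calibration results from the patent):

```python
import numpy as np

def intrinsic_matrix(fx, fy, cx, cy):
    """Assemble the pinhole intrinsic matrix K that the checkerboard
    calibration estimates (skew assumed zero)."""
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

def project(K, X):
    """Project a 3-D point given in camera coordinates to pixel
    coordinates via the pinhole model."""
    x = K @ np.asarray(X, dtype=float)
    return x[:2] / x[2]
```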

Second, the fundamental matrix is estimated with the RANSAC eight-point algorithm. Eight point pairs are selected at random from the corresponding point pairs and the fundamental matrix is computed by solving a linear system of equations; all original point pairs are then checked for support of this estimate. If enough point pairs support it, the estimate is considered credible, and the final fundamental matrix is computed from all supporting points by least squares; otherwise, if only a small number of matching point pairs satisfy the initial estimate, the above steps are repeated until the optimal solution is found. Once the optimal fundamental matrix is obtained, it is combined with the camera intrinsic matrix to compute the essential matrix.
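The RANSAC eight-point loop described here can be sketched as follows. This is an illustrative NumPy version, not the patent's implementation: Hartley normalization of image coordinates is omitted, the algebraic residual |x2ᵀ F x1| is used as a cheap support test, and the iteration count and threshold are placeholders.

```python
import numpy as np

def eight_point(pts1, pts2):
    """Linear eight-point estimate of the fundamental matrix from
    N >= 8 correspondences (N x 2 arrays), with the rank-2 constraint
    enforced by zeroing the smallest singular value."""
    n = pts1.shape[0]
    A = np.column_stack([
        pts2[:, 0] * pts1[:, 0], pts2[:, 0] * pts1[:, 1], pts2[:, 0],
        pts2[:, 1] * pts1[:, 0], pts2[:, 1] * pts1[:, 1], pts2[:, 1],
        pts1[:, 0], pts1[:, 1], np.ones(n),
    ])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)          # null vector of the design matrix
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0                        # enforce rank 2
    return U @ np.diag(S) @ Vt

def ransac_fundamental(pts1, pts2, iters=500, thresh=1e-2, seed=0):
    """Draw random 8-point samples, keep the hypothesis with the most
    support, then refit on all inliers (the least-squares step)."""
    rng = np.random.default_rng(seed)
    n = pts1.shape[0]
    h1 = np.column_stack([pts1, np.ones(n)])
    h2 = np.column_stack([pts2, np.ones(n)])
    best = None
    for _ in range(iters):
        idx = rng.choice(n, 8, replace=False)
        F = eight_point(pts1[idx], pts2[idx])
        # algebraic epipolar residual |x2^T F x1| as the support test
        res = np.abs(np.sum(h2 * (h1 @ F.T), axis=1))
        inliers = res < thresh
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return eight_point(pts1[best], pts2[best]), best
```

With calibrated coordinates the same machinery estimates the essential matrix directly; with pixel coordinates one would compute E = KᵀFK as the text describes.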

Third, since the essential matrix depends only on the relative position and attitude of the camera between the two exposures, it is decomposed by SVD to obtain the transformation matrix [R|t] of the right image relative to the left image. The decomposition yields four candidate results, only one of which places the points computed by triangulation in front of both cameras; the unique result is selected by triangulating a test point and verifying.
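The SVD decomposition into four candidates and the front-of-both-cameras (cheirality) test can be sketched like this. It follows the standard Hartley–Zisserman recipe and assumes normalized image coordinates, i.e. the intrinsics have already been removed; function names are illustrative.

```python
import numpy as np

def decompose_essential(E):
    """Decompose an essential matrix into its four candidate
    (R, t) pairs via SVD."""
    U, _, Vt = np.linalg.svd(E)
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    if np.linalg.det(R1) < 0:        # keep proper rotations (det = +1)
        R1, R2 = -R1, -R2
    t = U[:, 2]                      # translation known only up to scale
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one normalized correspondence."""
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def pick_pose(E, x1, x2):
    """Return the unique candidate that puts the triangulated test
    point in front of both cameras (positive depth)."""
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    for R, t in decompose_essential(E):
        P2 = np.hstack([R, t.reshape(3, 1)])
        X = triangulate(P1, P2, x1, x2)
        if X[2] > 0 and (R @ X + t)[2] > 0:
            return R, t
    return None
```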

Finally, triangulation is applied: using the obtained [R|t] matrix, the rays of each corresponding point pair are intersected, giving the three-dimensional coordinates of the space point for each two-dimensional match.
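A linear (DLT) triangulation of all matched pairs, as a sketch under the same normalized-coordinate assumption (3×4 projection matrices P1 = [I|0] and P2 = [R|t]):

```python
import numpy as np

def triangulate_points(P1, P2, pts1, pts2):
    """DLT triangulation of matched normalized image points (N x 2
    each) given 3x4 projection matrices; returns N x 3 space points."""
    out = []
    for (u1, v1), (u2, v2) in zip(pts1, pts2):
        # each view contributes two linear constraints on X
        A = np.array([u1 * P1[2] - P1[0],
                      v1 * P1[2] - P1[1],
                      u2 * P2[2] - P2[0],
                      v2 * P2[2] - P2[1]])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]                    # homogeneous solution
        out.append(X[:3] / X[3])
    return np.array(out)
```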

The above are only preferred embodiments of the present invention and are not intended to limit it. Those skilled in the art may make various modifications and variations; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (5)

1. A three-dimensional reconstruction method based on texture image coding is characterized by comprising the following steps:
step 1), projecting a color coding image generated by using an M array coding matrix onto an object to be detected, and shooting an area to be detected by using a camera to obtain a left image and a right image;
step 2), extracting the projected target area from the left image and the right image based on a Hough circle detection method and the perspective transformation principle, so as to avoid the influence of environmental clutter on the projection area;
step 3), respectively enhancing the left image and the right image after the target area is extracted based on a color migration technology, carrying out color identification, converting the color information of the image into a code value, and obtaining a left image color identification image and a right image color identification image;
step 4), directly decoding the color identification maps obtained in step 3) according to the M-array coding method to obtain matching point pairs;
the specific implementation manner of the step 4) is as follows;
respectively constructing a right neighborhood map and a lower neighborhood map according to the left image color identification map; the right neighborhood map is constructed by searching rightwards from each pixel point in the color identification map until the first primitive pixel after the white frame is reached, and recording the code value and the coordinates of that point in the right neighborhood map; the lower neighborhood map is constructed by searching downwards in the same way and recording the code value and the coordinates of that point in the lower neighborhood map; this construction assumes that the pixel point is located inside a single color block, and if the pixel point is located on the white frame, the search proceeds rightwards or downwards from that point until the first pixel with code value 0, 1 or 2 is reached, which is taken as the starting point;
secondly, for an undetermined point in the left image, the color identification map, the right neighborhood map and the lower neighborhood map are used to find the right primitive most adjacent to the current primitive in the right neighborhood map and the lower primitive most adjacent to the current primitive in the lower neighborhood map, so as to construct the 3 x 3 window in which the current primitive is located; this window is matched as a template against the M-array coding matrix corresponding to the color-coded image, realizing window positioning and primitive positioning, wherein a primitive refers to a single color block in the color-coded image, the values of the coding matrix correspond one to one to the primitives of the color-coded image, and the values 0, 1 and 2 correspond respectively to the red, green and blue primitives used when designing the coded image;
thirdly, positioning the point accurately within the primitive according to the positional relation between the pixel coordinates of the point to be positioned and the white frame surrounding the primitive: if the point is located inside the primitive, searching from it in the four directions of up, down, left and right until the white frame is reached, namely stopping at the first point with code value 3, counting the number of pixels traversed in each of the four directions, and obtaining the position through simple calculation; if the point is not inside a primitive, searching rightwards or downwards until the first point inside a primitive is found within the threshold range, and taking that point as the starting point for accurate positioning;
finally, searching for the matching point of the undetermined point in the right image: similarly to the steps of primitive positioning and accurate positioning within the primitive, the primitive is first located according to the position of the point in the left image, and further accurate positioning within the primitive is then performed; when the primitive is positioned, the geometric constraint of the perspective transformation is first utilized to determine the initial position of the search, the search is only carried out within a certain range, and if no matching primitive can be found within the range, the matching of the point to be positioned is abandoned;
and 5) carrying out three-dimensional reconstruction based on the stereoscopic vision principle by using the obtained matching point pairs to obtain three-dimensional coordinates of the space points corresponding to the two-dimensional points.
2. The method of claim 1, wherein the texture image coding-based three-dimensional reconstruction method comprises: the specific implementation manner of the step 1) is as follows;
firstly, projecting a designed single color coding image to the surface of an object to be measured through a projector, then shooting images of the object to be measured from different angles by adopting a camera to obtain a left image and a right image, wherein the color coding image is generated by utilizing an M array coding matrix.
3. The method of claim 1, wherein the texture image coding-based three-dimensional reconstruction method comprises: the specific implementation manner of the step 2) is as follows;
firstly, carrying out gradient edge detection on an image to obtain a binary image of edge detection; secondly, detecting a circular function by using Hough transform, selecting three input parameters of proper minimum distance, maximum radius and minimum radius, and detecting to obtain the center coordinates of four circles in the camera-shot image, namely the coordinates of four corner points of the projection area; and finally, taking the four detected corner coordinates as original coordinates, taking the four corner coordinates of the designed color coded image as coordinates after perspective transformation, substituting the coordinates of the four groups of two-dimensional mapping points into a perspective transformation equation set, calculating values of eight unknowns, and obtaining a perspective transformation matrix of the left and right image extraction target areas.
4. The method of claim 1, wherein the texture image coding-based three-dimensional reconstruction method comprises: the specific implementation manner of the step 3) is as follows;
firstly, taking the color-coded image as the target image of color migration and the image with the extracted target area as the original image of the color migration, and converting the target image and the original image from the RGB color space to the l alpha beta color space, in which the luminance component l and the two chrominance components alpha and beta are approximately orthogonal;
secondly, respectively calculating the mean value and the standard deviation of the two images in the l alpha beta space; the mean value of each l alpha beta channel of the original image is first subtracted from the data of the corresponding channel of the original image, the obtained new data are then scaled, the scaling coefficient being the ratio of the standard deviation of the target image to the standard deviation of the original image, and the obtained results are finally added to the mean value of the corresponding l alpha beta channel of the target image, so as to obtain the original image after color migration processing, having the same mean value and the same variance as the target image;
finally, carrying out color identification on the image subjected to color migration processing; converting the image from the l alpha beta color space back to the RGB color space, extracting the RGB value of each pixel, and classifying the pixels through judgment statements: if the gray values of all three channels are larger than a threshold t1, the pixel is identified as white; if the gray value of the red channel is the largest and its difference from each of the other two channels is larger than t2, the pixel is identified as red; green and blue are identified on the same principle as red; the identified pixels of the three colors are represented by the values 0, 1 and 2 corresponding to the design of the coding pattern, white pixels are represented by 3, and a matrix of the image size is obtained, namely the color identification map.
5. The method of claim 1, wherein the texture image coding-based three-dimensional reconstruction method comprises: the concrete implementation manner of the step 5) is as follows;
firstly, calibrating the camera intrinsic matrix by using the MATLAB camera calibration toolbox: making a checkerboard calibration picture as a calibration board, shooting the calibration board with the camera from different angles to obtain as many pictures as possible, measuring the actual side length, in millimeters, of the small checkerboard squares with a ruler, and inputting the obtained data into the calibration toolbox to obtain the calibration result of the camera intrinsic parameters;
secondly, calculating the value of the fundamental matrix by the RANSAC eight-point algorithm: randomly selecting eight point pairs from the same-name point pairs, computing the fundamental matrix by solving a linear equation system, and then searching all the original point pairs for those supporting the computed fundamental matrix; if the number of supporting point pairs is large enough, the computed fundamental matrix is considered credible, and the final fundamental matrix is computed from all supporting point pairs by the least squares method; otherwise, if only a small number of matching point pairs satisfy the initially computed fundamental matrix, the above steps are repeated until the optimal solution is found; after the optimal solution of the fundamental matrix is obtained, the fundamental matrix and the camera intrinsic matrix are combined to obtain the essential matrix;
and finally, since the essential matrix is only related to the relative position and attitude of the camera when the two pictures are taken, the essential matrix is subjected to SVD (singular value decomposition) to obtain the transformation matrix [ R | t ] of the right image relative to the left image; four results are obtained by calculation, only one of which makes the points computed by the triangulation principle fall in front of both cameras, and the unique result is obtained by taking a point, calculating and verifying.
CN202110250681.7A 2021-03-08 2021-03-08 A 3D reconstruction method for automatic matching of texture image encoding and decoding Active CN112991517B (en)

Publications (2)

Publication Number Publication Date
CN112991517A (en) 2021-06-18
CN112991517B (en) 2022-04-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant