
CN101720047A - Method for acquiring range image by stereo matching of multi-aperture photographing based on color segmentation - Google Patents


Info

Publication number
CN101720047A
Authority
CN
China
Prior art keywords
image
matching
disparity
depth
reference image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN200910198231A
Other languages
Chinese (zh)
Other versions
CN101720047B (en)
Inventor
安平
鞠芹
张兆杨
张倩
吴妍菲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology
Priority to CN2009101982317A
Publication of CN101720047A
Application granted
Publication of CN101720047B
Status: Expired - Fee Related

Landscapes

  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The invention discloses a method for acquiring depth images by multi-view camera stereo matching based on color segmentation, comprising the steps of: (1) normalizing all input images; (2) color-segmenting the reference image to extract the color-consistent regions in the image; (3) performing local window matching on the multiple input images to obtain multiple disparity maps; (4) applying a bidirectional matching strategy to eliminate the mismatched points produced during matching; (5) fusing the multiple disparity maps into one disparity map, filling in the disparity of mismatched points; (6) post-processing and optimizing the disparity map to obtain a dense disparity map; (7) converting the disparity map into a depth map according to the relationship between disparity and depth. The method obtains depth information from multiple viewpoint images; by exploiting the information these images provide, it not only resolves the mismatches caused by periodically repeating textures, occlusion and the like, but also improves matching accuracy, yielding an accurate depth image.

Description

Method for Acquiring a Depth Image by Multi-View Camera Stereo Matching Based on Color Segmentation

Technical Field

The present invention relates to a method for acquiring a depth image, and in particular to a method for acquiring a depth image by multi-view camera stereo matching based on color segmentation.

Background Art

Depth Image Based Rendering (DIBR) is a key technology at the decoding end of a 3DTV system. DIBR generates new viewpoint images from a color image and its corresponding depth image by introducing depth information for the visible pixels of the source image. Compressing and transmitting multiple video channels requires an enormous amount of data, so the required bandwidth increases significantly; in particular, supporting user-selected viewpoints requires capture with a dense camera array, which makes the amount of multi-view video data grow sharply and severely hampers the application of 3DTV. The "single-view video + depth" representation has therefore been adopted as an alternative to stereoscopic video: at the decoding end, DIBR generates 3D scenes for one or more virtual viewpoints in real time, producing a stereoscopic visual effect and providing a degree of interactive viewpoint selection. "Multi-view video + depth" is the 3D video representation currently proposed by MPEG/JVT; it allows a 3D scene to be captured with a relatively sparse camera array while still supporting interactive viewpoint selection. These representations, which combine one or more channels of 2D color video with the corresponding depth information, can greatly reduce the amount of 3D video data and thus save transmission bandwidth. However, acquiring the depth maps used for depth-image-based rendering is the key technology involved, and also a difficult one.

At present there are two main classes of depth acquisition methods. The first uses special hardware to actively measure the depth of every point in the scene. An example is the ZCam depth camera developed by 3DV Systems, a camera with a ranging function: an infrared pulsed light source emits a signal into the scene, and an infrared sensor detects the light reflected back by scene objects, determining the distance from each point of the objects to the camera. Such systems are very expensive and therefore unsuitable for wide deployment. The second class is based on traditional computer stereo vision: two images of the same scene captured from two different viewpoints, or images from multiple viewpoints, are stereo-matched to recover the depth of scene objects. These methods generally comprise two steps: (1) stereo matching of the images to obtain a disparity map of corresponding points; (2) computing depth from the disparity-depth relationship of the corresponding points to obtain the depth image.

Stereo matching algorithms fall into two main categories: region-based and feature-based. Region (window) based algorithms easily recover disparity in highly textured regions, but produce many mismatches in low-texture regions, blurring boundaries, and they also handle occluded regions poorly. The feature points extracted by feature-based methods are not very sensitive to noise, so matching is comparatively accurate; but because feature points are sparse in an image, such methods yield only a sparse disparity map. Segmentation-based stereo matching has recently attracted considerable attention because it can produce a dense disparity map. Methods of this kind assume that the scene structure consists of a set of non-overlapping planes, that each plane corresponds to one color-segmented region of the reference image, and that disparity varies smoothly within a single color region.

Summary of the Invention

The object of the present invention is to overcome the deficiencies of the prior art by providing a method for acquiring a depth image by multi-view camera stereo matching based on color segmentation. The method not only effectively resolves the mismatches caused by periodically repeating textures, occlusion and the like, but also improves matching accuracy, yielding an accurate and dense depth image.

To achieve the above object, the concept of the present invention is as follows:

Color-segment the input reference image according to its color information; then perform local window matching between the reference image and each of the remaining input images to obtain multiple initial disparity maps, and fuse them to fill in the disparity of mismatched points; finally, after optimizing the resulting disparity map, convert it into a depth map according to the relationship between disparity and depth.

Based on the above concept, the technical solution of the present invention is:

The steps of the above method for acquiring a depth image by multi-view camera stereo matching based on color segmentation are:

(1) Normalize all input images to eliminate the color differences of corresponding points across different viewpoint images caused by image noise, illumination conditions, occlusion and other factors;

(2) Color-segment the reference image to extract the color-consistent regions in the image;

(3) Perform local window matching on the multiple input images to obtain multiple disparity maps;

(4) Apply a bidirectional matching strategy to eliminate the mismatched points produced during matching and improve disparity accuracy;

(5) Fuse the multiple disparity maps into one disparity map according to a fusion criterion, filling in the disparity of mismatched points;

(6) Post-process and optimize the disparity map to obtain a dense disparity map;

(7) Convert the disparity map into a depth map according to the relationship between disparity and depth.

Compared with the prior art, the method of the present invention for acquiring a depth image by multi-view camera stereo matching based on color segmentation has the following evident substantive features and notable advantages: it uses a segmentation-based stereo matching algorithm to obtain depth information from multiple viewpoint images; by making sound use of the information these images provide, it eliminates the mismatches caused by periodically repeating textures, occlusion and the like, improves matching accuracy, and obtains a disparity for every pixel, finally yielding an accurate and dense depth map. The result meets the depth-image quality requirements of image-based rendering, so the method can be applied to depth acquisition for depth-image-based rendering.

Brief Description of the Drawings

Fig. 1 is a flow chart of the method of the present invention for acquiring a depth image by multi-view camera stereo matching based on color segmentation;

Fig. 2 is a schematic diagram of the parallel camera configuration;

Fig. 3 is a flow chart of the color normalization of the images in Fig. 1;

Fig. 4 is a schematic diagram of the occlusion relationships among the viewpoint images: target image C1, reference image Cc and target image Cr;

Fig. 5 shows stereo matching results obtained by the method of the present invention.

Detailed Description of the Embodiments

An embodiment of the present invention is described in further detail below with reference to the accompanying drawings. The embodiment is implemented on the premise of the technical solution of the present invention and is described in detail, but the scope of protection of the present invention is not limited to the following embodiment.

The embodiment takes three input images as an example, captured by the parallel camera rig shown in Fig. 2. The image captured at the middle viewpoint Cc is the reference image Cc, and the images captured at the left and right viewpoints C1 and Cr are the two target images C1 and Cr. f denotes the focal length of the three cameras; a scene point P at depth Z projects onto the three image planes at (x1, y), (xc, y) and (xr, y) respectively.

Referring to Fig. 1, the steps of the method of the present invention for acquiring a depth image by multi-view camera stereo matching based on color segmentation are:

(1) Normalize the three input images to eliminate the color differences of corresponding points across different viewpoint images caused by image noise, illumination conditions, occlusion and other factors;

(2) Color-segment the reference image to extract the color-consistent regions in the image;

(3) Perform local window matching on the three input images to obtain two disparity maps, i.e. match the reference image against the left target image and against the right target image;

(4) Use bidirectional matching to eliminate the mismatched points produced during matching;

(5) Fuse the multiple disparity maps into one disparity map according to the fusion criterion, filling in the disparity of mismatched points;

(6) Optimize the disparity map to obtain a dense disparity map;

(7) Compute depth from the relationship between disparity and depth, converting the disparity map into a depth map.

Referring to Fig. 3, the image normalization of step (1), which eliminates the color differences of corresponding points across different viewpoint images caused by image noise, illumination conditions, occlusion and other factors, proceeds as follows:

(1-1) Compute the cumulative histogram of each input image from the pixel luminance values;

(1-2) Taking the Cc viewpoint image as the reference image Cc, divide the cumulative histogram into 10 segments of equal pixel count and find the upper and lower luminance boundary of each segment, thereby determining a linear mapping between corresponding segments of the reference image Cc and the target image C1 or Cr;

(1-3) For each pixel of the target image, find its segment number in the cumulative histogram, then map its luminance to a new value by the corresponding mapping formula, normalizing the pixel.
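A minimal numpy sketch of steps (1-1) to (1-3) might look as follows. The function names are ours, and the handling of the segment boundaries (half-open segments, a span guard for flat histograms) is an implementation assumption not spelled out in the text:

```python
import numpy as np

def segment_bounds(gray, n_seg=10):
    """Upper luminance boundary of each of n_seg equal-pixel-count segments
    of the cumulative histogram (steps 1-1 and 1-2)."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = np.cumsum(hist)
    targets = cdf[-1] * np.arange(1, n_seg + 1) / n_seg
    return np.searchsorted(cdf, targets)  # first bin reaching each target count

def normalize_to_reference(target, reference, n_seg=10):
    """Piecewise-linear mapping of the target image's luminance onto the
    reference image, segment by segment (step 1-3)."""
    tb = np.concatenate(([0], segment_bounds(target, n_seg)))
    rb = np.concatenate(([0], segment_bounds(reference, n_seg)))
    out = np.zeros(target.shape, dtype=float)
    for k in range(n_seg):
        lo, hi = tb[k], tb[k + 1]
        mask = (target >= lo) & (target <= hi) if k == n_seg - 1 \
               else (target >= lo) & (target < hi)
        span = max(hi - lo, 1)  # guard against empty segments
        out[mask] = rb[k] + (target[mask].astype(float) - lo) * (rb[k + 1] - rb[k]) / span
    return np.clip(out, 0, 255).astype(np.uint8)
```

Normalizing an image onto itself leaves it unchanged, which is a quick sanity check that the per-segment mapping is identity when both cumulative histograms coincide.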

The color segmentation of the reference image in step (2), extracting the color-consistent regions of the reference image, proceeds as follows:

The Mean Shift algorithm is used to color-segment the reference image Cc according to the image's color information: the gradient of the probability distribution is followed to find the distribution peaks, and every pixel of the image is assigned to its corresponding density mode. This clustering gives the pixels within each resulting segmented region the same color value.
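A toy flat-kernel mean shift in color space, sketched below, illustrates the idea; it is a quadratic-time teaching version, not the production algorithm (a real implementation would use something like OpenCV's `pyrMeanShiftFiltering`, and would typically also include spatial coordinates in the feature vector). The function names and the mode-merging heuristic are our assumptions:

```python
import numpy as np

def mean_shift_colors(pixels, bandwidth=20.0, n_iter=10):
    """Flat-kernel mean shift in color space: each color vector repeatedly
    moves to the mean of the sample colors within `bandwidth` of it, i.e. it
    climbs the gradient of the color density toward a peak (mode)."""
    pixels = np.asarray(pixels, dtype=float)
    modes = pixels.copy()
    for _ in range(n_iter):
        d = np.linalg.norm(modes[:, None, :] - pixels[None, :, :], axis=2)
        w = (d < bandwidth).astype(float)          # flat kernel weights
        modes = (w @ pixels) / w.sum(axis=1, keepdims=True)
    return modes

def color_segment(image, bandwidth=20.0):
    """Label each pixel by the density mode its color converges to."""
    h, w, _ = image.shape
    modes = mean_shift_colors(image.reshape(-1, 3), bandwidth)
    # merge modes that converged to (nearly) the same peak
    quant = np.round(modes / (bandwidth / 2.0)).astype(int)
    _, labels = np.unique(quant, axis=0, return_inverse=True)
    return labels.reshape(h, w)
```

On an image with two well-separated flat color regions, this yields exactly two segment labels, one per region.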

The local window matching of the multiple input images in step (3), producing multiple disparity maps, proceeds as follows:

(3-1) Determine the positional relationship between the reference image Cc and the target images C1 and Cr;

When no matching point can be found in one target image, the corresponding matching point can be found in the other. In Fig. 4, the black bars represent two scene objects; the image captured at the middle viewpoint Cc is taken as the reference image Cc, and the images captured at the left and right viewpoints C1 and Cr are the target images C1 and Cr. The segment P1P3 of the scene is occluded from viewpoint C1, so when stereo matching computes the disparity of the reference image Cc, pixels of that region find no match in the C1 viewpoint image but do find corresponding matches in the Cr viewpoint image.

(3-2) Perform local window matching between the reference image Cc and the left target image C1, and between Cc and the right target image Cr;

With the reference image Cc as the base image, a 5x5 window is created around each point to be matched as the center pixel. The target image is searched with 5x5 pixel neighborhoods of the same size, each compared in turn with the window of the point to be matched, using a self-adapting dissimilarity measure as the matching cost, given below; the point with the greatest similarity (minimum cost) is the best match.

C(x, y, d) = (1 − ω)·C_SAD(x, y, d) + ω·C_GRAD(x, y, d)

C_SAD(x, y, d) = Σ_{(i,j)∈N(x,y)} |I1(i, j) − I2(i+d, j)|

C_GRAD(x, y, d) = Σ_{(i,j)∈Nx(x,y)} |∇x I1(i, j) − ∇x I2(i+d, j)| + Σ_{(i,j)∈Ny(x,y)} |∇y I1(i, j) − ∇y I2(i+d, j)|

where N(x, y) is the 5x5 window centered on the matching point (x, y), ∇x and ∇y denote the horizontal and vertical components of the image gradient, and ω is a weight.

Matching the C1 viewpoint image against the Cc viewpoint image (the reference) yields the disparity map ILI(x, y); matching the Cr viewpoint image against the Cc viewpoint image yields the disparity map IRI(x, y). Both disparity maps contain many mismatched points.
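The combined SAD-plus-gradient cost and the winner-take-all search above can be sketched as follows. The function names are ours, and `np.gradient` stands in for whatever gradient operator the patent actually uses:

```python
import numpy as np

def matching_cost(I1, I2, x, y, d, win=2, omega=0.5):
    """C(x,y,d) = (1-omega)*C_SAD + omega*C_GRAD over a (2*win+1)^2 window
    (5x5 for win=2), comparing I1 at x with I2 at x+d."""
    p1 = I1[y - win:y + win + 1, x - win:x + win + 1].astype(float)
    p2 = I2[y - win:y + win + 1, x + d - win:x + d + win + 1].astype(float)
    c_sad = np.abs(p1 - p2).sum()
    gy1, gx1 = np.gradient(p1)   # vertical, horizontal gradient components
    gy2, gx2 = np.gradient(p2)
    c_grad = np.abs(gx1 - gx2).sum() + np.abs(gy1 - gy2).sum()
    return (1.0 - omega) * c_sad + omega * c_grad

def best_disparity(I1, I2, x, y, d_range):
    """Winner-take-all: maximum similarity = minimum dissimilarity."""
    costs = [matching_cost(I1, I2, x, y, d) for d in d_range]
    return d_range[int(np.argmin(costs))]
```

On a synthetic pair where the second image is the first shifted by a known amount, the minimum-cost disparity recovers that shift.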

The bidirectional matching of step (4), which eliminates the mismatched points produced during matching, is used in both of the identical matching processes, reference image against left target image and reference image against right target image. Its steps are:

(4-1) With the left image as reference and the right image as target, perform local window matching from left to right to obtain the left-to-right disparity map dLR;

(4-2) With the right image as reference and the left image as target, perform local window matching from right to left to obtain the right-to-left disparity map dRL;

(4-3) Using the formula below, find the corresponding points whose disparities disagree between the two maps and mark them as mismatched points.

d(xL, y) = { (dLR(xL, y) + dRL(xR, y)) / 2,   if |dLR(xL, y) − dRL(xR, y)| ≤ λ
           { 0,                               otherwise

where λ is the error threshold, and the pixel (xL, y) of the left image and the pixel (xR, y) of the right image are a matching pair, i.e. xR = xL + dLR(xL, y).

When the disparity error of a pair of corresponding points in the two maps satisfies |dLR(xL, y) − dRL(xR, y)| ≤ λ (λ being the permitted disparity error threshold), the disparity match of that pair is correct. When it does not, the point is a mismatch and is assigned the value 0.

The fusion of the multiple disparity maps into one in step (5), filling in the disparity of mismatched points, proceeds as follows:

(5-1) Compute the scale coefficient α from the translation vectors t of the cameras' extrinsic matrices:

α = |tC − tL| / ( |tC − tL| + |tC − tR| )

where tL, tC and tR are the translation vectors of the extrinsic matrices of the left, middle and right cameras respectively.

(5-2) Using the fusion criterion below, combine the two disparity maps ILI(x, y) and IRI(x, y) into the final disparity map I(x, y), filling in the disparity of mismatched points. The fusion is expressed by the formula:

(fusion formula given as an image in the original publication)

where I(x, y) is the disparity map finally synthesized in the reference image coordinates, ILI(x, y) and IRI(x, y) are the disparity maps obtained by matching the reference image against its neighboring left and right images, and δ denotes an error threshold.
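Since the fusion formula itself survives only as an image in the original publication, the sketch below is a hypothetical reconstruction consistent with the surrounding text (scale by the baseline ratio derived from α, average where the maps agree within δ, fill a mismatch in one map from the other); the tie-breaking rule for disagreeing pixels is entirely our assumption:

```python
import numpy as np

def fuse_disparities(d_li, d_ri, alpha=0.5, delta=1.0):
    """Hypothetical fusion rule: bring the right-pair disparity to the
    left-pair baseline with the ratio alpha/(1-alpha), average the two maps
    where they agree within delta, fill a mismatch (0) in one map from the
    other, and fall back to the smaller value when both disagree."""
    d_li = np.asarray(d_li, dtype=float)
    d_r = np.asarray(d_ri, dtype=float) * (alpha / (1.0 - alpha))
    out = np.zeros_like(d_li)
    both = (d_li > 0) & (d_r > 0)
    agree = both & (np.abs(d_li - d_r) <= delta)
    out[agree] = (d_li[agree] + d_r[agree]) / 2.0
    out[both & ~agree] = np.minimum(d_li, d_r)[both & ~agree]
    only_l = (d_li > 0) & (d_r == 0)
    only_r = (d_r > 0) & (d_li == 0)
    out[only_l] = d_li[only_l]
    out[only_r] = d_r[only_r]
    return out
```

With equal baselines (alpha = 0.5) the scaling factor is 1 and the rule reduces to agreement-gated averaging with mutual hole filling.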

The optimization of the disparity map in step (6), producing a dense disparity map, proceeds as follows:

(6-1) Assume that disparity varies smoothly within each color-segmented region of the reference image;

(6-2) Take the median disparity of all pixels within each segmented region as the disparity of the whole region, giving a disparity value for every pixel, as expressed by the formula below. The result is a high-quality dense disparity map I′(x, y).

ISEG(x, y) = median_{(x,y)∈ISEG(x,y)} ( I(x, y) )

where ISEG(x, y) denotes a segmented region.
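The per-segment median of step (6-2) can be sketched as follows. Excluding zero-valued (mismatch) pixels from the median is our assumption; the text takes the median over all pixels of the region:

```python
import numpy as np

def median_disparity(disparity, labels):
    """Replace every pixel's disparity with the median disparity of its
    color segment; zero-valued (mismatch) pixels are excluded from the
    median where possible (an assumption, see lead-in)."""
    disparity = np.asarray(disparity, dtype=float)
    out = np.zeros_like(disparity)
    for seg in np.unique(labels):
        mask = labels == seg
        vals = disparity[mask]
        valid = vals[vals > 0]
        out[mask] = np.median(valid) if valid.size else 0.0
    return out
```

Any remaining hole (the 0 in the first segment below) inherits the segment's median, which is how the smoothness assumption of step (6-1) densifies the map.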

The conversion of the disparity map into a depth map in step (7), computing depth from the relationship between disparity and depth, is as follows:

In the parallel camera configuration, the depth of a scene point is related to its disparity by:

Z = B·f / D

where Z is the depth, B is the baseline distance (camera spacing), f is the camera focal length, and D is the disparity.

With the disparity known, the depth of each pixel is computed from this relationship, converting the disparity map I′(x, y) into a depth map.
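The conversion is a one-liner per pixel; mapping zero-disparity (unmatched) pixels to depth 0 as a sentinel is our implementation choice, not from the text:

```python
import numpy as np

def disparity_to_depth(disparity, baseline, focal):
    """Z = B*f/D for the parallel camera rig; pixels with zero disparity
    (no match) are given depth 0 as a sentinel value."""
    d = np.asarray(disparity, dtype=float)
    z = np.zeros_like(d)
    valid = d > 0
    z[valid] = baseline * focal / d[valid]
    return z
```

For example, with a 0.1 m baseline and a 500-pixel focal length, a disparity of 2 pixels corresponds to a depth of 25 m.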

Figs. 5(a) and 5(b) show matching results obtained by the method of the present invention: (a) for a static scene sequence and (b) for a dynamic scene sequence. The final matching results preserve accurate disparity map boundaries; in particular, the disparity of occluded regions that lack matches is recovered well, and the depth layering of scene objects is very distinct. The method achieves the desired matching quality, which verifies its effectiveness.

Claims (8)

1. A method for acquiring a depth image by multi-view camera stereo matching based on color segmentation, characterized in that the method color-segments the input reference image according to the color information of the image; then performs local window matching between the reference image and the remaining input images to obtain multiple initial disparity maps, and fuses these initial disparity maps to fill in the disparity of mismatched points; and finally, after optimizing the resulting disparity map, converts it into a depth map according to the relationship between disparity and depth; the specific steps being:

(1) normalizing all input images to eliminate the color differences of corresponding points across different viewpoint images caused by image noise, illumination conditions, occlusion and other factors;

(2) color-segmenting the reference image to extract the color-consistent regions in the image;

(3) performing local window matching on the multiple input images to obtain multiple disparity maps;

(4) applying a bidirectional matching strategy to eliminate the mismatched points produced during matching and improve disparity accuracy;

(5) fusing the multiple disparity maps into one disparity map according to a fusion criterion, filling in the disparity of mismatched points;

(6) post-processing and optimizing the disparity map to obtain a dense disparity map;

(7) converting the disparity map into a depth map according to the relationship between disparity and depth.

2. The method for acquiring a depth image by multi-view camera stereo matching based on color segmentation according to claim 1, characterized in that the normalization of all input images in step (1) comprises:

(1-1) computing the cumulative histogram of each input image from the pixel luminance values;

(1-2) taking the Cc viewpoint image as the reference image Cc, dividing the cumulative histogram into 10 segments of equal pixel count, and finding the upper and lower luminance boundary of each segment, thereby determining a linear mapping between corresponding segments of the reference image Cc and the target images C1 and Cr;

(1-3) for each pixel of the target image, finding its segment number in the cumulative histogram and then mapping its luminance to a new value by the corresponding mapping formula.

3. The method for acquiring a depth image by multi-view camera stereo matching based on color segmentation according to claim 2, characterized in that the color segmentation of the reference image in step (2), extracting the color-consistent regions, proceeds as follows: the Mean Shift algorithm color-segments the reference image Cc according to the image's color information, following the gradient of the probability distribution to find the distribution peaks and assigning every pixel of the image to its corresponding density mode; this clustering gives the pixels within each resulting segmented region the same color value.

4. The method for acquiring a depth image by multi-view camera stereo matching based on color segmentation according to claim 3, characterized in that the local window matching of the input images in step (3), producing multiple disparity maps, proceeds as follows:

(3-1) determining the positional relationship between the reference image Cc and the target images C1 and Cr; when no matching point can be found in one target image, the corresponding matching point can be found in the other target image; with the image captured at the middle viewpoint Cc as the reference image Cc, the images captured at the left and right viewpoints C1 and Cr are the target images C1 and Cr; the segment P1P3 of the scene is occluded from viewpoint C1, so when stereo matching computes the disparity of the reference image Cc, pixels of that region find no match in the C1 viewpoint image but do find corresponding matches in the Cr viewpoint image;

(3-2) performing local window matching between the reference image Cc and the left target image C1, and between the reference image Cc and the right target image Cr; with the reference image Cc as the base image, a 5x5 window is created around each point to be matched as the center pixel, and 5x5 pixel neighborhoods of the same size in the target image are compared in turn with the window of the point to be matched, using the self-adapting dissimilarity measure below as the matching cost; the point with the greatest similarity is the best match,

C(x, y, d) = (1 − ω)·C_SAD(x, y, d) + ω·C_GRAD(x, y, d)

C_SAD(x, y, d) = Σ_{(i,j)∈N(x,y)} |I1(i, j) − I2(i+d, j)|

C_GRAD(x, y, d) = Σ_{(i,j)∈Nx(x,y)} |∇x I1(i, j) − ∇x I2(i+d, j)| + Σ_{(i,j)∈Ny(x,y)} |∇y I1(i, j) − ∇y I2(i+d, j)|

where N(x, y) is the 5x5 window centered on the matching point (x, y), ∇x and ∇y denote the horizontal and vertical components of the image gradient, and ω denotes the weight,
通过对C1视点图像与Cc视点图像(参考图像)匹配,得到视差图ILI(x,y),Cr视点图像与Cc视点图像(参考图像)匹配,得到视差图IRI(x,y),得到的两幅视差图中包含很多误匹配点。By matching the C 1 viewpoint image with the C c viewpoint image (reference image), the disparity map I LI (x, y) is obtained, and the C r viewpoint image is matched with the C c viewpoint image (reference image), and the disparity map I RI (x , y), the obtained two disparity maps contain many mismatching points.
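The window cost of step (3-2) can be sketched in NumPy as follows. This is a minimal illustration, not the patented implementation: the 5×5 window matches the claim, but the weight omega = 0.5, the disparity search range d_max and the winner-take-all selection are illustrative assumptions, and I_1, I_2 are assumed to be grayscale float arrays.

```python
import numpy as np

def dissimilarity(I1, I2, gx1, gy1, gx2, gy2, x, y, d, omega=0.5, half=2):
    """C(x,y,d) = (1-omega)*C_SAD + omega*C_GRAD over a (2*half+1)^2 window."""
    rows = np.s_[y - half:y + half + 1]
    a = np.s_[x - half:x + half + 1]            # window in the reference image
    b = np.s_[x + d - half:x + d + half + 1]    # shifted window in the target
    c_sad = np.abs(I1[rows, a] - I2[rows, b]).sum()
    c_grad = (np.abs(gx1[rows, a] - gx2[rows, b]).sum() +
              np.abs(gy1[rows, a] - gy2[rows, b]).sum())
    return (1 - omega) * c_sad + omega * c_grad

def disparity_map(I_ref, I_tgt, d_max, omega=0.5, half=2):
    """Winner-take-all local matching: keep the d that minimises C(x,y,d)."""
    gy_r, gx_r = np.gradient(I_ref)   # np.gradient returns (d/dy, d/dx)
    gy_t, gx_t = np.gradient(I_tgt)
    H, W = I_ref.shape
    disp = np.zeros((H, W), dtype=int)
    for y in range(half, H - half):
        for x in range(half, W - half):
            dmax_here = min(d_max, W - half - 1 - x)   # stay inside the image
            costs = [dissimilarity(I_ref, I_tgt, gx_r, gy_r, gx_t, gy_t,
                                   x, y, d, omega, half)
                     for d in range(dmax_here + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

Running this matcher for both pairs (C_1 against C_c, and C_r against C_c) gives the two disparity maps I_LI and I_RI of the claim.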
5. The method for acquiring a depth image by multi-camera stereo matching based on color segmentation according to claim 4, characterized in that the bidirectional matching of step (4), which eliminates the mismatched points produced during matching and is applied in both of the two identical matching processes (reference image against the left target image, and reference image against the right target image), comprises the following steps:

(4-1) taking the left image as the reference image and the right image as the target image, performing local window matching from left to right to obtain the left-to-right disparity map d_LR;

(4-2) taking the right image as the reference image and the left image as the target image, performing local window matching from right to left to obtain the right-to-left disparity map d_RL;

(4-3) identifying as mismatched points the corresponding points whose disparities disagree between the two maps, according to:

d(x_L, y) = (d_LR(x_L, y) + d_RL(x_R, y)) / 2, if |d_LR(x_L, y) − d_RL(x_R, y)| ≤ λ
d(x_L, y) = 0, otherwise

where λ is the error threshold, and the pixel (x_L, y) of the left image and the pixel (x_R, y) of the right image are a pair of matching points, i.e. x_R = x_L + d_LR(x_L, y);

when the disparity error of a corresponding pair satisfies |d_LR(x_L, y) − d_RL(x_R, y)| ≤ λ (λ being the permitted disparity error threshold), the pair is matched correctly; when it does not, the point is a mismatched point and is assigned the value 0.

6. The method for acquiring a depth image by multi-camera stereo matching based on color segmentation according to claim 5, characterized in that the merging of the several disparity maps into one disparity map and the filling-in of the disparity of mismatched points in step (5) comprise the following steps:

(5-1) computing the scale coefficient α from the translation vectors t of the cameras' extrinsic matrices:

α = |t_C − t_L| / (|t_C − t_L| + |t_C − t_R|)

where t_L, t_C and t_R are the translation vectors of the extrinsic matrices of the left, middle and right cameras respectively;

(5-2) merging the two disparity maps I_LI(x, y) and I_RI(x, y) into the final disparity map I(x, y) according to the fusion criterion, filling in the disparity of the mismatched points; the fusion rule is expressed by the formula given as Figure F2009101982317C00033 in the original document,

where I(x, y) is the final disparity map in the coordinates of the reference image, I_LI(x, y) and I_RI(x, y) are the disparity maps obtained by matching the reference image against its neighbouring left and right images, and δ is an error threshold.
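The bidirectional consistency check of steps (4-1) to (4-3) can be sketched as below. This is a minimal NumPy version under stated assumptions: d_LR is indexed by left-image columns, d_RL by right-image columns, the sign convention is x_R = x_L + d, and the threshold lam = 1 is illustrative (the patent does not fix λ).

```python
import numpy as np

def cross_check(d_LR, d_RL, lam=1.0):
    """Bidirectional check of step (4-3): pixel (x_L, y) of the left image
    matches (x_R, y) = (x_L + d_LR, y) in the right image; keep the average
    of the two disparities when they agree within lam, otherwise assign 0."""
    H, W = d_LR.shape
    out = np.zeros((H, W), dtype=float)
    for y in range(H):
        for x in range(W):
            xr = x + int(d_LR[y, x])          # matching column in right image
            if 0 <= xr < W and abs(d_LR[y, x] - d_RL[y, xr]) <= lam:
                out[y, x] = (d_LR[y, x] + d_RL[y, xr]) / 2
    return out
```

The same routine is run for both the C_c/C_1 and the C_c/C_r pairs; the pixels assigned 0 here are the mismatches that the fusion of step (5) later fills from the other disparity map.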
7. The method for acquiring a depth image by multi-camera stereo matching based on color segmentation according to claim 6, characterized in that the optimization of the disparity map into a dense disparity map in step (6) comprises the following steps:

(6-1) assuming that the disparity varies smoothly within each color-segmented region of the reference image;

(6-2) taking the median disparity value of all pixels within each segmented region as the disparity of the whole region, thereby obtaining a disparity value for every pixel, expressed mathematically as

I_SEG(x, y) = median_{(x,y)∈I_SEG(x,y)} ( I(x, y) )

where I_SEG(x, y) denotes a segmented region; this finally yields a high-quality dense disparity map I′(x, y).

8. The method for acquiring a depth image by multi-camera stereo matching based on color segmentation according to claim 7, characterized in that the conversion of the disparity map into a depth map in step (7), by computing depth from the relationship between disparity and depth, is as follows: in a parallel camera configuration the depth of a scene point relates to its disparity by

Z = B·f / D

where Z is the depth value, B the baseline distance (camera spacing), f the camera focal length and D the disparity; with the disparity known, the depth value of each pixel is computed from this relationship, converting the disparity map I′(x, y) into a depth map.
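Steps (6) and (7), per-segment median filtering followed by the Z = B·f/D conversion, can be sketched as below. This is a hedged NumPy illustration: `labels` stands in for the Mean Shift segment ids (the segmentation itself is not reproduced), and mapping zero-disparity (unmatched) pixels to depth 0 is an assumed convention, not part of the claim.

```python
import numpy as np

def segment_median_disparity(disp, labels):
    """Step (6-2): assign every pixel the median disparity of its
    color-segmented region; labels holds an integer segment id per pixel."""
    out = np.empty_like(disp, dtype=float)
    for seg in np.unique(labels):
        mask = labels == seg
        out[mask] = np.median(disp[mask])
    return out

def disparity_to_depth(disp, B, f):
    """Claim 8: Z = B*f / D for a parallel camera rig; pixels with zero
    disparity (no match) are mapped to depth 0 rather than infinity."""
    Z = np.zeros_like(disp, dtype=float)
    valid = disp > 0
    Z[valid] = B * f / disp[valid]
    return Z
```

Chaining the two functions turns the merged disparity map I(x, y) into the dense map I′(x, y) and then into the final depth map.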
CN2009101982317A 2009-11-03 2009-11-03 Method for acquiring range image by stereo matching of multi-aperture photographing based on color segmentation Expired - Fee Related CN101720047B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009101982317A CN101720047B (en) 2009-11-03 2009-11-03 Method for acquiring range image by stereo matching of multi-aperture photographing based on color segmentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2009101982317A CN101720047B (en) 2009-11-03 2009-11-03 Method for acquiring range image by stereo matching of multi-aperture photographing based on color segmentation

Publications (2)

Publication Number Publication Date
CN101720047A true CN101720047A (en) 2010-06-02
CN101720047B CN101720047B (en) 2011-12-21

Family

ID=42434549

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009101982317A Expired - Fee Related CN101720047B (en) 2009-11-03 2009-11-03 Method for acquiring range image by stereo matching of multi-aperture photographing based on color segmentation

Country Status (1)

Country Link
CN (1) CN101720047B (en)

Cited By (77)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102074014A (en) * 2011-02-23 2011-05-25 山东大学 Stereo matching method by utilizing graph theory-based image segmentation algorithm
CN102111637A (en) * 2011-03-29 2011-06-29 清华大学 Stereoscopic video depth map generation method and device
CN102129567A (en) * 2011-03-17 2011-07-20 南京航空航天大学 Fast stereo matching method based on color partitioning and self-adaptive window
CN102156987A (en) * 2011-04-25 2011-08-17 深圳超多维光电子有限公司 Method and device for acquiring depth information of scene
CN102184540A (en) * 2011-05-03 2011-09-14 哈尔滨工程大学 Sub-pixel level stereo matching method based on scale space
CN102298708A (en) * 2011-08-19 2011-12-28 四川长虹电器股份有限公司 3D mode identification method based on color and shape matching
CN102316297A (en) * 2010-07-08 2012-01-11 株式会社泛泰 Image output device and the method for using this image output device output image
CN102509348A (en) * 2011-09-26 2012-06-20 北京航空航天大学 Method for showing actual object in shared enhanced actual scene in multi-azimuth way
CN102609974A (en) * 2012-03-14 2012-07-25 浙江理工大学 Virtual viewpoint image generation process on basis of depth map segmentation and rendering
CN102622769A (en) * 2012-03-19 2012-08-01 厦门大学 Multi-target tracking method by taking depth as leading clue under dynamic scene
CN102692236A (en) * 2012-05-16 2012-09-26 浙江大学 Visual milemeter method based on RGB-D camera
CN102740116A (en) * 2011-04-08 2012-10-17 索尼公司 Image property detection
CN102802020A (en) * 2012-08-31 2012-11-28 清华大学 Method and device for monitoring parallax information of binocular stereoscopic video
CN102802005A (en) * 2011-04-26 2012-11-28 李国君 Three-dimensional video content generation method
CN102821290A (en) * 2011-06-06 2012-12-12 索尼公司 Image processing apparatus, image processing method, and program
CN103337064A (en) * 2013-04-28 2013-10-02 四川大学 Method for removing mismatching point in image stereo matching
CN103379350A (en) * 2012-04-28 2013-10-30 中国科学院深圳先进技术研究院 Virtual viewpoint image post-processing method
CN103458246A (en) * 2013-09-03 2013-12-18 清华大学 Shielding processing method and system in video motion segmentation
CN103458261A (en) * 2013-09-08 2013-12-18 华东电网有限公司 Video scene variation detection method based on stereoscopic vision
CN103460242A (en) * 2011-03-31 2013-12-18 索尼电脑娱乐公司 Information processing device, information processing method, and data structure of location information
CN103607558A (en) * 2013-11-04 2014-02-26 深圳市中瀛鑫科技股份有限公司 Video monitoring system, target matching method and apparatus thereof
CN103634586A (en) * 2013-11-26 2014-03-12 深圳市唯特视科技有限公司 Stereo-image acquiring method and device
CN103703761A (en) * 2011-05-17 2014-04-02 S.I.Sv.El.意大利电子发展股份公司 Method for generating, transmitting and receiving stereoscopic images, and related devices
CN103780897A (en) * 2014-02-25 2014-05-07 重庆卓美华视光电有限公司 Obtaining method and device for depth map of two-viewpoint image
CN103843333A (en) * 2011-09-29 2014-06-04 富士胶片株式会社 Three-dimensional image display control method, three-dimensional image display control device, and image pickup apparatus
CN103871037A (en) * 2012-12-07 2014-06-18 汤姆逊许可公司 Method and apparatus for color transfer between images
CN103986923A (en) * 2013-02-07 2014-08-13 财团法人成大研究发展基金会 Image Stereo Matching System
CN104036226A (en) * 2013-03-04 2014-09-10 联想(北京)有限公司 Object information obtaining method and electronic device
CN104050656A (en) * 2013-03-12 2014-09-17 英特尔公司 Apparatus and techniques for determining object depth in images
CN104112270A (en) * 2014-05-14 2014-10-22 苏州科技学院 Random point matching algorithm based on self-adaptive weight multiple-dimensioned window
CN104123715A (en) * 2013-04-27 2014-10-29 株式会社理光 Method and system for configuring parallax value
CN104517095A (en) * 2013-10-08 2015-04-15 南京理工大学 Head division method based on depth image
CN104796684A (en) * 2015-03-24 2015-07-22 深圳市广之爱文化传播有限公司 Naked eye 3D (three-dimensional) video processing method
CN106023189A (en) * 2016-05-17 2016-10-12 北京信息科技大学 Light field data depth reconstruction method based on matching optimization
CN106331672A (en) * 2016-08-19 2017-01-11 深圳奥比中光科技有限公司 Method, apparatus and system for obtaining viewpoint image
CN104050682B (en) * 2014-07-09 2017-01-18 武汉科技大学 Image segmentation method fusing color and depth information
CN106447661A (en) * 2016-09-28 2017-02-22 深圳市优象计算技术有限公司 Rapid depth image generating method
CN106530409A (en) * 2016-11-03 2017-03-22 浙江大学 Local region consistency corresponding method in stereo coupling
KR20170090976A (en) * 2016-01-29 2017-08-08 삼성전자주식회사 Method for acquiring image disparity and apparatus for the same
CN107085825A (en) * 2017-05-27 2017-08-22 成都通甲优博科技有限责任公司 Image weakening method, device and electronic equipment
CN107122782A (en) * 2017-03-16 2017-09-01 成都通甲优博科技有限责任公司 A kind of half intensive solid matching method in a balanced way
CN107211085A (en) * 2015-02-20 2017-09-26 索尼公司 Camera device and image capture method
CN107220994A (en) * 2017-06-01 2017-09-29 成都通甲优博科技有限责任公司 A kind of method and system of Stereo matching
CN107256547A (en) * 2017-05-26 2017-10-17 浙江工业大学 A kind of face crack recognition methods detected based on conspicuousness
CN107329490A (en) * 2017-07-21 2017-11-07 歌尔科技有限公司 Unmanned plane barrier-avoiding method and unmanned plane
CN107610070A (en) * 2017-09-29 2018-01-19 深圳市佳创视讯技术股份有限公司 Free stereo matching process based on three shooting collections
CN107730543A (en) * 2017-09-08 2018-02-23 成都通甲优博科技有限责任公司 A kind of iteratively faster computational methods of half dense stereo matching
CN107958461A (en) * 2017-11-14 2018-04-24 中国航空工业集团公司西安飞机设计研究所 A kind of carrier aircraft method for tracking target based on binocular vision
WO2018082220A1 (en) * 2016-11-03 2018-05-11 深圳市掌网科技股份有限公司 Panoramic camera and depth information obtaining method
CN108140243A (en) * 2015-03-18 2018-06-08 北京市商汤科技开发有限公司 Restore from the 3D hand gestures of binocular imaging system
CN108307179A (en) * 2016-08-30 2018-07-20 姜汉龙 A kind of method of 3D three-dimensional imagings
WO2018157562A1 (en) * 2017-02-28 2018-09-07 北京大学深圳研究生院 Virtual viewpoint synthesis method based on local image segmentation
CN108537871A (en) * 2017-03-03 2018-09-14 索尼公司 Information processing equipment and information processing method
CN108604371A (en) * 2016-02-25 2018-09-28 深圳市大疆创新科技有限公司 Imaging system and method
CN108647579A (en) * 2018-04-12 2018-10-12 海信集团有限公司 A kind of obstacle detection method, device and terminal
CN108765480A (en) * 2017-04-10 2018-11-06 钰立微电子股份有限公司 Advanced treatment equipment
CN108921842A (en) * 2018-07-02 2018-11-30 上海交通大学 A kind of cereal flow detection method and device
CN109299662A (en) * 2018-08-24 2019-02-01 上海图漾信息科技有限公司 Depth data calculates apparatus and method for and face recognition device
CN109496325A (en) * 2016-07-29 2019-03-19 索尼公司 Image processing apparatus and image processing method
CN109640066A (en) * 2018-12-12 2019-04-16 深圳先进技术研究院 The generation method and device of high-precision dense depth image
CN109977981A (en) * 2017-12-27 2019-07-05 深圳市优必选科技有限公司 Scene analysis method based on binocular vision, robot and storage device
CN110264403A (en) * 2019-06-13 2019-09-20 中国科学技术大学 It is a kind of that artifacts joining method is gone based on picture depth layering
CN110349198A (en) * 2018-04-02 2019-10-18 联发科技股份有限公司 Solid matching method and corresponding Stereo matching device
CN110428462A (en) * 2019-07-17 2019-11-08 清华大学 Polyphaser solid matching method and device
CN110602474A (en) * 2018-05-24 2019-12-20 杭州海康威视数字技术股份有限公司 Method, device and equipment for determining image parallax
CN111147868A (en) * 2018-11-02 2020-05-12 广州灵派科技有限公司 Free viewpoint video guide system
CN111353982A (en) * 2020-02-28 2020-06-30 贝壳技术有限公司 Depth camera image sequence screening method and device
US10834374B2 (en) 2017-02-28 2020-11-10 Peking University Shenzhen Graduate School Method, apparatus, and device for synthesizing virtual viewpoint images
CN112241714A (en) * 2020-10-22 2021-01-19 北京字跳网络技术有限公司 Method and device for identifying designated area in image, readable medium and electronic equipment
CN112601094A (en) * 2021-03-01 2021-04-02 浙江智慧视频安防创新中心有限公司 Video coding and decoding method and device
CN113052886A (en) * 2021-04-09 2021-06-29 同济大学 Method for acquiring depth information of double TOF cameras by adopting binocular principle
CN113066173A (en) * 2021-04-21 2021-07-02 国家基础地理信息中心 Three-dimensional model construction method and device and electronic equipment
CN113129350A (en) * 2021-04-12 2021-07-16 长春理工大学 Depth extraction method based on camera array
CN113643212A (en) * 2021-08-27 2021-11-12 复旦大学 A Depth Graph Noise Reduction Method Based on Graph Neural Network
TWI772040B (en) * 2021-05-27 2022-07-21 大陸商珠海凌煙閣芯片科技有限公司 Object depth information acquistition method, device, computer device and storage media
CN115147589A (en) * 2022-05-17 2022-10-04 北京旷视科技有限公司 Disparity map processing method, electronic equipment, storage medium and product
CN118052857A (en) * 2024-02-20 2024-05-17 上海凝眸智能科技有限公司 Depth map extraction method based on RGBIR binocular camera

Cited By (120)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102316297A (en) * 2010-07-08 2012-01-11 株式会社泛泰 Image output device and the method for using this image output device output image
CN102074014A (en) * 2011-02-23 2011-05-25 山东大学 Stereo matching method by utilizing graph theory-based image segmentation algorithm
CN102074014B (en) * 2011-02-23 2012-12-12 山东大学 Stereo matching method by utilizing graph theory-based image segmentation algorithm
CN102129567A (en) * 2011-03-17 2011-07-20 南京航空航天大学 Fast stereo matching method based on color partitioning and self-adaptive window
CN102111637A (en) * 2011-03-29 2011-06-29 清华大学 Stereoscopic video depth map generation method and device
CN103460242A (en) * 2011-03-31 2013-12-18 索尼电脑娱乐公司 Information processing device, information processing method, and data structure of location information
CN103460242B (en) * 2011-03-31 2017-02-15 索尼电脑娱乐公司 Information processing device, information processing method, and data structure of location information
US9699432B2 (en) 2011-03-31 2017-07-04 Sony Corporation Information processing apparatus, information processing method, and data structure of position information
CN102740116A (en) * 2011-04-08 2012-10-17 索尼公司 Image property detection
CN102156987A (en) * 2011-04-25 2011-08-17 深圳超多维光电子有限公司 Method and device for acquiring depth information of scene
CN102802005A (en) * 2011-04-26 2012-11-28 李国君 Three-dimensional video content generation method
CN102802005B (en) * 2011-04-26 2014-11-05 李国君 Three-dimensional video content generation method
CN102184540A (en) * 2011-05-03 2011-09-14 哈尔滨工程大学 Sub-pixel level stereo matching method based on scale space
CN102184540B (en) * 2011-05-03 2013-03-20 哈尔滨工程大学 Sub-pixel level stereo matching method based on scale space
CN103703761A (en) * 2011-05-17 2014-04-02 S.I.Sv.El.意大利电子发展股份公司 Method for generating, transmitting and receiving stereoscopic images, and related devices
CN102821290A (en) * 2011-06-06 2012-12-12 索尼公司 Image processing apparatus, image processing method, and program
CN102821290B (en) * 2011-06-06 2016-07-06 索尼公司 Image processing equipment and image processing method
CN102298708A (en) * 2011-08-19 2011-12-28 四川长虹电器股份有限公司 3D mode identification method based on color and shape matching
CN102298708B (en) * 2011-08-19 2013-08-07 四川长虹电器股份有限公司 3D mode identification method based on color and shape matching
CN102509348B (en) * 2011-09-26 2014-06-25 北京航空航天大学 Method for showing actual object in shared enhanced actual scene in multi-azimuth way
CN102509348A (en) * 2011-09-26 2012-06-20 北京航空航天大学 Method for showing actual object in shared enhanced actual scene in multi-azimuth way
CN103843333B (en) * 2011-09-29 2015-09-16 富士胶片株式会社 Control the method for the display of stereo-picture, control device and the imaging device of stereo-picture display
CN103843333A (en) * 2011-09-29 2014-06-04 富士胶片株式会社 Three-dimensional image display control method, three-dimensional image display control device, and image pickup apparatus
CN102609974A (en) * 2012-03-14 2012-07-25 浙江理工大学 Virtual viewpoint image generation process on basis of depth map segmentation and rendering
CN102609974B (en) * 2012-03-14 2014-04-09 浙江理工大学 Virtual viewpoint image generation process on basis of depth map segmentation and rendering
CN102622769A (en) * 2012-03-19 2012-08-01 厦门大学 Multi-target tracking method by taking depth as leading clue under dynamic scene
CN102622769B (en) * 2012-03-19 2015-03-04 厦门大学 Multi-target tracking method by taking depth as leading clue under dynamic scene
CN103379350A (en) * 2012-04-28 2013-10-30 中国科学院深圳先进技术研究院 Virtual viewpoint image post-processing method
CN103379350B (en) * 2012-04-28 2015-06-03 中国科学院深圳先进技术研究院 Virtual viewpoint image post-processing method
CN102692236A (en) * 2012-05-16 2012-09-26 浙江大学 Visual milemeter method based on RGB-D camera
CN102802020A (en) * 2012-08-31 2012-11-28 清华大学 Method and device for monitoring parallax information of binocular stereoscopic video
CN102802020B (en) * 2012-08-31 2015-08-12 清华大学 The method and apparatus of monitoring parallax information of binocular stereoscopic video
CN103871037A (en) * 2012-12-07 2014-06-18 汤姆逊许可公司 Method and apparatus for color transfer between images
CN103986923B (en) * 2013-02-07 2016-05-04 财团法人成大研究发展基金会 Image stereo matching system
CN103986923A (en) * 2013-02-07 2014-08-13 财团法人成大研究发展基金会 Image Stereo Matching System
CN104036226B (en) * 2013-03-04 2017-06-27 联想(北京)有限公司 A method for acquiring target information and electronic equipment
US9432593B2 (en) 2013-03-04 2016-08-30 Beijing Lenovo Software Ltd. Target object information acquisition method and electronic device
WO2014134993A1 (en) * 2013-03-04 2014-09-12 北京联想软件有限公司 Target object information acquisition method and electronic device
CN104036226A (en) * 2013-03-04 2014-09-10 联想(北京)有限公司 Object information obtaining method and electronic device
CN104050656A (en) * 2013-03-12 2014-09-17 英特尔公司 Apparatus and techniques for determining object depth in images
CN104123715B (en) * 2013-04-27 2017-12-05 株式会社理光 Configure the method and system of parallax value
CN104123715A (en) * 2013-04-27 2014-10-29 株式会社理光 Method and system for configuring parallax value
CN103337064A (en) * 2013-04-28 2013-10-02 四川大学 Method for removing mismatching point in image stereo matching
CN103458246A (en) * 2013-09-03 2013-12-18 清华大学 Shielding processing method and system in video motion segmentation
CN103458246B (en) * 2013-09-03 2016-08-17 清华大学 Occlusion handling method in video motion segmentation and system
CN103458261A (en) * 2013-09-08 2013-12-18 华东电网有限公司 Video scene variation detection method based on stereoscopic vision
CN104517095B (en) * 2013-10-08 2018-01-02 南京理工大学 A kind of number of people dividing method based on depth image
CN104517095A (en) * 2013-10-08 2015-04-15 南京理工大学 Head division method based on depth image
CN103607558A (en) * 2013-11-04 2014-02-26 深圳市中瀛鑫科技股份有限公司 Video monitoring system, target matching method and apparatus thereof
CN103634586A (en) * 2013-11-26 2014-03-12 深圳市唯特视科技有限公司 Stereo-image acquiring method and device
CN103780897A (en) * 2014-02-25 2014-05-07 重庆卓美华视光电有限公司 Obtaining method and device for depth map of two-viewpoint image
CN103780897B (en) * 2014-02-25 2016-05-11 重庆卓美华视光电有限公司 Depth map acquisition methods and the device of two visual point images
CN104112270A (en) * 2014-05-14 2014-10-22 苏州科技学院 Random point matching algorithm based on self-adaptive weight multiple-dimensioned window
CN104112270B (en) * 2014-05-14 2017-06-20 苏州科技学院 A kind of any point matching algorithm based on the multiple dimensioned window of adaptive weighting
CN104050682B (en) * 2014-07-09 2017-01-18 武汉科技大学 Image segmentation method fusing color and depth information
CN107211085B (en) * 2015-02-20 2020-06-05 索尼公司 Camera device and camera method
CN107211085A (en) * 2015-02-20 2017-09-26 索尼公司 Camera device and image capture method
CN108140243A (en) * 2015-03-18 2018-06-08 北京市商汤科技开发有限公司 Restore from the 3D hand gestures of binocular imaging system
CN108140243B (en) * 2015-03-18 2022-01-11 北京市商汤科技开发有限公司 Method, device and system for constructing 3D hand model
CN104796684A (en) * 2015-03-24 2015-07-22 深圳市广之爱文化传播有限公司 Naked eye 3D (three-dimensional) video processing method
CN107027019B (en) * 2016-01-29 2019-11-08 北京三星通信技术研究有限公司 Method and device for acquiring image parallax
CN107027019A (en) * 2016-01-29 2017-08-08 北京三星通信技术研究有限公司 Image parallactic acquisition methods and device
KR20170090976A (en) * 2016-01-29 2017-08-08 삼성전자주식회사 Method for acquiring image disparity and apparatus for the same
KR102187192B1 (en) 2016-01-29 2020-12-04 삼성전자주식회사 Method for acquiring image disparity and apparatus for the same
CN108604371A (en) * 2016-02-25 2018-09-28 深圳市大疆创新科技有限公司 Imaging system and method
CN106023189A (en) * 2016-05-17 2016-10-12 北京信息科技大学 Light field data depth reconstruction method based on matching optimization
CN106023189B (en) * 2016-05-17 2018-11-09 北京信息科技大学 A kind of light field data depth reconstruction method based on matching optimization
CN109496325A (en) * 2016-07-29 2019-03-19 索尼公司 Image processing apparatus and image processing method
CN106331672A (en) * 2016-08-19 2017-01-11 深圳奥比中光科技有限公司 Method, apparatus and system for obtaining viewpoint image
CN108307179A (en) * 2016-08-30 2018-07-20 姜汉龙 A kind of method of 3D three-dimensional imagings
CN106447661A (en) * 2016-09-28 2017-02-22 深圳市优象计算技术有限公司 Rapid depth image generating method
WO2018082220A1 (en) * 2016-11-03 2018-05-11 深圳市掌网科技股份有限公司 Panoramic camera and depth information obtaining method
CN106530409A (en) * 2016-11-03 2017-03-22 浙江大学 Local region consistency corresponding method in stereo coupling
CN106530409B (en) * 2016-11-03 2019-08-27 浙江大学 Consistency Correspondence Method of Local Regions in Stereo Matching
US10834374B2 (en) 2017-02-28 2020-11-10 Peking University Shenzhen Graduate School Method, apparatus, and device for synthesizing virtual viewpoint images
WO2018157562A1 (en) * 2017-02-28 2018-09-07 北京大学深圳研究生院 Virtual viewpoint synthesis method based on local image segmentation
US10887569B2 (en) 2017-02-28 2021-01-05 Peking University Shenzhen Graduate School Virtual viewpoint synthesis method based on local image segmentation
CN108537871B (en) * 2017-03-03 2024-02-20 索尼公司 Information processing apparatus and information processing method
CN108537871A (en) * 2017-03-03 2018-09-14 索尼公司 Information processing equipment and information processing method
CN107122782A (en) * 2017-03-16 2017-09-01 成都通甲优博科技有限责任公司 A kind of half intensive solid matching method in a balanced way
CN107122782B (en) * 2017-03-16 2020-09-11 成都通甲优博科技有限责任公司 Balanced semi-dense stereo matching method
CN108765480A (en) * 2017-04-10 2018-11-06 钰立微电子股份有限公司 Advanced treatment equipment
CN108765480B (en) * 2017-04-10 2022-03-15 钰立微电子股份有限公司 Deep processing equipment
CN107256547A (en) * 2017-05-26 2017-10-17 浙江工业大学 A kind of face crack recognition methods detected based on conspicuousness
CN107085825A (en) * 2017-05-27 2017-08-22 成都通甲优博科技有限责任公司 Image weakening method, device and electronic equipment
CN107220994A (en) * 2017-06-01 2017-09-29 成都通甲优博科技有限责任公司 A kind of method and system of Stereo matching
CN107329490A (en) * 2017-07-21 2017-11-07 歌尔科技有限公司 Unmanned plane barrier-avoiding method and unmanned plane
CN107730543B (en) * 2017-09-08 2021-05-14 成都通甲优博科技有限责任公司 Rapid iterative computation method for semi-dense stereo matching
CN107730543A (en) * 2017-09-08 2018-02-23 成都通甲优博科技有限责任公司 A kind of iteratively faster computational methods of half dense stereo matching
CN107610070A (en) * 2017-09-29 2018-01-19 深圳市佳创视讯技术股份有限公司 Free stereo matching process based on three shooting collections
CN107610070B (en) * 2017-09-29 2020-12-01 深圳市佳创视讯技术股份有限公司 Free stereo matching method based on three-camera collection
CN107958461A (en) * 2017-11-14 2018-04-24 中国航空工业集团公司西安飞机设计研究所 A kind of carrier aircraft method for tracking target based on binocular vision
CN109977981B (en) * 2017-12-27 2020-11-24 深圳市优必选科技有限公司 Scene analysis method based on binocular vision, robot and storage device
CN109977981A (en) * 2017-12-27 2019-07-05 深圳市优必选科技有限公司 Scene analysis method based on binocular vision, robot and storage device
TWI719440B (en) * 2018-04-02 2021-02-21 聯發科技股份有限公司 Stereo match method and apparatus thereof
CN110349198A (en) * 2018-04-02 2019-10-18 联发科技股份有限公司 Solid matching method and corresponding Stereo matching device
CN108647579B (en) * 2018-04-12 2022-02-25 海信集团有限公司 Obstacle detection method and device and terminal
CN108647579A (en) * 2018-04-12 2018-10-12 海信集团有限公司 A kind of obstacle detection method, device and terminal
CN110602474A (en) * 2018-05-24 2019-12-20 杭州海康威视数字技术股份有限公司 Method, device and equipment for determining image parallax
CN108921842A (en) * 2018-07-02 2018-11-30 上海交通大学 A kind of cereal flow detection method and device
CN109299662A (en) * 2018-08-24 2019-02-01 上海图漾信息科技有限公司 Depth data calculates apparatus and method for and face recognition device
CN111147868A (en) * 2018-11-02 2020-05-12 广州灵派科技有限公司 Free viewpoint video guide system
CN109640066B (en) * 2018-12-12 2020-05-22 深圳先进技术研究院 Method and device for generating high-precision dense depth image
CN109640066A (en) * 2018-12-12 2019-04-16 深圳先进技术研究院 The generation method and device of high-precision dense depth image
CN110264403A (en) * 2019-06-13 2019-09-20 中国科学技术大学 It is a kind of that artifacts joining method is gone based on picture depth layering
CN110428462B (en) * 2019-07-17 2022-04-08 清华大学 Multi-camera stereo matching method and device
CN110428462A (en) * 2019-07-17 2019-11-08 清华大学 Polyphaser solid matching method and device
CN111353982B (en) * 2020-02-28 2023-06-20 贝壳技术有限公司 Depth camera image sequence screening method and device
CN111353982A (en) * 2020-02-28 2020-06-30 贝壳技术有限公司 Depth camera image sequence screening method and device
CN112241714A (en) * 2020-10-22 2021-01-19 北京字跳网络技术有限公司 Method and device for identifying designated area in image, readable medium and electronic equipment
CN112241714B (en) * 2020-10-22 2024-04-26 北京字跳网络技术有限公司 Method, device, readable medium and electronic device for identifying a designated area in an image
CN112601094A (en) * 2021-03-01 2021-04-02 浙江智慧视频安防创新中心有限公司 Video coding and decoding method and device
CN113052886A (en) * 2021-04-09 2021-06-29 同济大学 Method for acquiring depth information from dual TOF cameras using the binocular principle
CN113129350A (en) * 2021-04-12 2021-07-16 长春理工大学 Depth extraction method based on camera array
CN113066173A (en) * 2021-04-21 2021-07-02 国家基础地理信息中心 Three-dimensional model construction method and device and electronic equipment
TWI772040B (en) * 2021-05-27 2022-07-21 大陸商珠海凌煙閣芯片科技有限公司 Object depth information acquisition method, device, computer device and storage media
CN113643212A (en) * 2021-08-27 2021-11-12 复旦大学 Depth map denoising method based on graph neural network
CN113643212B (en) * 2021-08-27 2024-04-05 复旦大学 Depth map denoising method based on graph neural network
CN115147589A (en) * 2022-05-17 2022-10-04 北京旷视科技有限公司 Disparity map processing method, electronic equipment, storage medium and product
CN118052857A (en) * 2024-02-20 2024-05-17 上海凝眸智能科技有限公司 Depth map extraction method based on RGBIR binocular camera

Also Published As

Publication number Publication date
CN101720047B (en) 2011-12-21

Similar Documents

Publication Publication Date Title
CN101720047B (en) Method for acquiring range image by stereo matching of multi-aperture photographing based on color segmentation
CN101902657B (en) Method for generating virtual multi-viewpoint images based on depth image layering
CN102254348B (en) Virtual viewpoint mapping method based on adaptive disparity estimation
CN106408513B (en) Depth Map Super-Resolution Reconstruction Method
CN101312539A (en) Hierarchical image depth extracting method for three-dimensional television
CN104065947B (en) Depth map acquisition method for an integral imaging system
CN108596965A (en) Light field image depth estimation method
CN101840574B (en) Depth estimation method based on edge pixel characteristics
CN103236082A (en) Quasi-three-dimensional reconstruction method from two-dimensional videos of static scenes
CN102496183B (en) Multi-view stereo reconstruction method based on Internet photo gallery
CN110853151A (en) Three-dimensional point set recovery method based on video
Jain et al. Efficient stereo-to-multiview synthesis
CN102368826A (en) Real-time adaptive generation method from two-viewpoint video to multi-viewpoint video
CN102263957A (en) A Disparity Estimation Method Based on Search Window Adaptation
CN103971366A (en) Stereoscopic matching method based on double-weight aggregation
CN103679739A (en) Virtual view generation method based on occlusion region detection
CN112637582B (en) Fuzzy edge-driven 3D fuzzy surface synthesis method for monocular video virtual view
CN104661014B (en) Hole filling method combining spatial and temporal information
Knorr et al. Stereoscopic 3D from 2D video with super-resolution capability
CN102750694A (en) Binocular video depth map computation method based on a locally optimal belief propagation algorithm
Lee et al. Automatic 2d-to-3d conversion using multi-scale deep neural network
Lee et al. High-Resolution Depth Map Generation by Applying Stereo Matching Based on Initial Depth Information
Lee et al. Segment-based multi-view depth map estimation using belief propagation from dense multi-view video
Abd Manap et al. Novel view synthesis based on depth map layers representation
CN107103620B (en) A Depth Extraction Method for Multi-Light Encoded Cameras Based on Spatial Sampling from Independent Camera Perspectives

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20111221

Termination date: 20211103