
CN105374039A - Monocular image depth information estimation method based on contour acuity - Google Patents

Monocular image depth information estimation method based on contour acuity

Info

Publication number
CN105374039A
CN105374039A (application CN201510786727.1A; granted as CN105374039B)
Authority
CN
China
Prior art keywords
contour
image
depth
gradient
sigma
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510786727.1A
Other languages
Chinese (zh)
Other versions
CN105374039B (en)
Inventor
马利
景源
李鹏
张玉奇
胡彬彬
牛斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liaoning University
Original Assignee
Liaoning University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liaoning University filed Critical Liaoning University
Priority to CN201510786727.1A priority Critical patent/CN105374039B/en
Publication of CN105374039A publication Critical patent/CN105374039A/en
Application granted granted Critical
Publication of CN105374039B publication Critical patent/CN105374039B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention proposes a monocular image depth information estimation method based on contour sharpness. The method uses edge contour sharpness as the blur-information estimation feature and extracts depth information from low-level image cues. First, edge detection is performed on the image. Next, edge energy and contour sharpness are computed for the detected edges and used as the external energy of the contour; combined with the contour's internal energies (the contour-line characteristic energy and the contour-line distance energy), they form a contour-tracking model whose energy function is minimized to search for the image contours. A depth-gradient assumption then serves as the prior gradient model: regions bounded by different contour lines are filled with depth values to obtain the depth distribution. Finally, the resulting depth image is refined using the original image together with the estimated depth, yielding the final disparity map. Experimental results show that the depth estimation algorithm is simple and can quickly and accurately estimate the depth map of a monocular image.

Description

Depth Information Estimation Method for Monocular Images Based on Contour Sharpness

Technical Field

The invention relates to a method for estimating depth information from a monocular image; the method obtains the depth information of the objects shown in a monocular image from low-level image cues.

Background Art

Depth perception is a prerequisite for stereo vision, and applications built on depth information play an important role in scene reconstruction, 3D reconstruction, pattern recognition, object tracking, and related fields. In practice, monocular cameras, multi-view cameras, or depth cameras can be used to extract depth information. Among these, depth estimation from a monocular camera has clear advantages: it is simple to operate, has low hardware cost, and can extract depth directly from existing monocular image material.

Monocular depth estimation extracts two- and three-dimensional geometric information about a target (color, shape, coplanarity, and so on) from a single image, and thus recovers the target's spatial 3D information from a small number of known conditions. Most current algorithms rely on high-level or mid-level image cues. For example, semantic labels learned from reference images can be transferred to define semantic labels on a target image, yielding relative depth information; or the target image can be over-segmented using image structure (color, texture, shape) learned from ground-truth disparity images, with depth inferred by a discriminatively trained Markov random field. By contrast, low-level cues require no analysis of image content: depth can be recovered directly from local information, and the resulting algorithms are comparatively simple.

Blur can serve as an important feature in monocular depth estimation. Blur mostly arises from inaccurate focus during imaging or from objects at different depths within the imaged area. Exploiting this, monocular depth estimation can treat the image as defocused and use the degree of blur to separate foreground from background and thus estimate scene depth. For example, starting from blur values at edge locations, matting and a Markov random field can diffuse the blur over the whole image to extract relative depth. Alternatively, the defocus blur process can be modeled as heat diffusion, and a non-uniform inverse heat-diffusion equation can estimate the blur at edge locations to recover scene depth. However, blur-diffusion methods are complex and inefficient, which limits their practical use.

The present invention therefore simplifies monocular image depth estimation by using low-level image cues and a new blur-information feature.

Summary of the Invention

The present invention proposes a monocular image depth information estimation method based on contour sharpness. The method uses low-level image cues, taking the contour sharpness of edges as the blur-information estimation feature for extracting object contours, and assigns depths to different objects according to the relation between object contours and object depth edges, thereby obtaining the depth information of the different objects in the image.

The object of the invention is achieved by the following technical solution:

A monocular image depth information estimation method based on contour sharpness, characterized in that gradient magnitude and contour sharpness are used to characterize the contours of objects at different depths in the image. During depth estimation, not only the gradient magnitude of an edge is considered but also its spatial information, which more fully captures the blur trend at object edges. The method comprises the following steps:

(1) Perform edge detection on the input image to obtain the edge points P = {p_1, p_2, ..., p_n} of the objects in the image.

(2) Define the initial contour lines according to the prior depth-gradient model: a set of mutually parallel, equally spaced lines V = {v_0, v_1, ..., v_m}, where v(s) = [x(s), y(s)] and x(s), y(s) are the coordinates of point s on an initial contour line.

The total energy of a contour line is defined as:

E_total = w_1 E_edge + w_2 E_sharp + w_3 E_s + w_4 E_d

where E_edge is the contour edge energy function, E_sharp the contour sharpness energy function, E_s the contour-line characteristic energy function, and E_d the contour-line distance energy function; w_1, w_2, w_3 and w_4 are the weight-control parameters of the respective terms and may be assigned according to the characteristics of the image.

The contour edge energy function E_edge represents the gradient magnitude of the contour, defined from the gradient magnitude of the image I(x, y) along the gradient direction as:

E_edge = exp(-|g(I(x, y))|^2 / a)

where g(I(x, y)) = sqrt((∂_x I(x, y))^2 + (∂_y I(x, y))^2) is the gradient magnitude of I(x, y).

The contour sharpness energy function E_sharp represents the degree of blur of the contour and is obtained from the gradient-profile sharpness. It is defined as:

E_sharp = exp(-σ(p(q_0))^2 / b)

where the gradient-profile sharpness σ(p(q_0)) is the root mean square of the variance of the gradient profile. Here the gradient profile is the gradient-magnitude curve along the one-dimensional path p(q_0) obtained by starting at an edge pixel q_0(x_0, y_0) and tracing along the gradient direction toward the edge boundary until the gradient magnitude no longer changes. The gradient-profile sharpness is defined as

σ(p(q_0)) = sqrt( Σ_{q∈p(q_0)} (g(q)/G(q_0)) d_c^2(q, q_0) );   G(q_0) = Σ_{s∈p(q_0)} g(s)

where d_c(q, q_0) is the curve length between points q and q_0 on the gradient profile, g(q) is the gradient magnitude at point q, G(q_0) is the sum of the gradient magnitudes of all points on the profile, s is any point on the profile, and b is a weight-control parameter.

The contour-line characteristic energy function E_s constrains the smoothness of the contour.

The contour-line distance energy function E_d keeps the contour-tracking curve from leaving the search region.

(3) For each initial contour line, starting from its left starting point and updating the position of each contour point from the bottom of the image toward the top, compute the contour-tracking energy of every contour point according to the total-energy definition in step (2).

(4) Minimize the energy function. Search the pixels in the column adjacent to the current contour point for the point of P = {p_1, p_2, ..., p_n} with the minimum energy value as defined by the energy function, and take the pixel position with minimum energy as the new contour point. Repeating this minimum search from the left side of the image to the right yields the final contour search result, i.e., the target contour lines V.

(5) Using the depth-gradient assumption as the prior gradient model, fill depth values into the regions bounded by the different contour lines to obtain the depth distribution.

For the contour lines V = {v_0, v_1, ..., v_m} in the contour set, the assigned depth value Depth is:

Depth(v_i) = 255 × (m - i) / m,   i = 0 ... m

(6) Refine the resulting depth image using the gray-level information of the original image together with the depth image:

Depth_fbf(x_i) = (1 / W(x_i)) Σ_{x_j∈Ω(x_i)} exp( G_σs(||x_i - x_j||) G_σr(I(x_i) - I(x_j)) ) Depth(x_j)

W(x_i) = Σ_{x_j∈Ω(x_i)} exp( G_σs(||x_i - x_j||) G_σr(I(x_i) - I(x_j)) )

where Depth(x_i) is the input depth image, Ω(x_i) is a neighborhood centered on pixel x_i, I(x_i) is the luminance (I) component of pixel x_i, x_j is a neighbor of x_i within Ω(x_i), and W(x_i) is the normalization factor of the filter; ||x_i - x_j|| is the spatial Euclidean distance between the two pixels, and I(x_i) - I(x_j) expresses their luminance similarity; the spatial coordinates of x_i and x_j are (x_ix, x_iy) and (x_jx, x_jy) respectively. The spatial and chrominance weighting coefficients are defined as:

G_σs(||x_i - x_j||) = [ (x_jx - x_ix)^2 + (x_jy - x_iy)^2 ] / (2σ_s^2)

G_σr(I(x_i) - I(x_j)) = |I(x_j) - I(x_i)|^2 / (2σ_r^2)

where σ_s is the variance of the spatial weight and σ_r the variance of the chrominance weight.

(7) The result is the disparity map obtained by estimating depth information for the input monocular image.

The advantage of the invention is a monocular image depth information estimation method based on contour sharpness. Traditional monocular depth estimation methods require high-level and mid-level image cues for learning, training, image understanding and similar steps, and the algorithms are complex; the present method uses low-level image cues and is computationally simple. Unlike traditional approaches that distinguish object depth via blur information, the invention uses contour sharpness to effectively separate object contours when computing blur information, obtaining object contours at different depths while avoiding steps such as blur diffusion, which improves the method's practicality. The method uses gradient magnitude and contour sharpness to characterize the contours of objects at different depths; during depth estimation it considers not only the gradient magnitude of an edge but also its spatial information, more fully capturing the blur trend at object edges.

Brief Description of the Drawings

Fig. 1 is a flow chart of the method.

Fig. 2 illustrates the definition of contour sharpness.

Fig. 3 is a schematic diagram of the relative depth relationship assigned by the contour lines.

Detailed Description

The implementation of the invention is described in detail below with reference to the accompanying drawings and a concrete example.

(1) For the input image, perform edge detection with the Canny edge detector to obtain the edge points P = {p_1, p_2, ..., p_n} of the objects in the image.
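As a quick sketch of step (1), the code below extracts edge points with a plain Sobel gradient-magnitude threshold. It stands in for the Canny detector named in the text (in practice one would call a full Canny implementation, e.g. OpenCV's); the threshold fraction is an illustrative assumption and there is no non-maximum suppression or hysteresis.

```python
import numpy as np

def edge_points(img, thresh=0.2):
    """Return edge points P = {p1, ..., pn} of a grayscale image as
    (x, y) tuples.  Stand-in for the Canny detector of step (1):
    a Sobel gradient-magnitude threshold, `thresh` being a fraction
    of the peak magnitude (illustrative assumption)."""
    img = img.astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # Sobel x
    ky = kx.T                                                   # Sobel y
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    # 2-D cross-correlation with the two Sobel kernels
    for dy in range(3):
        for dx in range(3):
            win = pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            gx += kx[dy, dx] * win
            gy += ky[dy, dx] * win
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag >= thresh * mag.max())
    return list(zip(xs.tolist(), ys.tolist()))
```

On a vertical step edge this returns the two pixel columns straddling the step, which is all the later contour-tracking stage needs as candidate points.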

(2) Define the initial contour lines according to the prior depth-gradient model: a set of mutually parallel, equally spaced lines V = {v_0, v_1, ..., v_m}, where v(s) = [x(s), y(s)] and x(s), y(s) are the coordinates of point s on an initial contour line.

The total energy of a contour line is defined as

E_total = w_1 E_edge + w_2 E_sharp + w_3 E_s + w_4 E_d

where E_edge is the contour edge energy function, E_sharp the contour sharpness energy function, E_s the contour-line characteristic energy function, and E_d the contour-line distance energy function; w_1, w_2, w_3 and w_4 are the weight-control parameters of the respective terms, which for general images can be set to w_1 = 0.25, w_2 = 0.5, w_3 = 0.125, w_4 = 0.125.
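The weighted sum above is a straightforward combination; a minimal sketch, using the general-purpose weights from the text as defaults:

```python
def total_energy(e_edge, e_sharp, e_s, e_d,
                 weights=(0.25, 0.5, 0.125, 0.125)):
    """E_total = w1*E_edge + w2*E_sharp + w3*E_s + w4*E_d.
    Default weights are the general-purpose values given in the text;
    the method allows re-tuning them per image."""
    w1, w2, w3, w4 = weights
    return w1 * e_edge + w2 * e_sharp + w3 * e_s + w4 * e_d
```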

(3) The contour edge energy function E_edge is the gradient magnitude of the image I(x, y) along the gradient direction, defined as

E_edge = exp(-|g(I(x, y))|^2 / a)

where g(I(x, y)) = sqrt((∂_x I(x, y))^2 + (∂_y I(x, y))^2) is the gradient magnitude of I(x, y),

and the parameter a is a weight-control parameter.
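A minimal sketch of the per-pixel edge energy E_edge = exp(-|g|^2 / a), given the two gradient components; the value of the weight-control parameter a is image-dependent, so the default here is purely illustrative:

```python
import math

def e_edge(gx, gy, a=1.0):
    """Edge energy E_edge = exp(-|g(I(x,y))|^2 / a) at one pixel,
    where g is the gradient magnitude sqrt(gx^2 + gy^2).
    `a` is the weight-control parameter from the text; its default
    is an illustrative assumption."""
    g2 = gx * gx + gy * gy  # |g|^2 directly, no square root needed
    return math.exp(-g2 / a)
```

Strong gradients give energies near 0, so minimizing E_total pulls the contour onto strong edges.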

(4) In this method the gradient-profile sharpness of an edge represents the degree of blur of the contour, so the contour sharpness energy function E_sharp, obtained from the gradient-profile sharpness, serves as the measure of contour blur. As shown in Fig. 2, starting from an edge pixel q_0(x_0, y_0), trace along the gradient direction toward both sides of the edge until the gradient magnitude no longer changes, giving the path p(q_0). The gradient-magnitude curve along the one-dimensional path p(q_0) is called the gradient profile. Contour sharpness is defined as the root mean square of the variance of the gradient profile:

σ(p(q_0)) = sqrt( Σ_{q∈p(q_0)} (g(q)/G(q_0)) d_c^2(q, q_0) )

G(q_0) = Σ_{s∈p(q_0)} g(s)

Here d_c(q, q_0) is the curve length between points q and q_0 on the gradient profile, g(q) is the gradient magnitude at point q, G(q_0) is the sum of the gradient magnitudes of all points on the profile, and s is any point on the profile.

The sharpness energy function E_sharp is defined as:

E_sharp = exp(-σ(p(q_0))^2 / b)

where the parameter b is a weight-control parameter.
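The gradient-profile sharpness and E_sharp can be sketched for an already-traced profile. The sampled gradient magnitudes and arc lengths are assumed inputs (the tracing along the gradient direction is not shown), and the default b is illustrative:

```python
import math

def profile_sharpness(mags, arclens):
    """sigma(p(q0)) for a sampled 1-D gradient profile.
    `mags[k]` is the gradient magnitude g(q_k) along the traced path;
    `arclens[k]` is the curve length d_c(q_k, q0) from the edge pixel.
    Implements sigma = sqrt( sum_k g(q_k)/G(q0) * d_c(q_k,q0)^2 ),
    with G(q0) = sum_k g(q_k)."""
    G = sum(mags)
    var = sum(g / G * d * d for g, d in zip(mags, arclens))
    return math.sqrt(var)

def e_sharp(sigma, b=1.0):
    """E_sharp = exp(-sigma^2 / b); b is the weight-control parameter."""
    return math.exp(-sigma * sigma / b)
```

A sharp edge concentrates gradient mass near q_0 (small arc lengths), giving a small sigma; a blurred edge spreads it out, giving a large sigma.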

(5) The contour-line characteristic energy function E_s acts as a smoothness constraint on contour tracking, ensuring that the tracked curve is smooth while preventing the solution from falling into local extrema.

Denote the contour points of a contour line by N = {n_0, n_1, ..., n_n}, where n_0 is the starting point of the contour; the contour-line characteristic energy function is then defined as

E_s(n_i) = d_s(n_i, n_{i-1}) / c = |n_{i,y} - n_{i-1,y}| / c,   i = 0, 1, ..., n

where d_s(n_i, n_{i-1}) is the curve length between points n_i and n_{i-1}, and c is a weight-control parameter.

(6) The contour-line distance energy function E_d is the elastic constraint of contour tracking: it constrains the distance of each contour point on the contour line during tracking, so that the tracking curve does not leave the search region and the contour lines do not cross one another.

E_d(n_i) = d_e(n_i, n_0) / d = |n_{i,y} - n_{0,y}| / d,   i = 0, 1, ..., n

where d_e(n_i, n_0) is the vertical distance between point n_i and point n_0, and d is a weight-control parameter.

(7) For each initial contour line, starting from its left starting point and updating the position of each contour point from the bottom of the image toward the top, compute the contour-tracking energy of every contour point using the total-energy computation of steps (2)-(6).

(8) Minimize the energy function. Search the pixels in the column adjacent to the current contour point for the point of P = {p_1, p_2, ..., p_n} with the minimum energy value as defined by the energy function, and take the pixel position with minimum energy as the new contour point. Repeating this minimum search from the left side of the image to the right yields the final contour search result, i.e., the target contour lines V.
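A simplified sketch of the column-by-column minimum-energy search of steps (7)-(8). For brevity it searches only the three vertical neighbours in the next column over a precomputed per-pixel energy map; the method as described restricts candidates to the detected edge points, so this is an assumption-laden illustration, not the patented search:

```python
def track_contour(energy, start_row):
    """Greedy left-to-right contour search over an energy map.
    `energy` is a 2-D list (rows x cols) of precomputed E_total values.
    From `start_row` in column 0, each step moves to the lowest-energy
    pixel among rows {r-1, r, r+1} in the next column and returns the
    row index chosen in every column."""
    rows, cols = len(energy), len(energy[0])
    path = [start_row]
    r = start_row
    for c in range(1, cols):
        cand = [rr for rr in (r - 1, r, r + 1) if 0 <= rr < rows]
        r = min(cand, key=lambda rr: energy[rr][c])  # minimum-energy step
        path.append(r)
    return path
```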

(9) Taking as the prior gradient model the assumption that depth increases gradually from bottom to top, fill depth values into the regions bounded by the different contour lines according to Fig. 3 to obtain the depth distribution.

For the contour lines V = {v_0, v_1, ..., v_m} in the contour set, the corresponding assigned depth value is:

Depth(v_i) = 255 × (m - i) / m,   i = 0 ... m
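The depth assignment of step (9) in code form; contours are indexed from the bottom of the image (i = 0, nearest, depth 255) to the top (i = m, depth 0), matching the bottom-to-top prior depth gradient:

```python
def assign_contour_depths(m):
    """Depth(v_i) = 255 * (m - i) / m for i = 0..m.
    Returns the list of depth values for contours v_0 .. v_m."""
    return [255.0 * (m - i) / m for i in range(m + 1)]
```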

(10) Refine the resulting depth image using the original image together with the depth image:

Depth_fbf(x_i) = (1 / W(x_i)) Σ_{x_j∈Ω(x_i)} exp( G_σs(||x_i - x_j||) G_σr(I(x_i) - I(x_j)) ) Depth(x_j)

W(x_i) = Σ_{x_j∈Ω(x_i)} exp( G_σs(||x_i - x_j||) G_σr(I(x_i) - I(x_j)) )

where Depth(x_i) is the input depth image, Ω(x_i) is a neighborhood centered on pixel x_i, I(x_i) is the luminance (I) component of pixel x_i, x_j is a neighbor of x_i within Ω(x_i), and W(x_i) is the normalization factor of the filter; ||x_i - x_j|| is the spatial Euclidean distance between the two pixels, and I(x_i) - I(x_j) expresses their luminance similarity; the spatial coordinates of x_i and x_j are (x_ix, x_iy) and (x_jx, x_jy) respectively. The spatial and chrominance weighting coefficients are defined as:

G_σs(||x_i - x_j||) = [ (x_jx - x_ix)^2 + (x_jy - x_iy)^2 ] / (2σ_s^2)

G_σr(I(x_i) - I(x_j)) = |I(x_j) - I(x_i)|^2 / (2σ_r^2)

where σ_s is the variance of the spatial weight and σ_r the variance of the chrominance weight.
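The refinement of step (10) has the shape of a joint (cross) bilateral filter guided by the original grayscale image. The sketch below uses the conventional form exp(-G_σs) · exp(-G_σr) with negative exponents, rather than the literal exp(G_σs · G_σr) written above; treat that sign-and-combination convention, and the parameter defaults, as assumptions:

```python
import math

def refine_depth(depth, gray, sigma_s=1.0, sigma_r=0.1, radius=1):
    """Joint bilateral refinement of the filled depth map, guided by
    the original grayscale image (sketch of step (10)).
    `depth` and `gray` are 2-D lists of equal size; the quadratic
    spatial/range terms match G_sigma_s and G_sigma_r in the text."""
    h, w = len(depth), len(depth[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if not (0 <= yy < h and 0 <= xx < w):
                        continue
                    gs = (dx * dx + dy * dy) / (2.0 * sigma_s ** 2)
                    dr = gray[yy][xx] - gray[y][x]
                    gr = dr * dr / (2.0 * sigma_r ** 2)
                    wgt = math.exp(-(gs + gr))  # spatial * range weight
                    num += wgt * depth[yy][xx]
                    den += wgt
            out[y][x] = num / den  # W(x_i) normalization
    return out
```

Because the range term is driven by the guidance image, depth values only blend across pixels with similar luminance, so the filter smooths each region while keeping depth discontinuities aligned with image edges.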

(11) The result is the disparity map obtained by estimating depth information for the input monocular image.

Claims (3)

1. A monocular image depth information estimation method based on contour sharpness, characterized in that it comprises the following steps:

(1) Perform edge detection on the input image to obtain the edge points P = {p_1, p_2, ..., p_n} of the objects in the image.

(2) Define the initial contour lines according to the prior depth-gradient model: a set of mutually parallel, equally spaced lines V = {v_0, v_1, ..., v_m}, where v(s) = [x(s), y(s)] and x(s), y(s) are the coordinates of point s on an initial contour line.

The total energy of a contour line is defined as

E_total = w_1 E_edge + w_2 E_sharp + w_3 E_s + w_4 E_d

where E_edge is the contour edge energy function, representing the gradient magnitude of the contour; E_sharp is the contour sharpness energy function, representing the degree of blur of the contour; E_s is the contour-line characteristic energy function, constraining the smoothness of the contour; E_d is the contour-line distance energy function, keeping the contour-tracking curve from leaving the search region; and w_1, w_2, w_3, w_4 are the weight-control parameters of the respective terms.

(3) For each initial contour line, starting from its left starting point and updating the position of each contour point from the bottom of the image toward the top, compute the contour-tracking energy of every contour point according to the total-energy definition in step (2).

(4) Minimize the energy function: search the pixels in the column adjacent to the current contour point for the point of P = {p_1, p_2, ..., p_n} with the minimum energy value as defined by the energy function, and take the pixel position with minimum energy as the new contour point; repeating this minimum search from the left side of the image to the right yields the final contour search result, i.e., the target contour lines V.

(5) Using the depth-gradient assumption as the prior gradient model, fill depth values into the regions bounded by the different contour lines to obtain the depth distribution; for the contour lines V = {v_0, v_1, ..., v_m}, the assigned depth value for i = 0 ... m is:

Depth(v_i) = 255 × (m - i) / m

(6) Refine the resulting depth image using the gray-level information of the original image together with the depth image:

Depth_fbf(x_i) = (1 / W(x_i)) Σ_{x_j∈Ω(x_i)} exp( G_σs(||x_i - x_j||) G_σr(I(x_i) - I(x_j)) ) Depth(x_j)

W(x_i) = Σ_{x_j∈Ω(x_i)} exp( G_σs(||x_i - x_j||) G_σr(I(x_i) - I(x_j)) )

where Depth(x_i) is the input depth image, Ω(x_i) is a neighborhood centered on pixel x_i, I(x_i) is the luminance (I) component of pixel x_i, x_j is a neighbor of x_i within Ω(x_i), and W(x_i) is the normalization factor of the filter; ||x_i - x_j|| is the spatial Euclidean distance between the two pixels; I(x_i) - I(x_j) expresses their luminance similarity; the spatial coordinates of x_i and x_j are (x_ix, x_iy) and (x_jx, x_jy) respectively; the spatial and chrominance weighting coefficients are defined as:

G_σs(||x_i - x_j||) = [ (x_jx - x_ix)^2 + (x_jy - x_iy)^2 ] / (2σ_s^2)

G_σr(I(x_i) - I(x_j)) = |I(x_j) - I(x_i)|^2 / (2σ_r^2)

where σ_s is the variance of the spatial weight and σ_r the variance of the chrominance weight.

(7) The result is the disparity map obtained by estimating depth information for the input monocular image.

2. The monocular image depth information estimation method based on contour sharpness according to claim 1, characterized in that the contour edge energy function E_edge is the gradient magnitude of the image I(x, y) along the gradient direction, defined as:

E_edge = exp(-|g(I(x, y))|^2 / a)

where g(I(x, y)) = sqrt((∂_x I(x, y))^2 + (∂_y I(x, y))^2) is the gradient magnitude of I(x, y), and the parameter a is a weight-control parameter.

3. The monocular image depth information estimation method based on contour sharpness according to claim 1, characterized in that the contour sharpness energy function E_sharp is obtained from the gradient-profile sharpness and defined as:

E_sharp = exp(-σ(p(q_0))^2 / b)

where the gradient-profile sharpness σ(p(q_0)) is the root mean square of the variance of the gradient profile; here the gradient profile is the gradient-magnitude curve along the one-dimensional path p(q_0) obtained by starting at an edge pixel q_0(x_0, y_0) and tracing along the gradient direction toward the edge boundary until the gradient magnitude no longer changes; the gradient-profile sharpness is defined as

σ(p(q_0)) = sqrt( Σ_{q∈p(q_0)} (g(q)/G(q_0)) d_c^2(q, q_0) );   G(q_0) = Σ_{s∈p(q_0)} g(s)

where d_c(q, q_0) is the curve length between points q and q_0 on the gradient profile, g(q) is the gradient magnitude at point q, G(q_0) is the sum of the gradient magnitudes of all points on the profile, s is any point on the profile, and the parameter b is a weight-control parameter.
CN201510786727.1A 2015-11-16 2015-11-16 Monocular image depth information estimation method based on contour acuity Active CN105374039B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510786727.1A CN105374039B (en) Monocular image depth information estimation method based on contour acuity

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510786727.1A CN105374039B (en) Monocular image depth information estimation method based on contour acuity

Publications (2)

Publication Number Publication Date
CN105374039A true CN105374039A (en) 2016-03-02
CN105374039B CN105374039B (en) 2018-09-21

Family

ID=55376211

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510786727.1A Active CN105374039B (en) Monocular image depth information estimation method based on contour acuity

Country Status (1)

Country Link
CN (1) CN105374039B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107204010A (en) * 2017-04-28 2017-09-26 中国科学院计算技术研究所 A kind of monocular image depth estimation method and system
CN107582001A (en) * 2017-10-20 2018-01-16 珠海格力电器股份有限公司 Dish washing machine and control method, device and system thereof
CN108647713A (en) * 2018-05-07 2018-10-12 宁波华仪宁创智能科技有限公司 Embryo's Boundary Recognition and laser trace approximating method
CN109087346A (en) * 2018-09-21 2018-12-25 北京地平线机器人技术研发有限公司 Training method, training device and the electronic equipment of monocular depth model
JP2020524355A (en) * 2018-05-23 2020-08-13 浙江商▲湯▼科技▲開▼▲発▼有限公司Zhejiang Sensetime Technology Development Co., Ltd. Method and apparatus for recovering depth of monocular image, computer device
US10769805B2 (en) 2018-05-15 2020-09-08 Wistron Corporation Method, image processing device, and system for generating depth map
CN112396645A (en) * 2020-11-06 2021-02-23 华中科技大学 Monocular image depth estimation method and system based on convolution residual learning
CN112446946A (en) * 2019-08-28 2021-03-05 深圳市光鉴科技有限公司 Depth reconstruction method, system, device and medium based on sparse depth and boundary
CN114022567A (en) * 2021-11-09 2022-02-08 浙江商汤科技开发有限公司 Pose tracking method and device, electronic equipment and storage medium
CN114841969A (en) * 2022-05-07 2022-08-02 辽宁大学 Forged face identification method based on color gradient texture representation
CN116503821A (en) * 2023-06-19 2023-07-28 成都经开地理信息勘测设计院有限公司 Road identification recognition method and system based on point cloud data and image recognition

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100033617A1 (en) * 2008-08-05 2010-02-11 Qualcomm Incorporated System and method to generate depth data using edge detection
US20100141651A1 (en) * 2008-12-09 2010-06-10 Kar-Han Tan Synthesizing Detailed Depth Maps from Images
CN101840574A (en) * 2010-04-16 2010-09-22 西安电子科技大学 Depth estimation method based on edge pixel features
CN102883175A (en) * 2012-10-23 2013-01-16 青岛海信信芯科技有限公司 Methods for extracting depth map, judging video scene change and optimizing edge of depth map
CN103793918A (en) * 2014-03-07 2014-05-14 深圳市辰卓科技有限公司 Image definition detecting method and device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
NATALIA NEVEROVA ET AL.: "Edge Based Method for Sharp Region Extraction from Low Depth of Field Images", 《VISUAL COMMUNICATIONS AND IMAGE PROCESSING (VCIP), 2012 IEEE》 *
YONG JU JUNG ET AL.: "A novel 2D-to-3D conversion technique based on relative height depth cue", 《PROCEEDINGS OF THE SPIE》 *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107204010B (en) * 2017-04-28 2019-11-19 中国科学院计算技术研究所 A monocular image depth estimation method and system
CN107204010A (en) * 2017-04-28 2017-09-26 中国科学院计算技术研究所 A kind of monocular image depth estimation method and system
CN107582001A (en) * 2017-10-20 2018-01-16 珠海格力电器股份有限公司 Dish washing machine and control method, device and system thereof
CN107582001B (en) * 2017-10-20 2020-08-11 珠海格力电器股份有限公司 Dish washing machine and control method, device and system thereof
CN108647713A (en) * 2018-05-07 2018-10-12 宁波华仪宁创智能科技有限公司 Embryo's Boundary Recognition and laser trace approximating method
CN108647713B (en) * 2018-05-07 2021-04-02 宁波华仪宁创智能科技有限公司 Embryo boundary identification and laser track fitting method
US10769805B2 (en) 2018-05-15 2020-09-08 Wistron Corporation Method, image processing device, and system for generating depth map
US11004221B2 (en) 2018-05-23 2021-05-11 Zhejiang Sensetime Technology Development Co., Ltd. Depth recovery methods and apparatuses for monocular image, and computer devices
JP2020524355A (en) * 2018-05-23 2020-08-13 浙江商▲湯▼科技▲開▼▲発▼有限公司Zhejiang Sensetime Technology Development Co., Ltd. Method and apparatus for recovering depth of monocular image, computer device
CN109087346A (en) * 2018-09-21 2018-12-25 北京地平线机器人技术研发有限公司 Training method, training device and the electronic equipment of monocular depth model
CN112446946A (en) * 2019-08-28 2021-03-05 深圳市光鉴科技有限公司 Depth reconstruction method, system, device and medium based on sparse depth and boundary
CN112396645A (en) * 2020-11-06 2021-02-23 华中科技大学 Monocular image depth estimation method and system based on convolution residual learning
CN112396645B (en) * 2020-11-06 2022-05-31 华中科技大学 Monocular image depth estimation method and system based on convolution residual learning
CN114022567A (en) * 2021-11-09 2022-02-08 浙江商汤科技开发有限公司 Pose tracking method and device, electronic equipment and storage medium
CN114841969A (en) * 2022-05-07 2022-08-02 辽宁大学 Forged face identification method based on color gradient texture representation
CN114841969B (en) * 2022-05-07 2024-12-27 辽宁大学 A forged face identification method based on color gradient texture representation
CN116503821A (en) * 2023-06-19 2023-07-28 成都经开地理信息勘测设计院有限公司 Road identification recognition method and system based on point cloud data and image recognition
CN116503821B (en) * 2023-06-19 2023-08-25 成都经开地理信息勘测设计院有限公司 Road identification recognition method and system based on point cloud data and image recognition

Also Published As

Publication number Publication date
CN105374039B (en) 2018-09-21

Similar Documents

Publication Publication Date Title
CN105374039B (en) Monocular image depth information estimation method based on contour acuity
US11763485B1 (en) Deep learning based robot target recognition and motion detection method, storage medium and apparatus
Yang et al. Color-guided depth recovery from RGB-D data using an adaptive autoregressive model
CN103426182B (en) The electronic image stabilization method of view-based access control model attention mechanism
Liu et al. Guided inpainting and filtering for kinect depth maps
CN104574366B (en) A kind of extracting method in the vision significance region based on monocular depth figure
CN103077521B (en) A kind of area-of-interest exacting method for video monitoring
CN103310453B (en) A kind of fast image registration method based on subimage Corner Feature
CN107025660B (en) Method and device for determining image parallax of binocular dynamic vision sensor
CN103383776B (en) A kind of laddering Stereo Matching Algorithm based on two stage cultivation and Bayesian Estimation
CN106952222A (en) A kind of interactive image weakening method and device
CN104463870A (en) Image salient region detection method
CN103473743B (en) A kind of method obtaining image depth information
WO2018053952A1 (en) Video image depth extraction method based on scene sample library
CN106651853A (en) Establishment method for 3D saliency model based on prior knowledge and depth weight
CN105023253A (en) Visual underlying feature-based image enhancement method
CN104036481B (en) Multi-focus image fusion method based on depth information extraction
CN106447640A (en) Multi-focus image fusion method based on dictionary learning and rotating guided filtering and multi-focus image fusion device thereof
CN102306393A (en) Method and device for deep diffusion based on contour matching
CN106447718A (en) 2D-to-3D depth estimation method
CN103646397B (en) Real-time synthetic aperture perspective imaging method based on multi-source data fusion
Keaomanee et al. Implementation of four kriging models for depth inpainting
CN103413332A (en) Image segmentation method based on two-channel texture segmentation active contour model
Srikakulapu et al. Depth estimation from single image using defocus and texture cues
RU2580466C1 (en) Device for recovery of depth map of scene

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant