CN103517052B - View synthesis method, apparatus and encoder for encoding depth information - Google Patents
View synthesis method, apparatus and encoder for encoding depth information
- Publication number
- CN103517052B CN103517052B CN201210226046.6A CN201210226046A CN103517052B CN 103517052 B CN103517052 B CN 103517052B CN 201210226046 A CN201210226046 A CN 201210226046A CN 103517052 B CN103517052 B CN 103517052B
- Authority
- CN
- China
- Prior art keywords
- distorted
- horizontal
- depth
- horizontal parallax
- depth value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Embodiments of the present invention provide a view synthesis method, apparatus and encoder for use when encoding depth information. The method includes: during the view synthesis process performed while encoding depth information, for each row of depth values of a depth image coding unit, mapping the original depth values and the distorted depth values to original horizontal disparities and distorted horizontal disparities, respectively, according to the mapping relationship between depth value and horizontal disparity; judging, from the difference between the original horizontal disparity and the distorted horizontal disparity and using a horizontal disparity distortion threshold, whether the distorted depth values of the current row can change the pixel values of the synthesized view; and, if it is determined that the distorted depth values of the current row do not change the pixel values of the synthesized view, skipping that row during view synthesis, i.e. not using the distorted depth values of that row for view synthesis. The apparatus includes a mapping unit, a judging unit and a view synthesis processing unit. The encoder includes the above apparatus. The technical solution of the present invention reduces the time complexity at the encoder while preserving coding performance.
Description
Technical Field
The present invention relates to the field of multimedia technology, and in particular to a view synthesis method, apparatus and encoder for use when encoding depth information.
Background Art
At the 98th MPEG (Moving Picture Experts Group) meeting, HHI (the Heinrich Hertz Institute) proposed a rate-distortion optimization algorithm for depth image compression based on synthesized-view distortion information. In this algorithm, the distortion of a depth image is measured by the change in distortion of the synthesized view, which can be expressed as:
$\Delta D = \sum_{(x,y)\in I}\big[\tilde{s}'_T(x,y) - s'_{T,R}(x,y)\big]^2 - \sum_{(x,y)\in I}\big[s'_T(x,y) - s'_{T,R}(x,y)\big]^2$  (1)
where s'_{T,R}(x,y) denotes the virtual view synthesized from the original texture image and the original depth information. When encoding the depth information, a depth image is divided into three categories: already encoded, currently being encoded, and not yet encoded. Both s̃'_T(x,y) and s'_T(x,y) are synthesized using the distorted texture image. The difference is that s'_T(x,y) is synthesized from the reconstructed already-encoded depth information, the original depth information of the current coding block, and the original depth information of the remaining pixels, whereas s̃'_T(x,y) is synthesized from the reconstructed already-encoded depth information, the distorted depth information of the current coding block, and the original depth information of the remaining pixels. It can be seen that ΔD = 0 if the current distorted depth information does not cause any change in the synthesized view.
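As a minimal sketch (not part of the patent) of how the distortion change ΔD of formula (1) could be evaluated, assuming the three synthesized views are available as NumPy pixel arrays and that formula (1) has the synthesized-view-distortion-change form reconstructed above; the function and argument names are illustrative:

```python
import numpy as np

def svdc(s_tilde_T, s_T, s_T_R):
    """Synthesized-view distortion change ΔD of formula (1).

    s_tilde_T : view synthesized with the distorted depth of the current block
    s_T       : view synthesized with the original depth of the current block
    s_T_R     : reference view synthesized from original texture and original depth
    All three are 2-D arrays of pixel values over the image region I."""
    diff_distorted = s_tilde_T.astype(np.float64) - s_T_R.astype(np.float64)
    diff_original = s_T.astype(np.float64) - s_T_R.astype(np.float64)
    # ΔD is zero when the distorted depth leaves the synthesized view unchanged.
    return float(np.sum(diff_distorted ** 2) - np.sum(diff_original ** 2))
```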
The existing technical solution estimates the synthesized-view distortion caused by the current depth distortion from the change in synthesized-view distortion, so the following operations are required during actual encoding:
1. Before encoding the current depth image, perform view synthesis with the original depth image and the original texture image, i.e. synthesize s'_{T,R}(x,y); and perform view synthesis with the original depth image and the distorted texture image, i.e. synthesize s'_T(x,y) and s̃'_T(x,y), before the encoding process begins.
2. When encoding the current block, a rate-distortion computation is required. Let the current depth block be B, the distorted block whose rate-distortion is to be computed be B', the original depth information of the already-encoded region be P, its distorted depth information be P', and the original depth information of the not-yet-encoded region be H. The current s'_T(x,y) is rendered from the depth information of P', B and H and does not need to be recomputed at this point (it is updated in the following step). However, in order to compute ΔD in formula (1), B' must be applied to update the synthesized view after the current depth block is distorted, i.e. to update s̃'_T(x,y). In this process, only the synthesized-view pixels affected by B' are re-rendered rather than the whole image, so that ΔD can be obtained for the rate-distortion computation.
3. After the current block has been encoded, view synthesis is performed with the distortion information encoded so far in order to update the synthesized view. Again, let the current depth block be B, the reconstructed block be B'', the original depth information of the already-encoded region be P, its distorted depth information be P', and the original depth information of the not-yet-encoded region be H. During the encoding of the current block, s'_T(x,y) is obtained from the depth information of P', B and H; once the current block has been encoded and B'' obtained, s'_T(x,y) must be updated by applying B''. As in step 2, the update only re-renders the synthesized-view pixels affected by B'' rather than the whole image, yielding the s'_T(x,y) rendered from P', B'' and H.
As can be seen from the above procedure, the existing technical solution has to render and update the synthesized view continuously during encoding, which entails a large time complexity (time overhead). A fast technical solution is therefore needed that reduces the time complexity at the encoder while preserving coding performance.
Summary of the Invention
Embodiments of the present invention provide a view synthesis method, apparatus and encoder for use when encoding depth information, so as to reduce the time complexity at the encoder while preserving coding performance.
In one aspect, an embodiment of the present invention provides a view synthesis method for use when encoding depth information, the method including:
during the view synthesis process performed while encoding depth information, for each row of depth values of a depth image coding unit, mapping the original depth values and the distorted depth values to original horizontal disparities and distorted horizontal disparities, respectively, according to the mapping relationship between depth value and horizontal disparity;
judging, from the difference between the original horizontal disparity and the distorted horizontal disparity and using a horizontal disparity distortion threshold, whether the distorted depth values of the current row can change the pixel values of the synthesized view;
if it is determined that the distorted depth values of the current row do not change the pixel values of the synthesized view, skipping that row during view synthesis, i.e. not using the distorted depth values of that row for view synthesis.
Preferably, in an embodiment of the present invention, the horizontal disparity distortion threshold is a threshold on horizontal disparity distortion that would cause the synthesized view to change, derived from the smoothness of the texture image and the occlusion information in the synthesized image.
Preferably, in an embodiment of the present invention, judging from the difference between the original horizontal disparity and the distorted horizontal disparity, and using the horizontal disparity distortion threshold, whether the distorted depth values of the current row can change the pixel values of the synthesized view includes: when the current pixel occludes other pixels, the horizontal disparity distortion threshold is zero, i.e. the depth value corresponding to the current pixel must not produce any disparity distortion; if the current depth value produces no disparity distortion, further judging whether, for all distorted depth values of the current row containing the current pixel, the difference between the original horizontal disparity and the distorted horizontal disparity lies within the range defined by the horizontal disparity distortion threshold; and if so, determining that the distorted depth values of the current row do not change the pixel values of the synthesized view.
Preferably, in an embodiment of the present invention, judging from the difference between the original horizontal disparity and the distorted horizontal disparity, and using the horizontal disparity distortion threshold, whether the distorted depth values of the current row can change the pixel values of the synthesized view includes: when the current pixel is occluded by other pixels, or the current pixel neither occludes nor is occluded by other pixels, judging whether the difference between the original horizontal disparity and the distorted horizontal disparity lies within the range defined by the horizontal disparity distortion threshold; if so, further judging whether, for all distorted depth values of the current row containing the current pixel, the difference between the original horizontal disparity and the distorted horizontal disparity lies within the range defined by the horizontal disparity distortion threshold; and if so, determining that the distorted depth values of the current row do not change the pixel values of the synthesized view.
In another aspect, an embodiment of the present invention provides a view synthesis apparatus for use when encoding depth information, the apparatus including:
a mapping unit, configured to, during the view synthesis process performed while encoding depth information, for each row of depth values of a depth image coding unit, map the original depth values and the distorted depth values to original horizontal disparities and distorted horizontal disparities, respectively, according to the mapping relationship between depth value and horizontal disparity;
a judging unit, configured to judge, from the difference between the original horizontal disparity and the distorted horizontal disparity and using a horizontal disparity distortion threshold, whether the distorted depth values of the pixels in the current row can change the pixel values of the synthesized view;
a view synthesis processing unit, configured to, if it is determined that the distorted depth values of the current row do not change the pixel values of the synthesized view, skip that row during view synthesis and not use the distorted depth values of that row for view synthesis.
Preferably, in an embodiment of the present invention, the horizontal disparity distortion threshold is a threshold on horizontal disparity distortion that would cause the synthesized view to change, derived from the smoothness of the texture image and the occlusion information in the synthesized image.
Preferably, in an embodiment of the present invention, the judging unit includes a first judging module configured to: when the current pixel occludes other pixels, set the horizontal disparity distortion threshold to zero, i.e. the depth value corresponding to the current pixel must not produce any disparity distortion; if the current depth value produces no disparity distortion, further judge whether, for all distorted depth values of the current row containing the current pixel, the difference between the original horizontal disparity and the distorted horizontal disparity lies within the range defined by the horizontal disparity distortion threshold; and if so, determine that the distorted depth values of the current row do not change the pixel values of the synthesized view.
Preferably, in an embodiment of the present invention, the judging unit includes a second judging module configured to: when the current pixel is occluded by other pixels, or the current pixel neither occludes nor is occluded by other pixels, judge whether the difference between the original horizontal disparity and the distorted horizontal disparity lies within the range defined by the horizontal disparity distortion threshold; if so, further judge whether, for all distorted depth values of the current row containing the current pixel, the difference between the original horizontal disparity and the distorted horizontal disparity lies within the range defined by the horizontal disparity distortion threshold; and if so, determine that the distorted depth values of the current row do not change the pixel values of the synthesized view.
In yet another aspect, an embodiment of the present invention provides an encoder, the encoder including the above view synthesis apparatus for use when encoding depth information.
The above technical solution has the following beneficial effect: because, during the view synthesis process performed while encoding depth information, for each row of depth values of a depth image coding unit the original and distorted depth values are mapped to original and distorted horizontal disparities according to the mapping relationship between depth value and horizontal disparity; whether the distorted depth values of the current row can change the pixel values of the synthesized view is judged from the difference between the original and distorted horizontal disparities using a horizontal disparity distortion threshold; and, if the distorted depth values of the current row are determined not to change the pixel values of the synthesized view, the row is skipped during view synthesis and its distorted depth values are not used for view synthesis, the time complexity at the encoder can be reduced while coding performance is preserved.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flow chart of a view synthesis method for use when encoding depth information according to an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of a view synthesis apparatus for use when encoding depth information according to an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a judging unit according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of an original disparity and a distorted disparity that correspond to the same 1/4-pixel-precision disparity in an application example of the present invention;
Fig. 5(a) is a schematic diagram of the synthesized view corresponding to the original depth in an application example of the present invention;
Fig. 5(b) is a schematic diagram of the synthesized view corresponding to the distorted depth in an application example of the present invention;
Fig. 5(c) is a schematic diagram of the difference between the pixel values of the synthesized views in Fig. 5(a) and Fig. 5(b) in an application example of the present invention;
Fig. 6 is a schematic diagram of occluded pixels in an application example of the present invention;
Fig. 7 is a flow chart of a view synthesis method when encoding depth information in an application example of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
As shown in Fig. 1, a flow chart of a view synthesis method for use when encoding depth information according to an embodiment of the present invention, the method includes:
101. During the view synthesis process performed while encoding depth information, for each row of depth values of a depth image coding unit, map the original depth values and the distorted depth values to original horizontal disparities and distorted horizontal disparities, respectively, according to the mapping relationship between depth value and horizontal disparity;
102. Judge, from the difference between the original horizontal disparity and the distorted horizontal disparity and using a horizontal disparity distortion threshold, whether the distorted depth values of the current row can change the pixel values of the synthesized view;
103. If it is determined that the distorted depth values of the current row do not change the pixel values of the synthesized view, skip that row during view synthesis and do not use the distorted depth values of that row for view synthesis.
Preferably, the horizontal disparity distortion threshold is a threshold on horizontal disparity distortion that would cause the synthesized view to change, derived from the smoothness of the texture image and the occlusion information in the synthesized image.
Preferably, judging from the difference between the original horizontal disparity and the distorted horizontal disparity, and using the horizontal disparity distortion threshold, whether the distorted depth values of the current row can change the pixel values of the synthesized view includes: when the current pixel occludes other pixels, the horizontal disparity distortion threshold is zero, i.e. the depth value corresponding to the current pixel must not produce any disparity distortion; if the current depth value produces no disparity distortion, further judging whether, for all distorted depth values of the current row containing the current pixel, the difference between the original horizontal disparity and the distorted horizontal disparity lies within the range defined by the horizontal disparity distortion threshold; and if so, determining that the distorted depth values of the current row do not change the pixel values of the synthesized view.
Preferably, judging from the difference between the original horizontal disparity and the distorted horizontal disparity, and using the horizontal disparity distortion threshold, whether the distorted depth values of the current row can change the pixel values of the synthesized view includes: when the current pixel is occluded by other pixels, or the current pixel neither occludes nor is occluded by other pixels, judging whether the difference between the original horizontal disparity and the distorted horizontal disparity lies within the range defined by the horizontal disparity distortion threshold; if so, further judging whether, for all distorted depth values of the current row containing the current pixel, the difference between the original horizontal disparity and the distorted horizontal disparity lies within the range defined by the horizontal disparity distortion threshold; and if so, determining that the distorted depth values of the current row do not change the pixel values of the synthesized view.
Corresponding to the above method embodiment, as shown in Fig. 2, a schematic structural diagram of a view synthesis apparatus for use when encoding depth information according to an embodiment of the present invention, the apparatus includes:
a mapping unit 21, configured to, during the view synthesis process performed while encoding depth information, for each row of depth values of a depth image coding unit, map the original depth values and the distorted depth values to original horizontal disparities and distorted horizontal disparities, respectively, according to the mapping relationship between depth value and horizontal disparity;
a judging unit 22, configured to judge, from the difference between the original horizontal disparity and the distorted horizontal disparity and using a horizontal disparity distortion threshold, whether the distorted depth values of the current row can change the pixel values of the synthesized view;
a view synthesis processing unit 23, configured to, if it is determined that the distorted depth values of the current row do not change the pixel values of the synthesized view, skip that row during view synthesis and not use the distorted depth values of that row for view synthesis.
Preferably, the horizontal disparity distortion threshold is a threshold on horizontal disparity distortion that would cause the synthesized view to change, derived from the smoothness of the texture image and the occlusion information in the synthesized image.
Preferably, as shown in Fig. 3, a schematic structural diagram of the judging unit according to an embodiment of the present invention, the judging unit 22 includes a first judging module 221, configured to: when the current pixel occludes other pixels, set the horizontal disparity distortion threshold to zero, i.e. the depth value corresponding to the current pixel must not produce any disparity distortion; if the current depth value produces no disparity distortion, further judge whether, for all distorted depth values of the current row containing the current pixel, the difference between the original horizontal disparity and the distorted horizontal disparity lies within the range defined by the horizontal disparity distortion threshold; and if so, determine that the distorted depth values of the current row do not change the pixel values of the synthesized view. The judging unit further includes a second judging module 222, configured to: when the current pixel is occluded by other pixels, or the current pixel neither occludes nor is occluded by other pixels, judge whether the difference between the original horizontal disparity and the distorted horizontal disparity lies within the range defined by the horizontal disparity distortion threshold; if so, further judge whether, for all distorted depth values of the current row containing the current pixel, the difference between the original horizontal disparity and the distorted horizontal disparity lies within the range defined by the horizontal disparity distortion threshold; and if so, determine that the distorted depth values of the current row do not change the pixel values of the synthesized view.
In yet another aspect, an embodiment of the present invention provides an encoder, the encoder including the above view synthesis apparatus for use when encoding depth information.
The above method, apparatus and encoder of the embodiments of the present invention have the following beneficial effect: because, during the view synthesis process performed while encoding depth information, for each row of depth values of a depth image coding unit the original and distorted depth values are mapped to original and distorted horizontal disparities according to the mapping relationship between depth value and horizontal disparity; whether the distorted depth values of the current row can change the pixel values of the synthesized view is judged from the difference between the original and distorted horizontal disparities using a horizontal disparity distortion threshold; and, if the distorted depth values of the current row are determined not to change the pixel values of the synthesized view, the row is skipped during view synthesis and its distorted depth values are not used for view synthesis, the time complexity at the encoder can be reduced while coding performance is preserved.
The technical solution proposed by the above method, apparatus or encoder of the embodiments of the present invention is that, during encoding, before each view synthesis, once the original depth information and the current depth information have been obtained, the original and current depth information, the characteristics of the texture image, and the occlusion characteristics of the synthesized view are analyzed to decide whether the view synthesis operation needs to be performed. (It should be noted that this technical solution has no effect on the decoder.) The specific technical solution is as follows:
When the cameras are arranged horizontally in parallel, the relationship between the real depth z and the disparity d can be expressed as $d = \frac{f \cdot l}{z}$  (2)
where f is the focal length of the camera and l is the baseline distance between the two viewpoints.
Let the quantized depth corresponding to the current depth z be v = Q(z); the relationship between the disparity and the quantized depth can then be expressed as formula (3).
It can be seen from formula (3) that different depth values lead to different horizontal disparities. In practice, however, the horizontal disparity is usually rounded. For example, in the existing 3DV-HEVC the horizontal disparity is rounded to 1/4-pixel precision. As shown in Fig. 4, a schematic diagram of an original disparity and a distorted disparity corresponding to the same 1/4-pixel-precision disparity in an application example of the present invention, after rounding the original depth and the distorted depth correspond to the same 1/4-pixel-precision horizontal disparity. Therefore, a distortion of the depth information does not necessarily lead to a distortion of the horizontal disparity.
Let the position of the current pixel be p, and let R_N(d) denote rounding the disparity d corresponding to a pixel's depth value to 1/N precision. Let d_op be the original disparity and d_sp the distorted disparity. The disparity distortion can then be expressed as
$D_N(d_{op}, d_{sp}) = R_N(d_{op}) - R_N(d_{sp})$  (4)
In the existing 3DV-HEVC, the horizontal disparity is rounded to 1/4-pixel precision, so N = 4.
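A small sketch of the depth-to-disparity mapping and of the rounding operator R_N used in formula (4). The commonly used linear 1/z quantization of the 3DV test models is assumed here in place of formula (3), whose exact form is not quoted above; the camera parameters and function names are assumptions:

```python
def depth_to_disparity(v, f, l, z_near, z_far):
    """Map an 8-bit quantized depth value v to a horizontal disparity.

    Assumes the linear 1/z quantization commonly used in 3DV test models;
    f (focal length), l (baseline) and the depth range z_near/z_far are
    assumed camera parameters, not values given in this patent."""
    inv_z = (v / 255.0) * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far
    return f * l * inv_z

def round_disparity(d, n=4):
    """R_N(d): round the disparity d to 1/N-pixel precision (N = 4 in 3DV-HEVC)."""
    return round(d * n) / n

def disparity_distortion(d_op, d_sp, n=4):
    """D_N(d_op, d_sp) of formula (4): the difference of the rounded disparities."""
    return round_disparity(d_op, n) - round_disparity(d_sp, n)
```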
When the disparity distortion is not 0, the distortion of the synthesized virtual view depends strongly on the characteristics of the texture image. Fig. 5(a) is a schematic diagram of the synthesized view corresponding to the original depth in an application example of the present invention; Fig. 5(b) is a schematic diagram of the synthesized view corresponding to the distorted depth; Fig. 5(c) is a schematic diagram of the difference between the pixel values of the synthesized views in Fig. 5(a) and Fig. 5(b). The figures show that the pixel values at positions 2-5 of the texture image are identical, so the texture gradient at these positions is very small or zero; in this case the pixels of the synthesized views obtained with the original depth information and with the distorted depth information differ very little, as at synthesized-view positions 1-3 in Fig. 5(a) and Fig. 5(b). On the other hand, the pixel values at positions 5-9 of the texture image vary considerably, so the texture gradient at these positions is large; in this case the pixels of the synthesized views obtained with the original and distorted depth information differ greatly, as at synthesized-view positions 4-7 in Fig. 5(a) and Fig. 5(b).
The embodiment of the present invention defines thresholds G_l and G_r that describe how smooth the region to the left and to the right of the current pixel is after 1/4-pixel interpolation. Specifically, G_l indicates that, within the range of pixels to the left of the current pixel p, every pixel between G_l and p has the same value as p; likewise, G_r indicates that, within the range of pixels to the right of p, every pixel between p and G_r has the same value as p.
$G_l = \min\{\,p_l : I(m) = I(p)\ \text{for all}\ m,\ p_l \le m \le p\,\}$
$G_r = \max\{\,p_r : I(m) = I(p)\ \text{for all}\ m,\ p \le m \le p_r\,\}$  (5)
where p_l < p denotes a pixel to the left of pixel p, p_r > p denotes a pixel to the right of pixel p, and I denotes the high-resolution reconstructed image after interpolation.
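A sketch of how G_l and G_r of formula (5) could be computed for one row of the interpolated reconstructed image; the function name and the representation of the row as a list of samples are assumptions:

```python
def smooth_extent(row, p):
    """Compute (G_l, G_r) of formula (5) for position p in one row of the
    interpolated high-resolution reconstructed image: the left-most and
    right-most positions such that every sample between them and p equals row[p]."""
    g_l = p
    while g_l - 1 >= 0 and row[g_l - 1] == row[p]:
        g_l -= 1
    g_r = p
    while g_r + 1 < len(row) and row[g_r + 1] == row[p]:
        g_r += 1
    return g_l, g_r
```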
When the current pixel is occluded, its depth information plays no role in view synthesis. Fig. 6 is a schematic diagram of occluded pixels in an application example of the present invention; the depth information of pixels c and d does not affect the result of view synthesis. If the current pixel is still occluded after its depth information is distorted, the distortion of its depth information can be considered to have no effect on view synthesis. For the case where the current pixel is occluded, the application example of the present invention defines:
where p_o is the position of the depth pixel that occludes the current pixel and p is the position of the current pixel. T_o < 0 means that, in the synthesized view, the current pixel is occluded by a pixel to its left; if the distortion of the current pixel's depth value shifts its position in the synthesized view to the left, but not beyond the left of the occluding pixel, that depth distortion does not affect the distortion of the synthesized view. Similarly, T_o > 0 means that, in the synthesized view, the current pixel is occluded by a pixel to its right.
When the current pixel occludes other pixels, the application example of the present invention requires that the current pixel produce no disparity distortion at all.
In summary, the application example of the present invention defines the thresholds T_l and T_r as follows (T_l indicates by how much, at most, the distorted horizontal disparity may shift to the left relative to the original horizontal disparity, and T_r by how much, at most, it may shift to the right):
and defines the condition:
$C = \{\,R_N(v) - R_N(v') \le T_l\ \text{and}\ R_N(v') - R_N(v) \le T_r\,\}$, where C specifies the range within which the distorted horizontal disparity does not cause any distortion of the synthesized view, v denotes the original horizontal disparity, and v' denotes the distorted horizontal disparity.
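A sketch of the per-row check of condition C, assuming the per-pixel thresholds T_l and T_r have already been derived from the smoothness and occlusion analysis above; all names are illustrative:

```python
def row_satisfies_condition_c(orig_disp, dist_disp, t_left, t_right, n=4):
    """Check condition C for every pixel of one row.

    orig_disp, dist_disp : per-pixel original / distorted horizontal disparities (v, v')
    t_left, t_right      : per-pixel thresholds T_l, T_r (assumed to be precomputed)
    Returns True only if no pixel of the row can change the synthesized view,
    i.e. the row may be skipped during view synthesis."""
    for v, v_prime, t_l, t_r in zip(orig_disp, dist_disp, t_left, t_right):
        r_v = round(v * n) / n           # R_N(v)
        r_vp = round(v_prime * n) / n    # R_N(v')
        if r_v - r_vp > t_l or r_vp - r_v > t_r:
            return False
    return True
```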
Fig. 7 is a flow chart of the view synthesis method when encoding depth information in an application example of the present invention, which includes the following steps:
701. Initialize j = 0;
702. Take the j-th row of pixels S_j of the current block;
703. Judge whether d ∈ C; if yes, the distortion of the depth values of the current row can be considered to have no effect on the distortion of the synthesized view, so go to step 704; if no, go to step 706;
704. Skip the j-th row and do not use the pixels of the j-th row for view synthesis;
705. Assign j + 1 to j, i.e. j = j + 1, and go to step 702;
706. Perform view synthesis using the j-th row of pixels S_j;
707. Assign j + 1 to j, i.e. j = j + 1, and go to step 702.
It can thus be seen that, when a block of distorted depth values is to be used for view synthesis during encoding, each row of depth values is checked to see whether all of them satisfy the condition of set C. If they do, that row of depth values is not used for view synthesis; otherwise, view synthesis proceeds according to the normal flow.
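A sketch of the row-skipping loop of steps 701-707; `can_skip_row` stands for the condition-C check sketched above and `synthesize_row` for the renderer's normal per-row warping, both hypothetical placeholders:

```python
def synthesize_block(rows, can_skip_row, synthesize_row):
    """Steps 701-707 of Fig. 7: walk the rows S_j of the current depth block and
    warp only those rows whose distorted depth values can change the synthesized view.

    rows           : iterable of per-row data for the current block
    can_skip_row   : predicate implementing the condition-C check (see above)
    synthesize_row : the renderer's normal per-row warping step"""
    for j, row in enumerate(rows):
        if can_skip_row(row):        # step 703: does d ∈ C hold for every pixel of the row?
            continue                 # step 704: skip row j
        synthesize_row(j, row)       # step 706: normal view synthesis for row j
```

Skipping a row only removes warping work; it does not change the synthesized result, because the skipped rows are exactly those whose depth distortion is invisible after rounding.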
The technical field to which this technical solution applies is the view synthesis process performed while encoding depth information. When computing the distortion while encoding depth information, existing methods use every row of pixels of every depth image coding unit for view synthesis. In the scheme proposed by this technical solution, it is first judged whether each row of the depth image coding unit affects the view synthesis; if it does not, the row is skipped during view synthesis, which reduces the encoding time complexity.
Experiments show that, for 1024x768 sequences, the algorithm proposed by this technical solution reduces the total time complexity at the encoder by about 4% compared with the previous algorithm, without any loss of coding performance.
A person skilled in the art will further appreciate that the various illustrative logical blocks, units and steps listed in the embodiments of the present invention may be implemented by electronic hardware, computer software, or a combination of the two. To clearly demonstrate the interchangeability of hardware and software, the various illustrative components, units and steps above have been described generally in terms of their functions. Whether such functions are implemented by hardware or software depends on the specific application and on the design requirements of the overall system. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementations should not be regarded as going beyond the protection scope of the embodiments of the present invention.
The various illustrative logic blocks or units described in the embodiments of the present invention may be implemented or operated by a general-purpose processor, a digital signal processor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination of the above designed to perform the described functions. The general-purpose processor may be a microprocessor; alternatively, it may be any conventional processor, controller, microcontroller or state machine. The processor may also be implemented by a combination of computing devices, for example a digital signal processor and a microprocessor, multiple microprocessors, one or more microprocessors together with a digital signal processor core, or any other similar configuration.
The steps of the method or algorithm described in the embodiments of the present invention may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may be stored in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. For example, the storage medium may be connected to the processor so that the processor can read information from, and write information to, the storage medium. Alternatively, the storage medium may be integrated into the processor. The processor and the storage medium may be arranged in an ASIC, and the ASIC may be arranged in a user terminal. Alternatively, the processor and the storage medium may be arranged in different components of the user terminal.
In one or more exemplary designs, the above functions described in the embodiments of the present invention may be implemented in hardware, software, firmware, or any combination of the three. If implemented in software, the functions may be stored on, or transmitted as one or more instructions or code over, a computer-readable medium. Computer-readable media include computer storage media and communication media that facilitate the transfer of a computer program from one place to another. A storage medium may be any available medium that a general-purpose or special-purpose computer can access. For example, such computer-readable media may include, but are not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store program code in the form of instructions or data structures and that can be read by a general-purpose or special-purpose computer, or by a general-purpose or special-purpose processor. In addition, any connection may properly be termed a computer-readable medium; for example, software transmitted from a website, server or other remote source over a coaxial cable, optical fiber cable, twisted pair or digital subscriber line (DSL), or transmitted wirelessly by, for example, infrared, radio or microwave, is also included in the definition of computer-readable medium. The terms disk and disc include compact discs, laser discs, optical discs, DVDs, floppy disks and Blu-ray discs; disks usually reproduce data magnetically, whereas discs usually reproduce data optically with lasers. Combinations of the above may also be included within computer-readable media.
The specific embodiments described above further explain in detail the objectives, technical solutions and beneficial effects of the present invention. It should be understood that the above are only specific embodiments of the present invention and are not intended to limit its protection scope; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210226046.6A CN103517052B (en) | 2012-06-29 | 2012-06-29 | Visual point synthesizing method, device and encoder during a kind of coding depth information |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210226046.6A CN103517052B (en) | 2012-06-29 | 2012-06-29 | Visual point synthesizing method, device and encoder during a kind of coding depth information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103517052A CN103517052A (en) | 2014-01-15 |
CN103517052B true CN103517052B (en) | 2017-09-26 |
Family
ID=49898974
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210226046.6A Expired - Fee Related CN103517052B (en) | 2012-06-29 | 2012-06-29 | Visual point synthesizing method, device and encoder during a kind of coding depth information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103517052B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1543200A (en) * | 2003-04-22 | 2004-11-03 | | Surveillance device composed of combined cameras |
CN102469314A (en) * | 2010-11-08 | 2012-05-23 | 索尼公司 | In-loop contrast enhancement for improved motion estimation |
CN102510500A (en) * | 2011-10-14 | 2012-06-20 | 北京航空航天大学 | Multi-view video error concealing method based on depth information |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100517517B1 (en) * | 2004-02-20 | 2005-09-28 | 삼성전자주식회사 | Method for reconstructing intermediate video and 3D display using thereof |
US20070009034A1 (en) * | 2005-07-05 | 2007-01-11 | Jarno Tulkki | Apparatuses, computer program product, and method for digital image processing |
- 2012-06-29: CN application CN201210226046.6A filed; granted as patent CN103517052B (status: Expired - Fee Related)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1543200A (en) * | 2003-04-22 | 2004-11-03 | | Surveillance device composed of combined cameras |
CN102469314A (en) * | 2010-11-08 | 2012-05-23 | 索尼公司 | In-loop contrast enhancement for improved motion estimation |
CN102510500A (en) * | 2011-10-14 | 2012-06-20 | 北京航空航天大学 | Multi-view video error concealing method based on depth information |
Also Published As
Publication number | Publication date |
---|---|
CN103517052A (en) | 2014-01-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5996013B2 (en) | Method, apparatus and computer program product for parallax map estimation of stereoscopic images | |
CN104469395B (en) | Image transfer method and device | |
CN104065946B (en) | Hole Filling Method Based on Image Sequence | |
CN103051915B (en) | Manufacture method and manufacture device for interactive three-dimensional video key frame | |
CN101287142A (en) | Method of Converting Plane Video to Stereo Video Based on Two-way Tracking and Feature Point Correction | |
CN114885144B (en) | High frame rate 3D video generation method and device based on data fusion | |
US10931968B2 (en) | Method and apparatus for encoding or decoding video content including regions having looping videos of different loop lengths | |
CN118985142A (en) | Method, device and system for processing audio scene for audio rendering | |
CN107592538B (en) | A method of reducing stereoscopic video depth map encoder complexity | |
CN116485863A (en) | Depth image video generation method and device based on data fusion | |
US12067753B2 (en) | 2D UV atlas sampling based methods for dynamic mesh compression | |
CN108961196A (en) | A kind of 3D based on figure watches the conspicuousness fusion method of point prediction attentively | |
CN115035172B (en) | Depth estimation method and system based on confidence grading and inter-level fusion enhancement | |
US8879826B2 (en) | Method, system and computer program product for switching between 2D and 3D coding of a video sequence of images | |
WO2021209044A1 (en) | Multimedia data transmission and reception methods, system, processor, and player | |
CN103517052B (en) | Visual point synthesizing method, device and encoder during a kind of coding depth information | |
US11240512B2 (en) | Intra-prediction for video coding using perspective information | |
CN104168482B (en) | A kind of video coding-decoding method and device | |
CN115861401B (en) | A binocular and point cloud fusion depth restoration method, device and medium | |
CN106780432A (en) | A kind of objective evaluation method for quality of stereo images based on sparse features similarity | |
CN103379348B (en) | A View Synthesis Method, Device, and Encoder When Encoding Depth Information | |
CN110599428B (en) | Optical Flow Estimation Heterogeneous Hybrid Network and Its Embedding Method | |
US20240078713A1 (en) | Texture coordinate prediction in mesh compression | |
CN109600600B (en) | Encoder, encoding method, and storage method and format of three-layer expression relating to depth map conversion | |
US20230334712A1 (en) | Chart based mesh compression |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | |
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20170926; Termination date: 20210629 |