
CN105491390B - Intra-frame prediction method in hybrid video coding standard - Google Patents


Info

Publication number
CN105491390B
CN105491390B (application CN201510861669.4A)
Authority
CN
China
Prior art keywords
block
mode
pattern
prediction
coding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510861669.4A
Other languages
Chinese (zh)
Other versions
CN105491390A (en)
Inventor
范晓鹏
张涛
赵德斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology Shenzhen
Original Assignee
Harbin Institute of Technology Shenzhen
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology Shenzhen filed Critical Harbin Institute of Technology Shenzhen
Priority to CN201510861669.4A priority Critical patent/CN105491390B/en
Publication of CN105491390A publication Critical patent/CN105491390A/en
Application granted granted Critical
Publication of CN105491390B publication Critical patent/CN105491390B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

An intra-frame prediction method in a hybrid video coding standard, belonging to the field of video coding. The object of the invention is to handle the complex blocks that occur in video sequences, such as blocks blurred by object or camera motion and blocks containing multiple directions, by providing an intra-frame prediction method for a hybrid video coding standard that further improves coding performance. The method uses two different prediction modes to obtain two different prediction values, and a new prediction of the current coding block is obtained by weighting these two values. The intra coding modes of several neighboring coded blocks around the current coding block are obtained and one of them is selected as mode one; based on mode one, another intra mode is selected as mode two. The prediction synthesized from the two different prediction modes can handle complex blocks in a video sequence, so the coding efficiency is further improved.

Description

Intra-Frame Prediction Method in a Hybrid Video Coding Standard

Technical Field

The present invention relates to an intra-frame prediction method in a hybrid video coding standard and belongs to the field of video coding.

Background

As requirements on video display quality rise, new applications such as high-definition and ultra-high-definition video have emerged. With high-resolution, high-quality video becoming ever more widespread, improving video compression efficiency has become crucial. The digitization of images and video produces a large amount of data redundancy, which is what makes video compression possible. In general, this redundancy includes at least spatial redundancy, temporal redundancy, and entropy redundancy. Spatial redundancy is usually removed by prediction, i.e., intra-frame prediction coding. Its basic idea is to use the already reconstructed pixel values around the current coding block to generate a prediction of the current block through direction-based interpolation. Once the prediction block is obtained, the difference between the current block and the prediction block, i.e., the residual block, is easier to code than the original block, so intra prediction effectively reduces the spatial redundancy in video coding. Because intra prediction in existing video coding standards relies on single-direction interpolation, it cannot predict complex blocks well.

To handle complex coding blocks in video sequences, Y. Ye and M. Karczewicz, "Improved H.264 intra coding based on bi-directional intra prediction, directional transform, and adaptive coefficient scanning," in Proc. IEEE Int. Conf. Image Process., Oct. 2008, pp. 2116-2119, proposed a bidirectional intra-frame prediction coding method. Based on the nine prediction modes of the H.264/AVC video coding standard, it selects a certain number of combinations of two modes; for each combination, an offline-trained weight table is used to average the predictions produced by the two modes. The coding performance of that approach is still limited.

Summary of the Invention

The object of the present invention is to handle complex blocks in a video sequence effectively by providing an intra-frame prediction method for a hybrid video coding standard, thereby further improving video coding performance.

The technical solution adopted by the present invention to solve the above technical problem is as follows:

An intra-frame prediction method in a hybrid video coding standard, the prediction method being used to describe complex coding blocks existing in a video sequence, is implemented as follows:

Step 1: Obtain the intra coding modes of several adjacent coded blocks around the current coding block. The size of the current coding block is W*H, where W is the width and H is the height of the current coding block; the adjacent coded blocks are referred to as neighboring coded blocks.

Step 2: Obtain the set of mode-one candidates of the current coding block from the intra coding modes of the neighboring coded blocks obtained in Step 1.

Step 3: For each mode one in the mode-one set, obtain the corresponding mode two: select as mode two one of the two modes whose directions are closest to mode one, or select as mode two the mode that yields the smallest prediction distortion when combined with mode one.

From the mode-one set obtained in Step 2, obtain for each mode in the set another coding mode of the current coding block, i.e., the mode-two set; combining the mode-one set and the mode-two set yields a set of two-tuples, each containing a related mode one and mode two.

Step 4: For each mode combination in the two-tuple set produced in Step 3, interpolate the neighboring pixels around the current block to obtain two different prediction blocks; a bidirectional prediction result of the current coding block is the weighted average of these two prediction blocks. Select the optimal combination of mode one and mode two to predict the current block.

Step 5: Select the optimal prediction mode separately for the luma block and the chroma block of the coding unit.

Step 6: Encode the coding modes of the luma block and the chroma block of the coding unit separately.

In Step 1, the neighboring coded blocks are the already-coded intra blocks to the left, above, below-left, and above-right of the current coding block.

The principle for obtaining the mode-one set of the current coding block in Step 2 is:

Select as mode one the modes used most frequently by the neighboring coded blocks obtained in Step 1; or select the modes of the neighboring coded blocks to the left of and above the current coding block as mode one; or select the mode of any one of these neighboring blocks as mode one; or select a subset of the modes of these neighboring blocks as mode one; or assign a weight to each neighboring coded block, accumulate the weights of the neighboring coded blocks that share the same intra coding mode, and select the modes with the largest accumulated weights as mode one.
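
As an illustration of the last option, the following minimal sketch (hypothetical Python, not part of the patent; the neighbor list, weights, and mode numbering are assumptions) accumulates per-neighbor weights by intra mode and returns the highest-scoring modes as the mode-one set:

```python
from collections import defaultdict

def mode_one_set(neighbors, max_modes=2):
    """Pick mode-one candidates by weighted voting over neighboring coded blocks.

    neighbors: list of (intra_mode, weight) pairs, one per already-coded
    neighboring block (e.g. left, above, below-left, above-right).
    Returns up to max_modes distinct modes with the largest accumulated weight.
    """
    votes = defaultdict(int)
    for mode, weight in neighbors:
        votes[mode] += weight          # accumulate weights of identical modes
    # sort modes by accumulated weight, highest first
    ranked = sorted(votes, key=votes.get, reverse=True)
    return ranked[:max_modes]

# Example: left block uses mode 10 (weight 2), above uses mode 26 (weight 2),
# above-right also uses mode 10 (weight 1) -> mode-one set = [10, 26]
print(mode_one_set([(10, 2), (26, 2), (10, 1)]))
```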

In Step 3, for each mode one, mode two of the current coding block is obtained as follows:

Select the two coding modes whose directions are closest to the current mode one as mode-two candidates. Specifically, with mode one denoted mode1: if mode1 is between 3 and 33, the mode-two candidates are mode1-1 and mode1+1; if mode1 is 2 or 34, the mode-two candidates are 3 and 33; if mode1 is the DC or PLANAR mode, the mode-two candidates are 10 (the horizontal mode) and 26 (the vertical mode).

Alternatively, select as mode two the mode that yields the smallest prediction distortion when combined with mode one. This is implemented as follows: for each mode one, obtain the prediction values of all remaining intra coding modes, compute the weighted average of the mode-one prediction with the prediction of each remaining intra mode, and select as mode two the coding mode whose weighted average has the smallest distortion with respect to the current coding block. The distortion criterion between the coding block and the prediction block may be the minimum mean square error, the minimum Hadamard error, or a rate-distortion optimization criterion.
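
A minimal sketch of the direction-nearest rule above (hypothetical Python; the HEVC-style numbering 0 = PLANAR, 1 = DC, 2-34 = angular modes is an assumption consistent with the values 10 and 26 given in the text):

```python
PLANAR, DC = 0, 1
HORIZONTAL, VERTICAL = 10, 26

def mode_two_candidates(mode1):
    """Return the two mode-two candidates closest in direction to mode1."""
    if mode1 in (PLANAR, DC):
        return (HORIZONTAL, VERTICAL)  # non-angular mode one falls back to H/V
    if mode1 in (2, 34):
        return (3, 33)                 # the two ends of the angular range
    return (mode1 - 1, mode1 + 1)      # ordinary angular mode: nearest neighbours

assert mode_two_candidates(3) == (2, 4)     # matches the example of Fig. 2
assert mode_two_candidates(DC) == (10, 26)
```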

In Step 4, when a coding test is performed on each coding mode pair in the two-tuple set, the optimal mode pair is selected to predict the current block; the optimal mode pair may be selected using the minimum mean square error, the minimum Hadamard error, or a rate-distortion optimization criterion.

In Step 4, the predictions produced by the two different prediction modes are weighted as follows: the prediction blocks of the different prediction modes are given different weights. The weighted average may give the two different prediction blocks the same weight, i.e., simply average them to obtain the prediction block of the current coding block; or assign different weights according to the importance of the different prediction modes; or assign different weights according to the accuracy of the predictions generated by the different prediction modes; or define a set of likely weights and obtain the best weights by an exhaustive search.
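
For illustration, a sketch of this weighting step (hypothetical Python with NumPy, not the patent's implementation; the equal-weight case reduces to a plain average, and the small candidate weight set is an assumption):

```python
import numpy as np

def combine_predictions(pred1, pred2, w1=0.5):
    """Weighted combination of two prediction blocks (weights sum to 1)."""
    return w1 * pred1 + (1.0 - w1) * pred2

def best_weight(original, pred1, pred2, candidates=(0.25, 0.5, 0.75)):
    """Search a small set of candidate weights and keep the one with minimum SSE."""
    errors = {w: np.sum((original - combine_predictions(pred1, pred2, w)) ** 2)
              for w in candidates}
    return min(errors, key=errors.get)
```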

In Step 5, the best prediction mode for the luma block and the chroma block of the coding unit is selected as follows: for the luma block, the optimal prediction mode is chosen from the original unidirectional prediction modes and the bidirectional prediction mode, the selection criterion being the minimum rate-distortion criterion; for the chroma block, if its corresponding luma block selects bidirectional prediction as the best prediction mode, the optimal prediction mode of the current chroma block is the pair of prediction modes of its corresponding luma block.

In Step 6, the coding modes of the luma block and the chroma block of the coding unit are encoded separately, as follows:

If the current intra coding mode is bidirectional prediction, the two coding modes of the bidirectional prediction, i.e., mode one and mode two, need to be encoded; mode one comes from a neighboring coded block, so the index of the selected neighboring block is encoded directly.

For the luma block, when mode one is taken from the block to the left of or above the current block, a one-bit symbol is sufficient to indicate whether the selected mode comes from the left or from above; mode two is derived from mode one, and likewise a one-bit symbol can indicate which of the two modes adjacent to mode one was selected.

For the chroma block, if the current luma block selects bidirectional prediction, the prediction mode of the chroma block is set to the bidirectional prediction mode and its two prediction modes come directly from the luma block, so no prediction mode needs to be encoded; if the current luma block selects the original unidirectional prediction, the chroma block selects from the original five prediction modes.

The beneficial effects of the present invention are:

The prediction method of the present invention can effectively handle complex blocks in video sequences, such as blocks blurred by object or camera motion and blocks containing multiple directions. The invention uses the intra mode information of neighboring coded blocks to obtain two modes for the current coding block; bidirectional prediction based on these two modes can predict complex blocks in a video sequence, such as blocks with multiple directions and blocks blurred by object or camera motion, so intra prediction performance is improved and coding efficiency is further increased.

In this intra prediction method, two different prediction modes are used to obtain two different prediction values. A new prediction of the current coding block is obtained by weighting these two predictions. The intra coding modes of several neighboring coded blocks around the current coding block are obtained and one of them is selected as mode one; based on mode one, another intra mode is selected as mode two. The prediction synthesized from the two different prediction modes can handle complex blocks in a video sequence, so the coding efficiency is further improved.

Unlike previously proposed methods, the present scheme does not need to train or store a weight table for bidirectional prediction. The two modes of its bidirectional prediction are both derived from blocks neighboring the current block, so the bit overhead for encoding these two modes is small. In addition, the number of mode combinations that need to be tested is small, so the encoding complexity is low.

Brief Description of the Drawings

Fig. 1 shows the positional relationship among the current block (C), the left neighboring block (L), and the above neighboring block (A) in Embodiment 2 of the present invention.

Fig. 2 shows the relationship between coding mode one of the current coding block and its candidate mode two in Embodiment 3 of the present invention. In the figure, mode one is 3 and mode two is the angularly nearest mode, 2 or 4.

Detailed Description of the Embodiments

Embodiment 1: The intra-frame prediction method in a hybrid video coding standard described in this embodiment is used to predict complex coding blocks in a video sequence. The prediction method builds on the direction-based intra prediction of the original coding standard (i.e., it is implemented on top of the single-direction intra prediction algorithm).

The prediction method is called bidirectional prediction based on neighboring coding modes, or bidirectional prediction for short. The bidirectional prediction is composed of two coding modes, namely mode one and mode two. The prediction method is implemented as follows:

Step 1: Obtain the intra coding modes of several adjacent coded blocks around the current coding block. The size of the current coding block is W*H, where W is the width and H is the height of the current coding block; the adjacent coded blocks are referred to as neighboring coded blocks.

Step 2: Obtain the set of mode-one candidates of the current coding block from the intra coding modes of the neighboring coded blocks obtained in Step 1.

Step 3: For each mode one in the mode-one set, obtain the corresponding mode two: select as mode two one of the two modes whose directions are closest to mode one, or select as mode two the mode that yields the smallest prediction distortion when combined with mode one. From the mode-one set obtained in Step 2, obtain for each mode in the set another coding mode of the current coding block, i.e., the mode-two set; combining the mode-one set and the mode-two set yields a set of two-tuples, each containing a related mode one and mode two.

Step 4: For each mode combination in the two-tuple set produced in Step 3, interpolate the neighboring pixels around the current block to obtain two different prediction blocks; a bidirectional prediction result of the current coding block is the weighted average of these two prediction blocks. Select the optimal combination of mode one and mode two to predict the current block.

Step 5: Select the optimal prediction mode separately for the luma block and the chroma block of the coding unit.

Step 6: Encode the coding modes of the luma block and the chroma block of the coding unit separately.

Embodiment 2: The intra-frame prediction method in the hybrid video coding standard described in this embodiment is characterized in that:

In Step 1, the neighboring coded blocks are the already-coded intra blocks to the left, above, below-left, and above-right of the current coding block; as shown in Fig. 1, the left block L and the above block A of the current block may be selected.

Selecting a larger number of neighboring blocks to obtain intra prediction mode one of the current coding block is also supported; besides the neighbors to the left, above, below-left, and above-right, neighboring blocks at other positions are supported as well.

The principle for obtaining the mode-one set of the current coding block in Step 2 is:

Select as mode one the modes used most frequently by the neighboring coded blocks obtained in Step 1; or select the modes of the neighboring coded blocks to the left of and above the current coding block as mode one; or select the mode of any one of these neighboring blocks as mode one; or select a subset of the modes of these neighboring blocks as mode one; or assign a weight to each neighboring coded block, accumulate the weights of the neighboring coded blocks that share the same intra coding mode, and select the modes with the largest accumulated weights as mode one. As in Fig. 1, the intra prediction modes of the left block L and the above block A of the current block may be selected as intra coding mode one of the current block.

The other steps are the same as in Embodiment 1.

Embodiment 3: As shown in Fig. 2, in the intra-frame prediction method in the hybrid video coding standard described in this embodiment, mode two of the current coding block is obtained in Step 3 as follows:

Select the two coding modes whose directions are closest to the current mode one as mode-two candidates. Specifically, if mode one (mode1) is between 3 and 33, the mode-two candidates are mode1-1 and mode1+1; if mode1 is 2 or 34, the mode-two candidates are 3 and 33; if mode1 is the DC or PLANAR mode, the mode-two candidates are 10 (the horizontal mode) and 26 (the vertical mode).

Alternatively, in Step 3, mode two of the current coding block is obtained as follows:

Select as mode two the mode that yields the smallest prediction distortion when combined with mode one: for each mode one, obtain the prediction values of all remaining intra coding modes, compute the weighted average of the mode-one prediction with the prediction of each remaining intra mode, and select as mode two the mode whose weighted average has the smallest distortion with respect to the current coding block. The distortion criterion between the coding block and the prediction block may be the minimum mean square error, the minimum Hadamard error, or a rate-distortion optimization criterion.

The other steps are the same as in Embodiment 1 or 2.

Embodiment 4: In the intra-frame prediction method in the hybrid video coding standard described in this embodiment, in Step 4, when each coding mode pair in the two-tuple set is tested, the optimal mode pair is selected to predict the current block. The optimal mode pair may be selected using the minimum mean square error, the minimum Hadamard error, or a rate-distortion optimization criterion. The other steps are the same as in Embodiment 1, 2, or 3.

Embodiment 5: In the intra-frame prediction method in the hybrid video coding standard described in this embodiment, in Step 4, the predictions produced by the two different prediction modes are weighted as follows: the prediction blocks of the different prediction modes are given different weights. The weighted average may give the two different prediction blocks the same weight, i.e., simply average them to obtain the prediction block of the current coding block; or assign different weights according to the importance of the different prediction modes; or assign different weights according to the accuracy of the predictions generated by the different prediction modes; or define a set of likely weights and obtain the best weights by an exhaustive search. The other steps are the same as in Embodiment 1, 2, 3, or 4.

Embodiment 6: In the intra-frame prediction method in the hybrid video coding standard described in this embodiment, in Step 5, the best prediction mode for the luma block and the chroma block of the coding unit is selected as follows: for the luma block, the optimal prediction mode is chosen from the original unidirectional prediction modes and the bidirectional prediction mode of the present invention, the selection criterion being the minimum rate-distortion criterion; for the chroma block, if its corresponding luma block selects bidirectional prediction as the best prediction mode, the optimal prediction mode of the current chroma block is the pair of prediction modes of its corresponding luma block. The other steps are the same as in Embodiment 1, 2, 3, 4, or 5.

Embodiment 7: In the intra-frame prediction method in the hybrid video coding standard described in this embodiment, the intra mode coding method is: if the current intra coding mode is bidirectional prediction, the two coding modes of the bidirectional prediction, i.e., mode one and mode two, need to be encoded. Mode one comes from a neighboring coded block, so the index of the selected neighboring block is encoded directly. The other steps are the same as in Embodiment 1, 2, 3, 4, 5, or 6.

Embodiment 8: In this embodiment, when mode one is taken from the block to the left of or above the current block, a one-bit symbol can indicate whether the selected mode comes from the left or from above. Mode two is derived from mode one; likewise, a one-bit symbol can indicate which of the two modes adjacent to mode one was selected. For the chroma block, if the current luma block selects bidirectional prediction, the prediction mode of the chroma block is set to the bidirectional prediction mode and its two prediction modes come directly from the luma block, so no prediction mode needs to be encoded. If the current luma block selects the original unidirectional prediction, the chroma block selects from the original five prediction modes. The other steps are the same as in Embodiment 1, 2, 3, 4, 5, 6, or 7.

Example

Example 1:

The concrete implementation steps of the intra prediction method in the hybrid video coding standard are as follows:

Step 1: Obtain the coding modes modeL and modeA of the left neighboring block and the above neighboring block of the current coding block (of size W*H, where W is the width and H is the height of the current coding block).

Step 2: Obtain mode one of the current coding block from the coding modes of the neighboring coded blocks obtained in Step 1. If modeL equals modeA, the mode-one set of the current coding block is {modeL}; if modeL does not equal modeA, the mode-one set of the current coding block is {modeL, modeA}.
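
A direct transcription of this rule as a sketch (hypothetical Python, not the patent's implementation; modeL and modeA are assumed to already hold the left and above neighbors' intra modes):

```python
def mode_one_set_from_neighbors(modeL, modeA):
    """Mode-one set: {modeL} if both neighbors use the same mode, else {modeL, modeA}."""
    return [modeL] if modeL == modeA else [modeL, modeA]

print(mode_one_set_from_neighbors(10, 10))   # -> [10]
print(mode_one_set_from_neighbors(10, 26))   # -> [10, 26]
```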

Step 3: Obtain mode two of the current coding block from the mode-one set obtained in Step 2. For each mode modei in the mode-one set: if modei is between 3 and 33, the mode-two candidates are modei-1 and modei+1; if modei is 2 or 34, the mode-two candidates are 3 and 33; if modei is the DC or PLANAR mode, the mode-two candidates are 10 (the horizontal mode) and 26 (the vertical mode). Selecting the corresponding mode-two set for each mode in the mode-one set yields a set of two-tuples combining mode one and mode two, where each element of the set consists of a corresponding mode one and mode two, i.e., (mode1, mode2).

Step 4: For each mode combination (mode1, mode2) in the two-tuple set produced in Step 3, interpolate the neighboring pixels around the current block to obtain two different prediction blocks, pred1 and pred2. A bidirectional prediction result pred of the current coding block is the average of these two prediction blocks, i.e., pred = (pred1 + pred2 + 1) >> 1, and rate-distortion optimization is used to select the optimal combination of mode one and mode two to predict the current block.
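
A sketch of this averaging and selection step (hypothetical Python with NumPy; the interpolation that produces pred1 and pred2 from the reference pixels is not shown, and the rate term in the cost is a crude placeholder, not the patent's actual rate model):

```python
import numpy as np

def bi_predict(pred1, pred2):
    """Integer average with rounding: pred = (pred1 + pred2 + 1) >> 1."""
    return (pred1.astype(np.int32) + pred2.astype(np.int32) + 1) >> 1

def pick_best_pair(block, candidates, lam=10.0, bits_per_pair=2):
    """Choose the (mode1, mode2) pair minimising a simple rate-distortion cost.

    candidates: dict mapping (mode1, mode2) -> (pred1, pred2) prediction blocks.
    """
    best_pair, best_cost = None, float("inf")
    for pair, (pred1, pred2) in candidates.items():
        pred = bi_predict(pred1, pred2)
        sse = float(np.sum((block.astype(np.int32) - pred) ** 2))  # distortion D
        cost = sse + lam * bits_per_pair                            # D + lambda * R
        if cost < best_cost:
            best_pair, best_cost = pair, cost
    return best_pair, best_cost
```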

Step 5: Select the optimal prediction mode separately for the luma block and the chroma block of the coding unit. For the luma block, the optimal prediction mode is chosen from the original unidirectional prediction modes and the bidirectional prediction mode of the present invention, the selection criterion being the minimum rate-distortion criterion. For the chroma block, if its corresponding luma block selects bidirectional prediction as the best prediction mode, the optimal prediction mode of the current chroma block is the pair of prediction modes of its corresponding luma block.

Step 6: Encode the coding modes of the luma block and the chroma block of the coding unit separately. If the current intra coding mode is bidirectional prediction, the two coding modes of the bidirectional prediction, i.e., mode one and mode two, need to be encoded. Mode one comes from a neighboring coded block, so the index of the selected neighboring block is encoded directly. For example, if mode one is taken from the block to the left of or above the current block, a one-bit symbol can indicate whether the selected mode comes from the left or from above. Mode two is derived from mode one; likewise, a one-bit symbol can indicate which of the two modes adjacent to mode one was selected. For the chroma block, if the current luma block selects bidirectional prediction, the prediction mode of the chroma block is set to the bidirectional prediction mode and its two prediction modes come directly from the luma block, so no prediction mode needs to be encoded. If the current luma block selects the original unidirectional prediction, the chroma block selects from the original five prediction modes.
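
The signalling described in this step can be illustrated with a small sketch (hypothetical Python; the flag names and bit layout are assumptions for illustration, not the patent's actual bitstream syntax):

```python
def encode_bi_mode_flags(mode1_from_left, mode2_is_minus_neighbor):
    """Two one-bit flags for a bi-predicted luma block.

    First bit: 1 if mode one was taken from the left neighbor, 0 if from above.
    Second bit: 1 if mode two is the 'mode1 - 1' neighbor, 0 if it is 'mode1 + 1'.
    A chroma block needs no extra bits when its luma block is bi-predicted,
    because its two modes are copied from the co-located luma block.
    """
    return [1 if mode1_from_left else 0,
            1 if mode2_is_minus_neighbor else 0]

# Example: mode one came from the above block, mode two is mode1 + 1 -> bits [0, 0]
print(encode_bi_mode_flags(mode1_from_left=False, mode2_is_minus_neighbor=False))
```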

The above is a further detailed description of the present invention in connection with specific preferred embodiments, and the concrete implementation of the present invention should not be regarded as limited to these descriptions. The scope of the invention is best determined by reference to the appended claims. For a person of ordinary skill in the art, simple deductions or substitutions made without departing from the concept of the present invention shall be deemed to fall within the scope of patent protection defined by the submitted claims.

Example 1 was implemented on VC-0.4 (a test model that adds several techniques to the HEVC test model HM12.0) and tested under the VC266 common test conditions; see VC266 Study Group, "Test condition and evaluation methodology," VC-02-N005, VC266 2nd Meeting, Suzhou, Mar. 2015.

The experimental results of Example 1 are shown in Table 1. Compared with VC-0.4, under the All Intra Main_HighBitrate (AI-HR) configuration the average BD-rate savings for the Y, U, and V components are 0.8%, 0.6%, and 1.1%, respectively; under the All Intra Main_LowBitrate (AI-LR) configuration the average BD-rate savings for the Y, U, and V components are 0.7%, 0.4%, and 0.6%, respectively. The BD-rate expresses the bitrate saving of one method over another at the same objective quality; see G. Bjøntegaard, "Calculation of average PSNR differences between RD-curves," ITU-T SG16 Q.6 Document VCEG-M33, Austin, US, April 2001.

Table 1. BD-rate performance of Example 1 relative to VC-0.4

Claims (8)

1. An intra-frame prediction method in a hybrid video coding standard, the prediction method being used to describe complex coding blocks existing in a video sequence, characterized in that the prediction method is implemented as follows:
Step 1: obtaining the intra coding modes of several adjacent coded blocks around the current coding block, the size of the current coding block being W*H, where W is the width of the current coding block and H is the height of the current coding block, the several adjacent coded blocks being referred to as neighboring coded blocks;
Step 2: obtaining a set of mode-one coding modes of the current coding block according to the intra coding modes of the neighboring coded blocks obtained in Step 1;
Step 3: obtaining, for each mode one in the mode-one set, a corresponding mode two: selecting as mode two one of the two modes whose directions are closest to mode one, or selecting as mode two the mode that yields the smallest prediction distortion when combined with mode one;
according to the mode-one set obtained in Step 2, obtaining for each mode in that set another coding mode of the current coding block, i.e., a mode-two set; merging the mode-one set and the mode-two set to obtain a set of two-tuples, each two-tuple containing a related mode one and mode two;
Step 4: for each mode combination in the two-tuple set produced in Step 3, interpolating the neighboring pixels around the current block to obtain two different prediction blocks, a bidirectional prediction result of the current coding block being the weighted average block of these two different prediction blocks; selecting the optimal combination of mode one and mode two to predict the current block;
Step 5: selecting the optimal prediction mode separately for the luma block and the chroma block of the coding unit;
Step 6: encoding the coding modes of the luma block and the chroma block of the coding unit separately.
2. The intra-frame prediction method in a hybrid video coding standard according to claim 1, characterized in that:
in Step 1, the neighboring coded blocks are the already-coded intra blocks to the left, above, below-left, and above-right of the current coding block;
the principle for obtaining the mode-one set of the current coding block in Step 2 is:
selecting as mode one the modes used most frequently by the neighboring coded blocks obtained in Step 1; or selecting the modes of the neighboring coded blocks to the left of and above the current coding block as mode one; or selecting the mode of any one of these neighboring blocks as mode one; or selecting a subset of the modes of these neighboring blocks as mode one; or assigning a weight to each neighboring coded block, accumulating the weights of the neighboring coded blocks having the same intra coding mode, and selecting the modes with the largest accumulated weights among the neighboring coded blocks obtained in Step 1 as mode one.
3. The intra-frame prediction method in a hybrid video coding standard according to claim 1, characterized in that in Step 3, mode two of the current coding block is obtained for each mode one as follows:
selecting the two coding modes whose directions are closest to the current mode one as mode-two candidates, the specific process being: with mode one denoted mode1, if mode1 is between 3 and 33, mode two is selected as mode1-1 and mode1+1; if the value of mode1 is 2 or 34, mode two is selected as 3 and 33; if the value of mode1 is the DC mode or the PLANAR mode, mode two is selected as 10 and 26;
or selecting as mode two the mode that yields the smallest prediction distortion when combined with mode one, the implementation being: for each mode one, obtaining the prediction values corresponding to all remaining intra coding modes, then computing the weighted average of the mode-one prediction with the prediction of each remaining intra mode, and selecting as mode two the coding mode whose weighted average has the smallest distortion with respect to the current coding block; the distortion criterion between the coding block and the prediction block may be the minimum mean square error, the minimum Hadamard error, or a rate-distortion optimization criterion.
4. The intra-frame prediction method in a hybrid video coding standard according to claim 1, characterized in that in Step 4, when a coding test is performed on each coding mode pair in the two-tuple set, the optimal mode pair is selected to predict the current block; the optimal mode pair may be selected using the minimum mean square error, the minimum Hadamard error, or a rate-distortion optimization criterion.
5. The intra-frame prediction method in a hybrid video coding standard according to claim 1, characterized in that in Step 4, the process of weighting the predictions produced by the two different prediction modes is: giving the prediction blocks of the different prediction modes different weights; the weighted average may give the two different prediction blocks the same weight, i.e., average them to obtain the prediction block of the current coding block; or assign different weights according to the importance of the different prediction modes, or assign different weights according to the accuracy of the predictions generated by the different prediction modes, or define a set of likely weights and obtain the best weights by search.
6. The intra-frame prediction method in a hybrid video coding standard according to claim 5, characterized in that in Step 5, the process of selecting the best prediction mode for the luma block and the chroma block of the coding unit is: for the luma block, the optimal prediction mode is chosen from the original unidirectional prediction modes and the bidirectional prediction mode, the selection criterion being the minimum rate-distortion criterion; and for the chroma block, if its corresponding luma block selects bidirectional prediction as the best prediction mode, the optimal prediction mode of the current chroma block is the pair of prediction modes of its corresponding luma block.
7. The intra-frame prediction method in a hybrid video coding standard according to claim 1 or 6, characterized in that:
in Step 6, the coding modes of the luma block and the chroma block of the coding unit are encoded separately, the specific process being:
if the current intra coding mode is bidirectional prediction, the two coding modes of the bidirectional prediction, i.e., mode one and mode two, are encoded; mode one comes from a neighboring coded block, and the index of the selected neighboring block is encoded directly.
8. The intra-frame prediction method in a hybrid video coding standard according to claim 7, characterized in that:
for the luma block, when mode one is taken from the block to the left of or above the current block, a one-bit symbol is used to indicate whether the selected mode comes from the left or from above; mode two is obtained based on mode one, and likewise a one-bit symbol can be used to indicate which of the modes adjacent to mode one was selected;
for the chroma block, if the current luma block selects bidirectional prediction, the prediction mode of the chroma block is set to the bidirectional prediction mode, its two prediction modes coming directly from the luma block, so that no prediction mode needs to be encoded; if the current luma block selects the original unidirectional prediction, the chroma block selects from the original five prediction modes.
CN201510861669.4A 2015-11-30 2015-11-30 Intra-frame prediction method in hybrid video coding standard Active CN105491390B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510861669.4A CN105491390B (en) 2015-11-30 2015-11-30 Intra-frame prediction method in hybrid video coding standard

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510861669.4A CN105491390B (en) 2015-11-30 2015-11-30 Intra-frame prediction method in hybrid video coding standard

Publications (2)

Publication Number Publication Date
CN105491390A CN105491390A (en) 2016-04-13
CN105491390B true CN105491390B (en) 2018-09-11

Family

ID=55678056

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510861669.4A Active CN105491390B (en) 2015-11-30 2015-11-30 Intra-frame prediction method in hybrid video coding standard

Country Status (1)

Country Link
CN (1) CN105491390B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105681808B (en) * 2016-03-16 2017-10-31 同济大学 A kind of high-speed decision method of SCC interframe encodes unit mode
US20190268611A1 (en) * 2018-02-26 2019-08-29 Mediatek Inc. Intelligent Mode Assignment In Video Coding
CN112449181B (en) * 2019-09-05 2022-04-26 杭州海康威视数字技术股份有限公司 Encoding and decoding method, device and equipment
CN113810687B (en) * 2019-09-23 2022-12-23 杭州海康威视数字技术股份有限公司 Encoding and decoding method, device and equipment
WO2021114100A1 (en) * 2019-12-10 2021-06-17 中国科学院深圳先进技术研究院 Intra-frame prediction method, video encoding and decoding methods, and related device
CN113099240B (en) * 2019-12-23 2022-05-31 杭州海康威视数字技术股份有限公司 Encoding and decoding method, device and equipment
US11582474B2 (en) * 2020-08-03 2023-02-14 Alibaba Group Holding Limited Systems and methods for bi-directional gradient correction
CN113794885B (en) * 2020-12-30 2022-12-23 杭州海康威视数字技术股份有限公司 Encoding and decoding method, device and equipment
CN118646877B (en) * 2024-08-15 2024-11-05 浙江大华技术股份有限公司 Video coding code rate adjusting method, device and image processing system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102790878A * 2011-12-07 2012-11-21 Beijing University of Posts and Telecommunications Coding mode choosing method and device for video coding
CN103248895A * 2013-05-14 2013-08-14 VeriSilicon Microelectronics (Beijing) Co., Ltd. Quick mode estimation method used for HEVC intra-frame coding
CN103997646A * 2014-05-13 2014-08-20 Beihang University Rapid intra-frame prediction mode selection method in high-definition video coding
WO2015055832A1 (en) * 2013-10-18 2015-04-23 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Multi-component picture or video coding concept

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102790878A * 2011-12-07 2012-11-21 Beijing University of Posts and Telecommunications Coding mode choosing method and device for video coding
CN103248895A * 2013-05-14 2013-08-14 VeriSilicon Microelectronics (Beijing) Co., Ltd. Quick mode estimation method used for HEVC intra-frame coding
WO2015055832A1 (en) * 2013-10-18 2015-04-23 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Multi-component picture or video coding concept
CN103997646A * 2014-05-13 2014-08-20 Beihang University Rapid intra-frame prediction mode selection method in high-definition video coding

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
An-Chao Tsai et al., "Intensity Gradient Technique for Efficient Intra-Prediction in H.264/AVC," IEEE Transactions on Circuits and Systems for Video Technology, vol. 18, no. 5, pp. 694-698, May 2008. *

Also Published As

Publication number Publication date
CN105491390A (en) 2016-04-13

Similar Documents

Publication Publication Date Title
CN105491390B (en) Intra-frame prediction method in hybrid video coding standard
CN104935941B (en) The method being decoded to intra prediction mode
JP6157676B2 (en) Video decoding device
US9967587B2 (en) Apparatus and method for encoding/decoding images
US10091526B2 (en) Method and apparatus for motion vector encoding/decoding using spatial division, and method and apparatus for image encoding/decoding using same
JP6218896B2 (en) Image decoding apparatus and image decoding method
KR101452860B1 (en) Method and apparatus for image encoding, and method and apparatus for image decoding
TWI665908B (en) Image decoding device, image decoding method, image encoding device, image encoding method, computer-readable recording medium
JP5554831B2 (en) Distortion weighting
CN118972572A (en) Video encoding/decoding method, device and recording medium storing bit stream therein
KR101989160B1 (en) Method and apparatus for image encoding
CN110062228A (en) 360 degree of video quick intraframe prediction algorithms based on WMSE
KR101607613B1 (en) Method and apparatus for image encoding, and method and apparatus for image decoding
KR101761278B1 (en) Method and apparatus for image decoding
KR20150045980A (en) Method and apparatus for image encoding, and method and apparatus for image decoding
KR101607614B1 (en) Method and apparatus for image encoding, and method and apparatus for image decoding
KR101606683B1 (en) Method and apparatus for image encoding, and method and apparatus for image decoding
KR101606853B1 (en) Method and apparatus for image encoding, and method and apparatus for image decoding
KR101607611B1 (en) Method and apparatus for image encoding, and method and apparatus for image decoding
Dong et al. A novel multiple description video coding based on data reuse
KR101886259B1 (en) Method and apparatus for image encoding, and computer-readable medium including encoded bitstream
JP6409400B2 (en) Video encoding apparatus, method and program
KR20150091288A (en) Method and apparatus for decoding video
KR20150034145A (en) Method and apparatus for decoding video
KR20150048098A (en) Method and apparatus for decoding video

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant