CN105208387A - HEVC intra-frame prediction mode fast selection method

Info

Publication number: CN105208387A (granted publication: CN105208387B)
Application number: CN201510675511.8A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: estimated, prediction mode, SAD, mode, texture
Inventors: 朱威, 张训华, 沈吉龙, 杨洋, 陈朋, 郑雅羽
Applicant and assignee: Zhejiang University of Technology ZJUT
Legal status: Active (granted)
Classification: Compression or coding systems of TV signals

Abstract

The invention relates to a fast selection method for HEVC intra-frame prediction modes, comprising the following steps: (1) input a PU to be estimated and establish the set of actually available intra prediction modes; (2) compute the sums of absolute differences between all pixels in the PU to be estimated and their spatially adjacent pixels in different directions; (3) determine the texture direction characteristic of the PU from these directional sums of absolute differences; (4) determine the rough-mode search range according to the texture direction characteristic; (5) build the rate-distortion optimization candidate mode set from the rough-mode search range and the set of actually available intra prediction modes; (6) select the best intra prediction mode. By exploiting the texture direction characteristic of the PU to be estimated, the method narrows the rough search range of prediction modes and thereby reduces the number of candidate modes that undergo rate-distortion optimization, so that the computational complexity of HEVC intra prediction mode selection is significantly reduced while good rate-distortion performance is maintained.

Description

A fast selection method for HEVC intra-frame prediction modes

Technical Field

The invention relates to the field of digital video coding, and in particular to a fast selection method for HEVC intra-frame prediction modes.

Background

With the rapid development of multimedia technology, video data of various resolutions (standard-definition, high-definition and ultra-high-definition) has emerged, and the transmission and storage of video data face great challenges. To meet the demands of video compression and transmission, the Joint Collaborative Team on Video Coding (JCT-VC), organized by ISO/IEC and ITU-T, developed the new-generation High Efficiency Video Coding standard (HEVC/H.265). At the same video quality, HEVC reduces the bitstream by about half compared with the previous-generation standard H.264 (see G. J. Sullivan, J.-R. Ohm, W.-J. Han, and T. Wiegand, "Overview of the High Efficiency Video Coding (HEVC) standard," IEEE Transactions on Circuits and Systems for Video Technology, vol. 22, no. 12, pp. 1649-1668, Dec. 2012); that is, coding efficiency roughly doubles, but computational complexity increases severalfold. Although HEVC adopts the hybrid coding framework of traditional video coding standards, it introduces new coding tools in many respects, such as quadtree partitioning of coding tree units (CTUs), multi-angle intra prediction modes, and inter prediction with multiple partition shapes. To encode pictures more flexibly, HEVC defines three kinds of partitioning units: the coding unit (CU), the prediction unit (PU) and the transform unit (TU). PU prediction in HEVC includes inter prediction and intra prediction; PUs in I frames and IDR frames use only intra prediction, while PUs in other frame types may use both intra and inter prediction. To improve the compression efficiency of intra prediction, HEVC predicts from the spatially adjacent reconstructed pixels around the PU and supports up to 35 intra prediction modes (see J. Lainema, F. Bossen, W.-J. Han, J. Min, and K. Ugur, "Intra coding of the HEVC standard," IEEE Transactions on Circuits and Systems for Video Technology, vol. 22, no. 12, pp. 1792-1801, Dec. 2012). Among all HEVC intra prediction modes, the Planar mode (number 0) and the DC mode (number 1) suit flat regions, while numbers 2 to 34 correspond to 33 angular prediction modes: mode 10 predicts in the horizontal rightward direction, mode 26 in the vertical downward direction, mode 18 along the diagonal toward the lower right, mode 2 along the diagonal toward the upper right, and mode 34 along the diagonal toward the lower left. In the HEVC test model HM, the intra prediction process first performs a rough mode decision (RMD), which screens prediction modes by computing the sum of absolute transformed differences (SATD) of the Hadamard-transformed PU residual: 8 prediction modes are selected as candidates for 4×4 and 8×8 PUs, and 3 prediction modes for 16×16, 32×32 and 64×64 PUs (see L. Zhao, L. Zhang, X. Zhao, S. Ma, D. Zhao, and W. Gao, "Further encoder improvement for intra mode decision," JCTVC-D283, Proceedings of the 4th JCT-VC meeting, pp. 1-4, Jan. 2011). Rate-distortion optimization (RDO) is then applied (see T. Wiegand, H. Schwarz, A. Joch, F. Kossentini, and G. J. Sullivan, "Rate-constrained coder control and comparison of video coding standards," IEEE Transactions on Circuits and Systems for Video Technology, 2003, 13(7): 688-703) to select, from the candidate modes, the mode with the smallest rate-distortion cost as the best intra prediction mode of the PU. HEVC intra prediction modes are richer and more diverse than those of H.264 and better suited to encoding high-resolution video, but this also increases the computational complexity of HEVC intra coding.

A number of fast selection methods for HEVC intra prediction modes already exist. In the RMD process, the patent with application number 201210138816.1 reduces the number of candidate prediction modes to 2 to 5 for PUs of size 4×4, 8×8, 16×16 and 32×32. Qi Meibin et al. proposed a fast HEVC intra mode selection method based on image texture direction and spatial correlation (see Qi Meibin, Zhu Guanghui, Yang Yanfang, Jiang Jianguo, "HEVC intra prediction mode selection using texture and spatial correlation," Journal of Image and Graphics, 2014, 19(8), 1119-1125). That method uses the PU texture direction obtained with the Sobel operator to build the candidate prediction mode list and adds the best intra prediction modes of spatially correlated neighboring PUs to the list. The patent with application number 201410842187.X provides an acceleration method for HEVC intra prediction mode selection: if the PU exhibits texture consistency, the first mode selected by RMD is taken directly as the best intra prediction mode; otherwise the first two modes selected by RMD are divided into three different cases to speed up mode selection. Unlike the above methods that reduce the candidate prediction modes, the patent with application number 201410024635.5 proposes a SATD-based fast HEVC intra prediction method that lowers the computational complexity of HEVC intra prediction by terminating CU partitioning: a set of adaptive thresholds is derived from the SATD, and if the SATD of the current CU is smaller than the given threshold, CU partitioning stops. The patent with application number 201310445775.5, on the one hand, determines the CU partitioning according to texture complexity and, on the other hand, removes from the candidate prediction mode list those modes that are least likely to become the best mode according to the texture characteristics of the PU.

Summary of the Invention

In order to effectively reduce the computational complexity of HEVC intra prediction while maintaining the coding rate-distortion performance, the present invention provides a fast selection method for HEVC intra-frame prediction modes.

The technical solution adopted to solve the above technical problem is as follows:

A fast selection method for HEVC intra-frame prediction modes, the method comprising the following steps:

(1) Input a PU to be estimated and establish the set of actually available intra prediction modes:

According to the spatially adjacent reconstructed pixels that already exist around the PU to be estimated and the spatially adjacent reconstructed pixels required by each HEVC intra prediction mode, all actually available intra prediction modes are selected for the PU to be estimated and form the set Ω. That is, for each HEVC intra prediction mode, if the spatially adjacent reconstructed pixels that the mode needs for intra prediction already exist around the PU to be estimated, the mode is added to Ω.
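By way of illustration, the following C++ sketch shows one way such an availability check could be organized. The neighbour-availability flags and the grouping of mode numbers by required neighbours are assumptions made for this example and are not part of the HM interface.

```cpp
#include <set>

// Hypothetical availability flags for the reconstructed neighbours of the PU;
// in a real encoder these would come from the picture and CTU state.
struct NeighborAvailability {
    bool left;        // left column reconstructed
    bool above;       // above row reconstructed
    bool aboveLeft;   // above-left corner reconstructed
    bool aboveRight;  // above-right row reconstructed
    bool belowLeft;   // below-left column reconstructed
};

// Build the set Omega of actually available intra prediction modes (0..34):
// a mode is added only if the reconstructed neighbours it predicts from exist.
// The mapping of mode ranges to required neighbours is a simplifying assumption.
std::set<int> buildOmega(const NeighborAvailability& na) {
    std::set<int> omega;
    for (int mode = 0; mode <= 34; ++mode) {
        bool available = false;
        if (mode == 0 || mode == 1) {            // Planar / DC
            available = na.left || na.above;
        } else if (mode < 10) {                  // modes predicting from lower-left references
            available = na.left && na.belowLeft;
        } else if (mode == 10) {                 // horizontal
            available = na.left;
        } else if (mode < 18) {
            available = na.left && na.aboveLeft;
        } else if (mode == 18) {                 // diagonal from the upper-left corner
            available = na.aboveLeft && na.left && na.above;
        } else if (mode < 26) {
            available = na.above && na.aboveLeft;
        } else if (mode == 26) {                 // vertical
            available = na.above;
        } else {                                 // 27..34, upper-right references
            available = na.above && na.aboveRight;
        }
        if (available) omega.insert(mode);
    }
    return omega;
}
```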

(2) Compute the sums of absolute differences between all pixels in the PU to be estimated and their spatially adjacent pixels in different directions:

For the 33 angular prediction modes of HEVC intra prediction, the texture direction characteristic of the PU to be estimated is correlated with the angular prediction mode that the PU finally selects. Therefore, the texture direction characteristic of the PU to be estimated can be determined by computing the sums of absolute differences between the pixels in the PU and their spatially adjacent pixels, enabling fast selection of the intra prediction mode.

First, when the angular prediction mode numbered 18 is present in Ω, i.e. the PU to be estimated can use the angular prediction mode that predicts along the diagonal toward the lower right, the sum of absolute differences SAD_LU between all pixels in the PU and their upper-left adjacent pixels is computed, as shown in equation (1):

SAD_{LU} = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} \left| p(x,y) - p(x-1,y-1) \right| \qquad (1)

In equation (1), the size of the PU to be estimated is N×N (N = 4, 8, 16, 32, 64), and p(x,y) is the value of the pixel at coordinates (x,y) in the PU, where x is the horizontal coordinate and y is the vertical coordinate; within the PU their values are integers from 0 to N-1, and the pixel at (x-1,y-1) lies to the upper left of the pixel at (x,y). The pixel at (-1,-1) is the upper-left neighbor of the pixel at (0,0); the pixel at (0,0) is at the upper-left corner of the PU, the pixel at (0,N-1) is at its lower-left corner, the pixel at (N-1,0) is at its upper-right corner, and the pixel at (N-1,N-1) is at its lower-right corner. The pixels at (x,0) form the upper boundary of the PU, the pixels at (0,y) form its left boundary, the pixels at (N-1,y) form its right boundary, and the pixels at (x,N-1) form its lower boundary.

Similarly, when the angular prediction mode numbered 26 is present in Ω, i.e. the PU to be estimated can use the angular prediction mode that predicts in the vertical downward direction, the sum of absolute differences SAD_U between all pixels in the PU and their above adjacent pixels is computed, as shown in equation (2):

SAD_{U} = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} \left| p(x,y) - p(x,y-1) \right| \qquad (2)

In equation (2), the size of the PU to be estimated is N×N (N = 4, 8, 16, 32, 64), and p(x,y) is the value of the pixel at coordinates (x,y) in the PU, where x is the horizontal coordinate and y is the vertical coordinate; within the PU their values are integers from 0 to N-1, and the pixel at (x,y-1) lies directly above the pixel at (x,y).

When the angular prediction mode numbered 34 is present in Ω, i.e. the PU to be estimated can use the angular prediction mode that predicts along the diagonal toward the lower left, the sum of absolute differences SAD_RU between all pixels in the PU and their upper-right adjacent pixels is computed, as shown in equation (3):

SAD_{RU} = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} \left| p(x,y) - p(x+1,y-1) \right| \qquad (3)

In equation (3), the size of the PU to be estimated is N×N (N = 4, 8, 16, 32, 64), and p(x,y) is the value of the pixel at coordinates (x,y) in the PU, where x is the horizontal coordinate and y is the vertical coordinate; within the PU their values are integers from 0 to N-1, and the pixel at (x+1,y-1) lies to the upper right of the pixel at (x,y).

When the angular prediction mode numbered 10 is present in Ω, i.e. the PU to be estimated can use the angular prediction mode that predicts in the horizontal rightward direction, the sum of absolute differences SAD_L between all pixels in the PU and their left adjacent pixels is computed, as shown in equation (4):

SAD_{L} = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} \left| p(x,y) - p(x-1,y) \right| \qquad (4)

In equation (4), the size of the PU to be estimated is N×N (N = 4, 8, 16, 32, 64), and p(x,y) is the value of the pixel at coordinates (x,y) in the PU, where x is the horizontal coordinate and y is the vertical coordinate; within the PU their values are integers from 0 to N-1, and the pixel at (x-1,y) lies to the left of the pixel at (x,y).

When the angular prediction mode numbered 2 is present in Ω, i.e. the PU to be estimated can use the angular prediction mode that predicts along the diagonal toward the upper right, the sum of absolute differences SAD_LB between all pixels in the PU and their lower-left adjacent pixels is computed, as shown in equation (5):

SAD_{LB} = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} \left| p(x,y) - p(x-1,y+1) \right| \qquad (5)

In equation (5), the size of the PU to be estimated is N×N (N = 4, 8, 16, 32, 64), and p(x,y) is the value of the pixel at coordinates (x,y) in the PU, where x is the horizontal coordinate and y is the vertical coordinate; within the PU their values are integers from 0 to N-1, and the pixel at (x-1,y+1) lies to the lower left of the pixel at (x,y).
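A minimal C++ sketch of the five directional sums in equations (1) to (5) is given below. The pixel-access function PelFetch is an assumption for the example; it is taken to return PU-local pixels and, for negative or out-of-range coordinates, the spatially adjacent reconstructed pixels. In the method each sum is evaluated only when the corresponding angular mode is present in Ω; the sketch computes all five unconditionally for brevity.

```cpp
#include <cstdlib>

// pel(x, y) is assumed to return the pixel value at PU-local coordinates (x, y);
// negative x/y or x == N address the spatially adjacent reconstructed pixels.
using PelFetch = int (*)(int x, int y);

struct DirectionalSad {
    long lu = 0;  // eq. (1): upper-left neighbour,  used when mode 18 is in Omega
    long u  = 0;  // eq. (2): above neighbour,       used when mode 26 is in Omega
    long ru = 0;  // eq. (3): upper-right neighbour, used when mode 34 is in Omega
    long l  = 0;  // eq. (4): left neighbour,        used when mode 10 is in Omega
    long lb = 0;  // eq. (5): lower-left neighbour,  used when mode 2 is in Omega
};

// Sums of absolute differences between every pixel of an N x N PU and its
// spatial neighbour in five directions, following equations (1)-(5).
DirectionalSad computeDirectionalSad(PelFetch pel, int N) {
    DirectionalSad s;
    for (int y = 0; y < N; ++y) {
        for (int x = 0; x < N; ++x) {
            const int p = pel(x, y);
            s.lu += std::abs(p - pel(x - 1, y - 1));  // SAD_LU
            s.u  += std::abs(p - pel(x,     y - 1));  // SAD_U
            s.ru += std::abs(p - pel(x + 1, y - 1));  // SAD_RU
            s.l  += std::abs(p - pel(x - 1, y    ));  // SAD_L
            s.lb += std::abs(p - pel(x - 1, y + 1));  // SAD_LB
        }
    }
    return s;
}
```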

(3) Determine the texture direction characteristic of the PU to be estimated from the sums of absolute differences with spatially adjacent pixels in different directions:

First, a step selection is made according to the number of sums of absolute differences (SADs) obtained in step (2): if fewer than 3 SADs were computed in step (2), go to step (5); otherwise the SADs computed in step (2) are sorted in ascending order, and the three smallest SADs are denoted SAD_MIN-0, SAD_MIN-1 and SAD_MIN-2. The texture of the PU to be estimated is then classified according to these three smallest SADs, as shown in equation (6):

Class = \begin{cases} 0, & \text{if } SAD_{MIN-0} > \alpha \times SAD_{MIN-2} \\ 1, & \text{else if } SAD_{MIN-0} < \beta \times SAD_{MIN-1} \\ 2, & \text{else if } SAD_{MIN-1} < \gamma \times SAD_{MIN-2} \\ 3, & \text{otherwise} \end{cases} \qquad (6)

In equation (6), Class denotes the texture category of the PU to be estimated: a value of 0 means the texture of the PU is relatively flat, a value of 1 means the texture shows a clear horizontal, vertical or diagonal direction, a value of 2 means the texture lies along some other angular direction, and a value of 3 means the texture is complex. The parameters α, β and γ adjust the relationships among SAD_MIN-i (i = 0, 1, 2); α is set to 0.9-1.0, and β and γ are set to 0.6-1.0.

Next, the texture direction characteristic of the PU to be estimated is obtained from the texture category Class computed by equation (6) together with the relationships among the SADs, as shown in Table 1. In Table 1, the 0-degree direction means the horizontal rightward direction, the π/2 direction means the vertical downward direction, the π/4 direction means the 45-degree direction toward the lower right, the -π/4 direction means the 45-degree direction toward the upper right, and the 3π/4 direction means the 45-degree direction toward the lower left. When the texture category Class equals 0, the texture direction characteristic of the PU is recorded as relatively flat texture. When Class equals 1, the texture direction characteristic is determined by which of SAD_LU, SAD_U, SAD_RU, SAD_L and SAD_LB equals SAD_MIN-0: the texture is recorded as lying in the π/4 direction, the π/2 direction, the 3π/4 direction, the 0-degree direction or the -π/4 direction, respectively. When Class equals 2, the texture direction characteristic is determined by whether SAD_MIN-0 and SAD_MIN-1 are the SAD values of two adjacent directions among SAD_LU, SAD_U, SAD_RU, SAD_L and SAD_LB: (a) if SAD_LU equals SAD_MIN-0 and SAD_U equals SAD_MIN-1, or SAD_LU equals SAD_MIN-1 and SAD_U equals SAD_MIN-0, the texture is recorded as lying in the [π/4, π/2] range; (b) if SAD_U equals SAD_MIN-0 and SAD_RU equals SAD_MIN-1, or SAD_U equals SAD_MIN-1 and SAD_RU equals SAD_MIN-0, the texture is recorded as lying in the [π/2, 3π/4] range; (c) if SAD_LU equals SAD_MIN-0 and SAD_L equals SAD_MIN-1, or SAD_LU equals SAD_MIN-1 and SAD_L equals SAD_MIN-0, the texture is recorded as lying in the [0, π/4] range; (d) if SAD_L equals SAD_MIN-0 and SAD_LB equals SAD_MIN-1, or SAD_L equals SAD_MIN-1 and SAD_LB equals SAD_MIN-0, the texture is recorded as lying in the [-π/4, 0] range; (e) in all other cases, the texture direction characteristic is recorded as complex texture direction. When Class equals 3, the texture direction characteristic of the PU is recorded as complex texture direction.
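The sketch below illustrates the classification of equation (6) and the direction mapping just described (Table 1). The enumeration names and the representation of the SADs as (value, direction) pairs are assumptions for this example, and the default parameter values are simply the example values quoted in the detailed description.

```cpp
#include <algorithm>
#include <array>
#include <utility>

enum class Dir { LU, U, RU, L, LB };          // which directional SAD a value belongs to

enum class TextureDir {
    Flat, Deg0, QuarterPi, HalfPi, ThreeQuarterPi, MinusQuarterPi,
    Range0_QuarterPi, RangeQuarterPi_HalfPi, RangeHalfPi_ThreeQuarterPi,
    RangeMinusQuarterPi_0, Complex
};

// Classify the PU texture (eq. (6)) and map it to a texture direction as in Table 1.
// 'sads' holds the available (value, direction) pairs, 'count' of them; count >= 3 is
// assumed here, since with fewer SADs the method skips directly to step (5).
TextureDir classifyTexture(std::array<std::pair<long, Dir>, 5> sads, int count,
                           double alpha = 0.95, double beta = 0.9, double gamma = 0.9) {
    std::sort(sads.begin(), sads.begin() + count);               // ascending SAD
    const long s0 = sads[0].first, s1 = sads[1].first, s2 = sads[2].first;

    int cls;                                                      // eq. (6)
    if      (s0 > alpha * s2) cls = 0;
    else if (s0 < beta  * s1) cls = 1;
    else if (s1 < gamma * s2) cls = 2;
    else                      cls = 3;

    if (cls == 0) return TextureDir::Flat;
    if (cls == 3) return TextureDir::Complex;

    const Dir d0 = sads[0].second, d1 = sads[1].second;
    if (cls == 1) {                                               // one dominant direction
        switch (d0) {
            case Dir::LU: return TextureDir::QuarterPi;           // pi/4
            case Dir::U:  return TextureDir::HalfPi;              // pi/2
            case Dir::RU: return TextureDir::ThreeQuarterPi;      // 3*pi/4
            case Dir::L:  return TextureDir::Deg0;                // 0 degrees
            default:      return TextureDir::MinusQuarterPi;      // -pi/4 (LB)
        }
    }
    // cls == 2: the two smallest SADs must come from adjacent directions.
    auto pairIs = [&](Dir a, Dir b) { return (d0 == a && d1 == b) || (d0 == b && d1 == a); };
    if (pairIs(Dir::LU, Dir::U)) return TextureDir::RangeQuarterPi_HalfPi;       // [pi/4, pi/2]
    if (pairIs(Dir::U, Dir::RU)) return TextureDir::RangeHalfPi_ThreeQuarterPi;  // [pi/2, 3pi/4]
    if (pairIs(Dir::LU, Dir::L)) return TextureDir::Range0_QuarterPi;            // [0, pi/4]
    if (pairIs(Dir::L, Dir::LB)) return TextureDir::RangeMinusQuarterPi_0;       // [-pi/4, 0]
    return TextureDir::Complex;
}
```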

Table 1. Texture direction characteristics of the PU to be estimated

(4) Determine the rough-mode search range according to the texture direction characteristic:

According to the texture direction characteristic of the PU to be estimated, the kinds of candidate prediction modes are reduced; the adjusted prediction modes form the rough-mode search range S, where the prediction modes in S are set according to the texture direction characteristic of the PU to be estimated, as shown in Table 2 below:

Table 2. Prediction modes in S

(5) Build the rate-distortion optimization candidate mode set from the rough search range and Ω:

In some cases the prediction mode search range obtained from step (4) still contains many modes, so the prediction modes need further screening before the best mode is chosen with rate-distortion optimization in step (6).

First, the SATD-cost mode search range Ψ is determined: if execution reached the current step from step (4), Ψ is the intersection of the rough-mode search range S obtained in step (4) and the set Ω obtained in step (1); if execution reached the current step from step (3), Ω is assigned to Ψ directly.

Then the HEVC intra prediction residual of each prediction mode in Ψ is computed, and the SATD cost J of the prediction residual is calculated, as shown in equation (7):

J = SATD + λ × R        (7)

where J denotes the cost, SATD denotes the sum of absolute values of the Hadamard-transformed residual signal, λ denotes the Lagrange multiplier, and R denotes the number of bits required to encode the mode selection.
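For reference, a sketch of how the SATD term in equation (7) can be evaluated for a 4x4 residual block with a Hadamard transform; larger PUs would apply the transform to sub-blocks and accumulate. The normalization factor used by the HM reference software is omitted, and the helper names are ours.

```cpp
#include <cstdlib>

// SATD of a 4x4 residual block via a 4x4 Hadamard transform (normalization omitted).
long satd4x4(const int residual[4][4]) {
    int m[4][4], d[4][4];
    for (int i = 0; i < 4; ++i) {                 // horizontal butterflies
        int s01 = residual[i][0] + residual[i][1], d01 = residual[i][0] - residual[i][1];
        int s23 = residual[i][2] + residual[i][3], d23 = residual[i][2] - residual[i][3];
        m[i][0] = s01 + s23;  m[i][1] = d01 + d23;
        m[i][2] = s01 - s23;  m[i][3] = d01 - d23;
    }
    for (int j = 0; j < 4; ++j) {                 // vertical butterflies
        int s01 = m[0][j] + m[1][j], d01 = m[0][j] - m[1][j];
        int s23 = m[2][j] + m[3][j], d23 = m[2][j] - m[3][j];
        d[0][j] = s01 + s23;  d[1][j] = d01 + d23;
        d[2][j] = s01 - s23;  d[3][j] = d01 - d23;
    }
    long sum = 0;
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            sum += std::abs(d[i][j]);             // sum of absolute transformed differences
    return sum;
}

// Rough-mode cost of one candidate mode, eq. (7); lambda and the mode-bit
// estimate R come from the encoder configuration.
double roughCost(long satd, double lambda, int modeBits) {
    return static_cast<double>(satd) + lambda * modeBits;
}
```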

The prediction modes are then sorted in ascending order of their SATD cost J, and the rate-distortion optimization candidate mode set Φ is built from the sorted modes: when the first-ranked prediction mode is the DC or Planar mode, only the first-ranked mode is added to Φ; when the first-ranked mode is an angular mode and the second-ranked mode is the DC or Planar mode, only the first two modes are added to Φ; when the first two modes are adjacent angular modes, only the first two modes are added to Φ; when the first two modes are non-adjacent angular modes, the first two modes are added to Φ and then the angular modes adjacent to these two modes are also added to Φ; in all other cases, the first three modes are added to Φ for PUs of size 16×16, 32×32 and 64×64, and the first eight modes are added to Φ for PUs of size 4×4 and 8×8.
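A hedged C++ sketch of these selection rules for assembling Φ is shown below. Treating "adjacent angular modes" as consecutive mode numbers is our reading of the description, and the data structures are purely illustrative.

```cpp
#include <cstdlib>
#include <initializer_list>
#include <set>
#include <vector>

// One entry after the rough search: prediction mode number and its SATD cost J.
struct ModeCost { int mode; double cost; };

static bool isAngular(int m)          { return m >= 2 && m <= 34; }
static bool areAdjacent(int a, int b) { return isAngular(a) && isAngular(b) && std::abs(a - b) == 1; }

// Build the RDO candidate set Phi from the modes already sorted by ascending SATD
// cost J, following the rules of step (5).  'sorted' is assumed non-empty; puSize is N.
std::set<int> buildPhi(const std::vector<ModeCost>& sorted, int puSize) {
    std::set<int> phi;
    const int m0 = sorted[0].mode;
    const int m1 = sorted.size() > 1 ? sorted[1].mode : -1;

    if (m0 == 0 || m0 == 1) {                       // DC or Planar ranked first
        phi.insert(m0);
    } else if (m1 == 0 || m1 == 1) {                // angular first, DC/Planar second
        phi.insert(m0); phi.insert(m1);
    } else if (areAdjacent(m0, m1)) {               // two adjacent angular modes
        phi.insert(m0); phi.insert(m1);
    } else if (isAngular(m0) && isAngular(m1)) {    // two non-adjacent angular modes
        phi.insert(m0); phi.insert(m1);
        for (int m : {m0 - 1, m0 + 1, m1 - 1, m1 + 1})
            if (isAngular(m)) phi.insert(m);
    } else {                                        // other cases: keep the top 3 or 8 modes
        const size_t keep = (puSize >= 16) ? 3 : 8;
        for (size_t i = 0; i < sorted.size() && i < keep; ++i)
            phi.insert(sorted[i].mode);
    }
    return phi;
}
```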

(6) Select the best intra prediction mode:

Rate-distortion optimization is applied to the candidate mode set Φ obtained in step (5), and the candidate mode with the smallest rate-distortion cost is selected as the best intra prediction mode of the PU to be estimated, completing the intra prediction mode selection for the PU.

The technical concept of the invention is as follows: first, the SADs between all pixels in the PU and their spatially adjacent pixels in different directions are computed to obtain the texture characteristics of the PU and determine its texture direction characteristic; then, based on the texture direction characteristic, a rough-mode search range is established, reducing the number of candidate modes that enter the rough mode search; finally, the rate-distortion optimization candidate mode set is built, further reducing the number of candidate modes that finally undergo rate-distortion optimization.

Compared with the prior art, the present invention has the following beneficial effects:

The present invention proposes a fast selection method for HEVC intra prediction modes. Compared with the prior art, the method has the following characteristics and advantages: first, by comparing the sums of absolute differences of the original pixels inside the PU in different directions, the texture direction characteristic of the PU is classified; then the search range of the prediction modes is reduced according to the texture direction characteristic; finally, the number of candidate modes that undergo rate-distortion optimization is reduced according to the SATD-cost ordering of the prediction modes. While maintaining good coding rate-distortion performance, the invention can significantly reduce the computational complexity of intra prediction mode selection in both intra-coded and inter-coded HEVC frames.

Brief Description of the Drawings

Fig. 1 is the basic flowchart of the method of the present invention.

Fig. 2 shows the 33 intra angular prediction modes of HEVC coding.

Detailed Description

The present invention is described in detail below with reference to the embodiments and the accompanying drawings, but the invention is not limited thereto.

As shown in Fig. 1, a fast selection method for HEVC intra-frame prediction modes comprises the following steps:

(1) Input a PU to be estimated and establish the set of actually available intra prediction modes;

(2) Compute the sums of absolute differences between all pixels in the PU to be estimated and their spatially adjacent pixels in different directions;

(3) Determine the texture direction characteristic of the PU to be estimated from the sums of absolute differences with spatially adjacent pixels in different directions;

(4) Determine the rough-mode search range according to the texture direction characteristic;

(5) Build the rate-distortion optimization candidate mode set from the rough-mode search range and the set of actually available intra prediction modes;

(6) Select the best intra prediction mode.

Step (1) specifically includes:

According to the spatially adjacent reconstructed pixels that already exist around the PU to be estimated and the spatially adjacent reconstructed pixels required by each HEVC intra prediction mode, all actually available intra prediction modes are selected for the PU to be estimated and form the set Ω. That is, for each HEVC intra prediction mode, if the spatially adjacent reconstructed pixels that the mode needs for intra prediction already exist around the PU to be estimated, the mode is added to Ω.

Step (2) specifically includes:

First, when the angular prediction mode numbered 18 is present in Ω, i.e. the PU to be estimated can use the angular prediction mode that predicts along the diagonal toward the lower right, the sum of absolute differences SAD_LU between all pixels in the PU and their upper-left adjacent pixels is computed, as shown in equation (1):

SAD_{LU} = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} \left| p(x,y) - p(x-1,y-1) \right| \qquad (1)

In equation (1), the size of the PU to be estimated is N×N (N = 4, 8, 16, 32, 64), and p(x,y) is the value of the pixel at coordinates (x,y) in the PU, where x is the horizontal coordinate and y is the vertical coordinate; within the PU their values are integers from 0 to N-1, and the pixel at (x-1,y-1) lies to the upper left of the pixel at (x,y). The pixel at (-1,-1) is the upper-left neighbor of the pixel at (0,0); the pixel at (0,0) is at the upper-left corner of the PU, the pixel at (0,N-1) is at its lower-left corner, the pixel at (N-1,0) is at its upper-right corner, and the pixel at (N-1,N-1) is at its lower-right corner. In the HEVC standard, the prediction directions of the angular prediction modes with different numbers are shown in Fig. 2.

Similarly, when the angular prediction mode numbered 26 is present in Ω, i.e. the PU to be estimated can use the angular prediction mode that predicts in the vertical downward direction, the sum of absolute differences SAD_U between all pixels in the PU and their above adjacent pixels is computed, as shown in equation (2):

SAD_{U} = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} \left| p(x,y) - p(x,y-1) \right| \qquad (2)

In equation (2), the size of the PU to be estimated is N×N (N = 4, 8, 16, 32, 64), and p(x,y) is the value of the pixel at coordinates (x,y) in the PU, where x is the horizontal coordinate and y is the vertical coordinate; within the PU their values are integers from 0 to N-1, and the pixel at (x,y-1) lies directly above the pixel at (x,y).

When the angular prediction mode numbered 34 is present in Ω, i.e. the PU to be estimated can use the angular prediction mode that predicts along the diagonal toward the lower left, the sum of absolute differences SAD_RU between all pixels in the PU and their upper-right adjacent pixels is computed, as shown in equation (3):

SAD_{RU} = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} \left| p(x,y) - p(x+1,y-1) \right| \qquad (3)

In equation (3), the size of the PU to be estimated is N×N (N = 4, 8, 16, 32, 64), and p(x,y) is the value of the pixel at coordinates (x,y) in the PU, where x is the horizontal coordinate and y is the vertical coordinate; within the PU their values are integers from 0 to N-1, and the pixel at (x+1,y-1) lies to the upper right of the pixel at (x,y).

When the angular prediction mode numbered 10 is present in Ω, i.e. the PU to be estimated can use the angular prediction mode that predicts in the horizontal rightward direction, the sum of absolute differences SAD_L between all pixels in the PU and their left adjacent pixels is computed, as shown in equation (4):

SAD_{L} = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} \left| p(x,y) - p(x-1,y) \right| \qquad (4)

In equation (4), the size of the PU to be estimated is N×N (N = 4, 8, 16, 32, 64), and p(x,y) is the value of the pixel at coordinates (x,y) in the PU, where x is the horizontal coordinate and y is the vertical coordinate; within the PU their values are integers from 0 to N-1, and the pixel at (x-1,y) lies to the left of the pixel at (x,y).

When the angular prediction mode numbered 2 is present in Ω, i.e. the PU to be estimated can use the angular prediction mode that predicts along the diagonal toward the upper right, the sum of absolute differences SAD_LB between all pixels in the PU and their lower-left adjacent pixels is computed, as shown in equation (5):

SAD_{LB} = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} \left| p(x,y) - p(x-1,y+1) \right| \qquad (5)

In equation (5), the size of the PU to be estimated is N×N (N = 4, 8, 16, 32, 64), and p(x,y) is the value of the pixel at coordinates (x,y) in the PU, where x is the horizontal coordinate and y is the vertical coordinate; within the PU their values are integers from 0 to N-1, and the pixel at (x-1,y+1) lies to the lower left of the pixel at (x,y).

Step (3) specifically includes:

First, a step selection is made according to the number of sums of absolute differences (SADs) obtained in step (2): if fewer than 3 SADs were computed in step (2), go to step (5); otherwise the SADs computed in step (2) are sorted in ascending order, and the three smallest SADs are denoted SAD_MIN-0, SAD_MIN-1 and SAD_MIN-2. The texture of the PU to be estimated is then classified according to these three smallest SADs, as shown in equation (6):

Class = \begin{cases} 0, & \text{if } SAD_{MIN-0} > \alpha \times SAD_{MIN-2} \\ 1, & \text{else if } SAD_{MIN-0} < \beta \times SAD_{MIN-1} \\ 2, & \text{else if } SAD_{MIN-1} < \gamma \times SAD_{MIN-2} \\ 3, & \text{otherwise} \end{cases} \qquad (6)

In equation (6), Class denotes the texture category of the PU to be estimated: a value of 0 means the texture of the PU is relatively flat, a value of 1 means the texture shows a clear horizontal, vertical or diagonal direction, a value of 2 means the texture lies along some other angular direction, and a value of 3 means the texture is complex. The parameters α, β and γ adjust the relationships among SAD_MIN-i (i = 0, 1, 2); α is set to 0.9-1.0, and β and γ are set to 0.6-1.0. In this embodiment, α is set to 0.95, and both β and γ are set to 0.9.

Next, the texture direction characteristic of the PU to be estimated is obtained from the texture category Class computed by equation (6) together with the relationships among the SADs, as shown in Table 1. In Table 1, the 0-degree direction means the horizontal rightward direction, the π/2 direction means the vertical downward direction, the π/4 direction means the 45-degree direction toward the lower right, the -π/4 direction means the 45-degree direction toward the upper right, and the 3π/4 direction means the 45-degree direction toward the lower left. When the texture category Class equals 2, the texture direction characteristic is determined by whether SAD_MIN-0 and SAD_MIN-1 are the SAD values of two adjacent directions among SAD_LU, SAD_U, SAD_RU, SAD_L and SAD_LB: (a) if SAD_LU equals SAD_MIN-0 and SAD_U equals SAD_MIN-1, or SAD_LU equals SAD_MIN-1 and SAD_U equals SAD_MIN-0, the texture is recorded as lying in the [π/4, π/2] range; (b) if SAD_U equals SAD_MIN-0 and SAD_RU equals SAD_MIN-1, or SAD_U equals SAD_MIN-1 and SAD_RU equals SAD_MIN-0, the texture is recorded as lying in the [π/2, 3π/4] range; (c) if SAD_LU equals SAD_MIN-0 and SAD_L equals SAD_MIN-1, or SAD_LU equals SAD_MIN-1 and SAD_L equals SAD_MIN-0, the texture is recorded as lying in the [0, π/4] range; (d) if SAD_L equals SAD_MIN-0 and SAD_LB equals SAD_MIN-1, or SAD_L equals SAD_MIN-1 and SAD_LB equals SAD_MIN-0, the texture is recorded as lying in the [-π/4, 0] range; (e) in all other cases, the texture direction characteristic is recorded as complex texture direction. When Class equals 3, the texture direction characteristic of the PU is recorded as complex texture direction.

Table 1. Texture direction characteristics of the PU to be estimated

Step (4) specifically includes:

According to the texture direction characteristic of the PU to be estimated, the kinds of candidate prediction modes are reduced; the adjusted prediction modes form the rough-mode search range S of the prediction modes, where the prediction modes in S are set according to the texture direction characteristic of the PU to be estimated, as shown in Table 2 below:

Table 2. Prediction modes in S

Step (5) specifically includes:

First, the SATD-cost mode search range Ψ is determined: if execution reached the current step from step (4), Ψ is the intersection of the rough-mode search range S obtained in step (4) and the set Ω obtained in step (1); if execution reached the current step from step (3), Ω is assigned to Ψ directly.

Then the HEVC intra prediction residual of each prediction mode in Ψ is computed, and the SATD cost of the prediction residual is calculated, as shown in equation (7):

J = SATD + λ × R        (7)

where J denotes the cost, SATD denotes the sum of absolute values of the Hadamard-transformed residual signal, λ denotes the Lagrange multiplier, and R denotes the number of bits required to encode the mode selection.

The prediction modes are then sorted in ascending order of their SATD cost J, and the rate-distortion optimization candidate mode set Φ is built from the sorted modes: when the first-ranked prediction mode is the DC or Planar mode, only the first-ranked mode is added to Φ; when the first-ranked mode is an angular mode and the second-ranked mode is the DC or Planar mode, only the first two modes are added to Φ; when the first two modes are adjacent angular modes, only the first two modes are added to Φ; when the first two modes are non-adjacent angular modes, the first two modes are added to Φ and then the angular modes adjacent to these two modes are also added to Φ; in all other cases, the first three modes are added to Φ for PUs of size 16×16, 32×32 and 64×64, and the first eight modes are added to Φ for PUs of size 4×4 and 8×8.

Step (6) specifically includes:

Rate-distortion optimization is applied to the candidate mode set Φ obtained in step (5), and the candidate mode with the smallest rate-distortion cost is selected as the best intra prediction mode of the PU to be estimated, completing the intra prediction mode selection for the PU.

Claims (4)

1.一种HEVC帧内预测模式快速选择方法,其特征在于,所述的选择方法包括以下步骤:1. A fast selection method of HEVC intra-frame prediction mode, is characterized in that, described selection method comprises the following steps: (1)输入一个待估计PU,建立实际可用的帧内预测模式集合:(1) Input a PU to be estimated, and establish a set of actually available intra prediction modes: 根据待估计PU已存在空间相邻的重建像素和每个HEVC帧内预测模式需要的空间相邻重建像素,为待估计PU选取所有实际可用的帧内预测模式,组成集合Ω;According to the existing spatially adjacent reconstruction pixels of the PU to be estimated and the spatially adjacent reconstruction pixels required by each HEVC intra prediction mode, select all actually available intra prediction modes for the PU to be estimated to form a set Ω; (2)计算待估计PU中的所有像素与其不同方向的空间相邻像素的差值绝对值和:(2) Calculate the sum of the absolute values of differences between all pixels in the PU to be estimated and their spatially adjacent pixels in different directions: 当Ω中存在编号为18的角度预测模式,即待估计PU可以采用沿对角线往右下方向进行预测的角度预测模式,则计算待估计PU中所有像素与其左上方相邻像素的差值绝对值和SADLU,如式(1)所示:When there is an angle prediction mode numbered 18 in Ω, that is, the PU to be estimated can use the angle prediction mode for prediction along the diagonal to the lower right, then calculate the absolute value of the difference between all pixels in the PU to be estimated and its upper left adjacent pixel and SAD LU , as shown in formula (1): SADSAD LL Uu == &Sigma;&Sigma; xx == 00 NN -- 11 &Sigma;&Sigma; ythe y == 00 NN -- 11 || pp (( xx ,, ythe y )) -- pp (( xx -- 11 ,, ythe y -- 11 )) || -- -- -- (( 11 )) 式(1)中,待估计PU的尺寸为N×N(N=4,8,16,32,64),p(x,y)为待估计PU中坐标为(x,y)的像素的像素值,其中x为水平坐标,y为竖直坐标,在待估计PU中它们的值为大于等于0且小于N的整数,坐标为(x-1,y-1)的像素位于坐标为(x,y)的左上方;In formula (1), the size of the PU to be estimated is N×N (N=4, 8, 16, 32, 64), and p(x, y) is the value of the pixel whose coordinates are (x, y) in the PU to be estimated Pixel values, where x is the horizontal coordinate, y is the vertical coordinate, and their values are integers greater than or equal to 0 and less than N in the PU to be estimated, and the pixel with the coordinates (x-1, y-1) is located at the coordinates ( top left of x,y); 当Ω中存在编号为26的角度预测模式,即待估计PU可以采用竖直向下方向进行预测的角度预测模式,则计算待估计PU中所有像素与其上方相邻像素的差值绝对值和SADU,如式(2)所示:When there is an angle prediction mode numbered 26 in Ω, that is, the angle prediction mode in which the PU to be estimated can be predicted in the vertical downward direction, then calculate the absolute value of the difference between all the pixels in the PU to be estimated and the adjacent pixels above it and the SAD U , as shown in formula (2): SADSAD Uu == &Sigma;&Sigma; xx == 00 NN -- 11 &Sigma;&Sigma; ythe y == 00 NN -- 11 || pp (( xx ,, ythe y )) -- pp (( xx ,, ythe y -- 11 )) || -- -- -- (( 22 )) 式(2)中,待估计PU的尺寸为N×N(N=4,8,16,32,64),p(x,y)为待估计PU中坐标为(x,y)的像素的像素值,其中x为水平坐标,y为竖直坐标,在待估计PU中它们的值为大于等于0且小于N的整数,坐标为(x,y-1)的像素位于坐标为(x,y)的正上方;In formula (2), the size of the PU to be estimated is N×N (N=4, 8, 16, 32, 64), and p(x, y) is the value of the pixel whose coordinates are (x, y) in the PU to be estimated Pixel values, where x is the horizontal coordinate and y is the vertical coordinate. In the PU to be estimated, their values are integers greater than or equal to 0 and less than N. 
The pixel with the coordinates (x, y-1) is located at the coordinates (x, directly above y); 当Ω中存在编号为34的角度预测模式,即待估计PU可以采用沿对角线左下方向进行预测的角度预测模式,则计算待估计PU中所有像素与其右上方相邻像素的差值绝对值和SADRU,如式(3)所示:When there is an angle prediction mode numbered 34 in Ω, that is, the PU to be estimated can use the angle prediction mode for prediction along the lower left direction of the diagonal, then calculate the absolute value and SAD of the difference between all pixels in the PU to be estimated and its upper right adjacent pixels RU , as shown in formula (3): SADSAD RR Uu == &Sigma;&Sigma; xx == 00 NN -- 11 &Sigma;&Sigma; ythe y == 00 NN -- 11 || pp (( xx ,, ythe y )) -- pp (( xx ++ 11 ,, ythe y -- 11 )) || -- -- -- (( 33 )) 式(3)中,待估计PU的尺寸为N×N(N=4,8,16,32,64),p(x,y)为待估计PU中坐标为(x,y)的像素的像素值,其中x为水平坐标,y为竖直坐标,在待估计PU中它们的值为大于等于0且小于N的整数,坐标为(x+1,y-1)的像素位于坐标为(x,y)的右上方;In formula (3), the size of the PU to be estimated is N×N (N=4, 8, 16, 32, 64), and p(x, y) is the value of the pixel whose coordinates are (x, y) in the PU to be estimated Pixel values, where x is the horizontal coordinate and y is the vertical coordinate. In the PU to be estimated, their values are integers greater than or equal to 0 and less than N. The pixel with the coordinates (x+1, y-1) is located at the coordinates ( top right of x,y); 当Ω中存在编号为10的角度预测模式,即待估计PU可以采用水平向右方向进行预测的角度预测模式,则计算待估计PU中所有像素与其左方相邻像素的差值绝对值和SADL,如式(4)所示:When there is an angle prediction mode numbered 10 in Ω, that is, the angle prediction mode in which the PU to be estimated can be predicted horizontally to the right, then calculate the absolute value of the difference between all pixels in the PU to be estimated and its left adjacent pixel and SADL , As shown in formula (4): SADSAD LL == &Sigma;&Sigma; xx == 00 NN -- 11 &Sigma;&Sigma; ythe y == 00 NN -- 11 || pp (( xx ,, ythe y )) -- pp (( xx -- 11 ,, ythe y )) || -- -- -- (( 44 )) 式(4)中,待估计PU的尺寸为N×N(N=4,8,16,32,64),p(x,y)为待估计PU中坐标为(x,y)的像素的像素值,其中x为水平坐标,y为竖直坐标,在待估计PU中它们的值为大于等于0且小于N的整数,坐标为(x-1,y)的像素位于坐标为(x,y)的左方;In formula (4), the size of the PU to be estimated is N×N (N=4, 8, 16, 32, 64), and p(x, y) is the value of the pixel whose coordinates are (x, y) in the PU to be estimated Pixel values, where x is the horizontal coordinate and y is the vertical coordinate. In the PU to be estimated, their values are integers greater than or equal to 0 and less than N. The pixel with the coordinates (x-1, y) is located at the coordinates (x, to the left of y); 当Ω中存在编号为2的角度预测模式,即待估计PU可以采用沿对角线右上方向进行预测的角度预测模式,则计算待估计PU中所有像素与其左下方相邻像素的差值绝对值和SADLB,如式(5)所示:When there is an angle prediction mode numbered 2 in Ω, that is, the PU to be estimated can use the angle prediction mode for prediction along the upper right direction of the diagonal, then calculate the absolute value and SAD of the difference between all pixels in the PU to be estimated and its lower left adjacent pixels LB , as shown in formula (5): SADSAD LL BB == &Sigma;&Sigma; xx == 00 NN -- 11 &Sigma;&Sigma; ythe y == 00 NN -- 11 || pp (( xx ,, ythe y )) -- pp (( xx -- 11 ,, ythe y ++ 11 )) || -- -- -- (( 55 )) 式(5)中,待估计PU的尺寸为N×N(N=4,8,16,32,64),p(x,y)为待估计PU中坐标为(x,y)的像素的像素值,其中x为水平坐标,y为竖直坐标,在待估计PU中它们的值为大于等于0且小于N的整数,坐标为(x-1,y+1)的像素位于坐标为(x,y)的左下方;In formula (5), the size of the PU to be estimated is N×N (N=4, 8, 16, 32, 64), and p(x, y) is the value of the pixel whose coordinates are (x, y) in the PU to be estimated Pixel values, where x is the horizontal coordinate and y is the vertical coordinate. In the PU to be estimated, their values are integers greater than or equal to 0 and less than N. 
The pixel with the coordinates (x-1, y+1) is located at the coordinates ( x, y) to the lower left; (3)根据不同方向空间相邻像素的差值绝对值和判断待估计PU的纹理方向特性:(3) Judging the texture direction characteristics of the PU to be estimated according to the absolute value of the difference between adjacent pixels in different directions: 首先根据从步骤(2)计算得到的差值绝对值和SAD个数进行步骤选择:如果步骤(2)计算得到SAD个数小于3,则执行步骤(5);否则先对步骤(2)计算得到的SAD进行从小到大排列,对待估计PU的纹理方向特性进行分类;First, step selection is performed based on the absolute value of the difference and the number of SADs calculated from step (2): if the number of SADs calculated in step (2) is less than 3, then step (5) is performed; otherwise, step (2) is calculated first The obtained SADs are arranged from small to large, and the texture direction characteristics of the PU to be estimated are classified; (4)根据纹理方向特性确定粗级模式搜索范围;(4) Determine the coarse-level mode search range according to the texture direction characteristics; (5)根据粗级模式搜索范围和Ω建立率失真优化候选模式集合;(5) Establish a rate-distortion optimization candidate mode set according to the coarse-level mode search range and Ω; (6)选取最佳帧内预测模式:(6) Select the best intra prediction mode: 采用率失真优化技术从步骤(5)得到的候选模式集合中选取率失真代价最小的候选模式作为待估计PU的最佳帧内预测模式,完成待估计PU的帧内预测模式选择。The rate-distortion optimization technique is used to select the candidate mode with the smallest rate-distortion cost from the candidate mode set obtained in step (5) as the best intra-frame prediction mode of the PU to be estimated, and complete the selection of the intra-frame prediction mode of the PU to be estimated. 2.如权利要求1所述的一种HEVC帧内预测模式快速选择方法,其特征在于,所述的步骤(3)中,设前三个最小的SAD依次为SADMIN-0、SADMIN-1和SADMIN-2,再根据这三个最小的SAD,对待估计PU的纹理特征进行分类,如式(6)所示:2. a kind of HEVC intra-frame prediction mode fast selection method as claimed in claim 1, is characterized in that, in described step (3), set the first three minimum SADs to be SAD MIN-0 , SAD MIN -0 successively 1 and SAD MIN-2 , and then classify the texture features of the PU to be estimated according to the three smallest SADs, as shown in formula (6): CC ll aa sthe s sthe s == 00 ,, ii ff SADSAD Mm II NN -- 00 >> &alpha;&alpha; &times;&times; SADSAD Mm II NN -- 22 11 ,, ee ll sthe s ee ii ff SADSAD Mm II NN -- 00 << &beta;&beta; &times;&times; SADSAD Mm II NN -- 11 22 ,, ee ll sthe s ee ii ff SADSAD Mm II NN -- 11 << &gamma;&gamma; &times;&times; SADSAD Mm II NN -- 22 33 ,, oo tt hh ee rr sthe s -- -- -- (( 66 )) 式(6)中,Class表示待估计PU的纹理类别,值为0表示待估计PU的纹理比较平坦,值为1表示待估计PU的纹理呈现较明显的水平、竖直或对角线方向,值为2表示待估计PU的纹理呈现其它角度方向,值为3表示待估计PU的纹理复杂,参数α、β和γ用于调节SADMIN-i(i=0,1,2)之间的关系,其中α设为0.9~1.0,β和γ设为0.6~1.0;In formula (6), Class represents the texture category of the PU to be estimated, a value of 0 indicates that the texture of the PU to be estimated is relatively flat, and a value of 1 indicates that the texture of the PU to be estimated presents a more obvious horizontal, vertical or diagonal direction, A value of 2 indicates that the texture of the PU to be estimated presents other angular directions, a value of 3 indicates that the texture of the PU to be estimated is complex, and parameters α, β, and γ are used to adjust the relationship between SAD MIN-i (i=0,1,2). 
Then, from the PU texture category Class obtained by formula (6) together with the relationships among the SADs, the texture direction characteristic of the PU to be estimated is derived, as shown in Table 1, where the 0 direction means the horizontal rightward direction, the π/2 direction means the vertical downward direction, the π/4 direction means the 45-degree lower-right direction, the -π/4 direction means the 45-degree upper-right direction, and the 3π/4 direction means the 45-degree lower-left direction.

Table 1  Texture direction characteristics of the PU to be estimated

3. The HEVC intra prediction mode fast selection method according to claim 1, characterized in that in step (4), according to the texture direction characteristic of the PU to be estimated obtained in step (3), the number of candidate prediction mode types is reduced, and the adjusted prediction modes form the coarse mode search range S, where the prediction modes in S are set according to the texture direction characteristic of the PU to be estimated, as shown in Table 2 below.

Table 2  Prediction modes in S
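Since the body of Table 2 is not reproduced in this record, the concrete per-direction mode lists are unknown; the sketch below is therefore purely illustrative of the general shape such a coarse range S might take, keeping Planar, DC and only the angular modes near the dominant texture direction (for example, centred on mode 10 for a 0-direction texture or mode 26 for a π/2 texture). The window size and the decision to keep all 35 modes for flat or complex textures are assumptions.

#include <vector>

// Illustrative only: build a coarse mode search range S.
// clearDirection: true when step (3) found a dominant texture direction;
// centreMode: the angular mode whose prediction direction matches that texture direction.
std::vector<int> coarseSearchRange(bool clearDirection, int centreMode, int window = 4) {
    std::vector<int> s;
    if (!clearDirection) {                       // flat or complex texture: keep all 35 modes
        for (int m = 0; m <= 34; ++m) s.push_back(m);
        return s;
    }
    s.push_back(0);                              // Planar
    s.push_back(1);                              // DC
    for (int m = centreMode - window; m <= centreMode + window; ++m)
        if (m >= 2 && m <= 34) s.push_back(m);   // angular modes near the texture direction
    return s;
}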
4. The HEVC intra prediction mode fast selection method according to claim 1, characterized in that in step (5), the SATD cost mode search range Ψ is determined first: if this step is reached from step (4), Ψ is the intersection of the coarse mode search range S obtained in step (4) and the set Ω obtained in step (1); if this step is reached from step (3), Ω is assigned directly to Ψ. Then the HEVC intra prediction residual of each prediction mode in Ψ is computed, and the SATD cost of each prediction residual is computed. Next, the prediction modes are sorted in ascending order of SATD cost J, and the rate-distortion optimization candidate mode set Φ is built from the sorted prediction modes: when the first-ranked prediction mode is the DC mode or the Planar mode, only the first-ranked prediction mode is added to Φ; when the first-ranked prediction mode is an angular mode and the second-ranked prediction mode is the DC mode or the Planar mode, only the top 2 prediction modes are added to Φ; when the top 2 prediction modes are both adjacent angular modes, only the top 2 prediction modes are added to Φ; when the top 2 prediction modes are non-adjacent angular modes, the top 2 prediction modes are added to Φ first, and then the angular modes adjacent to these 2 prediction modes are added to Φ; in all other cases, for PUs to be estimated of size 16×16, 32×32 and 64×64, the top 3 prediction modes are added to Φ, and for PUs to be estimated of size 4×4 and 8×8, the top 8 prediction modes are added to Φ.
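To make the candidate-set construction of claim 4 concrete, the following sketch (not from the patent or from the HM reference software) assumes the modes of Ψ have already been sorted by ascending SATD cost, uses HEVC numbering in which 0 is Planar, 1 is DC and 2 to 34 are angular modes, treats "adjacent" angular modes as modes whose numbers differ by 1, and interprets the claim's rules in the order they are stated; the exact precedence among overlapping conditions is an assumption.

#include <algorithm>
#include <cstddef>
#include <cstdlib>
#include <initializer_list>
#include <vector>

// Illustrative only: build the RDO candidate set (Phi) from SATD-sorted modes.
std::vector<int> buildRdoCandidates(const std::vector<int>& sortedModes, int puSize) {
    std::vector<int> phi;
    const std::size_t n = sortedModes.size();
    auto isAngular = [](int m) { return m >= 2 && m <= 34; };

    if (n >= 1 && !isAngular(sortedModes[0])) {
        phi.push_back(sortedModes[0]);                          // best mode is DC or Planar
    } else if (n >= 2 && !isAngular(sortedModes[1])) {
        phi = {sortedModes[0], sortedModes[1]};                 // angular best, DC/Planar second
    } else if (n >= 2 && std::abs(sortedModes[0] - sortedModes[1]) == 1) {
        phi = {sortedModes[0], sortedModes[1]};                 // two adjacent angular modes
    } else if (n >= 2) {
        phi = {sortedModes[0], sortedModes[1]};                 // two non-adjacent angular modes
        for (int m : {sortedModes[0] - 1, sortedModes[0] + 1,
                      sortedModes[1] - 1, sortedModes[1] + 1})
            if (isAngular(m)) phi.push_back(m);                 // plus their angular neighbours
    } else {
        // Fallback of claim 4: top 3 modes for 16x16/32x32/64x64 PUs, top 8 for 4x4/8x8 PUs.
        const std::size_t keep = (puSize >= 16) ? 3 : 8;
        for (std::size_t i = 0; i < std::min(keep, n); ++i) phi.push_back(sortedModes[i]);
    }
    return phi;
}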
CN201510675511.8A 2015-10-16 2015-10-16 A kind of HEVC Adaptive Mode Selection Method for Intra-Prediction Active CN105208387B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510675511.8A CN105208387B (en) 2015-10-16 2015-10-16 A kind of HEVC Adaptive Mode Selection Method for Intra-Prediction

Publications (2)

Publication Number Publication Date
CN105208387A 2015-12-30
CN105208387B CN105208387B (en) 2018-03-13

Family

ID=54955775

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102665079A (en) * 2012-05-08 2012-09-12 北方工业大学 Adaptive fast intra prediction mode decision for high efficiency video coding (HEVC)
CN103517069A (en) * 2013-09-25 2014-01-15 北京航空航天大学 HEVC intra-frame prediction quick mode selection method based on texture analysis
CN103763570A (en) * 2014-01-20 2014-04-30 华侨大学 Rapid HEVC intra-frame prediction method based on SATD
CN104581152A (en) * 2014-12-25 2015-04-29 同济大学 HEVC intra-frame prediction mode decision accelerating method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MENGMENG ZHANG et al.: "An adaptive fast intra mode decision in HEVC", IMAGE PROCESSING (ICIP), 2012 19TH IEEE INTERNATIONAL CONFERENCE ON *
QI MEIBIN: "HEVC intra prediction mode selection using texture and spatial correlation", JOURNAL OF IMAGE AND GRAPHICS *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105812825A (en) * 2016-05-10 2016-07-27 中山大学 Image coding method based on grouping
CN105812825B (en) * 2016-05-10 2019-02-26 中山大学 A packet-based image coding method
CN106331726A (en) * 2016-09-23 2017-01-11 合网络技术(北京)有限公司 HEVC(High Efficiency Video Coding)-based intra-frame prediction decoding method and apparatus
CN110213576B (en) * 2018-05-03 2023-02-28 腾讯科技(深圳)有限公司 Video encoding method, video encoding device, electronic device, and storage medium
CN110213576A (en) * 2018-05-03 2019-09-06 腾讯科技(深圳)有限公司 Method for video coding, video coding apparatus, electronic equipment and storage medium
CN109640092A (en) * 2018-10-26 2019-04-16 西安科锐盛创新科技有限公司 Rear selection prediction technique in bandwidth reduction
CN109618162A (en) * 2018-10-26 2019-04-12 西安科锐盛创新科技有限公司 Rear selection prediction technique in bandwidth reduction
CN109510996A (en) * 2018-10-26 2019-03-22 西安科锐盛创新科技有限公司 Rear selection prediction technique in bandwidth reduction
CN109660793A (en) * 2018-10-26 2019-04-19 西安科锐盛创新科技有限公司 Prediction technique for bandwidth reduction
CN109413435A (en) * 2018-10-26 2019-03-01 西安科锐盛创新科技有限公司 A kind of prediction technique based on video compress
CN109413435B (en) * 2018-10-26 2020-10-16 苏州市吴越智博大数据科技有限公司 Prediction method based on video compression
CN109361922B (en) * 2018-10-26 2020-10-30 西安科锐盛创新科技有限公司 Predictive quantization coding method
CN109660793B (en) * 2018-10-26 2021-03-16 西安科锐盛创新科技有限公司 Prediction method for bandwidth compression
CN109361922A (en) * 2018-10-26 2019-02-19 西安科锐盛创新科技有限公司 Predict quantization coding method
CN109618169A (en) * 2018-12-25 2019-04-12 中山大学 Intra-frame decision method, apparatus and storage medium for HEVC
CN109618169B (en) * 2018-12-25 2023-10-27 中山大学 Intra-frame decision method, device and storage medium for HEVC
CN113674372A (en) * 2021-08-25 2021-11-19 深圳市迪威码半导体有限公司 Preprocessing algorithm for encoding and decoding remote sensing image
CN115334309A (en) * 2022-08-11 2022-11-11 北京百度网讯科技有限公司 Intra-frame prediction encoding method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN105208387B (en) A kind of HEVC Adaptive Mode Selection Method for Intra-Prediction
CN106131547B (en) The high-speed decision method of intra prediction mode in Video coding
CN104539962B (en) It is a kind of merge visually-perceptible feature can scalable video coding method
CN108495135B (en) Quick coding method for screen content video coding
CN107277509B (en) A Fast Intra Prediction Method Based on Screen Content
CN103327325B (en) The quick self-adapted system of selection of intra prediction mode based on HEVC standard
Zhao et al. Enhanced ctu-level inter prediction with deep frame rate up-conversion for high efficiency video coding
CN105791824B (en) Screen content coding prediction mode fast selecting method based on edge dot density
CN108712648A (en) A kind of quick inner frame coding method of deep video
CN104811728B (en) A kind of method for searching motion of video content adaptive
CN101888546B (en) A kind of method of estimation and device
CN109068142A (en) 360 degree of video intra-frame prediction high-speed decisions based on textural characteristics
CN105681808B (en) A kind of high-speed decision method of SCC interframe encodes unit mode
CN105791826A (en) A fast mode selection method between HEVC frames based on data mining
CN101309421A (en) Intra prediction mode selection method
CN105791862B (en) 3 d video encoding depth map internal schema selection method based on fringe complexity
CN112188196A (en) Method for rapid intra-frame prediction of general video coding based on texture
CN106688238A (en) Improved reference pixel selection and filtering for intra coding of depth map
CN104811729B (en) A kind of video multi-reference frame coding method
CN103118262A (en) Rate distortion optimization method and device, and video coding method and system
CN109688411B (en) A method and apparatus for estimating rate-distortion cost of video coding
CN109151467B (en) Fast selection of inter-frame mode for screen content coding based on image block activity
CN110213584A (en) Coding unit classification method and coding unit sorting device based on Texture complication
CN114827606A (en) Quick decision-making method for coding unit division
Liu et al. Enlarged motion-aware and frequency-aware network for compressed video artifact reduction

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant