
CN101572818B - A Prediction Method of Intra-frame Prediction Mode


Info

Publication number
CN101572818B
Authority
CN
China
Prior art keywords
prediction
neighborhood
coding block
current coding
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 200910085819
Other languages
Chinese (zh)
Other versions
CN101572818A (en)
Inventor
杨波
韩钰
门爱东
常侃
张文豪
宗晓飞
陈晓博
明阳阳
韩睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications
Priority to CN 200910085819
Publication of CN101572818A
Application granted
Publication of CN101572818B
Expired - Fee Related
Anticipated expiration

Landscapes

  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a method for predicting an intra-frame prediction mode, comprising: setting a neighborhood prediction template for the current coding block, and the reference pixels of that template, according to the block's adjacent already-coded blocks; then traversing all available prediction modes of the current coding block, using the neighborhood prediction template to predict the reference pixels under each available mode, comparing the resulting predicted values with the actual reference-pixel values, and determining the mode with the best average prediction over all reference pixels as the prediction mode of the current coding block. Applying the invention makes the prediction of the intra-frame prediction mode more accurate, thereby improving the compression efficiency of intra-frame coding.


Description

A prediction method for intra-frame prediction modes
Technical field
The present invention relates to video compression technology, and in particular to a method for predicting intra-frame prediction modes.
Background technology
Intra-frame coding is widely used in existing video compression standards. It exploits the spatial correlation of pixels within the same frame to remove redundancy, and since it needs no reference to temporally adjacent frames, it can effectively stop the propagation of inter-frame errors; intra-coded frames therefore serve as reference frames and instantaneous refresh frames.
The latest H.264/AVC video compression standard defines 9 directional intra prediction modes for 4x4 luminance blocks. Compared with earlier video compression standards such as H.261, MPEG-2, H.263 and MPEG-4, the new standard adopts more intra prediction modes, which improves the efficiency of intra-frame coding. However, the intra prediction mode actually adopted must be encoded and sent to the decoder, so a large number of bits is needed to signal the directional mode of every intra-coded block. Taking the 4x4 luminance block as an example, without any optimization 4 overhead bits per coding block are needed to signal one of the 9 prediction modes. Over an entire image this overhead greatly reduces compression efficiency, so the H.264/AVC coding standard applies predictive coding to the intra prediction mode in order to reduce the overhead bits.
In the current H.264/AVC standard, the intra prediction mode is first predicted to determine the most probable mode; the actual mode is then encoded according to the relation between this most probable mode and the mode actually adopted.
Specifically, if the predicted most probable mode matches the mode actually adopted, only 1 bit is needed to signal the mode information: the decoder simply adopts the most probable mode as the mode of the current decoding block. If the predicted most probable mode differs from the mode actually adopted, then besides 1 bit signaling the mismatch, extra bits are needed to represent the actual mode, in two cases: when the actual mode's number is smaller than that of the most probable mode, the actual mode number is encoded directly; when it is larger, the actual mode number minus 1 is encoded. With this treatment, encoding the prediction mode needs only 3 extra bits.
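The signaling rule above can be sketched in code. The function names and the bit-string return type are illustrative conveniences chosen here, not part of the standard:

```python
def encode_mode_bits(actual_mode: int, most_probable: int) -> str:
    """Encode an intra 4x4 mode against the most probable mode.

    1 bit when the prediction is right; otherwise a flag bit plus a
    3-bit remainder (once the most probable mode is excluded, the 8
    remaining mode numbers fit in 3 bits).
    """
    if actual_mode == most_probable:
        return "1"                        # prediction was correct
    rem = actual_mode if actual_mode < most_probable else actual_mode - 1
    return "0" + format(rem, "03b")       # flag + 3-bit remainder

def decode_mode_bits(bits: str, most_probable: int) -> int:
    """Invert encode_mode_bits: the decoder applies the same rule."""
    if bits[0] == "1":
        return most_probable
    rem = int(bits[1:4], 2)
    return rem if rem < most_probable else rem + 1
```

The round trip holds for all 9 modes against any most probable mode, and the bit count is 1 when the prediction hits and 4 when it misses, matching the analysis above.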
It follows that if the intra-mode prediction is accurate, i.e. the most probable mode is identical to the mode actually adopted by the current 4x4 block, only 1 bit is needed to represent the mode information of the current block. Otherwise, 4 bits are needed to represent the mode actually adopted. Whether the intra-mode prediction is accurate therefore directly affects the compression efficiency of intra-frame coding.
In the current H.264/AVC standard, the mode is predicted as follows: the 9 intra prediction modes are numbered 0 to 8 in decreasing order of probability; if the blocks above and to the left of the current 4x4 block both adopt the 4x4 intra predictive coding mode, the smaller of the two mode numbers adopted by those blocks is taken as the most probable mode of the current block. The decoder computes the most probable mode by the same method, so it is understood that decoder and encoder obtain the identical most probable mode.
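The prior-art rule can be sketched as follows. The fallback for an unavailable neighbor to DC (mode 2) is standard H.264/AVC behavior and is an assumption here, since the text above covers only the case where both neighbors are 4x4-intra coded:

```python
def most_probable_mode_h264(mode_above, mode_left):
    """Prior-art most-probable-mode rule: the smaller of the two
    neighbors' mode numbers. An unavailable neighbor (None) is
    assumed to default to DC, mode 2."""
    DC = 2
    a = DC if mode_above is None else mode_above
    b = DC if mode_left is None else mode_left
    return min(a, b)
```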
This prediction considers only the coding blocks above and to the left, and the prediction rule is rather simple, so the probability that the prediction matches the mode actually adopted is small.
Summary of the invention
In view of this, the invention provides a method for predicting intra-frame prediction modes that raises the probability that the prediction matches the mode actually adopted, thereby reducing the overhead of encoding the intra prediction mode and improving the compression efficiency of intra-frame coding.
To achieve the above object, the invention adopts the following technical scheme:
A method for predicting an intra-frame prediction mode, comprising:
A. According to the already-coded blocks adjacent to the current coding block, set a neighborhood prediction template for the current coding block and the reference pixels of that template. The top-left pixel of the current coding block is taken as coordinate (0,0), with the horizontal X axis positive to the right and the vertical Y axis positive downward.
If the current coding block is in the first (top) row of the image, the 8 pixels at (-1,0), (-2,0), (-1,1), (-2,1), (-1,2), (-2,2), (-1,3), (-2,3) form the neighborhood prediction template of the current coding block, and the 4 pixels at (-3,0), (-3,1), (-3,2), (-3,3) serve as the reference pixels.
If the current coding block is in the rightmost column of the image and not in the top row, the 20 pixels at (-2,-2), (-1,-2), (0,-2), (1,-2), (2,-2), (3,-2), (-2,-1), (-1,-1), (0,-1), (1,-1), (2,-1), (3,-1), (-1,0), (-2,0), (-1,1), (-2,1), (-1,2), (-2,2), (-1,3), (-2,3) form the neighborhood prediction template, and the 13 pixels at (-3,-3), (-2,-3), (-1,-3), (0,-3), (1,-3), (2,-3), (3,-3), (-3,-2), (-3,-1), (-3,0), (-3,1), (-3,2), (-3,3) serve as the reference pixels.
If the current coding block is in the leftmost column of the image and not in the top row, the 12 pixels at (0,-2), (1,-2), (2,-2), (3,-2), (4,-2), (5,-2), (0,-1), (1,-1), (2,-1), (3,-1), (4,-1), (5,-1) form the neighborhood prediction template, and the 8 pixels at (0,-3), (1,-3), (2,-3), (3,-3), (4,-3), (5,-3), (6,-3), (7,-3) serve as the reference pixels.
If the current coding block is not in the top row, the rightmost column or the leftmost column of the image, and is not the first coding block of the image, the 24 pixels at (-2,-2), (-1,-2), (0,-2), (1,-2), (2,-2), (3,-2), (4,-2), (5,-2), (-2,-1), (-1,-1), (0,-1), (1,-1), (2,-1), (3,-1), (4,-1), (5,-1), (-1,0), (-2,0), (-1,1), (-2,1), (-1,2), (-2,2), (-1,3), (-2,3) form the neighborhood prediction template, and the 17 pixels at (-3,-3), (-2,-3), (-1,-3), (0,-3), (1,-3), (2,-3), (3,-3), (4,-3), (5,-3), (6,-3), (7,-3), (-3,-2), (-3,-1), (-3,0), (-3,1), (-3,2), (-3,3) serve as the reference pixels.
B. According to the position of the current coding block in the image, determine which of the 9 directional prediction modes 0-8 defined for 4x4 luminance blocks in the H.264/AVC video compression standard are available, and traverse all available prediction modes. Under each prediction mode i, use the reference pixels of the neighborhood prediction template to compute the predicted values of the template pixels, and evaluate the cost function from the predicted values and the actual values of the template pixels:

Distortion(i) = Σ_(x,y∈template) |templatePixel(x,y) - predMode(i)(x,y)| / templateSize

The prediction mode with the minimum cost-function value is taken as the prediction result. Here i is the mode number, predMode(i)(x,y) is the predicted value of the template pixel at (x,y) under mode i, x and y are the coordinates within the neighborhood prediction template, templatePixel(x,y) is the actual value of that template pixel, and templateSize is the total number of template pixels actually used under mode i.
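A minimal sketch of the cost function, holding template pixels and per-mode predictions in dictionaries keyed by (x, y) (a representation chosen here for illustration):

```python
def distortion(template_pixels, predicted):
    """The claim's cost function: mean absolute difference between the
    actual template pixels and their predicted values under one mode.
    Only the pixels the mode actually uses appear in `predicted`,
    so templateSize = len(predicted)."""
    total = sum(abs(template_pixels[xy] - predicted[xy]) for xy in predicted)
    return total / len(predicted)
```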
Preferably, for a current coding block in the top row of the image other than the first block, the available prediction modes are modes 1, 2 and 8;
for a current coding block in the rightmost column and not in the top row, the available prediction modes are modes 0, 2, 3 and 7;
for all other current coding blocks, excluding those in the top row of the image, those in the rightmost column and the first coding block of the image, all 9 intra prediction modes are available.
As can be seen from the above technical scheme, the invention first sets the neighborhood prediction template of the current coding block and the template's reference pixels according to the block's adjacent coded blocks; it then traverses all available prediction modes of the current coding block, predicts the reference pixels under each available mode using the neighborhood prediction template, compares the resulting predicted values with the actual reference-pixel values, and takes the mode with the best average prediction over all reference pixels as the prediction mode of the current coding block. In this way not only the coding blocks to the left and above are used, but all the adjacent coded pixels involved in the various available prediction modes, so that every available mode can be traversed. Selecting, from the per-mode predictions of the template pixels, the mode with the best average prediction effect as the prediction for the current coding block improves the accuracy of the mode prediction, which in turn reduces the overhead of encoding the intra prediction mode and improves the compression efficiency of intra-frame coding.
Description of drawings
Fig. 1 is the flow chart of the intra-frame prediction mode prediction method of the invention.
Fig. 2a is schematic diagram 1 of the neighborhood prediction template and reference pixels.
Fig. 2b is schematic diagram 2 of the neighborhood prediction template and reference pixels.
Fig. 2c is schematic diagram 3 of the neighborhood prediction template and reference pixels.
Fig. 2d is schematic diagram 4 of the neighborhood prediction template and reference pixels.
Fig. 3 compares, for the foreman test sequence, the accuracy of the most probable prediction mode computed by the method of the invention with that computed under the H.264/AVC coding standard.
Fig. 4 compares, for the foreman test sequence, the performance of encoding the prediction mode after predicting with the method of the invention against the H.264/AVC coding standard.
Embodiment
To make the purpose, technical means and advantages of the invention clearer, the invention is described in further detail below with reference to the accompanying drawings.
The basic idea of the invention is: traverse the available prediction modes, use each mode to predict already-coded pixels, and, according to how well the predictions match those coded pixels, select the mode with the best prediction effect as the prediction for the current coding block.
As mentioned above, the prior art determines the prediction for the current coding block from the prediction modes of the adjacent coding blocks to the left and above, selecting one of the at most two modes adopted by those two neighbors as the prediction result for the current coding block.
Because the pixel correlation between adjacent coding blocks is strong, the invention likewise predicts the current coding block's mode from its adjacent coded blocks. Unlike the prior art, however, the invention traverses all available prediction modes and selects the one with the best prediction effect as the prediction for the current coding block.
Specifically, the flow of the intra-frame prediction mode prediction method of the invention comprises: determining the neighborhood prediction template and its reference pixels; using the reference pixels to predict the template pixels under each available prediction mode in turn; and selecting the mode with the best prediction effect as the prediction for the current coding block.
The implementation of the invention is described in further detail below. The prediction method applies to every coding block of the current frame except the first one. Fig. 1 is the flow chart of the prediction scheme; as shown in Fig. 1, the method comprises:
Step 101: according to the already-coded blocks adjacent to the current coding block, set the neighborhood prediction template of the current coding block and the reference pixels of that template.
In this step, the neighborhood prediction template and the reference pixels both consist of pixels in the coded blocks adjacent to the current coding block. The reference pixels are the coded pixels used to predict the template pixels.
Specifically, the template and the reference pixels are determined by the position of the current coding block:
Take the top-left pixel of the current coding block as coordinate (0,0), with the horizontal X axis positive to the right and the vertical Y axis positive downward.
1) When the current coding block is in the first (top) row of the image, only the adjacent block to the left has been coded and can serve as reference. The 8 pixels at (-1,0), (-2,0), (-1,1), (-2,1), (-1,2), (-2,2), (-1,3), (-2,3) form the neighborhood prediction template of the current coding block, and the 4 pixels at (-3,0), (-3,1), (-3,2), (-3,3) serve as the reference pixels. The positions are shown in Fig. 2a: the template consists of the 8 pixels adjacent to the left of the current coding block, with pixels I-L as the reference pixels.
2) When the current block is in the rightmost column of the image, i.e. the last column on the right, only the adjacent blocks to the left and directly above have been coded and can serve as reference. The 20 pixels at (-2,-2), (-1,-2), (0,-2), (1,-2), (2,-2), (3,-2), (-2,-1), (-1,-1), (0,-1), (1,-1), (2,-1), (3,-1), (-1,0), (-2,0), (-1,1), (-2,1), (-1,2), (-2,2), (-1,3), (-2,3) form the neighborhood prediction template, and the 13 pixels at (-3,-3), (-2,-3), (-1,-3), (0,-3), (1,-3), (2,-3), (3,-3), (-3,-2), (-3,-1), (-3,0), (-3,1), (-3,2), (-3,3) serve as the reference pixels. The positions are shown in Fig. 2b: the template is the L-shaped region of 20 pixels, with A-D and I-Q as the reference pixels.
3) When the current block is in the leftmost column of the image, i.e. the first column on the left, either only the adjacent block directly above has been coded and can serve as reference, or the blocks directly above and to the upper right can both serve as reference. The 12 pixels at (0,-2), (1,-2), (2,-2), (3,-2), (4,-2), (5,-2), (0,-1), (1,-1), (2,-1), (3,-1), (4,-1), (5,-1) form the neighborhood prediction template, and the 8 pixels at (0,-3), (1,-3), (2,-3), (3,-3), (4,-3), (5,-3), (6,-3), (7,-3) serve as the reference pixels. The positions are shown in Fig. 2c: the template consists of the 12 pixels adjacent to the top and upper right of the current block, with A-H as the reference pixels. In particular, when the current coding block is in the leftmost column, some prediction modes may use only the block directly above as reference, in which case the template actually applied is the 8 pixels directly above and A-D are the corresponding reference pixels; other modes may use the blocks directly above and to the upper right together, in which case the template actually applied is the 12-pixel template above and to the upper right and A-H are the corresponding reference pixels.
4) When the current block is at any other position in the image, the blocks to the left, directly above and to the upper right can all serve as reference. The 24 pixels at (-2,-2), (-1,-2), (0,-2), (1,-2), (2,-2), (3,-2), (4,-2), (5,-2), (-2,-1), (-1,-1), (0,-1), (1,-1), (2,-1), (3,-1), (4,-1), (5,-1), (-1,0), (-2,0), (-1,1), (-2,1), (-1,2), (-2,2), (-1,3), (-2,3) form the neighborhood prediction template, and the 17 pixels at (-3,-3), (-2,-3), (-1,-3), (0,-3), (1,-3), (2,-3), (3,-3), (4,-3), (5,-3), (6,-3), (7,-3), (-3,-2), (-3,-1), (-3,0), (-3,1), (-3,2), (-3,3) serve as the reference pixels. The positions are shown in Fig. 2d: the template is the L-shaped region of 24 pixels, with A-Q as the reference pixels. As in the previous case, the template and reference pixels actually applied differ somewhat with the prediction mode, and can be selected according to each of the existing prediction modes.
The templates and reference pixels for the above 4 block positions are set in this way for two reasons: on the one hand, the template actually applied differs between prediction modes, so the performance differences between modes can be exposed; on the other hand, predicting from too many coded pixels is avoided as far as possible, to keep the implementation complexity low.
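The four coordinate sets of step 101 can be generated programmatically. The function name and the boolean parameters `(is_top_row, is_leftmost, is_rightmost)` below are illustrative, not from the patent; only the coordinate lists themselves follow the text:

```python
def template_and_reference(is_top_row, is_leftmost, is_rightmost):
    """Sketch of step 101: relative coordinates of the neighborhood
    prediction template and its reference pixels, by block position.
    The block's own top-left pixel is (0, 0)."""
    if is_top_row:
        tpl = [(x, y) for y in range(4) for x in (-1, -2)]           # 8 left pixels
        ref = [(-3, y) for y in range(4)]                            # 4 pixels
    elif is_rightmost:
        tpl = ([(x, y) for y in (-2, -1) for x in range(-2, 4)] +    # 12 above
               [(x, y) for y in range(4) for x in (-1, -2)])         # + 8 left = 20
        ref = ([(x, -3) for x in range(-3, 4)] +                     # 7 above
               [(-3, y) for y in range(-2, 4)])                      # + 6 left = 13
    elif is_leftmost:
        tpl = [(x, y) for y in (-2, -1) for x in range(6)]           # 12 above
        ref = [(x, -3) for x in range(8)]                            # 8 pixels
    else:
        tpl = ([(x, y) for y in (-2, -1) for x in range(-2, 6)] +    # 16 above
               [(x, y) for y in range(4) for x in (-1, -2)])         # + 8 left = 24
        ref = ([(x, -3) for x in range(-3, 8)] +                     # 11 above
               [(-3, y) for y in range(-2, 4)])                      # + 6 left = 17
    return tpl, ref
```

The pixel counts (8/4, 20/13, 12/8, 24/17) match the four cases described above.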
Step 102: determine the available prediction modes according to the position of the current coding block in the image.
As mentioned above, there are 9 intra prediction modes, but for coding blocks at certain positions not all 9 modes are available, and unavailable modes need not be traversed. This step therefore determines the available modes from the position of the current coding block in the image.
Specifically, for a coding block at position 1) of step 101, the available prediction modes are 1, 2 and 8; for a coding block at position 2) of step 101, the available prediction modes are 2, 3 and 7; for coding blocks at the remaining positions, all 9 prediction modes are available.
Step 103: traverse all available prediction modes; under each mode, use the reference pixels of the current coding block's neighborhood prediction template to compute the predicted values of the template pixels.
Under each prediction mode, the pixels actually used by that mode must be selected from among the template pixels; the way the predicted values are computed and the way template pixels are selected under each mode are the same as in the existing standard.
Specifically, in this step, pred4x4L[x,y] denotes the predicted value obtained for the template pixel at (x,y) under a given prediction mode, and p[x,y] denotes a reference pixel value, where the values of x and y follow the neighborhood prediction template used by the current coding block, i.e. are converted according to the coordinates in step 101.
1) Prediction mode 0:
pred4x4L[x,y]=p[x,-1], where the values of x and y follow the neighborhood prediction template used by the current coding block, i.e. are converted according to the coordinates in step 101.
2) Prediction mode 1:
pred4x4L[x,y]=p[-1,y], where the values of x and y follow the neighborhood prediction template used by the current coding block, i.e. are converted according to the coordinates in step 101.
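Modes 0 and 1 can be sketched directly, with `p` a dictionary of reference pixels keyed by (x, y); the function names are illustrative:

```python
def predict_mode0(p, x, y):
    """Mode 0 (vertical): copy the reference pixel above the column."""
    return p[(x, -1)]

def predict_mode1(p, x, y):
    """Mode 1 (horizontal): copy the reference pixel left of the row."""
    return p[(-1, y)]
```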
3) Prediction mode 2:
a) If p[x,-1], x=0..3 and p[-1,y], y=0..3 are all available, then:
pred4x4L[x,y]=(p[0,-1]+p[1,-1]+p[2,-1]+p[3,-1]+
p[-1,0]+p[-1,1]+p[-1,2]+p[-1,3]+4)>>3
where x, y = 0..3.
b) If p[x,-1], x=0..3 are unavailable and p[-1,y], y=0..3 are available, then:
pred4x4L[x,y]=(p[-1,0]+p[-1,1]+p[-1,2]+p[-1,3]+2)>>2
where x, y = 0..3.
c) If p[x,-1], x=0..3 are available and p[-1,y], y=0..3 are unavailable, then:
pred4x4L[x,y]=(p[0,-1]+p[1,-1]+p[2,-1]+p[3,-1]+2)>>2
where x, y = 0..3.
d) Otherwise,
pred4x4L[x,y]=1<<(BitDepthY-1)
where BitDepthY is the number of bits representing luminance; for an 8-bit representation, pred4x4L[x,y]=128. The values of x and y follow the neighborhood prediction template used by the current coding block, i.e. are converted according to the coordinates in step 101.
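A sketch of mode 2 (DC) covering the four cases above; `p` is a dictionary of reference pixels keyed by (x, y), and the availability flags `above_ok`/`left_ok` are illustrative parameter names:

```python
def predict_dc(p, above_ok, left_ok, bit_depth=8):
    """Mode 2 (DC): rounded average of the available neighbors,
    falling back to half the dynamic range (128 for 8-bit video)
    when neither side is available."""
    above = [p[(x, -1)] for x in range(4)] if above_ok else []
    left  = [p[(-1, y)] for y in range(4)] if left_ok else []
    if above and left:
        return (sum(above) + sum(left) + 4) >> 3   # case a
    if above or left:
        return (sum(above) + sum(left) + 2) >> 2   # cases b and c
    return 1 << (bit_depth - 1)                    # case d
```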
4) Prediction mode 3:
pred4x4L[x,y]=(p[x+y,-1]+2*p[x+y+1,-1]+p[x+y+2,-1]+2)>>2
where the values of x and y follow the neighborhood prediction template used by the current coding block, i.e. are converted according to the coordinates in step 101.
5) Prediction mode 4:
a) If x > y,
pred4x4L[x,y]=(p[x-y-2,-1]+2*p[x-y-1,-1]+p[x-y,-1]+2)>>2
b) If x < y,
pred4x4L[x,y]=(p[-1,y-x-2]+2*p[-1,y-x-1]+p[-1,y-x]+2)>>2
c) If x = y,
pred4x4L[x,y]=(p[0,-1]+2*p[-1,-1]+p[-1,0]+2)>>2
where the values of x and y follow the neighborhood prediction template used by the current coding block, i.e. are converted according to the coordinates in step 101.
6) Prediction mode 5:
Let zVR = 2*x - y.
a) If zVR = 0, 2, 4, 6, 8, 10, 12 or 14,
pred4x4L[x,y]=(p[x-(y>>1)-1,-1]+p[x-(y>>1),-1]+1)>>1
b) If zVR = 1, 3, 5, 7, 9, 11 or 13,
pred4x4L[x,y]=(p[x-(y>>1)-2,-1]+2*p[x-(y>>1)-1,-1]+p[x-(y>>1),-1]+2)>>2
c) If zVR = -1,
pred4x4L[x,y]=(p[-1,0]+2*p[-1,-1]+p[0,-1]+2)>>2
d) If zVR = -2,
pred4x4L[x,y]=(p[-1,1]+2*p[-1,0]+p[-1,-1]+2)>>2
e) If zVR = -3,
pred4x4L[x,y]=(p[-1,2]+2*p[-1,1]+p[-1,0]+2)>>2
f) If zVR = -4 or -5,
pred4x4L[x,y]=(p[-1,y-1]+2*p[-1,y-2]+p[-1,y-3]+2)>>2
where the values of x and y follow the neighborhood prediction template used by the current coding block, i.e. are converted according to the coordinates in step 101.
7) Prediction mode 6:
Let zVR = 2*y - x.
a) If zVR = 0, 2, 4, 6, 8 or 10,
pred4x4L[x,y]=(p[-1,y-(x>>1)-1]+p[-1,y-(x>>1)]+1)>>1
b) If zVR = 1, 3, 5, 7 or 9,
pred4x4L[x,y]=(p[-1,y-(x>>1)-2]+2*p[-1,y-(x>>1)-1]+p[-1,y-(x>>1)]+2)>>2
c) If zVR = -1,
pred4x4L[x,y]=(p[-1,0]+2*p[-1,-1]+p[0,-1]+2)>>2
d) If zVR = -2,
pred4x4L[x,y]=(p[1,-1]+2*p[0,-1]+p[-1,-1]+2)>>2
e) If zVR = -3,
pred4x4L[x,y]=(p[2,-1]+2*p[1,-1]+p[0,-1]+2)>>2
f) If zVR = -4,
pred4x4L[x,y]=(p[3,-1]+2*p[2,-1]+p[1,-1]+2)>>2
g) If zVR = -5,
pred4x4L[x,y]=(p[4,-1]+2*p[3,-1]+p[2,-1]+2)>>2
h) If zVR = -6 or -7,
pred4x4L[x,y]=(p[x-1,-1]+2*p[x-2,-1]+p[x-3,-1]+2)>>2
where the values of x and y follow the neighborhood prediction template used by the current coding block, i.e. are converted according to the coordinates in step 101.
8) Prediction mode 7:
a) If y = 0, 2 or 4,
pred4x4L[x,y]=(p[x+(y>>1),-1]+p[x+(y>>1)+1,-1]+1)>>1
b) If y = 1, 3 or 5,
pred4x4L[x,y]=(p[x+(y>>1),-1]+2*p[x+(y>>1)+1,-1]+p[x+(y>>1)+2,-1]+2)>>2
where the values of x and y follow the neighborhood prediction template used by the current coding block (see step 101 and Fig. 2).
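The two branches of mode 7 can be sketched directly from the formulas above (the function name is illustrative, and `p` is a dictionary of reference pixels keyed by (x, y)):

```python
def predict_mode7(p, x, y):
    """Mode 7: 2-tap interpolation on even rows, 3-tap on odd rows,
    over the reference pixels above the block."""
    if y % 2 == 0:
        return (p[(x + (y >> 1), -1)] + p[(x + (y >> 1) + 1, -1)] + 1) >> 1
    return (p[(x + (y >> 1), -1)] + 2 * p[(x + (y >> 1) + 1, -1)]
            + p[(x + (y >> 1) + 2, -1)] + 2) >> 2
```

On a constant reference row both branches reproduce the constant, as the rounding offsets (+1 and +2) are chosen to make the weighted averages round-to-nearest.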
9) Prediction mode 8:
Let zVR = x + 2*y.
a) If zVR = 0, 2, 4, 6 or 8,
pred4x4L[x,y]=(p[-1,y+(x>>1)]+p[-1,y+(x>>1)+1]+1)>>1
b) If zVR = 1, 3, 5 or 7,
pred4x4L[x,y]=(p[-1,y+(x>>1)]+2*p[-1,y+(x>>1)+1]+p[-1,y+(x>>1)+2]+2)>>2
c) If zVR = 9,
pred4x4L[x,y]=(p[-1,4]+3*p[-1,5]+2)>>2
d) If zVR > 9,
pred4x4L[x,y]=p[-1,5]
where the values of x and y follow the neighborhood prediction template used by the current coding block, i.e. are converted according to the coordinates in step 101.
At this point, the predicted values of the template pixels actually used under each prediction mode have been computed.
Step 104: according to the predicted and actual values of the template pixels, select the prediction mode with the best prediction effect as the prediction mode of the current coding block.
The prediction effect on the template pixels is characterized by a cost function, specifically:

Distortion(i) = Σ_(x,y∈template) |templatePixel(x,y) - predMode(i)(x,y)| / templateSize

where i is the mode number, x and y are the coordinates of a pixel in the neighborhood prediction template, predMode(i)(x,y) is the predicted value of the template pixel under mode i, i.e. pred4x4L[x,y] from step 103, templatePixel(x,y) is the actual value of that template pixel, and templateSize is the total number of template pixels actually used under mode i.
Among the available prediction modes, the mode that minimizes the cost function is the one whose predicted values are, in the statistical sense, closest to the actual values — the mode with the best average prediction effect — and it is taken as the most probable prediction mode of the current coding block: mostProbMode = arg min{Distortion(i)}, where mostProbMode is the most probable mode, i.e. the prediction result determined by the method of the invention.
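Putting the cost function and the arg-min selection together, a sketch of step 104; the `predict` callback, assumed here to return the per-mode template predictions as a dict keyed by (x, y), is an illustrative interface:

```python
def most_probable_mode(available_modes, template_pixels, predict):
    """mostProbMode = arg min over available modes of Distortion(i).

    predict(i) returns {(x, y): predicted value} for the template
    pixels actually used by mode i; template_pixels holds the
    actual values, keyed the same way."""
    def distortion(i):
        pred = predict(i)
        return sum(abs(template_pixels[xy] - pred[xy]) for xy in pred) / len(pred)
    return min(available_modes, key=distortion)
```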
This completes the prediction flow of the invention. The prediction mode actually adopted by the current coding block can then be encoded according to the prediction result, in the same way as in the existing standard: when the predicted most probable mode is identical to the mode actually adopted, the mode is transmitted with 1 bit; otherwise the mode actually adopted is transmitted with 4 bits.
Because the most probable mode predicted in this way draws on the adjacent coded blocks in several directions around the current coding block, and all available coding modes are traversed so that the mode with the best prediction effect is selected as the prediction for the current block, the probability that the prediction matches the mode actually adopted increases greatly. This further reduces the coding overhead of the prediction mode and improves the compression efficiency of intra-frame coding.
To further illustrate the advantage of the invention over the prior art, prediction with the invention followed by coding according to the prediction result was simulated on the JM software platform, and compared against H.264/AVC under identical experimental conditions. In the simulations the full sequences were set to use the 4x4 intra predictive coding mode; the results are shown in Tables 1-4. As can be seen, the method of the invention achieves a considerable saving in bitrate.
Table 1  Coding-efficiency comparison, image size CIF (352 × 288)

Foreman   Frame rate   Coded frames   ΔPSNR (dB)   Δ bit rate (%)
QP=28     30           100            -0.032       -1.91
QP=32     30           100            -0.068       -4.58
QP=36     30           100            -0.118       -8.79
QP=40     30           100            -0.219       -12.62

Table 2  Coding-efficiency comparison, image size CIF (352 × 288)

Mother and daughter   Frame rate   Coded frames   ΔPSNR (dB)   Δ bit rate (%)
QP=28                 30           100            -0.030       -0.21
QP=32                 30           100            -0.054       -1.32
QP=36                 30           100            -0.043       -2.82
QP=40                 30           100            -0.117       -5.57

Table 3  Coding-efficiency comparison, image size CIF (352 × 288)

Akiyo     Frame rate   Coded frames   ΔPSNR (dB)   Δ bit rate (%)
QP=28     30           100            -0.077       +1.46
QP=32     30           100            -0.049       +0.12
QP=36     30           100            -0.179       -1.58
QP=40     30           100            +0.020       -3.98

Table 4  Coding-efficiency comparison, image size 4CIF (704 × 576)

Ice       Frame rate   Coded frames   ΔPSNR (dB)   Δ bit rate (%)
QP=28     30           100            -0.109       -0.99
QP=32     30           100            -0.030       -3.18
QP=36     30           100            -0.146       -2.82
QP=40     30           100            -0.064       -4.13
Fig. 3 compares, for the Foreman test sequence, the accuracy of the most probable prediction mode computed by the inventive method with that computed by the H.264/AVC coding standard. As shown in Fig. 3, at two typical QP values the prediction-mode accuracy of the present invention is significantly higher than that of H.264/AVC.
Fig. 4 compares, again for the Foreman test sequence, the coding performance of predicting with the method of the present invention and then encoding against that of the H.264/AVC coding standard. As can be seen from Fig. 4, at the same bit rate the method of the present invention significantly improves overall coding performance.
The above are only preferred embodiments of the present invention and are not intended to limit its scope of protection. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (2)

1. A prediction method for an intra-frame prediction mode, characterized in that the method comprises:
A. setting a neighborhood prediction template of the current coding block and reference pixels of the neighborhood prediction template according to the adjacent coded blocks of the current coding block; wherein the upper-left pixel position of the current coding block is set as coordinate (0, 0), the horizontal X axis is positive to the right, and the vertical Y axis is positive downward;
if the current coding block is in the first row of the image, taking the 8 pixels located at (-1, 0), (-2, 0), (-1, 1), (-2, 1), (-1, 2), (-2, 2), (-1, 3), (-2, 3) as the neighborhood prediction template of the current coding block, and taking the 4 pixels located at (-3, 0), (-3, 1), (-3, 2), (-3, 3) as the reference pixels;
if the current coding block is at the far right of the image and not in the first row, taking the 20 pixels located at (-2, -2), (-1, -2), (0, -2), (1, -2), (2, -2), (3, -2), (-2, -1), (-1, -1), (0, -1), (1, -1), (2, -1), (3, -1), (-1, 0), (-2, 0), (-1, 1), (-2, 1), (-1, 2), (-2, 2), (-1, 3), (-2, 3) as the neighborhood prediction template of the current coding block, and taking the 13 pixels located at (-3, -3), (-2, -3), (-1, -3), (0, -3), (1, -3), (2, -3), (3, -3), (-3, -2), (-3, -1), (-3, 0), (-3, 1), (-3, 2), (-3, 3) as the reference pixels;
if the current coding block is at the far left of the image and not in the first row, taking the 12 pixels located at (0, -2), (1, -2), (2, -2), (3, -2), (4, -2), (5, -2), (0, -1), (1, -1), (2, -1), (3, -1), (4, -1), (5, -1) as the neighborhood prediction template of the current coding block, and taking the 8 pixels located at (0, -3), (1, -3), (2, -3), (3, -3), (4, -3), (5, -3), (6, -3), (7, -3) as the reference pixels;
if the current coding block is not in the first row, at the far right, or at the far left of the image, and is not the first coding block of the image, taking the 24 pixels located at (-2, -2), (-1, -2), (0, -2), (1, -2), (2, -2), (3, -2), (4, -2), (5, -2), (-2, -1), (-1, -1), (0, -1), (1, -1), (2, -1), (3, -1), (4, -1), (5, -1), (-1, 0), (-2, 0), (-1, 1), (-2, 1), (-1, 2), (-2, 2), (-1, 3), (-2, 3) as the neighborhood prediction template of the current coding block, and taking the 17 pixels located at (-3, -3), (-2, -3), (-1, -3), (0, -3), (1, -3), (2, -3), (3, -3), (4, -3), (5, -3), (6, -3), (7, -3), (-3, -2), (-3, -1), (-3, 0), (-3, 1), (-3, 2), (-3, 3) as the reference pixels;
B. according to the position of the current coding block in the image, determining which of the 9 intra directional prediction modes 0-8 defined for 4×4 luma blocks in the H.264/AVC video compression standard are available, and traversing all available prediction modes; in each prediction mode i, calculating the predicted values of the pixels in the neighborhood prediction template using the reference pixels of the neighborhood prediction template, and computing the cost function from the predicted values and the actual values of the pixels in the neighborhood prediction template:

Distortion(i) = Σ_(x, y) | predMode(i)(x, y) − templatePixel(x, y) | / templateSize

and taking the prediction mode corresponding to the minimum cost function value as the prediction result; wherein i is the number of the prediction mode, predMode(i)(x, y) is the predicted value of the pixel at (x, y) of the neighborhood prediction template under prediction mode i, templatePixel(x, y) is the actual value of that pixel, and templateSize is the total number of pixels in the neighborhood prediction template actually used under prediction mode i.
2. The method according to claim 1, characterized in that for a current coding block in the first row of the image that is not the first coding block, the available prediction modes are prediction modes 1, 2 and 8;
for a current coding block at the far right of the image and not in the first row, the available prediction modes are prediction modes 0, 2, 3 and 7;
for current coding blocks other than those in the first row of the image, at the far right of the image, and the first coding block of the image, the available prediction modes are all 9 intra prediction modes.
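The four template/reference-pixel layouts enumerated in claim 1 can be generated compactly. The sketch below (function name and boolean flags are illustrative, not from the patent) reproduces the coordinate sets and the stated pixel counts — 8/4, 20/13, 12/8 and 24/17 for the four block-position cases, with x positive to the right and y positive downward relative to the block's upper-left pixel.

```python
def template_and_reference(first_row, rightmost, leftmost):
    """Return (template, reference) coordinate lists for the four
    block-position cases of claim 1. Coordinates are relative to the
    current 4x4 block's upper-left pixel (0, 0)."""
    # Two columns immediately left of the block (8 px) and the column
    # beyond them (4 px) -- used whenever a left neighbor exists.
    left_t = [(x, y) for y in range(4) for x in (-1, -2)]
    left_r = [(-3, y) for y in range(4)]
    if first_row:
        return left_t, left_r
    if rightmost:
        top_t = [(x, y) for y in (-2, -1) for x in range(-2, 4)]
        top_r = ([(x, -3) for x in range(-3, 4)]
                 + [(-3, y) for y in range(-2, 4)])
        return top_t + left_t, top_r
    if leftmost:
        top_t = [(x, y) for y in (-2, -1) for x in range(0, 6)]
        top_r = [(x, -3) for x in range(0, 8)]
        return top_t, top_r
    # General case: two rows above (extending right) plus two columns left.
    top_t = [(x, y) for y in (-2, -1) for x in range(-2, 6)]
    top_r = ([(x, -3) for x in range(-3, 8)]
             + [(-3, y) for y in range(-2, 4)])
    return top_t + left_t, top_r
```

The lists are generated rather than hard-coded, but as sets they match the pixel positions spelled out in claim 1, which makes the counts (and any later change to the template shape) easy to verify mechanically.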
CN 200910085819 2009-06-01 2009-06-01 A Prediction Method of Intra-frame Prediction Mode Expired - Fee Related CN101572818B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200910085819 CN101572818B (en) 2009-06-01 2009-06-01 A Prediction Method of Intra-frame Prediction Mode

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 200910085819 CN101572818B (en) 2009-06-01 2009-06-01 A Prediction Method of Intra-frame Prediction Mode

Publications (2)

Publication Number Publication Date
CN101572818A CN101572818A (en) 2009-11-04
CN101572818B true CN101572818B (en) 2010-12-01

Family

ID=41232030

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200910085819 Expired - Fee Related CN101572818B (en) 2009-06-01 2009-06-01 A Prediction Method of Intra-frame Prediction Mode

Country Status (1)

Country Link
CN (1) CN101572818B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110068792A (en) * 2009-12-16 2011-06-22 Electronics and Telecommunications Research Institute Adaptive Image Coding Apparatus and Method
CN102196270B (en) * 2010-03-12 2013-04-24 华为技术有限公司 Intra-frame prediction method, device, coding and decoding methods and devices
CN102595122B (en) * 2011-01-14 2014-12-03 华为技术有限公司 Method and equipment for coding and decoding prediction mode, and network system
CN102364948B (en) * 2011-10-28 2013-10-16 上海国茂数字技术有限公司 Method for two-way compensation of video coding in merging mode
CN102685530B (en) * 2012-04-23 2014-05-07 北京邮电大学 Intra depth image frame prediction method based on half-pixel accuracy edge and image restoration
CN103533374B (en) * 2012-07-06 2018-02-16 乐金电子(中国)研究开发中心有限公司 A kind of Video coding, the method and device of decoding
CN105338351B (en) * 2014-05-28 2019-11-12 华为技术有限公司 Method and device for intra-frame prediction encoding, decoding, and array scanning based on template matching
CN107155108B (en) * 2017-06-19 2019-07-12 电子科技大学 An Intra-frame Prediction Method Based on Brightness Variation
CN108337508B (en) * 2018-01-29 2021-09-17 珠海市杰理科技股份有限公司 Intra-frame prediction device and method
CN108596250B (en) * 2018-04-24 2019-05-14 深圳大学 Characteristics of image coding method, terminal device and computer readable storage medium
CN114697662B (en) * 2020-12-30 2025-04-08 中科寒武纪科技股份有限公司 Method, device, equipment and readable storage medium for selecting intra-frame prediction mode
CN114679587B (en) * 2022-03-14 2025-12-16 中山大学 Intra-frame linear prediction method and device by utilizing abscissa and ordinate

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1756364A (en) * 2004-09-30 2006-04-05 Huawei Technologies Co., Ltd. Method for selecting intra-prediction mode

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1756364A (en) * 2004-09-30 2006-04-05 Huawei Technologies Co., Ltd. Method for selecting intra-prediction mode

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Qi Lu. Research on Video Coding Layer Techniques in the H.264 Standard. Proceedings of the 25th Chinese Control Conference (Vol. II). 2006, 1831-1834. *
Yang Jiaping. Optimization and Implementation of an H.264 Encoder on the DM642. Proceedings of the 15th Information Theory Conference of the Chinese Institute of Electronics and the 1st National Network Coding Conference (Vol. I). 2008, 175-178. *

Also Published As

Publication number Publication date
CN101572818A (en) 2009-11-04

Similar Documents

Publication Publication Date Title
CN101572818B (en) A Prediction Method of Intra-frame Prediction Mode
CN101494782B (en) Video encoding method and apparatus, and video decoding method and apparatus
CN101682774B (en) Video encoding method and decoding method, video encoding device and decoding device
CN102282852B (en) Video encoding device and decoding device using prediction mode
CN103314588B (en) Encoding method and device and decoding method and device
CN104038763B (en) Picture coding device and method, and picture decoding apparatus and method
ES2767966T3 (en) Intra-prediction coding under flat representations
CN103096055B (en) The method and apparatus of a kind of image signal intra-frame prediction and decoding
CN100461867C (en) A method for predictive coding of intra-frame images
CN101014125A (en) Method of and apparatus for deciding intraprediction mode
CN101969561B (en) A kind of intra-frame mode selection method, device and a kind of encoder
KR100739714B1 (en) Method and apparatus for determining intra prediction mode
KR20150091456A (en) Method and apparatus for video intra prediction encoding, and method and apparatus for video intra prediction decoding
WO2018010492A1 (en) Rapid decision making method for intra-frame prediction mode in video coding
TW201340717A (en) Multiple symbol bit hiding in the transform unit
WO2016180129A1 (en) Prediction mode selection method, apparatus and device
WO2021077914A1 (en) Video coding method and apparatus, computer device and storage medium
KR20110073263A (en) Intra prediction encoding method and encoding method, and intra prediction encoding apparatus and intra prediction decoding apparatus performing the method
CN102215392B (en) Intra-frame predicting method or device for estimating pixel value
CN100426868C (en) Frame image brightness predictive coding method
KR20030073120A (en) Implementation method for intra prediction mode of movie in embedded system
CN102780886A (en) Rate distortion optimization method
CN101977317B (en) Intra-frame prediction method and device
CN101072355B (en) A Weighted Predictive Motion Compensation Method
CN109951707B (en) A target motion vector selection method, device, electronic device and medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20101201

Termination date: 20130601