CN103227922B - Picture decoding method and picture decoding apparatus - Google Patents
- Publication number
- CN103227922B (application No. CN201310142052.8A)
- Authority
- CN
- China
- Prior art keywords
- block
- information
- motion information
- pixels
- motion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
Landscapes
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
An image decoding method comprising: a step of selecting at least one motion reference block from decoded pixel blocks having motion information; a step of selecting, from the motion reference blocks, at least one available block, each available block being a pixel block that is a candidate for supplying motion information to be applied to a decoding target block, the available blocks having mutually different motion information; a step of decoding input coded data with reference to a code table set in advance according to the number of available blocks, thereby obtaining selection information for identifying a selection block; a step of selecting a selection block from the available blocks according to the selection information; a step of generating a predicted image of the decoding target block using the motion information of the selection block; a step of decoding a prediction error of the decoding target block from the coded data; and a step of obtaining a decoded image from the predicted image and the prediction error.
Description
The present application is a divisional of Chinese national-phase application No. 201080066017.7, which entered the national phase in China on October 8, 2012, and is entitled "Image encoding method and image decoding method".
Technical field
The present invention relates to methods of encoding and decoding moving images and still images.
Background technology
In recent years, ITU-T and ISO/IEC have jointly proposed a moving image encoding method that substantially improves coding efficiency, published as ITU-T Rec. H.264 and ISO/IEC 14496-10 (hereinafter referred to as H.264). In H.264, prediction processing, transform processing, and entropy coding are performed in rectangular block units (16×16 pixel blocks, 8×8 pixel blocks, and so on). In the prediction processing, motion compensation, which performs prediction in the temporal direction, is applied to the rectangular block to be encoded (the encoding target block) with reference to an already encoded frame (a reference frame). In such motion compensation, motion information including a motion vector must be encoded and transmitted to the decoding side; the motion vector expresses the spatial displacement between the encoding target block and the block referenced in the reference frame. Furthermore, when motion compensation uses a plurality of reference frames, a reference frame number must be encoded together with the motion information. Consequently, the amount of code spent on motion information and reference frame numbers can increase.
One known way of obtaining a motion vector in motion-compensated prediction is the direct mode, in which the motion vector to be assigned to the encoding target block is derived from the motion vectors already assigned to encoded blocks, and a predicted image is generated from the derived motion vector (see Patent Documents 1 and 2). Because the direct mode does not encode the motion vector itself, the amount of code for motion information can be reduced. The direct mode is adopted in H.264/AVC, for example.
Patent documents
Patent Document 1: Japanese Patent No. 4020789
Patent Document 2: U.S. Patent No. 7233621
Summary of the invention
In the direct mode, the motion vector of the encoding target block is generated by a fixed derivation: the prediction is computed as the median of the motion vectors of encoded blocks adjacent to the encoding target block. The degree of freedom in computing the motion vector is therefore low.
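For illustration, the fixed median-based derivation described above can be sketched as follows. This is a minimal sketch assuming three neighboring blocks (e.g. left, upper, and upper-right, as in H.264 motion vector prediction); the function name is ours, not the patent's.

```python
def median_mv(mv_a, mv_b, mv_c):
    """Component-wise median of three neighboring motion vectors (x, y),
    the fixed derivation the direct mode relies on."""
    return tuple(sorted(comp)[1] for comp in zip(mv_a, mv_b, mv_c))

# Example: left, upper, and upper-right neighbor vectors
print(median_mv((2, -1), (4, 0), (3, 5)))  # -> (3, 0)
```

Because the result is fully determined by the neighbors, no motion vector bits are spent, but the encoder also has no way to pick a better candidate.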
To raise this degree of freedom, methods have been proposed that select one block from a plurality of encoded blocks and assign its motion vector to the encoding target block. In such methods, selection information identifying the selected block must always be transmitted so that the decoding side can determine which encoded block was selected. Consequently, when the motion vector of the encoding target block is determined by selecting one of a plurality of encoded blocks, the amount of code spent on the selection information increases.
The present invention has been made to solve this problem, and its object is to provide an image encoding method and an image decoding method with high coding efficiency.
An image encoding method according to one embodiment of the present invention comprises: a first step of selecting at least one motion reference block from encoded pixel blocks having motion information; a second step of selecting, from the motion reference blocks, at least one available block, each available block being a pixel block that is a candidate for supplying motion information to be applied to the encoding target block, the available blocks having mutually different motion information; a third step of selecting a selection block from the available blocks; a fourth step of generating a predicted image of the encoding target block using the motion information of the selection block; a fifth step of encoding a prediction error between the predicted image and the original image; and a sixth step of encoding selection information identifying the selection block with reference to a code table set in advance according to the number of available blocks.
An image decoding method according to another embodiment of the present invention comprises: a first step of selecting at least one motion reference block from decoded pixel blocks having motion information; a second step of selecting, from the motion reference blocks, at least one available block, each available block being a pixel block that is a candidate for supplying motion information to be applied to the decoding target block, the available blocks having mutually different motion information; a third step of decoding input coded data with reference to a code table set in advance according to the number of available blocks, thereby obtaining selection information for identifying a selection block; a fourth step of selecting a selection block from the available blocks according to the selection information; a fifth step of generating a predicted image of the decoding target block using the motion information of the selection block; a sixth step of decoding a prediction error of the decoding target block from the coded data; and a seventh step of obtaining a decoded image from the predicted image and the prediction error.
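The candidate-filtering and selection part of the decoding steps above can be sketched as the following flow; the dictionary-based block representation and the function names are illustrative assumptions, not the codec's actual data structures.

```python
def select_available_blocks(motion_reference_blocks):
    """Steps 1-2: keep motion reference blocks that carry motion
    information, discarding any block whose motion information
    duplicates an earlier candidate (candidates must differ)."""
    available, seen = [], set()
    for block in motion_reference_blocks:
        mv = block.get("mv")
        if mv is not None and mv not in seen:
            seen.add(mv)
            available.append(block)
    return available

def decode_selection(decoded_index, available):
    """Steps 3-4: the decoded selection information identifies the
    selection block whose motion information is then applied to the
    decoding target block."""
    return available[decoded_index]

blocks = [{"mv": (1, 0)}, {"mv": None}, {"mv": (1, 0)}, {"mv": (0, 2)}]
available = select_available_blocks(blocks)
print(len(available), decode_selection(1, available)["mv"])  # -> 2 (0, 2)
```

Note that because duplicates are dropped, the decoder's count of available blocks matches the encoder's, which is what lets both sides agree on the code table for the selection information.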
According to the present invention, coding efficiency can be improved.
Brief description of the drawings
Fig. 1 is a block diagram schematically showing the structure of an image encoding device according to a first embodiment.
Fig. 2A is a diagram showing an example of the size of a macroblock, the processing unit of encoding in the image encoding unit shown in Fig. 1.
Fig. 2B is a diagram showing another example of the size of a macroblock, the processing unit of encoding in the image encoding unit shown in Fig. 1.
Fig. 3 is a diagram showing the order in which the image encoding unit shown in Fig. 1 encodes the pixel blocks in an encoding target frame.
Fig. 4 is a diagram showing an example of a motion information frame held by the motion information memory shown in Fig. 1.
Fig. 5 is a flowchart showing an example of a procedure for processing the input image signal of Fig. 1.
Fig. 6A is a diagram showing an example of inter prediction processing performed by the motion compensation unit of Fig. 1.
Fig. 6B is a diagram showing another example of inter prediction processing performed by the motion compensation unit of Fig. 1.
Fig. 7A is a diagram showing an example of the size of a motion compensation block used in inter prediction processing.
Fig. 7B is a diagram showing another example of the size of a motion compensation block used in inter prediction processing.
Fig. 7C is a diagram showing still another example of the size of a motion compensation block used in inter prediction processing.
Fig. 7D is a diagram showing yet another example of the size of a motion compensation block used in inter prediction processing.
Fig. 8A is a diagram showing an example of the arrangement of spatial-direction and temporal-direction motion reference blocks.
Fig. 8B is a diagram showing another example of the arrangement of spatial-direction motion reference blocks.
Fig. 8C is a diagram showing the positions of spatial-direction motion reference blocks relative to the encoding target block shown in Fig. 8B.
Fig. 8D is a diagram showing another example of the arrangement of temporal-direction motion reference blocks.
Fig. 8E is a diagram showing still another example of the arrangement of temporal-direction motion reference blocks.
Fig. 8F is a diagram showing yet another example of the arrangement of temporal-direction motion reference blocks.
Fig. 9 is a flowchart showing an example of a method by which the available-block acquiring unit of Fig. 1 selects available blocks from among the motion reference blocks.
Fig. 10 is a diagram showing an example of available blocks selected from the motion reference blocks shown in Fig. 8 by the method of Fig. 9.
Fig. 11 is a diagram showing an example of the available-block information output by the available-block acquiring unit of Fig. 1.
Fig. 12A is a diagram showing an example of the identity determination of motion information between blocks performed by the available-block acquiring unit of Fig. 1.
Fig. 12B is a diagram showing another example of the identity determination of motion information between blocks performed by the available-block acquiring unit of Fig. 1.
Fig. 12C is a diagram showing still another example of the identity determination of motion information between blocks performed by the available-block acquiring unit of Fig. 1.
Fig. 12D is a diagram showing yet another example of the identity determination of motion information between blocks performed by the available-block acquiring unit of Fig. 1.
Fig. 12E is a diagram showing a further example of the identity determination of motion information between blocks performed by the available-block acquiring unit of Fig. 1.
Fig. 12F is a diagram showing yet a further example of the identity determination of motion information between blocks performed by the available-block acquiring unit of Fig. 1.
Fig. 13 is a block diagram schematically showing the structure of the prediction unit of Fig. 1.
Fig. 14 is a diagram showing a group of motion information output by the temporal-direction motion information acquiring unit of Fig. 13.
Fig. 15 is an explanatory diagram of interpolation processing with fractional-pixel accuracy used in the motion compensation processing of the motion compensation unit of Fig. 13.
Fig. 16 is a flowchart showing an example of the operation of the prediction unit of Fig. 13.
Fig. 17 is a diagram showing how the motion compensation unit of Fig. 13 copies the motion information of a temporal-direction motion reference block to the encoding target block.
Fig. 18 is a block diagram schematically showing the structure of the variable-length encoding unit of Fig. 1.
Fig. 19 is a diagram showing an example of syntax generated according to the available-block information.
Fig. 20 is a diagram showing an example of the binarization of the selection-block information syntax corresponding to the available-block information.
Fig. 21 is an explanatory diagram of the scaling of motion information.
Fig. 22 is a diagram of a syntax structure according to an embodiment.
Fig. 23A is a diagram showing an example of macroblock layer syntax according to the first embodiment.
Fig. 23B is a diagram showing another example of macroblock layer syntax according to the first embodiment.
Fig. 24A is a diagram showing mb_type for a B slice in H.264 and the code table corresponding to mb_type.
Fig. 24B is a diagram showing an example of a code table according to an embodiment.
Fig. 24C is a diagram showing mb_type for a P slice in H.264 and the code table corresponding to mb_type.
Fig. 24D is a diagram showing another example of a code table according to an embodiment.
Fig. 25A is a diagram showing an example of a code table corresponding to mb_type in a B slice according to an embodiment.
Fig. 25B is a diagram showing another example of a code table corresponding to mb_type in a P slice according to an embodiment.
Fig. 26 is a block diagram schematically showing the structure of an image encoding device according to a second embodiment.
Fig. 27 is a block diagram schematically showing the structure of the prediction unit of Fig. 26.
Fig. 28 is a block diagram schematically showing the structure of the second prediction unit of Fig. 27.
Fig. 29 is a block diagram schematically showing the structure of the variable-length encoding unit of Fig. 26.
Fig. 30A is a diagram showing an example of macroblock layer syntax according to the second embodiment.
Fig. 30B is a diagram showing another example of macroblock layer syntax according to the second embodiment.
Fig. 31 is a block diagram schematically showing an image decoding device according to a third embodiment.
Fig. 32 is a block diagram showing in greater detail the encoded-sequence decoding unit shown in Fig. 31.
Fig. 33 is a block diagram showing in greater detail the prediction unit shown in Fig. 31.
Fig. 34 is a block diagram schematically showing an image decoding device according to a fourth embodiment.
Fig. 35 is a block diagram showing in greater detail the encoded-sequence decoding unit shown in Fig. 34.
Fig. 36 is a block diagram showing in greater detail the prediction unit shown in Fig. 34.
Description of reference numerals
10: input image signal; 11: predicted image signal; 12: prediction error image signal; 13: quantized transform coefficients; 14: coded data; 15: decoded prediction error signal; 16: locally decoded image signal; 17: reference image signal; 18: motion information; 20: bitstream; 21: motion information; 25, 26: motion information frame; 30: available-block information; 31: selection-block information; 32: prediction switching information; 33: transform coefficient information; 34: prediction error signal; 35: predicted image signal; 36: decoded image signal; 37: reference image signal; 38: motion information; 39: reference motion information; 40: motion information; 50: encoding control information; 51: feedback information; 60: available-block information; 61: selection-block information; 62: prediction switching information; 70: decoding control information; 71: control information; 80: coded data; 100: image encoding unit; 101: prediction unit; 102: subtractor; 103: transform/quantization unit; 104: variable-length encoding unit; 105: inverse quantization/inverse transform unit; 106: adder; 107: frame memory; 108: motion information memory; 109: available-block acquiring unit; 110: spatial-direction motion information acquiring unit; 111: temporal-direction motion information acquiring unit; 112: motion information selection switch; 113: motion compensation unit; 114: parameter encoding unit; 115: transform coefficient encoding unit; 116: selection-block encoding unit; 117: multiplexing unit; 118: motion information selection unit; 120: output buffer; 150: encoding control unit; 200: image encoding unit; 201: prediction unit; 202: second prediction unit; 203: prediction method selection switch; 204: variable-length encoding unit; 205: motion information acquiring unit; 216: selection-block encoding unit; 217: motion information encoding unit; 300: image decoding unit; 301: encoded-sequence decoding unit; 302: inverse quantization/inverse transform unit; 303: adder; 304: frame memory; 305: prediction unit; 306: motion information memory; 307: available-block acquiring unit; 308: output buffer; 310: spatial-direction motion information acquiring unit; 311: temporal-direction motion information acquiring unit; 312: motion information selection switch; 313: motion compensation unit; 314: motion information selection unit; 320: demultiplexing unit; 321: parameter decoding unit; 322: transform coefficient decoding unit; 323: selection-block decoding unit; 350: decoding control unit; 400: image decoding unit; 401: encoded-sequence decoding unit; 405: prediction unit; 410: second prediction unit; 411: prediction method selection switch; 423: selection-block decoding unit; 424: motion information decoding unit; 901: high-level syntax; 902: sequence parameter set syntax; 903: picture parameter set syntax; 904: slice-level syntax; 905: slice header syntax; 906: slice data syntax; 907: macroblock-level syntax; 908: macroblock layer syntax; 909: macroblock prediction syntax
Embodiment
Hereinafter, image encoding and image decoding methods and devices according to embodiments of the present invention will be described with reference to the accompanying drawings as necessary. In the following embodiments, parts assigned the same reference numerals perform the same operations, and duplicated explanations are omitted.
(First embodiment)
Fig. 1 schematically shows the structure of an image encoding device according to the first embodiment of the present invention. As shown in Fig. 1, this image encoding device has an image encoding unit 100, an encoding control unit 150, and an output buffer 120. The image encoding device may be realized by hardware such as an LSI chip, or may be realized by causing a computer to execute an image encoding program.
The image encoding unit 100 receives, for example, an original image (input image signal) 10 of a moving image or a still image, in units of the pixel blocks into which the original image has been divided. As described in detail later, the image encoding unit 100 compression-encodes the input image signal 10 to generate coded data 14. The generated coded data 14 is temporarily stored in the output buffer 120 and is sent, at output timing managed by the encoding control unit 150, to a storage system (storage medium) or a transmission system (communication line), neither of which is shown.
The encoding control unit 150 controls the entire encoding process of the image encoding unit 100, including feedback control of the generated code amount, quantization control, prediction mode control, and entropy coding control. Specifically, the encoding control unit 150 supplies encoding control information 50 to the image encoding unit 100 and receives feedback information 51 from the image encoding unit 100 as appropriate. The encoding control information 50 includes prediction information, motion information 18, quantization parameter information, and the like. The prediction information includes prediction mode information and block size information. The motion information 18 includes a motion vector, a reference frame number, and a prediction direction (unidirectional prediction or bidirectional prediction). The quantization parameter information includes a quantization parameter, such as a quantization width (quantization step size), and a quantization matrix. The feedback information 51 includes the amount of code generated by the image encoding unit 100 and is used, for example, when determining the quantization parameter.
The image encoding unit 100 encodes the input image signal 10 in units of the pixel blocks (for example, macroblocks, sub-blocks, or single pixels) obtained by dividing the original image. Accordingly, the input image signal 10 is input to the image encoding unit 100 sequentially, in those pixel block units. In the present embodiment, the processing unit of encoding is a macroblock, and the pixel block (macroblock) corresponding to the input image signal 10 that is currently being encoded is simply called the encoding target block. The image frame containing the encoding target block, i.e. the frame being encoded, is called the encoding target frame.
Such an encoding target block may be, for example, a 16×16 pixel block as shown in Fig. 2A, or a 64×64 pixel block as shown in Fig. 2B. The encoding target block may also be a 32×32 pixel block, an 8×8 pixel block, and so on. Moreover, the shape of a macroblock is not limited to the square shapes shown in Figs. 2A and 2B; it may be set to an arbitrary shape, such as a rectangle. The processing unit is likewise not limited to a pixel block such as a macroblock and may be a frame or a field.
The encoding processes for the pixel blocks in the encoding target frame may be performed in any order. In the present embodiment, it is assumed that the pixel blocks are encoded line by line from the upper-left pixel block of the encoding target frame to the lower-right one, that is, in raster scan order, as shown in Fig. 3.
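As a simple illustration, the raster scan order over the pixel blocks of a frame can be generated as follows; the frame and block sizes used here are hypothetical examples.

```python
def raster_scan_blocks(frame_w, frame_h, block):
    """Yield the (x, y) origin of each pixel block from top-left to
    bottom-right, row by row: the raster scan order of Fig. 3."""
    for y in range(0, frame_h, block):
        for x in range(0, frame_w, block):
            yield (x, y)

# A 48x32 frame split into 16x16 macroblocks
print(list(raster_scan_blocks(48, 32, 16)))
# -> [(0, 0), (16, 0), (32, 0), (0, 16), (16, 16), (32, 16)]
```

This ordering matters for the invention because, at each block, only blocks that precede it in this order are already encoded and can serve as motion reference blocks.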
The image encoding unit 100 shown in Fig. 1 comprises: a prediction unit 101, a subtractor 102, a transform/quantization unit 103, a variable-length encoding unit 104, an inverse quantization/inverse transform unit 105, an adder 106, a frame memory 107, a motion information memory 108, and an available-block acquiring unit 109.
In the image encoding unit 100, the input image signal 10 is input to the prediction unit 101 and the subtractor 102. The subtractor 102 receives the input image signal 10 and also receives a predicted image signal 11 from the prediction unit 101, described later. The subtractor 102 calculates the difference between the input image signal 10 and the predicted image signal 11 to generate a prediction error image signal 12.
The transform/quantization unit 103 receives the prediction error image signal 12 from the subtractor 102 and applies transform processing to it to generate transform coefficients. The transform processing is, for example, an orthogonal transform such as the discrete cosine transform (DCT). In another embodiment, the transform/quantization unit 103 may generate transform coefficients using a method such as the wavelet transform or independent component analysis instead of the discrete cosine transform. The transform/quantization unit 103 then quantizes the generated transform coefficients according to the quantization parameter provided by the encoding control unit 150. The quantized transform coefficients (transform coefficient information) 13 are output to the variable-length encoding unit 104 and the inverse quantization/inverse transform unit 105.
The inverse quantization/inverse transform unit 105 inversely quantizes the quantized transform coefficients 13 according to the quantization parameter provided by the encoding control unit 150, i.e. the same quantization parameter as used by the transform/quantization unit 103. The inverse quantization/inverse transform unit 105 then applies an inverse transform to the inversely quantized transform coefficients to generate a decoded prediction error signal 15. The inverse transform performed by the inverse quantization/inverse transform unit 105 matches the inverse of the transform performed by the transform/quantization unit 103; for example, it is the inverse discrete cosine transform (IDCT) or the inverse wavelet transform.
The adder 106 receives the decoded prediction error signal 15 from the inverse quantization/inverse transform unit 105 and receives the predicted image signal 11 from the prediction unit 101. The adder 106 adds the decoded prediction error signal 15 and the predicted image signal 11 to generate a locally decoded image signal 16. The generated locally decoded image signal 16 is stored in the frame memory 107 as a reference image signal 17. The reference image signal 17 stored in the frame memory 107 is later read and referenced by the prediction unit 101 when subsequent encoding target blocks are encoded.
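The transform, quantization, inverse quantization, and local reconstruction carried out by units 103, 105, and 106 can be illustrated with a one-dimensional sketch. The orthonormal DCT and uniform quantizer below are minimal stand-ins for the block transforms actually used; all function names and values are ours.

```python
import math

def dct_1d(x):
    """Orthonormal 1-D DCT-II, the kind of orthogonal transform unit 103 applies."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
            * math.sqrt((1 if k == 0 else 2) / n) for k in range(n)]

def idct_1d(c):
    """Matching inverse transform (DCT-III), as in unit 105."""
    n = len(c)
    return [sum(c[k] * math.sqrt((1 if k == 0 else 2) / n)
                * math.cos(math.pi * (i + 0.5) * k / n) for k in range(n))
            for i in range(n)]

def quantize(coeffs, step):
    return [round(c / step) for c in coeffs]

def dequantize(levels, step):
    return [lv * step for lv in levels]

residual = [10.0, -3.0, 4.0, 1.0]
step = 2.0  # stands in for the quantization width
levels = quantize(dct_1d(residual), step)             # what unit 103 outputs as 13
decoded_res = idct_1d(dequantize(levels, step))       # what unit 105 reconstructs as 15
prediction = [100.0, 100.0, 100.0, 100.0]
local_decoded = [p + r for p, r in zip(prediction, decoded_res)]  # adder 106, signal 16
```

The point of the local loop is that `local_decoded` equals what the decoder will produce, so storing it as the reference image keeps encoder and decoder predictions in step.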
The prediction unit 101 receives the reference image signal 17 from the frame memory 107 and receives available-block information 30 from the available-block acquiring unit 109, described later. The prediction unit 101 also receives reference motion information 19 from the motion information memory 108, described later. Based on the reference image signal 17, the reference motion information 19, and the available-block information 30, the prediction unit 101 generates the predicted image signal 11, the motion information 18 of the encoding target block, and selection-block information 31. Specifically, the prediction unit 101 comprises a motion information selection unit 118, which generates the motion information 18 and the selection-block information 31 from the available-block information 30 and the reference motion information 19, and a motion compensation unit 113, which generates the predicted image signal 11 from the motion information 18. The predicted image signal 11 is supplied to the subtractor 102 and the adder 106. The motion information 18 is stored in the motion information memory 108 for use in the prediction processing of subsequent encoding target blocks. The selection-block information 31 is supplied to the variable-length encoding unit 104. The prediction unit 101 will be described in detail later.
The motion information memory 108 temporarily stores the motion information 18 as reference motion information 19. Fig. 4 shows an example of the structure of the motion information memory 108. As shown in Fig. 4, the motion information memory 108 holds the reference motion information 19 in frame units, forming motion information frames 25. The motion information 18 of each encoded block is input to the motion information memory 108 in turn; as a result, the motion information memory 108 holds multiple motion information frames 25 for different encoding times.
The reference motion information 19 is held in a motion information frame 25 in fixed block units, for example 4×4 pixel block units. The motion vector block 28 shown in Fig. 4 represents a pixel block of the same size as the encoding target block, the available blocks, and the selection block, for example a 16×16 pixel block. Within the motion vector block 28, a motion vector is assigned, for example, to every 4×4 pixel block. Inter prediction processing that makes use of a motion vector block is called motion vector block prediction processing. When the motion information 18 is generated, the reference motion information 19 held in the motion information memory 108 is read by the prediction unit 101. The motion information 18 possessed by an available block, described later, refers to the reference motion information 19 held for the region of the motion information memory 108 in which that available block is located.
The motion information memory 108 is not limited to holding the reference motion information 19 in 4×4 pixel block units; it may hold the reference motion information 19 in other pixel block units. For example, the pixel block unit associated with the reference motion information 19 may be one pixel or a 2×2 pixel block. Furthermore, the shape of the pixel block associated with the reference motion information 19 is not limited to a square and may be set arbitrarily.
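The fixed 4×4-unit storage described above can be sketched as follows; the dictionary-backed memory and the function name are illustrative, assuming block coordinates aligned to the 4×4 grid.

```python
GRID = 4  # reference motion information held in 4x4 pixel units

def store_motion_info(mem, block_x, block_y, block_size, mv):
    """Write one block's motion vector into every 4x4 cell it covers,
    mirroring how the motion information memory holds reference motion
    information at a fixed 4x4 granularity."""
    for gy in range(block_y // GRID, (block_y + block_size) // GRID):
        for gx in range(block_x // GRID, (block_x + block_size) // GRID):
            mem[(gx, gy)] = mv

mem = {}
store_motion_info(mem, 16, 0, 16, (3, -1))  # a 16x16 block covers 4x4 = 16 cells
print(len(mem), mem[(4, 0)])  # -> 16 (3, -1)
```

Storing at this fixed granularity lets a later block look up the motion information of any 4×4 region it overlaps, regardless of how the earlier blocks were partitioned.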
The available-block acquiring unit 109 of Fig. 1 obtains the reference motion information 19 from the motion information memory 108 and, based on the obtained reference motion information 19, selects from among a plurality of encoded blocks the available blocks that can be used in the prediction processing of the prediction unit 101. The selected available blocks are reported, as available-block information 30, to the prediction unit 101 and the variable-length encoding unit 104. The encoded blocks that are candidates for selection as available blocks are called motion reference blocks. The motion reference blocks and the method of selecting available blocks will be described in detail later.
In addition to the transform coefficient information 13, the variable-length encoding unit 104 receives the selection-block information 31 from the prediction unit 101, receives coding parameters such as the prediction information and the quantization parameter from the encoding control unit 150, and receives the available-block information 30 from the available-block acquiring unit 109. The variable-length encoding unit 104 entropy-encodes (for example, with fixed-length codes, Huffman codes, or arithmetic codes) the quantized transform coefficients 13, the selection-block information 31, the available-block information 30, and the coding parameters to generate the coded data 14. The coding parameters include the selection-block information 31 and the prediction information, as well as all parameters required for decoding, such as information on the transform coefficients and information on quantization. The generated coded data 14 is temporarily stored in the output buffer 120 and then supplied to the storage system or transmission system, not shown.
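A code table whose entry count equals the number of available blocks is what keeps the selection information cheap. As an illustrative assumption (the actual tables appear in Figs. 19, 20, 24, and 25), a truncated unary code shows the effect:

```python
def selection_codeword(index, num_available):
    """Truncated unary code with exactly num_available entries: when only
    one block is available no bits are spent, and the last entry drops
    its terminating '0'. Illustrative, not the patent's exact table."""
    if num_available <= 1:
        return ""                  # nothing to signal
    if index == num_available - 1:
        return "1" * index         # last entry needs no terminator
    return "1" * index + "0"

print([selection_codeword(i, 4) for i in range(4)])  # -> ['0', '10', '110', '111']
print(repr(selection_codeword(0, 1)))                # -> ''
```

Because encoder and decoder derive the same available-block count from already-coded data, they agree on the table without any extra signaling, which is how the method avoids the fixed selection-information overhead described in the background section.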
Fig. 5 shows the processing procedure for the input image signal 10. As shown in Fig. 5, first, a predicted image signal 11 is generated by the prediction unit 101 (step S501). In generating the predicted image signal 11 in step S501, one of the available blocks described later is chosen as the selection block, and the predicted image signal 11 is produced using the selection block information 31, the motion information held by the selection block, and the reference image signal 17. The subtracter 102 computes the difference between the predicted image signal 11 and the input image signal 10 to generate a prediction error image signal 12 (step S502).

Next, the transform/quantization unit 103 applies an orthogonal transform and quantization to the prediction error image signal 12 to generate transform coefficient information 13 (step S503). The transform coefficient information 13 and the selection block information 31 are sent to the variable-length coding unit 104 and subjected to variable-length coding, and coded data 14 is generated (step S504). In step S504, the code table is switched according to the selection block information 31 so that the code table has a number of entries equal to the number of available blocks, and the selection block information 31 is variable-length coded. The bit stream 20 of coded data is sent to a storage system or transmission path, not shown.

The transform coefficient information 13 generated in step S503 is inversely quantized and inversely transformed by the inverse quantization/inverse transform unit 105 to become a decoded prediction error signal 15 (step S505). The decoded prediction error signal 15 is added to the reference image signal 17 used in step S501 to become a local decoded image signal 16 (step S506), which is stored in the frame memory 107 as a reference image signal (step S507).
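The local decoding loop of steps S501 to S507 can be sketched as follows. This is a heavily simplified illustration only: the orthogonal transform of step S503 is replaced by an identity transform, blocks are one-dimensional lists, and the function and parameter names (`encode_block`, `qstep`) are assumptions, not part of the embodiment.

```python
# Minimal sketch of steps S501-S507 with a toy identity "transform".
def encode_block(input_block, reference_block, qstep=2):
    # S501: prediction (here the reference itself serves as the predictor)
    predicted = reference_block
    # S502: prediction error signal 12 = input signal 10 - predicted signal 11
    error = [x - p for x, p in zip(input_block, predicted)]
    # S503: transform + quantization (identity transform for brevity)
    coeffs = [round(e / qstep) for e in error]
    # S505: inverse quantization / inverse transform -> decoded error signal 15
    decoded_error = [c * qstep for c in coeffs]
    # S506: local decoded signal 16 = prediction + decoded error,
    # stored as a reference image signal in the frame memory (S507)
    local_decoded = [p + e for p, e in zip(predicted, decoded_error)]
    return coeffs, local_decoded
```

The point of the sketch is only the data flow: the decoder-side reconstruction (S505 to S507) runs inside the encoder so that both sides predict from identical reference pictures.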
Next, each component of the above-described picture coding unit 100 is described in detail.
The picture coding unit 100 of Fig. 1 prepares a plurality of prediction modes in advance, and the prediction modes differ from one another in the method of generating the predicted image signal 11 and in the motion compensation block size. The methods by which the prediction unit 101 generates the predicted image signal 11 are broadly divided into intra prediction (intra-frame prediction), which generates a predicted image using the reference image signal 17 relating to the coding target frame (or field), and inter prediction (inter-frame prediction), which generates a predicted image using the reference image signal 17 relating to one or more already-encoded reference frames (reference fields). The prediction unit 101 selectively switches between intra prediction and inter prediction to generate the predicted image signal 11 of the coding target block.
Fig. 6A shows an example of inter prediction performed by the motion compensation unit 113. In inter prediction, as shown in Fig. 6A, the predicted image signal 11 is generated using the reference image signal 17 of a block 24 at a position spatially displaced, according to the motion vector 18a included in the motion information 18, from a block 23 (also called a prediction block) that lies in the already-encoded reference frame one frame earlier at the same position as the coding target block. That is, the predicted image signal 11 is generated using the position (coordinates) of the coding target block and the reference image signal 17 relating to the block 24 in the reference frame determined by the motion vector 18a included in the motion information 18. In inter prediction, motion compensation with fractional pixel accuracy (for example, 1/2-pixel accuracy or 1/4-pixel accuracy) is possible, and interpolated pixel values are generated by filtering the reference image signal 17. For example, in H.264, interpolation processing up to 1/4-pixel accuracy can be performed on the luminance signal. When motion compensation with 1/4-pixel accuracy is performed, the amount of information of the motion information 18 becomes four times that of integer-pixel accuracy.
In inter prediction, the reference frame is not limited to the one-frame-earlier reference frame shown in Fig. 6A; as shown in Fig. 6B, any already-encoded reference frame can be used. When the reference image signals 17 of a plurality of reference frames at different time positions are held, the information indicating from which time position's reference image signal 17 the predicted image signal 11 is generated is expressed by a reference frame number. The reference frame number is included in the motion information 18. The reference frame number can be changed in region units (picture units, block units, or the like); that is, a different reference frame can be used for each pixel block. As an example, when the reference frame one already-encoded frame earlier is used in prediction, the reference frame number of that region is set to 0, and when the reference frame two already-encoded frames earlier is used in prediction, the reference frame number of that region is set to 1. As another example, when the reference image signal 17 of only one frame is held in the frame memory 107 (the number of reference frames is 1), the reference frame number is always set to 0.
Furthermore, in inter prediction, a block size suitable for the coding target block can be selected from among a plurality of motion compensation block sizes. That is, the coding target block may be divided into a plurality of small pixel blocks, and motion compensation may be performed for each small pixel block. Figs. 7A to 7C show motion compensation block sizes in macroblock units, and Fig. 7D shows motion compensation block sizes in sub-block (pixel blocks of 8×8 pixels or smaller) units. As shown in Fig. 7A, when the coding target block is 64×64 pixels, a 64×64 pixel block, 64×32 pixel block, 32×64 pixel block, 32×32 pixel block or the like can be selected as the motion compensation block. As shown in Fig. 7B, when the coding target block is 32×32 pixels, a 32×32 pixel block, 32×16 pixel block, 16×32 pixel block, 16×16 pixel block or the like can be selected as the motion compensation block. As shown in Fig. 7C, when the coding target block is 16×16 pixels, the motion compensation block can be set to a 16×16 pixel block, 16×8 pixel block, 8×16 pixel block, 8×8 pixel block or the like. As shown in Fig. 7D, when the coding target block is 8×8 pixels, an 8×8 pixel block, 8×4 pixel block, 4×8 pixel block, 4×4 pixel block or the like can be selected as the motion compensation block.
As described above, since a small pixel block (for example, a 4×4 pixel block) in the reference frame used in inter prediction has motion information 18, the shape and motion vector of the optimal motion compensation block can be used in accordance with the local characteristics of the input image signal 10. The macroblocks of Figs. 7A to 7D and the sub-blocks can be combined arbitrarily. When the coding target block is a 64×64 pixel block as shown in Fig. 7A, each of the block sizes shown in Fig. 7B can be selected for each of the four 32×32 pixel blocks obtained by dividing the 64×64 pixel block, so that blocks of 64×64 to 16×16 pixels can be used hierarchically. Similarly, when the block sizes shown in Fig. 7D can also be chosen, block sizes of 64×64 to 4×4 can be used hierarchically.
Next, the motion reference blocks are described with reference to Figs. 8A to 8F.
The motion reference blocks are selected from already-encoded regions (blocks) in the coding target frame and the reference frame, according to a method agreed upon by both the picture coding apparatus of Fig. 1 and the picture decoding apparatus described later. Fig. 8A shows an example of the arrangement of motion reference blocks selected according to the position of the coding target block. In the example of Fig. 8A, nine motion reference blocks A to D and TA to TE are selected from already-encoded regions in the coding target frame and the reference frame. Specifically, the four blocks A, B, C and D adjacent to the coding target block on the left, top, top-right and top-left are selected from the coding target frame as motion reference blocks, and the block TA at the same position as the coding target block together with the four pixel blocks TB, TC, TD and TE adjacent to block TA on the right, bottom, left and top are selected from the reference frame as motion reference blocks. In the present embodiment, the motion reference blocks selected from the coding target frame are called spatial-direction motion reference blocks, and the motion reference blocks selected from the reference frame are called temporal-direction motion reference blocks. The symbol p attached to each motion reference block in Fig. 8A denotes the index of the motion reference block. The indices here are numbered in the order of the temporal-direction and then the spatial-direction motion reference blocks, but this is not a limitation; as long as the indices do not overlap, they need not follow this order. For example, the temporal-direction and spatial-direction motion reference blocks may be numbered in an arbitrary order.
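The nine-block layout of Fig. 8A can be written down as block-unit coordinates. The sketch below follows the left/top/top-right/top-left and right/bottom/left/top positions stated in the text; the helper name `motion_reference_blocks` and the coordinate convention (x to the right, y downward, block units) are illustrative assumptions.

```python
# Illustrative coordinates of the nine motion reference blocks of Fig. 8A
# for a coding target block at block position (bx, by).
def motion_reference_blocks(bx, by):
    spatial = {                 # in the coding target frame
        "A": (bx - 1, by),      # left
        "B": (bx, by - 1),      # top
        "C": (bx + 1, by - 1),  # top-right
        "D": (bx - 1, by - 1),  # top-left
    }
    temporal = {                # in the reference frame
        "TA": (bx, by),         # same (collocated) position
        "TB": (bx + 1, by),     # right of TA
        "TC": (bx, by + 1),     # below TA
        "TD": (bx - 1, by),     # left of TA
        "TE": (bx, by - 1),     # above TA
    }
    return spatial, temporal
```

As the text notes, only the agreement between encoder and decoder matters; any other count or placement of reference blocks would serve equally well.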
The spatial-direction motion reference blocks are not limited to the example shown in Fig. 8A; as shown in Fig. 8B, they may be the blocks (macroblocks, sub-blocks or the like) to which pixels a, b, c and d adjacent to the coding target block belong. In this case, the relative positions (dx, dy) from the top-left pixel e in the coding target block to the respective pixels a, b, c and d are set as shown in Fig. 8C. Here, in the examples shown in Figs. 8A and 8B, the macroblock is depicted as an N×N pixel block.
Alternatively, as shown in Fig. 8D, all of the blocks A1 to A4, B1, B2, C and D adjacent to the coding target block may be chosen as spatial-direction motion reference blocks. In the example of Fig. 8D, the number of spatial-direction motion reference blocks is 8.
The temporal-direction motion reference blocks TA to TE may partially overlap one another as shown in Fig. 8E, or may be arranged apart from one another as shown in Fig. 8F. In Fig. 8E, the overlapping part of temporal-direction motion reference blocks TA and TB is shown hatched. The temporal-direction motion reference blocks are not limited to the block at the position corresponding to the coding target block (the collocated position) and the blocks located around it; they may be blocks arranged at arbitrary positions in the reference frame. For example, a block in the reference frame determined by the position of the coding target block and the motion information 18 held by an arbitrary already-encoded block adjacent to the coding target block may be chosen as a center block (for example, block TA), and this center block and the blocks around it may be chosen as temporal-direction motion reference blocks. Furthermore, the temporal-direction reference blocks may be arranged at non-uniform distances from the center block.
In any of the above cases, as long as the number and positions of the spatial-direction and temporal-direction motion reference blocks are agreed upon in advance between the coding apparatus and the decoding apparatus, the number and positions of the motion reference blocks can be set arbitrarily. The size of a motion reference block need not be the same as that of the coding target block. For example, as shown in Fig. 8D, the size of a motion reference block may be larger or smaller than the coding target block. Moreover, a motion reference block is not limited to a square shape, and may be set to an arbitrary shape such as a rectangle, and to an arbitrary size.
Furthermore, the motion reference blocks and available blocks may be arranged in only one of the temporal direction and the spatial direction. Alternatively, temporal-direction motion reference blocks and available blocks, and likewise spatial-direction motion reference blocks and available blocks, may be arranged according to the slice type, such as P slice or B slice.
Fig. 9 shows the method by which the available-block acquiring unit 109 selects available blocks from the motion reference blocks. An available block is a candidate block whose motion information may be applied to the coding target block, and the available blocks have mutually different motion information. The available-block acquiring unit 109 refers to the reference motion information 19, judges for each motion reference block whether it is an available block according to the method shown in Fig. 9, and outputs the available-block information 30.
As shown in Fig. 9, first, the motion reference block with index p = 0 is selected (S800). In the description of Fig. 9, the motion reference blocks are assumed to be processed in order of index p from 0 to M−1 (M denotes the number of motion reference blocks). It is further assumed that the availability determination processing has finished for the motion reference blocks with indices 0 to p−1, and that p is the index of the motion reference block currently subjected to the availability determination.
The available-block acquiring unit 109 judges whether the motion reference block p has motion information 18, that is, whether at least one motion vector is assigned to it (S801). When the motion reference block p has no motion vector, that is, when the temporal-direction motion reference block p is a block in an I slice having no motion information, or when all small pixel blocks in the temporal-direction motion reference block p have been intra-prediction coded, the processing proceeds to step S805. In step S805, the motion reference block p is judged to be a non-available block.
When it is judged in step S801 that the motion reference block p has motion information, the processing proceeds to step S802. The available-block acquiring unit 109 selects a motion reference block q that has already been selected as an available block (available block q). Here, q is a value smaller than p. The available-block acquiring unit 109 then compares the motion information 18 of the motion reference block p with the motion information 18 of the available block q, and judges whether they have the same motion information (S803). When the motion information 18 of the motion reference block p is judged to be identical to the motion information 18 of the motion reference block q selected as an available block, the processing proceeds to step S805, and the motion reference block p is judged to be a non-available block.
When it is judged in step S803 that, for every available block q satisfying q < p, the motion information 18 of the motion reference block p is not identical to the motion information 18 of the available block q, the processing proceeds to step S804. In step S804, the available-block acquiring unit 109 judges the motion reference block p to be an available block.
When the motion reference block p has been judged to be an available block or a non-available block, the available-block acquiring unit 109 judges whether the availability determination has been performed for all motion reference blocks (S806). When there is a motion reference block for which the availability determination has not yet been performed, for example when p < M−1, the processing proceeds to step S807. The available-block acquiring unit 109 then increments the index p by 1 (step S807), and executes steps S801 to S806 again. When it is judged in step S806 that the availability determination has been performed for all motion reference blocks, the availability determination processing ends.
By executing the above availability determination processing, each motion reference block is judged to be either an available block or a non-available block. The available-block acquiring unit 109 generates the available-block information 30 containing information on the available blocks. Thus, by selecting available blocks from the motion reference blocks, the amount of information relating to the available-block information 30 is reduced, and as a result the amount of coded data 14 can be reduced.
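The determination of Fig. 9 (steps S800 to S807) can be sketched compactly: a motion reference block p is available only if it carries motion information (S801) and that information differs from the motion information of every already-available block q < p (S802/S803). In this illustrative sketch, motion information is modelled as a comparable value (for example a motion vector tuple) or `None` for an intra-coded block, and the function name is an assumption.

```python
# Sketch of the availability determination of Fig. 9.
def select_available_blocks(motion_infos):
    available = []  # indices p already judged to be available blocks
    for p, info in enumerate(motion_infos):       # S800/S807: p = 0 .. M-1
        if info is None:                          # S801: no motion vector
            continue                              # S805: non-available
        if any(motion_infos[q] == info for q in available):
            continue                              # S803 -> S805: duplicate
        available.append(p)                       # S804: available block
    return available
```

Because duplicates are suppressed, the list of available blocks is always a set of mutually different motion information, which is what keeps the selection information small.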
Fig. 10 shows an example of the result of executing the availability determination processing on the motion reference blocks shown in Fig. 8A. In Fig. 10, two spatial-direction motion reference blocks (p = 0, 1) and two temporal-direction motion reference blocks (p = 5, 8) are judged to be available blocks. Fig. 11 shows an example of the available-block information 30 for the example of Fig. 10. As shown in Fig. 11, the available-block information 30 includes the index of each motion reference block, its availability, and the motion reference block name. In the example of Fig. 11, the blocks with indices p = 0, 1, 5, 8 are available blocks, and the number of available blocks is 4. The prediction unit 101 selects the optimal available block from these available blocks as the selection block, and outputs information 31 relating to the selection block (selection block information). The selection block information 31 includes the number of available blocks and the index value of the selected available block. For example, when the number of available blocks is 4, the variable-length coding unit 104 encodes the selection block information 31 using a code table whose maximum number of entries is 4.
In step S801 of Fig. 9, when at least one of the small pixel blocks in the temporal-direction motion reference block p has been intra-prediction coded, the available-block acquiring unit 109 may judge the motion reference block p to be a non-available block. That is, the processing may proceed to step S802 only when all of the small pixel blocks in the temporal-direction motion reference block p have been coded by inter prediction.
Figs. 12A to 12F show examples in which, in the motion information comparison of step S803, the motion information 18 of the motion reference block p is judged to be identical to the motion information 18 of the available block q. Figs. 12A to 12F each show a plurality of hatched blocks and two white blocks. In Figs. 12A to 12F, the hatched blocks are disregarded, and the motion information 18 of the two white blocks is compared. One of the two white blocks is the motion reference block p, and the other is the motion reference block q already determined to be available (available block q). Unless otherwise specified, either of the two white blocks may be the motion reference block p.
Fig. 12A shows an example in which both the motion reference block p and the available block q are spatial-direction blocks. In the example of Fig. 12A, if the motion information 18 of blocks A and B is identical, the motion information 18 is judged to be identical. Here, the sizes of blocks A and B need not be the same.

Fig. 12B shows an example in which one of the motion reference block p and the available block q is the spatial-direction block A and the other is the temporal-direction block TB. In Fig. 12B, there is one block having motion information in the temporal-direction block TB. If the motion information 18 of the temporal-direction block TB is identical to the motion information 18 of the spatial-direction block A, the motion information 18 is judged to be identical. Here, the sizes of blocks A and TB need not be the same.

Fig. 12C shows another example in which one of the motion reference block p and the available block q is the spatial-direction block A and the other is the temporal-direction block TB. Fig. 12C shows a case where the temporal-direction block TB is divided into a plurality of small blocks, a plurality of which have motion information 18. In the example of Fig. 12C, all blocks having motion information 18 have the same motion information 18; if this motion information 18 is identical to the motion information 18 of the spatial-direction block A, the motion information 18 is judged to be identical. Here, the sizes of blocks A and TB need not be the same.

Fig. 12D shows an example in which the motion reference block p and the available block q are both temporal-direction blocks. In this case, if the motion information 18 of blocks TB and TE is identical, the motion information 18 is judged to be identical.

Fig. 12E shows another example in which the motion reference block p and the available block q are both temporal-direction blocks. Fig. 12E shows a case where the temporal-direction blocks TB and TE are each divided into a plurality of small blocks, and each contains a plurality of small blocks having motion information 18. In this case, the motion information 18 is compared for each pair of small blocks in the two blocks, and if the motion information 18 is identical for all small blocks, the motion information 18 of block TB is judged to be identical to the motion information 18 of block TE.

Fig. 12F shows yet another example in which the motion reference block p and the available block q are both temporal-direction blocks. Fig. 12F shows a case where the temporal-direction block TE is divided into a plurality of small blocks, a plurality of which have motion information 18. When all the motion information 18 of block TE is the same motion information 18 and is identical to the motion information 18 of block TD, the motion information 18 of blocks TD and TE is judged to be identical.
In this way, in step S803, it is judged whether the motion information 18 of the motion reference block p and the motion information 18 of the available block q are identical. In the examples of Figs. 12A to 12F, the number of available blocks q compared with the motion reference block p is assumed to be 1, but when the number of available blocks q is 2 or more, the motion information 18 of the motion reference block p may be compared with the motion information 18 of each available block q. Furthermore, when the scaling described later is applied, the scaled motion information 18 serves as the motion information 18 in the above description.
The judgment that the motion information of the motion reference block p and the motion information of the available block q are identical is not limited to the case where the motion vectors included in the motion information match exactly. For example, as long as the norm of the difference between the two motion vectors is within a prescribed range, the motion information of the motion reference block p may be regarded as substantially identical to the motion information of the available block q.
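The relaxed identity test described above can be sketched as a threshold on the norm of the motion vector difference. The concrete threshold value and the function name below are illustrative assumptions; the embodiment only requires that the norm be "within a prescribed range".

```python
# Sketch of the relaxed motion information comparison of step S803:
# two motion vectors are treated as identical when the Euclidean norm
# of their difference is within an (assumed) threshold.
def motion_info_matches(mv_p, mv_q, threshold=1):
    # squared norm avoids a square root; compare against threshold^2
    d2 = (mv_p[0] - mv_q[0]) ** 2 + (mv_p[1] - mv_q[1]) ** 2
    return d2 <= threshold ** 2
```

With `threshold=0` this degenerates to the exact-match comparison; a larger threshold merges near-duplicate candidates and thus shrinks the available-block list further.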
Fig. 13 shows the structure of the prediction unit 101 in more detail. As described above, the prediction unit 101 receives the available-block information 30, the reference motion information 19 and the reference image signal 17, and outputs the predicted image signal 11, the motion information 18 and the selection block information 31. As shown in Fig. 13, the motion information selection unit 118 comprises a spatial-direction motion information acquiring unit 110, a temporal-direction motion information acquiring unit 111 and a motion information switch 112.
The spatial-direction motion information acquiring unit 110 receives the available-block information 30 and the reference motion information 19 relating to the spatial-direction motion reference blocks. The spatial-direction motion information acquiring unit 110 outputs, for each available block located in the spatial direction, a motion information output 18A containing the motion information held by that available block and the index value of the available block. When the information shown in Fig. 11 is input as the available-block information 30, the spatial-direction motion information acquiring unit 110 generates two motion information outputs 18A, each containing an available block and the motion information 19 held by that available block.

The temporal-direction motion information acquiring unit 111 receives the available-block information 30 and the reference motion information 19 relating to the temporal-direction motion reference blocks. The temporal-direction motion information acquiring unit 111 outputs, as a motion information output 18B, the motion information 19 held by the available temporal-direction motion reference block identified by the available-block information 30, together with the index value of the available block. A temporal-direction motion reference block is divided into a plurality of small pixel blocks, and each small pixel block has motion information 19. As shown in Fig. 14, the motion information output 18B from the temporal-direction motion information acquiring unit 111 contains the group of motion information 19 held by the small pixel blocks in the available block. When the motion information output 18B contains a group of motion information 19, motion compensation prediction can be performed on the coding target block in units of the small pixel blocks obtained by dividing the coding target block. When the information shown in Fig. 11 is input as the available-block information 30, the temporal-direction motion information acquiring unit 111 generates two motion information outputs 18B, each containing an available block and the group of motion information 19 held by that available block.

Alternatively, the temporal-direction motion information acquiring unit 111 may obtain the mean value or a representative value of the motion vectors included in the motion information 19 of the respective small pixel blocks, and output this mean or representative value as the motion information output 18B.
The motion information switch 112 of Fig. 13 selects an appropriate available block as the selection block on the basis of the motion information outputs 18A and 18B from the spatial-direction motion information acquiring unit 110 and the temporal-direction motion information acquiring unit 111, and outputs the motion information 18 (or group of motion information 18) corresponding to the selection block to the motion compensation unit 113. The motion information switch 112 also outputs the selection block information 31 relating to the selection block. The selection block information 31 contains the index p or the name of the motion reference block, and is also simply called selection information. The selection block information 31 is not limited to the index p or the name of the motion reference block, and may be any information as long as the position of the selection block can be identified from it.
The motion information switch 112 chooses as the selection block, for example, the available block that minimizes the coding cost derived from the cost equation shown in the following equation (1).
[Equation 1]
J = D + λ × R (1)
Here, J denotes the coding cost, and D denotes the coding distortion, which represents the sum of squared errors between the input image signal 10 and the reference image signal 17. R denotes the code amount estimated by virtual coding, and λ denotes a Lagrange multiplier determined by the quantization width or the like. Instead of equation (1), the coding cost J may be calculated using only the code amount R or only the coding distortion D, or a cost function may be formed from equation (1) using approximated values of the code amount R or the coding distortion D. The coding distortion D is not limited to the sum of squared errors, and may be the sum of absolute values of the prediction error (SAD: sum of absolute differences). The code amount R may be only the code amount relating to the motion information 18. Moreover, the selection block is not limited to the available block that minimizes the coding cost; an available block whose coding cost is within a certain range above the minimum value may also be chosen as the selection block.
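The selection by equation (1) amounts to a minimum search over the candidates. In this illustrative sketch, each candidate carries a precomputed distortion D and rate R (in practice produced by virtual coding); the function name and the tuple layout are assumptions.

```python
# Sketch of selection-block choice by equation (1): J = D + lambda * R.
def choose_selection_block(candidates, lam):
    # candidates: list of (block_index, distortion_D, rate_R)
    best = min(candidates, key=lambda c: c[1] + lam * c[2])
    return best[0]
```

With a larger λ the choice leans toward candidates that cost fewer bits; with λ = 0 it reduces to pure distortion minimization, matching the remark that only R or only D may be used.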
The motion compensation unit 113 derives, on the basis of the motion information (or group of motion information) held by the selection block selected by the motion information selection unit 118, the position of the pixel block to be taken out of the reference image signal 17 as the predicted image signal 11. When a group of motion information is input to the motion compensation unit 113, the motion compensation unit 113 divides the pixel block to be taken out of the reference image signal 17 as the predicted image signal 11 into small pixel blocks (for example, 4×4 pixel blocks), applies the corresponding motion information 18 to each of these small pixel blocks, and thereby obtains the predicted image signal 11 from the reference image signal 17. The position of the block from which the predicted image signal 11 is obtained is, as shown in Fig. 6A, the position displaced in the spatial direction from the small pixel block according to the motion vector 18a included in the motion information 18.
For the motion compensation processing of the coding target block, processing similar to the motion compensation processing of H.264 can be used. Here, as an example, a concrete interpolation method with 1/4-pixel accuracy is described. In interpolation with 1/4-pixel accuracy, the motion vector indicates an integer pixel position when each of its components is a multiple of 4. Otherwise, the motion vector indicates a predicted position corresponding to an interpolated position with fractional accuracy.
[Equation 2]
x_pos = x + (mv_x / 4)
y_pos = y + (mv_y / 4) (2)
Here, x and y denote the indices in the vertical and horizontal directions of the start position (for example, the top-left vertex) of the prediction target block, and x_pos and y_pos denote the corresponding predicted position in the reference image signal 17. (mv_x, mv_y) denotes the motion vector with 1/4-pixel accuracy. For the pixel positions thus obtained, predicted pixels are generated by padding or by interpolation processing of the corresponding pixel positions of the reference image signal 17. Fig. 15 shows an example of predicted pixel generation in H.264. In Fig. 15, the squares marked with capital Latin letters (hatched squares) represent pixels at integer positions, and the squares shown with mesh lines represent interpolated pixels at 1/2-pixel positions. The white squares represent interpolated pixels corresponding to 1/4-pixel positions. For example, in Fig. 15, the interpolation processing of the 1/2 pixels corresponding to the positions of the Latin letters b and h is calculated by the following equation (3).
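Equation (2) can be sketched directly in code. The helper name and the integer-division convention for negative components are illustrative assumptions; the embodiment only specifies that a component that is a multiple of 4 lands on an integer pixel position.

```python
# Sketch of equation (2): mapping a 1/4-pel motion vector to a position.
def predicted_position(x, y, mv_x, mv_y):
    # (mv_x / 4) of equation (2), with floor division as an assumption
    x_pos = x + mv_x // 4
    y_pos = y + mv_y // 4
    # integer pixel position exactly when both components are multiples of 4
    is_integer_pel = (mv_x % 4 == 0) and (mv_y % 4 == 0)
    return x_pos, y_pos, is_integer_pel
```

When `is_integer_pel` is false, the fractional remainder selects one of the interpolated sample positions of Fig. 15, which the filters of equations (3) and (4) then produce.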
[Equation 3]
b = (E − 5 × F + 20 × G + 20 × H − 5 × I + J + 16) >> 5
h = (A − 5 × C + 20 × G + 20 × M − 5 × R + T + 16) >> 5 (3)
Here, the Latin letters (b, h, C1 and so on) in equation (3) and the following equation (4) denote the pixel values of the pixels given the same Latin letters in Fig. 15. ">>" denotes a right shift operation, and ">> 5" corresponds to division by 32. That is, the interpolated pixel at a 1/2-pixel position is calculated using a 6-tap FIR (Finite Impulse Response) filter (tap coefficients: (1, −5, 20, 20, −5, 1)/32).
The interpolation processing of the 1/4 pixels corresponding to the positions of the Latin letters a and d in Fig. 15 is calculated by the following equation (4).
[Equation 4]
a = (G + b + 1) >> 1
d = (G + h + 1) >> 1 (4)
Thus, the interpolated pixel at a 1/4-pixel position is calculated using a 2-tap averaging filter (tap coefficients: (1/2, 1/2)). The interpolation processing of the 1/2 pixel corresponding to the Latin letter j, which lies in the middle of four integer pixel positions, is generated using 6 taps in both the vertical and horizontal directions. Interpolated pixel values at pixel positions other than those illustrated are generated by similar methods.
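Equations (3) and (4) can be sketched as follows. The half-pel sample b applies the 6-tap filter (1, −5, 20, 20, −5, 1)/32 to the integer pixels E..J, and the quarter-pel sample a averages the integer pixel G with b. The function names are assumptions, and the clip to the 8-bit range is added here for safety; the equations in the text show only the filtering itself.

```python
# Sketch of equation (3): 6-tap half-pel interpolation, as for sample b.
def half_pel(E, F, G, H, I, J):
    v = (E - 5 * F + 20 * G + 20 * H - 5 * I + J + 16) >> 5
    return max(0, min(255, v))  # clip to 8-bit range (added assumption)

# Sketch of equation (4): 2-tap averaging filter for a quarter-pel sample.
def quarter_pel(G, b):
    return (G + b + 1) >> 1
```

On a flat area the filters reproduce the input value, and on a linear ramp the half-pel output lands midway between the two central taps, which is the expected behavior of this filter.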
The interpolation processing is not limited to the examples of formulas (3) and (4); other interpolation coefficients may be used. The interpolation coefficients may be fixed values supplied from the encoding control unit 150, or the interpolation coefficients may be optimized for each frame on the basis of the above-described coding cost and the optimized coefficients used for generation.
In the present embodiment, the motion vector block prediction processing has been described with the motion reference block as a macroblock (for example, a 16×16 pixel block) unit, but the processing is not limited to macroblocks; the prediction processing may also be performed in units of 16×8, 8×16, 8×8, 8×4, 4×8, or 4×4 pixel blocks. In that case, the information on the motion vector block is derived in units of the pixel block. The above prediction processing may also be performed in units larger than a 16×16 pixel block, such as 32×32, 32×16, or 64×64 pixel blocks.
When a reference motion vector in the motion vector block is substituted as the motion vector of a small pixel block in the encoding target block, either (A) the negative value (inverted vector) of the reference motion vector may be substituted, or (B) a weighted average, median, maximum, or minimum of the reference motion vector corresponding to the small block and the reference motion vectors adjacent to it may be substituted.
Figure 16 outlines the operation of the prediction unit 101. As shown in Figure 16, first, a reference frame (motion reference frame) containing the temporal-direction motion reference blocks is acquired (step S1501). The motion reference frame is typically the reference frame with the smallest temporal distance from the encoding target frame, that is, the temporally closest past reference frame. For example, the motion reference frame is the frame encoded immediately before the encoding target frame. As another example, any reference frame whose motion information 18 is stored in the motion information memory 108 may be acquired as the motion reference frame. Next, the spatial-direction motion information acquisition unit 110 and the temporal-direction motion information acquisition unit 111 each acquire the available-block information 30 output from the available-block acquisition unit 109 (step S1502). Next, the motion information switching unit 112 selects one of the available blocks as the selected block, for example according to formula (1) (step S1503). Then, the motion compensation unit 113 copies the motion information held by the selected block to the encoding target block (step S1504). At this time, if the selected block is a spatial-direction reference block, the motion information 18 held by the selected block is copied to the encoding target block as shown in Figure 17. If the selected block is a temporal-direction reference block, the group of motion information 18 held by the selected block is copied to the encoding target block together with its position information. Then, motion compensation is performed by the motion compensation unit 113 using the copied motion information 18 or group of motion information 18, and the prediction image signal 11 and the motion information 18 used in the motion compensation are output.
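The flow of steps S1501 to S1504 above can be sketched as follows. The dictionary-based block representation and the cost function standing in for formula (1) are assumptions made purely for illustration:

```python
def select_and_copy(target, available_blocks, cost):
    """Pick the available block with the smallest cost (step S1503) and
    copy its motion information to the target block (step S1504)."""
    selected = min(available_blocks, key=cost)
    target["motion"] = dict(selected["motion"])  # copy, do not alias
    return selected

# hypothetical available blocks (output of step S1502)
blocks = [
    {"name": "SpatialLeft", "motion": {"mv": (4, 0), "ref": 0}},
    {"name": "SpatialUp",   "motion": {"mv": (0, 2), "ref": 0}},
]
target = {"motion": None}
# toy cost: magnitude of the motion vector, standing in for formula (1)
sel = select_and_copy(target, blocks,
                      cost=lambda blk: sum(map(abs, blk["motion"]["mv"])))
```

In the embodiment the cost of formula (1) is a rate-distortion cost; any block-valued cost function can be substituted without changing the selection structure.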
Figure 18 shows the structure of the variable-length encoding unit 104 in more detail. As shown in Figure 18, the variable-length encoding unit 104 includes a parameter encoding unit 114, a transform coefficient encoding unit 115, a selected-block encoding unit 116, and a multiplexing unit 117. The parameter encoding unit 114 encodes the parameters required for decoding other than the transform coefficient information 13 and the selected-block information 31, such as prediction mode information, block size information, and quantization parameter information, and generates encoded data 14A. The transform coefficient encoding unit 115 encodes the transform coefficient information 13 and generates encoded data 14B. The selected-block encoding unit 116 encodes the selected-block information 31 with reference to the available-block information 30, and generates encoded data 14C.
As shown in Figure 19, when the available-block information 30 contains an index and the availability of the motion reference block corresponding to that index, the unavailable motion reference blocks are excluded from the plurality of motion reference blocks set in advance, and only the available motion reference blocks are converted into the syntax element (stds_idx). In Figure 19, 5 of the 9 motion reference blocks are unavailable, so the values of the syntax element stds_idx are assigned consecutively from 0 to the 4 motion reference blocks remaining after those 5 are excluded. In this example, the selected-block information to be encoded is chosen not from 9 candidates but from the 4 available blocks, so the code amount (number of bins) to be assigned decreases on average.
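The renumbering of Figure 19 — dropping the unavailable motion reference blocks and assigning stds_idx consecutively from 0 to the remainder — can be sketched as follows; the availability flags used in the example are hypothetical:

```python
def build_stds_index(availability):
    """Map each available motion reference block (by candidate position)
    to a syntax value stds_idx assigned consecutively from 0, skipping
    the unavailable blocks entirely."""
    table = {}
    idx = 0
    for block, usable in enumerate(availability):
        if usable:
            table[block] = idx
            idx += 1
    return table

# 9 candidate motion reference blocks, 5 of them unavailable (as in Fig. 19)
avail = [True, False, True, False, False, True, False, True, False]
stds = build_stds_index(avail)
# the 4 available blocks receive stds_idx values 0..3
```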
Figure 20 shows an example of the code table of the syntax element stds_idx and its binarization (bin). As shown in Figure 20, the smaller the number of available motion reference blocks, the smaller the average number of bins required to encode stds_idx. For example, when the number of available blocks is 4, stds_idx can be represented with at most 3 bits. The binarization (bin) of stds_idx may be performed so that stds_idx has the same number of bins for every number of available blocks, or may be performed according to a binarization method determined by prior learning. Furthermore, a plurality of binarization methods may be prepared and switched adaptively for each encoding target block.
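The exact table of Figure 20 is not reproduced here, but one binarization consistent with the text — at most 3 bins when four blocks are available, and fewer bins on average for smaller alphabets — is a truncated unary code. Treating stds_idx this way is an illustrative assumption, not necessarily the table of Figure 20:

```python
def truncated_unary(value, num_symbols):
    """Truncated unary binarization: value v becomes v ones followed by a
    terminating zero, except the last symbol of the alphabet, which drops
    the terminator. Smaller alphabets therefore need fewer bins."""
    if value == num_symbols - 1:
        return "1" * value
    return "1" * value + "0"

# with 4 available blocks every stds_idx fits in at most 3 bins
bins = [truncated_unary(v, 4) for v in range(4)]
```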
Entropy coding (for example, fixed-length coding, Huffman coding, or arithmetic coding) can be applied in these encoding units 114, 115, and 116, and the generated encoded data 14A, 14B, and 14C are multiplexed by the multiplexing unit 117 and output.
In the present embodiment, the description has assumed, as an example, that the frame encoded one frame before the encoding target frame is referred to as the reference frame. However, the motion vector may also be scaled using the motion vector and the reference frame number in the reference motion information 19 held by the selected block, and the reference motion information 19 may then be applied to the encoding target block.
This scaling process is described in detail with reference to Figure 21. tc shown in Figure 21 denotes the temporal distance (POC (Picture Order Count) distance) between the encoding target frame and the motion reference frame, and is calculated by the following formula (5). tr[i] shown in Figure 21 denotes the temporal distance between the motion reference frame and the frame i referred to by the selected block, and is calculated by the following formula (6).
[Math 5]
tc = Clip(-128, 127, DiffPicOrderCnt(curPOC, colPOC))   (5)
tr[i] = Clip(-128, 127, DiffPicOrderCnt(colPOC, refPOC))   (6)
Here, curPOC denotes the POC (Picture Order Count) of the encoding target frame, colPOC denotes the POC of the motion reference frame, and refPOC denotes the POC of the frame i referred to by the selected block. Clip(min, max, target) is the following clipping function: it outputs min when target is a value smaller than min, outputs max when target is a value larger than max, and outputs target otherwise. DiffPicOrderCnt(x, y) is a function that calculates the difference between two POCs.
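Formulas (5) and (6) translate directly into code. The POC values in the example are hypothetical:

```python
def clip(lo, hi, target):
    """Clip function of formulas (5)/(6): bound target to [lo, hi]."""
    return lo if target < lo else hi if target > hi else target

def diff_pic_order_cnt(x, y):
    """Difference of two POC (Picture Order Count) values."""
    return x - y

cur_poc, col_poc, ref_poc = 8, 4, 0   # hypothetical POCs
tc = clip(-128, 127, diff_pic_order_cnt(cur_poc, col_poc))   # formula (5)
tr = clip(-128, 127, diff_pic_order_cnt(col_poc, ref_poc))   # formula (6)
```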
If the motion vector of the selected block is MVr = (MVr_x, MVr_y) and the motion vector to be applied to the encoding target block is MV = (MV_x, MV_y), then the motion vector MV is calculated by the following formula (7).
[Math 6]
MV_x = (MVr_x × tc + Abs(tr[i]/2)) / tr[i]
MV_y = (MVr_y × tc + Abs(tr[i]/2)) / tr[i]
(7)
Here, Abs(x) denotes a function that takes the absolute value of x. Thus, in the scaling of the motion vector, the motion vector MVr of the selected block is transformed into the motion vector MV between the encoding target frame and the motion reference frame.
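Formula (7) can be sketched as follows. C-style integer division truncating toward zero is assumed for the "/" of the formula, and the example values are hypothetical:

```python
def div_trunc(a, b):
    """Integer division truncating toward zero (C-style)."""
    q = abs(a) // abs(b)
    return q if (a >= 0) == (b >= 0) else -q

def scale_mv(mvr, tc, tr):
    """Formula (7): scale the selected block's motion vector (MVr_x, MVr_y)
    from temporal distance tr to temporal distance tc, with the rounding
    term Abs(tr/2) added to the numerator."""
    rnd = abs(div_trunc(tr, 2))
    return (div_trunc(mvr[0] * tc + rnd, tr),
            div_trunc(mvr[1] * tc + rnd, tr))

# halving the temporal distance halves the vector: (8, 4) -> (4, 2)
mv = scale_mv((8, 4), tc=2, tr=4)
```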
Another example relating to the scaling of the motion vector is described below.
First, for each slice or each frame, a scaling coefficient (DistScaleFactor[i]) is obtained according to the following formula (8) for every temporal distance tr that can be obtained for the motion reference frame. The number of scaling coefficients is equal to the number of frames referred to by the selected blocks, that is, the number of reference frames.
[Math 7]
tx = (16384 + Abs(tr[i]/2)) / tr[i]
DistScaleFactor[i] = Clip(-1024, 1023, (tc × tx + 32) >> 6)
(8)
The calculation of tx shown in formula (8) may also be tabulated in advance.
When scaling is performed for each encoding target block, the motion vector MV can then be calculated by only multiplication, addition, and shift operations, using the following formula (9).
[Math 8]
MV_x = (DistScaleFactor[i] × MVr_x + 128) >> 8
MV_y = (DistScaleFactor[i] × MVr_y + 128) >> 8
(9)
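Formulas (8) and (9) can be combined into a small sketch: the per-reference-frame factor is precomputed once, and each block's vector is then scaled with only a multiplication, an addition, and a shift. The temporal distances and vectors below are hypothetical values:

```python
def clip(lo, hi, v):
    return lo if v < lo else hi if v > hi else v

def dist_scale_factor(tc, tr):
    """Formula (8): tx = (16384 + Abs(tr/2)) / tr, then
    DistScaleFactor = Clip(-1024, 1023, (tc * tx + 32) >> 6).
    Computed once per slice/frame for each reference distance tr."""
    tx = (16384 + abs(tr) // 2) // tr
    return clip(-1024, 1023, (tc * tx + 32) >> 6)

def scale_mv_by_factor(dsf, mvr):
    """Formula (9): per-block scaling using only *, +, and >> 8."""
    return ((dsf * mvr[0] + 128) >> 8, (dsf * mvr[1] + 128) >> 8)

dsf = dist_scale_factor(tc=2, tr=4)   # tx = 4096, factor = 128
mv = scale_mv_by_factor(dsf, (8, 4))  # same result as formula (7) here
```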
When such scaling is performed, the prediction unit 101 and the available-block acquisition unit 109 both process the motion information 18 after scaling has been applied. When scaling is performed, the reference frame referred to by the encoding target block becomes the motion reference frame.
Figure 22 shows the syntax structure used by the image encoding unit 100. As shown in Figure 22, the syntax mainly comprises three parts: high-level syntax 901, slice-level syntax 904, and macroblock-level syntax 907. The high-level syntax 901 holds syntax information of the layers above the slice level. The slice-level syntax 904 holds the information required for each slice, and the macroblock-level syntax 907 holds the data required for each of the macroblocks shown in Figures 7A to 7D.
Each part comprises more detailed syntax. The high-level syntax 901 comprises sequence-level and picture-level syntax such as sequence parameter set syntax 902 and picture parameter set syntax 903. The slice-level syntax 904 comprises slice header syntax 905, slice data syntax 906, and the like. The macroblock-level syntax 907 comprises macroblock layer syntax 908, macroblock prediction syntax 909, and the like.
Figures 23A and 23B show examples of the macroblock layer syntax. available_block_num shown in Figures 23A and 23B denotes the number of available blocks; when this value is larger than 1, the selected-block information must be encoded. stds_idx denotes the selected-block information, and stds_idx is encoded using the aforementioned code table corresponding to the number of available blocks.
Figure 23A shows the syntax when the selected-block information is encoded after mb_type. stds_idx is encoded when the mode indicated by mb_type is a predetermined size or a predetermined mode (TARGET_MODE) and available_block_num is larger than 1. For example, stds_idx is encoded when the block size for which the motion information of the selected block can be used is 64×64, 32×32, or 16×16 pixels, or in the case of direct mode.
Figure 23B shows the syntax when the selected-block information is encoded before mb_type. stds_idx is encoded when available_block_num is larger than 1. If available_block_num is 0, conventional motion compensation as typified by H.264 is performed, and mb_type is therefore encoded.
Syntax elements not specified in the present invention may also be inserted between the rows of the tables shown in Figures 23A and 23B, and descriptions relating to other conditional branches may also be included. Alternatively, the syntax tables may be divided or merged into a plurality of tables. The same terms need not necessarily be used, and may be changed arbitrarily depending on the mode of use. Furthermore, each syntax element described in this macroblock layer syntax may be changed so as to be explicitly described in the macroblock data syntax described later.
Furthermore, the information of mb_type can be reduced by using the information of stds_idx. Figure 24A shows mb_type for a B slice in H.264 and the code table corresponding to mb_type. N shown in Figure 24A is a value denoting the size of the encoding target block, such as 16, 32, or 64, and M is half the value of N. Accordingly, when mb_type is 4 to 21, the encoding target block is partitioned into rectangular blocks. L0, L1, and Bi in Figure 24A denote unidirectional prediction (List0 direction only), unidirectional prediction (List1 direction only), and bidirectional prediction, respectively. When the encoding target block is partitioned into rectangular blocks, mb_type contains, for each of the two rectangular blocks in the encoding target block, information indicating which of L0, L1, and Bi prediction has been performed. B_Sub denotes that the above processing is performed for each of the four pixel blocks into which the macroblock is divided. For example, when the encoding target block is a 64×64-pixel macroblock, mb_type is further assigned and encoded for each of the four 32×32 pixel blocks obtained by dividing this macroblock into four.
Here, when the selected block indicated by stds_idx is SpatialLeft (the pixel block adjacent to the left of the encoding target block), the motion information of the pixel block adjacent to the left of the encoding target block is set as the motion information of the encoding target block, so stds_idx has the same meaning as performing prediction on the encoding target block with the horizontally long rectangular blocks indicated by mb_type = 4, 6, 8, 10, 12, 14, 16, 18, and 20 of Figure 24A. Likewise, when the selected block indicated by stds_idx is SpatialUp, the motion information of the block adjacent above the encoding target block is set as the motion information of the encoding target block, so stds_idx has the same meaning as performing prediction with the vertically long rectangular blocks indicated by mb_type = 5, 7, 9, 11, 13, 15, 17, 19, and 21. Accordingly, by using stds_idx, the code table can be reduced by removing the rows mb_type = 4 to 21 of Figure 24A, as shown in Figure 24B. Similarly, for mb_type in a P slice in H.264 and the corresponding code table shown in Figure 24C, the number of mb_type entries in the code table can be reduced as shown in Figure 24D.
Alternatively, the information of stds_idx may be included in the information of mb_type and encoded. Figure 25A shows a code table in which the information of stds_idx is included in the information of mb_type, as an example of mb_type for a B slice and the corresponding code table. B_STDS_X (X = 0, 1, 2) of Figure 25A denotes the modes corresponding to stds_idx, and as many B_STDS_X entries as there are available blocks are added (in Figure 25A, the number of available blocks is 3). Similarly, Figure 25B shows another example relating to mb_type for a P slice. The description of Figure 25B is the same as for the B slice and is therefore omitted.
The order and binarization of mb_type are not limited to the examples shown in Figures 25A and 25B; mb_type may be encoded according to another order and another binarization method. B_STDS_X and P_STDS_X need not be consecutive, and may be arranged between the other mb_type entries. The binarization may also be designed based on selection frequencies learned in advance.
The present invention can also be applied to an extended macroblock in which a plurality of macroblocks are grouped together for motion compensation prediction. Furthermore, in the present embodiment, the scanning order of encoding may be arbitrary; for example, the present invention can also be applied to raster scanning, Z-scanning, and the like.
As described above, the image encoding apparatus of the present embodiment selects available blocks from a plurality of motion reference blocks, generates information for identifying the motion reference block to be applied to the encoding target block according to the number of selected available blocks, and encodes this information. Therefore, with the image encoding apparatus according to the present embodiment, motion compensation can be performed in units of small pixel blocks finer than the encoding target block while the code amount relating to motion vector information is reduced, so that high encoding efficiency can be achieved.
(Second Embodiment)
Figure 26 shows an image encoding apparatus according to the second embodiment of the present invention. In the second embodiment, mainly the portions and operations that differ from the first embodiment are described. As shown in Figure 26, in the image encoding unit 200 of the present embodiment, the structures of the prediction unit 201 and the variable-length encoding unit 204 differ from those of the first embodiment. As shown in Figure 27, the prediction unit 201 includes a first prediction unit 101 and a second prediction unit 202, and generates the predicted image signal 11 by selectively switching between the first and second prediction units 101 and 202. The first prediction unit 101 has the same structure as the prediction unit 101 (Figure 1) of the first embodiment, and generates the predicted image signal 11 according to a prediction scheme (first prediction scheme) that performs motion compensation using the motion information 18 held by the selected block. The second prediction unit 202 generates the predicted image signal 11 according to a prediction scheme (second prediction scheme) that performs motion compensation using one motion vector for the encoding target block, as in H.264. The second prediction unit 202 generates a predicted image signal 11B using the input image signal 10 and the reference image signal 17 from the frame memory.
Figure 28 outlines the structure of the second prediction unit 202. As shown in Figure 28, the second prediction unit 202 has a motion information acquisition unit 205, which generates motion information 21 using the input image signal 10 and the reference image signal 17, and a motion compensation unit 113 (Figure 1), which generates a predicted image signal 11A using the reference image signal 17 and the motion information 21. The motion information acquisition unit 205 obtains the motion vector to be assigned to the encoding target block on the basis of the input image signal 10 and the reference image signal 17, for example by block matching. As the evaluation criterion of the matching, a value obtained by accumulating, for each pixel, the difference between the input image signal 10 and the interpolated image after matching is used.
The motion information acquisition unit 205 may also determine the optimal motion vector using a value obtained by transforming the difference between the predicted image signal 11 and the input image signal 10. The optimal motion vector may also be determined in consideration of the magnitude of the motion vector and the code amounts of the motion vector and the reference frame number, or by using formula (1). The matching may be performed according to search range information provided from outside the image encoding apparatus, or may be performed in stages for each pixel precision. Alternatively, the search processing may be omitted, and motion information provided by the encoding control unit 150 may be used as the output 21 of the motion information acquisition unit 205.
The prediction unit 201 of Figure 27 further includes a prediction scheme switching unit 203, which selects and outputs one of the predicted image signal 11A from the first prediction unit 101 and the predicted image signal 11B from the second prediction unit 202. For example, the prediction scheme switching unit 203 obtains, for each of the predicted image signals 11A and 11B, the coding cost using the input image signal 10, for example according to formula (1), selects whichever of the predicted image signals 11A and 11B has the smaller coding cost, and outputs it as the predicted image signal 11. The prediction scheme switching unit 203 also outputs, together with the motion information 18 and the selected-block information 31, prediction switching information 32 indicating from which of the first prediction unit 101 and the second prediction unit 202 the output predicted image signal 11 originated. The output motion information 18 is encoded by the variable-length encoding unit 204 and is then multiplexed into the encoded data 14.
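The switching performed by the prediction scheme switching unit can be sketched as a minimum-cost selection. The cost function here (a simple sum standing in for the coding cost of formula (1)) and the list-valued signals are assumptions for illustration only:

```python
def select_prediction(pred_a, pred_b, cost):
    """Choose between the signal from the 1st prediction unit (pred_a)
    and the 2nd prediction unit (pred_b), keeping the cheaper one and a
    switching flag recording which unit produced it."""
    if cost(pred_a) <= cost(pred_b):
        return pred_a, 0   # flag 0: 1st prediction unit selected
    return pred_b, 1       # flag 1: 2nd prediction unit selected

# hypothetical residual-like signals; sum() stands in for formula (1)
signal, flag = select_prediction([1, 2], [1, 1], cost=sum)
```

The returned flag plays the role of the prediction switching information 32.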
Figure 29 outlines the structure of the variable-length encoding unit 204. The variable-length encoding unit 204 shown in Figure 29 includes, in addition to the structure of the variable-length encoding unit 104 shown in Figure 18, a motion information encoding unit 217. The selected-block encoding unit 216 of Figure 29 differs from the selected-block encoding unit 116 of Figure 18 in that it encodes the prediction switching information 32 and generates encoded data 14D. When the first prediction unit 101 performs the prediction processing, the selected-block encoding unit 216 further encodes the available-block information 30 and the selected-block information 31. The encoded available-block information 30 and selected-block information 31 are included in the encoded data 14D. When the second prediction unit 202 performs the prediction processing, the motion information encoding unit 217 encodes the motion information 18 and generates encoded data 14E. The selected-block encoding unit 216 and the motion information encoding unit 217 each determine which of the first prediction unit 101 and the second prediction unit 202 performed the prediction processing on the basis of the prediction switching information 32, which indicates whether the predicted image was generated by motion compensation prediction using the motion information of the selected block.
The multiplexing unit 117 receives the encoded data 14A, 14B, 14D, and 14E from the parameter encoding unit 114, the transform coefficient encoding unit 115, the selected-block encoding unit 216, and the motion information encoding unit 217, and multiplexes the received encoded data 14A, 14B, 14D, and 14E.
Figures 30A and 30B each show an example of the macroblock layer syntax of the present embodiment. available_block_num shown in Figure 30A denotes the number of available blocks; when this value is larger than 1, the selected-block encoding unit 216 encodes the selected-block information 31. stds_flag is a flag indicating whether the motion information of the selected block is used as the motion information of the encoding target block in the motion compensation prediction, that is, a flag indicating which of the first prediction unit 101 and the second prediction unit 202 the prediction scheme switching unit 203 has selected. When the number of available blocks is larger than 1 and stds_flag is 1, the motion information held by the selected block is used in the motion compensation prediction. When stds_flag is 0, the motion information of the selected block is not used; instead, the motion information 18 itself is encoded directly, as in H.264, or its prediction difference value is encoded. stds_idx denotes the selected-block information, and the code table corresponding to the number of available blocks is as described above.
Figure 30A shows the syntax when the selected-block information is encoded after mb_type. stds_flag and stds_idx are encoded only when the mode indicated by mb_type is a predetermined size or a predetermined mode. For example, stds_flag and stds_idx are encoded when the block size for which the motion information of the selected block can be used is 64×64, 32×32, or 16×16, or in the case of direct mode.
Figure 30B shows the syntax when the selected-block information is encoded before mb_type. For example, when stds_flag is 1, mb_type need not be encoded. When stds_flag is 0, mb_type is encoded.
As described above, the image encoding apparatus of the second embodiment selectively switches between the first prediction unit 101 of the first embodiment and the second prediction unit 202 using a prediction scheme such as H.264, and compression-encodes the input image signal so that the coding cost is reduced. Therefore, the image encoding apparatus of the second embodiment further improves encoding efficiency compared with the image encoding apparatus of the first embodiment.
(Third Embodiment)
Figure 31 outlines an image decoding apparatus according to the third embodiment. As shown in Figure 31, this image decoding apparatus includes an image decoding unit 300, a decoding control unit 350, and an output buffer 308. The image decoding unit 300 is controlled by the decoding control unit 350. The image decoding apparatus of the third embodiment corresponds to the image encoding apparatus of the first embodiment. That is, the decoding processing by the image decoding apparatus of Figure 31 and the encoding processing by the image encoding apparatus of Figure 1 have a complementary relationship. The image decoding apparatus of Figure 31 may be realized by hardware such as an LSI chip, or may be realized by causing a computer to execute an image decoding program.
The image decoding apparatus of Figure 31 includes an encoded sequence decoding unit 301, an inverse quantization/inverse transform unit 302, an adder 303, a frame memory 304, a prediction unit 305, a motion information memory 306, and an available-block acquisition unit 307. In the image decoding unit 300, encoded data 80 from a storage system or transmission system (not shown) is input to the encoded sequence decoding unit 301. This encoded data 80 corresponds, for example, to the encoded data 14 transmitted in multiplexed form from the image encoding apparatus of Figure 1.
In the present embodiment, the pixel block (for example, a macroblock) that is the decoding target is simply referred to as the decoding target block. The image frame containing the decoding target block is referred to as the decoding target frame.
In the encoded sequence decoding unit 301, decoding by syntax analysis is performed according to the syntax for every frame or field. Specifically, the encoded sequence decoding unit 301 sequentially performs variable-length decoding on the encoded sequence of each syntax element, and decodes the encoding parameters relating to the decoding target block, including transform coefficient information 33, selected-block information 61, and prediction information such as block size information and prediction mode information.
In the present embodiment, the decoding parameters include the transform coefficients 33, the selected-block information 61, and the prediction information, and include all parameters required for decoding, such as the information on the transform coefficients and the information on quantization. The prediction information, the information on the transform coefficients, and the information on quantization are input to the decoding control unit 350 as control information 71. The decoding control unit 350 supplies each part of the image decoding unit 300 with decoding control information 70 including the parameters required for decoding, such as the prediction information and the quantization parameters.
As described later, the encoded sequence decoding unit 301 decodes the encoded data 80 and thereby obtains the prediction information and the selected-block information 61 simultaneously. The motion information 38 including the motion vector and the reference frame number need not be decoded.
The transform coefficients 33 decoded by the encoded sequence decoding unit 301 are sent to the inverse quantization/inverse transform unit 302. The various items of information on quantization decoded by the encoded sequence decoding unit 301, namely the quantization parameters and the quantization matrix, are provided to the decoding control unit 350 and are loaded into the inverse quantization/inverse transform unit 302 at the time of inverse quantization. The inverse quantization/inverse transform unit 302 inversely quantizes the transform coefficients 33 according to the loaded information on quantization, and then performs an inverse transform process (for example, an inverse discrete cosine transform) to obtain a prediction error signal 34. The inverse transform process by the inverse quantization/inverse transform unit 302 of Figure 31 is the inverse of the transform process by the transform/quantization unit of Figure 1. For example, when a wavelet transform is performed by the image encoding apparatus (Figure 1), the inverse quantization/inverse transform unit 302 performs the corresponding inverse quantization and inverse wavelet transform.
The prediction error signal 34 restored by the inverse quantization/inverse transform unit 302 is input to the adder 303. The adder 303 adds the prediction error signal 34 and the predicted image signal 35 generated in the prediction unit 305 described later, and generates a decoded image signal 36. The generated decoded image signal 36 is output from the image decoding unit 300, temporarily stored in the output buffer 308, and then output according to the output timing managed by the decoding control unit 350. This decoded image signal 36 is also stored in the frame memory 304 as a reference image signal 37. The reference image signal 37 is read out sequentially from the frame memory 304 for each frame or field and input to the prediction unit 305.
The available-block acquisition unit 307 receives reference motion information 39 from the motion information memory 306 described later, and outputs available-block information 60. The operation of the available-block acquisition unit 307 is the same as that of the available-block acquisition unit 109 (Figure 1) described in the first embodiment.
The motion information memory 306 receives motion information 38 from the prediction unit 305 and temporarily stores it as reference motion information 39. Figure 4 shows an example of the motion information memory 306. The motion information memory 306 holds a plurality of motion information frames 26 having different decode times. The decoded motion information 38, or group of motion information 38, is stored as reference motion information 39 in the motion information frame 26 corresponding to its decode time. In a motion information frame 26, the reference motion information 39 is stored, for example, in units of 4×4 pixel blocks. The reference motion information 39 held by the motion information memory 306 is read out and referred to by the prediction unit 305 when the motion information 38 of the decoding target block is generated.
Next, the motion reference blocks and the available blocks of the present embodiment are described. A motion reference block is a candidate block selected from the already-decoded region according to a method predetermined by both the aforementioned picture coding apparatus and the picture decoding apparatus. Fig. 8A shows an example: four motion reference blocks in the decoding target frame and five motion reference blocks in the reference frame, nine motion reference blocks in total. The motion reference blocks A, B, C and D in the decoding target frame of Fig. 8A are the blocks adjacent to the decoding target block on the left, top, top right and top left, respectively. In the present embodiment, a motion reference block selected from the decoding target frame containing the decoding target block is called a spatial-direction motion reference block. The motion reference block TA in the reference frame is the pixel block located at the same position in the reference frame as the decoding target block, and the pixel blocks TB, TC, TD and TE adjoining this motion reference block TA are also selected as motion reference blocks. A motion reference block selected from the pixel blocks in the reference frame is called a temporal-direction motion reference block, and the frame in which the temporal-direction motion reference blocks lie is called the motion reference frame.
The spatial-direction motion reference blocks are not limited to the example of Fig. 8A; as shown in Fig. 8B, the pixel blocks to which the pixels a, b, c and d adjacent to the decoding target block belong may also be selected as spatial-direction motion reference blocks. In this case, the relative positions (dx, dy) of the pixels a, b, c and d with respect to the top-left pixel of the decoding target block are as shown in Fig. 8C.
Alternatively, as shown in Fig. 8D, all the pixel blocks A1 to A4, B1, B2, C and D adjacent to the decoding target block may be selected as spatial-direction motion reference blocks. In Fig. 8D, the number of spatial-direction motion reference blocks is eight.
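The arrangement of Fig. 8A can be sketched as follows. The concrete block-unit offsets of the neighbours are an assumption for illustration; the patent only fixes their relative directions (left, top, top right, top left, and the collocated block with its neighbours):

```python
# Illustrative coordinates (in block units) of the motion reference
# blocks of Fig. 8A: four spatial neighbours A-D of the target block,
# plus the collocated temporal block TA and its four neighbours TB-TE.

def spatial_reference_blocks(bx, by):
    """Left, top, top-right and top-left neighbours of block (bx, by)."""
    return {"A": (bx - 1, by), "B": (bx, by - 1),
            "C": (bx + 1, by - 1), "D": (bx - 1, by - 1)}

def temporal_reference_blocks(bx, by):
    """Collocated block TA in the motion reference frame, plus TB-TE."""
    return {"TA": (bx, by), "TB": (bx - 1, by),
            "TC": (bx + 1, by), "TD": (bx, by - 1), "TE": (bx, by + 1)}

refs = {**spatial_reference_blocks(5, 5), **temporal_reference_blocks(5, 5)}
print(len(refs))  # 9 motion reference blocks in total, as in Fig. 8A
```

Because both coder and decoder derive these positions by the same rule, no position information needs to be transmitted, which is what makes the free choice of number and placement described below possible.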
Further, as shown in Fig. 8E, the temporal-direction motion reference blocks TA to TE may partly overlap one another, or, as shown in Fig. 8F, may be separated from one another. A temporal-direction motion reference block need not be the block at the collocated position or a block located around it; it may be a pixel block at any position in the motion reference frame. For example, using the motion information of a decoded block adjacent to the decoding target block, the reference block indicated by the motion vector contained in that motion information may be chosen as the center (block TA) of the motion reference blocks. The temporal-direction reference blocks also need not be arranged at equal intervals.
In the method of selecting motion reference blocks described above, the motion reference blocks can be selected in any number and at any positions, as long as both the picture coding apparatus and the picture decoding apparatus share the information on the number and positions of the spatial-direction and temporal-direction motion reference blocks. The size of a motion reference block need not be the same as the size of the decoding target block. For example, in Fig. 8D, a motion reference block may be larger or smaller than the decoding target block, and may have any size. The shape of a motion reference block is not limited to a square; it may be rectangular.
Next, the available blocks are described. An available block is a pixel block selected from the motion reference blocks, and is a pixel block whose motion information can be applied to the decoding target block. The available blocks have mutually different motion information. From, for example, the nine motion reference blocks in the decoding target frame and the reference frame shown in Fig. 8A, the available blocks are selected by executing the available-block determination process shown in Fig. 9. Fig. 10 shows the result obtained by executing the available-block determination process of Fig. 9. In Fig. 10, the hatched pixel blocks represent unavailable blocks, and the white blocks represent available blocks. In this example, two of the spatial-direction motion reference blocks and two of the temporal-direction motion reference blocks, four in total, are determined to be available blocks. The motion information selecting unit 314 in the prediction unit 305 selects, according to the selection block information 61 received from the selection block decoding unit 323, the optimal one of these available blocks arranged in the temporal and spatial directions as the selection block.
Next, the available-block acquiring unit 307 is described. The available-block acquiring unit 307 has the same function as the available-block acquiring unit 109 of the first embodiment: it obtains the reference motion information 39 from the motion information memory 306, and outputs, for each motion reference block, available-block information 60 indicating whether the block is an available block or an unavailable block.
The operation of the available-block acquiring unit 307 is described with reference to the flowchart of Fig. 9. First, the available-block acquiring unit 307 determines whether the motion reference block (of index p) has motion information (step S801), that is, whether at least one small pixel block in the motion reference block p has motion information. When the motion reference block p is determined to have no motion information, that is, when the temporal-direction motion reference block is a block in an I slice having no motion information, or when all the small pixel blocks in the temporal-direction motion reference block have been decoded by intra prediction, the process proceeds to step S805. In step S805, the motion reference block p is determined to be an unavailable block.
When the motion reference block p is determined to have motion information in step S801, the available-block acquiring unit 307 selects a motion reference block q already determined to be an available block (referred to as available block q) (step S802), where q is a value smaller than p. The available-block acquiring unit 307 then compares, for every such q, the motion information of the motion reference block p with the motion information of the available block q, and determines whether the motion reference block p has the same motion information as any available block q (step S803). When the motion reference block p has the same motion vector as some available block q, the process proceeds to step S805, and in step S805 the available-block acquiring unit 307 determines the motion reference block p to be an unavailable block. When the motion reference block p has motion information different from that of all the available blocks q, the available-block acquiring unit 307 determines, in step S804, the motion reference block p to be an available block.
By executing the above available-block determination process for all the motion reference blocks, whether each motion reference block is an available block or an unavailable block is determined, and the available-block information 60 is generated. Fig. 11 shows an example of the available-block information 60. As shown in Fig. 11, the available-block information 60 contains the index p and the availability of each motion reference block. In the example of Fig. 11, the available-block information 60 indicates that the motion reference blocks with indices p of 0, 1, 5 and 8 are selected as available blocks, so the number of available blocks is four.
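The determination process of Fig. 9 can be sketched compactly: a motion reference block p becomes an available block only if it carries motion information and that information differs from every block already judged available. Function and variable names are illustrative:

```python
# Compact sketch of the available-block determination of Fig. 9.
# motion_refs[p] is the motion information of motion reference block p,
# or None when the block has none (I slice / all intra-coded).

def determine_available_blocks(motion_refs):
    available = []   # indices q already judged to be available blocks
    flags = []
    for p, info in enumerate(motion_refs):
        if info is None:                              # S801 -> S805
            flags.append(False)
        elif any(motion_refs[q] == info for q in available):
            flags.append(False)                       # S803 -> S805
        else:                                         # S803 -> S804
            available.append(p)
            flags.append(True)
    return flags

# Duplicated motion information and intra blocks are excluded:
refs = [{"mv": (1, 0)}, {"mv": (1, 0)}, None, {"mv": (0, 2)}, {"mv": (1, 0)}]
print(determine_available_blocks(refs))  # [True, False, False, True, False]
```

The returned flags correspond to the availability column of the table in Fig. 11; the coder runs the identical procedure, so both sides agree on the number of available blocks without any side information.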
In step S801 of Fig. 9, the available-block acquiring unit 307 may also determine the motion reference block p to be an unavailable block when at least one of the blocks within the temporal-direction motion reference block p has been coded by intra prediction. That is, the process may be configured to proceed to step S802 only when all the blocks in the temporal-direction motion reference block p have been coded by inter prediction.
Figs. 12A to 12E show examples in which, in the motion information comparison of step S803, the motion information 38 of the motion reference block p is determined to be identical to the motion information 38 of the available block q. Each of Figs. 12A to 12E shows several hatched blocks and two white blocks. In Figs. 12A to 12E, the hatched blocks are ignored and only the motion information 38 of the two white blocks is compared. One of the two white blocks is the motion reference block p, and the other is the motion reference block q already determined to be available (available block q). Unless otherwise noted, either of the two white blocks may be the motion reference block p.
Fig. 12A shows an example in which both the motion reference block p and the available block q are spatial-direction blocks. In the example of Fig. 12A, if the motion information 38 of blocks A and B is identical, the two items of motion information 38 are determined to be identical. The sizes of blocks A and B need not be equal.
Fig. 12B shows an example in which one of the motion reference block p and the available block q is the spatial-direction block A and the other is the temporal-direction block TB. In Fig. 12B, the temporal-direction block TB contains one block having motion information. If the motion information 38 of the temporal-direction block TB is identical to the motion information 38 of the spatial-direction block A, the two items of motion information 38 are determined to be identical. The sizes of blocks A and TB need not be equal.
Fig. 12C shows another example in which one of the motion reference block p and the available block q is the spatial-direction block A and the other is the temporal-direction block TB. Fig. 12C shows a case where the temporal-direction block TB is divided into a plurality of small blocks, and a plurality of the small blocks have motion information 38. In the example of Fig. 12C, all the small blocks having motion information 38 have the same motion information 38, and if this motion information 38 is identical to the motion information 38 of the spatial-direction block A, the two items of motion information 38 are determined to be identical. The sizes of blocks A and TB need not be equal.
Fig. 12D shows an example in which both the motion reference block p and the available block q are temporal-direction blocks. In this case, if the motion information 38 of blocks TB and TE is identical, the two items of motion information 38 are determined to be identical.
Fig. 12E shows another example in which both the motion reference block p and the available block q are temporal-direction blocks. Fig. 12E shows a case where the temporal-direction blocks TB and TE are each divided into a plurality of small blocks, and each contains a plurality of small blocks having motion information 38. In this case, the motion information 38 is compared for each small-block position within the blocks, and if the motion information 38 is identical for all the small blocks, the motion information 38 of block TB is determined to be identical to the motion information 38 of block TE.
Fig. 12F shows yet another example in which both the motion reference block p and the available block q are temporal-direction blocks. Fig. 12F shows a case where the temporal-direction block TE is divided into a plurality of small blocks, and block TE contains a plurality of small blocks having motion information 38. When all the motion information 38 of block TE is identical and is also identical to the motion information 38 of block TD, the motion information 38 of blocks TD and TE is determined to be identical.
In this manner, in step S803, whether the motion information 38 of the motion reference block p is identical to the motion information 38 of the available block q is determined. In the examples of Figs. 12A to 12F, the number of available blocks q compared with the motion reference block p is one, but when there are two or more available blocks q, the motion information 38 of the motion reference block p may be compared with the motion information 38 of each of them. Further, when the scaling described later is applied, the motion information 38 after scaling is used as the motion information 38 in the above comparison.
The determination that the motion information of the motion reference block p is identical to the motion information of the available block q is not limited to the case where the motion vectors contained in the motion information match exactly. For example, as long as the norm of the difference between the two motion vectors is within a prescribed range, the motion information of the motion reference block p and the motion information of the available block q may be regarded as substantially identical.
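The relaxed identity test can be sketched as below. The choice of the L1 norm and the threshold value are assumptions for illustration; the patent only requires that the norm of the difference lie within a prescribed range:

```python
# Sketch of the relaxed identity test: two motion vectors are treated
# as the same motion information when the norm of their difference is
# within a prescribed threshold (here: L1 norm, threshold 1).

def motion_info_matches(mv_p, mv_q, threshold=1):
    """True if the L1 norm of (mv_p - mv_q) is within the threshold."""
    diff = abs(mv_p[0] - mv_q[0]) + abs(mv_p[1] - mv_q[1])
    return diff <= threshold

print(motion_info_matches((4, -2), (4, -2)))   # exactly equal    -> True
print(motion_info_matches((4, -2), (5, -2)))   # within threshold -> True
print(motion_info_matches((4, -2), (7, -2)))   # too different    -> False
```

A wider threshold merges more motion reference blocks into a single available block, shortening the code table for the selection information at the cost of coarser candidate motion.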
Fig. 32 is a block diagram showing the encoded-sequence decoding unit 301 in more detail. As shown in Fig. 32, the encoded-sequence decoding unit 301 has a separating unit 320 that separates the coded data 80 into syntax elements, a transform coefficient decoding unit 322 that decodes the transform coefficients, a selection block decoding unit 323 that decodes the selection block information, and a parameter decoding unit 321 that decodes the parameters relating to the prediction block size, quantization and the like.
The parameter decoding unit 321 receives from the separating unit the coded data 80A containing the parameters relating to the prediction block size and quantization, and decodes the coded data 80A to generate control information 71. The transform coefficient decoding unit 322 receives the coded transform coefficients 80B from the separating unit 320 and decodes them to obtain transform coefficient information 33. The selection block decoding unit 323 receives the coded data 80C relating to the selection block and the available-block information 60, and outputs selection block information 61. As shown in Fig. 11, the input available-block information 60 indicates the availability of each motion reference block.
Next, the prediction unit 305 is described in detail with reference to Fig. 33.
As shown in Fig. 33, the prediction unit 305 has a motion information selecting unit 314 and a motion compensation unit 313, and the motion information selecting unit 314 has a spatial-direction motion information acquiring unit 310, a temporal-direction motion information acquiring unit 311 and a motion information changeover switch 312. The prediction unit 305 has basically the same structure and function as the prediction unit 101 described in the first embodiment.
The prediction unit 305 receives the available-block information 60, the selection block information 61, the reference motion information 39 and the reference image signal 37, and outputs a prediction image signal 35 and motion information 38. The spatial-direction motion information acquiring unit 310 and the temporal-direction motion information acquiring unit 311 have the same functions as the spatial-direction motion information acquiring unit 110 and the temporal-direction motion information acquiring unit 111 described in the first embodiment, respectively. The spatial-direction motion information acquiring unit 310 uses the available-block information 60 and the reference motion information 39 to generate motion information 38A containing the motion information and the index of each available block located in the spatial direction. The temporal-direction motion information acquiring unit 311 uses the available-block information 60 and the reference motion information 39 to generate motion information (or a group of items of motion information) 38B containing the motion information and the index of each available block located in the temporal direction.
The motion information changeover switch 312 selects, according to the selection block information 61, one of the motion information 38A from the spatial-direction motion information acquiring unit 310 and the motion information (or group of items of motion information) 38B from the temporal-direction motion information acquiring unit 311, and obtains the motion information 38. The selected motion information 38 is sent to the motion compensation unit 313 and the motion information memory 306. The motion compensation unit 313 performs motion compensation prediction according to the selected motion information 38, in the same manner as the motion compensation unit 113 described in the first embodiment, and generates the prediction image signal 35.
The motion vector scaling function of the motion compensation unit 313 is the same as described in the first embodiment, so its description is omitted.
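The scaling omitted here (and recited in claim 4 below) converts a motion vector measured over the second temporal distance (first reference frame to its own reference frame) into one over the first temporal distance (decoding target frame to first reference frame). A sketch under the assumption of simple rounding; real codecs use fixed-point arithmetic for this:

```python
# Sketch of temporal motion vector scaling: scale mv by dist1/dist2,
# where dist1 = target frame <-> 1st reference frame and
# dist2 = 1st reference frame <-> the frame its motion vector refers to.

def scale_motion_vector(mv, dist1, dist2):
    """Scale both components of mv by dist1/dist2, rounding to int."""
    def scale(c):
        return int(round(c * dist1 / dist2))
    return (scale(mv[0]), scale(mv[1]))

# The selection block's vector spans 2 frames; the target block is only
# 1 frame away from the reference frame, so the vector is halved.
print(scale_motion_vector((8, -4), 1, 2))  # (4, -2)
```

After scaling, the vector is expressed between the decoding target frame and the first reference frame, which is also the form used when comparing motion information in step S803.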
Fig. 22 shows the syntax structure in the image decoding unit 300. As shown in Fig. 22, the syntax mainly comprises three parts: the high-level syntax 901, the slice-level syntax 904 and the macroblock-level syntax 907. The high-level syntax 901 holds the syntax information of the layers above the slice. The slice-level syntax 904 holds the information required for each slice, and the macroblock-level syntax 907 holds the data required for each macroblock shown in Figs. 7A to 7D.
Each part comprises more detailed syntaxes. The high-level syntax 901 comprises sequence-level and picture-level syntaxes such as the sequence parameter set syntax 902 and the picture parameter set syntax 903. The slice-level syntax 904 comprises the slice header syntax 905, the slice data syntax 906 and the like. The macroblock-level syntax 907 comprises the macroblock layer syntax 908, the macroblock prediction syntax 909 and the like.
Figs. 23A and 23B show examples of the macroblock layer syntax. Available_block_num shown in Figs. 23A and 23B represents the number of available blocks; when it is a value greater than 1, the selection block information must be decoded. Further, stds_idx represents the selection block information, and stds_idx is coded using the code table corresponding to the aforementioned number of available blocks.
Fig. 23A shows the syntax for the case where the selection block information is decoded after mb_type. When the prediction mode indicated by mb_type is a predetermined size or a predetermined mode (TARGET_MODE), and available_block_num is a value greater than 1, stds_idx is decoded. For example, stds_idx is decoded when the block size for which the motion information of the selection block can be utilized is 64×64 pixels, 32×32 pixels or 16×16 pixels, or in the case of the direct mode.
Fig. 23B shows the syntax for the case where the selection block information is decoded before mb_type. When available_block_num is a value greater than 1, stds_idx is decoded. If available_block_num is 0, the conventional motion compensation represented by H.264 is performed, so mb_type is decoded.
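Decoding stds_idx against a code table chosen by the number of available blocks can be sketched as follows. The concrete truncated-unary tables below are an assumption for illustration; the patent only states that a table is predetermined for each available-block count:

```python
# Sketch of decoding stds_idx with a code table selected by
# available_block_num. The tables themselves are illustrative
# truncated-unary codes, not taken from the patent.

CODE_TABLES = {
    2: {"0": 0, "1": 1},
    3: {"0": 0, "10": 1, "11": 2},
    4: {"0": 0, "10": 1, "110": 2, "111": 3},
}

def decode_stds_idx(bits, available_block_num):
    """Read one codeword from `bits`; return (index, bits consumed)."""
    table = CODE_TABLES[available_block_num]
    word = ""
    for bit in bits:
        word += bit
        if word in table:
            return table[word], len(word)
    raise ValueError("no codeword matched")

print(decode_stds_idx("1101", 4))  # (2, 3): selection block index 2
```

Because fewer available blocks mean a smaller table with shorter codewords, keeping only mutually distinct motion information among the candidates directly reduces the bits spent on the selection information.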
The tables shown in Figs. 23A and 23B may contain, between their rows, syntax elements not specified in the present invention, and may also contain descriptions relating to other conditional branches. The syntax table may be divided or merged into a plurality of tables. The same terms need not necessarily be used; they may be changed arbitrarily according to the mode of utilization. Each syntax element described in this macroblock layer syntax may also be changed so as to be explicitly described in the macroblock data syntax described later.
As described above, the picture decoding apparatus of the present embodiment decodes an image coded by the picture coding apparatus of the aforementioned first embodiment. Therefore, the picture decoding apparatus of the present embodiment can reproduce a high-quality decoded image from relatively small coded data.
(Fourth Embodiment)
Fig. 34 schematically shows the picture decoding apparatus of the fourth embodiment. As shown in Fig. 34, the picture decoding apparatus comprises an image decoding unit 400, a decoding control unit 350 and an output buffer 308. The picture decoding apparatus of the fourth embodiment corresponds to the picture coding apparatus of the second embodiment. In the fourth embodiment, mainly the parts and operations different from those of the third embodiment are described. As shown in Fig. 34, in the image decoding unit 400 of the present embodiment, the encoded-sequence decoding unit 401 and the prediction unit 405 differ from those of the third embodiment.
The prediction unit 405 of the present embodiment selectively switches between the following two prediction methods to generate the prediction image signal 35: a prediction method (first prediction method) that performs motion compensation using the motion information possessed by the selection block, and a prediction method (second prediction method) that, as in H.264, performs motion compensation using one motion vector for the decoding target block.
Fig. 35 is a block diagram showing the encoded-sequence decoding unit 401 in more detail. The encoded-sequence decoding unit 401 shown in Fig. 35 further comprises a motion information decoding unit 424 in addition to the structure of the encoded-sequence decoding unit 301 shown in Fig. 32. The selection block decoding unit 423 shown in Fig. 35 differs from the selection block decoding unit 323 shown in Fig. 32 in that it decodes the coded data 80C relating to the selection block to obtain prediction switching information 62. The prediction switching information 62 indicates which of the first and second prediction methods the prediction unit 101 in the picture coding apparatus of Fig. 1 used. When the prediction switching information 62 indicates that the prediction unit 101 used the first prediction method, that is, when the decoding target block was predicted by the first prediction method, the selection block decoding unit 423 decodes the selection block information in the coded data 80C to obtain the selection block information 61. When the prediction switching information 62 indicates that the prediction unit 101 used the second prediction method, that is, when the decoding target block was predicted by the second prediction method, the selection block decoding unit 423 does not decode the selection block information; instead, the motion information decoding unit 424 decodes the coded motion information 80D to obtain motion information 40.
Fig. 36 is a block diagram showing the prediction unit 405 in more detail. The prediction unit 405 shown in Fig. 36 comprises a first prediction unit 305, a second prediction unit 410 and a prediction method changeover switch 411. The second prediction unit 410 performs motion compensation prediction in the same manner as the motion compensation unit 313 of Fig. 33, using the motion information 40 decoded by the encoded-sequence decoding unit 401 and the reference image signal 37, and generates a prediction image signal 35B. The first prediction unit 305 is identical to the prediction unit 305 described in the third embodiment, and generates a prediction image signal 35A. The prediction method changeover switch 411 selects, according to the prediction switching information 62, one of the prediction image signal 35B from the second prediction unit 410 and the prediction image signal 35A from the first prediction unit 305, and outputs it as the prediction image signal 35 of the prediction unit 405. At the same time, the prediction method changeover switch 411 sends the motion information used in the selected one of the first prediction unit 305 and the second prediction unit 410 to the motion information memory 306 as the motion information 38.
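The switching between the two prediction methods can be sketched as below. The string values of the switching information and the function name are illustrative; what matters is that one branch reuses the selection block's motion information while the other uses an explicitly decoded vector:

```python
# Sketch of the prediction method changeover of Fig. 36: the decoded
# prediction switching information chooses between motion information
# inherited from the selection block (first prediction method) and an
# explicitly decoded motion vector, as in H.264 (second method).

def select_motion_vector(switch_info, selection_block_mv, decoded_mv):
    """Return the motion vector actually used for motion compensation."""
    if switch_info == "first":   # stds_flag == 1: reuse selection block
        return selection_block_mv
    return decoded_mv            # stds_flag == 0: conventional decoding

print(select_motion_vector("first", (3, 1), (0, 0)))    # (3, 1)
print(select_motion_vector("second", (3, 1), (7, -2)))  # (7, -2)
```

Whichever branch is taken, the chosen motion information is what gets written back to the motion information memory, so later blocks see the same reference motion information on coder and decoder sides.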
Next, regarding the syntax structure of the present embodiment, mainly the points different from the third embodiment are described.
Figs. 30A and 30B each show an example of the macroblock layer syntax of the present embodiment. Available_block_num shown in Fig. 30A represents the number of available blocks; when it is a value greater than 1, the selection block decoding unit 423 decodes the selection block information in the coded data 80C. Further, stds_flag is a flag indicating whether the motion information of the selection block was used as the motion information of the decoding target block in the motion compensation prediction, that is, a flag indicating which of the first prediction unit 305 and the second prediction unit 410 the prediction method changeover switch 411 has selected. When the number of available blocks is greater than 1 and stds_flag is 1, the motion information possessed by the selection block is used in the motion compensation prediction. When stds_flag is 0, the motion information possessed by the selection block is not used; instead, as in H.264, the motion information itself, or its difference from a predicted value, is decoded. Further, stds_idx represents the selection block information, and the code table corresponding to the number of available blocks is as described above.
Fig. 30A shows the syntax for the case where the selection block information is decoded after mb_type. Only when the prediction mode indicated by mb_type is a predetermined block size or a predetermined mode are stds_flag and stds_idx decoded. For example, stds_flag and stds_idx are decoded when the block size is 64×64, 32×32 or 16×16 pixels, or in the case of the direct mode.
Fig. 30B shows the syntax for the case where the selection block information is decoded before mb_type. For example, when stds_flag is 1, mb_type need not be decoded; when stds_flag is 0, mb_type is decoded.
As described above, the picture decoding apparatus of the present embodiment decodes an image coded by the picture coding apparatus of the aforementioned second embodiment. Therefore, the picture decoding apparatus of the present embodiment can reproduce a high-quality decoded image from relatively small coded data.
The present invention is not limited to the above embodiments as they are; at the implementation stage, the constituent elements may be modified and embodied without departing from the gist of the invention. Various inventions can be formed by appropriately combining the plurality of constituent elements disclosed in the above embodiments. For example, several constituent elements may be deleted from all the constituent elements shown in an embodiment, and constituent elements of different embodiments may be combined as appropriate.
For example, the same effects can be obtained even when the first to fourth embodiments are modified as follows.
(1) In the first to fourth embodiments, the description has been given taking as an example the case where the processing target frame is divided into rectangular blocks such as 16×16 pixel blocks and, as shown in Fig. 4, coding or decoding proceeds from the pixel block at the top left of the picture toward the pixel block at the bottom right; however, the coding or decoding order is not limited to this example. For example, the coding or decoding order may proceed from the bottom right toward the top left of the picture, or from the top right toward the bottom left. It may also proceed spirally from the central part of the picture toward its periphery, or from the periphery of the picture toward its central part.
(2) In the first to fourth embodiments, the description has been given taking as an example the case where no distinction is made among the color signal components, such as the luminance signal and the chrominance signal. However, different prediction processes may be used for the luminance signal and the chrominance signal, or the same prediction process may be used for both. When different prediction processes are used, the prediction method selected for the chrominance signal is coded/decoded by a method similar to that for the luminance signal.
Besides these, various modifications can of course be implemented without departing from the gist of the present invention.
Industrial Applicability
The image coding/decoding method of the present invention improves the coding efficiency and therefore has industrial applicability.
Claims (10)
1. A picture decoding method, characterized by comprising:
a step of selecting at least one motion reference block from already-decoded pixel blocks having motion information;
a step of selecting at least one available block from the motion reference blocks, the available block being a pixel block that is a candidate having motion information to be applied to a decoding target block, the available blocks having mutually different motion information;
a step of decoding input coded data with reference to a code table predetermined according to the number of the available blocks, thereby obtaining selection information for specifying a selection block;
a step of selecting one selection block from the available blocks according to the selection information;
a step of generating a predicted image of the decoding target block using the motion information of the selection block;
a step of decoding a prediction error of the decoding target block from the coded data; and
a step of obtaining a decoded image from the predicted image and the prediction error.
2. The picture decoding method according to claim 1, characterized in that,
in the step of selecting at least one motion reference block, the motion reference block is selected from pixel blocks contained in
(A) a decoding target frame to which the decoding target block belongs and
(B) a first reference frame of a time different from that of the decoding target frame, and
in the step of generating the predicted image of the decoding target block,
when the selection block is a pixel block in the decoding target frame, the predicted image is generated using the motion information, and
when the selection block is a pixel block in the first reference frame, the predicted image is generated using the motion information and information relating to the first reference frame.
3. The picture decoding method according to claim 2, characterized in that,
when the selection block is a pixel block in the first reference frame and the selection block has motion information in sub-block units, the step of generating the predicted image of the decoding target block generates the predicted image using the motion information of the sub-blocks.
4. The picture decoding method according to claim 3, characterized in that,
in the step of generating the predicted image of the decoding target block, when the selection block is a pixel block in the first reference frame, the motion vector of the selection block is transformed into a motion vector between the decoding target frame and the first reference frame, using a first temporal distance between the decoding target frame and the first reference frame and a second temporal distance between the first reference frame and a second reference frame referred to by the selection block.
5. The picture decoding method according to claim 4, further comprising a step of obtaining mode information by decoding the coded data, the mode information indicating whether the predicted picture should be generated by motion-compensated prediction using the motion information of the selected block.
6. A picture decoding apparatus, comprising:
an available-block acquiring unit that selects at least one available block from at least one motion reference block selected from decoded pixel blocks having motion information, each available block being a pixel block that is a candidate for supplying the motion information applied to a decoding target block, the available blocks having mutually different motion information;
a first decoder that decodes input coded data with reference to a code table set in advance according to the number of available blocks, thereby obtaining selection information for specifying a selected block;
a selector that selects one selected block from the available blocks according to the selection information;
a predictor that generates a predicted picture of the decoding target block using the motion information of the selected block;
a second decoder that decodes a prediction error of the decoding target block from the coded data; and
an adder that obtains a decoded picture from the predicted picture and the prediction error.
7. The picture decoding apparatus according to claim 6, wherein:
the motion reference block is selected from pixel blocks contained in
(A) the decoding target frame to which the decoding target block belongs, and
(B) a first reference frame displayed at a time different from that of the decoding target frame; and
the predictor
generates the predicted picture using the motion information when the selected block is a pixel block in the decoding target frame, and
generates the predicted picture using the motion information and information related to the first reference frame when the selected block is a pixel block in the first reference frame.
8. The picture decoding apparatus according to claim 7, wherein, when the selected block is a pixel block in the first reference frame and has motion information in units of sub-blocks, the predictor generates the predicted picture using the motion information of each sub-block.
9. The picture decoding apparatus according to claim 8, wherein, when the selected block is a pixel block in the first reference frame, the predictor transforms the motion vector of the selected block into a motion vector between the decoding target frame and the first reference frame, using a first temporal distance between the decoding target frame and the first reference frame and a second temporal distance between the first reference frame and a second reference frame referred to by the selected block.
10. The picture decoding apparatus according to claim 9, further comprising a third decoder that obtains mode information by decoding the coded data, the mode information indicating whether the predicted picture should be generated by motion-compensated prediction using the motion information of the selected block.
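Claims 1 and 6 both decode the selection information with a code table chosen in advance according to the number of available blocks. A minimal sketch of such a decode, assuming a truncated-unary table (the actual code table is not specified in the claims):

```python
def decode_selection_index(bits, num_available):
    """Decode a selection index whose code depends on the number of
    available blocks. With a truncated-unary table, fewer candidates
    mean shorter codes, and a single candidate consumes no bits.

    bits          : iterator yielding 0/1 ints from the coded data
    num_available : number of available blocks (>= 1)
    Returns the index of the selected block in [0, num_available - 1].
    """
    if num_available == 1:
        return 0                      # only one candidate: nothing to signal
    index = 0
    while index < num_available - 1:  # the maximum index omits the final 0
        if next(bits) == 0:           # a '0' bit terminates the unary prefix
            break
        index += 1
    return index

stream = iter([1, 1, 0])              # unary code "110"
print(decode_selection_index(stream, 4))  # -> 2
```

Making the table depend on the candidate count is what lets the decoder spend no bits at all when only one available block exists, which is the efficiency point of the first decoder in claim 6.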
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201080066017.7A CN102823248B (en) | 2010-04-08 | 2010-04-08 | Image encoding method and image decoding method |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201080066017.7A Division CN102823248B (en) | 2010-04-08 | 2010-04-08 | Image encoding method and image decoding method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103227922A CN103227922A (en) | 2013-07-31 |
CN103227922B true CN103227922B (en) | 2016-06-01 |
Family
ID=48838160
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310142052.8A Active CN103227922B (en) | 2010-04-08 | 2010-04-08 | Picture decoding method and picture decoding apparatus |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103227922B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1471320A (en) * | 2002-06-03 | 2004-01-28 | | Spatio-temporal prediction for bi-predictive (B) pictures and motion vector prediction for multi-picture reference motion compensation |
JP2004165703A (en) * | 2002-09-20 | 2004-06-10 | Toshiba Corp | Moving picture coding method and decoding method |
JP2010010950A (en) * | 2008-06-25 | 2010-01-14 | Toshiba Corp | Image coding/decoding method and apparatus |
Also Published As
Publication number | Publication date |
---|---|
CN103227922A (en) | 2013-07-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102823248B (en) | Image encoding method and image decoding method | |
CN101335896B (en) | Predicting motion vectors for fields of forward-predicted interlaced video frames | |
KR102036771B1 (en) | Video prediction encoding device, video prediction encoding method, video prediction encoding program, video prediction decoding device, video prediction decoding method, and video prediction decoding program | |
CN1977541B (en) | Motion prediction compensation method and motion prediction compensation device | |
KR100926752B1 (en) | Fine Motion Estimation Method and Apparatus for Video Coding | |
CN100493200C (en) | Encoding method, decoding method, encoding device, and decoding device of moving image | |
CN103227922B (en) | Picture decoding method and picture decoding apparatus | |
JP5479648B1 (en) | Image encoding method and image decoding method | |
JP5444497B2 (en) | Image encoding method and image decoding method | |
CN103826129A (en) | Image decoding method and image decoding device | |
JP6961781B2 (en) | Image coding method and image decoding method | |
JP6980889B2 (en) | Image coding method and image decoding method | |
JP6795666B2 (en) | Image coding method and image decoding method | |
JP5571262B2 (en) | Image encoding method and image decoding method | |
CN103747252A (en) | Image decoding method and image decoding device | |
CN103813168A (en) | Image coding method and image coding device | |
CN103813165A (en) | Image decoding method and image decoding device | |
CN103813164A (en) | Image decoding method and image decoding device | |
CN103826130A (en) | Image decoding method and image decoding device | |
CN103813163A (en) | Image decoding method and image decoding device | |
CN103826131A (en) | Image decoding method and image decoding device | |
JP7547598B2 (en) | Image encoding method and image decoding method | |
WO2012008040A1 (en) | Image encoding method and image decoding method | |
JP6196341B2 (en) | Image encoding method and image decoding method | |
JP5509398B1 (en) | Image encoding method and image decoding method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |