
GB2321566A - Prediction filter for use in decompressing MPEG coded video - Google Patents

Prediction filter for use in decompressing MPEG coded video

Info

Publication number
GB2321566A
Authority
GB
United Kingdom
Prior art keywords
prediction
register
prediction filter
data
filter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB9807476A
Other versions
GB9807476D0 (en)
GB2321566B (en)
Inventor
Anthony Peter John Claydon
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Discovision Associates
Original Assignee
Discovision Associates
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from GB9405914A external-priority patent/GB9405914D0/en
Application filed by Discovision Associates filed Critical Discovision Associates
Publication of GB9807476D0 publication Critical patent/GB9807476D0/en
Publication of GB2321566A publication Critical patent/GB2321566A/en
Application granted granted Critical
Publication of GB2321566B publication Critical patent/GB2321566B/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/04Addressing variable-length words or parts of words
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/06Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication
    • G06F12/0607Interleaved addressing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14Handling requests for interconnection or transfer
    • G06F13/16Handling requests for interconnection or transfer for access to memory bus
    • G06F13/1668Details of memory controller
    • G06F13/1673Details of memory controller using buffers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14Handling requests for interconnection or transfer
    • G06F13/20Handling requests for interconnection or transfer for access to input/output bus
    • G06F13/28Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/423Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/13Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/91Entropy coding, e.g. variable length coding [VLC] or arithmetic coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

A filter circuit used in decompressing video data includes two one-dimensional prediction filters. Each one-dimensional prediction filter comprises a three registers 601, 602, 603 that receive data. The data from register 602 is passed to a multiplexer 604 whose output is added to the data in register 601 in summing circuit 605 and the result is passed to a register 606. The data in register 603 is passed to a multiplexer 607 and the result is passed to a register 608. The data in register 606 is added to the data in the register 608 in a summing circuit 609 and the result is passed to a register 610.

Description

PREDICTION FILTER The present invention is directed to a decompression circuit. The decompression circuit operates to decompress and/or decode a plurality of differently encoded input signals. The embodiment chosen for description hereinafter relates to the decoding of a plurality of encoded picture standards. More specifically, this embodiment relates to the decoding of any one of such well-known compression standards as Joint Photographic Experts Group (JPEG), Motion Picture Experts Group (MPEG), and H.261.
U. S. Patent 4,866,510 to Goodfellow et al discloses a differential pulse code arrangement which reduces the bit rate of a composite color video signal. The reduction is achieved by predicting the present video signal sample from reconstructed past samples and forming a signal representative of the prediction error. The bit rate is further reduced by generating a signal predictive of the error signal and forming a signal corresponding to the difference between the error signal and the signal predictive thereof. On output, a video signal sample is reconstructed by summing the reconstructed error signal and the signal predictive of the previous video signal sample. A video signal sample generally comprises one or more lines of the composite signal.
U. S. Patent 5,301,040 to Hoshi et al discloses an apparatus for encoding data by transforming image data to a frequency zone. The apparatus may comprise two encoding means, which may perform encoding in parallel.
U. S. Patent 5,301,242 to Gonzales et al discloses an apparatus and method for encoding a video picture. The apparatus and method convert groups of blocks of digital video signals into compressible groups of blocks of digital video signals, according to the MPEG standard only.
U. S. Patent 4,142,205 to Iinuma discloses an interframe encoder for a composite color television signal. The interframe encoder obtains a frame difference signal by subtracting one frame signal from the subsequent frame signal. A corresponding interframe decoder operates in reverse.
US Patent 4 924 298 to Kitamura discloses a method and apparatus for predictive coding of a digital signal obtained from an analogue colour video signal. During the predictive coding process, a picture element in a first scanning line is predicted on the basis of a picture element in a second scanning line adjacent to the first scanning line.
US Patent 4 924 308 to Feuchtwanger discloses a bandwidth reduction system for television signals. The system employs three spatial filter circuits capable of imposing respective resolution characteristics on the signals. Based on the degrees of motion occurring in respective spatial portions of the television picture, different levels of resolution are imposed by the different spatial filter circuits.
US Patent 5 086 489 to Shimura discloses a method for compressing image signals. According to the patent, original image signal components representing an image are sampled such that the phases of the samples along a line are phase shifted from the samples located along a neighbouring line. These representative image signal components are classified into main components, sampled at appropriate sampling intervals, and interpolated components, subjected to interpolation prediction encoding processing based on the main components.
The present invention is defined in the accompanying independent claims. Preferred features are recited in the dependent claims.
According to a preferred embodiment of the invention, a plurality of prediction filter circuits may process video information, and a control signal allows processing of video information encoded in multiple standards. A filter circuit, as may be used to process video information, is disclosed comprising a prediction filter formatter, a dimension buffer and two one-dimensional prediction filters. Each such one-dimensional prediction filter may comprise six registers, two multiplexers and two summing circuits, connected together such that video information encoded in multiple standards may be processed.
The present invention can be put into practice in various ways and will now be described by way of example with reference to the accompanying drawings, in which: Figure 1 is a block diagram of the temporal decoder including the prediction filter system; Figure 2 is another block diagram of the temporal decoder including the prediction filter system; Figure 3 is a block diagram of the temporal decoder including the prediction filter system; Figure 4 is a block diagram of the prediction filter system according to an embodiment of the invention; Figure 5 is a block diagram of a prediction filter according to an embodiment of the invention; Figure 6 is a detailed diagram of a prediction filter; and Figure 7 is a block of pixel data.
Overview of the Decompression Circuit The decompression circuit may comprise a Spatial Decoder, a Temporal Decoder and a Video Formatter.
Overview of the Temporal Decoder The Temporal Decoder uses information in one or more pictorial frames, or reference frames, to predict the information in another pictorial frame. The operation of the Temporal Decoder differs depending on the encoding standard in operation, since different encoding standards allow different types of prediction, motion compensation and frame re-ordering. The reference frames are stored in two external frame buffers.
Overview of the JPEG Standard The JPEG standard does not use inter-frame prediction. Therefore, in this mode the Temporal Decoder will pass the JPEG data through to the Video Formatter, without performing any substantive decoding beyond that accomplished by the Spatial Decoder.
Overview of the MPEG Standard The MPEG standard uses three different frame types: Intra (I), Predicted (P), and Bi-directionally interpolated (B). A frame is composed of picture elements, or pels. I frames require no decoding by the Temporal Decoder, but are used in decoding P and B frames. The I frames can be stored in a frame buffer until they are needed.
Decoding P frames requires forming predictions from a previously decoded I or P frame. Decoded P frames can also be stored in one of the frame buffers for later use in decoding P and B frames.
B frames are based on predictions from two reference frames, one from the future and one from the past, which are stored in the frame buffers. B frames, however, are not stored in either of the frame buffers.
The MPEG standard also uses motion compensation, which is the use of motion vectors to improve the efficiency of the prediction of pel values. Motion vectors provide offsets in the past and/or future reference frames.
The MPEG standard uses motion vectors in both the x-dimension and the y-dimension. The standard allows motion vectors to be specified to half-pel accuracy in either dimension.
In one configuration under the MPEG standard, frames are output by the Temporal Decoder in the same order that they are input to the Temporal Decoder.
This configuration is termed MPEG operation without re-ordering. However, because the MPEG standard allows prediction from future reference frames, frames may be re-ordered. In this configuration, B frames are decoded and output in the same order as they are input, as described above. I and P frames, however, are not output as they are decoded. Instead, they are decoded and written into the frame buffers. They are output only when a subsequent I or P frame arrives for decoding.
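By way of illustration only, the re-ordering rule just described can be modelled with the following sketch (Python; the function and names are hypothetical and not part of the disclosure). B frames pass straight through, while an I or P frame is held in a frame buffer and output only when the next I or P frame arrives for decoding.

```python
def reorder(decoded_frames):
    """Yield frames in display order, given (frame_type, payload) pairs in decoding order."""
    held = None                      # last decoded I or P frame (reference frame buffer)
    for frame_type, payload in decoded_frames:
        if frame_type == 'B':
            yield payload            # B frames are output as they are decoded
        else:
            if held is not None:
                yield held           # previous I/P is output when a new I/P arrives
            held = payload           # new I/P frame is written into the frame buffer
    if held is not None:
        yield held                   # flush the final reference frame


# Decoding order I0 P3 B1 B2 P6 B4 B5 -> display order I0 B1 B2 P3 B4 B5 P6.
stream = [('I', 'I0'), ('P', 'P3'), ('B', 'B1'), ('B', 'B2'),
          ('P', 'P6'), ('B', 'B4'), ('B', 'B5')]
print(list(reorder(stream)))
```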
For full details of prediction and the arithmetic operations involved, reference is made to the proposed MPEG standard draft. The Temporal Decoder meets the requirements listed therein.
Overview of the H.261 Standard The H.261 standard makes predictions only from the frame just decoded. In operation, as each frame is decoded, it is written into one of the two frame buffers for use in decoding the next frame. Decoded pictures are output by the Temporal Decoder as they are written into the frame buffers; thus, H.261 does not support frame re-ordering.
In the H.261 standard, motion vectors are specified only to integer pel accuracy. In addition, the encoder may specify that a low-pass filter be applied to the result of any prediction.
For full details of prediction and the arithmetic operations involved, reference is made to the H.261 standard. The Temporal Decoder meets the requirements listed therein.
The Temporal Decoder includes a prediction filter system. The prediction filter system receives a block or blocks of pixels to be used in the prediction, and additional information in the form of flags or signals. From the additional information, the prediction filter system determines the standard which is operational, the configuration of that standard, the level of accuracy of the motion vectors, and other information. The prediction filter system then applies the correct interpolation function based on that information.
Because some blocks of a frame may be predicted and other blocks may be encoded directly, the output from the prediction filters may need to be added to the rest of a frame. The prediction adder performs this function.
If the frame is a B frame, the Temporal Decoder outputs it to the Video Formatter. If the frame is an I or P frame, the Temporal Decoder writes the frame to one of the frame buffers and outputs either that frame, if frame re-ordering is inactive, or the previous I or P frame, if frame re-ordering is active.
A temporal decoder 10 is shown in Figures 1, 2, and 3. A first output from a DRAM interface 12 is passed over lines 404, 405 to a prediction filter system 400.
The output from the prediction filter system 400 is passed over a line 410 as a second input to a prediction adder 13. A first output from the prediction adder 13 is passed over a line 14 to an output selector 15. A second output from the prediction adder 13 is passed over a line 16.
The prediction filter system 400 is a circuit for processing video information, comprising a first and a second prediction filter for parallel processing of video information, wherein the prediction filters are substantially identical, and a control signal to allow processing of video information encoded in multiple standards.
More specifically, one embodiment of the prediction filter system 400 is a filter circuit for use in video decompression, comprising a prediction filter formatter, a first one-dimensional prediction filter operatively connected to the prediction filter formatter, a dimension buffer operatively connected to the first one-dimensional prediction filter, and a second one-dimensional prediction filter operatively connected to the dimension buffer. The prediction filter formatter comprises a plurality of multiple shift registers for outputting data in a predetermined order.
Each of the prediction filters may comprise a first register, a second register, a first multiplexer operatively connected to the second register, a first summing circuit operatively connected to the first register and the first multiplexer, a third register operatively connected to the first summing circuit, a fourth register, a second multiplexer operatively connected to the fourth register, a fifth register operatively connected to the second multiplexer, a second summing circuit operatively connected to the third register and the fifth register, and a sixth register operatively connected to the second summing circuit.
Referring to Figure 4, the overall structure of the prediction filter system 400 is shown. The prediction filter system 400 comprises a plurality of prediction filters 401, 402 and a prediction filters adder 403. The forward prediction filter 401 and the backward prediction filter 402 are identical and filter the forward and backward prediction blocks in MPEG mode. In H.261 mode, only the forward prediction filter 401 is used, because the H.261 standard does not contain backward prediction capability.
Each prediction filter 401, 402 acts independently, processing data as soon as valid data appears at inputs 404, 405. The output from the forward prediction filter 401 is passed over a line 406 to the prediction filters adder 403. The output from the backward prediction filter 402 is passed over a line 407 to the prediction filters adder 403. Other inputs to the prediction filters adder 403 are passed over lines 408-409. The output from the prediction filters adder 403 is passed over a line 410. Each of the lines 404-410 in the prediction filter system 400 may be a two-wire interface.
Multi-standard operation requires that the prediction filter system 400 be configurable to perform either MPEG or H.261 filtering. Flags or other appropriate signals may be passed to the prediction filter system 400 to reconfigure the system. These flags are passed to the individual prediction filters 401, 402 as discussed in more detail later, and to the prediction filters adder 403.
There are four flags or signals which configure the prediction filters adder 403. Of these, fwd_ima_twin and fwd_p_num are passed through the forward prediction filter 401, and bwd_ima_twin and bwd_p_num are passed through the backward prediction filter 402.
As described in more detail later, the prediction filters adder 403 uses these flags or signals to activate or deactivate two state variables, fwd_on and bwd_on.
The fwd_on state variable indicates whether forward prediction is used to predict the pel values in the current block. Likewise, the bwd_on state variable indicates whether backward prediction is used to predict the pel values in the current block.
In H.261 operation, backward prediction is never used, so the bwd_on state variable is always inactive. Therefore, the prediction filters adder 403 will ignore the output from the backward prediction filter 402. If the fwd_on state variable is active, the output from the forward prediction filter 401 passes through the prediction filters adder 403. If the fwd_on state variable is inactive, then no prediction is performed for the current block, and the prediction filters adder 403 passes no information from either prediction filter 401, 402.
In MPEG operation, there are four possible cases for the fwd_on and bwd_on state variables. If neither state variable is active, the prediction filters adder 403 passes no information from either prediction filter 401, 402.
If the fwd_on state variable is active but the bwd_on state variable is inactive, the prediction filters adder 403 passes the output from the forward prediction filter 401.
If the bwd_on state variable is active but the fwd_on state variable is inactive, the prediction filters adder 403 passes the output from the backward prediction filter 402.
If both state variables are active, the prediction filters adder 403 passes the average of the outputs from the prediction filters 401, 402, rounded toward positive infinity.
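By way of illustration only, this selection can be summarised in the following sketch (Python; names hypothetical and not part of the disclosure). The same function covers H.261 operation, in which bwd_on is always inactive. The rounding toward positive infinity is written here as adding one before a right shift, which is valid for the non-negative pel values involved.

```python
def combine_predictions(fwd_pels, bwd_pels, fwd_on, bwd_on):
    """Model of the selection performed by the prediction filters adder 403 for one block."""
    if fwd_on and bwd_on:
        # Average of the forward and backward predictions, rounded toward positive infinity.
        return [(f + b + 1) >> 1 for f, b in zip(fwd_pels, bwd_pels)]
    if fwd_on:
        return list(fwd_pels)        # forward prediction only
    if bwd_on:
        return list(bwd_pels)        # backward prediction only
    return []                        # no prediction for the current block


# Averaging 3 and 4 rounds up to 4.
print(combine_predictions([3], [4], True, True))   # [4]
```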
As shown in Figure 5, each prediction filter 401, 402 consists of substantially the same structure. Input data enters a prediction filters formatter 501, which puts the data in a form that can be readily filtered. The data is then passed to a first one-dimensional prediction filter 502, which performs a one-dimensional prediction. This prediction may be on the x-dimension or the y-dimension. The data is then passed to a dimension buffer 503, which prepares the data for further filtering.
The data is then passed to a second one-dimensional prediction filter 504, which performs a one-dimensional prediction on the dimension not predicted by the first one-dimensional prediction filter 502. Finally, the data is output.
For convenience of explanation only, the following discussion assumes that the one-dimensional prediction filter 502 operates on the x-coordinate and the one-dimensional prediction filter 504 operates on the y-coordinate. Either one-dimensional prediction filter 502, 504 may operate on either the x-coordinate or the y-coordinate. Thus, those skilled in the art will recognize from the following explanation how the one-dimensional prediction filters 502, 504 operate.
Referring to Figure 6, there is shown the structure of a one-dimensional prediction filter 502, 504. The structure of each one-dimensional prediction filter 502, 504 is identical. Each contains three registers 601, 602, 603 which receive data. The data in the register 602 is passed to a multiplexer 604. The result from the multiplexer 604 is added to the data in the register 601 in a summing circuit 605, and the result is passed to a register 606.
The data in the register 603 is passed to a multiplexer 607, and the result is passed to a register 608. The data in the register 606 is added to the data in the register 608 in a summing circuit 609, and the result is passed to a register 610.
Additionally, three registers 611, 612, 613 pass control information through each one-dimensional prediction filter 502, 504. All data passed between both data components and control registers of the one-dimensional prediction filters 502, 504 may be passed over two-wire interfaces. In addition, the input to the registers 601, 602, 603 and the output from the register 610 may be two-wire interfaces.
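By way of illustration only, one pass through the Figure 6 datapath can be sketched as follows (Python; the names and the selectable multiplexer scale factors are inferred from the description and are illustrative, not part of the disclosure).

```python
def one_d_filter_step(r601, r602, r603, mux604_scale, mux607_scale):
    """One pass through the Figure 6 datapath for the values held in registers 601-603."""
    r606 = r601 + mux604_scale * r602   # multiplexer 604 and summing circuit 605 -> register 606
    r608 = mux607_scale * r603          # multiplexer 607 -> register 608
    r610 = r606 + r608                  # summing circuit 609 -> register 610
    return r610


# With the multiplexers selecting x2 and x1, register 610 receives r601 + 2*r602 + r603.
print(one_d_filter_step(10, 20, 30, 2, 1))   # 80
```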
Three information signals will be passed to the prediction filter system 400 to indicate which mode and which configuration is operational. The first signal is the h261_on signal. If this signal is active, then the H.261 standard is operational.
If this signal is inactive, then the MPEG standard is operational.
The second and third signals, xdim and ydim, indicate whether the motion vector in a particular dimension specifies interpolation based on a half-pel or a whole pel. If the xdim signal is inactive, then the motion vector in the x-dimension specifies an integer multiple of a pel. If the xdim signal is active, then the motion vector in the x-dimension specifies an odd multiple of a half-pel. The ydim signal specifies the same information with regard to the y-dimension.
Because the H.261 standard allows motion vectors only to integer pel accuracy, the xdim and ydim signals are always inactive when the h261_on signal is active. As shown in Figure 7, the prediction filter system 400 outputs blocks 700 of eight rows of eight pels 701 each. In addition, as will be described with regard to the function of the one-dimensional prediction filters 502, 504 under each mode of operation, the size of an input block necessary to output a block of eight rows of eight pixels depends on whether xdim or ydim is active. In particular, if the xdim signal is active, the input block must have 9 pels in the x-dimension; if the xdim signal is inactive, the input block must have 8 pels in the x-dimension. If the ydim signal is active, the input block must have 9 pels in the y-dimension; if the ydim signal is inactive, the input block must have 8 pels in the y-dimension. This is summarized in the following table.
h261_on  xdim  ydim  Function
0        0     0     MPEG 8x8 block
0        0     1     MPEG 8x9 block
0        1     0     MPEG 9x8 block
0        1     1     MPEG 9x9 block
1        0     0     H.261 Low-pass Filter
1        0     1     Illegal
1        1     0     Illegal
1        1     1     Illegal
The operation of each one-dimensional prediction filter 502, 504 differs between MPEG and H.261 operation, and will be described in relation to each mode of operation. H.261 operation, being the more complex, will be described first. In H.261 mode, each one-dimensional prediction filter 502, 504 implements the following standard one-dimensional filter equation:
Fi = (xi-1 + 2·xi + xi+1) / 4   (0 < i < 7)   (1)
Fi = xi   (otherwise)
Because xdim and ydim are always inactive in H.261 mode, the input block is eight rows of eight pels each. Therefore, Figure 7 accurately represents both the input and the output blocks from the prediction filter system 400 in H.261 mode.
The equation (1) is applied to each row of the block 700 by the one-dimensional x-coordinate prediction filter 502, and is applied to each column of the block 700 by the one-dimensional y-coordinate prediction filter 504. Referring to Figure 6, the pel values xi-1, xi and xi+1 in the equation (1) are loaded into registers 601, 602, 603, respectively.
The pel value xi is multiplied by two by the multiplexer 604, added to the pel value xi-1 in the summing circuit 605, and the result is loaded into the register 606. The pel value xi + 1 in the register 603 passes through the multiplexer 607 without being altered, and is loaded into the register 608. Finally, the values in registers 606 and 608 are added together in the summing circuit 609, and loaded into the register 610.
The above process implements the H.261 equation for pels within a row or column. To implement the H.261 equation for the first and last pel in a row or column, the registers 601 and 603 are reset. The pel value xi flows through register 602 and is multiplied by four by the multiplexer 604. The result flows unaltered through registers 606 and 610, because the summing circuits 605 and 609 each add zero to the pel value xi.
It will be noted that the above implementations yield values equal to four times the result required by the one-dimensional filter equation. In order to retain arithmetic accuracy, division by 16, accomplished by shifting right by four places, is performed at the input to the prediction filters adder 403 after both x-dimension and y-dimension filtering have been performed.
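By way of illustration only, the following sketch (Python; names hypothetical, not part of the disclosure) ties the H.261 behaviour together: each one-dimensional pass produces four times Fi, and a single shift right by four places restores the scale. For simplicity the shift is performed at the end of the block filter here, whereas in the circuit it occurs at the input to the prediction filters adder 403.

```python
def filter_1d_h261(row):
    """Return 4*Fi for each pel of an eight-pel row (or column), per equation (1)."""
    out, last = [], len(row) - 1
    for i, x in enumerate(row):
        if i == 0 or i == last:
            out.append(4 * x)                            # first/last pel: Fi = xi (carried as 4*Fi)
        else:
            out.append(row[i - 1] + 2 * x + row[i + 1])  # interior pel: 1-2-1 kernel (carried as 4*Fi)
    return out


def filter_block_h261(block):
    """Apply the separable low-pass filter to an 8x8 block: rows first, then columns."""
    rows = [filter_1d_h261(r) for r in block]             # x-dimension pass (scale x4)
    cols = [filter_1d_h261(list(c)) for c in zip(*rows)]  # y-dimension pass (scale x16)
    return [[v >> 4 for v in row] for row in zip(*cols)]  # divide by 16 once at the end


# A uniform block is unchanged by the low-pass filter.
flat = [[100] * 8 for _ in range(8)]
assert filter_block_h261(flat) == flat
```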
During MPEG operation, the one-dimensional prediction filters 502, 504 perform a simple half-pel interpolation:
Fi = (xi + xi+1) / 2   (0 ≤ i ≤ 7, half pel)   (2)
Fi = xi   (0 ≤ i ≤ 7, integer pel)
The operation of the one-dimensional prediction filter 502 is the same in MPEG mode with integer pel motion compensation as described above in connection with H.261 operation on the first and last pels in a row or column. For MPEG mode with half-pel operation, the register 601 is permanently reset, pel value xi is loaded into the register 602, and pel value xi+1 is loaded into the register 603. Pel value xi in the register 602 is multiplied by two by the multiplexer 604, and pel value xi+1 in the register 603 is multiplied by two by the multiplexer 607.
These values are then added in summing circuit 609 to obtain a value four times the required result. As described above in connection with H.261 operation, this is corrected for at the input to the prediction filters adder 403.
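By way of illustration only, equation (2) as carried in the datapath can be sketched as follows (Python; names hypothetical, not part of the disclosure). Both operands are doubled, so the value produced is four times Fi; the division by four in the example is for illustration only, since the circuit divides by 16 once after both dimensions.

```python
def interp_1d_mpeg(row, half_pel):
    """Return 4*Fi for one row: nine input pels in half-pel mode, eight in integer-pel mode."""
    if not half_pel:
        return [4 * x for x in row]                      # integer pel: Fi = xi
    return [2 * row[i] + 2 * row[i + 1]                  # half pel: Fi = (xi + xi+1) / 2
            for i in range(len(row) - 1)]


# Nine input pels yield eight half-pel outputs.
print([v >> 2 for v in interp_1d_mpeg([10, 20, 30, 40, 50, 60, 70, 80, 90], True)])
# [15, 25, 35, 45, 55, 65, 75, 85]
```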
In H.261 operation, the prediction filters formatter 501 merely ensures that data is presented to the first one-dimensional prediction filter 502 in the correct order. This requires a three-stage shift register, the first stage being connected to the input of the register 603, the second stage to the input of the register 602, and the third to the input of the register 601.
In MPEG operation, the operation is simpler. For half-pel interpolation, the prediction filters formatter 501 requires only a two-stage shift register. The first stage is connected to the input of the register 603, and the second stage to the input of the register 602. For integer pel interpolation, the prediction filters formatter 501 need only pass the current pel value to the input of the register 602.
In the H.261 mode, between the one-dimensional x-coordinate prediction filter 502 and the one-dimensional y-coordinate prediction filter 504, the dimension buffer 503 buffers data so that groups of three vertical pels are presented to the one-dimensional y-coordinate prediction filter 504. Therefore, no transposition occurs with the prediction filter system 400. The dimension buffer 503 must be large enough to hold two rows of eight pels each. The sequence in which pels are output from the dimension buffer 503 is illustrated in the following table.
Clock  Input Pixel  Output Pixel      Clock  Input Pixel  Output Pixel
1      0            55 (a)            17     16           7
2      1            56                18     17           F(0,8,16) (b)
3      2            57                19     18           F(1,9,17)
4      3            58                20     19           F(2,10,18)
5      4            59                21     20           F(3,11,19)
6      5            60                22     21           F(4,12,20)
7      6            61                23     22           F(5,13,21)
8      7            62                24     23           F(6,14,22)
9      8            63                25     24           F(7,15,23)
10     9            0                 26     25           F(8,16,24)
11     10           1                 27     26           F(9,17,25)
12     11           2                 28     27           F(10,18,26)
13     12           3                 29     28           F(11,19,27)
14     13           4                 30     29           F(12,20,28)
15     14           5                 31     30           F(13,21,29)
16     15           6                 32     31           F(14,22,30)
a. Last row of pixels from previous block or invalid data if there was no previous block (or there was a long gap between blocks).
b. F(·,·,·) indicates the function of the H.261 filter equation (1) applied to the three indicated pels.
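By way of illustration only, the H.261-mode behaviour of the dimension buffer 503 can be modelled with the following sketch (Python; names hypothetical, not part of the disclosure). A two-row delay of sixteen pels allows vertical triples to be presented once the third row begins to arrive, matching the F(0,8,16), F(1,9,17), ... sequence above; only the six interior rows are modelled, the first and last rows being handled by the reset mechanism described earlier.

```python
from collections import deque

def dimension_buffer_h261(pels):
    """pels: pel values of a block in row order, eight per row; yields vertical triples."""
    delay = deque(maxlen=16)                 # holds the two most recent rows of eight pels
    for pel in pels:
        if len(delay) == 16:
            yield delay[0], delay[8], pel    # pel two rows above, pel one row above, current pel
        delay.append(pel)


triples = list(dimension_buffer_h261(range(64)))   # pel index i of an 8x8 block, in row order
print(triples[0], triples[1])                      # (0, 8, 16) (1, 9, 17)
```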
In MPEG operation, the one-dimensional y-coordinate prediction filter 504 requires only two pels at a time. Therefore, the dimension buffer 503 needs only to buffer one row of eight pels.
It is worth noting that after data has passed through the one-dimensional x-coordinate prediction filter 502 there will only ever be eight pels in a row, because the filtering operation converts nine-pel rows into eight-pel rows. "Lost" pels are replaced by gaps in the data stream. When performing half-pel interpolation, the one-dimensional x-coordinate prediction filter 502 inserts a gap at the end of each row of eight pels; the one-dimensional y-coordinate prediction filter 504 inserts eight gaps at the end of a block.
During MPEG operation, predictions may be formed from either an earlier frame, a later frame, or an average of the two. Predictions formed from an earlier frame are termed forward predictions, and those formed from a later frame are termed backward predictions. The prediction filters adder 403 determines whether forward predictions, backward predictions, or both are being used to predict values. The prediction filters adder 403 then either passes through the forward or backward predictions or the average of the two, rounded toward positive infinity.
The state variables fwd_on and bwd_on determine whether forward or backward prediction values are used, respectively. At any time, both, neither, or either of these state variables may be active. At start-up or if there is a gap when no valid data is present at the inputs of the prediction filters adder 403, the prediction filters adder 403 enters a state where neither state variable is active.
The prediction filters adder 403 activates or deactivates the state variables fwd_on and bwd_on based on four flags or signals. These flags or signals are fwd_ima_twin, fwd_p_num, bwd_ima_twin, and bwd_p_num, and are necessary because sequences of backward and forward prediction blocks can get out of sequence at the input to the prediction filters adder 403.
The prediction mode, represented by the state variables fwd_on and bwd_on, is determined as follows: (1) If a forward prediction block is present and fwd_ima_twin is active, then the forward prediction block stalls until a backward prediction block arrives with bwd_ima_twin set. The fwd_on and bwd_on state variables are then activated, and the prediction filters adder 403 averages the forward prediction block and the backward prediction block.
(2) Likewise, if a backward prediction block is present and bwd_ima_twin is active, then the backward prediction block stalls until a forward prediction block arrives with fwd_ima_twin set. The fwd_on and bwd_on state variables are then activated, and the prediction filters adder 403 averages the forward prediction block and the backward prediction block.
(3) If a forward prediction block is present but fwd_ima_twin is inactive, then fwd_p_num is examined by the prediction filters adder 403.
(4) The ima_twin and p_num signals are examined a clock cycle before the prediction block data.
The prediction adder 13 forms the predicted frame by adding the data from the prediction filter system 400 to the error data. To compensate for the delay from the input through the address generator, DRAM interface, and prediction filter system 400, the error data passes through a 256-word first-in, first-out buffer (FIFO) before reaching the prediction adder 13.
The prediction adder 13 also includes a mechanism to detect mismatches in the data arriving from the FIFO and the prediction filter system 400. In theory, the amount of data from the prediction filter system 400 should exactly correspond to the amount of data from the FIFO which involves prediction. In the event of a serious malfunction, the prediction adder 13 will attempt to recover.
Where the end of the data from the prediction filter system 400 is detected before the end of the data from the FIFO, the remainder of the data from the FIFO continues to the output of the prediction adder 13 unchanged. On the other hand, if the data from the prediction filter system 400 is longer than the data from the FIFO, the input to the prediction adder 13 from the FIFO is stalled until all the extra data from the prediction filter system 400 has been accepted and discarded.
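By way of illustration only, this recovery behaviour can be summarised in the following sketch (Python; names hypothetical, not part of the disclosure; saturation of the sums is omitted). Prediction data and FIFO error data are added pairwise; if the prediction ends first, the remaining error data passes through unchanged, and any surplus prediction data is accepted and discarded.

```python
from itertools import zip_longest

def prediction_add(error_fifo, prediction):
    """error_fifo, prediction: iterables of pel values for one block (either may be shorter)."""
    out = []
    for err, pred in zip_longest(error_fifo, prediction):
        if err is None:
            continue                                     # surplus prediction data: accept and discard
        out.append(err if pred is None else err + pred)  # add prediction, or pass error data through
    return out


print(prediction_add([1, 2, 3, 4], [10, 20]))    # [11, 22, 3, 4]  (prediction data ends early)
print(prediction_add([1, 2], [10, 20, 30, 40]))  # [11, 22]        (extra prediction data discarded)
```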
While the invention has been particularly shown and described with reference to a preferred embodiment and alterations thereto, it would be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the scope of the invention.
Reference is hereby directed to our parent application GB 2 287 850 which claims other aspects of the prediction filter described herein.

Claims (3)

1. The prediction filter, comprising: a first register; a second register; a first multiplexer operatively connected to said second register by a line; a first summing circuit operatively connected to said first register by a line and operatively connected to said first multiplexer by a line; a third register operatively connected to said first summing circuit by a line; a fourth register; a second multiplexer operatively connected to said fourth register by a line; a fifth register operatively connected to said second multiplexer by a line; a second summing circuit operatively connected to said third register by a line and operatively connected to said fifth register by a line; and a sixth register operatively connected to said second summing circuit by a line.
2. A prediction filter according to claim 1, wherein at least one said operatively connecting line comprises a two wire interface.
3. A prediction filter according to claim 1 or 2, wherein said prediction filter accepts MPEG encoded information for performing forward and backward prediction operations thereon.
GB9807476A 1994-03-24 1995-03-23 Prediction filter Expired - Lifetime GB2321566B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB9405914A GB9405914D0 (en) 1994-03-24 1994-03-24 Video decompression
GB9505942A GB2287850B (en) 1994-03-24 1995-03-23 Prediction filter

Publications (3)

Publication Number Publication Date
GB9807476D0 GB9807476D0 (en) 1998-06-10
GB2321566A true GB2321566A (en) 1998-07-29
GB2321566B GB2321566B (en) 1998-10-28

Family

ID=26304582

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9807476A Expired - Lifetime GB2321566B (en) 1994-03-24 1995-03-23 Prediction filter

Country Status (1)

Country Link
GB (1) GB2321566B (en)

Also Published As

Publication number Publication date
GB9807476D0 (en) 1998-06-10
GB2321566B (en) 1998-10-28

Similar Documents

Publication Publication Date Title
US5625571A (en) Prediction filter
KR100542624B1 (en) Selective compression network in an mpeg compatible decoder
KR960010195B1 (en) Image coding/decoding apparatus
US5504823A (en) Image data partitioning circuit for parallel image decoding system
KR950009680B1 (en) Image decoder of image compression and decompression system
JP3302939B2 (en) Video signal decompressor for independently compressed even and odd field data
EP0611511A1 (en) Video signal decompression apparatus for independently compressed even and odd field data----------------------------------------.
CZ282863B6 (en) Television receiver with a standard resolution capability for reception of a video television signal of high resolution
JPH11266460A (en) Video information processing circuit
KR19980068686A (en) Letter Box Processing Method of MPEG Decoder
KR960010487B1 (en) Sequential Scanning Image Format Converter Using Motion Vector
US20020141499A1 (en) Scalable programmable motion image system
JP2002521976A5 (en)
JPH11164322A (en) Aspect ratio converter and its method
EP0690632A1 (en) Digital decoder for video signals and video signal digital decoding method
KR100251549B1 (en) Digital image decoding device with blocking effect cancellation
US6507673B1 (en) Method and apparatus for video encoding decision
KR100287866B1 (en) Vertical image format conversion device and digital receiving system using same
US20090296822A1 (en) Reduced Memory Mode Video Decode
GB2296618A (en) Digital video decoding system requiring reduced memory space
GB2321566A (en) Prediction filter for use in decompressing MPEG coded video
JPH06113265A (en) Motion compensation predictor
KR20000006455A (en) Video reproducing apparatus and reproducing method
KR960002047B1 (en) HDTV receiver with 525-line progressive scan monitor display and format conversion method
JP2005508100A (en) Scalable programmable video system

Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)

Free format text: REGISTERED BETWEEN 20100408 AND 20100414

PE20 Patent expired after termination of 20 years

Expiry date: 20150322