WO2004014083A1 - Method and apparatus for performing multiple description motion compensation using hybrid predictive codes - Google Patents

Info

Publication number
WO2004014083A1
WO2004014083A1 (application PCT/IB2003/003436)
Authority
WO
WIPO (PCT)
Prior art keywords
frames
sequence
sub
encoded
encoder
Prior art date
Application number
PCT/IB2003/003436
Other languages
French (fr)
Inventor
Mihaela Van Der Schaar
Deepak D.S. Turaga
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V.
Priority to JP2004525701A (published as JP2005535219A)
Priority to EP03766578A (published as EP1527607A1)
Priority to AU2003249461A (published as AU2003249461A1)
Priority to US10/523,434 (published as US20060093031A1)
Publication of WO2004014083A1

Classifications

    • H: ELECTRICITY; H04: ELECTRIC COMMUNICATION TECHNIQUE; H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/573: Motion compensation with multiple frame prediction using two or more reference frames in a given prediction direction
    • H04N19/12: Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
    • H04N19/162: Adaptive coding controlled by user input
    • H04N19/164: Adaptive coding controlled by feedback from the receiver or from the transmission channel
    • H04N19/172: Adaptive coding characterised by the coding unit, the unit being an image region, e.g. a picture, frame or field
    • H04N19/30: Coding using hierarchical techniques, e.g. scalability
    • H04N19/37: Hierarchical techniques with arrangements for assigning different transmission priorities to video input data or to video coded data
    • H04N19/39: Hierarchical techniques involving multiple description coding [MDC], i.e. with separate layers being structured as independently decodable descriptions of input picture data
    • H04N19/587: Predictive coding involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence
    • H04N19/61: Transform coding in combination with predictive coding
    • H04N19/89: Pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Discrete Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

An improved multiple description coding (MDC) method and apparatus is provided which extends multi-description motion compensation (MDMC) by allowing for multi-frame prediction and is not limited to only I and P frames. Further, the coding method of the invention extends MDMC for use with any conventional predictive codec, such as, for example, MPEG2/4 and H.26L. The improved MDC permits the use of any conventional predictive coder for use as a top and bottom predictive encoder. Further, the top and bottom predictive coders can advantageously include B-frames and multiple prediction motion compensation. Still further, any of the top, middle and bottom predictive encoders can be a scalable encoder (e.g., FGS-like or data-partitioning like where the motion vectors (MVs) are sent first, temporal scalability etc.).

Description

METHOD AND APPARATUS FOR PERFORMING MULTIPLE DESCRIPTION MOTION COMPENSATION USING HYBRID PREDICTIVE CODES
The present invention relates generally to multiple description coding (MDC) of data, speech, audio, images, video and other types of signals for transmission over a network or other type of communication medium.
A large fraction of the information that flows across today's networks is useful even in a degraded condition. Examples include speech, audio, still images and video. When this information is subject to packet losses, retransmission may be impossible due to real-time constraints. Superior performance with respect to total transmitted rate, distortion, and delay may sometimes be achieved by adding redundancy to the bit stream rather than repeating lost packets.
One way to add redundancy to a bit stream is through multiple description coding (MDC), wherein the data is broken into several streams with some redundancy among the streams. When all the streams are received, one can guarantee low distortion at the expense of a slightly higher bit rate than a system designed purely for compression. On the other hand, when only some of the streams are received, the quality of the reconstruction degrades gracefully, which is very unlikely to happen with a system designed purely for compression. Unlike multi-resolution or layered source coding, there is no hierarchy of descriptions; multiple description coding is thus suitable for erasure channels or packet networks without priority provisions.
Multiple description coding can be implemented in a number of ways. One way is to split an incoming video stream into an arbitrary subset of channels by collecting the odd and even frame sequences separately at the encoder and coding the resultant temporally sub-sampled sequences independently. Upon receiving one of the sub-sampled sequences at the decoder, the video stream can be decoded at half the frame rate. Due to the correlated nature of the video stream, receiving only one of the sub-sampled sequences allows for the recovery of intermediate frames using motion compensated error concealment techniques. This technique is described in greater detail in Wenger et al., "Error resilience support in H.263+," IEEE Transactions on Circuits and Systems for Video Technology, pp. 867-877, November 1998.
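A minimal sketch of this odd/even temporal splitting is shown below; it assumes frames are numbered from 1, and the function name and use of plain Python lists are illustrative only.

```python
# Minimal sketch (illustrative, not from the patent text): split a frame sequence
# into two temporally sub-sampled descriptions that can be coded independently.

def split_into_descriptions(frames):
    """Return (odd_description, even_description); frames[0] is frame 1 (odd)."""
    odd_description = frames[0::2]   # frames 1, 3, 5, ...
    even_description = frames[1::2]  # frames 2, 4, 6, ...
    return odd_description, even_description

if __name__ == "__main__":
    frames = [f"frame_{i}" for i in range(1, 9)]
    odd, even = split_into_descriptions(frames)
    print(odd)   # ['frame_1', 'frame_3', 'frame_5', 'frame_7']
    print(even)  # ['frame_2', 'frame_4', 'frame_6', 'frame_8']
```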
To achieve error resilience, Wang and Lin, "Error resilient video coding using multiple description motion compensation," IEEE Transactions on Circuits and Systems for Video Technology, vol. 12, no. 6, pp. 438-452, June 2002, describe one method for implementing multiple description coding. In accordance with this approach, temporal predictors allow the encoder to use both the past even and odd frames while encoding, thus creating a mismatch between the encoder and the decoder when only one description is received at the decoder. The mismatch error is explicitly encoded to overcome this problem. The main benefit of allowing the encoder to use both the odd and even frame sequences for prediction is in terms of coding efficiency. By changing the temporal filter taps, the amount of redundancy can be controlled. The method disclosed provides reasonable flexibility between the amount of redundancy and the error resilience.
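The toy sketch below illustrates the general idea of a central temporal predictor with adjustable filter taps; the two-tap form, the tap value a1 and the use of NumPy arrays are assumptions for illustration and are not the exact formulation of the cited paper.

```python
import numpy as np

# Toy sketch (assumption: not the exact formulation of Wang and Lin) of a
# two-tap central temporal predictor. The prediction of the current frame mixes
# the motion-compensated reconstructions of the two previous frames, which come
# from different descriptions; the tap a1 controls how strongly the predictor
# relies on the immediately preceding frame, and hence how much mismatch remains
# when only one description is received.

def central_prediction(recon_prev, recon_prev2, a1=0.6):
    """Predict the current frame from the two previous reconstructed frames."""
    a2 = 1.0 - a1
    return a1 * recon_prev + a2 * recon_prev2

# a1 = 1.0: rely only on the previous frame (least redundancy, largest mismatch risk);
# a1 = 0.5: lean equally on both descriptions (more redundancy, smaller mismatch).
prev = np.full((4, 4), 100.0)
prev2 = np.full((4, 4), 80.0)
print(central_prediction(prev, prev2, a1=1.0)[0, 0])  # 100.0
print(central_prediction(prev, prev2, a1=0.5)[0, 0])  # 90.0
```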
A drawback of the approach of Wang and Lin is that it is limited to only I and P frames (no B-frames). A further drawback of the approach is that it does not allow for multi-frame prediction like that employed in H.26L. These drawbacks limit the coding efficiency of MDMC and also require full proprietary implementations instead of using available codec modules.
The invention provides an improved multiple description coding (MDC) method and apparatus which overcomes the drawbacks described above. Specifically, the coding method of the invention extends multi-description motion compensation (MDMC) by allowing for multi-frame prediction and is not limited to only I and P frames. Further, the coding method of the invention extends MDMC for use with any conventional predictive codec, such as, for example, MPEG2/4 and H.26L.
According to a first aspect of the invention, there is provided an improved MDMC encoder including three predictive coders, i.e., a top, middle and bottom coder. Input frames are supplied to the encoder as three separate inputs. The input frames are supplied to a central encoder. In addition, the input frames are divided or split into two sub-streams of frames, a first sub-stream comprising only the odd frames and a second sub-stream comprising only the even frames. The first sub-stream comprised of odd frames is provided as input to be encoded by the top encoder to yield an encoded odd frame sequence, and the second sub-stream comprised of even frames is provided as input to be encoded by the bottom encoder to yield an encoded even frame sequence. It is noted that other embodiments may divide the frames using different criteria such as, for example, an unbalanced division where two out of every three frames are encoded by the top encoder and every third frame is encoded by the bottom encoder. The original undivided input stream of frames is applied to the central encoder, which computes the prediction of the odd frames from the even frames. Additionally, the central encoder separately computes the prediction of the even frames from the odd frames. Prediction residuals are then computed between the central encoder and the first and second side encoders, respectively. The MDMC encoder of the invention outputs the first computed prediction residual, corresponding to the prediction of the even frames, along with the output of the top encoder, and outputs the second computed prediction residual, corresponding to the prediction of the odd frames, along with the output of the bottom encoder.
According to a second aspect of the invention there is provided a method of encoding a video signal representing a sequence of frames, the method comprising splitting the sequence of frames into a first sub-sequence and a second sub-sequence, applying the first sub-sequence to a first side encoder, applying the second sub-sequence to a second side encoder, applying the original unsplit sequence of frames to a central encoder, computing a first prediction residual between the output of the first side encoder and the central encoder, computing a second prediction residual between the output of the second side encoder and the central encoder, combining the first prediction residual and the output of the first side encoder as a first data sub-stream, combining the second prediction residual and the output of the second side encoder as a second data sub-stream, and separately transmitting the first and second data sub-streams.
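A compact sketch of this encoding method is given below. The encode() and predict() helpers are hypothetical stand-ins for conventional predictive coding and motion-compensated cross-parity prediction, and the residual step simply mirrors equations (1) and (2) introduced later in the description.

```python
# Hedged sketch of the MDMC encoding method; encode()/predict() are hypothetical
# stand-ins for real side and central encoders (conventional predictive codecs
# such as MPEG-x or H.26x).

def encode(frames):
    """Stand-in for a conventional predictive encoder."""
    return [("coded", f) for f in frames]

def predict(targets, references):
    """Stand-in for motion-compensated prediction of one parity from the other."""
    motion_vectors = [("mv", t) for t in targets]
    predictions = [("pred", t) for t in targets]
    return predictions, motion_vectors

def mdmc_encode(frames):
    odd, even = frames[0::2], frames[1::2]           # split into odd/even sub-sequences

    coded_odd = encode(odd)                          # first (top) side encoder output (211)
    coded_even = encode(even)                        # second (bottom) side encoder output (212)

    pred_even, mv_1 = predict(even, odd)             # central prediction of even frames (215) and MVs (214)
    pred_odd, mv_2 = predict(odd, even)              # central prediction of odd frames (217) and MVs (216)

    # Pair each central prediction with the corresponding side-encoded sequence;
    # on real pixel data these pairs would be differenced, cf. equations (1) and (2).
    residual_1 = list(zip(pred_even, coded_odd))
    residual_2 = list(zip(pred_odd, coded_even))

    sub_stream_1 = {"side": coded_odd, "mv": mv_1, "residual": residual_1}   # sub-stream 245
    sub_stream_2 = {"side": coded_even, "mv": mv_2, "residual": residual_2}  # sub-stream 255
    return sub_stream_1, sub_stream_2                # transmitted separately

streams = mdmc_encode([f"f{i}" for i in range(1, 9)])
print(len(streams[0]["side"]), len(streams[1]["side"]))  # 4 4
```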
Advantages of the invention include:
(1) Any conventional predictive coder may be used for the top and bottom encoders. Further, the top and bottom predictive coders can advantageously include B-frames and multiple prediction motion compensation.
(2) Any of the top, middle and bottom predictive encoders can be a scalable encoder (e.g., FGS-like, data-partitioning-like where the motion vectors (MVs) are sent first, temporally scalable, etc.). For example, in the case where only the middle encoder is a scalable encoder, the middle encoder will send only as much information as the channel allows. In an extreme case, when it is determined that the available bandwidth is very low, only the information encoded by the side coders will be transmitted. As additional bandwidth becomes available, as much of the mismatch signal as the channel allows will be transmitted using the scalable middle encoder (see the sketch following this list).
(3) To limit the complexity of the system, the prediction from the odd/even frame sequence of the current even/odd frame, used for determining the mismatch signal, can be made from B-frames.
(4) Rather than computing and coding the side prediction errors (i.e., the errors between the even frames and odd frames for the side coders), as is conventional, together with the mismatch between the side prediction error and the central error (i.e., the error between the current frame and the prediction from the previous two frames), the central error is computed instead.
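As referenced in item (2) above, the following is a minimal sketch of how a scalable middle encoder could adapt the amount of transmitted mismatch information to the channel; the bit budgets and the simple rate rule are illustrative assumptions, not taken from the text.

```python
# Illustrative sketch of advantage (2): when only the middle (central) encoder is
# scalable, its embedded mismatch bitstream is truncated to whatever rate remains
# after the side description has been sent. Numbers are made up for illustration.

def mismatch_bits_to_send(side_bits, scalable_mismatch_bits, channel_bits):
    """Bits of the scalable mismatch layer that fit after the side description."""
    return max(0, min(scalable_mismatch_bits, channel_bits - side_bits))

print(mismatch_bits_to_send(400_000, 250_000, 350_000))  # 0      -> side information only
print(mismatch_bits_to_send(400_000, 250_000, 500_000))  # 100000 -> partial mismatch signal
print(mismatch_bits_to_send(400_000, 250_000, 900_000))  # 250000 -> full mismatch signal
```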
Referring now to the drawings where like reference numbers represent corresponding parts throughout:
FIG. 1 illustrates an MDMC encoder according to one embodiment of the invention.
Multiple Description Coding (MDC) refers to one form of compression where the goal is to code an incoming signal into a number of separate bit-streams, where the multiple bit-streams are often referred to as multiple descriptions. These separate bit-streams have the property that they are all independently decodable from one another. Specifically, if a decoder receives any single bit-stream it can decode that bit-stream to produce a useful signal (without requiring access to any of the other bit-streams). MDC has the additional property that the quality of the decoded signal improves as more bit-streams are accurately received. For example, assume that a video is coded with MDC into a total of N streams. As long as a decoder receives any one of these N streams it can decode a useful version of the video. If the decoder receives two streams it can decode an improved version of the video as compared to the case of only receiving one of the streams. This improvement in quality continues until the receiver receives all N of the streams, in which case it can reconstruct the maximum quality.
There are a number of different approaches to achieve MDC coding of video. One approach is to independently code different frames into different streams. For example, each frame of a video sequence may be coded as a single frame (independently of the other frames) using only intra-frame coding, e.g. JPEG, JPEG-2000, or any of the video coding standards (e.g. MPEG-1/2/4, H.26-1/3) using only I-frame encoding. Then different frames can be sent in the different streams. For example, all the even frames may be sent in stream 1 and all the odd frames may be sent in stream 2. Because each of the frames is independently decodable from the other frames, each of the bit-streams is also independently decodable from the other bit-stream. This simple form of MDC video coding has the properties described above, but it is not very efficient in terms of compression because of the lack of inter-frame coding.
Before describing Fig. 1 in detail, we recall some definitions concerning the hierarchical arrangement of the pixels within a digitized picture and the prediction strategy used in the MPEG2 standard. Both luminance and chrominance samples (pixels) are grouped into blocks, each made of an 8×8 matrix (8 rows of 8 pixels each); a certain number of luminance and chrominance blocks (e.g. 4 blocks of luminance data and 2 corresponding blocks of chrominance data) form a macro-block; the digitised picture then comprises a matrix of macro-blocks whose size depends on the profile (i.e. on the resolution) chosen and on the power supply frequency: for instance, in the case of a 50 Hz power supply, the size can range from a minimum of 18×32 macro-blocks to a maximum of 72×120. Pictures can in turn have a frame structure (in which pixels of subsequent rows pertain to different fields) or a field structure (in which all pixels pertain to the same field). As a consequence, macro-blocks may have a frame or field structure as well. Pictures are in turn organized into groups of pictures, in which the first picture is always an I picture, which is followed by a number of B pictures (bi-directionally interpolated pictures, which have been submitted to forward or backward prediction or to both, 'forward' meaning that prediction is based on a previous reference picture and 'backward' meaning that prediction is based on a future reference picture) and then by a P picture which, being used for prediction of the B pictures, is to be encoded immediately after the I picture.
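As a small worked example of the macro-block arithmetic above, assuming 16×16 luminance macro-blocks (as implied by four 8×8 luminance blocks per macro-block); the picture sizes used are illustrative.

```python
# Worked example of the macro-block grid sizes quoted above, assuming 16x16
# luminance macro-blocks (four 8x8 luminance blocks per macro-block).

def macroblock_grid(height_px, width_px, mb_size=16):
    """Number of macro-block rows and columns needed to cover a picture."""
    rows = -(-height_px // mb_size)   # ceiling division
    cols = -(-width_px // mb_size)
    return rows, cols

print(macroblock_grid(288, 512))     # (18, 32)  -> the quoted minimum grid
print(macroblock_grid(576, 720))     # (36, 45)  -> a PAL-sized picture
print(macroblock_grid(1152, 1920))   # (72, 120) -> the quoted maximum grid
```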
Referring now to Fig. 1, a source, not shown, supplies the encoder 200 with a sequence of frames 201 (i.e., a frame structure) already arranged in the coding order, i.e., an order making the reference pictures available before the pictures utilizing them for prediction. The full frame sequence 201 is received by a motion estimation unit (not shown), which computes and emits one or more motion vectors for each macro-block in a picture being coded, together with a cost or error associated with each vector. The encoder 200 includes a first side encoder (side encoder 1) 202, a central encoder 204 and a second side encoder 206. The full frame sequence 201 is applied in its entirety to the central encoder 204. A first subset 210 of the full frame sequence 201, which in the present embodiment constitutes the odd frame sub-sequence of the full frame sequence 201, is applied to the first side encoder 202. A second subset 220 of the full frame sequence 201, which in the present embodiment constitutes the even frame sub-sequence of the full frame sequence 201, is applied to the second side encoder 206.
The prediction encoding operation will now be summarized.
A. First Side Encoder 202
Odd frame sub-sequence 210, which comprises a subset of input sequence 201, is applied to the first side encoder 202. It should be noted that the first side encoder 202 may be advantageously embodied as any conventional predictive codec (e.g., MPEG-1/2/4, H.26-1/3). The odd frame sub-sequence 210 is encoded by the first side encoder 202, which outputs encoded odd frame sub-sequence 211. Encoded odd frame sub-sequence 211 is included as one component to be output in the first data sub-stream 245. The encoded odd frame sub-sequence 211 is also supplied as an input to central encoder sub-module 230, to be described below.
B. Second Side Encoder 206
Even frame sub-sequence 220, which comprises a subset of input sequence 201, is applied to the second side encoder 206. It should be noted that the second side encoder 206, similar to the first side encoder 202, may also be advantageously embodied as any conventional predictive codec (e.g., MPEG-1/2/4, H.26-1/3). The even frame sub-sequence 220 is encoded by the second side encoder 206, which outputs encoded even frame sub-sequence 212. The encoded even frame sub-sequence 212 is included as one component to be output in the second data sub-stream 255. The encoded even frame sub-sequence 212 is also supplied as an input to central encoder sub-module 232, to be described below.
C. Central Encoder 204
Full frame sequence 201 is applied to the central encoder 204.
Central encoder sub-module 250 computes a first set of motion vectors 214 and also computes and encodes the even frame prediction sequence 215, which constitutes the prediction of even frames from the odd frames of input sequence 201. The central encoder sub-module 250 outputs the even frame prediction sequence 215 and the first motion vector sequence 214, both of which are supplied as input to central encoder sub-module 230.
Central encoder sub-module 260 computes a second set of motion vectors 216 and also computes and encodes the odd frame prediction sequence 217, which constitutes the prediction of odd frames from the even frames of input sequence 201. The central encoder sub-module 260 outputs the odd frame prediction sequence 217 and the second motion vector sequence 216, both of which are supplied as input to central encoder sub-module 232. Central encoder sub-module 230 performs two functions or processes. A first process is directed to encoding the first set of motion vectors 214 received from sub-module 250 to output a first set of encoded motion vectors 218. The second function or process is directed to computing a first prediction residual 221, which may be computed as:
First prediction residual = e_c - e_s (1), where e_c is the even frame prediction sequence 215 and e_s is the encoded odd frame sub-sequence 211.
The central encoder sub-module 230 output includes the encoded first prediction residual 221 along with the first set of coded motion vectors 218. These outputs are combined with the encoded odd frame sequence 211 (Point A) and collectively output as the first data sub-stream 245.
Similarly, the second prediction residual is computed for inclusion in the second data sub-stream 255 as follows:
Second prediction residual = e_c - e_s (2), where e_c is the odd frame prediction sequence 217 and e_s is the encoded even frame sub-sequence 212.
The central encoder sub-module 232 output includes the encoded second prediction residual 222 along with the second set of coded motion vectors 219. These outputs are combined with the encoded even frame sequence 212 (Point B) and output as the second data sub-stream 255.
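The relationship expressed by equations (1) and (2) can be checked numerically with the tiny sketch below; the decoder behaviour it suggests (adding the received residual back to the side-encoded sequence to recover the central prediction) is an assumption for illustration and is not described explicitly above.

```python
import numpy as np

# Numeric illustration of equations (1) and (2): the transmitted residual is the
# difference e_c - e_s between a central prediction and the corresponding
# side-encoded sequence, so a receiver holding e_s and the residual can recover
# e_c exactly. (Decoder behaviour assumed for illustration only.)

e_c = np.array([12.0, 7.0, 9.0])   # central prediction (e.g. sequence 215 or 217)
e_s = np.array([10.0, 8.0, 9.0])   # side-encoded sequence (e.g. 211 or 212)

residual = e_c - e_s               # what the central encoder sub-module encodes
print(residual)                    # [ 2. -1.  0.]
print(e_s + residual)              # [12.  7.  9.]  -> e_c recovered
```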
The foregoing description of the preferred embodiments of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed, and obviously many modifications and variations are possible in light of the above teachings. Such modifications and variations that are apparent to a person skilled in the art are intended to be included within the scope of this invention as defined by the accompanying claims.

Claims

CLAIMS:
1. An encoding method for encoding an input frame sequence (201), said method comprising the steps of: a) encoding a first sub-sequence of frames (210) from said input frame sequence (201) to produce an encoded first sub-sequence of frames (211); b) encoding a second sub-sequence of frames (220) from said input frame sequence (201) to produce an encoded second sub-sequence of frames (212); c) computing a first predicted frame sequence (215) from said second subsequence of frames (220); d) computing a second predicted frame sequence (217) from said first subsequence of frames (210); e) computing a first set of motion vectors (214) from said first predicted frame sequence (215); f) computing a second set of motion vectors (216) from said second predicted frame sequence (217); g) computing a first prediction residual as an error difference between said first predicted frame sequence (215) and said encoded first sub-sequence of frames (211); h) computing a second prediction residual as an error difference between said second predicted frame sequence (217) and said encoded second sub-sequence of frames (212); i) encoding said first prediction residual, second prediction residual, said first set of motion vectors (214) and said second set of motion vectors (216); j) determining a network condition; k) scalably combining said encoded first prediction residual (218), said encoded first set of motion vectors (221) and said encoded first sub-sequence of frames (211) as a first data sub-stream (245) in accordance with said determined network condition;
l) scalably combining said encoded second prediction residual (219), said encoded second set of motion vectors (222) and said encoded second sub-sequence of frames (212) as a second data sub-stream (255) in accordance with said determined network condition; and m) independently transmitting said first and second data sub-streams (245, 255).
2. The method of Claim 1, wherein said determined network condition is a channel bandwidth determination.
3. The method of Claim 1, including a preliminary step of arranging said input frame sequence (201) in a predetermined coding order, prior to said step (a).
4. The method of Claim 1, wherein said first sub-sequence of frames (210) comprises only odd frames from said input frame sequence (201).
5. The method of Claim 1, wherein said second sub-sequence of frames (220) comprises only those even frames from said input frame sequence (201).
6. The method of Claim 1, wherein said second sub-sequence of frames (220) includes those frames from said input frame sequence (201) not included in said first sub-sequence of frames (210).
7. The method of Claim 1, wherein said first and second sub-sequence of frames (210, 220) are selected in accordance with a user preference.
8. The method of Claim 1, wherein said input frame sequence includes intraframes (I), predictive frames (P) and bi-directional frames (B).
9. An encoder 200 for encoding an input sequence of frames (201), said encoder (200) comprising: a) encoding a first sub-sequence of frames (210) from said input frame sequence (201) in a first side encoder (202); b) encoding a second sub-sequence of frames (220) from said input frame sequence (201) in a second side encoder (206); c) computing a first predicted frame sequence (215) from said second subsequence of frames (220) in a central encoder (204); d) computing a second predicted frame sequence (217) from said first subsequence of frames (210) in said central encoder (204); e) computing a first set of motion vectors (214) from said first predicted frame sequence (215) in said central encoder (204); f) computing a second set of motion vectors (216) from said second predicted frame sequence (217) in said central encoder (204); g) computing a first prediction residual as an error difference between said first predicted frame sequence (215) and said encoded first sub-sequence of frames (211) in said central encoder (204); h) computing a second prediction residual as an error difference between said second predicted frame sequence (217) and said encoded second sub-sequence of frames (212) in said central encoder (204); i) encoding said first prediction residual, second prediction residual, first set of motion vectors (214) and second set of motion vectors (216) in said central encoder (204); j) determining a network condition; k) scalably combining said encoded first prediction residual (218), said encoded first set of motion vectors (221) and said encoded first sub-sequence of frames (211) as a first data sub-stream (245) in accordance with said determined network condition;
l) scalably combining said encoded second prediction residual (219), said encoded second set of motion vectors (222) and said encoded second sub-sequence of frames (212) as a second data sub-stream (255) in accordance with said determined network condition; and m) independently transmitting said first and second data sub-streams (245, 255) from said encoder (200).
10. The encoder of Claim 9, wherein said first side encoder (202), said second side encoder (206) and said central encoder (204) are conventional predictive encoders.
11. The encoder 200 of Claim 10, wherein said first side encoder (202), said second side encoder (206) and said central encoder (204) are scalable encoders.
12. The encoder of Claim 10, wherein said conventional predictive encoders are encoders selected from the group of encoders including MPEG1, MPEG2, MPEG4, MPEG7, H.261, H.262, H.263, H.263+, H.263++ and H.26L encoders.
13. The encoder of Claim 9, wherein the encoder (200) is included within a telecommunication transmitter of a wireless network.
14. A system for encoding an input sequence of frames (201), the system comprising: means for encoding a first sub-sequence of frames (210) from said input frame sequence (201) to produce an encoded first sub-sequence of frames (211); means for encoding a second sub-sequence of frames (220) from said input frame sequence (201) to produce an encoded second sub-sequence of frames (212); means for computing a first predicted frame sequence (215) from said second subsequence of frames (220); means for computing a second predicted frame sequence (217) from said first subsequence of frames (210); means for computing a first set of motion vectors (214) from said first predicted frame sequence (215); means for computing a second set of motion vectors (216) from said second predicted frame sequence (217); means for computing a first prediction residual as an error difference between said first predicted frame sequence (215) and said encoded first sub-sequence of frames (211); means for computing a second prediction residual as an error difference between said second predicted frame sequence (217) and said encoded second sub-sequence of frames (212); means for encoding said first prediction residual, second prediction residual, said first set of motion vectors (214) and said second set of motion vectors (216); means for determining a network condition; means for scalably combining said encoded first prediction residual (218), said encoded first set of motion vectors (221) and said encoded first sub-sequence of frames (211) as a first data sub-stream (245) in accordance with said determined network condition; means for scalably combining said encoded second prediction residual (219), said encoded second set of motion vectors (222) and said encoded second sub-sequence of frames (212) as a second data sub-stream (255) in accordance with said determined network condition; and means for independently transmitting said first and second data sub-streams (245, 255).
15. The system of Claim 14, further including means for arranging said input frame sequence (201) in a predetermined coding order.
PCT/IB2003/003436 2002-07-31 2003-07-24 Method and apparatus for performing multiple description motion compensation using hybrid predictive codes WO2004014083A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2004525701A JP2005535219A (en) 2002-07-31 2003-07-24 Method and apparatus for performing multiple description motion compensation using hybrid prediction code
EP03766578A EP1527607A1 (en) 2002-07-31 2003-07-24 Method and apparatus for performing multiple description motion compensation using hybrid predictive codes
AU2003249461A AU2003249461A1 (en) 2002-07-31 2003-07-24 Method and apparatus for performing multiple description motion compensation using hybrid predictive codes
US10/523,434 US20060093031A1 (en) 2002-07-31 2003-07-24 Method and apparatus for performing multiple description motion compensation using hybrid predictive codes

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US39975502P 2002-07-31 2002-07-31
US60/399,755 2002-07-31
US46178003P 2003-04-10 2003-04-10
US60/461,780 2003-04-10

Publications (1)

Publication Number Publication Date
WO2004014083A1 true WO2004014083A1 (en) 2004-02-12

Family

ID=31498603

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2003/003436 WO2004014083A1 (en) 2002-07-31 2003-07-24 Method and apparatus for performing multiple description motion compensation using hybrid predictive codes

Country Status (6)

Country Link
EP (1) EP1527607A1 (en)
JP (1) JP2005535219A (en)
KR (1) KR20050031460A (en)
CN (1) CN1672421A (en)
AU (1) AU2003249461A1 (en)
WO (1) WO2004014083A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006054249A1 (en) 2004-11-17 2006-05-26 Koninklijke Philips Electronics, N.V. Robust wireless multimedia transmission in multiple in multiple out (mimo) system assisted by channel state information
JP2007529175A (en) * 2004-03-12 2007-10-18 トムソン ライセンシング Method for encoding interlaced digital video data
US7991055B2 (en) 2004-09-16 2011-08-02 Stmicroelectronics S.R.L. Method and system for multiple description coding and computer program product therefor
US8326049B2 (en) 2004-11-09 2012-12-04 Stmicroelectronics S.R.L. Method and system for the treatment of multiple-description signals, and corresponding computer-program product
US8897322B1 (en) * 2007-09-20 2014-11-25 Sprint Communications Company L.P. Enhancing video quality for broadcast video services
US9167266B2 (en) 2006-07-12 2015-10-20 Thomson Licensing Method for deriving motion for high resolution pictures from motion data of low resolution pictures and coding and decoding devices implementing said method

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7536299B2 (en) * 2005-12-19 2009-05-19 Dolby Laboratories Licensing Corporation Correlating and decorrelating transforms for multiple description coding systems
CN101420607B (en) * 2007-10-26 2010-11-10 华为技术有限公司 Method and apparatus for multi-description encoding and decoding based on frame
CN105103554A (en) * 2013-03-28 2015-11-25 华为技术有限公司 Method for protecting video frame sequence against packet loss
CN107027028B (en) * 2017-03-28 2019-05-28 山东师范大学 Random offset based on JND quantifies multiple description coded, decoded method and system
CN106961607B (en) * 2017-03-28 2019-05-28 山东师范大学 Time-domain lapped transform based on JND is multiple description coded, decoded method and system
CN110740380A (en) * 2019-10-16 2020-01-31 腾讯科技(深圳)有限公司 Video processing method and device, storage medium and electronic device
CN114640867A (en) * 2022-05-20 2022-06-17 广州万协通信息技术有限公司 Video data processing method and device based on video stream authentication

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001062010A1 (en) * 2000-02-15 2001-08-23 Microsoft Corporation System and method with advance predicted bit-plane coding for progressive fine-granularity scalable (pfgs) video coding

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001062010A1 (en) * 2000-02-15 2001-08-23 Microsoft Corporation System and method with advance predicted bit-plane coding for progressive fine-granularity scalable (pfgs) video coding

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
APOSTOLOPOULOS J G: "Error-resilient video compression through the use of multiple states", INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), vol. 3, 10 September 2000 (2000-09-10) - 13 September 2000 (2000-09-13), Vancouver, BC, Canada, pages 352 - 355, XP010529476 *
REIBMAN A R ET AL: "Transmission of multiple description and layered video over an EGPRS wireless network", INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), vol. 2, 10 September 2000 (2000-09-10) - 13 September 2000 (2000-09-13), Vancouver, BC, Canada, pages 136 - 139, XP010529942 *
WANG Y ET AL: "ERROR-RESILIENT VIDEO CODING USING MULTIPLE DESCRIPTION MOTION COMPENSATION", IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, IEEE INC. NEW YORK, US, vol. 12, no. 6, June 2002 (2002-06-01), pages 438 - 452, XP001114974, ISSN: 1051-8215 *
WENQING JIANG ET AL: "Multiple description speech coding for robust communication over lossy packet networks", MULTIMEDIA AND EXPO, 2000. ICME 2000. 2000 IEEE INTERNATIONAL CONFERENCE ON NEW YORK, NY, USA 30 JULY-2 AUG. 2000, PISCATAWAY, NJ, USA,IEEE, US, 30 July 2000 (2000-07-30), pages 444 - 447, XP010511492, ISBN: 0-7803-6536-4 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007529175A (en) * 2004-03-12 2007-10-18 トムソン ライセンシング Method for encoding interlaced digital video data
JP4721366B2 (en) * 2004-03-12 2011-07-13 トムソン ライセンシング Method for encoding interlaced digital video data
US7991055B2 (en) 2004-09-16 2011-08-02 Stmicroelectronics S.R.L. Method and system for multiple description coding and computer program product therefor
US8326049B2 (en) 2004-11-09 2012-12-04 Stmicroelectronics S.R.L. Method and system for the treatment of multiple-description signals, and corresponding computer-program product
US8666178B2 (en) 2004-11-09 2014-03-04 Stmicroelectronics S.R.L. Method and system for the treatment of multiple-description signals, and corresponding computer-program product
WO2006054249A1 (en) 2004-11-17 2006-05-26 Koninklijke Philips Electronics, N.V. Robust wireless multimedia transmission in multiple in multiple out (mimo) system assisted by channel state information
CN101065913B (en) * 2004-11-17 2011-08-10 皇家飞利浦电子股份有限公司 Robust wireless multimedia transmission in multiple input multiple output (mimo) system assisted by channel state information
US10270511B2 (en) 2004-11-17 2019-04-23 Koninklijke Philips N.V. Robust wireless multimedia transmission in multiple in multiple-out (MIMO) system assisted by channel state information
US9167266B2 (en) 2006-07-12 2015-10-20 Thomson Licensing Method for deriving motion for high resolution pictures from motion data of low resolution pictures and coding and decoding devices implementing said method
US8897322B1 (en) * 2007-09-20 2014-11-25 Sprint Communications Company L.P. Enhancing video quality for broadcast video services

Also Published As

Publication number Publication date
EP1527607A1 (en) 2005-05-04
CN1672421A (en) 2005-09-21
AU2003249461A1 (en) 2004-02-23
KR20050031460A (en) 2005-04-06
JP2005535219A (en) 2005-11-17

Similar Documents

Publication Publication Date Title
Girod et al. Feedback-based error control for mobile video transmission
US7103669B2 (en) Video communication method and system employing multiple state encoding and path diversity
RU2475998C2 (en) Multi-level structure of coded bitstream
US8130830B2 (en) Enhancement layer switching for scalable video coding
Van der Schaar et al. Multiple description scalable coding using wavelet-based motion compensated temporal filtering
JP6309463B2 (en) System and method for providing error resilience, random access, and rate control in scalable video communication
WO2007103889A2 (en) System and method for providing error resilience, random access and rate control in scalable video communications
KR20040106388A (en) Moving picture data code conversion/transmission method and device, code conversion/reception method and device
KR100952185B1 (en) System and method for fractional multi-description channel coding of video without drift using forward error correction code
WO2004014083A1 (en) Method and apparatus for performing multiple description motion compensation using hybrid predictive codes
US20060093031A1 (en) Method and apparatus for performing multiple description motion compensation using hybrid predictive codes
Zhang et al. Efficient error recovery for multiple description video coding
Boulgouris et al. Drift-free multiple description coding of video
Ouaret et al. Codec-independent scalable distributed video coding
WO2002019708A1 (en) Dual priority video transmission for mobile applications
Duong New h. 266/vvc based multiple description coding for robust video transmission over error-prone networks
Bai et al. Multiple description video coding using adaptive temporal sub-sampling
Wang et al. Robust multiple description distributed video coding using optimized zero-padding
Tagliasacchi et al. Robust wireless video multicast based on a distributed source coding approach
KR100690710B1 (en) How to send a video
Petrazzuoli et al. Versatile layered depth video coding based on distributed video coding
Dinh Distributed Coding Based Multiple Descriptions for Robust Video Transmission over Error-Prone Networks
Conci et al. Multiple description video coding by coefficients ordering and interpolation
Jiang et al. New multiple description scalable video coding based on redundant wavelet
Stoufs et al. Robust motion vector coding and error concealment in MCTF-based video coding

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PH PL PT RO RU SC SD SE SG SK SL TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2003766578

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2004525701

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 1020057001444

Country of ref document: KR

ENP Entry into the national phase

Ref document number: 2006093031

Country of ref document: US

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 10523434

Country of ref document: US

Ref document number: 20038181967

Country of ref document: CN

WWP Wipo information: published in national office

Ref document number: 1020057001444

Country of ref document: KR

WWP Wipo information: published in national office

Ref document number: 2003766578

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 10523434

Country of ref document: US

WWW Wipo information: withdrawn in national office

Ref document number: 2003766578

Country of ref document: EP