
CN101631247A - Moving picture coding/decoding method and device - Google Patents


Info

Publication number
CN101631247A
CN101631247A (application CN200910145809A)
Authority
CN
China
Prior art keywords
decoded
block
luminance
color difference
offset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN200910145809A
Other languages
Chinese (zh)
Other versions
CN101631247B (en)
Inventor
中条健
古藤晋一郎
菊池义浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Corp
Original Assignee
Toshiba Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2002340042A (JP4015934B2)
Application filed by Toshiba Corp
Publication of CN101631247A
Application granted
Publication of CN101631247B
Anticipated expiration
Status: Expired - Lifetime

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/107 Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 Incoming video signal characteristics or properties
    • H04N19/137 Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/139 Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/423 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, characterised by memory arrangements
    • H04N19/46 Embedding additional information in the video signal during the compression process
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/573 Motion compensation with multiple frame prediction using two or more reference frames in a given prediction direction
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • H04N19/70 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Color Television Systems (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

A moving picture coding/decoding device includes an image memory/prediction image generator (108) for selecting one combination from a plurality of combinations, prepared in advance, of at least one reference image number and a prediction parameter, and for generating a prediction image signal (212) according to the reference image number and the prediction parameter of the selected combination. The device uses a variable-length encoder (111) to encode orthogonal transform coefficient information (210) concerning a prediction error signal of the prediction image signal (212) with respect to an input moving picture signal (100), mode information (213) indicating a coding mode, motion vector information (214), and index information (215) indicating the combination of the selected reference image number and the prediction parameter.

Description

Dynamic image encoding/decoding method and device
This divisional application is based on Chinese invention patent application No. 03800757.6, filed on April 18, 2003, entitled "Dynamic image encoding/decoding method and device". More specifically, this application is a further division of divisional application No. 200610089952.0, filed on May 30, 2006, with the same title.
Technical field
The present invention relates to a video encoding/decoding method and apparatus for encoding/decoding a fading video and a dissolving (fade-out) video, and more particularly to a video encoding/decoding method and apparatus for encoding/decoding such videos with high efficiency.
Background technology
Motion-compensated interframe predictive coding is used as one of the coding modes in video encoding standard schemes such as ITU-T H.261, H.263, ISO/IEC MPEG-2, and MPEG-4. As the prediction model in motion-compensated interframe predictive coding, a model that exhibits the highest prediction efficiency when brightness does not change along the time axis is used. In the case of a fading video whose image brightness changes, no method is known so far that makes a correct prediction of the change in image brightness, for example when a normal image fades in from a black image. To maintain the image quality of a fading video, therefore, many bits are required.
To cope with this problem, in Japanese Patent No. 3166716, "Fade countermeasure video encoder and encoding method", a fading video portion is detected to change the bit allocation. More specifically, in the case of a fade-out video, many bits are allocated to the fade-out start portion where the brightness changes. The last portion of a fade-out generally becomes a monochrome image and hence can easily be encoded, so the number of bits allocated to this portion is reduced. This makes it possible to improve overall image quality without excessively increasing the total number of bits.
Japanese Patent No. 2938412, "Video luminance change compensation method, video encoding apparatus, video decoding apparatus, recording medium on which a video encoding or decoding program is recorded, and recording medium on which coded video data is recorded", proposes an encoding scheme that properly copes with fading video by compensating the reference image in accordance with two parameters: a luminance change amount and a contrast change amount.
In Thomas Wiegand and Bernd Girod, "Multi-frame Motion-Compensated Prediction for Video Transmission", Kluwer Academic Publishers, 2001, an encoding scheme based on a plurality of frame buffers is proposed. In this scheme, a prediction image is selectively generated from a plurality of reference frames held in the frame buffers, in an attempt to improve prediction efficiency.
According to the conventional techniques, however, many bits are required to encode a fading or dissolving video while maintaining high image quality. An improvement in coding efficiency therefore cannot be expected.
Summary of the invention
It is an object of the present invention to provide a video encoding/decoding method and apparatus capable of encoding a video whose luminance changes over time, such as a fading or dissolving video, and in particular of encoding such a video with high efficiency.
According to a first aspect of the present invention, there is provided a video encoding method of subjecting an input video signal to motion-compensated predictive coding by using a reference image signal representing at least one reference image and a motion vector between the input video signal and the reference image signal, comprising: selecting, for each block of the input video signal, one combination from a plurality of combinations each including at least one reference image number and a prediction parameter determined in advance for a reference image; generating a prediction image signal in accordance with the reference image number and prediction parameter of the selected combination; generating a prediction error signal representing an error between the input video signal and the prediction image signal; and encoding the prediction error signal, information of the motion vector, and index information indicating the selected combination.
According to a second aspect of the present invention, there is provided a video decoding method comprising: decoding coded data including a prediction error signal representing an error of a prediction image signal with respect to a video signal, motion vector information, and index information indicating a combination of at least one reference image number and a prediction parameter; generating a prediction image signal in accordance with the reference image number and prediction parameter of the combination indicated by the decoded index information; and generating a reproduced video signal by using the prediction error signal and the prediction image signal.
As described above, according to the present invention, a plurality of different prediction schemes are prepared by using combinations of a reference image number and a prediction parameter, or combinations of a designated reference image number with a plurality of corresponding prediction parameters. This makes it possible to generate a correct prediction image signal, on the basis of a prediction scheme with higher prediction efficiency, for a video signal such as a fading or dissolving video for which a general prediction scheme cannot generate a correct prediction image signal, and to encode such a video signal accordingly.
In addition, the video signal may be an image signal obtained for each frame of a progressive signal, an image signal obtained for each frame formed by merging two fields of an interlaced signal, or an image signal obtained for each field of an interlaced signal. When the video signal is a frame-based image signal, the reference image number indicates a frame-based reference image signal. When the video signal is a field-based image signal, the reference image number indicates a field-based reference image signal.
This makes it possible to generate a correct prediction image signal, on the basis of a prediction scheme with higher prediction efficiency, even for a video signal that contains both frame structures and field structures and for which a general prediction scheme for, e.g., a fading or dissolving video cannot generate a correct prediction image signal, and to encode such a video signal accordingly.
Furthermore, information of the reference image number or the prediction parameter itself is not sent from the encoding side to the decoding side; instead, index information indicating a combination of a reference image number and a prediction parameter is sent, or the reference image number is sent separately and index information indicating a combination of prediction parameters is sent. In the latter case, coding efficiency can be improved by sending the index information indicating the prediction parameters.
Description of drawings
Fig. 1 is a block diagram showing the arrangement of a video encoding apparatus according to the first embodiment of the present invention;
Fig. 2 is a block diagram showing the detailed arrangement of the frame memory/prediction image generator in Fig. 1;
Fig. 3 is a view showing an example of a table of combinations of reference frame numbers and prediction parameters used in the first embodiment;
Fig. 4 is a flow chart showing an example of the sequence of selecting a prediction scheme (a combination of a reference frame number and a prediction parameter) and determining a coding mode for each macroblock in the first embodiment;
Fig. 5 is a block diagram showing the arrangement of a video decoding apparatus according to the first embodiment;
Fig. 6 is a block diagram showing the detailed arrangement of the frame memory/prediction image generator in Fig. 5;
Fig. 7 is a view showing an example of a table of combinations of prediction parameters according to the second embodiment of the present invention, for the case where the number of reference frames is one and the reference frame number is sent as mode information;
Fig. 8 is a view showing an example of a table of combinations of prediction parameters according to the second embodiment, for the case where the number of reference frames is two and the reference frame number is sent as mode information;
Fig. 9 is a view showing an example of a table of combinations of reference image numbers and prediction parameters according to the third embodiment of the present invention, for the case where the number of reference frames is one;
Fig. 10 is a view showing an example of a table for the luminance signal only according to the third embodiment;
Fig. 11 is a view showing an example of the syntax of each block when index information is to be encoded;
Fig. 12 is a view showing a concrete example of a coded bit stream when a prediction image is to be generated by using one reference image;
Fig. 13 is a view showing a concrete example of a coded bit stream when a prediction image is to be generated by using two reference images;
Fig. 14 is a view showing an example of a table of reference frame numbers, reference field numbers, and prediction parameters according to the fourth embodiment of the present invention, for the case where the field to be encoded is a top field; and
Fig. 15 is a view showing an example of a table of reference frame numbers, reference field numbers, and prediction parameters according to the fourth embodiment, for the case where the field to be encoded is a bottom field.
Embodiment
Embodiments of the present invention will be described below with reference to the accompanying drawings.
[first embodiment]
(Encoding side)
Fig. 1 shows the arrangement of a video encoding apparatus according to the first embodiment of the present invention. A video signal 100 is input to the video encoding apparatus, for example, on a frame basis. The video signal 100 is input to a subtracter 101, which calculates the difference between the video signal 100 and a prediction image signal 212 to generate a prediction error signal. A mode selection switch 102 selects either the prediction error signal or the video signal 100. An orthogonal transformer 103 subjects the selected signal to an orthogonal transform, e.g., a discrete cosine transform (DCT), and generates orthogonal transform coefficient information, e.g., DCT coefficient information. The orthogonal transform coefficient information is quantized by a quantizer 104 and branched into two paths. One branch of the quantized orthogonal transform coefficient information 210 is guided to a variable-length encoder 111.
The other branch of the quantized orthogonal transform coefficient information 210 is subjected, by a dequantizer 105 and an inverse orthogonal transformer 106, to processing inverse to that of the quantizer 104 and the orthogonal transformer 103, so as to reconstruct the prediction error signal. Thereafter, an adder 107 adds the reconstructed prediction error signal to the prediction image signal 212 input through a switch 109, to generate a local decoded video signal 211. The local decoded video signal 211 is input to a frame memory/prediction image generator 108.
The frame memory/prediction image generator 108 selects one of a plurality of prepared combinations of a reference frame number and a prediction parameter. The linear sum of the video signal (local decoded video signal 211) of the reference frame indicated by the reference frame number of the selected combination is calculated in accordance with the prediction parameter of the selected combination, and an offset based on the prediction parameter is added to the resulting signal. By this operation, a reference image signal is generated, in this case on a frame basis. The frame memory/prediction image generator 108 then motion-compensates the reference image signal by using a motion vector, to generate the prediction image signal 212.
In this process, the frame memory/prediction image generator 108 generates motion vector information 214 and index information 215 indicating the selected combination of reference frame number and prediction parameter, and sends the information necessary for selecting a coding mode to a mode selector 110. The motion vector information 214 and the index information 215 are input to the variable-length encoder 111. The frame memory/prediction image generator 108 will be described in detail later.
The mode selector 110 selects a coding mode on a macroblock basis on the basis of prediction information P from the frame memory/prediction image generator 108, i.e., selects either the intraframe coding mode or the motion-compensated predictive interframe coding mode, and outputs switch control signals M and S.
In the intraframe coding mode, the switches 102 and 112 are switched to the A side by the switch control signals M and S, and the input video signal 100 is input to the orthogonal transformer 103. In the interframe coding mode, the switches 102 and 112 are switched to the B side by the switch control signals M and S. Accordingly, the prediction error signal from the subtracter 101 is input to the orthogonal transformer 103, and the prediction image signal 212 from the frame memory/prediction image generator 108 is input to the adder 107. A mode signal 213 is output from the mode selector 110 and input to the variable-length encoder 111.
The variable-length encoder 111 subjects the quantized orthogonal transform coefficient information 210, the mode information 213, the motion vector information 214, and the index information 215 to variable-length encoding. The variable-length codes generated by this operation are multiplexed by a multiplexer 114, and the resulting data is smoothed by an output buffer 115. Coded data 116 output from the output buffer 115 is sent out to a transmission system or storage system (not shown).
An encoding controller 113 controls an encoding unit 112. More specifically, the encoding controller 113 monitors the buffer occupancy of the output buffer 115 and controls encoding parameters, e.g., the quantization step size of the quantizer 104, so as to make the buffer occupancy constant.
(Frame memory/prediction image generator 108)
Fig. 2 shows the detailed arrangement of the frame memory/prediction image generator 108 in Fig. 1. Referring to Fig. 2, the local decoded video signal 211 input from the adder 107 in Fig. 1 is stored in a frame memory set 202 under the control of a memory controller 201. The frame memory set 202 has a plurality of (N) frame memories FM1 to FMN for temporarily holding the local decoded video signal 211 as reference frames.
A prediction parameter controller 203 holds in advance, as a table, a plurality of combinations of a reference frame number and a prediction parameter. On the basis of the video signal 100, the prediction parameter controller 203 selects a combination of the reference frame number of a reference frame and a prediction parameter to be used to generate the prediction image signal 212, and outputs index information 215 indicating the selected combination.
A multi-frame motion estimator 204 generates a reference image signal in accordance with the combination of reference frame number and index information selected by the prediction parameter controller 203. The multi-frame motion estimator 204 estimates the motion amount and the prediction error from this reference image signal and the input video signal 100, and outputs motion vector information 214 that minimizes the prediction error. A multi-frame motion compensator 205 performs motion compensation for each block in accordance with the motion vector by using the reference image signal selected by the multi-frame motion estimator 204, to generate the prediction image signal 212.
The memory controller 201 sets a reference frame number for the local decoded video signal of each frame, and stores each frame in one of the frame memories FM1 to FMN of the frame memory set 202. The frames are, for example, sequentially numbered starting from the frame closest to the input image. The same reference frame number may be set for different frames; in this case, for example, different prediction parameters are used. A frame near the input image is selected from the frame memories FM1 to FMN and sent to the prediction parameter controller 203.
(Table of combinations of reference frame numbers and prediction parameters)
Fig. 3 shows an example of the table of combinations of reference frame numbers and prediction parameters prepared in the prediction parameter controller 203. "Index" corresponds to a prediction image selectable for each block; in this case, there are eight types of prediction images. A reference frame number n is the number of a local decoded video used as a reference frame, and in this case indicates the number of the local decoded video corresponding to a frame n frames back.
When the prediction image signal 212 is generated by using the image signals of a plurality of reference frames stored in the frame memory set 202, a plurality of reference frame numbers are designated, and (the number of reference frames + 1) coefficients are designated as prediction parameters for each of the luminance signal (Y) and the color difference signals (Cb and Cr). In this case, as indicated by equations (1) to (3), letting n be the number of reference frames, n+1 prediction parameters Di (i = 1, ..., n+1) are prepared for the luminance signal Y, n+1 prediction parameters Ei (i = 1, ..., n+1) for the color difference signal Cb, and n+1 prediction parameters Fi (i = 1, ..., n+1) for the color difference signal Cr:
Y_t = Σ_{i=1}^{n} D_i · Y_{t-i} + D_{n+1}    (1)

Cb_t = Σ_{i=1}^{n} E_i · Cb_{t-i} + E_{n+1}    (2)

Cr_t = Σ_{i=1}^{n} F_i · Cr_{t-i} + F_{n+1}    (3)
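As a rough sketch, the per-pixel prediction of equations (1) to (3) is a weighted sum of reference blocks plus an offset. The function below is an illustrative assumption, not the patent's reference implementation; the function name, list-based block representation, and 8-bit clipping are choices made here for the example.

```python
# Minimal sketch of equations (1)-(3): weighted sum of n reference
# blocks plus one offset, applied pixel by pixel (assumed layout).

def predict_component(refs, params):
    """refs: list of n reference blocks (lists of pixel values),
    params: n weights followed by one offset (n+1 values)."""
    n = len(refs)
    assert len(params) == n + 1, "need one weight per reference plus an offset"
    weights, offset = params[:n], params[n]
    out = []
    for p in range(len(refs[0])):
        acc = sum(w * ref[p] for w, ref in zip(weights, refs))
        # clip to the valid 8-bit pixel range (an assumption of this sketch)
        out.append(min(255, max(0, acc + offset)))
    return out

# Luminance prediction with n = 2 reference frames (cf. equation (1)):
y_t1 = [100, 120, 140, 160]   # local decoded block, one frame back
y_t2 = [90, 110, 130, 150]    # two frames back
pred = predict_component([y_t1, y_t2], [2, -1, 0])  # extrapolation, offset 0
print(pred)  # → [110, 130, 150, 170]
```

With weights 2 and -1 this reproduces the two-frame extrapolation used for dissolving video in the table discussed next.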
This operation will be described in more detail with reference to Fig. 3. Referring to Fig. 3, the last numerical value of each set of prediction parameters represents an offset, and the preceding values represent weighting factors (prediction coefficients). For index 0, the number of prediction parameters is two (n = 1), the reference frame number is 1, and the prediction parameters are 1 and 0 for each of the luminance signal Y and the color difference signals Cr and Cb. Prediction parameters of 1 and 0 mean that the local decoded video signal corresponding to reference frame number 1 is multiplied by 1 and an offset of 0 is added. In other words, the local decoded video signal corresponding to reference frame number 1 becomes the reference image signal without any change.
For index 1, the local decoded video signals corresponding to reference frame numbers 1 and 2 are used as two reference frames. In accordance with the prediction parameters 2, -1, and 0 for the luminance signal Y, the local decoded video signal corresponding to reference frame number 1 is doubled, the local decoded video signal corresponding to reference frame number 2 is subtracted from the result, and an offset of 0 is added. That is, extrapolation prediction is performed from the local decoded video signals of two frames to generate the reference image signal. For the color difference signals Cr and Cb, since the prediction parameters are 1, 0, and 0, the local decoded video signal corresponding to reference frame number 1 is used as the reference image signal without any change. The prediction scheme corresponding to index 1 is especially effective for a dissolving video.
For index 2, in accordance with the prediction parameters 5/4 and 16, the local decoded video signal corresponding to reference frame number 1 is multiplied by 5/4 and an offset of 16 is added. For the color difference signals Cr and Cb, since the prediction parameter is 1, the color difference signals become the reference image signal without any change. This prediction scheme is especially effective for a video that fades in from a black frame.
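The three table entries just described can be written down and evaluated directly. The tuple encoding below (reference frame numbers, luma weights, luma offset) is an assumed representation for illustration, not the patent's table syntax, and only the luminance path is shown:

```python
# Hypothetical encoding of the first three Fig. 3 entries (luma only).
PARAM_TABLE = {
    0: ([1],    [1.0],       0),   # pass-through: ref frame 1 unchanged
    1: ([1, 2], [2.0, -1.0], 0),   # two-frame extrapolation (dissolve)
    2: ([1],    [5 / 4],    16),   # scale by 5/4, add 16 (fade-in from black)
}

def luma_reference(index, frames):
    """frames: dict mapping reference frame number -> list of luma samples."""
    ref_nums, weights, offset = PARAM_TABLE[index]
    size = len(frames[ref_nums[0]])
    return [sum(w * frames[r][p] for w, r in zip(weights, ref_nums)) + offset
            for p in range(size)]

frames = {1: [64, 80], 2: [60, 76]}
print(luma_reference(0, frames))  # → [64.0, 80.0]
print(luma_reference(1, frames))  # → [68.0, 84.0]
print(luma_reference(2, frames))  # → [96.0, 116.0]
```

Selecting a different index thus switches the prediction scheme per block without transmitting the weights and offsets themselves.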
In this manner, the reference image signal can be selected on the basis of a plurality of prediction schemes with different combinations of the number of reference frames to be used and prediction parameters. This allows this embodiment to properly cope with fading and dissolving videos, which have conventionally suffered image quality degradation for want of a correct prediction scheme.
(Sequence of selecting a prediction scheme and determining a coding mode)
Next, an example of a specific sequence in this embodiment for selecting a prediction scheme (a combination of a reference frame number and a prediction parameter) and determining a coding mode for each macroblock will be described with reference to Fig. 4.
First, a variable min_D is set to the maximum assumable value (step S101). LOOP1 (step S102) represents the repetition for selecting a prediction scheme in interframe encoding, and the variable i represents the value of "index" in Fig. 3. In this loop, in order to obtain the optimum motion vector for each prediction scheme, an evaluation value D of each index (each combination of a reference frame number and a prediction parameter) is calculated from the number of bits associated with the motion vector information 214 (the number of bits of the variable-length code output from the variable-length encoder 111 for the motion vector information 214) and the sum of absolute values of the prediction error, and the motion vector that minimizes the evaluation value D is selected (step S103). The evaluation value D is compared with min_D (step S104). If the evaluation value D is smaller than min_D, the evaluation value D is set as min_D, and the index i is assigned to min_i (step S105).
Calculate the estimated value D (step S106) of intraframe coding then.Estimated value D compare with min_D (step S107).If this relatively indicates min_D less than estimated value D, pattern MODE is defined as interframe encode, and the min_i assignment is to index information INDEX (step S108).If estimated value D is less, pattern MODE is defined as intraframe coding (step S109).In this case, estimated value D is set to have the estimated value of the figure place of identical quantization step.
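The bookkeeping of steps S101 to S109 can be sketched as follows (a schematic illustration only; the cost values passed in below are hypothetical placeholders, not values from the embodiment):

```python
def choose_mode(inter_costs, intra_cost):
    """inter_costs: list of evaluation values D, one per index i.
    Returns ('INTER', best_index) or ('INTRA', None), mimicking
    the min_D / min_i bookkeeping of FIG. 4."""
    min_d = float('inf')                 # step S101: maximum assumable value
    min_i = None
    for i, d in enumerate(inter_costs):  # LOOP1, steps S102-S105
        if d < min_d:
            min_d, min_i = d, i
    if min_d < intra_cost:               # steps S106-S109
        return ('INTER', min_i)
    return ('INTRA', None)

print(choose_mode([120, 95, 130, 110], 100))  # -> ('INTER', 1)
```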
(Decoding side)

A video decoding apparatus corresponding to the video coding apparatus shown in FIG. 1 will next be described. FIG. 5 shows the arrangement of the video decoding apparatus according to this embodiment. Coded data 300, sent out from the video coding apparatus shown in FIG. 1 and transmitted through a transmission system or storage system, is temporarily stored in an input buffer 301 and demultiplexed by a demultiplexer 302 for each frame on the basis of the syntax. The resulting data is input to a variable length decoder 303. The variable length decoder 303 decodes the variable-length code of each syntax element of the coded data 300 to reproduce quantized orthogonal transform coefficients, mode information 413, motion vector information 414, and index information 415.
Of the reproduced information, the quantized orthogonal transform coefficients are dequantized by a dequantizer 304 and inversely orthogonally transformed by an inverse orthogonal transformer 305. If the mode information 413 indicates the intraframe coding mode, a reproduction video signal is output from the inverse orthogonal transformer 305 and then output through an adder 306 as a reproduction video signal 310.

If the mode information 413 indicates the interframe coding mode, a prediction error signal is output from the inverse orthogonal transformer 305, and a mode selection switch 309 is turned on. A prediction image signal 412 output from a frame memory/prediction image generator 308 is added to the prediction error signal by the adder 306. As a result, the reproduction video signal 310 is output. The reproduction video signal 310 is stored in the frame memory/prediction image generator 308 as a reference image signal.

The mode information 413, motion vector information 414, and index information 415 are input to the frame memory/prediction image generator 308. The mode information 413 is also input to the mode selection switch 309, which is turned off in the intraframe coding mode and turned on in the interframe coding mode.
Like the frame memory/prediction image generator 108 on the coding side in FIG. 1, the frame memory/prediction image generator 308 holds, as a table, a plurality of prepared combinations of reference frame numbers and prediction parameters, and selects from the table the combination indicated by the index information 415. The video signal (reproduction video signal 310) of the reference frame indicated by the reference frame number of the selected combination is multiplied by the prediction parameter of the selected combination, and the offset based on the prediction parameter is added to the resulting signal. By this operation, a reference image signal is generated. The generated reference image signal is then subjected to motion compensation using the motion vector indicated by the motion vector information 414, thereby generating the prediction image signal 412.
(Frame memory/prediction image generator 308)

FIG. 6 shows the detailed arrangement of the frame memory/prediction image generator 308 in FIG. 5. Referring to FIG. 6, the reproduction video signal 310 output from the adder 306 in FIG. 5 is stored in a frame memory set 402 under the control of a memory controller 401. The frame memory set 402 has a plurality of (N) frame memories FM1 to FMN for temporarily holding the reproduction video signal 310 as reference frames.
A prediction parameter controller 403 holds, in advance, combinations of reference frame numbers and prediction parameters as a table, as in FIG. 3. On the basis of the index information 415 from the variable length decoder 303 in FIG. 5, the prediction parameter controller 403 selects the combination of the reference frame number of a reference frame and prediction parameters used for generating the prediction image signal 412. A plurality of multi-frame motion compensators 404 generate a reference image signal according to the combination of a reference frame number and index information selected by the prediction parameter controller 403, and perform motion compensation for each block on this reference image signal in accordance with the motion vector indicated by the motion vector information 414 from the variable length decoder 303 in FIG. 5, thereby generating the prediction image signal 412.
[Second Embodiment]

The second embodiment of the present invention will next be described with reference to FIGS. 7 and 8. Since the overall arrangements of the video coding apparatus and video decoding apparatus in this embodiment are almost the same as those in the first embodiment, only the differences from the first embodiment will be described.
This embodiment describes an example of a method of expressing prediction parameters on the assumption that a plurality of reference frame numbers are designated by mode information on a macroblock basis. Since a reference frame number is discriminated by the mode information of each macroblock, this embodiment uses tables of prediction parameters alone, as shown in FIGS. 7 and 8, instead of the tables of combinations of reference frame numbers and prediction parameters used in the first embodiment. That is, the index information does not indicate a reference frame number; only combinations of prediction parameters are designated.
The table in FIG. 7 shows an example of combinations of prediction parameters when the number of reference frames is one. As prediction parameters, (the number of reference frames + 1) parameters, i.e., two parameters (one weighting factor and one offset), are designated for each of the luminance signal (Y) and the color difference signals (Cb and Cr).

The table in FIG. 8 shows an example of combinations of prediction parameters when the number of reference frames is two. In this case, as prediction parameters, (the number of reference frames + 1) parameters, i.e., three parameters (two weighting factors and one offset), are designated for each of the luminance signal (Y) and the color difference signals (Cb and Cr). Such tables are prepared on both the coding side and the decoding side, each as in the first embodiment.
[Third Embodiment]

The third embodiment of the present invention will be described with reference to FIGS. 9 and 10. Since the overall arrangements of the video coding apparatus and video decoding apparatus in this embodiment are almost the same as those in the first embodiment, only the differences from the first and second embodiments will be described below.
In the first and second embodiments, video is managed on a frame basis. In this embodiment, however, video is managed on a picture basis. If both progressive signals and interlaced signals exist as input image signals, pictures are not necessarily coded on a frame basis. In consideration of this, a picture is assumed to be (a) a picture of one frame of a progressive signal, (b) a picture of one frame generated by merging two fields of an interlaced signal, or (c) a picture of one field of an interlaced signal.

If a picture to be coded is a picture with a frame structure as in (a) or (b), the reference picture used in motion compensated prediction is also managed as a frame, regardless of whether the coded picture serving as the reference picture has a frame structure or a field structure, and a reference picture number is assigned to it. Likewise, if a picture to be coded is a picture with a field structure as in (c), the reference picture used in motion compensated prediction is also managed as a field, regardless of whether the coded picture serving as the reference picture has a frame structure or a field structure, and a reference picture number is assigned to it.
Equations (4), (5), and (6) are examples of predictive equations for reference picture numbers and prediction parameters which are prepared in the prediction parameter controller 203. These examples are predictive equations for generating a prediction image signal by motion compensated prediction using one reference image signal.
Y = clip((D1(i) × RY(i) + 2^(LY−1)) >> LY + D2(i))    (4)

Cb = clip((E1(i) × (RCb(i) − 128) + 2^(LC−1)) >> LC + E2(i) + 128)    (5)

Cr = clip((F1(i) × (RCr(i) − 128) + 2^(LC−1)) >> LC + F2(i) + 128)    (6)
Here, Y is the prediction image signal of the luminance signal; Cb and Cr are the prediction image signals of the two color difference signals; RY(i), RCb(i), and RCr(i) are the pixel values of the luminance signal and two color difference signals of the reference image signal with index i; D1(i) and D2(i) are the predictive coefficient and offset of the luminance signal with index i; E1(i) and E2(i) are the predictive coefficient and offset of the color difference signal Cb with index i; and F1(i) and F2(i) are the predictive coefficient and offset of the color difference signal Cr with index i. The index i takes a value from 0 to (the maximum number of reference pictures − 1) and is coded for each block to be coded (for example, for each macroblock). The resulting data is then transmitted to the video decoding apparatus.
The prediction parameters D1(i), D2(i), E1(i), E2(i), F1(i), and F2(i) are either values determined in advance between the video coding apparatus and the video decoding apparatus, or are coded, in predetermined units of coding such as frames, fields, or slices, together with the coded data to be transmitted from the video coding apparatus to the video decoding apparatus. By this operation, these parameters are shared by the two apparatuses.

Equations (4), (5), and (6) are predictive equations in which powers of 2, i.e., 2, 4, 8, 16, ..., are selected as the denominators of the predictive coefficients by which the reference image signal is multiplied. These predictive equations eliminate the need for division and can be computed by arithmetic shifts. This makes it possible to avoid the large increase in computational cost caused by division.
In equations (4), (5), and (6), ">>" in a >> b is an operator that arithmetically shifts an integer a to the right by b bits. The function "clip" is a clipping function that sets the value in "()" to 0 when it is smaller than 0 and to 255 when it is larger than 255.

Here, LY is the shift amount of the luminance signal, and LC is the shift amount of the color difference signals. As these shift amounts LY and LC, values determined in advance between the video coding apparatus and the video decoding apparatus are used. Alternatively, the video coding apparatus codes the shift amounts LY and LC, in predetermined units of coding such as frames, fields, or slices, together with the table and the coded data, and transmits the resulting data to the video decoding apparatus. This allows the two apparatuses to share the shift amounts LY and LC.
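Equations (4) to (6) can be sketched as follows (a minimal illustration assuming 8-bit samples; the parameter values in the example call are hypothetical):

```python
def clip(v: int) -> int:
    # Clipping function of equations (4)-(6): limit to the 8-bit range.
    return max(0, min(255, v))

def predict_pixel(r_y, r_cb, r_cr, d1, d2, e1, e2, f1, f2, l_y, l_c):
    # Equations (4)-(6): weight, round via +2^(L-1), arithmetic right shift,
    # then add the offset; chrominance is weighted about its midpoint 128.
    y  = clip(((d1 * r_y + (1 << (l_y - 1))) >> l_y) + d2)
    cb = clip(((e1 * (r_cb - 128) + (1 << (l_c - 1))) >> l_c) + e2 + 128)
    cr = clip(((f1 * (r_cr - 128) + (1 << (l_c - 1))) >> l_c) + f2 + 128)
    return y, cb, cr

# Weight 5/4 (d1=5, l_y=2) with offset 16 on luminance;
# weight 1 (e1=f1=2, l_c=1) with offset 0 on the color differences.
print(predict_pixel(100, 90, 140, 5, 16, 2, 0, 2, 0, 2, 1))  # -> (141, 90, 140)
```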
In this embodiment, tables of combinations of reference picture numbers and prediction parameters, as shown in FIGS. 9 and 10, are prepared in the prediction parameter controller 203 in FIG. 2. Referring to FIGS. 9 and 10, the index i corresponds to a prediction image that can be selected for each block. In this case, four types of prediction images exist, corresponding to indices i = 0 to 3. The "reference picture number" is the number of a local decoded video signal used as a reference picture.

"Flag" indicates whether a predictive equation using prediction parameters is applied to the reference picture number indicated by the index i. If the flag is "0", motion compensated prediction is performed by using the local decoded video signal corresponding to the reference picture number indicated by the index i, without using any prediction parameters. If the flag is "1", a prediction image is generated according to equations (4), (5), and (6) by using the local decoded video signal and the prediction parameters corresponding to the reference picture number indicated by the index i, and motion compensated prediction is then performed. This flag information is also either a value determined in advance between the video coding apparatus and the video decoding apparatus, or is coded in the video coding apparatus, in predetermined units of coding such as frames, fields, or slices, together with the table and the coded data, and the resulting data is transmitted to the video decoding apparatus. This allows the two apparatuses to share the flag information.

In these cases, for index i = 0, a prediction image is generated for reference picture number 105 by using the prediction parameters; for i = 1, motion compensated prediction is performed without using any prediction parameters. As described above, a plurality of prediction schemes may exist for the same reference picture number.
The table shown in FIG. 9 has the prediction parameters D1(i), D2(i), E1(i), E2(i), F1(i), and F2(i) assigned to the luminance and two color difference signals in accordance with equations (4), (5), and (6). FIG. 10 shows an example of a table in which prediction parameters are assigned to the luminance signal only. In general, the number of bits of the color difference signals is not very large compared with that of the luminance signal. For this reason, in order to reduce the computational cost of generating a prediction image and the number of bits transmitted for the table, a table is prepared in which the prediction parameters for the color difference signals are omitted and prediction parameters are assigned only to the luminance signal, as shown in FIG. 10. In this case, only equation (4) is used as the predictive equation.

Equations (7) to (12) are predictive equations for the case where a plurality of (two, in this case) reference pictures are used.
PY(i) = (D1(i) × RY(i) + 2^(LY−1)) >> LY + D2(i)    (7)

PCb(i) = (E1(i) × (RCb(i) − 128) + 2^(LC−1)) >> LC + E2(i) + 128    (8)

PCr(i) = (F1(i) × (RCr(i) − 128) + 2^(LC−1)) >> LC + F2(i) + 128    (9)

Y = clip((PY(i) + PY(j) + 1) >> 1)    (10)

Cb = clip((PCb(i) + PCb(j) + 1) >> 1)    (11)

Cr = clip((PCr(i) + PCr(j) + 1) >> 1)    (12)
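The two-reference case of equations (7) and (10) — weighting each reference separately, then averaging with rounding — can be sketched as follows (luminance only for brevity; the parameter values are hypothetical):

```python
def weighted_ref(r, d1, d2, l_y):
    # Equation (7): per-reference weighted value P_Y(i), not yet clipped.
    return ((d1 * r + (1 << (l_y - 1))) >> l_y) + d2

def bipredict_luma(r_i, r_j, params_i, params_j):
    # Equation (10): average the two weighted references with +1 rounding,
    # then clip the result to the 8-bit range.
    p_i = weighted_ref(r_i, *params_i)
    p_j = weighted_ref(r_j, *params_j)
    return max(0, min(255, (p_i + p_j + 1) >> 1))

# Reference i weighted by 5/4 with offset 16; reference j passed through.
print(bipredict_luma(100, 120, (5, 16, 2), (2, 0, 1)))  # -> 131
```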
The prediction parameters D1(i), D2(i), E1(i), E2(i), F1(i), and F2(i), the shift amounts LY and LC, and the flag information are values determined in advance between the video coding apparatus and the video decoding apparatus, or are coded with the coded data in units of coding such as frames, fields, or slices, and transmitted from the video coding apparatus to the video decoding apparatus. This allows the two apparatuses to share these pieces of information.

If a picture to be decoded is a picture with a frame structure, the reference picture used for motion compensated prediction is also managed as a frame, regardless of whether the decoded picture serving as the reference picture has a frame structure or a field structure, and a reference picture number is assigned to it. Likewise, if a picture to be decoded is a picture with a field structure, the reference picture used for motion compensated prediction is also managed as a field, regardless of whether the decoded picture serving as the reference picture has a frame structure or a field structure, and a reference picture number is assigned to it.
(Syntax of index information)

FIG. 11 shows an example of the syntax when index information is coded in each block. First, mode information MODE exists for each block. Whether index information IDi indicating the value of the index i and index information IDj indicating the value of the index j are coded is determined according to the mode information MODE. After the coded index information, motion vector information MVi for motion compensated prediction with the index i and motion vector information MVj for motion compensated prediction with the index j are added as the motion vector information of each block.
(Data structure of the coded bit stream)

FIG. 12 shows a concrete example of a coded bit stream for each block when a prediction image is generated by using one reference picture. Index information IDi is set after mode information MODE, and motion vector information MVi is set thereafter. The motion vector information MVi is usually two-dimensional vector information. Depending on the motion compensation method in the block indicated by the mode information, a plurality of two-dimensional vectors may further be sent.

FIG. 13 shows a concrete example of a coded bit stream for each block when a prediction image is generated by using two reference pictures. Index information IDi and index information IDj are set after mode information MODE, and motion vector information MVi and motion vector information MVj are set thereafter. The motion vector information MVi and motion vector information MVj are usually two-dimensional vector information. Depending on the motion compensation method in the block indicated by the mode information, a plurality of two-dimensional vectors may further be sent.
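The per-block stream layouts of FIGS. 12 and 13 can be sketched as a parse order (the field names and the symbol representation are ours, not the patent's syntax; symbols are assumed to be already entropy-decoded):

```python
def parse_block(stream, two_refs: bool):
    """stream: iterator of decoded symbols in stream order.
    Reads MODE, then the index information, then the motion vectors."""
    block = {'MODE': next(stream)}
    block['IDi'] = next(stream)
    if two_refs:                    # FIG. 13: second index before the vectors
        block['IDj'] = next(stream)
    block['MVi'] = next(stream)     # usually a two-dimensional vector
    if two_refs:
        block['MVj'] = next(stream)
    return block

print(parse_block(iter(['INTER', 0, (1, -2)]), two_refs=False))
```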
Note that the above structures of the syntax and bit stream can be applied equally to all the embodiments.
[Fourth Embodiment]

The fourth embodiment of the present invention will next be described with reference to FIGS. 14 and 15. Since the overall arrangements of the video coding apparatus and video decoding apparatus in this embodiment are almost the same as those in the first embodiment, only the differences from the first to third embodiments will be described. In the third embodiment, frame-based coding and field-based coding are switched for each picture. In the fourth embodiment, frame-based coding and field-based coding are switched for each macroblock.
When frame-based coding and field-based coding are switched for each macroblock, the same reference picture number indicates different pictures depending on whether a macroblock is coded on a frame basis or on a field basis, even within the same picture. For this reason, a correct prediction image signal may not be generated with the tables shown in FIGS. 9 and 10 used in the third embodiment.

To solve this problem, in this embodiment, tables of combinations of reference picture numbers and prediction parameters, as shown in FIGS. 14 and 15, are prepared in the prediction parameter controller 203 in FIG. 2. It is assumed that, when a macroblock is to be coded on a field basis, the same prediction parameters are used as those corresponding to the reference picture number (reference frame index number) used when the macroblock is coded on a frame basis.
FIG. 14 shows the table used when a macroblock is coded on a field basis and the picture to be coded is a first field. The upper and lower rows of each index column correspond to the first field and the second field, respectively. As shown in FIG. 14, a frame index j and a field index k are related such that k = 2j for the first field and k = 2j + 1 for the second field. A reference frame number m and a reference field number n are related such that n = 2m for the first field and n = 2m + 1 for the second field.

FIG. 15 shows the table used when a macroblock is coded on a field basis and the picture to be coded is a second field. As in the table shown in FIG. 14, the upper and lower rows of each index column correspond to the first field and the second field, respectively. In the table in FIG. 15, the frame index j and field index k are related such that k = 2j + 1 for the first field and k = 2j for the second field. This makes it possible to assign a small value of the index k to the field of the same parity. The relation between the reference frame number m and the reference field number n is the same as in the table in FIG. 14.
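The index relations of FIGS. 14 and 15 can be sketched as follows (a hypothetical helper; the parameter names are ours):

```python
def field_index(frame_index: int, coding_first_field: bool,
                ref_is_first_field: bool) -> int:
    # FIG. 14 (picture to be coded is a first field):
    #   k = 2j for the first field, k = 2j + 1 for the second field.
    # FIG. 15 (picture to be coded is a second field): the mapping is
    # swapped so the field of the same parity gets the smaller index k.
    j = frame_index
    if coding_first_field:
        return 2 * j if ref_is_first_field else 2 * j + 1
    return 2 * j + 1 if ref_is_first_field else 2 * j

print(field_index(1, True, True))   # -> 2  (FIG. 14, k = 2j)
print(field_index(1, False, True))  # -> 3  (FIG. 15, k = 2j + 1)
```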
When a macroblock is to be coded on a field basis, a frame index and a field index are coded as index information by using the tables shown in FIGS. 14 and 15. When a macroblock is to be coded on a frame basis, only the frame index, which is common to the tables in FIGS. 14 and 15, is coded as index information.

In this embodiment, prediction parameters are assigned to frames and fields by using one table. However, separate tables for frames and for fields may be prepared for each picture or slice.

Each of the above embodiments has exemplified a video coding/decoding scheme using block-based orthogonal transformation. However, the techniques described in the above embodiments of the present invention can also be used with another transformation technique such as the wavelet transform.

The video coding and decoding processing techniques according to the present invention can be realized as hardware (apparatuses) or as software using a computer. Some of the processing techniques may be realized by hardware and others by software. According to the present invention, there can be provided a program for causing a computer to execute the above video coding or video decoding, or a storage medium storing the program.
Industrial Applicability

As described above, the video coding/decoding method and apparatus according to the present invention are particularly suitable for the field of image processing in which video whose brightness changes over time, such as fading video or fade-out video, is coded and decoded.

Claims (34)

1. A video decoding method for decoding coded data obtained by subjecting a video having a luminance and two color differences to motion compensated predictive coding, the video decoding method comprising:

a step of receiving, as input, coded data obtained by coding, for a block to be decoded, the following: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, (2) information of a motion vector, and (3) index information indicating a combination comprising (A) a reference image, (B) a weighting factor for each of the luminance and the two color differences, and (C) an offset for each of the luminance and the two color differences;

a step of deriving the reference image, the weighting factor, and the offset for the block to be decoded from the index information;

a step of generating the prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation;

a step of generating a motion compensated prediction image for the block to be decoded by multiplying the reference image by the weighting factor and adding thereto the offset, based on the motion vector of the block to be decoded; and

a step of generating a decoded image signal for the block to be decoded by computing a sum of the prediction error signal and the motion compensated prediction image.
2. A video decoding apparatus for decoding coded data obtained by subjecting a video having a luminance and two color differences to motion compensated coding, the video decoding apparatus comprising:

a receiver configured to receive, as input, coded data obtained by coding, for a block to be decoded, the following: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, (2) information of a motion vector, and (3) index information indicating a combination comprising (A) a reference image, (B) a weighting factor for each of the luminance and the two color differences, and (C) an offset for each of the luminance and the two color differences;

a derivation module configured to derive the reference image, the weighting factor, and the offset for the block to be decoded from the index information;

a first generator configured to generate the prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation;

a second generator configured to generate a motion compensated prediction image for the block to be decoded by multiplying the reference image by the weighting factor and adding thereto the offset, based on the motion vector of the block to be decoded; and

a third generator configured to generate a decoded image signal for the block to be decoded by computing a sum of the prediction error signal and the motion compensated prediction image.
3. A video decoding method for decoding coded data obtained by subjecting a video having a luminance and two color differences to motion compensated predictive coding, the video decoding method comprising:

a step of receiving, as input, coded data obtained by coding, for a block to be decoded, the following: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, (2) information of a motion vector, and (3) index information indicating a combination comprising (A) a weighting factor for each of the luminance and the two color differences and (B) an offset for each of the luminance and the two color differences;

a step of deriving the weighting factor and the offset for the block to be decoded from the index information;

a step of generating the prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation;

a step of generating a motion compensated prediction image for the block to be decoded by multiplying a reference image by the weighting factor and adding thereto the offset, based on the motion vector of the block to be decoded; and

a step of generating a decoded image signal for the block to be decoded by computing a sum of the prediction error signal and the motion compensated prediction image.
4. A video decoding apparatus for decoding coded data obtained by subjecting a video having a luminance and two color differences to motion compensated coding, the video decoding apparatus comprising:

a receiver configured to receive, as input, coded data obtained by coding, for a block to be decoded, the following: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, (2) information of a motion vector, and (3) index information indicating a combination comprising (A) a weighting factor for each of the luminance and the two color differences and (B) an offset for each of the luminance and the two color differences;

a derivation module configured to derive the weighting factor and the offset for the block to be decoded from the index information;

a first generator configured to generate the prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation;

a second generator configured to generate a motion compensated prediction image for the block to be decoded by multiplying a reference image by the weighting factor and adding thereto the offset, based on the motion vector of the block to be decoded; and

a third generator configured to generate a decoded image signal for the block to be decoded by computing a sum of the prediction error signal and the motion compensated prediction image.
5. A video decoding method for decoding coded data obtained by subjecting a video having a luminance and two color differences to motion compensated predictive coding, the video decoding method comprising:

a step of receiving coded data as input, the coded data being obtained by coding, for one or more blocks to be decoded, a plurality of combinations each comprising (A) a weighting factor for each of the luminance and the two color differences and (B) an offset for each of the luminance and the two color differences, and by coding, for a block to be decoded, (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, (2) information of a motion vector, and (3) index information indicating one of the plurality of combinations;

a step of deriving, for the block to be decoded, the combination comprising the weighting factor and the offset from the index information and the plurality of combinations;

a step of generating the prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation;

a step of generating a motion compensated prediction image for the block to be decoded by multiplying a reference image by the weighting factor and adding thereto the offset, based on the motion vector of the block to be decoded; and

a step of generating a decoded image signal for the block to be decoded by computing a sum of the prediction error signal and the motion compensated prediction image.
6. A video decoding apparatus for decoding coded data obtained by subjecting a video having a luminance and two color differences to motion compensated predictive coding, the video decoding apparatus comprising:

a receiver configured to receive coded data as input, the coded data being obtained by coding, for one or more blocks to be decoded, a plurality of combinations each comprising (A) a weighting factor for each of the luminance and the two color differences and (B) an offset for each of the luminance and the two color differences, and by coding, for a block to be decoded, (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, (2) information of a motion vector, and (3) index information indicating one of the plurality of combinations;

a derivation module configured to derive, for the block to be decoded, the combination comprising the weighting factor and the offset from the index information and the plurality of combinations;

a first generator configured to generate the prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation;

a second generator configured to generate a motion compensated prediction image for the block to be decoded by multiplying a reference image by the weighting factor and adding thereto the offset, based on the motion vector of the block to be decoded; and

a third generator configured to generate a decoded image signal for the block to be decoded by computing a sum of the prediction error signal and the motion compensated prediction image.
7. A video decoding method for decoding coded data, obtained by subjecting a video having a luminance and two color differences to motion-compensated predictive coding, to obtain the video, the video decoding method comprising:
receiving, as an input, coded data obtained by encoding, for a block to be decoded: (1) a given number of index information items each indicating a combination of (A) a reference image, (B) a weighting factor for each of the luminance and the two color differences, and (C) an offset for each of the luminance and the two color differences, (2) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, and (3) information on a motion vector;
deriving, for the block to be decoded, the given number of reference images, the given number of weighting factors, and the given number of offsets from the given number of index information items;
generating a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation;
generating a motion-compensated prediction image for the block to be decoded by multiplying the given number of reference images by the corresponding ones of the given number of weighting factors and adding thereto the offsets of the given number corresponding to the respective reference images, based on a motion vector of the block to be decoded; and
generating a decoded image signal for the block to be decoded by computing a sum of said prediction error signal and said motion-compensated prediction image.
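The "given number" of reference images, weighting factors, and offsets in claim 7 admits a multi-reference sketch such as the one below. Combining the per-reference predictions by an integer average is an assumption made only for illustration; the claim itself does not fix the combining rule:

```python
def multi_reference_prediction(refs, weights, offsets):
    # Each reference image is multiplied by its own weighting factor and
    # its own offset is added; the per-reference predictions are then
    # combined (here: integer average, an illustrative assumption).
    per_ref = [[p * w + o for p in ref]
               for ref, w, o in zip(refs, weights, offsets)]
    n = len(per_ref)
    return [sum(pixels) // n for pixels in zip(*per_ref)]

# Two reference images (e.g. bidirectional prediction):
# per-reference predictions [10, 10] and [42, 42] -> averaged [26, 26].
pred = multi_reference_prediction([[10, 10], [20, 20]], [1, 2], [0, 2])
```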
8. A video decoding apparatus which decodes coded data, obtained by subjecting a video having a luminance and two color differences to motion-compensated predictive coding, to obtain the video, the video decoding apparatus comprising:
a receiver configured to receive, as an input, coded data obtained by encoding, for a block to be decoded: (1) a given number of index information items each indicating a combination of (A) a reference image, (B) a weighting factor for each of the luminance and the two color differences, and (C) an offset for each of the luminance and the two color differences, (2) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, and (3) information on a motion vector;
a derivation module configured to derive, for the block to be decoded, the given number of reference images, the given number of weighting factors, and the given number of offsets from the given number of index information items;
a first generator configured to generate a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation;
a second generator configured to generate a motion-compensated prediction image for the block to be decoded by multiplying the given number of reference images by the corresponding ones of the given number of weighting factors and adding thereto the offsets of the given number corresponding to the respective reference images, based on a motion vector of the block to be decoded; and
a third generator configured to generate a decoded image signal for the block to be decoded by computing a sum of said prediction error signal and said motion-compensated prediction image.
9. A video decoding method for decoding coded data obtained by subjecting a video having a luminance and two color differences to motion-compensated predictive coding, the video decoding method comprising:
receiving, as an input, coded data obtained by encoding, for a block to be decoded: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, (2) information on a motion vector, and (3) index information indicating a combination including (A) a reference image, (B) a weighting factor for each of the luminance and the two color differences, and (C) an offset for each of the luminance and the two color differences;
determining, for the block to be decoded, whether it is a frame block or a field block;
deriving a reference image, a weighting factor, and an offset for the block to be decoded from the index information, according to whether the block to be decoded is a frame block or a field block;
generating a motion-compensated prediction image for the block to be decoded by multiplying said reference image by said weighting factor and adding said offset thereto, based on a motion vector of the block to be decoded;
generating a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation; and
generating a decoded image signal for the block to be decoded by computing a sum of said prediction error signal and said motion-compensated prediction image,
wherein each value of the index information of a frame block indicates a different combination of the weighting factor and the offset, and
wherein two values of the index information of a field block which correspond to different reference images indicate the same combination of the weighting factor and the offset.
10. A video decoding apparatus for decoding coded data obtained by subjecting a video having a luminance and two color differences to motion-compensated predictive coding, the video decoding apparatus comprising:
a receiver configured to receive, as an input, coded data obtained by encoding, for a block to be decoded: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, (2) information on a motion vector, and (3) index information indicating a combination including (A) a reference image, (B) a weighting factor for each of the luminance and the two color differences, and (C) an offset for each of the luminance and the two color differences;
a determination module configured to determine, for the block to be decoded, whether it is a frame block or a field block;
a derivation module configured to derive a reference image, a weighting factor, and an offset for the block to be decoded from the index information, according to whether the block to be decoded is a frame block or a field block;
a first generator configured to generate a motion-compensated prediction image for the block to be decoded by multiplying the reference image by the weighting factor and adding the offset thereto, based on a motion vector of the block to be decoded;
a second generator configured to generate a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation; and
a third generator configured to generate a decoded image signal for the block to be decoded by computing a sum of said prediction error signal and said motion-compensated prediction image,
wherein each value of the index information of a frame block indicates a different combination of the weighting factor and the offset, and
wherein two values of the index information of a field block which correspond to different reference images indicate the same combination of the weighting factor and the offset.
11. A prediction image generation method for generating a prediction image by decoding coded data obtained by subjecting a video image having a luminance and two color differences to predictive coding, the prediction image generation method comprising:
receiving, as an input, coded data obtained by encoding index information for a block to be decoded, the index information indicating a combination including (A) a weighting factor for each of the luminance and the two color differences and (B) an offset for each of the luminance and the two color differences;
deriving, for the block to be decoded, (A) the weighting factor for each of the luminance and the two color differences and (B) the offset for each of the luminance and the two color differences from the index information; and
generating a prediction image for the block to be decoded by multiplying a reference image by said weighting factor and adding said offset thereto.
12. A prediction image generation apparatus for generating a prediction image by decoding coded data obtained by subjecting a video image having a luminance and two color differences to predictive coding, the prediction image generation apparatus comprising:
a receiver configured to receive, as an input, coded data obtained by encoding index information for a block to be decoded, the index information indicating a combination including (A) a weighting factor for each of the luminance and the two color differences and (B) an offset for each of the luminance and the two color differences;
a derivation module configured to derive, for the block to be decoded, (A) the weighting factor for each of the luminance and the two color differences and (B) the offset for each of the luminance and the two color differences from the index information; and
a generator configured to generate a prediction image for the block to be decoded by multiplying a reference image by said weighting factor and adding said offset thereto.
13. A video decoding method for decoding coded data obtained by encoding a video image having a luminance and two color differences, the video decoding method comprising:
receiving, as an input, coded data obtained by encoding, for a block to be decoded: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, and (2) index information indicating a combination including (A) a weighting factor for each of the luminance and the two color differences and (B) an offset for each of the luminance and the two color differences;
determining, for the block to be decoded, a unit of a frame block or a field block;
deriving a weighting factor and an offset from the index information, according to whether the block to be decoded is a frame block or a field block;
generating a prediction image for the block to be decoded by multiplying a reference image by said weighting factor and adding said offset thereto;
generating a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation; and
generating a decoded image signal for the block to be decoded by computing a sum of the prediction error signal and the prediction image,
wherein the weighting factor and the offset derived from the index information of a field block are identical to the weighting factor and the offset derived from the index information of a frame block, the index information of the frame block having a value obtained by arithmetically shifting the value of the index information of the field block to the right.
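The arithmetic-shift relation in claim 13 means that field-block index values resolve to the combination addressed by the shifted frame-block index value. A sketch with a hypothetical combination table follows; a shift by one bit (so that two consecutive field indices share one frame index) and the table contents are assumptions for illustration:

```python
# Hypothetical table of (weighting_factor, offset) combinations,
# addressed by frame-block index values.
frame_combinations = {0: (1, 0), 1: (2, 3), 2: (1, -4)}

def derive_frame(frame_index):
    # A frame-block index value selects its combination directly.
    return frame_combinations[frame_index]

def derive_field(field_index):
    # Arithmetic shift right by one bit: field-block indices 2k and 2k+1
    # both resolve to frame-block index k, hence the same combination.
    return frame_combinations[field_index >> 1]

assert derive_field(4) == derive_field(5) == derive_frame(2)
```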
14. A video decoding apparatus for decoding coded data obtained by encoding a video image having a luminance and two color differences, the video decoding apparatus comprising:
a receiver configured to receive, as an input, coded data obtained by encoding, for a block to be decoded: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, and (2) index information indicating a combination including (A) a weighting factor for each of the luminance and the two color differences and (B) an offset for each of the luminance and the two color differences;
a determination module configured to determine, for the block to be decoded, whether it is a frame block or a field block;
a derivation module configured to derive a weighting factor and an offset from the index information, according to whether the block to be decoded is a frame block or a field block;
a first generator configured to generate a prediction image for the block to be decoded by multiplying a reference image by said weighting factor and adding said offset thereto;
a second generator configured to generate a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation; and
a third generator configured to generate a decoded image signal for the block to be decoded by computing a sum of the prediction error signal and the prediction image,
wherein the weighting factor and the offset derived from the index information of a field block are identical to the weighting factor and the offset derived from the index information of a frame block, the index information of the frame block having a value obtained by arithmetically shifting the value of the index information of the field block to the right.
15. A video decoding method for decoding coded data obtained by encoding a video image having a luminance and two color differences, the video decoding method comprising:
receiving, as an input, coded data obtained by encoding, for a block to be decoded: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, and (2) index information indicating a combination including (A) a weighting factor for each of the luminance and the two color differences and (B) an offset for each of the luminance and the two color differences;
deriving a weighting factor and an offset from the index information;
generating a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation;
generating a prediction image for the block to be decoded by multiplying a reference image by said weighting factor and adding said offset thereto; and
generating a decoded image signal for the block to be decoded by computing a sum of said prediction error signal and said prediction image.
16. A video decoding apparatus for decoding coded data obtained by encoding a video image having a luminance and two color differences, the video decoding apparatus comprising:
a receiver configured to receive, as an input, coded data obtained by encoding, for a block to be decoded: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, and (2) index information indicating a combination including (A) a weighting factor for each of the luminance and the two color differences and (B) an offset for each of the luminance and the two color differences;
a derivation module configured to derive a weighting factor and an offset from the index information;
a first generator configured to generate a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation;
a second generator configured to generate a prediction image for the block to be decoded by multiplying a reference image by said weighting factor and adding said offset thereto; and
a third generator configured to generate a decoded image signal for the block to be decoded by computing a sum of said prediction error signal and said prediction image.
17. A video decoding method for decoding coded data, obtained by subjecting a video image having a luminance and two color differences to motion-compensated prediction, to obtain the video image, the video decoding method comprising:
receiving coded data obtained by encoding: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, (2) information on a motion vector, and (3) index information indicating a combination including (A) a reference image, (B) a weighting factor for each of the luminance and the two color differences, and (C) an offset for each of the luminance and the two color differences;
deriving a reference image, a weighting factor, and an offset from the index information of the block to be decoded;
generating a motion-compensated prediction image for the block to be decoded by multiplying said reference image by said weighting factor and adding said offset thereto, based on a motion vector of the block to be decoded;
generating a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation; and
generating a decoded image signal for the block to be decoded by computing a sum of said prediction error signal and said motion-compensated prediction image,
wherein two values of the index information which correspond to different reference images indicate the same combination of the weighting factor and the offset.
18. A video decoding apparatus for decoding coded data, obtained by subjecting a video image having a luminance and two color differences to motion-compensated prediction, to obtain the video image, the video decoding apparatus comprising:
a receiver configured to receive coded data obtained by encoding: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, (2) information on a motion vector, and (3) index information indicating a combination including (A) a reference image, (B) a weighting factor for each of the luminance and the two color differences, and (C) an offset for each of the luminance and the two color differences;
a derivation module configured to derive a reference image, a weighting factor, and an offset from the index information of the block to be decoded;
a first generator configured to generate a motion-compensated prediction image for the block to be decoded by multiplying said reference image by said weighting factor and adding said offset thereto, based on a motion vector of the block to be decoded;
a second generator configured to generate a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation; and
a third generator configured to generate a decoded image signal for the block to be decoded by computing a sum of said prediction error signal and said motion-compensated prediction image,
wherein two values of the index information which correspond to different reference images indicate the same combination of the weighting factor and the offset.
19. A video decoding method for decoding coded data obtained by subjecting a video image having a luminance and two color differences to predictive coding, the video decoding method comprising:
receiving, for a block to be decoded, coded data obtained by encoding: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, and (2) index information indicating a combination including (A) a reference image, (B) a weighting factor for each of the luminance and the two color differences, and (C) an offset for each of the luminance and the two color differences, wherein two values of the index information corresponding to different reference images indicate the same combination of the weighting factor and the offset, and a value obtained by arithmetically shifting one of the two values to the right is identical to a value obtained by arithmetically shifting the other of the two values to the right;
deriving a weighting factor and an offset from the index information;
generating a prediction image for the block to be decoded by adding the offset to a reference image multiplied by the weighting factor;
generating a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation; and
generating a decoded image signal for the block to be decoded by computing a sum of said prediction error signal and said prediction image.
20. A video coding apparatus which subjects an input video image having a luminance and two color differences to predictive coding, the video coding apparatus comprising:
a determination module configured to determine, for a block to be encoded of the input video image, a combination of (A) a reference image, (B) a weighting factor for each of the luminance and the two color differences, and (C) an offset for each of the luminance and the two color differences;
a derivation module configured to derive index information indicating the selected combination, wherein two values of the index information corresponding to different reference images each indicate the same combination of the weighting factor and the offset, and a value obtained by arithmetically shifting one of the two values to the right is identical to a value obtained by arithmetically shifting the other of the two values to the right;
a first generator configured to generate a prediction image for the block to be encoded by adding the offset to the reference image multiplied by the weighting factor;
a second generator configured to generate a prediction error signal for the block to be encoded by computing an error between the input video image and said prediction image;
a third generator configured to generate quantized orthogonal transform coefficients of the block to be encoded by subjecting said prediction error signal to orthogonal transformation and quantization; and
an encoder configured to encode (1) the quantized orthogonal transform coefficients and (2) the index information.
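The encoder pipeline of claim 20 (weighted prediction, prediction error, then orthogonal transformation and quantization) can be sketched as below. The identity "transform" and the scalar quantization step are stand-ins used only to keep the example short; they are assumptions, not the claimed orthogonal transform:

```python
def encode_block(input_block, reference_block, weighting_factor, offset, qstep):
    # Prediction image: reference multiplied by the weighting factor,
    # offset added.
    prediction = [r * weighting_factor + offset for r in reference_block]
    # Prediction error signal: input minus prediction.
    error = [x - p for x, p in zip(input_block, prediction)]
    # Stand-in for orthogonal transformation + quantization: identity
    # transform followed by a scalar quantizer (illustrative assumption).
    return [round(e / qstep) for e in error]

# prediction [23, 27], error [2, -2] -> quantized coefficients [1, -1]
coeffs = encode_block([25, 25], [10, 12], weighting_factor=2, offset=3, qstep=2)
```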
21. A video decoding method for decoding coded data obtained by subjecting a video image having a luminance and two color differences to motion-compensated prediction, the video decoding method comprising:
receiving coded data obtained by encoding, in units of one or more blocks to be decoded, a plurality of combinations each including (A) a weighting factor for each of the luminance and the two color differences, (B) an offset for each of the luminance and the two color differences, and (C) a flag indicating, for the block to be decoded, the presence or absence of the weighting factor and the offset of the luminance, and by encoding, for the block to be decoded, (1) quantized orthogonal transform coefficients, (2) information on a motion vector, and (3) index information indicating one combination among said plurality of combinations;
deriving, for the block to be decoded, the combination including the weighting factor and the offset from the index information and the plurality of combinations;
generating a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation;
generating a motion-compensated prediction image for the block to be decoded by adding said offset to a reference image multiplied by said weighting factor, according to a motion vector of the block to be decoded; and
generating a decoded image signal for the block to be decoded by computing a sum of said prediction error signal and said motion-compensated prediction image.
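The presence/absence flag carried in each combination of claim 21 can be read as a switch between weighted and plain prediction for the luminance. The dictionary keys below are illustrative names, not from the claims:

```python
def luminance_prediction(ref, combination):
    # When the flag signals that the luminance weighting factor and offset
    # are present, weighted prediction is applied; otherwise the reference
    # pixels are used unchanged.
    if combination["luma_present"]:
        w, o = combination["weight"], combination["offset"]
        return [p * w + o for p in ref]
    return list(ref)

weighted = luminance_prediction([10, 20], {"luma_present": True, "weight": 2, "offset": 1})
plain = luminance_prediction([10, 20], {"luma_present": False})
# weighted -> [21, 41]; plain -> [10, 20]
```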
22. A video decoding apparatus which decodes coded data obtained by subjecting a video image having a luminance and two color differences to motion-compensated prediction, the video decoding apparatus comprising:
a receiver configured to receive coded data obtained by encoding, in units of one or more blocks to be decoded, a plurality of combinations each including (A) a weighting factor for each of the luminance and the two color differences, (B) an offset for each of the luminance and the two color differences, and (C) a flag indicating, for the block to be decoded, the presence or absence of the weighting factor and the offset of the luminance, and by encoding, for the block to be decoded, (1) quantized orthogonal transform coefficients, (2) information on a motion vector, and (3) index information indicating one combination among the plurality of combinations;
a derivation module configured to derive, for the block to be decoded, the combination including the weighting factor and the offset from the index information and the plurality of combinations;
a first generator configured to generate a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation;
a second generator configured to generate a motion-compensated prediction image for the block to be decoded by adding said offset to a reference image multiplied by said weighting factor, according to a motion vector of the block to be decoded; and
a third generator configured to generate a decoded image signal for the block to be decoded by computing a sum of said prediction error signal and said motion-compensated prediction image.
23. A video decoding method for decoding coded data obtained by subjecting a video image having a luminance and two color differences to motion-compensated predictive coding, the video decoding method comprising:
receiving coded data obtained by encoding, in units of one or more blocks to be decoded, a plurality of combinations each including (A) a weighting factor for each of the luminance and the two color differences, (B) an offset for each of the luminance and the two color differences, (C) a first flag indicating the presence or absence of the weighting factor and the offset of the luminance, and (D) a second flag indicating the presence or absence of the weighting factor and the offset of the color differences, and by encoding, for the block to be decoded, (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, (2) information on a motion vector, and (3) index information indicating one combination among the plurality of combinations;
deriving, for the block to be decoded, the combination including the weighting factor and the offset from the index information and the plurality of combinations;
generating a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation;
generating a motion-compensated prediction image for the block to be decoded by adding said offset to a reference image multiplied by said weighting factor, according to a motion vector of the block to be decoded; and
generating a decoded image signal for the block to be decoded by obtaining a sum of said prediction error signal and said motion-compensated prediction image.
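Claim 23's two flags gate the luminance and the two color-difference components independently. A sketch follows; all field names are illustrative assumptions:

```python
def component_prediction(luma_ref, chroma_refs, combo):
    # First flag: luminance weighting factor/offset present or absent.
    if combo["flag_luma"]:
        luma = [p * combo["w_luma"] + combo["o_luma"] for p in luma_ref]
    else:
        luma = list(luma_ref)
    # Second flag: color-difference weighting present or absent
    # (one weighting factor/offset pair per color difference).
    if combo["flag_chroma"]:
        chroma = [[p * w + o for p in ref]
                  for ref, w, o in zip(chroma_refs, combo["w_chroma"], combo["o_chroma"])]
    else:
        chroma = [list(r) for r in chroma_refs]
    return luma, chroma

combo = {"flag_luma": True, "w_luma": 2, "o_luma": 1, "flag_chroma": False}
luma, chroma = component_prediction([5, 6], [[3], [4]], combo)
# luma weighted -> [11, 13]; chroma passed through -> [[3], [4]]
```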
24. A video decoding apparatus which decodes coded data obtained by subjecting a video image having a luminance and two color differences to motion-compensated predictive coding, the video decoding apparatus comprising:
a receiver configured to receive coded data obtained by encoding, in units of one or more blocks to be decoded, a plurality of combinations each including (A) a weighting factor for each of the luminance and the two color differences, (B) an offset for each of the luminance and the two color differences, (C) a first flag indicating the presence or absence of the weighting factor and the offset of the luminance, and (D) a second flag indicating the presence or absence of the weighting factor and the offset of the color differences, and by encoding, for the block to be decoded, (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, (2) information on a motion vector, and (3) index information indicating one combination among the plurality of combinations;
a derivation module configured to derive, for the block to be decoded, the combination including the weighting factor and the offset from the index information and the plurality of combinations;
a first generator configured to generate a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation;
a second generator configured to generate a motion-compensated prediction image for the block to be decoded by adding said offset to a reference image multiplied by said weighting factor, according to a motion vector of the block to be decoded; and
a third generator configured to generate a decoded image signal for the block to be decoded by obtaining a sum of said prediction error signal and said motion-compensated prediction image.
25. A video decoding method for decoding coded data obtained by subjecting a video image having a luminance and two color differences to predictive coding, the video decoding method comprising:
receiving coded data obtained by encoding, in units of one or more blocks to be decoded, a plurality of combinations each including (A) a weighting factor for each of the luminance and the two color differences, (B) an offset for each of the luminance and the two color differences, and (C) a flag indicating the presence or absence of the weighting factor and the offset of the luminance, and by encoding, for the block to be decoded, (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences and (2) index information indicating one combination among the plurality of combinations;
deriving, for the block to be decoded, the combination including the weighting factor and the offset from the index information and the plurality of combinations;
generating a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation;
generating a prediction image for the block to be decoded by adding said offset to a reference image multiplied by said weighting factor; and
generating a decoded image signal for the block to be decoded by computing a sum of said prediction error signal and said prediction image.
26. A video decoding apparatus for decoding coded data obtained by subjecting a video image having a luminance component and two chrominance components to predictive coding, the video decoding apparatus comprising:
a receiver configured to receive coded data, the coded data being obtained by encoding, in units of one or more blocks to be decoded, a plurality of combinations, each combination including (A) a weighting factor for each of the luminance and the two chrominance components, (B) an offset for each of the luminance and the two chrominance components and (C) a flag indicating the presence or absence of the weighting factor and the offset for the luminance, and by encoding, for the block to be decoded, (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two chrominance components and (2) index information indicating one of the plurality of combinations;
a derivation module configured to derive, for the block to be decoded, the combination including the weighting factor and the offset from the index information and the plurality of combinations;
a first generator configured to generate, for the block to be decoded, a prediction error signal by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation;
a second generator configured to generate, for the block to be decoded, a predicted image by adding the offset to a reference image multiplied by the weighting factor; and
a third generator configured to generate, for the block to be decoded, a decoded image signal by calculating the sum of the prediction error signal and the predicted image.
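The single-reference weighted prediction described above can be sketched in a few lines of Python. This is an illustrative model only, not the claimed implementation: the flat sample lists, the clipping to the sample range, and the function name are assumptions introduced here.

```python
def decode_component(ref_block, pred_error, weight, offset, bit_depth=8):
    """Sketch of single-reference weighted prediction for one component
    (the luminance or one of the two chrominance components) of a block.

    ref_block  -- reference image samples (an illustrative flat block)
    pred_error -- prediction error samples, i.e. the quantized orthogonal
                  transform coefficients after dequantization and inverse
                  orthogonal transformation (modeled here as given values)
    weight, offset -- the weighting factor and offset for this component
    """
    max_val = (1 << bit_depth) - 1
    clip = lambda v: max(0, min(max_val, v))
    # Predicted image: reference sample * weighting factor + offset
    predicted = [clip(r * weight + offset) for r in ref_block]
    # Decoded image signal: prediction error + predicted image, clipped
    return [clip(e + p) for e, p in zip(pred_error, predicted)]
```

The same routine would be invoked three times per block, once with the luminance pair of parameters and once with each chrominance pair.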
27. A video decoding method for decoding coded data, obtained by encoding a video image having a luminance component and two chrominance components, to obtain the video image, the video decoding method comprising:
receiving coded data, the coded data being obtained by encoding, for one or more blocks to be decoded, a plurality of combinations each including (A) a weighting factor for each of the luminance and the two chrominance components and (B) an offset for each of the luminance and the two chrominance components, and by encoding, for the block to be decoded, (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two chrominance components and (2) index information indicating (a) one of the plurality of combinations and (b) a reference image;
deriving, for the block to be decoded, the combination of the weighting factor and the offset from the index information and the plurality of combinations;
generating, for the block to be decoded, a predicted image by multiplying the reference image by the weighting factor and adding the offset;
generating, for the block to be decoded, a prediction error signal by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation; and
generating, for the block to be decoded, a decoded image signal by calculating the sum of the prediction error signal and the predicted image.
28. A video decoding apparatus for decoding coded data, obtained by encoding a video image having a luminance component and two chrominance components, to obtain the video image, the video decoding apparatus comprising:
a receiver configured to receive coded data, the coded data being obtained by encoding, for one or more blocks to be decoded, a plurality of combinations each including (A) a weighting factor for each of the luminance and the two chrominance components and (B) an offset for each of the luminance and the two chrominance components, and by encoding, for the block to be decoded, (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two chrominance components and (2) index information indicating (a) one of the plurality of combinations and (b) a reference image;
a derivation module configured to derive, for the block to be decoded, the combination of the weighting factor and the offset from the index information and the plurality of combinations;
a first generator configured to generate, for the block to be decoded, a predicted image by multiplying the reference image by the weighting factor and adding the offset;
a second generator configured to generate, for the block to be decoded, a prediction error signal by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation; and
a third generator configured to generate, for the block to be decoded, a decoded image signal by calculating the sum of the prediction error signal and the predicted image.
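The derivation step above — selecting one weighting-factor/offset combination for the luminance and the two chrominance components out of the encoded plurality via the decoded index information — can be modeled as a table lookup. The table contents and names below are invented for illustration only.

```python
# Each entry holds one encoded combination:
# (luma_weight, luma_offset, cb_weight, cb_offset, cr_weight, cr_offset).
combinations = [
    (1, 0, 1, 0, 1, 0),    # identity: no brightness or colour change
    (2, -10, 1, 0, 1, 0),  # fade-style: stronger luma weight, negative offset
    (1, 16, 1, 4, 1, 4),   # flash-style: positive offsets on all components
]

def derive(index_information):
    """Derivation-module sketch: map the decoded index to one combination."""
    lw, lo, cbw, cbo, crw, cro = combinations[index_information]
    return {"luma": (lw, lo), "cb": (cbw, cbo), "cr": (crw, cro)}
```

In the encoded stream only the small index is carried per block, while the table of combinations is signaled once for the one or more blocks, which is what makes the scheme compact.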
29. A video decoding method for decoding coded data, obtained by subjecting a video image having a luminance component and two chrominance components to predictive coding, to obtain the video image, the video decoding method comprising:
receiving coded data obtained by encoding, for a block to be decoded, (1) a given number of index information elements each indicating a combination of (A) a weighting factor for each of the luminance and the two chrominance components and (B) an offset for each of the luminance and the two chrominance components, and (2) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two chrominance components;
deriving, for the block to be decoded, a given number of weighting factors and a given number of offsets from the given number of index information elements;
generating, for the block to be decoded, a predicted image by multiplying a given number of reference images by the weighting factors, of the given number, corresponding to the respective reference images and adding the given number of offsets;
generating, for the block to be decoded, a prediction error signal by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation; and
generating, for the block to be decoded, a decoded image signal by calculating the sum of the prediction error signal and the predicted image.
30. A video decoding apparatus for decoding coded data, obtained by subjecting a video image having a luminance component and two chrominance components to predictive coding, to obtain the video image, the video decoding apparatus comprising:
a receiver configured to receive coded data obtained by encoding, for a block to be decoded, (1) a given number of index information elements each indicating a combination of (A) a weighting factor for each of the luminance and the two chrominance components and (B) an offset for each of the luminance and the two chrominance components, and (2) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two chrominance components;
a derivation module configured to derive, for the block to be decoded, a given number of weighting factors and a given number of offsets from the given number of index information elements;
a first generator configured to generate, for the block to be decoded, a predicted image by multiplying a given number of reference images by the weighting factors, of the given number, corresponding to the respective reference images and adding the given number of offsets;
a second generator configured to generate, for the block to be decoded, a prediction error signal by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation; and
a third generator configured to generate, for the block to be decoded, a decoded image signal by calculating the sum of the prediction error signal and the predicted image.
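For the given-number case above (for example, two references in bi-prediction), each reference image is multiplied by its own weighting factor and its own offset is added. The sketch below assumes the weighted references are then combined by a rounded average; that normalization step is an assumption of this illustration, since the claims state only the multiply-and-add.

```python
def weighted_multi_ref_prediction(refs, weights, offsets, bit_depth=8):
    """Sketch of predicted-image generation from a given number of
    reference images, each with its corresponding weighting factor and
    offset. Averaging across the references is an assumption here."""
    assert len(refs) == len(weights) == len(offsets)
    n = len(refs)
    max_val = (1 << bit_depth) - 1
    predicted = []
    for samples in zip(*refs):  # one sample per reference, per position
        # Multiply each reference sample by its weight, add its offset.
        acc = sum(s * w + o for s, w, o in zip(samples, weights, offsets))
        # Rounded average over the number of references, clipped to range.
        predicted.append(max(0, min(max_val, (acc + n // 2) // n)))
    return predicted
```

With two references, identity weights and zero offsets, this degenerates to the plain bi-predictive average.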
31. A video decoding method for decoding coded data, produced by encoding a video image having a luminance component and two chrominance components, to obtain the video image, the video decoding method comprising:
receiving coded data, the coded data being obtained by encoding, for one or more blocks to be decoded, a plurality of combinations each including (A) a weighting factor for each of the luminance and the two chrominance components and (B) an offset for each of the luminance and the two chrominance components, and by encoding, for the block to be decoded, (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two chrominance components and (2) a given number of index information elements each indicating one of the plurality of combinations;
deriving a given number of combinations, each including a weighting factor and an offset, from the given number of index information elements and the plurality of combinations;
generating, for the block to be decoded, a predicted image by multiplying a given number of reference images by the weighting factors, of the given number, corresponding to the respective reference images and adding the given number of offsets;
generating, for the block to be decoded, a prediction error signal by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation; and
generating, for the block to be decoded, a decoded image signal by calculating the sum of the prediction error signal and the predicted image.
32. A video decoding apparatus for decoding coded data, produced by encoding a video image having a luminance component and two chrominance components, to obtain the video image, the video decoding apparatus comprising:
a receiver configured to receive coded data, the coded data being obtained by encoding, for one or more blocks to be decoded, a plurality of combinations each including (A) a weighting factor for each of the luminance and the two chrominance components and (B) an offset for each of the luminance and the two chrominance components, and by encoding, for the block to be decoded, (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two chrominance components and (2) a given number of index information elements each indicating one of the plurality of combinations;
a derivation module configured to derive a given number of combinations, each including a weighting factor and an offset, from the given number of index information elements and the plurality of combinations;
a first generator configured to generate, for the block to be decoded, a predicted image by multiplying a given number of reference images by the weighting factors, of the given number, corresponding to the respective reference images and adding the given number of offsets;
a second generator configured to generate, for the block to be decoded, a prediction error signal by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation; and
a third generator configured to generate, for the block to be decoded, a decoded image signal by calculating the sum of the prediction error signal and the predicted image.
33. A video decoding method for decoding coded data produced by encoding a video image having a luminance component and two chrominance components, the video decoding method comprising:
receiving coded data, the coded data being obtained by encoding, for one or more blocks to be decoded, a plurality of combinations each including (A) a weighting factor for each of the luminance and the two chrominance components and (B) an offset for each of the luminance and the two chrominance components, and by encoding, for the block to be decoded, (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two chrominance components and (2) a given number of index information elements each indicating (a) one of the plurality of combinations and (b) a reference image;
deriving, for the block to be decoded, a given number of combinations, each including a weighting factor and an offset, from the given number of index information elements and the plurality of combinations;
generating, for the block to be decoded, a predicted image by multiplying a given number of reference images by the weighting factors, of the given number, corresponding to the respective reference images and adding the given number of offsets;
generating, for the block to be decoded, a prediction error signal by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation; and
generating, for the block to be decoded, a decoded image signal by calculating the sum of the prediction error signal and the predicted image.
34. A video decoding apparatus for decoding coded data produced by encoding a video image having a luminance component and two chrominance components, the video decoding apparatus comprising:
a receiver configured to receive coded data, the coded data being obtained by encoding, for one or more blocks to be decoded, a plurality of combinations each including (A) a weighting factor for each of the luminance and the two chrominance components and (B) an offset for each of the luminance and the two chrominance components, and by encoding, for the block to be decoded, (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two chrominance components and (2) a given number of index information elements each indicating (a) one of the plurality of combinations and (b) a reference image;
a derivation module configured to derive, for the block to be decoded, a given number of combinations, each including a weighting factor and an offset, from the given number of index information elements and the plurality of combinations;
a first generator configured to generate, for the block to be decoded, a predicted image by multiplying a given number of reference images by the weighting factors, of the given number, corresponding to the respective reference images and adding the given number of offsets;
a second generator configured to generate, for the block to be decoded, a prediction error signal by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation; and
a third generator configured to generate, for the block to be decoded, a decoded image signal by calculating the sum of the prediction error signal and the predicted image.
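In this last pair of claims each index information element selects both a weighting-factor/offset combination and a reference image. A hypothetical lookup illustrating that pairing (all table contents and names are invented for illustration):

```python
# Illustrative tables: (weight, offset) combinations and reference images.
combinations = [(1, 0), (2, -8), (1, 16)]
reference_images = {0: "ref_frame_0", 1: "ref_frame_1"}

# Each decoded index jointly selects a combination and a reference image.
index_table = [
    (0, 0),  # index 0 -> combination 0 applied to reference image 0
    (1, 0),  # index 1 -> combination 1 applied to reference image 0
    (0, 1),  # index 2 -> combination 0 applied to reference image 1
]

def derive_pair(index):
    """Return the (weight, offset) combination and reference image
    jointly indicated by one index information element."""
    combo_id, ref_id = index_table[index]
    return combinations[combo_id], reference_images[ref_id]
```

Tying the reference choice into the same index keeps per-block signaling to a single small code while still allowing different weights per reference.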
CN2009101458092A 2002-04-18 2003-04-18 Moving picture coding/decoding method and device Expired - Lifetime CN101631247B (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2002-116718 2002-04-18
JP2002116718 2002-04-18
JP2002116718 2002-04-18
JP2002-340042 2002-11-22
JP2002340042 2002-11-22
JP2002340042A JP4015934B2 (en) 2002-04-18 2002-11-22 Video coding method and apparatus

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CNB038007576A Division CN1297149C (en) 2002-04-18 2003-04-18 Moving picture coding/decoding method and device

Publications (2)

Publication Number Publication Date
CN101631247A true CN101631247A (en) 2010-01-20
CN101631247B CN101631247B (en) 2011-07-27

Family

ID=37390619

Family Applications (3)

Application Number Title Priority Date Filing Date
CN2009101458092A Expired - Lifetime CN101631247B (en) 2002-04-18 2003-04-18 Moving picture coding/decoding method and device
CNB2006100899520A Expired - Fee Related CN100508609C (en) 2002-04-18 2003-04-18 Moving picture decoding method and device
CN200910145811XA Expired - Lifetime CN101631248B (en) 2002-04-18 2003-04-18 Moving picture coding/decoding method and device

Family Applications After (2)

Application Number Title Priority Date Filing Date
CNB2006100899520A Expired - Fee Related CN100508609C (en) 2002-04-18 2003-04-18 Moving picture decoding method and device
CN200910145811XA Expired - Lifetime CN101631248B (en) 2002-04-18 2003-04-18 Moving picture coding/decoding method and device

Country Status (4)

Country Link
JP (319) JP4127713B2 (en)
CN (3) CN101631247B (en)
ES (2) ES2351306T3 (en)
NO (1) NO339262B1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103069805A (en) * 2011-06-27 2013-04-24 Panasonic Corporation Image encoding method, image decoding method, image encoding device, image decoding device, and image encoding/decoding device
CN103826130A (en) * 2010-04-08 2014-05-28 Kabushiki Kaisha Toshiba Image decoding method and image decoding device
CN106067973A (en) * 2010-05-19 2016-11-02 SK Telecom Co., Ltd. Video decoding apparatus
US9538181B2 (en) 2010-04-08 2017-01-03 Kabushiki Kaisha Toshiba Image encoding method and image decoding method
CN107454398A (en) * 2011-01-12 2017-12-08 Canon Kabushiki Kaisha Encoding method, encoding device, decoding method and decoding device
CN110536141A (en) * 2012-01-20 2019-12-03 Sony Corporation Complexity reduction of significance map coding
US11095878B2 2011-06-06 2021-08-17 Canon Kabushiki Kaisha Method and device for encoding a sequence of images and method and device for decoding a sequence of images

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0858940A (en) * 1994-08-25 1996-03-05 Mukai Kogyo Kk Conveyer device
US7075502B1 (en) 1998-04-10 2006-07-11 E Ink Corporation Full color reflective display with multichromatic sub-pixels
JP4015934B2 (en) 2002-04-18 2007-11-28 Kabushiki Kaisha Toshiba Video coding method and apparatus
CN101631247B (en) * 2002-04-18 2011-07-27 Kabushiki Kaisha Toshiba Moving picture coding/decoding method and device
CN101222638B (en) * 2007-01-08 2011-12-07 Huawei Technologies Co., Ltd. Multi-video encoding and decoding method and device
KR101365444B1 (en) * 2007-11-19 2014-02-21 Samsung Electronics Co., Ltd. Method and apparatus for encoding/decoding moving image efficiently through adjusting a resolution of image
TWI447954B (en) 2009-09-15 2014-08-01 Showa Denko Kk Light-emitting diode, light-emitting diode lamp and lighting device
JP2011087202A (en) 2009-10-19 2011-04-28 Sony Corp Storage device and data communication system
JP5440927B2 (en) 2009-10-19 2014-03-12 Ricoh Co., Ltd. Distance camera device
CN103826131B (en) * 2010-04-08 2017-03-01 Kabushiki Kaisha Toshiba Picture decoding method and picture decoding apparatus
JP5325157B2 (en) * 2010-04-09 2013-10-23 NTT Docomo, Inc. Moving picture encoding apparatus, moving picture decoding apparatus, moving picture encoding method, moving picture decoding method, moving picture encoding program, and moving picture decoding program
JP5482407B2 (en) 2010-04-28 2014-05-07 Ricoh Co., Ltd. Information processing apparatus, image processing apparatus, image processing system, screen customization method, screen customization program, and recording medium recording the program
JP2012032611A (en) 2010-07-30 2012-02-16 Sony Corp Stereoscopic image display apparatus
JP5757075B2 (en) 2010-09-15 2015-07-29 Sony Corporation Transmitting apparatus, transmitting method, receiving apparatus, receiving method, program, and broadcasting system
KR101755601B1 (en) 2010-11-04 2017-07-10 Samsung Display Co., Ltd. Liquid Crystal Display integrated Touch Screen Panel
KR20120080122A (en) * 2011-01-06 2012-07-16 Samsung Electronics Co., Ltd. Apparatus and method for encoding and decoding multi-view video based competition
WO2012093879A2 (en) * 2011-01-06 2012-07-12 Samsung Electronics Co., Ltd. Competition-based multiview video encoding/decoding device and method thereof
KR102064157B1 (en) * 2011-03-06 2020-01-09 LG Electronics Inc. Intra prediction method of chrominance block using luminance sample, and apparatus using same
MY193771A (en) 2011-06-28 2022-10-27 Samsung Electronics Co Ltd Video encoding method using offset adjustments according to pixel classification and apparatus therefor, video decoding method and apparatus therefor
JP5830993B2 (en) * 2011-07-14 2015-12-09 Sony Corporation Image processing apparatus and image processing method
US8599652B2 (en) 2011-07-14 2013-12-03 Tdk Corporation Thermally-assisted magnetic recording medium and magnetic recording/reproducing device using the same
CN103124346B (en) * 2011-11-18 2016-01-20 Peking University Residual prediction determination method and system
CA2860248C (en) * 2011-12-22 2017-01-17 Samsung Electronics Co., Ltd. Video encoding method using offset adjustment according to classification of pixels by maximum encoding units and apparatus thereof, and video decoding method and apparatus thereof
WO2014007514A1 (en) * 2012-07-02 2014-01-09 LG Electronics Inc. Method for decoding image and apparatus using same
TWI492373B (en) * 2012-08-09 2015-07-11 Au Optronics Corp Flexible display module manufacturing method
CN105189122B (en) 2013-03-20 2017-05-10 惠普发展公司,有限责任合伙企业 Molded die slivers with exposed front and back surfaces
JP6087747B2 (en) 2013-06-27 2017-03-01 KDDI Corporation Video encoding device, video decoding device, video system, video encoding method, video decoding method, and program
WO2015105048A1 (en) 2014-01-08 2015-07-16 Asahi Kasei Microdevices Corporation Output-current detection chip for diode sensors, and diode sensor device
JP6619930B2 (en) 2014-12-19 2019-12-11 Adeka Corporation Polyolefin resin composition
JP6434162B2 (en) * 2015-10-28 2018-12-05 Kabushiki Kaisha Toshiba Data management system, data management method and program
AU2017224004B2 (en) 2016-02-24 2021-10-28 Magic Leap, Inc. Polarizing beam splitter with low light leakage
DE102019103438A1 (en) 2019-02-12 2020-08-13 Werner Krammel Vehicle with tilting frame and spring damper system
JP7402714B2 (en) 2020-03-05 2023-12-21 Toyo Engineering Corporation Fluidized bed granulator or fluidized bed/entrained bed granulator
CN114213441B (en) * 2021-12-27 2023-12-01 Changchun Institute of Applied Chemistry, Chinese Academy of Sciences Boron or phosphorus fused ring compound, preparation method thereof and light-emitting device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2938412B2 (en) * 1996-09-03 1999-08-23 Nippon Telegraph and Telephone Corporation Method for compensating luminance change of moving image, moving image encoding device, moving image decoding device, recording medium recording moving image encoding or decoding program, and recording medium recording moving image encoded data
FR2755527B1 (en) * 1996-11-07 1999-01-08 Thomson Multimedia Sa MOTION COMPENSATED PREDICTION METHOD AND ENCODER USING SUCH A METHOD
CA2264834C (en) * 1997-07-08 2006-11-07 Sony Corporation Video data encoder, video data encoding method, video data transmitter, and video data recording medium
JP2001333389A (en) * 2000-05-17 2001-11-30 Mitsubishi Electric Research Laboratories Inc Video reproduction system and method for processing video signal
CN101631247B (en) * 2002-04-18 2011-07-27 Kabushiki Kaisha Toshiba Moving picture coding/decoding method and device

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9906812B2 (en) 2010-04-08 2018-02-27 Kabushiki Kaisha Toshiba Image encoding method and image decoding method
US10091525B2 (en) 2010-04-08 2018-10-02 Kabushiki Kaisha Toshiba Image encoding method and image decoding method
US10779001B2 (en) 2010-04-08 2020-09-15 Kabushiki Kaisha Toshiba Image encoding method and image decoding method
US9538181B2 (en) 2010-04-08 2017-01-03 Kabushiki Kaisha Toshiba Image encoding method and image decoding method
CN103826130B (en) * 2010-04-08 2017-03-01 Kabushiki Kaisha Toshiba Picture decoding method and picture decoding apparatus
US12225227B2 (en) 2010-04-08 2025-02-11 Kabushiki Kaisha Toshiba Image encoding method and image decoding method
US9794587B2 (en) 2010-04-08 2017-10-17 Kabushiki Kaisha Toshiba Image encoding method and image decoding method
US12132927B2 (en) 2010-04-08 2024-10-29 Kabushiki Kaisha Toshiba Image encoding method and image decoding method
CN103826130A (en) * 2010-04-08 2014-05-28 Kabushiki Kaisha Toshiba Image decoding method and image decoding device
US10715828B2 (en) 2010-04-08 2020-07-14 Kabushiki Kaisha Toshiba Image encoding method and image decoding method
US10009623B2 (en) 2010-04-08 2018-06-26 Kabushiki Kaisha Toshiba Image encoding method and image decoding method
US10560717B2 (en) 2010-04-08 2020-02-11 Kabushiki Kaisha Toshiba Image encoding method and image decoding method
US11889107B2 (en) 2010-04-08 2024-01-30 Kabushiki Kaisha Toshiba Image encoding method and image decoding method
US11265574B2 (en) 2010-04-08 2022-03-01 Kabushiki Kaisha Toshiba Image encoding method and image decoding method
US10999597B2 (en) 2010-04-08 2021-05-04 Kabushiki Kaisha Toshiba Image encoding method and image decoding method
US10542281B2 (en) 2010-04-08 2020-01-21 Kabushiki Kaisha Toshiba Image encoding method and image decoding method
CN106067973B (en) * 2010-05-19 2019-06-18 SK Telecom Co., Ltd. Video decoding apparatus
CN106067973A (en) * 2010-05-19 2016-11-02 SK Telecom Co., Ltd. Video decoding apparatus
US10506236B2 (en) 2011-01-12 2019-12-10 Canon Kabushiki Kaisha Video encoding and decoding with improved error resilience
US10609380B2 (en) 2011-01-12 2020-03-31 Canon Kabushiki Kaisha Video encoding and decoding with improved error resilience
US11146792B2 (en) 2011-01-12 2021-10-12 Canon Kabushiki Kaisha Video encoding and decoding with improved error resilience
US10499060B2 (en) 2011-01-12 2019-12-03 Canon Kabushiki Kaisha Video encoding and decoding with improved error resilience
CN107454398A (en) * 2011-01-12 2017-12-08 Canon Kabushiki Kaisha Encoding method, encoding device, decoding method and decoding device
US11095878B2 2011-06-06 2021-08-17 Canon Kabushiki Kaisha Method and device for encoding a sequence of images and method and device for decoding a sequence of images
CN103069805A (en) * 2011-06-27 2013-04-24 松下电器产业株式会社 Image encoding method, image decoding method, image encoding device, image decoding device, and image encoding/decoding device
CN103069805B (en) * 2011-06-27 2017-05-31 Sun Patent Trust Image encoding method, image decoding method, image encoding device, image decoding device, and image encoding/decoding device
US11025938B2 (en) 2012-01-20 2021-06-01 Sony Corporation Complexity reduction of significance map coding
CN110536141B (en) * 2012-01-20 2021-07-06 Sony Corporation Complexity reduction for significance map coding
CN110536141A (en) * 2012-01-20 2019-12-03 Sony Corporation Complexity reduction of significance map coding

Also Published As

Publication number Publication date
JP4406049B1 (en) 2010-01-27
JP4478737B2 (en) 2010-06-09
JP2010283899A (en) 2010-12-16
JP2009165182A (en) 2009-07-23
JP2010104048A (en) 2010-05-06
JP4256456B2 (en) 2009-04-22
JP4517030B2 (en) 2010-08-04
JP2010172031A (en) 2010-08-05
JP2010022036A (en) 2010-01-28
JP2010028833A (en) 2010-02-04
JP2010124499A (en) 2010-06-03
JP4517032B2 (en) 2010-08-04
JP2009290884A (en) 2009-12-10
JP2010104056A (en) 2010-05-06
JP4406067B2 (en) 2010-01-27
JP4127716B2 (en) 2008-07-30
JP2010124495A (en) 2010-06-03
JP4517020B2 (en) 2010-08-04
JP4538570B2 (en) 2010-09-08
JP4355759B2 (en) 2009-11-04
JP2009165177A (en) 2009-07-23
JP2010183625A (en) 2010-08-19
JP2011004440A (en) 2011-01-06
JP2009077424A (en) 2009-04-09
JP2009219137A (en) 2009-09-24
JP2010172025A (en) 2010-08-05
JP2009290883A (en) 2009-12-10
JP2009165180A (en) 2009-07-23
JP4208945B2 (en) 2009-01-14
JP2009011008A (en) 2009-01-15
JP2010098751A (en) 2010-04-30
JP2010098758A (en) 2010-04-30
JP2009017590A (en) 2009-01-22
JP2010124508A (en) 2010-06-03
JP4406051B2 (en) 2010-01-27
JP4406082B1 (en) 2010-01-27
JP2010172021A (en) 2010-08-05
JP4208947B2 (en) 2009-01-14
JP4406057B2 (en) 2010-01-27
JP2009136015A (en) 2009-06-18
JP4517026B2 (en) 2010-08-04
JP4538555B2 (en) 2010-09-08
JP2009207172A (en) 2009-09-10
JP2010022031A (en) 2010-01-28
JP4127718B2 (en) 2008-07-30
JP4320367B2 (en) 2009-08-26
JP4355755B2 (en) 2009-11-04
JP4637282B2 (en) 2011-02-23
JP2010158060A (en) 2010-07-15
JP2009033770A (en) 2009-02-12
JP2009038839A (en) 2009-02-19
JP2009050019A (en) 2009-03-05
JP4517029B2 (en) 2010-08-04
JP4376306B2 (en) 2009-12-02
JP2010172018A (en) 2010-08-05
JP2011004443A (en) 2011-01-06
JP2009296644A (en) 2009-12-17
JP2010028824A (en) 2010-02-04
JP2010161814A (en) 2010-07-22
JP2010206852A (en) 2010-09-16
JP2010172028A (en) 2010-08-05
JP2009278646A (en) 2009-11-26
JP2010172023A (en) 2010-08-05
JP2011004439A (en) 2011-01-06
JP4338775B2 (en) 2009-10-07
JP4427617B2 (en) 2010-03-10
JP4355756B2 (en) 2009-11-04
JP2010016871A (en) 2010-01-21
JP2010016868A (en) 2010-01-21
JP2010016869A (en) 2010-01-21
JP2010016873A (en) 2010-01-21
JP2010154580A (en) 2010-07-08
JP2010158076A (en) 2010-07-15
JP4538539B2 (en) 2010-09-08
JP2009165178A (en) 2009-07-23
JP2010172029A (en) 2010-08-05
JP2010028835A (en) 2010-02-04
CN101631248B (en) 2011-09-14
JP4406061B2 (en) 2010-01-27
JP4213766B1 (en) 2009-01-21
JP4438898B1 (en) 2010-03-24
JP2008199650A (en) 2008-08-28
JP2010041733A (en) 2010-02-18
JP2010045809A (en) 2010-02-25
JP4478740B2 (en) 2010-06-09
JP4307531B2 (en) 2009-08-05
JP4538561B2 (en) 2010-09-08
JP4256458B2 (en) 2009-04-22
JP4478735B2 (en) 2010-06-09
JP4427627B2 (en) 2010-03-10
JP4560137B2 (en) 2010-10-13
JP4538562B2 (en) 2010-09-08
JP2009296645A (en) 2009-12-17
JP2010016864A (en) 2010-01-21
JP4208952B2 (en) 2009-01-14
JP2010206851A (en) 2010-09-16
JP2010074848A (en) 2010-04-02
JP4637279B2 (en) 2011-02-23
JP4338773B2 (en) 2009-10-07
JP4307534B2 (en) 2009-08-05
JP4406066B1 (en) 2010-01-27
JP4256459B2 (en) 2009-04-22
JP4478744B2 (en) 2010-06-09
JP4478745B2 (en) 2010-06-09
JP4560136B2 (en) 2010-10-13
JP2010172038A (en) 2010-08-05
JP4406053B1 (en) 2010-01-27
JP4496315B2 (en) 2010-07-07
JP2010206849A (en) 2010-09-16
JP4208953B2 (en) 2009-01-14
JP2009165179A (en) 2009-07-23
JP4406081B1 (en) 2010-01-27
JP4427614B2 (en) 2010-03-10
JP2010124506A (en) 2010-06-03
CN101631247B (en) 2011-07-27
JP2010074852A (en) 2010-04-02
JP4538565B2 (en) 2010-09-08
JP4478733B2 (en) 2010-06-09
JP4282754B2 (en) 2009-06-24
JP4320370B2 (en) 2009-08-26
JP2010183627A (en) 2010-08-19
JP2010028834A (en) 2010-02-04
JP4355768B2 (en) 2009-11-04
JP4406065B2 (en) 2010-01-27
JP2009136006A (en) 2009-06-18
JP2009296651A (en) 2009-12-17
JP2007053800A (en) 2007-03-01
JP2010016875A (en) 2010-01-21
JP4406052B2 (en) 2010-01-27
JP2010028847A (en) 2010-02-04
JP2008211857A (en) 2008-09-11
JP4538544B2 (en) 2010-09-08
JP4355760B2 (en) 2009-11-04
JP4538556B2 (en) 2010-09-08
JP4127715B2 (en) 2008-07-30
JP4460640B2 (en) 2010-05-12
JP2009239949A (en) 2009-10-15
JP2010041732A (en) 2010-02-18
JP2009081879A (en) 2009-04-16
JP2009239939A (en) 2009-10-15
JP2010028848A (en) 2010-02-04
JP2009239941A (en) 2009-10-15
JP2010098755A (en) 2010-04-30
JP2010172033A (en) 2010-08-05
JP2010172020A (en) 2010-08-05
JP2010172027A (en) 2010-08-05
JP2009278645A (en) 2009-11-26
JP2010172036A (en) 2010-08-05
JP2009077423A (en) 2009-04-09
JP2009071887A (en) 2009-04-02
JP4406048B2 (en) 2010-01-27
JP2008211856A (en) 2008-09-11
JP4282757B2 (en) 2009-06-24
JP2010098757A (en) 2010-04-30
JP2009136013A (en) 2009-06-18
JP2010028825A (en) 2010-02-04
JP4406058B2 (en) 2010-01-27
JP4517034B2 (en) 2010-08-04
JP4478734B2 (en) 2010-06-09
JP4406069B2 (en) 2010-01-27
JP2010028849A (en) 2010-02-04
JP2010158073A (en) 2010-07-15
JP2010158061A (en) 2010-07-15
JP2007060713A (en) 2007-03-08
JP2010016872A (en) 2010-01-21
JP2010124496A (en) 2010-06-03
JP4478750B2 (en) 2010-06-09
JP4538540B2 (en) 2010-09-08
JP2009239942A (en) 2009-10-15
JP2010016863A (en) 2010-01-21
JP2010124500A (en) 2010-06-03
JP2010028841A (en) 2010-02-04
JP2009207174A (en) 2009-09-10
JP2011004442A (en) 2011-01-06
JP2010028842A (en) 2010-02-04
JP4496300B1 (en) 2010-07-07
NO20150299L (en) 2004-01-29
NO339262B1 (en) 2016-11-21
JP2009225465A (en) 2009-10-01
JP2010172034A (en) 2010-08-05
JP4427616B2 (en) 2010-03-10
JP2009136012A (en) 2009-06-18
JP4538568B2 (en) 2010-09-08
JP2010016876A (en) 2010-01-21
JP4517019B2 (en) 2010-08-04
JP4234782B1 (en) 2009-03-04
JP4406089B1 (en) 2010-01-27
JP2009278644A (en) 2009-11-26
JP4355771B2 (en) 2009-11-04
JP4560139B2 (en) 2010-10-13
JP4307533B2 (en) 2009-08-05
JP4496310B2 (en) 2010-07-07
JP4460638B2 (en) 2010-05-12
JP4406083B1 (en) 2010-01-27
JP4376301B2 (en) 2009-12-02
JP4637280B2 (en) 2011-02-23
JP2010124503A (en) 2010-06-03
JP2010104051A (en) 2010-05-06
JP4637276B2 (en) 2011-02-23
JP4320366B2 (en) 2009-08-26
JP4496307B2 (en) 2010-07-07
JP4282752B2 (en) 2009-06-24
JP4406087B1 (en) 2010-01-27
JP4406086B1 (en) 2010-01-27
JP4517011B2 (en) 2010-08-04
JP4307526B2 (en) 2009-08-05
JP5481518B2 (en) 2014-04-23
JP2010016865A (en) 2010-01-21
JP2010104055A (en) 2010-05-06
JP2009296646A (en) 2009-12-17
JP4355766B2 (en) 2009-11-04
JP4355758B2 (en) 2009-11-04
JP4517023B1 (en) 2010-08-04
JP4307529B2 (en) 2009-08-05
JP2010172035A (en) 2010-08-05
JP4307532B2 (en) 2009-08-05
JP2010206850A (en) 2010-09-16
JP2010206848A (en) 2010-09-16
JP2009239946A (en) 2009-10-15
JP4307523B2 (en) 2009-08-05
JP4256461B2 (en) 2009-04-22
JP4538554B2 (en) 2010-09-08
JP4376303B2 (en) 2009-12-02
JP4127717B2 (en) 2008-07-30
JP4406071B2 (en) 2010-01-27
JP4637286B2 (en) 2011-02-23
JP4478748B2 (en) 2010-06-09
JP4427610B2 (en) 2010-03-10
JP4427612B2 (en) 2010-03-10
JP4307530B2 (en) 2009-08-05
JP2009071885A (en) 2009-04-02
JP2009278650A (en) 2009-11-26
JP2010035184A (en) 2010-02-12
CN1863315A (en) 2006-11-15
JP2010098750A (en) 2010-04-30
JP4538551B2 (en) 2010-09-08
JP4517018B2 (en) 2010-08-04
JP4517028B2 (en) 2010-08-04
JP4517021B2 (en) 2010-08-04
JP4406085B1 (en) 2010-01-27
JP4637278B2 (en) 2011-02-23
JP4517039B2 (en) 2010-08-04
JP4307528B2 (en) 2009-08-05
JP4538546B2 (en) 2010-09-08
JP4307524B2 (en) 2009-08-05
JP4496306B2 (en) 2010-07-07
JP2010158077A (en) 2010-07-15
ES2351306T3 (en) 2011-02-02
JP4355762B2 (en) 2009-11-04
JP2009239940A (en) 2009-10-15
JP2010022032A (en) 2010-01-28
JP4427620B2 (en) 2010-03-10
JP4517035B2 (en) 2010-08-04
JP2010098752A (en) 2010-04-30
JP2011004444A (en) 2011-01-06
JP2009207171A (en) 2009-09-10
JP4517038B2 (en) 2010-08-04
JP4406080B1 (en) 2010-01-27
JP4560138B2 (en) 2010-10-13
JP4637277B2 (en) 2011-02-23
JP4637288B2 (en) 2011-02-23
JP2010104054A (en) 2010-05-06
JP2008199649A (en) 2008-08-28
JP2010104045A (en) 2010-05-06
JP4538548B2 (en) 2010-09-08
JP4517037B2 (en) 2010-08-04
JP2010172030A (en) 2010-08-05
JP2010158067A (en) 2010-07-15
JP2010158070A (en) 2010-07-15
JP4438901B1 (en) 2010-03-24
JP2007068217A (en) 2007-03-15
JP2009136008A (en) 2009-06-18
JP4460635B2 (en) 2010-05-12
JP4538549B1 (en) 2010-09-08
JP2010172019A (en) 2010-08-05
JP2010022033A (en) 2010-01-28
JP2010124498A (en) 2010-06-03
JP4256455B2 (en) 2009-04-22
JP2007104699A (en) 2007-04-19
JP2010200366A (en) 2010-09-09
JP4438900B1 (en) 2010-03-24
JP4496312B2 (en) 2010-07-07
JP4406072B2 (en) 2010-01-27
JP2010104058A (en) 2010-05-06
JP4208955B2 (en) 2009-01-14
JP2008211855A (en) 2008-09-11
JP2010028845A (en) 2010-02-04
JP4427611B2 (en) 2010-03-10
JP2010124507A (en) 2010-06-03
JP2010158063A (en) 2010-07-15
JP4247305B1 (en) 2009-04-02
JP2010178396A (en) 2010-08-12
JP4355770B2 (en) 2009-11-04
JP2007068215A (en) 2007-03-15
JP2011024259A (en) 2011-02-03
JP4637283B2 (en) 2011-02-23
JP4406079B1 (en) 2010-01-27
JP2008182763A (en) 2008-08-07
JP2009207173A (en) 2009-09-10
JP2010172040A (en) 2010-08-05
JP2010158072A (en) 2010-07-15
JP2011004447A (en) 2011-01-06
JP4517024B2 (en) 2010-08-04
JP2010074849A (en) 2010-04-02
JP2010098749A (en) 2010-04-30
JP4478739B2 (en) 2010-06-09
JP4208954B2 (en) 2009-01-14
JP4538541B1 (en) 2010-09-08
JP4517012B1 (en) 2010-08-04
JP2009136018A (en) 2009-06-18
JP4517041B2 (en) 2010-08-04
JP2010104047A (en) 2010-05-06
JP2009136017A (en) 2009-06-18
JP2010124502A (en) 2010-06-03
JP4517027B1 (en) 2010-08-04
JP4637285B2 (en) 2011-02-23
JP4338776B2 (en) 2009-10-07
JP2010022035A (en) 2010-01-28
JP2011024260A (en) 2011-02-03
JP4460634B2 (en) 2010-05-12
JP4517016B2 (en) 2010-08-04
JP4538552B2 (en) 2010-09-08
JP4234783B2 (en) 2009-03-04
JP4355769B2 (en) 2009-11-04
JP2010158071A (en) 2010-07-15
JP5738968B2 (en) 2015-06-24
JP4637291B2 (en) 2011-02-23
JP2009136009A (en) 2009-06-18
JP2009239944A (en) 2009-10-15
JP2010158064A (en) 2010-07-15
JP2010158062A (en) 2010-07-15
JP2010104040A (en) 2010-05-06
JP2009219136A (en) 2009-09-24
JP2009017589A (en) 2009-01-22
JP2009219135A (en) 2009-09-24
JP2009290881A (en) 2009-12-10
JP2009081889A (en) 2009-04-16
JP4406059B2 (en) 2010-01-27
JP4427624B2 (en) 2010-03-10
JP2009055640A (en) 2009-03-12
JP2009081878A (en) 2009-04-16
JP2010178395A (en) 2010-08-12
JP2011004446A (en) 2011-01-06
JP2010158079A (en) 2010-07-15
JP2010016866A (en) 2010-01-21
JP2010158065A (en) 2010-07-15
JP4496301B2 (en) 2010-07-07
JP2009290880A (en) 2009-12-10
JP4538550B2 (en) 2010-09-08
JP2008263641A (en) 2008-10-30
JP2009296650A (en) 2009-12-17
JP2009278647A (en) 2009-11-26
JP4406074B2 (en) 2010-01-27
JP4560140B2 (en) 2010-10-13
JP2010172039A (en) 2010-08-05
JP4307539B2 (en) 2009-08-05
JP4256460B2 (en) 2009-04-22
JP4625543B1 (en) 2011-02-02
JP2010074851A (en) 2010-04-02
JP4376311B2 (en) 2009-12-02
JP4538559B2 (en) 2010-09-08
JP2010028846A (en) 2010-02-04
JP4282753B2 (en) 2009-06-24
JP4376302B1 (en) 2009-12-02
JP4496305B2 (en) 2010-07-07
JP4637287B2 (en) 2011-02-23
JP2009136014A (en) 2009-06-18
JP4307535B2 (en) 2009-08-05
JP4320368B2 (en) 2009-08-26
JP4406064B1 (en) 2010-01-27
JP4406084B1 (en) 2010-01-27
JP2010022038A (en) 2010-01-28
JP2010283900A (en) 2010-12-16
JP4355764B2 (en) 2009-11-04
JP4478742B2 (en) 2010-06-09
JP4496314B2 (en) 2010-07-07
JP4427615B2 (en) 2010-03-10
JP4376314B1 (en) 2009-12-02
JP2010028852A (en) 2010-02-04
JP4406054B2 (en) 2010-01-27
JP2009278649A (en) 2009-11-26
JP4376312B1 (en) 2009-12-02
JP2010158052A (en) 2010-07-15
JP4538571B2 (en) 2010-09-08
JP4478752B2 (en) 2010-06-09
JP4208946B2 (en) 2009-01-14
JP2010028837A (en) 2010-02-04
JP2010016870A (en) 2010-01-21
JP4478743B2 (en) 2010-06-09
JP2007053799A (en) 2007-03-01
JP2009136011A (en) 2009-06-18
JP4478741B2 (en) 2010-06-09
JP2009239945A (en) 2009-10-15
JP4355767B2 (en) 2009-11-04
JP2010161815A (en) 2010-07-22
JP2009136005A (en) 2009-06-18
JP2009165181A (en) 2009-07-23
JP2010183629A (en) 2010-08-19
JP2010172042A (en) 2010-08-05
JP4538545B2 (en) 2010-09-08
JP4517014B2 (en) 2010-08-04
JP4538564B2 (en) 2010-09-08
JP4637290B2 (en) 2011-02-23
JP4247304B2 (en) 2009-04-02
JP4427619B2 (en) 2010-03-10
JP4496309B2 (en) 2010-07-07
JP2011030270A (en) 2011-02-10
JP4438897B1 (en) 2010-03-24
JP2010104053A (en) 2010-05-06
JP4427609B2 (en) 2010-03-10
JP4538553B2 (en) 2010-09-08
JP4406063B2 (en) 2010-01-27
JP4496304B1 (en) 2010-07-07
JP4496311B2 (en) 2010-07-07
JP4208913B2 (en) 2009-01-14
JP2009296648A (en) 2009-12-17
JP2010028843A (en) 2010-02-04
JP2010098753A (en) 2010-04-30
JP4320371B2 (en) 2009-08-26
JP2012161092A (en) 2012-08-23
JP4234781B1 (en) 2009-03-04
JP4338777B2 (en) 2009-10-07
JP4234780B2 (en) 2009-03-04
JP2009136004A (en) 2009-06-18
JP2009278643A (en) 2009-11-26
JP2010028839A (en) 2010-02-04
JP2010098748A (en) 2010-04-30
JP2010098756A (en) 2010-04-30
JP2009136010A (en) 2009-06-18
JP2010104052A (en) 2010-05-06
JP2009136019A (en) 2009-06-18
JP2011004445A (en) 2011-01-06
JP2014057363A (en) 2014-03-27
JP4460639B2 (en) 2010-05-12
JP4406073B2 (en) 2010-01-27
JP2010028850A (en) 2010-02-04
JP2008301538A (en) 2008-12-11
JP2011004441A (en) 2011-01-06
JP4282755B2 (en) 2009-06-24
JP4427623B2 (en) 2010-03-10
JP4438899B1 (en) 2010-03-24
JP2010158078A (en) 2010-07-15
JP4538560B2 (en) 2010-09-08
JP2009239937A (en) 2009-10-15
JP4538542B2 (en) 2010-09-08
JP4478749B2 (en) 2010-06-09
JP2009290879A (en) 2009-12-10
JP4460641B2 (en) 2010-05-12
JP2015043638A (en) 2015-03-05
JP4376305B1 (en) 2009-12-02
JP4355763B2 (en) 2009-11-04
JP2010124505A (en) 2010-06-03
JP2010124501A (en) 2010-06-03
JP4460631B2 (en) 2010-05-12
JP4406056B2 (en) 2010-01-27
JP2009239948A (en) 2009-10-15
JP2010158074A (en) 2010-07-15
JP2010124497A (en) 2010-06-03
JP4406078B1 (en) 2010-01-27
JP4282756B2 (en) 2009-06-24
JP4478751B2 (en) 2010-06-09
JP4376304B2 (en) 2009-12-02
JP2010022034A (en) 2010-01-28
JP4406047B2 (en) 2010-01-27
JP4625542B1 (en) 2011-02-02
JP4406060B1 (en) 2010-01-27
JP4438904B1 (en) 2010-03-24
JP4247306B1 (en) 2009-04-02
JP4496313B2 (en) 2010-07-07
JP4309469B2 (en) 2009-08-05
JP4338778B2 (en) 2009-10-07
JP2010028840A (en) 2010-02-04
JP4427622B2 (en) 2010-03-10
JP4478736B1 (en) 2010-06-09
JP4538543B2 (en) 2010-09-08
JP4538547B2 (en) 2010-09-08
JP4478738B2 (en) 2010-06-09
JP4637292B2 (en) 2011-02-23
JP2010074853A (en) 2010-04-02
JP4406055B2 (en) 2010-01-27
JP4376315B2 (en) 2009-12-02
JP4665062B2 (en) 2011-04-06
JP4438902B1 (en) 2010-03-24
JP4338774B2 (en) 2009-10-07
JP4406075B2 (en) 2010-01-27
JP4355765B2 (en) 2009-11-04
JP4427625B2 (en) 2010-03-10
JP2010183626A (en) 2010-08-19
JP4538566B2 (en) 2010-09-08
JP2009296647A (en) 2009-12-17
JP2010028827A (en) 2010-02-04
JP4427608B2 (en) 2010-03-10
JP2007068216A (en) 2007-03-15
CN101631248A (en) 2010-01-20
JP4478746B2 (en) 2010-06-09
JP4307537B2 (en) 2009-08-05
JP2010158058A (en) 2010-07-15
JP4234784B1 (en) 2009-03-04
JP4406062B1 (en) 2010-01-27
JP2010028826A (en) 2010-02-04
JP4307536B2 (en) 2009-08-05
JP4376309B2 (en) 2009-12-02
JP4538558B2 (en) 2010-09-08
JP2010172022A (en) 2010-08-05
JP2010022037A (en) 2010-01-28
JP2009038840A (en) 2009-02-19
JP2009136020A (en) 2009-06-18
JP2010172041A (en) 2010-08-05
JP2009071886A (en) 2009-04-02
JP4517036B2 (en) 2010-08-04
JP2010016867A (en) 2010-01-21
JP2008199651A (en) 2008-08-28
JP2010104044A (en) 2010-05-06
JP2010200365A (en) 2010-09-09
JP4338771B2 (en) 2009-10-07
JP4406088B1 (en) 2010-01-27
JP2010158075A (en) 2010-07-15
JP4517031B2 (en) 2010-08-04
JP2010158068A (en) 2010-07-15
JP2010104042A (en) 2010-05-06
JP4307525B2 (en) 2009-08-05
JP4427626B2 (en) 2010-03-10
JP2009296619A (en) 2009-12-17
JP2010158053A (en) 2010-07-15
JP2009225463A (en) 2009-10-01
JP2010028836A (en) 2010-02-04
JP2010206832A (en) 2010-09-16
JP2010158069A (en) 2010-07-15
JP2010028851A (en) 2010-02-04
JP4478747B2 (en) 2010-06-09
JP2009296652A (en) 2009-12-17
JP2009278642A (en) 2009-11-26
JP4127713B2 (en) 2008-07-30
JP4307538B2 (en) 2009-08-05
JP2010045810A (en) 2010-02-25
JP2010283898A (en) 2010-12-16
JP2010104049A (en) 2010-05-06
JP4282758B2 (en) 2009-06-24
JP2009118514A (en) 2009-05-28
JP4538563B2 (en) 2010-09-08
JP4406076B2 (en) 2010-01-27
JP2010172024A (en) 2010-08-05
JP2011024261A (en) 2011-02-03
JP4355757B2 (en) 2009-11-04
JP4517015B1 (en) 2010-08-04
JP4460637B2 (en) 2010-05-12
JP4376307B2 (en) 2009-12-02
JP2009239950A (en) 2009-10-15
JP4405584B2 (en) 2010-01-27
JP2010136429A (en) 2010-06-17
JP2009278648A (en) 2009-11-26
JP4637281B2 (en) 2011-02-23
JP4427618B2 (en) 2010-03-10
JP4460633B2 (en) 2010-05-12
JP4256462B2 (en) 2009-04-22
JP2011030271A (en) 2011-02-10
JP2010074850A (en) 2010-04-02
JP2009071888A (en) 2009-04-02
JP4538569B2 (en) 2010-09-08
JP2009136007A (en) 2009-06-18
JP4517033B2 (en) 2010-08-04
JP2009136003A (en) 2009-06-18
CN100508609C (en) 2009-07-01
JP4307527B2 (en) 2009-08-05
JP2009136021A (en) 2009-06-18
JP2010124504A (en) 2010-06-03
JP2010028844A (en) 2010-02-04
JP2010158066A (en) 2010-07-15
JP2010104046A (en) 2010-05-06
JP4517017B1 (en) 2010-08-04
JP2010158059A (en) 2010-07-15
JP4338772B2 (en) 2009-10-07
JP2010178386A (en) 2010-08-12
JP4517040B2 (en) 2010-08-04
JP2010022039A (en) 2010-01-28
JP2010098759A (en) 2010-04-30
JP2011030269A (en) 2011-02-10
JP4331259B2 (en) 2009-09-16
JP2010016874A (en) 2010-01-21
JP4538557B2 (en) 2010-09-08
JP4517013B2 (en) 2010-08-04
JP2010104043A (en) 2010-05-06
JP4307522B2 (en) 2009-08-05
JP2009219134A (en) 2009-09-24
JP4256457B2 (en) 2009-04-22
JP4427613B2 (en) 2010-03-10
JP4406077B1 (en) 2010-01-27
JP4438903B1 (en) 2010-03-24
JP4517025B2 (en) 2010-08-04
JP4496308B2 (en) 2010-07-07
JP2009239947A (en) 2009-10-15
JP2010172032A (en) 2010-08-05
JP4376313B2 (en) 2009-12-02
JP4376310B1 (en) 2009-12-02
JP4320369B2 (en) 2009-08-26
JP2010178387A (en) 2010-08-12
JP2009050020A (en) 2009-03-05
JP2010016877A (en) 2010-01-21
JP4460630B2 (en) 2010-05-12
JP2010074846A (en) 2010-04-02
ES2355656T3 (en) 2011-03-29
JP2009112045A (en) 2009-05-21
JP4637289B2 (en) 2011-02-23
JP2009239938A (en) 2009-10-15
JP2009077425A (en) 2009-04-09
JP2009011007A (en) 2009-01-15
JP2010283897A (en) 2010-12-16
JP2009239943A (en) 2009-10-15
JP2009290882A (en) 2009-12-10
JP2009225464A (en) 2009-10-01
JP2009207175A (en) 2009-09-10
JP2009296649A (en) 2009-12-17
JP2010104041A (en) 2010-05-06
JP2010104057A (en) 2010-05-06
JP4355761B2 (en) 2009-11-04
JP4427621B2 (en) 2010-03-10
JP4496303B2 (en) 2010-07-07
JP4406068B2 (en) 2010-01-27
JP4496302B2 (en) 2010-07-07
JP2010104050A (en) 2010-05-06
JP2010028838A (en) 2010-02-04
JP4517022B2 (en) 2010-08-04
JP2010098754A (en) 2010-04-30
JP2011004438A (en) 2011-01-06
JP2010028828A (en) 2010-02-04
JP4538567B1 (en) 2010-09-08
JP2010074847A (en) 2010-04-02
JP4127714B2 (en) 2008-07-30
JP4460632B2 (en) 2010-05-12
JP4406070B2 (en) 2010-01-27
JP2009071884A (en) 2009-04-02
JP4406050B2 (en) 2010-01-27
JP2010183628A (en) 2010-08-19
JP4376308B1 (en) 2009-12-02
JP4637284B2 (en) 2011-02-23
JP5921656B2 (en) 2016-05-24
JP2010161816A (en) 2010-07-22
JP2009022051A (en) 2009-01-29
JP2010172037A (en) 2010-08-05
JP2009136016A (en) 2009-06-18
JP2009011006A (en) 2009-01-15
JP4256465B2 (en) 2009-04-22
JP2010172026A (en) 2010-08-05
JP4460636B2 (en) 2010-05-12
JP2010178397A (en) 2010-08-12

Similar Documents

Publication Publication Date Title
CN101631247B (en) Moving picture coding/decoding method and device
CN101090493B (en) Moving picture decoding/encoding method and device
KR100786404B1 (en) Video decoding method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CX01 Expiry of patent term

Granted publication date: 20110727