CN101631247A - Moving picture coding/decoding method and device - Google Patents
- Publication number
- CN101631247A (application number CN200910145809A)
- Authority
- CN
- China
- Prior art keywords
- decoded
- block
- luminance
- chrominance
- offset
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
- H04N19/107—Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
- H04N19/139—Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
- H04N19/423—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, characterised by memory arrangements
- H04N19/46—Embedding additional information in the video signal during the compression process
- H04N19/51—Motion estimation or motion compensation
- H04N19/573—Motion compensation with multiple frame prediction using two or more reference frames in a given prediction direction
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Color Television Systems (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
A moving picture coding/decoding device includes an image memory/prediction image generator (108) that selects one combination from a plurality of combinations, prepared in advance, of at least one reference image number and a prediction parameter, and generates a prediction image signal (212) in accordance with the reference image number and prediction parameter of the selected combination. The device uses a variable-length encoder (111) to encode orthogonal transform coefficient information (210) concerning the prediction error signal of the prediction image signal (212) with respect to an input moving picture signal (100), mode information (213) indicating a coding mode, motion vector information (214), and index information (215) indicating the selected combination of the reference image number and the prediction parameter.
Description
This divisional application is based on Chinese patent application No. 03800757.6, filed on April 18, 2003, entitled "Moving picture coding/decoding method and device". More specifically, this divisional application is a further division of divisional application No. 200610089952.0, filed on May 30, 2006, with the same title.
Technical field
The present invention relates to a video encoding/decoding method and apparatus for encoding/decoding fading videos and dissolving (fade-out) videos, and more particularly to encoding/decoding such videos with high efficiency.
Background art
Motion-compensated interframe predictive coding is used as one of the coding modes in video encoding standard schemes such as ITU-T H.261, H.263, ISO/IEC MPEG-2, and MPEG-4. As the prediction model in motion-compensated interframe coding, a model that exhibits the highest prediction efficiency when the brightness does not change along the time axis is used. In the case of a fading video in which the image brightness changes, for example, when a normal image fades in from a black image, no method has been known so far that makes a correct prediction against the change in image brightness. To maintain image quality in a fading video, a large number of bits are therefore required.
To solve this problem, in Japanese Patent No. 3166716, "Fade countermeasure video encoder and encoding method", for example, a fading video portion is detected so as to change the allocation of bits. More specifically, in the case of a fade-out video, a large number of bits are allocated to the fade-out start portion where the brightness changes. The final portion of a fade-out usually becomes a monochrome image and can therefore be encoded easily, so the number of bits allocated to that portion is reduced. This makes it possible to improve the overall image quality without excessively increasing the total number of bits.
Japanese Patent No. 2938412, "Video luminance change compensation method, video encoding apparatus, video decoding apparatus, recording medium on which a video encoding or decoding program is recorded, and recording medium on which encoded video data are recorded", proposes an encoding scheme that copes with fading video by compensating the reference image in accordance with two parameters: a luminance change amount and a contrast change amount.
Thomas Wiegand and Bernd Girod, "Multi-frame motion-compensated prediction for video transmission", Kluwer Academic Publishers, 2001, propose an encoding scheme based on a plurality of frame buffers. In this scheme, an attempt is made to improve prediction efficiency by selectively generating a prediction image from a plurality of reference frames held in the frame buffers.
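The multi-frame buffer scheme described above selects, for each block, a prediction source from several stored reference frames. As a minimal, hypothetical sketch (not code from the cited work; blocks are modeled as flat sample lists and the selection criterion is the sum of absolute differences):

```python
def best_reference(block, ref_frames):
    """Pick the stored reference frame whose co-located samples minimize
    the sum of absolute differences (SAD) to the current block."""
    best_n, best_sad = None, float("inf")
    for n, ref in enumerate(ref_frames):  # one entry per frame buffer
        sad = sum(abs(b - r) for b, r in zip(block, ref))
        if sad < best_sad:
            best_n, best_sad = n, sad
    return best_n, best_sad
```

With buffers holding [50, 50], [12, 9], and [0, 0] and a current block [10, 10], the second buffer (index 1, SAD 3) would be chosen.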
According to these conventional techniques, encoding a fading or dissolving video while maintaining high image quality requires a large number of bits. No improvement in coding efficiency can therefore be expected.
Summary of the invention
It is an object of the present invention to provide a video encoding/decoding method and apparatus capable of encoding a video whose luminance changes over time, such as a fading or dissolving video, and in particular of encoding such a video with high efficiency.
According to a first aspect of the present invention, there is provided a video encoding method of subjecting an input video signal to motion-compensated predictive coding by using a reference image signal representing at least one reference image and a motion vector between the input video signal and the reference image signal, comprising: selecting, for each block of the input video signal, one combination from a plurality of combinations, each of which includes at least one reference image number and a prediction parameter determined in advance for a reference image; generating a prediction image signal in accordance with the reference image number and prediction parameter of the selected combination; generating a prediction error signal representing an error between the input video signal and the prediction image signal; and encoding the prediction error signal, information of the motion vector, and index information indicating the selected combination.
According to a second aspect of the present invention, there is provided a video decoding method comprising: decoding encoded data that include a prediction error signal representing an error of a prediction image signal with respect to a video signal, motion vector information, and index information indicating a combination of at least one reference image number and a prediction parameter; generating a prediction image signal in accordance with the reference image number and prediction parameter of the combination indicated by the decoded index information; and generating a reproduced video signal by using the prediction error signal and the prediction image signal.
As described above, according to the present invention, a plurality of different prediction schemes are prepared, using combinations of a reference image number and prediction parameters, or a plurality of prediction parameters corresponding to a designated reference image number. This makes it possible to generate a correct prediction image signal, based on a prediction scheme with higher prediction efficiency, for a video signal such as a fading or dissolving video for which an ordinary prediction scheme cannot generate a correct prediction image signal.
Furthermore, the video signal may be an image signal obtained for each frame of a progressive signal, an image signal obtained for each frame formed by merging the two fields of an interlaced signal, or an image signal obtained for each field of an interlaced signal. When the video signal is a frame-based image signal, the reference image number indicates a frame-based reference image signal. When the video signal is a field-based image signal, the reference image number indicates a field-based reference image signal.
This makes it possible to generate a correct prediction image signal, based on a prediction scheme with higher prediction efficiency, even for a video signal containing both frame structures and field structures, for which an ordinary prediction scheme for, e.g., fading or dissolving videos cannot generate a correct prediction image signal.
In addition, the reference image number and the prediction parameters themselves are not sent from the encoding side to the decoding side. Instead, index information indicating a combination of a reference image number and prediction parameters is sent, or the reference image number is sent separately together with index information indicating a combination of prediction parameters. In the latter case, coding efficiency can be improved by sending the index information indicating the combination of prediction parameters.
Description of drawings
Fig. 1 is a block diagram showing the arrangement of a video encoding apparatus according to the first embodiment of the present invention;
Fig. 2 is a block diagram showing the detailed arrangement of the frame memory/prediction image generator in Fig. 1;
Fig. 3 is a view showing an example of a table of combinations of reference frame numbers and prediction parameters used in the first embodiment;
Fig. 4 is a flowchart showing an example of a procedure for selecting a prediction scheme (a combination of a reference frame number and prediction parameters) and determining an encoding mode for each macroblock in the first embodiment;
Fig. 5 is a block diagram showing the arrangement of a video decoding apparatus according to the first embodiment;
Fig. 6 is a block diagram showing the detailed arrangement of the frame memory/prediction image generator in Fig. 5;
Fig. 7 is a view showing an example of a table of combinations of prediction parameters in a case where the number of reference frames is one and the reference frame number is sent as mode information, according to the second embodiment of the present invention;
Fig. 8 is a view showing an example of a table of combinations of prediction parameters in a case where the number of reference frames is two and the reference frame number is sent as mode information, according to the second embodiment;
Fig. 9 is a view showing an example of a table of combinations of reference image numbers and prediction parameters in a case where the number of reference frames is one, according to the third embodiment of the present invention;
Fig. 10 is a view showing an example of a table for luminance signals only, according to the third embodiment;
Fig. 11 is a view showing an example of the syntax of each block when index information is to be encoded;
Fig. 12 is a view showing a concrete example of an encoded bit stream when a prediction image is to be generated by using one reference image;
Fig. 13 is a view showing a concrete example of an encoded bit stream when a prediction image is to be generated by using two reference images;
Fig. 14 is a view showing an example of a table of reference frame numbers, reference field numbers, and prediction parameters when the field to be encoded is a top field, according to the fourth embodiment of the present invention; and
Fig. 15 is a view showing an example of a table of reference frame numbers, reference field numbers, and prediction parameters when the field to be encoded is a bottom field, according to the fourth embodiment.
Embodiment
Embodiments of the present invention will be described below with reference to the accompanying drawings.
[first embodiment]
(Encoding side)
Fig. 1 shows the arrangement of a video encoding apparatus according to the first embodiment of the present invention. A video signal 100 is input to the video encoding apparatus, for example, on a frame-by-frame basis. The video signal 100 is input to a subtracter 101. The subtracter 101 calculates the difference between the video signal 100 and a prediction image signal 212 to generate a prediction error signal. A mode selection switch 102 selects either the prediction error signal or the video signal 100. An orthogonal transformer 103 subjects the selected signal to an orthogonal transform, for example, a discrete cosine transform (DCT), and generates orthogonal transform coefficient information, for example, DCT coefficient information. The orthogonal transform coefficient information is quantized by a quantizer 104 and branched into two paths. One branch of the quantized orthogonal transform coefficient information 210 is guided to a variable-length encoder 111.
The other branch of the quantized orthogonal transform coefficient information 210 is successively subjected, by a dequantizer (inverse quantizer) 105 and an inverse orthogonal transformer 106, to processing inverse to that of the quantizer 104 and the orthogonal transformer 103, and is thereby reconstructed into a prediction error signal. An adder 107 then adds the reconstructed prediction error signal to the prediction image signal 212 input through a switch 109, to generate a local decoded video signal 211. The local decoded video signal 211 is input to a frame memory/prediction image generator 108.
The frame memory/prediction image generator 108 selects one of a plurality of prepared combinations of a reference frame number and prediction parameters. A linear combination of the video signal (local decoded video signal 211) of the reference frame indicated by the reference frame number of the selected combination is computed in accordance with the prediction parameters of the selected combination, and an offset based on the prediction parameters is added to the resulting signal. With this operation, a reference image signal is generated, in this case on a frame basis. The frame memory/prediction image generator 108 then motion-compensates the reference image signal by using a motion vector, to generate the prediction image signal 212.
In this process, the frame memory/prediction image generator 108 generates motion vector information 214 and index information 215 indicating the selected combination of the reference frame number and the prediction parameters, and sends information necessary for selecting an encoding mode to a mode selector 110. The motion vector information 214 and the index information 215 are input to the variable-length encoder 111. The frame memory/prediction image generator 108 will be described in detail later.
In the intraframe encoding mode, the switches 102 and 112 are switched to their A terminals by switch control signals M and S, and the input video signal 100 is input to the orthogonal transformer 103. In the interframe encoding mode, the switches 102 and 112 are switched to their B terminals. Accordingly, the prediction error signal from the subtracter 101 is input to the orthogonal transformer 103, and the prediction image signal 212 from the frame memory/prediction image generator 108 is input to the adder 107. A mode signal 213 is output from the mode selector 110 and input to the variable-length encoder 111.
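The closed encoding loop described above (residual, quantization, local reconstruction through the dequantizer 105 and the adder 107) can be sketched as follows. This is a simplified, hypothetical illustration: samples are 1-D lists, a scalar quantizer stands in for the quantizer 104, and the orthogonal transform and entropy coding are omitted:

```python
def encode_block(block, pred, qp):
    """One pass of the closed loop: residual -> scalar quantizer (stand-in
    for quantizer 104) -> local reconstruction (dequantizer 105 + adder 107).
    The orthogonal transform and entropy coding are omitted."""
    resid = [b - p for b, p in zip(block, pred)]
    qcoef = [round(r / qp) for r in resid]              # quantize
    recon = [p + q * qp for p, q in zip(pred, qcoef)]   # local decoded signal
    return qcoef, recon
```

Because the local reconstruction uses only the quantized coefficients, it matches what the decoder will produce, so prediction from the local decoded video signal 211 cannot drift away from the decoder's state.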
(Frame memory/prediction image generator 108)
Fig. 2 shows the detailed arrangement of the frame memory/prediction image generator 108 in Fig. 1. Referring to Fig. 2, the local decoded video signal 211 input from the adder 107 in Fig. 1 is stored in a frame memory set 202 under the control of a memory controller 201. The frame memory set 202 has a plurality of (N) frame memories FM1 to FMN for temporarily holding the local decoded video signal 211 as reference frames.
A prediction parameter controller 203 prepares in advance, as a table, a plurality of combinations of a reference frame number and prediction parameters. On the basis of the video signal 100, the prediction parameter controller 203 selects a combination of the reference frame number of a reference frame and the prediction parameters used to generate the prediction image signal 212, and outputs index information 215 indicating the selected combination.
A multi-frame motion evaluator 204 generates a reference image signal in accordance with the combination of the reference frame number and index information selected by the prediction parameter controller 203. The multi-frame motion evaluator 204 estimates the motion amount and the prediction error from this reference image signal and the input video signal 100, and outputs the motion vector information 214 that minimizes the prediction error. A multi-frame motion compensator 205 performs motion compensation for each block in accordance with the motion vector, using the reference image signal selected by the multi-frame motion evaluator 204, to generate the prediction image signal 212.
(Table of combinations of reference frame numbers and prediction parameters)
Fig. 3 shows an example of the table of combinations of reference frame numbers and prediction parameters prepared in the prediction parameter controller 203. "Index" corresponds to a prediction image selectable for each block. In this case, there are eight types of prediction images. A reference frame number n is the number of a local decoded video frame used as a reference frame, and in this case indicates the local decoded video frame n frames in the past.
When the prediction image signal 212 is generated by using the image signals of a plurality of reference frames stored in the frame memory set 202, a plurality of reference frame numbers are designated, and (the number of reference frames + 1) coefficients are designated as prediction parameters for each of the luminance signal (Y) and the color difference signals (Cb and Cr). In this case, assuming that the number of reference frames is n, n + 1 prediction parameters Di (i = 1, ..., n + 1) are prepared for the luminance signal Y, n + 1 prediction parameters Ei (i = 1, ..., n + 1) for the color difference signal Cb, and n + 1 prediction parameters Fi (i = 1, ..., n + 1) for the color difference signal Cr, as indicated by equations (1) to (3):

Y = D1·Y(1) + D2·Y(2) + ... + Dn·Y(n) + D(n+1)   (1)
Cb = E1·Cb(1) + E2·Cb(2) + ... + En·Cb(n) + E(n+1)   (2)
Cr = F1·Cr(1) + F2·Cr(2) + ... + Fn·Cr(n) + F(n+1)   (3)

where Y(i), Cb(i), and Cr(i) are the luminance and color difference signals of the i-th reference frame, and the last parameter in each equation is an offset.
This operation will be described in more detail with reference to Fig. 3. Referring to Fig. 3, the last number of each set of prediction parameters represents an offset, and the preceding number(s) represent weighting factors (prediction coefficients). For index 0, the number of reference frames is one, the reference frame number is 1, and the prediction parameters are 1 and 0 for each of the luminance signal Y and the color difference signals Cr and Cb. Prediction parameters of 1 and 0 mean that the local decoded video signal corresponding to reference frame number 1 is multiplied by 1 and added with an offset of 0. In other words, the local decoded video signal corresponding to reference frame number 1 becomes the reference image signal without any change.
For index 1, the local decoded video signals corresponding to reference frame numbers 1 and 2 are used as two reference frames. In accordance with the prediction parameters 2, -1, and 0 for the luminance signal Y, the local decoded video signal corresponding to reference frame number 1 is doubled, and the local decoded video signal corresponding to reference frame number 2 is subtracted from the resulting signal. An offset of 0 is then added. That is, extrapolation prediction is performed from the local decoded video signals of two frames to generate the reference image signal. For the color difference signals Cr and Cb, since the prediction parameters are 1, 0, and 0, the local decoded video signal corresponding to reference frame number 1 is used as the reference image signal without any change. The prediction scheme corresponding to index 1 is especially effective for a dissolving (fade-out) video.
For index 2, in accordance with the prediction parameters 5/4 and 16, the local decoded video signal corresponding to reference frame number 1 is multiplied by 5/4 and added with an offset of 16. For the color difference signals Cr and Cb, since the prediction parameter is 1, the color difference signals become the reference image signals without any change. This prediction scheme is especially effective for a video fading in from a black frame.
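Under these definitions, generating a reference image signal from an index amounts to evaluating equation (1) with the table entry's weights and offset. The sketch below mirrors indices 0 to 2 for the luminance signal only; the table contents are illustrative stand-ins for Fig. 3, not normative values:

```python
# Hypothetical table mirroring indices 0-2 of Fig. 3, luminance only:
# each entry lists reference frame numbers, weights, and an offset.
PARAM_TABLE = {
    0: {"refs": [1],    "weights": [1.0],       "offset": 0.0},
    1: {"refs": [1, 2], "weights": [2.0, -1.0], "offset": 0.0},   # extrapolation
    2: {"refs": [1],    "weights": [5 / 4],     "offset": 16.0},  # fade-in gain
}

def reference_signal(index, frames):
    """Evaluate equation (1): weighted sum of reference frames plus offset.
    `frames` maps a reference frame number to its luminance samples."""
    entry = PARAM_TABLE[index]
    length = len(frames[entry["refs"][0]])
    return [sum(w * frames[r][k] for w, r in zip(entry["weights"], entry["refs"]))
            + entry["offset"] for k in range(length)]
```

For a fade-out whose brightness fell from 100 (two frames back) to 80 (one frame back), index 1 extrapolates to 2*80 - 100 = 60, continuing the fade instead of repeating a too-bright frame.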
In this manner, the reference image signal can be selected from a plurality of prediction schemes with different combinations of the number of reference frames to be used and prediction parameters. This allows the present embodiment to cope adequately with fading and dissolving videos, which have conventionally suffered image quality degradation for lack of a correct prediction scheme.
(Procedure for selecting a prediction scheme and determining an encoding mode)
A concrete example of the procedure for selecting a prediction scheme (a combination of a reference frame number and prediction parameters) and determining an encoding mode for each macroblock in this embodiment will be described next with reference to Fig. 4.
First, a maximum assumable value is set in a variable min_D (step S101). LOOP1 (step S102) represents the iteration over the prediction-scheme selection in interframe encoding, and the variable i represents the value of "index" in Fig. 3. In order to obtain the optimum motion vector for each prediction scheme, an evaluation value D for each index (each combination of a reference frame number and prediction parameters) is calculated from the number of bits associated with the motion vector information 214 (the number of bits of the variable-length code output from the variable-length encoder 111 for the motion vector information 214) and the sum of absolute values of the prediction error, and the motion vector that minimizes the evaluation value D is selected (step S103). The evaluation value D is compared with min_D (step S104). If D is smaller than min_D, D is set in min_D, and the index i is assigned to min_i (step S105).

An evaluation value D for intraframe encoding is then calculated (step S106) and compared with min_D (step S107). If this comparison indicates that min_D is smaller than D, the mode MODE is determined to be interframe encoding, and min_i is assigned to the index information INDEX (step S108). If D is smaller, the mode MODE is determined to be intraframe encoding (step S109). In this case, the evaluation values D are obtained with the same quantization step size.
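The decision procedure of steps S101 to S109 reduces to a minimum search over the inter evaluation values followed by a comparison with the intra evaluation value. A minimal sketch, with the evaluation values (motion-vector bits plus prediction-error absolute sum) assumed precomputed per index:

```python
def select_mode(inter_costs, intra_cost):
    """Steps S101-S109 in miniature: inter_costs maps each index i to its
    evaluation value D (motion-vector bits plus prediction-error absolute
    sum, assumed precomputed); intra_cost is D for intraframe encoding."""
    min_d, min_i = float("inf"), None        # step S101
    for i, d in inter_costs.items():         # LOOP1, steps S102-S105
        if d < min_d:
            min_d, min_i = d, i
    if min_d < intra_cost:                   # steps S107-S108
        return "INTER", min_i
    return "INTRA", None                     # step S109
```

With hypothetical costs {0: 120, 1: 80, 2: 95} and an intra cost of 100, index 1 wins and interframe encoding is chosen.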
(Decoding side)
A video decoding apparatus corresponding to the video coding apparatus shown in Fig. 1 will next be described. Fig. 5 shows the arrangement of the video decoding apparatus according to this embodiment. Coded data 300 sent out from the video coding apparatus shown in Fig. 1 through a transmission system or storage system is temporarily stored in an input buffer 301, and demultiplexed by a demultiplexer 302 for each frame on the basis of the syntax. The resulting data is input to a variable-length decoder 303. The variable-length decoder 303 decodes the variable-length code of each syntax element of the coded data 300 to reproduce quantized orthogonal transform coefficients, mode information 413, motion vector information 414, and index information 415.
Of the reproduced information, the quantized orthogonal transform coefficients are dequantized by a dequantizer 304 and inversely orthogonally transformed by an inverse orthogonal transformer 305. If the mode information 413 indicates the intraframe coding mode, a decoded video signal is output from the inverse orthogonal transformer 305 and is then output as a decoded video signal 310 through an adder 306.
If the mode information 413 indicates the interframe coding mode, a prediction error signal is output from the inverse orthogonal transformer 305, and a mode selection switch 309 is turned on. A predicted image signal 412 output from a frame memory/predicted image generator 308 is added to the prediction error signal by the adder 306. As a result, the decoded video signal 310 is output. The decoded video signal 310 is stored in the frame memory/predicted image generator 308 as a reference image signal.
Like the frame memory/predicted image generator 108 on the coding side in Fig. 1, the frame memory/predicted image generator 308 holds a plurality of prepared combinations of reference frame numbers and prediction parameters as a table, and selects from the table the combination indicated by the index information 415. A linear sum of the video signal (decoded video signal 210) of the reference frame indicated by the reference frame number of the selected combination is calculated in accordance with the prediction parameters of the selected combination, and an offset based on the prediction parameters is added to the result. By this operation, a reference image signal is produced. The produced reference image signal is then motion-compensated by using the motion vector indicated by the motion vector information 414, thereby producing the predicted image signal 412.
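As an illustration only (the table contents and signal shapes below are invented for the example, not taken from the patent's figures), the table lookup and linear prediction performed by the frame memory/predicted image generator might look like:

```python
# Hypothetical table of (reference_frame_number, weight, offset)
# combinations; the index information selects one row, the reference
# frame number of that row selects a stored decoded frame, and the
# weight/offset are applied to the reference pixel.
TABLE = [
    (0, 2, 0),    # index 0: reference frame 0, weight 2, offset 0
    (0, 1, 16),   # index 1: same reference frame, different parameters
    (1, 1, 0),    # index 2: reference frame 1, no scaling
]

def predict(index_info, frames, x, y):
    """frames: list of decoded frames (2-D pixel arrays) held in frame memory."""
    ref_no, weight, offset = TABLE[index_info]
    return frames[ref_no][y][x] * weight + offset
```

Note how indices 0 and 1 share one reference frame but yield different predictions — the same multiplicity of prediction schemes per reference frame described in the text.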
(Frame memory/predicted image generator 308)
Fig. 6 shows the detailed arrangement of the frame memory/predicted image generator 308 in Fig. 5. Referring to Fig. 6, the decoded video signal 310 output from the adder 306 in Fig. 5 is stored in a frame memory set 402 under the control of a memory controller 401. The frame memory set 402 has a plurality of (N) frame memories FM1 to FMN for temporarily holding the decoded video signal 310 as reference frames.
[Second Embodiment]
A second embodiment of the present invention will next be described with reference to Figs. 7 and 8. Since the overall arrangements of the video coding apparatus and video decoding apparatus in this embodiment are almost the same as those in the first embodiment, only the differences from the first embodiment will be described.
This embodiment describes an example of a method of expressing prediction parameters in an arrangement in which a plurality of reference frame numbers are designated according to mode information on a macroblock basis. Since the reference frame number is discriminated by the mode information of each macroblock, this embodiment uses tables of prediction parameters alone, as shown in Figs. 7 and 8, instead of the tables of combinations of reference frame numbers and prediction parameters used in the first embodiment. That is, the index information does not indicate a reference frame number; only a combination of prediction parameters is designated.
The table in Fig. 7 shows an example of combinations of prediction parameters when the number of reference frames is one. As the prediction parameters, (the number of reference frames + 1) parameters, i.e., two parameters (one weighting factor and one offset), are designated for each of the luminance signal (Y) and the color difference signals (Cb and Cr).
The table in Fig. 8 shows an example of combinations of prediction parameters when the number of reference frames is two. In this case, as the prediction parameters, (the number of reference frames + 1) parameters, i.e., three parameters (two weighting factors and one offset), are designated for each of the luminance signal (Y) and the color difference signals (Cb and Cr). Such a table is prepared on each of the coding side and the decoding side, as in the first embodiment.
[Third Embodiment]
A third embodiment of the present invention will be described with reference to Figs. 9 and 10. Since the overall arrangements of the video coding apparatus and video decoding apparatus in this embodiment are almost the same as those in the first embodiment, only the differences from the first and second embodiments will be described below.
In the first and second embodiments, a video is managed on a frame basis. In this embodiment, however, a video is managed on a picture basis. If both progressive signals and interlaced signals exist as input image signals, pictures are not necessarily coded on a frame basis. In consideration of this, a picture is assumed to be (a) a picture of one frame of a progressive signal, (b) a picture of one frame produced by merging two fields of an interlaced signal, or (c) a picture of one field of an interlaced signal.
If the picture to be coded is a picture with a frame structure of (a) or (b), the reference picture used in motion-compensated prediction is also managed as a frame, regardless of whether the coded picture serving as the reference picture has a frame structure or a field structure, and a reference picture number is assigned to it. Likewise, if the picture to be coded is a picture with a field structure of (c), the reference picture used in motion-compensated prediction is also managed as a field, regardless of whether the coded picture serving as the reference picture has a frame structure or a field structure, and a reference picture number is assigned to it.
Equations (4), (5), and (6) show examples of the predictive equations for the combinations of reference picture numbers and prediction parameters prepared in the prediction parameter controller 203. These examples are predictive equations for producing a predicted image signal by motion-compensated prediction using one reference image signal.

Here, Y is the predicted image signal of the luminance signal; Cb and Cr are the predicted image signals of the two color difference signals; R_Y(i), R_Cb(i), and R_Cr(i) are the pixel values of the luminance signal and the two color difference signals of the reference image signal with index i; D1(i) and D2(i) are the predictive coefficient and offset of the luminance signal with index i; E1(i) and E2(i) are the predictive coefficient and offset of the color difference signal Cb with index i; and F1(i) and F2(i) are the predictive coefficient and offset of the color difference signal Cr with index i. The index i takes a value from 0 to (the maximum number of reference pictures - 1) and is coded for each block to be coded (for example, for each macroblock). The resulting data is then sent to the video decoding apparatus.
The prediction parameters D1(i), D2(i), E1(i), E2(i), F1(i), and F2(i) are either values determined in advance between the video coding apparatus and the video decoding apparatus, or are coded for a unit of coding such as a frame, field, or slice together with the coded data to be sent from the video coding apparatus to the video decoding apparatus. By this operation, these parameters are shared by the two apparatuses.
Equations (4), (5), and (6) are predictive equations in which powers of 2, i.e., 2, 4, 8, 16, ..., are selected as the denominators of the predictive coefficients by which the reference image signal is multiplied. These predictive equations eliminate the need for division and can be calculated by arithmetic shifts, making it possible to avoid the large increase in computational cost caused by division.
In equations (4), (5), and (6), ">>" in a >> b is an operator that arithmetically shifts the integer a to the right by b bits. The function "clip" is a clipping function that sets the value in "( )" to 0 when it is smaller than 0, and to 255 when it is larger than 255.
In this case, let L_Y be the shift amount of the luminance signal and L_C be the shift amount of the color difference signals. As these shift amounts L_Y and L_C, values determined in advance between the video coding apparatus and the video decoding apparatus are used. Alternatively, the video coding apparatus codes the shift amounts L_Y and L_C, together with the table and the coded data, for a predetermined unit of coding, for example a frame, field, or slice, and sends the result to the video decoding apparatus. This allows the two apparatuses to share the shift amounts L_Y and L_C.
In this embodiment, tables of combinations of reference picture numbers and prediction parameters, like those shown in Figs. 9 and 10, are prepared in the prediction parameter controller 203 in Fig. 2. Referring to Figs. 9 and 10, the index i corresponds to a predicted image that can be selected for each block. In this case, four kinds of predicted images exist, corresponding to the index values i = 0 to 3. The "reference picture number" is the number of the local decoded video signal used as a reference picture.

The "flag" indicates whether a predictive equation using prediction parameters is applied to the reference picture number indicated by the index i. If the flag is "0", motion-compensated prediction is performed by using the local decoded video signal corresponding to the reference picture number indicated by the index i, without using any prediction parameters. If the flag is "1", a predicted image is produced according to equations (4), (5), and (6) by using the local decoded video signal and the prediction parameters corresponding to the reference picture number indicated by the index i, thereby performing motion-compensated prediction. This flag information, too, is either a value determined in advance between the video coding apparatus and the video decoding apparatus, or is coded, together with the table and the coded data, for a predetermined unit of coding, for example a frame, field, or slice, in the video coding apparatus, and the resulting data is sent to the video decoding apparatus. This allows the two apparatuses to share the flag information.
In these examples, when the index i = 0, a predicted image is produced by using the prediction parameters for reference picture number 105, whereas when i = 1, motion-compensated prediction is performed without using any prediction parameters. As described above, a plurality of prediction schemes may exist for the same reference picture number.
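The per-index flag behaviour can be sketched for a single pixel as follows (the parameter names and values are illustrative, not taken from the tables in Figs. 9 and 10):

```python
# Flag 0: plain motion compensation from the reference picture.
# Flag 1: the prediction parameters (weight, offset, shift) are applied
# according to the predictive equation, with clipping to 0..255.

def predicted_pixel(flag, ref_pixel, weight=1, offset=0, shift=0):
    if flag == 0:
        return ref_pixel                      # no prediction parameters used
    value = ((ref_pixel * weight) >> shift) + offset
    return max(0, min(255, value))            # weighted prediction + clip
```

Two table rows may thus name the same reference picture yet produce different predictions, which is exactly why the same reference picture number can appear under several index values.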
The table shown in Fig. 9 assigns the prediction parameters D1(i), D2(i), E1(i), E2(i), F1(i), and F2(i) to the luminance and the two color difference signals, consistently with equations (4), (5), and (6). Fig. 10 shows an example of a table in which prediction parameters are assigned only to the luminance signal. In general, the number of bits of the color difference signals is not very large compared with that of the luminance signal. For this reason, in order to reduce the amount of computation required to produce a predicted image and the number of bits transmitted for the table, a table is prepared in which the prediction parameters for the color difference signals are omitted and prediction parameters are assigned only to the luminance signal, as shown in Fig. 10. In this case, only equation (4) is used as the predictive equation.
Equations (7) to (12) are predictive equations for the case where a plurality of (two, in this case) reference images are used.

Y = clip((P_Y(i) + P_Y(j) + 1) >> 1)    (10)
Cb = clip((P_Cb(i) + P_Cb(j) + 1) >> 1)    (11)
Cr = clip((P_Cr(i) + P_Cr(j) + 1) >> 1)    (12)
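Equations (10) to (12) can be transcribed directly: the prediction is the rounded average of the two motion-compensated pixel values P(i) and P(j), computed with an add-one-and-shift in place of division by two.

```python
# Direct transcription of equations (10)-(12): with two reference images
# the predicted value is the rounded average of the two motion-compensated
# pixel values, clipped to the 8-bit range.

def clip255(v):
    return max(0, min(255, v))

def bipred(p_i, p_j):
    return clip255((p_i + p_j + 1) >> 1)
```

The "+ 1" makes the shift round to nearest rather than truncate, so for example averaging 100 and 101 gives 101 rather than 100.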
The pieces of information of the prediction parameters D1(i), D2(i), E1(i), E2(i), F1(i), F2(i), L_Y, and L_C and of the flags are either values determined in advance between the video coding apparatus and the video decoding apparatus, or are coded for a unit of coding, for example a frame, field, or slice, and sent from the video coding apparatus to the video decoding apparatus together with the coded data. This allows the two apparatuses to share these pieces of information.
If the picture to be decoded is a picture with a frame structure, the reference picture used for motion-compensated prediction is also managed as a frame, regardless of whether the decoded picture serving as the reference picture has a frame structure or a field structure, and a reference picture number is assigned to it. Likewise, if the picture to be decoded is a picture with a field structure, the reference picture used for motion-compensated prediction is also managed as a field, regardless of whether the decoded picture serving as the reference picture has a frame structure or a field structure, and a reference picture number is assigned to it.
(Syntax of index information)
Fig. 11 shows an example of the syntax used when index information is coded for each block. First, mode information MODE exists for each block. Whether index information IDi indicating the value of the index i and index information IDj indicating the value of the index j are coded is determined according to the mode information MODE. After the coded index information, the motion vector information MVi for motion-compensated prediction with the index i and the motion vector information MVj for motion-compensated prediction with the index j are added as the motion vector information of each block.
(Data structure of the coded bit stream)
Fig. 12 shows a concrete example of the coded bit stream of each block produced when a predicted image is produced by using one reference picture. The index information IDi is placed after the mode information MODE, followed by the motion vector information MVi. The motion vector information MVi is generally two-dimensional vector information. Depending on the motion compensation method in the block indicated by the mode information, a plurality of two-dimensional vectors may further be sent.
Fig. 13 shows a concrete example of the coded bit stream of each block produced when a predicted image is produced by using two reference pictures. The index information IDi and the index information IDj are placed after the mode information MODE, followed by the motion vector information MVi and the motion vector information MVj. The motion vector information MVi and MVj are generally two-dimensional vector information. Depending on the motion compensation method in the block indicated by the mode information, a plurality of two-dimensional vectors may further be sent.
Note that the above arrangements of the syntax and the bit stream can be applied equally to all the embodiments.
[Fourth Embodiment]
A fourth embodiment of the present invention will next be described with reference to Figs. 14 and 15. Since the overall arrangements of the video coding apparatus and video decoding apparatus in this embodiment are almost the same as those in the first embodiment, only the differences from the first to third embodiments will be described. In the third embodiment, coding on a frame basis and coding on a field basis are switched for each picture. In the fourth embodiment, coding on a frame basis and coding on a field basis are switched for each macroblock.
When coding on a frame basis and coding on a field basis are switched for each macroblock, the same reference picture number indicates different pictures, even within the same picture, depending on whether the macroblock is coded on a frame basis or on a field basis. For this reason, with the tables shown in Figs. 9 and 10 used in the third embodiment, a correct predicted image signal may not be produced.
To solve this problem, in this embodiment, tables of combinations of reference picture numbers and prediction parameters, like those shown in Figs. 14 and 15, are prepared in the prediction parameter controller 203 in Fig. 2. It is assumed that, when a macroblock is to be coded on a field basis, the same prediction parameters as those corresponding to the reference frame number (reference frame index number) used when the macroblock is coded on a frame basis are used.
Fig. 14 shows the table used when a macroblock is to be coded on a field basis and the picture to be coded is a first field. The upper and lower rows of each index column correspond to the first field and the second field, respectively. As shown in Fig. 14, the frame index j is related to the field index k such that k = 2j in the first field and k = 2j + 1 in the second field. The reference frame number m is related to the reference field number n such that n = 2m in the first field and n = 2m + 1 in the second field.
Fig. 15 shows the table used when a macroblock is to be coded on a field basis and the picture to be coded is a second field. As in the table shown in Fig. 14, the upper and lower rows of each index column correspond to the first field and the second field, respectively. In the table in Fig. 15, the frame index j is related to the field index k such that k = 2j + 1 in the first field and k = 2j in the second field. This makes it possible to assign a small value of the field index k to the second field having the same parity. The relationship between the reference frame number m and the reference field number n is the same as in the table in Fig. 14.
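The frame-to-field index mappings read off Figs. 14 and 15 can be summarised in one small function — a sketch only, with field parity encoded as a boolean for illustration:

```python
# Mapping from frame index j to field index k:
#   current picture is a first field:  k = 2j for a first-field reference,
#                                      k = 2j + 1 for a second-field reference;
#   current picture is a second field: k = 2j + 1 for a first-field reference,
#                                      k = 2j for a second-field reference,
# so the same-parity field always receives the smaller index.

def field_index(j, current_is_first, ref_is_first):
    if current_is_first:
        return 2 * j if ref_is_first else 2 * j + 1
    return 2 * j + 1 if ref_is_first else 2 * j
```

The symmetry is the point of the two tables: whichever parity the current field has, its same-parity references get the even (smaller) field indices.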
When a macroblock is to be coded on a field basis, a frame index and a field index are coded as index information by using the tables shown in Figs. 14 and 15. When a macroblock is to be coded on a frame basis, only the frame index common to the tables in Figs. 14 and 15 is coded as index information.
In this embodiment, prediction parameters are assigned to frames and fields by using one table. However, separate tables for frames and for fields may be prepared for each picture or slice.
Each embodiment described above is an example of a video coding/decoding scheme using block-based orthogonal transformation. However, the technique of the present invention described in the above embodiments can also be used with another transform technique such as wavelet transformation.
The video coding and decoding processing according to the present invention may be implemented as hardware (an apparatus) or may be implemented by software using a computer. Some of the processing may be implemented by hardware and the rest by software. According to the present invention, there can be provided a program for causing a computer to execute the above video coding or video decoding, or a storage medium storing the program.
Industrial Applicability
As described above, the video coding/decoding method and apparatus according to the present invention are especially suitable for the field of image processing in which videos whose luminance changes over time, such as fade-in and fade-out videos, are coded and decoded.
Claims (34)
1. A video decoding method for decoding coded data obtained by subjecting a video having a luminance and two color differences to motion-compensated predictive coding, the video decoding method comprising:
receiving, as input, coded data obtained by coding, for a block to be decoded: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, (2) information of a motion vector, and (3) index information indicating a combination comprising: (A) a reference image, (B) a weighting factor for each of the luminance and the two color differences, and (C) an offset for each of the luminance and the two color differences;
deriving, from the index information, the reference image, the weighting factor, and the offset for the block to be decoded;
producing a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation;
producing a motion-compensated prediction image for the block to be decoded by multiplying the reference image by the weighting factor and adding the offset thereto, based on the motion vector of the block to be decoded; and
producing a decoded image signal for the block to be decoded by calculating the sum of the prediction error signal and the motion-compensated prediction image.
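As a non-normative illustration, the decoding steps recited in claim 1 reduce, for a single pixel and a hypothetical one-entry parameter table, to a derivation, a weighted prediction, and a reconstruction (the dequantization and inverse transform are represented here by an already-recovered error value):

```python
# Illustrative sketch of the claim-1 decoding steps for one pixel:
#   1. derive (weight, offset) from the index information via a table,
#   2. form the motion-compensated prediction as ref * weight + offset,
#   3. add the prediction error to reconstruct the decoded value.

def decode_pixel(index_info, table, ref_pixel, prediction_error):
    weight, offset = table[index_info]            # derivation step
    predicted = ref_pixel * weight + offset       # motion-compensated prediction
    return predicted + prediction_error           # reconstruction
```

This omits the motion vector lookup, block structure, and clipping for brevity; it is meant only to show the order of the claimed steps.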
2. A video decoding apparatus for decoding coded data obtained by subjecting a video having a luminance and two color differences to motion-compensated coding, the video decoding apparatus comprising:
a receiver configured to receive, as input, coded data obtained by coding, for a block to be decoded: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, (2) information of a motion vector, and (3) index information indicating a combination comprising: (A) a reference image, (B) a weighting factor for each of the luminance and the two color differences, and (C) an offset for each of the luminance and the two color differences;
a derivation module configured to derive, from the index information, the reference image, the weighting factor, and the offset for the block to be decoded;
a first generator configured to produce a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation;
a second generator configured to produce a motion-compensated prediction image for the block to be decoded by multiplying the reference image by the weighting factor and adding the offset thereto, based on the motion vector of the block to be decoded; and
a third generator configured to produce a decoded image signal for the block to be decoded by calculating the sum of the prediction error signal and the motion-compensated prediction image.
3. A video decoding method for decoding coded data obtained by subjecting a video having a luminance and two color differences to motion-compensated predictive coding, the video decoding method comprising:
receiving, as input, coded data obtained by coding, for a block to be decoded: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, (2) information of a motion vector, and (3) index information indicating a combination comprising: (A) a weighting factor for each of the luminance and the two color differences and (B) an offset for each of the luminance and the two color differences;
deriving, from the index information, the weighting factor and the offset for the block to be decoded;
producing a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation;
producing a motion-compensated prediction image for the block to be decoded by multiplying a reference image by the weighting factor and adding the offset thereto, based on the motion vector of the block to be decoded; and
producing a decoded image signal for the block to be decoded by calculating the sum of the prediction error signal and the motion-compensated prediction image.
4. A video decoding apparatus for decoding coded data obtained by subjecting a video having a luminance and two color differences to motion-compensated coding, the video decoding apparatus comprising:
a receiver configured to receive, as input, coded data obtained by coding, for a block to be decoded: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, (2) information of a motion vector, and (3) index information indicating a combination comprising: (A) a weighting factor for each of the luminance and the two color differences and (B) an offset for each of the luminance and the two color differences;
a derivation module configured to derive, from the index information, the weighting factor and the offset for the block to be decoded;
a first generator configured to produce a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation;
a second generator configured to produce a motion-compensated prediction image for the block to be decoded by multiplying a reference image by the weighting factor and adding the offset thereto, based on the motion vector of the block to be decoded; and
a third generator configured to produce a decoded image signal for the block to be decoded by calculating the sum of the prediction error signal and the motion-compensated prediction image.
5. A video decoding method for decoding coded data obtained by subjecting a video having a luminance and two color differences to motion-compensated predictive coding, the video decoding method comprising:
receiving, as input, coded data obtained by coding, for one or more blocks to be decoded, a plurality of combinations each comprising: (A) a weighting factor for each of the luminance and the two color differences and (B) an offset for each of the luminance and the two color differences, and by coding, for a block to be decoded, (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, (2) information of a motion vector, and (3) index information indicating one combination among the plurality of combinations;
deriving, from the index information and the plurality of combinations, the combination comprising the weighting factor and the offset for the block to be decoded;
producing a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation;
producing a motion-compensated prediction image for the block to be decoded by multiplying a reference image by the weighting factor and adding the offset thereto, based on the motion vector of the block to be decoded; and
producing a decoded image signal for the block to be decoded by calculating the sum of the prediction error signal and the motion-compensated prediction image.
6. A video decoding apparatus for decoding coded data obtained by subjecting a video having a luminance and two color differences to motion-compensated predictive coding, the video decoding apparatus comprising:
a receiver configured to receive, as input, coded data obtained by coding, for one or more blocks to be decoded, a plurality of combinations each comprising (A) a weighting factor for each of the luminance and the two color differences and (B) an offset for each of the luminance and the two color differences, and by coding, for a block to be decoded, (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, (2) information of a motion vector, and (3) index information indicating one combination among the plurality of combinations;
a derivation module configured to derive, from the index information and the plurality of combinations, the combination comprising the weighting factor and the offset for the block to be decoded;
a first generator configured to produce a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation;
a second generator configured to produce a motion-compensated prediction image for the block to be decoded by multiplying a reference image by the weighting factor and adding the offset thereto, based on the motion vector of the block to be decoded; and
a third generator configured to produce a decoded image signal for the block to be decoded by calculating the sum of the prediction error signal and the motion-compensated prediction image.
7. A video decoding method for decoding coded data obtained by subjecting a video having a luminance and two color differences to motion-compensated predictive coding, so as to obtain the video, the video decoding method comprising:
receiving, as input, coded data obtained by coding, for a block to be decoded: (1) a given number of index information elements each indicating a combination of: (A) a reference image, (B) a weighting factor for each of the luminance and the two color differences, and (C) an offset for each of the luminance and the two color differences, (2) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, and (3) information of a motion vector;
deriving, from the given number of index information elements of the block to be decoded, a given number of reference images, a given number of weighting factors, and a given number of offsets;
producing a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transformation;
producing a motion-compensated prediction image for the block to be decoded by, based on the motion vector of the block to be decoded, multiplying each of the given number of reference images by the corresponding one of the given number of weighting factors and adding thereto the offset corresponding to each reference image; and
producing a decoded image signal for the block to be decoded by calculating the sum of the prediction error signal and the motion-compensated prediction image.
8. A video decoding apparatus for decoding coded data obtained by subjecting a video having a luminance and two color differences to motion-compensated predictive coding, to obtain the video, the video decoding apparatus comprising:
a receiver configured to receive, as an input, coded data obtained by encoding, for a block to be decoded: (1) a given number of index information elements each indicating a combination of the following: (A) a reference image, (B) a weighting factor for each of the luminance and the two color differences, and (C) an offset for each of the luminance and the two color differences, (2) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, and (3) information on a motion vector;
a derivation module configured to derive, for the block to be decoded, a given number of reference images, a given number of weighting factors, and a given number of offsets from the given number of index information elements;
a first generator configured to generate a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transform;
a second generator configured to generate, for the block to be decoded, a motion-compensated prediction image by multiplying each of the given number of reference images by the corresponding one of the given number of weighting factors and adding the corresponding one of the given number of offsets, based on a motion vector of the block to be decoded; and
a third generator configured to generate a decoded image signal for the block to be decoded by computing a sum of the prediction error signal and the motion-compensated prediction image.
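The weighted motion-compensated decoding recited in the claims above can be sketched in a few lines of Python. The list-based pixel representation, the averaging across multiple reference images, and the clipping to an 8-bit sample range are illustrative assumptions; the claims themselves only recite the per-reference multiply-by-weight-and-add-offset and the summing of the prediction error with the prediction image.

```python
def weighted_prediction(refs, weights, offsets):
    # refs: list of equal-length lists of pixel values, one list per
    # reference image.  Each reference is multiplied by its weighting
    # factor and its offset is added; the per-reference predictions are
    # then averaged (averaging is an assumption for the multi-reference
    # case, not something the claims specify).
    n = len(refs)
    preds = [[p * w + o for p in ref]
             for ref, w, o in zip(refs, weights, offsets)]
    return [sum(vals) / n for vals in zip(*preds)]

def decode_block(pred_error, refs, weights, offsets):
    # Decoded image signal = prediction error + motion-compensated
    # prediction image, clipped to 0..255 (the clip is an assumption).
    pred = weighted_prediction(refs, weights, offsets)
    return [min(255.0, max(0.0, e + p)) for e, p in zip(pred_error, pred)]
```

For example, with a single reference block `[100, 200]`, weighting factor 0.5 and offset 16, the prediction is `[66, 116]`, and adding a prediction error of 4 per pixel yields the decoded samples `[70, 120]`.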
9. A video decoding method for decoding coded data obtained by subjecting a video having a luminance and two color differences to motion-compensated predictive coding, the video decoding method comprising:
receiving, as an input, coded data obtained by encoding, for a block to be decoded: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, (2) information on a motion vector, and (3) index information indicating a combination of the following: (A) a reference image, (B) a weighting factor for each of the luminance and the two color differences, and (C) an offset for each of the luminance and the two color differences;
determining, for the block to be decoded, whether the block is a frame block or a field block;
deriving, for the block to be decoded, a reference image, a weighting factor, and an offset from the index information according to whether the block to be decoded is a frame block or a field block;
generating, for the block to be decoded, a motion-compensated prediction image by multiplying the reference image by the weighting factor and adding the offset, based on a motion vector of the block to be decoded;
generating a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transform; and
generating a decoded image signal for the block to be decoded by computing a sum of the prediction error signal and the motion-compensated prediction image,
wherein each value of the index information of a frame block indicates a different combination of the weighting factor and the offset, and
wherein two values of the index information of a field block that correspond to different reference images indicate the same combination of the weighting factor and the offset.
10. A video decoding apparatus for decoding coded data obtained by subjecting a video having a luminance and two color differences to motion-compensated predictive coding, the video decoding apparatus comprising:
a receiver configured to receive, as an input, coded data obtained by encoding, for a block to be decoded: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, (2) information on a motion vector, and (3) index information indicating a combination of the following: (A) a reference image, (B) a weighting factor for each of the luminance and the two color differences, and (C) an offset for each of the luminance and the two color differences;
a determination module configured to determine, for the block to be decoded, whether the block is a frame block or a field block;
a derivation module configured to derive, for the block to be decoded, a reference image, a weighting factor, and an offset from the index information according to whether the block to be decoded is a frame block or a field block;
a first generator configured to generate, for the block to be decoded, a motion-compensated prediction image by multiplying the reference image by the weighting factor and adding the offset, based on a motion vector of the block to be decoded;
a second generator configured to generate a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transform; and
a third generator configured to generate a decoded image signal for the block to be decoded by computing a sum of the prediction error signal and the motion-compensated prediction image,
wherein each value of the index information of a frame block indicates a different combination of the weighting factor and the offset, and
wherein two values of the index information of a field block that correspond to different reference images indicate the same combination of the weighting factor and the offset.
11. A prediction image generating method for generating a prediction image by decoding coded data obtained by subjecting a video image having a luminance and two color differences to predictive coding, the prediction image generating method comprising:
receiving, as an input, coded data obtained by encoding index information for a block to be decoded, the index information indicating a combination of the following: (A) a weighting factor for each of the luminance and the two color differences and (B) an offset for each of the luminance and the two color differences;
deriving, for the block to be decoded, (A) the weighting factor for each of the luminance and the two color differences and (B) the offset for each of the luminance and the two color differences from the index information; and
generating a prediction image for the block to be decoded by multiplying a reference image by the weighting factor and adding the offset.
12. A prediction image generating apparatus for generating a prediction image by decoding coded data obtained by subjecting a video image having a luminance and two color differences to predictive coding, the prediction image generating apparatus comprising:
a receiver configured to receive, as an input, coded data obtained by encoding index information for a block to be decoded, the index information indicating a combination of the following: (A) a weighting factor for each of the luminance and the two color differences and (B) an offset for each of the luminance and the two color differences;
a derivation module configured to derive, for the block to be decoded, (A) the weighting factor for each of the luminance and the two color differences and (B) the offset for each of the luminance and the two color differences from the index information; and
a generator configured to generate a prediction image for the block to be decoded by multiplying a reference image by the weighting factor and adding the offset.
13. A video decoding method for decoding coded data obtained by encoding a video image having a luminance and two color differences, the video decoding method comprising:
receiving, as an input, coded data obtained by encoding, for a block to be decoded: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, and (2) index information indicating a combination of the following: (A) a weighting factor for each of the luminance and the two color differences and (B) an offset for each of the luminance and the two color differences;
determining, for the block to be decoded, a unit of a frame block or a field block;
deriving a weighting factor and an offset from the index information according to whether the block to be decoded is a frame block or a field block;
generating a prediction image for the block to be decoded by multiplying a reference image by the weighting factor and adding the offset;
generating a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transform; and
generating a decoded image signal for the block to be decoded by computing a sum of the prediction error signal and the prediction image,
wherein the weighting factor and the offset derived from the index information of a field block are identical to the weighting factor and the offset derived from the index information of a frame block, the index information of the frame block having a value obtained by arithmetically shifting right the value of the index information of the field block.
14. A video decoding apparatus for decoding coded data obtained by encoding a video image having a luminance and two color differences, the video decoding apparatus comprising:
a receiver configured to receive, as an input, coded data obtained by encoding, for a block to be decoded: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, and (2) index information indicating a combination of the following: (A) a weighting factor for each of the luminance and the two color differences and (B) an offset for each of the luminance and the two color differences;
a determination module configured to determine, for the block to be decoded, whether the block is a frame block or a field block;
a derivation module configured to derive a weighting factor and an offset from the index information according to whether the block to be decoded is a frame block or a field block;
a first generator configured to generate a prediction image for the block to be decoded by multiplying a reference image by the weighting factor and adding the offset;
a second generator configured to generate a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transform; and
a third generator configured to generate a decoded image signal for the block to be decoded by computing a sum of the prediction error signal and the prediction image,
wherein the weighting factor and the offset derived from the index information of a field block are identical to the weighting factor and the offset derived from the index information of a frame block, the index information of the frame block having a value obtained by arithmetically shifting right the value of the index information of the field block.
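The frame/field index relationship recited in claims 13 and 14 — a frame block's index value equals the field block's index value arithmetically shifted right by one bit, with both resolving to the same weighting factor and offset — can be sketched as follows. The lookup table contents and the function names are hypothetical, supplied only for illustration.

```python
def field_to_frame_index(field_index):
    # Arithmetic right shift by one bit: the two field-block index
    # values 2k and 2k+1 both map to the frame-block index value k,
    # so they resolve to the same weighting factor and offset.
    return field_index >> 1

def derive_weight_offset(index, table, is_field_block):
    # table maps a frame-level index value to a (weighting factor,
    # offset) pair; the entries used here are hypothetical examples.
    if is_field_block:
        index = field_to_frame_index(index)
    return table[index]
```

With the example table `{0: (1, 0), 1: (2, 8)}`, the field index values 2 and 3 and the frame index value 1 all yield the pair `(2, 8)`.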
15. A video decoding method for decoding coded data obtained by encoding a video image having a luminance and two color differences, the video decoding method comprising:
receiving, as an input, coded data obtained by encoding, for a block to be decoded: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, and (2) index information indicating a combination of the following: (A) a weighting factor for each of the luminance and the two color differences and (B) an offset for each of the luminance and the two color differences;
deriving a weighting factor and an offset from the index information;
generating a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transform;
generating a prediction image for the block to be decoded by multiplying a reference image by the weighting factor and adding the offset; and
generating a decoded image signal for the block to be decoded by computing a sum of the prediction error signal and the prediction image.
16. A video decoding apparatus for decoding coded data obtained by encoding a video image having a luminance and two color differences, the video decoding apparatus comprising:
a receiver configured to receive, as an input, coded data obtained by encoding, for a block to be decoded: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, and (2) index information indicating a combination of the following: (A) a weighting factor for each of the luminance and the two color differences and (B) an offset for each of the luminance and the two color differences;
a derivation module configured to derive a weighting factor and an offset from the index information;
a first generator configured to generate a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transform;
a second generator configured to generate a prediction image for the block to be decoded by multiplying a reference image by the weighting factor and adding the offset; and
a third generator configured to generate a decoded image signal for the block to be decoded by computing a sum of the prediction error signal and the prediction image.
17. A video decoding method for decoding coded data obtained by subjecting a video image having a luminance and two color differences to motion-compensated prediction, to obtain the video image, the video decoding method comprising:
receiving coded data obtained by encoding the following: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, (2) information on a motion vector, and (3) index information indicating a combination of the following: (A) a reference image, (B) a weighting factor for each of the luminance and the two color differences, and (C) an offset for each of the luminance and the two color differences;
deriving a reference image, a weighting factor, and an offset from the index information of a block to be decoded;
generating, for the block to be decoded, a motion-compensated prediction image by multiplying the reference image by the weighting factor and adding the offset, based on a motion vector of the block to be decoded;
generating a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transform; and
generating a decoded image signal for the block to be decoded by computing a sum of the prediction error signal and the motion-compensated prediction image,
wherein two values of the index information that correspond to different reference images indicate the same combination of the weighting factor and the offset.
18. A video decoding apparatus for decoding coded data obtained by subjecting a video image having a luminance and two color differences to motion-compensated prediction, to obtain the video image, the video decoding apparatus comprising:
a receiver configured to receive coded data obtained by encoding the following: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, (2) information on a motion vector, and (3) index information indicating a combination of the following: (A) a reference image, (B) a weighting factor for each of the luminance and the two color differences, and (C) an offset for each of the luminance and the two color differences;
a derivation module configured to derive a reference image, a weighting factor, and an offset from the index information of a block to be decoded;
a first generator configured to generate, for the block to be decoded, a motion-compensated prediction image by multiplying the reference image by the weighting factor and adding the offset, based on a motion vector of the block to be decoded;
a second generator configured to generate a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transform; and
a third generator configured to generate a decoded image signal for the block to be decoded by computing a sum of the prediction error signal and the motion-compensated prediction image,
wherein two values of the index information that correspond to different reference images indicate the same combination of the weighting factor and the offset.
19. A video decoding method for decoding coded data obtained by subjecting a video image having a luminance and two color differences to predictive coding, the video decoding method comprising:
receiving coded data obtained by encoding, for a block to be decoded: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, and (2) index information indicating a combination of the following: (A) a reference image, (B) a weighting factor for each of the luminance and the two color differences, and (C) an offset for each of the luminance and the two color differences, wherein two values of the index information corresponding to different reference images indicate the same combination of the weighting factor and the offset, and a value obtained by arithmetically shifting right one of the two values is identical to a value obtained by arithmetically shifting right the other of the two values;
deriving a weighting factor and an offset from the index information;
generating a prediction image for the block to be decoded by adding the offset to a reference image multiplied by the weighting factor;
generating a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transform; and
generating a decoded image signal for the block to be decoded by computing a sum of the prediction error signal and the prediction image.
20. A video encoding apparatus for subjecting an input video image having a luminance and two color differences to predictive coding, the video encoding apparatus comprising:
a determination module configured to determine, for a block to be encoded of the input video image, a combination of the following: (A) a reference image, (B) a weighting factor for each of the luminance and the two color differences, and (C) an offset for each of the luminance and the two color differences;
a derivation module configured to derive index information indicating the selected combination, wherein two values corresponding to different reference images each indicate the same combination of the weighting factor and the offset, and a value obtained by arithmetically shifting right one of the two values is identical to a value obtained by arithmetically shifting right the other of the two values;
a first generator configured to generate a prediction image for the block to be encoded by adding the offset to the reference image multiplied by the weighting factor;
a second generator configured to generate a prediction error signal for the block to be encoded by computing an error between the input video image and the prediction image;
a third generator configured to generate quantized orthogonal transform coefficients of the block to be encoded by subjecting the prediction error signal to orthogonal transform and quantization; and
an encoder configured to encode (1) the quantized orthogonal transform coefficients and (2) the index information.
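The encoder-side flow of claim 20 — weighted prediction, prediction error, then transform and quantization — can be sketched as below. This is a minimal sketch under stated assumptions: pixels are plain Python lists, and the orthogonal transform is replaced by an identity placeholder (a real codec would apply a DCT here) so that only the quantization step remains visible.

```python
def encode_block(block, ref, weight, offset, qstep=2):
    # Prediction image: reference image multiplied by the weighting
    # factor, plus the offset.
    pred = [p * weight + offset for p in ref]
    # Prediction error signal: input block minus prediction image.
    err = [x - p for x, p in zip(block, pred)]
    # Orthogonal transform + quantization.  The transform is an
    # identity placeholder; qstep is a hypothetical quantization step.
    coeffs = [round(e / qstep) for e in err]
    # The coefficients are then entropy-coded together with the index
    # information (entropy coding omitted in this sketch).
    return coeffs
```

For instance, encoding the block `[10, 20]` against the reference `[4, 8]` with weight 2, offset 1 and a quantization step of 1 gives the prediction `[9, 17]`, the error `[1, 3]`, and hence the coefficients `[1, 3]`.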
21. A video decoding method for decoding coded data obtained by subjecting a video image having a luminance and two color differences to motion-compensated prediction, the video decoding method comprising:
receiving coded data obtained by encoding, in units of one or more blocks to be decoded, a plurality of combinations each comprising: (A) a weighting factor for each of the luminance and the two color differences, (B) an offset for each of the luminance and the two color differences, and (C) a flag indicating, for the block to be decoded, presence or absence of the weighting factor and the offset of the luminance, and by encoding, for the block to be decoded: (1) quantized orthogonal transform coefficients, (2) information on a motion vector, and (3) index information indicating one of the plurality of combinations;
deriving, for the block to be decoded, a combination comprising a weighting factor and an offset from the index information and the plurality of combinations;
generating a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transform;
generating, for the block to be decoded, a motion-compensated prediction image by adding the offset to a reference image multiplied by the weighting factor, according to a motion vector of the block to be decoded; and
generating a decoded image signal for the block to be decoded by computing a sum of the prediction error signal and the motion-compensated prediction image.
22. A video decoding apparatus for decoding coded data obtained by subjecting a video image having a luminance and two color differences to motion-compensated prediction, the video decoding apparatus comprising:
a receiver configured to receive coded data obtained by encoding, in units of one or more blocks to be decoded, a plurality of combinations each comprising: (A) a weighting factor for each of the luminance and the two color differences, (B) an offset for each of the luminance and the two color differences, and (C) a flag indicating, for the block to be decoded, presence or absence of the weighting factor and the offset of the luminance, and by encoding, for the block to be decoded: (1) quantized orthogonal transform coefficients, (2) information on a motion vector, and (3) index information indicating one of the plurality of combinations;
a derivation module configured to derive, for the block to be decoded, a combination comprising a weighting factor and an offset from the index information and the plurality of combinations;
a first generator configured to generate a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transform;
a second generator configured to generate, for the block to be decoded, a motion-compensated prediction image by adding the offset to a reference image multiplied by the weighting factor, according to a motion vector of the block to be decoded; and
a third generator configured to generate a decoded image signal for the block to be decoded by computing a sum of the prediction error signal and the motion-compensated prediction image.
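The flag-driven combination table of claims 21 and 22 can be sketched as follows. The claims only state that the flag signals presence or absence of the luminance weighting factor and offset; the fallback to an identity weight of 1 and an offset of 0 when the flag is absent is an assumption, as are the example table entries.

```python
# Each combination: (luma_weight, luma_offset, luma_flag).  The entries
# are hypothetical; in the coded data such a table is transmitted in
# units of one or more blocks to be decoded.
COMBINATIONS = [
    (1.0, 0.0, 0),   # flag 0: luminance weight/offset marked absent
    (0.5, 16.0, 1),  # flag 1: explicit weight 0.5 and offset 16
]

def derive_luma_params(index):
    # Index information selects one combination; when the flag marks
    # the parameters absent, fall back to the identity weighting
    # (weight 1, offset 0) -- the fallback is an assumption.
    w, o, flag = COMBINATIONS[index]
    return (w, o) if flag else (1.0, 0.0)

def predict_luma(ref, index):
    w, o = derive_luma_params(index)
    return [p * w + o for p in ref]
```

With these example entries, index 1 maps the reference sample 100 to 66 (100 × 0.5 + 16), while index 0 leaves it unchanged.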
23. A video decoding method for decoding coded data obtained by subjecting a video image having a luminance and two color differences to motion-compensated predictive coding, the video decoding method comprising:
receiving coded data obtained by encoding, in units of one or more blocks to be decoded, a plurality of combinations each comprising: (A) a weighting factor for each of the luminance and the two color differences, (B) an offset for each of the luminance and the two color differences, (C) a first flag indicating presence or absence of the weighting factor and the offset of the luminance, and (D) a second flag indicating presence or absence of the weighting factor and the offset of the color differences, and by encoding, for the block to be decoded: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, (2) information on a motion vector, and (3) index information indicating one of the plurality of combinations;
deriving, for the block to be decoded, a combination comprising a weighting factor and an offset from the index information and the plurality of combinations;
generating a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transform;
generating, for the block to be decoded, a motion-compensated prediction image by adding the offset to a reference image multiplied by the weighting factor, according to a motion vector of the block to be decoded; and
generating a decoded image signal for the block to be decoded by obtaining a sum of the prediction error signal and the motion-compensated prediction image.
24. A video decoding apparatus for decoding coded data obtained by subjecting a video image having a luminance and two color differences to motion-compensated predictive coding, the video decoding apparatus comprising:
a receiver configured to receive coded data obtained by encoding, in units of one or more blocks to be decoded, a plurality of combinations each comprising: (A) a weighting factor for each of the luminance and the two color differences, (B) an offset for each of the luminance and the two color differences, (C) a first flag indicating presence or absence of the weighting factor and the offset of the luminance, and (D) a second flag indicating presence or absence of the weighting factor and the offset of the color differences, and by encoding, for the block to be decoded: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences, (2) information on a motion vector, and (3) index information indicating one of the plurality of combinations;
a derivation module configured to derive, for the block to be decoded, a combination comprising a weighting factor and an offset from the index information and the plurality of combinations;
a first generator configured to generate a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transform;
a second generator configured to generate, for the block to be decoded, a motion-compensated prediction image by adding the offset to a reference image multiplied by the weighting factor, according to a motion vector of the block to be decoded; and
a third generator configured to generate a decoded image signal for the block to be decoded by obtaining a sum of the prediction error signal and the motion-compensated prediction image.
25. A video decoding method for decoding coded data obtained by subjecting a video image having a luminance and two color differences to predictive coding, the video decoding method comprising:
receiving coded data obtained by encoding, in units of one or more blocks to be decoded, a plurality of combinations each comprising: (A) a weighting factor for each of the luminance and the two color differences, (B) an offset for each of the luminance and the two color differences, and (C) a flag indicating presence or absence of the weighting factor and the offset of the luminance, and by encoding, for the block to be decoded: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences and (2) index information indicating one of the plurality of combinations;
deriving, for the block to be decoded, a combination comprising a weighting factor and an offset from the index information and the plurality of combinations;
generating a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transform;
generating a prediction image for the block to be decoded by adding the offset to a reference image multiplied by the weighting factor; and
generating a decoded image signal for the block to be decoded by computing a sum of the prediction error signal and the prediction image.
26. A video decoding apparatus for decoding coded data obtained by subjecting a video image having a luminance and two color differences to predictive coding, the video decoding apparatus comprising:
a receiver configured to receive coded data obtained by encoding, in units of one or more blocks to be decoded, a plurality of combinations each comprising: (A) a weighting factor for each of the luminance and the two color differences, (B) an offset for each of the luminance and the two color differences, and (C) a flag indicating presence or absence of the weighting factor and the offset of the luminance, and by encoding, for the block to be decoded: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences and (2) index information indicating one of the plurality of combinations;
a derivation module configured to derive, for the block to be decoded, a combination comprising a weighting factor and an offset from the index information and the plurality of combinations;
a first generator configured to generate a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transform;
a second generator configured to generate a prediction image for the block to be decoded by adding the offset to a reference image multiplied by the weighting factor; and
a third generator configured to generate a decoded image signal for the block to be decoded by computing a sum of the prediction error signal and the prediction image.
27. A video decoding method for decoding coded data obtained by encoding a video image having a luminance and two color differences, to obtain the video image, the video decoding method comprising:
receiving coded data obtained by encoding, for one or more blocks to be decoded, a plurality of combinations each comprising: (A) a weighting factor for each of the luminance and the two color differences and (B) an offset for each of the luminance and the two color differences, and by encoding, for the block to be decoded: (1) quantized orthogonal transform coefficients concerning a prediction error signal of the luminance and the two color differences and (2) index information indicating the following: (a) one of the plurality of combinations and (b) a reference image;
deriving, for the block to be decoded, a combination of a weighting factor and an offset from the index information and the plurality of combinations;
generating a prediction image for the block to be decoded by multiplying the reference image by the weighting factor and adding the offset;
generating a prediction error signal for the block to be decoded by subjecting the quantized orthogonal transform coefficients to dequantization and inverse orthogonal transform; and
generating a decoded image signal for the block to be decoded by computing a sum of the prediction error signal and the prediction image.
28. A video decoding apparatus for decoding coded data obtained by encoding a video image having a luminance and two color differences, to obtain the video image, the video decoding apparatus comprising:
a receiver configured to receive coded data obtained by encoding, for one or more to-be-decoded blocks, a plurality of combinations each including (A) a weighting factor for each of the luminance and the two color differences and (B) an offset for each of the luminance and the two color differences, and by encoding, for a to-be-decoded block, (1) quantized orthogonal transform coefficients of a prediction error signal for the luminance and the two color differences and (2) index information indicating (a) one of the plurality of combinations and (b) a reference image;
a derivation module configured to derive, for the to-be-decoded block, the combination of the weighting factor and the offset from the index information and the plurality of combinations;
a first generator configured to generate a predicted image for the to-be-decoded block by multiplying the reference image by the weighting factor and adding the offset thereto;
a second generator configured to generate a prediction error signal for the to-be-decoded block by subjecting the quantized orthogonal transform coefficients to dequantization and an inverse orthogonal transformation; and
a third generator configured to generate a decoded image signal for the to-be-decoded block by computing a sum of the prediction error signal and the predicted image.
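The predicted-image generation recited in the claims above (reference samples multiplied by a weighting factor, with an offset then added) can be sketched as follows. This is an illustrative sketch only, not the patent's normative procedure; the function name, the plain integer weight (real codecs scale the weight by a power-of-two denominator and round), and the clipping behavior are assumptions:

```python
def weighted_prediction(ref_block, weight, offset, bit_depth=8):
    """Form a predicted block from a reference block using one
    (weighting factor, offset) pair, clipping each sample to the
    valid range for the given bit depth."""
    max_val = (1 << bit_depth) - 1
    return [min(max(ref * weight + offset, 0), max_val) for ref in ref_block]
```

Separate (weight, offset) pairs would be applied per component: one for luminance and one for each of the two color differences.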
29. A video decoding method for decoding coded data obtained by subjecting a video image having a luminance and two color differences to predictive coding, to obtain the video image, the video decoding method comprising:
receiving coded data obtained by encoding, for a to-be-decoded block, (1) a predetermined number of index information elements each indicating a combination of (A) a weighting factor for each of the luminance and the two color differences and (B) an offset for each of the luminance and the two color differences, and (2) quantized orthogonal transform coefficients of a prediction error signal for the luminance and the two color differences;
deriving, for the to-be-decoded block, a predetermined number of weighting factors and a predetermined number of offsets from the predetermined number of index information elements;
generating a predicted image for the to-be-decoded block by multiplying each of a predetermined number of reference images by the weighting factor corresponding to that reference image and adding the predetermined number of offsets;
generating a prediction error signal for the to-be-decoded block by subjecting the quantized orthogonal transform coefficients to dequantization and an inverse orthogonal transformation; and
generating a decoded image signal for the to-be-decoded block by computing a sum of the prediction error signal and the predicted image.
30. A video decoding apparatus for decoding coded data obtained by subjecting a video image having a luminance and two color differences to predictive coding, to obtain the video image, the video decoding apparatus comprising:
a receiver configured to receive coded data obtained by encoding, for a to-be-decoded block, (1) a predetermined number of index information elements each indicating a combination of (A) a weighting factor for each of the luminance and the two color differences and (B) an offset for each of the luminance and the two color differences, and (2) quantized orthogonal transform coefficients of a prediction error signal for the luminance and the two color differences;
a derivation module configured to derive, for the to-be-decoded block, a predetermined number of weighting factors and a predetermined number of offsets from the predetermined number of index information elements;
a first generator configured to generate a predicted image for the to-be-decoded block by multiplying each of a predetermined number of reference images by the weighting factor corresponding to that reference image and adding the predetermined number of offsets;
a second generator configured to generate a prediction error signal for the to-be-decoded block by subjecting the quantized orthogonal transform coefficients to dequantization and an inverse orthogonal transformation; and
a third generator configured to generate a decoded image signal for the to-be-decoded block by computing a sum of the prediction error signal and the predicted image.
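Claims 29 and 30 apply a separate (weighting factor, offset) pair to each of a predetermined number of reference images. A minimal sketch, assuming the weighted references are combined by averaging as in bi-directional prediction; that combination rule, the names, and the clipping are assumptions, not language from the claims:

```python
def multi_ref_weighted_prediction(refs, weights, offsets, bit_depth=8):
    """Combine a predetermined number of reference blocks, each scaled
    by its own weighting factor and shifted by its own offset, then
    average the weighted references and clip to the sample range."""
    max_val = (1 << bit_depth) - 1
    n = len(refs)
    out = []
    for samples in zip(*refs):  # one sample per reference at this position
        acc = sum(s * w + o for s, w, o in zip(samples, weights, offsets))
        out.append(min(max(acc // n, 0), max_val))
    return out
```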
31. A video decoding method for decoding coded data generated by encoding a video image having a luminance and two color differences, to obtain the video image, the video decoding method comprising:
receiving coded data obtained by encoding, for one or more to-be-decoded blocks, a plurality of combinations each including (A) a weighting factor for each of the luminance and the two color differences and (B) an offset for each of the luminance and the two color differences, and by encoding, for a to-be-decoded block, (1) quantized orthogonal transform coefficients of a prediction error signal for the luminance and the two color differences and (2) a predetermined number of index information elements each indicating one of the plurality of combinations;
deriving a predetermined number of combinations each including a weighting factor and an offset from the predetermined number of index information elements and the plurality of combinations;
generating a predicted image for the to-be-decoded block by multiplying each of a predetermined number of reference images by the corresponding weighting factor of the predetermined number and adding the offsets of the predetermined number;
generating a prediction error signal for the to-be-decoded block by subjecting the quantized orthogonal transform coefficients to dequantization and an inverse orthogonal transformation; and
generating a decoded image signal for the to-be-decoded block by computing a sum of the prediction error signal and the predicted image.
32. A video decoding apparatus for decoding coded data generated by encoding a video image having a luminance and two color differences, to obtain the video image, the video decoding apparatus comprising:
a receiver configured to receive coded data obtained by encoding, for one or more to-be-decoded blocks, a plurality of combinations each including (A) a weighting factor for each of the luminance and the two color differences and (B) an offset for each of the luminance and the two color differences, and by encoding, for a to-be-decoded block, (1) quantized orthogonal transform coefficients of a prediction error signal for the luminance and the two color differences and (2) a predetermined number of index information elements each indicating one of the plurality of combinations;
a derivation module configured to derive a predetermined number of combinations each including a weighting factor and an offset from the predetermined number of index information elements and the plurality of combinations;
a first generator configured to generate a predicted image for the to-be-decoded block by multiplying each of a predetermined number of reference images by the corresponding weighting factor of the predetermined number and adding the offsets of the predetermined number;
a second generator configured to generate a prediction error signal for the to-be-decoded block by subjecting the quantized orthogonal transform coefficients to dequantization and an inverse orthogonal transformation; and
a third generator configured to generate a decoded image signal for the to-be-decoded block by computing a sum of the prediction error signal and the predicted image.
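Every claim above recovers the prediction error signal by dequantizing the quantized orthogonal transform coefficients and then applying an inverse orthogonal transformation. A toy sketch, assuming scalar dequantization (multiply by a quantization step) and a 2-point orthonormal Haar transform standing in for the inverse DCT that video codecs actually use; the function name and these simplifications are assumptions:

```python
import math

def prediction_error(qcoeffs, qstep):
    """Generate a prediction error signal from two quantized transform
    coefficients: dequantize by multiplying with the quantization step,
    then apply a 2-point inverse orthonormal (Haar) transform."""
    low, high = (c * qstep for c in qcoeffs)
    s = math.sqrt(0.5)
    return [s * (low + high), s * (low - high)]
```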
33. A video decoding method for decoding coded data generated by encoding a video image having a luminance and two color differences, the video decoding method comprising:
receiving coded data obtained by encoding, for one or more to-be-decoded blocks, a plurality of combinations each including (A) a weighting factor for each of the luminance and the two color differences and (B) an offset for each of the luminance and the two color differences, and by encoding, for a to-be-decoded block, (1) quantized orthogonal transform coefficients of a prediction error signal for the luminance and the two color differences and (2) a predetermined number of index information elements each indicating (a) one of the plurality of combinations and (b) a reference image;
deriving, for the to-be-decoded block, a predetermined number of combinations each including a weighting factor and an offset from the predetermined number of index information elements and the plurality of combinations;
generating a predicted image for the to-be-decoded block by multiplying each of a predetermined number of reference images by the weighting factor corresponding to that reference image and adding the predetermined number of offsets;
generating a prediction error signal for the to-be-decoded block by subjecting the quantized orthogonal transform coefficients to dequantization and an inverse orthogonal transformation; and
generating a decoded image signal for the to-be-decoded block by computing a sum of the prediction error signal and the predicted image.
34. A video decoding apparatus for decoding coded data obtained by encoding a video image having a luminance and two color differences, the video decoding apparatus comprising:
a receiver configured to receive coded data obtained by encoding, for one or more to-be-decoded blocks, a plurality of combinations each including (A) a weighting factor for each of the luminance and the two color differences and (B) an offset for each of the luminance and the two color differences, and by encoding, for a to-be-decoded block, (1) quantized orthogonal transform coefficients of a prediction error signal for the luminance and the two color differences and (2) a predetermined number of index information elements each indicating (a) one of the plurality of combinations and (b) a reference image;
a derivation module configured to derive, for the to-be-decoded block, a predetermined number of combinations each including a weighting factor and an offset from the predetermined number of index information elements and the plurality of combinations;
a first generator configured to generate a predicted image for the to-be-decoded block by multiplying each of a predetermined number of reference images by the weighting factor corresponding to that reference image and adding the predetermined number of offsets;
a second generator configured to generate a prediction error signal for the to-be-decoded block by subjecting the quantized orthogonal transform coefficients to dequantization and an inverse orthogonal transformation; and
a third generator configured to generate a decoded image signal for the to-be-decoded block by computing a sum of the prediction error signal and the predicted image.
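The final step in each of these claims sums the predicted image and the prediction error signal to form the decoded image signal. A minimal sketch; the rounding, the clipping to the sample range, and the function name are assumptions:

```python
def reconstruct_block(predicted, prediction_error, bit_depth=8):
    """Produce the decoded image signal for a block as the sum of the
    predicted image and the prediction error signal, rounded and
    clipped to the valid sample range."""
    max_val = (1 << bit_depth) - 1
    return [min(max(round(p + e), 0), max_val)
            for p, e in zip(predicted, prediction_error)]
```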
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2002-116718 | 2002-04-18 | ||
JP2002116718 | 2002-04-18 | ||
JP2002-340042 | 2002-11-22 | ||
JP2002340042 | 2002-11-22 | ||
JP2002340042A JP4015934B2 (en) | 2002-04-18 | 2002-11-22 | Video coding method and apparatus |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB038007576A Division CN1297149C (en) | 2002-04-18 | 2003-04-18 | Moving picture coding/decoding method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101631247A true CN101631247A (en) | 2010-01-20 |
CN101631247B CN101631247B (en) | 2011-07-27 |
Family
ID=37390619
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2009101458092A Expired - Lifetime CN101631247B (en) | 2002-04-18 | 2003-04-18 | Moving picture coding/decoding method and device |
CNB2006100899520A Expired - Fee Related CN100508609C (en) | 2002-04-18 | 2003-04-18 | Moving picture decoding method and device |
CN200910145811XA Expired - Lifetime CN101631248B (en) | 2002-04-18 | 2003-04-18 | Moving picture coding/decoding method and device |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB2006100899520A Expired - Fee Related CN100508609C (en) | 2002-04-18 | 2003-04-18 | Moving picture decoding method and device |
CN200910145811XA Expired - Lifetime CN101631248B (en) | 2002-04-18 | 2003-04-18 | Moving picture coding/decoding method and device |
Country Status (4)
Country | Link |
---|---|
JP (319) | JP4127713B2 (en) |
CN (3) | CN101631247B (en) |
ES (2) | ES2351306T3 (en) |
NO (1) | NO339262B1 (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103069805A (en) * | 2011-06-27 | 2013-04-24 | 松下电器产业株式会社 | Image encoding method, image decoding method, image encoding device, image decoding device, and image encoding/decoding device |
CN103826130A (en) * | 2010-04-08 | 2014-05-28 | 株式会社东芝 | Image decoding method and image decoding device |
CN106067973A (en) * | 2010-05-19 | 2016-11-02 | Sk电信有限公司 | Video decoding apparatus |
US9538181B2 (en) | 2010-04-08 | 2017-01-03 | Kabushiki Kaisha Toshiba | Image encoding method and image decoding method |
CN107454398A (en) * | 2011-01-12 | 2017-12-08 | 佳能株式会社 | Coding method, code device, coding/decoding method and decoding apparatus |
CN110536141A (en) * | 2012-01-20 | 2019-12-03 | 索尼公司 | The complexity of availability graph code reduces |
US11095878B2 (en) | 2011-06-06 | 2021-08-17 | Canon Kabushiki Kaisha | Method and device for encoding a sequence of images and method and device for decoding a sequence of image |
Families Citing this family (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0858940A (en) * | 1994-08-25 | 1996-03-05 | Mukai Kogyo Kk | Conveyer device |
US7075502B1 (en) | 1998-04-10 | 2006-07-11 | E Ink Corporation | Full color reflective display with multichromatic sub-pixels |
JP4015934B2 (en) | 2002-04-18 | 2007-11-28 | 株式会社東芝 | Video coding method and apparatus |
CN101631247B (en) * | 2002-04-18 | 2011-07-27 | 株式会社东芝 | Moving picture coding/decoding method and device |
CN101222638B (en) * | 2007-01-08 | 2011-12-07 | 华为技术有限公司 | Multi-video encoding and decoding method and device |
KR101365444B1 (en) * | 2007-11-19 | 2014-02-21 | 삼성전자주식회사 | Method and apparatus for encoding/decoding moving image efficiently through adjusting a resolution of image |
TWI447954B (en) | 2009-09-15 | 2014-08-01 | Showa Denko Kk | Light-emitting diode, light-emitting diode lamp and lighting device |
JP2011087202A (en) | 2009-10-19 | 2011-04-28 | Sony Corp | Storage device and data communication system |
JP5440927B2 (en) | 2009-10-19 | 2014-03-12 | 株式会社リコー | Distance camera device |
CN103826131B (en) * | 2010-04-08 | 2017-03-01 | 株式会社东芝 | Picture decoding method and picture decoding apparatus |
JP5325157B2 (en) * | 2010-04-09 | 2013-10-23 | 株式会社エヌ・ティ・ティ・ドコモ | Moving picture encoding apparatus, moving picture decoding apparatus, moving picture encoding method, moving picture decoding method, moving picture encoding program, and moving picture decoding program |
JP5482407B2 (en) | 2010-04-28 | 2014-05-07 | 株式会社リコー | Information processing apparatus, image processing apparatus, image processing system, screen customization method, screen customization program, and recording medium recording the program |
JP2012032611A (en) | 2010-07-30 | 2012-02-16 | Sony Corp | Stereoscopic image display apparatus |
JP5757075B2 (en) | 2010-09-15 | 2015-07-29 | ソニー株式会社 | Transmitting apparatus, transmitting method, receiving apparatus, receiving method, program, and broadcasting system |
KR101755601B1 (en) | 2010-11-04 | 2017-07-10 | 삼성디스플레이 주식회사 | Liquid Crystal Display integrated Touch Screen Panel |
KR20120080122A (en) * | 2011-01-06 | 2012-07-16 | 삼성전자주식회사 | Apparatus and method for encoding and decoding multi-view video based competition |
WO2012093879A2 (en) * | 2011-01-06 | 2012-07-12 | 삼성전자주식회사 | Competition-based multiview video encoding/decoding device and method thereof |
KR102064157B1 (en) * | 2011-03-06 | 2020-01-09 | 엘지전자 주식회사 | Intra prediction method of chrominance block using luminance sample, and apparatus using same |
MY193771A (en) | 2011-06-28 | 2022-10-27 | Samsung Electronics Co Ltd | Video encoding method using offset adjustments according to pixel classification and apparatus therefor, video decoding method and apparatus therefor |
JP5830993B2 (en) * | 2011-07-14 | 2015-12-09 | ソニー株式会社 | Image processing apparatus and image processing method |
US8599652B2 (en) | 2011-07-14 | 2013-12-03 | Tdk Corporation | Thermally-assisted magnetic recording medium and magnetic recording/reproducing device using the same |
CN103124346B (en) * | 2011-11-18 | 2016-01-20 | 北京大学 | A kind of determination method and system of residual prediction |
CA2860248C (en) * | 2011-12-22 | 2017-01-17 | Samsung Electronics Co., Ltd. | Video encoding method using offset adjustment according to classification of pixels by maximum encoding units and apparatus thereof, and video decoding method and apparatus thereof |
WO2014007514A1 (en) * | 2012-07-02 | 2014-01-09 | 엘지전자 주식회사 | Method for decoding image and apparatus using same |
TWI492373B (en) * | 2012-08-09 | 2015-07-11 | Au Optronics Corp | Flexible display module manufacturing method |
CN105189122B (en) | 2013-03-20 | 2017-05-10 | 惠普发展公司,有限责任合伙企业 | Molded die slivers with exposed front and back surfaces |
JP6087747B2 (en) | 2013-06-27 | 2017-03-01 | Kddi株式会社 | Video encoding device, video decoding device, video system, video encoding method, video decoding method, and program |
WO2015105048A1 (en) | 2014-01-08 | 2015-07-16 | 旭化成エレクトロニクス株式会社 | Output-current detection chip for diode sensors, and diode sensor device |
JP6619930B2 (en) | 2014-12-19 | 2019-12-11 | 株式会社Adeka | Polyolefin resin composition |
JP6434162B2 (en) * | 2015-10-28 | 2018-12-05 | 株式会社東芝 | Data management system, data management method and program |
AU2017224004B2 (en) | 2016-02-24 | 2021-10-28 | Magic Leap, Inc. | Polarizing beam splitter with low light leakage |
DE102019103438A1 (en) | 2019-02-12 | 2020-08-13 | Werner Krammel | Vehicle with tilting frame and spring damper system |
JP7402714B2 (en) | 2020-03-05 | 2023-12-21 | 東洋エンジニアリング株式会社 | Fluidized bed granulator or fluidized bed/entrained bed granulator |
CN114213441B (en) * | 2021-12-27 | 2023-12-01 | 中国科学院长春应用化学研究所 | Boron or phosphorus fused ring compound, preparation method thereof and light-emitting device |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2938412B2 (en) * | 1996-09-03 | 1999-08-23 | 日本電信電話株式会社 | Method for compensating luminance change of moving image, moving image encoding device, moving image decoding device, recording medium recording moving image encoding or decoding program, and recording medium recording moving image encoded data |
FR2755527B1 (en) * | 1996-11-07 | 1999-01-08 | Thomson Multimedia Sa | MOTION COMPENSATED PREDICTION METHOD AND ENCODER USING SUCH A METHOD |
CA2264834C (en) * | 1997-07-08 | 2006-11-07 | Sony Corporation | Video data encoder, video data encoding method, video data transmitter, and video data recording medium |
JP2001333389A (en) * | 2000-05-17 | 2001-11-30 | Mitsubishi Electric Research Laboratories Inc | Video reproduction system and method for processing video signal |
CN101631247B (en) * | 2002-04-18 | 2011-07-27 | 株式会社东芝 | Moving picture coding/decoding method and device |
- 2003
- 2003-04-18 CN CN2009101458092A patent/CN101631247B/en not_active Expired - Lifetime
- 2003-04-18 ES ES03717655T patent/ES2351306T3/en not_active Expired - Lifetime
- 2003-04-18 CN CNB2006100899520A patent/CN100508609C/en not_active Expired - Fee Related
- 2003-04-18 ES ES07006031T patent/ES2355656T3/en not_active Expired - Lifetime
- 2003-04-18 CN CN200910145811XA patent/CN101631248B/en not_active Expired - Lifetime
- 2006
- 2006-10-30 JP JP2006294673A patent/JP4127713B2/en not_active Expired - Lifetime
- 2006-10-30 JP JP2006294676A patent/JP4208913B2/en not_active Expired - Lifetime
- 2006-10-30 JP JP2006294675A patent/JP4127715B2/en not_active Expired - Lifetime
- 2006-10-30 JP JP2006294679A patent/JP4127718B2/en not_active Expired - Lifetime
- 2006-10-30 JP JP2006294678A patent/JP4127717B2/en not_active Expired - Lifetime
- 2006-10-30 JP JP2006294674A patent/JP4127714B2/en not_active Expired - Lifetime
- 2006-10-30 JP JP2006294677A patent/JP4127716B2/en not_active Expired - Lifetime
- 2008
- 2008-03-31 JP JP2008093242A patent/JP4208946B2/en not_active Expired - Lifetime
- 2008-03-31 JP JP2008093243A patent/JP4208947B2/en not_active Expired - Lifetime
- 2008-03-31 JP JP2008093244A patent/JP2008199651A/en active Pending
- 2008-03-31 JP JP2008093241A patent/JP4208945B2/en not_active Expired - Lifetime
- 2008-06-09 JP JP2008151019A patent/JP4208954B2/en not_active Expired - Lifetime
- 2008-06-09 JP JP2008151018A patent/JP4208953B2/en not_active Expired - Lifetime
- 2008-06-09 JP JP2008151017A patent/JP4208952B2/en not_active Expired - Lifetime
- 2008-06-09 JP JP2008151020A patent/JP4208955B2/en not_active Expired - Lifetime
- 2008-09-16 JP JP2008237024A patent/JP4234780B2/en not_active Expired - Lifetime
- 2008-09-16 JP JP2008237025A patent/JP4256455B2/en not_active Expired - Lifetime
- 2008-09-16 JP JP2008237026A patent/JP4256456B2/en not_active Expired - Lifetime
- 2008-10-03 JP JP2008258465A patent/JP4213766B1/en not_active Expired - Lifetime
- 2008-10-15 JP JP2008266419A patent/JP4256457B2/en not_active Expired - Lifetime
- 2008-10-15 JP JP2008266422A patent/JP4256460B2/en not_active Expired - Lifetime
- 2008-10-15 JP JP2008266424A patent/JP4256462B2/en not_active Expired - Lifetime
- 2008-10-15 JP JP2008266421A patent/JP4256459B2/en not_active Expired - Lifetime
- 2008-10-15 JP JP2008266423A patent/JP4256461B2/en not_active Expired - Lifetime
- 2008-10-15 JP JP2008266420A patent/JP4256458B2/en not_active Expired - Lifetime
- 2008-11-18 JP JP2008294769A patent/JP4234784B1/en not_active Expired - Lifetime
- 2008-11-18 JP JP2008294770A patent/JP4256465B2/en not_active Expired - Lifetime
- 2008-11-18 JP JP2008294767A patent/JP4234782B1/en not_active Expired - Lifetime
- 2008-11-18 JP JP2008294766A patent/JP4234781B1/en not_active Expired - Lifetime
- 2008-11-18 JP JP2008294768A patent/JP4234783B2/en not_active Expired - Lifetime
- 2008-11-26 JP JP2008301017A patent/JP4247304B2/en not_active Expired - Lifetime
- 2008-11-26 JP JP2008301018A patent/JP4247305B1/en not_active Expired - Lifetime
- 2008-11-26 JP JP2008301019A patent/JP4247306B1/en not_active Expired - Lifetime
- 2009
- 2009-01-08 JP JP2009002798A patent/JP4282752B2/en not_active Expired - Lifetime
- 2009-01-08 JP JP2009002801A patent/JP4282755B2/en not_active Expired - Lifetime
- 2009-01-08 JP JP2009002800A patent/JP4282754B2/en not_active Expired - Lifetime
- 2009-01-08 JP JP2009002802A patent/JP4282756B2/en not_active Expired - Fee Related
- 2009-01-08 JP JP2009002804A patent/JP4282758B2/en not_active Expired - Lifetime
- 2009-01-08 JP JP2009002805A patent/JP4309469B2/en not_active Expired - Lifetime
- 2009-01-08 JP JP2009002799A patent/JP4282753B2/en not_active Expired - Lifetime
- 2009-01-08 JP JP2009002803A patent/JP4282757B2/en not_active Expired - Lifetime
- 2009-03-23 JP JP2009069236A patent/JP4307525B2/en not_active Expired - Lifetime
- 2009-03-23 JP JP2009069248A patent/JP4307537B2/en not_active Expired - Lifetime
- 2009-03-23 JP JP2009069239A patent/JP4307528B2/en not_active Expired - Lifetime
- 2009-03-23 JP JP2009069243A patent/JP4307532B2/en not_active Expired - Lifetime
- 2009-03-23 JP JP2009069251A patent/JP4331259B2/en not_active Expired - Lifetime
- 2009-03-23 JP JP2009069242A patent/JP4307531B2/en not_active Expired - Lifetime
- 2009-03-23 JP JP2009069234A patent/JP4307523B2/en not_active Expired - Lifetime
- 2009-03-23 JP JP2009069247A patent/JP4307536B2/en not_active Expired - Lifetime
- 2009-03-23 JP JP2009069233A patent/JP4307522B2/en not_active Expired - Lifetime
- 2009-03-23 JP JP2009069245A patent/JP4307534B2/en not_active Expired - Lifetime
- 2009-03-23 JP JP2009069238A patent/JP4307527B2/en not_active Expired - Lifetime
- 2009-03-23 JP JP2009069246A patent/JP4307535B2/en not_active Expired - Lifetime
- 2009-03-23 JP JP2009069249A patent/JP4307538B2/en not_active Expired - Lifetime
- 2009-03-23 JP JP2009069235A patent/JP4307524B2/en not_active Expired - Lifetime
- 2009-03-23 JP JP2009069240A patent/JP4307529B2/en not_active Expired - Lifetime
- 2009-03-23 JP JP2009069241A patent/JP4307530B2/en not_active Expired - Lifetime
- 2009-03-23 JP JP2009069237A patent/JP4307526B2/en not_active Expired - Lifetime
- 2009-03-23 JP JP2009069244A patent/JP4307533B2/en not_active Expired - Fee Related
- 2009-03-23 JP JP2009069250A patent/JP4307539B2/en not_active Expired - Lifetime
- 2009-04-22 JP JP2009103542A patent/JP4320367B2/en not_active Expired - Lifetime
- 2009-04-22 JP JP2009103545A patent/JP4320370B2/en not_active Expired - Lifetime
- 2009-04-22 JP JP2009103543A patent/JP4320368B2/en not_active Expired - Lifetime
- 2009-04-22 JP JP2009103546A patent/JP4320371B2/en not_active Expired - Lifetime
- 2009-04-22 JP JP2009103544A patent/JP4320369B2/en not_active Expired - Lifetime
- 2009-04-22 JP JP2009103541A patent/JP4320366B2/en not_active Expired - Lifetime
- 2009-05-15 JP JP2009118113A patent/JP4338771B2/en not_active Expired - Lifetime
- 2009-05-15 JP JP2009118117A patent/JP4338775B2/en not_active Expired - Lifetime
- 2009-05-15 JP JP2009118115A patent/JP4338773B2/en not_active Expired - Lifetime
- 2009-05-15 JP JP2009118119A patent/JP4338777B2/en not_active Expired - Lifetime
- 2009-05-15 JP JP2009118116A patent/JP4338774B2/en not_active Expired - Lifetime
- 2009-05-15 JP JP2009118114A patent/JP4338772B2/en not_active Expired - Lifetime
- 2009-05-15 JP JP2009118121A patent/JP2009219137A/en active Pending
- 2009-05-15 JP JP2009118118A patent/JP4338776B2/en not_active Expired - Lifetime
- 2009-05-15 JP JP2009118120A patent/JP4338778B2/en not_active Expired - Lifetime
- 2009-06-05 JP JP2009136421A patent/JP4355756B2/en not_active Expired - Lifetime
- 2009-06-05 JP JP2009136422A patent/JP4355757B2/en not_active Expired - Lifetime
- 2009-06-05 JP JP2009136420A patent/JP4355755B2/en not_active Expired - Lifetime
- 2009-06-19 JP JP2009145908A patent/JP4355767B2/en not_active Expired - Lifetime
- 2009-06-19 JP JP2009145904A patent/JP4355763B2/en not_active Expired - Lifetime
- 2009-06-19 JP JP2009145907A patent/JP4355766B2/en not_active Expired - Lifetime
- 2009-06-19 JP JP2009145902A patent/JP4355761B2/en not_active Expired - Lifetime
- 2009-06-19 JP JP2009145910A patent/JP4355769B2/en not_active Expired - Lifetime
- 2009-06-19 JP JP2009145911A patent/JP4355770B2/en not_active Expired - Lifetime
- 2009-06-19 JP JP2009145906A patent/JP4355765B2/en not_active Expired - Lifetime
- 2009-06-19 JP JP2009145899A patent/JP4355758B2/en not_active Expired - Lifetime
- 2009-06-19 JP JP2009145901A patent/JP4355760B2/en not_active Expired - Lifetime
- 2009-06-19 JP JP2009145912A patent/JP4355771B2/en not_active Expired - Lifetime
- 2009-06-19 JP JP2009145905A patent/JP4355764B2/en not_active Expired - Lifetime
- 2009-06-19 JP JP2009145909A patent/JP4355768B2/en not_active Expired - Lifetime
- 2009-06-19 JP JP2009145903A patent/JP4355762B2/en not_active Expired - Lifetime
- 2009-06-19 JP JP2009145900A patent/JP4355759B2/en not_active Expired - Lifetime
- 2009-07-24 JP JP2009173192A patent/JP4376314B1/en not_active Expired - Lifetime
- 2009-07-24 JP JP2009173190A patent/JP4376312B1/en not_active Expired - Lifetime
- 2009-07-24 JP JP2009173180A patent/JP4376302B1/en not_active Expired - Lifetime
- 2009-07-24 JP JP2009173193A patent/JP4376315B2/en not_active Expired - Lifetime
- 2009-07-24 JP JP2009173187A patent/JP4376309B2/en not_active Expired - Lifetime
- 2009-07-24 JP JP2009173194A patent/JP4405584B2/en not_active Expired - Lifetime
- 2009-07-24 JP JP2009173181A patent/JP4376303B2/en not_active Expired - Lifetime
- 2009-07-24 JP JP2009173182A patent/JP4376304B2/en not_active Expired - Lifetime
- 2009-07-24 JP JP2009173189A patent/JP4376311B2/en not_active Expired - Lifetime
- 2009-07-24 JP JP2009173188A patent/JP4376310B1/en not_active Expired - Lifetime
- 2009-07-24 JP JP2009173191A patent/JP4376313B2/en not_active Expired - Lifetime
- 2009-07-24 JP JP2009173185A patent/JP4376307B2/en not_active Expired - Lifetime
- 2009-07-24 JP JP2009173186A patent/JP4376308B1/en not_active Expired - Lifetime
- 2009-07-24 JP JP2009173184A patent/JP4376306B2/en not_active Expired - Lifetime
- 2009-07-24 JP JP2009173183A patent/JP4376305B1/en not_active Expired - Lifetime
- 2009-07-24 JP JP2009173179A patent/JP4376301B2/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009211756A patent/JP4406064B1/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009211565A patent/JP4406054B2/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009212307A patent/JP4406067B2/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009211562A patent/JP4406051B2/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009212312A patent/JP4406072B2/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009211751A patent/JP4406059B2/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009211758A patent/JP4406066B1/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009212311A patent/JP4406071B2/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009211757A patent/JP4406065B2/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009211567A patent/JP4406056B2/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009212316A patent/JP4406076B2/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009211749A patent/JP4406057B2/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009212313A patent/JP4406073B2/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009212309A patent/JP4406069B2/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009212308A patent/JP4406068B2/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009212314A patent/JP4406074B2/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009211563A patent/JP4406052B2/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009211754A patent/JP4406062B1/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009212310A patent/JP4406070B2/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009211564A patent/JP4406053B1/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009211752A patent/JP4406060B1/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009212315A patent/JP4406075B2/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009211753A patent/JP4406061B2/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009211560A patent/JP4406049B1/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009211750A patent/JP4406058B2/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009211559A patent/JP4406048B2/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009211755A patent/JP4406063B2/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009211566A patent/JP4406055B2/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009211561A patent/JP4406050B2/en not_active Expired - Lifetime
- 2009-09-14 JP JP2009211558A patent/JP4406047B2/en not_active Expired - Lifetime
- 2009-09-25 JP JP2009220061A patent/JP4406087B1/en not_active Expired - Lifetime
- 2009-09-25 JP JP2009220058A patent/JP4406084B1/en not_active Expired - Lifetime
- 2009-09-25 JP JP2009220059A patent/JP4406085B1/en not_active Expired - Lifetime
- 2009-09-25 JP JP2009220054A patent/JP4406080B1/en not_active Expired - Lifetime
- 2009-09-25 JP JP2009220062A patent/JP4406088B1/en not_active Expired - Lifetime
- 2009-09-25 JP JP2009220060A patent/JP4406086B1/en not_active Expired - Lifetime
- 2009-09-25 JP JP2009220051A patent/JP4406077B1/en not_active Expired - Lifetime
- 2009-09-25 JP JP2009220052A patent/JP4406078B1/en not_active Expired - Lifetime
- 2009-09-25 JP JP2009220057A patent/JP4406083B1/en not_active Expired - Lifetime
- 2009-09-25 JP JP2009220053A patent/JP4406079B1/en not_active Expired - Lifetime
- 2009-09-25 JP JP2009220063A patent/JP4406089B1/en not_active Expired - Lifetime
- 2009-09-25 JP JP2009220055A patent/JP4406081B1/en not_active Expired - Lifetime
- 2009-09-25 JP JP2009220056A patent/JP4406082B1/en not_active Expired - Lifetime
- 2009-10-30 JP JP2009249569A patent/JP4427608B2/en not_active Expired - Lifetime
- 2009-10-30 JP JP2009249587A patent/JP4427626B2/en not_active Expired - Lifetime
- 2009-10-30 JP JP2009249581A patent/JP4427620B2/en not_active Expired - Lifetime
- 2009-10-30 JP JP2009249588A patent/JP4427627B2/en not_active Expired - Lifetime
- 2009-10-30 JP JP2009249570A patent/JP4427609B2/en not_active Expired - Lifetime
- 2009-10-30 JP JP2009249576A patent/JP4427615B2/en not_active Expired - Lifetime
- 2009-10-30 JP JP2009249580A patent/JP4427619B2/en not_active Expired - Lifetime
- 2009-10-30 JP JP2009249573A patent/JP4427612B2/en not_active Expired - Lifetime
- 2009-10-30 JP JP2009249572A patent/JP4427611B2/en not_active Expired - Lifetime
- 2009-10-30 JP JP2009249577A patent/JP4427616B2/en not_active Expired - Lifetime
- 2009-10-30 JP JP2009249579A patent/JP4427618B2/en not_active Expired - Lifetime
- 2009-10-30 JP JP2009249583A patent/JP4427622B2/en not_active Expired - Lifetime
- 2009-10-30 JP JP2009249584A patent/JP4427623B2/en not_active Expired - Lifetime
- 2009-10-30 JP JP2009249585A patent/JP4427624B2/en not_active Expired - Lifetime
- 2009-10-30 JP JP2009249575A patent/JP4427614B2/en not_active Expired - Lifetime
- 2009-10-30 JP JP2009249582A patent/JP4427621B2/en not_active Expired - Lifetime
- 2009-10-30 JP JP2009249574A patent/JP4427613B2/en not_active Expired - Lifetime
- 2009-10-30 JP JP2009249571A patent/JP4427610B2/en not_active Expired - Lifetime
- 2009-10-30 JP JP2009249578A patent/JP4427617B2/en not_active Expired - Lifetime
- 2009-10-30 JP JP2009249586A patent/JP4427625B2/en not_active Expired - Lifetime
- 2009-11-27 JP JP2009269463A patent/JP4438899B1/en not_active Expired - Lifetime
- 2009-11-27 JP JP2009269468A patent/JP4438904B1/en not_active Expired - Lifetime
- 2009-11-27 JP JP2009269465A patent/JP4438901B1/en not_active Expired - Lifetime
- 2009-11-27 JP JP2009269462A patent/JP4438898B1/en not_active Expired - Lifetime
- 2009-11-27 JP JP2009269466A patent/JP4438902B1/en not_active Expired - Lifetime
- 2009-11-27 JP JP2009269467A patent/JP4438903B1/en not_active Expired - Lifetime
- 2009-11-27 JP JP2009269461A patent/JP4438897B1/en not_active Expired - Lifetime
- 2009-11-27 JP JP2009269464A patent/JP4438900B1/en not_active Expired - Lifetime
- 2009-12-25 JP JP2009293617A patent/JP4460631B2/en not_active Expired - Lifetime
- 2009-12-25 JP JP2009293620A patent/JP4460634B2/en not_active Expired - Lifetime
- 2009-12-25 JP JP2009293624A patent/JP4460638B2/en not_active Expired - Lifetime
- 2009-12-25 JP JP2009293625A patent/JP4460639B2/en not_active Expired - Lifetime
- 2009-12-25 JP JP2009293623A patent/JP4460637B2/en not_active Expired - Lifetime
- 2009-12-25 JP JP2009293622A patent/JP4460636B2/en not_active Expired - Lifetime
- 2009-12-25 JP JP2009293626A patent/JP4460640B2/en not_active Expired - Lifetime
- 2009-12-25 JP JP2009293616A patent/JP4460630B2/en not_active Expired - Lifetime
- 2009-12-25 JP JP2009293619A patent/JP4460633B2/en not_active Expired - Lifetime
- 2009-12-25 JP JP2009293618A patent/JP4460632B2/en not_active Expired - Lifetime
- 2009-12-25 JP JP2009293627A patent/JP4460641B2/en not_active Expired - Lifetime
- 2009-12-25 JP JP2009293621A patent/JP4460635B2/en not_active Expired - Lifetime
2010
- 2010-02-05 JP JP2010023650A patent/JP4478739B2/en not_active Expired - Lifetime
- 2010-02-05 JP JP2010023646A patent/JP4478735B2/en not_active Expired - Lifetime
- 2010-02-05 JP JP2010023660A patent/JP4478749B2/en not_active Expired - Lifetime
- 2010-02-05 JP JP2010023648A patent/JP4478737B2/en not_active Expired - Lifetime
- 2010-02-05 JP JP2010023659A patent/JP4478748B2/en not_active Expired - Lifetime
- 2010-02-05 JP JP2010023655A patent/JP4478744B2/en not_active Expired - Lifetime
- 2010-02-05 JP JP2010023649A patent/JP4478738B2/en not_active Expired - Lifetime
- 2010-02-05 JP JP2010023647A patent/JP4478736B1/en not_active Expired - Lifetime
- 2010-02-05 JP JP2010023653A patent/JP4478742B2/en not_active Expired - Lifetime
- 2010-02-05 JP JP2010023651A patent/JP4478740B2/en not_active Expired - Lifetime
- 2010-02-05 JP JP2010023662A patent/JP4478751B2/en not_active Expired - Lifetime
- 2010-02-05 JP JP2010023656A patent/JP4478745B2/en not_active Expired - Lifetime
- 2010-02-05 JP JP2010023657A patent/JP4478746B2/en not_active Expired - Lifetime
- 2010-02-05 JP JP2010023654A patent/JP4478743B2/en not_active Expired - Lifetime
- 2010-02-05 JP JP2010023652A patent/JP4478741B2/en not_active Expired - Lifetime
- 2010-02-05 JP JP2010023658A patent/JP4478747B2/en not_active Expired - Lifetime
- 2010-02-05 JP JP2010023644A patent/JP4478733B2/en not_active Expired - Lifetime
- 2010-02-05 JP JP2010023645A patent/JP4478734B2/en not_active Expired - Lifetime
- 2010-02-05 JP JP2010023663A patent/JP4478752B2/en not_active Expired - Lifetime
- 2010-02-05 JP JP2010023661A patent/JP4478750B2/en not_active Expired - Lifetime
- 2010-03-08 JP JP2010050125A patent/JP4496310B2/en not_active Expired - Lifetime
- 2010-03-08 JP JP2010050120A patent/JP4496305B2/en not_active Expired - Lifetime
- 2010-03-08 JP JP2010050117A patent/JP4496302B2/en not_active Expired - Lifetime
- 2010-03-08 JP JP2010050127A patent/JP4496312B2/en not_active Expired - Lifetime
- 2010-03-08 JP JP2010050126A patent/JP4496311B2/en not_active Expired - Lifetime
- 2010-03-08 JP JP2010050115A patent/JP4496300B1/en not_active Expired - Lifetime
- 2010-03-08 JP JP2010050128A patent/JP4496313B2/en not_active Expired - Lifetime
- 2010-03-08 JP JP2010050123A patent/JP4496308B2/en not_active Expired - Lifetime
- 2010-03-08 JP JP2010050116A patent/JP4496301B2/en not_active Expired - Lifetime
- 2010-03-08 JP JP2010050124A patent/JP4496309B2/en not_active Expired - Lifetime
- 2010-03-08 JP JP2010050129A patent/JP4496314B2/en not_active Expired - Lifetime
- 2010-03-08 JP JP2010050119A patent/JP4496304B1/en not_active Expired - Lifetime
- 2010-03-08 JP JP2010050130A patent/JP4496315B2/en not_active Expired - Lifetime
- 2010-03-08 JP JP2010050121A patent/JP4496306B2/en not_active Expired - Lifetime
- 2010-03-08 JP JP2010050122A patent/JP4496307B2/en not_active Expired - Lifetime
- 2010-03-08 JP JP2010050118A patent/JP4496303B2/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089950A patent/JP4517035B2/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089951A patent/JP4517036B2/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089944A patent/JP4517029B2/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089934A patent/JP4517019B2/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089939A patent/JP4517024B2/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089935A patent/JP4517020B2/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089937A patent/JP4517022B2/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089929A patent/JP4517014B2/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089927A patent/JP4517012B1/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089936A patent/JP4517021B2/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089956A patent/JP4517041B2/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089955A patent/JP4517040B2/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089930A patent/JP4517015B1/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089933A patent/JP4517018B2/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089932A patent/JP4517017B1/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089926A patent/JP4517011B2/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089943A patent/JP4517028B2/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089945A patent/JP4517030B2/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089953A patent/JP4517038B2/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089942A patent/JP4517027B1/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089954A patent/JP4517039B2/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089949A patent/JP4517034B2/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089946A patent/JP4517031B2/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089928A patent/JP4517013B2/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089952A patent/JP4517037B2/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089938A patent/JP4517023B1/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089940A patent/JP4517025B2/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089947A patent/JP4517032B2/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089948A patent/JP4517033B2/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089941A patent/JP4517026B2/en not_active Expired - Lifetime
- 2010-04-09 JP JP2010089931A patent/JP4517016B2/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109735A patent/JP4538549B1/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109740A patent/JP4538554B2/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109743A patent/JP4538557B2/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109742A patent/JP4538556B2/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109748A patent/JP4538562B2/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109729A patent/JP4538543B2/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109734A patent/JP4538548B2/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109736A patent/JP4538550B2/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109747A patent/JP4538561B2/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109732A patent/JP4538546B2/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109727A patent/JP4538541B1/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109741A patent/JP4538555B2/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109738A patent/JP4538552B2/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109750A patent/JP4538564B2/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109744A patent/JP4538558B2/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109731A patent/JP4538545B2/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109737A patent/JP4538551B2/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109733A patent/JP4538547B2/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109730A patent/JP4538544B2/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109749A patent/JP4538563B2/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109725A patent/JP4538539B2/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109754A patent/JP4538568B2/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109739A patent/JP4538553B2/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109746A patent/JP4538560B2/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109745A patent/JP4538559B2/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109753A patent/JP4538567B1/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109728A patent/JP4538542B2/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109751A patent/JP4538565B2/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109752A patent/JP4538566B2/en not_active Expired - Lifetime
- 2010-05-12 JP JP2010109726A patent/JP4538540B2/en not_active Expired - Lifetime
- 2010-05-19 JP JP2010115590A patent/JP4538570B2/en not_active Expired - Lifetime
- 2010-05-19 JP JP2010115591A patent/JP4538571B2/en not_active Expired - Lifetime
- 2010-05-19 JP JP2010115589A patent/JP4538569B2/en not_active Expired - Lifetime
- 2010-06-24 JP JP2010144013A patent/JP4560140B2/en not_active Expired - Lifetime
- 2010-06-24 JP JP2010144011A patent/JP4560138B2/en not_active Expired - Lifetime
- 2010-06-24 JP JP2010144010A patent/JP4560137B2/en not_active Expired - Lifetime
- 2010-06-24 JP JP2010144012A patent/JP4560139B2/en not_active Expired - Lifetime
- 2010-06-24 JP JP2010144009A patent/JP4560136B2/en not_active Expired - Lifetime
- 2010-09-29 JP JP2010219373A patent/JP4637277B2/en not_active Expired - Fee Related
- 2010-09-29 JP JP2010219375A patent/JP4637278B2/en not_active Expired - Fee Related
- 2010-09-29 JP JP2010219372A patent/JP4637276B2/en not_active Expired - Fee Related
- 2010-09-29 JP JP2010219376A patent/JP4637279B2/en not_active Expired - Fee Related
- 2010-09-29 JP JP2010219374A patent/JP4625542B1/en not_active Expired - Fee Related
- 2010-09-29 JP JP2010219378A patent/JP4625543B1/en not_active Expired - Fee Related
- 2010-09-29 JP JP2010219377A patent/JP4637280B2/en not_active Expired - Fee Related
- 2010-10-07 JP JP2010227780A patent/JP4637281B2/en not_active Expired - Fee Related
- 2010-10-07 JP JP2010227791A patent/JP4637291B2/en not_active Expired - Fee Related
- 2010-10-07 JP JP2010227784A patent/JP4637285B2/en not_active Expired - Fee Related
- 2010-10-07 JP JP2010227789A patent/JP4637289B2/en not_active Expired - Fee Related
- 2010-10-07 JP JP2010227782A patent/JP4637283B2/en not_active Expired - Fee Related
- 2010-10-07 JP JP2010227785A patent/JP4637286B2/en not_active Expired - Fee Related
- 2010-10-07 JP JP2010227792A patent/JP4637292B2/en not_active Expired - Fee Related
- 2010-10-07 JP JP2010227790A patent/JP4637290B2/en not_active Expired - Fee Related
- 2010-10-07 JP JP2010227781A patent/JP4637282B2/en not_active Expired - Fee Related
- 2010-10-07 JP JP2010227783A patent/JP4637284B2/en not_active Expired - Fee Related
- 2010-10-07 JP JP2010227786A patent/JP4637287B2/en not_active Expired - Fee Related
- 2010-10-07 JP JP2010227788A patent/JP4637288B2/en not_active Expired - Fee Related
- 2010-10-07 JP JP2010227787A patent/JP4665062B2/en not_active Expired - Fee Related
2012
- 2012-04-20 JP JP2012096738A patent/JP5481518B2/en not_active Expired - Fee Related
2013
- 2013-12-02 JP JP2013249509A patent/JP5738968B2/en not_active Expired - Fee Related
2014
- 2014-12-05 JP JP2014247290A patent/JP5921656B2/en not_active Expired - Lifetime
2015
- 2015-03-05 NO NO20150299A patent/NO339262B1/en not_active IP Right Cessation
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9906812B2 (en) | 2010-04-08 | 2018-02-27 | Kabushiki Kaisha Toshiba | Image encoding method and image decoding method |
US10091525B2 (en) | 2010-04-08 | 2018-10-02 | Kabushiki Kaisha Toshiba | Image encoding method and image decoding method |
US10779001B2 (en) | 2010-04-08 | 2020-09-15 | Kabushiki Kaisha Toshiba | Image encoding method and image decoding method |
US9538181B2 (en) | 2010-04-08 | 2017-01-03 | Kabushiki Kaisha Toshiba | Image encoding method and image decoding method |
CN103826130B (en) * | 2010-04-08 | 2017-03-01 | 株式会社东芝 | Picture decoding method and picture decoding apparatus |
US12225227B2 (en) | 2010-04-08 | 2025-02-11 | Kabushiki Kaisha Toshiba | Image encoding method and image decoding method |
US9794587B2 (en) | 2010-04-08 | 2017-10-17 | Kabushiki Kaisha Toshiba | Image encoding method and image decoding method |
US12132927B2 (en) | 2010-04-08 | 2024-10-29 | Kabushiki Kaisha Toshiba | Image encoding method and image decoding method |
CN103826130A (en) * | 2010-04-08 | 2014-05-28 | 株式会社东芝 | Image decoding method and image decoding device |
US10715828B2 (en) | 2010-04-08 | 2020-07-14 | Kabushiki Kaisha Toshiba | Image encoding method and image decoding method |
US10009623B2 (en) | 2010-04-08 | 2018-06-26 | Kabushiki Kaisha Toshiba | Image encoding method and image decoding method |
US10560717B2 (en) | 2010-04-08 | 2020-02-11 | Kabushiki Kaisha Toshiba | Image encoding method and image decoding method |
US11889107B2 (en) | 2010-04-08 | 2024-01-30 | Kabushiki Kaisha Toshiba | Image encoding method and image decoding method |
US11265574B2 (en) | 2010-04-08 | 2022-03-01 | Kabushiki Kaisha Toshiba | Image encoding method and image decoding method |
US10999597B2 (en) | 2010-04-08 | 2021-05-04 | Kabushiki Kaisha Toshiba | Image encoding method and image decoding method |
US10542281B2 (en) | 2010-04-08 | 2020-01-21 | Kabushiki Kaisha Toshiba | Image encoding method and image decoding method |
CN106067973B (en) * | 2010-05-19 | 2019-06-18 | Sk电信有限公司 | Video decoding apparatus |
CN106067973A (en) * | 2010-05-19 | 2016-11-02 | Sk电信有限公司 | Video decoding apparatus |
US10506236B2 (en) | 2011-01-12 | 2019-12-10 | Canon Kabushiki Kaisha | Video encoding and decoding with improved error resilience |
US10609380B2 (en) | 2011-01-12 | 2020-03-31 | Canon Kabushiki Kaisha | Video encoding and decoding with improved error resilience |
US11146792B2 (en) | 2011-01-12 | 2021-10-12 | Canon Kabushiki Kaisha | Video encoding and decoding with improved error resilience |
US10499060B2 (en) | 2011-01-12 | 2019-12-03 | Canon Kabushiki Kaisha | Video encoding and decoding with improved error resilience |
CN107454398A (en) * | 2011-01-12 | 2017-12-08 | 佳能株式会社 | Coding method, code device, coding/decoding method and decoding apparatus |
US11095878B2 (en) | 2011-06-06 | 2021-08-17 | Canon Kabushiki Kaisha | Method and device for encoding a sequence of images and method and device for decoding a sequence of image |
CN103069805A (en) * | 2011-06-27 | 2013-04-24 | 松下电器产业株式会社 | Image encoding method, image decoding method, image encoding device, image decoding device, and image encoding/decoding device |
CN103069805B (en) * | 2011-06-27 | 2017-05-31 | 太阳专利托管公司 | Method for encoding images, picture decoding method, picture coding device, picture decoding apparatus and image encoding/decoding device |
US11025938B2 (en) | 2012-01-20 | 2021-06-01 | Sony Corporation | Complexity reduction of significance map coding |
CN110536141B (en) * | 2012-01-20 | 2021-07-06 | 索尼公司 | Complexity reduction for significance map coding |
CN110536141A (en) * | 2012-01-20 | 2019-12-03 | 索尼公司 | The complexity of availability graph code reduces |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101631247B (en) | Moving picture coding/decoding method and device |
CN101090493B (en) | Moving picture decoding/encoding method and device |
KR100786404B1 (en) | Video decoding method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CX01 | Expiry of patent term | Granted publication date: 20110727 |