
CN1585486A - Non-loss visual-frequency compressing method based on space self-adaption prediction - Google Patents


Publication number
CN1585486A
Authority
CN
China
Prior art keywords: prediction, domain, time, err, video compression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN 200410024712
Other languages
Chinese (zh)
Inventor
张明锋 (Zhang Mingfeng)
张立明 (Zhang Liming)
胡波 (Hu Bo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fudan University
Original Assignee
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fudan University filed Critical Fudan University
Priority to CN 200410024712 priority Critical patent/CN1585486A/en
Publication of CN1585486A publication Critical patent/CN1585486A/en
Pending legal-status Critical Current


Landscapes

  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention is a lossless video compression method based on spatio-temporal adaptive prediction. The method combines temporal prediction and spatial prediction with an adaptive fusion technique, and then applies context-based entropy coding to compress video sequences losslessly. The compression efficiency of the method is about 10% higher than that of existing lossless video compression methods.

Description

Lossless video compression method based on spatio-temporal adaptive prediction
Technical field
The invention belongs to the field of video compression technology, and specifically relates to a lossless video compression method based on spatio-temporal adaptive prediction.
Technical background
In recent years, digital image and video compression has been studied further on the basis of the JPEG, MPEG-1 and MPEG-2 standards, and many new standards such as JPEG2000, MPEG-4 and MPEG-7 have appeared. This work, however, concentrates mainly on lossy compression. In many practical applications, lossless image and video compression is essential: in medical and remote-sensing imagery, for example, lossy compression may destroy an important lesion or target, yet the massive volume of such imagery must still be compressed to save storage space and improve transmission efficiency.
The most important redundancies in video are spatial, temporal and color-space redundancy. Spatial redundancy arises from the correlation between pixel values within a frame, which is especially evident in continuous-tone natural images; many algorithms remove it, and some have been adopted for lossless image compression, for example LOCO-I, which became the JPEG-LS international standard. Temporal redundancy arises from the correlation between frames that are close in time; lossy video compression algorithms such as MPEG-1 and MPEG-2 rely on removing it effectively. This correlation exists not only between consecutive frames but also between frames that are merely close in time. Finally, a third kind of redundancy comes from the correlation between the color components of a color image.
Existing lossless image compression methods mainly include LOCO-I, based on the MED predictor [8]; JPEG2000, based on the integer wavelet transform [9]; and context-based adaptive lossless image coding (CALIC) [2][3]. For lossless video compression, Memon et al. proposed a mixed temporal/spatial compression method in 1996 [6]; in 1998, X. Wu et al. proposed an inter-band CALIC algorithm [7]; and in 2002, Elias Carotti et al. proposed a backward-adaptive inter-frame predictor combined with an intra-frame spatial-domain predictor [1]. The compression ratios of these algorithms vary between 2 and 3 depending on the video stream. If JPEG-LS or CALIC is applied directly to lossless video compression, the compression ratio is low because the temporal correlation of the video is ignored. The algorithm in [7] considers the fusion of temporal and spatial prediction but does not do so adaptively, so the compression ratio improves little; the algorithm in [1] uses a new temporal-domain predictor that reduces complexity but also reduces prediction performance. In 2001, G.C.K. Abhayaratne showed that a wavelet transform of the motion-compensated residual image cannot effectively reduce its entropy and therefore cannot compress it effectively [5].
List of references
1. Elias Carotti, Juan Carlos De Martin, Angelo Raffaele Meo. Backward-adaptive lossless compression of video sequences. General Dynamics Decision Systems, 2002, p. 1817.
2. Wu, X. and Memon, N.D. Context-based adaptive lossless image coding. IEEE Transactions on Communications, 1997, Vol. 45, pp. 437-444.
3. Nasir Memon, Xiaolin Wu. Recent developments in context-based predictive techniques for lossless image compression. The Computer Journal, 1997, Vol. 40, No. 2/3.
4. Ali Bilgin, George Zweig, and Michael W. Marcellin. Three-dimensional image compression with integer wavelet transforms. Applied Optics, 2000, Vol. 39, No. 11.
5. G.C.K. Abhayaratne, D.M. Monro. Embedded to lossless coding of motion compensated prediction residuals in lossless video coding. Proceedings of SPIE, 2001, Vol. 4310, pp. 175-185.
6. N.D. Memon and K. Sayood. Lossless compression of video sequences. IEEE Transactions on Communications, 1996, Vol. 44, No. 10, pp. 1340-1345.
7. X. Wu, W. Choi, N. Memon. "Lossless interframe image compression via context modeling." In Proceedings of the Data Compression Conference, 1998, pp. 378-387.
8. Weinberger, M.J., Seroussi, G. and Sapiro, G. LOCO-I: a low complexity lossless image compression algorithm. ISO Working Document (1995) ISO/IEC JTC1/SC29/WG1 N203.
9. M.D. Adams and F. Kossentini. Reversible integer-to-integer wavelet transforms for image compression: performance evaluation and analysis. IEEE Transactions on Image Processing, 2000, Vol. 9, No. 6, pp. 1010-1024.
10. Y. Huang, H.M. Dreizen and N.P. Galatsanos. Prioritized DCT for compression and progressive transmission of images. IEEE Transactions on Image Processing, 1992, Vol. 2, No. 4, pp. 477-487.
Summary of the invention
The object of the invention is to propose a lossless video compression method based on spatio-temporal adaptive prediction with good compression performance.
The steps of the lossless video compression method based on spatio-temporal adaptive prediction proposed by the invention are as follows:
Within each frame, the GAP prediction method from CALIC is used to carry out spatial-domain prediction; between frames, the motion-estimation method is used to carry out temporal-domain prediction; the two predictions are then merged with an adaptive fusion method. A coding context is then obtained from the temporal-domain and spatial-domain predictions, and finally the prediction error is entropy-coded using the coding context. Each step is introduced below.
1 Prediction in the spatial domain
Spatial-domain prediction adopts the same GAP predictor as CALIC; the purpose of this step is to remove intra-frame correlation.
CALIC [2] performs very well in lossless image compression, largely because it adopts the GAP (gradient-adjusted prediction) predictor: it uses the neighborhood of the current pixel to predict that pixel very accurately, keeping the prediction error as small as possible, so that the subsequent entropy coding is much more efficient. The neighborhood of the current pixel P(i, j) is shown in Fig. 1.
GAP predicts the value of the current pixel P(i, j) from the neighbors N, W, NW, NN, NE and NNE given in the figure. It is a gradient-adjusted predictor: it tunes the predicted value according to the local gradient and therefore offers better performance than a general linear predictor. Its horizontal and vertical gradients are:
d_h = |W - WW| + |N - NW| + |N - NE|    (1)
d_v = |W - NW| + |N - NN| + |NE - NNE|    (2)
The predicted value P̂1(i, j) is then obtained by the following steps:
If d_v - d_h > T1, then P̂1(i, j) = W;
otherwise, if d_v - d_h < -T1, then P̂1(i, j) = N;
otherwise:
P̂1(i, j) = (N + W)/2 + (NE - NW)/4;
if d_v - d_h > T2, then P̂1(i, j) = (P̂1(i, j) + W)/2;
otherwise, if d_v - d_h > T3, then P̂1(i, j) = (3·P̂1(i, j) + W)/4;
otherwise, if d_v - d_h < -T2, then P̂1(i, j) = (P̂1(i, j) + N)/2;
otherwise, if d_v - d_h < -T3, then P̂1(i, j) = (3·P̂1(i, j) + N)/4.
where T1, T2 and T3 are thresholds used in the prediction process. Our experiments adopt the values proposed in [2], T1 = 80, T2 = 32, T3 = 8. These values were obtained from extensive experiments and can be adjusted to the resolution and characteristics of the images in a specific application.
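As an illustration, the GAP predictor of equations (1)-(2) and the threshold cascade above can be sketched as follows. This is a minimal sketch, not the patent's implementation: the function name, the (row, column) indexing convention with W to the left and N above, and the assumption that all seven neighbors lie inside the image are ours.

```python
import numpy as np

def gap_predict(img, i, j, T1=80, T2=32, T3=8):
    """Gradient-adjusted prediction (GAP) of pixel (i, j) from its causal
    neighborhood, following eqs. (1)-(2) and the threshold cascade.
    Neighbor names follow the patent: N above, W to the left, etc.
    Assumes i >= 2 and 2 <= j <= width - 2 so every neighbor exists."""
    N   = int(img[i - 1, j]);     W   = int(img[i, j - 1])
    NW  = int(img[i - 1, j - 1]); NE  = int(img[i - 1, j + 1])
    NN  = int(img[i - 2, j]);     WW  = int(img[i, j - 2])
    NNE = int(img[i - 2, j + 1])
    d_h = abs(W - WW) + abs(N - NW) + abs(N - NE)    # eq. (1)
    d_v = abs(W - NW) + abs(N - NN) + abs(NE - NNE)  # eq. (2)
    d = d_v - d_h
    if d > T1:
        return float(W)   # dominant vertical gradient: predict from W
    if d < -T1:
        return float(N)   # dominant horizontal gradient: predict from N
    p = (N + W) / 2 + (NE - NW) / 4
    if d > T2:
        p = (p + W) / 2
    elif d > T3:
        p = (3 * p + W) / 4
    elif d < -T2:
        p = (p + N) / 2
    elif d < -T3:
        p = (3 * p + N) / 4
    return p
```

On a flat region the predictor returns the local value unchanged; on a pure vertical ramp it follows the horizontal neighbor W, as the gradient rule intends.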
2 Prediction in the temporal domain
Temporal prediction is identical to the motion-estimation method generally adopted in the MPEG video compression standards. Working in units of 16 × 16 macroblocks, the current macroblock is regarded as a displaced macroblock of the previous frame; a search finds the matching macroblock of the previous frame that minimizes the following cost function:
DFD(β) = Σ_{(x,y)∈Nβ} |Pr(x, y) - Pl(x + vx, y + vy)|    (3)
where Pr(x, y) and Pl(x, y) denote the gray values of the current-frame and previous-frame macroblocks at (x, y), Nβ denotes the pixels of macroblock β, and (vx, vy) is the motion vector. When DFD(β) reaches its minimum:

(vx*, vy*) = arg min DFD(β)

where (vx*, vy*) is the optimal motion vector. The temporal-domain prediction can then be expressed as

P̂2(i, j) = Pl(x + vx*, y + vy*)    (4)

Here P̂2(i, j) is the predicted gray value of the current frame at (i, j): each pixel of the current macroblock is predicted by the gray value at the corresponding position of the previous-frame macroblock Pl(x + vx*, y + vy*).
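The full-search block matching behind equations (3)-(4) can be sketched as follows. The 16 × 16 block size and the sum-of-absolute-differences cost follow the text; the function name and the search-window radius are our assumptions, since the patent does not fix a search range.

```python
import numpy as np

def motion_estimate(prev, cur, bi, bj, bs=16, search=7):
    """Full-search block matching.  Finds the displacement (vx, vy)
    minimising the DFD of eq. (3) for the bs x bs block of `cur` whose
    top-left corner is (bi, bj); the prediction of eq. (4) is then the
    block of `prev` at (bi + vy, bj + vx)."""
    H, W = prev.shape
    block = cur[bi:bi + bs, bj:bj + bs].astype(int)
    best_dfd, best_v = None, (0, 0)
    for vy in range(-search, search + 1):          # vertical displacement
        for vx in range(-search, search + 1):      # horizontal displacement
            y, x = bi + vy, bj + vx
            if y < 0 or x < 0 or y + bs > H or x + bs > W:
                continue                           # candidate leaves the frame
            cand = prev[y:y + bs, x:x + bs].astype(int)
            dfd = int(np.abs(block - cand).sum())  # eq. (3), SAD cost
            if best_dfd is None or dfd < best_dfd:
                best_dfd, best_v = dfd, (vx, vy)
    return best_v, best_dfd
```

When the current block is an exact copy of a displaced previous-frame block, the search recovers that displacement with a DFD of zero.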
3 Fusion of the predictions
In the two preceding steps we obtained the spatial-domain prediction P̂1(i, j) and the temporal-domain prediction P̂2(i, j). We merge them with the following formula, and denote the fused prediction P̂(i, j):

P̂(i, j) = a(i, j) × P̂1(i, j) + b(i, j) × P̂2(i, j)

where a(i, j) and b(i, j) are fusion coefficients estimated from the predictions at previously coded points; they are adjusted adaptively at every pixel and are obtained from the following formulas:
ERR1(i, j) = |P(i-1, j) - P̂1(i-1, j)| + |P(i, j-1) - P̂1(i, j-1)|    (5)
ERR2(i, j) = |P(i-1, j) - P̂2(i-1, j)| + |P(i, j-1) - P̂2(i, j-1)|    (6)
a(i, j) = ERR2(i, j) / (ERR1(i, j) + ERR2(i, j)),    b(i, j) = ERR1(i, j) / (ERR1(i, j) + ERR2(i, j))    (7)
where P(i, j) is the actual gray value of pixel (i, j). The meaning of a(i, j) and b(i, j) can be understood as follows: a(i, j) + b(i, j) = 1, a(i, j) is the weight of predictor P̂1 and b(i, j) is the weight of predictor P̂2. We first compute ERR1(i, j), the sum of the absolute values of the spatial prediction errors at the two causal neighbors (i-1, j) and (i, j-1) of the current pixel, and ERR2(i, j), the corresponding sum for the temporal prediction errors. Whichever of the spatial and temporal predictors has the smaller recent error receives the larger weight, which yields the formulas for a(i, j) and b(i, j). Because a(i, j) and b(i, j) adapt with image position and are computed only from actual and predicted values at already coded neighbors of the current point, the decoder can recompute them in exactly the same way, so their values need not be stored during encoding.
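A minimal sketch of the adaptive fusion of equation (7). The equal-weight fallback when ERR1 = ERR2 = 0 is our addition; the patent does not say how the degenerate case is handled.

```python
def fuse_weights(err1, err2):
    """Fusion coefficients of eq. (7): a weights the spatial predictor,
    b the temporal one; the predictor with the smaller recent error gets
    the larger weight, and a + b = 1 always."""
    s = err1 + err2
    if s == 0:
        return 0.5, 0.5  # our convention: equal weights when both errors vanish
    return err2 / s, err1 / s

def fused_prediction(p1_hat, p2_hat, err1, err2):
    """Fused prediction a(i,j)*P1_hat + b(i,j)*P2_hat."""
    a, b = fuse_weights(err1, err2)
    return a * p1_hat + b * p2_hat
```

For example, a spatial error of 1 against a temporal error of 3 gives weights (0.75, 0.25), pulling the fused value toward the more reliable spatial prediction.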
4 Coding contexts
Even after the fused temporal and spatial prediction, the video stream has not yet reached its best coding efficiency; context-based coding can further improve the compression ratio. Context-based coding means dividing the error image to be encoded into different subsets according to different contexts, and encoding each subset separately. The theoretical basis for encoding after such a classification is that dividing a group of sources into several non-empty, disjoint components reduces the mean entropy of the group [10]. The principle is expressed as follows:
A source Xi is divided into M different component sequences Xi^k, where 1 ≤ k ≤ M.
Define the mean entropy of the original source:

H(Xi) = Σ_{r=1..R} -Pr·log2(Pr)    (8)

where R is the number of symbols in the original source and Pr is the probability of the r-th symbol of the original source. Define the mean entropy of the components:

H̄(Xi^k) = Σ_{k=1..M} (Lk/N)·H(Xi^k) = Σ_{k=1..M} (Lk/N)·{ Σ_{r=1..R} -Pr^k·log2(Pr^k) }    (9)

where Pr^k is the probability of the r-th symbol within the k-th component, Lk is the number of samples of the k-th component, and N is the total number of symbol samples in the original source.

The following theorem holds [10]:

H̄(Xi^k) ≤ H(Xi)    (10)
According to the theory above, we classify the error image according to some context and then encode it; this is context-based coding. One problem arises in context-based coding: the choice of the number of contexts. In an image, the number of selectable contexts is very large, and in some contexts the number of samples would then be far too small for reliable entropy coding. A crucial issue in context-based coding is therefore to reduce the number of contexts.
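The entropy-reduction principle of (8)-(10) can be checked numerically. In this toy example (ours, not from the patent) a perfect context split drives the weighted mean entropy of the components down to zero:

```python
import math
from collections import Counter

def entropy(symbols):
    """Shannon entropy in bits/symbol, eq. (8)."""
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in Counter(symbols).values())

# Toy source alternating two symbols; splitting by index parity (the
# "context") separates the two behaviours completely.
src = [0, 7, 0, 7, 0, 7, 0, 7]
parts = [src[0::2], src[1::2]]
h_src = entropy(src)                                        # 1 bit/symbol
h_bar = sum(len(p) / len(src) * entropy(p) for p in parts)  # eq. (9): 0 bits
assert h_bar <= h_src                                       # theorem (10)
```

Each component is constant, so its entropy is zero, while the unsplit source needs one bit per symbol: conditioning on a well-chosen context can only reduce the average code length.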
Here we adopt the absolute value C of the difference between the temporal-domain and spatial-domain predictions as the coding context:

C = |P̂1(i, j) - P̂2(i, j)|    (11)

C is quantized to 6 levels with quantization thresholds q1 = 4, q2 = 8, q3 = 16, q4 = 32, q5 = 64. These values are experimental and can be optimized in practical applications. Finer quantization is possible, but the resulting improvement in compression ratio is marginal.
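The 6-level quantization of C can be sketched as follows. The strict `<` boundary convention is our assumption, since the patent does not state whether the thresholds are inclusive; the threshold values themselves are the ones given above.

```python
def quantize_context(p1_hat, p2_hat, thresholds=(4, 8, 16, 32, 64)):
    """Map C = |P1_hat - P2_hat| (eq. (11)) to one of 6 coding contexts
    using the experimental thresholds q1..q5 = 4, 8, 16, 32, 64."""
    c = abs(p1_hat - p2_hat)
    for level, q in enumerate(thresholds):
        if c < q:           # assumed boundary convention: strict less-than
            return level
    return len(thresholds)  # C >= 64 falls in the last context
```

Small disagreements between the two predictors land in low-numbered contexts (typically flat regions), while large disagreements, common around motion boundaries, land in the last context.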
5 Entropy coding
Finally, the prediction error is entropy-coded using the coding context. This is a conventional method and is not repeated here.
Advantages of the invention:
The invention proposes a coding scheme that combines temporal prediction and spatial prediction, together with an adaptive fusion technique, and then applies context-based entropy coding to compress video sequences losslessly; the resulting lossless video compression is on average about 10% better than compression with JPEG-LS or CALIC.
Description of the drawings:
Fig. 1. Prediction template of the two-dimensional neighborhood.
Fig. 2. Block diagram of ATSVC-LS.
Fig. 3. The 20th frame (a) and the 21st frame (b) of the children motion sequence.
Fig. 4. Prediction error of motion compensation (a) and of GAP prediction (b).
Fig. 5. Prediction error of the fused prediction.
Embodiment
We simulated the algorithm proposed by the invention on the children motion sequence of 176 × 144 pixels. The original pictures of the 20th and 21st frames are shown in Fig. 3. We give the prediction errors of three different prediction methods for comparison. To compare the performance of our method with the others, we compare the entropies of the residual images after prediction: with lossless entropy coding, the smaller the entropy, the higher the compression ratio.
Prediction by motion estimation gives the prediction error shown on the left of Fig. 4; the entropy of the residual image is 2.81.
Prediction by GAP gives the prediction error shown on the right of Fig. 4; the entropy of the residual image is 5.09. Fusing these two predictions gives the prediction error of Fig. 5, whose entropy is 2.48; the conditional entropy of the prediction error given the coding context obtained by the method above is 2.21. As Figs. 4 and 5 show, fusing the temporal-domain and spatial-domain predictions makes the prediction more accurate and thus effectively reduces the entropy of the prediction error. At the same time, the context-based coding method reduces the entropy further, improving the compression efficiency.
Finally, we tested our algorithm on frames 1 to 100 (100 frames in all) of each of the claire, salesman, miss and children motion sequences, and compared it with lossless methods such as JPEG-LS and CALIC; the results are shown in Table 1.
Video stream    JPEG-LS    CALIC    ATSVC-LS*
claire          2.441      2.451    2.022
salesman        4.395      4.343    3.867
miss            3.234      3.203    3.354
children        3.381      3.311    3.169
average         3.363      3.327    3.102

* ATSVC-LS: the scheme of this paper.
Table 1. Test results.
The lossless video compression method based on spatio-temporal adaptive prediction (ATSVC-LS) proposed by the invention makes good use of temporal-domain and spatial-domain prediction, adopts an adaptive fusion method that improves prediction accuracy, and employs context-based conditional coding, so the compression performance is greatly improved. According to the experimental results, our method achieves a compression ratio nearly 10% better than lossless compression of video with the JPEG-LS or CALIC algorithms.

Claims (4)

1. A lossless video compression method based on spatio-temporal adaptive prediction, characterized by the following concrete steps: within each frame, the GAP prediction method from CALIC is used to carry out spatial-domain prediction; between frames, the motion-estimation method is used to carry out temporal-domain prediction; the two predictions are then merged with an adaptive fusion method; a coding context is then obtained from the temporal-domain and spatial-domain predictions, and finally the prediction error is entropy-coded using the coding context.
2. The lossless video compression method according to claim 1, characterized in that the temporal-domain prediction is:
P̂2(i, j) = Pl(x + vx*, y + vy*)    (4)
where (vx*, vy*) = arg min DFD(β)
and DFD(β) = Σ_{(x,y)∈Nβ} |Pr(x, y) - Pl(x + vx, y + vy)|    (3)
where Pr(x, y) and Pl(x, y) denote the gray values of the current-frame and previous-frame macroblocks at (x, y), Nβ denotes the pixels of macroblock β, and (vx, vy) is the motion vector.
3. The lossless video compression method according to claim 1, characterized in that the fused prediction is:
P̂(i, j) = a(i, j) × P̂1(i, j) + b(i, j) × P̂2(i, j)
where P̂1(i, j) is the spatial-domain prediction, P̂2(i, j) is the temporal-domain prediction, and a(i, j), b(i, j) are obtained from:
a(i, j) = ERR2(i, j) / (ERR1(i, j) + ERR2(i, j)),    b(i, j) = ERR1(i, j) / (ERR1(i, j) + ERR2(i, j))    (7)
with ERR1(i, j) = |P(i-1, j) - P̂1(i-1, j)| + |P(i, j-1) - P̂1(i, j-1)|    (5)
ERR2(i, j) = |P(i-1, j) - P̂2(i-1, j)| + |P(i, j-1) - P̂2(i, j-1)|    (6)
where P(i, j) is the actual gray value of pixel (i, j).
4. The lossless video compression method according to claim 1, characterized in that the absolute value C of the difference between the temporal-domain and spatial-domain predictions is adopted as the coding context:
C = |P̂1(i, j) - P̂2(i, j)|    (11)
where P̂1(i, j) and P̂2(i, j) have the same meanings as above.
CN 200410024712 2004-05-27 2004-05-27 Non-loss visual-frequency compressing method based on space self-adaption prediction Pending CN1585486A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200410024712 CN1585486A (en) 2004-05-27 2004-05-27 Non-loss visual-frequency compressing method based on space self-adaption prediction


Publications (1)

Publication Number Publication Date
CN1585486A true CN1585486A (en) 2005-02-23

Family

ID=34600962

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200410024712 Pending CN1585486A (en) 2004-05-27 2004-05-27 Non-loss visual-frequency compressing method based on space self-adaption prediction

Country Status (1)

Country Link
CN (1) CN1585486A (en)


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100377597C (en) * 2005-06-22 2008-03-26 浙江大学 Video compression method for mobile devices
US8208739B2 (en) 2005-10-25 2012-06-26 Siemens Aktiengesellshcsft Methods and devices for the determination and reconstruction of a predicted image area
CN101297557B (en) * 2005-10-25 2012-07-04 西门子公司 Methods and devices for the determination and reconstruction of a predicted image area
CN101389037B (en) * 2008-09-28 2012-05-30 湖北科创高新网络视频股份有限公司 Method and device for time-space domain segmentation multi-state video coding
CN101841705A (en) * 2010-03-12 2010-09-22 西安电子科技大学 Video lossless compression method based on adaptive template
CN103140175A (en) * 2010-08-09 2013-06-05 三星电子株式会社 Ultrasonic diagnostic apparatus and control method thereof
CN104350752A (en) * 2012-01-17 2015-02-11 华为技术有限公司 In-loop filtering for lossless coding mode in high efficiency video coding
CN107431807A (en) * 2015-03-04 2017-12-01 超威半导体公司 Content-adaptive B image model Video codings
CN109218726A (en) * 2018-11-01 2019-01-15 西安电子科技大学 Laser induced breakdown spectroscopy image damages lossless joint compression method
CN109218726B (en) * 2018-11-01 2020-04-07 西安电子科技大学 Laser-induced breakdown spectroscopy image lossy lossless joint compression method
CN119342251A (en) * 2024-12-20 2025-01-21 四川省机场集团有限公司 Video wireless transmission method and system based on 5G AeroMACS communication

Similar Documents

Publication Publication Date Title
CN1215439C (en) Apparatus and method for performing scalable hierarchical motion estimation
CN102835106B (en) Data compression for video
CN1227911C (en) Method and apparatus for compressing video information using motion dependent prediction
CN1976458A (en) Method of encoding flags in layer using inter-layer correlation, method and apparatus for decoding
CN1719735A (en) Method or device for coding a sequence of source pictures
CN1933601A (en) Method of and apparatus for lossless video encoding and decoding
CN1809165A (en) Method and apparatus for predicting frequency transform coefficients in video codec, video encoder and decoder having the apparatus, and encoding and decoding method using the method
CN104041048A (en) Method And Apparatus Video Encoding And Decoding Using Skip Mode
KR102027474B1 (en) Method and apparatus for encoding/decoding image by using motion vector of previous block as motion vector of current block
CN1777283A (en) Microblock based video signal coding/decoding method
CN1209928C (en) Inframe coding frame coding method using inframe prediction based on prediction blockgroup
CN1802667A (en) Overcomplete basis transform-based motion residual frame coding method and apparatus for video compression
CN102291582A (en) Distributed video encoding method based on motion compensation refinement
CN102187668A (en) Encoding and decoding with elimination of one or more predetermined predictors
CN1320830C (en) Noise estimating method and equipment, and method and equipment for coding video by it
CN1495603A (en) Computer reading medium using operation instruction to code
CN1585486A (en) Non-loss visual-frequency compressing method based on space self-adaption prediction
CN101977323A (en) Method for reconstructing distributed video coding based on constraints on temporal-spatial correlation of video
CN1604650A (en) Method for hierarchical motion estimation
CN1224270C (en) Frame coding method of inter-frame coding frame for two stage predicting coding of macro block group structure
CN101742301A (en) Block mode coding method and device
CN1744718A (en) In-frame prediction for high-pass time filtering frame in small wave video coding
CN1941914A (en) Method and apparatus for predicting DC coefficient in transform domain
CN1885948A (en) Motion vector space prediction method for video coding
CN1708134A (en) Method and apparatus for estimating motion

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication