
CN103916675A - Low-latency intraframe coding method based on strip division - Google Patents


Info

Publication number
CN103916675A
CN103916675A
Authority
CN
China
Prior art keywords: frame, image, coding, delay, encoding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410111378.9A
Other languages
Chinese (zh)
Other versions
CN103916675B (en)
Inventor
姚春莲
王群
张芳芳
李素
毛明毅
曹倩
刘鹂
Current Assignee
Beijing Technology and Business University
Original Assignee
Beijing Technology and Business University
Priority date
Filing date
Publication date
Application filed by Beijing Technology and Business University
Priority to CN201410111378.9A
Publication of CN103916675A
Application granted
Publication of CN103916675B
Legal status: Expired - Fee Related

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention proposes a low-delay intra-frame coding method based on slice (strip) division, which reduces delay in both the acquisition and encoding stages. First, the acquisition end captures each frame in units of slices, where the slice size depends on the video resolution, the delay requirement, and the coding standard. Then, encoding is performed slice by slice. To improve prediction accuracy and further reduce delay, each slice is divided into multiple sub-slices; the interpolated, enlarged reconstruction of one encoded sub-slice serves as the prediction image for intra-frame prediction of the others, which effectively improves prediction accuracy. Tests show that the proposed slice-based low-delay codec structure effectively reduces the inherent delay of the codec system; for standard-definition video, the delay can be reduced to 150 ms.

Description

A low-delay intra-frame coding method based on slice division

Technical Field

The present invention relates to a new low-delay intra-frame coding method, and in particular to low-delay intra-frame coding based on slice division, belonging to the technical field of computer vision.

Background Art

With the continuous development of network technology and terminal processing capability, and in order to keep improving compression performance, the ITU and ISO have issued a series of video coding standards, including the ITU H.26x series, the ISO MPEG-x series, and the recently finalized HEVC standard. The HEVC standard aims to satisfy users' demands for (1) high definition, (2) 3D, and (3) mobile wireless video, meeting the needs of new applications such as home theater, remote surveillance, digital broadcasting, mobile streaming, portable video capture, and medical imaging. HEVC offers multiple configuration modes, including the HE (High Efficiency) and LC (Low Complexity) configurations.

These coding standards fall into two main categories: storage-oriented and transmission-oriented. The H series is mainly transmission-oriented, while the MPEG series is mainly storage-oriented. Although the video coding standards were formulated by different organizations, at different times, and against different application backgrounds, they share the same basic coding framework, most commonly motion compensation plus DCT. Under this framework, video frames are generally divided into three types: I (intra-frame), P (predictive-frame), and B (bidirectionally predicted-frame). I-frames are encoded through transformation, quantization, and related processes; P-frames use the reconstructed image of a previously encoded frame as a reference and encode the residual after motion compensation; B-frames use the reconstructed images of both preceding and following encoded frames as references and encode the residual after motion compensation.

Among the three frame types, I-frames are relatively few in number, but the number of coded bits per I-frame is far higher than for P- or B-frames, and I-frames account for a considerable proportion of the final bitstream. In the H.264 standard, the I/P compression ratio is roughly I:P = 1:3–5, i.e. the bit rate of an I-frame is 3–5 times that of a P-frame; in the HEVC standard, with the introduction of new, more complex techniques, the gap between I-frame and P-frame compression widens further and can reach 1:10 in some videos. For a constant-bit-rate (CBR) video stream, a sudden increase in the I-frame bit rate directly reduces the number of bits available for the subsequent P/B frames, which in turn degrades the quality of the reconstructed and predicted images. Improving the compression ratio of I-frames is therefore crucial for the stability and continuity of video quality, and it also reduces the delay of the codec system. Conversely, if the I-frame bit rate is too high, more network bandwidth is needed to transmit the compressed bitstream.

In video coding, especially on mobile platforms where both bandwidth and complexity are constrained, the codec is expected to achieve low delay and low complexity. In the video encoding process, delay arises mainly for two reasons:

(1) The inherent time of encoding and decoding; if both encoding and decoding run in real time, no delay is introduced here.

(2) Channel transmission; if the channel is wide enough and uncongested, no delay is introduced here.

Setting channel issues aside, reducing the inherent codec delay becomes very important. The delay of a codec system consists of the following components (see Fig. 1): image acquisition, compression, the sending buffer, data-link transmission, the receiving buffer, decompression, image data format conversion, and display. For real-time encoding at 25 frames/s, the encoder completes compression within 40 ms after a frame has been captured. When the frame types are I and P, the ratio of encoded output bits is typically about 8:3, so the buffer size is usually set to 3.5–4 times the average per-frame bitstream to accommodate I-frame bursts. After transmission over the data link, the decoder decompresses the received bitstream, and the result is finally shown on the display device. In Fig. 1, the following two assumptions are made:

1) Data-link transmission is fixed throughout the encoding/decoding process (a CBR-type channel) and is not counted toward the delay.

2) In a real-time acquisition, processing, and transmission system, the receiving buffer and decompression run in parallel, so only one of the two times needs to be counted.

Under the standard coding framework, with the frame as the basic unit, the total codec delay breaks down as follows. Acquisition time T1 is the time needed to capture one frame; taking the PAL standard as an example, T1 = 1000/25 = 40 ms.

Encoding time T2: for real-time encoding, the time to encode one frame is T2 = 40 ms. Bitstream sending time T3: the component with the largest impact on delay is the sending buffer, which exists to absorb the transmission and storage problems of a variable-bit-rate stream. The larger the buffer, the better it tolerates bit-rate fluctuations, but the larger the delay it introduces, and vice versa. Taking I-frame encoding as an example, the generated bitstream may fill 3/4 of the buffer, so the sending-buffer delay is typically T3 = 3 × 40 = 120 ms, and in the extreme case of a full buffer it can reach 140–160 ms.

Bitstream receiving time T4: the time to decompress the first frame and convert the image data format, here taken as T4 = 25 ms.

Display time T5: after decompression the frame is sent to the display device, and displaying one frame takes T5 = 40 ms. Summing these times gives a total codec delay of T = T1 + T2 + T3 + T4 + T5 = 265 ms. Among these delays, the I-frame, as the key frame type, has the largest encoding/decoding delay and contributes the most to the total; reducing the delay of I-frame encoding and lowering the bit rate it generates therefore has a major impact on the overall delay.
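As a quick check, the delay budget above can be reproduced with a short calculation (all values are taken from the text; the 3-frame sending-buffer figure is the typical case, not the 140–160 ms extreme):

```python
# Frame-based codec delay budget at 25 frames/s (PAL), per the analysis above.
FRAME_MS = 1000 / 25      # one frame period = 40 ms

T1 = FRAME_MS             # acquisition of one frame
T2 = FRAME_MS             # real-time encoding of one frame
T3 = 3 * FRAME_MS         # sending buffer: ~3 frame periods in the typical case
T4 = 25.0                 # decompress first frame + format conversion
T5 = FRAME_MS             # display one frame

total = T1 + T2 + T3 + T4 + T5
print(total)  # → 265.0
```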

Summary of the Invention

The object of the present invention is to provide a low-delay intra-frame coding method based on slice division. By analyzing video codec delay, the method derives a low-delay coding structure suitable for I-frame coding, which addresses two aspects: the acquisition end and the encoding end. First, the acquisition end captures each frame in units of slices, where the slice size depends on the video resolution and the delay requirement. Then, encoding is performed slice by slice; to improve prediction accuracy and further reduce delay, each slice is divided into multiple sub-slices, and one of the sub-slices serves as the prediction unit for intra-frame prediction, which effectively improves prediction accuracy.

To achieve the above object, the present invention adopts the following technical solution, characterized by the following steps:

Step 1: Perform video acquisition in units of slices, dividing the input image into several slices so as to reduce acquisition delay. The slice size is determined by the size of the basic coding unit of the video coding standard in use. Let Nframe be the number of scan lines of one frame, Nbcu the number of pixel rows contained in the basic coding unit, and Nslice_bcu the number of pixel rows in each slice. A frame is divided into slices in units of multiples of Nbcu: Nslice_bcu = α × β × Nbcu, where α is a delay adjustment parameter, β is a resolution adjustment parameter, and both α and β are positive integers.
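A minimal sketch of the Step 1 slice sizing (the PAL frame size, Nbcu = 16, and α = β = 1 are illustrative example values, not mandated by the text):

```python
# Slice sizing per Step 1: N_slice_bcu = alpha * beta * N_bcu.
def slice_layout(n_frame, n_bcu, alpha=1, beta=1, fps=25):
    n_slice_bcu = alpha * beta * n_bcu       # pixel rows per slice
    n_slices = n_frame // n_slice_bcu        # slices per frame
    t1_ms = 1000 / fps / n_slices            # per-slice acquisition time
    return n_slice_bcu, n_slices, t1_ms

# Example: PAL 720x576 frame, H.264 macroblock rows (N_bcu = 16), alpha = beta = 1.
rows, count, t1 = slice_layout(n_frame=576, n_bcu=16)
print(rows, count, round(t1, 2))  # → 16 36 1.11
```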

Step 2: Perform predictive coding in units of slices. To improve prediction accuracy, the slice is subsampled by alternate rows and/or alternate columns and further divided into multiple sub-slices. These sub-slices are then predictively coded as follows. First, one of the sub-slices is taken as the basic sub-slice and encoded directly with the prediction method specified by the video coding standard; its bitstream goes straight to the output buffer and is transmitted to the decoder. Second, the encoded bitstream of the basic sub-slice is reconstructed, and the reconstructed image is enlarged by interpolation to the same resolution as the slice; the resulting interpolated image serves as the prediction image for the other sub-slices. Third, the remaining sub-slices are predicted during encoding by directly differencing co-located points against the prediction image, and the resulting residual data are encoded. During decoding, the remaining sub-slices are restored using the enlarged reconstruction of the basic sub-slice as the prediction image, which avoids repeated interpolation.

The slice division above depends on the coding standard: for the H.264 standard the basic coding unit is the macroblock, and for the HEVC standard it is the CU or LCU. To reduce the bitstream difference between I-frames and P-frames and keep the compressed bitstream of each frame steady, the type of the input image is checked: if the input image is an I-frame, all of its slices use intra coding; if the input image is a P- or B-frame, one slice of the frame uses intra coding while the remaining slices keep the original P-frame inter coding.

The slice-division-based low-delay intra-frame coding method provided by the present invention effectively improves prediction accuracy and thereby reduces encoding delay.

Brief Description of the Drawings

Fig. 1 is a schematic diagram of system delay;

Fig. 2 shows the slice-based acquisition and low-delay coding partition scheme;

Fig. 3 shows spatially adjacent-point prediction in JVT;

Fig. 4 shows the intra prediction modes of the HEVC standard;

Fig. 5 shows pixel prediction distances under different prediction schemes.

Detailed Description of the Embodiments

As stated above, the low-delay intra-frame coding method based on slice division according to the present invention reduces the bitstream, and hence the delay, without degrading image quality.

The implementation of the present invention is described below with reference to the drawings. The two assumptions of Fig. 1 and the frame-based delay analysis are the same as in the Background above: with the frame as the basic unit, T1 = T2 = T5 = 40 ms, T3 = 120 ms, and T4 = 25 ms, giving a total codec delay of T = T1 + T2 + T3 + T4 + T5 = 265 ms, to which the I-frame, as the key frame type, contributes the most. The encoding delay therefore has to be reduced in both the acquisition and the encoding stages.

Step 1: Reducing acquisition delay through slice division

Fig. 2 shows the slice-based acquisition and low-delay coding partition scheme. To reduce acquisition delay, the slices must be neither too small nor too large: if too small, they hurt the prediction performance during encoding; if too large, the reduction in acquisition delay is insignificant. Interlaced and progressive scanning are analyzed separately below.

Take an interlaced video sequence at 25 frames/s with a resolution of 720 × 576 as an example; each image has Nframe = 576 lines. For H.264 and earlier standards, slices are formed in units of macroblock rows, NMB = 16; the HEVC standard forms slices in units of LCU rows, NMB = 64. The analysis here uses H.264 and earlier standards; the HEVC analysis is similar and is omitted. NMB contains purely odd or purely even lines, and one frame has Nslice_MB = Nframe/NMB = 36 slices. The slice-based system delay is then as follows: encoding can start as soon as one slice has been captured, so the capture time is T1 = 40 ms/36 = 1.11 ms. In a slice-based codec, the compressed bitstream of a frame is relatively steady, so the buffer size can be set to 1.25–1.5 frame lengths and the sending-buffer delay is typically T3 = 40 ms. T2, T4, and T5 are computed as in the frame-based scheme, so the total is reduced by more than 100 ms compared with frame-based coding.
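The slice-based budget above can be checked the same way as the frame-based one (values from the text; T3 is taken as one frame period for the 1.25–1.5-frame buffer):

```python
# Slice-based delay budget for PAL 720x576 at 25 frames/s, per the analysis above.
FRAME_MS = 40.0
n_slices = 576 // 16                 # 36 macroblock-row slices (H.264)

t1 = FRAME_MS / n_slices             # encode as soon as one slice is captured
t2 = FRAME_MS                        # real-time encoding, as before
t3 = FRAME_MS                        # 1.25-1.5 frame buffer -> about one frame period
t4 = 25.0                            # decompression + format conversion
t5 = FRAME_MS                        # display

total_slice = t1 + t2 + t3 + t4 + t5
total_frame = 265.0                  # frame-based total from the earlier analysis
print(round(total_slice, 2), total_frame - total_slice > 100)  # → 146.11 True
```

The ≈146 ms total is consistent with the abstract's claim of reducing standard-definition delay to about 150 ms.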

Progressive scanning: slices can be formed from consecutive lines or from interleaved lines. The present invention adopts the interleaved form, in which NMB contains only odd or only even lines: the first slice is composed of the odd lines (the 1st, 3rd, 5th line, and so on), and the second slice of the even lines (the 2nd, 4th, 6th line, and so on).
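The interleaved split described above can be sketched as follows (toy 6-line frame; the line labels are illustrative):

```python
# Interleaved slice construction for progressive scan, as described above:
# the first slice takes the odd-numbered display lines, the second the even ones.
def interleave_slices(frame_rows):
    odd = frame_rows[0::2]    # 1st, 3rd, 5th, ... lines
    even = frame_rows[1::2]   # 2nd, 4th, 6th, ... lines
    return odd, even

rows = [f"line{i + 1}" for i in range(6)]
s1, s2 = interleave_slices(rows)
print(s1)  # → ['line1', 'line3', 'line5']
print(s2)  # → ['line2', 'line4', 'line6']
```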

Of the two slice partition schemes under progressive scanning, the delays differ little. The first scheme behaves the same as the interlaced case; the second needs one extra slice at acquisition time, so the capture time becomes T1 = (40 ms/36) × 2 = 2.22 ms.

Since most real-time systems use interlaced scanning, the implementation analysis below uses interlaced scanning for the slice partitioning and encoding.

The slice-based prediction coding scheme is implemented as follows:

The standard encoding process uses multi-mode prediction, predicting from multiple angles. The intra prediction algorithm of the H.264 standard takes 16 × 16 macroblocks and 4 × 4 blocks as basic prediction units and predicts each point of a block along 9 directions. Taking a 4 × 4 block as an example, as shown on the left of Fig. 3, a, b, …, p are the pixels of the current block to be predicted, and the 17 surrounding points Q, I, …, P are already-encoded points. Each of a–p is predicted point by point along the nine modes 0, 1, …, 8 shown on the right of Fig. 3 (mode 2 being DC prediction), taking the appropriate encoded neighboring points and computing the predicted value with the corresponding prediction formula; the predicted values are differenced against the original samples, the mode with the smallest difference is chosen as the final prediction mode, and the prediction residual coefficients are finally DCT-coded.
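To make the notion of prediction distance concrete, here is a minimal sketch of one of the nine modes — vertical (mode 0) — for a 4 × 4 block; the sample values are made up:

```python
# Minimal sketch of H.264 intra 4x4 vertical prediction (mode 0): every pixel
# is predicted from the encoded pixel directly above the block, so the
# prediction distance of row i (1-based) is i. Sample values are hypothetical.
def predict_vertical_4x4(top_row):
    return [list(top_row) for _ in range(4)]   # each row copies the row above

top = [100, 102, 104, 106]        # reconstructed pixels above the block
block = [[101, 103, 103, 107],    # original 4x4 block (made-up values)
         [100, 102, 105, 106],
         [ 99, 102, 104, 108],
         [100, 101, 104, 106]]
pred = predict_vertical_4x4(top)
residual = [[o - p for o, p in zip(orow, prow)]
            for orow, prow in zip(block, pred)]
print(residual[0])  # → [1, 1, -1, 1]
```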

The latest HEVC standard introduces three new concepts: the CU (coding unit), PU (prediction unit), and TU (transform unit). The coding unit is similar to the macroblock concept of H.264/AVC, with a maximum size of 64 × 64; the prediction unit is the basic unit of prediction; the transform unit is the basic unit of transformation and quantization. Separating the three units makes the transform, prediction, and coding stages more flexible and better matched to the texture characteristics of video images. The prediction unit can be 4 × 4, 8 × 8, 16 × 16, 32 × 32, or 64 × 64; different block sizes have different numbers of selectable prediction modes, namely 17, 34, 34, 34, and 3 respectively. As shown in Fig. 4, the unified intra prediction angles are +/-[0, 2, 5, 9, 13, 17, 21, 26, 32]/32, and the mode selection criterion is the same as in the JVT standard.

These prediction modes carry inherent complexity. The present invention introduces a hierarchical prediction scheme based on nearest pixels, shown in Fig. 5(B): the gray points in the figure are first predicted from the black pixels, and the remaining pixels are predicted in a second level from the encoded and reconstructed gray points. The concept of average prediction distance (the mean of the distances between all current pixels and their reference pixels, denoted Dpred) is used to compare the prediction distances of the standard method and the method of the present invention.

First, the first captured slice is divided into four sub-slices, subsampled in an alternate-row, alternate-column pattern. Then one of the sub-slices is encoded with the standard prediction modes, i.e. it passes through transformation, quantization, entropy coding, and related processing, yielding a reconstructed image. Next, the reconstructed image is enlarged to the slice size by bilinear interpolation. Finally, the remaining sub-slices are predicted according to the pattern of Fig. 5(B). Below is an analysis of the prediction distance of the present invention versus that of the H.264 standard; only the vertical prediction mode of the standard is analyzed theoretically, but the method applies analogously to the other prediction modes.
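The sub-slice pipeline just described can be sketched as follows (toy 4 × 4 slice; nearest-neighbour enlargement stands in for the bilinear interpolation of the text, and the actual encode/reconstruct step of the basic sub-slice is omitted):

```python
# Sketch of the sub-slice scheme above: a slice is split into four polyphase
# sub-slices (alternate rows x alternate columns); the "basic" sub-slice would
# be coded normally, and its reconstruction, enlarged back to the slice size,
# predicts the other three (residuals by differencing co-located points).
def split_subslices(img):
    return [[row[dx::2] for row in img[dy::2]] for dy in (0, 1) for dx in (0, 1)]

def upscale_nearest(sub, h, w):
    # Stand-in for the bilinear interpolation in the text; nearest-neighbour
    # keeps the sketch short, a real codec would interpolate.
    return [[sub[i // 2][j // 2] for j in range(w)] for i in range(h)]

slice_img = [[10, 11, 12, 13],
             [14, 15, 16, 17],
             [18, 19, 20, 21],
             [22, 23, 24, 25]]
basic, *others = split_subslices(slice_img)
pred = upscale_nearest(basic, 4, 4)      # prediction image for the other sub-slices
resid = [[slice_img[i][j] - pred[i][j] for j in range(4)] for i in range(4)]
print(basic)     # → [[10, 12], [18, 20]]
print(resid[1])  # → [4, 5, 4, 5]
```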

Let the basic block to be encoded have size 2N × 2N (typically N = 2, 4, 8, 16, …), and let Di,j denote the distance between the pixel at coordinates (i, j) and its reference pixel. The average prediction distance Dpred of the block is obtained as follows.

First compute the prediction distance Dcol of a single column of pixels.

The total prediction distance Dtotal of the block is the sum of the per-column prediction distances:

Dtotal = Dcol_1 + Dcol_2 + … + Dcol_2N = Σ_{j=1}^{2N} Dcol_j = Σ_{j=1}^{2N} Σ_{i=1}^{2N} Di,j    (1)

Fig. 5(A) shows the vertical intra prediction mode of H.264, in which all pixels in the same row have the same prediction distance, Di,j = i; the number marked on each current pixel in the figure is its prediction distance. The average prediction distance Dpred_A over all pixels of the block is therefore:

Dpred_A = Dtotal_A/(2N × 2N) = (1/(4N²)) Σ_{j=1}^{2N} Σ_{i=1}^{2N} Di,j = (1/(4N²))[2N × (1 + 2 + … + 2N)] = (4N³ + 2N²)/(4N²)    (2)

In Fig. 5(B), the arrows mark each pixel's reference pixel. As in Fig. 5(A), the prediction distance of a first-level pixel is determined by its distance from its reference pixel; since every second-level point is predicted from an adjacent point, all second-level prediction distances are 1.

The average prediction distance of the method of the present invention is computed as follows. First, the first-level prediction distance Dlevel_1 is the sum of the prediction distances of the N first-level (odd) columns:

Dlevel_1 = Dcol_1 + Dcol_3 + … + Dcol_2N−1

Since the first level is predicted on alternate rows and columns, each of these columns has prediction distance Dcol_j = 1 + 3 + … + (2N − 1), and substituting this into the expression above gives:

Dlevel_1 = N × (1 + 3 + 5 + … + (2N − 1)) = N³

The second-level prediction distance Dlevel_2 is the sum of the prediction distances of the three second-level points corresponding to each of the points obtained by the first-level prediction (2N in total), i.e. Dlevel_2 = 2N × 3 = 6N.

Therefore, with the method of Fig. 5(B), the total prediction distance Dtotal_B of the whole predicted pixel block is:

Dtotal_B = Dlevel_1 + Dlevel_2 = N³ + 6N

and the average prediction distance Dpred_B is:

Dpred_B = Dtotal_B/(2N × 2N) = (N³ + 6N)/(4N²)    (3)

Letting λ be the ratio of Dpred_A to Dpred_B gives:

λ = Dpred_A/Dpred_B = (4N² + 2N)/(N² + 6)

This shows that Dpred_A and Dpred_B differ by roughly a factor of several, and the gap widens as N grows.

Comparing equations (2) and (3) shows that the hierarchical prediction method greatly reduces the average prediction distance. For a 4 × 4 prediction block (N = 2), Dpred_A = 2.5 and Dpred_B = 1.25, a factor of two; for a larger 16 × 16 prediction block (N = 8), the average prediction distances of the two methods are 8.5 and about 2.2 respectively. Compared with the H.264 method, the nearest-pixel hierarchical prediction method lowers the average prediction distance by nearly 75%, and its advantage becomes more pronounced.
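Equations (2) and (3) and the figures quoted above check out numerically:

```python
# Numeric check of equations (2) and (3): average prediction distance of the
# standard vertical mode (Dpred_A) vs. the hierarchical scheme (Dpred_B),
# for a 2N x 2N block.
def d_pred_a(n):
    return (4 * n**3 + 2 * n**2) / (4 * n**2)   # eq. (2)

def d_pred_b(n):
    return (n**3 + 6 * n) / (4 * n**2)          # eq. (3)

for n in (2, 8):
    print(n, d_pred_a(n), d_pred_b(n))
# → 2 2.5 1.25
# → 8 8.5 2.1875
```

For N = 8 this reproduces the ≈75% reduction (8.5 → ≈2.2) quoted in the text, and the ratio λ = (4N² + 2N)/(N² + 6) matches Dpred_A/Dpred_B at both sizes.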

Performance Analysis

The proposed method was implemented on the H.264 baseline profile, tested, and compared with the results of the standard algorithm. The conventional peak signal-to-noise ratio (PSNR) is used as the metric for reconstructed image quality. By adjusting the quantization parameter, the number of coded bits (Kb/frame) produced by the standard algorithm and by the proposed method was measured for each sequence at the same PSNR (dB). As test sequences, three CIF sequences were selected: coastguard (intense motion), mother (relatively static), and flower (complex texture). The test results are shown in Table 1.

Table 1. Comparison of test results between the proposed algorithm and the standard algorithm
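For reference, the PSNR metric used above is the standard one for 8-bit video; a minimal implementation (not from the patent) is:

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference frame and its
    reconstruction: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")          # identical frames
    return 10.0 * np.log10(peak * peak / mse)

# Example: a uniform error of 16 grey levels on an 8-bit CIF luma frame
# gives 10*log10(255^2 / 256) dB.
ref = np.zeros((288, 352), dtype=np.uint8)
rec = np.full_like(ref, 16)
print(round(psnr(ref, rec), 2))      # 24.05
```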

Claims (4)

1. A low-delay intra-frame coding method based on slice division, characterized in that:

Step 1: Video acquisition is performed in units of slices. The input image is divided into several slices for acquisition in order to reduce the acquisition delay, where the slice size is determined by the size of the basic coding unit of the adopted video coding standard. Let N_frame denote the number of scan lines in one frame of image, N_bcu the number of pixel rows contained in the basic coding unit, and N_slice_bcu the number of pixel rows in each slice; a frame of image is divided into slices in units of multiples of N_bcu, with N_slice_bcu = α×β×N_bcu, where the parameter α is a delay adjustment parameter, β is a resolution adjustment parameter, and both α and β are positive integers;

Step 2: Predictive coding is performed in units of slices. To improve prediction accuracy, each slice is subsampled by alternate rows and/or alternate columns and further divided into multiple sub-slices, which are predictively coded as follows: first, one of the sub-slices serves as the basic sub-slice and is coded directly with the prediction method specified by the video coding standard; its coded bitstream enters the output buffer directly and is transmitted to the decoder for decoding; second, the coded bitstream of the basic sub-slice is reconstructed, and the reconstructed image is interpolated and enlarged to the same resolution as the slice, the resulting interpolated image serving as the prediction image for the other sub-slices; third, for the remaining sub-slices, prediction during coding is performed by directly taking the difference at co-located points of the prediction image, and the resulting residual data are coded. During decoding, the remaining sub-slices are restored using the reconstructed enlarged image of the basic sub-slice as the prediction image, which avoids repeated interpolation processing.

2. The intra-frame coding method according to claim 1, characterized in that the slice division depends on the coding standard: for the H.264 coding standard, the basic coding unit is a macroblock; for the HEVC coding standard, the basic coding unit is a CU or an LCU.

3. The intra-frame coding method according to claim 1, characterized in that, to reduce the bitstream difference between I frames and P frames and keep the compressed bitstream of each frame steady, the type of the input image is determined: if the input image is an I frame, all of its slices are intra-coded; if the input image is a P frame or a B frame, one slice of that frame is coded in Intra mode while the remaining slices still use the original Inter coding of the P frame.

4. The intra-frame coding method according to claim 1, characterized in that coding is performed as soon as a slice has been acquired, so as to reduce the acquisition delay.
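As an illustration of claim 1, the sketch below splits a frame into slices of N_slice_bcu = α×β×N_bcu rows, subsamples one slice into four sub-slices by alternate rows and columns, and predicts the other sub-slices from the enlarged basic sub-slice. All names are hypothetical, and nearest-neighbour replication stands in for the interpolation filter, which the claim does not specify:

```python
import numpy as np

def slice_rows(n_frame, n_bcu, alpha, beta):
    """Rows per slice per claim 1: N_slice_bcu = alpha * beta * N_bcu."""
    n_slice = alpha * beta * n_bcu
    assert n_frame % n_slice == 0, "frame height must be a multiple of the slice height"
    return n_slice

def split_subslices(slice_img):
    """Subsample a slice into 4 sub-slices by alternate rows and columns."""
    return [slice_img[i::2, j::2] for i in (0, 1) for j in (0, 1)]

def enlarge(sub):
    """Nearest-neighbour enlargement back to slice resolution (stand-in
    for the unspecified interpolation filter)."""
    return np.repeat(np.repeat(sub, 2, axis=0), 2, axis=1)

# Example: 480-line SD frame, H.264 macroblocks (16 rows), alpha = beta = 1
n_slice = slice_rows(480, 16, 1, 1)    # 16 rows per slice -> 30 slices
frame = np.random.randint(0, 256, (480, 704), dtype=np.int16)
sl = frame[:n_slice]                   # first slice, acquired and coded first

subs = split_subslices(sl)
basic = subs[0]                        # coded with the standard intra tools
pred = enlarge(basic)                  # prediction image for the other sub-slices
residuals = [sub - pred[i::2, j::2]
             for (i, j), sub in zip([(0, 0), (0, 1), (1, 0), (1, 1)], subs)]
# The basic sub-slice predicts itself exactly, so its residual is zero:
assert not residuals[0].any()
```

Only the residuals of the three non-basic sub-slices would be entropy-coded; the decoder repeats `enlarge` on the reconstructed basic sub-slice once and reuses it for all of them.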
CN201410111378.9A 2014-03-25 2014-03-25 A kind of low latency inner frame coding method divided based on band Expired - Fee Related CN103916675B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410111378.9A CN103916675B (en) 2014-03-25 2014-03-25 A kind of low latency inner frame coding method divided based on band


Publications (2)

Publication Number Publication Date
CN103916675A true CN103916675A (en) 2014-07-09
CN103916675B CN103916675B (en) 2017-06-20

Family

ID=51042019


Country Status (1)

Country Link
CN (1) CN103916675B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060153465A1 (en) * 2005-01-07 2006-07-13 Microsoft Corporation In-band wavelet video coding with spatial scalability
CN101150719A (en) * 2006-09-20 2008-03-26 华为技术有限公司 Method and device for parallel video coding
CN101184244A (en) * 2007-12-25 2008-05-21 北京数码视讯科技股份有限公司 Video coding method and system


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105635732A (en) * 2014-10-30 2016-06-01 联想(北京)有限公司 Adaptive sampling point compensation coding method and device, and method and device for decoding video code stream
CN105635732B (en) * 2014-10-30 2018-12-14 联想(北京)有限公司 The method and device that adaptive sampling point compensation is encoded, is decoded to video code flow
CN111064962A (en) * 2019-12-31 2020-04-24 广州市奥威亚电子科技有限公司 Video transmission system and method
CN111064962B (en) * 2019-12-31 2022-02-15 广州市奥威亚电子科技有限公司 Video transmission system and method
US11363298B2 (en) 2020-08-03 2022-06-14 Wiston Corporation Video processing apparatus and processing method of video stream
US11880966B2 (en) 2020-08-03 2024-01-23 Wistron Corporation Image quality assessment apparatus and image quality assessment method thereof
CN112040246A (en) * 2020-08-27 2020-12-04 西安迪威码半导体有限公司 Low-delay low-complexity fixed code rate control algorithm
CN112040235A (en) * 2020-11-04 2020-12-04 北京金山云网络技术有限公司 Video resource encoding method and device and video resource decoding method and device
CN112040235B (en) * 2020-11-04 2021-03-16 北京金山云网络技术有限公司 Video resource encoding method and device and video resource decoding method and device
CN116438794A (en) * 2022-05-31 2023-07-14 上海玄戒技术有限公司 Image compression method, device, electronic equipment, chip and storage medium
CN116438794B (en) * 2022-05-31 2023-12-12 上海玄戒技术有限公司 Image compression method, device, electronic equipment, chip and storage medium
CN117041599A (en) * 2023-08-28 2023-11-10 重庆邮电大学 HEVC-VPCC-based intra-frame rapid coding method and system

Also Published As

Publication number Publication date
CN103916675B (en) 2017-06-20


Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant (granted publication date: 20170620)
CF01 | Termination of patent right due to non-payment of annual fee (termination date: 20190325)