
CN103339938B - Performing motion vector prediction for video coding - Google Patents

Performing motion vector prediction for video coding

Info

Publication number
CN103339938B
Authority
CN
China
Prior art keywords: motion vector, candidate motion, spatial candidate, spatial, vector
Prior art date
Legal status: Active
Application number
CN201280006666.7A
Other languages: Chinese (zh)
Other versions: CN103339938A
Inventor
钱威俊
陈培松
穆罕默德·蔡德·科班
马尔塔·卡切维奇
Current Assignee
Qualcomm Inc
Original Assignee
Qualcomm Inc
Priority date
Filing date
Publication date
Application filed by Qualcomm Inc
Publication of CN103339938A
Application granted
Publication of CN103339938B


Classifications

    All classifications fall under H ELECTRICITY, H04 ELECTRIC COMMUNICATION TECHNIQUE, H04N PICTORIAL COMMUNICATION, e.g. TELEVISION:

    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/895 Using pre-processing or post-processing specially adapted for video compression, involving detection of transmission errors at the decoder in combination with error concealment
    • H04N19/46 Embedding additional information in the video signal during the compression process
    • H04N19/50 Using predictive coding
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/52 Processing of motion vectors by encoding, by predictive encoding
    • H04N19/61 Using transform coding in combination with predictive coding
    • H04N19/70 Characterised by syntax aspects related to video coding, e.g. related to compression standards

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

In general, this disclosure describes techniques for performing motion vector prediction for video coding. An apparatus including a motion compensation unit may implement the techniques. The motion compensation unit determines spatial candidate motion vector predictors (MVPs) associated with a current portion of a video frame and prunes the spatial candidate motion vectors to remove duplicates, but does not remove a temporal candidate motion vector. Based on a motion vector predictor (MVP) index signaled in the bitstream, the motion compensation unit selects either the temporal candidate motion vector or one of the spatial candidate motion vectors remaining after pruning as the selected candidate motion vector, and performs motion compensation based on the selected candidate motion vector.

Description

Performing motion vector prediction for video coding

This application claims the benefit of U.S. Provisional Application No. 61/436,997, filed January 27, 2011, U.S. Provisional Application No. 61/449,985, filed March 7, 2011, and U.S. Provisional Application No. 61/561,601, filed November 18, 2011, the entire content of each of which is hereby incorporated by reference.

Technical Field

This disclosure relates to video coding and, more particularly, to motion compensation aspects of video coding.

Background

Digital video capabilities can be incorporated into a wide range of devices, including digital televisions, digital direct broadcast systems, wireless broadcast systems, personal digital assistants (PDAs), laptop or desktop computers, digital cameras, digital recording devices, digital media players, video game devices, video game consoles, cellular or satellite radio telephones, video teleconferencing devices, and the like. Digital video devices implement video compression techniques, such as those described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263, and ITU-T H.264/MPEG-4 Part 10 Advanced Video Coding (AVC), and extensions of such standards, to transmit and receive digital video information more efficiently. New video coding standards are under development, such as the High Efficiency Video Coding (HEVC) standard being developed by the Joint Collaborative Team on Video Coding (JCT-VC), which is a collaboration between MPEG and ITU-T. The emerging HEVC standard is sometimes referred to as H.265, although this name is not yet official.

Summary

In general, this disclosure describes techniques for specifying motion vector predictors (MVPs). MVPs are commonly used in video coding as a way to increase the efficiency with which motion compensation is performed. Rather than performing a search in a reference frame for a block that matches the current block, a video encoder may select a motion vector for the current block from an MVP list. In some examples, the MVP list may include the motion vectors of four blocks that spatially neighbor the current block, as well as the motion vector of a co-located block from a reference frame that temporally precedes or follows the current frame. The selected one of the MVPs is then used for the current block, reducing, if not eliminating, the motion compensation process.

In one example, a method of encoding video data includes determining spatial candidate motion vectors associated with a current portion of a current video frame, wherein the spatial candidate motion vectors comprise motion vectors determined for neighboring portions of the current video frame adjacent to the current portion; pruning the spatial candidate motion vectors to remove at least one of the spatial candidate motion vectors; and determining a temporal candidate motion vector associated with the current portion of the current video frame. The temporal candidate motion vector comprises a motion vector determined for a portion of a reference video frame. The method further includes selecting either the temporal candidate motion vector or one of the spatial candidate motion vectors remaining after pruning as a selected candidate motion vector, and signaling the selected candidate motion vector in a bitstream.

In another example, an apparatus for encoding video data includes means for determining spatial candidate motion vectors associated with a current portion of a current video frame, wherein the spatial candidate motion vectors comprise motion vectors determined for neighboring portions of the current video frame adjacent to the current portion; means for pruning the spatial candidate motion vectors to remove at least one of the spatial candidate motion vectors; and means for determining a temporal candidate motion vector associated with the current portion of the current video frame. The temporal candidate motion vector comprises a motion vector determined for a portion of a reference video frame. The apparatus further includes means for selecting either the temporal candidate motion vector or one of the spatial candidate motion vectors remaining after pruning as a selected candidate motion vector, and means for signaling the selected candidate motion vector in a bitstream.

In another example, an apparatus for encoding video data includes a motion compensation unit that determines spatial candidate motion vectors associated with a current portion of a current video frame, wherein the spatial candidate motion vectors comprise motion vectors determined for neighboring portions of the current video frame adjacent to the current portion; prunes the spatial candidate motion vectors to remove at least one of the spatial candidate motion vectors; and determines a temporal candidate motion vector associated with the current portion of the current video frame. The temporal candidate motion vector comprises a motion vector determined for a portion of a reference video frame. The apparatus also includes a mode selection unit that selects either the temporal candidate motion vector or one of the spatial candidate motion vectors remaining after pruning as a selected candidate motion vector, and an entropy coding unit that signals the selected candidate motion vector in a bitstream.

In another example, a non-transitory computer-readable medium comprises instructions that, when executed, cause one or more processors to determine spatial candidate motion vectors associated with a current portion of a current video frame, wherein the spatial candidate motion vectors comprise motion vectors determined for neighboring portions of the current video frame adjacent to the current portion; prune the spatial candidate motion vectors to remove at least one of the spatial candidate motion vectors; determine a temporal candidate motion vector associated with the current portion of the current video frame, wherein the temporal candidate motion vector comprises a motion vector determined for a portion of a reference video frame; select either the temporal candidate motion vector or one of the spatial candidate motion vectors remaining after pruning as a selected candidate motion vector; and signal the selected candidate motion vector in a bitstream.

In another example, a method of decoding video data includes determining spatial candidate motion vectors associated with a current portion of a current video frame, wherein the spatial candidate motion vectors comprise motion vectors determined for neighboring portions of the current video frame adjacent to the current portion, and pruning the spatial candidate motion vectors to remove at least one of the spatial candidate motion vectors but not a temporal candidate motion vector determined for the current portion of the current video frame. The temporal candidate motion vector comprises a motion vector determined for a portion of the reference video frame located at the same position in the reference video frame as the current portion is located in the current video frame. The method also includes selecting either the temporal candidate motion vector or one of the spatial candidate motion vectors remaining after pruning as a selected candidate motion vector, based on a motion vector predictor (MVP) index signaled in a bitstream, and performing motion compensation based on the selected candidate motion vector.

In another example, an apparatus for decoding video data includes means for determining spatial candidate motion vectors associated with a current portion of a current video frame, wherein the spatial candidate motion vectors comprise motion vectors determined for neighboring portions of the current video frame adjacent to the current portion, and means for pruning the spatial candidate motion vectors to remove at least one of the spatial candidate motion vectors but not a temporal candidate motion vector determined for the current portion of the current video frame. The temporal candidate motion vector comprises a motion vector determined for a portion of a reference video frame. The apparatus also includes means for selecting either the temporal candidate motion vector or one of the spatial candidate motion vectors remaining after pruning as a selected candidate motion vector, based on a motion vector predictor (MVP) index signaled in a bitstream, and means for performing motion compensation based on the selected candidate motion vector.

In another example, an apparatus for decoding video data includes a motion compensation unit that determines spatial candidate motion vectors associated with a current portion of a current video frame, wherein the spatial candidate motion vectors comprise motion vectors determined for neighboring portions of the current video frame adjacent to the current portion; prunes the spatial candidate motion vectors to remove at least one of the spatial candidate motion vectors but not a temporal candidate motion vector determined for the current portion of the current video frame, wherein the temporal candidate motion vector comprises a motion vector determined for a portion of a reference video frame; selects either the temporal candidate motion vector or one of the spatial candidate motion vectors remaining after pruning as a selected candidate motion vector, based on a motion vector predictor (MVP) index signaled in a bitstream; and performs motion compensation based on the selected candidate motion vector.

In another example, a non-transitory computer-readable medium comprises instructions that, when executed, cause one or more processors to determine spatial candidate motion vectors associated with a current portion of a current video frame, wherein the spatial candidate motion vectors comprise motion vectors determined for neighboring portions of the current video frame adjacent to the current portion; prune the spatial candidate motion vectors to remove at least one of the spatial candidate motion vectors but not a temporal candidate motion vector determined for the current portion of the current video frame, wherein the temporal candidate motion vector comprises a motion vector determined for a portion of a reference video frame; select either the temporal candidate motion vector or one of the spatial candidate motion vectors remaining after pruning as a selected candidate motion vector, based on a motion vector predictor (MVP) index signaled in a bitstream; and perform motion compensation based on the selected candidate motion vector.

The details of one or more examples are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description and drawings, and from the claims.

Brief Description of Drawings

FIG. 1 is a block diagram illustrating an example video encoding and decoding system that may be configured to utilize the techniques described in this disclosure for specifying motion vector predictors (MVPs).

FIG. 2 is a block diagram illustrating an example of a video encoder that may implement the techniques described in this disclosure for specifying motion vector predictors.

FIG. 3 is a block diagram illustrating an example of a video decoder that implements the motion vector prediction techniques described in this disclosure.

FIG. 4 is a flowchart illustrating exemplary operation of a video encoder in performing the motion vector prediction techniques described in this disclosure.

FIG. 5 is a flowchart illustrating exemplary operation of a video decoder in implementing the motion vector prediction techniques described in this disclosure.

FIG. 6 is a diagram illustrating an exemplary arrangement of neighboring PUs adjacent to a current prediction unit (PU) and a temporally co-located PU.

Detailed Description

Embodiments of the techniques described in this disclosure enable a video encoder to specify MVPs in a robust and efficient manner by pruning redundant spatial MVPs while excluding the temporally co-located MVP from the pruning process. In other words, the techniques form an intermediate MVP list that includes only the spatial MVPs, perform pruning with respect to this intermediate MVP list, and then add the temporally co-located MVP to the pruned intermediate MVP list to form the pruned MVP list. In this way, loss of the reference frame that specifies the temporally co-located MVP may not prevent parsing of the bitstream, as is common in conventional systems, while the coding efficiency gains achieved by applying the pruning process may still be maintained.

FIG. 1 is a block diagram illustrating an example video encoding and decoding system 10 that may be configured to utilize the techniques described in this disclosure for specifying motion vector predictors (MVPs). As shown in the example of FIG. 1, system 10 includes a source device 12 that generates encoded video for decoding by a destination device 14. Source device 12 may transmit the encoded video to destination device 14 via a communication channel 16, or may store the encoded video on a storage medium 34 or a file server 36 so that the encoded video may be accessed by destination device 14 as needed. Source device 12 and destination device 14 may comprise any of a wide variety of devices, including desktop computers, notebook (i.e., laptop) computers, tablet computers, set-top boxes, telephone handsets (including cellular telephones or handsets and so-called smartphones), televisions, cameras, display devices, digital media players, video game consoles, and the like.

In many cases, such devices may be equipped for wireless communication. Hence, communication channel 16 may comprise a wireless channel. Alternatively, communication channel 16 may comprise a wired channel, a combination of wireless and wired channels, or any other type of communication channel, or combination of communication channels, suitable for transmitting encoded video data, such as a radio frequency (RF) spectrum or one or more physical transmission lines. In some examples, communication channel 16 may form part of a packet-based network, such as a local area network (LAN), a wide area network (WAN), or a global network such as the Internet. Communication channel 16 therefore generally represents any suitable communication medium, or collection of different communication media, for transmitting video data from source device 12 to destination device 14, including any suitable combination of wired or wireless media. Communication channel 16 may include routers, switches, base stations, or any other equipment that may be useful to facilitate communication from source device 12 to destination device 14.

As further shown in the example of FIG. 1, source device 12 includes a video source 18, a video encoder 20, a modulator/demodulator 22 ("modem 22"), and a transmitter 24. In source device 12, video source 18 may include a source such as a video capture device. The video capture device may include, for example, one or more of a video camera, a video archive containing previously captured video, a video feed interface for receiving video from a video content provider, and/or a computer graphics system for generating computer graphics data as the source video. As one example, if video source 18 is a video camera, source device 12 and destination device 14 may form so-called camera phones or video phones. However, the techniques described in this disclosure are not limited to wireless applications or settings, and may be applied to non-wireless devices that include video encoding and/or decoding capabilities. Source device 12 and destination device 14 are therefore merely examples of coding devices that can support the techniques described herein.

Video encoder 20 may encode the captured, pre-captured, or computer-generated video. Once encoded, video encoder 20 may output this encoded video to modem 22. Modem 22 may then modulate the encoded video according to a communication standard, such as a wireless communication protocol, whereupon transmitter 24 may transmit the modulated encoded video data to destination device 14. Modem 22 may include various mixers, filters, amplifiers, or other components designed for signal modulation. Transmitter 24 may include circuits designed for transmitting data, including amplifiers, filters, and one or more antennas.

The captured, pre-captured, or computer-generated video that is encoded by video encoder 20 may also be stored onto a storage medium 34 or a file server 36 for later retrieval, decoding, and use. Storage medium 34 may include Blu-ray discs, DVDs, CD-ROMs, flash memory, or any other suitable digital storage media for storing encoded video. Destination device 14 may access the encoded video stored on storage medium 34 or file server 36, decode this encoded video to generate decoded video, and play back this decoded video.

File server 36 may be any type of server capable of storing encoded video and transmitting that encoded video to destination device 14. Example file servers include a web server (e.g., for a website), an FTP server, network attached storage (NAS) devices, a local disk drive, or any other type of device capable of storing encoded video data and transmitting it to a destination device. The transmission of encoded video data from file server 36 may be a streaming transmission, a download transmission, or a combination of both. Destination device 14 may access file server 36 through any standard data connection, including an Internet connection. This connection may include a wireless channel (e.g., a Wi-Fi connection or a wireless cellular data connection), a wired connection (e.g., DSL, cable modem, etc.), a combination of both wired and wireless channels, or any other type of communication channel suitable for accessing encoded video data stored on a file server.

In the example of FIG. 1, destination device 14 includes a receiver 26, a modem 28, a video decoder 30, and a display device 32. Receiver 26 of destination device 14 receives information over channel 16, and modem 28 demodulates the information to produce a demodulated bitstream for video decoder 30. The information communicated over channel 16 may include a variety of syntax information generated by video encoder 20 for use by video decoder 30 in decoding the associated encoded video data. Such syntax may also be included with the encoded video data stored on storage medium 34 or file server 36. Each of video encoder 20 and video decoder 30 may form part of a respective encoder-decoder (CODEC) capable of encoding or decoding video data.

Display device 32 of destination device 14 represents any type of display capable of presenting video data for viewing by a viewer. Although shown as integrated with destination device 14, display device 32 may be integrated with, or external to, destination device 14. In some examples, destination device 14 may include an integrated display device and also be configured to interface with an external display device. In other examples, destination device 14 may itself be a display device. In general, display device 32 displays the decoded video data to a user, and may comprise any of a variety of display devices, such as a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display, or another type of display device.

This disclosure may generally refer to video encoder 20 "signaling" syntax information to another device, such as video decoder 30. It should be understood, however, that video encoder 20 may signal information by associating syntax elements with various encoded portions of the video data. That is, video encoder 20 may "signal" data by storing certain syntax elements to headers of the various encoded portions of the video data. In some cases, such syntax elements may be encoded and stored (e.g., to storage medium 34 or file server 36) prior to being received and decoded by video decoder 30. Thus, the term "signaling" may generally refer to the communication of syntax or other data used to decode the compressed video data, whether such communication occurs in real time or near real time or over a span of time, such as might occur when storing syntax elements to a medium at the time of encoding, which syntax elements may then be retrieved by a decoding device at any time after being stored to that medium.

Video encoder 20 and video decoder 30 may operate according to a video compression standard, such as the High Efficiency Video Coding (HEVC) standard presently under development, and may conform to the HEVC Test Model (HM). Alternatively, video encoder 20 and video decoder 30 may operate according to other proprietary or industry standards, such as the ITU-T H.264 standard, alternatively referred to as MPEG-4 Part 10 Advanced Video Coding (AVC), or extensions of such standards. The techniques of this disclosure, however, are not limited to any particular coding standard. Other examples include MPEG-2 and ITU-T H.263.

The HM refers to a block of video data as a coding unit (CU). In general, a CU has a similar purpose to a macroblock coded according to H.264, except that a CU does not have the size distinction associated with the macroblocks of H.264. Thus, a CU may be split into sub-CUs. In general, references in this disclosure to a CU may refer to a largest coding unit (LCU) of a picture or a sub-CU of an LCU. For example, syntax data within a bitstream may define the LCU, which is the largest coding unit in terms of the number of pixels. An LCU may be split into sub-CUs, and each sub-CU may be split into further sub-CUs. Syntax data for the bitstream may define the maximum number of times an LCU may be split, referred to as the maximum CU depth. Accordingly, a bitstream may also define a smallest coding unit (SCU).

An LCU may be associated with a hierarchical quadtree data structure. In general, a quadtree data structure includes one node per CU, where the root node corresponds to the LCU. If a CU is split into four sub-CUs, the node corresponding to the CU includes a reference for each of the four nodes corresponding to the sub-CUs. Each node of the quadtree data structure may provide syntax data for the corresponding CU. For example, a node in the quadtree may include a split flag indicating whether the CU corresponding to the node is split into sub-CUs. Syntax elements for a CU may be defined recursively, and may depend on whether the CU is split into sub-CUs.
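The Python sketch below illustrates one way such a CU quadtree could be represented. The class and field names, and the 8x8 smallest-CU limit, are assumptions chosen for illustration and are not taken from the HM reference software.

```python
# Minimal sketch of the hierarchical CU quadtree described above.

class CUNode:
    def __init__(self, x, y, size, depth):
        self.x, self.y = x, y        # top-left pixel position of this CU
        self.size = size             # CU width/height in pixels
        self.depth = depth           # 0 for the LCU (the root node)
        self.split_flag = False      # syntax element: is this CU split?
        self.children = []           # four sub-CU nodes when split

    def split(self, max_depth):
        """Split this CU into four equally sized sub-CUs, if allowed."""
        if self.depth >= max_depth or self.size <= 8:  # assumed 8x8 SCU
            return
        self.split_flag = True
        half = self.size // 2
        self.children = [
            CUNode(self.x + dx, self.y + dy, half, self.depth + 1)
            for dy in (0, half) for dx in (0, half)
        ]

# Example: a 64x64 LCU split once into four 32x32 sub-CUs.
lcu = CUNode(0, 0, 64, depth=0)
lcu.split(max_depth=3)
print(len(lcu.children))  # 4
```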

A CU that is not split may include one or more prediction units (PUs). In general, a PU represents all or a portion of the corresponding CU, and includes data for retrieving a reference sample for the PU. For example, when the PU is intra-mode encoded, the PU may include data describing an intra-prediction mode for the PU. As another example, when the PU is inter-mode encoded, the PU may include data defining one or more motion vectors for the PU. A motion vector generally identifies a co-located CU in one or more reference frames, where the term "reference frame" refers to a frame that occurs temporally before or after the frame in which the PU is located. The data defining the PU for the CU may also describe, for example, partitioning of the CU into one or more PUs. Partitioning modes may differ depending on whether the CU is uncoded, intra-prediction mode encoded, or inter-prediction mode encoded.

The data defining a motion vector may describe, for example, a horizontal component of the motion vector, a vertical component of the motion vector, a resolution for the motion vector (e.g., one-quarter pixel precision or one-eighth pixel precision), a reference frame to which the motion vector points, a prediction direction identifying whether the identified reference frame is before or after the current frame, and/or a reference list (e.g., list 0 or list 1) for the motion vector. Alternatively, the data defining the motion vector may describe the motion vector in terms of what is referred to as a motion vector predictor (MVP). The motion vector predictors may include motion vectors of neighboring PUs or of a temporally co-located PU. Commonly, a list of five MVPs is formed in a defined manner, e.g., the MVPs are listed starting with the MVP having the largest amplitude (i.e., the largest or smallest displacement between the current PU to be coded and the reference PU) and ending with the MVP having the smallest amplitude, or the MVPs are listed based on location (i.e., above block, left block, corner block, temporal block), where four of the five MVPs are spatial MVPs selected from the four neighboring PUs and the fifth MVP is a temporally co-located MVP selected from the temporally co-located PU in a reference frame.
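As an illustration of the candidate-list orderings just described, the hedged Python sketch below builds a five-entry MVP list either in a fixed position-based order or from largest to smallest amplitude. The MotionVector fields and helper names are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MotionVector:
    mv_x: int            # horizontal component (e.g., in quarter-pel units)
    mv_y: int            # vertical component
    ref_idx: int         # index of the reference frame the vector points to
    backward: bool       # prediction direction: reference frame after the current frame?

def amplitude(mv):
    """Displacement magnitude used when ordering candidates by amplitude."""
    return (mv.mv_x ** 2 + mv.mv_y ** 2) ** 0.5

def build_list_by_position(above, left, corner_a, corner_b, temporal):
    """Five-entry MVP list in a fixed, position-based order."""
    return [above, left, corner_a, corner_b, temporal]

def build_list_by_amplitude(spatial_mvps, temporal_mvp):
    """Five-entry MVP list ordered from largest to smallest amplitude."""
    return sorted(list(spatial_mvps) + [temporal_mvp], key=amplitude, reverse=True)

a = MotionVector(4, -2, 0, False)
b = MotionVector(0, 0, 1, False)
print(build_list_by_amplitude([a, b, a, b], a)[0])  # largest-amplitude candidate first
```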

Although the temporal candidate motion vector is typically co-located, residing in the same portion of the reference frame as the current portion for which the motion vector is determined in the current frame, the techniques should not be strictly limited to co-located temporal candidate motion vectors. Instead, the techniques may be implemented with respect to any temporal candidate motion vector, whether or not that temporal candidate motion vector is co-located. In some instances, the video encoder may identify a temporal candidate motion vector that is not co-located with the current block or portion of the current frame and select this temporal candidate motion vector as the temporal MVP. Typically, the video encoder may signal that a non-co-located temporal MVP is used or, in some instances, a given context may indicate that a non-co-located temporal MVP is used (in which case the video encoder need not signal whether a non-co-located temporal MVP has been selected).

After forming the list of five MVPs, video encoder 20 may evaluate each of the MVPs to determine which provides the best rate and distortion characteristics that best match a given rate and distortion profile selected for encoding the video. Video encoder 20 may perform a rate-distortion optimization (RDO) procedure with respect to each of the five MVPs, selecting the one of the MVPs having the best RDO result. Alternatively, video encoder 20 may select the one of the five MVPs stored to the list that best approximates a motion vector determined for the current PU.
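The following is a hedged sketch of the two selection strategies described above: pick the candidate with the lowest rate-distortion cost J = D + lambda * R, or pick the candidate closest to the motion vector found by motion estimation. The cost callbacks are placeholders, not the encoder's actual RDO computation.

```python
def select_by_rdo(mvp_list, distortion_fn, rate_fn, lagrange_lambda):
    """Index of the MVP minimizing J = D + lambda * R (a generic RDO sketch)."""
    costs = [distortion_fn(mvp) + lagrange_lambda * rate_fn(mvp) for mvp in mvp_list]
    return costs.index(min(costs))

def select_closest(mvp_list, estimated_mv):
    """Index of the MVP closest (sum of absolute component differences) to the
    motion vector produced by motion estimation for the current PU."""
    def dist(mvp):
        return abs(mvp[0] - estimated_mv[0]) + abs(mvp[1] - estimated_mv[1])
    return min(range(len(mvp_list)), key=lambda i: dist(mvp_list[i]))

# Example with (mv_x, mv_y) tuples: the second candidate is closest to (5, -1).
print(select_closest([(0, 0), (4, -2), (12, 3)], (5, -1)))  # 1
```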

In any case, video encoder 20 may specify the motion vector using data comprising an index identifying the selected one of the MVPs in the list of five MVPs, the one or more reference frames to which the motion vector points (often in the form of a list), and a prediction direction identifying whether the prediction is uni-directional or bi-directional. Alternatively, the data defining the motion vector may specify only the index of the selected MVP in the list of five MVPs, without specifying the reference frame and the prediction direction, which indicates to the video decoder that the selected one of the MVPs is to be used in its entirety for the current PU.

In addition to having one or more PUs that define one or more motion vectors, a CU may include one or more transform units (TUs). Following prediction using a PU, a video encoder may calculate residual values for the portion of the CU corresponding to the PU, where such residual values may also be referred to as residual data. The residual values may be transformed, quantized, and scanned. A TU is not necessarily limited to the size of a PU. Thus, a TU may be larger or smaller than a corresponding PU of the same CU. In some examples, the maximum size of a TU may be the size of the corresponding CU. This disclosure also uses the term "block" to refer to any one or combination of a CU, a PU, and/or a TU.

In general, encoded video data may include prediction data and residual data. Video encoder 20 may produce the prediction data during an intra-prediction mode or an inter-prediction mode. Intra-prediction generally involves predicting the pixel values in a block of a picture relative to reference samples in neighboring, previously coded blocks of the same picture. Inter-prediction generally involves predicting the pixel values in a block of a picture relative to data of previously coded pictures.

Following intra- or inter-prediction, video encoder 20 may calculate residual pixel values for the block. The residual values generally correspond to differences between the predicted pixel value data for the block and the true pixel value data of the block. For example, the residual values may include pixel difference values indicating differences between coded pixels and predictive pixels. In some examples, the coded pixels may be associated with a block of pixels to be coded, and the predictive pixels may be associated with one or more blocks of pixels used to predict the coded block.
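The residual computation described above amounts to an element-wise difference between the original and predicted pixel blocks, as in the small sketch below; plain lists stand in for the encoder's actual pixel buffers.

```python
def compute_residual(original_block, predicted_block):
    """Return residual[i][j] = original[i][j] - predicted[i][j]."""
    return [
        [orig - pred for orig, pred in zip(orig_row, pred_row)]
        for orig_row, pred_row in zip(original_block, predicted_block)
    ]

original = [[120, 121], [119, 118]]
predicted = [[118, 120], [119, 121]]
print(compute_residual(original, predicted))  # [[2, 1], [0, -3]]
```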

To further compress the residual values of a block, the residual values may be transformed into a set of transform coefficients that compact as much data (also referred to as "energy") as possible into as few coefficients as possible. The transform techniques may comprise a discrete cosine transform (DCT) process or a conceptually similar process, integer transforms, wavelet transforms, or other types of transforms. The transform converts the residual values of the pixels from the spatial domain to a transform domain. The transform coefficients correspond to a two-dimensional matrix of coefficients that is ordinarily the same size as the original block. In other words, there are just as many transform coefficients as there are pixels in the original block. However, due to the transform, many of the transform coefficients may have values equal to zero.

Video encoder 20 may then quantize the transform coefficients to further compress the video data. Quantization generally involves mapping values within a relatively large range to values in a relatively small range, thereby reducing the amount of data needed to represent the quantized transform coefficients. More specifically, quantization may be applied according to a quantization parameter (QP), which may be defined at the LCU level. Accordingly, the same level of quantization may be applied to all transform coefficients in the TUs associated with different PUs of the CUs within an LCU. However, rather than signaling the QP itself, a change (i.e., a delta) in the QP may be signaled with the LCU. The delta QP defines a change in the quantization parameter for the LCU relative to some reference QP, such as the QP of a previously communicated LCU.
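The sketch below illustrates QP-driven scalar quantization and delta-QP signaling as described above. The step-size rule (the step roughly doubles for every increase of 6 in QP) follows the usual H.264/HEVC convention, but the exact constant is illustrative, not the standard's normative derivation.

```python
def qp_to_step(qp):
    """Quantization step size; roughly doubles for every increase of 6 in QP."""
    return 0.625 * (2 ** (qp / 6.0))

def quantize(coefficients, qp):
    """Map each transform coefficient into a smaller range of integer levels."""
    step = qp_to_step(qp)
    return [int(round(c / step)) for c in coefficients]

def delta_qp(lcu_qp, reference_qp):
    """Signal the change in QP for this LCU rather than the QP itself."""
    return lcu_qp - reference_qp

print(quantize([52, -3, 0, 17], qp=22))      # coarse integer levels, many become zero
print(delta_qp(lcu_qp=24, reference_qp=22))  # 2
```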

After quantization, video encoder 20 may scan the transform coefficients, producing a one-dimensional vector from the two-dimensional matrix that includes the quantized transform coefficients. Video encoder 20 may then perform statistical lossless coding (commonly, though somewhat inaccurately, referred to as "entropy coding") to encode the resulting array, compressing the data even further. In general, entropy coding comprises one or more processes that collectively compress a sequence of quantized transform coefficients and/or other syntax information. For example, syntax elements such as delta QPs, prediction vectors, coding modes, filters, offsets, or other information may also be included in the entropy coded bitstream. The scanned coefficients are then entropy coded along with any syntax information, e.g., via content adaptive variable length coding (CAVLC), context adaptive binary arithmetic coding (CABAC), or any other statistical lossless coding process.

As noted above, the data defining a motion vector for a PU may take a number of forms. Video encoder 20 may implement different ways by which to represent the motion vector so as to compress the motion vector data. Video encoder 20 may implement a mode referred to as merge mode to represent the motion vector as an index identifying an MVP stored to an MVP list constructed in a defined manner. In implementing the reciprocal of this merge mode, video decoder 30 receives this index, reconstructs the list of five MVPs according to the defined manner, and selects the one of the five MVPs in the list indicated by the index. Video decoder 30 then instantiates the selected one of the MVPs as the motion vector for the associated PU at the same resolution as the selected one of the MVPs and pointing to the same reference frame to which the selected one of the MVPs points. In implementing merge mode, video encoder 20 may not need to perform motion estimation to the full extent necessary to derive a motion vector, and need not specify the horizontal and vertical components of the motion vector, the motion vector resolution, the motion vector direction (meaning whether the motion vector points to a frame that temporally precedes or follows the current frame), or the reference frame index, thereby potentially reducing the processor cycles required to determine the motion vector as well as compressing the motion vector data.
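A minimal sketch of the decoder-side merge-mode behavior described above follows: rebuild the candidate list in the defined order, pick the candidate named by the signaled index, and adopt it wholesale for the current PU. The helper names and tuple representation are assumptions, not the HEVC reference-software API.

```python
def decode_merge_mode(spatial_mvps, temporal_mvp, merge_index):
    """Return the motion vector the current PU inherits in merge mode."""
    mvp_list = list(spatial_mvps) + [temporal_mvp]  # list rebuilt in the defined order
    selected = mvp_list[merge_index]                # index parsed from the bitstream
    return selected                                 # adopted as-is for the current PU

# Example with (mv_x, mv_y, ref_idx) tuples and a signaled index of 2.
print(decode_merge_mode([(4, -2, 0), (0, 0, 1), (8, 3, 0), (1, 1, 0)], (6, 6, 0), 2))
# (8, 3, 0)
```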

Video encoder 20 may also implement an adaptive motion vector prediction (AMVP) mode which, like merge mode, involves representing the motion vector as an index identifying an MVP stored to an MVP list constructed in a defined manner. In contrast to merge mode, however, video encoder 20 may also specify the prediction direction and the reference frame, effectively overriding those portions of the selected one of the MVPs. In implementing AMVP mode, video encoder 20 may not need to perform motion estimation to the full extent necessary to derive a motion vector, and need not specify the horizontal and vertical components of the motion vector or the motion vector resolution, thereby potentially reducing the processor cycles required to determine the motion vector as well as compressing the motion vector data.
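For contrast with merge mode, the following hedged sketch shows the AMVP behavior described above: the MVP is still chosen by index, but the signaled prediction direction and reference frame override the corresponding fields of the chosen candidate. The dictionary representation is an assumption for illustration.

```python
def decode_amvp_mode(mvp_list, mvp_index, signaled_direction, signaled_ref_idx):
    """Adopt the indexed MVP's vector but override direction and reference frame."""
    selected = mvp_list[mvp_index]
    return {
        "mv_x": selected["mv_x"],         # inherited from the selected MVP
        "mv_y": selected["mv_y"],         # inherited from the selected MVP
        "direction": signaled_direction,  # explicitly signaled, overrides the MVP
        "ref_idx": signaled_ref_idx,      # explicitly signaled, overrides the MVP
    }

mvp_list = [{"mv_x": 4, "mv_y": -2, "direction": "forward", "ref_idx": 0}]
print(decode_amvp_mode(mvp_list, 0, "backward", 3))
# {'mv_x': 4, 'mv_y': -2, 'direction': 'backward', 'ref_idx': 3}
```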

With the development of the various coding standards, even more efficient representations of motion vectors have been developed. For example, proposals relating to the emerging HEVC standard have set forth ways by which the MVP index may be compressed through a process referred to as "pruning" or "MVP pruning." In performing this pruning process, video encoder 20 constructs the list of five MVPs in the defined manner and then prunes, or removes, any redundant MVPs. That is, video encoder 20 may remove any MVPs that have the same amplitude in both the X and Y components and that reference the same reference frame, where such MVPs are considered "redundant MVPs" in this disclosure. Alternatively, video encoder 20 may add only "unique" MVPs to the list, where "unique" means that these MVPs have amplitudes in the X and Y directions that differ from, and/or reference a different reference frame than, all other MVPs already included in the list. Whether the pruning occurs after adding to the list or while creating the list, the pruning process may reduce the size of the list, with the result that fewer bits may be used to signal or otherwise specify the selected one of the MVPs, because a shorter list generally requires a smaller number of bits to represent the largest index value.

For example, assume for purposes of illustration that none of the five MVPs is pruned. In this case, the video encoder may signal the index into this list of five MVPs using a truncated unary code comprising at most four bits to indicate the particular MVP to be selected. In the proposals, the truncated unary code used to signal selection of the fifth MVP in the list of five MVPs is 1111, the truncated unary code used to signal selection of the fourth MVP in the list of five MVPs is 1110, the truncated unary code used to signal selection of the third MVP in the list of five MVPs is 110, the truncated unary code used to signal selection of the second MVP in the list of five MVPs is 10, and the truncated unary code used to signal selection of the first MVP in the list of five MVPs is 0. However, if the MVP list can be pruned to three MVPs (meaning that two of the MVPs are redundant), video encoder 20 may use a truncated unary code that consumes at most two bits (e.g., where the code 11 may be used to signal the third MVP), thereby potentially saving a bit compared to when pruning is not used or is not possible (e.g., when there are no redundant MVPs) and the fifth or fourth MVP in the list of five MVPs is selected. To some extent, therefore, the code depends on the size of the MVP list, where a smaller MVP list (meaning a list with fewer MVPs) results in a smaller code (meaning the code requires fewer bits to represent the selected MVP from the pruned MVP list).
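The truncated unary code in this example can be reproduced with the short sketch below: each index is coded as that many '1' bits followed by a terminating '0', except for the last index in the list, which omits the terminator, so the code length depends on the size of the (pruned) list.

```python
def truncated_unary_encode(index, list_size):
    """Encode an MVP index given the number of candidates in the list."""
    if index == list_size - 1:
        return "1" * index          # last candidate: no terminating '0'
    return "1" * index + "0"

# With a full list of five candidates (indices 0..4):
print([truncated_unary_encode(i, 5) for i in range(5)])
# ['0', '10', '110', '1110', '1111']

# After pruning the list down to three candidates, the third candidate
# costs only two bits instead of three:
print([truncated_unary_encode(i, 3) for i in range(3)])
# ['0', '10', '11']
```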

Although pruning may improve coding efficiency by reducing the length of the code used to signal the index of the selected MVP in the MVP list, this pruning may also affect the ability of video decoder 30 to successfully parse the bitstream. Because the code depends on the size of the pruned MVP list, video decoder 30 needs to know the number of MVPs in the pruned MVP list. However, when the reference frame in which the temporally co-located MVP resides is lost, that temporally co-located MVP is unavailable and video decoder 30 cannot determine whether this MVP is unique or redundant. Consequently, video decoder 30 cannot determine whether the pruned MVP list includes this temporally co-located MVP, and therefore cannot determine the size of the pruned MVP list. Unable to determine the size of the pruned MVP list, video decoder 30 cannot determine the maximum length of the code, which in turn prevents video decoder 30 from parsing the code from the bitstream.

In accordance with the techniques described in this disclosure, video encoder 20 may specify MVPs in a potentially robust and efficient manner by pruning redundant spatial MVPs while excluding the temporally co-located MVP from the pruning process. In other words, video encoder 20 may implement the techniques described in this disclosure to form an intermediate MVP list that includes only the spatial MVPs, perform pruning with respect to this intermediate MVP list, and then combine the temporally co-located MVP with the pruned intermediate MVP list to form the pruned MVP list. In this way, loss of the reference frame that specifies the temporally co-located MVP may not prevent parsing of the bitstream, as is common in conventional systems, while still maintaining at least some of the coding efficiency gains achieved through use of the pruning process.

To illustrate, video encoder 20 first determines the spatial candidate motion vectors associated with the current portion (e.g., a CU) of the current video frame. The spatial candidate motion vectors comprise neighboring motion vectors determined for neighboring PUs, associated with the corresponding CU, that neighbor the current PU. Typically, these neighboring PUs are positioned adjacent to the current PU to the left, above-left, directly above, and above-right, as shown in more detail with respect to the example of FIG. 6. Video encoder 20 uses these spatial candidate motion vectors because they have already been determined for those blocks. Given that video encoder 20 typically performs motion estimation/compensation from top to bottom and from left to right, video encoder 20 has yet to compute motion vectors for any blocks positioned directly to the right of or directly below the current PU. However, while described with respect to these spatial motion vectors, the techniques may be implemented in a video encoder 20 that performs motion estimation/compensation in a different order (e.g., top to bottom, right to left). Moreover, the techniques may be implemented with respect to more or fewer spatial or temporal motion vectors.

After determining these spatial motion vectors, video encoder 20 then prunes the spatial candidate motion vectors to remove duplicates among the spatial candidate motion vectors. Video encoder 20 may identify a duplicate spatial candidate motion vector as any of the candidate spatial motion vectors that has the same amplitude for both the x-axis and y-axis components of the candidate motion vector and that is from the same reference frame. Video encoder 20 performs pruning either by removing duplicates from a list, which may be referred to as an intermediate list of spatial candidate motion vectors, or by adding a candidate spatial motion vector to this intermediate list only upon determining that the candidate spatial motion vector to be added is not a duplicate.

After pruning the spatial candidate motion vectors in this manner, video encoder 20 may then determine a temporal candidate motion vector for the current PU of the current video frame. The temporal candidate motion vector comprises a motion vector determined for a PU of a reference video frame that is co-located with the current PU of the current video frame. Video encoder 20 may then select either the temporal candidate motion vector or one of the spatial candidate motion vectors remaining after the pruning process as the selected candidate motion vector. Video encoder 20 then signals the selected candidate motion vector in the bitstream.
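
By way of illustration only, the following C++ sketch (with hypothetical type and function names that do not appear in this disclosure or in any reference software) shows the list construction described above: only the spatial candidates participate in the duplicate check, and the temporal candidate is appended afterwards, so the final list size never depends on whether the co-located reference data was received.

    #include <optional>
    #include <vector>

    // Hypothetical representation of a candidate motion vector predictor (MVP).
    struct Mvp {
      int mvX = 0;    // horizontal component
      int mvY = 0;    // vertical component
      int refIdx = 0; // index of the reference frame the vector points into
    };

    // Two candidates are treated as duplicates when both components and the
    // reference frame match, per the duplicate test described above.
    static bool isDuplicate(const Mvp& a, const Mvp& b) {
      return a.mvX == b.mvX && a.mvY == b.mvY && a.refIdx == b.refIdx;
    }

    // Build the pruned MVP list: prune only the spatial candidates, then append
    // the temporal (co-located) candidate, if any, without comparing it.
    std::vector<Mvp> buildPrunedMvpList(const std::vector<Mvp>& spatialCandidates,
                                        const std::optional<Mvp>& temporalCandidate) {
      std::vector<Mvp> list; // intermediate list holding spatial MVPs only
      for (const Mvp& cand : spatialCandidates) {
        bool duplicate = false;
        for (const Mvp& kept : list) {
          if (isDuplicate(cand, kept)) { duplicate = true; break; }
        }
        if (!duplicate) list.push_back(cand); // add only non-duplicate spatial MVPs
      }
      // The temporal candidate never enters the duplicate check, so the list size
      // equals the number of unique spatial MVPs plus one temporal entry.
      if (temporalCandidate) list.push_back(*temporalCandidate);
      return list;
    }

Because the temporal entry is reserved unconditionally, both encoder and decoder arrive at the same list size even when the temporal motion data itself cannot be recovered.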

In some instances, video encoder 20 may determine whether each of the determined spatial candidate motion vectors was itself spatially predicted or temporally predicted. In other words, the determined spatial candidate motion vectors may themselves have been predicted temporally from a co-located block in a reference frame or spatially from blocks neighboring the block for which each of the spatial candidate motion vectors was determined. In response to this determination, video encoder 20 may further remove one or more of the determined spatial candidate motion vectors from the pruning process. For example, video encoder 20 may remove from the pruning process those spatial candidate motion vectors determined to have been themselves temporally predicted, because these temporally predicted spatial candidate motion vectors may be unavailable to the decoder if the portion of the reference frame from which they were predicted is lost. Video encoder 20 may then select the temporal candidate motion vector, one of the temporally predicted spatial candidate motion vectors, or one of the spatially predicted spatial candidate motion vectors remaining after pruning, and signal this selected candidate motion vector in the bitstream.

Alternatively, rather than removing the temporally predicted spatial candidate motion vectors from the pruning process, video encoder 20 may replace these temporally predicted spatial candidate motion vectors with a default candidate motion vector that defines default motion information. This default motion vector information may include, for example, a motion vector amplitude, a prediction direction identifying whether the reference frame is temporally before or after the current frame, and a reference index identifying the reference frame. Video encoder 20 may determine this default motion vector information by taking the average of those spatially predicted spatial candidate motion vectors that are available, by selecting the first available spatially predicted spatial candidate motion vector, or by using default motion vector information statically configured in both video encoder 20 and video decoder 30, to name a few examples.
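
One possible realization of the default derivation mentioned above is sketched below in C++ (the averaging and fallback rules are illustrative assumptions, not requirements of this disclosure); the only property that matters is that encoder and decoder apply the identical rule.

    #include <vector>

    struct Mvp { int mvX = 0; int mvY = 0; int refIdx = 0; }; // same shape as the earlier sketch

    // Derive default motion vector information from the available spatially
    // predicted candidates: average them, fall back to the first available one,
    // or fall back to a statically configured zero vector when none are available.
    Mvp deriveDefaultMvp(const std::vector<Mvp>& availableSpatial, bool useAverage) {
      if (availableSpatial.empty()) {
        return Mvp{}; // statically configured default: zero vector, reference index 0
      }
      if (!useAverage) {
        return availableSpatial.front(); // first available spatially predicted MVP
      }
      long sumX = 0, sumY = 0;
      for (const Mvp& m : availableSpatial) {
        sumX += m.mvX;
        sumY += m.mvY;
      }
      Mvp avg;
      avg.mvX = static_cast<int>(sumX / static_cast<long>(availableSpatial.size()));
      avg.mvY = static_cast<int>(sumY / static_cast<long>(availableSpatial.size()));
      avg.refIdx = availableSpatial.front().refIdx; // assumption: reuse the first reference index
      return avg;
    }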

By eliminating from the pruning process those spatial candidate motion vectors that may be lost or otherwise unavailable to video decoder 30 (e.g., due to transmission errors affecting the compressed video data or storage errors at video encoder 20 or video decoder 30), video encoder 20 may signal the selected candidate motion vector in a manner that enables video decoder 30 to properly parse the bitstream even when these temporally predicted spatial candidate motion vectors are lost or become unavailable. Likewise, in the alternative, by replacing the temporally predicted spatial candidate motion vectors with default candidate motion vectors, video encoder 20 may signal the selected candidate motion vector in a manner that enables video decoder 30 to properly parse the bitstream even when these temporally predicted spatial candidate motion vectors are lost or become unavailable.

Typically, video encoder 20 signals the selected candidate motion vector using a unary code that represents an index of the selected candidate motion vector as arranged in a list. Video encoder 20 may arrange the temporal candidate motion vector and the spatial candidate motion vectors remaining after the pruning process in a set or defined manner (e.g., from highest amplitude to lowest amplitude, from lowest amplitude to highest amplitude, or the temporal motion vector first followed by the remaining spatial motion vectors ordered from highest to lowest or lowest to highest amplitude, and so on), thereby forming a list of candidate motion vectors. Alternatively, video encoder 20 may signal an identifier indicating the manner in which the motion vectors are arranged in the list. In any event, video encoder 20 then identifies one of the candidate motion vectors stored to this list, encoding the index of the selected one of the candidate motion vectors stored to this list using a unary code in the manner described above.
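
The following sketch shows, in C++, what the unary and truncated unary encodings of the index look like; the bit patterns follow the generic definitions of those codes rather than any particular normative binarization table.

    #include <string>

    // Unary code: index n is written as n ones followed by a terminating zero.
    std::string encodeUnary(unsigned index) {
      return std::string(index, '1') + '0';
    }

    // Truncated unary code: when the decoder knows the list holds listSize
    // entries, the largest index (listSize - 1) needs no terminating zero.
    std::string encodeTruncatedUnary(unsigned index, unsigned listSize) {
      if (listSize <= 1) return "";                       // single candidate: index is inferred
      if (index + 1 == listSize) return std::string(index, '1');
      return std::string(index, '1') + '0';
    }

With a truncated unary code, the decoder must know the list size to recognize that the largest index carries no terminating zero, which is exactly why the parsing problem described earlier arises when the list size cannot be determined.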

Video decoder 30 receives this bitstream, decodes the index, and forms the intermediate list of spatial candidate motion vectors (if available). As noted above, one or more of the spatial candidate motion vectors may be unavailable when video encoder 20 used motion vector prediction to encode the motion vector of a neighboring block and selected a temporal motion vector, and the reference frame defining this temporal motion vector is lost (e.g., due to memory corruption, a bus error, or a transmission error). Alternatively, one or more of the spatial candidate motion vectors may be unavailable when video encoder 20 used motion vector prediction to encode the motion vector of this neighboring PU and selected one of the spatial motion vectors that was itself motion vector predicted from a temporal motion vector, and the reference frame defining this temporal motion vector is lost (e.g., due to memory corruption, a bus error, or a transmission error). Video decoder 30 may overcome this problem by removing the unavailable temporally predicted candidate motion vectors from the pruning process or, in the alternative, by replacing these unavailable temporally predicted spatial candidate motion vectors with default candidate motion vectors. In this respect, video decoder 30 implements the techniques described above with respect to video encoder 20 in a substantially similar manner so as to properly parse the signaled selected candidate motion vector from the bitstream.

In any event, even assuming one or more candidate motion vectors are lost, the techniques enable the MVP to be signaled in a manner that facilitates parsing of the bitstream. By ensuring that the temporal candidate motion vector is always present in the list after pruning, video encoder 20 ensures that video decoder 30 can determine the number of available motion vectors and thereby parse the index from the bitstream. Likewise, by ensuring that temporally predicted spatial candidate motion vectors are always present in the list or are replaced with default candidate motion vectors that video decoder 30 can always regenerate, video encoder 20 ensures that video decoder 30 can determine the number of available motion vectors and thereby parse the index from the bitstream. In this way, even if the slice storing the temporal candidate motion vector and/or the temporally predicted spatial candidate motion vectors is lost, video decoder 30 may still parse the bitstream, regardless of whether a unary code is used. Specifically, video decoder 30 may parse the bitstream knowing that the temporal candidate motion vector and/or the temporally predicted spatial candidate motion vectors are always included in the MVP list and are never pruned from the MVP list. In the alternative in which the temporally predicted spatial candidate motion vectors are replaced by default candidate motion vectors, video encoder 20 effectively ensures that such temporally predicted spatial candidate motion vectors are not lost, because video decoder 30 is configured so as always to be able to determine these motion vectors using the same techniques that video encoder 20 performs to determine the default candidate motion vector.

To illustrate the case in which the temporal candidate motion vector is lost, consider spatial candidate motion vectors having amplitudes of 1, 1, 1, and 1 and a temporal candidate motion vector having an amplitude of -1. Video decoder 30 may implement the techniques to form a list initially containing only the spatial candidate motion vectors (which may be referred to as the MVP list), such that the MVP list is 1, 1, 1, and 1. The decoder then prunes this spatial-only MVP list, such that the MVP list is defined as 1. The decoder then adds the temporal candidate motion vector to the MVP list, such that the MVP list is defined as -1 and 1. The encoder may then signal an mvp_idx of 0 or 1 to indicate which of these motion vectors is selected (or an mvp_idx of 0 or 10 if a truncated unary code is not used). With respect to the unary codes described above, the techniques of this disclosure remove the possibility of having to infer that only one candidate motion vector is available after pruning, because there will always be at least one spatial candidate motion vector and the temporal candidate motion vector.
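
Stringing the earlier sketches together reproduces this example (treating the stated amplitudes as both vector components purely for illustration, and keeping the hypothetical names introduced above):

    #include <cassert>
    #include <optional>
    #include <vector>

    // Reuses the Mvp, buildPrunedMvpList and encodeTruncatedUnary sketches above.
    void mvpListExample() {
      std::vector<Mvp> spatial = {{1, 1, 0}, {1, 1, 0}, {1, 1, 0}, {1, 1, 0}};
      std::optional<Mvp> temporal = Mvp{-1, -1, 0};

      std::vector<Mvp> list = buildPrunedMvpList(spatial, temporal);
      assert(list.size() == 2); // one unique spatial entry plus the temporal entry

      // Either entry can be signaled with a single bin under the truncated code.
      assert(encodeTruncatedUnary(0, static_cast<unsigned>(list.size())) == "0");
      assert(encodeTruncatedUnary(1, static_cast<unsigned>(list.size())) == "1");
    }

The ordering of the two entries is a matter of convention (the example above places the temporal entry first, while the sketch appends it last); what matters for parsing is only that the convention and the list size are fixed and shared by encoder and decoder.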

In this way, the techniques enable a video encoder to specify MVPs in a potentially robust and efficient manner by pruning redundant spatial MVPs while not considering the temporally co-located MVP during the pruning process. In other words, the techniques form an intermediate MVP list that contains only spatial MVPs, perform pruning with respect to this intermediate MVP list, and then add the temporally co-located MVP to the pruned intermediate MVP list to form the pruned MVP list. In this way, loss of the reference frame that specifies the temporally co-located MVP may not prevent parsing of the bitstream, as is common in conventional systems, while still maintaining the coding efficiency gains achieved through use of the pruning process.

In some instances, the techniques may be applied in other contexts. For example, the fourth version of the HEVC test model (HM4.0) proposes first pruning the MVPs and then adding additional MVPs if the total number of MVPs remaining after pruning is less than five. In other words, HM4.0 prunes five MVPs (i.e., one temporal and four spatial) to produce a pruned MVP list. If the number of MVPs in this pruned MVP list is less than five, HM4.0 adds non-redundant MVPs until the total number of MVPs in the pruned list equals five. These non-redundant MVPs may be selected from other spatial or temporal blocks, or may be generated based on the MVPs in the pruned MVP list (e.g., selecting the y-component of one MVP in the pruned MVP list and the x-component of another, different MVP in the pruned MVP list). In this context, the video encoder may implement the techniques described in this disclosure to select the additional non-redundant MVPs such that only spatial MVPs are selected and/or used to generate these additional non-redundant MVPs.
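
A C++ sketch of a fill step of this kind is shown below, restricted so that only spatial MVPs, or component combinations of MVPs already in the pruned list, are used for padding; the specific padding order here is an illustrative assumption and is not the normative HM4.0 procedure.

    #include <cstddef>
    #include <vector>

    struct Mvp { int mvX = 0; int mvY = 0; int refIdx = 0; }; // same shape as the earlier sketch

    static bool sameMvp(const Mvp& a, const Mvp& b) {
      return a.mvX == b.mvX && a.mvY == b.mvY && a.refIdx == b.refIdx;
    }

    // Pad a pruned MVP list up to kTargetSize entries using only spatial MVPs:
    // first additional spatial candidates, then x/y component combinations of
    // entries already in the list, and finally zero vectors as a last resort.
    void padMvpList(std::vector<Mvp>& list,
                    const std::vector<Mvp>& extraSpatialCandidates,
                    std::size_t kTargetSize = 5) {
      auto addIfNew = [&list](const Mvp& cand) {
        for (const Mvp& kept : list) if (sameMvp(cand, kept)) return;
        list.push_back(cand);
      };
      for (const Mvp& cand : extraSpatialCandidates) {
        if (list.size() >= kTargetSize) return;
        addIfNew(cand);
      }
      // Combine the y component of one listed MVP with the x component of another.
      for (std::size_t i = 0; i < list.size() && list.size() < kTargetSize; ++i) {
        for (std::size_t j = 0; j < list.size() && list.size() < kTargetSize; ++j) {
          if (i == j) continue;
          addIfNew(Mvp{list[j].mvX, list[i].mvY, list[i].refIdx});
        }
      }
      while (list.size() < kTargetSize) list.push_back(Mvp{}); // zero-vector padding
    }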

By selecting only spatial MVPs, or by using the existing spatial MVPs remaining after pruning to generate these additional non-redundant MVPs, the video encoder may ensure that the video decoder can properly determine the selected one of the MVPs. That is, by always having five MVPs, the video encoder ensures that the video decoder can always parse the MVP index from the bitstream; however, if the temporal MVP is lost, the video decoder may not be able to accurately construct the MVP list, because when the temporal MVP is lost the video decoder cannot determine the order of the MVPs relative to one another. The techniques described in this disclosure may reduce or potentially eliminate the impact of a lost temporal MVP by not selecting, as the additional non-redundant MVPs, any temporal MVP or any spatial MVP that was itself predicted from a temporal MVP.

The techniques for specifying a motion vector predictor described with respect to the examples of this disclosure may be applied to video coding in support of any of a variety of multimedia applications, such as over-the-air television broadcasts, cable television transmissions, satellite television transmissions, streaming video transmissions (e.g., via the Internet), encoding of digital video for storage on a data storage medium, decoding of digital video stored on a data storage medium, or other applications. In some examples, system 10 may be configured to support one-way or two-way video transmission for applications such as video streaming, video playback, video broadcasting, and/or video telephony.

Although not shown in FIG. 1, in some aspects video encoder 20 and video decoder 30 may each be integrated with an audio encoder and decoder, and may include appropriate multiplexer-demultiplexer (MUX-DEMUX) units, or other hardware and software, to handle encoding of both audio and video in a common data stream or in separate data streams. If applicable, in some examples the MUX-DEMUX units may conform to the ITU H.223 multiplexer protocol, or other protocols such as the user datagram protocol (UDP).

Video encoder 20 and video decoder 30 each may be implemented as any of a variety of suitable encoder circuitry, such as one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware, or any combinations thereof. When the techniques are implemented partially in software, a device may store instructions for the software in a suitable, non-transitory computer-readable medium and execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Each of video encoder 20 and video decoder 30 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (CODEC) in a respective device.

FIG. 2 is a block diagram illustrating an example of a video encoder 20 that may implement techniques for specifying a motion vector predictor. Video encoder 20 may perform intra- and inter-coding of blocks within video frames, including macroblocks, or partitions or sub-partitions of macroblocks. Intra-coding relies on spatial prediction to reduce or remove spatial redundancy in video within a given video frame. Inter-coding relies on temporal prediction to reduce or remove temporal redundancy in video within adjacent frames of a video sequence. Intra-mode (I-mode) may refer to any of several spatial-based compression modes, and inter-modes, such as uni-directional prediction (P-mode) or bi-directional prediction (B-mode), may refer to any of several temporal-based compression modes. Although components for inter-mode encoding are depicted in FIG. 2, it should be understood that video encoder 20 may further include components for intra-mode encoding. However, such components are not illustrated for the sake of brevity and clarity.

As shown in FIG. 2, video encoder 20 receives a current video block within a video frame to be encoded. In the example of FIG. 2, video encoder 20 includes motion compensation unit 44, motion estimation unit 42, memory 64, summer 50, transform unit 52, quantization unit 54, and entropy coding unit 56. For video block reconstruction, video encoder 20 also includes inverse quantization unit 58, inverse transform unit 60, and summer 62. A deblocking filter (not shown in FIG. 2) may also be included to filter block boundaries to remove blockiness artifacts from the reconstructed video. If desired, the deblocking filter would typically filter the output of summer 62. Although described as including memory 64, which typically refers to random access memory (RAM), dynamic RAM (DRAM), static RAM (SRAM), flash memory, or other persistent or non-persistent chip-based storage media, any type of non-transitory computer-readable medium may be utilized, including hard drives, optical drives, disk drives, and the like.

During the encoding process, video encoder 20 receives a video frame or slice to be coded. The frame or slice may be divided into multiple video blocks. Motion estimation unit 42 and motion compensation unit 44 perform inter-predictive coding of a received video block relative to one or more blocks in one or more reference frames to provide temporal compression. Intra-prediction unit 46 may also perform intra-predictive coding of a received video block relative to one or more neighboring blocks in the same frame or slice as the block to be coded to provide spatial compression.

As further shown in the example of FIG. 2, video encoder 20 also includes mode select unit 40. Mode select unit 40 may select one of the coding modes (intra or inter), e.g., based on error results, and provide the resulting intra- or inter-coded block to summer 50 to generate residual block data and to summer 62 to reconstruct the encoded block for use as a reference frame.

Motion estimation unit 42 and motion compensation unit 44 may be highly integrated, but are illustrated separately for conceptual purposes. Motion estimation is the process of generating motion vectors that estimate the motion of video blocks. A motion vector, for example, may indicate the displacement of a predictive block within a predictive reference frame (or other coded unit) relative to the current block being coded within the current frame (or other coded unit). A predictive block is a block that is found to closely match the block to be coded in terms of pixel difference, which may be determined by a sum of absolute differences (SAD), a sum of squared differences (SSD), or other difference metrics. A motion vector may also indicate the displacement of a partition of a macroblock. Motion compensation may involve fetching or generating the predictive block based on the motion vector determined by motion estimation. Again, motion estimation unit 42 and motion compensation unit 44 may be functionally integrated in some examples.

Motion estimation unit 42 calculates a motion vector for a video block of an inter-coded frame by comparing the video block of the inter-coded frame to video blocks of a reference frame in memory 64. Motion compensation unit 44 may also interpolate sub-integer pixels of the reference frame (e.g., an I-frame or a P-frame). The emerging HEVC standard (and the ITU H.264 standard) stores reference frames in one or more list data structures (which are commonly referred to as "lists"). Accordingly, the data stored in memory 64 may also be considered lists. Motion estimation unit 42 compares blocks of one or more reference frames (or lists) from memory 64 to a block to be encoded of a current frame (e.g., a P-frame or a B-frame). When the reference frames in memory 64 include values for sub-integer pixels, a motion vector calculated by motion estimation unit 42 may refer to a sub-integer pixel location of the reference frame. Motion estimation unit 42 sends the calculated motion vector to entropy coding unit 56 and motion compensation unit 44. The reference frame block identified by a motion vector (which may comprise a CU) may be referred to as a predictive block. Motion compensation unit 44 calculates error values for the predictive block of the reference frame.

Motion compensation unit 44 may calculate prediction data based on the predictive block. Video encoder 20 forms a residual video block by subtracting the prediction data from motion compensation unit 44 from the original video block being coded. Summer 50 represents the component that performs this subtraction operation. Transform unit 52 applies a transform, such as a discrete cosine transform (DCT) or a conceptually similar transform, to the residual block, producing a video block comprising residual transform coefficient values. Transform unit 52 may perform other transforms that are conceptually similar to DCT, such as those defined by the H.264 standard. Wavelet transforms, integer transforms, sub-band transforms, or other types of transforms could also be used. In any case, transform unit 52 applies the transform to the residual block, producing a block of residual transform coefficients. The transform may convert the residual information from a pixel value domain to a transform domain (e.g., a frequency domain). Quantization unit 54 quantizes the residual transform coefficients to further reduce bit rate. The quantization process may reduce the bit depth associated with some or all of the coefficients. The degree of quantization may be modified by adjusting a quantization parameter.

Following quantization, entropy coding unit 56 entropy codes the quantized transform coefficients. For example, entropy coding unit 56 may perform content adaptive variable length coding (CAVLC), context adaptive binary arithmetic coding (CABAC), or another entropy coding technique. Following the entropy coding by entropy coding unit 56, the encoded video may be transmitted to another device or archived for later transmission or retrieval. In the case of context adaptive binary arithmetic coding, context may be based on neighboring macroblocks.

In some cases, entropy coding unit 56 or another unit of video encoder 20 may be configured to perform other coding functions in addition to entropy coding. For example, entropy coding unit 56 may be configured to determine CBP values for macroblocks and partitions. Also, in some cases, entropy coding unit 56 may perform run-length coding of the coefficients in a macroblock or partition thereof. In particular, entropy coding unit 56 may apply a zig-zag scan or other scan pattern to scan the transform coefficients in a macroblock or partition, and encode runs of zeros for further compression. Entropy coding unit 56 also may construct header information with appropriate syntax elements for transmission in the encoded video bitstream.

Inverse quantization unit 58 and inverse transform unit 60 apply inverse quantization and inverse transformation, respectively, to reconstruct the residual block in the pixel domain, e.g., for later use as a reference block. Motion compensation unit 44 may calculate a reference block by adding the residual block to a predictive block of one of the frames of the reference frame store in memory 64. Motion compensation unit 44 may also apply one or more interpolation filters to the reconstructed residual block to calculate sub-integer pixel values for use in motion estimation. Summer 62 adds the reconstructed residual block to the motion compensated prediction block produced by motion compensation unit 44 to produce a reconstructed video block for storage in the reference frame store of memory 64. The reconstructed video block may be used by motion estimation unit 42 and motion compensation unit 44 as a reference block to inter-code a block in a subsequent video frame.

As noted above, in some instances motion estimation unit 42 may not calculate a motion vector but may instead determine a list of motion vector predictors, four of which are spatial candidate motion vectors and one of which is a temporal candidate motion vector. Typically, motion estimation unit 42 forgoes the motion vector calculation to reduce the computational complexity of motion estimation and thereby increase the speed with which the video data may be encoded, while also reducing power consumption. In accordance with the techniques described in this disclosure, motion estimation unit 42 may determine an intermediate list of four spatial candidate motion vectors (or fewer, upon determining that one or more of the spatial candidate motion vectors were themselves temporally predicted, where such temporally predicted spatial candidate motion vectors are to be removed from the pruning process), prune this list of (potentially only spatially predicted) spatial candidate motion vectors, and add the temporal candidate motion vector (and, potentially, the temporally predicted spatial candidate motion vectors) to the pruned list of (potentially only spatially predicted) spatial candidate motion vectors. Alternatively, motion estimation unit 42 may determine an intermediate list of four spatial candidate motion vectors (when the temporally predicted spatial candidate motion vectors are replaced with default candidate motion vectors), prune this list of spatial candidate motion vectors (including one or more of the determined default candidate motion vectors), and add the temporal candidate motion vector to the pruned list of spatial candidate motion vectors. Motion estimation unit 42 may output this list, including the temporal motion vector candidate and the pruned spatial motion vector candidates, to motion compensation unit 44.

Motion compensation unit 44 may then identify the reference frame block (which, again, may be referred to as a predictive block) for each candidate motion vector included in the list. Motion compensation unit 44 may then calculate prediction data based on the predictive block determined for each of the candidate motion vectors. Video encoder 20 may then determine residual data for each set of prediction data calculated for the corresponding one of the candidate motion vectors, transform the residual data, quantize the transformed residual data, and then entropy code the quantized residual data in the manner described above. Video encoder 20 may then perform the inverse of these operations to decode the entropy-encoded residual data generated with respect to each of the candidate motion vectors remaining after pruning, thereby regenerating reference data in the form of reconstructed video blocks. Mode select unit 40 may analyze each of the reconstructed video blocks generated with respect to each of the candidate motion vectors to select one of the candidate motion vectors. Mode select unit 40 may select the one of the candidate motion vectors that provides the best rate-distortion ratio, via a process commonly referred to as "rate-distortion optimization," which is commonly abbreviated as "RDO."

RDO generally involves comparing a frame, slice, or block that has been reconstructed after being compressed to achieve a certain rate (which typically refers to the bit rate at which the compressed video data comprising the compressed frame, slice, or block may be sent) with the original frame, slice, or block, and determining the amount of distortion between the original frame, slice, or block and the reconstructed frame, slice, or block at the given rate. Mode select unit 40 may encode the same video data using a number of different measures that achieve or attempt to achieve a given rate, performing this rate-distortion optimization process with respect to each of these various measures. In this instance, mode select unit 40 may compare the RD output of each reconstructed video block and select the video block that provides the least amount of distortion at the target rate.
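
As a sketch of this selection step, using the familiar Lagrangian cost J = D + λ·R (a standard formulation, with hypothetical input types rather than anything defined in this disclosure):

    #include <cstddef>
    #include <limits>
    #include <vector>

    // Hypothetical per-candidate measurements produced by the reconstruction pass.
    struct CandidateCost {
      double distortion; // e.g., sum of squared error of the reconstructed block
      double rateBits;   // bits spent on the index, motion data, and residual
    };

    // Pick the candidate index minimizing the Lagrangian cost J = D + lambda * R.
    std::size_t selectMvpIndex(const std::vector<CandidateCost>& costs, double lambda) {
      std::size_t best = 0;
      double bestJ = std::numeric_limits<double>::infinity();
      for (std::size_t i = 0; i < costs.size(); ++i) {
        const double j = costs[i].distortion + lambda * costs[i].rateBits;
        if (j < bestJ) { bestJ = j; best = i; }
      }
      return best;
    }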

Mode select unit 40 may then indicate this selection to motion estimation unit 42, which proceeds to interface with entropy coding unit 56 to inform entropy coding unit 56 of the selection. Typically, motion estimation unit 42 interfaces with entropy coding unit 56 to indicate that motion vector prediction was performed, along with an index identifying the selected candidate motion vector. As noted above, motion estimation unit 42 may arrange the candidate motion vectors in a defined manner (e.g., from highest amplitude to lowest amplitude, from lowest amplitude to highest amplitude, or in any other defined manner). Alternatively, motion estimation unit 42 may also signal to entropy coding unit 56 the manner in which the candidate motion vectors are arranged in this list (which may also be referred to as an MVP list). Entropy coding unit 56 may then encode this index using a unary or truncated unary code, along with any other information that may be necessary to indicate that MVP was performed to encode the motion data. Entropy coding unit 56 may output the encoded index as a syntax element in the bitstream (which may be denoted "mvp_idx"), where the syntax element may be stored or transmitted in the manner described above with respect to the example of FIG. 1.

In some instances, entropy coding unit 56 performs a form of entropy coding referred to as context adaptive binary arithmetic coding (CABAC). In performing CABAC, entropy coding unit 56 may select one of a number of so-called contexts (which are different code tables specified for different contexts so as to more efficiently compress the video data associated with the corresponding context) and encode the compressed residual data in accordance with the code table defined for the selected context. Entropy coding unit 56 may select one of the contexts based on context information, which may include the reference index determined when performing motion vector prediction, the number of unique motion vector candidates, and the prediction direction determined when performing motion vector prediction.

FIG. 3 is a block diagram illustrating an example of a video decoder 30 that decodes an encoded video sequence. In the example of FIG. 3, video decoder 30 includes entropy decoding unit 70, motion compensation unit 72, intra-prediction unit 74, inverse quantization unit 76, inverse transform unit 78, memory 82, and summer 80. In some examples, video decoder 30 may perform a decoding process generally reciprocal to the encoding process described with respect to a video encoder, such as video encoder 20 shown in the examples of FIGS. 1 and 2. While generally reciprocal, video decoder 30 may, in some instances, perform techniques similar to those performed by video encoder 20. In other words, video decoder 30 may perform processes substantially similar to those performed by video encoder 20. Moreover, as described above, video encoder 20 may perform video decoding in the process of performing video encoding. To illustrate, inverse quantization unit 58, inverse transform unit 60, and summer 62 of video encoder 20 may perform operations substantially similar to those of inverse quantization unit 76, inverse transform unit 78, and summer 80 of video decoder 30.

As shown in the example of FIG. 3, entropy decoding unit 70 receives an encoded bitstream which, for purposes of illustration, is assumed to include a unary or truncated unary coded index identifying the selected candidate motion vector (where, again, these candidate motion vectors may be referred to as motion vector predictors or MVPs). In performing a process generally reciprocal to that of entropy coding unit 56 of video encoder 20, entropy decoding unit 70 may receive a syntax element or other coded data for the current PU indicating that motion vector prediction was performed to determine the motion vector for the current PU. In response to this syntax element or other coded data, entropy decoding unit 70 implements the techniques described in this disclosure to determine the number of candidate motion vectors remaining after pruning so as to properly parse the unary or truncated unary code from the bitstream.

To determine the number of candidate motion vectors, entropy decoding unit 70 may interface with motion compensation unit 72, instructing motion compensation unit 72 to determine the number of candidate motion vectors in accordance with the techniques described in this disclosure. Motion compensation unit 72 retrieves the spatial candidate motion vectors for the PUs neighboring the current PU, as well as the temporal candidate motion vector for the co-located PU in the reference frame. Entropy decoding unit 70 may provide the reference frame identified for the current PU to motion compensation unit 72 (typically as another syntax element in the bitstream). Alternatively, motion compensation unit 72 may be configured, with respect to AMVP or merge mode, to retrieve the temporal candidate motion vector from a reference frame identified in a set manner (e.g., one, two, or any other number of frames backward or forward from the current frame in which the current PU resides).

Motion compensation unit 72 may then, in a manner substantially similar to that described above with respect to motion compensation unit 44 of video encoder 20, construct an intermediate list that includes four spatial candidate motion vectors (or fewer, upon determining that one or more of the spatial candidate motion vectors were themselves temporally predicted, where such temporally predicted spatial candidate motion vectors are removed from the pruning process), prune this list of (potentially only spatially predicted) spatial candidate motion vectors, and combine the temporal candidate motion vector (and, potentially, the temporally predicted spatial candidate motion vectors) with this pruned list of (potentially only spatially predicted) spatial candidate motion vectors. Alternatively, motion compensation unit 72 may determine an intermediate list of four spatial candidate motion vectors (with the temporally predicted spatial candidate motion vectors replaced by default candidate motion vectors), prune this list of spatial candidate motion vectors (including one or more of the determined default candidate motion vectors), and combine the temporal candidate motion vector with the pruned list of spatial candidate motion vectors, again in a manner substantially similar to that described above with respect to motion compensation unit 44 of video encoder 20. In each instance, motion compensation unit 72 outputs this list of candidate motion vectors, determined after performing pruning, as the pruned MVP list. After generating this pruned MVP list, motion compensation unit 72 counts the number of candidate motion vectors in the list and signals this number to entropy decoding unit 70. Entropy decoding unit 70 may then properly parse the unary or truncated unary coded index from the provided bitstream for the reasons noted above.
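
A sketch of the decoder-side parse that this count makes possible is shown below; the bit-reader interface is a hypothetical stand-in for however bins are actually read (e.g., CABAC-decoded) from the bitstream.

    #include <cstddef>
    #include <utility>
    #include <vector>

    // Minimal stand-in for a bitstream reader; a real decoder would read CABAC bins.
    class BitReader {
     public:
      explicit BitReader(std::vector<int> bits) : bits_(std::move(bits)) {}
      int readBit() { return pos_ < bits_.size() ? bits_[pos_++] : 0; }
     private:
      std::vector<int> bits_;
      std::size_t pos_ = 0;
    };

    // Parse a truncated unary coded MVP index given the pruned list size.
    unsigned parseMvpIndex(BitReader& reader, unsigned listSize) {
      if (listSize <= 1) return 0; // only one candidate: the index is inferred
      unsigned index = 0;
      // Read '1' bins until a '0' terminator or until the largest index is reached.
      while (index + 1 < listSize && reader.readBit() == 1) ++index;
      return index;
    }

Because the pruned list size is known before the index is read, the parse terminates deterministically even when the temporal motion data itself was never recovered.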

After parsing the unary or truncated unary coded index, entropy decoding unit 70 may then decode the unary or truncated unary coded index to generate an index into the MVP list. Entropy decoding unit 70 then passes this index to motion compensation unit 72, which selects, from the pruned MVP list, the one of the candidate motion vectors identified by the index. For an inter-coded block, motion compensation unit 72 may then generate inter-prediction data based on the identified motion vector. Motion compensation unit 72 may use this motion vector to identify a predictive block in the reference frames stored to memory 82. For an intra-coded block, intra-prediction unit 74 may use an intra-prediction mode received in the bitstream to form a predictive block from spatially adjacent blocks. Inverse quantization unit 76 inverse quantizes (i.e., de-quantizes) the quantized block coefficients provided in the bitstream and decoded by entropy decoding unit 70. The inverse quantization process may include a conventional process, e.g., as defined by the H.264 decoding standard. The inverse quantization process may also include use of the quantization parameter QPY, calculated by summer 50 for each macroblock, to determine the degree of quantization and, likewise, the degree of inverse quantization that should be applied.

Inverse transform unit 78 applies an inverse transform (e.g., an inverse DCT, an inverse integer transform, or a conceptually similar inverse transform process) to the transform coefficients in order to produce residual blocks in the pixel domain. Motion compensation unit 72 produces motion compensated blocks, possibly performing interpolation based on interpolation filters. Identifiers for the interpolation filters to be used for motion estimation with sub-pixel precision may be included in the syntax elements. Motion compensation unit 72 may use interpolation filters as used by video encoder 20 during encoding of the video block to calculate interpolated values for sub-integer pixels of a reference block. Motion compensation unit 72 may determine the interpolation filters used by video encoder 20 according to the received syntax information and use the interpolation filters to produce predictive blocks.

Motion compensation unit 72 uses some of the syntax information to determine the sizes of the CUs used to encode the frame(s) of the encoded video sequence, partition information describing how each CU of a frame of the encoded video sequence is partitioned, modes indicating how each CU is encoded, one or more reference frames (or lists) for each inter-encoded CU, and other information used to decode the encoded video sequence.

Summer 80 sums the residual blocks with the corresponding prediction blocks generated by motion compensation unit 72 or the intra-prediction unit to form decoded blocks. If desired, a deblocking filter may also be applied to filter the decoded blocks in order to remove blockiness artifacts. The decoded video blocks are then stored in the reference frame store of memory 82 (which may be referred to as a decoded picture buffer in the HEVC standard), which provides reference blocks for subsequent motion compensation and also produces decoded video for presentation on a display device (such as display device 32 of FIG. 1).

In some instances, the temporal candidate motion vector may not be available, such as when the slice specifying the temporal candidate motion vector is lost (i.e., as one example, not recovered or received in the encoded bitstream). When this temporal candidate motion vector is unavailable, motion compensation unit 72 may set this temporal candidate motion vector to a default value or otherwise determine default motion vector information for this temporal candidate motion vector. In some instances, this default motion vector information for the temporal candidate motion vector may be reconstructed depending on whether the reference frame was intra-coded. When the reference frame is determined to have been intra-coded, motion compensation unit 72 may derive the default motion vector information of the default candidate motion vector based on a spatial motion vector determined for the portion of the reference frame located at the same position in the reference frame as the position of the current portion in the current frame. Likewise, one or more of the temporally predicted spatial candidate motion vectors may be unavailable or lost, and the default motion vector information of the default candidate motion vector may be derived based on a spatial motion vector determined for the portion of the reference frame located at the same position in the reference frame as the position of the current portion in the current frame.

Various aspects of the techniques set forth in this disclosure may also address issues that arise when performing CABAC, or any other context-dependent lossless statistical coding process, because a spatial candidate motion vector was itself predicted from a lost or missing temporal motion vector. In accordance with these aspects of the techniques, entropy decoding unit 70 may overcome this problem by disabling the lost spatial candidate motion vector. Alternatively, entropy decoding unit 70 may replace this lost spatial candidate motion vector with default motion information. Entropy decoding unit 70 may interface with motion compensation unit 72 to determine this default motion information. This default motion information may specify a default motion vector, a prediction direction, and a reference index. In some instances, motion compensation unit 72 generates this replacement motion information based on the slice type (which indicates whether the current slice was intra-predicted or inter-predicted, and so on), the current CU depth (e.g., the depth, within the quadtree hierarchy described above, of the CU in which the current PU resides), the current PU size, or any other available information. Motion compensation unit 72 may then provide this default motion information to entropy decoding unit 70. By utilizing this default motion information, entropy decoding unit 70 may still perform the CABAC process.

In some examples, the techniques described in this disclosure may also overcome issues that arise when motion compensation unit 72 cannot determine whether a spatial candidate motion vector was itself predicted from a spatial candidate motion vector or from a temporal candidate motion vector (e.g., when that temporal candidate motion vector has been lost). In these instances, when one of the spatial candidate motion vectors is unavailable, motion compensation unit 72 may implement the techniques of this disclosure to disable spatial motion vector prediction (and thereby utilize the co-located temporal candidate motion vector regardless of what was signaled by the encoder). Alternatively, motion compensation unit 72 may determine the default motion information in the manner described above.

The techniques may further overcome issues that arise when motion compensation unit 72 cannot determine whether a spatial candidate motion vector was itself predicted from a temporal candidate motion vector (e.g., when that temporal candidate motion vector has been lost). In these instances, when one of the spatial candidate motion vectors is unavailable, motion compensation unit 72 may implement the techniques of this disclosure to disable spatial motion vector prediction (and thereby utilize the co-located temporal candidate motion vector regardless of what was signaled by the encoder). Alternatively, motion compensation unit 72 may determine the default motion information in the manner described above, either performing pruning with respect to this default motion information of the default candidate motion vector or removing this default candidate motion vector from the pruning process entirely (while specifying it separately so as to enable parsing of the bitstream).

As noted above, there are two types of motion vector prediction: merge mode and AMVP. For merge mode, motion compensation unit 72 determines a motion vector amplitude, a prediction direction, and a reference index when determining the default motion information. For AMVP, motion compensation unit 72 determines a motion vector amplitude but need not determine a prediction direction and a reference index, because the prediction direction and the reference index are signaled separately in the bitstream for the current PU. Accordingly, motion compensation unit 72 may determine the default motion information based on the signaled mode for performing motion vector prediction (i.e., whether the type of motion vector prediction signaled for the current PU is merge mode or AMVP).
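
Sketched in C++, the distinction might look as follows (the types and the placeholder derivation rules are assumptions for illustration): under merge mode the substitute carries a complete set of motion data, while under AMVP only the vector amplitude is synthesized because the prediction direction and reference index are parsed from the bitstream for the current PU.

    #include <optional>

    enum class MvpMode { kMerge, kAmvp };
    enum class PredDirection { kForward, kBackward, kBidirectional };

    struct DefaultMotionInfo {
      int mvX = 0;                            // default motion vector amplitude (x)
      int mvY = 0;                            // default motion vector amplitude (y)
      std::optional<PredDirection> direction; // filled only for merge mode
      std::optional<int> refIdx;              // filled only for merge mode
    };

    // Build substitute motion information for an unavailable candidate. The inputs
    // mirror the information mentioned above (slice type, CU depth, PU size); the
    // rule applied here is a placeholder, not a rule defined by this disclosure.
    DefaultMotionInfo makeDefaultMotionInfo(MvpMode mode, bool sliceUsesBiprediction,
                                            int /*cuDepth*/, int /*puSize*/) {
      DefaultMotionInfo info; // zero vector as a placeholder default amplitude
      if (mode == MvpMode::kMerge) {
        info.direction = sliceUsesBiprediction ? PredDirection::kBidirectional
                                               : PredDirection::kForward;
        info.refIdx = 0; // assumption: nearest reference picture in the list
      }
      return info;
    }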

FIG. 4 is a flowchart illustrating exemplary operation of a video encoder, such as video encoder 20 shown in the example of FIG. 2, in performing the motion vector prediction techniques described in this disclosure. Initially, as described above, motion estimation unit 42 may determine the spatial candidate motion vectors for the current PU corresponding to the current CU (90). Motion estimation unit 42 may instantiate a list, which may be referred to as an intermediate list or an intermediate spatial MVP list, that stores these spatial candidate motion vectors. Motion estimation unit 42 may then prune redundant spatial candidate motion vectors in one of the manners described above (92). In this sense, motion estimation unit 42 may generate an intermediate spatial MVP list of the remaining spatial candidate motion vectors.

After generating this intermediate spatial MVP list of the remaining spatial candidate motion vectors, motion estimation unit 42 may, again as described above, determine the temporal candidate motion vector for the current PU from the co-located PU in the reference frame (94). Next, motion estimation unit 42 may form an MVP list that includes the remaining spatial candidate motion vectors and the temporal candidate motion vector (96). Motion estimation unit 42 may provide this MVP list to motion compensation unit 44, which performs motion compensation with respect to each candidate motion vector included in the MVP list in the manner described above. Video encoder 20 then generates residual data based on the prediction data generated through the motion compensation performed with respect to each of the candidate motion vectors included in the MVP list. Video encoder 20 applies one or more transforms to the residual data, quantizes the transformed residual data, and then reconstructs the residual data. The reconstructed residual data is then augmented with the prediction data generated based on each of the candidate motion vectors included in the MVP list to generate reconstructed video data.

Mode select unit 40 may then select one of the candidate motion vectors from the MVP list for the current PU based on the reconstructed video data (98). For example, mode select unit 40 may perform some form of rate-distortion analysis on the video data reconstructed with respect to each of the candidate motion vectors in the MVP list, and select from the list the candidate motion vector that provides the best rate-distortion metric. Mode select unit 40 may then interface with motion estimation unit 42 to indicate its candidate motion vector selection. Motion estimation unit 42 may then determine an index into the MVP list that identifies the selected one of the candidate motion vectors, as described above (100). Motion estimation unit 42 may then provide this index to entropy coding unit 56. Entropy coding unit 56 may then code the index identifying the selected one of the candidate motion vectors in the MVP list, as described further above (102). Entropy coding unit 56 then inserts the coded index into the bitstream (104).
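The selection loop can be sketched as below, assuming some cost callback supplied by the reconstruction path. The callback and names are placeholders rather than the disclosure's actual interfaces; the point is only that the position of the winning candidate in the MVP list is what gets entropy coded as the index.

```cpp
#include <cstddef>
#include <limits>
#include <vector>

struct Mv { int x, y, refIdx; };

// Assumed callback: rate-distortion cost of coding the PU with a given candidate.
using RdCostFn = double (*)(const Mv& candidate);

std::size_t selectMvpIndex(const std::vector<Mv>& mvpList, RdCostFn rdCost) {
    std::size_t bestIdx = 0;
    double bestCost = std::numeric_limits<double>::max();
    for (std::size_t i = 0; i < mvpList.size(); ++i) {
        double cost = rdCost(mvpList[i]);    // e.g., distortion + lambda * bits
        if (cost < bestCost) { bestCost = cost; bestIdx = i; }
    }
    return bestIdx;                          // position signaled as the MVP index
}
```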

FIG. 5 is a flowchart illustrating exemplary operation of a video decoder, such as video decoder 30 shown in the example of FIG. 3, in implementing the motion vector prediction techniques described in this disclosure. As described above, entropy decoding unit 70 of video decoder 30 initially receives a bitstream that includes a coded index, commonly referred to by its syntax element name "mvp_idx" or as the "MVP index" (110). Entropy decoding unit 70 also decodes other syntax elements, located before or after this MVP index, that indicate that the current PU has a motion vector expressed using motion vector prediction. To parse this MVP index from the bitstream, entropy decoding unit 70 must first determine the number of candidate motion vectors that remain after the pruning process has been performed. To determine this number of candidate motion vectors, entropy decoding unit 70 interfaces with motion compensation unit 72, requesting that motion compensation unit 72 provide this number of candidate motion vectors for the current PU.
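The dependence on the candidate count can be made concrete with a small, purely illustrative length calculation for a truncated unary code (one common convention for mvp_idx): the number of bits the index occupies in the bitstream changes with the number of candidates left after pruning, so the parser cannot know where the index ends without that count.

```cpp
#include <cstddef>

// Length in bits of a truncated unary codeword: `index` leading bits plus a
// terminator, except that the largest possible value drops the terminator.
std::size_t truncatedUnaryLength(std::size_t index, std::size_t numCandidates) {
    if (numCandidates <= 1) return 0;                  // nothing needs to be signaled
    return (index == numCandidates - 1) ? index : index + 1;
}
```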

In response to this request, motion compensation unit 72 determines the spatial candidate motion vectors for the current PU in the manner described above (112). If one or more of the spatial candidate motion vectors are unavailable for the reasons set forth in more detail above ("YES" 114), motion compensation unit 72 may generate motion information (e.g., default motion information) in any of the ways described above and perform motion compensation based on the generated motion information (116, 118). If all of the spatial candidate motion vectors are available ("NO" 114), motion compensation unit 72 prunes the redundant spatial candidate motion vectors, as described further above (120).
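A sketch of this availability branch is given below, with invented names; an unavailable candidate (for example, one whose neighboring PU was lost) triggers the fallback path rather than aborting the parse.

```cpp
#include <optional>
#include <vector>

struct Mv { int x, y, refIdx; };

bool anyUnavailable(const std::vector<std::optional<Mv>>& spatialCandidates) {
    for (const auto& c : spatialCandidates)
        if (!c) return true;        // neighboring PU lost or otherwise unavailable
    return false;
}

// Hypothetical caller, mirroring steps 114/116/118 vs. 120 of FIG. 5:
//   if (anyUnavailable(cands)) { mv = makeDefaultMotionInfo(mode); motionCompensate(mv); }
//   else                       { pruneAndContinue(cands); }
```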

After pruning the redundant spatial candidate motion vectors, motion compensation unit 72 then determines the temporal candidate motion vector for the current PU from the co-located PU in the reference frame, as described above (122). If this temporal candidate motion vector is unavailable for the reasons described above ("YES" 124), motion compensation unit 72 may generate motion information and perform motion compensation based on the generated motion information (116, 118). If, however, the temporal candidate motion vector is available ("NO" 124), motion compensation unit 72 forms an MVP list that includes the remaining spatial candidate motion vectors and the temporal candidate motion vector (126). Motion compensation unit 72 may then determine the number of candidate motion vectors in the MVP list (128) and pass this number to entropy decoding unit 70.

Entropy decoding unit 70 may then parse the MVP index from the bitstream based on the determined number (130). Entropy decoding unit 70 then decodes the coded MVP index (131). Entropy decoding unit 70 passes the decoded MVP index to motion compensation unit 72, which selects one of the candidate motion vectors from the MVP list based on the decoded MVP index, as described above (132). Motion compensation unit 72 then performs motion compensation based on the selected one of the candidate motion vectors in the manner described above (134). Motion compensation unit 72 may perform the motion compensation according to either merge mode or AMVP, depending on which mode is signaled in the bitstream or which mode motion compensation unit 72 determines.
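For completeness, a hedged decoder-side counterpart to the earlier length sketch: reading a truncated unary mvp_idx stops either at a terminating bit or once the maximum value (numCandidates − 1) is reached, which again is why the candidate count must be known before parsing. The bit-reading callback is an assumption, not an interface of the disclosure.

```cpp
#include <cstddef>
#include <functional>

std::size_t decodeTruncatedUnary(const std::function<int()>& readBit,
                                 std::size_t numCandidates) {
    std::size_t value = 0;
    while (value + 1 < numCandidates && readBit() == 1)
        ++value;                    // each leading 1 bumps the index; 0 terminates early
    return value;
}
```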

FIG. 6 is a diagram illustrating an exemplary arrangement of adjacent neighboring PUs 140A-140D of a current PU 144 and of a temporally co-located PU 142A. As shown in the example of FIG. 6, current PU 144 is contained within a current frame 146A. In time, current frame 146A is preceded by reference frame 146B, which in turn is preceded by reference frame 146C. Adjacent neighboring PU 140A resides spatially adjacent to and to the left of current PU 144. Adjacent neighboring PU 140B resides spatially adjacent to and above-left of current PU 144. Adjacent neighboring PU 140C resides spatially adjacent to and above current PU 144. Adjacent neighboring PU 140D resides spatially adjacent to and above-right of current PU 144. Temporally co-located PU 142A temporally precedes current PU 144 and occupies the same position within reference frame 146B that current PU 144 occupies within current frame 146A.
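An illustrative enumeration of these candidate positions is shown below. The block offsets are invented for the sketch (expressed in neighboring-block units relative to the current PU) and are not a normative layout from the disclosure.

```cpp
#include <array>

struct BlockPos { int dx, dy; };   // offset from the current PU, in neighboring-block units

constexpr std::array<BlockPos, 4> kSpatialNeighbours = {{
    {-1,  0},   // 140A: left
    {-1, -1},   // 140B: above-left
    { 0, -1},   // 140C: above
    { 1, -1},   // 140D: above-right
}};
// The temporal candidate (142A) uses the same position as the current PU, but in
// the previously coded reference frame rather than in the current frame.
```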

Each of adjacent neighboring PUs 140A-140D stores or otherwise provides a spatial candidate motion vector for current PU 144, while temporally co-located PU 142A stores or otherwise provides the temporal candidate motion vector for current PU 144. A motion compensation unit of a video decoder (e.g., motion compensation unit 72 of video decoder 30 shown in the example of FIG. 3) may retrieve these spatial and temporal candidate motion vectors from PUs 140A-140D and 142A, respectively. Because temporally co-located PU 142A is contained within a reference frame 146B different from the frame of current PU 144, this temporally co-located PU 142A is typically associated with a different independently decodable portion (often referred to as a slice in the emerging HEVC standard). This slice of reference frame 146B may be lost (e.g., in transmission or due to memory or storage corruption), in which case motion compensation unit 72 may be unable to retrieve the temporally co-located PU 142A that stores the temporal candidate motion vector for current PU 144. Losing this temporal candidate motion vector could, for the reasons described above, prevent entropy decoding unit 70 from parsing the bitstream. The techniques described in this disclosure may enable motion compensation unit 72 to overcome this problem by not including the temporal candidate motion vector in the pruning process.

Likewise, a spatial candidate motion vector for current PU 144 may be lost when motion vector prediction was performed to determine the motion vector of one of adjacent neighboring PUs 140A-140D, a temporal candidate motion vector was selected as a result, and the temporally co-located PU storing that temporal candidate motion vector is lost. To illustrate, consider adjacent neighboring PU 140A, whose temporally co-located PU is identified in the example of FIG. 6 as temporally co-located PU 142B. If PU 142B is lost and the temporal candidate motion vector associated with PU 142B had been selected as the motion vector of spatially neighboring PU 140A, then no motion vector information exists for PU 140A. Consequently, the spatial candidate motion vector for current PU 144 is also lost. To potentially overcome this lost spatial candidate motion vector, the techniques enable motion compensation unit 72 to generate motion information (e.g., default motion information) that may be used as the spatial candidate motion vector for current PU 144.

Moreover, such a lost spatial candidate motion vector (or, for that matter, a lost temporal candidate motion vector) can arise when motion vector prediction is performed and temporal candidate motion vectors are selected across multiple temporally co-located PUs. To illustrate, assume that motion vector prediction was performed to determine the motion vector of PU 142B, that PU 142B is temporally co-located with PU 140A, and that the temporally co-located PU of PU 142B (i.e., PU 142C in the example of FIG. 6) is lost. Absent the techniques described in this disclosure, this loss would not only potentially prevent the MVP index from being parsed from the bitstream, but would also result in the loss of the motion vector of PU 142B. Absent the techniques described in this disclosure, and assuming motion vector prediction was performed to determine the motion vector of PU 140A with temporally co-located PU 142B being selected, the loss of the motion vector of PU 142B then results in the loss of the motion vector of PU 140A. The loss of this motion vector affects current PU 144 because the spatial candidate motion vector is unavailable. For this reason, the techniques enable motion compensation unit 72 to generate motion information (or, in some instances, to regenerate the lost motion information) so as to prevent what may be referred to as a multiple-loss effect.

Although the examples above involve removing duplicate spatial candidate motion vectors, the techniques do not necessarily require that only duplicate spatial candidate motion vectors be removed. The techniques may be implemented to perform pruning that, more generally, removes at least one of the spatial candidate motion vectors. For example, a video encoder may signal, at the picture, frame, slice, or block level, that the spatial candidate motion vector having the largest amplitude or the smallest amplitude, to name a few examples, is to be pruned. Alternatively, the video encoder may signal in the bitstream, as the pruning criterion, any criterion by which an MVP may be specified (e.g., a threshold). In some embodiments, the video encoder and the video decoder may agree on a profile or other configuration by which candidate motion vectors are pruned. In some instances, the video decoder may infer, based on context or other information, when certain candidate motion vectors are to be pruned. The techniques therefore should not be strictly limited to pruning that removes only duplicate spatial candidate motion vectors, but should encompass any technique that may be implemented to prune at least one spatial candidate motion vector.
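A hedged sketch of such a configurable pruning rule follows. The rule names, the amplitude measure, and the threshold parameter are invented for illustration; the duplicates-only rule corresponds to the earlier pruning sketch and is omitted here.

```cpp
#include <cstddef>
#include <cstdlib>
#include <vector>

struct Mv { int x, y; };

enum class PruneRule { DuplicatesOnly, LargestAmplitude, AmplitudeThreshold };

std::vector<Mv> pruneWithRule(const std::vector<Mv>& cands, PruneRule rule, int threshold) {
    std::vector<Mv> out;
    std::size_t largest = 0;                       // index of the largest-amplitude candidate
    for (std::size_t i = 1; i < cands.size(); ++i)
        if (std::abs(cands[i].x) + std::abs(cands[i].y) >
            std::abs(cands[largest].x) + std::abs(cands[largest].y))
            largest = i;
    for (std::size_t i = 0; i < cands.size(); ++i) {
        int amp = std::abs(cands[i].x) + std::abs(cands[i].y);
        bool drop = (rule == PruneRule::LargestAmplitude && i == largest) ||
                    (rule == PruneRule::AmplitudeThreshold && amp > threshold);
        if (!drop) out.push_back(cands[i]);        // keep everything the rule does not name
    }
    return out;
}
```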

In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over a computer-readable medium as one or more instructions or code. Computer-readable media may include computer data storage media or communication media, where communication media include any medium that facilitates transfer of a computer program from one place to another. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code, and/or data structures for implementation of the techniques described in this disclosure. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

The code may be executed by one or more processors, such as one or more digital signal processors (DSPs), general-purpose microprocessors, application-specific integrated circuits (ASICs), field-programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term "processor," as used herein, may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques could be fully implemented in one or more circuits or logic elements.

The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC), or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, the various units may be combined in a codec hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.

Various examples have been described. These and other examples are within the scope of the following claims.

Claims (61)

1. A method of encoding video data, the method comprising:
determining spatial candidate motion vectors associated with a current portion of a current video frame, wherein the spatial candidate motion vectors include motion vectors determined for neighboring portions of the current video frame adjacent to the current portion;
determining a temporal candidate motion vector associated with the current portion of the current video frame, wherein the temporal candidate motion vector includes a motion vector determined for a portion of a reference video frame;
pruning the spatial candidate motion vectors to remove duplicates among the spatial candidate motion vectors, but without considering the temporal candidate motion vector during the pruning process;
selecting the temporal candidate motion vector or one of the spatial candidate motion vectors remaining after the pruning as a selected candidate motion vector for a motion vector prediction process;
determining an index identifying the position of the selected one of the candidate motion vectors in a list of the temporal candidate motion vector and the spatial candidate motion vectors remaining after the pruning; and
signaling the index in a bitstream.
2. The method of claim 1, wherein pruning the spatial candidate motion vectors comprises pruning only the spatial candidate motion vectors to remove the duplicates among the spatial candidate motion vectors.
3. The method of claim 1, wherein the current portion comprises a current coding unit (CU), and wherein the portion of the reference frame comprises a CU of the reference frame.
4. The method of claim 1, further comprising generating a prediction unit (PU) that includes prediction data, the prediction data including at least the selected candidate motion vector, wherein signaling the selected candidate motion vector comprises signaling the PU in the bitstream.
5. The method of claim 1, wherein signaling the index comprises signaling a motion vector predictor (MVP) index, the MVP index identifying the position of the selected candidate motion vector in the list of the temporal candidate motion vector and the spatial candidate motion vectors remaining after the pruning.
6. The method of claim 1, further comprising encoding the index using one of a unary code or a truncated unary code to generate an encoded index, wherein signaling the index comprises signaling the encoded index in the bitstream.
7. The method of claim 1, wherein selecting the temporal candidate motion vector or one of the spatial candidate motion vectors remaining after the pruning comprises: performing a rate-distortion analysis with respect to the temporal candidate motion vector and each of the spatial candidate motion vectors remaining after the pruning; and selecting, based on the rate-distortion analysis, the temporal candidate motion vector or one of the spatial candidate motion vectors remaining after the pruning as the selected candidate motion vector.
8. The method of claim 1, further comprising determining whether each of the determined spatial candidate motion vectors is spatially predicted or temporally predicted, wherein pruning the spatial candidate motion vectors to remove the duplicates comprises pruning only those of the determined spatial candidate motion vectors that are determined to be spatially predicted, without pruning any of the determined spatial candidate motion vectors that are determined to be temporally predicted, and wherein selecting the temporal candidate motion vector or one of the spatial candidate motion vectors remaining after the pruning comprises selecting, as the selected candidate motion vector, the temporal candidate motion vector, one of the spatial candidate motion vectors determined to be temporally predicted, or one of the spatial candidate motion vectors remaining after the pruning that is determined to be spatially predicted.
9. The method of claim 1, further comprising: determining whether each of the determined spatial candidate motion vectors is spatially predicted or temporally predicted; and replacing any of the spatial candidate motion vectors determined to be temporally predicted with a default candidate motion vector, wherein the default candidate motion vector includes default motion vector information, the default motion vector information including a motion vector amplitude, a prediction direction identifying whether the reference frame temporally precedes or follows the current frame, and a reference index identifying the reference frame, and wherein pruning the spatial candidate motion vectors to remove the duplicates comprises pruning the spatial candidate motion vectors, including the one or more default candidate motion vectors, to remove the duplicates among the spatial candidate motion vectors.
10. The method of claim 1, further comprising determining one or more additional spatial candidate motion vectors that are not temporally predicted and that differ from the temporal candidate motion vector and from any of the spatial candidate motion vectors remaining after the pruning, wherein selecting the temporal candidate motion vector or one of the spatial candidate motion vectors remaining after the pruning comprises selecting, as the selected candidate motion vector, the temporal candidate motion vector, one of the spatial candidate motion vectors remaining after the pruning, or one of the additional spatial candidate motion vectors.
11. An apparatus for encoding video data, the apparatus comprising:
means for determining spatial candidate motion vectors associated with a current portion of a current video frame, wherein the spatial candidate motion vectors include motion vectors determined for neighboring portions of the current video frame adjacent to the current portion;
means for determining a temporal candidate motion vector associated with the current portion of the current video frame, wherein the temporal candidate motion vector includes a motion vector determined for a portion of a reference video frame;
means for pruning the spatial candidate motion vectors to remove duplicates among the spatial candidate motion vectors without considering the temporal candidate motion vector during the pruning process;
means for selecting the temporal candidate motion vector or one of the spatial candidate motion vectors remaining after the pruning as a selected candidate motion vector for a motion vector prediction process;
means for determining an index identifying the position of the selected one of the candidate motion vectors in a list of the temporal candidate motion vector and the spatial candidate motion vectors remaining after the pruning; and
means for signaling the index in a bitstream.
12. The apparatus of claim 11, wherein the means for pruning the spatial candidate motion vectors comprises means for pruning only the spatial candidate motion vectors to remove the duplicates among the spatial candidate motion vectors.
13. The apparatus of claim 11, further comprising means for determining whether each of the determined spatial candidate motion vectors is spatially predicted or temporally predicted, wherein the means for pruning the spatial candidate motion vectors to remove the duplicates comprises means for pruning only those of the determined spatial candidate motion vectors that are determined to be spatially predicted, without pruning any of the determined spatial candidate motion vectors that are determined to be temporally predicted, and wherein the means for selecting comprises means for selecting, as the selected candidate motion vector, the temporal candidate motion vector, one of the spatial candidate motion vectors determined to be temporally predicted, or one of the spatial candidate motion vectors remaining after the pruning that is determined to be spatially predicted.
14. The apparatus of claim 11, further comprising: means for determining whether each of the determined spatial candidate motion vectors is spatially predicted or temporally predicted; and means for replacing any of the spatial candidate motion vectors determined to be temporally predicted with a default candidate motion vector, wherein the default candidate motion vector includes default motion vector information, the default motion vector information including a motion vector amplitude, a prediction direction identifying whether the reference frame temporally precedes or follows the current frame, and a reference index identifying the reference frame, and wherein the means for pruning the spatial candidate motion vectors to remove the duplicates comprises means for pruning the spatial candidate motion vectors, including the one or more default candidate motion vectors, to remove the duplicates among the spatial candidate motion vectors.
15. The apparatus of claim 11, further comprising means for determining one or more additional spatial candidate motion vectors that are not temporally predicted and that differ from the temporal candidate motion vector and from any of the spatial candidate motion vectors remaining after the pruning, wherein the means for selecting comprises means for selecting, as the selected candidate motion vector, the temporal candidate motion vector, one of the spatial candidate motion vectors remaining after the pruning, or one of the additional spatial candidate motion vectors.
16. An apparatus for encoding video data, the apparatus comprising:
a motion compensation unit that determines spatial candidate motion vectors associated with a current portion of a current video frame, wherein the spatial candidate motion vectors include motion vectors determined for neighboring portions of the current video frame adjacent to the current portion, determines a temporal candidate motion vector associated with the current portion of the current video frame, wherein the temporal candidate motion vector includes a motion vector determined for a portion of a reference video frame, and prunes the spatial candidate motion vectors to remove duplicates among the spatial candidate motion vectors without considering the temporal candidate motion vector during the pruning process;
a mode select unit that selects the temporal candidate motion vector or one of the spatial candidate motion vectors remaining after the pruning as a selected candidate motion vector for a motion vector prediction process, and determines an index identifying the position of the selected one of the candidate motion vectors in a list of the temporal candidate motion vector and the spatial candidate motion vectors remaining after the pruning; and
an entropy coding unit that signals the index in a bitstream.
17. The apparatus of claim 16, wherein the motion compensation unit prunes only the spatial candidate motion vectors to remove the duplicates among the spatial candidate motion vectors.
18. The apparatus of claim 16, wherein the current portion comprises a current coding unit (CU), and wherein the portion of the reference frame comprises a CU of the reference frame.
19. The apparatus of claim 16, wherein the motion compensation unit further generates a prediction unit (PU) that includes prediction data, the prediction data including at least the selected candidate motion vector, and wherein the entropy coding unit signals the PU in the bitstream.
20. The apparatus of claim 16, wherein the entropy coding unit signals the index as a motion vector predictor (MVP) index, the MVP index identifying the position of the selected candidate motion vector in the list of the temporal candidate motion vector and the spatial candidate motion vectors remaining after the pruning.
21. The apparatus of claim 16, wherein the entropy coding unit encodes the index using one of a unary code or a truncated unary code to generate an encoded index and signals the encoded index in the bitstream.
22. The apparatus of claim 16, wherein the mode select unit performs a rate-distortion analysis with respect to the temporal candidate motion vector and each of the spatial candidate motion vectors remaining after the pruning, and selects, based on the rate-distortion analysis, the temporal candidate motion vector or one of the spatial candidate motion vectors remaining after the pruning as the selected candidate motion vector.
23. The apparatus of claim 16, wherein the motion compensation unit further determines whether each of the determined spatial candidate motion vectors is spatially predicted or temporally predicted, and prunes only those of the determined spatial candidate motion vectors that are determined to be spatially predicted, without pruning any of the determined spatial candidate motion vectors that are determined to be temporally predicted, and wherein the mode select unit selects, as the selected candidate motion vector, the temporal candidate motion vector, one of the spatial candidate motion vectors determined to be temporally predicted, or one of the spatial candidate motion vectors remaining after the pruning that is determined to be spatially predicted.
24. The apparatus of claim 16, wherein the motion compensation unit determines whether each of the determined spatial candidate motion vectors is spatially predicted or temporally predicted, replaces any of the spatial candidate motion vectors determined to be temporally predicted with a default candidate motion vector, wherein the default candidate motion vector includes default motion vector information, the default motion vector information including a motion vector amplitude, a prediction direction identifying whether the reference frame temporally precedes or follows the current frame, and a reference index identifying the reference frame, and prunes the spatial candidate motion vectors, including the one or more default candidate motion vectors, to remove the duplicates among the spatial candidate motion vectors.
25. The apparatus of claim 16, wherein the motion compensation unit further determines one or more additional spatial candidate motion vectors that are not temporally predicted and that differ from the temporal candidate motion vector and from any of the spatial candidate motion vectors remaining after the pruning, and wherein the mode select unit selects, as the selected candidate motion vector, the temporal candidate motion vector, one of the spatial candidate motion vectors remaining after the pruning, or one of the additional spatial candidate motion vectors.
26. A method of decoding video data, the method comprising:
determining spatial candidate motion vectors associated with a current portion of a current video frame, wherein the spatial candidate motion vectors include neighboring motion vectors determined for portions of the current video frame spatially adjacent to the current portion;
pruning the spatial candidate motion vectors to remove duplicates among the spatial candidate motion vectors, but without considering, during the pruning process, a temporal candidate motion vector determined for the current portion of the current video frame, wherein the temporal candidate motion vector includes a motion vector determined for a portion of a reference video frame;
selecting, based on a motion vector predictor (MVP) index signaled in a bitstream, the temporal candidate motion vector or one of the spatial candidate motion vectors remaining after the pruning as a selected candidate motion vector for a motion vector prediction process; and
performing motion compensation based on the selected candidate motion vector.
27. The method of claim 26, wherein pruning the spatial candidate motion vectors comprises pruning only the spatial candidate motion vectors to remove the duplicates among the spatial candidate motion vectors.
28. The method of claim 26, further comprising: determining a number of candidate motion vectors as the temporal candidate motion vector plus the spatial candidate motion vectors remaining after the pruning; parsing a coded MVP index from the bitstream based on the determined number of candidate motion vectors, wherein the coded MVP index comprises one of a unary coded MVP index and a truncated unary coded MVP index; and decoding the coded MVP index to determine the MVP index.
29. The method of claim 26, further comprising: determining that the temporal candidate motion vector for the current portion of the current frame is unavailable; and in response to determining that the temporal candidate motion vector is unavailable, determining default motion vector information for the temporal candidate motion vector, wherein the default motion vector information includes a motion vector amplitude, a prediction direction identifying whether the reference frame temporally precedes or follows the current frame, and a reference index identifying the reference frame.
30. The method of claim 29, further comprising determining, based on the determined default motion vector information, a context for performing context-adaptive lossless statistical decoding, wherein the context identifies a coding table used to decode the video data.
31. The method of claim 29, wherein determining the default motion vector information comprises: determining whether the reference frame is intra-coded; and when the reference frame is determined to be intra-coded, deriving the default motion vector information based on a spatial motion vector determined for the portion of the reference frame.
32. The method of claim 26, further comprising: determining that one of the spatial candidate motion vectors is unavailable; and in response to determining that the one of the spatial candidate motion vectors is unavailable, determining, based on a motion vector prediction mode, a default candidate motion vector that includes default motion vector information for the one of the spatial candidate motion vectors, wherein pruning the spatial candidate motion vectors to remove the duplicates comprises pruning the spatial candidate motion vectors, including the one or more default candidate motion vectors, to remove the duplicates among the spatial candidate motion vectors, and wherein selecting the selected candidate motion vector comprises selecting, based on the MVP index signaled in the bitstream, the temporal candidate motion vector, the one of the spatial candidate motion vectors determined to be unavailable, or one of the spatial candidate motion vectors remaining after the pruning.
33. The method of claim 32, wherein, when the motion vector prediction mode is an adaptive motion vector prediction (AMVP) mode, determining the default motion vector information comprises determining a motion vector amplitude without determining a prediction direction identifying whether the reference frame temporally precedes or follows the current frame or a reference index identifying the reference frame.
34. The method of claim 32, wherein, when the motion vector prediction mode is a merge mode, determining the default motion vector information comprises determining a motion vector amplitude, a prediction direction identifying whether the reference frame temporally precedes or follows the current frame, and a reference index identifying the reference frame.
35. The method of claim 32, further comprising determining, based on the determined default motion vector information, a context for performing context-adaptive lossless statistical decoding, wherein the context identifies a coding table used to decode the video data.
36. The method of claim 26, further comprising: determining that one of the spatial candidate motion vectors is unavailable; and in response to determining that the one of the spatial candidate motion vectors is unavailable, removing from the pruning process the one of the spatial candidate motion vectors determined to be unavailable, wherein pruning the spatial candidate motion vectors comprises pruning only those of the spatial candidate motion vectors determined to be available to remove the duplicates among the spatial candidate motion vectors, without removing the temporal candidate motion vector determined for the current portion of the current video frame or the one of the spatial candidate motion vectors determined to be unavailable.
37. The method of claim 26, further comprising determining one or more additional spatial candidate motion vectors that are not temporally predicted and that differ from the temporal candidate motion vector and from any of the spatial candidate motion vectors remaining after the pruning, wherein selecting the temporal candidate motion vector or one of the spatial candidate motion vectors remaining after the pruning comprises selecting, as the selected candidate motion vector, the temporal candidate motion vector, one of the spatial candidate motion vectors remaining after the pruning, or one of the additional spatial candidate motion vectors.
38. An apparatus for decoding video data, the apparatus comprising:
means for determining spatial candidate motion vectors associated with a current portion of a current video frame, wherein the spatial candidate motion vectors include motion vectors determined for neighboring portions of the current video frame adjacent to the current portion;
means for pruning the spatial candidate motion vectors to remove duplicates among the spatial candidate motion vectors without considering, during the pruning process, a temporal candidate motion vector determined for the current portion of the current video frame, wherein the temporal candidate motion vector includes a motion vector determined for a portion of a reference video frame;
means for selecting, based on a motion vector predictor (MVP) index signaled in a bitstream, the temporal candidate motion vector or one of the spatial candidate motion vectors remaining after the pruning as a selected candidate motion vector for a motion vector prediction process; and
means for performing motion compensation based on the selected candidate motion vector.
39. The apparatus of claim 38, wherein the means for pruning the spatial candidate motion vectors comprises means for pruning only the spatial candidate motion vectors to remove the duplicates among the spatial candidate motion vectors.
40. The apparatus of claim 38, further comprising: means for determining a number of candidate motion vectors as the temporal candidate motion vector plus the spatial candidate motion vectors remaining after the pruning; means for parsing a coded MVP index from the bitstream based on the determined number of candidate motion vectors, wherein the coded MVP index comprises one of a unary coded MVP index and a truncated unary coded MVP index; and means for decoding the coded MVP index to determine the MVP index.
41. The apparatus of claim 38, further comprising: means for determining that the temporal candidate motion vector for the current portion of the current frame is unavailable; and means for determining, in response to determining that the temporal candidate motion vector is unavailable, default motion vector information for the temporal candidate motion vector, wherein the default motion vector information includes a motion vector amplitude, a prediction direction identifying whether the reference frame temporally precedes or follows the current frame, and a reference index identifying the reference frame.
42. The apparatus of claim 41, further comprising means for determining, based on the determined default motion vector information, a context for performing context-adaptive lossless statistical decoding, wherein the context identifies a coding table used to decode the video data.
43. The apparatus of claim 41, wherein the means for determining the default motion vector information comprises: means for determining whether the reference frame is intra-coded; and means for deriving, when the reference frame is determined to be intra-coded, the default motion vector information based on a spatial motion vector determined for the portion of the reference frame.
44. The apparatus of claim 38, further comprising: means for determining that one of the spatial candidate motion vectors is unavailable; and means for determining, in response to determining that the one of the spatial candidate motion vectors is unavailable and based on a motion vector prediction mode, a default candidate motion vector that includes default motion vector information for the one of the spatial candidate motion vectors, wherein the means for pruning the spatial candidate motion vectors to remove the duplicates comprises means for pruning the spatial candidate motion vectors, including the one or more default candidate motion vectors, to remove the duplicates among the spatial candidate motion vectors, and wherein the means for selecting comprises means for selecting, based on the MVP index signaled in the bitstream, the temporal candidate motion vector, the one of the spatial candidate motion vectors determined to be unavailable, or one of the spatial candidate motion vectors remaining after the pruning as the selected candidate motion vector.
45. The apparatus of claim 44, wherein, when the motion vector prediction mode is an adaptive motion vector prediction (AMVP) mode, the means for determining the default motion vector information comprises means for determining a motion vector amplitude without determining a prediction direction identifying whether the reference frame temporally precedes or follows the current frame or a reference index identifying the reference frame.
46. The apparatus of claim 44, wherein, when the motion vector prediction mode is a merge mode, the means for determining the default motion vector information comprises means for determining a motion vector amplitude, a prediction direction identifying whether the reference frame temporally precedes or follows the current frame, and a reference index identifying the reference frame.
47. The apparatus of claim 44, further comprising means for determining, based on the determined default motion vector information, a context for performing context-adaptive lossless statistical decoding, wherein the context identifies a coding table used to decode the video data.
48. The apparatus of claim 38, further comprising: means for determining that one of the spatial candidate motion vectors is unavailable; and means for removing from the pruning process, in response to determining that the one of the spatial candidate motion vectors is unavailable, the one of the spatial candidate motion vectors determined to be unavailable, wherein the means for pruning the spatial candidate motion vectors comprises means for pruning only those of the spatial candidate motion vectors determined to be available to remove the duplicates among the spatial candidate motion vectors, without removing the temporal candidate motion vector determined for the current portion of the current video frame or the one of the spatial candidate motion vectors determined to be unavailable.
49. The apparatus of claim 38, further comprising means for determining one or more additional spatial candidate motion vectors that are not temporally predicted and that differ from the temporal candidate motion vector and from any of the spatial candidate motion vectors remaining after the pruning, wherein the means for selecting comprises means for selecting, as the selected candidate motion vector, the temporal candidate motion vector, one of the spatial candidate motion vectors remaining after the pruning, or one of the additional spatial candidate motion vectors.
50. 1 kinds of equipment being used for decoding video data, described equipment includes:
Motion compensation units, the spatial candidate motion vector that its determination is associated with the current portions of current video frame, its Described in spatial candidate motion vector include determining for the adjacent part neighbouring with described current portions adjacent Motion vector;Prune described spatial candidate motion vector with remove repetition person in described spatial candidate motion but in institute The time candidate fortune not being considered for the described current portions of described current video frame during stating pruning process and determining Moving vector, wherein said time candidate motion vector include the motion that determines for the part of reference video frame to Amount;Described time candidate is selected to transport based in bit stream with the motion vector predictor candidates MVP index that signal sends Moving vector or prune after remaining described spatial candidate motion vector in one as selected Candidate Motion to Amount is for motion vector prediction process;And perform motion compensation based on described selected candidate motion vector.
51. equipment according to claim 50, wherein said motion compensation units only prune described spatial candidate motion to Amount is to remove the described repetition person in described spatial candidate motion vector.
52. equipment according to claim 50,
The number of candidate motion vector is defined as described time candidate motion vector and adds by wherein said motion compensation units Remaining spatial candidate motion vector after pruning in upper described spatial candidate motion vector, and
Wherein said equipment farther includes entropy decoding unit, and described entropy decoding unit is based on described in candidate motion vector The number determining dissects out decoded MVP index, wherein said decoded MVP index from described bit stream Including the decoded MVP of unitary indexes and truncates the one in unitary decoded MVP index;And decode described warp Decoding MVP indexes to determine that described MVP indexes.
53. equipment according to claim 50, it farther includes:
Determine that the described time candidate motion vector of the described current portions of described present frame is unavailable;And
In response to determining that described time candidate motion vector is unavailable, determine the acquiescence fortune of described time candidate motion vector Moving vector information, wherein said default motions vector information include motion vector amplitude, identify described reference frame when Prediction direction before or after described present frame and the reference key identifying described reference frame between.
54. equipment according to claim 53, it farther includes entropy decoding unit, and described entropy decoding unit is based on institute The default motions vector information stating determination determines for performing the context of the lossless statistical decoder of context-adaptive, its Described in context identification decoding table with in order to decode described video data.
55. equipment according to claim 53, whether wherein said motion compensation units further determines that described reference frame Through intra-coding, and when described reference frame is confirmed as through intra-coding, based on for locating in the reference frame In with described in the described reference frame of the described current portions position that location is identical in described present frame The spatial motion vector partly determining obtains described default motions vector information.
56. equipment according to claim 50, wherein said motion compensation units determines described spatial candidate motion vector In one unavailable;Unavailable, based on fortune in response to the described one determining in described spatial candidate motion vector Motion vector prediction pattern determine comprise for described spatial candidate motion vector in described one default motions to The default candidate motion vector of amount information;Prune comprise in described default candidate motion vector one or more Described spatial candidate motion vector with remove described spatial candidate motion vector in described repetition person;And based on Bit stream is selected in described time candidate motion vector by the motion vector predictor candidates MVP index that signal sends One, described spatial candidate motion vector in be confirmed as disabled one or prune after remaining described sky Between one in candidate motion vector.
57. equipment according to claim 56, wherein when described motion vector prediction mode is that adaptive motion vector is pre- When surveying AMVP pattern, described motion compensation units determines motion vector amplitude but the described reference frame of uncertain identification Prediction direction before or after described present frame or the reference key identifying described reference frame in time.
58. equipment according to claim 56, wherein when described motion vector prediction mode is merging patterns, described Motion compensation units determine motion vector amplitude, identify described reference frame in time before described present frame still Prediction direction afterwards and the reference key identifying described reference frame.
59. equipment according to claim 56, it farther includes entropy decoding unit, and described entropy decoding unit is based on institute The default motions vector information stating determination determines for performing the context of the lossless statistical decoder of context-adaptive, its Described in context identification decoding table with in order to decode described video data.
60. equipment according to claim 50, wherein said motion compensation units determines described spatial candidate motion vector In one unavailable;Unavailable in response to the described one determining in described spatial candidate motion vector, from described Pruning process removes in described spatial candidate motion vector and is confirmed as disabled described one;And only prune described The spatial candidate motion vector being confirmed as in spatial candidate motion vector can use is to remove described spatial candidate motion Described repetition person in Xiang Liang but do not remove the time determining for the described current portions of described current video frame Candidate motion vector or described spatial candidate motion vector are confirmed as disabled described one.
61. The device of claim 50, wherein the motion compensation unit further determines one or more additional spatial candidate motion vectors that are not temporally predicted and that differ from the temporal candidate motion vector and from any of the spatial candidate motion vectors remaining after pruning, and selects the temporal candidate motion vector, one of the spatial candidate motion vectors remaining after pruning, or one of the additional spatial candidate motion vectors as the selected candidate motion vector.
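
The candidate handling recited in claim 56 can be illustrated with a short Python sketch. This is an illustrative reading of the claim, not an implementation from the patent: the MotionVector type, the zero-valued DEFAULT_MV, and the function names are assumptions introduced here. An unavailable spatial candidate is replaced by a default candidate, duplicates among the spatial candidates (including the default) are pruned, and the MVP index signaled in the bitstream selects the final candidate.

from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class MotionVector:
    dx: int
    dy: int

# Assumed default motion vector information; the claim leaves the actual
# default values to the motion vector prediction mode.
DEFAULT_MV = MotionVector(0, 0)

def build_candidate_list(spatial: List[Optional[MotionVector]],
                         temporal: MotionVector) -> List[MotionVector]:
    # Substitute a default candidate for any spatial candidate found unavailable.
    substituted = [mv if mv is not None else DEFAULT_MV for mv in spatial]
    # Prune duplicates among the spatial candidates (including the defaults),
    # keeping the first occurrence of each distinct vector.
    pruned: List[MotionVector] = []
    for mv in substituted:
        if mv not in pruned:
            pruned.append(mv)
    # The temporal candidate is appended without taking part in spatial pruning.
    return pruned + [temporal]

def select_candidate(candidates: List[MotionVector], mvp_index: int) -> MotionVector:
    # The MVP index signaled in the bitstream picks the selected candidate.
    return candidates[mvp_index]

if __name__ == "__main__":
    spatial = [MotionVector(4, -2), None, MotionVector(4, -2)]  # middle candidate unavailable
    temporal = MotionVector(1, 3)
    candidates = build_candidate_list(spatial, temporal)
    print(candidates)                       # one spatial vector, the default, the temporal candidate
    print(select_candidate(candidates, 1))  # index 1 selects the substituted default candidate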
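
Claims 57 and 58 contrast what the motion compensation unit derives in the two motion vector prediction modes. The sketch below, in the same illustrative Python style, captures that contrast under assumptions of its own: in merge mode the motion vector, prediction direction, and reference index are all taken from the selected candidate, while in the AMVP-style mode only the vector comes from the candidate (combined here with a signaled motion vector difference, which is an assumption about AMVP-style schemes generally rather than language from the claims) and the prediction direction and reference index are signaled separately.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class MotionInfo:
    mv: Tuple[int, int]   # motion vector (dx, dy)
    direction: str        # 'before' or 'after' the current frame in time
    ref_idx: int          # reference index identifying the reference frame

def derive_amvp(candidate: MotionInfo, signaled_direction: str,
                signaled_ref_idx: int, signaled_mvd: Tuple[int, int]) -> MotionInfo:
    # AMVP-style mode: only the vector is taken from the predictor; the
    # prediction direction and reference index come from the bitstream (claim 57).
    mv = (candidate.mv[0] + signaled_mvd[0], candidate.mv[1] + signaled_mvd[1])
    return MotionInfo(mv, signaled_direction, signaled_ref_idx)

def derive_merge(candidate: MotionInfo) -> MotionInfo:
    # Merge mode: vector, prediction direction, and reference index are all
    # inherited from the selected candidate (claim 58).
    return MotionInfo(candidate.mv, candidate.direction, candidate.ref_idx)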
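
Claim 60 describes an alternative in which the unavailable spatial candidate is kept out of the pruning comparison instead of being replaced: only the available spatial candidates are checked for duplicates, and neither the temporal candidate nor the unavailable spatial candidate is removed. A minimal sketch, again with assumed data structures (None standing in for an unavailable candidate):

from typing import List, Optional, Tuple

MV = Tuple[int, int]

def prune_available_only(spatial: List[Optional[MV]], temporal: MV) -> List[Optional[MV]]:
    result: List[Optional[MV]] = []
    seen = set()
    for mv in spatial:
        if mv is None:
            # Unavailable candidate: excluded from the duplicate check,
            # but not removed from the list (claim 60).
            result.append(mv)
            continue
        if mv not in seen:  # duplicates are removed only among available candidates
            seen.add(mv)
            result.append(mv)
    # The temporal candidate determined for the current portion is never pruned.
    return result + [temporal]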
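
Claim 61 then widens the selection set with additional spatial candidates that are not temporally predicted and that differ from both the temporal candidate and every spatial candidate remaining after pruning. The sketch below only checks that proposed extra candidates satisfy this distinctness condition; how those extras are generated is left open, since the claim does not prescribe it.

from typing import List, Tuple

MV = Tuple[int, int]

def extend_with_additional_candidates(remaining_spatial: List[MV], temporal: MV,
                                      proposed_extras: List[MV]) -> List[MV]:
    # Keep only proposed extra spatial candidates that differ from the temporal
    # candidate and from every spatial candidate remaining after pruning.
    taken = set(remaining_spatial) | {temporal}
    extras = [mv for mv in proposed_extras if mv not in taken]
    # The selected candidate may then be the temporal candidate, a remaining
    # spatial candidate, or one of the additional spatial candidates.
    return remaining_spatial + [temporal] + extras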
CN201280006666.7A 2011-01-27 2012-01-18 Perform the motion vector prediction of video coding Active CN103339938B (en)

Applications Claiming Priority (9)

Application Number Priority Date Filing Date Title
US201161436997P 2011-01-27 2011-01-27
US61/436,997 2011-01-27
US201161449985P 2011-03-07 2011-03-07
US61/449,985 2011-03-07
US201161561601P 2011-11-18 2011-11-18
US61/561,601 2011-11-18
US13/351,980 2012-01-17
US13/351,980 US9319716B2 (en) 2011-01-27 2012-01-17 Performing motion vector prediction for video coding
PCT/US2012/021742 WO2012102927A1 (en) 2011-01-27 2012-01-18 Performing motion vector prediction for video coding

Publications (2)

Publication Number Publication Date
CN103339938A CN103339938A (en) 2013-10-02
CN103339938B true CN103339938B (en) 2016-10-05

Family

ID=46577354

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201280006666.7A Active CN103339938B (en) 2011-01-27 2012-01-18 Perform the motion vector prediction of video coding

Country Status (21)

Country Link
US (1) US9319716B2 (en)
EP (1) EP2668784B1 (en)
JP (1) JP5813784B2 (en)
KR (1) KR101574866B1 (en)
CN (1) CN103339938B (en)
AU (1) AU2012209403B2 (en)
BR (1) BR112013018816B1 (en)
CA (1) CA2825154C (en)
DK (1) DK2668784T3 (en)
ES (1) ES2684522T3 (en)
HU (1) HUE039019T2 (en)
IL (1) IL227287A (en)
MY (1) MY164598A (en)
PH (1) PH12013501470A1 (en)
PL (1) PL2668784T3 (en)
PT (1) PT2668784T (en)
RU (1) RU2550554C2 (en)
SG (1) SG191824A1 (en)
SI (1) SI2668784T1 (en)
WO (1) WO2012102927A1 (en)
ZA (1) ZA201306423B (en)

Families Citing this family (113)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10474875B2 (en) 2010-06-07 2019-11-12 Affectiva, Inc. Image analysis using a semiconductor processor for facial evaluation
EP2645716A4 (en) 2010-11-24 2016-04-13 Panasonic Ip Corp America METHOD FOR CALCULATING MOTION VECTORS, PICTURE CODING METHOD, PICTURE DECODING METHOD, DEVICE FOR CALCULATING MOTION VECTORS AND PICTURE CODING / DECODING DEVICE
GB2487197B (en) * 2011-01-11 2015-06-17 Canon Kk Video encoding and decoding with improved error resilience
US9083981B2 (en) 2011-01-12 2015-07-14 Panasonic Intellectual Property Corporation Of America Moving picture coding method and moving picture decoding method using a determination whether or not a reference block has two reference motion vectors that refer forward in display order with respect to a current picture
WO2012114694A1 (en) * 2011-02-22 2012-08-30 パナソニック株式会社 Moving image coding method, moving image coding device, moving image decoding method, and moving image decoding device
WO2012117728A1 (en) 2011-03-03 2012-09-07 パナソニック株式会社 Video image encoding method, video image decoding method, video image encoding device, video image decoding device, and video image encoding/decoding device
US9066110B2 (en) 2011-03-08 2015-06-23 Texas Instruments Incorporated Parsing friendly and error resilient merge flag coding in video coding
US9143795B2 (en) 2011-04-11 2015-09-22 Texas Instruments Incorporated Parallel motion estimation in video coding
EP3136727B1 (en) 2011-04-12 2018-06-13 Sun Patent Trust Motion-video coding method and motion-video coding apparatus
JP5865366B2 (en) 2011-05-27 2016-02-17 Panasonic Intellectual Property Corporation of America Image encoding method, image encoding device, image decoding method, image decoding device, and image encoding / decoding device
US9485518B2 (en) * 2011-05-27 2016-11-01 Sun Patent Trust Decoding method and apparatus with candidate motion vectors
SG194746A1 (en) 2011-05-31 2013-12-30 Kaba Gmbh Image encoding method, image encoding device, image decoding method, image decoding device, and image encoding/decoding device
EP3629583B1 (en) 2011-05-31 2023-10-25 Sun Patent Trust Video decoding method, video decoding device
US9866859B2 (en) * 2011-06-14 2018-01-09 Texas Instruments Incorporated Inter-prediction candidate index coding independent of inter-prediction candidate list construction in video coding
KR20140004209A (en) * 2011-06-15 2014-01-10 미디어텍 인크. Method and apparatus of texture image compression in 3d video coding
PL2728878T3 (en) 2011-06-30 2020-06-15 Sun Patent Trust Image decoding method, image encoding method, image decoding device, image encoding device, and image encoding/decoding device
JPWO2013018369A1 (en) 2011-08-03 2015-03-05 Panasonic Intellectual Property Corporation of America Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus, and moving picture coding / decoding apparatus
GB2493755B (en) * 2011-08-17 2016-10-19 Canon Kk Method and device for encoding a sequence of images and method and device for decoding a sequence of images
SE1651149A1 (en) 2011-09-09 2016-08-25 Kt Corp A method for deriving a temporal prediction motion vector and a device using the method
MY180182A (en) 2011-10-19 2020-11-24 Sun Patent Trust Picture coding method,picture coding apparatus,picture decoding method,and picture decoding apparatus
EP4472208A3 (en) * 2011-10-21 2025-02-19 Nokia Technologies Oy Method for coding and an apparatus
US9571833B2 (en) 2011-11-04 2017-02-14 Nokia Technologies Oy Method for coding and an apparatus
US9088796B2 (en) * 2011-11-07 2015-07-21 Sharp Kabushiki Kaisha Video decoder with enhanced CABAC decoding
CN107465921B (en) 2011-11-08 2020-10-20 株式会社Kt Method for decoding video signal by using decoding device
CN109218736B (en) * 2011-11-11 2020-09-15 Ge视频压缩有限责任公司 Apparatus and method for encoding and decoding
JP5617834B2 (en) * 2011-12-28 2014-11-05 株式会社Jvcケンウッド Moving picture decoding apparatus, moving picture decoding method, moving picture decoding program, receiving apparatus, receiving method, and receiving program
TWI618401B (en) * 2011-12-28 2018-03-11 Jvc Kenwood Corp Motion picture coding device, motion picture coding method and memory medium
JP2013141077A (en) * 2011-12-28 2013-07-18 Jvc Kenwood Corp Video encoding device, video encoding method, and video encoding program
JP5747816B2 (en) * 2011-12-28 2015-07-15 株式会社Jvcケンウッド Moving picture coding apparatus, moving picture coding method, moving picture coding program, transmission apparatus, transmission method, and transmission program
BR112014013969B1 (en) * 2011-12-28 2022-05-10 JVC Kenwood Corporation Video encoding device, video encoding method, video encoding program, video decoding device, video decoding method, video decoding program
JP2013141078A (en) * 2011-12-28 2013-07-18 Jvc Kenwood Corp Video decoding device, video decoding method, and video decoding program
WO2013107028A1 (en) * 2012-01-19 2013-07-25 Mediatek Singapore Pte. Ltd. Methods and apparatuses of amvp simplification
HUE056924T2 (en) * 2012-01-19 2022-03-28 Electronics & Telecommunications Res Inst Device for image encoding / decoding
US9729873B2 (en) 2012-01-24 2017-08-08 Qualcomm Incorporated Video coding using parallel motion estimation
US9426463B2 (en) 2012-02-08 2016-08-23 Qualcomm Incorporated Restriction of prediction units in B slices to uni-directional inter prediction
US9451277B2 (en) 2012-02-08 2016-09-20 Qualcomm Incorporated Restriction of prediction units in B slices to uni-directional inter prediction
CN104170386A (en) * 2012-03-16 2014-11-26 松下电器产业株式会社 Image decoding device and image decoding method
US9584802B2 (en) * 2012-04-13 2017-02-28 Texas Instruments Incorporated Reducing context coded and bypass coded bins to improve context adaptive binary arithmetic coding (CABAC) throughput
US20240340435A1 (en) * 2012-06-14 2024-10-10 Texas Instruments Incorporated Inter-prediction candidate index coding
US9800869B2 (en) * 2012-06-15 2017-10-24 Google Technology Holdings LLC Method and apparatus for efficient slice header processing
US20140079135A1 (en) * 2012-09-14 2014-03-20 Qualcomm Incorporated Performing quantization to facilitate deblocking filtering
CN104704835B (en) * 2012-10-03 2017-11-24 联发科技股份有限公司 Apparatus and method for motion information management in video coding
CN102883163B (en) 2012-10-08 2014-05-28 华为技术有限公司 Method and device for building motion vector lists for prediction of motion vectors
WO2014056423A1 (en) * 2012-10-09 2014-04-17 Mediatek Inc. Method and apparatus for motion information prediction and inheritance in video coding
US9826244B2 (en) * 2013-01-08 2017-11-21 Qualcomm Incorporated Device and method for scalable coding of video information based on high efficiency video coding
CN103079067B (en) * 2013-01-09 2016-03-09 华为技术有限公司 Motion vector predictor list builder method and video coding-decoding method and device
JP5983430B2 (en) * 2013-01-25 2016-08-31 富士通株式会社 Moving picture coding apparatus, moving picture coding method, moving picture decoding apparatus, and moving picture decoding method
FR3011429A1 (en) * 2013-09-27 2015-04-03 Orange VIDEO CODING AND DECODING BY HERITAGE OF A FIELD OF MOTION VECTORS
WO2015054813A1 (en) 2013-10-14 2015-04-23 Microsoft Technology Licensing, Llc Encoder-side options for intra block copy prediction mode for video and image coding
CA2928495C (en) 2013-10-14 2020-08-18 Microsoft Technology Licensing, Llc Features of intra block copy prediction mode for video and image coding and decoding
EP3058740B1 (en) 2013-10-14 2020-06-03 Microsoft Technology Licensing, LLC Features of base color index map mode for video and image coding and decoding
KR102258427B1 (en) 2014-01-03 2021-06-01 마이크로소프트 테크놀로지 라이센싱, 엘엘씨 Block vector prediction in video and image coding/decoding
US10390034B2 (en) 2014-01-03 2019-08-20 Microsoft Technology Licensing, Llc Innovations in block vector prediction and estimation of reconstructed sample values within an overlap area
US11284103B2 (en) 2014-01-17 2022-03-22 Microsoft Technology Licensing, Llc Intra block copy prediction with asymmetric partitions and encoder-side search patterns, search ranges and approaches to partitioning
US10542274B2 (en) 2014-02-21 2020-01-21 Microsoft Technology Licensing, Llc Dictionary encoding and decoding of screen content
MX361228B (en) 2014-03-04 2018-11-29 Microsoft Technology Licensing Llc Block flipping and skip mode in intra block copy prediction.
US9479788B2 (en) * 2014-03-17 2016-10-25 Qualcomm Incorporated Systems and methods for low complexity encoding and background detection
EP4354856A3 (en) 2014-06-19 2024-06-19 Microsoft Technology Licensing, LLC Unified intra block copy and inter prediction modes
JP5874793B2 (en) * 2014-09-08 2016-03-02 株式会社Jvcケンウッド Moving picture decoding apparatus, moving picture decoding method, moving picture decoding program, receiving apparatus, receiving method, and receiving program
JP5874791B2 (en) * 2014-09-08 2016-03-02 株式会社Jvcケンウッド Moving picture decoding apparatus, moving picture decoding method, moving picture decoding program, receiving apparatus, receiving method, and receiving program
JP5874790B2 (en) * 2014-09-08 2016-03-02 株式会社Jvcケンウッド Moving picture decoding apparatus, moving picture decoding method, moving picture decoding program, receiving apparatus, receiving method, and receiving program
JP5874792B2 (en) * 2014-09-08 2016-03-02 株式会社Jvcケンウッド Moving picture decoding apparatus, moving picture decoding method, moving picture decoding program, receiving apparatus, receiving method, and receiving program
WO2016049839A1 (en) 2014-09-30 2016-04-07 Microsoft Technology Licensing, Llc Rules for intra-picture prediction modes when wavefront parallel processing is enabled
US9992512B2 (en) * 2014-10-06 2018-06-05 Mediatek Inc. Method and apparatus for motion vector predictor derivation
US9591325B2 (en) 2015-01-27 2017-03-07 Microsoft Technology Licensing, Llc Special case handling for merged chroma blocks in intra block copy prediction mode
JP5975146B2 (en) * 2015-05-14 2016-08-23 株式会社Jvcケンウッド Moving picture coding apparatus, moving picture coding method, moving picture coding program, transmission apparatus, transmission method, and transmission program
US10187653B2 (en) * 2015-05-18 2019-01-22 Avago Technologies International Sales Pte. Limited Motion vector prediction using co-located prediction units
KR102162856B1 (en) 2015-05-21 2020-10-07 후아웨이 테크놀러지 컴퍼니 리미티드 Apparatus and method for video motion compensation
CN106664405B (en) 2015-06-09 2020-06-09 微软技术许可有限责任公司 Robust encoding/decoding of escape-coded pixels with palette mode
CN106559669B (en) 2015-09-29 2018-10-09 华为技术有限公司 Prognostic chart picture decoding method and device
US10225572B2 (en) * 2015-09-30 2019-03-05 Apple Inc. Configurable motion estimation search systems and methods
US10477233B2 (en) * 2015-09-30 2019-11-12 Apple Inc. Predictor candidates for motion estimation search systems and methods
JP6037061B2 (en) * 2016-01-18 2016-11-30 株式会社Jvcケンウッド Moving picture decoding apparatus, moving picture decoding method, moving picture decoding program, receiving apparatus, receiving method, and receiving program
JP5962875B1 (en) * 2016-04-26 2016-08-03 株式会社Jvcケンウッド Moving picture coding apparatus, moving picture coding method, moving picture coding program, transmission apparatus, transmission method, and transmission program
JP5962876B1 (en) * 2016-04-26 2016-08-03 株式会社Jvcケンウッド Moving picture coding apparatus, moving picture coding method, moving picture coding program, transmission apparatus, transmission method, and transmission program
JP5962877B1 (en) * 2016-04-26 2016-08-03 株式会社Jvcケンウッド Moving picture coding apparatus, moving picture coding method, moving picture coding program, transmission apparatus, transmission method, and transmission program
JP6183505B2 (en) * 2016-06-29 2017-08-23 株式会社Jvcケンウッド Video encoding device
US10567461B2 (en) * 2016-08-04 2020-02-18 Twitter, Inc. Low-latency HTTP live streaming
US20210281873A1 (en) * 2016-09-06 2021-09-09 Mediatek Inc. Methods and apparatuses of candidate set determination for binary-tree splitting blocks
KR20180103733A (en) * 2017-03-09 2018-09-19 주식회사 케이티 Method and apparatus for image encoding or image decoding
EP3410717A1 (en) * 2017-05-31 2018-12-05 Thomson Licensing Methods and apparatus for candidate list pruning
MX2020001665A (en) * 2017-09-13 2020-03-20 Samsung Electronics Co Ltd Apparatus and method for encoding motion vector by using basic motion vector, and decoding apparatus and method.
JP6406409B2 (en) * 2017-09-28 2018-10-17 株式会社Jvcケンウッド Moving picture decoding apparatus, moving picture decoding method, and moving picture decoding program
JP6406408B2 (en) * 2017-09-28 2018-10-17 株式会社Jvcケンウッド Moving picture decoding apparatus, moving picture decoding method, and moving picture decoding program
KR102347598B1 (en) * 2017-10-16 2022-01-05 삼성전자주식회사 Video encoding device and encoder
KR102476204B1 (en) * 2017-10-19 2022-12-08 삼성전자주식회사 Multi-codec encoder and multi-codec encoding system including the same
CN118678100A (en) * 2017-11-09 2024-09-20 三星电子株式会社 Motion information encoding method, motion information decoding method, and bit stream transmission method
US10986349B2 (en) 2017-12-29 2021-04-20 Microsoft Technology Licensing, Llc Constraints on locations of reference blocks for intra block copy prediction
US11146793B2 (en) * 2018-03-27 2021-10-12 Kt Corporation Video signal processing method and device
CN114125450B (en) 2018-06-29 2023-11-17 北京字节跳动网络技术有限公司 Method, apparatus and computer readable medium for processing video data
WO2020003282A1 (en) 2018-06-29 2020-01-02 Beijing Bytedance Network Technology Co., Ltd. Managing motion vector predictors for video coding
CN114466197B (en) * 2018-06-29 2024-10-22 北京字节跳动网络技术有限公司 Selection of encoded motion information for lookup table updating
TWI744662B (en) 2018-06-29 2021-11-01 大陸商北京字節跳動網絡技術有限公司 Conditions for updating luts
TWI728388B (en) 2018-06-29 2021-05-21 大陸商北京字節跳動網絡技術有限公司 Checking order of motion candidates in look up table
KR102627814B1 (en) 2018-06-29 2024-01-23 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 Update of lookup table: FIFO, constrained FIFO
TWI752331B (en) * 2018-06-29 2022-01-11 大陸商北京字節跳動網絡技術有限公司 Partial/full pruning when adding a hmvp candidate to merge/amvp
WO2020003266A1 (en) 2018-06-29 2020-01-02 Beijing Bytedance Network Technology Co., Ltd. Resetting of look up table per slice/tile/lcu row
CN110662059B (en) 2018-06-29 2021-04-20 北京字节跳动网络技术有限公司 Method and apparatus for storing previously encoded motion information using a lookup table and encoding subsequent blocks using the same
CN110677662B (en) 2018-07-02 2022-09-02 北京字节跳动网络技术有限公司 Combined use of HMVP and non-adjacent motion
CN110868601B (en) 2018-08-28 2024-03-15 华为技术有限公司 Interframe prediction method, device, video encoder and video decoder
CN118175325A (en) 2018-08-28 2024-06-11 华为技术有限公司 Method for constructing candidate motion information list, inter-frame prediction method and device
CN119071508A (en) 2018-09-11 2024-12-03 有限公司B1影像技术研究所 Image encoding/decoding method and image transmission method
WO2020053800A1 (en) 2018-09-12 2020-03-19 Beijing Bytedance Network Technology Co., Ltd. How many hmvp candidates to be checked
WO2020088691A1 (en) 2018-11-02 2020-05-07 Beijing Bytedance Network Technology Co., Ltd. Harmonization between geometry partition prediction mode and other tools
WO2020109234A2 (en) * 2018-11-26 2020-06-04 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Inter-prediction concept using tile-independency constraints
WO2020143741A1 (en) 2019-01-10 2020-07-16 Beijing Bytedance Network Technology Co., Ltd. Invoke of lut updating
WO2020143824A1 (en) 2019-01-13 2020-07-16 Beijing Bytedance Network Technology Co., Ltd. Interaction between lut and shared merge list
CN113330739B (en) 2019-01-16 2025-01-10 北京字节跳动网络技术有限公司 Insertion order of motion candidates in LUT
US10979716B2 (en) * 2019-03-15 2021-04-13 Tencent America LLC Methods of accessing affine history-based motion vector predictor buffer
CN113615193B (en) 2019-03-22 2024-06-25 北京字节跳动网络技术有限公司 Interactions between Merge list build and other tools
CN111741304A (en) * 2019-03-25 2020-10-02 四川大学 A method of combining frame rate up-conversion and HEVC based on motion vector refinement
WO2020248925A1 (en) 2019-06-08 2020-12-17 Beijing Bytedance Network Technology Co., Ltd. History-based motion vector prediction with default parameters
US12143592B2 (en) * 2022-09-02 2024-11-12 Tencent America LLC Systems and methods for temporal motion vector prediction candidate derivation

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6671321B1 (en) 1999-08-31 2003-12-30 Matsushita Electric Industrial Co., Ltd. Motion vector detection device and motion vector detection method
JP2001251632A (en) 1999-12-27 2001-09-14 Toshiba Corp Motion vector detection method and apparatus, and motion vector detection program
US20040001546A1 (en) 2002-06-03 2004-01-01 Alexandros Tourapis Spatiotemporal prediction for bidirectionally predictive (B) pictures and motion vector prediction for multi-picture reference motion compensation
US7408986B2 (en) 2003-06-13 2008-08-05 Microsoft Corporation Increasing motion smoothness using frame interpolation with motion analysis
US20040258147A1 (en) 2003-06-23 2004-12-23 Tsu-Chang Lee Memory and array processor structure for multiple-dimensional signal processing
US7567617B2 (en) 2003-09-07 2009-07-28 Microsoft Corporation Predicting motion vectors for fields of forward-predicted interlaced video frames
US20080144716A1 (en) 2004-03-11 2008-06-19 Gerard De Haan Method For Motion Vector Determination
KR100587562B1 (en) 2004-04-13 2006-06-08 삼성전자주식회사 Method for motion estimation of video frame, and video encoder using the same
ES2812473T3 (en) * 2008-03-19 2021-03-17 Nokia Technologies Oy Combined motion vector and benchmark prediction for video encoding
WO2010046854A1 (en) 2008-10-22 2010-04-29 Nxp B.V. Device and method for motion estimation and compensation
WO2011095259A1 (en) * 2010-02-05 2011-08-11 Telefonaktiebolaget L M Ericsson (Publ) Selecting predicted motion vector candidates
ES2936314T3 (en) 2011-01-07 2023-03-16 Ntt Docomo Inc Predictive encoding method, predictive encoding device and predictive encoding program of a motion vector, and predictive decoding method, predictive decoding device and predictive decoding program of a motion vector
JP2012151576A (en) 2011-01-18 2012-08-09 Hitachi Ltd Image coding method, image coding device, image decoding method and image decoding device
JP2013141077A (en) 2011-12-28 2013-07-18 Jvc Kenwood Corp Video encoding device, video encoding method, and video encoding program

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2026582A2 (en) * 1999-07-27 2009-02-18 Sharp Kabushiki Kaisha Methods for motion estimation with adaptive motion accuracy

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Error control and concealment for video communication: a review; Yao Wang et al.; Proceedings of the IEEE; 1998-05-31; vol. 86, no. 5; pp. 974-997 *
On motion vector competition; Yeping Su et al.; Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 3rd Meeting; 2010-10-15; pp. 1-2 *

Also Published As

Publication number Publication date
EP2668784A1 (en) 2013-12-04
JP2014509480A (en) 2014-04-17
EP2668784B1 (en) 2018-05-30
PT2668784T (en) 2018-10-09
BR112013018816A2 (en) 2017-07-25
RU2550554C2 (en) 2015-05-10
IL227287A (en) 2016-10-31
KR101574866B1 (en) 2015-12-04
HUE039019T2 (en) 2018-12-28
IL227287A0 (en) 2013-09-30
US9319716B2 (en) 2016-04-19
JP5813784B2 (en) 2015-11-17
RU2013139569A (en) 2015-03-10
AU2012209403B2 (en) 2015-10-01
PL2668784T3 (en) 2018-12-31
KR20130126691A (en) 2013-11-20
US20120195368A1 (en) 2012-08-02
CN103339938A (en) 2013-10-02
ES2684522T3 (en) 2018-10-03
BR112013018816B1 (en) 2022-07-19
DK2668784T3 (en) 2018-08-20
WO2012102927A1 (en) 2012-08-02
SG191824A1 (en) 2013-08-30
ZA201306423B (en) 2014-04-30
PH12013501470A1 (en) 2017-01-06
AU2012209403A1 (en) 2013-08-01
MY164598A (en) 2018-01-30
CA2825154A1 (en) 2012-08-02
SI2668784T1 (en) 2018-08-31
CA2825154C (en) 2016-10-04

Similar Documents

Publication Publication Date Title
CN103339938B (en) Perform the motion vector prediction of video coding
US11166016B2 (en) Most probable transform for intra prediction coding
US9426473B2 (en) Mode decision simplification for intra prediction
AU2012273109B2 (en) Parallelization friendly merge candidates for video coding
KR102010097B1 (en) Unified merge mode and adaptive motion vector prediction mode candidates selection
US9807403B2 (en) Adaptive loop filtering for chroma components
US9532066B2 (en) Motion vector prediction
HK1186892A (en) Performing motion vector prediction for video coding
HK1186892B (en) Performing motion vector prediction for video coding

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1186892

Country of ref document: HK

C14 Grant of patent or utility model
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: GR

Ref document number: 1186892

Country of ref document: HK