The present application is based on and claims priority to U.S. Provisional Application No. 63/495,677, filed on April 12, 2023, the entire contents of which are incorporated herein by reference.
Detailed Description
Reference will now be made in detail to the specific embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous non-limiting specific details are set forth in order to provide an understanding of the subject matter presented herein. However, various alternatives may be used without departing from the scope of the claims, and the subject matter may be practiced without these specific details. For example, the subject matter presented herein may be implemented on many types of electronic devices having digital video capabilities.
It should be noted that the terms "first," "second," and the like, as used in the description, claims, and drawings of this disclosure, are used for distinguishing between objects and not for describing any particular sequence or order. It should be understood that the data used in this manner may be interchanged under appropriate conditions such that the embodiments of the disclosure described herein may be implemented in sequences other than those illustrated in the figures or described in the disclosure.
Fig. 1 is a block diagram illustrating an exemplary system 10 for encoding and decoding video blocks in parallel according to some embodiments of the present disclosure. As shown in fig. 1, the system 10 includes a source device 12 that generates and encodes video data to be decoded later by a target device 14. Source device 12 and target device 14 may comprise any of a wide variety of electronic devices, including cloud servers, server computers, desktop or laptop computers, tablet computers, smart phones, set-top boxes, digital televisions, cameras, display devices, digital media players, video gaming consoles, video streaming devices, and the like. In some implementations, the source device 12 and the target device 14 are equipped with wireless communication capabilities.
In some implementations, target device 14 may receive encoded video data to be decoded via link 16. Link 16 may comprise any type of communication medium or device capable of moving the encoded video data from source device 12 to target device 14. In one example, link 16 may include a communication medium that enables source device 12 to transmit encoded video data directly to target device 14 in real time. The encoded video data may be modulated according to a communication standard, such as a wireless communication protocol, and sent to target device 14. The communication medium may include any wireless or wired communication medium, such as a Radio Frequency (RF) spectrum or one or more physical transmission lines. The communication medium may form part of a packet-based network, such as a local area network, a wide area network, or a global network such as the Internet. The communication medium may include routers, switches, base stations, or any other device operable to facilitate communication from source device 12 to target device 14.
In some other implementations, encoded video data may be sent from output interface 22 to storage device 32. The encoded video data in storage device 32 may then be accessed by target device 14 via input interface 28. Storage device 32 may include any of a variety of distributed or locally accessed data storage media such as a hard drive, Blu-ray disc, Digital Versatile Disc (DVD), Compact Disc Read-Only Memory (CD-ROM), flash memory, volatile or nonvolatile memory, or any other suitable digital storage media for storing encoded video data. In another example, storage device 32 may correspond to a file server or another intermediate storage device that may hold encoded video data generated by source device 12. Target device 14 may access stored video data from storage device 32 via streaming or download. The file server may be any type of computer capable of storing and sending encoded video data to the target device 14. Exemplary file servers include web servers (e.g., for websites), File Transfer Protocol (FTP) servers, Network Attached Storage (NAS) devices, or local disk drives. The target device 14 may access the encoded video data over any standard data connection suitable for accessing encoded video data stored on a file server, including a wireless channel (e.g., a Wireless Fidelity (Wi-Fi) connection), a wired connection (e.g., Digital Subscriber Line (DSL), cable modem, etc.), or a combination of both. The transmission of encoded video data from storage device 32 may be a streaming transmission, a download transmission, or a combination of both.
As shown in fig. 1, source device 12 includes a video source 18, a video encoder 20, and an output interface 22. Video source 18 may include sources such as a video capture device (e.g., a video camera), a video archive including previously captured video, a video feed interface for receiving video from a video content provider, and/or a computer graphics system for generating computer graphics data as source video, or a combination of such sources. As one example, if video source 18 is a video camera of a security monitoring system, source device 12 and target device 14 may form a camera phone or video phone. However, the embodiments described in this disclosure may be generally applicable to video coding and may be applied to wireless and/or wired applications.
The captured, pre-captured, or computer-generated video may be encoded by video encoder 20. The encoded video data may be sent directly to the target device 14 via the output interface 22 of the source device 12. The encoded video data may also (or alternatively) be stored onto storage device 32 for later access by target device 14 or other devices for decoding and/or playback. Output interface 22 may also include a modem and/or a transmitter.
The target device 14 includes an input interface 28, a video decoder 30, and a display device 34. Input interface 28 may include a receiver and/or modem and receives encoded video data over link 16. The encoded video data transmitted over link 16 or provided on storage device 32 may include various syntax elements generated by video encoder 20 for use by video decoder 30 in decoding the video data. Such syntax elements may be included within encoded video data transmitted over a communication medium, stored on a storage medium, or stored on a file server.
In some implementations, the target device 14 may include a display device 34, which may be an integrated display device or an external display device configured to communicate with the target device 14. Display device 34 displays the decoded video data to a user, and may comprise any of a variety of display devices, such as a Liquid Crystal Display (LCD), a plasma display, an Organic Light Emitting Diode (OLED) display, or another type of display device.
Video encoder 20 and video decoder 30 may operate in accordance with a proprietary standard or an industry standard (e.g., VVC, HEVC, or MPEG-4 Part 10, Advanced Video Coding (AVC)) or extensions of such standards. It should be appreciated that the present application is not limited to a particular video encoding/decoding standard and may be applicable to other video encoding/decoding standards. It is generally contemplated that video encoder 20 of source device 12 may be configured to encode video data according to any of these current or future standards. Similarly, it is also generally contemplated that the video decoder 30 of the target device 14 may be configured to decode video data according to any of these current or future standards.
Video encoder 20 and video decoder 30 may each be implemented as any of a variety of suitable encoder and/or decoder circuits, such as one or more microprocessors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), discrete logic, software, hardware, firmware, or any combinations thereof. When implemented in part in software, the electronic device can store instructions for the software in a suitable non-transitory computer-readable medium and execute the instructions in hardware using one or more processors to perform the video encoding/decoding operations disclosed in the present disclosure. Each of video encoder 20 and video decoder 30 may be included in one or more encoders or decoders, either of which may be integrated in the respective apparatus as part of a combined encoder/decoder (CODEC).
In some implementations, at least a portion of the components of source device 12 (e.g., video source 18, video encoder 20, or components included in video encoder 20 and output interface 22 as described below with reference to fig. 2) and/or at least a portion of the components of target device 14 (e.g., input interface 28, video decoder 30, or components included in video decoder 30 and display device 34 as described below with reference to fig. 3) may operate in a cloud computing services network that may provide software, platforms, and/or infrastructure, such as software-as-a-service (SaaS), platform-as-a-service (PaaS), or infrastructure-as-a-service (IaaS). In some implementations, one or more components of the source device 12 and/or the target device 14 that are not included in the cloud computing service network may be provided in one or more client devices, and the one or more client devices may communicate with a server computer in the cloud computing service network through a wireless communication network (e.g., a cellular communication network, a short-range wireless communication network, or a Global Navigation Satellite System (GNSS) communication network) or a wired communication network (e.g., a Local Area Network (LAN) communication network or a Power Line Communication (PLC) network). In an embodiment, at least a portion of the operations described herein may be implemented as cloud-based services provided by one or more server computers implemented by at least a portion of the components of source device 12 and/or at least a portion of the components of target device 14 in a cloud computing services network, and one or more other operations described herein may be implemented by one or more client devices. In some implementations, the cloud computing service network may be a private cloud, a public cloud, or a hybrid cloud. Terms such as "cloud," "cloud computing," "cloud-based," and the like herein may be used interchangeably as appropriate without departing from the scope of the present disclosure. It should be understood that the present disclosure is not limited to implementation in the cloud computing service network described above. Rather, the disclosure may also be implemented in any other type of computing environment, whether currently known or developed in the future.
Fig. 2 is a block diagram illustrating an exemplary video encoder 20 according to some embodiments described in this disclosure. Video encoder 20 may perform intra-predictive coding and inter-predictive coding on video blocks within video frames. Intra-predictive coding relies on spatial prediction to reduce or remove spatial redundancy in video data within a given video frame or picture. Inter-predictive coding relies on temporal prediction to reduce or remove temporal redundancy in video data within adjacent video frames or pictures of a video sequence. It should be noted that the term "frame" may be used as a synonym for the term "image" or "picture" in the field of video coding.
As shown in fig. 2, video encoder 20 includes a video data memory 40, a prediction processing unit 41, a Decoded Picture Buffer (DPB) 64, an adder 50, a transform processing unit 52, a quantization unit 54, and an entropy encoding unit 56. The prediction processing unit 41 further includes a motion estimation unit 42, a motion compensation unit 44, a partitioning unit 45, an intra prediction processing unit 46, and an Intra Block Copy (IBC) unit 48. In some implementations, the video encoder 20 also includes an inverse quantization unit 58, an inverse transform processing unit 60, and an adder 62 for video block reconstruction. A loop filter 63, such as a deblocking filter, may be located between adder 62 and DPB 64 to filter block boundaries and remove blocking artifacts from the reconstructed video. In addition to the deblocking filter, other loop filters, such as a Sample Adaptive Offset (SAO) filter, a Cross-Component Sample Adaptive Offset (CCSAO) filter, and/or an Adaptive Loop Filter (ALF), may be used to filter the output of adder 62. It should be noted that, for the CCSAO technique, the present application is not limited to the embodiments described herein; rather, the present application may be applied to any case where an offset is selected for any one of the luma, Cb chroma, and Cr chroma components based on another of those components, so as to modify that component based on the selected offset. The first component referred to herein may be any one of the luma component, the Cb chroma component, and the Cr chroma component; the second component referred to herein may be any other one of these components; and the third component referred to herein may be the remaining one of these components. In some examples, the loop filter may be omitted, and the decoded video block may be provided directly to DPB 64 by adder 62. Video encoder 20 may take the form of fixed or programmable hardware units, or may be distributed among one or more of the described fixed or programmable hardware units.
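By way of non-limiting illustration only, the following Python sketch conveys the cross-component idea behind CCSAO described above: an offset for one component is selected based on the co-located sample of another component. The band classifier and the per-band offset table here are hypothetical stand-ins rather than the normative CCSAO design, and the sketch assumes luma and chroma arrays of equal resolution.

```python
import numpy as np

def ccsao_filter_cr(cr, luma, offsets, bands=16, bit_depth=10):
    """Sketch of a cross-component offset: each Cr sample is corrected by
    an offset chosen according to the band of the co-located luma sample.
    `offsets` is a hypothetical per-band offset table (len == bands)."""
    max_val = (1 << bit_depth) - 1
    # Classify each co-located luma sample into one of `bands` equal bands.
    band_idx = (luma.astype(np.int64) * bands) >> bit_depth
    # Add the offset selected by the luma band to the Cr sample, then clip.
    out = cr.astype(np.int64) + np.asarray(offsets)[band_idx]
    return np.clip(out, 0, max_val).astype(cr.dtype)
```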
Video data memory 40 may store video data to be encoded by components of video encoder 20. The video data in video data store 40 may be obtained, for example, from video source 18 shown in fig. 1. DPB 64 is a buffer that stores reference video data (e.g., reference frames or pictures) for use by video encoder 20 in encoding video data. Video data memory 40 and DPB 64 may be formed from any of a variety of memory devices. In various examples, video data memory 40 may be on-chip with other components of video encoder 20, or off-chip with respect to those components.
As shown in fig. 2, after receiving video data, the partitioning unit 45 within the prediction processing unit 41 partitions the video data into video blocks. This partitioning may also include partitioning the video frame into slices, tiles (e.g., sets of video blocks), or other larger Coding Units (CUs) according to a predefined split structure (e.g., a Quadtree (QT) structure) associated with the video data. A video frame is, or can be considered to have, a two-dimensional array or matrix of samples with sample values. The samples in the array may also be referred to as pixels or picture elements (pels). The number of samples in the horizontal and vertical directions (or axes) of the array or picture defines the size and/or resolution of the video frame. For example, a video frame may be divided into a plurality of video blocks by using QT partitioning. A video block is, or can again be considered to have, a two-dimensional array or matrix of samples with sample values, but with dimensions smaller than those of the video frame. The number of samples in the horizontal and vertical directions (or axes) of the video block defines the size of the video block. The video block may be further partitioned into one or more block partitions or sub-blocks (which may again form blocks) by, for example, iteratively using QT partitioning, Binary Tree (BT) partitioning, or Ternary Tree (TT) partitioning, or any combination thereof. It should be noted that the term "block" or "video block" as used herein may be a portion, in particular a rectangular (square or non-square) portion, of a frame or picture. Referring to HEVC and VVC, a block or video block may be or correspond to a Coding Tree Unit (CTU), a CU, a Prediction Unit (PU), or a Transform Unit (TU) and/or may be or correspond to a respective block (e.g., a Coding Tree Block (CTB), a Coding Block (CB), a Prediction Block (PB), or a Transform Block (TB)) and/or sub-block.
The prediction processing unit 41 may select one of a plurality of possible prediction coding modes, e.g., one of a plurality of intra-or inter-prediction coding modes, for the current video block based on the error result (e.g., coding rate and distortion level). The prediction processing unit 41 may provide the resulting intra-or inter-prediction encoded block to the adder 50 to generate a residual block and to the adder 62 to reconstruct the encoded block for subsequent use as part of a reference frame. Prediction processing unit 41 also provides at least one of the syntax elements (e.g., motion vectors, intra or inter mode indicators, partition information, and other such syntax information) to entropy encoding unit 56.
To select an appropriate intra-prediction encoding mode for the current video block, intra-prediction processing unit 46 may perform intra-prediction encoding of the current video block relative to one or more neighboring blocks in the same frame as the current block to be encoded, so as to provide spatial prediction. Motion estimation unit 42 and motion compensation unit 44 within prediction processing unit 41 perform inter-prediction encoding of the current video block relative to one or more prediction blocks in one or more reference frames, so as to provide temporal prediction. Video encoder 20 may perform multiple encoding passes, for example, to select an appropriate encoding mode for each block of video data.
In some embodiments, motion estimation unit 42 determines the inter-prediction mode for the current video frame by generating a motion vector, according to a predetermined pattern within the sequence of video frames, that indicates the displacement of a video block within the current video frame relative to a prediction block within a reference video frame. The motion estimation performed by the motion estimation unit 42 is the process of generating motion vectors that estimate the motion of video blocks. For example, a motion vector may indicate the displacement of a video block within a current video frame or picture relative to a prediction block within a reference frame, with respect to the current block being encoded or decoded within the current frame. The predetermined pattern may designate video frames in the sequence as P-frames or B-frames. The intra BC unit 48 may determine a vector for intra BC encoding, e.g., a block vector, in a manner similar to the determination of motion vectors by motion estimation unit 42 for inter prediction, or may determine the block vector using motion estimation unit 42.
A prediction block for a video block may be or may correspond to a block or reference block of a reference frame that is considered to closely match, in terms of pixel differences, the video block to be encoded; the pixel differences may be determined by Sum of Absolute Differences (SAD), Sum of Squared Differences (SSD), or other difference metrics. In some implementations, video encoder 20 may calculate values for sub-integer pixel positions of reference frames stored in DPB 64. For example, video encoder 20 may interpolate values for one-quarter pixel positions, one-eighth pixel positions, or other fractional pixel positions of the reference frame. Accordingly, the motion estimation unit 42 can perform a motion search with respect to both full pixel positions and fractional pixel positions and output a motion vector with fractional pixel accuracy.
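As a minimal sketch of the SAD-based matching described above (full-pixel search only; the fractional-pixel refinement by interpolation mentioned in the preceding paragraph is omitted), consider the following:

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of Absolute Differences between two equally sized blocks."""
    return int(np.abs(block_a.astype(np.int64) - block_b.astype(np.int64)).sum())

def full_pel_motion_search(cur_block, ref_frame, x, y, search_range=8):
    """Brute-force full-pixel motion search: return the (dx, dy, cost)
    minimizing SAD inside a +/- search_range window around (x, y)."""
    h, w = cur_block.shape
    best = (0, 0, sad(cur_block, ref_frame[y:y+h, x:x+w]))
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            ry, rx = y + dy, x + dx
            if ry < 0 or rx < 0 or ry + h > ref_frame.shape[0] or rx + w > ref_frame.shape[1]:
                continue  # candidate block falls outside the reference frame
            cost = sad(cur_block, ref_frame[ry:ry+h, rx:rx+w])
            if cost < best[2]:
                best = (dx, dy, cost)
    return best
```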
Motion estimation unit 42 calculates motion vector information for a video block in an inter-prediction encoded frame by comparing the position of the video block with the position of a predicted block of a reference frame selected from a first reference frame list (list 0) or a second reference frame list (list 1), each of which identifies one or more reference frames stored in DPB 64. The motion estimation unit 42 sends the determined motion vector information to the motion compensation unit 44 and then to the entropy encoding unit 56.
The motion compensation performed by the motion compensation unit 44 may involve acquiring or generating a prediction block based on the motion vector information determined by the motion estimation unit 42. After receiving motion vector information for the current video block, motion compensation unit 44 may locate the prediction block to which the motion vector points in one of the reference frame lists, retrieve the prediction block from DPB 64, and forward the prediction block to adder 50. Adder 50 then forms a residual video block of pixel differences by subtracting the pixel values of the prediction block provided by motion compensation unit 44 from the pixel values of the current video block being encoded. The pixel differences forming the residual video block may include a luminance component difference or a chrominance component difference or both. Motion compensation unit 44 may also generate syntax elements associated with the video blocks of the video frames for use by video decoder 30 in decoding the video blocks of the video frames. The syntax elements may include, for example, syntax elements defining motion vectors used to identify the prediction block, any flags indicating the prediction mode, or any other syntax information described herein. It should be noted that the motion estimation unit 42 and the motion compensation unit 44 may be highly integrated, but are illustrated separately for conceptual purposes.
In some embodiments, the intra BC unit 48 may generate vectors and extract prediction blocks in a manner similar to that described above in connection with the motion estimation unit 42 and the motion compensation unit 44, except that these prediction blocks are in the same frame as the current block being encoded and the vectors are referred to as block vectors rather than motion vectors. In particular, intra BC unit 48 may determine an intra prediction mode to be used to encode the current block. In some examples, intra BC unit 48 may encode the current block using various intra prediction modes, e.g., during separate encoding passes, and test their performance through rate-distortion analysis. Next, intra BC unit 48 may select, from the various tested intra prediction modes, an appropriate intra prediction mode to use and generate an intra mode indicator accordingly. For example, intra BC unit 48 may calculate rate-distortion values using rate-distortion analysis for the various tested intra prediction modes, and select the intra prediction mode with the best rate-distortion characteristics among the tested modes as the appropriate intra prediction mode to use. Rate-distortion analysis generally determines the amount of distortion (or error) between an encoded block and the original unencoded block that was encoded to produce the encoded block, as well as the bit rate (i.e., the number of bits) used to produce the encoded block. Intra BC unit 48 may calculate ratios from the distortions and rates of the various encoded blocks to determine which intra prediction mode exhibits the best rate-distortion value for the block.
In other examples, intra BC unit 48 may use, in whole or in part, motion estimation unit 42 and motion compensation unit 44 to perform such functions for intra BC prediction according to the implementations described herein. In either case, for intra block copying, the prediction block may be a block that is considered to closely match the block to be encoded in terms of pixel differences, which may be determined by SAD, SSD, or other difference metrics, and the identification of the prediction block may include calculation of values for sub-integer pixel positions.
Regardless of whether the prediction block is from the same frame according to intra prediction or a different frame according to inter prediction, video encoder 20 may form the residual video block by subtracting the pixel values of the prediction block from the pixel values of the current video block being encoded, thereby forming pixel differences. The pixel differences forming the residual video block may include both luma component differences and chroma component differences.
Intra-prediction processing unit 46 may intra-predict the current video block as an alternative to inter-prediction performed by motion estimation unit 42 and motion compensation unit 44 or intra-block copy prediction performed by intra BC unit 48, as described above. In particular, intra-prediction processing unit 46 may determine an intra-prediction mode used to encode the current block. To this end, intra-prediction processing unit 46 may encode the current block using various intra-prediction modes, e.g., during separate encoding passes, and intra-prediction processing unit 46 (or a mode selection unit in some examples) may select an appropriate intra-prediction mode to use from the intra-prediction modes tested. Intra-prediction processing unit 46 may provide information to entropy encoding unit 56 indicating the intra-prediction mode selected for the block. Entropy encoding unit 56 may encode information in the bitstream that indicates the selected intra-prediction mode.
After the prediction processing unit 41 determines the prediction block of the current video block via inter prediction or intra prediction, the adder 50 forms a residual video block by subtracting the prediction block from the current video block. Residual video data in the residual block may be included in one or more TUs and provided to transform processing unit 52. Transform processing unit 52 transforms the residual video data into residual transform coefficients using a transform, such as a Discrete Cosine Transform (DCT) or a conceptually similar transform.
The transform processing unit 52 may send the resulting transform coefficients to the quantization unit 54. The quantization unit 54 quantizes the transform coefficient to further reduce the bit rate. The quantization process may also reduce the bit depth associated with some or all of the coefficients. The quantization level may be modified by adjusting quantization parameters. In some examples, quantization unit 54 may then perform a scan of the matrix including the quantized transform coefficients. Alternatively, entropy encoding unit 56 may perform the scan.
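For illustration, the sketch below applies a floating-point 2D DCT-II to a residual block and quantizes the coefficients with a step size controlled by a quantization parameter. The Qstep ≈ 2^((QP−4)/6) relationship follows the common HEVC/VVC convention; real encoders use integer transforms and scaling arithmetic, so this is only a conceptual approximation.

```python
import numpy as np

def dct2d(block):
    """Naive orthonormal 2D DCT-II of a square residual block."""
    n = block.shape[0]
    k = np.arange(n)
    # basis[u, x] = cos(pi * (2x + 1) * u / (2n))
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    scale = np.full(n, np.sqrt(2.0 / n))
    scale[0] = np.sqrt(1.0 / n)
    t = basis * scale[:, None]          # n x n orthonormal DCT matrix
    return t @ block @ t.T

def quantize(coeffs, qp):
    """Uniform quantization with Qstep ~ 2^((qp - 4) / 6) (HEVC/VVC-style)."""
    qstep = 2.0 ** ((qp - 4) / 6.0)
    return np.round(coeffs / qstep).astype(np.int64)

residual = np.random.randint(-64, 64, size=(8, 8)).astype(np.float64)
levels = quantize(dct2d(residual), qp=27)   # quantized coefficient levels
```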
After quantization, entropy encoding unit 56 entropy encodes the quantized transform coefficients into a video bitstream using, for example, context Adaptive Variable Length Coding (CAVLC), context Adaptive Binary Arithmetic Coding (CABAC), syntax-based context adaptive binary arithmetic coding (SBAC), probability Interval Partition Entropy (PIPE) coding, or another entropy encoding method or technique. The encoded bitstream may then be sent to video decoder 30 as shown in fig. 1, or archived in storage device 32 as shown in fig. 1 for later transmission to video decoder 30 or retrieval by video decoder 30. Entropy encoding unit 56 may also entropy encode the motion vectors and other syntax elements of the current video frame being encoded.
Inverse quantization unit 58 and inverse transform processing unit 60 apply inverse quantization and inverse transform, respectively, to reconstruct the residual video block in the pixel domain to produce a reference block for prediction of other video blocks. As mentioned above, motion compensation unit 44 may generate motion compensated prediction blocks from one or more reference blocks of frames stored in DPB 64. Motion compensation unit 44 may also apply one or more interpolation filters to the prediction block to calculate sub-integer pixel values for motion estimation.
Adder 62 adds the reconstructed residual block to the motion compensated prediction block generated by motion compensation unit 44 to generate a reference block for storage in DPB 64. The reference block may then be used by intra BC unit 48, motion estimation unit 42, and motion compensation unit 44 as a prediction block to inter-predict another video block in a subsequent video frame.
Fig. 3 is a block diagram illustrating an exemplary video decoder 30 according to some embodiments of the present application. Video decoder 30 includes video data memory 79, entropy decoding unit 80, prediction processing unit 81, inverse quantization unit 86, inverse transform processing unit 88, adder 90, and DPB 92. The prediction processing unit 81 further includes a motion compensation unit 82, an intra prediction unit 84, and an intra BC unit 85. Video decoder 30 may perform a decoding process that is substantially reciprocal to the encoding process described above with respect to video encoder 20 with respect to fig. 2. For example, motion compensation unit 82 may generate prediction data based on the motion vectors received from entropy decoding unit 80, while intra-prediction unit 84 may generate prediction data based on the intra-prediction mode indicator received from entropy decoding unit 80.
In some examples, the units of video decoder 30 may be tasked to perform embodiments of the present application. Further, in some examples, implementations of the present disclosure may be divided among one or more of the units of video decoder 30. For example, intra BC unit 85 may perform implementations of the present disclosure alone or in combination with other units of video decoder 30, such as motion compensation unit 82, intra prediction unit 84, and entropy decoding unit 80. In some examples, video decoder 30 may not include intra BC unit 85, and the functionality of intra BC unit 85 may be performed by other components of prediction processing unit 81 (e.g., motion compensation unit 82).
Video data memory 79 may store video data, such as an encoded video bitstream, to be decoded by other components of video decoder 30. The video data stored in the video data memory 79 may be obtained, for example, from the storage device 32, from a local video source (e.g., a camera), via wired or wireless network communication of video data, or by accessing a physical data storage medium (e.g., a flash drive or hard disk). The video data memory 79 may include a Coded Picture Buffer (CPB) that stores encoded video data from an encoded video bitstream. DPB 92 of video decoder 30 stores reference video data for use in decoding video data by video decoder 30 (e.g., in intra- or inter-predictive coding modes). Video data memory 79 and DPB 92 may be formed from any of a variety of memory devices, such as Dynamic Random Access Memory (DRAM), including Synchronous DRAM (SDRAM), Magnetoresistive RAM (MRAM), Resistive RAM (RRAM), or other types of memory devices. For illustrative purposes, video data memory 79 and DPB 92 are depicted in fig. 3 as two distinct components of video decoder 30. Those skilled in the art will appreciate that video data memory 79 and DPB 92 may be provided by the same memory device or by separate memory devices. In some examples, video data memory 79 may be on-chip with other components of video decoder 30, or off-chip relative to those components.
During the decoding process, video decoder 30 receives an encoded video bitstream representing video blocks of encoded video frames and associated syntax elements. Video decoder 30 may receive syntax elements at the video frame level and/or the video block level. Entropy decoding unit 80 of video decoder 30 entropy decodes the bitstream to generate quantized coefficients, motion vectors, or intra-prediction mode indicators, as well as other syntax elements. Entropy decoding unit 80 then forwards the motion vector or intra-prediction mode indicator and other syntax elements to prediction processing unit 81.
When a video frame is encoded as an intra-prediction encoded (I) frame, or when encoding an intra-coded prediction block in another type of frame, the intra prediction unit 84 of the prediction processing unit 81 may generate prediction data for a video block of the current video frame based on the signaled intra prediction mode and reference data from previously decoded blocks of the current frame.
When a video frame is encoded as an inter-prediction encoded (i.e., B or P) frame, the motion compensation unit 82 of the prediction processing unit 81 generates one or more prediction blocks for the video block of the current video frame based on the motion vector information and other syntax elements received from the entropy decoding unit 80. Each of the prediction blocks may be generated from a reference frame within one of the reference frame lists. Video decoder 30 may construct a list of reference frames, i.e., list 0 and list 1, using a default construction technique based on the reference frames stored in DPB 92.
In some examples, when encoding a video block according to the intra BC mode described herein, intra BC unit 85 generates a prediction block for the current video block based on the block vector information and other syntax elements received from entropy decoding unit 80. The prediction block may be within a reconstructed region of the same picture as the current video block defined by video encoder 20.
The motion compensation unit 82 and/or the intra BC unit 85 determine prediction information for a video block of the current video frame by parsing the vector information and other syntax elements, and then use the prediction information to generate a prediction block for the current video block being decoded. For example, motion compensation unit 82 uses some of the received syntax elements to determine a prediction mode used to encode the video blocks of the video frame, an inter-prediction frame type (e.g., B or P), construction information for one or more of the reference frame lists for the frame, a motion vector for each inter-prediction encoded video block of the frame, an inter-prediction status for each inter-prediction encoded video block of the frame, and other information for decoding the video blocks in the current video frame.
Similarly, the intra BC unit 85 may use some of the received syntax elements, such as flags, to determine that the current video block was predicted using the intra BC mode, construction information indicating which video blocks of the frame are within the reconstructed region and should be stored in DPB 92, a block vector for each intra BC predicted video block of the frame, an intra BC prediction status for each intra BC predicted video block of the frame, and other information for decoding the video blocks in the current video frame.
Motion compensation unit 82 may also perform interpolation using interpolation filters, such as those used by video encoder 20 during encoding of video blocks, to calculate interpolation values for sub-integer pixels of the reference block. In this case, motion compensation unit 82 may determine interpolation filters used by video encoder 20 from the received syntax elements and use these interpolation filters to generate the prediction block.
The inverse quantization unit 86 inversely quantizes the quantized transform coefficients provided in the bitstream and entropy decoded by the entropy decoding unit 80, using the same quantization parameter calculated by video encoder 20 for each video block in the video frame. The inverse transform processing unit 88 applies an inverse transform (e.g., an inverse DCT, an inverse integer transform, or a conceptually similar inverse transform process) to the transform coefficients in order to reconstruct the residual block in the pixel domain.
After motion compensation unit 82 or intra BC unit 85 generates the prediction block for the current video block based on the vector and other syntax elements, adder 90 reconstructs a decoded video block for the current video block by summing the residual block from inverse transform processing unit 88 and the corresponding prediction block generated by motion compensation unit 82 or intra BC unit 85. A loop filter 91, such as a deblocking filter, SAO filter, CCSAO filter, and/or ALF, may be located between adder 90 and DPB 92 to further process the decoded video block. In some examples, loop filter 91 may be omitted, and the decoded video block may be provided directly to DPB 92 by adder 90. The decoded video blocks in a given frame are then stored in DPB 92, which stores reference frames for subsequent motion compensation of later video blocks. DPB 92, or a memory device separate from DPB 92, may also store decoded video for later presentation on a display device (e.g., display device 34 of fig. 1).
In a typical video coding process, a video sequence generally includes an ordered set of frames or pictures. Each frame may include three sample arrays, denoted SL, SCb, and SCr. SL is a two-dimensional array of luma samples. SCb is a two-dimensional array of Cb chroma samples. SCr is a two-dimensional array of Cr chroma samples. In other cases, a frame may be monochrome and thus include only one two-dimensional array of luma samples.
As shown in fig. 4A, video encoder 20 (or more specifically, partitioning unit 45) generates an encoded representation of a frame by first partitioning the frame into a set of CTUs. The video frame may include an integer number of CTUs ordered consecutively from left to right and top to bottom in raster scan order. Each CTU is the largest logical coding unit, and the width and height of the CTU are signaled by video encoder 20 in the sequence parameter set, such that all CTUs in the video sequence have the same size, which is one of 128 x 128, 64 x 64, 32 x 32, and 16 x 16. It should be noted that the application is not necessarily limited to a particular size. As shown in fig. 4B, each CTU may include one CTB of luma samples, two corresponding coding tree blocks of chroma samples, and syntax elements used to encode the samples of the coding tree blocks. The syntax elements describe properties of the different types of units of an encoded block of pixels and how the video sequence may be reconstructed at video decoder 30, including inter- or intra-prediction, intra-prediction modes, motion vectors, and/or other parameters. In a monochrome picture or a picture having three separate color planes, a CTU may comprise a single coding tree block and syntax elements used to encode the samples of the coding tree block. A coding tree block may be an N x N block of samples.
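A short sketch of the raster-scan CTU ordering described above (the CTU size would in practice be taken from the sequence parameter set; 128 is assumed here for illustration):

```python
def ctu_grid(frame_width, frame_height, ctu_size=128):
    """Yield the top-left corner of each CTU in raster-scan order
    (left to right, then top to bottom). CTUs at the right/bottom
    edges may be partial when the frame size is not a multiple of
    the CTU size."""
    for y in range(0, frame_height, ctu_size):
        for x in range(0, frame_width, ctu_size):
            yield (x, y)

# A 1920x1080 frame yields 15 x 9 = 135 CTU positions for 128x128 CTUs.
positions = list(ctu_grid(1920, 1080))
```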
To achieve better performance, video encoder 20 may recursively perform tree partitioning, such as binary tree partitioning, ternary tree partitioning, quadtree partitioning, or a combination thereof, on the coding tree blocks of a CTU and divide the CTU into smaller CUs. As depicted in fig. 4C, a 64 x 64 CTU 400 is first divided into four smaller CUs, each having a block size of 32 x 32. Among the four smaller CUs, the CUs 410 and 420 are each divided into four CUs with block sizes of 16 x 16. The two 16 x 16 CUs 430 and 440 are each further divided into four CUs with block sizes of 8 x 8. Fig. 4D depicts a quadtree data structure showing the final result of the partitioning process of CTU 400 as depicted in fig. 4C, with each leaf node of the quadtree corresponding to one CU of a respective size ranging from 32 x 32 to 8 x 8. Similar to the CTU depicted in fig. 4B, each CU may include a CB of luma samples, two corresponding coding blocks of chroma samples, and syntax elements used to encode the samples of the coding blocks. In a monochrome picture or a picture having three separate color planes, a CU may comprise a single coding block and syntax structures used to encode the samples of the coding block. It should be noted that the quadtree partitioning depicted in figs. 4C and 4D is for illustrative purposes only, and one CTU may be split into multiple CUs based on quadtree/ternary tree/binary tree partitioning to accommodate varying local characteristics. In the multi-type tree structure, one CTU is partitioned according to a quadtree structure, and each quadtree leaf CU may be further partitioned according to a binary and/or ternary tree structure. As shown in fig. 4E, there are five possible partitioning types for a CB having a width W and a height H, namely quaternary partitioning, horizontal binary partitioning, vertical binary partitioning, horizontal ternary partitioning, and vertical ternary partitioning.
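The recursive quadtree splitting of figs. 4C and 4D can be sketched as follows; the split decision is a placeholder callback (a real encoder decides splits by rate-distortion cost), and the example split set is a hypothetical illustration rather than the partitioning of fig. 4C itself:

```python
def quadtree_partition(x, y, size, should_split, min_size=8):
    """Recursively split a square block into four quadrants while
    should_split(x, y, size) returns True; return the leaf CUs as
    (x, y, size) tuples in raster order."""
    if size > min_size and should_split(x, y, size):
        half = size // 2
        leaves = []
        for dy in (0, half):
            for dx in (0, half):
                leaves += quadtree_partition(x + dx, y + dy, half,
                                             should_split, min_size)
        return leaves
    return [(x, y, size)]

# Hypothetical split rule: split the 64x64 CTU, then two of its 32x32 CUs.
splits = {(0, 0, 64), (0, 0, 32), (32, 32, 32)}
cus = quadtree_partition(0, 0, 64, lambda x, y, s: (x, y, s) in splits)
# -> two 32x32 leaves and eight 16x16 leaves
```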
In some implementations, video encoder 20 may further partition the coding blocks of a CU into one or more M x N PBs. A PB is a rectangular (square or non-square) block of samples to which the same prediction (inter or intra) is applied. A PU of a CU may include a PB of luma samples, two corresponding PBs of chroma samples, and syntax elements used to predict the PBs. In a monochrome picture or a picture having three separate color planes, a PU may include a single PB and syntax structures used to predict the PB. Video encoder 20 may generate a predicted luma block, a predicted Cb block, and a predicted Cr block for the luma PB, Cb PB, and Cr PB of each PU of the CU.
Video encoder 20 may use intra-prediction or inter-prediction to generate the prediction block for the PU. If video encoder 20 uses intra-prediction to generate the prediction block of the PU, video encoder 20 may generate the prediction block of the PU based on decoded samples of the frame associated with the PU. If video encoder 20 uses inter-prediction to generate the prediction block of the PU, video encoder 20 may generate the prediction block of the PU based on decoded samples of one or more frames other than the frame associated with the PU.
After video encoder 20 generates the predicted luma block, predicted Cb block, and predicted Cr block for the one or more PUs of the CU, video encoder 20 may generate a luma residual block for the CU by subtracting the predicted luma block of the CU from its original luma coding block, such that each sample in the luma residual block of the CU indicates a difference between a luma sample in one of the predicted luma blocks of the CU and a corresponding sample in the original luma coding block of the CU. Similarly, video encoder 20 may generate a Cb residual block and a Cr residual block for the CU, such that each sample in the Cb residual block indicates a difference between a Cb sample in one of the predicted Cb blocks of the CU and a corresponding sample in the original Cb coding block of the CU, and each sample in the Cr residual block indicates a difference between a Cr sample in one of the predicted Cr blocks of the CU and a corresponding sample in the original Cr coding block of the CU.
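The per-component residual generation just described is elementwise subtraction, as the following minimal sketch shows (the 'Y'/'Cb'/'Cr' dictionary keys are an assumed representation for illustration):

```python
import numpy as np

def residual_blocks(orig, pred):
    """Per-component residuals: each sample is the difference between the
    original coding block sample and the co-located predicted sample.
    `orig` and `pred` are dicts mapping 'Y', 'Cb', 'Cr' to sample arrays."""
    return {c: orig[c].astype(np.int32) - pred[c].astype(np.int32)
            for c in ('Y', 'Cb', 'Cr')}
```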
Further, as shown in fig. 4C, video encoder 20 may use quadtree partitioning to decompose the luma residual block, Cb residual block, and Cr residual block of a CU into one or more luma transform blocks, Cb transform blocks, and Cr transform blocks, respectively. A transform block is a rectangular (square or non-square) block of samples to which the same transform is applied. A TU of a CU may include a transform block of luma samples, two corresponding transform blocks of chroma samples, and syntax elements used to transform the transform block samples. Thus, each TU of a CU may be associated with a luma transform block, a Cb transform block, and a Cr transform block. In some examples, the luma transform block associated with a TU may be a sub-block of the luma residual block of the CU. The Cb transform block may be a sub-block of the Cb residual block of the CU. The Cr transform block may be a sub-block of the Cr residual block of the CU. In a monochrome picture or a picture having three separate color planes, a TU may comprise a single transform block and syntax structures used to transform the samples of the transform block.
Video encoder 20 may apply one or more transforms to the luma transform block of the TU to generate a luma coefficient block of the TU. The coefficient block may be a two-dimensional array of transform coefficients. The transform coefficients may be scalar quantities. Video encoder 20 may apply one or more transforms to the Cb transform block of the TU to generate a Cb coefficient block of the TU. Video encoder 20 may apply one or more transforms to the Cr transform blocks of the TUs to generate Cr coefficient blocks of the TUs.
After generating the coefficient block (e.g., the luma coefficient block, the Cb coefficient block, or the Cr coefficient block), video encoder 20 may quantize the coefficient block. Quantization generally refers to a process in which transform coefficients are quantized to potentially reduce the amount of data used to represent the transform coefficients, providing further compression. After video encoder 20 quantizes the coefficient blocks, video encoder 20 may entropy encode syntax elements that indicate the quantized transform coefficients. For example, video encoder 20 may perform CABAC on syntax elements indicating quantized transform coefficients. Finally, video encoder 20 may output a bitstream including a sequence of bits forming a representation of the encoded frames and associated data, which is stored in storage device 32 or transmitted to target device 14.
Upon receiving the bitstream generated by video encoder 20, video decoder 30 may parse the bitstream to obtain syntax elements from the bitstream. Video decoder 30 may reconstruct the frames of video data based at least in part on the syntax elements obtained from the bitstream. The process of reconstructing video data is typically reciprocal to the encoding process performed by video encoder 20. For example, video decoder 30 may perform an inverse transform on the coefficient blocks associated with the TUs of the current CU to reconstruct residual blocks associated with the TUs of the current CU. Video decoder 30 also reconstructs the coding block of the current CU by adding samples of the prediction block of the PU of the current CU to corresponding samples of the transform block of the TU of the current CU. After reconstructing the encoded blocks of each CU of a frame, video decoder 30 may reconstruct the frame.
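Conversely, the decoder-side reconstruction described in this paragraph amounts to adding the residual back to the prediction and clipping to the valid sample range; a minimal sketch, assuming 10-bit samples:

```python
import numpy as np

def reconstruct(pred, residual, bit_depth=10):
    """Decoder reconstruction: prediction samples plus the
    inverse-transformed residual, clipped to the sample range."""
    max_val = (1 << bit_depth) - 1
    return np.clip(pred.astype(np.int32) + residual, 0, max_val)
```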
As described above, video coding mainly uses two modes to achieve video compression, namely intra-frame prediction (or intra prediction) and inter-frame prediction (or inter prediction). Note that IBC may be considered as a form of intra prediction or as a third mode. Between the two modes, inter prediction contributes more to coding efficiency than intra prediction because motion vectors are used to predict the current video block from a reference video block.
However, with ever-improving video data capture techniques and finer video block sizes for preserving details in video data, the amount of data required to represent the motion vectors of the current frame has also increased significantly. One way to overcome this challenge is to benefit from the fact that a group of neighboring CUs in both the spatial and temporal domains tends not only to have similar video data for prediction purposes, but also to have similar motion vectors. Thus, by exploring the spatial and temporal correlations of the current CU, it is possible to use the motion information of spatially neighboring CUs and/or temporally co-located CUs as an approximation of the motion information (e.g., motion vector) of the current CU, which is also referred to as the "Motion Vector Predictor (MVP)" of the current CU.
Instead of encoding the actual motion vector of the current CU determined by motion estimation unit 42 into the video bitstream, the motion vector predictor of the current CU is subtracted from the actual motion vector of the current CU to generate a Motion Vector Difference (MVD) of the current CU, as described above in connection with fig. 2. By doing so, it is not necessary to encode the motion vector determined by the motion estimation unit 42 for each CU of a frame into the video bitstream, and the amount of data representing the motion information in the video bitstream can be significantly reduced.
Similar to the process of selecting a prediction block in a reference frame during inter-prediction of a code block, video encoder 20 and video decoder 30 both need to employ a set of rules to construct a motion vector candidate list (also referred to as a "merge list") for the current CU using those potential candidate motion vectors associated with spatially neighboring CUs and/or temporally co-located CUs of the current CU, and then select one member from the motion vector candidate list as a motion vector predictor for the current CU. By doing so, there is no need to send the motion vector candidate list itself from video encoder 20 to video decoder 30, and the index of the selected motion vector predictor within the motion vector candidate list is sufficient for video encoder 20 and video decoder 30 to use the same motion vector predictor within the motion vector candidate list for encoding and decoding the current CU.
In general, the basic inter prediction scheme applied in VVC remains almost the same as that of HEVC, except that several prediction tools are further extended, added, and/or improved, such as extended merge prediction, Merge mode with Motion Vector Differences (MMVD), and Geometric Partitioning Mode (GPM).
Extended merge prediction
With ever-improving video data capture techniques and finer video block sizes for preserving details in video data, the amount of data required to represent the motion vectors of the current picture has also increased significantly. One way to overcome this challenge is to use the motion information (e.g., motion vectors) of spatially neighboring CUs, temporally co-located CUs, and the like of the current CU as an approximation (e.g., prediction) of the motion information of the current CU, which is also referred to as the "Motion Vector Predictor (MVP)" of the current CU. As used throughout this disclosure, a "motion vector" includes not only motion vectors between CUs of different frames (e.g., between temporally co-located CUs in inter prediction), but also block vectors between CUs in the same frame (e.g., between spatially neighboring CUs in intra prediction).
As with the process of selecting a prediction block in a reference picture during inter prediction of a coded block, both video encoder 20 and video decoder 30 need to employ a set of rules to construct a MVP candidate list for the current CU, and then select one MVP candidate from the MVP candidate list as the MVP for the current CU. By doing so, the MVP candidate list itself need not be transmitted between the video encoder 20 and the video decoder 30, and the index of the MVP candidate selected from the MVP candidate list is sufficient for the video encoder 20 and the video decoder 30 to encode and decode the current CU using the same MVP candidate selected from the MVP candidate list.
In VVC, the MVP candidate list is constructed by sequentially including the following five types of MVPs:
- spatial MVPs from spatially neighboring CUs (i.e., spatial candidates);
- temporal MVPs from temporally co-located CUs (i.e., temporal candidates);
- History-based MVPs (HMVPs) from a first-in first-out (FIFO) table;
- pairwise average MVPs; and
- zero MVPs.
The size of the MVP candidate list is signaled in the sequence parameter set header, and the maximum allowed size of the MVP candidate list is 6. For each CU encoded in merge mode, the index of the best MVP candidate is encoded using truncated unary binarization. The first bin of the index is encoded using context coding, and bypass coding is used for the remaining bins of the index.
The derivation process of each type of MVP is provided as follows. As in HEVC, VVC also supports deriving MVP candidate lists for all CUs in parallel within a region of a certain size.
Deriving MVP from spatial candidates
The derivation of MVPs from spatial candidates (e.g., the CUs adjacent to the current CU 101 in fig. 5) in VVC is the same as in HEVC, except that the locations of the first two spatial candidates are swapped. A maximum of four spatial candidates are selected from the candidates located at the positions depicted in fig. 5, namely the top position B0, the left position A0, the upper-right position B1, the lower-left position A1, and the upper-left position B2. The derivation is performed in the order of the CUs at positions B0, A0, B1, A1, and B2. The CU at position B2 is considered only when one or more of the CUs at positions B0, A0, B1, and A1 are unavailable (e.g., because they belong to other slices or tiles) or are intra-coded.
After the CU at position B0 is added as a candidate to the merge candidate list, the remaining candidates are subjected to a redundancy check before being added, which ensures that candidates having the same motion information are excluded from the merge candidate list, thereby improving coding efficiency. To reduce computational complexity, not all possible candidate pairs are considered in the redundancy check. Instead, only the pairs linked by arrows in fig. 6 are considered, and a candidate is added to the merge candidate list only when its motion information differs from that of the corresponding candidate used in the redundancy check. Spatial MVPs derived from the candidates in the merge candidate list are added to the MVP candidate list.
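The spatial-candidate scan with its partial redundancy check can be sketched as follows; the CHECK_PAIRS mapping is a hypothetical stand-in for the arrow-linked pairs of fig. 6, and motion information is modeled as opaque comparable values:

```python
# Scan order for spatial candidates: B0, A0, B1, A1, then B2
# (B2 only when one of the first four is unavailable or intra-coded).
SCAN_ORDER = ["B0", "A0", "B1", "A1", "B2"]

# Hypothetical stand-in for the arrow-linked pairs of fig. 6: each position
# is redundancy-checked only against the listed earlier positions.
CHECK_PAIRS = {"A0": ["B0"], "B1": ["B0"], "A1": ["A0"], "B2": ["B0", "A0"]}

def spatial_candidates(neighbors):
    """`neighbors` maps position name -> motion info, or None when the
    neighboring CU is unavailable or intra-coded."""
    merge_list, added = [], {}
    for pos in SCAN_ORDER:
        if pos == "B2" and all(neighbors.get(p) for p in SCAN_ORDER[:4]):
            break  # B2 is considered only when an earlier candidate is missing
        mi = neighbors.get(pos)
        if mi is None:
            continue
        # Partial redundancy check: compare only against paired positions.
        if any(added.get(p) == mi for p in CHECK_PAIRS.get(pos, [])):
            continue
        merge_list.append(mi)
        added[pos] = mi
        if len(merge_list) >= 4:
            break  # at most four spatial candidates
    return merge_list
```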
Deriving MVP from temporal candidates
Only one temporal candidate is added to the merge candidate list during the derivation of the MVP from temporal candidates. Specifically, when deriving the MVP from this temporal candidate, a scaled motion vector is derived based on the co-located CU (e.g., col_CU 301 in fig. 7), which is the temporal candidate belonging to the co-located picture (e.g., col_pic 302 in fig. 7) of the current CU (e.g., curr_CU 303 in fig. 7), and the scaled motion vector is added to the MVP candidate list as the temporal MVP candidate. The reference picture list and the reference picture index used to derive the co-located CU are explicitly signaled in the slice header. As shown in fig. 7, the scaled motion vector is obtained (i.e., scaled) from the motion vector of the co-located CU using the Picture Order Count (POC) distances tb and td, where tb is defined as the POC difference between the reference picture (e.g., curr_ref 305 in fig. 7) of the current picture (e.g., curr_pic 304 in fig. 7) and the current picture, and td is defined as the POC difference between the reference picture (e.g., col_ref 306 in fig. 7) of the co-located picture and the co-located picture. The reference picture index of the temporal candidate is set equal to zero.
As depicted in fig. 8, the location of the temporal candidate (i.e., the co-located CU) for the current CU 401 is selected between positions C0 and C1. If the CU at position C0 in the co-located picture is not available, is intra-coded, or is outside the current row of CTUs, the CU at position C1 is used as the co-located CU for deriving the temporal MVP candidate. Otherwise, the CU at position C0 is used as the co-located CU for deriving the temporal MVP candidate.
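The POC-based scaling of fig. 7 reduces to multiplying the co-located CU's motion vector by tb/td; a minimal sketch, without the fixed-point arithmetic and clipping of the scale factor used in the actual standards:

```python
def scale_temporal_mv(col_mv, curr_poc, curr_ref_poc, col_poc, col_ref_poc):
    """Scale the co-located CU's motion vector by the ratio of POC
    distances: tb between the current picture and its reference picture,
    td between the co-located picture and its reference picture."""
    tb = curr_poc - curr_ref_poc
    td = col_poc - col_ref_poc
    if td == 0:
        return col_mv
    return (round(col_mv[0] * tb / td), round(col_mv[1] * tb / td))

# e.g. a co-located MV of (8, -4) with tb=1 and td=2 scales to (4, -2)
scaled = scale_temporal_mv((8, -4), curr_poc=5, curr_ref_poc=4,
                           col_poc=6, col_ref_poc=4)
```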
Derivation of HMVP candidates
After the spatial MVPs and the temporal MVP, HMVP candidates are added to the MVP candidate list. The motion information of previously coded blocks is stored in an HMVP table and used as MVPs for the current CU. A table with multiple HMVP candidates is maintained during the encoding/decoding process. The table is reset (emptied) when a new CTU row is encountered. Whenever there is a non-sub-block inter-coded CU, the associated motion information is added to the last entry of the HMVP table as a new HMVP candidate.
The HMVP table size is set to 6. When a new HMVP candidate is inserted into the HMVP table, a constrained FIFO rule is applied: a redundancy check is first performed to find whether an identical HMVP already exists in the table. If found, the identical HMVP is removed from the table, all subsequent HMVP candidates are moved forward, and the new candidate is added to the last entry of the table.
HMVP candidates may be used in the MVP candidate list construction process. The most recent HMVP candidates in the HMVP table are checked in order and inserted into the MVP candidate list after the temporal MVP candidate. A redundancy check is applied to the HMVP candidates with respect to the spatial and/or temporal MVP candidates.
To reduce the number of redundancy check operations, the following simplifications are introduced:
- the last two entries in the HMVP table are redundancy checked against the spatial MVP candidates derived from the spatial candidates at positions A1 and B1, respectively; and
- the insertion of HMVP candidates into the MVP candidate list is terminated once the total number of available MVP candidates reaches the maximum allowed size of the MVP candidate list minus 1.
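The constrained-FIFO update of the HMVP table described above can be sketched as follows; the (mv, ref_idx) tuples are a hypothetical encoding of motion information used only for illustration:

```python
from collections import deque

HMVP_TABLE_SIZE = 6

def hmvp_update(table, new_cand):
    """Constrained FIFO: if an identical candidate exists, remove it so the
    new one moves to the most-recent (last) entry; otherwise evict the
    oldest entry when the table is full."""
    if new_cand in table:
        table.remove(new_cand)          # subsequent entries move forward
    elif len(table) == HMVP_TABLE_SIZE:
        table.popleft()                 # plain FIFO eviction when full
    table.append(new_cand)              # newest candidate at the last entry

table = deque()                         # reset (emptied) per CTU row
hmvp_update(table, ((4, -2), 0))        # hypothetical (mv, ref_idx) entries
hmvp_update(table, ((1, 1), 0))
```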
Derivation of pairwise average MVP candidates
The pairwise average MVP candidates are generated by averaging predefined pairs of the first two merge candidates in the existing merge candidate list. The first merge candidate in a predefined pair may be denoted p0Cand and the second merge candidate may be denoted p1Cand. An average motion vector is calculated for each reference picture list separately, according to the availability of the motion vectors of p0Cand and p1Cand. If two motion vectors are available for one reference picture list, the two motion vectors are averaged even when they point to different reference pictures, and the reference picture of the average motion vector is set to the reference picture of p0Cand. If only one motion vector is available for a reference picture list, that motion vector is used directly. If no motion vector is available for a reference picture list, the motion vector and the reference picture index of that list remain invalid.
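The per-list availability rules above can be sketched as follows; this is an illustrative Python fragment (hypothetical data layout: each candidate maps a list index to an (mv, ref_idx) tuple), not the normative derivation:

```python
def pairwise_average(p0_cand, p1_cand):
    """Average two merge candidates per reference picture list."""
    avg = {}
    for lx in (0, 1):
        m0, m1 = p0_cand.get(lx), p1_cand.get(lx)
        if m0 and m1:
            mv = ((m0[0][0] + m1[0][0] + 1) >> 1,   # illustrative rounding
                  (m0[0][1] + m1[0][1] + 1) >> 1)
            avg[lx] = (mv, m0[1])      # keep the reference picture of p0Cand
        elif m0 or m1:
            avg[lx] = m0 or m1         # single available MV is used directly
        # neither available: the list stays invalid (absent from avg)
    return avg
```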
Zero MVP
When the MVP candidate list is not full after adding the pairwise average MVP candidates, zero MVPs are inserted at the end of the MVP candidate list until the maximum allowed size of the MVP candidate list is reached.
MMVD
As described above, in the merge mode, motion information (i.e., an MVP candidate) is implicitly derived from an MVP candidate list constructed for the current CU and directly used as the MV of the current CU for generating its prediction samples, which may lead to some error between the actual MV of the current CU and the implicitly derived MVP. To improve the accuracy of the MV of the current CU, MMVD is introduced in VVC, wherein a Motion Vector Difference (MVD) of the current CU is added to the implicitly derived MVP to obtain the MV of the current CU. An MMVD flag is signaled after the regular merge flag to specify whether MMVD mode is used for the current CU.
In MMVD mode, after an MVP candidate is selected from the first two MVP candidates in the MVP candidate list, MMVD information is signaled, wherein the MMVD information includes an MMVD candidate flag specifying which of the first two MVP candidates is selected as the MV base, a distance index indicating the motion magnitude information of the MVD, and a direction index indicating the motion direction information of the MVD.
The distance index specifying the motion magnitude information of the MVD indicates a predefined offset from a starting point (e.g., represented by a dashed circle in fig. 9) in a reference picture (e.g., L0 reference picture 501 or L1 reference picture 503 in fig. 9) of the current CU to which the selected MVP candidate points; the MVD may be derived from the offset and added to the selected MVP candidate. The relationship between the distance index and the predefined offset is specified in Table 1 below.
TABLE 1
Distance IDX: 0, 1, 2, 3, 4, 5, 6, 7
Offset (in units of luma samples): 1/4, 1/2, 1, 2, 4, 8, 16, 32
The direction index specifies the sign of the MVD, which represents the direction of the MVD relative to the starting point. Table 2 specifies the relationship between the direction index and the predefined sign. It should be noted that the meaning of the MVD sign may vary depending on the information of the selected MVP candidate. When the selected MVP candidate is a uni-predicted MV, or a bi-predicted MV whose two MVs point to the same side of the current picture (i.e., the POCs of the two reference pictures of the current picture, e.g., the reference picture of list 0 and the reference picture of list 1, also referred to as the L0 reference picture and the L1 reference picture, respectively, are both greater than the POC of the current picture, or both less than the POC of the current picture), the sign in Table 2 specifies the sign of the MVD added to the selected MVP candidate. When the selected MVP candidate is a bi-predicted MV whose two MVs point to different sides of the current picture (i.e., the POC of one reference picture of the current picture is greater than the POC of the current picture and the POC of the other reference picture is less than the POC of the current picture): if the POC distance of the L0 reference picture (i.e., the POC distance between the L0 reference picture and the current picture) is greater than the POC distance of the L1 reference picture (i.e., the POC distance between the L1 reference picture and the current picture), the sign in Table 2 specifies the sign of the list 0 MVD (MVD0) added to the list 0 MVP (MVP0) of the selected MVP candidate, while the sign of the list 1 MVD (MVD1) added to the list 1 MVP (MVP1) of the selected MVP candidate is the opposite of the sign in Table 2; otherwise, if the POC distance of the L1 reference picture is greater than the POC distance of the L0 reference picture, the sign in Table 2 specifies the sign of MVD1 added to MVP1, while the sign of MVD0 added to MVP0 is the opposite of the sign in Table 2.
TABLE 2
Direction IDX: 00, 01, 10, 11
x-axis: +, −, N/A, N/A
y-axis: N/A, N/A, +, −
The MVD is scaled according to the POC distances. If the POC distances of the L0 reference picture and the L1 reference picture are the same, no scaling of the MVD is needed. Otherwise, if the POC distance of the L0 reference picture is greater than the POC distance of the L1 reference picture, MVD1 is scaled; if the POC distance of the L1 reference picture is greater than the POC distance of the L0 reference picture, MVD0 is scaled.
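Putting the distance index, direction index, and MV base together, a simplified uni-prediction sketch might look like this (Python; the offset table assumes quarter-pel MV storage and mirrors Tables 1 and 2 above; POC scaling and bi-prediction sign handling are omitted):

```python
# Offsets in quarter-pel units for 1/4-pel ... 32-pel (Table 1); the four
# signs follow Table 2: +x, -x, +y, -y.
MMVD_OFFSETS = [1, 2, 4, 8, 16, 32, 64, 128]
MMVD_SIGNS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def mmvd_mv(mvp, distance_idx, direction_idx):
    """Add the signaled MMVD offset to the selected MVP (uni-pred case)."""
    off = MMVD_OFFSETS[distance_idx]
    sx, sy = MMVD_SIGNS[direction_idx]
    return (mvp[0] + sx * off, mvp[1] + sy * off)
```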
GPM
In VVC, GPM is supported for inter prediction. GPM is signaled using a CU-level flag as one merge mode, the other merge modes including the regular merge mode, MMVD mode, CIIP mode, and sub-block merge mode. GPM supports 64 partitions in total for each possible CU size other than 8×64 and 64×8.
When GPM is used, a CU is split into two parts by a geometrically positioned straight line. The location of the splitting line is mathematically derived from the angle and offset parameters of the particular partition. Each part of the CU obtained by the geometric partition is inter predicted using its own motion, and only uni-prediction is allowed for each partition, i.e., each part has one motion vector and one reference index. The uni-prediction motion constraint is applied to ensure that, as with conventional bi-prediction, only two motion-compensated predictions are required per CU.
If GPM is used for the current CU, a geometric partition index indicating the partition mode of the geometric partition (i.e., its angle and offset) and two merge indexes (one for each partition) are further signaled.
The uni-prediction candidate list is derived directly from the merge candidate list constructed according to the extended merge prediction process described above. Let n denote the index of a uni-prediction motion vector in the uni-prediction candidate list. The LX motion vector of the n-th merge candidate in the merge candidate list, with X equal to the parity of n, is used as the n-th uni-prediction motion vector of the GPM. These motion vectors are marked with an "x" in fig. 10. In case the corresponding LX motion vector of the n-th merge candidate does not exist, the L(1−X) motion vector of the same candidate is used instead as the uni-prediction motion vector of the GPM.
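The parity rule can be expressed compactly; the sketch below is a hypothetical Python illustration where each merge candidate maps list index 0/1 to a motion vector or None:

```python
def gpm_uni_candidates(merge_list):
    """Derive GPM uni-prediction MVs with the parity rule."""
    uni = []
    for n, cand in enumerate(merge_list):
        x = n & 1                                  # X = parity of n
        mv = cand.get(x)
        uni.append(mv if mv is not None else cand.get(1 - x))  # L(1-X) fallback
    return uni
```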
CIIP
In VVC, when a CU is coded in merge mode, if the CU contains at least 64 luma samples (i.e., the CU width times the CU height is equal to or greater than 64), and if both the CU width and height are less than 128 luma samples, an additional flag is signaled to indicate whether the CIIP mode is applied to the current CU. In CIIP mode, a prediction signal is obtained by combining an inter prediction signal with an intra prediction signal. The inter prediction signal in CIIP mode is derived using the same inter prediction process as applied in the regular merge mode, and the intra prediction signal in CIIP mode is derived following the regular intra prediction process with the planar mode. The intra and inter prediction signals are then combined using a weighted average, where the weight value wt is calculated as follows according to the coding modes of the top and left neighboring blocks of the current CU 1601 (as shown in fig. 11):
- isIntraTop is set to 1 if the top neighboring block is available and intra coded; otherwise isIntraTop is set to 0;
- isIntraLeft is set to 1 if the left neighboring block is available and intra coded; otherwise isIntraLeft is set to 0;
- If (isIntraLeft + isIntraTop) is equal to 2, wt is set to 3;
- Otherwise, if (isIntraLeft + isIntraTop) is equal to 1, wt is set to 2;
- Otherwise, wt is set to 1.
The prediction signal P_CIIP in CIIP mode is derived as follows:

P_CIIP = ((4 − wt) × P_inter + wt × P_intra + 2) >> 2

where P_inter is the inter prediction signal in CIIP mode, P_intra is the intra prediction signal in CIIP mode, wt is the weight value, and >> represents a right-shift operation.
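For illustration, the weight derivation and blend above fit in a few lines; a scalar Python sketch (per-sample application over a block is analogous):

```python
def ciip_blend(p_inter, p_intra, is_intra_top, is_intra_left):
    """Blend the CIIP inter and intra signals with the weight rules above."""
    s = is_intra_top + is_intra_left
    wt = 3 if s == 2 else 2 if s == 1 else 1
    return ((4 - wt) * p_inter + wt * p_intra + 2) >> 2
```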
Intra block copy in Versatile Video Coding (VVC)
Intra Block Copy (IBC) is a tool adopted in the HEVC extensions for SCC. It is known to significantly improve the coding efficiency of screen content material. Since IBC is implemented as a block-level coding mode, Block Matching (BM) is performed at the encoder to find the optimal block vector (or motion vector) for each CU. Here, a block vector is used to indicate the displacement from the current block to a reference block that has already been reconstructed within the current picture. The luma block vector of an IBC-coded CU has integer precision. The chroma block vector is also rounded to integer precision. When combined with AMVR, the IBC mode can switch between 1-pel and 4-pel motion vector precision. An IBC-coded CU is treated as a third prediction mode in addition to the intra and inter prediction modes. The IBC mode is applicable to CUs with both width and height less than or equal to 64 luma samples.
On the encoder side, hash-based motion estimation is performed for IBC. The encoder performs RD checking for blocks with width or height no larger than 16 luma samples. For the non-merge mode, the block vector search is performed first using a hash-based search. If the hash search does not return a valid candidate, a block-matching-based local search is performed.
In the hash-based search, hash key matching (32-bit CRC) between the current block and a reference block is extended to all allowed block sizes. The hash key calculation for every position in the current picture is based on 4x4 sub-blocks. For a current block of larger size, its hash key is determined to match that of a reference block when all the hash keys of all its 4x4 sub-blocks match the hash keys at the corresponding reference positions. If the hash keys of multiple reference blocks are found to match the hash key of the current block, the block vector cost of each matching reference block is calculated, and the one with the minimum cost is selected.
In the block matching search, the search range is set to cover both the previous CTU and the current CTU.
At the CU level, IBC mode is signaled with a flag, and it can be signaled as IBC AMVP mode or IBC skip/merge mode, as follows:
IBC skip/merge mode - a merge candidate index is used to indicate which of the block vectors in the list, from neighboring candidate IBC coded blocks, is used to predict the current block. The merge list consists of spatial, HMVP, and pairwise candidates.
IBC AMVP mode - the block vector difference is coded in the same way as a motion vector difference. The block vector prediction method uses two candidates as predictors, one from the left neighbor and one from the above neighbor (if IBC coded). When either neighbor is not available, a default block vector is used as a predictor. A flag is signaled to indicate the block vector predictor index.
IBC reference region
To reduce memory consumption and decoder complexity, IBC in VVC allows only the reconstructed portion of a predefined area, including the area of the current CTU and some area of the left CTU. Fig. 12 shows the reference region of IBC mode, where each block represents a 64x64 luma sample unit.
Depending on the location of the current coding CU within the current CTU, the following applies:
If the current block falls into the top-left 64x64 block of the current CTU, then, in addition to the samples already reconstructed in the current CTU, the current block may refer to the reference samples in the bottom-right 64x64 block of the left CTU using CPR mode. The current block may also refer to the reference samples in the bottom-left 64x64 block and the top-right 64x64 block of the left CTU using CPR mode.
If the current block falls into the top-right 64x64 block of the current CTU, then, in addition to the samples already reconstructed in the current CTU, if the luma location (0, 64) relative to the current CTU has not yet been reconstructed, the current block may also refer to the reference samples in the bottom-left 64x64 block and the bottom-right 64x64 block of the left CTU using CPR mode; otherwise, the current block may also refer to the reference samples in the bottom-right 64x64 block of the left CTU.
If the current block falls into the bottom-left 64x64 block of the current CTU, then, in addition to the samples already reconstructed in the current CTU, if the luma location (64, 0) relative to the current CTU has not yet been reconstructed, the current block may also refer to the reference samples in the top-right 64x64 block and the bottom-right 64x64 block of the left CTU using CPR mode. Otherwise, the current block may also refer to the reference samples in the bottom-right 64x64 block of the left CTU using CPR mode.
If the current block falls into the bottom-right 64x64 block of the current CTU, it can only refer to the samples already reconstructed in the current CTU using CPR mode.
This limitation allows IBC mode to be implemented using local on-chip memory for hardware implementation.
IBC interaction with other coding tools
Interactions between IBC mode and other inter coding tools in VVC, such as pairwise merge candidates, history-based motion vector predictor (HMVP), combined intra/inter prediction mode (CIIP), merge mode with motion vector differences (MMVD), and geometric partitioning mode (GPM), are as follows:
IBC can be used with pairwise merge candidates and HMVP. A new pairwise IBC merge candidate can be generated by averaging two IBC merge candidates. For HMVP, IBC motion is inserted into a history buffer for future reference.
IBC cannot be used in combination with the inter tools affine motion, CIIP, MMVD, and GPM.
When the dual tree partition is used, IBC is not allowed for chroma coding blocks.
Unlike in the HEVC screen content codec extension, the current picture is no longer included as one of the reference pictures in reference picture list 0 for IBC prediction. The derivation of motion vectors for IBC mode excludes all neighboring blocks in inter mode and vice versa. The following IBC design aspects apply:
IBC shares the same procedure as conventional MV merging, including paired merge candidates and history-based motion predictors, but does not allow TMVP and zero vectors, as they are not valid for IBC mode.
Separate HMVP buffers (5 candidates per HMVP buffer) are used for conventional MV and IBC.
The block vector constraint is implemented in the form of a bitstream conformance constraint: the encoder must ensure that no invalid vectors are present in the bitstream, and merge must not be used if the merge candidate is invalid (out of range or zero). Such a bitstream conformance constraint is expressed in terms of a virtual buffer as described below.
For deblocking, IBC is handled as an inter mode.
If the current block is coded using IBC prediction mode, AMVR does not use quarter-pel precision; instead, AMVR is signaled only to indicate whether the MV is integer-pel or 4-pel precision.
The number of IBC combining candidates may be signaled in the slice header separately from the number of regular, sub-block and geometric combining candidates.
The virtual buffer concept is used to describe the allowable reference region and the valid block vectors of the IBC prediction mode. Denote the CTU size as ctbSize; the virtual buffer ibcBuf has width wIbcBuf = 128x128/ctbSize and height hIbcBuf = ctbSize. For example, for a CTU size of 128x128, the ibcBuf size is 128x128; for a CTU size of 64x64, the ibcBuf size is 256x64; and for a CTU size of 32x32, the ibcBuf size is 512x32.
The size of a VPDU is min(ctbSize, 64) in each dimension, i.e., Wv = min(ctbSize, 64).
The virtual IBC buffer ibcBuf is maintained as follows.
At the beginning of decoding each CTU row, the entire ibcBuf is refreshed with the invalid value −1.
At the beginning of decoding a VPDU at (xVPDU, yVPDU) relative to the top-left corner of the picture, ibcBuf[x][y] = −1 is set for x = xVPDU % wIbcBuf, ..., xVPDU % wIbcBuf + Wv − 1 and y = yVPDU % ctbSize, ..., yVPDU % ctbSize + Wv − 1.
After decoding a CU that contains position (x, y) relative to the top-left corner of the picture, set
ibcBuf[x % wIbcBuf][y % ctbSize] = recSample[x][y]
For a block covering coordinates (x, y), the block vector bv = (bv[0], bv[1]) is valid if the following is true; otherwise it is invalid:
ibcBuf[(x + bv[0]) % wIbcBuf][(y + bv[1]) % ctbSize] shall not be equal to −1.
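The buffer maintenance and validity rules above can be collected into a small class; the following Python sketch is illustrative only (luma samples only, helper names are assumptions):

```python
class IbcVirtualBuffer:
    """Virtual IBC buffer ibcBuf per the rules above (illustrative)."""

    def __init__(self, ctb_size):
        self.ctb = ctb_size                      # hIbcBuf = ctbSize
        self.w = 128 * 128 // ctb_size           # wIbcBuf = 128*128 / ctbSize
        self.wv = min(ctb_size, 64)              # VPDU size Wv
        self.buf = [[-1] * self.ctb for _ in range(self.w)]   # buf[x][y]

    def reset_ctu_row(self):
        """Refresh the whole buffer with the invalid value -1."""
        self.buf = [[-1] * self.ctb for _ in range(self.w)]

    def reset_vpdu(self, x_vpdu, y_vpdu):
        """Invalidate the buffer area of a VPDU about to be decoded."""
        for dx in range(self.wv):
            for dy in range(self.wv):
                self.buf[(x_vpdu % self.w) + dx][(y_vpdu % self.ctb) + dy] = -1

    def write(self, x, y, sample):
        """Record a reconstructed sample at picture position (x, y)."""
        self.buf[x % self.w][y % self.ctb] = sample

    def bv_is_valid(self, x, y, bv):
        """A block vector is valid only if it points at reconstructed samples."""
        return self.buf[(x + bv[0]) % self.w][(y + bv[1]) % self.ctb] != -1
```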
Intra block copy in Enhanced Compression Model (ECM)
In ECM, IBC is improved in the following aspects.
IBC merge/AMVP list construction
The IBC merge/AMVP list construction is modified as follows:
An IBC merge/AMVP candidate may be inserted into the IBC merge/AMVP candidate list only if it is valid.
The top right, bottom left and top left spatial candidates and one pairwise average candidate may be added to the IBC merge/AMVP candidate list.
Adaptive reordering of merge candidates with template matching (ARMC-TM) is applied to the IBC merge list.
The HMVP table size for IBC is increased to 25. After up to 20 IBC merge candidates are derived with full pruning, they are reordered together. After reordering, the first 6 candidates with the lowest template matching costs are selected as the final candidates in the IBC merge list.
The zero vectors used to fill the IBC merge/AMVP list are replaced with a set of BVP candidates located in the IBC reference region. A zero vector is invalid as a block vector in IBC merge mode, and consequently it is discarded as a BVP in the IBC candidate list.
Three candidates are located at the nearest corners of the reference region, and three additional candidates are determined in the middle of the three sub-regions (A, B, and C), whose coordinates are determined by the width and height of the current block and the Δx and Δy parameters, as shown in fig. 13.
IBC with template matching
Template matching is used in IBC for both IBC merge mode and IBC AMVP mode.
Compared to the merge list used in the regular IBC merge mode, the IBC-TM merge list is modified such that the candidates are selected according to a pruning method with a motion distance between the candidates, as in the regular TM merge mode. The ending zero-motion fulfillment is replaced by motion vectors to the left (−W, 0), top (0, −H), and top-left (−W, −H), where W is the width of the current CU and H is the height of the current CU.
In the IBC-TM merge mode, the selected candidates are refined with the template matching method prior to the RDO or decoding process. The IBC-TM merge mode is put in competition with the regular IBC merge mode, and a TM-merge flag is signaled.
In the IBC-TM AMVP mode, up to 3 candidates are selected from the IBC-TM merge list. Each of these 3 selected candidates is refined using the template matching method and sorted according to its resulting template matching cost. Only the first 2 are then considered in the motion estimation process, as usual.
The template matching refinement for both the IBC-TM merge and AMVP modes is quite simple since IBC motion vectors are constrained (i) to be integer and (ii) to lie within the reference region, as shown in fig. 12. So, in the IBC-TM merge mode, all refinements are performed at integer precision, and in the IBC-TM AMVP mode, they are performed at integer or 4-pel precision depending on the AMVR value. Such a refinement accesses only samples without interpolation. In both cases, the refined motion vectors and the templates used in each refinement step must respect the constraint of the reference region.
IBC reference region
The reference region of IBC is extended to two CTU rows above. Fig. 14 illustrates the reference region for coding CTU (m, n). Specifically, for CTU (m, n) to be coded, the reference region includes CTUs with index (m−2, n−2) … (W, n−2), (0, n−1) … (W, n−1), and (0, n) … (m, n), where W denotes the maximum horizontal index within the current tile, slice, or picture. This setting ensures that, for a CTU size of 128, IBC does not require extra memory in the current ETM platform. The per-sample block vector search (or local search) range is limited to [−(C << 1), C >> 2] in the horizontal direction and [−C, C >> 2] in the vertical direction to adapt to the reference region extension, where C denotes the CTU size.
IBC merge mode with block vector difference
The IBC merge mode with block vector differences is adopted in ECM. The distance set is {1-pel, 2-pel, 4-pel, 8-pel, 12-pel, 16-pel, 24-pel, 32-pel, 40-pel, 48-pel, 56-pel, 64-pel, 72-pel, 80-pel, 88-pel, 96-pel, 104-pel, 112-pel, 120-pel, 128-pel}, and the BVD directions are two horizontal and two vertical directions.
The base candidates are selected from the first five candidates in the reordered IBC merge list. All possible MBVD refinement positions (20×4) for each base candidate are then reordered based on the SAD cost between the template (one row above and one column left of the current block) and its reference for each refinement position. Finally, the top 8 refinement positions with the lowest template SAD costs are kept as available positions, and consequently used for MBVD index coding.
IBC adaptation for camera captured content
When IBC is adapted for camera-captured content, the IBC reference range is reduced from 2 CTU rows to 2×128 rows, as shown in fig. 15. On the encoder side, to reduce complexity, the local search range is set to [−8, 8] in the horizontal direction and [−8, 8] in the vertical direction, centered on the first block vector predictor of the current CU. The encoder modifications are not applied to SCC sequences.
CIIP in combination with TIMD and TM merge
In CIIP mode, the prediction samples are generated by weighting an inter prediction signal predicted using a CIIP-TM merge candidate and an intra prediction signal predicted using the TIMD-derived intra prediction mode. The method is only applied to coding blocks with an area less than or equal to 1024.
The TIMD derivation method is used to derive intra prediction modes in CIIP. Specifically, the intra prediction mode having the smallest SATD value in the TIMD mode list is selected and mapped to one of 67 conventional intra prediction modes.
Furthermore, when the derived intra prediction mode is an angular mode, the weights of the two prediction signals (wIntra, wInter) are modified. For near-horizontal modes (2 <= angular mode index < 34), the current block is vertically divided, as shown in fig. 16A; for near-vertical modes (34 <= angular mode index <= 66), the current block is horizontally divided, as shown in fig. 16B.
The (wIntra, wInter) values for the different sub-blocks are shown in Table 3.
Table 3. Modified weights (wIntra, wInter) for angular modes.
Sub-block index: 0, 1, 2, 3
(wIntra, wInter): (6, 2), (5, 3), (3, 5), (2, 6)
A CIIP-TM merge candidate list is constructed for the CIIP-TM mode, and the merge candidates are refined by template matching. The CIIP-TM merge candidates are also reordered by the ARMC method, like the regular merge candidates. The maximum number of CIIP-TM merge candidates is equal to two.
Multi-hypothesis prediction (MHP)
In the multi-hypothesis inter prediction mode, one or more additional motion-compensated prediction signals are signaled in addition to the conventional bi-prediction signal. The resulting overall prediction signal is obtained by sample-wise weighted superposition. With the bi-prediction signal p_bi and the first additional inter prediction signal/hypothesis h_3, the resulting prediction signal p_3 is obtained as follows:

p_3 = (1 − α) · p_bi + α · h_3    (2)
The weighting factor α is specified by the new syntax element add_hyp_weight_idx according to the mapping given in Table 4:

Table 4. Mapping between add_hyp_weight_idx and α.
add_hyp_weight_idx: 0, 1
α: 1/4, −1/8
Analogously to above, more than one additional prediction signal can be used. The resulting overall prediction signal is accumulated iteratively with each additional prediction signal:

p_{n+1} = (1 − α_{n+1}) · p_n + α_{n+1} · h_{n+1}    (3)

The resulting overall prediction signal is obtained as the last p_n (i.e., the p_n having the largest index n). Within this mode, up to two additional prediction signals can be used (i.e., n is limited to 2).
The motion parameters of each additional prediction hypothesis may be signaled explicitly by specifying a reference index, a motion vector predictor index, and a motion vector difference, or implicitly by specifying a merge index. A separate multi-hypothesis combining flag distinguishes the two signaling modes.
For inter AMVP mode, MHP is applied only if non-equal weights in BCW are selected in bi-prediction mode.
A combination of MHP and BDOF is possible, however BDOF is only applied to the bi-predictive signal part of the predicted signal (i.e. the first two common hypotheses).
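The iterative accumulation of equations (2) and (3) reduces to a short loop; a scalar Python sketch (per-sample arrays work the same way):

```python
def mhp_predict(p_bi, hypotheses, alphas):
    """Accumulate additional hypotheses per equations (2) and (3).

    hypotheses/alphas carry up to two additional prediction signals and
    their weights alpha_{n+1} (e.g., 1/4 or -1/8 via add_hyp_weight_idx).
    """
    p = p_bi
    for h, a in zip(hypotheses, alphas):
        p = (1 - a) * p + a * h   # p_{n+1} = (1 - a_{n+1}) p_n + a_{n+1} h_{n+1}
    return p
```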
Geometric Partitioning Modes (GPM) in ECM
GPM with combined motion vector difference (MMVD)
The GPM in VVC is extended by applying motion vector refinement on top of the existing GPM uni-directional MVs. A flag is first signaled for a GPM CU to specify whether this mode is used. If the mode is used, each geometric partition of the GPM CU can further decide whether to signal an MVD or not. If an MVD is signaled for a geometric partition, after the GPM merge candidate is selected, the motion of the partition is further refined by the signaled MVD information. All other procedures remain the same as in GPM.
Similar to MMVD, the MVD is signaled as a pair of distance and direction. Nine candidate distances (1/4-pel, 1/2-pel, 1-pel, 2-pel, 3-pel, 4-pel, 6-pel, 8-pel, 16-pel) and eight candidate directions (four horizontal/vertical directions and four diagonal directions) are involved in GPM with MMVD (GPM-MMVD). In addition, when pic_fpel_mmvd_enabled_flag is equal to 1, the MVD is left-shifted by 2 as in MMVD.
GPM with Template Matching (TM)
Template matching is applied to GPM. When GPM mode is enabled for a CU, a CU-level flag is signaled to indicate whether TM is applied to both geometric partitions. TM is used to refine the motion information of each geometric partition. When TM is chosen, a template is constructed using left, above, or both left and above neighboring samples according to the partition angle, as shown in Table 5. The motion is then refined by minimizing the difference between the current template and the template in the reference picture, using the same search pattern as the merge mode with the half-pel interpolation filter disabled.
TABLE 5
Table 5 shows the templates for the first and second geometric partitions, where A indicates using the above samples, L indicates using the left samples, and L+A indicates using both the left and above samples.
The GPM candidate list is constructed as follows:
1. The interleaved list-0 MV candidates and list-1 MV candidates are derived directly from the regular merge candidate list, where list-0 MV candidates have higher priority than list-1 MV candidates. A pruning method with an adaptive threshold based on the current CU size is applied to remove redundant MV candidates.
2. The interleaved list-1 MV candidates and list-0 MV candidates are derived directly from the regular merge candidate list, where list-1 MV candidates have higher priority than list-0 MV candidates. The same pruning method with the adaptive threshold is also applied to remove redundant MV candidates.
3. The zero MV candidates are filled until the GPM candidate list is full.
GPM-MMVD and GPM-TM are exclusively enabled for a GPM CU. This is done by first signaling the GPM-MMVD syntax. When both GPM-MMVD control flags are equal to false (i.e., GPM-MMVD is disabled for both GPM partitions), the GPM-TM flag is signaled to indicate whether template matching is applied to the two GPM partitions. Otherwise (at least one GPM-MMVD flag is equal to true), the value of the GPM-TM flag is inferred to be false.
GPM with inter and intra prediction
In GPM with inter and intra prediction, the final prediction samples are generated by weighting inter-predicted samples and intra-predicted samples for each GPM-separated region. The inter-predicted samples are derived by the inter GPM process, while the intra-predicted samples are derived by an Intra Prediction Mode (IPM) candidate list and an index signaled from the encoder. The IPM candidate list size is predefined as 3. The available IPM candidates are the parallel angular mode against the GPM block boundary (parallel mode), the perpendicular angular mode against the GPM block boundary (perpendicular mode), and the planar mode, as shown in figs. 17A to 17C, respectively. Furthermore, as shown in fig. 17D, GPM with intra and intra prediction is restricted, to reduce the signaling overhead for IPMs and to avoid an increase in the size of the intra prediction circuit on the hardware decoder. In addition, a direct motion vector and IPM storage on the GPM-blending area is introduced to further improve the coding performance.
In the DIMD and neighboring-mode based IPM derivation, the parallel mode is registered first. Then, if the same IPM candidate does not already exist in the list, up to two IPM candidates derived from the decoder-side intra mode derivation (DIMD) method and/or the neighboring blocks may be registered. For the neighboring-mode derivation, there are at most five available neighboring block positions, which are restricted by the angle of the GPM block boundary, as shown in Table 6, which had already been used for GPM with template matching (GPM-TM).
TABLE 6
Table 6 shows the positions of the available neighboring blocks for the IPM candidate derivation, based on the angle of the GPM block boundary. A and L denote the above and left side of the prediction block, respectively.
GPM-intra may be combined with GPM-MMVD (GPM with merge motion vector differences). TIMD is used for the IPM candidates of GPM-intra to further improve the coding performance. The parallel mode may be registered first, followed by the TIMD mode, the DIMD modes, and the IPM candidates of the neighboring blocks.
Template matching-based reordering of GPM split patterns
In the template matching based reordering of GPM split modes, given the motion information of the current GPM block, the respective TM cost values of the GPM split modes are computed. All GPM split modes are then reordered in ascending order based on the TM cost values. Instead of sending the GPM split mode directly, a Golomb-Rice code is signaled to indicate the index of the exact GPM split mode within the reordered list.
The reordering method for the GPM split modes is a two-step process performed after the respective reference templates of the two GPM partitions in a coding unit are generated, as follows:
Expanding the GPM partition edges into reference templates of two GPM partitions, thereby generating 64 reference templates and calculating a corresponding TM cost for each of the 64 reference templates;
reorder the GPM split patterns in ascending order based on their TM cost values and mark the best 32 as the available split pattern.
As shown in fig. 18, the edges on the template are extended from the edges of the current CU, but the GPM blending process is not used in the template region across the edges.
After the reordering in ascending order of TM cost, the index of the selected GPM split mode is signaled.
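The reordering step itself is a plain sort by TM cost; an illustrative Python sketch (the cost computation of step 1 is assumed to be done elsewhere):

```python
def reorder_gpm_split_modes(tm_costs, keep=32):
    """Sort the 64 GPM split modes by ascending TM cost and keep the best.

    tm_costs[i] is the TM cost of split mode i; the signaled Golomb-Rice
    index then points into this reordered list.
    """
    order = sorted(range(len(tm_costs)), key=tm_costs.__getitem__)
    return order[:keep]
```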
Intra template matching
Intra template matching prediction (intra TMP) is a special intra prediction mode that copies the best prediction block from the reconstructed portion of the current frame, whose L-shaped template matches the current template. For a predefined search range, the encoder searches for the template most similar to the current template in the reconstructed portion of the current frame and uses the corresponding block as the prediction block. The encoder then signals the use of this mode, and the same prediction operation is performed at the decoder side.
The prediction signal is generated by matching the L-shaped causal neighbor of the current block with another block in a predefined search area shown in fig. 19, which consists of:
R1: current CTU
R2: top-left CTU
R3: above CTU
R4: left CTU
The Sum of Absolute Differences (SAD) is used as a cost function.
Within each region, the decoder searches for a template having the smallest SAD with respect to the current template, and uses its corresponding block as a prediction block.
The dimensions of all regions (SearchRange_w, SearchRange_h) are set proportional to the block dimensions (BlkW, BlkH) to have a fixed number of SAD comparisons per pixel. That is:

SearchRange_w = a × BlkW
SearchRange_h = a × BlkH

where 'a' is a constant that controls the gain/complexity trade-off. In practice, 'a' is equal to 5.
The intra template matching tool is enabled for CUs with width and height less than or equal to 64. The maximum CU size for intra template matching is configurable.
The intra template matching prediction mode is signaled at the CU level through a dedicated flag when DIMD is not used for the current CU.
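Conceptually, intra TMP is a nearest-template search over the allowed regions; the following Python sketch abstracts the SAD computation and region enumeration behind hypothetical callables:

```python
def intra_tmp_search(candidate_positions, template_sad):
    """Pick the position whose L-shaped template best matches the current one.

    candidate_positions enumerates reconstructed positions inside R1..R4,
    whose extents follow SearchRange_w = a*BlkW and SearchRange_h = a*BlkH
    with a = 5; template_sad(pos) is an assumed callable returning the SAD
    between the current block's template and the template anchored at pos.
    """
    best = min(candidate_positions, key=template_sad)
    return best          # the prediction block is copied from this position
```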
Fusion for template-based intra-mode derivation (TIMD)
For each intra prediction mode in the MPM list, the SATD between the prediction and reconstruction samples of the template is calculated. The two intra prediction modes with the smallest SATD are selected as the TIMD modes. These two TIMD modes are fused with weights after applying the PDPC process, and this weighted intra prediction is used to code the current CU. Position dependent intra prediction combination (PDPC) is included in the derivation of the TIMD modes.
The costs of the two selected modes are compared with a threshold; in the test, a cost factor of 2 is applied as follows:

costMode2 < 2 × costMode1

If this condition is true, the fusion is applied; otherwise, only mode 1 is used.
The weights of the modes are computed from their SATD costs as follows:

weight1 = costMode2 / (costMode1 + costMode2)
weight2 = 1 − weight1
The division operations are conducted using the same lookup table (LUT) based integerization scheme used by CCLM.
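The fusion decision and weighting above can be sketched as follows (Python, floating-point weights; ECM itself uses the LUT-based integer division noted above):

```python
def timd_fuse(pred1, pred2, cost1, cost2):
    """Fuse the two best TIMD modes when costMode2 < 2 * costMode1."""
    if cost2 >= 2 * cost1:
        return pred1                      # fusion off: use mode 1 only
    weight1 = cost2 / (cost1 + cost2)
    weight2 = 1.0 - weight1
    return weight1 * pred1 + weight2 * pred2
```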
Local Illumination Compensation (LIC)
LIC is an inter prediction technique that models the local illumination variation between the current block and its prediction block as a function of the local illumination variation between the current block template and the reference block template. The parameters of the function can be denoted by a scale α and an offset β, which form the linear equation α·p[x] + β to compensate illumination changes, where p[x] is the reference sample pointed to by the MV at location x in the reference picture. When wrap-around motion compensation is enabled, the MV shall be clipped with the wrap-around offset taken into account. Since α and β can be derived based on the current block template and the reference block template, no signaling overhead is required for them, except that an LIC flag is signaled for AMVP mode to indicate the use of LIC.
The local illumination compensation proposed in JVET-O0066 is used for unidirectional prediction inter-CU with the following modifications:
- Intra neighboring samples may be used in LIC parameter derivation;
- LIC is disabled for blocks with fewer than 32 luma samples;
- For both non-sub-block and affine modes, LIC parameter derivation is performed based on the template block samples corresponding to the current CU, instead of partial template block samples corresponding to the first top-left 16×16 unit; and
- The samples of the reference block template are generated by motion compensation with the block MV without rounding it to integer-pel precision.
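For illustration, the linear LIC model can be fitted to the templates with ordinary least squares; the Python sketch below uses floating point, whereas the codec derives the same parameters with integer arithmetic:

```python
def derive_lic_params(tpl_ref, tpl_cur):
    """Fit the LIC linear model alpha * p[x] + beta by least squares.

    tpl_ref / tpl_cur are co-located reference-block and current-block
    template sample lists.
    """
    n = len(tpl_ref)
    sx, sy = sum(tpl_ref), sum(tpl_cur)
    sxx = sum(a * a for a in tpl_ref)
    sxy = sum(a * b for a, b in zip(tpl_ref, tpl_cur))
    denom = n * sxx - sx * sx
    alpha = (n * sxy - sx * sy) / denom if denom else 1.0  # fall back to identity scale
    beta = (sy - alpha * sx) / n
    return alpha, beta
```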
OBMC
When OBMC is applied, the top and left boundary pixels of a CU are refined using the motion information of neighboring blocks with the weighted prediction described in JVET-L0101.
The conditions under which OBMC is not applied are as follows:
When OBMC is disabled at SPS level;
when the current block has an intra mode or an IBC mode;
When LIC is applied to the current block, and
When the current luminance block area is less than or equal to 32.
The sub-block boundary OBMC is performed by applying the same blending to the top, left, bottom, and right sub-block boundary pixels using the motion information of the neighboring sub-blocks. It is enabled for the following sub-block based coding tools:
- Affine AMVP mode;
- Affine merge mode and sub-block based temporal motion vector prediction (SbTMVP); and
- Sub-block based bilateral matching.
When the OBMC mode is used in CIIP mode with LMCS, the inter blending is performed before the LMCS mapping of the inter samples. LMCS is applied to the blended inter samples, which are then combined with the LMCS-applied intra samples in CIIP mode:

P_mapped = LMCS(w0 × P_curr + w1 × P_neigh)

where P_curr denotes the samples predicted by the motion of the current block in the original domain, P_mapped denotes the prediction samples in the mapped domain, P_neigh denotes the samples predicted by the motion of a neighboring block in the original domain, and w0 and w1 are the weights.
OBMC based on template matching
In the template matching based OBMC scheme, instead of directly using the weighted prediction, the method for deriving the prediction values of the CU boundary samples is decided according to template matching costs: either only the motion information of the current block is used, or the motion information of the neighboring blocks is used as well with one of the mixed modes.
In this scheme, for each block with size 4×4 at the top CU boundary, the above template size is equal to 4×1. If N adjacent blocks have the same motion information, the template is enlarged to 4N×1, since the MC operation can then be processed in one pass. For each left block with size 4×4 at the left CU boundary, the left template size is equal to 1×4 or 1×4N (fig. 20).
For each 4x4 top block (or each set of N 4x4 blocks), the prediction values of the boundary samples are derived through the following steps.
Take block A as the current block and its above neighboring block AboveNeighbor_A as an example. The operations for the left blocks proceed in the same manner.
First, three template matching costs (Cost1, Cost2, Cost3) are measured by the SAD between the reconstructed samples of the template and their corresponding reference samples, which are derived by the MC process with the following three types of motion information:
- Cost1 is calculated with the motion information of A;
- Cost2 is calculated with the motion information of AboveNeighbor_A;
- Cost3 is calculated with the weighted prediction of the motion information of A and AboveNeighbor_A, where the weighting factors are 3/4 and 1/4, respectively.
Next, a method is selected to calculate the final prediction result of the boundary samples by comparing Cost1, cost2, and Cost 3.
The original MC result using the motion information of the current block is denoted as Pixel1, and the MC result using the motion information of the neighboring block is denoted as Pixel2. The final prediction result is denoted NewPixel.
If Cost1 is the minimum, NewPixel(i, j) = Pixel1(i, j).
If (Cost2 + (Cost2 >> 2) + (Cost2 >> 3)) <= Cost1, mixed mode 1 is used.
For a luminance block, the number of mixed pixel rows is 4.
NewPixel(i,0)=(26×Pixel1(i,0)+6×Pixel2(i,0)+16)>>5
NewPixel(i,1)=(7×Pixel1(i,1)+Pixel2(i,1)+4)>>3
NewPixel(i,2)=(15×Pixel1(i,2)+Pixel2(i,2)+8)>>4
NewPixel(i,3)=(31×Pixel1(i,3)+Pixel2(i,3)+16)>>5
For a chroma block, the number of mixed pixel rows is 1.
NewPixel(i,0)=(26×Pixel1(i,0)+6×Pixel2(i,0)+16)>>5
If Cost1< = Cost2, mixed mode 2 is used.
For a luminance block, the number of mixed pixel rows is 2.
NewPixel(i,0)=(15×Pixel1(i,0)+Pixel2(i,0)+8)>>4
NewPixel(i,1)=(31×Pixel1(i,1)+Pixel2(i,1)+16)>>5
For a chroma block, the number of mixed pixel rows/columns is 1.
NewPixel(i,0)=(15×Pixel1(i,0)+Pixel2(i,0)+8)>>4
Otherwise, mixed mode 3 is used.
For a luminance block, the number of mixed pixel rows is 4.
NewPixel(i,1)=(7×Pixel1(i,1)+Pixel2(i,1)+4)>>3
NewPixel(i,2)=(15×Pixel1(i,2)+Pixel2(i,2)+8)>>4
NewPixel(i,3)=(31×Pixel1(i,3)+Pixel2(i,3)+16)>>5
For a chroma block, the number of mixed pixel rows is 1.
NewPixel(i,0)=(7×Pixel1(i,0)+Pixel2(i,0)+4)>>3
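The ordered mode selection and row blending above can be condensed into one function; the following Python sketch covers the luma top-boundary case (row 0 of mixed mode 3 is left unblended since no formula is given above):

```python
def blend_top_boundary(p1, p2, cost1, cost2, cost3):
    """Select a mixed mode from the three TM costs and blend the top rows.

    p1/p2 are lists of rows: MC results with the current block's motion and
    the above neighbor's motion, respectively. Follows the ordered checks:
    copy Pixel1, mixed mode 1, mixed mode 2, mixed mode 3.
    """
    if cost1 <= cost2 and cost1 <= cost3:               # Cost1 is the minimum
        return [row[:] for row in p1]
    if cost2 + (cost2 >> 2) + (cost2 >> 3) <= cost1:    # mixed mode 1
        weights = [(26, 6, 16, 5), (7, 1, 4, 3), (15, 1, 8, 4), (31, 1, 16, 5)]
    elif cost1 <= cost2:                                # mixed mode 2
        weights = [(15, 1, 8, 4), (31, 1, 16, 5)]
    else:                                               # mixed mode 3
        weights = [None, (7, 1, 4, 3), (15, 1, 8, 4), (31, 1, 16, 5)]
    out = [row[:] for row in p1]
    for r, w in enumerate(weights):
        if w is None:
            continue                                    # no formula for this row
        a, b, off, sh = w
        out[r] = [(a * x + b * y + off) >> sh for x, y in zip(p1[r], p2[r])]
    return out
```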
Template matching based merge candidate selection adaptive reordering (ARMC-TM)
The reordering method is applied to the regular merge mode, the template matching (TM) merge mode, and the affine merge mode (excluding the SbTMVP candidate). For the TM merge mode, merge candidates are reordered before the refinement process.
After a merge candidate list is constructed, the merge candidates are divided into several subgroups. The subgroup size is set to 5 for the regular merge mode and the TM merge mode, and to 3 for the affine merge mode. The merge candidates in each subgroup are reordered in ascending order according to cost values based on template matching. For simplicity, the merge candidates in the last subgroup are not reordered unless it is the first subgroup.
The template matching cost of a merge candidate is measured by the Sum of Absolute Differences (SAD) between the samples of the template of the current block and their corresponding reference samples. The template comprises a set of reconstructed samples neighboring the current block. The reference samples of the template are located by the motion information of the merge candidate.
When the merge candidate uses bi-prediction, the reference points of the template of the merge candidate are also generated by bi-prediction, as shown in fig. 21.
For sub-block based merge candidates with a sub-block size equal to Wsub × Hsub, the above template comprises several sub-templates of size Wsub × 1, and the left template comprises several sub-templates of size 1 × Hsub. As shown in fig. 22, the reference samples of each sub-template are derived using the motion information of the sub-blocks in the first row and the first column of the current block.
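A minimal sketch of the subgroup-wise reordering follows (Python; template_cost is an assumed callable returning the SAD described above):

```python
def armc_reorder(candidates, template_cost, subgroup_size=5):
    """Subgroup-wise ARMC-TM reordering by ascending template cost.

    The last subgroup is left as-is unless it is also the first (and only)
    subgroup, matching the simplification described above.
    """
    groups = [candidates[i:i + subgroup_size]
              for i in range(0, len(candidates), subgroup_size)]
    out = []
    for gi, group in enumerate(groups):
        if gi == len(groups) - 1 and gi != 0:    # last, non-first subgroup
            out.extend(group)
        else:
            out.extend(sorted(group, key=template_cost))
    return out
```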
Direct block vector for chroma block
A direct block vector is used for chroma blocks in dual-tree slices. When the chroma dual tree is activated, a flag is signaled to indicate whether IBC mode is used for coding a chroma block. If one of the luma blocks at the five positions shown in fig. 23 is coded in IBC or intra TMP mode, its block vector is scaled and used as the block vector of the chroma block. Template matching is used to perform block vector scaling.
While existing IBC schemes can provide significant improvements for intra coding in ECM, there is room to further improve their performance. At the same time, some parts of the existing convolutional cross-component model (CCCM) design also need to be simplified for efficient codec hardware implementations, or improved for better coding efficiency. Furthermore, the trade-off between implementation complexity and coding efficiency benefit needs to be further improved.
In the present disclosure, in order to solve the above-described problems, a method of further improving the existing design of IBC is provided. In general, the main features of the technology proposed in the present disclosure are summarized below.
IBC prediction is filtered using CCCM tools. Filtered Intra Block Copy (FIBC) is a special intra prediction mode that applies a filter to IBC-based prediction blocks to increase prediction accuracy and adapt the characteristics of the copied blocks to local neighborhoods.
In FIBC, the training samples may be adjacent to the current block. Reference from a local region is known to improve prediction accuracy.
In FIBC, the training samples may be non-adjacent to the current block. Reference from non-local regions is also known to improve prediction accuracy.
In FIBC, only one hypothesis may be utilized, i.e. the best matching block resulting in the smallest matching cost is selected as the final prediction.
In FIBCs, multiple hypotheses may also be utilized.
It should be understood that the figures in this disclosure may be combined with all examples mentioned in this disclosure, and that the disclosed methods may be applied independently or jointly.
Filtered Intra Block Copy (FIBC)
In accordance with one or more embodiments of the present disclosure, IBC prediction is filtered using CCCM tools. Different methods may be used to achieve this goal. The existing CCCM mode applies various filters for predicting chroma sample values based on corresponding luma sample values. Unlike CCCM, filtered Intra Block Copy (FIBC) is a special intra prediction mode that applies a filter on IBC-based prediction blocks to predict target luma or chroma samples of the current block based on corresponding luma or chroma samples of the reference block, respectively, in order to increase prediction accuracy and adapt the characteristics of the copied blocks to local neighbors.
In accordance with one or more embodiments of the present disclosure, IBC prediction is further filtered. Different methods may be used to achieve this goal. Filtered Intra Block Copy (FIBC) is a special intra prediction mode that applies a filter to the prediction block based on intra block copy to increase prediction accuracy and adapt the characteristics of the copied block to the local neighborhood.
In accordance with one or more embodiments of the present disclosure, reconstructed luma/chroma samples on a template region of a reference block are used as inputs to a filter during a training phase, and corresponding reconstructed luma/chroma samples in the template region of a current block are targets. In one example, fig. 25 shows one filter shape (cross) and training area of the reference block. It should be appreciated that for this filter shape, both the template region and the boundary region of the template region may be part of the training region of the reference block. Reconstructed samples in the border region may be used for training when available and they are filled with the closest available samples when not available. On the other hand, the training area of the current block may be determined as the template area of the current block. In a filtering phase in which the filter coefficients of the filter have been trained/determined through a training phase, the filter may be applied to each of the corresponding sample values of the reference block and the boundary region of the reference block to predict the sample values of the current block.
According to one or more embodiments of the present disclosure, the filter coefficients (i.e., parameters) are derived using regression-based MSE minimization techniques (i.e., LDL decomposition) that are present in the ECM and utilized by other tools, such as CCCM.
In accordance with one or more embodiments of the present disclosure, a convolutional N-tap filter (N being an integer greater than 1) may comprise an (N−1−M)-tap spatial term (M being an integer), M nonlinear terms, and a bias term. The (N−1−M)-tap spatial term corresponds to neighboring sample values, e.g., luma samples (i.e., L0, L1, …, L8) from the reconstructed reference block as shown in fig. 26. In this example, each new predicted luma sample is formulated as follows:

predLuma = c0·L0 + c1·L1 + … + c8·L8 + c9·P + c10·B

where ci is the coefficient associated with the i-th term, P is the nonlinear term, and B is the bias term representing a scalar offset (i.e., 1 << (bitDepth − 1)). The reference luma sample value of the top-left sample adjacent to the current block may be used as the offsetLuma value. The positions and numbers of the spatial and nonlinear terms may vary. Examples of different shapes/numbers of filter taps are shown in fig. 27. As another example, the different positions and numbers shown in the table below may be used.
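To make the filtering step concrete, the sketch below applies a cross-shaped variant of the filter at one sample position; the 5-tap layout, the CCCM-style nonlinear term, and the helper names are assumptions for illustration, not the normative FIBC definition:

```python
def fibc_filter_sample(ref, x, y, coeffs, bit_depth=10):
    """Apply a cross-shaped FIBC-style filter at one reference position.

    ref is the reconstructed reference area indexed ref[y][x]; coeffs holds
    the trained weights for 5 spatial taps (center, above, left, right,
    below), one nonlinear term, and the bias term B = 1 << (bit_depth - 1).
    """
    c = ref[y][x]
    spatial = [c, ref[y - 1][x], ref[y][x - 1], ref[y][x + 1], ref[y + 1][x]]
    mid = 1 << (bit_depth - 1)
    nonlinear = (c * c + mid) >> bit_depth       # CCCM-style nonlinear term (assumed)
    terms = spatial + [nonlinear, mid]           # mid doubles as the bias B
    return sum(w * t for w, t in zip(coeffs, terms))
```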
According to one or more embodiments of the present disclosure, the number of filter taps may be predefined or signaled/switched in SPS/DPS/VPS/SEI/APS/PPS/PH/SH/region/CTU/CU/sub-block/sample level.
In accordance with one or more embodiments of the present disclosure, the template size and shape may be the same as in intra TMP, with the template size for training being 4 rows above and to the left of the current block depending on their availability.
According to one or more embodiments of the present disclosure, the template size for training is up to 5 lines above and to the left of the current block, depending on their availability.
According to one or more embodiments of the present disclosure, the template size and shape may be the same as in CCCM, the template size for training being 6 rows above and to the left of the current block, depending on their availability.
According to one or more embodiments of the present disclosure, the template size for training may be N rows above and to the left of the current block, N being an integer, depending on their availability.
According to one or more embodiments of the present disclosure, the template size for training may be N rows above the current block, N being an integer, depending on their availability.
According to one or more embodiments of the present disclosure, the template size for training may be N rows to the left of the current block, N being an integer, depending on their availability.
According to one or more embodiments of the present disclosure, reference samples/template areas of reference block/template areas of a current block may be predefined or signaled/switched in different codec levels, such as SPS/DPS/VPS/SEI/APS/PPS/PH/SH/area/CTU/CU/sub-block/sample level.
In accordance with one or more embodiments of the present disclosure, location information may be used to compute the model parameters, including horizontal/vertical/diagonal distances and their nonlinear terms, and one or more pieces of location information may be used. In one example, the location-based parameter relates to the vertical and horizontal coordinates (Xc, Yc) of the center luma sample and is computed relative to the top-left coordinates (Xtl, Ytl) of the block, e.g., Xc − Xtl + Yc − Ytl. In another example, the location-based parameters relate to (Xc, Yc) and are computed relative to (Xtl, Ytl), e.g., Xc − Xtl + Yc − Ytl, Xc − Xtl, and Yc − Ytl. In yet another example, the location-based parameter is computed as (Xc − Xtl + Yc − Ytl)/N, where N is a predefined number, e.g., 2. In yet another example, the location-based parameters are computed as (Xc − Xtl + Yc − Ytl)/N1, (Xc − Xtl)/N2, and (Yc − Ytl)/N3, where N1 to N3 are predefined numbers, e.g., 2, 3, and 4. In yet another example, the location-based nonlinear terms are represented as the squares of the horizontal/vertical/diagonal distances, e.g., (Xc − Xtl + Yc − Ytl)², (Xc − Xtl)², and (Yc − Ytl)², where (Xc, Yc) are the vertical and horizontal coordinates of the center luma sample and (Xtl, Ytl) are the top-left coordinates of the block.
In accordance with one or more embodiments of the present disclosure, an enable flag may be signaled in the bitstream to indicate the FIBC mode used. The enable flag may be signaled in different codec levels, such as SPS/DPS/VPS/SEI/APS/PPS/PH/SH/region/CTU/CU/sub-block/sample level.
In accordance with one or more embodiments of the present disclosure, instead of explicitly signaling the selected mode flag, the mode flag may be derived at the decoder to save bit overhead.
According to one or more embodiments of the present disclosure, no additional control flags are required and the FIBC mode will be derived under some predefined condition (e.g., specific mode, specific block size, specific partition). When the predefined conditions match, a FIBC pattern will be derived based on the previously decoded information.
In accordance with one or more embodiments of the present disclosure, samples in regions not adjacent to the current block may be used to derive the model of the current block. In one embodiment, a candidate region list with N candidates may be constructed by checking potential M×M regions in order. If a checked region is available, it is placed into the candidate region list. For example, a candidate region list with 6 candidates is constructed by checking the potential 8×8 regions in order. The top-left positions of the potential 8×8 regions are predefined as {(−xStep, 0), (0, −yStep), (xStep, −yStep), (−xStep, −yStep), (2xStep, 0), (0, −2yStep), (−2xStep, 2yStep), (2xStep, −2yStep), (−2xStep, yStep), (xStep, −2yStep), (2xStep, −yStep), (−xStep, −2yStep), (−2xStep, −2yStep), (−xStep/2, 0), (0, −yStep/2), (xStep/2, −yStep/2), (−xStep/2, yStep/2), (−xStep/2, −yStep/2)}, where xStep = Max(width, 16) and yStep = Max(height, 16). Fig. 28 shows some possible positions of the candidate regions.
In accordance with one or more embodiments of the present disclosure, a non-adjacent candidate list with N candidates may be constructed following the positions and inclusion order of the spatial non-adjacent candidates from the two sets of spatial non-adjacent candidates in the inter merge mode. If a checked region is available, it is placed into the candidate region list. Fig. 29 shows some possible positions of the candidates.
In accordance with one or more embodiments of the present disclosure, parameters inherited from a previously decoded FIBC at the TB/CB/slice/picture/sequence level may be used for the current block. In accordance with one or more embodiments of the present disclosure, a control flag is signaled at the TB/CB/slice/picture/sequence level to indicate whether the use of the inherited FIBC is enabled or disabled. When the control flag is signaled as enabled, a flag for the inherited FIBC is further signaled to the decoder to indicate whether the inherited FIBC is used at the signaled level.
In accordance with one or more embodiments of the present disclosure, derived parameters from a previously decoded TB/CB/slice/picture/sequence level FIBC may be stored and used as a current FIBC (which is referred to as an inherited FIBC). In one embodiment, a history-based FIBC (H-FIBC) table may be maintained similar to HMVP tables. In one embodiment, an index value may be signaled in the bitstream to indicate which candidate model in the H-FIBC table to select. In one embodiment, after decoding the FIBC encoded block, the corresponding table may be updated. In one embodiment, the H-FIBC table is N in size. N is an integer (e.g., 4,5,6, 7).
Multi-hypothesis FIBC
In accordance with one or more embodiments of the present disclosure, more than one prediction block candidate is used, and the candidates are weighted to generate the final prediction of the current block. Assume N prediction block candidates are used.
Prediction block candidate derivation
In one embodiment, the prediction block candidates are searched and selected according to the criterion of minimizing the template matching cost, i.e., the first N candidates that result in the smallest template matching costs are selected. The template matching cost may be measured by, but is not limited to, SAD (sum of absolute differences) or SSE (sum of squared errors).
In one embodiment, the prediction block candidates may be selected according to a predefined pattern (i.e., a planar pattern).
In one embodiment, the prediction block candidates may be selected according to neighboring predefined modes (i.e., top predefined mode, left predefined mode).
Fixed multi-hypothesis FIBC
In this embodiment, the weighting factors for generating the final prediction block are predefined and fixed at both the encoder side and the decoder side. As an example, equal weighting factors may be used, i.e., 1/N for all the candidate blocks.
Adaptive multi-hypothesis intra FIBC
In order to adapt to different characteristics of video content, an adaptive multi-hypothesis intra FIBC method is also proposed.
In one embodiment, the weighting factor may be derived based on template matching costs. Denote the template matching costs of the N candidates as $c_1, c_2, \ldots, c_N$. The weighting factor is calculated as follows:

$w_i = \dfrac{1/c_i}{\sum_{j=1}^{N} 1/c_j}$ (4)
It should be noted that the template matching cost may be measured with, but is not limited to, SAD and SSE.
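A minimal sketch of this cost-based weighting, assuming the inverse-cost normalization of equation (4):

```python
import numpy as np

def cost_based_weights(costs, eps=1e-6):
    """Weights inversely proportional to template matching cost, summing to 1."""
    inv = 1.0 / (np.asarray(costs, dtype=np.float64) + eps)  # eps guards cost 0
    return inv / inv.sum()
```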
In yet another embodiment, the weighting factors may be derived/switched based on block sizes or syntax elements signaled in different codec levels, such as SPS/DPS/VPS/SEI/APS/PPS/PH/SH/region/CTU/CU/sub-block/sample levels.
In yet another embodiment, the weighting factors may be derived at the encoder side and then signaled to the decoder in the bitstream. Denote the N prediction block candidates as $X_1, X_2, \ldots, X_N$ and the current block as $Y$. The weighting factors can then be solved from the following equation:

$\{w_i\} = \arg\min_{\{w_i\}} \left\| Y - \sum_{i=1}^{N} w_i X_i \right\|^2$ (5)

Equation (5) can be solved using the Wiener-Hopf equations, as in ALF. The derived filter coefficients are then quantized to integers and signaled at the block level.
In yet another embodiment, the weighting factors are derived based on templates, and the derived weighting factors are applied to the prediction block candidates to generate the final prediction block. Denote the templates of the prediction candidates as $T_1, T_2, \ldots, T_N$ and the template of the current block as $T_c$. The weighting factors can then be derived from the following equation:

$\{w_i\} = \arg\min_{\{w_i\}} \left\| T_c - \sum_{i=1}^{N} w_i T_i \right\|^2$ (6)

Equation (6) can be solved using the Wiener-Hopf equations. The final prediction block may then be calculated as $P = \sum_{i=1}^{N} w_i X_i$, where $X_i$ denotes the i-th prediction block candidate.
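A minimal sketch of this template-driven derivation follows; numpy's least-squares solver stands in for the Wiener-Hopf solution, and all names are illustrative.

```python
import numpy as np

def derive_weights_and_predict(cur_template, cand_templates, cand_blocks):
    """Fit candidate templates to the current template, then blend the blocks."""
    # Columns of A are the vectorized candidate templates T_1 ... T_N.
    A = np.stack([t.ravel().astype(np.float64) for t in cand_templates], axis=1)
    y = cur_template.ravel().astype(np.float64)
    w, *_ = np.linalg.lstsq(A, y, rcond=None)       # solves equation (6)
    pred = sum(wi * b.astype(np.float64) for wi, b in zip(w, cand_blocks))
    return w, pred
```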
FIBC mode exploits non-local correlation to improve prediction accuracy: similar blocks are searched for and used to generate the final prediction block. In this embodiment, a combination of non-local means filtering and multi-hypothesis FIBC is proposed, as follows. In a first step, N prediction block candidates are searched and identified, as is done in FIBC. In a second step, the weighting factor is calculated as follows:

$w_i = \dfrac{1}{Z} \exp\!\left(-\dfrac{d_i}{h}\right)$ (7)

where $d_i$ measures the distance between the template of the i-th prediction block candidate and the template of the current block, $h$ controls the strength of the weighting, and $Z$ is the normalization constant:

$Z = \sum_{i=1}^{N} \exp\!\left(-\dfrac{d_i}{h}\right)$ (8)
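A minimal sketch of these non-local-means style weights, assuming the exponential form of equations (7)-(8):

```python
import numpy as np

def nlm_weights(template_dists, h):
    """Equations (7)-(8): w_i = exp(-d_i / h) / Z, with Z the normalizer."""
    w = np.exp(-np.asarray(template_dists, dtype=np.float64) / h)
    return w / w.sum()
```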
To calculate the weighting factors in equation (7), the weighting strength $h$ must be determined first. In this disclosure, several methods are presented to determine the weighting strength.
In a first method, a weighting strength candidate list containing some typical weighting strength values is defined and fixed at both the encoder and decoder sides. At the encoder side, the candidate values are evaluated using rate-distortion optimization, and the best weighting strength value is identified and signaled to the decoder side in the bitstream.
In a second method, the weighting strength is estimated using the templates of the prediction block candidates and the template of the current block. Denote the templates of the prediction candidates as $T_1, T_2, \ldots, T_N$ and the template of the current block as $T_c$. The weighting strength can then be solved from the following equation:

$h^* = \arg\min_{h} \left\| T_c - \sum_{i=1}^{N} w_i(h)\, T_i \right\|^2$ (9)
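One way to realize this is a small search over a shared list of strength values, as sketched below; the candidate values and names are assumptions.

```python
import numpy as np

def estimate_strength(cur_template, cand_templates, template_dists,
                      h_candidates=(1.0, 2.0, 4.0, 8.0, 16.0)):
    """Pick the h minimizing the template error of equation (9)."""
    y = cur_template.ravel().astype(np.float64)
    T = np.stack([t.ravel().astype(np.float64) for t in cand_templates])
    best_h, best_err = None, np.inf
    for h in h_candidates:
        w = np.exp(-np.asarray(template_dists, dtype=np.float64) / h)
        w /= w.sum()                      # weights of equations (7)-(8)
        err = np.sum((y - w @ T) ** 2)    # template reconstruction error
        if err < best_err:
            best_h, best_err = h, err
    return best_h
```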
In a third method, the QP value and the variance of the template of the current block may be used to estimate the weighting strength, i.e., the relationship between the weighting strength, the QP value, and the template variance may be fitted offline.
To better exploit non-local correlation in FIBC, in this embodiment Singular Value Decomposition (SVD) is used to generate the final prediction block from the prediction block candidates. Denote the width and height of the current block as W and H, and the area of the current block as $n = W \times H$.
Step 1. Search for and identify K prediction block candidates $X_1, X_2, \ldots, X_K$, as is done in FIBC.
Step 2. Form a block set from the K prediction block candidates of the current block and arrange it as a matrix:

$X = [x_1, x_2, \ldots, x_K]$ (10)

where $x_k$ is an $n \times 1$ vector obtained by arranging the samples of block $X_k$ as a column vector, so that $X$ has size $n \times K$.
Step 3. Perform an SVD decomposition of the matrix $X$:

$X = U \Sigma V^{T}$ (11)
Step 4. Apply a soft-thresholding operation to the singular value matrix $\Sigma$:

$\hat{\Sigma} = S_{\tau}(\Sigma)$ (12)

where $S_{\tau}(\cdot)$ is a shrinkage function applied to the diagonal elements of $\Sigma$ with thresholds $\tau_k$. For the k-th diagonal element $\sigma_k$ of $\Sigma$, the nonlinear function shrinks it at level $\tau_k$:

$S_{\tau}(\sigma_k) = \max(\sigma_k - \tau_k,\; 0)$ (13)

$\hat{\Sigma}$ is the matrix whose diagonal positions hold the shrunken singular values.
Step 5. Perform the inverse SVD to obtain the filtered patch set:

$\hat{X} = U \hat{\Sigma} V^{T}$ (14)
One of the key steps is determining the threshold $\tau_k$ for each diagonal element in Step 4. In the present disclosure, the threshold is calculated as follows. The threshold is estimated for each set of image blocks with the following equation:

$\tau_k = \dfrac{\sigma_n^2}{\sigma_X^{(k)}}$ (15)

where $\sigma_n$ is the standard deviation of the noise and $\sigma_X^{(k)}$ is the standard deviation of the original blocks in the k-th dimension of the SVD space. The standard deviation of the original blocks in the SVD space is estimated as follows:

$\sigma_X^{(k)} = \sqrt{\max\!\left(\dfrac{\sigma_k^2}{n} - \sigma_n^2,\; 0\right)}$ (16)

where $\sigma_k$ is the k-th singular value of $X$. When $\sigma_X^{(k)}$ is zero, the soft-thresholding operation is skipped. In addition, a power function parameterized by $\alpha$ and $\beta$ estimates the standard deviation of the noise from the standard deviation of the prediction block:

$\sigma_n = \alpha \cdot \sigma_p^{\beta}$ (17)

where $\sigma_p$ is calculated as follows:

$\sigma_p = \sqrt{\dfrac{1}{n} \sum_{i=1}^{n} (p_i - \bar{p})^2}$ (18)

Here, $p_i$ denotes the i-th pixel of the prediction block candidate vector and $\bar{p}$ is the mean of its pixels.
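The sketch below strings Steps 1-5 together under the forms of equations (10)-(18); the values of alpha and beta, the use of the first candidate for the noise estimate, and the choice of which filtered patch is returned are all illustrative assumptions (the text only states that a filtered patch set is obtained and that alpha and beta are fitted offline).

```python
import numpy as np

def svd_filter(cand_blocks, alpha=0.5, beta=1.0):
    """Generate a filtered prediction from K candidate blocks via SVD."""
    # Step 2, eq. (10): vectorize the K candidates as columns of X (n x K).
    X = np.stack([b.ravel().astype(np.float64) for b in cand_blocks], axis=1)
    n = X.shape[0]                                    # n = W * H
    # Eqs. (17)-(18): noise std from the std of a prediction block
    # (assumed here to be the first candidate).
    sigma_p = np.std(X[:, 0])
    sigma_n = alpha * sigma_p ** beta
    # Step 3, eq. (11): SVD of X.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Eq. (16): std of the original data in each SVD dimension.
    sigma_x = np.sqrt(np.maximum(s ** 2 / n - sigma_n ** 2, 0.0))
    # Eq. (15): per-dimension threshold; shrinkage is skipped (tau = 0)
    # where sigma_x is zero, per the text.
    safe = np.where(sigma_x > 0.0, sigma_x, 1.0)
    tau = np.where(sigma_x > 0.0, sigma_n ** 2 / safe, 0.0)
    # Step 4, eqs. (12)-(13): soft-threshold the singular values.
    s_hat = np.maximum(s - tau, 0.0)
    # Step 5, eq. (14): inverse SVD gives the filtered patch set.
    X_hat = (U * s_hat) @ Vt
    # Return the filtered first candidate as the final prediction (assumption).
    return X_hat[:, 0].reshape(cand_blocks[0].shape)
```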
Multi-hypothesis FIBC signaling
In the present disclosure, the proposed multi-hypothesis FIBC may be used as a replacement for the current FIBC mode, or the encoder may adaptively select the FIBC mode or the multi-hypothesis FIBC mode.
In one embodiment, the proposed multi-hypothesis FIBC is used as a replacement for the current FIBC mode, i.e. multiple hypotheses are always used for prediction.
In yet another embodiment, one of the multi-hypothesis FIBC methods in the section above is used in combination with the current FIBC mode. A flag is signaled in the bitstream to indicate whether multi-hypothesis FIBC mode applies to the CU.
In yet another embodiment, more than one multi-hypothesis FIBC method of the above section is used in combination with the current FIBC mode. A flag is first signaled in the bitstream to indicate whether multi-hypothesis FIBC mode is applied. The index is then signaled to indicate which of the multi-hypothesis FIBC methods applies to the CU.
Coordination of filters for FIBC mode and FTMP mode
TMP prediction may also be filtered with CCCM tools, which is referred to as Filtered Template Matching Prediction (FTMP) mode. The process of FTMP mode is the same as that of FIBC mode, except that FTMP mode does not require a block vector signaled by the encoder to locate the reference block. Instead, in FTMP mode, the reference block may be determined at the decoder side by searching the reconstructed portion of the current frame for the L-shaped template most similar to the current template and using the corresponding block as the reference block for the current block to be predicted. In other words, the L-shaped template associated with the reference block is the template most similar to the L-shaped template associated with the current block in the reconstructed portion of the frame. This operation for determining the reference block is the same as in intra TMP mode. After the reference block is determined, the same filtering process as in FIBC mode is applied to predict the target luma or chroma samples of the current block based on the corresponding luma or chroma samples of the reference block, respectively. For example, as shown in fig. 25, a cross filter may be applied to the corresponding sample values (the sample values of the reference block and of the boundary region around the reference block) to predict each of the sample values of the current block.
In accordance with one or more embodiments of the present disclosure, the same filter shape and/or template region may be applied to both prediction in FIBC mode and prediction in FTMP mode. For example, the decoder and/or encoder may attempt both FIBC mode and FTMP mode with the same filter shape and/or template area to achieve better performance or reduced cost before deciding which mode to apply. Different methods may be used to achieve this goal.
In a first example, it is proposed to apply the filter operation used in FTMP mode also to FIBC mode. In one example, the 6-tap filter used in FTMP mode (a cross with 5 spatial taps plus a bias term) and its template region for training (4 sample rows above and to the left of the current block, depending on their availability) may also be applied to FIBC mode in the same CU.
In a second example, it is proposed to apply the filter operation used in FIBC mode also to FTMP mode. In one example, the 2-tap filter used in FIBC mode (1 spatial tap plus an offset term) and its template region for training (1 row above and to the left of the current block, depending on their availability) can also be applied to FTMP mode in the same CU.
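As a rough illustration of such a shared filter operation, the sketch below applies a cross-shaped ("plus sign") filter with five spatial taps and a bias term to a padded reference region; training of the coefficients over the template region is omitted, and all names are assumptions.

```python
import numpy as np

def cross_filter_predict(ref, coeffs):
    """Apply a 6-tap cross filter: center, above, below, left, right, bias.

    ref is the reference region padded by one sample on every side, so each
    output sample sees all five spatial taps.
    """
    c = ref[1:-1, 1:-1].astype(np.float64)            # center samples
    n, s = ref[:-2, 1:-1], ref[2:, 1:-1]              # above / below
    w, e = ref[1:-1, :-2], ref[1:-1, 2:]              # left / right
    k0, k1, k2, k3, k4, bias = coeffs
    return k0 * c + k1 * n + k2 * s + k3 * w + k4 * e + bias
```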
Coordination of filters for FIBC mode, FTMP mode and CCCM mode
The filter is used in a convolutional cross-component model (CCCM) mode to predict chroma sample values based on corresponding luma sample values. In CCCM mode, the set of chroma sample values and the corresponding luma sample values of the set of chroma sample values in the reconstructed region of the current block to be predicted are used to determine the filter coefficients of the CCCM filter. In one example, the chroma sample values to be predicted and their corresponding luma sample values are parity sample values. Although the training results for the filter coefficients may be different, the same filter shape and/or template region may be reused in CCCM, FIBC, and FTMP for better performance/reduced cost. In one example, the filter shape and/or template region may be signaled by the encoder to the decoder. In another example, the filter shape and/or template region may be derived by the encoder based on predetermined rules (e.g., from a predefined candidate set).
In accordance with one or more embodiments of the present disclosure, the same filter shape and/or template area may be applied to at least two of FIBC mode, FTMP mode, and CCCM mode. Different methods may be used to achieve this goal.
In a first example, it is proposed to apply the filter operation used in CCCM mode also to FIBC mode. In one example, the 7-tap filter used in CCCM mode (a cross with 5 spatial taps, a nonlinear term, and an offset term) and its template region for training (6 rows above and to the left of the current block, depending on their availability) may also be applied to FIBC mode in the same CU.
In a second example, it is proposed to apply one of the filter operations used in CCCM mode to both FIBC mode and FTMP mode. In one example, the 11-tap filter used in CCCM mode (a 3×3 square with 9 spatial taps, plus a nonlinear term and a bias term) and its template region for training (6 rows above and to the left of the current block, depending on their availability) can also be applied to FIBC mode and FTMP mode.
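The sketch below assembles the regressor for the 7-tap CCCM-style filter described above (five spatial taps in a cross, a nonlinear term, and a bias). The nonlinear term follows CCCM's published form, (C*C + midVal) >> bitDepth; the bit depth and names are assumptions.

```python
import numpy as np

def cccm_features(luma, bit_depth=10):
    """Build the per-sample 7-term CCCM feature vector from a luma patch."""
    mid = 1 << (bit_depth - 1)
    c = luma[1:-1, 1:-1].astype(np.int64)             # center (C)
    n, s = luma[:-2, 1:-1], luma[2:, 1:-1]            # above / below
    w, e = luma[1:-1, :-2], luma[1:-1, 2:]            # left / right
    nonlin = (c * c + mid) >> bit_depth               # nonlinear term
    bias = np.full_like(c, mid)                       # bias term
    return np.stack([c, n, s, w, e, nonlin, bias], axis=-1)
```

The filter coefficients would then be obtained by a least-squares fit of these features to the reference samples over the template region.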
Fig. 30 illustrates a workflow of a method 3000 for video decoding in accordance with one or more aspects of the present disclosure.
At step 3010, method 3000 includes determining a reference block in a reconstructed portion of the video frame for predicting a current block in the video frame, wherein an L-shaped template associated with the reference block is the most similar template to the L-shaped template associated with the current block in the reconstructed portion of the video frame.
At step 3020, method 3000 includes obtaining a set of filter coefficients corresponding to a filter shape based on sample values from both a training area associated with a reference block and a training area associated with a current block.
At step 3030, method 3000 includes deriving a predicted sample value for the current block based on a plurality of corresponding sample values associated with the reference block using the set of filter coefficients and the filter shape.
At step 3040, method 3000 includes reconstructing the current block based on the predicted sample values.
Fig. 31 illustrates a workflow of a method 3100 for video encoding in accordance with one or more aspects of the present disclosure.
At step 3110, method 3100 includes dividing a video frame into a plurality of blocks.
At step 3120, method 3100 includes determining a reference block in the reconstructed portion of the video frame for predicting a current block in the video frame, wherein an L-shaped template associated with the reference block is the most similar template to the L-shaped template associated with the current block in the reconstructed portion of the video frame.
At step 3130, method 3100 includes obtaining a set of filter coefficients corresponding to a filter shape based on sample values from both a training region associated with the reference block and a training region associated with the current block.
At step 3140, method 3100 includes deriving a predicted sample value for the current block based on a plurality of corresponding sample values associated with the reference block using the set of filter coefficients and the filter shape.
At step 3150, method 3100 includes generating a bitstream based on the predicted sample values.
Fig. 32 illustrates a workflow of a method 3200 for video decoding in accordance with one or more aspects of the present disclosure.
At step 3210, the method 3200 includes determining at least one of a filter shape and a template region, wherein the at least one of a filter shape and a template region is to be used in at least two of a Filtered Intra Block Copy (FIBC) mode, a Filtered Template Matching Prediction (FTMP) mode, and a convolutional cross-component model (CCCM) mode for predicting a sample value of a current block in the video frame.
At step 3220, the method 3200 includes training each filter coefficient in the set of filter coefficients corresponding to the filter shape for at least two of the FIBC mode, FTMP mode, and CCCM mode, respectively, using the template region.
At step 3230, method 3200 includes deriving a prediction sample value for a current block using a set of filter coefficients.
At step 3240, method 3200 includes reconstructing a current block based on the predicted sample values.
In one example, training a set of filter coefficients for FIBC mode includes determining a reference block in a reconstructed portion of a video frame for predicting a current block, wherein the reference block is determined based on a block vector received from a bitstream, and obtaining the set of filter coefficients for FIBC mode based on sample values from both a training region associated with the reference block and a training region associated with the current block, wherein the training region associated with the reference block and the training region associated with the current block are determined based at least in part on a template region for FIBC mode.
In one example, training the set of filter coefficients for FTMP modes includes determining a reference block in a reconstructed portion of the video frame for predicting a current block, wherein an L-shaped template associated with the reference block is the most similar template to the L-shaped template associated with the current block in the reconstructed portion of the video frame, and obtaining the set of filter coefficients for FTMP modes based on sample values from both a training region associated with the reference block and a training region associated with the current block, wherein the training region associated with the reference block and the training region associated with the current block are determined based at least in part on the template region for FTMP modes.
In one example, training the set of filter coefficients for CCCM modes includes determining a set of chroma sample values in a template region and obtaining the set of filter coefficients for CCCM modes based on the set of chroma sample values and corresponding luma sample values for the set of chroma sample values.
In one example, the filter shape includes at least one of: a cross-shaped filter shape corresponding to 5 spatial terms, a nonlinear term, and an offset term; a single-sample filter shape corresponding to 1 spatial term and an offset term; or a 3×3 square filter shape corresponding to 9 spatial terms, a nonlinear term, and an offset term.
In one example, the template region includes at least one of 4 rows above and to the left of the current block, 1 row above and to the left of the current block, or 6 rows above and to the left of the current block.
Fig. 33 illustrates a workflow of a method 3300 for video encoding in accordance with one or more aspects of the present disclosure.
At step 3310, method 3300 includes dividing the video frame into a plurality of blocks.
At step 3320, method 3300 includes determining at least one of a filter shape and a template region, wherein the at least one of a filter shape and a template region is to be used in at least two of a Filtered Intra Block Copy (FIBC) mode, a Filtered Template Matching Prediction (FTMP) mode, and a convolutional cross-component model (CCCM) mode for predicting sample values of a current block in a video frame.
At step 3330, method 3300 includes training each filter coefficient in the set of filter coefficients corresponding to the filter shape for at least two of FIBC mode, FTMP mode, and CCCM mode, respectively, using the template region.
At step 3340, method 3300 includes deriving a predicted sample value for the current block using the set of filter coefficients.
At step 3350, method 3300 includes generating a bitstream based on the predicted sample values.
In one example, training a set of filter coefficients for FIBC mode includes determining a reference block in a reconstructed portion of a video frame for predicting a current block, wherein the reference block is determined based on a block vector to be transmitted via a bitstream, and obtaining the set of filter coefficients for FIBC mode based on sample values from both a training region associated with the reference block and a training region associated with the current block, wherein the training region associated with the reference block and the training region associated with the current block are determined based at least in part on a template region for FIBC mode.
In one example, training the set of filter coefficients for FTMP modes includes determining a reference block in a reconstructed portion of the video frame for predicting a current block, wherein an L-shaped template associated with the reference block is the most similar template to the L-shaped template associated with the current block in the reconstructed portion of the video frame, and obtaining the set of filter coefficients for FTMP modes based on sample values from both a training region associated with the reference block and a training region associated with the current block, wherein the training region associated with the reference block and the training region associated with the current block are determined based at least in part on the template region for FTMP modes.
In one example, training the set of filter coefficients for CCCM modes includes determining a set of chroma sample values in a template region and obtaining the set of filter coefficients for CCCM modes based on the set of chroma sample values and corresponding luma sample values for the set of chroma sample values.
In one example, the filter shape includes at least one of: a cross-shaped filter shape corresponding to 5 spatial terms, a nonlinear term, and an offset term; a single-sample filter shape corresponding to 1 spatial term and an offset term; or a 3×3 square filter shape corresponding to 9 spatial terms, a nonlinear term, and an offset term.
In one example, the template region includes at least one of 4 rows above and to the left of the current block, 1 row above and to the left of the current block, or 6 rows above and to the left of the current block.
Reference region and padding process in Filtered Intra Block Copy (FIBC)
According to one or more embodiments of the present disclosure, filter coefficients are calculated by minimizing the MSE between predicted and reconstructed luma samples and/or chroma samples in a reference region. In one example, fig. 25 shows a reference region including luma/chroma samples above and to the left of the CU. An extension of the area shown in blue (diagonal lines) is required to support the "side samples" of the plus-sign-shaped spatial filter, and different methods can be used to achieve this goal.
In a first method, it is proposed to fill positions whose samples are not available with the closest available samples.
In a second method, it is proposed to fill the extended positions with the closest available samples regardless of whether the original samples are available.
According to one or more embodiments of the present disclosure, the reference region may extend one CU width to the right and one CU height below the CU boundary.
According to one or more embodiments of the present disclosure, the reference region may be adjusted to include only available samples. In one example, the reference region includes N lines of luma/chroma samples above and to the left of the CU. N is an integer and/or has a maximum upper limit (e.g., 4, 5, 6, 7).
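A minimal sketch of padding with the closest available samples is given below; the availability mask, the distance metric, and the names are illustrative assumptions, and at least one available sample is assumed to exist.

```python
import numpy as np

def pad_reference(region, available):
    """Replace unavailable samples with the nearest available sample value."""
    out = region.astype(np.float64).copy()
    ys, xs = np.nonzero(available)           # coordinates of available samples
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            if not available[y, x]:
                j = np.argmin((ys - y) ** 2 + (xs - x) ** 2)
                out[y, x] = region[ys[j], xs[j]]
    return out
```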
Adaptive reordering of merge candidates with filtered intra block copy (FIBC)
In accordance with one or more embodiments of the present disclosure, when ARMC-TM is extended to the IBC merge list and a merge candidate is predicted using FIBC, the reference samples of the template of that merge candidate are also generated by FIBC.
According to one or more embodiments of the present disclosure, filter coefficients are calculated by minimizing MSE between predicted and reconstructed luma samples and/or chroma samples in a reference region.
According to one or more embodiments of the present disclosure, part of the template may not be included in the reference samples of the template of the merge candidate, depending on the template size and shape. In one example, fig. 34 shows a reference area including 3 lines of luma samples above and to the left of the CU.
Merge candidates with filtered intra block copy
In accordance with one or more embodiments of the present disclosure, IBC predictions from the merge candidates are further filtered. Different methods may be used to achieve this goal.
In accordance with one or more embodiments of the present disclosure, an enable flag may be signaled in the bitstream to indicate whether the FIBC merge mode is used. The enable flag may be signaled at the SPS/DPS/VPS/SEI/APS/PPS/PH/SH/region/CTU/CU/sub-block/sample level.
In accordance with one or more embodiments of the present disclosure, a mode flag may be derived at the decoder to save bit overhead, rather than explicitly signaling a selected mode flag.
Direct block vector of chroma blocks using filtered intra block copy
In accordance with one or more embodiments of the present disclosure, the IBC prediction obtained with the direct block vector of the chroma block is further filtered. Different methods may be used to achieve this goal.
In accordance with one or more embodiments of the present disclosure, an enable flag may be signaled in the bitstream to indicate whether the FIBC merge mode is used. The enable flag may be signaled at the SPS/DPS/VPS/SEI/APS/PPS/PH/SH/region/CTU/CU/sub-block/sample level.
According to one or more embodiments of the present disclosure, a mode flag may be inherited from a luma block at a decoder to save bit overhead, rather than explicitly signaling a selected mode flag.
Fig. 35 illustrates a workflow of a method 3500 for video decoding in accordance with one or more aspects of the present disclosure. Method 3500 may be performed by a decoder (e.g., video decoder 30 of fig. 3).
At step 3510, a bitstream including a video frame may be received, and in order to predict a current block in the video frame, a reference block in the same video frame may be determined. For example, the reference block may be determined by using a block vector indicated in a bitstream for the current block. Here, the block vector is used to indicate the displacement from the current block to a reference block that has been reconstructed within the current video frame.
At step 3520, a set of filter coefficients corresponding to the filter shape may be obtained by training the sample values of the template region associated with the reference block and the sample values of the template region associated with the current block. The template region associated with the reference block may be extended based on the filter shape, for example, to further include sample values for edge points of the filter shape.
In one example, the template region associated with a reference block that includes samples to the left and above the reference block may be expanded to further include one or more rows to the right of the reference block and one or more rows below the reference block. For example, a row may correspond to a row or column of a picture or frame in which a sample point is located.
In another example, the template region associated with the reference block may extend from the reference block in four directions, i.e., left and right, up and down, to further include additional one or more rows from four sides around the reference block.
In one or more examples, the extended row or rows may correspond to one CU width or one CU height. The sample values in the extended row may be filled with the closest available sample values. In another example, the samples filled with the closest available sample values are unavailable samples, e.g., samples to the right of or below the reference block that have not been reconstructed.
In another example, the template region associated with the reference block may be extended to include only available samples. For example, the template region associated with the reference block is extended to include up to N rows to the left of the reference block and up to N rows above the reference block, and N is an integer (e.g., 4, 5, 6, 7).
At step 3530, each of the predicted sample values for the current block may be derived based on a plurality of corresponding sample values associated with the reference block by using the set of filter coefficients and the filter shape. For example, a plurality of corresponding sample values associated with the reference block may be used as inputs to the obtained filter to output each of the predicted sample values for the current block. The input sample values associated with the reference block may include sample values in the extended template region.
At step 3540, the current block may be reconstructed based on the predicted sample values. For example, the current block may be reconstructed by combining the predicted sample values with the residual carried in the bitstream.
Fig. 36 illustrates a workflow of a method 3600 for video encoding in accordance with one or more aspects of the present disclosure. Method 3600 may be performed by an encoder (e.g., video encoder 20 of fig. 2). The steps of method 3600 may correspond to the steps of method 3500.
At step 3610, a reference block in a video frame captured by a camera may be determined for predicting a current block in the same video frame. For example, the determination may be made based on Block Matching (BM) performed at the encoder.
At step 3620, a filter having a set of filter coefficients and a filter shape may be obtained by training on the sample values of the template region associated with the reference block and the sample values of the template region associated with the current block, wherein the template region associated with the reference block is expanded based on the filter shape. For example, the template region associated with the reference block is extended in the same manner as described with reference to method 3500.
At step 3630, each of the predicted sample values for the current block may be derived based on a plurality of corresponding sample values associated with the reference block by using the obtained filters.
At step 3640, a bitstream may be generated based on the predicted sample values.
Fig. 37 illustrates a workflow of a method 3700 for video decoding in accordance with one or more aspects of the present disclosure. Method 3700 may be performed by a decoder (e.g., video decoder 30 of fig. 3).
At step 3710, a merge candidate list for Intra Block Copy (IBC) prediction of the current block may be obtained. For example, a merge candidate list for IBC prediction may be constructed according to the above description or other criteria. The merge candidate list may comprise a plurality of candidates that have been encoded with IBCs, each candidate having, for example, a block vector.
At step 3720, the plurality of candidates in the merge candidate list may be reordered based on a template matching score for each of the plurality of candidates, the template matching score being calculated based on differences between the sample values of the template of a candidate and the corresponding reference sample values of the reference template of the reference block pointed to by the block vector of that candidate. In the calculation of the template matching score, in response to determining that a candidate is coded with filtered IBC (i.e., FIBC), FIBC is also used to obtain the corresponding reference sample values of the reference template.
At step 3730, the current block may be reconstructed based on the reordered merge candidate list.
In one example, IBC prediction of the current block may be obtained from candidates of the merge candidate list, for example, by using a block vector of the candidate. A determination is made as to whether IBC prediction for the current block is filtered.
In one example, the determination is made based on syntax elements transmitted in the bitstream or inferences derived at the decoder. For example, the determination that the current block is reconstructed by using filtered IBCs may be based on syntax elements or based on inferences without syntax elements. In response to determining that the current block is reconstructed using the filtered IBC, FIBC is performed on the IBC prediction.
In one example, at least a portion of the template and at least a portion of the reference template are not used to obtain a template matching score. For example, as shown by the grid in fig. 34, the samples immediately adjacent to the reference block and the candidate are not used to calculate the template matching score.
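A minimal sketch of this reordering is shown below; the candidate representation (a dict with an is_fibc flag) and the two callbacks that fetch and filter the reference template are illustrative assumptions.

```python
import numpy as np

def reorder_merge_list(candidates, cur_template, get_ref_template, fibc_filter):
    """Sort IBC merge candidates by ascending template matching (SAD) score."""
    def score(cand):
        ref = get_ref_template(cand)      # template at the candidate's vector
        if cand.get("is_fibc"):
            ref = fibc_filter(cand, ref)  # generate reference samples via FIBC
        return np.abs(cur_template.astype(np.int64)
                      - ref.astype(np.int64)).sum()
    return sorted(candidates, key=score)
```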
Fig. 38 illustrates a workflow of a method 3800 for video encoding in accordance with one or more aspects of the present disclosure. Method 3800 may be performed by an encoder (e.g., video encoder 20 of fig. 2). The steps of method 3800 may correspond to the steps of method 3700.
At step 3810, a merge candidate list for Intra Block Copy (IBC) prediction of the current block may be obtained. For example, a merge candidate list for IBC prediction may be constructed according to the above description or other criteria. The merge candidate list may comprise a plurality of candidates that have been encoded with IBCs, each candidate having, for example, a block vector.
At step 3820, a plurality of candidates in the merge candidate list may be reordered based on a template matching score for each of the plurality of candidates, the template matching score being calculated based on differences between the sample values of the template of a candidate and the corresponding reference sample values of the reference template of the reference block pointed to by the block vector of that candidate. In the calculation of the template matching score, in response to determining that a candidate is coded with filtered IBC (i.e., FIBC), FIBC is also used to obtain the corresponding reference sample values of the reference template.
At step 3830, a bitstream may be generated by encoding the current block based on the reordered merge candidate list.
In one example, IBC prediction of the current block may be obtained from a candidate of the merge candidate list, for example, by using the block vector of that candidate. A determination is made as to whether the IBC prediction of the current block is to be filtered. If it is determined that the IBC prediction of the current block is to be filtered, FIBC is performed on the IBC prediction.
In one example, a syntax element indicating that the IBC prediction of the current block is to be filtered may be transmitted in the bitstream.
Fig. 39 illustrates a workflow of a method 3900 for video decoding in accordance with one or more aspects of the present disclosure. Method 3900 may be performed by a decoder (e.g., video decoder 30 of fig. 3).
At step 3910, a block vector of a chroma block in a bitstream may be determined by using the block vector of a luma block associated with the chroma block, wherein the luma block has been coded with IBC. For example, the block vector of the chroma block may be directly inherited from the luma block.
At step 3920, IBC prediction of the chroma block may be obtained based on the inherited block vector.
At step 3930, in response to determining that filtered IBC prediction is to be used for the chroma block, the IBC prediction of the chroma block may be filtered to obtain a filtered IBC prediction, i.e., FIBC is used.
In one example, the determination of filtered IBC prediction for a chroma block may be made based on syntax elements in the bitstream.
In another example, the mode to be used by a chroma block (e.g., FIBC or IBC) may be inherited directly from the mode used by the associated luma block (e.g., FIBC or IBC) so that explicit signaling may be preserved.
At step 3940, a chroma block may be reconstructed based on the filtered IBC prediction.
In one example, the filter shape and/or filter coefficients of a chroma block may be calculated by using a template of the chroma block at the decoder (e.g., in a manner similar to that of a luma block).
In another example, the filter shape and/or filter coefficients of a chroma block may be directly inherited from the filter shape and/or filter coefficients of an associated luma block.
Fig. 40 illustrates a workflow of a method 4000 for video encoding in accordance with one or more aspects of the present disclosure. Method 4000 may be performed by an encoder (e.g., video encoder 20 of fig. 2). The steps of method 4000 may correspond to the steps of method 3900.
At step 4010, a block vector of a chroma block may be determined based on a luma block associated with the chroma block, wherein the luma block has been encoded using Intra Block Copy (IBC).
At step 4020, IBC prediction for a chroma block may be obtained based on the block vector.
At step 4030, IBC prediction for the chroma block may be filtered.
In one example, a syntax element may be generated that indicates that IBC prediction of a chroma block is to be filtered.
In another example, the use of the FIBC mode by the associated luma block may indicate that the IBC prediction of the chroma block is to be filtered.
At step 4040, a bitstream may be generated based on the filtered IBC prediction.
In one example, syntax elements indicating that IBC prediction of a chroma block is to be filtered may be transmitted in a bitstream.
In another example, if the indication that the IBC prediction of a chroma block is to be filtered can be inherited directly from the mode used by the associated luma block, the syntax element may not be transmitted in the bitstream.
FIG. 41 illustrates a computing environment 4110 coupled with a user interface 4150. The computing environment 4110 may be part of a data processing server. The computing environment 4110 includes a processor 4120, a memory 4130, and an input/output (I/O) interface 4140.
The processor 4120 generally controls the overall operation of the computing environment 4110, such as operations associated with display, data acquisition, data communication, and image processing. The processor 4120 may include one or more processors to execute instructions to perform all or some of the steps of the methods described above. Further, the processor 4120 may include one or more modules that facilitate interactions between the processor 4120 and other components. The processor may be a Central Processing Unit (CPU), microprocessor, single chip machine, graphics Processing Unit (GPU), or the like.
The memory 4130 is configured to store various types of data to support the operation of the computing environment 4110. The memory 4130 may include predetermined software 4132. Examples of such data include any application or method instructions, video data sets, image data, and the like for operating on the computing environment 4110. The memory 4130 may be implemented using any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The I/O interface 4140 provides an interface between the processor 4120 and peripheral interface modules (e.g., keyboard, click wheel, buttons, etc.). Buttons may include, but are not limited to, a home button, a start scan button, and a stop scan button. The I/O interface 4140 may be coupled with an encoder and a decoder.
In an embodiment, there is also provided a non-transitory computer readable storage medium including, for example, a plurality of programs in the memory 4130 executable by the processor 4120 in the computing environment 4110 for performing the above-described method and/or storing a bitstream generated by the above-described encoding method or a bitstream to be decoded by the above-described decoding method. In one example, the plurality of programs may be executed by the processor 4120 in the computing environment 4110 to receive (e.g., from the video encoder 20 in fig. 2) a bitstream or data stream comprising encoded video information (e.g., video blocks representing encoded video frames, and/or associated one or more syntax elements, etc.), and may also be executed by the processor 4120 in the computing environment 4110 to perform the above-described decoding method according to the received bitstream or data stream. In another example, the plurality of programs may be executed by the processor 4120 in the computing environment 4110 for performing the encoding methods described above to encode video information (e.g., video blocks representing video frames, and/or associated one or more syntax elements, etc.) into a bitstream or data stream, and may also be executed by the processor 4120 in the computing environment 4110 for transmitting the bitstream or data stream (e.g., to the video decoder 30 in fig. 3). Alternatively, a non-transitory computer readable storage medium may have stored therein a bitstream or data stream comprising encoded video information (e.g., video blocks representing encoded video frames, and/or associated one or more syntax elements, etc.) that is generated by an encoder (e.g., video encoder 20 of fig. 2) using, for example, the encoding methods described above, for use by a decoder (e.g., video decoder 30 of fig. 3) in decoding video data. The non-transitory computer readable storage medium may be, for example, ROM, random-access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
In an embodiment, a bitstream generated by the above-described encoding method or a bitstream to be decoded by the above-described decoding method is provided. In an embodiment, there is provided a bitstream including encoded video information generated by the above-described encoding method or encoded video information to be decoded by the above-described decoding method.
In an embodiment, a computing device is also provided that includes one or more processors (e.g., processor 4120), and a non-transitory computer-readable storage medium or memory 4130 having stored therein a plurality of programs executable by the one or more processors, wherein the one or more processors are configured to perform the above-described methods when executing the plurality of programs.
In an embodiment, there is also provided a computer program product having instructions for storing or transmitting a bitstream comprising encoded video information generated by the above-described encoding method or encoded video information to be decoded by the above-described decoding method. In an embodiment, there is also provided a computer program product comprising a plurality of programs, e.g., in memory 4130, executable by processor 4120 in computing environment 4110 for performing the above-described methods. For example, the computer program product may include a non-transitory computer readable storage medium.
In an embodiment, the computing environment 4110 may be implemented by one or more ASICs, DSPs, digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), FPGAs, GPUs, controllers, microcontrollers, microprocessors, or other electronic components for performing the methods described above.
In an embodiment there is also provided a method of storing a bitstream comprising storing said bitstream on a digital storage medium, wherein said bitstream comprises encoded video information generated by the above described encoding method or encoded video information to be decoded by the above described decoding method.
In an embodiment, there is also provided a method for transmitting a bitstream generated by the above encoder. In an embodiment, a method for receiving a bitstream to be decoded by the decoder described above is also provided.
The description of the present disclosure has been presented for purposes of illustration and is not intended to be exhaustive or limited to the disclosure. Many modifications, variations and alternative embodiments will come to mind to one skilled in the art having the benefit of the teachings presented in the foregoing descriptions and the associated drawings.
The order of steps of the method according to the present disclosure is intended to be illustrative only, unless specifically stated otherwise, and the steps of the method according to the present disclosure are not limited to the above-described order, but may be changed according to actual circumstances. Furthermore, at least one of the steps of the method according to the present disclosure may be adjusted, combined or pruned as actually needed.
The examples were chosen and described in order to explain the principles of the present disclosure and to enable others skilled in the art to understand the disclosure for various embodiments and with various modifications as are suited to the particular use contemplated. Therefore, it is to be understood that the scope of the disclosure is not to be limited to the specific examples of the disclosed embodiments, and that modifications and other embodiments are intended to be included within the scope of the disclosure.