HK40017308A - System and method of cross-component dynamic range adjustment (cc-dra) in video coding - Google Patents
- Publication number
- HK40017308A (application No. HK62020007745.6A)
- Authority
- HK
- Hong Kong
- Prior art keywords
- chroma
- luma
- video
- video data
- scale parameter
- Prior art date
Description
The present application claims the benefit of U.S. provisional application No. 62/548,236, filed on August 21, 2017, and U.S. application No. 16/___,___, filed on August 20, 2018, both of which are incorporated herein by reference in their entirety.
Technical Field
The present invention relates to video processing.
Background
Digital video capabilities can be incorporated into a wide range of devices, including digital televisions, digital direct broadcast systems, wireless broadcast systems, Personal Digital Assistants (PDAs), laptop or desktop computers, tablet computers, e-book readers, digital cameras, digital recording devices, digital media players, video gaming devices, video gaming consoles, cellular or satellite radio telephones (so-called "smart phones"), video teleconferencing devices, video streaming devices, and the like. Digital video devices implement video coding techniques such as those described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4 Part 10, Advanced Video Coding (AVC), ITU-T H.265, High Efficiency Video Coding (HEVC), and extensions of these standards. Video devices may more efficiently transmit, receive, encode, decode, and/or store digital video information by implementing such video coding techniques.
Video coding techniques include spatial (intra-picture) prediction and/or temporal (inter-picture) prediction to reduce or remove redundancy inherent in video sequences. For block-based video coding, a video slice (e.g., a video frame or a portion of a video frame) may be partitioned into video blocks, which may also be referred to as treeblocks, Coding Units (CUs), and/or coding nodes. Video blocks in an intra-coded (I) slice of a picture are encoded using spatial prediction with respect to reference samples in neighboring blocks in the same picture. Video blocks in an inter-coded (P or B) slice of a picture may use spatial prediction with respect to reference samples in neighboring blocks in the same picture or temporal prediction with respect to reference samples in other reference pictures. A picture may be referred to as a frame and a reference picture may be referred to as a reference frame.
Spatial or temporal prediction results in a predictive block for the block to be coded. The residual data represents pixel differences between the original block to be coded and the predictive block. The inter-coded block is encoded according to motion vectors that point to blocks of reference samples that form the predictive block, and the residual data indicates differences between the coded block and the predictive block. The intra-coded block is encoded according to an intra-coding mode and residual data. For further compression, the residual data may be transformed from the pixel domain to the transform domain, resulting in residual transform coefficients that may then be quantized. Quantized transform coefficients, initially arranged in a two-dimensional array, may be scanned in order to generate one-dimensional vectors of transform coefficients, and entropy coding may be applied to achieve even more compression.
The total number of color values that can be captured, coded, and displayed can be defined by a color gamut. Color gamut refers to the range of colors that a device can capture (e.g., a camera) or reproduce (e.g., a display). Often, the color gamut varies from device to device. For video coding, a predefined color gamut of video data may be used such that each device in the video coding process may be configured to process pixel values in the same color gamut. Some color gamuts are defined with a larger color range than color gamuts that have traditionally been used for video coding. These color gamuts with a large color range may be referred to as Wide Color Gamuts (WCGs).
Another aspect of video data is dynamic range. Dynamic range is typically defined as the ratio between the maximum luminance and the minimum luminance (e.g., brightness) of a video signal. The dynamic range of commonly used video data used in the past is considered to have a Standard Dynamic Range (SDR). Other example specifications for video data define color data having a greater ratio of maximum luminance to minimum luminance. Such video data may be described as having a High Dynamic Range (HDR).
Disclosure of Invention
The present invention relates to the field of coding of video signals with High Dynamic Range (HDR) and Wide Color Gamut (WCG) representations. More particularly, in some examples, this disclosure describes signaling and operations applied to video data in certain color spaces to enable more efficient compression of HDR and WCG video data. For example, according to some examples, compression efficiency of a hybrid video coding system for coding HDR and WCG video data may be improved.
This disclosure describes example techniques and devices for performing cross-component dynamic range adjustment of chroma components of video data. In one example, this disclosure describes deriving a scale parameter for dynamic range adjustment of a luma component of video data. In one example, one or more scale parameters for dynamic range adjustment of the chroma components may be derived from the luma scale parameters. By using luma scale factors to derive scale parameters for chroma components, the amount of visual distortion in the decoded video data may be reduced. In other examples, the scale factors for the chroma components may be derived using a function of one or more of luma scale factors, chroma Quantization Parameter (QP) values, and/or color container parameters, such as a transfer function defined for a color container.
The techniques described herein may be used in conjunction with a video codec operating in accordance with a video coding standard. Example video coding standards may include H.264/AVC (Advanced Video Coding), H.265/HEVC (High Efficiency Video Coding), H.266/VVC (Versatile Video Coding), and other standards configured to encode and decode HDR and/or WCG content.
In one example of the present invention, a method of processing video data comprises: receiving video data; determining a luma scale parameter for each of a plurality of ranges of codeword values for a luma component of the video data; performing a dynamic range adjustment process on the luma component using the luma scale parameter; determining a chroma scale parameter for a chroma component of the video data using a function of the luma scale parameter; and performing a dynamic range adjustment process on the chroma component of the video data using the chroma scale parameter.
In another example of this disclosure, an apparatus configured to process video data comprises: a memory configured to store the video data; and one or more processors in communication with the memory, the one or more processors configured to: receiving video data; determining a luma scale parameter for each of a plurality of ranges of codeword values for a luma component of the video data; performing a dynamic range adjustment process on the luma component using the luma scale parameter; determining a chroma scale parameter for a chroma component of the video data using a function of the luma scale parameter; and performing a dynamic range adjustment process on the chroma component of the video data using the chroma scale parameter.
In another example of this disclosure, an apparatus configured to process video data comprises: means for receiving video data; means for determining a luma scale parameter for each of a plurality of ranges of codeword values for a luma component of the video data; means for performing a dynamic range adjustment process on the luma component using the luma scale parameter; means for determining a chroma scale parameter for a chroma component of the video data using a function of the luma scale parameter; and means for performing a dynamic range adjustment process on the chroma component of the video data using the chroma scale parameter.
In another example, this disclosure describes a non-transitory computer-readable storage medium storing instructions that, when executed, cause one or more processors of a device configured to process video data to: receiving video data; determining a luma scale parameter for each of a plurality of ranges of codeword values for a luma component of the video data; performing a dynamic range adjustment process on the luma component using the luma scale parameter; determining a chroma scale parameter for a chroma component of the video data using a function of the luma scale parameter; and performing a dynamic range adjustment process on the chroma component of the video data using the chroma scale parameter.
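For illustration, the cross-component derivation summarized in the examples above can be sketched in code. The following minimal sketch (Python, for exposition only) applies a piecewise-linear DRA over luma codeword ranges and derives chroma scales from the luma scales; every numeric value, the range boundaries, and the specific QP-style derivation function are hypothetical assumptions, not values taken from this disclosure or any standard.

```python
# Illustrative sketch of cross-component DRA. All range boundaries, scale
# and offset values, and the luma-to-chroma derivation function below are
# hypothetical assumptions for exposition, not values from this disclosure.

def dra_forward(sample, ranges, scales, offsets):
    """Piecewise-linear DRA: out = scale * in + offset within each range."""
    for (lo, hi), scale, offset in zip(ranges, scales, offsets):
        if lo <= sample < hi:
            return scale * sample + offset
    return sample  # samples outside all ranges pass through unchanged

def chroma_scale_from_luma(luma_scale, chroma_qp_offset=0.0):
    """Derive a chroma DRA scale as a function of the luma scale.
    The QP-style adjustment (x2 per -6 QP) is an illustrative assumption."""
    return luma_scale * (2.0 ** (-chroma_qp_offset / 6.0))

# Hypothetical example: three codeword ranges of a 10-bit luma component.
luma_ranges = [(0, 256), (256, 768), (768, 1024)]
luma_scales = [1.5, 1.0, 0.7]
luma_offsets = [0.0, 128.0, 358.4]   # chosen so the segments join smoothly

y_mapped = dra_forward(500, luma_ranges, luma_scales, luma_offsets)
chroma_scales = [chroma_scale_from_luma(s) for s in luma_scales]
```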
The details of one or more examples are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description and drawings, and from the claims.
Drawings
FIG. 1 is a block diagram showing an example video encoding and decoding system configured to implement the techniques of this disclosure.
Fig. 2A and 2B are conceptual diagrams depicting an example quadtree plus binary tree (QTBT) structure and a corresponding Coding Tree Unit (CTU).
Fig. 3 is a conceptual diagram showing the concept of HDR data.
Fig. 4 is a conceptual diagram showing an example color gamut.
FIG. 5 is a flow chart showing an example of HDR/WCG representation conversion.
FIG. 6 is a flow chart depicting an example of HDR/WCG reverse conversion.
Fig. 7 is a conceptual diagram showing examples of electro-optical transfer functions (EOTFs) used to convert video data from perceptually uniform code levels to linear luminance, including EOTFs for SDR and HDR.
FIG. 8 is a conceptual diagram depicting an example output curve for an EOTF.
Fig. 9A and 9B are conceptual diagrams showing visualizations of color distributions in two example color gamuts.
Fig. 10 is a plot of an example luma-driven chroma scaling (LCS) function.
FIG. 11 is a block diagram depicting an example HDR/WCG conversion apparatus operating in accordance with the techniques of this disclosure.
FIG. 12 is a block diagram depicting an example HDR/WCG inverse conversion apparatus in accordance with the techniques of this disclosure.
Fig. 13 shows an example of a Dynamic Range Adjustment (DRA) mapping function.
Fig. 14 shows an example of linearization of DRA scaling parameters.
FIG. 15 shows an example of a set of sigmoid functions.
Fig. 16 shows an example of smoothing of DRA scaling parameters.
FIG. 17 is a block diagram showing an example of a video encoder that may implement the techniques of this disclosure.
FIG. 18 is a block diagram showing an example of a video decoder that may implement the techniques of this disclosure.
FIG. 19 is a flow diagram showing one example video processing technique of this disclosure.
FIG. 20 is a flow diagram showing another example video processing technique of this disclosure.
Detailed Description
The present disclosure relates to processing and/or coding video data having High Dynamic Range (HDR) and Wide Color Gamut (WCG) representations. In one example, the techniques of this disclosure include techniques for determining a dynamic range adjustment parameter for a chroma component of video data as a function of a dynamic range adjustment parameter for a luma component. The techniques and devices described herein may improve compression efficiency and reduce distortion of hybrid video coding systems used to code video data, including HDR and WCG video data.
Fig. 1 is a block diagram showing an example video encoding and decoding system 10 that may utilize the techniques of this disclosure. As shown in fig. 1, system 10 includes a source device 12, source device 12 providing encoded video data to be later decoded by a destination device 14. In particular, source device 12 provides video data to destination device 14 via computer-readable medium 16. Source device 12 and destination device 14 may comprise any of a wide range of devices, including desktop computers, notebook (i.e., laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called "smart" phones, so-called "smart" tablets, televisions, cameras, display devices, digital media players, video game consoles, video streaming devices, broadcast receiver devices, and so forth. In some cases, source device 12 and destination device 14 may be equipped for wireless communication.
Destination device 14 may receive encoded video data to be decoded via computer-readable medium 16. Computer-readable medium 16 may comprise any type of medium or device capable of moving encoded video data from source device 12 to destination device 14. In one example, computer-readable medium 16 may comprise a communication medium to enable source device 12 to transmit encoded video data directly to destination device 14 in real-time. The encoded video data may be modulated according to a communication standard, such as a wired or wireless communication protocol, and transmitted to destination device 14. The communication medium may comprise any wireless or wired communication medium, such as a Radio Frequency (RF) spectrum or one or more physical transmission lines. The communication medium may form part of a packet-based network, such as a local area network, a wide area network, or a global network such as the internet. The communication medium may include routers, switches, base stations, or any other apparatus that may be used to facilitate communication from source device 12 to destination device 14.
In other examples, computer-readable medium 16 may comprise a non-transitory storage medium, such as a hard disk, a flash drive, a compact disc, a digital video disc, a blu-ray disc, or other computer-readable medium. In some examples, a network server (not shown) may receive encoded video data from source device 12 and provide the encoded video data to destination device 14, such as via a network transmission. Similarly, a computing device of a media production facility, such as an optical disc stamping facility, may receive encoded video data from source device 12 and produce an optical disc containing the encoded video data. Thus, in various examples, computer-readable medium 16 may be understood to include one or more computer-readable media in various forms.
In some examples, the encoded data may be output from output interface 22 to a storage device. Similarly, encoded data may be accessed from the storage device by the input interface. The storage device may include any of a variety of distributed or locally accessed data storage media such as a hard drive, blu-ray discs, DVDs, CD-ROMs, flash memory, volatile or non-volatile memory, or any other suitable digital storage media for storing encoded video data. In yet another example, the storage device may correspond to a file server or another intermediate storage device that may store the encoded video generated by source device 12. Destination device 14 may access the stored video data from the storage device via streaming or download. The file server may be any type of server capable of storing encoded video data and transmitting the encoded video data to destination device 14. Example file servers include web servers (e.g., for a website), FTP servers, Network Attached Storage (NAS) devices, or local disk drives. Destination device 14 may access the encoded video data over any standard data connection, including an internet connection. This may include a wireless channel (e.g., a Wi-Fi connection), a wired connection (e.g., DSL, cable modem, etc.), or a combination of both, suitable for accessing encoded video data stored on a file server. The transmission of the encoded video data from the storage device may be a streaming transmission, a download transmission, or a combination thereof.
The techniques of this disclosure are not necessarily limited to wireless applications or settings. The techniques may be applied to video coding in support of any of a variety of multimedia applications, such as over-the-air television broadcasts, cable television transmissions, satellite television transmissions, internet streaming video transmissions (e.g., via dynamic adaptive streaming over HTTP (DASH)), digital video encoded onto a data storage medium, decoding of digital video stored on a data storage medium, or other applications. In some examples, system 10 may be configured to support one-way or two-way video transmission to support applications such as video streaming, video playback, video broadcasting, and/or video telephony.
In the example of fig. 1, source device 12 includes video source 18, video encoding unit 21 (which includes video preprocessor unit 19 and video encoder 20), and output interface 22. Destination device 14 includes input interface 28, video decoding unit 29, which includes video post-processor unit 31 and video decoder 30, and display device 32. In accordance with this disclosure, video pre-processor unit 19 and/or video encoder 20 of source device 12 and video post-processor unit 31 and/or video decoder 30 of destination device 14 may be configured to implement the techniques of this disclosure, including performing cross-component dynamic range adjustments on chroma components of video to achieve more efficient compression of HDR and WCG video data with less distortion. In some examples, video preprocessor unit 19 may be separate from video encoder 20. In other examples, video preprocessor unit 19 may be part of video encoder 20. Likewise, in some examples, video post-processor unit 31 may be separate from video decoder 30. In other examples, video post-processor unit 31 may be part of video decoder 30. In other examples, the source device and the destination device may include other components or arrangements. For example, source device 12 may receive video data from an external video source 18, such as an external camera. Likewise, destination device 14 may interface with an external display device, rather than including an integrated display device.
The depicted system 10 of fig. 1 is merely one example. The techniques for processing HDR and WCG video data may be performed by any digital video encoding and/or video decoding device. Moreover, the techniques of this disclosure may also be performed by a video pre-processor and/or a video post-processor, such as video pre-processor unit 19 and video post-processor unit 31. In general, a video preprocessor may be any device configured to process video data prior to encoding (e.g., prior to HEVC encoding). In general, a video post-processor may be any device configured to process video data after decoding (e.g., after HEVC decoding). Source device 12 and destination device 14 are merely examples of such coding devices, in which source device 12 generates coded video data for transmission to destination device 14. In some examples, devices 12, 14 may operate in a substantially symmetric manner such that each of devices 12, 14 includes video encoding and decoding components, as well as a video pre-processor and a video post-processor (e.g., video pre-processor unit 19 and video post-processor unit 31, respectively). Hence, system 10 may support one-way or two-way video transmission between video devices 12, 14, e.g., for video streaming, video playback, video broadcasting, or video telephony.
Video source 18 of source device 12 may include a video capture device, such as a video camera, a video archive containing previously captured video, and/or a video feed interface to receive video from a video content provider. As another alternative, video source 18 may generate computer graphics-based data as the source video, or a combination of live video, archived video, and computer-generated video. In some cases, if video source 18 is a video camera, source device 12 and destination device 14 may form so-called camera phones or video phones. However, as mentioned above, the techniques described in this disclosure may be applicable to video coding and video processing in general, and may be applied to wireless and/or wired applications. In each case, the captured, pre-captured, or computer-generated video may be encoded by video encoding unit 21. The encoded video information may then be output by output interface 22 onto computer-readable medium 16.
Input interface 28 of destination device 14 receives information from computer-readable medium 16. The information of computer-readable medium 16 may include syntax information defined by video encoder 20 that is also used by video decoding unit 29, including syntax elements that describe characteristics and/or processing of blocks and other coded units, such as groups of pictures (GOPs). Display device 32 displays the decoded video data to a user, and may comprise any of a variety of display devices, such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), a plasma display, an Organic Light Emitting Diode (OLED) display, or another type of display device.
As shown, video preprocessor unit 19 receives video data from video source 18. Video preprocessor unit 19 may be configured to process the video data to convert it into a form suitable for encoding by video encoder 20. For example, video pre-processor unit 19 may perform dynamic range compression (e.g., using a non-linear transfer function), color conversion to a more compact or robust color space, dynamic range adjustment, and/or floating point to integer representation conversion. Video encoder 20 may perform video encoding on the video data output by video preprocessor unit 19. Video decoder 30 may perform an inverse of video encoder 20 to decode the video data, and video post-processor unit 31 may perform an inverse of the operations of video pre-processor unit 19 to convert the video data into a form suitable for a display. For example, video post-processor unit 31 may perform integer-to-floating point conversion, color conversion from a compact or robust color space, inverse dynamic range adjustment, and/or inverse of dynamic range compression to generate video data suitable for a display.
Video encoder 20 and video decoder 30 may each be implemented as any of a variety of suitable encoder circuits, such as one or more microprocessors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), discrete logic, software, hardware, firmware or any combinations thereof. When the techniques are implemented in part in software, a device may store instructions for the software in a suitable non-transitory computer-readable medium and execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Each of video encoder 20 and video decoder 30 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (codec) in the respective device.
Video pre-processor unit 19 and video post-processor unit 31 may each be implemented as any of a variety of suitable encoder circuits, such as one or more microprocessors, DSPs, ASICs, FPGAs, discrete logic, software, hardware, firmware, or any combinations thereof. When the techniques are implemented in part in software, a device may store instructions for the software in a suitable non-transitory computer-readable medium and execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. As discussed above, video pre-processor unit 19 and video post-processor unit 31 may be separate devices from video encoder 20 and video decoder 30, respectively. In other examples, video pre-processor unit 19 may be integrated with video encoder 20 in a single device and video post-processor unit 31 may be integrated with video decoder 30 in a single device.
In some examples, video encoder 20 and video decoder 30 operate according to a video compression standard, such as ISO/IEC MPEG-4 Visual and ITU-T H.264 (also known as ISO/IEC MPEG-4 AVC), and/or ITU-T H.265 (also known as High Efficiency Video Coding (HEVC)), or extensions thereof. In other examples, video encoder 20 and video decoder 30 may operate according to other proprietary or industry standards, such as the Joint Exploration Test Model (JEM). However, the techniques of this disclosure are not limited to any particular coding standard.
The HEVC standard was developed by the Joint Collaborative Team on Video Coding (JCT-VC) of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG). The HEVC working draft specification (referred to hereinafter as HEVC WD) is available from http://phenix.int-evry.fr/jct/doc_end_user/documents/15_Geneva/wg11/JCTVC-O1003-v2.zip.
Recently, a new video coding standard, referred to as the Versatile Video Coding (VVC) standard, has been under development by the Joint Video Experts Team (JVET) of VCEG and MPEG. An early draft of VVC is available in document JVET-J1001, "Versatile Video Coding (Draft 1)," and its algorithm description is available in document JVET-J1002, "Algorithm description for Versatile Video Coding and Test Model 1 (VTM 1)."
In HEVC and other video coding standards, a video sequence typically includes a series of pictures. Pictures may also be referred to as "frames". A picture may include three sample arrays, denoted SL, SCb, and SCr. SL is a two-dimensional array (i.e., block) of luma samples. SCb is a two-dimensional array of Cb chroma samples. SCr is a two-dimensional array of Cr chroma samples. Chroma samples may also be referred to herein as "chroma" samples. In other cases, a picture may be monochrome and may include only an array of luma samples.
Video encoder 20 may generate a set of Coding Tree Units (CTUs). Each of the CTUs may include a coding tree block of luma samples, two corresponding coding tree blocks of chroma samples, and syntax structures used to code the samples of the coding tree blocks. In a monochrome picture or a picture having three separate color planes, a CTU may comprise a single coding tree block and syntax structures used to code the samples of the coding tree block. A coding tree block may be an NxN block of samples. A CTU may also be referred to as a "treeblock" or a "largest coding unit" (LCU). The CTUs of HEVC may be roughly similar to the macroblocks of other video coding standards, such as H.264/AVC. However, a CTU is not necessarily limited to a particular size and may include one or more Coding Units (CUs). A slice may include an integer number of CTUs ordered consecutively in a raster scan.
This disclosure may use the terms "video unit" or "video block" to refer to one or more blocks of samples, and syntax structures for coding samples of the one or more blocks of samples. Example types of video units may include CTUs, CUs, PUs, Transform Units (TUs) in HEVC, or macroblocks, macroblock partitions, and so forth in other video coding standards.
To generate a coded CTU, video encoder 20 may recursively perform quadtree partitioning on the coding tree blocks of a CTU to divide the coding tree blocks into coding blocks, hence the name "coding tree unit". A coding block is an NxN block of samples. A CU may comprise a coding block of luma samples and two corresponding coding blocks of chroma samples of a picture having a luma sample array, a Cb sample array, and a Cr sample array, as well as syntax structures used to code the samples of the coding blocks. In a monochrome picture or a picture having three separate color planes, a CU may comprise a single coding block and syntax structures used to code the samples of the coding block.
Video encoder 20 may partition the coding block of the CU into one or more prediction blocks. A prediction block may be a rectangular (i.e., square or non-square) block of samples to which the same prediction is applied. A Prediction Unit (PU) of a CU may include a prediction block of luma samples of a picture, two corresponding prediction blocks of chroma samples of the picture, and syntax structures for predicting the prediction block samples. In a monochrome picture or a picture with three separate color planes, a PU may include a single prediction block, and syntax structures used to predict prediction block samples. Video encoder 20 may generate predictive luma, Cb, and Cr blocks for the luma, Cb, and Cr prediction blocks for each PU of the CU.
As another example, video encoder 20 and video decoder 30 may be configured to operate in accordance with JEM/VVC. According to JEM/VVC, a video coder (e.g., video encoder 20) partitions a picture into a plurality of Coding Tree Units (CTUs). Video encoder 20 may partition the CTUs according to a tree structure, such as a quadtree plus binary tree (QTBT) structure. The QTBT structure of JEM removes the concepts of multiple partition types, such as the separation among the CU, PU, and TU concepts of HEVC. The QTBT structure of JEM includes two levels: a first level partitioned according to quadtree partitioning, and a second level partitioned according to binary tree partitioning. The root node of the QTBT structure corresponds to a CTU. Leaf nodes of the binary trees correspond to Coding Units (CUs).
In some examples, video encoder 20 and video decoder 30 may use a single QTBT structure to represent each of the luma and chroma components, while in other examples, video encoder 20 and video decoder 30 may use two or more QTBT structures, such as one QTBT structure for the luma component and another QTBT structure for the two chroma components (or two QTBT structures for the respective chroma components).
Video encoder 20 and video decoder 30 may be configured to use quadtree partitioning according to HEVC, QTBT partitioning according to JEM/VVC, or other partitioning structures. For purposes of explanation, a description of the techniques of this disclosure is presented with respect to QTBT segmentation. However, it should be understood that the techniques of this disclosure may also be applied to video coders configured to use quadtree partitioning, or other types of partitioning as well.
Fig. 2A and 2B are conceptual diagrams depicting an example quadtree plus binary tree (QTBT) structure 130 and a corresponding Coding Tree Unit (CTU) 132. The solid lines represent quadtree splitting, and the dotted lines indicate binary tree splitting. In each split (i.e., non-leaf) node of the binary tree, one flag is signaled to indicate which splitting type (i.e., horizontal or vertical) is used, where, in this example, 0 indicates horizontal splitting and 1 indicates vertical splitting. For the quadtree splitting, there is no need to indicate the splitting type, since quadtree nodes split a block horizontally and vertically into 4 sub-blocks of equal size. Accordingly, video encoder 20 may encode, and video decoder 30 may decode, syntax elements (e.g., splitting information) for the region tree level (i.e., the solid lines) of QTBT structure 130 and syntax elements (e.g., splitting information) for the prediction tree level (i.e., the dashed lines) of QTBT structure 130. Video encoder 20 may encode, and video decoder 30 may decode, video data (e.g., prediction and transform data) for CUs represented by terminal leaf nodes of QTBT structure 130.
In general, the CTUs 132 of fig. 2B may be associated with parameters that define the size of blocks corresponding to nodes of the QTBT structure 130 at the first and second levels. These parameters may include CTU size (representing the size of CTU 132 in the sample), minimum quadtree size (MinQTSize, representing the minimum allowed quadtree leaf node size), maximum binary tree size (MaxBTSize, representing the maximum allowed binary tree root node size), maximum binary tree depth (MaxBTDepth, representing the maximum allowed binary tree depth), and minimum binary tree size (MinBTSize, representing the minimum allowed binary tree leaf node size).
A root node of a QTBT structure corresponding to a CTU may have four child nodes at the first level of the QTBT structure, each of which may be partitioned according to quadtree partitioning. That is, a node of the first level is either a leaf node (having no child nodes) or has four child nodes. The example of QTBT structure 130 represents such nodes as including a parent node and child nodes having solid lines for branches. If a node of the first level is not larger than the maximum allowed binary tree root node size (MaxBTSize), it may be further partitioned by the respective binary tree. The binary tree splitting of a node may be iterated until the nodes resulting from the splitting reach the minimum allowed binary tree leaf node size (MinBTSize) or the maximum allowed binary tree depth (MaxBTDepth). The example of QTBT structure 130 represents such nodes as having dashed lines for branches. A binary tree leaf node is referred to as a Coding Unit (CU), which is used for prediction (e.g., intra-picture or inter-picture prediction) and transform without any further partitioning. As discussed above, a CU may also be referred to as a "video block" or a "block".
In one example of a QTBT partitioning structure, the CTU size is set to 128 × 128 (luma samples and two corresponding 64 × 64 chroma samples), MinQTSize is set to 16 × 16, MaxBTSize is set to 64 × 64, MinBTSize (for both width and height) is set to 4, and MaxBTDepth is set to 4. Quadtree partitioning is first applied to CTUs to produce quadtree leaf nodes. The quad tree leaf nodes may have sizes from 16 × 16 (i.e., MinQTSize) to 128 × 128 (i.e., CTU size). If the leaf quadtree node is 128 x 128, it will not be further split by the binary tree because it exceeds the MaxBTSize size (i.e., 64 x 64 in this example). Otherwise, the leaf quadtree nodes will be further partitioned by the binary tree. Thus, the leaf nodes of the quadtree are also the root nodes of the binary tree and have a binary tree depth of 0. When the binary tree depth reaches MaxBTDepth (4 in this example), no further splitting is allowed. When the binary tree node has a width equal to MinBTSize (4 in this example), it means that no further horizontal splitting is permitted. Similarly, a binary tree node having a height equal to MinBTSize means that no further vertical splitting is permitted for the binary tree node. As mentioned above, the leaf nodes of the binary tree are referred to as CUs and are further processed according to prediction and transform without further partitioning.
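The interplay of these parameters can be sketched as follows. This is an illustrative sketch of the constraints only, not encoder logic from any reference implementation; a real encoder additionally chooses among the permitted splits by rate-distortion cost, and the per-dimension MinBTSize checks are simplified here to square blocks.

```python
# Sketch of the split constraints implied by the example parameters above
# (CTU 128x128, MinQTSize 16, MaxBTSize 64, MaxBTDepth 4, MinBTSize 4).

CTU_SIZE, MIN_QT_SIZE, MAX_BT_SIZE = 128, 16, 64
MAX_BT_DEPTH, MIN_BT_SIZE = 4, 4

def allowed_splits(size, bt_depth, in_binary_tree):
    splits = []
    # Quadtree splitting is only available before any binary split, and
    # only while the resulting leaves stay at least MinQTSize.
    if not in_binary_tree and size // 2 >= MIN_QT_SIZE:
        splits.append("quad")
    # Binary splitting requires the node to be within MaxBTSize, the depth
    # to stay below MaxBTDepth, and the halves to be at least MinBTSize.
    if size <= MAX_BT_SIZE and bt_depth < MAX_BT_DEPTH and size // 2 >= MIN_BT_SIZE:
        splits += ["binary_horizontal", "binary_vertical"]
    return splits

print(allowed_splits(128, 0, False))   # ['quad']: 128 exceeds MaxBTSize
print(allowed_splits(64, 0, False))    # quad plus both binary directions
```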
Video encoder 20 may use intra prediction or inter prediction to generate the predictive blocks for the PU. If video encoder 20 generates the predictive blocks of the PU using intra prediction, video encoder 20 may generate the predictive blocks of the PU based on decoded samples of a picture associated with the PU.
If video encoder 20 uses inter prediction to generate the predictive blocks of the PU, video encoder 20 may generate the predictive blocks of the PU based on decoded samples of one or more pictures other than the picture associated with the PU. The inter prediction may be unidirectional inter prediction (i.e., unidirectional prediction) or bidirectional inter prediction (i.e., bidirectional prediction). To perform uni-directional prediction or bi-directional prediction, video encoder 20 may generate a first reference picture list (RefPicList0) and a second reference picture list (RefPicList1) for the current slice.
Each of the reference picture lists may include one or more reference pictures. When using uni-directional prediction, video encoder 20 may search for reference pictures in either or both of RefPicList0 and RefPicList1 to determine a reference position within the reference picture. Moreover, when using uni-prediction, video encoder 20 may generate the predictive sample block for the PU based at least in part on the samples corresponding to the reference location. Moreover, when using uni-directional prediction, video encoder 20 may generate a single motion vector that indicates a spatial displacement between the prediction block and the reference position for the PU. To indicate a spatial displacement between the prediction block and the reference location of the PU, the motion vector may include a horizontal component that specifies a horizontal displacement between the prediction block and the reference location of the PU, and may include a vertical component that specifies a vertical displacement between the prediction block and the reference location of the PU.
When the PU is encoded using bi-prediction, video encoder 20 may determine a first reference position in a reference picture in RefPicList0, and a second reference position in a reference picture in RefPicList 1. Video encoder 20 may then generate the predictive blocks for the PU based at least in part on the samples corresponding to the first and second reference locations. Moreover, when the PU is encoded using bi-prediction, video encoder 20 may generate a first motion vector that indicates a spatial displacement between a sample block of the PU and a first reference location, and a second motion vector that indicates a spatial displacement between a prediction block of the PU and a second reference location.
In some examples, JEM/VVC also provides an affine motion compensation mode, which may be considered an inter-prediction mode. In affine motion compensation mode, video encoder 20 may determine two or more motion vectors that represent non-translational motion (e.g., zoom in or out, rotation, perspective motion, or other irregular motion types).
After video encoder 20 generates the predictive luma block, the predictive Cb block, and the predictive Cr block for one or more PUs of the CU, video encoder 20 may generate the luma residual block for the CU. Each sample in the luma residual block of the CU indicates a difference between a luma sample in one of the predictive luma blocks of the CU and a corresponding sample in the original luma coding block of the CU. In addition, video encoder 20 may generate a Cb residual block for the CU. Each sample in the Cb residual block of the CU may indicate a difference between a Cb sample in one of the predictive Cb blocks of the CU and a corresponding sample in the original Cb coding block of the CU. Video encoder 20 may also generate a Cr residual block for the CU. Each sample in the Cr residual block of the CU may indicate a difference between a Cr sample in one of the predictive Cr blocks of the CU and a corresponding sample in the original Cr coding block of the CU.
Furthermore, video encoder 20 may use quadtree partitioning to decompose the luma, Cb, and Cr residual blocks of the CU into one or more luma, Cb, and Cr transform blocks. The transform block may be a rectangular block of samples on which the same transform is applied. A Transform Unit (TU) of a CU may comprise a transform block of luma samples, two corresponding transform blocks of chroma samples, and syntax structures used to transform the transform block samples. In a monochrome picture or a picture with three separate color planes, a TU may comprise a single transform block, and syntax structures used to transform the transform block samples. Thus, each TU of a CU may be associated with a luma transform block, a Cb transform block, and a Cr transform block. A luma transform block associated with a TU may be a sub-block of a luma residual block of a CU. The Cb transform block may be a sub-block of a Cb residual block of the CU. The Cr transform block may be a sub-block of a Cr residual block of the CU.
Video encoder 20 may apply one or more transforms to a luma transform block of a TU to generate a luma coefficient block for the TU. The coefficient block may be a two-dimensional array of transform coefficients. The transform coefficients may be scalars. Video encoder 20 may apply one or more transforms to Cb transform blocks of a TU to generate Cb coefficient blocks of the TU. Video encoder 20 may apply one or more transforms to a Cr transform block of a TU to generate a Cr coefficient block of the TU.
After generating the coefficient blocks (e.g., luma coefficient blocks, Cb coefficient blocks, or Cr coefficient blocks), video encoder 20 may quantize the coefficient blocks. Quantization generally refers to a process of quantizing transform coefficients to possibly reduce the amount of data used to represent the transform coefficients, providing further compression. Furthermore, video encoder 20 may inverse quantize the transform coefficients and apply an inverse transform to the transform coefficients in order to reconstruct the transform blocks of the TUs of the CU of the picture. Video encoder 20 may reconstruct the coding blocks of the CU using the reconstructed transform blocks of the TUs of the CU and the predictive blocks of the PUs of the CU. By reconstructing the coding blocks of each CU of a picture, video encoder 20 may reconstruct the picture. Video encoder 20 may store the reconstructed pictures in a Decoded Picture Buffer (DPB). Video encoder 20 may use the reconstructed pictures in the DPB for inter-prediction and intra-prediction.
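To make the quantization step above concrete, the following generic scalar-quantizer sketch illustrates the lossy rounding involved. It uses the HEVC-style convention that the quantization step size roughly doubles every 6 QP values, but it is not the exact integer-arithmetic quantizer of HEVC or VVC.

```python
# Generic scalar quantization of transform coefficients, illustrating the
# lossy rounding described above (not the exact HEVC/VVC quantizer).

def qstep(qp):
    # Step size roughly doubles every 6 QP values (HEVC-style relation).
    return 2.0 ** ((qp - 4) / 6.0)

def quantize(coeffs, qp):
    return [int(round(c / qstep(qp))) for c in coeffs]

def dequantize(levels, qp):
    return [level * qstep(qp) for level in levels]

coeffs = [103.7, -41.2, 12.9, -3.3, 0.8]
levels = quantize(coeffs, qp=22)
recon = dequantize(levels, qp=22)   # lossy: recon only approximates coeffs
```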
After video encoder 20 quantizes the coefficient block, video encoder 20 may entropy encode syntax elements indicating the quantized transform coefficients. For example, video encoder 20 may perform Context Adaptive Binary Arithmetic Coding (CABAC) on syntax elements that indicate quantized transform coefficients. Video encoder 20 may output the entropy-encoded syntax elements in a bitstream.
Video encoder 20 may output a bitstream that includes a sequence of bits that forms a representation of coded pictures and associated data. The bitstream may include a sequence of Network Abstraction Layer (NAL) units. Each of the NAL units includes a NAL unit header and encapsulates a Raw Byte Sequence Payload (RBSP). The NAL unit header may include a syntax element indicating a NAL unit type code. The NAL unit type code specified by the NAL unit header of the NAL unit indicates the type of the NAL unit. An RBSP may be a syntax structure containing an integer number of bytes encapsulated within a NAL unit. In some cases, the RBSP includes zero bits.
Different types of NAL units may encapsulate different types of RBSPs. For example, a first type of NAL unit may encapsulate RBSPs of a Picture Parameter Set (PPS), a second type of NAL unit may encapsulate RBSPs of a coded slice, a third type of NAL unit may encapsulate RBSPs of Supplemental Enhancement Information (SEI), and so on. A PPS is a syntax structure that may contain syntax elements applicable to zero or more complete coded pictures. The NAL units that encapsulate the RBSP of the video coding data (as opposed to the RBSP of the parameter set and SEI message) may be referred to as Video Coding Layer (VCL) NAL units. NAL units that encapsulate a coded slice may be referred to herein as coded slice NAL units. An RBSP for a coded slice may include a slice header and slice data.
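As a concrete illustration of the NAL unit header described above, the following sketch parses the two-byte HEVC NAL unit header. The field layout (forbidden_zero_bit, nal_unit_type, nuh_layer_id, nuh_temporal_id_plus1) is as specified in HEVC, while the function name and example bytes are merely illustrative.

```python
# Parsing the two-byte HEVC NAL unit header described above.

def parse_hevc_nal_header(b0, b1):
    forbidden_zero_bit = (b0 >> 7) & 0x1   # required to be 0
    nal_unit_type = (b0 >> 1) & 0x3F       # e.g., 32 = VPS, 33 = SPS, 34 = PPS
    nuh_layer_id = ((b0 & 0x1) << 5) | ((b1 >> 3) & 0x1F)
    nuh_temporal_id_plus1 = b1 & 0x7
    return nal_unit_type, nuh_layer_id, nuh_temporal_id_plus1

# Example: 0x44 0x01 is a PPS NAL unit header (type 34, layer 0, tid 1).
print(parse_hevc_nal_header(0x44, 0x01))   # (34, 0, 1)
```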
Video decoder 30 may receive a bitstream. Further, video decoder 30 may parse the bitstream to decode the syntax elements from the bitstream. Video decoder 30 may reconstruct pictures of the video data based at least in part on syntax elements decoded from the bitstream. The process of reconstructing the video data may be substantially reciprocal to the process performed by video encoder 20. For example, video decoder 30 may determine, using the motion vector of the PU, predictive blocks for the PU of the current CU. Video decoder 30 may generate the predictive blocks for the PU using one or more motion vectors of the PU.
Furthermore, video decoder 30 may inverse quantize coefficient blocks associated with TUs of the current CU. Video decoder 30 may perform an inverse transform on the coefficient blocks to reconstruct transform blocks associated with TUs of the current CU. Video decoder 30 may reconstruct the coding blocks of the current CU by adding samples of predictive sample blocks of PUs of the current CU to corresponding samples of transform blocks of TUs of the current CU. By reconstructing the coding blocks of each CU of a picture, video decoder 30 may reconstruct the picture. Video decoder 30 may store the decoded pictures in a decoded picture buffer for output and/or for use in decoding other pictures.
Next-generation video applications are expected to operate with video data representing captured scenery with HDR and WCG. The parameters of the utilized dynamic range and color gamut are two independent attributes of video content, and their specification for purposes of digital television and multimedia services is defined by several international standards. For example, ITU-R Rec. BT.709, "Parameter values for the HDTV standards for production and international programme exchange," defines parameters for HDTV (high definition television), such as Standard Dynamic Range (SDR) and the standard color gamut. ITU-R Rec. BT.2020, "Parameter values for ultra-high definition television systems for production and international programme exchange," specifies UHDTV (ultra-high definition television) parameters, such as HDR and WCG (e.g., WCG defines color primaries that extend beyond the standard color gamut). ITU-R Rec. BT.2100, "Image parameter values for high dynamic range television for use in production and international programme exchange," defines transfer functions and representations for HDR television use, including primaries that support wide color gamut representations. There are also other Standards Developing Organization (SDO) documents that specify dynamic range and color gamut attributes in other systems; for example, the DCI-P3 color gamut is defined in SMPTE-231-2 (Society of Motion Picture and Television Engineers), and some parameters of HDR are defined in SMPTE-2084. A brief description of dynamic range and color gamut for video data is provided below.
Dynamic range is typically defined as the ratio between the maximum and minimum brightness (e.g., luminance) of a video signal. Dynamic range may also be measured in terms of "f-stops", where one f-stop corresponds to a doubling of the signal's dynamic range. In MPEG's definition, content featuring brightness variation of more than 16 f-stops is referred to as HDR content. In some definitions, levels between 10 and 16 f-stops are considered intermediate dynamic range, but may be considered HDR in other definitions. In some examples of this disclosure, HDR video content may be any video content that has a higher dynamic range than traditionally used video content with a standard dynamic range (e.g., video content as specified by ITU-R Rec. BT.709).
The Human Visual System (HVS) is capable of perceiving a much larger dynamic range than SDR content and HDR content. However, the HVS includes an adaptation mechanism that narrows the dynamic range of the HVS to a so-called simultaneous range. The width of the simultaneous range may depend on current lighting conditions (e.g., the current brightness). A visualization of the dynamic range provided by the SDR of HDTV, the expected HDR of UHDTV, and the dynamic range of the HVS is shown in fig. 3, although the exact ranges may vary per person and per display.
Some example video applications and services are regulated by ITU-R Rec. BT.709 and provide SDR, typically supporting a range of brightness (or luminance) of approximately 0.1 to 100 candelas (cd) per m2 (often referred to as "nits"), leading to less than 10 f-stops. Some example next-generation video services are expected to provide a dynamic range of up to 16 f-stops. Although detailed specifications for such content are currently under development, some initial parameters have been specified in SMPTE-2084 and ITU-R Rec. BT.2020.
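As a worked example of the f-stop measure (one f-stop per doubling): a signal spanning 0.1 to 100 nits covers log2(100 / 0.1) = log2(1000), or approximately 9.97 f-stops, consistent with the less-than-10-f-stops characterization of SDR above.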
Another aspect of a more realistic video experience, besides HDR, is the color dimension. The color dimension is typically defined by the color gamut. Fig. 4 is a conceptual diagram showing the SDR color gamut (triangle 100, based on the BT.709 color primaries) and the wider color gamut for UHDTV (triangle 102, based on the BT.2020 color primaries). Fig. 4 also depicts the so-called spectrum locus (delimited by the tongue-shaped area 104), representing the limits of natural colors. Moving from the BT.709 (triangle 100) to the BT.2020 (triangle 102) color primaries is intended to provide UHDTV services with about 70% more colors, as depicted in fig. 4. D65 specifies an example white point for the BT.709 and/or BT.2020 specifications.
Examples of color gamut specifications for the DCI-P3, BT.709, and BT.2020 color spaces are shown in Table 1.
TABLE 1 Color gamut parameters

Color space       White point (xW, yW)   Red (xR, yR)    Green (xG, yG)   Blue (xB, yB)
DCI-P3            0.314, 0.351           0.680, 0.320    0.265, 0.690     0.150, 0.060
ITU-R BT.709      0.3127, 0.3290         0.640, 0.330    0.300, 0.600     0.150, 0.060
ITU-R BT.2020     0.3127, 0.3290         0.708, 0.292    0.170, 0.797     0.131, 0.046
As can be seen in Table 1, a color gamut can be defined by the x and y values of a white point and by the x and y values of the primary colors (e.g., red (R), green (G), and blue (B)). The x and y values represent normalized chromaticity coordinates derived from the tristimulus values X, Z (chromaticity) and Y (luminance) of a color, as defined by the CIE 1931 color space. The CIE 1931 color space defines the links between pure colors (e.g., in terms of wavelength) and how the human eye perceives such colors.
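For concreteness, the following sketch shows how the (x, y) chromaticity coordinates of Table 1 relate to CIE 1931 tristimulus values; the D65 tristimulus values used in the example are the standard 2-degree-observer values.

```python
# Chromaticity coordinates (x, y) from CIE 1931 tristimulus values (X, Y, Z).
# Y carries luminance; x and y are the normalized values listed in Table 1.

def xy_chromaticity(X, Y, Z):
    total = X + Y + Z
    return X / total, Y / total

# The standard D65 tristimulus values recover the D65 white point of
# approximately (0.3127, 0.3290).
print(xy_chromaticity(95.047, 100.0, 108.883))
```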
HDR/WCG video data is typically acquired and stored at a very high precision per component (even floating point), with the 4:4:4 chroma format and a very wide color space (e.g., CIE XYZ). This representation targets high precision and is (almost) mathematically lossless. However, such a format for storing HDR/WCG video data may include a large amount of redundancy and may not be optimal for compression purposes. A lower precision format with HVS-based assumptions is typically utilized for state-of-the-art video applications.
One example of a video data format conversion process for purposes of compression includes three major processes, as shown in fig. 5. The techniques of fig. 5 may be performed by source device 12. Linear RGB data 110 may be HDR/WCG video data and may be stored in a floating point representation. The linear RGB data 110 may be compacted using a nonlinear Transfer Function (TF) 112 for dynamic range compaction. Transfer function 112 may compact linear RGB data 110 using any number of nonlinear transfer functions, e.g., the PQ TF as defined in SMPTE-2084. In some examples, color conversion process 114 converts the compacted data into a more compact or robust color space (e.g., a YUV or YCbCr color space) that is more suitable for compression by a hybrid video encoder. This data is then quantized using a floating point to integer representation quantization unit 116 to produce converted HDR' data 118. In this example, HDR' data 118 is in an integer representation. The HDR' data is now in a format more suitable for compression by a hybrid video encoder, e.g., video encoder 20 applying H.264, HEVC, or VVC techniques. The order of the processes depicted in fig. 5 is given as an example and may vary in other applications. For example, color conversion may precede the TF process. In addition, additional processing, such as spatial subsampling, may be applied to the color components.
The inverse conversion process at the decoder side is depicted in fig. 6. The techniques of fig. 6 may be performed by destination device 14. Converted HDR' data 120 may be obtained at destination device 14 through decoding video data using a hybrid video decoder, e.g., video decoder 30 applying H.264, HEVC, or VVC techniques. The HDR' data 120 may then be dequantized by dequantization unit 122. An inverse color conversion process 124 may then be applied to the dequantized HDR' data. Inverse color conversion process 124 may be the inverse of color conversion process 114. For example, inverse color conversion process 124 may convert the HDR' data from the YCbCr format back to the RGB format. Next, inverse transfer function 126 may be applied to the data to add back the dynamic range that was compacted by transfer function 112, thereby reconstructing linear RGB data 128.
The technique depicted in FIG. 5 will now be discussed in more detail. In general, a transfer function is applied to data (e.g., HDR/WCG video data) to compact the dynamic range of the data such that errors caused by quantization are perceptually (approximately) uniform across the range of luminance values. This compaction allows the data to be represented with fewer bits. In one example, the transfer function may be a one-dimensional (1D) nonlinear function and may reflect the inverse of the electro-optical transfer function (EOTF) of the end-user display, e.g., as specified for SDR in Rec. BT.709. In another example, the transfer function may approximate the HVS perception of brightness changes, such as the PQ transfer function specified for HDR in SMPTE-2084. The inverse of the OETF is the EOTF (electro-optical transfer function), which maps code levels back to luminance. FIG. 7 shows several examples of nonlinear transfer functions used as EOTFs. The transfer function may also be applied separately to each R, G, and B component.
The specification of SMPTE-2084 defines the EOTF application as follows. The TF is applied to normalized linear R, G, B values, which results in a nonlinear representation R'G'B'. SMPTE-2084 defines normalization by NORM = 10000, which is associated with a peak brightness of 10000 nits (candelas per square meter):
R' = PQ_TF(max(0, min(R/NORM, 1)))
G' = PQ_TF(max(0, min(G/NORM, 1)))        (1)
B' = PQ_TF(max(0, min(B/NORM, 1)))

where

PQ_TF(L) = ((c1 + c2 * L^m1) / (1 + c3 * L^m1))^m2
m1 = 2610/4096 * 1/4 = 0.1593017578125
m2 = 2523/4096 * 128 = 78.84375
c1 = c3 - c2 + 1 = 3424/4096 = 0.8359375
c2 = 2413/4096 * 32 = 18.8515625
c3 = 2392/4096 * 32 = 18.6875
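For reference, equation (1) and its constants translate directly into code; the following minimal sketch (Python, purely for illustration) evaluates the PQ TF for a linear light value given in nits.

```python
# The PQ TF of equation (1), written directly from the constants above.

m1 = 2610.0 / 4096.0 / 4.0      # 0.1593017578125
m2 = 2523.0 / 4096.0 * 128.0    # 78.84375
c1 = 3424.0 / 4096.0            # 0.8359375 (= c3 - c2 + 1)
c2 = 2413.0 / 4096.0 * 32.0     # 18.8515625
c3 = 2392.0 / 4096.0 * 32.0     # 18.6875
NORM = 10000.0                  # peak brightness in nits

def pq_tf(l):
    """PQ_TF(L) = ((c1 + c2*L^m1) / (1 + c3*L^m1))^m2, for L in [0, 1]."""
    lm1 = l ** m1
    return ((c1 + c2 * lm1) / (1.0 + c3 * lm1)) ** m2

def linear_to_pq(component):
    """R' = PQ_TF(max(0, min(R/NORM, 1))), per equation (1)."""
    return pq_tf(max(0.0, min(component / NORM, 1.0)))

print(linear_to_pq(100.0))   # 100 nits maps to roughly 0.51 of the code range
```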
Fig. 8 shows the normalized output values (nonlinear color values) of the PQ EOTF, with the input values (linear color values) normalized to the range 0..1. As seen from curve 131, one percent of the dynamic range of the input signal (low illumination) is converted to 50 percent of the dynamic range of the output signal.
Typically, an EOTF is defined as a function with floating point accuracy; thus, no error is introduced to a signal with this nonlinearity if the inverse TF (the so-called OETF) is applied. The inverse TF (OETF) specified in SMPTE-2084 is defined as the inversePQ_TF function:

R = 10000 * inversePQ_TF(R')
G = 10000 * inversePQ_TF(G')        (2)
B = 10000 * inversePQ_TF(B')

where

inversePQ_TF(N) = (max(N^(1/m2) - c1, 0) / (c2 - c3 * N^(1/m2)))^(1/m1)
Applying the EOTF and OETF in sequence provides an error-free perfect reconstruction with floating point accuracy. However, this representation is not always optimal for streaming or broadcast services. A more compact representation of fixed bit accuracy with non-linear R ' G ' B ' data is described in the following sections.
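This reconstruction property can be checked numerically. The following sketch implements the inversePQ_TF of equation (2) and round-trips a value through the functions of the previous sketch, reusing its constants m1, m2, c1, c2, c3, NORM, and linear_to_pq.

```python
# Inverse PQ (the OETF of equation (2)). Round-tripping a value checks the
# floating point reconstruction property noted above.

def inverse_pq_tf(n):
    """inversePQ_TF(N) = (max(N^(1/m2) - c1, 0) / (c2 - c3*N^(1/m2)))^(1/m1)."""
    nm2 = n ** (1.0 / m2)
    return (max(nm2 - c1, 0.0) / (c2 - c3 * nm2)) ** (1.0 / m1)

def pq_to_linear(code):
    """R = 10000 * inversePQ_TF(R'), per equation (2)."""
    return NORM * inverse_pq_tf(code)

nits = 100.0
assert abs(pq_to_linear(linear_to_pq(nits)) - nits) < 1e-6   # near-lossless
```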
It should be noted that EOTFs and OETFs are currently the subject of very active research, and the TF utilized in some HDR video coding systems may differ from SMPTE-2084.
In the context of the present invention, the terms "signal value" or "color value" may be used to describe a brightness level corresponding to the value of a particular color component (e.g., R, G, B, or Y) of an image element. The signal value typically represents a linear light level (luminance value). The terms "code level" or "digital code value" may refer to a digital representation of an image signal value. Typically, such a digital representation is representative of a nonlinear signal value. The EOTF represents the relationship between the nonlinear signal values provided to a display device (e.g., display device 32) and the linear color values produced by the display device.
RGB data is typically used as the input color space, since RGB is the type of data that is typically generated by image capture sensors. However, the RGB color space has high redundancy among its components and is not optimal for compact representation. To achieve a more compact and more robust representation, the RGB components are typically converted (e.g., a color transform is performed) to a less correlated color space that is more suitable for compression, e.g., YCbCr. The YCbCr color space separates brightness, in the form of luma (Y), and color information (CbCr) into different, less correlated components. In this context, a robust representation may refer to a color space featuring higher levels of error robustness when compressed at a limited bit rate.
Modern video coding systems typically use the YCbCr color space, as specified in ITU-R BT.709 or ITU-R BT.2020. The YCbCr color space in the bt.709 standard specifies the following conversion process from R'G'B' to Y'CbCr (non-constant luminance representation):
Y' = 0.2126 * R' + 0.7152 * G' + 0.0722 * B'
Cb = (B' - Y') / 1.8556   (3)
Cr = (R' - Y') / 1.5748
The above process may also be implemented using the following approximate conversion, which avoids the division for the Cb and Cr components:
Y' = 0.212600 * R' + 0.715200 * G' + 0.072200 * B'
Cb = -0.114572 * R' - 0.385428 * G' + 0.500000 * B'   (4)
Cr = 0.500000 * R' - 0.454153 * G' - 0.045847 * B'
The ITU-R bt.2020 standard specifies the following conversion process from R'G'B' to Y'CbCr (non-constant luminance representation):

Y' = 0.2627 * R' + 0.6780 * G' + 0.0593 * B'
Cb = (B' - Y') / 1.8814   (5)
Cr = (R' - Y') / 1.4746
The above process may also be implemented using the following approximate conversion, which avoids the division for the Cb and Cr components:
Y' = 0.262700 * R' + 0.678000 * G' + 0.059300 * B'
Cb = -0.139630 * R' - 0.360370 * G' + 0.500000 * B'   (6)
Cr = 0.500000 * R' - 0.459786 * G' - 0.040214 * B'
Note that both color spaces remain normalized; for input values normalized in the range 0..1, the resulting values also map to the range 0..1. In general, color transforms implemented with floating-point accuracy provide perfect reconstruction; hence this process is lossless.
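As an illustration of the conversions above, the following sketch implements equation (5) and its inverse for normalized input. The function names are illustrative; the divisor form is the exact counterpart of the approximate matrix in equation (6).

def rgb_to_ycbcr_bt2020(Rp, Gp, Bp):
    """Equation (5): nonlinear R'G'B' (0..1) -> Y' (0..1), Cb/Cr (-0.5..0.5)."""
    Yp = 0.2627 * Rp + 0.6780 * Gp + 0.0593 * Bp
    Cb = (Bp - Yp) / 1.8814
    Cr = (Rp - Yp) / 1.4746
    return Yp, Cb, Cr

def ycbcr_to_rgb_bt2020(Yp, Cb, Cr):
    """Inverse of equation (5); lossless in floating point."""
    Bp = Yp + 1.8814 * Cb
    Rp = Yp + 1.4746 * Cr
    Gp = (Yp - 0.2627 * Rp - 0.0593 * Bp) / 0.6780
    return Rp, Gp, Bp

orig = (0.25, 0.50, 0.75)
back = ycbcr_to_rgb_bt2020(*rgb_to_ycbcr_bt2020(*orig))
assert all(abs(a - b) < 1e-12 for a, b in zip(orig, back))  # round trip is lossless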
After the color transform, the input data in the target color space may still be represented at a high bit depth (e.g., with floating-point accuracy). That is, all of the processing stages described above are typically implemented with floating-point accuracy and can be considered lossless. However, for most consumer electronics applications, this level of accuracy can be considered redundant and expensive. For such applications, the input data in the target color space is converted to a target bit depth with fixed-point accuracy. The high bit depth data may be converted to the target bit depth, for example, using a quantization process.
Some studies have shown that 10- to 12-bit accuracy, in combination with the PQ transfer function, is sufficient to provide HDR data of 16 f-stops with distortion below the Just Noticeable Difference (JND). In general, a JND is the amount by which something (e.g., video data) must be changed for the difference to be discernible (e.g., by the HVS). Data represented with 10-bit accuracy can be further coded by most state-of-the-art video coding solutions. This conversion includes signal quantization; it is a lossy coding element and a source of inaccuracy introduced into the converted data.
An example of such quantization, applied to codewords in the target color space (here, the YCbCr color space), is shown below. The input values YCbCr, expressed with floating-point accuracy, are converted into a signal of fixed bit depth BitDepthY for the Y value (luma) and fixed bit depth BitDepthC for the chroma values (Cb, Cr):
DY' = Clip1Y(Round((1 << (BitDepthY - 8)) * (219 * Y' + 16)))
DCb = Clip1C(Round((1 << (BitDepthC - 8)) * (224 * Cb + 128)))   (7)
DCr = Clip1C(Round((1 << (BitDepthC - 8)) * (224 * Cr + 128)))
wherein
Round(x) = Sign(x) * Floor(Abs(x) + 0.5)
Sign(x) = -1 if x < 0; 0 if x == 0; 1 if x > 0
Floor(x) = the largest integer less than or equal to x
Abs(x) = x if x >= 0; -x if x < 0
Clip1Y(x) = Clip3(0, (1 << BitDepthY) - 1, x)
Clip1C(x) = Clip3(0, (1 << BitDepthC) - 1, x)
Clip3(x, y, z) = x if z < x; y if z > y; z otherwise
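A minimal sketch of the quantization of equation (7) and its helper functions, assuming 10-bit target bit depths; the function names and sample values are illustrative.

import math

def clip3(lo, hi, z):
    """Clip3(x, y, z) as defined above."""
    return lo if z < lo else (hi if z > hi else z)

def round_half_away(x):
    """Round(x) = Sign(x) * Floor(Abs(x) + 0.5), as defined above."""
    s = (x > 0) - (x < 0)
    return s * math.floor(abs(x) + 0.5)

def quantize_ycbcr(Yp, Cb, Cr, bitdepth_y=10, bitdepth_c=10):
    """Equation (7): floating-point Y'CbCr -> fixed-point code values."""
    dy = clip3(0, (1 << bitdepth_y) - 1,
               round_half_away((1 << (bitdepth_y - 8)) * (219 * Yp + 16)))
    dcb = clip3(0, (1 << bitdepth_c) - 1,
                round_half_away((1 << (bitdepth_c - 8)) * (224 * Cb + 128)))
    dcr = clip3(0, (1 << bitdepth_c) - 1,
                round_half_away((1 << (bitdepth_c - 8)) * (224 * Cr + 128)))
    return dy, dcb, dcr

print(quantize_ycbcr(0.5, 0.0, 0.0))   # mid-gray, neutral chroma -> (502, 512, 512)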
It is expected that next-generation HDR/WCG video applications will operate on video data captured with different HDR and CG parameters. Examples of different configurations include the capture of HDR video content with peak brightness of up to 1,000 nits, or up to 10,000 nits. Examples of different color gamuts include bt.709, bt.2020, P3 (as specified by SMPTE), and others.
It is also expected that a single color space (e.g., a target color container) incorporating (or nearly incorporating) all other currently used color gamuts will be utilized in the future. Examples of such target color containers include bt.2020 and bt.2100. Supporting a single target color container would significantly simplify the standardization, implementation, and deployment of HDR/WCG systems, since a reduced number of operating points (e.g., the number of color containers, color spaces, color conversion algorithms, etc.) and/or a reduced number of required algorithms should be supported by a decoder (e.g., video decoder 30).
In one example of such a system, content captured with a natural color gamut (e.g., P3 or bt.709) different from a target color container (e.g., bt.2020) may be converted to the target container prior to processing (e.g., prior to video encoding by video encoder 20).
During this conversion, the range of values occupied by each component of the signal captured in the P3 or bt.709 color gamut (e.g., RGB, YUV, YCrCb, etc.) may shrink in the bt.2020 representation. Since the data is represented with floating-point accuracy, there is no loss; however, when combined with color conversion and quantization, the shrinking of the value range leads to increased quantization error in the input data.
Additionally, in real-world coding systems, coding a signal with reduced dynamic range may result in a significant loss in accuracy of the coded chroma components and may be observed by a viewer as coding artifacts, such as color mismatch and/or color bleeding.
In addition, some of the nonlinearities (introduced, for example, by the SMPTE-2084 transfer function) and color representations (e.g., ITU-R bt.2020 or bt.2100) used in modern video coding systems can result in video data representations that exhibit significant variation of the perceived distortion, or just noticeable difference (JND) threshold, within the dynamic range and across the color components of the signal representation. For such representations, quantization schemes that apply a uniform scalar quantizer over the dynamic range of the luma or chroma values may introduce quantization errors that are perceived differently depending on the magnitude of the quantized samples. Such an effect on the signal can be interpreted as a processing system with non-uniform quantization that yields an unequal signal-to-noise ratio within the processed data range.
An example of such a representation is a video signal represented in the non-constant luminance (NCL) YCbCr color space, with color primaries as defined in ITU-R rec. bt.2020 or bt.2100 and the SMPTE-2084 transfer function. As depicted in table 1 above, a significantly larger number of codewords is allocated to low-intensity values of the signal than to mid-range values (e.g., 30% of the codewords represent linear-light samples below 10 nits, whereas only 20% of the codewords represent linear light between 10 and 100 nits). As a result, video coding systems characterized by uniform quantization over all ranges of the data (e.g., h.265/HEVC) will introduce more severe coding artifacts into mid-range and high-intensity samples (medium and bright regions of the signal), whereas the distortion introduced into low-intensity samples (dark regions of the same signal) may be well below the noticeable difference threshold in a typical viewing environment (with typical brightness levels of the video).
Another example of such a representation is the effective dynamic range of the chroma components of the ITU-R bt.2020/bt.2100 color representation. These containers support a much larger color gamut than conventional containers (e.g., bt.709). However, given a limited bit depth for a particular representation (e.g., 10 bits per sample), representing conventional content (e.g., with a bt.709 color gamut) in the new format effectively reduces its granularity compared to representing it in its native container (e.g., bt.709).
This is visible in figs. 9A and 9B, where the colors of HDR sequences are depicted in the xy color plane. Fig. 9A shows the colors of the "Tibul" test sequence captured in the native bt.709 color space (triangle 150); the colors of the test sequence (shown as dots) do not occupy the full color gamut of bt.709 or bt.2020. In figs. 9A and 9B, triangle 152 represents the bt.2020 color gamut. Fig. 9B shows the colors of the "Bikes" HDR test sequence employing the P3 native color gamut (triangle 154). As seen in fig. 9B, the colors do not occupy the full extent of the native gamut (triangle 154) in the xy color plane, so some color fidelity will be lost if the content is represented in the larger container.
As a result, a video coding system (e.g., h.265/HEVC) characterized by uniform quantization across all ranges and color components of the data will introduce more visual perceptual distortion to the chroma components of the signal in the bt.2020 color container compared to the signal in the bt.709 color container.
Yet another example of a significant variation of the perceived distortion or just noticeable difference (JND) threshold within the dynamic range or color components of a given color representation is a color space resulting from the application of Dynamic Range Adjustment (DRA) or a reshaper. A DRA may be applied to video data partitioned into dynamic range partitions with the goal of providing a uniform perception of distortion over the full dynamic range. However, applying DRA independently to the color components may introduce bit-rate reallocation from one color component to another (e.g., a larger bit budget spent on the luma component may result in more severe distortion introduced into the chroma components, and vice versa).
Other techniques have been proposed to address the problem of perceptually non-uniform codeword distribution in current state-of-the-art color representations (e.g., bt.2020 or bt.2100).
VCEG document COM16-C1027-E, "Dynamic Range Adjustment SEI to enable High Dynamic Range video coding with Backward-Compatible Capability," by D. Rusanovskyy, A. K. Ramasubramonian, D. Bugdayci, S. Lee, J. Sole, M. Karczewicz, describes the application of Dynamic Range Adjustment (DRA) to achieve codeword redistribution of video data in an SMPTE-2084/bt.2020 color container before applying a hybrid transform-based video coding scheme (e.g., h.265/HEVC).
The redistribution is achieved by applying the DRA with the goal of linearization of the perceived distortion (e.g., signal-to-noise ratio) over the dynamic range. To compensate for this redistribution at the decoder side, and to convert the data to the original ST 2084/bt.2020 representation, the inverse DRA process is applied to the data after video decoding.
In one example, a DRA may be implemented as a piecewise linear function f(x) defined for a group of non-overlapping dynamic range partitions (ranges) {Ri} of the input value x, where i is the index of the range. The index i = 0..N-1, where N is the total number of ranges {Ri} used to define the DRA function. Assume that the ranges of the DRA are defined by a minimum and a maximum x value belonging to the range Ri (e.g., [xi, xi+1 - 1]), where xi and xi+1 denote the minimum values of the ranges Ri and Ri+1, respectively. When applied to the Y (luma) color component of the video data, the DRA function Sy is defined via a scale Sy,i and an offset Oy,i, which are applied to every x ∈ [xi, xi+1 - 1]; thus Sy = {Sy,i, Oy,i}.
In this case, for any Ri, and each x ∈ [ x ]i,xi+1-1]The output value X is calculated as follows:
X=Sy,i*(x-Oy,i) (7)
For the inverse DRA mapping process for the luma component Y, performed at the decoder, the DRA function Sy is defined by the inverse of the scale Sy,i and offset Oy,i values, applied to every X ∈ [Xi, Xi+1 - 1].

In this case, for any Ri and each X ∈ [Xi, Xi+1 - 1], the reconstructed value x is calculated as follows:

x = X / Sy,i + Oy,i   (8)
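The following sketch illustrates the piecewise-linear luma DRA of equations (7) and (8). The three ranges and scale values are hypothetical; the offsets are derived here so that the forward mapping is continuous, which also makes the inverse search unambiguous.

def find_range(v, starts):
    """Index i of the sub-range [starts[i], starts[i+1] - 1] containing v."""
    i = 0
    while i + 1 < len(starts) - 1 and v >= starts[i + 1]:
        i += 1
    return i

y_starts = [0, 256, 768, 1024]   # x_i boundaries; R_i = [x_i, x_{i+1} - 1]
scales = [1.2, 1.0, 0.6]         # hypothetical S_{y,i}

# Choose offsets O_{y,i} so that each range continues where the previous ended.
offsets = [0.0]
for i in range(1, len(scales)):
    x_boundary = scales[i - 1] * (y_starts[i] - offsets[i - 1])  # image of x_i
    offsets.append(y_starts[i] - x_boundary / scales[i])

def dra_forward_luma(x):
    """Equation (7): X = S_{y,i} * (x - O_{y,i}) for the range containing x."""
    i = find_range(x, y_starts)
    return scales[i] * (x - offsets[i])

def dra_inverse_luma(X):
    """Equation (8): x = X / S_{y,i} + O_{y,i}. The mapping above is continuous
    and monotonic, so the correct range is the one whose candidate reconstruction
    falls back into its own sub-range."""
    for i in range(len(scales)):
        x = X / scales[i] + offsets[i]
        if y_starts[i] <= x < y_starts[i + 1]:
            return x
    raise ValueError("X outside the mapped range")

for x in (0, 100, 500, 1000):
    assert abs(dra_inverse_luma(dra_forward_luma(x)) - x) < 1e-9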
The forward DRA mapping process for the chroma components Cb and Cr may be defined as described below. The example is given for a sample u of the Cb color component that belongs to the range Ri, u ∈ [ui, ui+1 - 1]; thus Su = {Su,i, Ou,i}:

U = Su,i * (u - Ou,i) + Offset   (9)

where Offset, equal to 2^(bitdepth-1), denotes the bipolar Cb, Cr signal offset.
The inverse DRA mapping process implemented at the decoder for the chroma components Cb and Cr may be defined as described below. The example is given for a sample U that remaps the Cb color component and belongs to the range Ri, U ∈ [Ui, Ui+1 - 1]:

u = (U - Offset) / Su,i + Ou,i   (10)

where Offset, equal to 2^(bitdepth-1), denotes the bipolar Cb, Cr signal offset.
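A corresponding sketch for the chroma mapping of equations (9) and (10), for one sub-range of a 10-bit Cb component; the scale and offset values are hypothetical.

BITDEPTH = 10
OFFSET = 1 << (BITDEPTH - 1)   # bipolar signal offset, 2^(bitdepth-1) = 512

def dra_forward_cb(u, s_ui, o_ui):
    """Equation (9): U = S_{u,i} * (u - O_{u,i}) + Offset."""
    return s_ui * (u - o_ui) + OFFSET

def dra_inverse_cb(U, s_ui, o_ui):
    """Equation (10): u = (U - Offset) / S_{u,i} + O_{u,i}."""
    return (U - OFFSET) / s_ui + o_ui

u = 600.0                       # a Cb code value in range R_i
s_ui, o_ui = 0.9, 512.0         # hypothetical scale/offset for R_i
assert abs(dra_inverse_cb(dra_forward_cb(u, s_ui, o_ui), s_ui, o_ui) - u) < 1e-9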
"Performance initiation of high dynamic range and wind color diversity techniques" described in VCEG document COM 16-C1030-E of 9. month J.ZHao, S.H.Kim, A.Segal, K.Misra 2015 to align bit rate allocation and application to Y.2020(ST2084/BT2020) and Y709(BT1886/BT 2020) strength-dependent spatial variation of visual perceptual distortion between video coding on a representation. It is observed that in order to maintain the same level of quantization for luma components, at Y2020And Y709The quantization of the signal in (a) may differ by a value that depends on the lightness such that:
QP_Y2020=QP_Y709-f(Y2020) (11)
The function f(Y2020) is considered linear in the intensity value (brightness level) of the video in Y2020, and can be approximated as:
f(Y2020)=max(0.03*Y2020-3,0) (12)
the proposed spatially varying quantization scheme introduced at the encoding stage is believed to be capable of improving the visual perceptual signal-to-quantization noise ratio of the coded video signal in the ST 2084/bt.2020 representation. The delta QP application and mechanism for QP derivation for chroma components enables deltaQP propagation from luma to chroma components, thus allowing some degree of compensation for chroma components according to the bit rate increase introduced to the luma component.
"Luma-drive chroma scaling (LCS) design, HDR CE2: Report on ce2.a-1 LCS" JCTVC-W0101, by a.k.ramasuramamonian, j.sole, d.rusonovsky, d.bugdayc, m.karczewicz, describes a method of adjusting chroma information (e.g., Cb and Cr components) by employing Luma information associated with processed chroma samples. Similar to the DRA method described above, it is proposed to use a scale factor SuApplying to Cb chroma samples and scaling factor Sv,iApplied to Cr chroma samples. However, instead of defining a DRA function as a set of ranges { R } for the chroma values u or v, as in equations (9) and (10)iPiecewise linear function S ofu={Su,i,Ou,iThe example LCS method derives scale factors for chroma samples using the luma value Y. In this case, the forward LCS mapping for chroma sample u (or v) is implemented as:
U = Su(Y) * (u - Offset) + Offset   (13)

The inverse LCS process, implemented at the decoder side, is defined as follows:

u = (U - Offset) / Su(Y) + Offset   (14)
In more detail, for a given pixel located at (x, y), the chroma sample Cb(x, y) or Cr(x, y) is scaled by a factor derived from its LCS function SCb (or SCr), accessed at its corresponding luma value Y'(x, y).
In the forward LCS for chroma samples, the Cb (or Cr) value and its associated luma value Y' are taken as inputs to the chroma scale function SCb (or SCr), and Cb and Cr are converted into Cb' and Cr' as shown in equation (15). At the decoder side, the inverse LCS is applied, and the reconstructed Cb' or Cr' is converted back to Cb or Cr as shown in equation (16):

Cb'(x, y) = SCb(Y'(x, y)) * Cb(x, y)
Cr'(x, y) = SCr(Y'(x, y)) * Cr(x, y)   (15)

Cb(x, y) = Cb'(x, y) / SCb(Y'(x, y))
Cr(x, y) = Cr'(x, y) / SCr(Y'(x, y))   (16)
Fig. 10 shows an example of an LCS function, in which the chroma scaling factor is a function of the associated luma value. In the example of fig. 10, using LCS function 153, the chroma components of pixels with smaller luma values are multiplied by smaller scaling factors.
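The following sketch applies equations (15) and (16) with a hypothetical piecewise-constant LCS function over luma sub-ranges; the Cb samples are assumed centered at zero, as in the equations above, and all names and values are illustrative.

import bisect

lcs_starts = [0, 256, 768]     # luma sub-range start codewords (hypothetical)
lcs_scales = [0.8, 1.0, 1.2]   # S_Cb per sub-range (hypothetical)

def s_cb(Y):
    """LCS function S_Cb(Y'): chroma scale driven by the luma value."""
    return lcs_scales[bisect.bisect_right(lcs_starts, Y) - 1]

def forward_lcs(cb, Y):
    """Equation (15): Cb'(x, y) = S_Cb(Y'(x, y)) * Cb(x, y)."""
    return s_cb(Y) * cb

def inverse_lcs(cb_mapped, Y):
    """Equation (16): Cb(x, y) = Cb'(x, y) / S_Cb(Y'(x, y))."""
    return cb_mapped / s_cb(Y)

cb, Y = -0.12, 500
assert abs(inverse_lcs(forward_lcs(cb, Y), Y) - cb) < 1e-12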
Techniques, methods, and devices are described to perform Dynamic Range Adjustment (DRA), including cross-component DRA to compensate for dynamic range variations introduced to an HDR signal representation by color gamut conversion. Dynamic range adjustment may help prevent and/or mitigate any distortion caused by color gamut conversion, including color mismatch, bleeding, and the like. In one or more examples of this disclosure, the DRA is performed on the values of each color component of the target color space (e.g., YCbCr) before quantization at the encoder side (e.g., by video pre-processor unit 19 of source device 12) and after inverse quantization at the decoder side (e.g., by video post-processor unit 31 of destination device 14). In other examples, the DRA techniques of this disclosure may be performed within video encoder 20 and video decoder 30.
FIG. 11 is a block diagram depicting an example HDR/WCG conversion apparatus operating in accordance with the techniques of this disclosure. In fig. 11, the solid line designates data flow and the dotted line designates a control signal. The techniques of this disclosure may be performed by video preprocessor unit 19 of source device 12 or by video encoder 20. As discussed above, video preprocessor unit 19 may be a separate device from video encoder 20. In other examples, video preprocessor unit 19 may be incorporated into the same device as video encoder 20.
As shown in fig. 11, RGB native CG video data 200 is input to video preprocessor unit 19. In the context of video preprocessing by video preprocessor unit 19, RGB native CG video data 200 is defined by an input color container. The input color container specifies the set of color primaries (e.g., bt.709, bt.2020, bt.2100, P3, etc.) used to represent video data 200. In one example of this disclosure, video preprocessor unit 19 may be configured to convert both the color container and the color space of RGB native CG video data 200 into a target color container and a target color space of HDR data 216. As with the input color container, the target color container may specify the set of color primaries used to represent HDR data 216. In one example of this disclosure, RGB native CG video data 200 may be HDR/WCG video, may have a bt.2020 or P3 color container (or any WCG), and may be in the RGB color space. In another example, RGB native CG video data 200 may be SDR video and may have a bt.709 color container. In one example, the target color container for HDR data 216 may be configured for HDR/WCG video (e.g., a bt.2020 color container) and may use a color space that is more optimal for video encoding (e.g., YCrCb).
In one example of the disclosure, CG converter 202 may be configured to convert a color container of RGB native CG video data 200 from an input color container (e.g., a first color container) to a target color container (e.g., a second color container). As one example, the CG converter 202 may convert the RGB native CG video data 200 from a bt.709 color representation to a bt.2020 color representation, an example of which is shown below.
Conversion of RGB BT.709 samples (R709, G709, B709) to RGB BT.2020 samples (R2020, G2020, B2020) can be implemented as a two-step conversion: first converting to the XYZ representation, followed by a conversion from XYZ to RGB BT.2020 using the appropriate conversion matrices.
X = 0.412391 * R709 + 0.357584 * G709 + 0.180481 * B709
Y = 0.212639 * R709 + 0.715169 * G709 + 0.072192 * B709   (17)
Z = 0.019331 * R709 + 0.119195 * G709 + 0.950532 * B709
Conversion from XYZ to R2020G2020B2020 (bt.2020):
R2020=clipRGB(1.716651*X-0.355671*Y-0.253366*Z)
G2020=clipRGB(-0.666684*X+1.616481*Y+0.015768*Z) (18)
B2020=clipRGB(0.017640*X-0.042771*Y+0.942103*Z)
Similarly, the single-step (and recommended) method is as follows:

R2020 = clipRGB(0.627404078626 * R709 + 0.329282097415 * G709 + 0.043313797587 * B709)
G2020 = clipRGB(0.069097233123 * R709 + 0.919541035593 * G709 + 0.011361189924 * B709)
B2020 = clipRGB(0.016391587664 * R709 + 0.088013255546 * G709 + 0.895595009604 * B709)
the resulting video data after CG conversion is shown in fig. 11 as RGB target CG video data 204. In other examples of the disclosure, the color containers of the input data and the data of the output HDR may be the same. In this example, the CG converter 202 does not have to perform any conversion on the RGB native CG video data 200.
Next, the transfer function unit 206 compresses the dynamic range of the RGB target CG video data 204. The transfer function unit 206 may be configured to apply a transfer function to compress the dynamic range in the same manner as discussed above with reference to fig. 5. The color conversion unit 208 converts the RGB target CG color data 204 from the color space of the input color container (e.g., RGB) to the color space of the target color container (e.g., YCrCb). As explained above with reference to fig. 5, color conversion unit 208 converts the compressed data into a more compact or robust color space (e.g., YUV or YCrCb color space) that is more suitable for compression by a hybrid video encoder (e.g., video encoder 20).
Adjustment unit 210 is configured to perform Dynamic Range Adjustment (DRA) of the color converted video data according to the DRA parameters derived by DRA parameter estimation unit 212. In general, after CG conversion by CG converter 202 and dynamic range compression by transfer function unit 206, the actual color values of the resulting video data may not use all of the available codewords (e.g., the unique bit sequences representing each color) allocated for the color gamut of a particular target color container. That is, in some cases, the conversion of RGB native CG video data 200 from an input color container to an output color container may over-compress the color values (e.g., Cr and Cb) of the video data such that the resulting compressed video data does not efficiently use all possible color representations. As explained above, in real-world coding systems, coding a signal with a reduced range of color values may result in a significant loss of accuracy of the coded chroma components and will be observed by a viewer as coding artifacts, such as color mismatch and/or bleeding.
Adjustment unit 210 may be configured to apply DRA parameters to color components (e.g., YCrCb) of video data (e.g., RGB target CG video data 204 after dynamic range compression and color conversion) to leverage codewords that may be used for a particular target color container. Adjustment unit 210 may apply the DRA parameters to the video data at the pixel level (e.g., to each color component of the pixel). In general, DRA parameters define a function that extends the code words used to represent actual video data to as many code words as are available for the target color container.
In one example of this disclosure, DRA parameters include scale and offset values applied to color components of video data, such as luma (Y) and chroma (Cr, Cb) color components. The offset parameter may be used to center the value of the color component on the center of the available codeword for the target color container. For example, if the target color container includes 1024 codewords per color component, the offset value may be selected such that the center codeword moves to codeword 512 (e.g., the middle most codeword). In other examples, the offset parameter may be used to provide a better mapping of input codewords to output codewords so that the overall representation in the target color container is more efficient against coding artifacts.
In one example, adjustment unit 210 applies the DRA parameters to the video data in the target color space (e.g., YCrCb) as follows:
Y'' = scale1 * Y' + offset1
Cb'' = scale2 * Cb' + offset2   (19)
Cr'' = scale3 * Cr' + offset3
where the signal components Y', Cb', and Cr' are the signals resulting from the RGB-to-YCbCr conversion (e.g., equation (3)). It should be noted that Y', Cb', and Cr' may also be video signals decoded by video decoder 30. Y'', Cb'', and Cr'' are the color components of the video signal after the DRA parameters have been applied to each color component.
As can be seen in the above example, each color component is associated with a different scale and offset parameter. For example, scale1 and offset1 are used for the Y ' component, scale2 and offset2 are used for the Cb ' component, and scale3 and offset3 are used for the Cr ' component. It should be understood that this is merely an example. In other examples, the same scale and offset values may be used for each color component. As will be explained in more detail below, DRA parameter derivation unit 312 may be configured to determine a scale value for a chroma component (e.g., Cr, Cb) from the scale values determined for the luma component.
In other examples, each color component may be associated with multiple scale and offset parameters. For example, the actual distribution of chrominance values for a Cr or Cb color component may be different for different partitions or ranges of codewords. As one example, more unique codewords above a center codeword (e.g., codeword 512) may be used than below the center codeword. In this example, adjustment unit 210 may be configured to apply one set of scale and offset parameters for chroma values that are higher than (e.g., have a value greater than) the center codeword, and to apply a different set of scale and offset parameters for chroma values that are lower than (e.g., have a value less than) the center codeword.
As can be seen in the above example, adjustment unit 210 applies the scale and offset DRA parameters as linear functions. Thus, the adjusting unit 210 does not necessarily apply DRA parameters in the target color space after color conversion by the color conversion unit 208. This is because color conversion itself is a linear process. Thus, in other examples, adjustment unit 210 may apply DRA parameters to video data in the native color space (e.g., RGB) prior to any color conversion process. In this example, color conversion unit 208 would apply color conversion after adjustment unit 210 applies the DRA parameters.
In another example of the present invention, adjustment unit 210 may apply DRA parameters in the target color space or the native color space as follows:
Y'' = (scale1 * (Y' - offsetY) + offset1) + offsetY
Cb'' = scale2 * Cb' + offset2   (20)
Cr'' = scale3 * Cr' + offset3
In this example, the parameters scale1, scale2, scale3, offset1, offset2, and offset3 have the same meaning as described above. The parameter offsetY reflects the brightness of the signal and may be equal to the average value of Y'. In other examples, offset parameters similar to offsetY may be applied to the Cb' and Cr' components to better preserve the mapping of the center values between the input and output representations.
In another example of the present disclosure, adjustment unit 210 may be configured to apply DRA parameters in a color space other than the native color space or the target color space. In general, adjustment unit 210 may be configured to apply the DRA parameters as follows:
A' = scale1 * A + offset1
B' = scale2 * B + offset2   (21)
C' = scale3 * C + offset3
where signal components A, B and C are signal components in a color space (e.g., RGB or an intermediate color space) different from the target color space.
In other examples of this disclosure, adjustment unit 210 is configured to apply a linear transfer function to the video to perform the DRA. This transfer function is different from the transfer function used by transfer function unit 206 to compress the dynamic range. Similar to the scale and offset terms as defined above, the transfer function applied by adjustment unit 210 may be used to extend and center the color value to the available codewords in the target color container. An example of applying a transfer function to perform a DRA is shown below.
Y'' = TF2(Y')
Cb'' = TF2(Cb')
Cr'' = TF2(Cr')
The term TF2 specifies the transfer function applied by the adaptation unit 210. In some examples, adjustment unit 210 may be configured to apply a different transfer function to each of the components.
In another example of the present disclosure, adjustment unit 210 may be configured to apply the DRA parameters jointly with the color conversion by color conversion unit 208 in a single process. That is, the linear functions of adjustment unit 210 and color conversion unit 208 may be combined into functions f1 and f2, which merge the RGB-to-YCbCr matrix coefficients with the DRA scale factors.
In another example of this disclosure, after applying the DRA parameters, adjustment unit 210 may be configured to perform a clipping process to prevent the video data from having values outside of the range of codewords specified for the particular target color container. In some cases, the scale and offset parameters applied by adjustment unit 210 may cause some color component values to exceed the range of allowable codewords. In this case, adjustment unit 210 may be configured to clip the values of the out-of-range components to the maximum value in the range.
The DRA parameters applied by adjustment unit 210 may be determined by DRA parameter estimation unit 212. The frequency and the time instances at which DRA parameter estimation unit 212 updates the DRA parameters are flexible. For example, DRA parameter estimation unit 212 may update the DRA parameters at a temporal level. That is, new DRA parameters may be determined for a group of pictures (GOP) or for a single picture (frame). In this example, RGB native CG video data 200 may be a GOP or a single picture. In other examples, DRA parameter estimation unit 212 may update the DRA parameters at a spatial level (e.g., at the slice or block level). In this context, a block of video data may be a macroblock, a coding tree unit (CTU), a coding unit, or a block of any other size and shape. A block may be square, rectangular, or any other shape. Accordingly, the DRA parameters may be used to enable more efficient temporal and spatial prediction and coding.
In one example of the present invention, DRA parameter estimation unit 212 may derive the DRA parameters based on a correspondence between the natural color gamut of RGB native CG video data 200 and the color gamut of the target color container. For example, given a particular natural color gamut (e.g., bt.709) and the color gamut of the target color container (e.g., bt.2020), DRA parameter estimation unit 212 may use a set of predefined rules to determine the scale and offset values.
In some examples, DRA parameter estimation unit 212 may be configured to estimate the DRA parameters by determining the primary color coordinates from the actual distribution of color values in RGB native CG video data 200 (rather than from the predefined primary color values of the natural color gamut). That is, DRA parameter estimation unit 212 may be configured to analyze the actual colors present in RGB native CG video data 200 and to use the primaries and white point determined from this analysis in the functions described above to calculate the DRA parameters. Approximations of some of the parameters defined above may be used by the DRA to facilitate the computation.
In other examples of the present disclosure, DRA parameter estimation unit 212 may be configured to determine the DRA parameters based not only on the color gamut of the target color container, but also on the target color space. The actual distribution of the component values may differ from color space to color space. For example, the chroma value distribution may differ between a YCbCr color space with constant luminance and a YCbCr color space with non-constant luminance. DRA parameter estimation unit 212 may use the color distributions of the different color spaces to determine the DRA parameters.
In other examples of this disclosure, DRA parameter estimation unit 212 may be configured to derive the values of the DRA parameters so as to minimize a specific cost function associated with pre-processing and/or encoding the video data. As one example, DRA parameter estimation unit 212 may be configured to estimate DRA parameters that minimize the quantization error introduced by quantization unit 214. DRA parameter estimation unit 212 may minimize this error by performing quantization error tests on video data to which different sets of DRA parameters have been applied. In another example, DRA parameter estimation unit 212 may be configured to estimate DRA parameters that minimize the perceptual impact of the quantization error introduced by quantization unit 214, based on perceptual error tests on video data to which different sets of DRA parameters have been applied. DRA parameter estimation unit 212 may then select the DRA parameters that yield the smallest quantization error.
In another example, DRA parameter estimation unit 212 may select DRA parameters that minimize a cost function associated with both the DRA performed by adjustment unit 210 and the video encoding performed by video encoder 20. For example, DRA parameter estimation unit 212 may utilize multiple different sets of DRA parameters to perform DRA and encode video data. DRA parameter estimation unit 212 may then calculate a cost function for each set of DRA parameters by forming a weighted sum of the bit rates caused by the DRA and video encoding, and the distortion introduced by the two lossy processes. DRA parameter estimation unit 212 may then select a set of DRA parameters that minimizes the cost function.
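A schematic sketch of the selection loop described above; encode_and_measure stands in for whatever DRA-plus-encoding pipeline is in use, and the lam weight is an arbitrary choice for the example.

def select_dra_params(candidates, encode_and_measure, lam=0.1):
    """Pick the DRA parameter set minimizing the weighted sum of bit rate and
    distortion, cost = D + lam * R, over the candidate parameter sets."""
    best_params, best_cost = None, float("inf")
    for params in candidates:
        rate, distortion = encode_and_measure(params)  # apply DRA, encode, measure
        cost = distortion + lam * rate
        if cost < best_cost:
            best_params, best_cost = params, cost
    return best_params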
In each of the above techniques for DRA parameter estimation, DRA parameter estimation unit 212 may use information about each component to determine DRA parameters separately for the component. In other examples, DRA parameter estimation unit 212 may use cross-component information to determine DRA parameters. For example, as will be discussed in more detail below, DRA parameters (e.g., scale) derived for luma (Y) components may be used to derive DRA parameters (e.g., scale) for chroma components (Cr and/or Cb).
In addition to deriving DRA parameters, DRA parameter estimation unit 212 may also be configured to signal DRA parameters in an encoded bitstream. DRA parameter estimation unit 212 may directly signal one or more syntax elements indicating DRA parameters, or may be configured to provide the one or more syntax elements to video encoder 20 for signaling. These syntax elements of the parameters may be signaled in the bitstream such that video decoder 30 and/or video post-processor unit 31 may perform the inverse of the process of video pre-processor unit 19 to reconstruct the video data in its native color container. Example techniques for signaling DRA parameters are discussed below.
In one example, DRA parameter estimation unit 212 and/or video encoder 20 may signal one or more syntax elements in an encoded video bitstream as metadata, in a Supplemental Enhancement Information (SEI) message, in Video Usability Information (VUI), in a Video Parameter Set (VPS), in a Sequence Parameter Set (SPS), in a picture parameter set, in a slice header, in a CTU header, or in any other syntax structure suitable for DRA parameters that indicate a size for video data (e.g., GOP, picture, block, macroblock, CTU, etc.).
In some examples, the one or more syntax elements explicitly indicate DRA parameters. For example, the one or more syntax elements may be various scale values and offset values for the DRA. In other examples, the one or more syntax elements may be one or more indices into a lookup table that includes scale values and offset values for the DRA. In yet another example, the one or more syntax elements may be an index to a lookup table that specifies a linear transfer function for use by the DRA.
In other examples, the DRA parameters are not explicitly signaled, but rather both video pre-processor unit 19/video encoder 20 and video post-processor unit 31/video decoder 30 are configured to derive the DRA parameters using the same information and/or characteristics of the video data discernable from the bitstream using the same predefined process.
After adjustment unit 210 applies the DRA parameters, video pre-processor unit 19 may quantize the video data using quantization unit 214. Quantization unit 214 may operate in the same manner as described above with reference to fig. 5. After quantization, the video data has been adjusted into the target color space and target color gamut, with the target primaries, as HDR data 216. HDR data 216 may then be sent to video encoder 20 for compression.
FIG. 12 is a block diagram depicting an example HDR/WCG inverse conversion apparatus in accordance with the techniques of this disclosure. As shown in fig. 12, video post-processor unit 31 may be configured to apply the inverse of the techniques performed by video pre-processor unit 19 of fig. 11. In other examples, the techniques of video post-processor unit 31 may be incorporated into video decoder 30 and performed by video decoder 30.
In one example, video decoder 30 may be configured to decode video data encoded by video encoder 20. The decoded video data (HDR data 316 in the target color container) is then forwarded to video post-processor unit 31. Inverse quantization unit 314 performs an inverse quantization process on HDR data 316 to reverse the quantization performed by quantization unit 214 of fig. 11.
Video decoder 30 may also be configured to decode any of the one or more syntax elements generated by DRA parameter estimation unit 212 of fig. 11 and send the syntax elements to DRA parameter derivation unit 312 of video post-processor unit 31. As described above, DRA parameter derivation unit 312 may be configured to determine DRA parameters based on the one or more syntax elements. In some examples, the one or more syntax elements explicitly indicate DRA parameters. In other examples, DRA parameter derivation unit 312 is configured to derive DRA parameters using the same techniques used by DRA parameter estimation unit 212 of fig. 11.
The parameters derived by DRA parameter derivation unit 312 are sent to inverse adjustment unit 310. Inverse adjustment unit 310 uses the DRA parameters to perform the inverse of the linear DRA adjustment performed by adjustment unit 210. Inverse adjustment unit 310 may apply the inverse of any of the adjustment techniques described above for adjustment unit 210. In addition, as with adjustment unit 210, inverse adjustment unit 310 may apply the inverse DRA before or after any inverse color conversion. Thus, inverse adjustment unit 310 may apply the DRA parameters to the video data in the target color container or in the native color container. In some examples, inverse adjustment unit 310 may be positioned to apply the inverse adjustment before inverse quantization unit 314.
Inverse color conversion unit 308 converts the video data from the target color space (e.g., YCbCr) to the native color space (e.g., RGB). Inverse transfer function unit 306 then applies the inverse of the transfer function applied by transfer function unit 206 to decompress the dynamic range of the video data. The resulting video data (RGB target CG 304) is still in the target color gamut, but is now at the native dynamic range and in the native color space. Next, inverse CG converter 302 converts RGB target CG 304 to the natural color gamut to reconstruct RGB native CG 300.
In some examples, additional post-processing techniques may be employed by video post-processor unit 31. Applying the DRA may take the video outside its actual natural color gamut. The quantization steps performed by quantization unit 214 and inverse quantization unit 314, as well as the upsampling and downsampling techniques performed by adjustment unit 210 and inverse adjustment unit 310, may contribute to the resulting color values in the target color container falling outside the natural color gamut. When the natural color gamut is known (or, as noted above, when the actual minimum content primaries are signaled), additional processing may be applied to RGB native CG video data 304, as post-processing for the DRA, to transform the color values (e.g., RGB, or Cb and Cr) back into the desired color gamut. In other examples, this post-processing may be applied after quantization or after the DRA application.
This disclosure describes techniques and devices for cross-component dynamic range adjustment (CC-DRA) to improve compression efficiency for image and video coding systems, such as H.264/AVC, H.265/HEVC, or next-generation codecs (e.g., VVC), according to one or more of the examples below. More particularly, this disclosure describes techniques and devices for deriving the parameters of a CC-DRA applied to the chroma components (e.g., Cb and Cr) by employing the parameters of the DRA function applied to the luma component and/or characteristics of the processed video signal (e.g., local brightness levels of the signal, transfer characteristics, color container characteristics, or the natural color gamut).
Assume the DRA function Sy = {Sy,i, Oy,i} is applied to the Y component, and the CC-DRA function applied to the chroma components (e.g., Cb, Cr) is specified by the LCS function SCb (or SCr). One or more of the following techniques may be used independently, or in any suitable combination, to derive the parameters of the CC-DRA and of its LCS function SCb (or SCr).
The following example will be described with respect to the video preprocessor unit 19. However, it should be understood that all processes performed by video preprocessor unit 19 may also be performed within video encoder 20 as part of a video coding loop. Video post-processor unit 31 may be configured to perform the same techniques as video pre-processor unit 19, but in a reciprocal manner. In particular, video post-processor unit 31 may be configured to determine parameters for the CC-DRA for the chroma component in the same manner as video pre-processor unit 19. All processes performed by video post-processor unit 31 may also be performed within video decoder 30 as part of a video decoding loop.
As described above with reference to figs. 11 and 12, video pre-processor unit 19 and video post-processor unit 31 may apply DRA to the three components of the video data (e.g., YCbCr) to enable a representation that can be compressed more efficiently by image or video compression systems, such as video encoder 20 and video decoder 30. Video pre-processor unit 19 may apply the DRA function Sy to the luma component (Y), and may apply the CC-DRA to the chroma components (e.g., Cb, Cr) through the LCS function SCb (or SCr).
In one example of this disclosure, video pre-processor unit 19 may be configured to derive the DRA parameters (e.g., scale and offset) for the luma component of the video data in the manner described above with reference to fig. 11. Video preprocessor unit 19 may also be configured to derive the offset parameters for the chroma components in the manner described above with reference to fig. 11. According to one example of this disclosure, video preprocessor unit 19 may be configured to derive the parameters (e.g., scale) of the LCS function SCb or SCr from the parameters of the DRA function applied to the Y component, i.e., SCb (or SCr) = fun(Sy). That is, video pre-processor unit 19 may be configured to determine the CC-DRA parameters (e.g., scale parameters) for the chroma components from the DRA parameters (e.g., scale parameters) derived for the luma component.
In some examples, video preprocessor unit 19 may be configured to set the scale factor of the LCS function for a chroma value equal to the scale factor of the DRA function Sy applied to the collocated luma value Y:

S(Y)Cb = S(Y)Cr = S(Y)y   (22)

In this example, video preprocessor unit 19 reuses the scale parameter derived for the luma value (S(Y)y) as the scale parameter for the corresponding Cr value (S(Y)Cr) and for the corresponding Cb value (S(Y)Cb). In this case, the corresponding Cr and Cb values are the Cr and Cb values of the same pixel as the luma value.
In another example of this disclosure, video preprocessor unit 19 may be configured to determine a luma scale parameter for each of a plurality of ranges of codeword values for a luma component of the video data. For example, the total range of codeword values for luma components may be between 0 and 1023. Video preprocessor unit 19 may be configured to divide the possible codeword values for the luma component into multiple ranges, and then independently derive DRA parameters for each of the ranges. In some examples, lower scale values may be used for lower intensity luma values (e.g., darker values), while higher scale values may be used for higher intensity luma values (e.g., lighter values). The total number of luma codewords may be divided into any number of ranges, and the range sizes may be uniform or may be non-uniform.
Thus, in some examples, the DRA function Sy is defined for a set of ranges {Ri}. In this example, video preprocessor unit 19 may be configured to derive the scale factor of the LCS function SCb or SCr from the value of the DRA scale factor Sy,i of the range Ri to which Y currently belongs:

S(Y)Cb = S(Y)Cr = fun(S(Y)y,i), for any Y ∈ Ri   (23)

where Y is a luma value within the range Ri = [Yi, Yi+1 - 1]. In other words, the scale factor of a chroma component is a function of the scale factor of its corresponding luma component, depending on the range to which the luma component belongs. Thus, video preprocessor unit 19 may be configured to determine the chroma scale parameters of chroma components associated with luma components in a first range of the multiple ranges of codeword values using a function of the luma scale parameter determined for luma components in that first range.
However, the chroma scale values may also be derived over ranges of chroma codeword values, and the ranges of codeword values of the chroma components for which chroma scale parameters are derived may not directly overlap the ranges of codeword values of the luma components for which luma scale parameters are derived. For example, a single range of chroma codeword values may overlap multiple ranges of luma codeword values. In this example, the derivation function fun(S(Y)y,i) for deriving the chroma scale values can be implemented as a zeroth-order approximation (e.g., the average of all luma scale values over a range Ri) or as a higher-order approximation (e.g., interpolation, curve fitting, low-pass filtering, etc.).
In some examples, video preprocessor unit 19 may be configured to determine, via a global curve-fitting process, the chroma scale parameters of the LCS function SCb or SCr as a function of multiple luma DRA scale factors:

SCb = SCr = fun(Sy)   (24)
thus, video preprocessor unit 19 may be configured to determine chroma scale parameters for chroma components associated with luma components having a first range of codeword values and at least a second range of codeword values for a plurality of ranges of codeword values using a function of the luma scale parameters determined for luma components having the first range of codeword values and the second range of codeword values.
In some examples, the luma scale parameter is constant over a particular range of luma codeword values. Thus, the luma scale parameter may be viewed as a discontinuous (e.g., step-wise) function over the full range of luma codeword values. Rather than using the discontinuous luma scale parameters to derive the chroma scale parameters, video preprocessor unit 19 may be configured to apply a linearization process to the discontinuous function to generate linearized luma scale parameters. Video preprocessor unit 19 may be further configured to determine a chroma scale parameter for a chroma component of the video data using a function of the linearized luma scale parameters. In some examples, the linearization process is one or more of a linear interpolation process, a curve-fitting process, an averaging process, or a higher-order approximation process.
Regardless of how the luma scale parameter and the chroma scale parameter are derived, video preprocessor unit 19 may be configured to perform a dynamic range adjustment process on the luma component using the luma scale parameter, and on the chroma components of the video data using the chroma scale parameter in the manner described above with reference to fig. 11.
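A sketch of the zeroth-order derivation discussed above, under hypothetical range layouts: the chroma scale for each chroma sub-range is taken as the average of the luma DRA scales over the luma codewords that the chroma sub-range overlaps.

def chroma_scales_zeroth_order(luma_starts, luma_scales, chroma_starts, total=1024):
    """For each chroma sub-range, average S_{y,i} over the overlapped luma
    codewords (a zeroth-order approximation of fun(S_y) in equation (24))."""
    # Expand the piecewise-constant luma scale to one value per codeword.
    bounds = luma_starts + [total]
    per_cw = []
    for i, s in enumerate(luma_scales):
        per_cw.extend([s] * (bounds[i + 1] - bounds[i]))
    cb = chroma_starts + [total]
    return [sum(per_cw[cb[i]:cb[i + 1]]) / (cb[i + 1] - cb[i])
            for i in range(len(chroma_starts))]

# Luma DRA uses four ranges, chroma uses two; all values are hypothetical.
s_cb = chroma_scales_zeroth_order([0, 256, 512, 768], [0.8, 1.0, 1.2, 1.0], [0, 512])
print(s_cb)   # -> [0.9, 1.1]: each chroma range averages the two luma ranges it spans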
In another example of this disclosure, video preprocessor unit 19 may be configured to derive the parameters (e.g., the scale parameter) of the LCS function SCb or SCr from the parameters of the DRA function Sy applied to the Y component and from quantization parameters of a codec (e.g., video encoder 20), such as the quantization parameter (QP), deltaQP, chromaQPoffset, or another parameter specifying the granularity of the codec quantization. deltaQP may indicate the difference between the QP of one block and the QP of another block. The chromaQP offset may be added to the QP of the luma component to determine the QP of the chroma components.
Thus, video preprocessor unit 19 may be configured to determine chroma scale parameters for chroma components of the video data using a function of the luma scale parameters and quantization parameters used to decode the chroma components.
The following are examples:
SCb = fun(Sy, QPCb)   (25)
SCr = fun(Sy, QPCr)
in some examples, coding QP for the current component and coding QP for SCbOr SCrThe correspondence between the adjustment parameters of the functions may be displayed in a list and used to adjust the approximation function fun.
In some examples, a set of functions {fun(Sy, QPCr)id} may be provided to video decoder 30 and/or video post-processor unit 31 as side information, and a selection among these functions may be made via a function identification (id) derived from the bitstream or from decoder-side analysis of coding parameters such as the QP, the coding mode, the type of coded picture (I, B, P), or the availability of prediction.
In another example of this disclosure, video preprocessor unit 19 may be configured to derive the parameters of the LCS function SCb or SCr from the parameters of the DRA function Sy applied to the Y component and from the colorimetry of the processed video data and of the utilized container (e.g., the natural color gamut and the color primaries of the utilized color container). The colorimetry (color representation parameters) of the processed video data and of the utilized container may be expressed as multiplication factors α and β, which may be the same for both chroma components (Cr and Cb) or independent for each of them:
SCb=α*fun(Sy) (26)
SCr=β*fun(Sy)
In some examples, the α and β parameters may each be defined by a single multiplier parameter, e.g., α = 1.0698 and β = 2.1735 for bt.709 video content represented in a bt.2020 color container.
In other examples, the α and β parameters may be derived by both video pre-processor unit 19/video encoder 20 and video post-processor unit 31/video decoder 30. Additionally, video pre-processor unit 19 may be configured to determine α and β parameters for multiple ranges of chroma component codewords.
In other examples, video preprocessor unit 19 may be configured to derive the α and β parameters from colorimetric parameters, such as the primary and white point coordinates of the color container and of the natural color gamut.
In other examples, video preprocessor unit 19 may be configured to determine α and β parameters from luma value Y and/or chroma sample values.
In other examples, video preprocessor unit 19 may be configured to derive, and/or tabulate and access, α and β parameter values using other color space characteristics, such as the nonlinearity (e.g., transfer function) of the utilized color space:
SCb=α(transfer_characteristics)*fun(Sy) (27)
SCr=β(transfer_characteristics)*fun(Sy)
Example transfer characteristics may be the transfer functions of bt.709, bt.2100 (PQ), and/or bt.2100 (HLG). Video preprocessor unit 19 may be configured to signal a transfer_characteristics_id syntax element whose value specifies the nonlinearity (e.g., transfer function) utilized.
In some examples, the types of the α and β parameters, or the derivation functions, may be identified via bitstream signaling, or provided to the decoder as side information together with an id parameter signaled in the bitstream.
As described above, each of the above techniques may be used in any combination. For example, in one implementation, video preprocessor unit 19 may be configured to determine the chroma scale parameters for the chroma components of the video data using a function of the luma scale parameters, quantization parameters used to decode the chroma components, and color representation parameters derived from characteristics of the chroma components of the video data. In this example, video preprocessor unit 19 may also be configured to linearize the luma scale parameter.
In another example of this disclosure, video preprocessor unit 19 may be configured to derive the LCS function SCb or SCr as a superposition of the DRA function Sy applied to the Y component and the DRA function specified for the current chroma component, as defined in equations (9) and (10). The scale parameters derived from equations (9) and (10) may be considered the initial chroma scale parameters:
the following are examples:
SCb = fun(Sy(Y), Su(u))   (28)
SCr = fun(Sy(Y), Sv(v))
thus, in one example of this disclosure, video preprocessor unit 19 may be configured to determine an initial chroma scale parameter for a chroma component of the video data, and determine a chroma scale parameter for the chroma component of the video data using a function of the luma scale parameter and the initial chroma scale parameter.
The following are descriptions of several non-limiting examples of implementations of the techniques described above.
Assume that the Y component is processed by a DRA defined via a piecewise linear function Sy,i(Y) over a set of non-overlapping ranges {Ri}, where Y is the processed luma sample and i is the id of the partition Ri. The sub-ranges Ri of the DRA function are defined by [yi, yi+1 - 1], where i indicates the index of the range, the yi term indicates the starting value of range i, and N is the total number of utilized sub-ranges. The scale value Sy,i and offset value Oy,i are defined independently for each Ri and are applied to all Y samples with values in the sub-range [yi, yi+1 - 1].
In some examples, the inverse DRA function may be implemented at the decoder side, as shown above in equation (8) and fig. 12. Fig. 13 shows an example of a DRA mapping function 183 applied to an input luma component Y (input codewords), together with a "no DRA" function 181 for the case when no DRA is applied.
In some examples, the value of the LCS function SCb or SCr for the chroma scale factor may be set equal to the scale of the DRA function Sy applied to the luma value Y, as shown in equation (22).
Since the DRA function Sy is defined with an equal scale for all Y samples of sub-range i, the derived LCS function has the characteristics of a discontinuous (e.g., step) function, shown as curve 191 in fig. 14. In some examples, video preprocessor unit 19 may derive the LCS function SCb or SCr by linearization of the scale factors of the luma DRA function (shown as curve 193 in fig. 14):
SCb(Y) = SCr(Y) = fun(Sy(Y))   (29)
In some examples, the linearization fun(Sy(Y)) may be implemented by local approximation of the scale parameters within the neighboring partitions i and i+1, as follows.

For i = 0..N-2, the sizes of the adjacent sub-ranges are derived:

Di = (yi+1 - 1 - yi)
Di+1 = (yi+2 - 1 - yi+1)   (30)

The approximation sub-range Rci for the LCS function is defined as the sub-range Ri of the DRA function shifted by half a range:

Yci = yi + Di / 2
Yci+1 = yi+1 + Di+1 / 2   (31)

Video preprocessor unit 19 may derive the LCS scale factor for Y = Yci .. Yci+1 - 1 as a linear interpolation of the DRA scale factors of the neighboring sub-ranges:

localScale = (Sy(Yci+1) - Sy(Yci)) / (Yci+1 - Yci)
SCb(Y) = SCr(Y) = Sy(Yci) + (Y - Yci) * localScale   (32)

An example of such a linearized LCS function is shown as curve 193 in fig. 14.
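A sketch of the linearization of equations (30) through (32); the three luma sub-ranges and their scales are hypothetical, and the flat extension outside the first and last mid-points is a choice made for this sketch.

def linearized_lcs(Y, y_starts, scales):
    """Equations (30)-(32): interpolate the luma DRA scales between the
    mid-points of adjacent sub-ranges to obtain a continuous chroma scale
    S_Cb(Y) = S_Cr(Y)."""
    N = len(scales)
    # Mid-points Yc_i = y_i + D_i / 2 of each sub-range (equations 30-31).
    mids = [y_starts[i] + (y_starts[i + 1] - 1 - y_starts[i]) / 2.0 for i in range(N)]
    if Y <= mids[0]:
        return scales[0]        # flat extension below the first mid-point
    if Y >= mids[-1]:
        return scales[-1]       # flat extension above the last mid-point
    i = 0
    while Y >= mids[i + 1]:
        i += 1
    local = (scales[i + 1] - scales[i]) / (mids[i + 1] - mids[i])  # equation (32)
    return scales[i] + (Y - mids[i]) * local

# Hypothetical 3-range luma DRA over 10-bit codewords.
print(linearized_lcs(400.0, [0, 256, 768, 1024], [1.2, 1.0, 0.6]))  # ~1.058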
In another example, video preprocessor unit 19 may derive the parameters of the LCS function SCb or SCr from the parameters of the DRA function Sy applied to the Y component and from quantization parameters of the codec (e.g., QP, deltaQP, and/or chroma QP parameters).
An adjustment of the LCS parameters {lcs_param1, lcs_param2, lcs_param3}, proportional to the quantization parameter (e.g., QP) used to code the current picture/slice or transform block, may be used to approximate the LCS from the luma DRA scaling function S_y and to remove its discontinuities.
For example, the interpolation of the chroma scale function over the range [Yc_i..Yc_{i+1}] shown in equation (33) may be performed via a sigmoid function adjusted by the QP. For values Y = Yc_i..Yc_{i+1} - 1, a sigmoid function can be used to interpolate between the scales of the adjacent ranges:

S_Cb(Y) = lcs_param1 + lcs_param2 / (1.0 + exp(-lcs_param3 * Y)) (33)
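A small sketch of this QP-adjusted sigmoid blend is shown below; centering the sigmoid on the boundary between the two sub-ranges (the Y - y_boundary term) and setting lcs_param1/lcs_param2 from the two neighboring scales are assumptions chosen so the blend transitions where the step in the DRA scale occurs:

```python
import math

def sigmoid_lcs(Y, y_boundary, s_lo, s_hi, lcs_param3):
    """Sigmoid blend of equation (33) between the scales s_lo and s_hi of two
    adjacent sub-ranges, with lcs_param1 = s_lo and lcs_param2 = s_hi - s_lo
    (one plausible parameterization)."""
    lcs_param1 = s_lo
    lcs_param2 = s_hi - s_lo
    return lcs_param1 + lcs_param2 / (1.0 + math.exp(-lcs_param3 * (Y - y_boundary)))

# a larger lcs_param3 gives a sharper, more step-like transition:
# sigmoid_lcs(520, 512, 1.0, 0.9, 0.1) ≈ 0.93 (partway toward 0.9)
```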
In yet another example, an adaptive smoothed discontinuous function, or an approximation thereof, may be used for the interpolation.
With this approach, controlled smoothness can be introduced into the discontinuous LCS function at the jumps between the scales of adjacent sub-ranges. A set of smoothing functions may be provided to the decoder side (e.g., video decoder 30 and/or video post-processor unit 31) as side information, and the selected function id may be signaled in the bitstream. Alternatively, the smoothing function itself may be signaled in the bitstream.
The derivation of the smoothing function may also be tabulated in the form of a relationship between the QP and the parameter or function id of the smoothing function, as shown below.
| QP | Function ID | lcs_param1 | lcs_param2 | lcs_param3 |
|---|---|---|---|---|
| 23 | i | lcs_param1(i) | lcs_param2(i) | lcs_param3(i) |
| … | | | | |
| 52 | M-1 | lcs_param1(M-1) | lcs_param2(M-1) | lcs_param3(M-1) |
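In an implementation, such a tabulated relationship could be a simple keyed lookup, as sketched below; the numeric entries are placeholders (no concrete values are published above), and falling back to the nearest tabulated QP is an assumed policy:

```python
# Placeholder QP -> smoothing-function parameter table (illustrative only).
LCS_PARAM_TABLE = {
    23: {"function_id": 0, "lcs_param1": 0.0, "lcs_param2": 0.10, "lcs_param3": 0.10},
    37: {"function_id": 3, "lcs_param1": 0.0, "lcs_param2": 0.20, "lcs_param3": 0.15},
    52: {"function_id": 7, "lcs_param1": 0.0, "lcs_param2": 0.40, "lcs_param3": 0.25},
}

def smoothing_params(qp: int) -> dict:
    """Return the smoothing-function parameters tabulated for the nearest QP."""
    nearest = min(LCS_PARAM_TABLE, key=lambda q: abs(q - qp))
    return LCS_PARAM_TABLE[nearest]
```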
Fig. 15 shows an example of the use of a set of sigmoid functions to provide controlled smoothness of the discontinuous function S_Cb (or S_Cr) at the boundary of two adjacent sub-ranges.
An example of applying a controlled-smoothness function that utilizes equation (33) to derive the LCS function is shown as curve 195 (smoothed DRA scale) in fig. 16. Curve 197 shows the original DRA scale, and curve 199 shows the linearized DRA scale. Curve 195 (the smoothed DRA curve) may be generated with lcs_param1(i) = 0, lcs_param2(i) = S_c,i+1 - S_c,i, and lcs_param3(i) = 0.1.
In some examples, the LCS function S_Cb (or S_Cr) may be further adapted to the colorimetric properties of the video data, as shown in equation (26).
The characteristics of the target container gamut primeT and the native color gamut primeN are defined via the coordinates of their primary colors, primeT = (xRc, yRc; xGc, yGc; xBc, yBc) and primeN = (xRn, yRn; xGn, yGn; xBn, yBn), together with the white point coordinates whiteP = (xW, yW). Examples of such color gamuts are given in table 1. In this case, the distances between the primaries and the white point are computed as follows:
rdT = sqrt((primeT(1,1) - whiteP(1,1))^2 + (primeT(1,2) - whiteP(1,2))^2)

gdT = sqrt((primeT(2,1) - whiteP(1,1))^2 + (primeT(2,2) - whiteP(1,2))^2)

bdT = sqrt((primeT(3,1) - whiteP(1,1))^2 + (primeT(3,2) - whiteP(1,2))^2)

rdN = sqrt((primeN(1,1) - whiteP(1,1))^2 + (primeN(1,2) - whiteP(1,2))^2)

gdN = sqrt((primeN(2,1) - whiteP(1,1))^2 + (primeN(2,2) - whiteP(1,2))^2)

bdN = sqrt((primeN(3,1) - whiteP(1,1))^2 + (primeN(3,2) - whiteP(1,2))^2)

α = sqrt((bdT/bdN)^2)

β = sqrt((rdT/rdN)^2 + (gdT/gdN)^2)
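As a worked sketch, the distances and the resulting α and β multipliers can be computed directly from the gamut coordinates. The BT.2100 container and BT.709 native primaries with a D65 white point below are standard chromaticity values, used here purely for illustration; they are not asserted to be the gamuts of table 1:

```python
import math

def dist_to_white(primary, white):
    """Euclidean distance from a primary's (x, y) chromaticity to the white point."""
    return math.hypot(primary[0] - white[0], primary[1] - white[1])

primeT = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]  # target R, G, B (BT.2100)
primeN = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]  # native R, G, B (BT.709)
whiteP = (0.3127, 0.3290)                                  # D65 white point

rdT, gdT, bdT = (dist_to_white(p, whiteP) for p in primeT)
rdN, gdN, bdN = (dist_to_white(p, whiteP) for p in primeN)

alpha = bdT / bdN                                   # sqrt((bdT/bdN)^2)
beta = math.sqrt((rdT / rdN) ** 2 + (gdT / gdN) ** 2)
```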
In another example, the values of the α and β multipliers may be derived from QP-like parameters of the codec's quantization scheme that adjust for the color container of the coded video:
α = 2^(deltaCbQP/6 - 1)

β = 2^(deltaCrQP/6 - 1)
where deltaCbQP and deltaCrQP are adjustments to the QP settings of the codec made to match the characteristics of the color container.
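This follows the usual QP scaling of roughly 2^(1/6) per QP step, so a one-line helper suffices; the deltaQP values below are illustrative:

```python
def qp_multiplier(delta_qp: float) -> float:
    """Multiplier of the form 2^(deltaQP/6 - 1), as in the relations above."""
    return 2.0 ** (delta_qp / 6.0 - 1.0)

alpha = qp_multiplier(3.0)  # deltaCbQP = 3 -> alpha = 2^(-0.5) ≈ 0.707
beta = qp_multiplier(6.0)   # deltaCrQP = 6 -> beta = 2^0 = 1.0
```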
In some examples, the parameters of the techniques described above may be estimated at the encoder side (e.g., by video encoder 20 and/or video pre-processor unit 19) and signaled in the bitstream (e.g., in metadata, SEI messages, VUI, or SPS/PPS or slice headers). Video decoder 30 and/or video post-processor unit 31 receive the parameters from the bitstream.
In some examples, the parameters of the techniques described above are derived from the input signal, or from other available parameters associated with the input signal and the processing flow, via a specified process at both video encoder 20/video pre-processor unit 19 and video decoder 30/video post-processor unit 31.
In some examples, the parameters of the techniques described above are explicitly signaled and are sufficient for performing the DRA at video decoder 30/video post-processor unit 31. In still other examples, the parameters of the techniques described above are derived from other input signal parameters, such as parameters of the input gamut and the target color container (color primaries).
Fig. 17 is a block diagram showing an example of a video encoder 20 that may implement the techniques of this disclosure. As described above, DRA techniques may be performed by video pre-processor unit 19 outside of the coding loop of video encoder 20 or within the coding loop of video encoder 20 (e.g., prior to prediction). Video encoder 20 may perform intra and inter coding of video blocks within video slices in the target color container that have been processed by video pre-processor unit 19. Intra-coding relies on spatial prediction to reduce or remove spatial redundancy in video within a given video frame or picture. Inter-coding relies on temporal prediction to reduce or remove temporal redundancy of video within neighboring frames or pictures of a video sequence. Intra-mode (I-mode) may refer to any of a number of spatial-based coding modes. An inter mode, such as uni-directional prediction (P-mode) or bi-directional prediction (B-mode), may refer to any of a number of temporally based coding modes.
As shown in fig. 17, video encoder 20 receives a current video block within a video frame to be encoded. In the example of fig. 17, video encoder 20 includes mode select unit 40, video data memory 41, decoded picture buffer 64, summer 50, transform processing unit 52, quantization unit 54, and entropy encoding unit 56. Mode select unit 40, in turn, includes motion compensation unit 44, motion estimation unit 42, intra-prediction processing unit 46, and partition unit 48. For video block reconstruction, video encoder 20 also includes an inverse quantization unit 58, an inverse transform processing unit 60, and a summer 62. A deblocking filter (not shown in fig. 17) may also be included to filter block boundaries to remove blockiness artifacts from the reconstructed video. The deblocking filter will typically filter the output of summer 62, if desired. In addition to deblocking filters, additional filters (in-loop or post-loop) may be used. These filters are not shown for simplicity, but may filter the output of summer 50 (as an in-loop filter) if desired.
Video data memory 41 may store video data to be encoded by components of video encoder 20. The video data stored in video data memory 41 may be obtained, for example, from video source 18. Decoded picture buffer 64 may be a reference picture memory that stores reference video data for use by video encoder 20 when encoding video data, e.g., in intra or inter coding modes. Video data memory 41 and decoded picture buffer 64 may be formed from any of a variety of memory devices, such as dynamic random access memory (DRAM), including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), resistive RAM (RRAM), or other types of memory devices. Video data memory 41 and decoded picture buffer 64 may be provided by the same memory device or separate memory devices. In various examples, video data memory 41 may be on-chip with other components of video encoder 20, or off-chip with respect to those components.
During the encoding process, video encoder 20 receives a video frame or slice to be coded. The frame or slice may be divided into a plurality of video blocks. Motion estimation unit 42 and motion compensation unit 44 perform inter-predictive coding of the received video block relative to one or more blocks in one or more reference frames to provide temporal prediction. Intra-prediction processing unit 46 may alternatively perform intra-predictive coding of the received video block relative to one or more neighboring blocks in the same frame or slice as the block to be coded to provide spatial prediction. Video encoder 20 may perform multiple coding passes, e.g., to select an appropriate coding mode for each block of video data.
Furthermore, partition unit 48 may partition a block of video data into sub-blocks based on an evaluation of previous partition schemes in previous coding passes. For example, partition unit 48 may initially partition a frame or slice into a plurality of LCUs, and partition each of the LCUs into sub-CUs based on rate-distortion analysis (e.g., bit rate-distortion optimization). Mode select unit 40 may further generate quadtree and/or QTBT data structures that indicate partitioning of the LCU into sub-CUs. Leaf-node CUs of a quadtree may include one or more PUs and one or more TUs.
Mode select unit 40 may select one of the coding modes (intra or inter), e.g., based on the error results, and provide the resulting intra-coded block or inter-coded block to summer 50 to generate residual block data and to summer 62 to reconstruct the encoded block for use as a reference frame. Mode select unit 40 also provides syntax elements, such as motion vectors, intra-mode indicators, partition information, and other such syntax information, to entropy encoding unit 56.
Motion estimation unit 42 and motion compensation unit 44 may be highly integrated, but are shown separately for conceptual purposes. Motion estimation, performed by motion estimation unit 42, is the process of generating motion vectors that estimate the motion of video blocks. For example, a motion vector may indicate the displacement of a PU of a video block within a current video frame or picture relative to a predictive block within a reference picture (or other coded unit), as compared with the current block being coded within the current picture (or other coded unit). A predictive block is a block that is found to closely match the block to be coded in terms of pixel differences, which may be determined by Sum of Absolute Differences (SAD), Sum of Squared Differences (SSD), or other difference metrics. In some examples, video encoder 20 may calculate values for sub-integer pixel positions of reference pictures stored in decoded picture buffer 64. For example, video encoder 20 may interpolate values for a quarter-pixel position, an eighth-pixel position, or other fractional-pixel positions of a reference picture. Thus, motion estimation unit 42 may perform a motion search with respect to full pixel positions and fractional pixel positions and output motion vectors with fractional pixel precision.
Motion estimation unit 42 calculates motion vectors for PUs of video blocks in inter-coded slices by comparing the locations of the PUs to the locations of predictive blocks of the reference picture. The reference picture may be selected from a first reference picture list (list 0) or a second reference picture list (list 1), each of which identifies one or more reference pictures stored in decoded picture buffer 64. Motion estimation unit 42 sends the calculated motion vectors to entropy encoding unit 56 and motion compensation unit 44.
The motion compensation performed by motion compensation unit 44 may involve extracting or generating a predictive block based on the motion vectors determined by motion estimation unit 42. Again, in some examples, motion estimation unit 42 and motion compensation unit 44 may be functionally integrated. Upon receiving the motion vector for the PU of the current video block, motion compensation unit 44 may locate, in one of the reference picture lists, the predictive block to which the motion vector points. Summer 50 forms a residual video block by subtracting the pixel values of the predictive block from the pixel values of the current video block being coded, forming pixel difference values, as discussed below. In general, motion estimation unit 42 performs motion estimation with respect to luma components, and motion compensation unit 44 uses motion vectors calculated based on the luma components for both chroma and luma components. Mode select unit 40 may also generate syntax elements associated with the video blocks and the video slice for use by video decoder 30 in decoding the video blocks of the video slice.
As described above, as an alternative to inter-prediction performed by motion estimation unit 42 and motion compensation unit 44, intra-prediction processing unit 46 may intra-predict the current block. In particular, intra-prediction processing unit 46 may determine an intra-prediction mode to use to encode the current block. In some examples, intra-prediction unit 46 may encode the current block using various intra-prediction modes, e.g., during separate encoding passes, and intra-prediction processing unit 46 (or mode selection unit 40 in some examples) may select an appropriate intra-prediction mode from the tested modes for use.
For example, intra-prediction processing unit 46 may calculate rate-distortion values using rate-distortion analysis for the various tested intra-prediction modes, and select the intra-prediction mode having the best rate-distortion characteristics among the tested modes. Rate-distortion analysis generally determines the amount of distortion (or error) between an encoded block and the original, unencoded block that was encoded to produce the encoded block, as well as the bit rate (i.e., number of bits) used to produce the encoded block. Intra-prediction processing unit 46 may calculate ratios from the distortions and bit rates of the various encoded blocks to determine which intra-prediction mode exhibits the best rate-distortion value for the block.
Upon selecting the intra-prediction mode for the block, intra-prediction processing unit 46 may provide information to entropy encoding unit 56 indicating the selected intra-prediction mode for the block. Entropy encoding unit 56 may encode information indicating the selected intra-prediction mode. Video encoder 20 may include the following in the transmitted bitstream: configuration data, which may include a plurality of intra-prediction mode index tables and a plurality of modified intra-prediction mode index tables (also referred to as codeword mapping tables); definition of coding context of various blocks; and an indication of a most probable intra-prediction mode, an intra-prediction mode index table, and a modified intra-prediction mode index table to be used for each of the contexts.
Video encoder 20 forms a residual video block by subtracting the prediction data from mode select unit 40 from the original video block being coded. Summer 50 represents one or more components that perform this subtraction operation. Transform processing unit 52 applies a transform, such as a Discrete Cosine Transform (DCT) or a conceptually similar transform, to the residual block, producing a video block that includes residual transform coefficient values. Transform processing unit 52 may perform other transforms that are conceptually similar to DCT. Wavelet transforms, integer transforms, sub-band transforms, or other types of transforms may also be used. In any case, transform processing unit 52 applies a transform to the residual block, producing a block of residual transform coefficients. The transform may convert the residual information from a pixel value domain to a transform domain, such as the frequency domain. Transform processing unit 52 may send the resulting transform coefficients to quantization unit 54.
Quantization unit 54 quantizes the transform coefficients to further reduce the bit rate. The quantization process may reduce the bit depth associated with some or all of the coefficients. The quantization level may be modified by adjusting a quantization parameter. In some examples, quantization unit 54 may then perform a scan of a matrix that includes quantized transform coefficients. Alternatively, entropy encoding unit 56 may perform scanning.
After quantization, entropy encoding unit 56 entropy codes the quantized transform coefficients. For example, entropy encoding unit 56 may perform Context Adaptive Variable Length Coding (CAVLC), Context Adaptive Binary Arithmetic Coding (CABAC), syntax-based context adaptive binary arithmetic coding (SBAC), Probability Interval Partition Entropy (PIPE) coding, or another entropy coding technique. In the case of context-based entropy coding, the contexts may be based on neighboring blocks. After entropy coding by entropy encoding unit 56, the encoded bitstream may be transmitted to another device, such as video decoder 30, or archived for later transmission or retrieval.
Inverse quantization unit 58 and inverse transform processing unit 60 apply inverse quantization and inverse transform, respectively, to reconstruct the residual block in the pixel domain, e.g., for later use as a reference block. Motion compensation unit 44 may calculate a reference block by adding the residual block to a predictive block of one of the frames of decoded picture buffer 64. Motion compensation unit 44 may also apply one or more interpolation filters to the reconstructed residual block to calculate sub-integer pixel values for use in motion estimation. Summer 62 adds the reconstructed residual block to the motion compensated prediction block produced by motion compensation unit 44 to produce a reconstructed video block for storage in decoded picture buffer 64. The reconstructed video block may be used by motion estimation unit 42 and motion compensation unit 44 as a reference block to inter-code a block in a subsequent video frame.
Fig. 18 is a block diagram showing an example of a video decoder 30 that may implement the techniques of this disclosure. As described above, the inverse DRA techniques may be performed by video post-processor unit 31 outside of the decoding loop of video decoder 30 or within the decoding loop of video decoder 30 (e.g., after filtering and before decoded picture buffer 82). In the example of fig. 18, video decoder 30 includes an entropy decoding unit 70, a video data memory 71, a motion compensation unit 72, an intra-prediction processing unit 74, an inverse quantization unit 76, an inverse transform processing unit 78, a decoded picture buffer 82, and a summer 80. In some examples, video decoder 30 may perform a decoding pass that is substantially reciprocal to the encoding pass described with respect to video encoder 20 (fig. 17). Motion compensation unit 72 may generate prediction data based on the motion vectors received from entropy decoding unit 70, while intra-prediction processing unit 74 may generate prediction data based on the intra-prediction mode indicator received from entropy decoding unit 70.
Video data memory 71 may store video data, such as an encoded video bitstream, to be decoded by components of video decoder 30. The video data stored in video data memory 71 may be obtained, for example, from computer-readable medium 16, e.g., from a local video source such as a camera, via wired or wireless network communication of video data, or by accessing physical data storage media. Video data memory 71 may form a coded picture buffer (CPB) that stores encoded video data from an encoded video bitstream. Decoded picture buffer 82 may be a reference picture memory that stores reference video data for use by video decoder 30 when decoding video data, e.g., in intra or inter coding modes. Video data memory 71 and decoded picture buffer 82 may be formed from any of a variety of memory devices, such as dynamic random access memory (DRAM), including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), resistive RAM (RRAM), or other types of memory devices. Video data memory 71 and decoded picture buffer 82 may be provided by the same memory device or separate memory devices. In various examples, video data memory 71 may be on-chip with other components of video decoder 30, or off-chip with respect to those components.
During the decoding process, video decoder 30 receives an encoded video bitstream representing video blocks of an encoded video slice and associated syntax elements from video encoder 20. Entropy decoding unit 70 of video decoder 30 entropy decodes the bitstream to generate quantized coefficients, motion vectors, or intra-prediction mode indicators, among other syntax elements. Entropy decoding unit 70 forwards the motion vectors and other syntax elements to motion compensation unit 72. Video decoder 30 may receive syntax elements at the video slice level and/or the video block level.
When a video slice is coded as an intra-coded (I) slice, intra-prediction processing unit 74 may generate prediction data for a video block of the current video slice based on the signaled intra-prediction mode and data from previously decoded blocks of the current frame or picture. When a video frame is coded as an inter-coded (i.e., B or P) slice, motion compensation unit 72 generates predictive blocks for the video blocks of the current video slice based on the motion vectors and other syntax elements received from entropy decoding unit 70. The predictive blocks may be generated from one of the reference pictures within one of the reference picture lists. Video decoder 30 may construct the reference picture lists (list 0 and list 1) using default construction techniques based on the reference pictures stored in decoded picture buffer 82. Motion compensation unit 72 determines prediction information for the video blocks of the current video slice by parsing the motion vectors and other syntax elements, and uses the prediction information to generate predictive blocks for the current video block being decoded. For example, motion compensation unit 72 uses some of the received syntax elements to determine a prediction mode (e.g., intra-prediction or inter-prediction) used to code the video blocks of the video slice, an inter-prediction slice type (e.g., a B-slice or a P-slice), construction information for one or more of the reference picture lists of the slice, a motion vector for each inter-coded video block of the slice, an inter-prediction status for each inter-coded video block of the slice, and other information used to decode the video blocks in the current video slice.
Motion compensation unit 72 may also perform interpolation based on interpolation filters. Motion compensation unit 72 may use interpolation filters as used by video encoder 20 during encoding of video blocks to calculate interpolated values for sub-integer pixels of a reference block. In this case, motion compensation unit 72 may determine the interpolation filter used by video encoder 20 from the received syntax element and use the interpolation filter to generate the predictive block.
Inverse quantization unit 76 inverse quantizes (or dequantizes) the quantized transform coefficients provided in the bitstream and decoded by entropy decoding unit 70. The inverse quantization process may include use of a quantization parameter QP_Y, calculated by video decoder 30 for each video block in the video slice, to determine a degree of quantization and, likewise, a degree of inverse quantization that should be applied. Inverse transform processing unit 78 applies an inverse transform (e.g., an inverse DCT, an inverse integer transform, or a conceptually similar inverse transform process) to the transform coefficients in order to generate residual blocks in the pixel domain.
After motion compensation unit 72 generates the predictive block for the current video block based on the motion vectors and other syntax elements, video decoder 30 forms a decoded video block by summing the residual block from inverse transform processing unit 78 with the corresponding predictive block generated by motion compensation unit 72. Summer 80 represents one or more components that perform this summation operation. When desired, a deblocking filter may also be applied to filter the decoded blocks in order to remove blockiness artifacts. Other loop filters may also be used (within or after the coding loop) to smooth pixel transitions, or otherwise improve video quality. The decoded video blocks in a given frame or picture are then stored in decoded picture buffer 82, which stores reference pictures used for subsequent motion compensation. Decoded picture buffer 82 also stores decoded video for later presentation on a display device, such as display device 32 of fig. 1.
FIG. 19 is a flow diagram showing one example video processing technique of this disclosure. As described above, the techniques of fig. 19 may be performed by video preprocessor unit 19 and/or video encoder 20. In one example of this disclosure, video preprocessor unit 19 and/or video encoder 20 may be configured to receive video data (1900), determine a luma scale parameter for each of a plurality of ranges of codeword values for a luma component of the video data (1902), and perform a dynamic range adjustment process on the luma component using the luma scale parameters (1904). Video preprocessor unit 19 and/or video encoder 20 may be further configured to determine chroma scale parameters for chroma components of the video data using a function of the luma scale parameters (1906), and perform a dynamic range adjustment process on the chroma components of the video data using the chroma scale parameters (1908). Video encoder 20 may then be configured to encode the video data (1910).
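Tying the steps of fig. 19 together, a compact encoder-side sketch might look as follows; the piecewise-linear mapping form, the zero-centered chroma samples, and the helper linearized_lcs from the earlier sketch are assumptions of this illustration rather than requirements of the disclosure:

```python
import numpy as np

def encoder_side_dra(luma, cb, cr, y_bounds, scales, offsets):
    """Sketch of the fig. 19 flow: per-range luma scales drive the luma DRA
    (steps 1902-1904), the chroma scale is a function of those same luma
    scales (step 1906), and that scale drives the chroma DRA (step 1908).
    Chroma samples are assumed zero-centered, so scaling is a multiply."""
    y_b, s_y, o_y = map(np.asarray, (y_bounds, scales, offsets))
    i = np.clip(np.searchsorted(y_b, luma, side="right") - 1, 0, len(s_y) - 1)
    luma_out = s_y[i] * (luma - y_b[i]) + o_y[i]                    # luma DRA
    s_c = np.vectorize(lambda Y: linearized_lcs(Y, y_bounds, scales))(luma)
    return luma_out, s_c * cb, s_c * cr                             # chroma DRA
```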
In one example, to determine the chroma scale parameters, video preprocessor unit 19 and/or video encoder 20 is further configured to determine the chroma scale parameters for chroma components associated with a luma component having a first range of codeword values of the multiple ranges of codeword values, using a function of the luma scale parameters determined for luma components having the first range of codeword values.
In another example, to determine the chroma scale parameters, video preprocessor unit 19 and/or video encoder 20 is further configured to determine the chroma scale parameters for chroma components associated with luma components having a first range of codeword values and a second range of codeword values using a function of the luma scale parameters determined for luma components having the first range of codeword values and the second range of codeword values.
In another example, the luma scale parameter for each of the multiple ranges of codeword values for the luma component is represented by a discontinuous function, and video preprocessor unit 19 and/or video encoder 20 are further configured to apply a linearization process to the discontinuous function to generate linearized luma scale parameters, and determine the chroma scale parameters for the chroma component of the video data using the function of the linearized luma scale parameters. In one example, the linearization process is one or more of a linear interpolation process, a curve fitting process, an averaging process, or a high order approximation process.
In another example, to determine the chroma scale parameters, video preprocessor unit 19 and/or video encoder 20 is further configured to determine the chroma scale parameters for chroma components of the video data using a function of the luma scale parameters and quantization parameters used to decode the chroma components.
In another example, to determine the chroma scale parameters, video preprocessor unit 19 and/or video encoder 20 is further configured to determine the chroma scale parameters for the chroma components of the video data using a function of the luma scale parameters, quantization parameters used to decode the chroma components, and color representation parameters derived from characteristics of the chroma components of the video data. In one example, the color representation includes a transfer function associated with the color container/video data.
In another example, video preprocessor unit 19 and/or video encoder 20 is further configured to determine an initial chroma scale parameter for a chroma component of the video data, wherein to determine the chroma scale parameter, video preprocessor unit 19 and/or video encoder 20 is further configured to determine the chroma scale parameter for the chroma component of the video data using a function of the luma scale parameter and the initial chroma scale parameter.
In another example, video preprocessor unit 19 and/or video encoder 20 determines a luma offset parameter for the luma component, performs a dynamic range adjustment process on the luma component using the luma scale parameter and the luma offset parameter, determines a chroma offset parameter for the chroma component, and performs a dynamic range adjustment process on the chroma component of the video data using the chroma scale parameter and the chroma offset parameter.
FIG. 20 is a flow diagram showing another example video processing technique of this disclosure. As described above, the techniques of fig. 20 may be performed by video post-processor unit 31 and/or video decoder 30. In one example of this disclosure, video decoder 30 may be configured to decode video data (2000), and video post-processor unit 31 and/or video decoder 30 may be configured to receive video data (2002). Video post-processor unit 31 and/or video decoder 30 may be further configured to determine a luma scale parameter for each of a plurality of ranges of codeword values for a luma component of the video data (2004), and perform an inverse dynamic range adjustment process on the luma component using the luma scale parameters (2006). Video post-processor unit 31 and/or video decoder 30 may be further configured to determine chroma scale parameters for chroma components of the video data using a function of the luma scale parameters (2008), and perform an inverse dynamic range adjustment process on the chroma components of the video data using the chroma scale parameters (2010).
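A mirrored decoder-side sketch follows; inverting by division, assuming monotonically increasing offsets so the mapped sub-range can be located by search, and reusing linearized_lcs are illustrative choices (practical implementations typically precompute fixed-point inverse scales):

```python
import numpy as np

def decoder_side_inverse_dra(luma, cb, cr, y_bounds, scales, offsets):
    """Sketch of the fig. 20 flow: the luma DRA is inverted first (steps
    2004-2006), the chroma scale is re-derived from the reconstructed luma
    with the same luma-scale function (step 2008), and the chroma DRA is
    undone by division (step 2010)."""
    y_b, s_y, o_y = map(np.asarray, (y_bounds, scales, offsets))
    # invert Y' = S_y,i * (Y - y_i) + O_y,i: locate the mapped sub-range
    # (offsets assumed increasing), then apply the reciprocal mapping
    i = np.clip(np.searchsorted(o_y, luma, side="right") - 1, 0, len(s_y) - 1)
    luma_out = (luma - o_y[i]) / s_y[i] + y_b[i]
    # re-derive the chroma scale from the reconstructed luma and divide
    s_c = np.vectorize(lambda Y: linearized_lcs(Y, y_bounds, scales))(luma_out)
    return luma_out, cb / s_c, cr / s_c
```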
In one example, to determine the chroma scale parameter, video post-processor unit 31 and/or video decoder 30 is further configured to determine the chroma scale parameter for a chroma component associated with a luma component having a first range of codeword values of the plurality of ranges of codeword values, using a function of the luma scale parameter determined for the luma component having the first range of codeword values.
In another example, to determine the chroma scale parameter, video post-processor unit 31 and/or video decoder 30 is further configured to determine the chroma scale parameter for the chroma component associated with the luma component having the first range of codeword values and the second range of codeword values using a function of the luma scale parameter determined for the luma component having the first range of codeword values and the second range of codeword values.
In another example, the luma scale parameter for each of the multiple ranges of codeword values for the luma component is represented by a discontinuous function, and video post-processor unit 31 and/or video decoder 30 are further configured to apply a linearization process to the discontinuous function to generate linearized luma scale parameters, and determine the chroma scale parameters for the chroma component of the video data using the function of linearized luma scale parameters. In one example, the linearization process is one or more of a linear interpolation process, a curve fitting process, an averaging process, or a high order approximation process.
In another example, to determine the chroma scale parameter, video post-processor unit 31 and/or video decoder 30 is further configured to determine the chroma scale parameter for the chroma component of the video data using a function of the luma scale parameter and a quantization parameter used to decode the chroma component.
In another example, to determine the chroma scale parameter, video post-processor unit 31 and/or video decoder 30 is further configured to determine the chroma scale parameter for the chroma component of the video data using a function of the luma scale parameter, a quantization parameter used to decode the chroma component, and a color representation parameter derived from a characteristic of the chroma component of the video data. In one example, the color representation parameters include a transfer function associated with the color container/video data.
In another example, video post-processor unit 31 and/or video decoder 30 is further configured to determine an initial chroma scale parameter for a chroma component of the video data, wherein to determine the chroma scale parameter, video post-processor unit 31 and/or video decoder 30 is further configured to determine the chroma scale parameter for the chroma component of the video data using a function of the luma scale parameter and the initial chroma scale parameter.
In another example, video post-processor unit 31 and/or video decoder 30 determines a luma offset parameter for a luma component, performs an inverse dynamic range adjustment process on the luma component using the luma scale parameter and the luma offset parameter, determines a chroma offset parameter for a chroma component, and performs an inverse dynamic range adjustment process on the chroma component of the video data using the chroma scale parameter and the chroma offset parameter.
Particular aspects of this disclosure have been described with respect to HEVC, extensions of HEVC, JEM, and the VVC standard for purposes of illustration. However, the techniques described in this disclosure may be applicable to other video coding processes, including other standard or proprietary video coding processes, such as VVC, that are in development or not yet developed.
As described in this disclosure, a video coder may refer to a video encoder or a video decoder. Similarly, a video coding unit may refer to a video encoder or a video decoder. Likewise, video coding may refer to video encoding or video decoding, as applicable.
It should be recognized that depending on the example, certain acts or events of any of the techniques described herein may be performed in a different sequence, may be added, merged, or left out entirely (e.g., not all described acts or events are necessary to practice the techniques). Further, in some instances, acts or events may be performed concurrently, e.g., via multi-threaded processing, interrupt processing, or multiple processors, rather than sequentially.
In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media (which corresponds to tangible media, such as data storage media) or communication media (which includes any medium that facilitates transfer of a computer program from one place to another, such as in accordance with a communication protocol). In this manner, the computer-readable medium may generally correspond to (1) a tangible computer-readable storage medium that is not transitory, or (2) a communication medium such as a signal or carrier wave. A data storage medium may be any available medium that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described herein. The computer program product may include a computer-readable medium.
By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but instead refer to non-transitory, tangible storage media. Disk and disc, as used herein, include Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The instructions may be executed by one or more processors, such as one or more Digital Signal Processors (DSPs), general purpose microprocessors, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Thus, the term "processor," as used herein, may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein. Additionally, in some aspects, the functions described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Furthermore, the techniques may be fully implemented in one or more circuits or logic elements.
The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including wireless handsets, Integrated Circuits (ICs), or sets of ICs (e.g., chipsets). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but realization by different hardware units is not necessarily required. Rather, as described above, the various units may be combined in a codec hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.
Various examples have been described. These and other examples are within the scope of the following claims.
Claims (30)
1. A method of processing video data, the method comprising:
receiving video data;
determining a luma scale parameter for each of a plurality of ranges of codeword values for a luma component of the video data;
performing a dynamic range adjustment process on the luma component using the luma scale parameter;
determining a chroma scale parameter for a chroma component of the video data using a function of the luma scale parameter; and
performing a dynamic range adjustment process on the chroma component of the video data using the chroma scale parameter.
2. The method of claim 1, wherein determining the chroma scale parameter comprises:
determining chroma scale parameters for chroma components associated with luma components having a first range of codeword values in the plurality of ranges of codeword values, using a function of the luma scale parameters determined for the luma components having the first range of codeword values.
3. The method of claim 1, wherein determining the chroma scale parameter comprises:
determining chroma scale parameters for chroma components associated with luma components having a first range of codeword values and a second range of codeword values in the plurality of ranges of codeword values using a function of the luma scale parameters determined for the luma components having the first range of codeword values and the second range of codeword values.
4. The method of claim 1, wherein the luma scale parameter for each of the plurality of ranges of codeword values for the luma component is represented by a discontinuous function, the method further comprising:
applying a linearization process to the discontinuous function to produce a linearized lightness scale parameter; and
determining the chroma scale parameter for the chroma component of the video data using a function of the linearized luma scale parameter.
5. The method of claim 4, wherein the linearization process is one or more of a linear interpolation process, a curve fitting process, an averaging process, a low pass filtering process, or a high order approximation process.
6. The method of claim 1, wherein determining the chroma scale parameter further comprises:
determining the chroma scale parameter for the chroma component of the video data using a function of the luma scale parameter and a quantization parameter used to decode the chroma component.
7. The method of claim 6, wherein determining the chroma scale parameter further comprises:
determining the chroma scale parameter for the chroma component of the video data using a function of the luma scale parameter, the quantization parameter used to decode the chroma component, and a color representation parameter derived from a characteristic of the chroma component of the video data.
8. The method of claim 7, wherein the color representation parameters include a transfer function associated with the video data.
9. The method of claim 1, further comprising:
determining initial chroma scale parameters for the chroma component of the video data,
wherein determining the chroma scale parameter comprises determining the chroma scale parameter for the chroma component of the video data using a function of the luma scale parameter and the initial chroma scale parameter.
10. The method of claim 1, further comprising:
determining a luma offset parameter for the luma component;
performing the dynamic range adjustment process on the luma component using the luma scale parameter and the luma offset parameter;
determining a chroma offset parameter for the chroma component; and
performing the dynamic range adjustment process on the chroma component of the video data using the chroma scale parameter and the chroma offset parameter.
11. The method of claim 1, further comprising:
encoding the video data after performing the dynamic range adjustment process on the luma component and after performing the dynamic range adjustment process on the chroma component.
12. The method of claim 1, wherein the dynamic range adjustment process is an inverse dynamic range adjustment process, the method further comprising:
decoding the video data prior to performing the inverse dynamic range adjustment process on the luma component and prior to performing the inverse dynamic range adjustment process on the chroma component.
13. An apparatus configured to process video data, the apparatus comprising:
a memory configured to store the video data; and
one or more processors in communication with the memory, the one or more processors configured to:
receiving the video data;
determining a luma scale parameter for each of a plurality of ranges of codeword values for a luma component of the video data;
performing a dynamic range adjustment process on the luma component using the luma scale parameter;
determining a chroma scale parameter for a chroma component of the video data using a function of the luma scale parameter; and
performing a dynamic range adjustment process on the chroma component of the video data using the chroma scale parameter.
14. The apparatus of claim 13, wherein to determine the chroma scale parameter, the one or more processors are further configured to:
determining chroma scale parameters for chroma components associated with luma components having a first range of codeword values in the plurality of ranges of codeword values, using a function of the luma scale parameters determined for the luma components having the first range of codeword values.
15. The apparatus of claim 13, wherein to determine the chroma scale parameter, the one or more processors are further configured to:
determining chroma scale parameters for chroma components associated with luma components having a first range of codeword values and a second range of codeword values in the plurality of ranges of codeword values using a function of the luma scale parameters determined for the luma components having the first range of codeword values and the second range of codeword values.
16. The apparatus of claim 13, wherein the luma scale parameter for each of the plurality of ranges of codeword values for the luma component is represented by a discontinuous function, and wherein the one or more processors are further configured to:
applying a linearization process to the discontinuous function to produce a linearized lightness scale parameter; and
determining the chroma scale parameter for the chroma component of the video data using a function of the linearized luma scale parameter.
17. The apparatus of claim 16, wherein the linearization process is one or more of a linear interpolation process, a curve fitting process, an averaging process, a low pass filtering process, or a high order approximation process.
18. The apparatus of claim 13, wherein to determine the chroma scale parameter, the one or more processors are further configured to:
determining the chroma scale parameter for the chroma component of the video data using a function of the luma scale parameter and a quantization parameter used to decode the chroma component.
19. The apparatus of claim 18, wherein to further determine the chroma scale parameter, the one or more processors are further configured to:
determining the chroma scale parameter for the chroma component of the video data using a function of the luma scale parameter, the quantization parameter used to decode the chroma component, and a color representation parameter derived from a characteristic of the chroma component of the video data.
20. The apparatus of claim 19, wherein the color representation parameters comprise a transfer function associated with the video data.
21. The apparatus of claim 13, wherein the one or more processors are further configured to:
determining initial chroma scale parameters for the chroma component of the video data,
wherein to determine the chroma scale parameter, the one or more processors are further configured to determine the chroma scale parameter for the chroma component of the video data using a function of the luma scale parameter and the initial chroma scale parameter.
22. The apparatus of claim 13, wherein the one or more processors are further configured to:
determining a luma offset parameter for the luma component;
performing the dynamic range adjustment process on the luma component using the luma scale parameter and the luma offset parameter;
determining a chroma offset parameter for the chroma component; and
performing the dynamic range adjustment process on the chroma component of the video data using the chroma scale parameter and the chroma offset parameter.
23. The apparatus of claim 13, wherein the one or more processors are further configured to:
encoding the video data after performing the dynamic range adjustment process on the luma component and after performing the dynamic range adjustment process on the chroma component.
24. The apparatus of claim 23, further comprising:
a camera configured to capture the video data.
25. The apparatus of claim 13, wherein the dynamic range adjustment process is an inverse dynamic range adjustment process, wherein the one or more processors are further configured to:
decoding the video data prior to performing the inverse dynamic range adjustment process on the luma component and prior to performing the inverse dynamic range adjustment process on the chroma component.
26. The apparatus of claim 25, further comprising:
a display configured to display the video data after performing the inverse dynamic range adjustment process on the luma component and after performing the inverse dynamic range adjustment process on the chroma component.
27. An apparatus configured to process video data, the apparatus comprising:
means for receiving video data;
means for determining a luma scale parameter for each of a plurality of ranges of codeword values for a luma component of the video data;
means for performing a dynamic range adjustment process on the luma component using the luma scale parameter;
means for determining a chroma scale parameter for a chroma component of the video data using a function of the luma scale parameter; and
means for performing a dynamic range adjustment process on the chroma component of the video data using the chroma scale parameters.
28. The apparatus of claim 27, wherein the luma scale parameter for each of the plurality of ranges of codeword values for the luma component is represented by a discontinuous function, the apparatus further comprising:
means for applying a linearization process to the discontinuous function to produce a linearized lightness scale parameter; and
means for determining the chroma scale parameter for the chroma component of the video data using a function of the linearized luma scale parameter.
29. A non-transitory computer-readable storage medium storing instructions that, when executed, cause one or more processors of a device configured to process video data to:
receiving the video data;
determining a luma scale parameter for each of a plurality of ranges of codeword values for a luma component of the video data;
performing a dynamic range adjustment process on the luma component using the luma scale parameter;
determining a chroma scale parameter for a chroma component of the video data using a function of the luma scale parameter; and
performing a dynamic range adjustment process on the chroma component of the video data using the chroma scale parameter.
30. The non-transitory computer-readable storage medium of claim 29, wherein the luma scale parameter for each of the plurality of ranges of codeword values for the luma component is represented by a discontinuous function, and wherein the instructions further cause the one or more processors to:
applying a linearization process to the discontinuous function to produce a linearized lightness scale parameter; and
determining the chroma scale parameter for the chroma component of the video data using a function of the linearized luma scale parameter.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US62/548,236 | 2017-08-21 | ||
| US15/999,393 | 2018-08-20 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| HK40017308A true HK40017308A (en) | 2020-09-18 |