
WO2020228764A1 - Scaling methods in video coding - Google Patents


Info

Publication number
WO2020228764A1
WO2020228764A1 (PCT/CN2020/090196)
Authority
WO
WIPO (PCT)
Prior art keywords
chroma
lmcs
luma
scaling
idx
Prior art date
Application number
PCT/CN2020/090196
Other languages
English (en)
Inventor
Kai Zhang
Li Zhang
Hongbin Liu
Yue Wang
Original Assignee
Beijing Bytedance Network Technology Co., Ltd.
Bytedance Inc.
Priority date
Filing date
Publication date
Application filed by Beijing Bytedance Network Technology Co., Ltd. and Bytedance Inc.
Priority to CN202080034851.1A (CN113812161B)
Publication of WO2020228764A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/182Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/105Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • This patent document relates to video coding techniques, devices and systems.
  • Devices, systems and methods related to digital video coding, and specifically, to scaling and division operations in video coding may be applied to both the existing video coding standards (e.g., High Efficiency Video Coding (HEVC) ) and future video coding standards or video codecs.
  • the disclosed technology may be used to provide a method for video processing, comprising: performing a conversion between a current video block of a video and a coded representation of the video, the current video block comprising a luma component and at least one chroma component, wherein the luma component is converted from an original domain to a reshaped domain with a luma mapping with chroma scaling (LMCS) scheme, and chroma samples of the at least one chroma component are predicted based on reconstructed luma samples of the luma component in the original domain.
  • the above-described method is embodied in the form of processor-executable code and stored in a computer-readable program medium.
  • a device that is configured or operable to perform the above-described method.
  • the device may include a processor that is programmed to implement this method.
  • a video decoder apparatus may implement a method as described herein.
  • FIG. 1 shows a flowchart of an example of a decoding flow with reshaping.
  • FIG. 2 shows an example of sample locations used for the derivation of parameters in a cross-component linear model (CCLM) prediction mode.
  • FIG. 3 shows an example of neighboring samples used for deriving illumination compensation (IC) parameters.
  • FIG. 4 shows a flowchart of an example method for video processing.
  • FIG. 5 is a block diagram of an example of a hardware platform for implementing a visual media decoding or a visual media encoding technique described in the present document.
  • Embodiments of the disclosed technology may be applied to existing video coding standards (e.g., HEVC, H.265) and future standards to improve compression performance. Section headings are used in the present document to improve readability of the description and do not in any way limit the discussion or the embodiments (and/or implementations) to the respective sections only.
  • Video codecs typically include an electronic circuit or software that compresses or decompresses digital video, and are continually being improved to provide higher coding efficiency.
  • a video codec converts uncompressed video to a compressed format or vice versa.
  • the compressed format usually conforms to a standard video compression specification, e.g., the High Efficiency Video Coding (HEVC) standard (also known as H.265 or MPEG-H Part 2), the Versatile Video Coding standard to be finalized, or other current and/or future video coding standards.
  • Video coding standards have evolved primarily through the development of the well-known ITU-T and ISO/IEC standards.
  • the ITU-T produced H.261 and H.263, ISO/IEC produced MPEG-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video, H.264/MPEG-4 Advanced Video Coding (AVC), and H.265/HEVC standards.
  • the video coding standards are based on the hybrid video coding structure wherein temporal prediction plus transform coding are utilized.
  • The Joint Video Exploration Team (JVET) was founded by VCEG and MPEG jointly in 2015.
  • JVET developed the Joint Exploration Model (JEM) reference software.
  • The idea of in-loop reshaping is to convert the original signal (prediction/reconstruction signal) in the first domain to a second domain (the reshaped domain).
  • the in-loop luma reshaper is implemented as a pair of look-up tables (LUTs), but only one of the two LUTs needs to be signaled, as the other one can be computed from the signaled LUT.
  • Each LUT is a one-dimensional, 10-bit, 1024-entry mapping table (1D-LUT) .
  • the other LUT is an inverse LUT, InvLUT, that maps altered code values Yr back to reconstruction values of Yi.
  • ILR is also known as Luma Mapping with Chroma Scaling (LMCS) in VVC.
  • In the PWL (piece-wise linear) model, m is a scalar, c is an offset, and FP_PREC is a constant value that specifies the precision.
  • the PWL model is used to precompute the 1024-entry FwdLUT and InvLUT mapping tables; but the PWL model also allows implementations to calculate identical mapping values on-the-fly without pre-computing the LUTs.
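The on-the-fly PWL evaluation mentioned above can be sketched in a few lines. The pivots and codeword counts below are illustrative (a uniform, identity-like 10-bit model), not parameters from any real bitstream:

```python
FP_PREC = 11                 # fixed-point precision, as in the text
ORG_CW = 1024 // 16          # 10-bit range split into 16 uniform pieces

# Hypothetical per-piece codeword counts lmcsCW[i]; equal counts give an
# identity mapping, which makes the sketch easy to check.
lmcs_cw = [ORG_CW] * 16
in_pivots = [i * ORG_CW for i in range(17)]
out_pivots = [0]
for cw in lmcs_cw:
    out_pivots.append(out_pivots[-1] + cw)
# Slope of each piece in FP_PREC fixed point: m = cw / OrgCW.
scale = [(cw * (1 << FP_PREC)) // ORG_CW for cw in lmcs_cw]

def fwd_map(x):
    """On-the-fly PWL lookup, equivalent to reading a precomputed FwdLUT."""
    i = min(x // ORG_CW, 15)
    return out_pivots[i] + ((scale[i] * (x - in_pivots[i])) >> FP_PREC)
```

With the identity model above, `fwd_map` reproduces its input over the full 10-bit range, which is a convenient sanity check for an implementation.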
  • Test 2 of the in-loop luma reshaping (i.e., CE12-2 in the proposal) provides a lower complexity pipeline that also eliminates decoding latency for block-wise intra prediction in inter slice reconstruction. Intra prediction is performed in reshaped domain for both inter and intra slices.
  • Intra prediction is always performed in reshaped domain regardless of slice type. With such arrangement, intra prediction can start immediately after previous TU reconstruction is done. Such arrangement can also provide a unified process for intra mode instead of being slice dependent.
  • FIG. 14 shows the block diagram of the CE12-2 decoding process based on mode.
  • CE12-2 also tests 16-piece piece-wise linear (PWL) models for luma and chroma residue scaling instead of the 32-piece PWL models of CE12-1.
  • Luma-dependent chroma residue scaling is a multiplicative process implemented with fixed-point integer operation. Chroma residue scaling compensates for luma signal interaction with the chroma signal. Chroma residue scaling is applied at the TU level. More specifically, the average value of the corresponding luma prediction block is utilized.
  • the average is used to identify an index in a PWL model.
  • the index identifies a scaling factor cScaleInv.
  • the chroma residual is multiplied by that number.
  • chroma scaling factor is calculated from forward-mapped predicted luma values rather than reconstructed luma values.
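The TU-level steps above (average the luma prediction block, identify a PWL index, fetch cScaleInv, multiply the residual) can be sketched as follows. The 16-entry inverse-scale table here is a stand-in filled with identity values, not the real chromaResidualScaleLut:

```python
CSCALE_FP_PREC = 11
c_scale_inv = [1 << CSCALE_FP_PREC] * 16   # identity scaling, for illustration

def scale_chroma_residual(resi, avg_luma_pred, bit_depth=10):
    """Scale one chroma residual by the factor selected from the average of
    the corresponding luma prediction block (fixed-point multiply + round)."""
    piece = (1 << bit_depth) // 16
    idx = min(avg_luma_pred // piece, 15)      # PWL piece of the average luma
    s = c_scale_inv[idx]
    sign = -1 if resi < 0 else 1
    return sign * ((abs(resi) * s + (1 << (CSCALE_FP_PREC - 1))) >> CSCALE_FP_PREC)
```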
  • each picture (or tile group) is first converted to the reshaped domain, and all the coding processes are performed in the reshaped domain.
  • the neighboring block is in the reshaped domain;
  • the reference blocks (generated in the original domain from the decoded picture buffer) are first converted to the reshaped domain. Then the residuals are generated and coded into the bitstream.
  • samples in the reshaped domain are converted to the original domain, then deblocking filter and other filters are applied.
  • CPR current picture referencing, aka intra block copy, IBC
  • The current block is coded as combined inter-intra mode (CIIP) and the forward reshaping is disabled for the intra prediction block.
  • LMCS APS: An APS that has aps_params_type equal to LMCS_APS.
  • sps_lmcs_enabled_flag equal to 1 specifies that luma mapping with chroma scaling is used in the CVS.
  • sps_lmcs_enabled_flag equal to 0 specifies that luma mapping with chroma scaling is not used in the CVS.
  • adaptation_parameter_set_id provides an identifier for the APS for reference by other syntax elements.
  • NOTE – APSs can be shared across pictures and can be different in different slices within a picture.
  • aps_params_type specifies the type of APS parameters carried in the APS as specified in Table 7-2.
  • slice_lmcs_enabled_flag equal to 1 specifies that luma mapping with chroma scaling is enabled for the current slice.
  • slice_lmcs_enabled_flag equal to 0 specifies that luma mapping with chroma scaling is not enabled for the current slice.
  • When slice_lmcs_enabled_flag is not present, it is inferred to be equal to 0.
  • slice_lmcs_aps_id specifies the adaptation_parameter_set_id of the LMCS APS that the slice refers to.
  • the TemporalId of the LMCS APS NAL unit having adaptation_parameter_set_id equal to slice_lmcs_aps_id shall be less than or equal to the TemporalId of the coded slice NAL unit.
  • All LMCS APSs with the same value of adaptation_parameter_set_id shall have the same content.
  • lmcs_min_bin_idx specifies the minimum bin index used in the luma mapping with chroma scaling construction process.
  • the value of lmcs_min_bin_idx shall be in the range of 0 to 15, inclusive.
  • lmcs_delta_max_bin_idx specifies the delta value between 15 and the maximum bin index LmcsMaxBinIdx used in the luma mapping with chroma scaling construction process.
  • the value of lmcs_delta_max_bin_idx shall be in the range of 0 to 15, inclusive.
  • the value of LmcsMaxBinIdx is set equal to 15 -lmcs_delta_max_bin_idx.
  • the value of LmcsMaxBinIdx shall be larger than or equal to lmcs_min_bin_idx.
  • lmcs_delta_cw_prec_minus1 plus 1 specifies the number of bits used for the representation of the syntax lmcs_delta_abs_cw [i] .
  • the value of lmcs_delta_cw_prec_minus1 shall be in the range of 0 to BitDepthY -2, inclusive.
  • lmcs_delta_abs_cw [i] specifies the absolute delta codeword value for the ith bin.
  • lmcs_delta_sign_cw_flag [i] specifies the sign of the variable lmcsDeltaCW [i] as follows:
  • variable OrgCW is derived as follows:
  • OrgCW = (1 << BitDepthY) / 16 (7-77)
  • lmcsDeltaCW[i] = (1 - 2 * lmcs_delta_sign_cw_flag[i]) * lmcs_delta_abs_cw[i] (7-78)
  • the variable lmcsCW[i] is derived as follows:
  • lmcsCW[i] shall be in the range of (OrgCW >> 3) to ((OrgCW << 3) - 1), inclusive.
  • chromaResidualScaleLut[] = { 16384, 16384, 16384, 16384, 16384, 16384, 16384, 8192, 8192, 8192, 8192, 8192, 5461, 5461, 5461, 5461, 4096, 4096, 4096, 4096, 4096, 4096, 3277, 3277, 3277, 3277, 2731, 2731, 2731, 2731, 2341, 2341, 2341, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 1820, 1820, 1820, 1638, 1638, 1638, 1638, 1638, 1638, 1489, 1489, 1489, 1489, 1489, 1365, 1365, 1365, 1260, 1260, 1260, 1260, 1170, 1170, 1170, 1092, 1092, 1092, 1024, 1024, 1024 }
  • lmcs_delta_sign_cw_flag [i] specifies the sign of the variable lmcsDeltaCW [i] as follows:
  • variable OrgCW is derived as follows:
  • OrgCW = (1 << BitDepthY) / 16 (7-70)
  • lmcsDeltaCW[i] = (1 - 2 * lmcs_delta_sign_cw_flag[i]) * lmcs_delta_abs_cw[i] (7-71)
  • the variable lmcsCW[i] is derived as follows:
  • lmcsCW[i] shall be in the range of (OrgCW >> 3) to ((OrgCW << 3) - 1), inclusive.
  • LmcsPivot[i + 1] = LmcsPivot[i] + lmcsCW[i]
  • ScaleCoeff[i] = (lmcsCW[i] * (1 << 11) + (1 << (Log2(OrgCW) - 1))) >> Log2(OrgCW)
  • ChromaScaleCoeff[i] = (1 << 11)
  • ChromaScaleCoeff[i] = InvScaleCoeff[i]
  • idxY = predSamples[i][j] >> Log2(OrgCW)
  • PredMapSamples[i][j] = LmcsPivot[idxY] + (ScaleCoeff[idxY] * (predSamples[i][j] - InputPivot[idxY]) + (1 << 10)) >> 11 (8-1058)
  • variable invSample is derived as follows:
  • variable idxYInv is derived by invoking the identification of piece-wise function index as specified in clause 8.7.5.3.2 with invAvgLuma as the input and idxYInv as the output.
  • variable varScale is derived as follows:
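The index identification invoked above (clause 8.7.5.3.2) amounts to locating the PWL piece that contains the sample. A linear-search sketch, with an illustrative uniform pivot array for a 10-bit signal:

```python
def identify_pwl_index(sample, lmcs_pivot, min_bin_idx, max_bin_idx):
    """Return idxYInv with lmcs_pivot[idx] <= sample < lmcs_pivot[idx + 1];
    a linear-search stand-in for the clause 8.7.5.3.2 procedure."""
    for idx in range(min_bin_idx, max_bin_idx + 1):
        if sample < lmcs_pivot[idx + 1]:
            return idx
    return max_bin_idx

# Illustrative uniform pivots: 0, 64, 128, ..., 1024.
pivots = [i * 64 for i in range(17)]
```

A real decoder would typically start the search from a cached index or use a binary search, but the returned piece index is the same.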
  • An Adaptation Parameter Set (APS) is adopted in VVC to carry ALF parameters.
  • the tile group header contains an aps_id which is conditionally present when ALF is enabled.
  • the APS contains an aps_id and the ALF parameters.
  • a new NUT (NAL unit type, as in AVC and HEVC) value is assigned for APS (from JVET-M0132) .
  • e.g., using aps_id equal to 0 and sending the APS with each picture.
  • the range of APS ID values will be 0.. 31 and APSs can be shared across pictures (and can be different in different tile groups within a picture) .
  • the ID value should be fixed-length coded when present. ID values cannot be re-used with different content within the same picture.
  • slice_alf_enabled_flag equal to 1 specifies that the adaptive loop filter is enabled and may be applied to the Y, Cb, or Cr colour component in a slice.
  • slice_alf_enabled_flag equal to 0 specifies that the adaptive loop filter is disabled for all colour components in a slice.
  • num_alf_aps_ids_minus1 plus 1 specifies the number of ALF APSs that the slice refers to.
  • the value of num_alf_aps_ids_minus1 shall be in the range of 0 to 7, inclusive.
  • slice_alf_aps_id[i] specifies the adaptation_parameter_set_id of the i-th ALF APS that the slice refers to.
  • the TemporalId of the ALF APS NAL unit having adaptation_parameter_set_id equal to slice_alf_aps_id[i] shall be less than or equal to the TemporalId of the coded slice NAL unit.
  • All ALF APSs with the same value of adaptation_parameter_set_id shall have the same content.
  • predC (i, j) represents the predicted chroma samples in a CU and recL (i, j) represents the downsampled reconstructed luma samples of the same CU.
  • The linear model parameters α and β are derived from the relation between luma and chroma values of two samples: the luma sample with the minimum sample value and the luma sample with the maximum sample value inside the set of downsampled neighboring luma samples, together with their corresponding chroma samples.
  • The linear model parameters α and β are obtained according to the following equations.
  • FIG. 2 shows an example of the location of the left and above samples and the sample of the current block involved in the CCLM mode.
  • the division operation to calculate parameter α is implemented with a look-up table.
  • the diff value (difference between maximum and minimum values) and the parameter α are expressed by an exponential notation. For example, diff is approximated with a 4-bit significand and an exponent. Consequently, the table for 1/diff is reduced to 16 elements for the 16 values of the significand as follows:
  • Two additional LM modes, LM_A and LM_L, are also supported.
  • In LM_A mode, only the above template is used to calculate the linear model coefficients. To get more samples, the above template is extended to (W+H).
  • In LM_L mode, only the left template is used to calculate the linear model coefficients. To get more samples, the left template is extended to (H+W).
  • For a non-square block, the above template is extended to W+W,
  • and the left template is extended to H+H.
  • two types of downsampling filters are applied to luma samples to achieve a 2-to-1 downsampling ratio in both horizontal and vertical directions.
  • the selection of the downsampling filter is specified by an SPS-level flag.
  • the two downsampling filters are as follows, which correspond to "type-0" and "type-2" content, respectively.
  • This parameter computation is performed as part of the decoding process, and not just as an encoder search operation. As a result, no syntax is used to convey the α and β values to the decoder.
  • Chroma mode coding: For chroma intra mode coding, a total of 8 intra modes are allowed. Those modes include five traditional intra modes and three cross-component linear model modes (CCLM, LM_A, and LM_L). The chroma mode signaling and derivation process are shown in the table below. Chroma mode coding directly depends on the intra prediction mode of the corresponding luma block. Since a separate block partitioning structure for luma and chroma components is enabled in I slices, one chroma block may correspond to multiple luma blocks. Therefore, for chroma DM mode, the intra prediction mode of the corresponding luma block covering the center position of the current chroma block is directly inherited.
  • The decoding process specified in JVET-N1001-v2 is demonstrated below.
  • normDiff = ((diff << 4) >> x) & 15 (8-214)
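The normDiff decomposition above reduces 1/diff to a 16-entry table lookup on the 4-bit significand. A sketch of the whole replacement, with a table precision P = 12 chosen for illustration (the spec's exact table constants are not reproduced here):

```python
def floor_log2(v):
    return v.bit_length() - 1

# 16-entry reciprocal table: T[j] approximates 2^P / (16 + j), i.e. the
# reciprocal of a significand with its implicit leading 1 restored.
P = 12
T = [(1 << P) // (16 + j) for j in range(16)]

def approx_div(num, diff):
    """Approximate num / diff using the 4-bit-significand table."""
    x = floor_log2(diff)
    norm = ((diff << 4) >> x) & 15          # as in (8-214)
    # diff ~= (16 + norm) << (x - 4), so 1/diff ~= T[norm] >> (P + x - 4)
    return (num * T[norm]) >> (P + x - 4)
```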
  • MV scaling is applied for Temporal Motion Vector Prediction (TMVP) and AMVP.
  • mvLXCol is derived as a scaled version of the motion vector mvCol as follows:
  • mvLXCol = Clip3(-131072, 131071, Sign(distScaleFactor * mvCol) * ((Abs(distScaleFactor * mvCol) + 127) >> 8)) (8-423)
  • the division operator to derive tx may be implemented by a lookup table.
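Putting eq. (8-423) together with the tx/distScaleFactor derivation, the scaling can be sketched as below. The tx formula uses the 16384-based constants of HEVC/VVC-style MV scaling, and the division in it is the one the text says may be implemented by a lookup table (positive POC distances are assumed for the demo):

```python
def clip3(lo, hi, x):
    return max(lo, min(hi, x))

def sign(x):
    return (x > 0) - (x < 0)

def scale_mv(mv_col, tb, td):
    """Scale collocated MV mv_col by the POC-distance ratio tb/td."""
    tx = (16384 + (abs(td) >> 1)) // td            # table-replaceable division
    dsf = clip3(-4096, 4095, (tb * tx + 32) >> 6)  # distScaleFactor
    return clip3(-131072, 131071,
                 sign(dsf * mv_col) * ((abs(dsf * mv_col) + 127) >> 8))
```

With equal distances the MV is returned unchanged; halving tb roughly halves the MV, matching the intent of eq. (8-423).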
  • Local Illumination Compensation (LIC) is enabled or disabled adaptively for each inter-mode coded coding unit (CU).
  • a least square error method is employed to derive the parameters a and b by using the neighbouring samples of the current CU and their corresponding reference samples (also known as the reference neighbouring samples) . More specifically, as illustrated in FIG. 3, the subsampled (2: 1 subsampling) neighbouring samples of the CU and the corresponding samples (identified by motion information of the current CU or sub-CU) in the reference picture are used.
  • the IC parameters are derived and applied for each prediction direction separately.
  • the LIC flag is copied from neighbouring blocks, in a way similar to motion information copy in merge mode; otherwise, an LIC flag is signalled for the CU to indicate whether LIC applies or not.
  • When LIC is enabled for a picture, an additional CU-level RD check is needed to determine whether LIC is applied or not for a CU.
  • MR-SAD: mean-removed sum of absolute difference
  • MR-SATD: mean-removed sum of absolute Hadamard-transformed difference
  • LIC is disabled for the entire picture when there is no obvious illumination change between a current picture and its reference pictures. To identify this situation, histograms of a current picture and every reference picture of the current picture are calculated at the encoder. If the histogram difference between the current picture and every reference picture of the current picture is smaller than a given threshold, LIC is disabled for the current picture; otherwise, LIC is enabled for the current picture.
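The encoder-side decision above can be sketched with per-picture histograms compared against a threshold. The bin count and threshold value below are made-up parameters for illustration:

```python
def histogram(samples, bins=32, bit_depth=8):
    """Coarse luma histogram: 32 bins over the sample range."""
    h = [0] * bins
    shift = bit_depth - 5
    for s in samples:
        h[s >> shift] += 1
    return h

def lic_enabled(cur, refs, threshold=4):
    """Enable LIC only if some reference histogram differs noticeably
    (sum of absolute bin differences) from the current picture's."""
    hc = histogram(cur)
    for r in refs:
        diff = sum(abs(a - b) for a, b in zip(hc, histogram(r)))
        if diff >= threshold:
            return True        # obvious illumination change
    return False
```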
  • ILR, also known as LMCS
  • CCLM: cross-component linear model
  • the signaling method of the LMCS model may be inefficient.
  • Embodiments of the presently disclosed technology overcome the drawbacks of existing implementations, thereby providing video coding with higher coding efficiencies.
  • The methods for scaling and division operations in video coding, based on the disclosed technology, may enhance both existing and future video coding standards, and are elucidated in the following examples described for various implementations.
  • the examples of the disclosed technology provided below explain general concepts, and are not meant to be interpreted as limiting. In an example, unless explicitly indicated to the contrary, the various features described in these examples may be combined.
  • Shift(x, n) = (x + offset0) >> n.
  • In one example, offset0 and/or offset1 are set to (1 << n) >> 1 or (1 << (n-1)). In another example, offset0 and/or offset1 are set to 0.
  • Clip3(min, max, x) is defined as min if x < min, max if x > max, and x otherwise.
  • Floor (x) is defined as the largest integer less than or equal to x.
  • Ceil(x) is defined as the smallest integer greater than or equal to x.
  • Log2 (x) is defined as the base-2 logarithm of x.
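The operations defined above translate directly into integer helpers. A sketch (SatShift is given a common signed-rounding interpretation, which is an assumption, since the text only names it):

```python
import math

def Shift(x, n):
    """Shift(x, n) = (x + offset0) >> n with offset0 = (1 << n) >> 1."""
    return (x + ((1 << n) >> 1)) >> n

def SatShift(x, n):
    """Signed rounding shift (assumed interpretation: round the magnitude)."""
    off = (1 << n) >> 1
    return (x + off) >> n if x >= 0 else -((-x + off) >> n)

def Clip3(lo, hi, x):
    return lo if x < lo else hi if x > hi else x

def Floor(x): return math.floor(x)
def Ceil(x):  return math.ceil(x)
def Log2(x):  return math.log2(x)
```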
  • the operation or the procedure of multiple operations may comprise the operation of inquiring an entry of a table with an index.
  • In one example, it may comprise operations of inquiring multiple entries of one or multiple tables with an index.
  • the operation or the procedure of multiple operations may comprise an operation which is not the division operation.
  • i. In one example, it may comprise the operation of multiplication.
  • ii. In one example, it may comprise the operation of addition.
  • iii. In one example, it may comprise the operation of SatShift(x, n).
  • iv. In one example, it may comprise the operation of Shift(x, n).
  • v. In one example, it may comprise the operation of left shift.
  • vi. In one example, it may comprise the operation of Floor(x).
  • vii. In one example, it may comprise the operation of Log2(x).
  • viii. In one example, it may comprise the operation of "logical or" (| in C language).
  • ix. In one example, it may comprise the operation of "logical and" (& in C language).
  • T[idx] may be used to replace or approximate the division operation in bullet 1.
  • the table size may be equal to 2^M and idx may be in a range of [0, 2^M - 1], inclusive.
  • T[idx] = Rounding(2^P / (idx + offset0)) - offset1, where offset0 and offset1 are integers.
  • Rounding(x/y) = Floor((x + y/2) / y);
  • Rounding(x/y) is set equal to an integer Q chosen from a given set so that Q is closest to x/y;
  • the set may be {Floor(x/y) - 1, Floor(x/y), Floor(x/y) + 1}.
  • offset0 may be 0.
  • offset1 may be 0,
  • M defined in bullet 2.a may be equal to W defined in bullet 2.b.
  • Z may be equal to P - W - 1, where Z, P and W are defined in bullet 2.
  • T[idx] should be smaller than 2^Z.
  • If T[idx] is equal to 2^Z, it may be set equal to 0.
  • T[idx] = (Rounding(2^P / (idx + offset0)) - offset1) % offset1.
  • T ⁇ 0, 126, 124, 122, 120, 118, 117, 115, 113, 111, 109, 108, 106, 104, 103, 101, 100, 98, 96, 95, 93, 92, 90, 89, 88, 86, 85, 83, 82, 81, 79, 78, 77, 76, 74, 73, 72, 71, 69, 68, 67, 66, 65, 64, 63, 61, 60, 59, 58, 57, 56, 55, 54, 53, 52, 51, 50, 49, 48, 47, 46, 45, 44, 44, 43, 42, 41, 40, 39, 38, 37, 37, 36, 35, 34, 33, 33, 32, 31, 30, 30, 29, 28, 27, 27, 26, 25, 24, 24, 23, 22, 22, 21, 20, 20, 19, 18, 18, 17, 16, 16, 15, 14, 14, 13, 13, 12, 11, 11, 10, 10, 9, 9, 8, 7, 7, 6, 6, 5, 5, 4, 4, 3, 3, 2, 2, 1, 1 ⁇ .
  • T ⁇ 0, 63, 62, 61, 60, 59, 58, 57, 56, 56, 55, 54, 53, 52, 51, 51, 50, 49, 48, 47, 47, 46, 45, 45, 44, 43, 42, 42, 41, 40, 40, 39, 38, 38, 37, 37, 36, 35, 35, 34, 34, 33, 32, 32, 31, 31, 30, 30, 29, 29, 28, 28, 27, 27, 26, 26, 25, 25, 24, 24, 23, 23, 22, 22, 21, 21, 20, 20, 20, 19, 19, 18, 18, 18, 17, 17, 16, 16, 16, 15, 15, 14, 14, 14, 13, 13, 13, 12, 12, 12, 11, 11, 10, 10, 10, 9, 9, 9, 8, 8, 8, 8, 7, 7, 6, 6, 6, 5, 5, 5, 5, 4, 4, 4, 3, 3, 3, 3, 2, 2, 2, 2, 1, 1, 1, 1, 0 ⁇ .
  • T ⁇ 0, 0, 31, 31, 30, 30, 29, 29, 28, 28, 27, 27, 27, 26, 26, 25, 25, 24, 24, 24, 23, 23, 23, 22, 22, 22, 21, 21, 21, 20, 20, 20, 19, 19, 18, 18, 17, 17, 17, 16, 16, 16, 16, 15, 15, 15, 15, 14, 14, 14, 14, 13, 13, 13, 12, 12, 12, 12, 11, 11, 11, 10, 10, 10, 10, 10, 9, 9, 9, 9, 9, 8, 8, 8, 8, 8, 7, 7, 7, 7, 6, 6, 6, 6, 6, 5, 5, 5, 5, 5, 4, 4, 4, 4, 4, 4, 4, 4, 3, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0 ⁇ .
  • T ⁇ 0, 124, 120, 117, 113, 109, 106, 103, 100, 96, 93, 90, 88, 85, 82, 79, 77, 74, 72, 69, 67, 65, 63, 60, 58, 56, 54, 52, 50, 48, 46, 44, 43, 41, 39, 37, 36, 34, 33, 31, 30, 28, 27, 25, 24, 22, 21, 20, 18, 17, 16, 14, 13, 12, 11, 10, 9, 7, 6, 5, 4, 3, 2, 1 ⁇ .
  • T ⁇ 0, 62, 60, 58, 56, 55, 53, 51, 50, 48, 47, 45, 44, 42, 41, 40, 38, 37, 36, 35, 34, 32, 31, 30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 20, 19, 18, 17, 16, 16, 15, 14, 13, 13, 12, 11, 10, 10, 9, 8, 8, 7, 7, 6, 5, 5, 4, 4, 3, 3, 2, 2, 1, 1 ⁇ .
  • T ⁇ 0, 31, 30, 29, 28, 27, 27, 26, 25, 24, 23, 23, 22, 21, 21, 20, 19, 19, 18, 17, 17, 16, 16, 15, 15, 14, 14, 13, 13, 12, 12, 11, 11, 10, 10, 9, 9, 9, 8, 8, 7, 7, 6, 6, 6, 5, 5, 4, 4, 3, 3, 2, 2, 2, 2, 1, 1, 1, 1, 0 ⁇ .
  • T ⁇ 0, 0, 15, 15, 14, 14, 13, 13, 12, 12, 12, 11, 11, 11, 10, 10, 10, 9, 9, 9, 8, 8, 8, 8, 7, 7, 7, 6, 6, 6, 6, 5, 5, 5, 5, 4, 4, 4, 4, 4, 4, 4, 3, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 0, 0 ⁇ .
  • T ⁇ 0, 0, 0, 7, 7, 7, 6, 6, 6, 6, 5, 5, 5, 5, 5, 5, 5, 5, 4, 4, 4, 4, 4, 4, 4, 4, 4, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0 ⁇ .
  • T ⁇ 0, 120, 113, 106, 100, 93, 88, 82, 77, 72, 67, 63, 58, 54, 50, 46, 43, 39, 36, 33, 30, 27, 24, 21, 18, 16, 13, 11, 9, 6, 4, 2 ⁇ .
  • T ⁇ 0, 60, 56, 53, 50, 47, 44, 41, 38, 36, 34, 31, 29, 27, 25, 23, 21, 20, 18, 16, 15, 13, 12, 10, 9, 8, 7, 5, 4, 3, 2, 1 ⁇ .
  • T ⁇ 0, 30, 28, 27, 25, 23, 22, 21, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 7, 6, 5, 5, 4, 3, 3, 2, 2, 1, 1 ⁇ .
  • T ⁇ 0, 15, 14, 13, 12, 12, 11, 10, 10, 9, 8, 8, 7, 7, 6, 6, 5, 5, 4, 4, 3, 3, 3, 2, 2, 2, 1, 1, 1, 1, 0 ⁇ .
  • T ⁇ 0, 0, 7, 7, 6, 6, 5, 5, 5, 4, 4, 4, 4, 3, 3, 3, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 0, 0, 0 ⁇ .
  • T ⁇ 0, 28, 25, 22, 19, 17, 15, 13, 11, 9, 7, 6, 5, 3, 2, 1 ⁇ .
  • T ⁇ 0, 14, 12, 11, 10, 8, 7, 6, 5, 4, 4, 3, 2, 2, 1, 1 ⁇ .
  • T[k] = (2^14 + k/2) / k, for k from 1 to 256;
  • T[k] = (2^11 + k/2) / k, for k from 1 to 256;
  • T[k] = (2^14 + k/2) / k, for k from 1 to 512;
  • T[k] = (2^11 + k/2) / k, for k from 1 to 512;
  • T[k] = (2^14 + k/2) / k, for k from 1 to 1024;
  • T[k] = (2^11 + k/2) / k, for k from 1 to 1024;
  • T = { 0, 248, 240, 233, 226, 219, 212, 206, 199, 193, 187, 181, 175, 170, 164, 159, 154, 149, 144, 139, 134, 130, 125, 121, 116, 112, 108, 104, 100, 96, 93, 89, 85, 82, 78, 75, 72, 68, 65, 62, 59, 56, 53, 50, 47, 45, 42, 39, 37, 34, 31, 29, 26, 24, 22, 19, 17, 15, 13, 10, 8, 6, 4, 2 }.
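All of the tables above follow the T[idx] = Rounding(2^P / (idx + offset0)) - offset1 recipe. The sketch below regenerates the 2^14 variant and a generic builder; the parameter choices mirror the bullets, but treat this as an illustration rather than normative table data:

```python
def rounding_div(x, y):
    """Rounding(x / y) = Floor((x + y/2) / y), as defined earlier."""
    return (x + y // 2) // y

def build_table(P, size, offset0=1, offset1=0):
    """T[idx] = Rounding(2^P / (idx + offset0)) - offset1."""
    return [rounding_div(1 << P, idx + offset0) - offset1 for idx in range(size)]

# T[k] = (2^14 + k/2) / k for k = 1..256, with a 0 placed at index 0:
T14 = [0] + [rounding_div(1 << 14, k) for k in range(1, 257)]
```

Note that the first entries 16384, 8192, 5461, 4096 reproduce the leading values seen in chromaResidualScaleLut, which is exactly the point of the recipe.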
  • idx may be derived with a procedure of multiple operations.
  • the procedure of multiple operations may comprise addition, Log2 (x) , Floor (x) , left shift, Shift (x, n) or SatShift (x, n) , “logical or” and “logical and” .
  • the procedure of multiple operations may depend on M or W defined in bullet 2.
  • idx may be set equal to Shift(D << M, Floor(Log2(D))) & (2^M - 1), where D is the denominator, such as lmcsCW[i] in the LMCS process.
  • the modified idx may be clipped to the range of [0, 2^M - 1], inclusive.
  • the variable M in bullet 4.c may be replaced by W defined in bullet 2.
  • clipping may be further applied to the resulting value.
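The idx derivation in the bullets above keeps the M bits just below the most significant bit of the denominator D, with rounding. A self-contained sketch:

```python
def floor_log2(v):
    return v.bit_length() - 1

def derive_idx(D, M):
    """idx = Shift(D << M, Floor(Log2(D))) & (2^M - 1)."""
    n = floor_log2(D)
    off = (1 << (n - 1)) if n > 0 else 0   # rounding offset of Shift()
    idx = ((D << M) + off) >> n
    return idx & ((1 << M) - 1)
```

For example, a power-of-two denominator yields idx = 0 (the significand fraction is zero), while D = 96 with M = 7 yields the half-way fraction 64, i.e. significand 1.5.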
  • T[idx] may be modified to an intermediate value V, which will be used in the procedure of multiple operations to replace or approximate the division operation in bullet 2.
  • V = T[idx].
  • V = T[idx] << m, where m is an integer.
  • V = T[idx] * m, where m is an integer.
  • V = T[idx] + m, where m is an integer.
  • n may depend on the value of D defined in bullet 4.
  • n = Ceil(Log2(D)) - M if D > 2^M.
  • n = Floor(Log2(D)) - M + 1 if D > 2^M.
  • the modification method may depend on the value of T[idx] and/or the value of idx.
  • the modification method may depend on whether the value of T[idx] is equal to a fixed value such as 0.
  • V = (T[idx] | 2^Z), where Z is defined in bullet 2.
  • V = T[idx] + 2^Z, where Z is defined in bullet 2.
  • A modified R, denoted as R', may be used as the replacement or approximation of the division result.
  • R' = R << m, where m is an integer such as 11.
  • m may depend on the value n defined in bullet 5.
  • the modification method may depend on P as defined in bullet 2 and/or W (or M) defined in bullet 2, and/or D as defined in bullet 4, and/or T and/or idx.
  • the modification method may depend on the relationship between a fixed number S and a function f of P, W (or M), T, idx and D.
  • f(P, W, D, T, idx) = P - W - 1 + Log2(D)
  • f(P, M, D, T, idx) = P - M - 1 + Log2(D).
  • f(P, W, D, T, idx) = P - W - 1 + Log2(D) + off
  • f(P, M, D, T, idx) = P - M - 1 + Log2(D) + off.
  • f(P, W, D, T, idx) = P - W + Log2(D)
  • f(P, M, D, T, idx) = P - M + Log2(D).
  • f(P, W, D, T, idx) = P - W + 1 + Log2(D)
  • f(P, M, D, T, idx) = P - M + 1 + Log2(D).
  • N' = N << m, where m is an integer such as 11.
  • m may depend on the value n defined in bullet 5.
  • the modification method may depend on P as defined in bullet 2 and/or W (or M) defined in bullet 2, and/or D as defined in bullet 4.
  • the modification method may depend on the relationship between a fixed number S and a function f of P, W (or M), T, idx and D.
  • Function f may be defined as in bullet 6.a.
  • R may be associated with a precision value, denoted as Q.
  • R may be InvScaleCoeff [idxYInv]
  • the inverted-back luma reconstructed sample may be derived as below:
  • R may be ChromaScaleCoeff [idxYInv]
  • the inverted-back chroma residue sample may be derived as below:
  • Q may be a fixed number such as 11 or 14.
  • Q may depend on P as defined in bullet 2 and/or W (or M) defined in bullet 2, and/or D as defined in bullet 4 and/or the table T, and/or idx.
  • Q P-W-1+Log2 (D)
  • Q P-M-1+Log2 (D)
  • Q P-W-1+Log2 (D) +off
  • Q P-M-1+Log2 (D) +off
  • Q P-W+Log2 (D)
  • Q P-M+Log2 (D)
  • Q P-W+1+Log2 (D)
  • Q P-M+1+Log2 (D)
  • Q may depend on n defined in bullet 5. e.
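The two "derived as below" bullets elide their formulas in this excerpt. The sketch below is one reading consistent with a VVC-style design at precision Q = 11; the pivot arrays, coefficient values and exact rounding are assumptions for illustration, not the normative formulas:

```python
# Hypothetical sketch of applying a precision-Q scaling coefficient R
# (R = InvScaleCoeff[idxYInv] for luma, ChromaScaleCoeff[idxYInv] for chroma).
Q = 11  # "Q may be a fixed number such as 11"

def inverse_map_luma(s, idx, input_pivot, lmcs_pivot, inv_scale_coeff):
    # invSample = InputPivot[idx] + ((InvScaleCoeff[idx] * (s - LmcsPivot[idx])
    #                                 + (1 << (Q - 1))) >> Q)   -- assumed form
    return input_pivot[idx] + (
        (inv_scale_coeff[idx] * (s - lmcs_pivot[idx]) + (1 << (Q - 1))) >> Q)

def inverse_scale_chroma_residual(res, idx, chroma_scale_coeff):
    # signed rounding, since a chroma residue sample may be negative
    prod = res * chroma_scale_coeff[idx]
    sign = 1 if prod >= 0 else -1
    return sign * ((abs(prod) + (1 << (Q - 1))) >> Q)

# With a unit coefficient (1 << Q) the luma inverse mapping is the identity.
print(inverse_map_luma(100, 0, [0], [0], [1 << Q]))
```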
  • If D or the absolute value of D is always larger than a value G, such as 8, then the table size may depend on G.
  • G may depend on the sample bit-depth.
  • the table size may depend on the sample bit-depth.
  • the division operation used in MV scaling may be used in the procedure of multiple operations to replace or approximate the division operation in bullet 1.
  • D may be converted into an intermediate variable D’ in the range [minD, maxD] (such as [-128, 127] ) inclusively, where D is the denominator (e.g., lmcsCW [i] in bullet 1) .
  • the conversion from D to D’ may be a linear or non-linear quantization process.
  • D’ = Clip3 (D, minD, maxD) ;
  • D’ = Clip3 (D, 0, maxD) ;
  • n may depend on the value of D.
  • n = Ceil (Log2 (D) ) - Log2 (maxD) if D > maxD.
  • n = Floor (Log2 (D) ) - (Log2 (maxD) - 1) if D > maxD.
  • the table MV_SCALE_T may be used to derive R.
  • R in bullet 8.b may be associated with a precision value, denoted as Q.
  • Q may depend on n in bullet 8.a.
  • Q may be equal to Offset - n, where Offset is an integer such as 14.
  • the disclosed MV scaling methods may be used in MV scaling for temporal motion vector prediction (TMVP) of merge inter-mode or Advanced Motion Vector Prediction (AMVP) mode.
  • the disclosed MV scaling methods may be used in MV scaling for the affine prediction mode.
  • the disclosed MV scaling methods may be used in MV scaling for the sub-block based temporal merge mode.
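The normalize-then-look-up procedure in the bullets above (shift the denominator D into a limited range, derive n from D, look up a reciprocal R in a table such as MV_SCALE_T, then fold n back into the final shift) can be illustrated as follows. The table contents are rebuilt from scratch; maxD = 127 and Offset = 14 are the "such as" values from the bullets, and everything else is an illustrative assumption:

```python
# Hypothetical MV-scaling-style division: normalize D with an n-bit shift so
# that (D >> n) fits the table range [1, maxD], look up a reciprocal, and
# compensate the shift in the final precision. Relative to 1/D, the looked-up
# value R carries Offset + n fractional bits in this sketch.
maxD = 127
Offset = 14
MV_SCALE_T = [0] + [round((1 << Offset) / d) for d in range(1, maxD + 1)]

def approx_div_norm(N, D):
    """Approximate round(N / D) for D >= 1 via shift-normalize + table lookup."""
    n = max(0, D.bit_length() - 7)   # smallest n with (D >> n) <= 127, since 2^7 = 128
    Dn = D >> n                      # normalized denominator D', in [1, maxD]
    R = MV_SCALE_T[Dn]               # ~ 2^Offset / D', so R ~ 2^(Offset + n) / D
    return (N * R + (1 << (Offset + n - 1))) >> (Offset + n)  # rounded result

print(approx_div_norm(100000, 400), round(100000 / 400))
```

The same normalize/lookup/compensate pattern applies whether the division being replaced comes from LMCS coefficient derivation or from TMVP/AMVP MV scaling.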
  • the same table may be used in the procedure of multiple operations to replace or approximate the division operation in CCLM and LMCS.
  • the table may be any one demonstrated in bullet 3, such as bullet 3. e.
  • the same table may be used in the procedure to replace or approximate the division operation in MV scaling and LMCS.
  • the table may be any one demonstrated in bullet 3, such as bullet 3. e.
  • the method to replace or approximate the division operation in CCLM may be used in the procedure to replace or approximate the division operation in Localized Illumination Compensation (LIC) .
  • the method to replace or approximate the division operation in LMCS may be used in the procedure to replace or approximate the division operation in LIC.
  • the method to replace or approximate the division operation in MV scaling may be used in the procedure to replace or approximate the division operation in LIC.
  • Chroma samples are predicted from reconstructed luma samples in the original domain even if the luma block is coded with LMCS.
  • neighbouring (adjacent or non-adjacent) reconstructed luma samples which are in the reshaping domain may be converted to the original domain first, then the converted luma samples are used to derive the linear model parameters, such as a and b.
  • the collocated luma samples which are in the reshaping domain may be converted to the original domain first, then the converted luma samples are used with the linear model to generate the prediction chroma samples.
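A minimal sketch of this order of operations — inverse-map the reshaped luma first, then derive and apply the linear model. The inverse LUT and the simple two-point derivation of a and b below are illustrative placeholders, not the normative CCLM parameter derivation:

```python
# Hypothetical sketch: luma samples stored in the reshaping domain are mapped
# back to the original domain with an inverse LUT before any CCLM use.
inv_lut = [min(1023, v * 2) for v in range(1024)]   # placeholder inverse mapping

def to_original(reshaped_samples):
    return [inv_lut[v] for v in reshaped_samples]

def derive_cclm_params(neigh_luma_reshaped, neigh_chroma):
    luma = to_original(neigh_luma_reshaped)          # convert first (key step)
    # simple two-point (min/max) linear model: chroma ~ a * luma + b
    lo, hi = luma.index(min(luma)), luma.index(max(luma))
    a = (neigh_chroma[hi] - neigh_chroma[lo]) / (luma[hi] - luma[lo])
    b = neigh_chroma[lo] - a * luma[lo]
    return a, b

def predict_chroma(collocated_luma_reshaped, a, b):
    # collocated luma is likewise converted before applying the model
    return [round(a * v + b) for v in to_original(collocated_luma_reshaped)]

a, b = derive_cclm_params([10, 50, 90], [30, 70, 110])
print(predict_chroma([50], a, b))
```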
  • scaling coefficient (e.g., ScaleCoeff [i] in VVC)
  • inverse scaling coefficient (e.g., InvScaleCoeff [i] in VVC)
  • the scaling coefficient may be directly coded.
  • the quantized value of the scaling coefficient may be signaled.
  • It may be signaled with fixed-length coding, or unary coding, or truncated unary coding, or exponential Golomb code.
  • the chroma scaling factor may be directly coded.
  • the quantized value of the scaling factor may be signaled.
  • It may be signaled with fixed-length coding, or unary coding, or truncated unary coding, or exponential Golomb code.
  • all video blocks in the video unit may use the same signaled scaling factor, if chroma scaling is applied to the video block.
  • It is proposed that information may be signaled to indicate whether chroma scaling in LMCS is applied or not.
  • It may be signaled in a video unit such as VPS/SPS/PPS/APS/picture header/slice header/tile group header, etc.
  • the current slice or tile group is an intra-coded slice or tile group.
  • method 400 may be implemented at a video decoder or a video encoder.
  • FIG. 4 shows a flowchart of an exemplary method for video processing.
  • the method 400 includes, at step 402, performing a conversion between a current video block of a video and a coded representation of the video, the current video block comprising a luma component and at least one chroma component, wherein the luma component is converted from an original domain to a reshaped domain with a luma mapping with chroma scaling (LMCS) scheme, and chroma samples of the at least one chroma component are predicted based on reconstructed luma samples of the luma component in the original domain.
  • a method for video processing includes, performing a conversion between a current video block of a video and a coded representation of the video, the current video block comprising a luma component and at least one chroma component, wherein the luma component is converted from an original domain to a reshaped domain with a luma mapping with chroma scaling (LMCS) scheme, and chroma samples of the at least one chroma component are predicted based on reconstructed luma samples of the luma component in the original domain.
  • the chroma samples of the at least one chroma component are predicted in a cross-component linear model (CCLM) prediction mode.
  • neighboring reconstructed luma samples are converted from the reshaped domain to the original domain, and then the converted neighboring luma samples are used to derive linear model parameters in a linear model to be used in the CCLM prediction mode.
  • the neighboring reconstructed luma samples comprise adjacent or non-adjacent reconstructed luma samples in the reshaped domain.
  • collocated reconstructed luma samples are converted from the reshaped domain to the original domain, and then the converted collocated luma samples are used in the linear model to predict the chroma samples.
  • a first indication is signaled for indicating at least one of a scaling coefficient and an inverse scaling coefficient which is to be used in the LMCS scheme.
  • At least one of the scaling coefficient and the inverse scaling coefficient is coded.
  • a value of the at least one of the scaling coefficient and the inverse scaling coefficient is quantized, and the quantized value of at least one of the scaling coefficient and the inverse scaling coefficient is signaled.
  • the first indication is signaled in a predictive way.
  • the first indication is coded using a fixed-length code, a unary code, a truncated unary code or an exponential-Golomb code.
  • the first indication is signaled in a video parameter set (VPS) , a sequence parameter set (SPS) , a picture parameter set (PPS) , an adaptation parameter set (APS) , a picture header, a slice header or a tile group header.
  • the scaling coefficient comprises ScaleCoeff [i] to be used in the LMCS scheme.
  • the inverse scaling coefficient comprises InvScaleCoeff [i] to be used in the LMCS scheme.
  • the scaling coefficient comprises a chroma scaling factor to be used in the LMCS scheme.
  • the chroma scaling factor comprises ChromaScaleCoeff [i] , and wherein i is an integer.
  • a second indication is signaled for indicating whether a chroma scaling is applied to the current video block in the LMCS scheme.
  • all the video blocks in a video unit which covers the current video block use the same chroma scaling factor signaled for the current video block.
  • the second indication is signaled in one of a video parameter set (VPS) , a sequence parameter set (SPS) , a picture parameter set (PPS) , an adaptation parameter set (APS) , a picture header, a slice header or a tile group header.
  • the second indication is signaled in the slice header or the tile group header, if the luma component and the at least one chroma component of the current video block are coded with a dual-tree coding structure and a current slice or tile group covering the current video block is intra-coded.
  • the performing the conversion includes generating the coded representation from the current video block.
  • the performing the conversion includes generating the current video block from the coded representation.
  • an apparatus in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to implement the method as described above.
  • a computer program product stored on a non-transitory computer readable media, the computer program product including program code for carrying out the method as described above.
  • ChromaScaleCoeff [i] = (1 << 11)
  • ChromaScaleCoeff [i] = InvScaleCoeff [i]
  • divTable [] is specified as follows:
  • divTable [] ⁇ 0, 62, 60, 58, 56, 55, 53, 51, 50, 48, 47, 45, 44, 42, 41, 40, 38, 37, 36, 35, 34, 32, 31, 30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 20, 19, 18, 17, 16, 16, 15, 14, 13, 13, 12, 11, 10, 10, 9, 8, 8, 7, 7, 6, 5, 5, 4, 4, 3, 3, 2, 2, 1, 1 ⁇
  • ChromaScaleCoeff [i] = (1 << 11)
  • ChromaScaleCoeff [i] = InvScaleCoeff [i]
  • divTable [] is specified as follows:
  • divTable [] ⁇ 0, 124, 120, 117, 113, 109, 106, 103, 100, 96, 93, 90, 88, 85, 82, 79, 77, 74, 72, 69, 67, 65, 63, 60, 58, 56, 54, 52, 50, 48, 46, 44, 43, 41, 39, 37, 36, 34, 33, 31, 30, 28, 27, 25, 24, 22, 21, 20, 18, 17, 16, 14, 13, 12, 11, 10, 9, 7, 6, 5, 4, 3, 2, 1 ⁇
  • divTable [] is specified as follows:
  • divTable [] ⁇ 0, 248, 240, 233, 226, 219, 212, 206, 199, 193, 187, 181, 175, 170, 164, 159, 154, 149, 144, 139, 134, 130, 125, 121, 116, 112, 108, 104, 100, 96, 93, 89, 85, 82, 78, 75, 72, 68, 65, 62, 59, 56, 53, 50, 47, 45, 42, 39, 37, 34, 31, 29, 26, 24, 22, 19, 17, 15, 13, 10, 8, 6, 4, 2 ⁇ .
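A consistency check on the tables above: the last 64-entry divTable is reproduced exactly by round(2^15 / (64 + f)) - 256 for f >= 1 (entry 0 is special-cased), which suggests it stores the fractional bits of a reciprocal of a 6-bit-normalized denominator, with the earlier tables being the same values at lower precision. The indexing and shift bookkeeping in the sketch below are assumptions about how such a table could replace the division in the chroma scaling derivation:

```python
# Rebuild the last listed divTable from a closed form, then use it to
# approximate ChromaScaleCoeff = (orgCW << 11) / lmcsCW without a division.
# The normalization/indexing scheme here is an assumption for illustration.
divTable = [0] + [round((1 << 15) / (64 + f)) - 256 for f in range(1, 64)]

def chroma_scale(org_cw, lmcs_cw):
    if lmcs_cw == 0:
        return 1 << 11                      # special case: ChromaScaleCoeff = (1 << 11)
    k = lmcs_cw.bit_length() - 7            # write lmcs_cw ~ (64 + f) << k
    f = (lmcs_cw >> k if k >= 0 else lmcs_cw << -k) - 64
    recip = 512 if f == 0 else 256 + divTable[f]   # ~ 2^15 / (64 + f)
    shift = 4 + k                           # 15 (table) - 11 (target) + k
    return (org_cw * recip) >> shift if shift >= 0 else (org_cw * recip) << -shift

print(divTable[:8])   # matches the listed table: [0, 248, 240, 233, 226, 219, 212, 206]
print(chroma_scale(64, 80))
```

When lmcsCW is a power of two times 64 the result is exact; otherwise it is accurate to the table's 8 stored fractional bits.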
  • FIG. 5 is a block diagram of a video processing apparatus 500.
  • the apparatus 500 may be used to implement one or more of the methods described herein.
  • the apparatus 500 may be embodied in a smartphone, tablet, computer, Internet of Things (IoT) receiver, and so on.
  • the apparatus 500 may include one or more processors 502, one or more memories 504 and video processing hardware 506.
  • the processor (s) 502 may be configured to implement one or more methods (including, but not limited to, method 400) described in the present document.
  • the memory (memories) 504 may be used for storing data and code used for implementing the methods and techniques described herein.
  • the video processing hardware 506 may be used to implement, in hardware circuitry, some techniques described in the present document.
  • the video coding methods may be implemented using an apparatus that is implemented on a hardware platform as described with respect to FIG. 5.
  • Implementations of the subject matter and the functional operations described in this patent document can be implemented in various systems, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus.
  • the computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them.
  • The term "data processing unit" or "data processing apparatus" encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers.
  • the apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
  • a computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • a computer program does not necessarily correspond to a file in a file system.
  • a program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document) , in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code) .
  • a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • the processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output.
  • the processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit) .
  • processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • a processor will receive instructions and data from a read only memory or a random access memory or both.
  • the essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks.
  • a computer need not have such devices.
  • Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices.
  • the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Disclosed are devices, systems and methods for digital video coding, the method comprising performing a conversion between a current video block of a video and a coded representation of the video, the current video block comprising a luma component and at least one chroma component; the luma component is converted from an original domain to a reshaped domain with a luma mapping with chroma scaling (LMCS) scheme, and chroma samples of the at least one chroma component are predicted based on reconstructed luma samples of the luma component in the original domain.
PCT/CN2020/090196 2019-05-14 2020-05-14 Scaling methods in video coding WO2020228764A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202080034851.1A 2019-05-14 2020-05-14 Scaling methods in video coding and decoding

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CNPCT/CN2019/086789 2019-05-14
CN2019086789 2019-05-14

Publications (1)

Publication Number Publication Date
WO2020228764A1 true WO2020228764A1 (fr) 2020-11-19

Family

ID=73289779

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/090196 2019-05-14 2020-05-14 Scaling methods in video coding

Country Status (2)

Country Link
CN (1) CN113812161B (fr)
WO (1) WO2020228764A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022217474A1 (fr) * 2021-04-13 2022-10-20 深圳市大疆创新科技有限公司 Video encoding and decoding methods, apparatus, system, and storage medium
WO2024034849A1 (fr) * 2022-08-09 2024-02-15 현대자동차주식회사 Video coding method and device using chroma component prediction based on a luma component

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
WO2024213017A1 (fr) * 2023-04-10 2024-10-17 Douyin Vision Co., Ltd. Method, apparatus and medium for video processing

Citations (3)

Publication number Priority date Publication date Assignee Title
WO2010086393A1 (fr) * 2009-02-02 2010-08-05 Thomson Licensing Method for decoding a stream representative of a sequence of images, method for coding a sequence of images, and coded data structure
WO2016066028A1 (fr) * 2014-10-28 2016-05-06 Mediatek Singapore Pte. Ltd. Guided cross-component prediction method for video coding
WO2017035833A1 (fr) * 2015-09-06 2017-03-09 Mediatek Inc. Neighboring-derived prediction offset (NPO)

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
AU2007231799B8 (en) * 2007-10-31 2011-04-21 Canon Kabushiki Kaisha High-performance video transcoding method
CN115052157B (zh) * 2012-07-02 2025-07-01 韩国电子通信研究院 Image encoding/decoding method and non-transitory computer-readable recording medium
EP4366308A3 (fr) * 2013-06-28 2024-07-10 Velos Media International Limited Methods and devices for emulating low-fidelity coding in a high-fidelity coder
US10397607B2 (en) * 2013-11-01 2019-08-27 Qualcomm Incorporated Color residual prediction for video coding
US20150264348A1 (en) * 2014-03-17 2015-09-17 Qualcomm Incorporated Dictionary coding of video content
MY186061A (en) * 2015-01-30 2021-06-17 Interdigital Vc Holdings Inc A method and apparatus of encoding and decoding a color picture
US10484712B2 (en) * 2016-06-08 2019-11-19 Qualcomm Incorporated Implicit coding of reference line index used in intra prediction
CN109479133B (zh) * 2016-07-22 2021-07-16 夏普株式会社 Systems and methods for coding video data using adaptive component scaling


Non-Patent Citations (2)

Title
LU, TAORAN ET AL: "AHG16: Simplification of Reshaper Implementation", JOINT VIDEO EXPERTS TEAM (JVET) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11 14TH MEETING: GENEVA, CH, 19–27 MARCH 2019, 27 March 2019 (2019-03-27), DOI: 20200714093111Y *
ZHAO, JIE ET AL: "On Luma Dependent Chroma Residual Scaling of In-loop Reshaper", JOINT VIDEO EXPERTS TEAM (JVET) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11 14TH MEETING: GENEVA, CH, 19–27 MARCH 2019, 27 March 2019 (2019-03-27), DOI: 20200714093148Y *


Also Published As

Publication number Publication date
CN113812161A (zh) 2021-12-17
CN113812161B (zh) 2024-02-06

Similar Documents

Publication Publication Date Title
KR102630411B1 (ko) Signaling of syntax elements for chroma residual joint coding
US11431984B2 (en) Constraints on quantized residual differential pulse code modulation representation of coded video
US11431966B2 (en) Intra coded video using quantized residual differential pulse code modulation coding
US11539981B2 (en) Adaptive in-loop color-space transform for video coding
US20230353745A1 (en) Method and system for processing luma and chroma signals
US12284371B2 (en) Use of offsets with adaptive colour transform coding tool
CN113785574A (zh) Adaptive loop filtering of chroma components
CN114902680B (zh) Joint use of adaptive color transform and differential coding of video
US12278949B2 (en) Quantization properties of adaptive in-loop color-space transform for video coding
WO2020228717A1 (fr) Block dimension settings for transform skip mode
WO2020228764A1 (fr) Scaling methods in video coding
WO2021204190A1 (fr) Motion vector difference for a block with geometric partition
US11546595B2 Sub-block based use of transform skip mode
WO2020228718A1 (fr) Interaction between a transform skip mode and other coding tools
WO2020228763A1 (fr) Scaling methods in video coding

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20805709

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20805709

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22-03-2022)
