WO2023140547A1 - Method and apparatus for chroma channel coding using multiple reference lines - Google Patents
Method and apparatus for chroma channel coding using multiple reference lines
- Publication number: WO2023140547A1
- Application number: PCT/KR2023/000359
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- block
- reference line
- luma
- chroma
- current chroma
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/11—Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/186—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/593—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
Definitions
- the present disclosure relates to a video coding method and apparatus using chroma channel prediction based on multiple reference lines.
- Since video data has a large amount of data compared to audio data or still image data, it requires a lot of hardware resources, including memory, to store or transmit the data without compression.
- Accordingly, when video data is stored or transmitted, an encoder is used to compress the video data before storage or transmission, and a decoder receives, decompresses, and reproduces the compressed video data.
- video compression technologies include H.264/AVC, High Efficiency Video Coding (HEVC), and Versatile Video Coding (VVC), which has improved coding efficiency by about 30% or more compared to HEVC.
- In intra prediction, pixel values of a current block to be encoded are predicted using pixel information in the same picture.
- The most suitable mode among a plurality of intra prediction modes (IPMs) is selected according to the characteristics of the image and then used for prediction of the current block.
- After selecting one mode among the plurality of intra prediction modes, the encoder encodes the current block using the selected mode and may then transfer information about the corresponding mode to the decoder.
- the HEVC technology uses a total of 35 intra prediction modes, including 33 angular modes with directionality and 2 non-angular modes, for intra prediction. However, as the spatial resolution of the image increases from 720 ⁇ 480 to 2048 ⁇ 1024 or 8192 ⁇ 4096, the size of the prediction block unit is gradually increasing, and accordingly, the need to add more various intra prediction modes has increased.
- the VVC technology uses 65 more subdivided prediction modes for intra prediction, so that prediction directions can be used more diversely than before.
- an image to be encoded is partitioned into coding units (CUs) having various shapes and sizes, and encoding is performed in units of CUs.
- The information defining the division is expressed in a tree structure, and the tree structure is transmitted to the decoder to indicate the shape and size of the CUs into which the video is divided.
- Luma and chroma components can each be separately partitioned into CUs.
- both luma and chroma components may be divided into identical CUs.
- a technique in which a luma block and a chroma block have different division structures is referred to as a chroma separate tree (CST) technique or a dual tree technique.
- CST chroma separate tree
- the chroma block is divided separately from the luma block, and information on each division structure is signaled to the decoder.
- a technology in which a luma block and a chroma block have the same division structure is referred to as a single tree technology.
- the chroma block has the same partitioning structure as the luma block, that is, a common partitioning structure, and common partitioning structure information is signaled to the decoder.
- a linear relationship may exist between pixels of a chroma channel and corresponding pixels of a luma channel.
- CCLM Cross-Component Linear Model
- the CCLM technique determines a region within the luma channel corresponding to the current chroma block. After that, the CCLM technique derives a linear relational expression using the pixels on the peripheral reference line of the current chroma block and the corresponding luma pixels.
- the CCLM technique generates a predictor of a current chroma block from pixels in a corresponding luma region using the derived linear relational expression.
- Depending on the chroma channel subsampling (e.g., 4:4:4, 4:2:2, or 4:2:0), the size of the corresponding luma region may differ from the size of the current chroma block.
- In this case, downsampling may be performed on the corresponding luma region so that the size of the corresponding luma region and the size of the current chroma block are the same.
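- As a concrete illustration of this size matching, the following Python sketch downsamples a corresponding luma region with a simple 2x2 averaging filter, assuming a 4:2:0 format and even dimensions; the actual filters (see FIG. 7) are position-dependent and longer-tap, so this is only an approximation.

```python
import numpy as np

def downsample_luma_420(luma_region: np.ndarray) -> np.ndarray:
    """Reduce a corresponding luma region to the chroma block size.

    Illustrative 2x2 averaging only; assumes 4:2:0 subsampling and an
    even-sized region."""
    h, w = luma_region.shape
    # Average each non-overlapping 2x2 group of luma samples.
    return luma_region.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Example: an 8x8 luma region shrinks to 4x4, matching a 4x4 chroma block.
luma = np.arange(64, dtype=float).reshape(8, 8)
print(downsample_luma_420(luma).shape)  # (4, 4)
```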
- the performance of intra prediction technology is related to appropriate selection of reference pixels.
- In addition to obtaining reference pixels in a more accurate direction by securing a diversity of prediction modes as described above, a method of increasing the number of usable candidate reference pixels may be considered.
- Such a technique of using a plurality of reference lines is referred to as Multiple Reference Line (MRL) or Multiple Reference Line Prediction (MRLP).
- An object of the present disclosure is to provide a video coding method and apparatus for generating a predictor of a chroma channel using a plurality of reference lines in intra prediction of a current block based on an intra prediction mode (IPM) or a cross-component linear model (CCLM).
- IPM intra prediction mode
- CCLM cross-component linear model
- a method for intra prediction of a current chroma block includes decoding a chroma intra prediction mode of the current chroma block from a bitstream; determining a reference line among the plurality of neighboring chroma pixel lines for the current chroma block based on a plurality of neighboring chroma pixel lines of the current chroma block or a plurality of neighboring luma pixel lines of a representative luma block, wherein the representative luma block is a block including a luma pixel corresponding to a pixel at a predetermined position of the current chroma block, and the reference line of the current chroma block is indicated by a reference line index of the current chroma block; and generating a predictor of the current chroma block using the determined reference line according to the chroma intra prediction mode.
- a method of intra prediction of a current chroma block includes determining a chroma intra prediction mode of the current chroma block; determining a reference line among the plurality of neighboring chroma pixel lines for the current chroma block based on a plurality of neighboring chroma pixel lines of the current chroma block or a plurality of neighboring luma pixel lines of a representative luma block, wherein the representative luma block is a block including a luma pixel corresponding to a pixel at a predetermined position of the current chroma block, and the reference line of the current chroma block is indicated by a reference line index of the current chroma block; and generating a predictor of the current chroma block using the determined reference line according to the chroma intra prediction mode.
- a computer readable recording medium storing a bitstream generated by an image encoding method, the image encoding method comprising: determining a chroma intra prediction mode of a current chroma block; determining a reference line among the plurality of neighboring chroma pixel lines for the current chroma block based on a plurality of neighboring chroma pixel lines of the current chroma block or a plurality of neighboring luma pixel lines of a representative luma block, wherein the representative luma block is a block including a luma pixel corresponding to a pixel at a predetermined position of the current chroma block, and the reference line of the current chroma block is indicated by a reference line index of the current chroma block; and generating a predictor of the current chroma block using the determined reference line according to the chroma intra prediction mode.
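- To make the notion of the representative luma block more tangible, the sketch below locates the luma pixel corresponding to a pixel at a predetermined position of the current chroma block; it is only a sketch under stated assumptions (the predetermined position is taken to be the block centre and 4:2:0-style scaling factors are assumed), and the function name and parameters are hypothetical, not defined by the disclosure.

```python
def representative_luma_position(cx0, cy0, cw, ch, sub_w=2, sub_h=2):
    """Return the luma pixel position corresponding to a predetermined
    position of the current chroma block; the luma block containing this
    pixel is the representative luma block.

    Assumptions: the predetermined position is the centre pixel of the
    chroma block, and sub_w/sub_h are the chroma subsampling factors
    (2, 2 for 4:2:0)."""
    centre_cx = cx0 + cw // 2          # centre pixel of the chroma block
    centre_cy = cy0 + ch // 2
    return centre_cx * sub_w, centre_cy * sub_h

# Example: an 8x8 chroma block whose top-left sample is at (16, 8).
print(representative_luma_position(16, 8, 8, 8))   # (40, 24)
```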
- a video coding method and apparatus for generating a predictor of a chroma channel using a plurality of reference lines are provided, thereby improving video encoding efficiency and improving video quality.
- FIG. 1 is an exemplary block diagram of an image encoding apparatus capable of implementing the techniques of this disclosure.
- FIG. 2 is a diagram for explaining a method of dividing a block using a QTBTTT structure.
- FIGS. 3A and 3B are diagrams illustrating a plurality of intra prediction modes including wide-angle intra prediction modes.
- FIG. 4 is an exemplary diagram of neighboring blocks of a current block.
- FIG. 5 is an exemplary block diagram of a video decoding apparatus capable of implementing the techniques of this disclosure.
- FIGS. 6A to 6C are exemplary diagrams illustrating peripheral reference pixels used in each CCLM mode.
- FIG. 7 is an exemplary diagram illustrating downsampling filters according to color formats.
- FIG. 8 is an exemplary diagram illustrating reference lines of MRL technology according to an embodiment of the present disclosure.
- FIG. 9 is an exemplary diagram illustrating reference lines of a chroma block according to an embodiment of the present disclosure.
- FIG. 10 is an exemplary diagram illustrating specific location pixels of a current chroma block, according to an embodiment of the present disclosure.
- FIG. 11 is an exemplary diagram illustrating a representative luma block according to an embodiment of the present disclosure.
- FIG. 12 is an exemplary diagram illustrating the same location of a current chroma block and a representative luma block.
- FIG. 13 is an exemplary diagram illustrating CCLM reference pixels according to an embodiment of the present disclosure.
- FIG. 14 is an exemplary diagram illustrating CCLM reference pixels according to another embodiment of the present disclosure.
- FIG. 15 is an exemplary diagram illustrating a reference line index according to the center of a downsampling filter.
- FIG. 16 is an exemplary diagram illustrating positions of CCLM reference pixels of a chroma block according to an embodiment of the present disclosure.
- FIG. 17 is an exemplary diagram illustrating CCLM reference pixels of a corresponding luma area according to an embodiment of the present disclosure.
- FIG. 18 is an exemplary diagram additionally illustrating downsampling filters according to color formats.
- FIG. 19 is an exemplary diagram illustrating a downsampling filter used in a 4:2:0 format.
- FIG. 20 is a flowchart illustrating a method of encoding a current chroma block performed by an image encoding apparatus according to an embodiment of the present disclosure.
- FIG. 21 is a flowchart illustrating a method of decoding a current chroma block performed by an image decoding apparatus according to an embodiment of the present disclosure.
- FIG. 22 is a flowchart illustrating a method of encoding a current chroma block performed by an image encoding apparatus according to another embodiment of the present disclosure.
- FIG. 23 is a flowchart illustrating a method of decoding a current chroma block performed by an image decoding apparatus according to another embodiment of the present disclosure.
- FIG. 1 is an exemplary block diagram of an image encoding apparatus capable of implementing the techniques of this disclosure.
- Hereinafter, an image encoding device and sub-components of the device will be described with reference to FIG. 1.
- the image encoding apparatus may include a picture division unit 110, a prediction unit 120, a subtractor 130, a transform unit 140, a quantization unit 145, a rearrangement unit 150, an entropy encoding unit 155, an inverse quantization unit 160, an inverse transform unit 165, an adder 170, a loop filter unit 180, and a memory 190.
- Each component of the image encoding device may be implemented as hardware or software, or as a combination of hardware and software. Also, the function of each component may be implemented as software, and the microprocessor may be implemented to execute the software function corresponding to each component.
- One image is composed of one or more sequences including a plurality of pictures.
- Each picture is divided into a plurality of areas and encoding is performed for each area.
- one picture is divided into one or more tiles and/or slices.
- one or more tiles may be defined as a tile group.
- Each tile or slice is divided into one or more Coding Tree Units (CTUs).
- CTUs Coding Tree Units
- each CTU is divided into one or more CUs (Coding Units) by a tree structure.
- Information applied to each CU is coded as a CU syntax, and information commonly applied to CUs included in one CTU is coded as a CTU syntax.
- information commonly applied to all blocks in one slice is encoded as syntax of a slice header
- information applied to all blocks constituting one or more pictures is encoded in a picture parameter set (PPS) or picture header.
- PPS picture parameter set
- information commonly referred to by a plurality of pictures is coded into a Sequence Parameter Set (SPS).
- SPS Sequence Parameter Set
- VPS video parameter set
- information commonly applied to one tile or tile group may be encoded as syntax of a tile or tile group header. Syntax included in the SPS, PPS, slice header, tile or tile group header may be referred to as high level syntax.
- the picture divider 110 determines the size of a coding tree unit (CTU).
- Information on the size of the CTU (CTU size) is encoded as SPS or PPS syntax and transmitted to the video decoding apparatus.
- the picture divider 110 divides each picture constituting an image into a plurality of Coding Tree Units (CTUs) having a predetermined size, and then recursively divides the CTUs using a tree structure.
- a leaf node in the tree structure becomes a coding unit (CU), which is a basic unit of encoding.
- CU coding unit
- a quad tree in which an upper node (or parent node) is divided into four subnodes (or child nodes) of the same size
- a binary tree in which an upper node is split into two subnodes
- a ternary tree in which an upper node is split into three subnodes at a ratio of 1:2:1, or a structure in which two or more of these QT structures, BT structures, and TT structures are mixed
- QuadTree plus BinaryTree (QTBT) structure may be used, or a QuadTree plus BinaryTree TernaryTree (QTBTTT) structure may be used.
- QTBTTT QuadTree plus BinaryTree TernaryTree
- BT and TT may be combined to be referred to as a Multiple-Type Tree (MTT).
- FIG. 2 is a diagram for explaining a method of dividing a block using a QTBTTT structure.
- the CTU may first be divided into QT structures. Quadtree splitting can be repeated until the size of the splitting block reaches the minimum block size (MinQTSize) of leaf nodes allowed by QT.
- a first flag (QT_split_flag) indicating whether each node of the QT structure is split into four nodes of a lower layer is encoded by the entropy encoder 155 and signaled to the video decoding device. If the leaf node of QT is not larger than the maximum block size (MaxBTSize) of the root node allowed in BT, it may be further divided into either a BT structure or a TT structure. A plurality of division directions may exist in the BT structure and/or the TT structure.
- a second flag indicating whether nodes are split and, if split, an additional flag indicating a split direction (vertical or horizontal) and/or a flag indicating a split type (binary or ternary) are encoded by the entropy encoder 155 and signaled to the video decoding device.
- a CU split flag (split_cu_flag) indicating whether the node is split may be coded.
- When the value of the CU split flag indicates that the node is not split, the block of the corresponding node becomes a leaf node in the split tree structure and becomes a coding unit (CU), which is a basic unit of encoding.
- When the value of the CU split flag indicates splitting, the video encoding apparatus starts encoding from the first flag in the above-described manner.
- a type in which a block of a corresponding node is divided into two blocks having an asymmetric shape may additionally exist.
- The asymmetric form may include a form in which the block of the corresponding node is divided into two rectangular blocks having a size ratio of 1:3, or a form in which the block of the corresponding node is divided in a diagonal direction.
- a CU can have various sizes depending on the QTBT or QTBTTT split from the CTU.
- Hereinafter, a block corresponding to a CU to be encoded or decoded (i.e., a leaf node of QTBTTT) is referred to as a 'current block'.
- the shape of the current block may be rectangular as well as square.
- the prediction unit 120 predicts a current block and generates a prediction block.
- the prediction unit 120 includes an intra prediction unit 122 and an inter prediction unit 124 .
- each current block in a picture can be coded predictively.
- prediction of a current block may be performed using an intra prediction technique (using data from a picture containing the current block) or an inter prediction technique (using data from a picture coded before a picture containing the current block).
- Inter prediction includes both uni-prediction and bi-prediction.
- the intra predictor 122 predicts pixels in the current block using pixels (reference pixels) located around the current block in the current picture including the current block.
- a plurality of intra prediction modes exist according to the prediction direction.
- the plurality of intra prediction modes may include two non-directional modes including a planar mode and a DC mode and 65 directional modes.
- For each prediction mode, the neighboring pixels to be used and the arithmetic expression are defined differently.
- For efficient directional prediction of a rectangular current block, directional modes (intra prediction modes 67 to 80 and -1 to -14) indicated by dotted arrows in FIG. 3B may be additionally used. These may be referred to as "wide-angle intra prediction modes".
- arrows indicate corresponding reference samples used for prediction and do not indicate prediction directions. The prediction direction is opposite to the direction the arrow is pointing.
- Wide-angle intra prediction modes are modes that perform prediction in the opposite direction of a specific directional mode without additional bit transmission when the current block is rectangular. At this time, among the wide-angle intra prediction modes, some wide-angle intra prediction modes usable for the current block may be determined by the ratio of the width and height of the rectangular current block.
- wide-angle intra prediction modes (67 to 80 intra prediction modes) having angles smaller than 45 degrees are available when the current block has a rectangular shape with a height smaller than the width
- wide-angle intra prediction modes (-1 to -14 intra prediction modes) with an angle larger than -135 degrees are available when the current block has a rectangular shape with a height greater than the width.
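- The remapping can be illustrated with the simplified Python sketch below; the thresholds in the actual standard depend on the exact width/height ratio, so the fixed ranges used here are assumptions intended only to show that wide-angle modes replace conventional modes without additional bit transmission.

```python
def map_to_wide_angle(mode: int, width: int, height: int) -> int:
    """Remap a conventional directional mode (2..66) to a wide-angle mode
    for a non-square block.

    Simplified sketch with fixed thresholds; the real mapping depends on
    the width/height ratio."""
    if width > height and 2 <= mode <= 7:
        return mode + 65           # becomes one of modes 67..80
    if height > width and 61 <= mode <= 66:
        return mode - 67           # becomes one of modes -1..-14
    return mode

print(map_to_wide_angle(3, 16, 4))   # 68 for a wide block
print(map_to_wide_angle(65, 4, 16))  # -2 for a tall block
```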
- the intra prediction unit 122 may determine an intra prediction mode to be used for encoding the current block.
- the intra prediction unit 122 may encode the current block using several intra prediction modes and select an appropriate intra prediction mode to be used from the tested modes.
- the intra prediction unit 122 may calculate rate-distortion values using rate-distortion analysis for several tested intra-prediction modes, and may select an intra-prediction mode having the best rate-distortion characteristics among the tested modes.
- the intra prediction unit 122 selects one intra prediction mode from among a plurality of intra prediction modes, and predicts a current block using neighboring pixels (reference pixels) determined according to the selected intra prediction mode and an arithmetic expression.
- Information on the selected intra prediction mode is encoded by the entropy encoder 155 and transmitted to the video decoding apparatus.
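- The rate-distortion decision described above can be summarized by the following sketch; the predict, distortion, and rate_bits callables are placeholders for the encoder's actual prediction, distortion measure, and bit estimate, which are not specified here.

```python
def select_intra_mode(block, candidate_modes, predict, distortion, rate_bits, lam):
    """Return the intra prediction mode with the lowest rate-distortion
    cost D + lambda * R among the tested candidate modes."""
    best_mode, best_cost = None, float("inf")
    for mode in candidate_modes:
        pred = predict(block, mode)
        cost = distortion(block, pred) + lam * rate_bits(block, mode)
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode
```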
- the inter prediction unit 124 generates a prediction block for a current block using a motion compensation process.
- the inter-prediction unit 124 searches for a block most similar to the current block in the encoded and decoded reference picture prior to the current picture, and generates a prediction block for the current block using the searched block. Then, a motion vector (MV) corresponding to displacement between the current block in the current picture and the prediction block in the reference picture is generated.
- MV motion vector
- motion estimation is performed on a luma component, and a motion vector calculated based on the luma component is used for both the luma component and the chroma component.
- Motion information including reference picture information and motion vector information used to predict the current block is encoded by the entropy encoding unit 155 and transmitted to the video decoding apparatus.
- the inter-prediction unit 124 may perform interpolation on a reference picture or reference block in order to increase prediction accuracy. That is, subsamples between two consecutive integer samples are interpolated by applying filter coefficients to a plurality of consecutive integer samples including the two integer samples.
- the motion vector can be expressed with precision of decimal units instead of integer sample units.
- the precision or resolution of the motion vector may be set differently for each unit of a target region to be encoded, for example, a slice, tile, CTU, or CU.
- AMVR adaptive motion vector resolution
- information on motion vector resolution to be applied to each target region must be signaled for each target region. For example, when the target region is a CU, information on motion vector resolution applied to each CU is signaled.
- Information on the motion vector resolution may be information indicating the precision of differential motion vectors, which will be described later.
- the inter prediction unit 124 may perform inter prediction using bi-prediction.
- bi-directional prediction two reference pictures and two motion vectors representing positions of blocks most similar to the current block within each reference picture are used.
- the inter-prediction unit 124 selects a first reference picture and a second reference picture from reference picture list 0 (RefPicList0) and reference picture list 1 (RefPicList1), searches for a block similar to the current block within each reference picture, and generates a first reference block and a second reference block. Then, a prediction block for the current block is generated by averaging or weighted averaging the first reference block and the second reference block.
- reference picture list 0 may include pictures prior to the current picture in display order among restored pictures
- reference picture list 1 may include pictures after the current picture in display order among restored pictures.
- However, in some cases, reconstructed pictures subsequent to the current picture in display order may be additionally included in reference picture list 0, and conversely, reconstructed pictures prior to the current picture may be additionally included in reference picture list 1.
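- A minimal sketch of the (weighted) averaging step, assuming 8-bit samples; the actual weighting and rounding rules are not reproduced here.

```python
import numpy as np

def bi_predict(ref_block0: np.ndarray, ref_block1: np.ndarray,
               w0: float = 0.5, w1: float = 0.5) -> np.ndarray:
    """Average (or weighted-average) the two reference blocks found by the
    two motion vectors to form the bi-prediction block (8-bit samples
    assumed)."""
    blended = w0 * ref_block0.astype(float) + w1 * ref_block1.astype(float)
    return np.clip(blended + 0.5, 0, 255).astype(np.uint8)
```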
- When the motion information of the current block is the same as that of a neighboring block, the motion information of the current block can be delivered to the video decoding apparatus by encoding information capable of identifying the neighboring block. This method is called 'merge mode'.
- the inter prediction unit 124 selects a predetermined number of merge candidate blocks (hereinafter referred to as 'merge candidates') from neighboring blocks of the current block.
- a neighboring block for deriving a merge candidate As a neighboring block for deriving a merge candidate, as shown in FIG. 4, all or some of the left block A0, the lower left block A1, the upper block B0, the upper right block B1, and the upper left block A2 adjacent to the current block in the current picture may be used. Also, a block located in a reference picture (which may be the same as or different from a reference picture used to predict the current block) other than the current picture in which the current block is located may be used as a merge candidate. For example, a block co-located with the current block in the reference picture or blocks adjacent to the co-located block may be additionally used as a merge candidate. If the number of merge candidates selected by the method described above is less than the preset number, a 0 vector is added to the merge candidates.
- the inter prediction unit 124 constructs a merge list including a predetermined number of merge candidates using these neighboring blocks. Among the merge candidates included in the merge list, a merge candidate to be used as motion information of the current block is selected, and merge index information for identifying the selected candidate is generated. The generated merge index information is encoded by the encoder 150 and transmitted to the video decoding apparatus.
- Merge skip mode is a special case of merge mode. After performing quantization, when all transform coefficients for entropy encoding are close to zero, only neighboring block selection information is transmitted without transmitting a residual signal. By using the merge skip mode, it is possible to achieve a relatively high encoding efficiency in low-motion images, still images, screen content images, and the like.
- merge mode and merge skip mode are collectively referred to as merge/skip mode.
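- A sketch of the merge-list construction just described is given below; candidates are (motion_vector, reference_index) tuples or None for unavailable neighbors, duplicate pruning is omitted, and the data layout is an assumption for illustration only.

```python
def build_merge_list(spatial_candidates, temporal_candidates, max_candidates):
    """Build a merge candidate list: spatial neighbours (A0, A1, B0, B1, A2)
    first, then temporal (co-located) candidates, padded with zero motion
    vectors up to the preset number."""
    merge_list = []
    for cand in list(spatial_candidates) + list(temporal_candidates):
        if cand is not None:
            merge_list.append(cand)
        if len(merge_list) == max_candidates:
            return merge_list
    while len(merge_list) < max_candidates:
        merge_list.append(((0, 0), 0))      # zero motion vector, ref_idx 0
    return merge_list

# Example: three available spatial candidates, one temporal, list size 6.
print(build_merge_list([((1, 0), 0), None, ((2, -1), 1)], [((0, 1), 0)], 6))
```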
- AMVP Advanced Motion Vector Prediction
- the inter prediction unit 124 derives predictive motion vector candidates for the motion vector of the current block using neighboring blocks of the current block.
- the neighboring blocks used to derive the predictive motion vector candidates all or some of the left block A0, the lower left block A1, the upper block B0, the upper right block B1, and the upper left block A2 adjacent to the current block in the current picture shown in FIG. 4 may be used.
- a block located in a reference picture (which may be the same as or different from the reference picture used to predict the current block) other than the current picture where the current block is located may be used as a neighboring block used to derive motion vector candidates.
- a collocated block co-located with the current block within the reference picture or blocks adjacent to the collocated block may be used. If the number of motion vector candidates is smaller than the preset number according to the method described above, a 0 vector is added to the motion vector candidates.
- the inter-prediction unit 124 derives predicted motion vector candidates using the motion vectors of the neighboring blocks, and determines a predicted motion vector for the motion vector of the current block using the predicted motion vector candidates. Then, a differential motion vector is calculated by subtracting the predicted motion vector from the motion vector of the current block.
- the predicted motion vector may be obtained by applying a predefined function (eg, median value, average value operation, etc.) to predicted motion vector candidates.
- a predefined function eg, median value, average value operation, etc.
- the video decoding apparatus also knows the predefined function.
- the video decoding apparatus since a neighboring block used to derive a predicted motion vector candidate is a block that has already been encoded and decoded, the video decoding apparatus also knows the motion vector of the neighboring block. Therefore, the video encoding apparatus does not need to encode information for identifying a predictive motion vector candidate. Therefore, in this case, information on differential motion vectors and information on reference pictures used to predict the current block are encoded.
- the predicted motion vector may be determined by selecting one of the predicted motion vector candidates.
- information for identifying the selected predictive motion vector candidate is additionally encoded.
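- The sketch below follows the candidate-selection variant of AMVP described above (rather than the predefined-function variant): the closest predictor is chosen, its index is coded, and only the differential motion vector remains to be transmitted. The cost measure used here is a simple assumption.

```python
def amvp_encode(current_mv, mvp_candidates):
    """Select the predicted motion vector closest to the current motion
    vector and return the candidate index together with the differential
    motion vector (MVD)."""
    def dist(mvp):
        return abs(current_mv[0] - mvp[0]) + abs(current_mv[1] - mvp[1])
    mvp_idx = min(range(len(mvp_candidates)), key=lambda i: dist(mvp_candidates[i]))
    mvp = mvp_candidates[mvp_idx]
    mvd = (current_mv[0] - mvp[0], current_mv[1] - mvp[1])
    return mvp_idx, mvd

print(amvp_encode((5, -3), [(0, 0), (4, -2), (8, 1)]))  # (1, (1, -1))
```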
- the subtractor 130 subtracts the prediction block generated by the intra prediction unit 122 or the inter prediction unit 124 from the current block to generate a residual block.
- the transform unit 140 transforms the residual signal in the residual block having pixel values in the spatial domain into transform coefficients in the frequency domain.
- the transform unit 140 may transform residual signals in the residual block by using the entire size of the residual block as a transform unit, or may divide the residual block into a plurality of subblocks and perform transform by using the subblocks as a transform unit.
- The residual block may be divided into two subblocks, a transform region and a non-transform region, and the residual signals may be transformed using only the transform-region subblock as a transform unit.
- the transformation region subblock may be one of two rectangular blocks having a size ratio of 1:1 based on a horizontal axis (or a vertical axis).
- a flag indicating that only subblocks have been transformed (cu_sbt_flag), directional (vertical/horizontal) information (cu_sbt_horizontal_flag), and/or location information (cu_sbt_pos_flag) are encoded by the entropy encoder 155 and signaled to the video decoding device.
- the size of the transform region subblock may have a size ratio of 1:3 based on the horizontal axis (or vertical axis), and in this case, a flag (cu_sbt_quad_flag) for distinguishing the corresponding division is additionally encoded by the entropy encoder 155 and signaled to the video decoding device.
- the transform unit 140 may individually transform the residual block in the horizontal direction and the vertical direction.
- various types of transformation functions or transformation matrices may be used.
- a pair of transformation functions for horizontal transformation and vertical transformation may be defined as a multiple transform set (MTS).
- the transform unit 140 may select one transform function pair having the highest transform efficiency among the MTS and transform the residual blocks in the horizontal and vertical directions, respectively.
- Information (mts_idx) on a pair of transform functions selected from the MTS is encoded by the entropy encoding unit 155 and signaled to the video decoding device.
- the quantization unit 145 quantizes transform coefficients output from the transform unit 140 using a quantization parameter, and outputs the quantized transform coefficients to the entropy encoding unit 155 .
- the quantization unit 145 may directly quantize a related residual block without transformation for a certain block or frame.
- the quantization unit 145 may apply different quantization coefficients (scaling values) according to positions of transform coefficients in the transform block.
- a quantization matrix applied to the two-dimensionally arranged quantized transform coefficients may be coded and signaled to the video decoding apparatus.
- the rearrangement unit 150 may rearrange the coefficient values of the quantized residual values.
- the reordering unit 150 may change a 2D coefficient array into a 1D coefficient sequence using coefficient scanning. For example, the reordering unit 150 may output a one-dimensional coefficient sequence by scanning DC coefficients to coefficients in a high frequency region using a zig-zag scan or a diagonal scan.
- Instead of the zig-zag scan, a vertical scan that scans a 2D coefficient array in a column direction or a horizontal scan that scans 2D block-shaped coefficients in a row direction may be used. That is, the scan method to be used among the zig-zag scan, diagonal scan, vertical scan, and horizontal scan may be determined according to the size of the transform unit and the intra prediction mode.
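- The diagonal scan mentioned above can be sketched as follows; the zig-zag, vertical, and horizontal scans follow the same idea with a different visiting order, and the block is assumed to be rectangular.

```python
def diagonal_scan(coeffs):
    """Convert a 2-D array of quantized transform coefficients into a 1-D
    sequence, visiting anti-diagonals from the DC coefficient toward the
    high-frequency corner."""
    h, w = len(coeffs), len(coeffs[0])
    out = []
    for s in range(h + w - 1):                     # s = row + col on a diagonal
        for row in range(min(s, h - 1), max(-1, s - w), -1):
            out.append(coeffs[row][s - row])
    return out

print(diagonal_scan([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
# [1, 4, 2, 7, 5, 3, 8, 6, 9]
```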
- the entropy encoding unit 155 generates a bitstream by encoding a sequence of one-dimensional quantized transform coefficients output from the reordering unit 150 using various encoding schemes such as CABAC (Context-based Adaptive Binary Arithmetic Code) and Exponential Golomb.
- CABAC Context-based Adaptive Binary Arithmetic Code
- Exponential Golomb Exponential Golomb
- the entropy encoding unit 155 encodes information such as CTU size, CU splitting flag, QT splitting flag, MTT splitting type, MTT splitting direction, etc. related to block splitting, so that the video decoding apparatus divides the block in the same way as the video encoding apparatus.
- In addition, the entropy encoding unit 155 encodes information about the prediction type indicating whether the current block is encoded by intra prediction or by inter prediction, and, according to the prediction type, encodes intra prediction information (i.e., information about the intra prediction mode) or inter prediction information (the encoding mode of the motion information, i.e., merge mode or AMVP mode).
- the entropy encoding unit 155 encodes information related to quantization, that is, information about quantization parameters and information about quantization matrices.
- the inverse quantization unit 160 inversely quantizes the quantized transform coefficients output from the quantization unit 145 to generate transform coefficients.
- the inverse transform unit 165 transforms transform coefficients output from the inverse quantization unit 160 from a frequency domain to a spatial domain to restore a residual block.
- the adder 170 restores the current block by adding the restored residual block and the predicted block generated by the predictor 120. Pixels in the reconstructed current block are used as reference pixels when intra-predicting the next block.
- the loop filter unit 180 performs filtering on reconstructed pixels in order to reduce blocking artifacts, ringing artifacts, blurring artifacts, etc. caused by block-based prediction and transformation/quantization.
- the filter unit 180 is an in-loop filter and may include all or part of a deblocking filter 182, a sample adaptive offset (SAO) filter 184, and an adaptive loop filter (ALF) 186.
- SAO sample adaptive offset
- ALF adaptive loop filter
- The deblocking filter 182 filters the boundary between reconstructed blocks to remove blocking artifacts caused by block-by-block encoding/decoding, and the SAO filter 184 and the ALF 186 perform additional filtering on the deblocking-filtered image.
- The SAO filter 184 and the ALF 186 are filters used to compensate for a difference between a reconstructed pixel and an original pixel caused by lossy coding.
- the SAO filter 184 improves not only subjective picture quality but also coding efficiency by applying an offset in units of CTUs.
- The ALF 186 performs block-by-block filtering, compensating for distortion by applying different filters according to the edge of the corresponding block and the degree of change.
- Information on filter coefficients to be used for ALF may be coded and signaled to the video decoding apparatus.
- the reconstruction block filtered through the deblocking filter 182, the SAO filter 184, and the ALF 186 is stored in the memory 190.
- the reconstructed picture can be used as a reference picture for inter-prediction of blocks in the picture to be encoded later.
- FIG. 5 is an exemplary block diagram of a video decoding apparatus capable of implementing the techniques of this disclosure.
- Hereinafter, a video decoding device and sub-elements of the device will be described with reference to FIG. 5.
- the image decoding apparatus may include an entropy decoding unit 510, a rearrangement unit 515, an inverse quantization unit 520, an inverse transform unit 530, a prediction unit 540, an adder 550, a loop filter unit 560, and a memory 570.
- each component of the image decoding device may be implemented as hardware or software, or a combination of hardware and software.
- the function of each component may be implemented as software, and the microprocessor may be implemented to execute the software function corresponding to each component.
- the entropy decoding unit 510 decodes the bitstream generated by the video encoding apparatus, extracts information related to block division, determines a current block to be decoded, and extracts prediction information and residual signal information required to reconstruct the current block.
- the entropy decoding unit 510 determines the size of the CTU by extracting information about the CTU size from a sequence parameter set (SPS) or a picture parameter set (PPS), and divides the picture into CTUs of the determined size. Then, the CTU is divided using the tree structure by determining the CTU as the top layer of the tree structure, that is, the root node, and extracting division information for the CTU.
- SPS sequence parameter set
- PPS picture parameter set
- a first flag (QT_split_flag) related to splitting of QT is first extracted and each node is split into four nodes of a lower layer.
- the second flag (MTT_split_flag) related to the split of the MTT and the split direction (vertical / horizontal) and / or split type (binary / ternary) information are extracted to divide the corresponding leaf node into the MTT structure. Accordingly, each node below the leaf node of QT is recursively divided into a BT or TT structure.
- a CU split flag (split_cu_flag) indicating whether the CU is split is first extracted, and when the corresponding block is split, a first flag (QT_split_flag) may be extracted.
- each node may have zero or more iterative MTT splits after zero or more repetitive QT splits. For example, the CTU may immediately undergo MTT splitting, or conversely, only QT splitting may occur multiple times.
- a first flag (QT_split_flag) related to QT splitting is extracted and each node is split into four nodes of a lower layer. And, for a node corresponding to a leaf node of QT, a split flag (split_flag) indicating whether to further split into BTs and split direction information are extracted.
- After the entropy decoding unit 510 determines a current block to be decoded by using tree-structure partitioning, it extracts information about a prediction type indicating whether the current block is intra-predicted or inter-predicted.
- When the prediction type information indicates intra prediction, the entropy decoding unit 510 extracts syntax elements for the intra prediction information (intra prediction mode) of the current block.
- When the prediction type information indicates inter prediction, the entropy decoding unit 510 extracts syntax elements for the inter prediction information, that is, information indicating a motion vector and a reference picture to which the motion vector refers.
- the entropy decoding unit 510 extracts quantization-related information and information about quantized transform coefficients of the current block as information about the residual signal.
- the reordering unit 515 may change the sequence of 1-dimensional quantized transform coefficients entropy-decoded by the entropy decoding unit 510 back into a 2-dimensional coefficient array (i.e., a block) in the reverse order of the coefficient scanning performed by the image encoding apparatus.
- The inverse quantization unit 520 inverse-quantizes the quantized transform coefficients using a quantization parameter.
- the inverse quantization unit 520 may apply different quantization coefficients (scaling values) to the two-dimensionally arranged quantized transform coefficients.
- the inverse quantization unit 520 may perform inverse quantization by applying a matrix of quantization coefficients (scaling values) from the image encoding device to a 2D array of quantized transformation coefficients.
- the inverse transform unit 530 inversely transforms the inverse quantized transform coefficients from the frequency domain to the spatial domain to restore residual signals, thereby generating a residual block for the current block.
- the inverse transform unit 530 extracts a flag (cu_sbt_flag) indicating that only a subblock of the transform block has been transformed, directional (vertical/horizontal) information (cu_sbt_horizontal_flag) of the subblock, and/or location information (cu_sbt_pos_flag) of the subblock, and inverse transforms the transform coefficients of the corresponding subblock from the frequency domain to the spatial domain.
- In this way, the residual signals are reconstructed, and a final residual block for the current block is generated by filling the region that has not been inverse-transformed with a value of "0".
- the inverse transform unit 530 determines transform functions or transform matrices to be applied in the horizontal and vertical directions using MTS information (mts_idx) signaled from the video encoding device, and performs inverse transform on the transform coefficients in the transform block in the horizontal and vertical directions using the determined transform function.
- MTS information mts_idx
- the prediction unit 540 may include an intra prediction unit 542 and an inter prediction unit 544 .
- the intra prediction unit 542 is activated when the prediction type of the current block is intra prediction
- the inter prediction unit 544 is activated when the prediction type of the current block is inter prediction.
- the intra prediction unit 542 determines the intra prediction mode of the current block among a plurality of intra prediction modes from the syntax element for the intra prediction mode extracted from the entropy decoding unit 510, and predicts the current block using reference pixels around the current block according to the intra prediction mode.
- the inter prediction unit 544 determines the motion vector of the current block and the reference picture referred to by the motion vector using the syntax element for the inter prediction mode extracted from the entropy decoding unit 510, and predicts the current block using the motion vector and the reference picture.
- the adder 550 restores the current block by adding the residual block output from the inverse transform unit and the prediction block output from the inter prediction unit or intra prediction unit. Pixels in the reconstructed current block are used as reference pixels when intra-predicting a block to be decoded later.
- the loop filter unit 560 may include a deblocking filter 562, an SAO filter 564, and an ALF 566 as in-loop filters.
- the deblocking filter 562 performs deblocking filtering on boundaries between reconstructed blocks in order to remove blocking artifacts generated by block-by-block decoding.
- the SAO filter 564 and the ALF 566 perform additional filtering on the reconstructed block after deblocking filtering to compensate for the difference between the reconstructed pixel and the original pixel caused by lossy coding.
- ALF filter coefficients are determined using information on filter coefficients decoded from the bitstream.
- the reconstruction block filtered through the deblocking filter 562, the SAO filter 564, and the ALF 566 is stored in the memory 570.
- The reconstructed picture is used as a reference picture for inter prediction of blocks in a picture to be decoded later.
- This embodiment relates to encoding and decoding of images (video) as described above. More specifically, in intra prediction of a current block based on an intra prediction mode (IPM) or cross-component linear model (CCLM), a video coding method and apparatus for generating a predictor of a chroma channel using a plurality of reference lines are provided.
- IPM intra prediction mode
- CCLM cross-component linear model
- the following embodiments may be performed by the intra prediction unit 122 in a video encoding device. Also, it may be performed by the intra prediction unit 542 in the video decoding device.
- the video encoding apparatus may generate signaling information related to the present embodiment in terms of bit rate distortion optimization in encoding of the current block.
- The image encoding device may encode the signaling information using the entropy encoding unit 155 and transmit it to the image decoding device.
- the video decoding apparatus may decode signaling information related to decoding of the current block from the bitstream using the entropy decoding unit 510 .
- 'target block' may be used in the same meaning as a current block or a coding unit (CU), or may mean a partial region of a coding unit.
- a value of one flag being true indicates a case in which the flag is set to 1.
- a false value of one flag indicates a case in which the flag is set to 0.
- the intra prediction mode of a luma block has 65 subdivided directional modes (ie, 2 to 66) in addition to the non-directional mode (ie, Planar and DC), as illustrated in FIG. 3A.
- the 65 directional modes, Planar and DC, are collectively referred to as 67 IPMs.
- the chroma block may also use the intra prediction of the subdivided directional mode in a limited manner.
- However, for the chroma block, the various directional modes other than the horizontal and vertical directions that can be used by a luma block cannot always be used.
- To use such a directional mode, the prediction mode of the current chroma block must be set to the DM mode. By setting the DM mode in this way, the current chroma block can use a directional mode of the corresponding luma block other than the horizontal and vertical modes.
- intra prediction modes that are frequently used or are most basically used to maintain image quality include planar, DC, vertical, horizontal, and DM modes.
- the intra prediction mode of the luma block spatially corresponding to the current chroma block is used as the intra prediction mode of the chroma block.
- the video encoding apparatus may signal whether the intra prediction mode of the chroma block is the DM mode to the video decoding apparatus.
- the video encoding apparatus may indicate whether it is in the DM mode by setting intra_chroma_pred_mode, which is information for indicating the intra prediction mode of a chroma block, to a specific value and then transmitting the data to the video decoding apparatus.
- The video encoding apparatus may set IntraPredModeC, the intra prediction mode of the chroma block, according to Table 1.
- Hereinafter, intra_chroma_pred_mode and IntraPredModeC, which are information related to the intra prediction mode of a chroma block, are expressed as the chroma intra prediction mode indicator and the chroma intra prediction mode, respectively.
- lumaIntraPredMode is an intra prediction mode (hereinafter referred to as 'luma intra prediction mode') of a luma block corresponding to the current chroma block.
- lumaIntraPredMode represents one of the prediction modes illustrated in FIG. 3A.
- lumaIntraPredMode 0 indicates Planar prediction mode
- lumaIntraPredMode 1 indicates DC prediction mode.
- Cases in which lumaIntraPredMode is 18, 50, and 66 represent directional modes referred to as horizontal, vertical, and VDIA, respectively.
- IntraPredModeC which is a chroma intra prediction mode
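- Table 1 itself is not reproduced in this excerpt; the sketch below therefore shows a VVC-style derivation of IntraPredModeC from the chroma intra prediction mode indicator and lumaIntraPredMode, which is assumed to match the intent of Table 1 (indicator 4 means DM, and a non-DM candidate colliding with the luma mode is replaced by VDIA, mode 66).

```python
def derive_chroma_intra_mode(intra_chroma_pred_mode: int, luma_mode: int) -> int:
    """Derive IntraPredModeC (the chroma intra prediction mode) from the
    chroma intra prediction mode indicator and the luma intra prediction
    mode, following a VVC-style mapping assumed here for illustration."""
    base_modes = [0, 50, 18, 1]          # Planar, vertical, horizontal, DC
    if intra_chroma_pred_mode == 4:      # DM: reuse the luma mode
        return luma_mode
    mode = base_modes[intra_chroma_pred_mode]
    return 66 if mode == luma_mode else mode  # collision is replaced by VDIA

print(derive_chroma_intra_mode(4, 34))   # 34 (DM mode)
print(derive_chroma_intra_mode(1, 50))   # 66 (vertical collides with the luma mode)
```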
- the video decoding apparatus parses cclm_mode_flag indicating whether to use the CCLM mode. If cclm_mode_flag is set to 1 and the CCLM mode is used, the video decoding apparatus parses cclm_mode_idx indicating the CCLM mode. At this time, one of three modes may be indicated as the CCLM mode according to the value of cclm_mode_idx. On the other hand, when cclm_mode_flag is 0 and the CCLM mode is not used, the video decoding apparatus parses intra_chroma_pred_mode indicating the intra prediction mode as described above.
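- The parsing order just described can be written as the short sketch below; the bitstream reader is a hypothetical stand-in exposing a parse() call per syntax element and is not an API defined by the disclosure.

```python
class StubReader:
    """Minimal stand-in for a bitstream reader returning preset values."""
    def __init__(self, values):
        self.values = dict(values)
    def parse(self, name):
        return self.values[name]

def parse_chroma_mode_syntax(reader):
    """Parse cclm_mode_flag first; if it is 1, cclm_mode_idx selects one of
    the three CCLM modes, otherwise intra_chroma_pred_mode is parsed."""
    if reader.parse("cclm_mode_flag") == 1:
        return ("CCLM", reader.parse("cclm_mode_idx"))       # 0, 1 or 2
    return ("IPM", reader.parse("intra_chroma_pred_mode"))

print(parse_chroma_mode_syntax(StubReader({"cclm_mode_flag": 1, "cclm_mode_idx": 2})))
# ('CCLM', 2)
```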
- FIGS. 6A to 6C are exemplary diagrams illustrating peripheral reference pixels used in each CCLM mode.
- the image decoding apparatus determines an area (hereinafter referred to as 'corresponding luma area') in a luma image corresponding to the current chroma block.
- left reference pixels and upper reference pixels of the corresponding luma area and left reference pixels and upper reference pixels of the target chroma block may be used.
- Hereinafter, the left reference pixels and the top reference pixels are collectively expressed as reference pixels, neighboring pixels, or adjacent pixels.
- reference pixels of a chroma channel are represented as chroma reference pixels
- reference pixels of a luma channel are represented as luma reference pixels.
- Hereinafter, the size of a chroma block, that is, the number of pixels, is represented by N×N (where N is a natural number).
- a prediction block that is a predictor of a target chroma block is generated by deriving a linear model between reference pixels of a corresponding luma region and reference pixels of a chroma block, and then applying the corresponding linear model to reconstructed pixels of the corresponding luma region.
- For example, as illustrated in FIGS. 6A to 6C, four pairs of pixels obtained by combining pixels in a neighboring pixel line of the current chroma block and pixels in the corresponding luma region may be used to derive the linear model.
- the image decoding apparatus may derive ⁇ and ⁇ representing a linear model as shown in Equation 1 for four pairs of pixels.
- the image decoding apparatus may generate a predictor pred C (i, j) of the current chroma block from the pixel value rec' L (i, j) of the corresponding luma region using a linear model.
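- Equation 1 is referenced but not reproduced in this excerpt, so the sketch below uses a VVC-style derivation as an assumed stand-in: the two smallest and two largest luma samples of the four pairs are averaged, a line is fitted through the two averaged points, and the resulting α and β are applied to the downsampled reconstructed luma samples.

```python
import numpy as np

def derive_cclm_params(luma_samples, chroma_samples):
    """Derive (alpha, beta) of the linear model pred_C = alpha * rec'_L + beta
    from four (luma, chroma) reference pixel pairs (VVC-style min/max
    averaging, assumed here in place of Equation 1)."""
    luma = np.asarray(luma_samples, dtype=float)
    chroma = np.asarray(chroma_samples, dtype=float)
    order = np.argsort(luma)
    x_b, y_b = luma[order[:2]].mean(), chroma[order[:2]].mean()   # smaller pair
    x_a, y_a = luma[order[2:]].mean(), chroma[order[2:]].mean()   # larger pair
    alpha = 0.0 if x_a == x_b else (y_a - y_b) / (x_a - x_b)
    beta = y_b - alpha * x_b
    return alpha, beta

def cclm_predict(rec_luma_ds, alpha, beta):
    """Apply pred_C(i, j) = alpha * rec'_L(i, j) + beta to the downsampled
    reconstructed luma region."""
    return alpha * np.asarray(rec_luma_ds, dtype=float) + beta

alpha, beta = derive_cclm_params([60, 110, 200, 90], [70, 95, 140, 85])
print(round(alpha, 3), round(beta, 3))   # 0.5 40.0
```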
- FIG. 7 is an exemplary diagram illustrating downsampling filters according to color formats.
- prior to applying the linear model, the video decoding apparatus checks whether the size of the corresponding luma region is equal to the size of the current chroma block.
- the size of the chroma channel is different according to the subsampling method of the chroma channel, as in the example of FIG.
- the CCLM mode is classified into three modes, CCLM_LT, CCLM_L, and CCLM_T, according to the positions of neighboring pixels used in the linear model derivation process.
- the CCLM_LT mode uses two pixels in each direction among neighboring pixels adjacent to the left and top of the current chroma block.
- the CCLM_L mode uses 4 pixels among neighboring pixels adjacent to the left side of the current chroma block.
- the CCLM_T mode uses four pixels among neighboring pixels adjacent to the top of the current chroma block.
- the most probable mode (MPM) technique uses intra prediction modes of neighboring blocks when intra prediction of a current block is performed.
- the video encoding apparatus generates an MPM list to include intra prediction modes derived from predefined positions spatially adjacent to a current block.
- the video encoding apparatus may transmit intra_luma_mpm_flag, which is a flag indicating whether to use the MPM list, to the video decoding apparatus. If intra_luma_mpm_flag does not exist, it is inferred as 1.
- the video encoding apparatus may improve encoding efficiency of the intra prediction mode by transmitting an MPM index, intra_luma_mpm_idx, instead of the index of the prediction mode.
- the multiple reference line (MRL) technology can use not only a reference line adjacent to the current block but also pixels further away from the current block as reference pixels when predicting the current block according to intra prediction technology. At this time, pixels having the same distance from the current block are grouped together and named as a reference line.
- the MRL technique performs intra prediction of a current block using pixels located on a selected reference line.
- the video encoding apparatus signals the reference line index intra_luma_ref_idx to the video decoding apparatus to indicate a reference line used when intra prediction is performed.
- bit allocation for each index can be shown in Table 3.
- the video decoding apparatus may consider whether to use an additional reference line by applying MRL to prediction modes signaled according to MPM, excluding Planar, among intra prediction modes.
- a reference line indicated by each intra_luma_ref_idx is the same as the example of FIG. 8 .
- the video decoding apparatus selects one of three reference lines having a short distance from the current block and uses it for intra prediction of the current block.
- Table 4 shows the syntax related to signaling of the reference line index intra_luma_ref_idx used for prediction and the prediction mode of the current block.
- the video decoding apparatus parses intra_luma_ref_idx to determine the reference line index used for prediction. Since Intra Sub-Partitions (ISP) technology is applicable when the reference line index is 0, the video decoding apparatus does not parse ISP-related information when the reference line index is not 0.
- MRL technology and MPM mode can be combined as follows.
- intra_luma_not_planar_flag is a flag indicating whether the Planar mode is used.
- when intra_luma_not_planar_flag is false, the intra prediction mode is set to the Planar mode; when intra_luma_not_planar_flag is true, intra_luma_mpm_idx may be additionally signaled. If intra_luma_not_planar_flag does not exist, it can be inferred as 1.
- if intra_luma_ref_idx is not 0, the Planar mode is not used. Therefore, intra_luma_not_planar_flag is not transmitted and is considered true. Also, since intra_luma_not_planar_flag is true, intra_luma_mpm_idx may be additionally signaled.
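- the interaction of intra_luma_ref_idx with the MPM-related flags described above can be sketched as follows, using the stated inference rules (intra_luma_mpm_flag inferred as 1 when absent, intra_luma_not_planar_flag considered true when the reference line index is not 0). The reader object and the non-MPM element name are illustrative stand-ins.

```python
# Sketch of the luma intra-mode parsing flow when MRL and MPM are combined,
# following the inference rules described above.

class ToyReader:
    def __init__(self, values): self.values = list(values)
    def read(self, name): return self.values.pop(0)

def parse_luma_intra_mode(reader, mrl_allowed=True):
    intra_luma_ref_idx = reader.read("intra_luma_ref_idx") if mrl_allowed else 0

    if intra_luma_ref_idx == 0:
        intra_luma_mpm_flag = reader.read("intra_luma_mpm_flag")
    else:
        # For non-zero reference lines only MPM-signaled modes are considered,
        # so the flag is not transmitted and is inferred as 1.
        intra_luma_mpm_flag = 1

    if intra_luma_mpm_flag == 1:
        if intra_luma_ref_idx == 0:
            intra_luma_not_planar_flag = reader.read("intra_luma_not_planar_flag")
        else:
            # Planar is excluded when a non-adjacent line is used,
            # so the flag is not transmitted and is considered true.
            intra_luma_not_planar_flag = 1

        if intra_luma_not_planar_flag == 0:
            return {"ref_idx": intra_luma_ref_idx, "mode": "Planar"}
        intra_luma_mpm_idx = reader.read("intra_luma_mpm_idx")
        return {"ref_idx": intra_luma_ref_idx, "mpm_idx": intra_luma_mpm_idx}

    # Non-MPM path: the remaining mode is coded directly (element name illustrative).
    remainder = reader.read("intra_luma_mpm_remainder")
    return {"ref_idx": intra_luma_ref_idx, "remainder": remainder}

print(parse_luma_intra_mode(ToyReader([1, 3])))      # ref line 1, MPM index 3
print(parse_luma_intra_mode(ToyReader([0, 1, 0])))   # ref line 0, Planar mode
```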
- the current MRL technology has the following two problems.
- the first problem can be solved by using a plurality of reference lines when predicting a chroma block according to IPM.
- the conventional CCLM technology selects reference pixels used to generate a linear relational expression between a luma channel and a chroma channel using only reference lines of limited fixed positions.
- a linear relational expression derived from pixels of a reference line adjacent to a current chroma block and luma pixels of a corresponding position may not always best represent a relationship between a current chroma block and a corresponding luma region.
- since the conventional MRL does not consider combination with the CCLM, the reference pixels of the chroma channel used in the CCLM linear relation have to be selected from one fixed reference line, and the reference lines where the reference pixels of the luma and chroma channels are located cannot be set separately.
- the second problem can be solved by deriving a linear relational expression using a plurality of reference lines for each channel when predicting a chroma block according to CCLM.
- the video decoding apparatus may independently determine each reference line by separately applying the following implementation examples to the two chroma channels, Cb and Cr. Alternatively, the video decoding apparatus may consider only Cb or Cr or consider both Cb and Cr in order to use the same reference line regardless of the type of chroma channel.
- the conventional MRL technology can refer to three reference lines, but in the present disclosure, the video decoding apparatus may consider three or more reference lines (e.g., N reference lines, where N is a natural number).
- for efficient encoding, chroma subsampling (4:4:4, 4:2:2, or 4:2:0), in which the picture sizes of the luma channel and the chroma channel may differ, may be applied.
- positions of the chroma pixels to be mapped vary according to the phase. Accordingly, when subsampling is applied, the reference line optimal for generating a relational expression between channels may change in prediction of a current chroma block according to IPM or CCLM. Operation of the present invention can be controlled at a higher level, such as a sequence or a picture, so that the present invention, rather than the prior art, can be applied to any subsampling scheme.
- the video encoding apparatus signals sps_chroma_mrl_enabled_flag or pps_chroma_mrl_enabled_flag to the video decoding apparatus at a higher level.
- in general, the subsampling method within one image is the same; however, when a picture is composed by editing several scenes, as in screen content, a different subsampling method may be used for each frame or each position of the image.
- the video encoding apparatus may vary the method of using a plurality of reference lines of a chroma channel for each CU or each frame by signaling a subsampling scheme. For example, after setting a method of selecting one of a plurality of reference lines according to an arbitrary subsampling method in the form of a look-up table (LUT), an optimal reference line selection method may be selected based on this.
- the subsampling method for each picture area or each frame is signaled to the video decoding apparatus.
- sub_sampling_idx is an index indicating the subsampling method.
- a subsampling scheme may be transmitted for each CU. Meanwhile, subsampling schemes of the previous CU and the current CU may coincide with a high probability. Accordingly, in order to improve coding efficiency, when a subsampling scheme is changed, sub_sampling_idx_delta, which is an index change amount of the subsampling scheme, may be transmitted as additional information.
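- one possible shape of the LUT-based selection and the delta-based update of the subsampling index mentioned above is sketched below; the index assignments and table entries are hypothetical, since the text does not fix them.

```python
# Hypothetical sketch of a LUT mapping a subsampling scheme to a reference-line
# selection method, plus delta-based updating of the subsampling index.
# Index assignments and table entries are illustrative assumptions.

SUBSAMPLING_LUT = {
    0: {"format": "4:4:4", "ref_line_selection": "signal_per_block"},
    1: {"format": "4:2:2", "ref_line_selection": "derive_from_block_size"},
    2: {"format": "4:2:0", "ref_line_selection": "inherit_from_luma"},
}

def update_sub_sampling_idx(prev_idx, delta_signaled, sub_sampling_idx_delta=0):
    """If no delta is signaled, the previous CU's scheme is reused."""
    return prev_idx + sub_sampling_idx_delta if delta_signaled else prev_idx

prev = 2                                     # previous CU used scheme 2
cur = update_sub_sampling_idx(prev, delta_signaled=False)
print(SUBSAMPLING_LUT[cur]["ref_line_selection"])
```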
- the video encoding apparatus selects one of a plurality of reference lines and signals information of the selected reference line.
- a method of signaling a reference line index of a chroma block separately from a luma channel (realization example 1-1), a method of signaling a difference value between reference line indexes of a representative luma block and a chroma block (realization example 1-2), or a method of signaling that the reference line indexes of a representative luma block and a chroma block are the same (realization example 1-3) may be used.
- the video encoding apparatus signals intra_chroma_ref_idx indicating a reference line of the chroma channel separately from the luma channel. For example, as illustrated in FIG. 9, a reference line that is not immediately adjacent to the boundary of the current chroma block (i.e., intra_chroma_ref_idx is not 0) and is separated by one pixel may be used for prediction. At this time, intra_chroma_ref_idx may be signaled as 1.
- intra_chroma_ref_idx indicates an index indicating a reference line of the current chroma block and may have a value greater than or equal to 0. If intra_chroma_ref_idx does not exist, it may be inferred to be 0.
- Table 5 shows the syntax required for transmission according to the pseudocode described above.
- the pseudo code below shows a case where the prediction mode of the chroma channel is parsed first.
- Table 6 shows the syntax necessary for transmission according to the pseudocode described above.
- This embodiment has the advantage of being relatively simple and requiring a small amount of calculation, so it is suitable for applications requiring these characteristics.
- however, in this realization, even when the reference line index values of the luma channel and the chroma channel are the same or similar, each piece of information is signaled without utilizing the similarity. Realization examples for improving this inefficiency are described below.
- realization example 1-2: a difference value between the reference line indices of a representative luma block and a chroma block is signaled
- the video encoding apparatus signals a difference value between a reference line index of a representative luma block and a reference line index of a current chroma block.
- the representative luma block is defined as a block including a luma pixel corresponding to a pixel at a specific location (eg, top left, bottom left, top right, center, etc.) of the current chroma block, as in the example of FIG. 10 .
- the location can be predefined or signaled. For example, as in the example of FIG., a block including a luma pixel corresponding to the pixel at the center of the current chroma block is defined as the representative luma block, and the reference line index of that luma block may be used to calculate the index difference value.
- the video encoding apparatus may encode a difference between the reference line index of the current chroma block and the reference line index of the representative luma block.
- the difference value is represented by a sign (+, -) intra_ref_idx_diff_sign and an absolute value intra_ref_idx_diff.
- intra_chroma_ref_idx is a reference line index of the current chroma block derived from the difference value, and is expressed as in Equation 3 using the signaled reference line index and the index difference value.
- for example, when intra_luma_ref_idx is 2 and intra_chroma_ref_idx is 1, intra_ref_idx_diff_sign and intra_ref_idx_diff are signaled as 0 and 1, respectively.
- Syntax elements required for this realization are as follows. One or more of these syntax elements may be used.
- intra_luma_ref_idx represents a reference line index of a representative luma block and may have a value greater than or equal to 0.
- intra_ref_idx_diff represents an absolute value of a difference between a reference line index (intra_luma_ref_idx) of a representative luma block and a reference line index (intra_chroma_ref_idx) of a current chroma block, and may have an integer value greater than or equal to 0. If this value is 0, intra_ref_idx_diff_sign is not signaled.
- the intra_ref_idx_diff_sign indicates a sign of a difference value between a reference line index (intra_luma_ref_idx) of a representative luma block and a reference line index (intra_chroma_ref_idx) of a current chroma block. If this value is 0, it may indicate that the difference value is a negative number, and if it is 1, it may indicate that the difference value is a positive number.
- pseudocode according to this embodiment can be realized as follows.
- the pseudo code below shows a case in which reference line information of a chroma channel is first parsed.
- Table 7 shows the syntax required for transmission according to the pseudocode described above.
- the pseudo code below shows a case where the prediction mode of the chroma channel is parsed first.
- Table 8 shows the syntax required for transmission according to the pseudocode described above.
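- a minimal sketch of this difference-based signaling is given below. The reconstruction formula is an interpretation of Equation 3 (not reproduced in this text) based on the stated sign convention, in which intra_ref_idx_diff_sign equal to 0 denotes a negative difference and 1 a positive difference.

```python
# Sketch: encode/decode the chroma reference-line index as a signed difference
# relative to the representative luma block's index, following the sign
# convention described above (0 = negative, 1 = positive).

def encode_ref_idx_diff(intra_luma_ref_idx, intra_chroma_ref_idx):
    diff = intra_chroma_ref_idx - intra_luma_ref_idx
    intra_ref_idx_diff = abs(diff)
    # The sign is only signaled when the absolute difference is non-zero.
    intra_ref_idx_diff_sign = None if diff == 0 else (1 if diff > 0 else 0)
    return intra_ref_idx_diff, intra_ref_idx_diff_sign

def decode_chroma_ref_idx(intra_luma_ref_idx, intra_ref_idx_diff,
                          intra_ref_idx_diff_sign):
    if intra_ref_idx_diff == 0:
        return intra_luma_ref_idx
    sign = 1 if intra_ref_idx_diff_sign == 1 else -1
    return intra_luma_ref_idx + sign * intra_ref_idx_diff

# Example from the text: luma index 2, chroma index 1 -> diff 1 with sign 0.
print(encode_ref_idx_diff(2, 1))          # (1, 0)
print(decode_chroma_ref_idx(2, 1, 0))     # 1
```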
- the video encoding apparatus signals that the reference line index of the representative luma block and the reference line index of the current chroma block are the same. If different, the video encoding apparatus signals the reference line index of the current chroma block.
- realization example 1-1: transmission of a reference line index of a chroma block
- realization example 1-2: transmission of a difference value between the reference line indexes of a representative luma block and a chroma block
- the representative luma block is defined in the same way as in Realization Example 1-2.
- when the reference line of the representative luma block is indicated by intra_luma_ref_idx, the video encoding apparatus signals whether the reference line index of the current chroma block and the reference line index of the representative luma block are identical by signaling intra_ref_eq_flag.
- when the two indices are different, intra_ref_eq_flag is signaled as 0, and additional information indicating the reference line of the chroma block is transmitted.
- Syntax elements required for this realization are as follows. One or more of these syntax elements may be used.
- intra_luma_ref_idx represents a reference line index of a representative luma block and may have a value greater than or equal to 0.
- the intra_ref_eq_flag indicates whether the reference line index (intra_luma_ref_idx) of the representative luma block and the reference line index (intra_chroma_ref_idx) of the current chroma block are the same and can have a value of 0 or 1. If this value is 1, it indicates that intra_luma_ref_idx and intra_chroma_ref_idx are identical. When this value is 0, it indicates that the two values are different, and information on a reference line of a chroma block may be additionally transmitted.
- Table 9 shows the syntax required for transmission according to the pseudocode described above.
- the pseudo code below shows a case where the prediction mode of the chroma channel is parsed first.
- Table 10 shows the syntax required for transmission according to the pseudocode described above.
- in the examples of Tables 9 and 10, the transmission of the reference line index of the luma channel follows the structure of the existing VVC. Also, in the examples of Tables 9 and 10, when intra_luma_ref_idx and intra_chroma_ref_idx are different, reference line information of the chroma block is signaled according to realization example 1-1.
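- the equality-flag alternative of realization example 1-3 can be sketched as follows; when intra_ref_eq_flag is 0, the chroma reference line index is read directly, as in realization example 1-1. The reader object is a hypothetical stand-in.

```python
# Sketch of Realization Example 1-3: signal equality with the representative
# luma block's reference line, otherwise fall back to direct signaling.

class ToyReader:
    def __init__(self, values): self.values = list(values)
    def read(self, name): return self.values.pop(0)

def parse_chroma_ref_idx(reader, intra_luma_ref_idx):
    intra_ref_eq_flag = reader.read("intra_ref_eq_flag")
    if intra_ref_eq_flag == 1:
        # Same reference line as the representative luma block.
        return intra_luma_ref_idx
    # Different: the chroma reference line index is transmitted explicitly.
    return reader.read("intra_chroma_ref_idx")

print(parse_chroma_ref_idx(ToyReader([1]), intra_luma_ref_idx=2))      # 2
print(parse_chroma_ref_idx(ToyReader([0, 1]), intra_luma_ref_idx=2))   # 1
```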
- when predicting a current chroma block according to IPM, the video encoding apparatus derives information about the reference line to be used for prediction among a plurality of reference lines.
- a method of using a predefined reference line (realization example 2-1)
- a method of deriving a reference line according to information of a current chroma block (realization example 2-2)
- a method of inheriting a reference line of a representative luma block (realization example 2-3)
- the video encoding apparatus performs intra prediction of a chroma block by using a previously defined reference line among a plurality of reference lines regardless of block information.
- block information may include the size, aspect ratio, and prediction mode of a chroma block.
- a value of a reference line index indicating a predefined reference line may be signaled at a higher level such as SPS.
- reference line information may not be signaled for a chroma block at the CU level.
- the video encoding apparatus derives one reference line among a plurality of reference lines according to the information of the current chroma block.
- as the block information, the distance from a reference line to the side of the block facing that reference line may be considered. That is, the block height is considered for prediction modes above the vertical mode (mode 50), which use the upper reference line for prediction, and the block width is considered for prediction modes below the horizontal mode (mode 18), which use the left reference line for prediction.
- a larger value among the width and height of the block may be selected.
- the image encoding apparatus determines a reference line according to the selected width or height of the chroma block. To improve prediction accuracy, the larger the block width or height, the closer the reference line that may be used, as shown in Table 11.
- conversely, the smaller the block width or height, the more distant the reference line that may be used.
- the video encoding apparatus may determine a reference line to be used for prediction by referring to at least one of a block width, a prediction mode, and an aspect ratio.
- in addition, the location of the block, pixel values of all or some usable reference lines, all predictors that can be generated, and the distances between usable reference lines and the block may be referred to.
- alternatively, the reference line of the current chroma block may be determined by referring to information of a neighboring chroma block or the representative luma block, such as the width, height, area, aspect ratio, prediction mode, and position of that block, pixel values of all or some usable reference lines, all predictors that can be generated, the distance between a usable reference line and the block, the reconstructed pixel values of the block, the used reference line index, the pixel values of the used reference line, and the distance between the used reference line and the corresponding block.
- the representative luma block is defined the same as in the realization example 1-2.
- the width and height of the current chroma block, the width and height of the neighboring chroma block, and the reference line of the neighboring chroma block may be referred to. That is, when the current chroma block and the neighboring chroma block have the same width and height, the reference line index of the neighboring chroma block may be used as the reference line index of the current chroma block.
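- since the contents of Table 11 are not reproduced in this text, the sketch below is only a hypothetical example of the stated rule (a larger relevant dimension maps to a closer reference line, a smaller one to a more distant line), combined with the mode-dependent choice of the relevant dimension; the thresholds are illustrative assumptions.

```python
# Hypothetical sketch of Realization Example 2-2: derive the chroma reference
# line from block information. The size thresholds stand in for Table 11.

HOR_MODE, VER_MODE = 18, 50   # horizontal and vertical directional modes

def relevant_dimension(width, height, pred_mode):
    if pred_mode >= VER_MODE:      # modes using mainly the upper reference line
        return height
    if pred_mode <= HOR_MODE:      # modes using mainly the left reference line
        return width
    return max(width, height)      # otherwise take the larger dimension

def derive_chroma_ref_idx(width, height, pred_mode):
    dim = relevant_dimension(width, height, pred_mode)
    if dim >= 32:
        return 0                   # large blocks: adjacent reference line
    if dim >= 16:
        return 1
    return 2                       # small blocks: a more distant reference line

print(derive_chroma_ref_idx(32, 8, pred_mode=60))   # uses height -> index 2
print(derive_chroma_ref_idx(32, 8, pred_mode=10))   # uses width  -> index 0
```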
- the video encoding apparatus inherits the reference line of the representative luma block for the current chroma block in order to use the same reference line as the reference line of the luma channel.
- the current chroma block may be predicted using a reference line having the same index as the reference line index of the inherited representative luma block.
- the representative luma block is defined the same as in the realization example 1-2.
- the video encoding apparatus can use a method of unconditionally inheriting the reference line of the representative luma block (realization example 2-3-1), a method of inheriting it according to a specific condition by referring to block information (realization example 2-3-2), or a method of signaling inheritance (realization example 2-3-3).
- the image encoding apparatus adjusts the reference line index, as shown in Equation 4 or Equation 5, to match the locations of the reference lines of the two channels, and then uses the adjusted index.
- the video encoding apparatus inherits the reference line index of the representative luma block to derive the reference line index of the current chroma block. For example, when a representative luma block uses intra_luma_ref_idx 1 as a reference line, the current chroma block also uses intra_chroma_ref_idx 1 as a reference line.
- the video encoding apparatus inherits the reference line index of the representative luma block adaptively according to a specific condition by referring to block information.
- as the block information, one or more of the following may be referred to: the width, height, area, aspect ratio, position, and intra prediction mode of the current chroma block and the representative luma block; pixel values of all or some of the usable reference lines; all predictors that can be generated; the distance between a usable reference line and the corresponding block; whether or not the MRL of the representative luma block is used (i.e., whether or not an adjacent reference line is used); the reference line of the representative luma block and the pixel values of that reference line; the reconstructed pixel values of the representative luma block; the distance between the representative luma block and the reference line used in the representative luma block; whether the neighboring chroma block inherits its reference line; the reference line of the neighboring chroma block; and information on the neighboring chroma block (width, height, area, etc.).
- aspect ratios of a current chroma block and a representative luma block may be referred to. That is, when the aspect ratios of the two blocks match, the video encoding apparatus may use the reference line index of the representative luma block as the reference line index of the current chroma block.
- alternatively, the locations and intra prediction modes of the current chroma block and the representative luma block may be referred to. That is, when the reference lines used for prediction of the two blocks exist at the same position, the current chroma block inherits the reference line of the representative luma block. In this case, if there is at least one overlapping part between the upper or left boundary of the corresponding luma region and the upper or left boundary of the representative luma block, the video encoding apparatus determines that the reference line of the representative luma block is located at the same position as that of the current chroma block. For example, this same-position condition can be illustrated for the 4:2:0 color format as in the example of FIG. 12.
- the video encoding apparatus can inherit the reference line of the representative luma block.
- the chroma block may inherit the reference line of the representative luma block.
- alternatively, reference may be made to the aspect ratio of a neighboring chroma block, the aspect ratio of the current chroma block, and whether the neighboring chroma block inherits its reference line.
- in this case, the current chroma block may also inherit the reference line of the representative luma block.
- the video encoding apparatus may determine the reference line of the chroma block according to the prior art or the above-described implementation example.
- the image encoding apparatus may use only adjacent reference lines without considering a plurality of reference lines or may signal the reference lines according to the first embodiment.
- the video encoding apparatus may use a reference line defined in advance (realization example 2-1) or derive a reference line according to chroma block information (realization example 2-2).
- the video encoding apparatus signals whether the reference line index of the representative luma block is inherited.
- intra_ref_inherit_flag may be signaled.
- when this value is 1, the video encoding apparatus uses the reference line index of the representative luma block as the reference line index of the chroma block.
- when this value is 0, the video encoding apparatus may use one of the methods of realization example 1 or realization example 2 to determine the reference line index of the current chroma block without inheriting the reference line index of the representative luma block.
- the video encoding apparatus determines the reference line of the chroma channel by selectively combining the above-described embodiments and the prior art. At this time, the combination may have two meanings.
- an image encoding apparatus may determine a reference line of a chroma channel by combining all or part of the proposed methods in an arbitrary order. For example, realization example 2-3-2 and realization example 2-3-3 may be combined.
- the video encoding apparatus may signal whether the reference line is inherited when a specific condition is satisfied by referring to block information. In this case, when a specific condition is not satisfied, the reference line of the chroma channel may be determined according to the implementation example 2-3-2.
- the video encoding device may inherit the reference line of the luma representative block when a specific condition is satisfied by referring to block information. When a specific condition is not satisfied, whether or not the reference line is inherited may be signaled.
- the image encoding apparatus may select one of the methods after considering all or part of the methods proposed for one chroma block at the same time (ie, at the same level).
- the video encoding apparatus may select one method depending on a signal, or select one method depending on block information without a signal, as in Example 3.
- the video encoding apparatus divides the number of cases of all possible block information into n (n > 1) groups and uses a method for determining a specific reference line for each group.
- for example, when the width of the chroma block is referred to as the block information and n is 2, the video encoding apparatus may not derive a reference line according to chroma block information (realization example 2-2) for all cases of block width. That is, the image encoding apparatus derives a reference line according to chroma block information in some cases (for example, when the width of the block is 16) and signals a reference line in other cases (realization example 1).
- the reference line of the chroma block may be determined using a predefined reference line (realization example 2-1), inheriting the reference line of the representative luma block (realization example 2-3), or using a method in which these are combined (first method of implementation example 3).
- reference lines of adjacent chroma blocks may also be referred to as block information in the 'method for deriving a reference line according to information of a current chroma block' of Realization Example 2-2.
- for example, when the reference line index of an adjacent chroma block is determined to be 2 according to a signaling or derivation method, the reference line index of the current chroma block may be determined to be 2 by referring to that information.
- in this way, for all chroma blocks in an image, the reference line of each chroma block may be derived by referring to the reference line information of an adjacent chroma block as the block information.
- the video encoding apparatus signals additional information to selectively apply the methods of implementation examples 1 to 3 as described above.
- chroma_mrl_flag may be signaled.
- as shown in Table 12, when chroma_mrl_flag is 0, the video encoding apparatus may predict a chroma channel using only one adjacent reference line according to the existing technology, not according to the present invention.
- when chroma_mrl_flag is 1, realization example 2 can be used.
- the video encoding device may additionally signal chroma_mrl_idx to selectively use one of the existing technology and the methods of implementation examples 1 to 3.
- next, the case where the prediction mode of the current chroma block is CCLM is described.
- in this case, encoding efficiency can be improved by considering the characteristics of the CCLM in the method using a plurality of reference lines.
- in the process of predicting the current chroma block, the CCLM generates a linear relationship between channels using reference pixels around the chroma block and reference pixels around the corresponding luma region.
- the video encoding apparatus selects one of a plurality of reference lines rather than a single fixed reference line, and selects reference pixels used in a linear relationship between channels of the CCLM from the corresponding reference line.
- Reference pixels used in the linear relationship between channels include chroma reference pixels and luma reference pixels, and are collectively referred to as CCLM reference pixels.
- the image encoding apparatus may independently determine a reference line for selecting CCLM reference pixels for a chroma channel (ie, a current chroma block) and a luma channel (ie, a corresponding luma region). Accordingly, a method of selecting one of a plurality of reference lines in a chroma channel (realization example 1) and a method of selecting one of a plurality of reference lines in a luma channel (realization example 2) may be combined. As in the example of FIG. 13 , reference line indices at which CCLM reference pixels are located may be different for each channel.
- positions at which reference pixels are selected for each channel correspond to each other, but are not necessarily limited thereto.
- locations where reference pixels are selected in reference lines for each channel may not correspond to each other.
- pairs of luma reference pixels and chroma reference pixels used to generate a linear relational expression may be configured in a clockwise order starting from the lower left as illustrated in FIG. 14 based on positions of reference pixels.
- the pairs described above may be constructed according to arbitrary rules.
- the CCLM technology creates a corresponding luma area by applying a downsampling filter to the luma channel according to the color format of the input image, as shown in the example of FIG.
- here, the case where sps_chroma_vertical_collocated_flag is 0 is assumed.
- the reference line index used to select the reference pixels in the luma channel may indicate a reference line index where the center of the downsampling filter is located. That is, in the example of FIG. 15 , since the center of the downsampling filter is located on a reference line one pixel away from the corresponding luma region, intra_corresponding_luma_ref_idx 1 may be signaled to determine the reference line of the corresponding luma region in the CCLM prediction process.
- mapped chroma pixels may vary according to the phase of the downsampling filter.
- accordingly, the reference line index where the selected luma reference pixels are located may be signaled.
- for example, intra_corresponding_luma_ref_idx may be signaled as 1.
- the video encoding apparatus selects one of a plurality of reference lines for a chroma channel in order to select chroma reference pixels used in a linear relationship between channels when predicting a current chroma block with CCLM.
- a method of signaling one of a plurality of reference lines for a chroma channel (realization example 1-1) or a method of deriving one of a plurality of reference lines for a chroma channel (realization example 1-2) may be used.
- realization example 1-1: signaling one of a plurality of reference lines for the chroma channel
- the image encoding apparatus signals intra_chroma_ref_idx indicating a reference line where reference pixels are located with respect to a chroma channel.
- the image encoding apparatus may select CCLM reference pixels of a chroma block from a reference line that is not immediately adjacent to the boundary of the current chroma block and is two pixels apart.
- intra_chroma_ref_idx may be signaled as 2.
- Syntax elements required for this realization are as follows. When parsing information related to CCLM, this element may be parsed.
- intra_chroma_ref_idx indicates a reference line index indicating a reference line including CCLM reference pixels of a chroma block and may have a value of 0 or greater. If intra_chroma_ref_idx does not exist, it may be inferred to be 0.
- parsing of a reference line index of a chroma channel is omitted.
- parsing of the reference line index of the luma channel can be omitted.
- Table 14 shows the syntax required for transmission according to the pseudocode described above.
- realization example 1-2: deriving one of a plurality of reference lines for the chroma channel
- the video encoding apparatus derives a reference line where CCLM reference pixels are located for a chroma channel.
- a predefined reference line may be used (realization example 1-2-1)
- a reference line may be derived according to information of the current chroma block (realization example 1-2-2)
- a reference line of a corresponding luma region may be inherited (realization example 1-2-3).
- the video encoding apparatus selects a predefined reference line among a plurality of reference lines regardless of block information, and then selects CCLM reference pixels of a chroma block used in the CCLM linear relational expression in the reference line.
- the value of the reference line index indicating the predefined reference line may be signaled at a higher level such as SPS.
- reference line information may not be signaled for a chroma block at the CU level.
- the video encoding apparatus derives one reference line among a plurality of reference lines according to the information of the current chroma block, and selects CCLM reference pixels of the chroma block used for the CCLM linear relational expression from the derived reference line.
- as the block information, the distance from a reference line to the facing side of the block may be considered. That is, the block height is considered for CCLM_T, which selects upper reference pixels, and the block width is considered for CCLM_L, which selects left reference pixels.
- for CCLM_LT, which uses both top and left reference pixels for prediction, the larger value of the width and height may be selected.
- the image encoding apparatus determines a reference line from which reference pixels of a chroma block used in a linear relationship are selected according to the selected width or height of the chroma block. In order to improve prediction accuracy, a closer reference line may be used as shown in Table 11 as the value of the width or height of the block increases.
- conversely, the smaller the block width or height, the more distant the reference line that may be used.
- the video encoding apparatus may determine a reference line to be used for prediction by referring to at least one of a block width, a prediction mode, and an aspect ratio.
- in addition, the location of the block, pixel values of all or some usable reference lines, all predictors that can be generated, and the distances between usable reference lines and the block may be referred to.
- alternatively, the reference line of the current chroma block may be determined by referring to information of a neighboring chroma block, such as the width, height, area, aspect ratio, prediction mode, and position of that block, pixel values of all or some of the usable reference lines, all predictors that can be generated, the distance between a usable reference line and the block, the reconstructed pixel values of the block, the used reference line index, the pixel values of the used reference line, and the distance between the used reference line and the corresponding block.
- the width and height of the current chroma block, the width and height of the neighboring chroma block, and the reference line of the neighboring chroma block may be referred to. That is, when the current chroma block and the neighboring chroma block have the same width and height, the reference line index of the neighboring chroma block may be used as the reference line index of the current chroma block.
- the video encoding apparatus inherits one of a plurality of reference lines from the corresponding luma region of the current chroma block as reference line information, and selects CCLM reference pixels of the chroma block used in the CCLM linear relational expression from the inherited reference line. That is, reference pixels of a chroma block may be selected from a reference line having the same index as a reference line index used to select reference pixels of a corresponding luma region.
- for example, when the reference line of the corresponding luma region is indicated by intra_corresponding_luma_ref_idx, the reference line index of the chroma block may be derived from it (e.g., intra_chroma_ref_idx = intra_corresponding_luma_ref_idx >> 1).
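- a minimal sketch of this inheritance with a format-dependent index adjustment is shown below; the right-shift follows the expression above, and treating all non-4:4:4 formats uniformly (and 4:4:4 as a one-to-one mapping) is a simplifying assumption.

```python
# Sketch: inherit the chroma reference-line index from the corresponding luma
# region's index. The >>1 adjustment for subsampled formats follows the
# expression above; handling all non-4:4:4 formats identically is a
# simplification made for illustration.

def inherit_chroma_ref_idx(intra_corresponding_luma_ref_idx, color_format="4:2:0"):
    if color_format == "4:4:4":
        # Same resolution in both channels: indices correspond directly.
        return intra_corresponding_luma_ref_idx
    # Chroma is subsampled relative to luma, so the luma line index is halved.
    return intra_corresponding_luma_ref_idx >> 1

print(inherit_chroma_ref_idx(1, "4:4:4"))   # 1
print(inherit_chroma_ref_idx(2, "4:2:0"))   # 1
```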
- the reference line of the corresponding luma region may be inherited unconditionally (realization example 1-2-3-1), whether or not it is inherited may be signaled (realization example 1-2-3-2), or it may be inherited according to specific conditions by referring to chroma block information (realization example 1-2-3-3).
- these may be implemented similarly to realization example 2-3 described above for the case where the prediction mode of the current chroma block is IPM.
- the video encoding apparatus inherits the reference line index of the corresponding luma region to derive the reference line index from which the CCLM reference pixels of the current chroma block are selected. For example, for the 4:4:4 color format, if the CCLM reference pixels of the corresponding luma region are selected from the reference line indicated by intra_corresponding_luma_ref_idx equal to 1, the CCLM reference pixels of the current chroma block are also selected from the reference line indicated by intra_chroma_ref_idx equal to 1.
- the video encoding apparatus signals whether or not the reference line index of the corresponding luma region is inherited.
- intra_ref_inherit_luma_flag may be signaled.
- when this value is 1, the video encoding apparatus uses the reference line index of the corresponding luma region as the reference line index of the chroma block.
- when this value is 0, the video encoding apparatus may use the method of realization example 1-1, 1-2-1, or 1-2-2 to determine the reference line index of the current chroma block without inheriting the reference line index of the corresponding luma region.
- the video encoding apparatus inherits the reference line index of the corresponding luma region according to a specific condition by referring to chroma block information.
- as the block information, at least one of the following may be referred to: the width, height, area, aspect ratio, and location of the block; the CCLM mode index indicating the type of CCLM mode; pixel values of all or some of the usable reference lines; all predictors that can be generated; the distance between a usable reference line and the block; whether a neighboring chroma block inherits its reference line; the reference line of the neighboring chroma block; and information of the neighboring chroma block (width, height, area, aspect ratio, position, prediction mode, pixel values of all or some of the available reference lines, all predictors that can be generated, the distance between an available reference line and the corresponding block, the reconstructed pixel values of the block, the index and pixel values of the used reference line, etc.).
- a reference line index of a corresponding luma region may be inherited.
- alternatively, reference may be made to the aspect ratio of a neighboring chroma block, the aspect ratio of the current chroma block, and whether the neighboring chroma block inherits its reference line.
- in this case, the current chroma block may also inherit the reference line of the corresponding luma area.
- the image encoding apparatus may select CCLM reference pixels of the chroma block from the reference line determined according to the prior art or the above-described implementation example.
- the video encoding apparatus selects one of a plurality of reference lines for a luma channel to select CCLM reference pixels of a corresponding luma region used in a linear relational expression between channels.
- a method of signaling one of a plurality of reference lines for the luma channel (realization example 2-1) or a method of deriving one of a plurality of reference lines for the luma channel (realization example 2-2) may be used.
- realization example 2-1: signaling one of a plurality of reference lines for the luma channel
- the video encoding apparatus signals intra_corresponding_luma_ref_idx indicating a reference line where CCLM reference pixels are located for a luma channel.
- for example, the image encoding apparatus may select CCLM reference pixels of the corresponding luma region from a reference line that is not directly adjacent to the boundary of the corresponding luma region but is four pixels apart from it.
- the center of the downsampling filter is located on the corresponding reference line, and intra_corresponding_luma_ref_idx indicating the corresponding reference line may be signaled as 4.
- Syntax elements required for this realization are as follows. When parsing information related to CCLM, this element may be parsed.
- intra_corresponding_luma_ref_idx represents a reference line index indicating a reference line used for selecting CCLM reference pixels in a corresponding luma region.
- This index may have a value greater than or equal to 0 for the 4:4:4 color format, and for the other formats, values that may exist may vary depending on the shape of a filter applied to downsampling. For example, when the filter illustrated in FIG. 17 is applied, this index may have a value of 1 or more. If intra_corresponding_luma_ref_idx does not exist, it may be inferred to be 0 for the 4:4:4 color format. For the remaining formats, the value of this index (1 in the case of the filter illustrated in FIG. 17) can be inferred so that the downsampling filter is applied at a position adjacent to the corresponding luma region.
- parsing of a reference line index of a chroma channel is omitted.
- parsing of the reference line index of the luma channel can be omitted.
- Table 16 shows the syntax required for transmission according to the pseudocode described above.
- the video encoding apparatus derives a reference line where CCLM reference pixels are located for a luma channel.
- a predefined reference line may be used (realization example 2-2-1), the reference line of the current chroma block may be inherited (realization example 2-2-2), or the MRL index of the luma block may be inherited (realization example 2-2-3).
- the video encoding apparatus selects a predefined reference line among a plurality of reference lines regardless of block information, and then selects CCLM reference pixels of a corresponding luma area used in a CCLM linear relational expression from the corresponding reference line.
- the value of the reference line index indicating the predefined reference line may be signaled at a higher level such as SPS.
- reference line information may not be signaled for a chroma block at the CU level.
- the video encoding apparatus inherits one of a plurality of reference lines from the current chroma block as reference line information, and selects CCLM reference pixels of a corresponding luma region used in a CCLM linear relational expression from the inherited reference line. That is, reference pixels of a corresponding luma region may be selected from a reference line having the same index as a reference line index used to select reference pixels of a chroma block.
- in this case, the method of inheriting from the corresponding luma area (realization example 1-2-3) cannot be applied at the same time.
- the reference line of the current chroma block may be inherited unconditionally (realization example 2-2-2-1), whether or not inheritance is signaled (realization example 2-2-2-2), or inheritance according to specific conditions by referring to chroma block information (realization example 2-2-2-3).
- these may be implemented similarly to realization example 2-3 in case the prediction mode of the current chroma block is IPM among the above-described realization examples.
- the video encoding apparatus inherits the reference line index of the chroma block to derive the reference line index from which the CCLM reference pixels of the corresponding luma region are selected. For example, for a 4:4:4 color format, if CCLM reference pixels of a chroma block are selected from a reference line indicated by intra_chroma_ref_idx 1, CCLM reference pixels of a corresponding luma area are also selected from a reference line indicated by intra_corresponding_luma_ref_idx 1.
- the video encoding apparatus signals whether the reference line index of the chroma block is inherited.
- intra_ref_inherit_chroma_flag may be signaled.
- when this value is 1, the video encoding apparatus uses the reference line index of the chroma block as the reference line index of the corresponding luma region.
- when this value is 0, the video encoding apparatus may use the method of realization example 2-1 or 2-2-1 to determine the reference line index of the corresponding luma region without inheriting the reference line index of the chroma block.
- the video encoding apparatus inherits the reference line index of the chroma block according to a specific condition by referring to information of the chroma block. It is also possible to refer to information of a corresponding luma area, but since a corresponding luma area is defined according to a chroma block, using information of a corresponding luma area and information of a chroma block have the same effect.
- as the block information, one or more of the width, height, area, aspect ratio, position, the CCLM mode index indicating the type of CCLM mode, pixel values of all or some of the usable reference lines, all predictors that can be generated, and the distance between a usable reference line and the corresponding block may be referenced.
- restored pixel values in the corresponding luma area and reference pixels around the corresponding luma area may be referred to.
- the reference line index of the chroma block may be inherited.
- whether a luma area corresponding to a neighboring chroma block (ie, a neighboring area of a current corresponding luma area) inherits a reference line or a reference line of a luma area corresponding to a neighboring chroma block may be referred to as block information.
- the image encoding apparatus may select CCLM reference pixels of the corresponding luma region from the reference line determined according to the prior art or the above-described embodiment.
- the video encoding apparatus determines the reference line of the corresponding luma region by inheriting intra_luma_ref_idx, which is the index indicating the reference line of the luma channel according to MRL technology, and selects CCLM reference pixels of the corresponding luma region from the determined reference line. Since the reference line selected by the MRL technique for the luma channel is the most suitable reference line for prediction of the corresponding luma block, inheriting this information has the advantage that signaling cost can be reduced. In this case, the video encoding apparatus may inherit the MRL index of the luma block whose size, shape, and location match the corresponding luma region. As in realization example 2-2-2, the reference line of the luma block can be inherited unconditionally (realization example 2-2-3-1), or whether to inherit it can be signaled (realization example 2-2-3-2).
- the video encoding device inherits the reference line index of a luma block having the same size, shape, and location as the corresponding luma region.
- the reference line of the corresponding luma block may be determined by parsing intra_luma_ref_idx when encoding/decoding the luma channel according to the syntax parsing method of MRL technology. For example, if a luma block identical in size, shape, and location to the corresponding luma area exists and the reference line of the corresponding block is indicated by intra_luma_ref_idx 2, CCLM reference pixels of the corresponding luma area are also selected from the reference line indicated by intra_corresponding_luma_ref_idx 2.
- the video encoding apparatus signals whether or not the MRL index of the luma block is inherited.
- intra_ref_inherit_chroma_flag may be signaled.
- when this value is 1, the video encoding apparatus uses the MRL index (i.e., intra_luma_ref_idx) of the luma block having the same size, shape, and location as the corresponding luma region as the reference line index of the corresponding luma region.
- when this value is 0, the video encoding device may use the method of realization example 2-1, 2-2-1, or 2-2-2 to determine the reference line index of the corresponding luma region without inheriting the MRL index of the luma block having the same size, shape, and location as the corresponding luma region.
- if intra_ref_inherit_chroma_flag does not exist, it may be inferred to be 0.
- the video encoding apparatus determines reference lines of a chroma channel and a luma channel by selectively combining the above-described embodiments and the prior art.
- the combination may have two meanings.
- an image encoding apparatus may determine reference lines of a chroma channel and a luma channel by combining all or part of the proposed methods in an arbitrary order. For example, when selecting CCLM reference pixels of a chroma block by determining a reference line for a chroma channel, realization example 1-2-3-2 and realization example 1-2-3-3 may be combined.
- for example, the video encoding device may signal whether the reference line is inherited when a specific condition is satisfied by referring to at least one of the block width, height, area, aspect ratio, location, and the CCLM mode index indicating the type of CCLM mode.
- in this case, when the specific condition is not satisfied, the reference line of the chroma channel may be determined according to realization example 1-2-3-3.
- the video encoding device may inherit the reference line when a specific condition is satisfied by referring to block information. At this time, if a specific condition is not satisfied, whether or not the reference line is inherited may be signaled.
- the reference line of the corresponding luma area can similarly be determined based on a combination of the prior art and the presented embodiments.
- the image encoding apparatus may select one of the methods after considering all or part of the methods proposed for one chroma block at the same time (ie, at the same level).
- the video encoding apparatus may select one method depending on a signal, or select one method depending on block information without a signal, as in Example 3.
- the video encoding apparatus divides the number of cases of all possible block information into n (n > 1) groups and uses a method for determining a specific reference line for each group.
- as the block information, at least one of the block width, height, area, aspect ratio, position, and the CCLM mode index indicating the type of CCLM mode may be referred to.
- for example, when determining the chroma block reference line from which CCLM reference pixels are to be selected, if the width of the chroma block is referred to as the block information and n is 2, the video encoding apparatus may not derive the reference line according to the chroma block information (realization example 1-2-2) for all cases of block width. That is, the video encoding apparatus derives a reference line according to chroma block information in some cases (for example, when the width of the block is 16) and signals a reference line in other cases (realization example 1-1).
- the reference line of the chroma block may be determined using a predefined reference line (realization example 1-2-1), inheriting the reference line of the corresponding luma region (realization example 1-2-3), or using a method in which these are combined (first method of implementation example 3).
- the reference line of the corresponding luma area can similarly be determined based on a combination of the prior art and the presented embodiments.
- reference lines of adjacent chroma blocks may also be referred to as block information in Realization Example 1-2-2 'method of deriving a reference line according to information of a current chroma block'.
- for example, when the reference line index of an adjacent chroma block is determined to be 2 according to a signaling or derivation method, the reference line index of the current chroma block may also be determined to be 2 by referring to that information.
- in this way, for all chroma blocks in an image, the reference line of each chroma block may be derived by referring to the reference line information of an adjacent chroma block as the block information.
- the video encoding apparatus signals additional information to selectively apply the methods of implementation examples 1 to 3 as described above.
- cclm_mrl_flag can be signaled.
- when this value is 0, the video encoding device may predict the chroma channel according to the existing CCLM technique, not according to the present invention.
- when this value is 1, in order to use a plurality of reference lines for each channel, the video encoding apparatus applies realization example 1-1 to the chroma channel and realization example 2-1 to the luma channel; that is, both channels use the method of signaling reference lines.
- the video encoding device may selectively use one of the existing technology and the combination methods of Realization Examples 1 and 2 by signaling cclm_mrl_flag and cclm_mrl_idx.
- the video encoding device may signal a realization method applied for each channel using cclm_mrl_chroma_idx and cclm_mrl_luma_idx.
- FIG. 18 is an exemplary diagram additionally illustrating downsampling filters according to color formats.
- the video encoding apparatus may use various downsampling filters in addition to the methods illustrated in FIG. 7 to make the size of the corresponding luma region equal to the size of the current chroma block when the color format is not 4:4:4.
- various downsampling filters may be applied.
- the filter of the 4:2:0 MPEG2 downsampling method is the same as the example of FIG. 19 .
- the video encoding apparatus may equally apply the MRL according to the above-described implementation examples 1 to 4.
- hereinafter, a method of intra predicting and encoding/decoding a current chroma block based on IPM by a video encoding apparatus or a video decoding apparatus will be described using the illustrations of FIGS. 20 and 21.
- FIG. 20 is a flowchart illustrating a method of encoding a current chroma block performed by an image encoding apparatus according to an embodiment of the present disclosure.
- the video encoding device determines the CCLM mode flag (S2000).
- the CCLM mode flag indicates whether to use the CCLM mode.
- the video encoding apparatus may determine the value of the CCLM mode flag.
- the video encoding device checks the CCLM mode flag (S2002).
- when the CCLM mode flag is false, the video encoding device performs the following steps.
- the image encoding apparatus determines a chroma intra prediction mode of the current chroma block (S2004). In terms of encoding efficiency optimization, the video encoding apparatus may determine a chroma intra prediction mode.
- the image encoding apparatus may set the chroma intra prediction mode according to Table 1 based on them.
- the image encoding apparatus determines one reference line among a plurality of neighboring chroma pixel lines of the current chroma block based on the plurality of neighboring chroma pixel lines of the current chroma block or the plurality of neighboring luma pixel lines of the representative luma block (S2006).
- the representative luma block is a block including a luma pixel corresponding to a pixel at a preset position of the current chroma block.
- the reference line of the current chroma block is indicated by the index of the reference line of the current chroma block.
- the image encoding apparatus may determine the reference line of the current chroma block using one of the methods of Realization Examples 1 to 4 corresponding to the case where the prediction mode of the current chroma block is IPM.
- the video encoding apparatus generates a predictor of the current chroma block using the reference line determined according to the chroma intra prediction mode (S2008).
- the image encoding apparatus generates a residual block of the current chroma block by subtracting a predictor from the current chroma block (S2010).
- the image encoding apparatus encodes the CCLM mode flag, the chroma intra prediction mode, and the residual block (S2012).
- otherwise, when the CCLM mode flag is true, the video encoding apparatus encodes the current chroma block based on CCLM (S2020).
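- the encoding flow of FIG. 20 can be summarized in the sketch below; all helper functions are trivial stand-ins for the procedures described above, so only the order of steps S2000 to S2020 is meaningful.

```python
# Sketch of the encoding flow of FIG. 20 (steps S2000-S2020). The helpers are
# placeholders; only the control flow is intended to be meaningful.

def decide_cclm_mode_flag(block):            # S2000: rate-distortion decision in practice
    return False

def decide_chroma_intra_mode(block):         # S2004: e.g. a mode set according to Table 1
    return "DM"

def determine_chroma_ref_line(block):        # S2006: Realization Examples 1 to 4
    return 0

def intra_predict(block, mode, ref_line):    # S2008: directional/Planar/DC prediction
    return [[128] * len(row) for row in block]

def encode_chroma_block(block):
    cclm_mode_flag = decide_cclm_mode_flag(block)                       # S2000
    if not cclm_mode_flag:                                              # S2002
        mode = decide_chroma_intra_mode(block)                          # S2004
        ref_line = determine_chroma_ref_line(block)                     # S2006
        pred = intra_predict(block, mode, ref_line)                     # S2008
        residual = [[o - p for o, p in zip(orow, prow)]                 # S2010
                    for orow, prow in zip(block, pred)]
        return {"cclm_mode_flag": 0, "mode": mode,
                "ref_line": ref_line, "residual": residual}             # S2012
    return {"cclm_mode_flag": 1}                                        # S2020 (CCLM path)

print(encode_chroma_block([[130, 126], [129, 127]]))
```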
- FIG. 21 is a flowchart illustrating a method of decoding a current chroma block performed by an image decoding apparatus according to an embodiment of the present disclosure.
- the video decoding apparatus decodes the residual block of the current chroma block and the CCLM mode flag from the bitstream (S2100).
- the CCLM mode flag indicates whether to use the CCLM mode.
- the video decoding apparatus checks the CCLM mode flag (S2102).
- when the CCLM mode flag is false, the video decoding apparatus performs the following steps.
- the video decoding apparatus decodes the chroma intra prediction mode of the current chroma block from the bitstream (S2104).
- the video decoding apparatus may determine the chroma intra prediction mode according to Table 1 based on them.
- The video decoding apparatus determines one reference line of the current chroma block among a plurality of neighboring chroma pixel lines of the current chroma block, based on the plurality of neighboring chroma pixel lines of the current chroma block or a plurality of neighboring luma pixel lines of the representative luma block (S2106).
- The representative luma block is a block including the luma pixel corresponding to a pixel at a preset position of the current chroma block.
- The reference line of the current chroma block is indicated by the index of the reference line of the current chroma block.
- The video decoding apparatus may determine the reference line of the current chroma block using one of the methods of Realization Examples 1 to 4 corresponding to the case where the prediction mode of the current chroma block is IPM.
- The video decoding apparatus generates a predictor of the current chroma block according to the chroma intra prediction mode using the determined reference line (S2108).
- The video decoding apparatus restores the current chroma block by adding the predictor and the residual block (S2110). A simplified sketch of steps S2104 to S2110 is given below.
- On the other hand, when the CCLM mode flag is true, the video decoding apparatus decodes the current chroma block based on CCLM (S2120).
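- A matching decoder-side sketch of steps S2104 to S2110 follows, again for illustration only: the DC predictor mirrors the encoder stand-in above and is not the normative predictor, and the 10-bit clipping range is an assumption.

```python
import numpy as np

def decode_ipm_branch(residual, chroma_ref_line, bit_depth=10):
    """Toy version of steps S2104 to S2110: rebuild the predictor from the
    determined reference line (same DC stand-in as the encoder sketch) and
    add the decoded residual to restore the current chroma block."""
    h, w = residual.shape
    pred = np.full((h, w), chroma_ref_line.mean())       # S2108: predictor
    recon = pred + residual                              # S2110: add the residual
    return np.clip(recon, 0, (1 << bit_depth) - 1)       # clip to the sample range
```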
- FIG. 22 is a flowchart illustrating a method of encoding a current chroma block, performed by a video encoding apparatus, according to another embodiment of the present disclosure.
- The video encoding apparatus determines the CCLM mode index (S2200).
- The CCLM mode index indicates the CCLM mode of the current chroma block.
- The video encoding apparatus may determine the CCLM mode index.
- The CCLM mode according to the CCLM mode index is shown in Table 1.
- The video encoding apparatus determines one reference line of the current chroma block among the plurality of neighboring chroma pixel lines of the current chroma block, based on the plurality of neighboring chroma pixel lines of the current chroma block or a plurality of neighboring luma pixel lines of the corresponding luma area (S2202).
- The corresponding luma area is an area within the luma channel corresponding to the current chroma block.
- The reference line of the current chroma block is indicated by the index of the reference line of the current chroma block.
- The video encoding apparatus may determine the chroma reference line using one of the methods of Realization Examples 1, 3, and 4 corresponding to the case where the prediction mode of the current chroma block is CCLM.
- The video encoding apparatus determines one reference line of the corresponding luma area among a plurality of neighboring luma pixel lines of the corresponding luma area, based on the plurality of neighboring chroma pixel lines of the current chroma block or the plurality of neighboring luma pixel lines of the corresponding luma area (S2204).
- The reference line of the corresponding luma area is indicated by the index of the reference line of the corresponding luma area.
- The video encoding apparatus may determine the luma reference line using one of the methods of Realization Examples 2 to 4 corresponding to the case where the prediction mode of the current chroma block is CCLM.
- The video encoding apparatus selects chroma reference pixels from the chroma reference line and luma reference pixels from the luma reference line, and then generates a linear relational expression between the luma reference pixels and the chroma reference pixels (S2206).
- The video encoding apparatus generates a predictor of the current chroma block from the pixels in the corresponding luma area using the linear relational expression (S2208).
- The video encoding apparatus generates a residual block of the current chroma block by subtracting the predictor from the current chroma block (S2210).
- The video encoding apparatus encodes the CCLM mode flag, the CCLM mode index, and the residual block (S2212). A simplified sketch of steps S2206 to S2210 is given below.
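- The CCLM encoding flow of FIG. 22 is sketched below under two explicit assumptions: the luma and chroma reference lines are passed in as already-selected arrays (the selection itself follows the Realization Examples referenced above), and the linear relational expression of step S2206 is fitted here by ordinary least squares, which is only one possible derivation.

```python
import numpy as np

def fit_linear_model(luma_ref, chroma_ref):
    """Fit chroma ~= alpha * luma + beta over the selected reference pixels
    (a least-squares stand-in for the linear relational expression of S2206)."""
    alpha, beta = np.polyfit(luma_ref, chroma_ref, 1)
    return alpha, beta

def cclm_encode(chroma_block, luma_area, luma_ref, chroma_ref):
    """Toy version of steps S2206 to S2210 (luma_area already at chroma resolution)."""
    alpha, beta = fit_linear_model(luma_ref, chroma_ref)  # S2206: linear model
    pred = alpha * luma_area + beta                        # S2208: predictor from luma pixels
    residual = chroma_block - pred                         # S2210: residual block
    return pred, residual                                  # S2212 entropy-codes index and residual

# Example usage
luma_ref = np.array([100.0, 120.0, 140.0, 160.0])
chroma_ref = np.array([60.0, 70.0, 80.0, 90.0])
luma_area = np.full((4, 4), 130.0)
chroma_block = np.full((4, 4), 75.0)
pred, residual = cclm_encode(chroma_block, luma_area, luma_ref, chroma_ref)
```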
- FIG. 23 is a flowchart illustrating a method of decoding a current chroma block, performed by a video decoding apparatus, according to another embodiment of the present disclosure.
- The video decoding apparatus decodes the CCLM mode index from the bitstream (S2300).
- The CCLM mode index indicates the CCLM mode of the current chroma block.
- The CCLM mode according to the CCLM mode index is shown in Table 1.
- The video decoding apparatus determines one reference line of the current chroma block among a plurality of neighboring chroma pixel lines of the current chroma block, based on the plurality of neighboring chroma pixel lines of the current chroma block or a plurality of neighboring luma pixel lines of the corresponding luma area, according to the CCLM mode (S2302).
- The corresponding luma area is an area within the luma channel corresponding to the current chroma block.
- The reference line of the current chroma block is indicated by the index of the reference line of the current chroma block.
- The video decoding apparatus may determine the chroma reference line using one of the methods of Realization Examples 1, 3, and 4 corresponding to the case where the prediction mode of the current chroma block is CCLM.
- The video decoding apparatus determines one reference line of the corresponding luma area among a plurality of neighboring luma pixel lines of the corresponding luma area, based on the plurality of neighboring chroma pixel lines of the current chroma block or the plurality of neighboring luma pixel lines of the corresponding luma area (S2304).
- The reference line of the corresponding luma area is indicated by the index of the reference line of the corresponding luma area.
- The video decoding apparatus may determine the luma reference line using one of the methods of Realization Examples 2 to 4 corresponding to the case where the prediction mode of the current chroma block is CCLM.
- The video decoding apparatus selects chroma reference pixels from the chroma reference line and luma reference pixels from the luma reference line, and then generates a linear relational expression between the luma reference pixels and the chroma reference pixels (S2306).
- The video decoding apparatus generates a predictor of the current chroma block from the pixels in the corresponding luma area using the linear relational expression (S2308).
- The video decoding apparatus restores the current chroma block by adding the predictor and the residual block (S2310). A simplified sketch of steps S2306 to S2310 is given below.
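- A corresponding decoder-side sketch of steps S2306 to S2310 follows. The 2x2 averaging that brings the corresponding luma area to chroma resolution is only an illustrative 4:2:0 stand-in (a codec would use specific downsampling filters), and the least-squares fit again stands in for the normative model derivation.

```python
import numpy as np

def downsample_luma_420(luma):
    """Illustrative 2x2 averaging to bring the corresponding luma area to
    chroma resolution for 4:2:0 content."""
    return 0.25 * (luma[0::2, 0::2] + luma[1::2, 0::2]
                   + luma[0::2, 1::2] + luma[1::2, 1::2])

def cclm_decode(residual, luma_area, luma_ref, chroma_ref, bit_depth=10):
    """Toy version of steps S2306 to S2310: refit the same linear model on the
    decoder side, predict from the downsampled luma area, add the residual."""
    alpha, beta = np.polyfit(luma_ref, chroma_ref, 1)      # S2306: linear model
    pred = alpha * downsample_luma_420(luma_area) + beta   # S2308: predictor
    recon = pred + residual                                # S2310: restore the block
    return np.clip(recon, 0, (1 << bit_depth) - 1)
```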
- Non-transitory recording media include, for example, all types of recording devices in which data is stored in a form readable by a computer system.
- For example, the non-transitory recording medium includes storage media such as an erasable programmable read-only memory (EPROM), a flash drive, an optical drive, a magnetic hard drive, and a solid-state drive (SSD).
Abstract
The disclosure relates to a video coding method and apparatus using chroma channel prediction based on multiple reference lines. The present embodiment relates to a video coding method and apparatus that generate, using a plurality of reference lines, a predictor of a chroma channel in intra prediction of a current block based on an intra prediction mode or a cross-component linear model (CCLM).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202380016536.XA CN118511510A (zh) | 2022-01-19 | 2023-01-09 | 使用多个参考行用于色度通道编码的方法和装置 |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2022-0008047 | 2022-01-19 | ||
KR20220008047 | 2022-01-19 | ||
KR20220065924 | 2022-05-30 | ||
KR10-2022-0065924 | 2022-05-30 | ||
KR10-2023-0002155 | 2023-01-06 | ||
KR1020230002155A KR20230112053A (ko) | 2022-01-19 | 2023-01-06 | 다중 참조라인들을 이용하는 크로마 채널 코딩방법 및 장치 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023140547A1 (fr) | 2023-07-27 |
Family
ID=87348966
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2023/000359 | WO2023140547A1 (fr) | 2023-01-09 | 2023-01-09 |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2023140547A1 (fr) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021023152A1 (fr) * | 2019-08-03 | 2021-02-11 | Beijing Bytedance Network Technology Co., Ltd. | Sélection de matrices pour une transformée secondaire réduite dans un codage vidéo |
KR20210018137A (ko) * | 2019-08-06 | 2021-02-17 | 현대자동차주식회사 | 동영상 데이터의 인트라 예측 코딩을 위한 방법 및 장치 |
KR20210019401A (ko) * | 2018-07-11 | 2021-02-22 | 삼성전자주식회사 | 비디오 복호화 방법 및 장치, 비디오 부호화 방법 및 장치 |
KR20210145755A (ko) * | 2019-04-12 | 2021-12-02 | 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 | 행렬 기반 인트라 예측과 다른 코딩 툴들 사이의 상호작용 |
KR20210148321A (ko) * | 2019-04-27 | 2021-12-07 | 주식회사 윌러스표준기술연구소 | 인트라 예측 기반 비디오 신호 처리 방법 및 장치 |
Legal Events
- Date | Code | Title | Description |
- ---|---|---|---|
- | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23743396; Country of ref document: EP; Kind code of ref document: A1 |
- | WWE | Wipo information: entry into national phase | Ref document number: 202380016536.X; Country of ref document: CN |
- | NENP | Non-entry into the national phase | Ref country code: DE |
- | 122 | Ep: pct application non-entry in european phase | Ref document number: 23743396; Country of ref document: EP; Kind code of ref document: A1 |