CN115567714B - Inter-frame prediction method, video encoding method and related device - Google Patents
- Publication number
- CN115567714B (application CN202211086018.9A)
- Authority
- CN
- China
- Prior art keywords
- motion information
- block
- division
- mode
- current block
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/186—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The application provides a block division method, an inter prediction method, a video coding method and a related device. The block division method includes determining a division line corresponding to the current division mode of the current block according to preset division interval data, and dividing the current block along the division line to obtain a first rectangular sub-block and a second rectangular sub-block. The application can improve the accuracy of prediction.
Description
Technical Field
The present application relates to the field of inter prediction technologies, and in particular, to a block partitioning method, an inter prediction method, a video encoding method, and related devices.
Background
Because the amount of video image data is relatively large, the video image data usually needs to be encoded and compressed. The compressed data is called a video bitstream, which is transmitted to a user terminal over a wired or wireless network and then decoded for viewing.
The whole video coding flow comprises processes such as prediction, transformation, quantization and entropy coding. Prediction is divided into two parts: intra prediction and inter prediction. During long-term research and development, the inventor of the present application found that the current prediction method has certain limitations, which affect prediction accuracy to a certain extent.
Disclosure of Invention
The application provides a block division method, an inter-frame prediction method, a video coding method and a related device, which are used to solve the problem of low prediction accuracy.
In order to solve the above problem, the present application provides a block division method, comprising:
determining a dividing line corresponding to a current dividing mode of the current block according to preset dividing interval data;
dividing the current block along the dividing line to obtain a first rectangular sub-block and a second rectangular sub-block.
When the division mode is a horizontal division mode, the preset division interval data is horizontal interval data, and the horizontal interval data includes the line spacing between the division line of the first horizontal division mode and the upper edge of the current block and the line spacing between the division lines of two adjacent horizontal division modes;
when the division mode is a vertical division mode, the preset division interval data is vertical interval data, and the vertical interval data includes the column spacing between the division line of the first vertical division mode and the left edge of the current block and the column spacing between the division lines of two adjacent vertical division modes.
Wherein, the preset division interval data is equal to a preset value, and determining the division line corresponding to the current division mode of the current block according to the preset division interval data includes:
when the current division mode is the first horizontal division mode, the line spacing between the division line of the current division mode and the upper edge of the current block is the preset value; and/or,
when the current division mode is a horizontal division mode other than the first horizontal division mode, the line spacing between the division line of the current division mode and the division line of the previous horizontal division mode is the preset value; and/or,
when the current division mode is the first vertical division mode, the column spacing between the division line of the current division mode and the left edge of the current block is the preset value; and/or,
when the current division mode is a vertical division mode other than the first vertical division mode, the column spacing between the division line of the current division mode and the division line of the previous vertical division mode is the preset value.
In order to solve the above problems, the present application provides an inter prediction method, which includes:
Constructing a motion information candidate list of the current block;
traversing all the division modes in sequence, and determining the cost values of all motion information combinations of at least some of the division modes, wherein a motion information combination includes the motion information of the two rectangular sub-blocks obtained by dividing the current block according to each available division mode using the above block division method, and the motion information of all rectangular sub-blocks comes from the motion information candidate list of the current block;
And determining an optimal division mode and an optimal motion information combination based on the cost value.
Wherein sequentially traversing all the division modes and determining the cost values of all motion information combinations of at least some of the division modes comprises:
Determining motion information of the first rectangular sub-block and the second rectangular sub-block based on the motion information candidate list;
and performing motion compensation on the first rectangular sub-block based on the motion information of the first rectangular sub-block, and performing motion compensation on the second rectangular sub-block based on the motion information of the second rectangular sub-block to obtain a predicted value of the current block.
Wherein the current block is a luminance block, the method further comprising:
and when the width and/or height of the first rectangular sub-block or the second rectangular sub-block is smaller than 8, performing motion compensation on the chroma block corresponding to the current block according to the whole block to obtain a predicted value of the chroma block.
Wherein sequentially traversing all the division modes and determining the cost values of all motion information combinations of at least some of the division modes comprises:
When the widths and heights of the first rectangular sub-block and the second rectangular sub-block are even and are greater than or equal to 4, the dividing mode is an available dividing mode, and the cost value of all motion information combinations in the available dividing mode is calculated;
Determining the optimal division mode and the optimal motion information combination based on the cost values includes comparing the cost values of all motion information combinations in all available division modes, and taking the available division mode and the motion information combination with the smallest cost value as the optimal division mode and the optimal motion information combination of the current block.
Wherein the method further comprises:
in response to the partition mode being an unavailable partition mode, an index value of the partition mode is increased to decrease an index value of at least one available partition mode.
Wherein, after determining the optimal division mode and the optimal motion information combination based on the cost values, the method further comprises:
Storing motion information of each unit block of the current block;
if the unit block is positioned in one of the sub-blocks, the motion information of the unit block is the motion information of the sub-block in which the unit block is positioned;
if at least two parts of the unit block are respectively located in at least two sub-blocks, the motion information of the unit block is the motion information of one of the sub-blocks.
Wherein, after determining the optimal division mode and the optimal motion information combination based on the cost values, the method further comprises:
Storing motion information of each unit block of the current block;
the motion information of each unit block is the motion information of the sub-block with the largest area in all sub-blocks.
Wherein the motion information of all sub-blocks in the same available division mode is different from each other.
Wherein the width and/or height of the current block is greater than or equal to 8 and less than or equal to 128.
Wherein, the motion information candidate list of the current block is constructed, comprising:
adding the temporal motion information and the spatial motion information of the current block to the motion information candidate list in order until the motion information candidate list is full;
And adding preset motion information to the motion information candidate list if the motion information candidate list is not filled, or adding scaling motion information obtained by scaling motion information in the motion information candidate list to the motion information candidate list, or adding an average value of at least two motion information in the motion information candidate list to the motion information candidate list.
Wherein adding scaled motion information, obtained by scaling motion information in the motion information candidate list, to the motion information candidate list comprises the following steps:
Substituting the absolute value of the scaled motion information in the x-axis direction into an amplification formula or a reduction formula corresponding to the range of the absolute value of the x-axis direction to obtain the value of the scaled motion information in the x-axis direction;
Substituting the absolute value of the scaled motion information in the y-axis direction into an enlargement formula or a reduction formula corresponding to the range of the absolute value of the y-axis direction to obtain the value of the scaled motion information in the y-axis direction.
Wherein, in the amplification formula and the reduction formula, temp is the absolute value of the x-axis or y-axis component of the scaled motion information, result is the value of the x-axis or y-axis component of the scaled motion information, and A is the sign (positive or negative) of the x-axis or y-axis component of the scaled motion information.
In order to solve the above problems, the present application provides a video encoding method, which includes:
determining an optimal division mode and an optimal motion information combination of the current block based on the method;
and encoding the index value of the optimal division mode and the index value of the motion information of each sub-block in the optimal motion information combination.
In order to solve the above problems, the present application provides a codec system, which includes a processor for executing instructions to implement the steps of the above method.
In order to solve the above-mentioned problems, the present application provides a computer storage medium having stored thereon instructions/program data which, when executed, implement the steps of the above-mentioned method.
According to the present application, a division line corresponding to the current division mode of the current block is determined according to preset division interval data, and the current block is divided along the division line to obtain a first rectangular sub-block and a second rectangular sub-block. Dividing the current block along division lines determined from preset division interval data enriches the division modes so as to adapt to more image textures, avoids the inaccurate prediction values that a uniform division method applied to all coding blocks produces for coding blocks whose textures cannot be evenly split, and thus predicts textures better and improves prediction accuracy.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort to those of ordinary skill in the art.
FIG. 1 is a flow chart of an inter prediction method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an embodiment of obtaining temporal motion information in the inter prediction method according to the present application;
FIG. 3 is a schematic diagram of another embodiment of obtaining temporal motion information in the inter prediction method of the present application;
FIG. 4 is a schematic diagram of acquiring spatial motion information in the inter-frame prediction method of the present application;
FIG. 5 is a diagram illustrating an embodiment of a method for determining a horizontal partition mode in an inter prediction method according to the present application;
FIG. 6 is a schematic diagram of an embodiment of a method for determining a vertical partition mode in an inter prediction method according to the present application;
FIG. 7 is a schematic diagram of another embodiment of a method for determining a horizontal partition mode in an inter prediction method according to the present application;
FIG. 8 is a schematic diagram of another embodiment of a vertical partition mode determination method in an inter prediction method of the present application;
FIG. 9 is a schematic diagram of unit blocks storing motion information in the inter prediction method of the present application;
FIG. 10 is a schematic diagram of the structure of the codec system of the present application;
FIG. 11 is a schematic structural view of an embodiment of a computer storage medium of the present application.
Detailed Description
In order to better understand the technical solutions of the present application for those skilled in the art, the following describes in further detail the inter-frame prediction method, the video encoding method and the related devices provided in the present application with reference to the accompanying drawings and the detailed description.
The terms "first," "second," "third," and the like in this disclosure are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first", "a second", and "a third" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
The inter prediction method of the present application may construct a motion information candidate list of the current block, sequentially traverse all division modes to determine, based on the motion information candidate list, the cost values of all motion information combinations of at least some division modes, and then determine the optimal division mode and the optimal motion information combination of the current block based on the cost values. Determining the optimal division mode and the optimal motion information combination for each block avoids the inaccurate prediction values that a uniform division method produces for coding blocks whose textures cannot be evenly split, so that textures can be predicted better and prediction accuracy improved.
Referring to fig. 1, fig. 1 is a flowchart illustrating an embodiment of an inter prediction method according to the present application, and the inter prediction method according to the present application may include the following steps.
And S11, constructing a motion information candidate list of the current block.
The motion information candidate list of the current block may be constructed first so that the cost value of all motion information combinations of at least a part of the division modes may be determined based on the motion information candidate list of the current block later, and then the optimal division mode and the optimal motion information combination of the current block may be determined.
The motion information may be composed of three elements, namely a motion vector, a reference frame index and a motion direction.
The motion information candidate list may include at least one of temporal motion information, spatial motion information, angular motion information, history-based motion information, and motion information expressed by high-level motion information.
For example, in step S11, temporal motion information and spatial motion information of the current block may be added to the motion information candidate list in a specified order to construct the motion information candidate list of the current block. The specified order may be temporal motion information followed by spatial motion information, or spatial motion information followed by temporal motion information.
The following will describe in detail how temporal motion information and spatial motion information are added to the motion information candidate list.
(1) Temporal motion information
The method for obtaining the time domain motion information mainly comprises the steps of firstly determining a time domain reference frame of a current block, then finding a co-located block of the current block on the time domain reference frame based on the position of the current block, and then scaling the motion information of the co-located block according to a distance relation to obtain the time domain motion information of the current block. After the temporal motion information of the current block is obtained by the above-mentioned obtaining method, the obtained temporal motion information may be added to the motion information candidate list.
Specifically, a first predetermined number of temporal motion information entries may be added to the motion information candidate list by the above method. The first predetermined number may be greater than or equal to 1, such as 1, 2, 3, or 5. It is understood that the first predetermined number of temporal motion information entries in the motion information candidate list may be non-repetitive, i.e. the reference frame indices or motion information of any two of them are not identical.
In an application scenario, as shown in fig. 2, if the image frame to which the current block belongs is a unidirectional prediction encoded frame, for example, a P frame, the current block has only one reference frame list, i.e., a forward reference frame list (list 0). A reference frame in the forward reference frame list of the current block may be used as a time domain reference frame of the current block. Preferably, the reference frame with the smallest index in the forward reference frame list of the current block may be used as the time domain reference frame of the current block.
After determining the temporal reference frame of the current block, the co-located block T on the temporal reference frame may be determined by calculation based on the pixel position of the upper left corner of the current block. And then scaling the motion information of the co-located block according to the distance relation to obtain the time domain motion information of the current block.
In scaling, assume that the difference between the image sequence numbers of the frame to which the current block belongs and the frame to which the co-located block belongs is t1, the difference between the image sequence numbers of the frame to which the co-located block belongs and the reference frame of the co-located block is t2, and the motion information of the co-located block is mv_col_f; then the scaled motion information is scaleMV = mv_col_f * t1 / t2, that is, the temporal motion information of the current block is mv_col_f * t1 / t2.
In another application scenario, as shown in fig. 3, if the image frame to which the current block belongs is a bi-directionally predicted encoded frame, for example, a B frame, the current block has two reference frame lists, i.e., a forward reference frame list (list 0) and a backward reference frame list (list 1).
A reference frame in the forward reference frame list of the current block may be used as the forward reference frame of the current block. Preferably, the reference frame with the smallest index in the forward reference frame list of the current block may be used as the forward reference frame of the current block. After confirming the forward reference frame of the current block, the co-located block T1 on the forward reference frame may be determined by calculation based on the pixel position of the upper left corner of the current block. Then, the motion information of the co-located block T1 is scaled according to the formula scaleMV = mv_col_f * t1 / t2 to obtain the forward motion information of the current block. Here t1 is the distance between the current frame and its forward reference frame (e.g., the first frame in list 0).
A reference frame in the backward reference frame list of the current block may be used as the backward reference frame of the current block. Preferably, the reference frame with the smallest index in the backward reference frame list of the current block may be used as the backward reference frame of the current block. After confirming the backward reference frame of the current block, the co-located block T2 on the backward reference frame may be determined by calculation based on the pixel position of the upper left corner of the current block. Then, the motion information of the co-located block T2 is scaled according to the formula scaleMV = mv_col_f * t1 / t2 to obtain the backward motion information of the current block. Here t1 is the distance between the current frame and its backward reference frame (e.g., the first frame in list 1).
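As an informal illustration of the scaling above, the following Python sketch scales a co-located block's motion vector by the distance ratio t1/t2. The function name and the use of floor division are assumptions for readability only; a real codec applies its own rounding and clipping.

def scale_temporal_mv(mv_col, t1, t2):
    """Scale the co-located block's motion vector mv_col (an (x, y) pair)
    by the distance ratio t1/t2, i.e. scaleMV = mv_col * t1 / t2.
    t1: distance between the current frame and its reference frame.
    t2: distance between the co-located frame and its reference frame."""
    if t2 == 0:
        return mv_col  # degenerate case, no scaling possible
    return (mv_col[0] * t1 // t2, mv_col[1] * t1 // t2)

# Example: co-located MV (8, -4) with t1 = 2 and t2 = 4 gives (4, -2).
print(scale_temporal_mv((8, -4), 2, 4))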
(2) Spatial motion information (SMVP)
When adding spatial motion information, the neighboring blocks of the current block shown in fig. 4 may be scanned in the order F, G, C, A, B and D, and the motion information of available neighboring blocks that meet the requirements may be added to the motion information candidate list. A neighboring block is "available" if it is not in intra coding mode and has already been coded.
Specifically, the "availability" of F, G, C, A, B and D may be determined as follows:
i) F is "available" if it exists and an inter prediction mode is employed, and is otherwise "unavailable".
J) G is "available" if it exists and is in inter prediction mode, and "unavailable" otherwise.
K) If C exists and an inter prediction mode is employed, C is "available", otherwise C is "unavailable".
L) A is "available" if A is present and inter prediction mode is employed, otherwise A is "unavailable".
M) if B exists and inter prediction mode is employed, B is "available", otherwise B is "unavailable".
N) if D exists and an inter prediction mode is employed, D is "available", otherwise D is "unavailable".
The number of spatial domain motion information in the motion information candidate list is not limited, and may be, for example, 0,1, 2, or 3, or may be greater than or equal to 4.
If the current frame is a bidirectional predictive coding frame, the spatial motion information is acquired in the order F, G, C, A, B and D; if both list 0 and list 1 motion information exist, MV0 and MV1 are both filled into the list, otherwise only MV0 or MV1 is filled. If the current frame is a unidirectional predictive coding frame, MV0 of the forward reference frame list 0 is taken and filled into the motion information candidate list, until the motion information candidate list is full.
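A minimal Python sketch of the neighbour scan described above, assuming each neighbour is represented by a small record with an inter-coded flag and its motion information; the dictionary layout and names are illustrative, not the patent's own data structures.

def collect_spatial_mvs(neighbors, candidate_list, max_len):
    """Scan the neighbours in the order F, G, C, A, B, D and append the
    motion information of each available one until the list is full.
    "Available" here means the block exists and was inter-coded."""
    for name in ("F", "G", "C", "A", "B", "D"):
        block = neighbors.get(name)
        if block is None or not block["inter_coded"]:
            continue  # neighbour missing or intra-coded: skip it
        mv = block["mv"]
        if mv not in candidate_list:  # avoid duplicate entries
            candidate_list.append(mv)
        if len(candidate_list) >= max_len:
            break
    return candidate_list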
Alternatively, the length of the motion information candidate list may be 5, but is not limited thereto, and may be, for example, 10. In addition, the maximum number of candidates of the motion information candidate list of the current block may be reduced to reduce the amount of calculation and the bit overhead in the prediction process; for example, the maximum number of candidates may be at most 4, or even 3 or 2.
Considering that the number of motion information entries in the motion information candidate list may be less than the maximum number of candidates, i.e. the list may remain unfilled, after the temporal motion information and the spatial motion information of the current block are added to the motion information candidate list, preset motion information [e.g. zero motion information or (1, 1), etc.] may be added to the motion information candidate list, and/or scaled motion information obtained by scaling motion information already in the motion information candidate list may be added to the motion information candidate list, and/or the average value of at least two motion information entries in the motion information candidate list may be added to the motion information candidate list, so as to improve the richness of the motion information and the accuracy of prediction.
The reference frame index of the preset motion information may be the one whose picture frame sequence number (POC) is closest to that of the frame to which the current block belongs, or the picture frame sequence number of any neighboring block.
Further, adding scaled motion information obtained by scaling motion information in the motion information candidate list may be implemented by scaling the spatial or temporal motion information whose reference frame index corresponds to the frame number closest to that of the current block, and adding the result to the motion information candidate list.
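The padding step can be sketched as follows. This is an assumption-laden illustration: the preset motion information is taken to be a zero vector, the averaging option takes the mean of the first two candidates, and the amplification/reduction scaling is left as a caller-supplied placeholder function because its exact formula is defined separately.

def pad_candidate_list(candidate_list, max_len, scale_fn=None):
    """Fill the motion information candidate list up to max_len using a
    scaled copy of an existing candidate, the average of two existing
    candidates, or a preset zero vector. scale_fn is a placeholder for
    the amplification/reduction scaling step."""
    while len(candidate_list) < max_len:
        if scale_fn is not None and candidate_list:
            cand = scale_fn(candidate_list[0])
        elif len(candidate_list) >= 2:
            a, b = candidate_list[0], candidate_list[1]
            cand = ((a[0] + b[0]) // 2, (a[1] + b[1]) // 2)
        else:
            cand = (0, 0)  # preset (zero) motion information
        candidate_list.append(cand)
    return candidate_list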
Specifically, the scaled motion information added to the motion information candidate list can be obtained through an amplification formula and a reduction formula: the absolute value of the x-axis component is substituted into the amplification formula or the reduction formula corresponding to the range in which that absolute value lies to obtain the x-axis component of the scaled motion information, and the absolute value of the y-axis component is substituted into the amplification formula or the reduction formula corresponding to the range in which that absolute value lies to obtain the y-axis component of the scaled motion information.
The amplification formula and the reduction formula are not limited to a particular form. In these formulas, the absolute value of the x-axis or y-axis component of the scaled motion information is denoted temp, and the value of the x-axis or y-axis component of the scaled motion information is denoted result; A is the sign of the x-axis or y-axis component of the scaled motion information, that is, A is "-" if that component is negative and "+" if it is positive.
Wherein the width and/or height of the current block of the present application may be greater than or equal to N and less than or equal to M. Wherein, N and M are preset values, which are not limited herein, and N is smaller than M. For example, N may be 8 or 16, etc., and M may be 128 or 64, etc.
And S12, traversing all the division modes in turn, and determining the cost value of all the motion information combinations of at least part of the division modes.
After the motion information candidate list of the current block is constructed, all the division modes can be traversed in sequence, and the cost value of all the motion information combinations of at least part of the division modes is determined so as to determine the optimal division mode and the optimal motion information combination of the current block.
It is understood that in step S12, the current block may be divided into at least two sub-blocks according to each division mode, and then the cost value of all motion information combinations of each division mode is determined from the motion information candidate list of the current block.
A motion information combination includes the motion information of at least two sub-blocks obtained by dividing the current block according to each available division mode, and the motion information of all sub-blocks comes from the motion information candidate list of the current block. For example, assuming that the current block is divided into a first sub-block S1 and a second sub-block S2 according to one division mode, the motion information candidate list includes the three entries MV0, MV1 and MV2, and the motion information of the first and second sub-blocks must differ, then all motion information combinations of the two sub-blocks include the 6 cases (S1-MV0, S2-MV1), (S1-MV0, S2-MV2), (S1-MV1, S2-MV0), (S1-MV1, S2-MV2), (S1-MV2, S2-MV0) and (S1-MV2, S2-MV1). That is, if the current block is divided into a sub-blocks according to a division mode, the motion information candidate list includes b entries with b > a, and the motion information of all sub-blocks must be different from each other, the division mode has b × (b-1) × ... × (b-a+1) motion information combinations. It will be appreciated that if the current block is divided into a sub-blocks according to a division mode, the motion information candidate list includes b entries, and the motion information of different sub-blocks may be the same, then the division mode has b^a motion information combinations.
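The combination counts stated above can be checked with a few lines of Python; the helper below is illustrative only.

from math import perm

def num_combinations(a, b, allow_repeats=False):
    """Number of motion information combinations when the current block is
    split into a sub-blocks and the candidate list holds b entries:
    b*(b-1)*...*(b-a+1) when every sub-block must use different motion
    information, and b**a when repeats are allowed."""
    return b ** a if allow_repeats else perm(b, a)

print(num_combinations(2, 3))                      # 6, as in the example above
print(num_combinations(2, 3, allow_repeats=True))  # 9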
Optionally, determining the cost value of a motion information combination may include performing motion compensation on each sub-block with the motion information of that sub-block in the combination to obtain the predicted value of each sub-block, obtaining the predicted values of all sub-blocks after motion compensation of all sub-blocks, and calculating the cost value of the motion information combination based on the predicted value of the current block.
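As a hedged sketch of this cost evaluation, the snippet below uses a sum of absolute differences (SAD) between the motion-compensated prediction and the original samples as the cost; the patent does not fix a particular cost metric, and motion_compensate stands in for the codec's interpolation.

def combination_cost(orig_block, sub_blocks, mv_combo, motion_compensate):
    """Predict each sub-block with its motion vector, assemble the
    prediction of the current block, and return a SAD-style cost.
    sub_blocks: list of (x, y, w, h) rectangles inside the current block.
    mv_combo:   one motion vector per sub-block.
    motion_compensate(rect, mv) -> 2-D list of predicted samples."""
    cost = 0
    for (x, y, w, h), mv in zip(sub_blocks, mv_combo):
        pred = motion_compensate((x, y, w, h), mv)
        for dy in range(h):
            for dx in range(w):
                cost += abs(orig_block[y + dy][x + dx] - pred[dy][dx])
    return cost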
Optionally, step S12 includes dividing the current block into at least two sub-blocks according to each division mode.
Preferably, the current block is divided into a first rectangular sub-block and a second rectangular sub-block according to each division mode, so that the current block is divided into two sub-blocks regularly. The predicted value of each rectangular sub-block can then be obtained by performing motion compensation only once per rectangular sub-block, which avoids the at least two compensations and the weighting processing needed to obtain the predicted value of each sub-block when the division is irregular. The inter prediction method can therefore be used in B frames or P frames, and dividing the current block into two sub-blocks also reduces the amount of calculation in confirming the optimal division mode and the optimal motion information combination.
Wherein the current block may be divided into a first rectangular sub-block and a second rectangular sub-block in the following manner.
In one implementation, the partitioning modes include a horizontal partitioning mode and a vertical partitioning mode. Dividing the current block into a first rectangular sub-block and a second rectangular sub-block according to each division mode, including:
When the dividing mode is a horizontal dividing mode, a first weight matrix of the current block is calculated based on the height of the current block and by utilizing a reference weight configuration value corresponding to the dividing mode, wherein all weight values of each row in the first weight matrix are the same, a part of the current block corresponding to a region with the weight value smaller than a first threshold value in the first weight matrix is used as a first rectangular sub-block, and a part of the current block corresponding to a region with the weight value larger than or equal to the first threshold value in the first weight matrix is used as a second rectangular sub-block.
When the dividing mode is a vertical dividing mode, a second weight matrix of the current block is calculated based on the width of the current block and by utilizing a reference weight configuration value corresponding to the dividing mode, wherein all weight values of each column in the second weight matrix are the same, a part of the current block corresponding to a region with the weight value smaller than a second threshold value in the second weight matrix is used as a first rectangular sub-block, and a part of the current block corresponding to a region with the weight value larger than or equal to the second threshold value in the second weight matrix is used as a second rectangular sub-block. Wherein the second threshold may be equal to the first threshold.
Illustratively, the block size of the current block is denoted M×N, where M is the width and N the height. The calculation formula of the first weight matrix may be as follows:
(a) Calculate the effective length ValidLen of the reference weights
ValidLen = N<<1
(b) Set the reference weight values ReferenceWeights[x], where x ranges from 0 to ValidLen-1
FirstPos = (ValidLen>>1) - 4 + Y * (ValidLen>>3)
ReferenceWeights[x] = Clip3(0, 8, x - FirstPos)
where Y represents the different reference weight configuration values and takes values in [-3, 3].
(c) Derive the weights SampleWeight[x][y] pixel by pixel to obtain the first weight matrix
SampleWeight[x][y] = ReferenceWeights[(y<<1)]
where x, y are the pixel position coordinates within the current block.
The calculation formula of the second weight matrix can be as follows:
(a) Calculate the effective length ValidLen of the reference weights
ValidLen = M<<1
(b) Set the reference weight values ReferenceWeights[x], where x ranges from 0 to ValidLen-1
FirstPos = (ValidLen>>1) - 4 + Y * (ValidLen>>3)
ReferenceWeights[x] = Clip3(0, 8, x - FirstPos)
where Y represents the different reference weight configuration values and takes values in [-3, 3].
(c) Derive the weights SampleWeight[x][y] pixel by pixel to obtain the second weight matrix
SampleWeight[x][y] = ReferenceWeights[(x<<1)]
where x, y are the pixel position coordinates within the current block.
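The horizontal and vertical weight derivations above translate directly into the following Python sketch; Clip3 clamps its third argument to the interval given by the first two, and the function names are for illustration only.

def clip3(lo, hi, v):
    return max(lo, min(hi, v))

def weight_matrix(width, height, y_cfg, horizontal):
    """Derive SampleWeight for an MxN block (M = width, N = height),
    returned as weight[y][x]. horizontal=True uses ValidLen = N<<1 and
    indexes the reference weights by row (y<<1); horizontal=False uses
    ValidLen = M<<1 and indexes by column (x<<1). y_cfg is the reference
    weight configuration value Y in [-3, 3]."""
    valid_len = (height if horizontal else width) << 1
    first_pos = (valid_len >> 1) - 4 + y_cfg * (valid_len >> 3)
    ref = [clip3(0, 8, i - first_pos) for i in range(valid_len)]
    return [[ref[(y << 1) if horizontal else (x << 1)]
             for x in range(width)] for y in range(height)]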
Of course, the first weight matrix and the second weight matrix may also be calculated by other formulas; specifically, the formulas may be changed by modifying the derivation of FirstPos.
For example, the derivation of FirstPos in the horizontal or vertical direction may be modified for larger sizes (e.g., when the width or height of the current block is 32, 64, 128, etc.) to change the starting position of the block division, enrich the division manner, and accommodate more image textures. For example, a new FirstPos derivation may be as follows:
FirstPos = (ValidLen>>1) - 6 + Y * ((ValidLen -1)>>3)。
For another example, the division manner in the horizontal or vertical direction may be extended to increase the number of division modes by changing the range of the parameter Y and the latter half of the above formula:
FirstPos = (ValidLen>>1) - 6 + Y * ((ValidLen -1)>>4)。
When Y is within the range [-7, 7], 14 division modes in the horizontal or vertical direction can be ensured.
For another example, the step interval used in division may be changed to change the way the block is divided:
FirstPos = (ValidLen>>1) - 6 + Y * ((ValidLen -1)/10)。
These modifications can be freely combined to change the division manner of the current block so as to adapt to more image textures.
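The modifications above can be gathered, for illustration, into one small Python helper that returns the alternative FirstPos derivations; each variant shifts the dividing lines differently.

def first_pos_variants(valid_len, y_cfg):
    """Return the original FirstPos together with the three modified
    derivations discussed above (shifted start point, extended Y range,
    changed step interval). Purely illustrative."""
    return {
        "original": (valid_len >> 1) - 4 + y_cfg * (valid_len >> 3),
        "shifted":  (valid_len >> 1) - 6 + y_cfg * ((valid_len - 1) >> 3),
        "extended": (valid_len >> 1) - 6 + y_cfg * ((valid_len - 1) >> 4),
        "step":     (valid_len >> 1) - 6 + y_cfg * ((valid_len - 1) // 10),
    }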
In another implementation manner, the current block may be divided along a division line corresponding to the preset mode to obtain a first rectangular sub-block and a second rectangular sub-block, where all division lines of the current block are determined according to preset division interval data. The division modes may include a horizontal division mode and a vertical division mode.
Illustratively, the preset division interval data of the horizontal division pattern (which is equivalent to the "horizontal interval data" described above) may include a line spacing between a division line of the first horizontal division pattern and an upper edge of the current block, a line spacing between division lines of adjacent two horizontal division patterns. The preset division interval data of the vertical division pattern (which is equivalent to the above-described "vertical interval data") may include a column interval between the division line of the first vertical division pattern and the left edge of the current block, and a column interval between the division lines of the adjacent two vertical division patterns. The preset dividing interval data may be changed according to the size change of the encoding block. For example, as shown in fig. 5 and 6, when the current block size is 16×16, the preset division interval data of the horizontal division mode may include steps 0 to 2 and steps 1 to 4, and the preset division interval data of the vertical division mode may include steps 2 to 8.
In other embodiments, the preset division interval data may have only one value; the division lines of at least one horizontal division mode and at least one vertical division mode are then determined according to that value, where the line spacing between the division lines of two adjacent horizontal division modes is the same, and is also the same as the column spacing between the division lines of two adjacent vertical division modes. For example, when the current block is an 8×8 block and the preset division interval data is 4, there is 1 division mode in the horizontal direction and 1 in the vertical direction, 2 division modes in total; similarly, an 8×16 block has 4 division modes, 3 in the horizontal direction and 1 in the vertical direction, and so on. As shown in fig. 7 and 8, when the current block is a 32×32 block and the preset division interval data is 4, there are 7 horizontal division modes and 7 vertical division modes.
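For the single-interval case just described, the dividing line positions and the resulting number of division modes can be sketched as follows; the assumption is that the same interval applies to rows and columns and that only lines strictly inside the block count.

def division_lines(width, height, interval):
    """Return the row offsets of the horizontal dividing lines and the
    column offsets of the vertical dividing lines for one preset
    division interval."""
    horizontal = list(range(interval, height, interval))
    vertical = list(range(interval, width, interval))
    return horizontal, vertical

print(division_lines(8, 8, 4))    # ([4], [4])         -> 2 division modes
print(division_lines(8, 16, 4))   # ([4, 8, 12], [4])  -> 4 division modes
print(division_lines(32, 32, 4))  # 7 horizontal and 7 vertical modes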
In addition, after the current block is divided into a first rectangular sub-block and a second rectangular sub-block according to each division mode, whether the current division mode is available may be judged; if the current division mode is unavailable, the cost values of all motion information combinations in the current division mode are not calculated, and if it is available, the cost values of all motion information combinations in the current division mode are calculated.
When the width and height of the first rectangular sub-block obtained with the current division mode are both even and greater than or equal to 4, and the width and height of the second rectangular sub-block are both even and greater than or equal to 4, the current division mode is an available division mode; otherwise, the current division mode is an unavailable division mode.
In addition, the index value of the unavailable dividing mode can be increased to reduce the index value of at least one available dividing mode, so that bit overhead caused by transmitting the index value of the dividing mode is reduced.
For example, when it is confirmed that the current division mode is an unavailable division mode, the current division mode may be placed at the end of the division mode list, that is, the index value of the current division mode is maximized, and the division modes originally arranged behind the current division mode may be arranged forward.
For another example, after confirming whether all the division patterns are available, the order may be reordered to rank all the unavailable division patterns behind all the available division patterns, wherein the order between the unavailable division patterns may also be the same as the order before the reordering, and the order between the available division patterns may also be the same as the order before the reordering.
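A compact Python sketch of the availability test and of moving unavailable modes to the end of the list; a mode is represented here simply as a pair of sub-block sizes, which is an illustrative simplification.

def mode_available(sub1, sub2):
    """A division mode is available only if both rectangular sub-blocks
    have even width and height, each greater than or equal to 4."""
    return all(d >= 4 and d % 2 == 0 for d in (*sub1, *sub2))

def reorder_modes(modes):
    """Place unavailable modes behind all available modes so that the
    available modes receive the smaller index values; the relative order
    inside each group is preserved."""
    available = [m for m in modes if mode_available(*m)]
    unavailable = [m for m in modes if not mode_available(*m)]
    return available + unavailable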
And S13, determining an optimal dividing mode and an optimal motion information combination based on the cost value.
After traversing all the division modes and calculating the cost values of all the motion information combinations of all the available division modes, the cost values of all the motion information combinations of all the available division modes can be compared to determine the optimal division modes and the optimal motion information combinations.
Preferably, the available division mode and motion information combination with the smallest cost value may be taken as the optimal division mode and optimal motion information combination of the current block.
In this embodiment, a motion information candidate list of the current block is constructed, all division modes are traversed in sequence to determine the cost values of all motion information combinations of at least some division modes based on the motion information candidate list, and then the optimal division mode and the optimal motion information combination of the current block are determined based on the cost values. Determining the optimal division mode and the optimal motion information combination for each block avoids the inaccurate prediction values that a uniform division method produces for coding blocks whose textures cannot be evenly split, so that textures can be predicted better and prediction accuracy improved.
In addition, after determining the optimal division mode and the optimal motion information combination of the current block, the motion information of each unit block of the current block may be stored as reference MVs for coding blocks or coding frames to be coded later. A unit block may be a 4×4 block.
In one implementation, the motion information of the sub-block where the unit block is located is stored as the motion information of the unit block if the unit block is located in any one of the sub-blocks, and if at least two parts of the unit block are located in at least two sub-blocks, the motion information of one of the sub-blocks where the unit block is located is stored as the motion information of the unit block. For example, as shown in fig. 9, the motion information of the unit blocks in the first and second columns are both stored as the motion information of the sub-block 0, the motion information of the unit blocks in the third column is stored as the motion information of the sub-block 0, and the motion information of the unit blocks in the fourth column is stored as the motion information of the sub-block 1. In other implementations, the motion information of the unit blocks of the third column shown in fig. 9 may be stored as the motion information of the sub-block 1.
When the division mode dividing the current block into two rectangular sub-blocks is determined by the first weight matrix or the second weight matrix as in step S12, which sub-block's motion information each unit block in the current block stores can be calculated by a formula.
Specifically, the center position of the unit block is denoted (x, y);
if the optimal division mode is a horizontal division mode, the formula is as follows:
FirstPos = (ValidLen>>1) + Y * (ValidLen>>3);
If (y<<1) is greater than or equal to FirstPos, the motion information of sub-block 1 shown in fig. 9 is stored; otherwise, the motion information of sub-block 0 shown in fig. 9 is stored.
If the optimal division mode is a vertical division mode, the formula is as follows:
FirstPos = (ValidLen>>1) + Y * (ValidLen>>3);
If (x<<1) is greater than or equal to FirstPos, the motion information of sub-block 1 shown in fig. 9 is stored; otherwise, the motion information of sub-block 0 shown in fig. 9 is stored.
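The storage rule above amounts to the following sketch: the centre (x, y) of each 4×4 unit block decides which sub-block's motion information it stores, using the same FirstPos derivation; the function and parameter names are illustrative.

def stored_subblock(x, y, valid_len, y_cfg, horizontal):
    """Return 1 if the unit block centred at (x, y) stores the motion
    information of sub-block 1, otherwise 0. horizontal=True tests the
    row coordinate y, horizontal=False tests the column coordinate x."""
    first_pos = (valid_len >> 1) + y_cfg * (valid_len >> 3)
    coord = y if horizontal else x
    return 1 if (coord << 1) >= first_pos else 0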
It is understood that, when the calculation formula of the first weight matrix or the second weight matrix changes, the calculation formula of the motion information stored in the confirmation unit block may also change.
In another implementation, if the optimal division mode is not the equal-size division mode, the motion information of all the unit blocks in the current block may be stored as the motion information of the sub-block having the largest area.
In addition, consider the case where the current block is a luminance block and, in a certain division mode, it is divided into a first rectangular sub-block and a second rectangular sub-block with the width or height of at least one rectangular sub-block less than 8. In view of hardware implementation, the width or height of a sub-block cannot be smaller than 4, and since the chroma block is half the size of the corresponding luma block, when the chroma block corresponding to the current block is motion compensated, it may be motion compensated as a whole block to obtain the predicted value of the chroma block. The luminance component of the current block, however, is motion compensated independently for each sub-block; specifically, motion compensation is performed on the first rectangular sub-block based on the motion information of the first rectangular sub-block and on the second rectangular sub-block based on the motion information of the second rectangular sub-block, so as to obtain the predicted value of the current block. That is, when the width or height of at least one rectangular sub-block obtained by dividing the luminance block is smaller than 8, the luminance block and the chrominance block corresponding to it may adopt different motion compensation strategies. When the width and height of all rectangular sub-blocks obtained by dividing the luminance block are greater than or equal to 8, the chroma block may also be divided according to the division mode of the luminance block to obtain chroma sub-blocks, and each chroma sub-block is compensated separately to obtain the predicted value of the chroma block.
It is understood that the above described inter prediction coding may be applied in different scenarios. The calculation formulas of the first weight matrix and the second weight matrix in step S12 and the calculation formulas of the motion information stored in the unit blocks in step S13 described above may be changed corresponding to different scenes.
For example, the above-mentioned inter-frame predictive coding can be applied to screen content obtained by, for example, screen recording. In that case, in the calculation formulas of the first weight matrix and the second weight matrix, the calculation of the reference weight values ReferenceWeights[x] can be changed to ReferenceWeights[x] = Clip3(0, 8, (x - FirstPos - 3)/4), and, when confirming the motion information stored in the unit block, the derivation of FirstPos may be changed to FirstPos = (ValidLen>>1) + 3 + Y * (ValidLen>>3).
After the optimal division mode and the optimal motion information combination of the current block are confirmed using the inter prediction method described above, the current block may be encoded. Specifically, the index value of the optimal division mode and the index value of the motion information of each sub-block in the optimal motion information combination may be encoded.
In addition, a first syntax element may be added to the sequence header to indicate whether all image frames of the sequence are inter predicted using the inter prediction method of the present application.
Optionally, a second syntax element may also be added to the encoding result of the current block to indicate whether the current block is inter predicted using the inter prediction method of the present application.
TABLE 1 sequence header definition
For example, as shown in table 1, the first syntax element may be reg_enable_flag, which may be a binary variable; a value of 1 indicates that all image frames of the sequence are inter-predicted using the inter prediction method of the present application, and a value of 0 indicates that all image frames of the sequence are not inter-predicted using the inter prediction method of the present application.
Table 2 coding unit definition
As shown in table 2, the second syntax element is reg_flag, a CU-level regular geometric prediction mode flag; a value of 1 indicates that the current block performs inter prediction using the inter prediction method of the present application, and a value of 0 indicates that it does not. The value of RegFlag is equal to reg_flag.
The coding syntax of the optimal division mode may be reg_idx, which may adopt truncated binary binarization with a value range of 0 to 13; if reg_idx is not present in the bitstream, reg_idx is equal to 0. Truncated binary encoding is an entropy coding scheme suitable for symbols with a uniform distribution; when the number of symbols is not an integer power of 2, truncated binary coding can shorten the average code length compared with ordinary fixed-length binary coding.
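For reference, a hedged Python sketch of truncated binary coding, which shows why the average code length shortens when the symbol count is not a power of two; the exact binarization used by the codec may differ in detail.

def truncated_binary(value, n_symbols):
    """Truncated binary code of value in [0, n_symbols-1]: with
    k = ceil(log2(n_symbols)) and u = 2**k - n_symbols, the first u
    symbols use k-1 bits and the remaining symbols use k bits."""
    k = max(1, (n_symbols - 1).bit_length())
    u = (1 << k) - n_symbols
    if value < u:
        return format(value, "0{}b".format(k - 1)) if k > 1 else "0"
    return format(value + u, "0{}b".format(k))

# 14 symbols (reg_idx in 0..13): values 0 and 1 use 3 bits, 2..13 use 4 bits.
print([truncated_binary(v, 14) for v in (0, 1, 2, 13)])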
The coding syntax of the motion information index value of the first sub-block in the optimal motion information combination may be reg_cand_idx0, which indicates the position of the motion information in the motion information candidate list; it adopts truncated unary binarization with a value range of 0 to b, and if reg_cand_idx0 is not present in the bitstream, its value is equal to 0.
The coding syntax of the motion information index value of the second sub-block in the optimal motion information combination may be reg_cand_idx1, which may indicate the position of the motion information in the candidate list. It is binarized with a truncated unary code over the value range 0 to b-1, and if reg_cand_idx1 is not present in the bitstream, its value is inferred to be 0.
By analogy, the coding syntax of the motion information index value of the a-th sub-block in the optimal motion information combination may be reg_cand_idx(a-1), which may indicate the position of the motion information in the candidate list. It is binarized with a truncated unary code over the value range 0 to b-a+1, and if reg_cand_idx(a-1) is not present in the bitstream, its value is inferred to be 0.
The unary code is a very simple binarization method. For a non-negative integer N, its unary code consists of N ones followed by a single 0. For example, for N = 5 the unary code is 111110 (five 1s followed by one 0), and for N = 0 the unary code is 0.
The truncated unary code is a variant of the unary code. When the maximum value Nmax of the symbols to be coded is known, for a non-negative integer N to be coded: if N is smaller than Nmax, the truncated unary code is identical to the unary code; if N equals Nmax, the truncated unary code consists of N ones, with the trailing 0 dropped. For example, with Nmax = 5, N = 3 gives the truncated unary code 1110 and N = 5 gives 11111.
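The sketch below implements both binarizations and, as a usage example, applies the truncated unary code to hypothetical motion information index values with a shrinking maximum, mirroring the reg_cand_idx ranges above; the value of b and the index values are chosen purely for illustration.

```python
# Unary and truncated unary binarization, as described above.

def unary_encode(n: int) -> str:
    """N ones followed by a single zero."""
    return "1" * n + "0"

def truncated_unary_encode(n: int, n_max: int) -> str:
    """Unary code, except that the trailing zero is dropped when n == n_max."""
    assert 0 <= n <= n_max
    return "1" * n if n == n_max else unary_encode(n)

print(unary_encode(5))               # 111110
print(truncated_unary_encode(3, 5))  # 1110
print(truncated_unary_encode(5, 5))  # 11111

# Hypothetical usage for the motion information index values: the maximum shrinks
# by one for each later sub-block (b = 4 and the index values are assumptions).
b = 4
for i, idx in enumerate([2, 0, 1]):
    print(f"reg_cand_idx{i}:", truncated_unary_encode(idx, b - i))
```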
Referring to fig. 10, fig. 10 is a schematic structural diagram of an embodiment of a codec system according to the present application. The codec system 10 includes a processor 12, and the processor 12 is configured to execute instructions to implement the inter prediction method and the video encoding method described above. The specific implementation process is described in the above embodiments and is not repeated here.
The processor 12 may also be referred to as a CPU (Central Processing Unit). The processor 12 may be an integrated circuit chip with signal processing capability. The processor 12 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor 12 may be any conventional processor or the like.
The codec system 10 may further comprise a memory 11 for storing instructions and data required for the operation of the processor 12.
The processor 12 is configured to execute instructions to implement the method provided by any embodiment of the inter prediction method and the video encoding method of the present application, or by any non-conflicting combination of those embodiments.
Referring to fig. 11, fig. 11 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present application. The computer-readable storage medium 20 of the embodiment of the present application stores instructions/program data 21, which, when executed, implement the method provided by any embodiment of the inter prediction method and the video encoding method of the present application, or by any non-conflicting combination of those embodiments. The instructions/program data 21 may be stored in the storage medium 20 as a software product, so that a computer device (which may be a personal computer, a server, a network device, or the like) or a processor executes all or part of the steps of the methods of the various embodiments of the present application. The storage medium 20 includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, or a terminal device such as a computer, a server, a mobile phone, or a tablet.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division of units is merely a logical functional division, and other divisions are possible in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The foregoing describes only embodiments of the present application and does not limit the patent scope of the application. Any equivalent structure or equivalent process transformation made using the description and the accompanying drawings of the present application, or any direct or indirect application in other related technical fields, likewise falls within the protection scope of the present application.